Is there a Java SDK?
Is there a Java SDK? How can I call the API programmatically instead of via the command line?
Closing as duplicate of https://github.com/istio/istio/issues/9683
|
double.toStringAsFixed() should have an option to truncate instead of round
toStringAsFixed will round the double to the nearest number, but sometimes this is not desirable. It would be helpful if we could choose whether the String is generated using round, ceil, floor, or truncate.
Since num is sealed, adding optional parameters to methods should be effectively non-breaking.
The biggest issue here is that we don't have the corresponding functionality in JavaScript, where we use the Number.prototype.toFixed function. That makes it very hard to add this extra functionality and compile it to efficient JavaScript.
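For illustration, a naive userland approximation is easy to write (TypeScript; the helper name is hypothetical), but it is not a correct shortest-round-trip conversion, since scaling by a power of ten can itself round. That fragility is part of why a correct implementation is hard:

```typescript
// Hypothetical truncating counterpart to Number.prototype.toFixed.
// NOT correct for all doubles: the scaling step can round on its own
// (e.g. 0.29 * 100 === 28.999999999999996), so the last kept digit
// can be off by one in edge cases.
function toFixedTruncated(value: number, digits: number): string {
  const factor = 10 ** digits;
  return (Math.trunc(value * factor) / factor).toFixed(digits);
}

console.log((1.999).toFixed(2));          // "2.00" (rounds)
console.log(toFixedTruncated(1.999, 2));  // "1.99" (truncates)
console.log(toFixedTruncated(0.29, 2));   // "0.28" -- the edge case above
```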
Can we make a JavaScript-only implementation of truncate etc., and not have to rely on the language?
It's actually remarkably hard to implement a correct float-64 "toString" behavior, and I have no reason to believe it should be easier to do trunc/floor/ceil than round.
You have to convert the binary representation to a decimal representation, and know when to stop in order to be efficient. Both JS and Dart native use a highly specialized piece of code (yep, same code; Mozilla uses a different one) to do that.
That's the complexity of the code we'd need to implement in JavaScript. I don't think that's something we'd want to create - and maintain as part of the platform libraries.
I see... should I close this then?
(still waiting for the dartvm to make a come back to Chrome... a man can dream)
We can keep it open as a request, and if JS adds the functionality we need, or, say, we think WASM-compiling the C++ code becomes a viable approach, then we can revisit it at that point.
|
Is it possible to auto-paste and click a button on a website with a Chrome extension?
I would like to perform a two-step operation on a video URL using a Chrome extension. First, paste the URL into another website's textbox (for example, https://www.urlgot.com/), and then click the "search" button. Could I do this automatically with a Chrome extension? Is this operation allowed by Chrome? I have a list of video URLs and corresponding captions to download, so I want to automate this by calling the related websites. I just wonder whether this can be done with a Chrome extension.
Thank you for any advice.
Your extension will need a content script to access the web page. Then it's a rather trivial DOM operation so yeah it's possible, but the question is too broad as it comprises the basics of DOM and the extension architecture. Break down the task into parts and look for examples for each one.
@wOxxOm For now I only want to do the paste and click steps. I will try to use a content script as you said, but I am new to Chrome scripting. Could you give me some hints or a sketch of the script? It is a pity that I cannot accept your advice as an answer. Thank you!
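A minimal content-script sketch of those two steps, assuming the extension's manifest injects it into the target page (the URL and selectors are hypothetical; inspect the actual page to find the real textbox and button):

```typescript
// content-script.ts -- runs inside the target page once injected.
const urlToSearch = 'https://example.com/video'; // placeholder video URL

// Hypothetical selectors; replace them with the site's actual elements.
const input = document.querySelector<HTMLInputElement>('input[type="text"]');
const button = document.querySelector<HTMLButtonElement>('button[type="submit"]');

if (input && button) {
  input.value = urlToSearch; // step 1: "paste" the URL
  // Fire an input event so the page's own scripts notice the change.
  input.dispatchEvent(new Event('input', { bubbles: true }));
  button.click(); // step 2: click the search button
}
```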
|
A Remarkable Case of Elevated Carcinoembryonic Antigen after Surgical Treatment of Rectal Cancer: A Search for its Mysterious Cause
We present a patient who, following surgical treatment of rectal cancer, developed an ongoing elevation of serum CEA levels, for which she underwent four sequential PET-CT scans within one year without any sign of malignancy. Other causes of elevated CEA levels were investigated and excluded by additional blood tests and imaging studies. The available literature was extensively reviewed but revealed no further possible explanations for the high CEA serum level. Conclusion: An exponential rise of CEA levels following the treatment of colorectal cancer in the absence of abnormalities is a rare presentation and remains a mystery. The cause of the elevated CEA is yet to be elucidated.
Introduction
Serial measurements of carcinoembryonic antigen (CEA) serum levels can be indicative of recurrent disease after treatment of colorectal carcinoma, and CEA is therefore frequently used as a biomarker in follow-up [1]. However, it is also elevated to a significant degree in a number of other malignant and non-malignant conditions. This is due to its diverse functions in cell adhesion, in intracellular and intercellular signaling, and during complex biological processes such as cancer progression, inflammation, angiogenesis, and metastasis [2]. Common malignant causes of an elevated carcinoembryonic antigen include ovarian cancer, breast cancer, thyroid cancer, and non-small cell lung cancer [3]. Benign causes include cigarette smoking, mucinous cystadenoma of the ovary/appendix, cholecystitis, liver cirrhosis, pancreatitis, inflammatory bowel disease, and several medications [4]. In this case, we present a patient with continually rising CEA levels without a clear cause.
Case Presentation
A 57-year-old female with a 32 pack-year smoking history, a past medical history of bilateral fibroadenoma and cataract, and a family history of cardiovascular disease presented with a persistent change in bowel habits with rectal bleeding. The patient received no medication at the time and did not have any known allergies. During colonoscopy, a malignant-looking tumour was found at 11 cm from the anal verge. Pathological findings confirmed the diagnosis of adenocarcinoma of the rectum. The CEA level was elevated at 10.4 ng/ml (normal level <5 ng/ml), and CT and MRI imaging showed a cT3N2 rectal tumour without evidence of distant metastases. Additional findings on the 18F-FDG PET-CT scan were benign nodules in the lung, described as calcifications, and a pancreatic cyst with a homogeneous aspect without solid components, for which follow-up was advised. The patient received neoadjuvant chemoradiation therapy (25 × 2 Gy + 1300 mg capecitabine on treatment days). The capecitabine was discontinued after the first dose because of coronary spasms. Radiotherapy was continued and, 6 weeks after the final treatment, a re-staging MRI was performed. This showed a partial response (ycT2N1), and a laparoscopic low anterior resection with an end-to-end stapled anastomosis was performed.
Despite initial good clinical recovery, the patient developed a leucocytosis on day 4 postoperatively and a CT scan was performed, which revealed anastomotic leakage. A re-laparoscopy with a deviating ileostomy was performed, together with placement of a pre-sacral drain and intravenous ceftriaxone/metronidazole. After this, recovery looked promising until the patient developed a pre-sacral abscess, for which she received an Endo-sponge® for drainage. At the same time, the patient suffered from stoma dermatitis, which healed after surgical revision of the ileostomy. The remainder of the recovery was without complications. Final pathology revealed a ypT3N0 adenocarcinoma. The patient underwent regular follow-up according to the Dutch National Oncological Guidelines and the ileostomy was closed 3.5 months after the initial surgery [5].
I. Follow-up: The Rise of CEA
The first postoperative CEA level was 2.8, followed by 4.4 three months later. The patient was asymptomatic and, during this early follow-up, underwent ileostomy closure surgery without complications. Because of the rise in CEA, an 18F-FDG PET-CT scan was performed, without any signs of recurrent disease. Three months later, the patient was seen again, this time complaining of altered bowel movements resulting in frequent defecation, up to 8 times a day. On rectal examination, the anastomosis was palpated and considered unremarkable, and no palpable rectal mass was found. CEA had further increased to 10.0, and therefore another 18F-FDG PET-CT scan was performed; again, no abnormalities were found. Three months later, the patient had similar complaints and had experienced mild weight loss, and this time CEA had risen to 25.6. Again, an 18F-FDG PET-CT scan was performed, showing no signs of recurrent disease. Three months later, the patient was seen again, this time with a CEA level of 57.2.
As three sequential 18F-FDG PET-CT scans had previously not shown any sign of recurrent disease, the patient was referred to a tertiary oncological centre for additional advice. A fourth 18F-FDG PET-CT scan within one year was performed, this time together with an MRI scan of the rectum, both without any abnormalities. On all scans, the calcifications in the lung and the pancreatic cyst did not differ from previous findings. Colonoscopy was performed and was negative for recurrent disease. After the second opinion, which yielded no new insights, the patient continued follow-up, and at her following visit CEA levels had continued to rise, up to 163.6. Again, an 18F-FDG PET-CT scan was made, without abnormalities. At this visit the patient complained of headaches, abdominal pain, and further weight loss (3 kg in 3 months). She was referred to the neurologist for further consultation. An MRI of the brain showed no sign of metastatic disease or other abnormalities.
After one year of follow-up, without any signs of recurrence despite an extremely elevated CEA, a diagnostic laparoscopy was performed as a last resort. The entire abdomen was evaluated, but no signs of recurrent disease were found.
II. Other Examinations for Causes of Elevated CEA Levels
During follow-up, several additional blood tests and imaging studies were performed to rule out other causes of elevated CEA. Since CEA is often elevated in other malignant diseases, additional tumour markers such as carbohydrate antigen 19.9 were tested but were within normal values (CA 19.9: 11.7 U/ml (<37 U/ml)). Thyroid hormones were tested, both to rule out thyroid cancer and to rule out endocrinological disorders, since hypothyroidism is a common endocrinological disorder in which CEA levels can rise [6]. Neither thyroid (stimulating) hormones nor parathyroid hormones were abnormal. The patient had a 32 pack-year smoking history but had stopped smoking more than 9 years before the diagnosis and did not restart during illness or follow-up. Other types of cancer (breast, NSCLC) and inflammatory diseases (cholecystitis, pancreatitis, IBD) that are potential causes of elevated CEA were all ruled out by the sequential 18F-FDG PET-CT scans.
Discussion
CEA levels have been shown to be associated with tumour burden in patients with colorectal cancer and can therefore be used as a marker in follow-up [1]. It can, however, also be elevated in a number of benign conditions and other malignancies. In the present case, the more commonly known conditions that cause an elevated CEA were excluded: ovarian cancer, pancreatic cancer, gastric cancer, and thyroid cancer [3]. In addition, a literature search revealed several case reports presenting rare entities that cause elevated CEA levels, such as head and neck cancer, mucinous adenocarcinoma of the lip, hypothyroidism, an apocrine hidrocystoma, and lithium use, but none of these were applicable to our patient [7][8][9][10][11][12]. With elevated CEA levels as high as in our patient, distant or local recurrence of the rectal cancer was at the top of our differential diagnosis list, followed by a second primary tumour. The patient underwent four sequential 18F-FDG PET-CT scans that did not, however, reveal a recurrence or a second primary tumour, and more specifically no change in the pancreatic cyst or the benign-looking lung nodules. A recent study shows that 18F-FDG PET-CT is sensitive, specific, and accurate in investigating patients with elevated CEA and without a known primary malignancy [13]. When CEA serum levels rise above 14.31 ng/L, the diagnostic value of 18F-FDG PET-CT for malignant tumours becomes even more reliable [14]. Peritoneal metastases are known to elude standard imaging, and because our patient had some abdominal complaints and weight loss, a diagnostic laparoscopy was eventually performed [15].
For the patient, the most difficult part of the follow-up has been coping with the distress of the uncertainty. The second opinion in a dedicated oncology referral centre, while not providing any definitive answers, mainly served to alleviate some of this distress. Both the patient and the doctor know there is an increasing likelihood of recurrent disease, with at present no signs or clues as to where it might be located, or whether there would still be a chance for cure. It could turn out to be a solitary metastasis that can be surgically removed, it could be diffuse bone marrow invasion with a dire prognosis, or it could be something entirely different. The only option we currently have is to wait and repeat the imaging, mindful of Sir William Osler's quote: "medicine is a science of uncertainty and an art of probability".
|
Make it official?
Thank you for creating this ❤️ 🎈
Are you interested in placing this under @floating-ui/solid?
@lxsmnsyc it's okay that it doesn't support interactions, just the positioning is fine. If created, that would be a separate package like @floating-ui/solid-interactions, but yeah, the interactions package is vastly more complex and is still in v0, so it wouldn't make sense to translate it especially as it's in flux still.
I see, that makes sense. I do feel the urge to work on it given that I'm also working on solid-headless.
Regarding the API, is it possible to match @floating-ui/react-dom like this if it makes sense, or is it better to use external signals? In React you can access the elements via position.refs.reference.current.
I did consider it initially. However, given that Solid doesn't have refs, reference and floating would be setter functions ((ref: T) => void), and the problem is that users wouldn't be able to access the underlying ref value. That's why the current API design delegates the ref assignment to the user, who must then pass the value to the useFloating function instead.
I am open to discuss this further.
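A minimal sketch of that pattern (the import path and exact return shape are assumptions based on this thread, not the final API):

```tsx
import { createSignal } from 'solid-js';
import { useFloating } from 'solid-floating-ui'; // hypothetical package name

function Tooltip() {
  // The user owns the element signals and passes their values to the
  // hook, instead of receiving refs back from it as in React.
  const [reference, setReference] = createSignal<HTMLButtonElement>();
  const [floating, setFloating] = createSignal<HTMLDivElement>();
  const position = useFloating(reference, floating);

  return (
    <>
      <button ref={setReference}>Hover me</button>
      <div
        ref={setFloating}
        style={{
          position: position.strategy,
          top: `${position.y ?? 0}px`,
          left: `${position.x ?? 0}px`,
        }}
      >
        Tooltip
      </div>
    </>
  );
}
```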
I see, that makes sense 👍
It should be relatively straightforward to duplicate the react-dom folder as solid in Floating UI's repo and replace files with the index.ts file you've created — the build setup should work with it. Would you be willing to make a PR?
After that, we can create a documentation page on the website (which will likely be similar to your current README)
Yes, that sounds great. I'll do it in my free time, thanks!
|
What is the name of the designer of the incline whose upper end was located in the neighborhood that has the ZIP code 15210?
John H. McRoberts
|
# [Dice Roller](http://alexa.amazon.com/#skills/amzn1.ask.skill.cf49c20b-96c9-4229-9ac1-723362eb6693)
To use the Dice Roller skill, try saying...
* *Alexa open dice roller*
* *Alexa start dice roller*
* *Alexa launch dice roller*
Dice Roller is an application that allows you to ask Alexa to roll a die for you! Useful if you do not have dice and you want to play games.
***
### Skill Details
* **Invocation Name:** dice roller
* **Category:** null
* **ID:** amzn1.ask.skill.cf49c20b-96c9-4229-9ac1-723362eb6693
* **ASIN:** B01LYKA12Q
* **Author:** SupHerman
* **Release Date:** September 22, 2016 @ 05:12:04
* **In-App Purchasing:** No
|
using DesignTE.View.NoteEditing;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Windows.Forms;

namespace DesignTE.View
{
    public sealed class SideTabControl : TabControl
    {
        private const string NoteTabTitle = "笔记编辑器"; // "Note Editor"
        private const string TaskListTabTitle = "任务"; // "Tasks"

        public SideTabControl()
        {
            Dock = DockStyle.Fill;
            //SideBarTabs.Alignment = TabAlignment.Bottom;

            // Icon for the note tab.
            ImageList imageList = new ImageList();
            imageList.Images.Add(DesignTE.Properties.Resources.sticky_note_pin);
            ImageList = imageList;

            // Create the note tab and host the editor inside it.
            NoteTab = new TabPage(NoteTabTitle) { ImageIndex = 0 };
            NoteEditor = new NoteEditor { Dock = DockStyle.Fill };
            NoteTab.Controls.Add(NoteEditor);
            TabPages.Add(NoteTab);
        }

        public NoteEditor NoteEditor { get; set; }

        public TabPage NoteTab { get; private set; }

        //TODO: Should be handled the way 'View Calendar' ribbon command is done.
        // Looks up the task list tab by its title; returns null if it has not been added yet.
        public TabPage TaskListTab
        {
            get
            {
                return TabPages.Cast<TabPage>().FirstOrDefault(p => p.Text == TaskListTabTitle);
            }
        }
    }
}
|
The type 'string' is not compatible with the type 'seq<'a>'
I have no idea why the following code doesn't compile with F# 4.0 on VS2015 RC:
That's quite strange since there's no error if I run the same code in F# Interactive. Is it a compiler bug or am I missing something?
module Number

let digitNames = ["zero"; "one"; "two"; "three"; "four"
                  "five"; "six"; "seven"; "eight"; "nine"]

let toWord (number:seq<char>) =
    number
    |> Seq.map (fun n -> digitNames.[int n - 48])
    |> String.concat " "

let toWord' (number:string) =
    number
    |> Seq.map (fun n -> digitNames.[int n - 48]) // Error: The type 'string' is not compatible with the type 'seq<'a>'
    |> String.concat " "

let toWord'' (number:string) =
    (number :> seq<char>) // Error: The type 'string' is not compatible with the type 'seq<'a>'
    |> Seq.map (fun n -> digitNames.[int n - 48])
    |> String.concat " "
You must be developing a portable library, I think. For some strange reason, the Profile 7/78/259 System.Runtime.dll binary doesn't record System.String as implementing IEnumerable<char>. I checked this by looking at:
ildasm "C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework\.
NETCore\v4.5\System.Runtime.dll"
Likewise the Profile7 facade DLL doesn't implement this:
ildasm "C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework\.
NETPortable\v4.5\Profile\Profile7\System.Runtime.dll"
Later "4.6" profiles do implement this. AFAIK it's not possible to the get the Visual F# Tools to target the 4.6 portable profiles (Profile31, 32, 44, 84, 151, 157)
You can work around this by using seq { for c in "F#" -> c }, which uses the F# rule that strings have a "GetEnumerator()" method.
Closing this as it's not strictly speaking a bug - the portable profiles are what they are.
|
U.S. DEPARTMENT OF JUSTICE, IMMIGRATION and NATURALIZATION SERVICE, Petitioner/Cross-Respondent, v. FEDERAL LABOR RELATIONS AUTHORITY, Respondent/Cross-Petitioner.
No. 92-4652.
United States Court of Appeals, Fifth Circuit.
June 25, 1993.
Douglas Ross, Dept. of Justice, Washington, DC, for petitioner-cross-respondent.
Richard Zorn, Wm. R. Tobey, William E. Persina, Sol., Arthur A. Horowitz, David M. Smith, Federal Labor Relations Authority, Washington, DC, for respondent-cross-petitioner.
Alexia Fay McCaskill, American Federation of Gov’t. Emp., William Kanter, Deputy Staff Director, U.S. Dept. of Justice, Mark D. Roth, American Federation of Gov’t. Emp., Washington, DC, for intervenor.
Before POLITZ, Chief Judge, REAVLEY and BARKSDALE, Circuit Judges.
POLITZ, Chief Judge:
The United States Immigration and Naturalization Service seeks review of the determination by the Federal Labor Relations Authority that it committed an unfair labor practice. The FLRA seeks enforcement of its order. For the reasons assigned, we grant the petition for review in part and order enforcement in part.
Background
This dispute has its genesis in revisions by the INS in its policy on the use of firearms by employees. Negotiations between the agency and the employees’ collective bargaining representatives, the National Border Patrol Council and the National Immigration and Naturalization Service Council of the American Federation of Government Employees AFL-CIO, concluded with several unresolved disputes. The INS contended that six proposals advanced by the unions were nonnegotiable because they addressed matters reserved to management’s discretion. After mediation was deemed likely to be ineffective, the unions asked the Federal Service Impasses Panel to review the matter. Before the Impasses Panel acted, however, the INS implemented its revisions, both those agreed upon and those in dispute. The Impasses Panel thereafter determined that it did not have jurisdiction because negotiability was controverted. At the unions’ request, the FLRA reviewed the negotiability of the six proposals and determined that only Proposal 5 and portions of Proposals 1 and 2 were negotiable. The INS sought our review of the negotiability of Proposal 5. In a decision rendered on October 20, 1992, we ruled that Proposal 5 was not negotiable.
Shortly after seeking FLRA review of the negotiability issue, the unions brought unfair labor practice charges against the INS for implementing the revisions before the Impasses Panel had ruled. On April 30, 1992, prior to our decision on the petition for review of the negotiability order, the FLRA decided that the INS had violated section 7116(a)(1), (5), and (6) of the Federal Service Labor-Management Relations Statute. The INS timely petitioned for review and the FLRA cross-applied for enforcement of its order.
Analysis
The issue before us is whether an agency commits an unfair labor practice by implementing a change in a condition of employment when a union challenge is pending before the Impasses Panel and it is subsequently determined that the change is a nonnegotiable management prerogative. We conclude that neither the agency’s refusal to submit to the jurisdiction of the Impasses Panel nor its unilateral implementation of the change is an unfair labor practice.
The Federal Service Labor-Management Relations Statute, part of the Civil Service Reform Act of 1978, was enacted in an effort to make the government function more efficiently and effectively. The legislation codifies the right of federal employees to organize and the duty of management to bargain, but tailors these rights and responsibilities “to meet the special requirements and needs of the Government.” In section 7101(b) Congress directed that the statute “be interpreted in a manner consistent with the requirement of an effective and efficient Government.”
If the parties bargain to impasse and mediation does not resolve their differences, the statute authorizes either side to invoke the services of the Federal Service Impasses Panel. The Impasses Panel is empowered to impose specific contract terms on the parties “unless [they] agree otherwise.” While a matter is pending before the Impasses Panel, under FLRA rule the parties must maintain the status quo to the extent consistent with the necessary functioning of the agency. Failure to do so constitutes an unfair labor practice.
Certain matters, however, statutorily are exempted from the scope of mandatory bargaining, including, as pertinent herein, an agency’s internal security practices and the assignment of work. If management contends that a change falls within an exempted area, the Impasses Panel lacks authority to proceed unless and until the negotiability issues are resolved, subject to a limited exception defined by the FLRA. We agree with the reasoning of the FLRA as expressed in Commander Carswell Air Force Base, Texas and AFGE that the purposes of the statute are best furthered by allowing the Impasses Panel to resolve those disputes involving negotiability that are controlled by existing FLRA precedents. To that we would add “and existing controlling judicial precedents.”
In the case at bar, claiming nonnegotiability the INS implemented its policy revisions before the Impasses Panel declined jurisdiction. Ultimately it was determined that all of the changes, except for portions of two of the union’s proposals, were nonnegotiable. The INS concedes that it committed an unfair labor practice with respect to implementation of those measures found negotiable, but otherwise it denies wrongdoing. The FLRA insists that it was an unfair labor practice to implement any of the changes, negotiable or not.
Our 1984 decision in U.S. Dept. of Justice, INS v. FLRA persuades that the position taken herein by the FLRA is untenable. In the cited case, the INS implemented changes in employment conditions while a representation election was pending. Determining that the changes involved areas reserved to management’s discretion, we held that the INS had not committed an unfair labor practice because the FLRA was not authorized to suspend management rights. We therein stated:
Congress provided specifically in 5 U.S.C. § 7106 that “nothing in this chapter shall affect the authority of any management official of any agency” to exercise the rights reserved to management by that section.... By using the word “nothing” ..., Congress clearly expressed its intent with regard to management’s exercise of the rights which had been reserved to it. The use of such words makes it obvious that Congress did not intend to let the Authority decide whether, in its judgment, it was “necessary” for the INS to [make the desired changes] during the pendency of the election.... Construing the statute to allow the Authority to promulgate a rule which would bar management from exercising its reserved rights during the pendency of a representation question would hardly lead to an INS which was as effective and efficient as possible.
Similarly here, the position urged by the FLRA would suspend management rights pending Impasses Panel action. Neither the language nor spirit of the statute would so permit. Whereas unilateral implementation during Impasses Panel proceedings of a change that is determined to be negotiable might be an unfair labor practice, we hold that unilateral implementation of a change determined to be nonnegotiable is not.
The petition for review is GRANTED with respect to Proposal 5. Conversely, the cross-application for enforcement is DENIED with respect to Proposal 5 but is GRANTED with respect to the negotiable parts of Proposals 1 and 2.
. Dept. of Justice, INS v. FLRA, 975 F.2d 218 (5th Cir. 1992).
. 5 U.S.C. §§ 7101 et seq.
. S.Rep. No. 95-969, 95th Cong., 2d Sess. 4, reprinted in 1978 U.S.C.C.A.N. 2723, 2726.
. 5 U.S.C. § 7101(b).
. See also Dept. of Justice, INS v. FLRA, 991 F.2d 285 (5th Cir. 1993).
. 5 U.S.C. § 7119(b)(1).
. 5 U.S.C. § 7119(c)(5)(C); see also American Federation of Government Employees, AFL-CIO v. FLRA, 778 F.2d 850 (D.C.Cir.1985).
. Dept. of the Treasury, BATF and National Treasury Employees Union, 18 F.L.R.A. (No. 61) 466 (1985); see also National Ass'n of Government Employees v. FLRA, 893 F.2d 380 (D.C.Cir.1990).
. 5 U.S.C. § 7106(a)(1), (2)(B).
. American Federation of Gov't Employees, supra.
. 31 F.L.R.A. (No. 37) 620 (1988).
. 727 F.2d 481 (5th Cir.1984).
. 727 F.2d at 488.
. We therefore do not accord the deference normally owed to the interpretation of the agency charged with implementing the statute. See U.S. Dept. of Justice, INS, 975 F.2d at 225.
. See also American Federation of Gov’t Employees, 778 F.2d at 857 ("although the Labor-Management Act makes it an unfair labor practice to 'fail or refuse to cooperate in impasse procedures and impasse decisions ...,’ § 7116(b)(6), an agency is not guilty of an unfair labor practice if the FLRA or a reviewing court later determines that the issue was nonnegotiable”); Dept. of Treasury, BATF, supra (agency did not commit an unfair labor practice in implementing an Order while Impasses Panel proceedings were pending because the Order was not subject to the duty to bargain).
|
Edited by C. R. Matthews
In order to fully understand the folding mechanism of a protein, it is necessary to characterize every species populated along the folding pathway from the initial denatured state to the final native conformation. For several small proteins that fold with a two-state transition, direct characterization of folding is limited to the folded and unfolded states. An increasing number of small single-domain proteins, however, have been shown to fold with multi-state kinetics[@bib1 bib2 bib3 bib4 bib5 bib6 bib7 bib8 bib9] indicative of the population of partially folded intermediates along the pathway. These proteins provide an opportunity to dissect the structures populated en route to the native state, and both the structural and the dynamical characterization of these species have provided key insights into the organization of structure during protein folding trajectories.[@bib10 bib11 bib12] Studies of the folding mechanisms of the bacterial DNase-specific immunity proteins, Im7 and Im9, are particularly powerful as, despite their similarity in sequence,[@bib13] Im9 folds with an apparent two-state transition, while Im7 folds with a three-state transition via an on-pathway populated intermediate ([Fig. 1](#fig1){ref-type="fig"}).[@bib5 bib10 bib14 bib15 bib16 bib17] φ value analysis and native-state hydrogen exchange experiments have shown that the Im7 intermediate contains three of the four native helices (helices 1, 2, and 4) packed around a specific hydrophobic core that lacks helix 3.[@bib16 bib18] Selective destabilization of the native state by mutation (L53A I54A) to such an extent that the intermediate becomes the most stable species at equilibrium[@bib19] has allowed structural analysis of this partially folded state by NMR.[@bib20] The intermediate was found to contain native-like secondary structure in helices 1 and 4, partial formation of helix 2, the absence of helix 3, and a fluid, rather than a uniquely structured core.[@bib20] By using chemical shift analysis, hydrogen exchange, and φ values as restraints for molecular dynamics (MD) simulations, models of the kinetic intermediate have been proposed,[@bib10 bib14 bib20] in which helices 1, 2, and 4 are aligned in a native-like topology but their docking is reorganized to allow nonnative interactions to form between helices 2 and 4, while residues that ultimately form helix 3 remain highly disordered.[@bib10 bib14]
The extent of conformational heterogeneity within the Im7 folding intermediate is difficult to verify using ensemble techniques since such measurements typically yield parameters that are averaged over the entire ensemble of conformations within each population. By contrast, single-molecule experiments are ideal for the detection and characterization of rarely populated conformations in heterogeneous ensembles.[@bib21 bib22 bib23 bib24 bib25 bib26 bib27 bib28 bib29] Diffusion single-molecule fluorescence energy transfer (smFRET), in particular, is a powerful technique for studying the structural and dynamic properties of unfolded and folded protein subpopulations at equilibrium.[@bib21 bib22 bib27] Importantly, this technique can be used to quantify and characterize each species within an ensemble, even when populated to just a few percent. We have previously used smFRET to measure the effects of denaturant on the conformational properties of the native and unfolded states of Im9 and demonstrated that the unfolded ensemble becomes significantly compacted at lower denaturant concentrations while the native state shows only minor effects as chaotrope is titrated.[@bib30]
Here, we describe new experiments using diffusion smFRET, designed to directly probe the structural properties of Im7 in its native and trapped intermediate ensembles. We achieve this by measuring the FRET efficiency between dye donor--acceptor pairs introduced at defined points in the protein sequence, chosen to allow us to monitor the relative conformational arrangement and dynamics of each of the four helices at different points in the folding landscape. As well as examining the folded and intermediate states, the low stability of the double-dye-labeled trapped intermediate provides a means of directly monitoring the structural properties of the unfolded state under physiological conditions. Upon addition of kosmotrope under these conditions, the unfolded ensemble is highly compact and displays an increase in the peak width of *P* values, possibly reflecting a reduction in the rate of conformational exchange among iso-energetic unfolded but compact conformations, suggesting that the search process for folding is highly constrained from the outset.
Experimental design and characterization of cysteine variants of Im7
In order to investigate the structural properties of the unfolded, intermediate, and folded states of Im7 using smFRET, we identified solvent-exposed residues close to the center of each helix (Q17, V36, Y56, and K70 in helices 1, 2, 3, and 4, respectively), and cysteine residues were then introduced (see [Methods](#sec1){ref-type="sec"}) at pairs of these sites in both wild-type Im7 and the trapped intermediate (Im7 L53A I54A, referred to as I^eqm^).[@bib19] The variants are named according to the helices to which the dyes are attached; for example, Im7 Q17C K70C is referred to as Im7 H1H4. In total, three pairs of variants were studied for Im7 and Im7 I^eqm^ with dye-attachment sites in H1H4, H2H4, and H1H3. To amplify the sensitivity of smFRET to conformational changes in Im7 H1H3, we inserted 15 glycine residues into the loop connecting helix 1 and helix 2, in both Im7 and Im7 I^eqm^. The H1H3 variants are therefore named Im7 GlyH1H3 and Im7 I^eqm^ GlyH1H3. The expansion of this loop has previously been shown to have no effect on the folding mechanism of native Im7 or its folding intermediate (G. Spence and S.E.R., unpublished data).
Before labeling with fluorescent dyes, each Im7 variant was characterized using tryptophan fluorescence emission, equilibrium denaturation ([Fig. 2](#fig2){ref-type="fig"}; [Table 1](#tbl1){ref-type="table"}), and 1D ^1^H NMR (data not shown) to ensure that the mutations introduced had not perturbed the structural properties of the native and intermediate ensembles. The incorporation of cysteine was found not to substantially alter the structure or stability of native or partially unfolded Im7 variants. Thus, the native and intermediate states of all variants were destabilized by ≤ 4.5 kJ mol^− 1^ compared with their wild-type counterparts ([Table 1](#tbl1){ref-type="table"}) and all native variants gave rise to tryptophan fluorescence emission spectra that are highly quenched ([Fig. 2](#fig2){ref-type="fig"}, broken lines), indicative of the native-like stacking of the sequentially distant His47 and Trp75 pair.[@bib5 bib15] By contrast, the fluorescence emission spectra of all of the trapped intermediate variants were more intense than those of either the unfolded or native states, consistent with formation of the previously identified hyper-fluorescent intermediate state.[@bib5 bib15] Finally, 1D ^1^H NMR spectra were similar to those observed previously for each state, confirming that introduction of two cysteine residues did not significantly alter the structure of the intermediate or native states.
Donor and acceptor fluorophores (Alexa 488 and Alexa 594) for FRET were then introduced into each variant by a two-step procedure (see [Methods](#sec1){ref-type="sec"}) that yielded proteins labeled with a single donor--acceptor pair. Steady-state anisotropy measurements exciting each fluorophore gave anisotropy values for all variants (both in folded and unfolded states) from 0.10 to 0.19 ([Table 1](#tbl1){ref-type="table"}), suggesting that both dyes have a high degree of flexibility in all conditions and, hence, are useful as FRET probes.[@bib27 bib31]
Conformational ensembles of the native and intermediate states of Im7
We first investigated the distribution of inter-dye distances for the three pairs of Im7 wild-type and I^eqm^ variants containing donor and acceptor dyes in different helical pairs using single-molecule diffusion experiments under conditions most commonly used for Im7 folding studies (0.4 M Na~2~SO~4~ at pH 7.0 and 10 °C). The FRET efficiency, herein referred to as the proximity ratio or *P* value (see [Methods](#sec1){ref-type="sec"}),[@bib21 bib32] was determined for each sample by measuring the number of detected donor and acceptor photons, *I*~d~ and *I*~a~, respectively, from a single diffusing protein molecule in an integration time of 0.5 ms (see [Methods](#sec1){ref-type="sec"}). Histograms were then constructed of the *P* value, typically from 5000 such single molecules ([Fig. 3](#fig3){ref-type="fig"}). A peak centered on *P* = 0 of varying intensity is observed in all histograms and originates from proteins with a fluorescent donor but a photobleached acceptor.[@bib21] This peak was ignored in subsequent analyses. Examination of the proximity ratio histograms for the native variants (bottom panels, [Fig. 3](#fig3){ref-type="fig"}a--c) shows a distribution at a high proximity ratio that fits well to a single Gaussian distribution, consistent with a single species being populated under these conditions. By contrast, the proximity ratio histograms for all the Im7 I^eqm^ variants show more complex behavior (top panels, [Fig. 3](#fig3){ref-type="fig"}a--c) and a second Gaussian is required to fit the data adequately. The peak at a higher proximity ratio in each case (*P* ≈ 0.90) likely reflects a folded conformation with short inter-helical separation populated by the intermediate. The peak at lower proximity ratio (*P* ≈ 0.75) reveals the presence of an ensemble of structures with a larger mean inter-dye separation, reflecting the population of a more unfolded species.
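For reference, the proximity ratio used throughout is conventionally computed per event from the raw photon counts, uncorrected for background, crosstalk, and detection efficiencies (the exact treatment used is described in [Methods](#sec1){ref-type="sec"}):

$$P = \frac{I_{\mathrm{a}}}{I_{\mathrm{a}} + I_{\mathrm{d}}}$$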
Comparisons of the peak positions between the three helix pairs reveal further insights. For helices 1 and 4 ([Fig. 3](#fig3){ref-type="fig"}a), a highly populated species with *P* ≈ 0.93 for both Im7 H1H4 and Im7 I^eqm^ H1H4 is observed, indicative of efficient FRET and suggesting that these helices are closely packed in the trapped intermediate, consistent with previous MD simulations of this state.[@bib10 bib14] The mean proximity ratio of the most compact species for Im7 I^eqm^ H2H4 and for Im7 H2H4 ([Fig. 3](#fig3){ref-type="fig"}c) is also similar (*P* ≈ 0.90), implying that the distance between helices 2 and 4 in the intermediate ensemble, on average, is also native-like. For each of these helix pairs, the weight-averaged mean inter-helix distance for the intermediate ensemble derived from MD simulations[@bib10 bib14] ([Fig. 1](#fig1){ref-type="fig"}) is identical with the inter-helix distance in the native state (15.2 and 15.1 Å and 17.6 and 17.7 Å for Im7 H1H4 and Im7H2H4 variants, respectively, measured by the distance between the center of mass of each helix pair across the ensemble).[@bib14] By contrast, the positions of the folded peaks for Im7 GlyH1H3 and Im7 I^eqm^ GlyH1H3 ([Fig. 3](#fig3){ref-type="fig"}b) differ slightly, with mean proximity ratios *P* ≈ 0.82 and *P* ≈ 0.90, respectively. This suggests that helices 1 and 3 are, on average, in closer proximity in the intermediate state than in the native state and therefore in a nonnative relative orientation in at least the majority of molecules studied here. This is in contrast to the simulations that predict the weight-averaged separation to be greater in I^eqm^ and hence a smaller *P* value relative to the natively folded protein (18.0 and 14.7 Å, respectively).
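The qualitative mapping between proximity ratio and inter-dye separation assumed in these comparisons follows the standard Förster relation, stated here in its idealized form (fast isotropic dye rotation, known Förster radius $R_0$):

$$E = \frac{1}{1 + \left(r/R_0\right)^{6}}$$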
In addition to revealing inter-dye distributions, analysis of proximity ratio histograms can provide insights into conformational dynamics.[@bib21 bib27 bib32 bib33 bib34] While the width of the proximity ratio distribution for a homogeneous and static species, such as the native state, is dominated by instrumental shot noise, additional broadening can indicate static conformational heterogeneity or inter-conversion between two or more distinct species on a time scale slower than, or similar to, the integration time used. The mean proximity ratios and width of the distributions of the trapped intermediate variants for the H1H4 and H2H4 pairs are indistinguishable from those of their wild-type analogues. This suggests little heterogeneity within the intermediate ensemble relative to its folded analogue.
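For a mean proximity ratio $\langle P \rangle$ and $N$ detected photons per event, the shot-noise-limited width referred to here is commonly estimated from binomial counting statistics (a standard approximation; the exact calculation used is given in [Methods](#sec1){ref-type="sec"}):

$$\sigma_{\mathrm{SN}} \approx \sqrt{\frac{\langle P \rangle \left(1 - \langle P \rangle\right)}{N}}$$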
Trapped intermediates populate an unfolded ensemble in the absence of denaturant
Examination of the proximity ratio histograms for each of the I^eqm^ variants shows that an ensemble with a lower *P* value is co-populated with the folded state of each species (top panels, [Fig. 3](#fig3){ref-type="fig"}a--c), suggesting that an expanded or partially unfolded state is populated for these variants under native conditions. To elucidate the nature of this species, and to further study the folded members of the intermediate ensemble, we varied the solvent conditions from those known to favor a less-compact, more unfolded state (high-pH, low-sodium sulfate concentration) to those that have previously been shown to preferentially stabilize the intermediate state over the unfolded state (low-pH, high-sodium sulfate concentration).[@bib35 bib36] The proximity ratio histograms for the three Im7 I^eqm^ variants under these conditions are shown in [Fig. 4](#fig4){ref-type="fig"}. In all three cases, the relative population of the more highly unfolded species (lower *P* value) decreases with increasingly acidic conditions and/or in the presence of 0.4 M Na~2~SO~4~ while the more compact intermediate species concomitantly increases in a two-state transition. It may be expected that the ratio of the peak areas for the folded and unfolded species should reflect the relative stabilities of these variants, which are different as a consequence of the insertion of pairs of cysteine residues in different regions of the protein ([Table 1](#tbl1){ref-type="table"}). For example, in [Fig. 4](#fig4){ref-type="fig"}, at pH 6, the relative population of the unfolded to folded state is high for I^eqm^H1H4 while it is lower for both I^eqm^H2H4 and I^eqm^H1H3, which have similar stabilities. However, it should be noted that in our experiments, it is not molecules that are quantified but events in bins above a threshold. While these differences are actually accounted for in shot noise width analysis (see later), such effects complicate quantitative analysis of relative peak areas.
These data therefore support the assignment of the less-populated, lower-proximity-ratio species observed for the trapped intermediates to the unfolded state. It is noteworthy that the mean proximity ratio of the unfolded states of all the trapped intermediate variants shows a significant pH dependence, with apparent expansion as the pH is increased, while the mean proximity ratios of the folded states of the trapped intermediate variants remain constant across the pH titration, ruling out the pH dependence of the dyes as the origin of this effect. Note that both the donor and acceptor dye quantum yields have no significant pH dependence in the range 4--9.[@bib37 bib38] The isoelectric point of wild-type Im7 is ≈ pH 5. Therefore, as the pH is increased, it is plausible that the expansion in the unfolded state may be caused by electrostatic repulsion due to an increased negative charge. Importantly, at pH 7 and in the absence of Na~2~SO~4~, the unfolded state ensemble is highly populated for all I^eqm^ variants ([Fig. 4](#fig4){ref-type="fig"}, second row from the top). This provides the opportunity to study the unfolded ensemble of Im7 in weakly denaturing conditions, the initial starting species for the Im7 folding reaction.
The unfolded Im7 ensemble becomes increasingly compact under mild denaturing conditions
In order to explore the nature of the folded and unfolded species further, we performed a denaturant titration with the three Im7 and Im7 I^eqm^ helix pairs in the absence of kosmotrope. Shown in [Fig. 5](#fig5){ref-type="fig"} are the mean proximity ratios and mean peak widths obtained from such titrations, using urea as the denaturant. The positions of the folded peaks (corrected for refractive index, see [Methods](#sec1){ref-type="sec"}) for all Im7 I^eqm^ and Im7 folded species ([Fig. 5](#fig5){ref-type="fig"}a, top panel, squares) change by less than *P* ≈ 0.06 as a function of urea concentration over the detectable range, suggesting little difference in compaction consistent with these ensembles representing stably structured states with a well-defined fold (note that given the large error, the data for Im7 I^eqm^ GlyH1H3 are not included in this analysis). Similar minor compaction with decreasing urea concentration has been previously observed for the homologous protein Im9 when labeled with dye at positions 23 and 81 (helix 1 and close to the C-terminus, respectively)[@bib30] as well as for the native states of other proteins.[@bib39 bib40] It is unlikely that these effects arise from restricted dye motion as anisotropy measurements suggest that each fluorophore has a high degree of conformational freedom. There is always a possibility that for a given set of labeling positions, the presence of urea changes the local environment of the dye in a way that causes some quenching due to the specific local sequence. In this regard, it has recently been shown that Im7 is a frustrated protein with a highly malleable core and this may lead to slight changes in conformation.[@bib41]
In contrast with the behavior of the native and folded intermediate species, a substantial decrease in the inter-helical distance of the denatured state ensemble is observed as the denaturant concentration is decreased for all of the Im7 variants ([Fig. 5](#fig5){ref-type="fig"}a--c, top panels, triangles). In all cases, the effect is more pronounced than the changes observed for the folded ensembles, such that the *P* value changes by \> 0.12 for all proteins between 0 and 6 M urea. Compaction of the unfolded state has been observed previously for other proteins at low concentrations of denaturant.[@bib23 bib25 bib27 bib40 bib42 bib43 bib44 bib45 bib46 bib47 bib48 bib49] Similar compaction of the denatured state at low denaturant concentration is observed with FRET donor--acceptor fluorophore pairs attached to the N- and C-termini of Im7 (data not shown), suggesting that compaction in the denatured state is isotropic with all regions of the protein showing similar behavior, rather than being confined to specific regions of the polypeptide chain as was observed with CspTm.[@bib45]
As discussed above, the width of peaks in proximity ratio distributions contains information about dynamic conformational heterogeneity.[@bib21 bib27 bib32 bib33 bib34] The width of the distributions representing the native state or folded members of the intermediate ensemble ([Fig. 5](#fig5){ref-type="fig"}a--c, center and bottom panels, squares) is generally in excellent agreement with the expected shot noise limited width calculated as a function of denaturant concentration ([Fig. 5](#fig5){ref-type="fig"}a--c, center and bottom panels, broken lines, see [Methods](#sec1){ref-type="sec"} and Ref. [@bib30]). The peak width of the unfolded species for all variants ([Fig. 5](#fig5){ref-type="fig"}a--c, center and bottom panels, triangles) is independent of urea concentration under strongly denaturing conditions (\[urea\] \> 3 M) but slightly broader than that predicted by shot noise ([Fig. 5](#fig5){ref-type="fig"}a--c, center and bottom panels, broken lines, see [Methods](#sec1){ref-type="sec"}). This broadness offset has also been observed for protein L,[@bib34] CspTm,[@bib27] and Im9.[@bib30] The unfolded state for all variants shows significant additional broadening in the transition region, where both the folded state and unfolded state are populated ([Fig. 5](#fig5){ref-type="fig"}a--c, center and bottom panels, triangles). These findings suggest that, in weakly denaturing conditions, the unfolded state ensembles of all six variants become not only more compact on average but also more heterogeneous, consistent with fluctuations between isotropically collapsed, conformationally heterogeneous states that are slow relative to the 0.5-ms measurement time under conditions favoring folding.
The properties of the native, intermediate, and unfolded ensembles in the presence of sodium sulfate
Further studies of the properties of the folded and unfolded states of Im7 H1H4 and Im7 GlyH1H3 and their analogous trapped intermediate constructs were performed in the presence of a kosmotrope (0.4 M Na~2~SO~4~; sample raw data are shown in [Fig. 6](#fig6){ref-type="fig"} and summarized in [Fig. 7](#fig7){ref-type="fig"}). Studies on Im7 I^eqm^ H2H4 and Im7 H2H4 were not included, as the presence of 0.4 M Na~2~SO~4~ results in poor separation of the distributions corresponding to the folded and unfolded species in the transition region, ruling out their quantitative analysis (data not shown). The stabilizing effect of the kosmotrope results in the folded intermediate species of Im7 I^eqm^ H1H4 and Im7 I^eqm^ GlyH1H3 being populated up to 3 M urea and the native states of Im7 H1H4 and Im7 GlyH1H3 up to 5.5 M urea ([Fig. 7](#fig7){ref-type="fig"}a, squares). This allows the effect of denaturant on the inter-helical distance (*P* value) and distribution width of these species to be quantified over a wider range of urea concentration than was possible hitherto. Similar to the denaturant titration in the absence of kosmotrope, no significant compaction is observed for the folded Im7 H1H4 and the folded Im7 I^eqm^ H1H4 conformations as denaturant concentration is decreased ([Fig. 7](#fig7){ref-type="fig"}a, top panel, squares). For the Im7 GlyH1H3 variants, contrasting behavior is seen: compaction occurs towards low denaturant concentrations, which was unclear in the absence of kosmotrope ([Fig. 5](#fig5){ref-type="fig"}b, top panel, squares, and [Fig. 7](#fig7){ref-type="fig"}b, top panels, squares), with a change in *P* value of up to 0.13 from 0 to 3.5 M urea. The cause of this apparent compaction is unclear; it may be a consequence of the inserted glycine sequence and will require higher-resolution structural data to resolve. While differences are observed in inter-helical distance, the widths of both the H1H4 and GlyH1H3 native and intermediate folded ensembles remain shot noise limited across the entire denaturant range studied ([Fig. 7](#fig7){ref-type="fig"}a and b, middle and bottom panels, squares). This suggests that despite the significant compaction seen in the GlyH1H3 native and folded intermediate, the ensembles of interactions between helices 1 and 3 and helices 1 and 4 have native-like dynamics and homogeneity, irrespective of the concentration of denaturant.
The unfolded state ensembles of the proteins Im7 H1H4 and Im7 Gly H1H3 and their trapped intermediate analogues in the presence of 0.4 M Na~2~SO~4~ show a dramatic dependence of their inter-helical separation on denaturant concentration ([Fig. 7](#fig7){ref-type="fig"}a and b, top panels, triangles). Strikingly, in 0 M urea, the unfolded state ensemble becomes so compact (*P* ≈ 0.8) that its *P* ratio is close to that of the native ensemble. Correlated with this, a dramatic and highly significant change in the width of the unfolded state of the trapped intermediate is observed in mildly denaturing conditions ([Fig. 7](#fig7){ref-type="fig"}a and b, center panels, triangles), which is not observed for the folded state observed in the same experiment, ruling out instrumental or processing artifacts as the cause.[@bib34] For both Im7 I^eqm^H1H4 and Im7 I^eqm^GlyH1H3 at denaturant concentrations between 1 and 3 M, the distribution width increases as the inter-dye separation decreases. The distribution width of the unfolded peak then decreases again as the concentration of urea is decreased below 1 M, possibly reflecting a decreased rate of conformational exchange among unfolded but compact conformations.
The results of this single-molecule study uncover new insights into the properties of the different species populated during the folding of Im7. The peak positions and distribution widths for the natively folded Im7 H1H4, Im7 H1H3, and Im7 H2H4 variants are narrow and at high *P* values as expected based on the known structure of Im7.[@bib13] Comparison of these data with those for their trapped intermediate analogues allows differences in the structure and dynamics of each state to be identified. Analysis of the proximity ratios between helix pairs suggests that the separations between helices 1--4 and 2--4 are indistinguishable between the native and folded intermediate states, at least for the positions measured. Interestingly, the proximity ratio of folded Im7 I^eqm^ GlyH1H3 is slightly higher than that of Im7 GlyH1H3 (*P* ≈ 0.90 and 0.82, respectively, [Figs. 3b and 7b](#fig3 fig7){ref-type="fig"}, top panel), indicating a shorter inter-residue distance, on average, for residues 17 and 56 in the intermediate ensemble, possibly consistent with this region being unstructured and occupying a highly nonnative location within the intermediate ensemble.[@bib10 bib15] φ value analysis for the intermediate state of Im7 revealed nonnative interactions between side chains in helices 2 and 4, specifically involving residues towards the C-terminal end of helix 2, including residues F41 and V42.[@bib16] Presumably then, the changes in side-chain packing needed to reach the native state involve subtle reorganization rather than large-scale inter-helix movements that would have been detected in the smFRET studies described here. These data accord with the analysis of the trapped intermediate by NMR, which found native-like secondary structure in helices 1 and 4, a partial formation of helix 2, and the absence of a structured helix 3.[@bib20]
The data also allow benchmarking of the ensemble of intermediate states calculated using MD simulations.[@bib10 bib14] For Im7 H1H4 variants, there is excellent agreement; both techniques suggest that the arrangement of this helix pair is similar in both the native and intermediate states and there is little structural heterogeneity in this equilibrium ensemble. In this study, a similar result was observed for the Im7 H2H4 variants. This is in contrast to MD simulations of the intermediate ensemble that predict a broader distribution of distances for the helix 2--4 pair than in the native state.[@bib14] Interestingly, however, comparison of the weight-averaged helix--helix distance for helices 2--4 calculated from the simulations demonstrates that these distances are identical for the native and folded intermediate state (17.7 and 17.6 Å, respectively). If the intermediate ensemble predicted by simulation is accurate, then this suggests that the conformational heterogeneity observed between helices 2--4 results from conformational exchange on a timescale faster than the timescale of the single-molecule experiment presented here.
The smFRET experiments also enabled the properties of the unfolded, intermediate, and native states to be determined as a function of the concentration of urea. Generally, the native and folded intermediate species were found to be only marginally compacted, as judged by the proximity ratio, as the denaturant concentration decreased, and were unaffected by the presence of 0.4 M Na~2~SO~4~ in agreement with previous studies using equilibrium denaturation.[@bib35 bib36] The peak width for the native and folded intermediate states was also generally found to be insensitive to denaturant concentration with the exception of Im7 I^eqm^ GlyH1H3 in the absence of 0.4 M Na~2~SO~4~. Introduction of cysteine pairs into Im7 resulted in destabilization of both the native state and the trapped intermediate state (I^eqm^ variants) by ≤ 4.5 and ≤ 3.3 kJ mol^− 1^, respectively. This proved to be advantageous, allowing the characterization of the unfolded state at very low concentrations of denaturant. This allowed observation of the equilibrium collapse (coil--globule transition) that is usually masked by the folding transition. Our data reveal that substantial compaction of the unfolded state occurs, as predicted by Alonso and Dill[@bib50] and observed previously for several other proteins using both single-molecule fluorescence[@bib25 bib27 bib42 bib44 bib45 bib51] and other techniques.[@bib51 bib52 bib53 bib54] Such data have been modeled as a continuum of substates[@bib25] or by using an analytical polymer model.[@bib51] In the latter approach, smFRET data reporting on the coil--globule transition were used to calculate the end-to-end distance probability distribution for several proteins as a function of interaction energy (or denaturant concentration). These data demonstrated a continuous contraction and narrowing of the distribution as denaturant concentration decreases. Conversion of these data to radii of gyration allowed the expansion of the denatured state relative to the native state to be extrapolated to the absence of denaturant where the unfolded state was found to be only 30% larger than that of the folded state,[@bib51] akin to the significant collapse observed in this study in the absence of urea. Furthermore, the free energy of denatured state collapse has a linear dependence on denaturant concentration with a similar gradient to that for the folding reaction, suggesting that the effect of denaturant is mediated through the collapse transition of the denatured state.[@bib55]
At low concentrations of denaturant, the proximity distribution width for the unfolded states of each variant is also dependent on the denaturant concentration ([Fig. 5](#fig5){ref-type="fig"}), possibly due to slower exchange between iso-energetic conformations in the unfolded energy basin at low chaotrope concentrations.
In the presence of 0.4 M Na~2~SO~4~, the unfolded states of Im7 I^eqm^ H1H4 and Im7 I^eqm^ Gly H1H3 are populated even in the absence of denaturant, and a striking new observation is possible ([Fig. 7](#fig7){ref-type="fig"}a and b, middle panel, triangles): After an initial increase in the width of the unfolded ensemble as described above, in mild denaturing conditions, the width narrows again as the urea concentration approaches zero ([Fig. 7](#fig7){ref-type="fig"}), reflecting the preferential population of a more homogeneous or less dynamic unfolded conformation. It is possible that both the compact unfolded state in 0.4 M Na~2~SO~4~ and the more expanded unfolded species in the absence of Na~2~SO~4~ belong to the same unfolded ensemble (i.e., they are not distinct thermodynamic states). In such a scenario, the addition of Na~2~SO~4~ would preferentially stabilize the more compact unfolded species, resulting in conformations with higher *P* ratios dominating the unfolded ensemble. The precise nature of the interactions that stabilize these compact states is still debated and could involve either specific or nonspecific hydrophobic interactions.[@bib51 bib53 bib55] Such highly compact, unfolded states could be favorable for efficient folding, limiting the conformational search to the native structure. Further studies by NMR will be needed to determine the properties of this state in atomistic detail, akin to the analyses of other nonnative species of Im7 and other proteins.[@bib18 bib20 bib56 bib57 bib58 bib59] Severely destabilized variants of Im7, where the unfolded state is populated in the presence of kosmotrope and in the absence of denaturant, may provide an ideal starting point for such studies.
Chemicals and reagents
Alexa Fluor 488 and 594 C-5 maleimide were purchased from Invitrogen (UK). Fluka brand reagents (Sigma-Aldrich, UK) were used for all single-molecule measurements. Urea was recrystallized in analytical-grade ethanol prior to use.
The desired mutations were introduced into hexahistidine-tagged Im7[@bib16] using the QuikChange Site-Directed Mutagenesis Kit (Stratagene, UK). Im7 double-cysteine variants are named using the wild-type residue number (ignoring the His-tag) and the residue to which it has been mutated (e.g., K70C is lysine 70 mutated to cysteine). The proteins were purified to homogeneity as previously described,[@bib36] and their identity was verified using mass spectrometry.
Characterization of the unlabeled protein
Fluorescence emission spectra were recorded in a Photon Technologies International Quantamaster C-61 spectrofluorimeter at 10 °C using protein at a concentration of ≈ 5 μM in 50 mM sodium phosphate buffer, pH 7.0, 0.4 M Na~2~SO~4~, 1 mM ethylenediaminetetraacetic acid, and 4 mM DTT and in the same buffer containing 8 M urea. The fluorescence of tryptophan and tyrosine residues was excited at 280 nm, and emission spectra were collected with a scan rate of 1 nm/s between 300 and 450 nm. Buffer blanks were subtracted and the spectra were normalized to the emission maximum of the unfolded state in 8 M urea.
Ensemble equilibrium denaturation of unlabeled protein
The stability of the Im7 double-cysteine variants was determined by equilibrium denaturation using urea titration. The samples were made from stock solutions of buffers containing 50 mM sodium phosphate buffer, pH 7.0, 0.4 M Na~2~SO~4~, 1 mM ethylenediaminetetraacetic acid, and 4 mM DTT in the absence or presence of 9 M urea. The solutions were mixed in appropriate proportions to give final urea concentrations ranging from 0 to 8 M (in 0.2-M increments) and a final protein concentration of ∼ 5 μM. Time-based fluorescence measurements were performed at 10 °C using a Photon Technologies International Quantamaster C-61 spectrofluorimeter. The samples were excited at 280 nm, and the emitted light at 360 nm was measured over a 60-s period. After signal averaging, we plotted the intensity as a function of urea concentration, and the data were fitted to a two-state transition as described previously.[@bib5] To compare the denaturation profiles of the variants, we converted the raw data to fraction population of native molecules.[@bib60]
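For illustration, the two-state model behind these fits has a simple closed form; the following minimal Python sketch is not the fitting code used in the study, and the ΔG° and m values are simply those listed for wild-type Im7 in Table 1:

import numpy as np

R = 8.314e-3   # gas constant, kJ mol^-1 K^-1
T = 283.15     # 10 degrees C, as in the experiments

def fraction_native(urea, dG0, m):
    # Two-state model: dG_un(urea) = dG0 - m * [urea];
    # K_un = exp(-dG_un / RT), so the fraction native is 1 / (1 + K_un).
    dG = dG0 - m * urea
    return 1.0 / (1.0 + np.exp(-dG / (R * T)))

urea = np.linspace(0, 8, 41)             # 0-8 M in 0.2-M steps
f_n = fraction_native(urea, 24.6, 4.9)   # wild-type Im7 values from Table 1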
Labeling with fluorophores
Each double-cysteine variant at a concentration of 3 mg/ml was first labeled with a 0.65 molar ratio of Alexa Fluor 594 C-5 maleimide in 50 mM sodium phosphate and 10% dimethyl sulfoxide, pH 7.3 (labeling buffer), for 45 min at room temperature. Singly labeled protein was separated from unlabeled and doubly labeled protein by anion-exchange chromatography using a prepacked 6 ml Resource Q anion-exchange column on an ÄKTA explorer system equilibrated with 50 mM sodium phosphate pH 7.3 (buffer A). Proteins were eluted with 0--50% gradient over 14 column volumes using buffer A with 1 M NaCl. Singly labeled protein at a concentration of 1 mg/ml in 6 M urea was then labeled with a 2-fold molar excess (over the thiol concentration) of Alexa Fluor 488 C-5 maleimide in labeling buffer for 2 h at room temperature. Double-labeled protein was separated from singly labeled protein by anion-exchange chromatography as described above. This procedure allows the preparation of highly pure proteins, each labeled with a single donor and acceptor pair, but results in an A/B, B/A mix. Remaining traces of unreacted dye were removed by gel filtration using a Superdex Peptide HR 10/30 column in 50 mM sodium phosphate buffer, pH 7.0. At each labeling step, the identity of the product was confirmed by electrospray ionization mass spectrometry.
Steady-state anisotropy data were measured using a HORIBA Jobin Yvon Fluorolog spectrophotometer with double-labeled protein solution at a concentration of 100 nM at 10 °C in 50 mM sodium phosphate buffer, pH 7.0 in the presence or absence of 8 M urea. Each sample was measured with excitation at both 488 and 594 nm, 60 times over the course of 1 min. This was repeated three times, and the mean anisotropy value was calculated.
Single-molecule experiments were performed using a custom-built confocal microscope described in Refs. [@bib21], [@bib30], and [@bib61]. Solutions contained 0--8 M urea (in increments of 0.5 M) in 50 mM sodium phosphate buffer, pH 7.0, 0.01% (w/v) Tween 20 with 0.4 M Na~2~SO~4~, unless otherwise stated. In addition, 1.5 mM [l-]{.smallcaps}carnosine, 2 mM mercaptoethanol, and 10 mM 1,4-diazabicyclo\[2,2,2\]octane were included as oxygen scavengers and to suppress blinking, thereby minimizing the magnitude of the zero peak. Singlet oxygen and dark states (possible triplet-state population) are thought to be the main cause of premature photobleaching of the acceptor dye. Inclusion of these compounds at the stated concentrations has no effect on the thermodynamic parameters (*M*~un~ and Δ*G*~un~°) obtained for this protein.
The protein concentration of each sample was ≈ 50 pM, and all measurements were performed at 10 °C. Data were collected by observing the transient bursts of fluorescence produced by diffusion of single molecules into and out of the confocal detection volume, using an integration time of 0.5 ms. Ratiometric analysis of single-molecule data was performed as described in Ref. [@bib30] using custom algorithms written in the data analysis software package Igor Pro Version 5.06a (Wavemetrics Inc., USA). The proximity ratio, *P*, was calculated using:$$P = \frac{\left( {I_{\text{A}} - \left\langle B_{\text{A}} \right\rangle - \left\langle {\varphi I_{\text{D}}} \right\rangle} \right)}{\left( {I_{\text{A}} - \left\langle B_{\text{A}} \right\rangle - \left\langle {\varphi I_{\text{D}}} \right\rangle} \right) + \left( {I_{\text{D}} - \left\langle B_{\text{D}} \right\rangle} \right)}$$where *I*~A~ and *I*~D~ are the acceptor and donor signals in each 0.5-ms interval, respectively, 〈*B*~A~〉 and 〈*B*~D~〉 are the mean background signals for the acceptor and donor channels, respectively, and φ is the mean cross-talk (leakage) of fluorescence from the donor into the acceptor channel, determined to be ≈ 10% in a separate experiment using a concentrated donor-only sample (data not shown). Note that the FRET efficiency *E* is calculated from the ratio of the detected acceptor signal to the total signal (donor + acceptor) in the same integration time:$$E_{\text{FRET}} = \frac{I_{\text{A}}}{\gamma I_{\text{D}} + I_{\text{A}}}$$where *I*~A~ and *I*~D~ are the uncorrected acceptor and donor photon counts per counting interval. In this study, the ratio γ was assumed to be equal to 1. In this limit, the proximity ratio equals the FRET efficiency and can be converted, if required, to a distance (*R*~0~ = 54 Å for the dye pair used in this study).[@bib27]
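For illustration, Eqs. (1) and (2) are straightforward to evaluate per integration interval; a minimal Python sketch with toy counts (this is not the Igor Pro analysis code used in the study):

def proximity_ratio(i_a, i_d, b_a, b_d, phi):
    # Eq. (1): background- and cross-talk-corrected proximity ratio.
    a = i_a - b_a - phi * i_d
    d = i_d - b_d
    return a / (a + d)

def fret_efficiency(i_a, i_d, gamma=1.0):
    # Eq. (2): with gamma = 1 this reduces to the uncorrected proximity ratio.
    return i_a / (gamma * i_d + i_a)

p = proximity_ratio(i_a=42, i_d=18, b_a=1.5, b_d=2.0, phi=0.10)  # toy values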
A SUM threshold criterion was then applied to the data in order to select only integration times that contained valid single-molecule events and to reject the background:[@bib21 bib32]$$\left( {I_{\text{A}} - \left\langle B_{A} \right\rangle - \left\langle {\varphi I_{\text{D}}} \right\rangle} \right) + \left( {I_{\text{D}} - \left\langle B_{\text{D}} \right\rangle} \right) \geq T,$$where *T* is the particular threshold used. Histograms of the accepted proximity ratios were then constructed and fitted with the sum of two or three Gaussians with the following formulation:$$\text{Occurrence}\left( p \right) = \sum\limits_{n = 1}^{n = 3}{\frac{a_{n}}{w_{n}\sqrt{\pi/2}}\exp\left( \frac{- 2\left( p - p_{0,n} \right)^{2}}{w_{n}^{2}} \right)}$$where *a*~*n*~ is the area under curve *n*, measured from the baseline (fixed at 0). *w*~*n*~ = 2σ~*n*~, where σ~*n*~ is the standard deviation of curve *n*. *p*~0,*n*~ is the mean proximity ratio (peak top for a Gaussian) of curve *n*. Fits were otherwise unconstrained. The mean proximity ratio values obtained from the fits were corrected for changes in the average refractive index of the solution (due to different urea concentrations) following Refs. [@bib27] and [@bib30].
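A minimal Python sketch of the threshold of Eq. (3) and the Gaussian-sum fit of Eq. (4), using synthetic data for illustration (again, not the Igor Pro code used in the study):

import numpy as np
from scipy.optimize import curve_fit

def passes_threshold(i_a, i_d, b_a, b_d, phi, T):
    # Eq. (3): accept an interval only if the corrected total signal >= T.
    return (i_a - b_a - phi * i_d) + (i_d - b_d) >= T

def gaussian_sum(p, *params):
    # Eq. (4): sum of Gaussians; params = (a1, w1, p01, a2, w2, p02, ...).
    total = np.zeros_like(p)
    for a, w, p0 in zip(params[0::3], params[1::3], params[2::3]):
        total += (a / (w * np.sqrt(np.pi / 2))) * np.exp(-2 * (p - p0) ** 2 / w ** 2)
    return total

hist_p = np.linspace(0, 1, 50)                                    # bin centers
hist_counts = gaussian_sum(hist_p, 100, 0.2, 0.3, 100, 0.2, 0.8)  # synthetic data
popt, _ = curve_fit(gaussian_sum, hist_p, hist_counts,
                    p0=[90, 0.25, 0.25, 90, 0.25, 0.75])          # two-Gaussian guess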
Peak width analysis
Peak width analysis was performed as described in Ref. [@bib30] using a normalized formulation of the expected shot noise distribution with proximity ratio[@bib32] and using algorithms written in Igor 5. The predicted shot-noise-limited width of a given species is then:$$2\sigma\left( m,S \right) = 2N\frac{\sqrt{m\left( 1 - m \right)}}{\sqrt{S + 1}}$$where σ is the standard deviation, *m* is the mean proximity ratio of the species, *S* is the mean total signal of the identified single-molecule events in that species, obtained by analysis of the actual single-molecule bursts contributing to that species, and *N* is a normalization factor. Using *N* = 1, the width of the species of interest, in this case the folded peak, is underestimated, as previously described.[@bib30] Assuming that the native state of the labeled Im7 variants can be described as a homogeneous, static species, their width therefore represents the shot noise limit for our particular set of experimental conditions. Therefore, for each of the native peaks of the three variants, Im7 H1H4, Im7 H2H4, and Im7 Gly H1H3 in 0 M urea, a normalization pre-factor was generated. The pre-factor was then used to determine the expected shot noise contribution at all urea concentrations for both the native and denatured ensembles, of each Im7 helix pair, using Eq. ([5](#fd5){ref-type="disp-formula"}).[@bib30] In order to investigate the widths of the folded and unfolded peaks of the labeled trapped intermediate proteins as a function of urea concentration, we used the normalization pre-factor generated using the wild-type variant with fluorophores in identical locations. The reasoning behind this approach is that the intermediate may be heterogeneous and dynamic; therefore, the width of this species cannot be assumed to be due to shot noise alone.
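For illustration, Eq. (5) and the pre-factor calibration reduce to a few lines; the widths and signals below are toy values, not measured data:

import math

def shot_noise_width(m, S, N=1.0):
    # Eq. (5): expected shot-noise-limited peak width (2 sigma) for a species
    # with mean proximity ratio m and mean total signal S; N is the
    # normalization pre-factor.
    return 2.0 * N * math.sqrt(m * (1.0 - m)) / math.sqrt(S + 1.0)

# Calibrate N so the native peak is exactly shot-noise limited, then use the
# same N to predict the expected width of the unfolded peak.
N = 0.10 / shot_noise_width(0.9, 60)        # measured native width / raw Eq. (5)
w_unfolded = shot_noise_width(0.35, 60, N)  # predicted width, unfolded ensemble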
We thank Graham Spence for construction of the glycine insertion wild-type variants, Joerg Gsponer for providing additional MD data, and Claire Friel for helpful discussions. S.D.P. was funded by a Wellcome Trust studentship (065520/Z/01/A), and C.G. was supported by the University of Leeds. D.J.B. is a White Rose Doctoral Training Centre lecturer funded by the Engineering and Physical Sciences Research Council.
::: {#fig1 .fig}
Schematic representation of the folding landscape of Im7 involving a populated on-pathway intermediate. Species on the folding pathway are indicated by the following: U, unfolded state; I, intermediate state; N, native state; TS~1~ and TS~2~, early and rate-determining transition states. Representative members of the ensembles calculated by MD simulations[@bib10 bib14] are shown for TS~1~, I, and TS~2~. The structure of native Im7 (Protein Data Bank code [1AYI](pdb:1AYI))[@bib13] is shown in the native well. The segments forming the four helices in the native state are colored red (helix 1), green (helix 2), purple (helix 3), and yellow (helix 4).
::: {#fig2 .fig}
\(a) Normalized fluorescence emission spectra of the folded and denatured states of the wild-type Im7 double-cysteine variants and Im7 I^eqm^ double-cysteine variants. The broken black, green, red, and gray lines correspond to Im7 GlyH1H3, Im7 H2H4, Im7 H1H4, and wild-type Im7 in 0 M urea, respectively. The continuous black, green, and red lines correspond to Im7 I^eqm^GlyH1H3, Im7 I^eqm^H2H4, and Im7 I^eqm^H1H4 in 0 M urea, respectively. The continuous blue lines correspond to the average of all the unfolded states in 8 M urea. All signals were normalized to the maximum signal of the denatured state in 8 M urea. (b) Urea-induced equilibrium unfolding curves of the wild-type Im7 double-cysteine variants and I^eqm^ Im7 double-cysteine variants. Denaturation was monitored by tryptophan fluorescence and normalized to the fraction of folded molecules present. Filled circles correspond to I^eqm^ double-cysteine variants, open circles correspond to the wild-type Im7 double-cysteine variants, and filled gray triangles correspond to wild-type Im7. Color coding of the different helical pairs is identical with that shown in (a). The continuous lines show the fits to a two-state transition.
::: {#fig3 .fig}
Proximity ratio histograms from smFRET measurements on Im7 and Im7 I^eqm^ species in 0 M urea with the fluorophores attached to (a) helices 1 and 4, (b) helices 1 and 3, and (c) helices 2 and 4 at pH 7, 10 °C with 0.4 M Na~2~SO~4~. Black lines show the fits obtained from summing one or two Gaussian distributions (red lines).
::: {#fig4 .fig}
Proximity ratio histograms showing the effect of pH and Na~2~SO~4~ on the distribution of populated species in the I^eqm^ Im7 double-cysteine variants at 10 °C. As the pH is decreased and the concentration of Na~2~SO~4~ is increased, the unfolded state is depopulated and the intermediate state is populated for all variants studied. Black lines show the fits obtained from summing one or more Gaussian distributions (red lines).
::: {#fig5 .fig}
Mean proximity ratio and mean peak width for the folded and unfolded states for (a) Im7 H1H4 and Im7 I^eqm^H1H4, (b) Im7 GlyH1H3 and Im7 I^eqm^ GlyH1H3, and (c) Im7 H2H4 and Im7 I^eqm^H2H4 as a function of urea concentration at pH 7.0, 10 °C in the absence of Na~2~SO~4~. Triangles and squares represent the unfolded and folded states, respectively, and filled and open symbols correspond to the Im7 I^eqm^ and Im7 variants, respectively. The dotted blue and red lines represent the shot noise predictions for the unfolded and folded species, respectively. The error bars are ± 1 SD.
::: {#fig6 .fig}
Selected proximity ratio histograms of the equilibrium urea denaturation of (a) Im7 I^eqm^H1H4 at pH 7 in the absence of Na~2~SO~4~ and (b) Im7 I^eqm^H1H4 and (c) Im7H1H4 at pH 7 and 0.4 M Na~2~SO~4~, 10 °C, monitored by smFRET. Black lines show the fits obtained from summing one or more Gaussian distributions (red lines).
::: {#fig7 .fig}
Mean proximity ratio and mean peak width for the folded and unfolded species of (a) Im7 H1H4 and Im7 I^eqm^H1H4 and (b) Im7 GlyH1H3 and Im7 I^eqm^ GlyH1H3 as a function of urea concentration at pH 7.0 and 0.4 M Na~2~SO~4~, 10 °C. Triangles and squares represent the unfolded and folded species, respectively, and filled and open symbols correspond to the Im7 I^eqm^ and Im7 variants, respectively. The dotted blue and red lines represent the shot noise predictions for the unfolded and folded species, respectively. The error bars are ± 1 SD.
::: {#tbl1 .table-wrap}
Biophysical characterization of the Im7 variants used in this study (at pH 7, 10 °C)
Variant Δ*G*~un~° (kJ mol^− 1^)[a](#tblfn1){ref-type="table-fn"} *M*~un~ (kJ mol^− 1^ M^− 1^)[a](#tblfn1){ref-type="table-fn"} Anisotropy
-------------------- ---------------------------------------------------------- --------------------------------------------------------------- ------------- ------------- ------------- -------------
Im7 24.6 ± 1.3 4.9 ± 0.7 ND ND ND ND
Im7 GlyH1H3 22.4 ± 0.8 5.6 ± 0.1[d](#tblfn4){ref-type="table-fn"} 0.11 ± 0.07 0.11 ± 0.03 0.17 ± 0.05 0.13 ± 0.08
Im7 H2H4 20.1 ± 0.8 4.7 ± 0.1 0.15 ± 0.07 0.12 ± 0.06 0.14 ± 0.02 0.12 ± 0.05
Im7 H1H4 20.8 ± 0.6 4.9 ± 0.1 0.10 ± 0.06 0.11 ± 0.05 0.14 ± 0.03 0.12 ± 0.04
Im7 I^eqm^ 10.1 ± 0.3[e](#tblfn5){ref-type="table-fn"} 3.4 ± 0.2[e](#tblfn5){ref-type="table-fn"} ND ND ND ND
Im7 I^eqm^ GlyH1H3 7.9 ± 0.9 4.5 ± 0.3[d](#tblfn4){ref-type="table-fn"} 0.12 ± 0.08 0.11 ± 0.05 0.17 ± 0.06 0.10 ± 0.05
Im7 I^eqm^ H2H4 7.8 ± 1.2 3.5 ± 0.3 0.18 ± 0.07 0.12 ± 0.03 0.17 ± 0.04 0.12 ± 0.03
Im7 I^eqm^ H1H4 6.8 ± 1.1 3.8 ± 0.2 0.19 ± 0.07 0.11 ± 0.06 0.17 ± 0.02 0.11 ± 0.03
The thermodynamic parameters (Δ*G*~un~° and *M*~un~) were calculated from chemical denaturant equilibrium unfolding experiments. Errors are those from fits of the data to a two-state equilibrium model.
Measured in the presence of 0.4 M Na~2~SO~4~.
Anisotropy of donor fluorophore measured at 488 nm.
Anisotropy of acceptor fluorophore measured at 594 nm.
*M* values for the two GlyH1H3 variants are significantly larger than their analogues without glycine insertions, consistent with the introduction of 15 glycine residues.
Taken from Ref. [@bib19].
S.D.P. and C.G. contributed equally to this work.
Present addresses: C. Gell, Max Planck Institute of Molecular Cell Biology and Genetics, Pfotenhauerstr. 108, 01307 Dresden, Germany; D. A. Smith, Avacta Group Plc, York Biocentre, Innovation Way, York YO10 5NY, UK.
|
Multiple conditionals with ng-show AngularJS
I have a ng-repeat that builds a table from a web service. I render 6 values like this: med:1 lab:1 pl:1 in one cell. I need to make a decision based on those 6 values. My ng-repeat code is like this:
<tr ng-repeat="(pid, value) in patient|groupBy:'pid'">
<td>{{pid}}</td>
<td ng-repeat="(disease, value) in patient|groupBy:'name'">
<span ng-repeat="item in value|filterBy:['pid']:pid" ng-model="disease-conditial">
{{item.src}}:{{item.total}}
</span>
<!--This is the code that I'm trying to make to work-->
<span class="label label-danger" ng-show="
(item.src == 'med' and item.total > 0) and
(item.src == 'lab' and item.total > 0 and
(item.src == 'pl' and item.total > 0))">Yes
</span>
</td>
</tr>
Please consider this pseudo code. I need three conditions:
if src == med && total > 0
and
if src == lab && total > 0
and
if src == pl && total > 0
then Yes
if src == med && total == 0
if src == lab && total > 0
if src == pl && total > 0
then Maybe
if src == med && total == 0
if src == lab && total == 0
if src == pl && total == 0
then No
I was reading about ng-switch, but this directive doesn't support conditionals with dynamic data like $scope.something == 1
Does Angular has any built in directive where I can achieve this?
The data in the cell looks like this:
+-----------------+
|med:1 lab:1 pl:1 |
+-----------------+
I need to evaluate the value of med && the value of lab && the value of pl to make the decision of Yes, Maybe, No.
Update 7 SEP 2016
The data that comes from the object is as follow:
Array[50]
0:Object
$$hashKey:"object:18"
name:"Alcoholism"
pid:"1"
src:"lab"
total:"1"
__proto__:Object
1:Object
$$hashKey:"object:19"
name:"Alcoholism"
pid:"1"
src:"med"
total:"1"
__proto__:Object
2:Object
$$hashKey:"object:20"
name:"Alcoholism"
pid:"1"
src:"pl"
total:"0"
__proto__:Object
Once again, scenario 1: if med > 0 && lab > 0 && pl > 0, then Yes. Scenario 2: if med == 0 && lab > 0 && pl > 0, then Maybe. Scenario 3: if med == 0 && lab == 0 && pl == 0, then No.
First of all, you shouldn't be doing this in the view; you should use a function in ng-show: ng-show="showData()", and do the conditions in the controller... Second, it is not "and", it is &&
What do you mean by the "Maybe" condition? Use && operators instead of and, and I agree with @GustavoGabriel: if you can create a function to define your logic in your controller, it will be much, much easier.
Well if you remove the parentheses (operator precedence makes the parentheses useless when all conditions are 'and') you will see that item.src will never be 'med' and 'lab' and 'pl' at the same time, so the expression will always evaluate to false.
I changed the and for && and it doesn't work.
You can use a ng-switch with item.src like this:
<div class="animate-switch-container" ng-switch on="item.src" ng-show="item.total > 0">
<div ng-switch-when="med">med</div>
<div ng-switch-when="lab">lab</div>
<div ng-switch-when="pl">pl</div>
</div>
With the ng-show on the entire div, it will apply the same and-condition validation.
This is a good clean example of both the ngSwitch and ngShow directives
Fabio: the logic is as follows... if ng-switch-when="med" and the value for med == 1, then Yes.
Even though it can be a good example of the use of ngSwitch and ngShow, this is not the answer to the question. It does not follow the logic (barely) explained in the question.
I would prefer you to get the single value you want to display on the view from the controller, but ng-bind can also help you render the required data
<span class="label label-danger" ng-bind="item.total>0 && (item.src === 'med' || item.src === 'lab' || item.src === 'pl')?'yes':'no'"> </span>
Muslim; let me try your solution.
Your code covers just two conditions; my problem has three.
I think you are just putting your conditions wrong. I'm doing some guessing here, but what I think you are trying to do is this:
<tr ng-repeat="(pid, value) in patient|groupBy:'pid'">
<td>{{pid}}</td>
<td ng-repeat="(disease, value) in patient|groupBy:'name'">
<span ng-repeat="item in value|filterBy:['pid']:pid" ng-model="disease-conditial">
{{item.src}}:{{item.total}}
</span>
<!--This is the code that I'm trying to make to work-->
<span class="label label-danger" ng-show="
((item.src == 'med' || item.src == 'lab' || item.src == 'pl') && item.total > 0)">
Yes
</span>
<span class="label label-danger" ng-show="
((item.src == 'med' && item.total == 0) || (item.src == 'lab' && item.total > 0) || (item.src == 'pl' && item.total > 0))">
Maybe
</span>
<span class="label label-danger" ng-show="
((item.src == 'med' || item.src == 'lab' || item.src == 'pl') && item.total == 0)">
No
</span>
</td>
</tr>
It is not a problem with the ng-show directive, it is a problem with the conditions inside it.
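For completeness, a minimal sketch of the controller-function approach the commenters suggested (the function name and wiring are hypothetical; note that total arrives as a string in the posted data, hence parseInt):

// In the controller: reduce the three src/total pairs for one cell to a label.
$scope.decision = function(items) {
  var totals = {};
  items.forEach(function(item) {
    totals[item.src] = parseInt(item.total, 10);
  });
  if (totals.med > 0 && totals.lab > 0 && totals.pl > 0) return 'Yes';
  if (totals.med === 0 && totals.lab > 0 && totals.pl > 0) return 'Maybe';
  if (totals.med === 0 && totals.lab === 0 && totals.pl === 0) return 'No';
  return ''; // the remaining combinations are unspecified in the question
};

The view then becomes a single binding, e.g. <span class="label label-danger">{{decision(value)}}</span>.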
Let me try that Alvaro. I'm wondering why I got a -1 vote for this question?
I don't know; I'm not the one who gave you the negative vote. Maybe because the problem itself does not relate to AngularJS but to the boolean condition, but honestly, I don't know. The best way to avoid downvotes is to follow StackOverflow's question guidelines. Good luck with your try!
Alvaro; I think you are missing a (. In the first line you are closing right after the 0, but I don't see where it should be opened. Anyway, I tried that line of code and it is rendering Yes in all my cells. I moved the missing ( right before `item.total` and at the very beginning `item.src` and got the same result, all cells with `Yes`
Please, try the solution now. There was a missing '(' at the very beginning of the first condition.
Sorry again, there was another '(' at the beginning of the third condition :). I edited my answer.
It is rendering Yes, Maybe, No in all the cells. At some point in my code I had the same outcome. For some reason ng-show can't discriminate the values and create the conditions.
Now I have to go to bed, a little bit late here in Spain :), but it makes sense. Tomorrow I'll try to take a look if you didn't get any successfull answer. Kind regards.
In the meantime, try to get us a working plunker which reproduces the error with some representative data to make some testing.
Thank you Alvaro!!
Hi again Gilbert. I'm having a quick look at your question again and I notice that something must be wrong either with the logic or with the way you iterate over your data. The conditions I gave to you don't make much sense because if item.src === 'med' and item.total === 0, it will match two of the conditions. Please, edit your question adding a full example of the JSON object you are iterating over (I mean your 'patient' object/array) to take a look at it. Thanks.
|
Talk:US Navy Vessels (Contact '57)
What is the source of these images? Wingman1 01:31, April 21, 2013 (UTC)
|
Duplication of boards in Set Assignments?
I seem to have the same board appearing twice in set assignments (this is the <EMAIL_ADDRESS> account). Not sure if this is a recurring problem:
It's worth noting that these are definitely the same board as when I assign a group to one instance, it also appears on the other instance
So I'm pretty confident this is just the list of boards changing between page load and "Load more" being clicked. The server has had a new board added to the top of its list, but the client doesn't know about this. So it asks for boards 6-12 and gets the 6th-become-7th board twice.
It's perfectly correct behaviour from the server, but I agree the front end might do something smart like hide the duplicate.
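One front-end mitigation, as a rough sketch (the field names are assumptions):

// Append a newly fetched page of boards, dropping ones already shown by id.
function appendBoards(existing, page) {
  var seen = new Set(existing.map(function(b) { return b.id; }));
  return existing.concat(page.filter(function(b) { return !seen.has(b.id); }));
}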
|
Angular 6 validators pattern for latin characters is not working
I am using reactive forms with validators. For the Arabic name field I am using:
Validators.pattern('[\u0600-\u06FF ]*')
But I can't figure out the pattern for latin letters of A-Z and a-z.
I tried:
Validators.pattern(/^-?([a-z]\d*)?$/)
But it didn't work.
regexr is awesome for working out regular expressions
try to apply this pattern: /^[ءآأؤإئابةتثجحخدذرزسشصضطظعغفقكلمنهوىيًٌٍَُِّْٰ]*$/
For Arabic: https://stackoverflow.com/questions/12847333/include-arabic-characters-in-javascript-regular-expression
I use this regex:
Validators.pattern('^[a-zA-Z][a-zA-Z0-9-_@\.]{2,20}$')
And
Validators.pattern(/[\u0600-\u06FF-/ ]/) for Arabic
The answer was by simply using:
Validators.pattern('[a-zA-Z ]*')
To allow numbers with them:
Validators.pattern('[a-zA-Z0-9 ]*')
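For context, a minimal reactive-form sketch using these patterns (the control names are made up for illustration):

import { FormControl, FormGroup, Validators } from '@angular/forms';

const form = new FormGroup({
  latinName: new FormControl('', [Validators.required, Validators.pattern('[a-zA-Z ]*')]),
  arabicName: new FormControl('', [Validators.required, Validators.pattern('[\u0600-\u06FF ]*')]),
});

form.get('latinName')!.setValue('Jane Doe');  // valid
form.get('latinName')!.setValue('Jane123');   // invalid: digits are not allowed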
|
Protein sequence pattern-matching python
I'm working on a matching algorithm for protein sequences. I'm starting with an aligned protein sequence, and I am attempting to convert a mis-aligned sequence into the correctly aligned one. Here is an example:
original aligned sequence: ----AB--C-D-----
mis-aligned sequence: --A--BC---D-
The expected output should look like this:
original aligned sequence: ----AB--C-D-----
mis-aligned sequence: ----AB--C-D----- (both are now the same)
I'm told to be very specific about my problem, but the sequences I'm trying to match are >4000 characters long, and look ridiculous when pasted here. I'll post two sequences representative of my problem, though, and that should do.
seq="---A-A--AA---A--"
newseq="AA---A--A-----A-----"
seq=list(seq) #changing master sequence from string to list
newseq=list(newseq) #changing new sequence from string to list
n=len(seq) #obtaining length of master sequence
newseq.extend('.') #adding a tag to end of new sequence to account for terminal gaps
print(seq, newseq,n) #verification of sequences in list form and length
for i in range(n):
if seq[i]!=newseq[i]:
if seq[i] != '-': #gap deletion
del newseq[i]
elif newseq[i] != '-':
newseq.insert(i,'-') #gap insertion
elif newseq[i] == '-':
del newseq[i]
old=''.join(seq) #changing list to string
new=''.join(newseq) #changing list to string
new=new.strip('.') #removing tag
print(old) #verification of master-sequence fidelity
print(new) #verification of matching sequence
The output I get from this particular code and set of sequences is:
---A-A--AA---A--
---A-A--A----A-----A-----
I can't seem to get the loop to correctly delete unwanted dashes in between the letters more than once, because the rest of the loop iterations are used in an add dash/delete dash pair.
This is a good start to the problems here.
How can I write this loop successfully to obtain my desired output (two identical sequences)?
There is no loop in this code sample
Thanks for pointing that out! I guess I lost the loop command in the shuffle
I edited your code and it is giving correct output now:
seq="----AB--C-D-----"
newseq="--A--BC---D-"
seq=list(seq) #changing master sequence from string to list
newseq=list(newseq) #changing new sequence from string to list
n=len(seq) #obtaining length of master sequence
newseq.extend('.') #adding a tag to end of new sequence to account for terminal gaps
print(seq, newseq,n) #verification of sequences in list form and length
for i in range(len(seq)):
if seq[i]!=newseq[i]:
if seq[i]=='-':
newseq.insert(i,'-')
elif newseq[i]=='-':
newseq.insert(i,seq[i])
else:
newseq.insert(i,seq[i])
else:
newseq=newseq[0:len(seq)]
old=''.join(seq) #changing list to string
new=''.join(newseq) #changing list to string
new=new.strip('.') #removing tag
print(old) #verification of master-sequence fidelity
print(new) #verification of matching sequence
output:
----AB--C-D-----
----AB--C-D-----
and for AA---A--A-----A-----:
---A-A--AA---A--
---A-A--AA---A--
This algorithm, like the previous one, does not consider possible mismatches between specific positions or different-sized strings, and has no backtracking if a better solution arises afterwards. Please consider looking into dynamic programming.
I will definitely pursue dynamic programming for future work. This code works in general for my immediate purposes, though (the sequences are always the same order, there is only one solution, and this code works for different size strings). Thanks!
The problem of sequence alignment is well known and its solution is well described. For an introductory text, see the Wikipedia. The best solution I know involves Dynamic programming, and you can see an example implementation in Java at this site.
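As a starting point, here is a minimal Needleman-Wunsch sketch; the scoring values are illustrative assumptions, not tuned for this task:

def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-1):
    # Global alignment by dynamic programming.
    n, m = len(a), len(b)
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap
    for j in range(1, m + 1):
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            score[i][j] = max(diag, score[i - 1][j] + gap, score[i][j - 1] + gap)
    # Traceback to recover one optimal pair of aligned strings.
    out_a, out_b, i, j = [], [], n, m
    while i > 0 or j > 0:
        if i > 0 and j > 0 and score[i][j] == score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch):
            out_a.append(a[i - 1]); out_b.append(b[j - 1]); i -= 1; j -= 1
        elif i > 0 and score[i][j] == score[i - 1][j] + gap:
            out_a.append(a[i - 1]); out_b.append('-'); i -= 1
        else:
            out_a.append('-'); out_b.append(b[j - 1]); j -= 1
    return ''.join(reversed(out_a)), ''.join(reversed(out_b))

print(needleman_wunsch("ABCD", "ABD"))  # ('ABCD', 'AB-D')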
|
Adopt use of black instead of yapf
This change switches our code formatter from yapf to black, which has
become far more popular, mainly because it is not configurable. This
means that project maintainers no longer have the option to introduce
their own opinionated settings.
Black is now hosted under the official Python Software Foundation GitHub
organization and has already been adopted by major projects like pytest and Django.
Another serious benefit is speed: black is incomparably faster than yapf
and can be used as a pre-commit hook.
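For reference on the pre-commit point, a minimal configuration (the pinned rev below is illustrative; pin whichever release you actually use):

# .pre-commit-config.yaml
repos:
  - repo: https://github.com/psf/black
    rev: 22.3.0  # illustrative pin
    hooks:
      - id: black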
Please review this; it is only reformatted by black. The faster we do the switch, the easier it will be.
Add a pyproject.toml with the following:
[tool.black]
skip-string-normalization = true
@decentral1se Addressed your concerns but I will make another change to also change the quotes as I think that one would be useful too (for consistency). Still, it makes no sense to delay adopting your suggestion as the first step.
Regarding existing PRs, we have only 30 now and their authors can easily rebase them and re-run linting; not a big deal. We used to have >100, so we are in much better shape now.
@decentral1se Addressed your concerns but I will make another change to also change the quotes as I think that one would be useful too (for consistency). Still, it makes no sense to delay adopting your suggestion as the first step.
Huh!? The whole point is to not change the quotes, to reduce the diff. I have no idea what you mean by "consistency" in this context. My concern is to preserve consistency (keep using the single-quote style we already have) while yours is to break consistency and increase the diff size. Please spare me this purism of "not having a config" because of Black's "philosophy" or whatever...
@decentral1se please re-review; the current code follows all your comments. I just want to merge it ASAP because otherwise I will have to rebase it after every other PR is merged.
|
The Twentieth General Meeting.
trigonia, &c., &c. The shells recorded as occurring at Swindon were not numerous, only reaching twenty species. But he found that in his own collection he had increased the number to something like eighty species: some of them of very great interest. After passing through the Portland beds, which formed the greatest thickness of the quarries, they came to the series of beds called the Purbeck, also largely developed on the coast near Swanage. In passing from the Portland to the Purbeck, they as it were passed from one world into another. The Portland beds were marine formations, and the Purbeck were fresh-water deposits. The bed of most especial interest in the quarries here was about 18 inches in thickness. If he said that it was full of interesting things, that would be an exaggeration, but great care and time were required for their discovery. At Swindon the beds were horizontal, but at Swanage they were disturbed. At Swanage twenty-three species of Purbeck mammalia had been taken, very nearly all of them little kangaroos. The remains were almost all lower jaws, and very small. The strata at Swanage were vertical, but at the top of the cliff they lapped over; and here, himself and some friends had executed a work not unattended with danger. While detaching a stone it gave way suddenly, and struck one of his friends, who was thrown over the edge of the cliff, and rolled down seventy feet. Just below that, there was a sheer precipice of one hundred feet. They had to go round a quarter of a mile to get at him, and of course anticipated that he had been killed. What was their astonishment when after shaking himself together a little he asked if he was much hurt? They carried him home, and actually in four days he had not only recovered, but was at work at the same spot again! That, however, was a digression. Some years ago he had examined the similar bed at Swindon, and found four of the genera of mammals which occur at Swanage. These four were small insectivorous mammals — little kangaroos. There were also six or seven reptiles, one possessing teeth of remarkable form.
But now that he had shown them that ages ago Swindon was actually the home of a species of small kangaroos, they might ask him if there were any traces of the food of these creatures. There were
|
package com.cameron.test;
import android.graphics.Color;
import android.os.Bundle;
import android.support.annotation.ColorInt;
import android.support.v4.content.ContextCompat;
import android.support.v7.app.AppCompatActivity;
import android.util.Log;
import android.view.View;
import android.widget.Button;
import android.widget.Toast;
import com.cameron.materialcolorpicker.ColorPicker;
import com.cameron.materialcolorpicker.ColorPickerCallback;
public class MainActivity extends AppCompatActivity implements ColorPickerCallback {
private final String COLOR_VALUE = "colorValue";
private ColorPicker colorPicker;
private View colorView;
private int currentColor;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
setTitle(getString(R.string.toolbar_title));
Button openColorDialog = findViewById(R.id.open_color_picker);
int defaultColor = ContextCompat.getColor(this, R.color.colorPrimary);
currentColor = savedInstanceState != null ? savedInstanceState.getInt(COLOR_VALUE) : defaultColor;
colorView = findViewById(R.id.color_image);
colorPicker = new ColorPicker(
this, // Context
0, // Default Alpha value, this can be omitted
0, // Default Red value
0, // Default Green value
0 // Default Blue value
);
// Various configurations, all of the below are optional
colorPicker.setCloseOnDialogButtonPressed(true)
.setDialogButtonText("CONFIRM")
.setCloseOnBackPressed(false)
.showButtonAsTransparent(true)
// Since this activity already implements the ColorPickerCallback,
// this last configuration is technically unnecessary
.setCallback(this);
// The dialog will be reset on orientation change. This is an
// example of how to retain the color value in such a case
colorPicker.setColor(savedInstanceState == null ? defaultColor : currentColor);
colorView.setBackgroundColor(savedInstanceState == null ? defaultColor : currentColor);
openColorDialog.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
colorPicker.show();
}
});
}
@Override
protected void onSaveInstanceState(Bundle outState) {
super.onSaveInstanceState(outState);
outState.putInt(COLOR_VALUE, currentColor);
}
/**
* Thanks to android's window leaks, we need to dismiss the
* dialog when the device is rotated
*/
@Override
protected void onStop() {
super.onStop();
if (colorPicker != null) colorPicker.dismiss();
}
/**
* One way to get values from the Color Picker is by implementing the
* {@link ColorPickerCallback} on a class level, as can be seen here.
*/
@Override
public void onColorChosen(@ColorInt int color, String hex, String hexNoAlpha) {
Log.d("Pure color", String.valueOf(color));
Log.d("Alpha", Integer.toString(Color.alpha(color)));
Log.d("Red", Integer.toString(Color.red(color)));
Log.d("Green", Integer.toString(Color.green(color)));
Log.d("Blue", Integer.toString(Color.blue(color)));
Log.d("Hex with alpha", hex);
Log.d("Hex no alpha", hexNoAlpha);
// Once the dialog's select button has been pressed, we
// can get the selected color and use it for the
// background of our view
colorView.setBackgroundColor(color);
Toast.makeText(this, "ARGB: " + hex + " | RGB: " + hexNoAlpha, Toast.LENGTH_LONG).show();
}
/**
* When the color values from the dialog are changed, this method will
* be called. Here, we'll just change the color of the dialog's button.
*/
@Override
public void onColorChanged(@ColorInt int color, String hex, String hexNoAlpha) {
Log.d("Color", String.valueOf(color));
Log.d("Hex", hex);
Log.d("Hex no alpha", hexNoAlpha);
// Save the color selected so we can retrieve it again
// when the device is rotated
currentColor = color;
colorPicker.setDialogButtonTextColor(color);
}
}
|
Multilevel analysis of anemia and associated factors among women of reproductive age (15–49 years) in Liberia: Evidence from the 2019/20 Liberia Demographic and Health Survey data
Background Anemia is a global public health problem, principally affecting young children and reproductive-age mothers. Although anemia is a main public health concern in low-income countries, there is no evidence about its prevalence and associated factors among women of reproductive age in Liberia. Thus, the purpose of this study was to identify the prevalence and associated factors of anemia among women of reproductive age in Liberia. Methods We used the data extracted from the fifth Liberia Demographic and Health Survey (LDHS-V) that was carried out between October 2019 and February 2020. The sample was chosen using a stratified two-stage cluster sampling procedure. Overall weighted samples of 4027 women of reproductive age were used in the analysis. Data weighting was carried out to obtain reliable estimates and standard errors as well as to restore the representativeness of the data. Stata version 14 software was used for data extraction, coding, and analysis. We used multilevel analysis to identify the significant factors associated with anemia among women of reproductive age. Results The prevalence of anemia among women of reproductive age in Liberia was 44.51% (95% CI: 42.97–46.04). From these, about 23.10% of women of reproductive age were mildly anemic, 20.63% were moderately anemic, and 0.78% were severely anemic. In multivariable analysis, women in the age groups of 20–24 years (adjusted odds ratio (AOR) = 0.72, 95% CI: 0.56, 0.92), 25–29 years (AOR = 0.57, 95% CI: 0.43, 0.77), 30–34 years (AOR = 0.59, 95% CI: 0.43, 0.83), 35–39 years (AOR = 0.56, 95% CI: 0.41, 0.79), 40–44 years (AOR = 0.61, 95% CI: 0.43, 0.87), and 45–49 years (AOR = 0.57, 95% CI: 0.39, 0.82), being overweight (AOR = 0.83; 95% CI: 0.70, 0.98) or obese (AOR = 0.72; 95% CI: 0.58, 0.88), using modern contraceptive methods (AOR = 0.61; 95% CI: 0.52, 0.72), and being from the Northcentral region (AOR = 0.55; 95% CI: 0.43, 0.72) were significantly associated with lower odds of anemia. However, being pregnant (AOR = 1.34; 95% CI: 1.04, 1.73) and having higher parity (3 children or more) (AOR = 1.40; 95% CI: 1.03, 1.93) were significantly associated with higher odds of anemia. Conclusion In the present study, the prevalence of anemia in women of reproductive age was relatively high. Therefore, special emphasis should be placed on high-risk groups such as pregnant and multiparous women.
Introduction
Anemia is a disorder in which the number of erythrocyte cells (hemoglobin levels) is inadequate to meet the physiologic needs of the body tissue [1]. It is a major worldwide public health problem, principally affecting young children and reproductive-age mothers [2][3][4]. Globally, over 500 million (33%) women of reproductive age suffer from anemia, which has a long-term negative impact on both the health of mothers and their children as well as economic development [5]. However, the highest burden (49.7%) of anemia in women of reproductive age is found in sub-Saharan Africa [6].
Anemia has significant long-term adverse impacts on the health of general populations; especially women are among the vulnerable groups because of their experiences of menstruation, pregnancy, and childbirth-related hemorrhage [7].Anemia in women of childbearing age causes low productivity because of decreased work capacity, high infection risk because of its effect on immunity, termination of pregnancy, and maternal death [8][9][10][11].Furthermore, maternal anemia is associated with adverse neonatal health outcomes like premature birth, mental retardation, small birth weight, and decreased baby iron stores, which may ultimately lead to child death [10][11][12][13].
Even though the etiologies of anemia are multifactorial, it may be caused by nutritional and non-nutritional causes [14][15][16]. Due to the high demand for iron during pregnancy, breastfeeding, and the menstrual period, iron deficiency is the most common cause of anemia in women of childbearing age [10,17].
The World Health Organization (WHO) considers anemia a serious public health problem when its prevalence is above 5% [38]; however, the majority of the evidence shown above indicates that the burden of anemia among mothers of reproductive age is greater than 20%. WHO has established a worldwide aim of accomplishing a 50% decrease in anemia prevalence among women of reproductive age by 2025 [11], even though this aim will be difficult to achieve given recent trends. Although anemia is a main public health concern in low-income countries, there is no evidence about its prevalence and associated factors among women of reproductive age in Liberia. Thus, the purpose of this study was to identify the prevalence and associated factors of anemia among women of reproductive age in Liberia.
Data source, sampling technique, and population
We used the data extracted from the fifth Liberia Demographic and Health Survey (LDHS-V) that was carried out between October 2019 and February 2020. The LDHS used a stratified, two-stage cluster sampling technique. In the first stage, a total of 325 clusters were selected. In the second stage, a fixed number of households (30 households for each cluster) were selected using a systematic sampling technique. For this study, we used the woman's data (IR) file and an overall weighted sample of 4027 women of reproductive age.
Variables of the study
Outcome variable. The dependent variable for this study was anemia level, which was determined by the mother's pregnancy status: a hemoglobin level <12.0 g per deciliter (g/dl) when non-pregnant, or a hemoglobin concentration <11.0 g/dl when pregnant, was considered anemia. Based on severity, anemia was also categorized as mild (if hemoglobin levels were between 10.0 and 10.9 g/dl for pregnant women and between 10.0 and 11.9 g/dl for non-pregnant women); moderate (if hemoglobin values were between 7.0 and 9.9 g/dl); and severe (if the hemoglobin level was <7.0 g/dl) for both pregnant and non-pregnant women. In our study, we re-classified anemia status as anemic, coded as "1", and non-anemic, coded as "0". Independent variables. Based on the literature, the explanatory variables included in the study were individual-level and community-level factors. The individual-level variables were classified as maternal-related factors and household-related factors. The maternal-related factors were age of the mothers (categorized as 15-19 years, 20-24 years, 25-29 years, 30-34 years, 35-39 years, 40-44 years, and 45-49 years), educational status (no primary education, primary education, and secondary and above), occupational status (working and not working), marital status, having ever had a terminated pregnancy (yes and no), parity, perception of distance from the health facility, modern contraceptive use, current pregnancy status (yes and no), breastfeeding, body mass index, and whether the respondent slept under a mosquito net. The household-level factors include wealth index (poorest, poorer, middle, richer, and richest), sex of household head, household size, media exposure (made from 3 factors: frequency of listening to radio, frequency of watching television, and frequency of reading newspapers), type of toilet facility (improved and non-improved), and source of drinking water. The community-level factors were residence (urban and rural) and region (Northwestern, Southcentral, Southeastern-a, Southeastern-b, and Northcentral).
Data management and analysis
Data extraction, coding, and analysis were done using Stata version 14 software. Data weighting was carried out throughout the analysis to obtain reliable estimates and standard errors as well as to restore the representativeness of the data. Descriptive statistics were done using frequencies and percentages. A multilevel binary logistic regression model was fitted to determine associated factors of anemia because of the hierarchical nature of LDHS data. In LDHS, study participants were nested within clusters, and we assume that participants within the same cluster are more likely to share similar characteristics than participants in another cluster. The independent and equal variance assumptions of the traditional logistic regression model are violated in this situation. As a result, a sophisticated model must be used to account for the heterogeneity between clusters. Four models were developed during the multilevel analysis: the first (null model), which only incorporated the dependent variable; the second (Model I), which only included individual-level factors; the third (Model II), which only included community-level variables; and the fourth (Model III), which included both individual and community-level variables. To detect the clustering effect or variability, the intraclass correlation coefficient (ICC), median odds ratio (MOR), and proportional change in variance (PCV) were checked. Model comparison was done using deviance (-2 log-likelihood (LL)), and the model with the lowest deviance was declared to be the best-fitted model. To select the variables for the multivariable logistic regression analysis, a binary bivariable logistic regression analysis was initially performed, and variables with a p-value of less than 0.20 were selected as candidates for the multivariable logistic regression analysis. Variables with a P-value of less than 0.05 in the multivariable logistic regression analysis were considered significant factors associated with anemia among women of reproductive age, and an adjusted odds ratio (AOR) with a 95% confidence interval (CI) was reported.
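For illustration, the cluster-level statistics named above follow directly from the estimated cluster (between-community) variance; in this minimal Python sketch the variance values are assumptions, not the fitted LDHS estimates:

import math

def icc(var_u):
    # Latent-variable ICC for multilevel logistic models: cluster variance
    # over total variance, with residual variance fixed at pi^2 / 3.
    return var_u / (var_u + math.pi ** 2 / 3)

def mor(var_u):
    # Median odds ratio: exp(sqrt(2 * var_u) * 0.6745), where 0.6745 is the
    # 75th percentile of the standard normal distribution.
    return math.exp(math.sqrt(2 * var_u) * 0.6745)

def pcv(var_null, var_full):
    # Proportional change in variance relative to the null model.
    return (var_null - var_full) / var_null

var_null, var_full = 0.21, 0.15  # assumed cluster variances
print(icc(var_null), mor(var_null), pcv(var_null, var_full))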
Ethical considerations
Ethical approval and participant consent were not required for this study because it was a secondary data analysis of publicly accessible survey data from the MEASURE DHS Program. The authors requested permission from the DHS Program, which was granted, to download and utilize the data for this study from https://www.dhsprogram.com/data/dataset_admin/login_main.cfm. The datasets contain neither household addresses nor names of individuals.
Sociodemographic characteristics of women in Liberia
An overall weighted sample of 4027 reproductive-age women (15-49 years) was included in the final analysis. The mean [±SD] age of the women was 29.2 (±10.01) years. The highest proportion (21.66%) of women was in the age group of 15-19 years, and nearly 39.26% of the respondents were unmarried. About 45.87% of respondents had secondary education and above. Around 64.77% of women had media exposure, and 63.04% of them had no job or were currently not working. Most (84.74%) of the women were from households with an improved source of drinking water, and about half (50.98%) were from households with an unimproved type of toilet facility. About three-quarters (74.60%) of women did not use modern contraceptives. Concerning residence and sex of household heads, about 62.46% and 60.02% of respondents were urban dwellers and from male-headed households, respectively. Regarding region, about half (50.70%) of respondents were from the Southcentral region (Table 1).
Anemia prevalence among women of childbearing age (15-49 years) in Liberia
In our study, the prevalence of anemia among women of reproductive age in Liberia was 44.51% (95% CI: 42.97-46.04). The study revealed that 23.10% of women of reproductive age had mild anemia, 20.63% had moderate anemia, and 0.78% had severe anemia. The prevalence of anemia was higher in women from the Northwestern region (52.36%) and lower in those from the Northcentral region (37.27%) (Fig 1).
Results of the random effect analysis and model selection
In this study, ICC, MOR, and PCV were used to assess the random-effects model analysis. The community-level variability was measured by both ICC and MOR. The ICC value in the null model was 5.9%, revealing that 5.9% of the total variability of the level of anemia in women of reproductive age was because of differences between clusters, whereas the remaining unexplained 94.1% of the total variability was due to individual differences. Additionally, the highest MOR value (1.16) in the null model supports the fact that there was significant clustering of anemia in women of reproductive age. Moreover, the highest PCV value (0.28) in the last model (model III) indicated that 28% of the variation in anemia among reproductive-age women was explained by both individual-level and community-level variables. The final model (model III), which contains both individual and community-level factors simultaneously, was chosen as the best-fitted model for the data as it had the lowest deviance value (5432.10). We used the last model to identify the significant factors associated with anemia among women of reproductive age in Liberia (Table 2).
Discussion
Anemia among women of reproductive age is a significant public health concern in developing countries due to their increased need for iron during pregnancy, breastfeeding, and menstrual blood loss [10]. In the current study, the prevalence of anemia among women of reproductive age in Liberia was 44.51% (95% CI: 42.97-46.04), which is consistent with a systematic review conducted in developing countries [39]. The prevalence of anemia in this study was higher than in previous studies conducted in Ethiopia [40], Rwanda [24], the Democratic Republic of Congo [19], East Africa [5], Nepal [25], and South and Southeast Asian countries [41]. However, the prevalence in this study was lower than in studies carried out in India [42] and Vietnam [43]. The variation in anemia prevalence across countries is likely due to the differences in sociocultural, geographical, and dietary-related factors between countries. Moreover, the high burden of anemia among mothers in Liberia might be due to their social and biological vulnerability to anemia. Furthermore, in low-income countries, particularly Liberia, access to iron-rich food is insufficient because of low socioeconomic status and limited access to and underutilization of health care, which may contribute to anemia.
Our study indicated that respondent age, body mass index, modern contraceptive use, current pregnancy status, parity, and being from the Northcentral region of Liberia were significantly associated with anemia. Older age groups of women had lower odds of anemia than the youngest age group (15-19 years). This finding is in agreement with different studies done elsewhere [5,18,19,44]. An increased risk of anemia in relatively younger women might be because of the adverse effects of poor dietary iron intake and the increased demand for iron imposed by iron loss during menstrual blood loss, pregnancy, and lactation [17]. We also found that overweight and obese women had lower odds of anemia as compared to women with normal body weight, and this is in agreement with other studies [24,25,41,45]. A study conducted in China indicated that overweight and obese mothers had higher iron consumption rates than normal-body-weight mothers [46]. Previous studies revealed that a higher socioeconomic status is associated with better nutritional status [47], infection prevention [48], better access to health care services, and improved living conditions [47,49], all of which in turn increase iron intake and prevent anemia.
This study also showed that the use of modern contraceptive methods was significantly associated with anemia. A woman who used modern contraceptive methods had a lower risk of developing anemia as compared to women who did not use any contraceptive, and this is supported by different studies [5,6,24,25]. This might be due to the preventive effects of modern contraceptives on menstrual blood loss, pregnancy, and birth-related complications, which, in turn, reduce the burden of anemia due to recurrent blood loss [50,51]. Simultaneous iron supplementation is also obtained, especially in those mothers who have taken oral hormonal contraceptives, and this could prevent anemia [52].
In this study, we also found that pregnant women had higher odds of anemia as compared with non-pregnant women, and this is in agreement with other studies reported elsewhere [5,19,21,26,33]. This is because pregnant women need more iron to support intrauterine fetal development. The second probable reason could be that anemia during pregnancy may result from micronutrient deficiencies, infections, or genetic disorders of the erythrocytes such as thalassemia; all of these are common during pregnancy [53]. Women with a higher parity (3 children or more) had higher odds of developing anemia as compared to women with no children, and this is consistent with other previous studies done elsewhere [54,55]. This is because the prevalence of anemia increases with the number of pregnancies.
Furthermore, in our study, region was significantly associated with anemia among women of reproductive age. The odds of anemia were lower among women who were living in the Northcentral region as compared to women from the Southeastern-b region of Liberia. The first possible explanation for the difference in the proportion of anemia could be variation in sociocultural status, availability and accessibility of health care services, economic status, and dietary-related factors between regions within the same country [56]. Additionally, variation in the prevalence of anemia could be due to the discrepancy in the proportion of women taking iron supplements and getting dewormed between regions [57].
Strengths and limitations of the study
The study has several strengths. Firstly, it was based on a large weighted sample size of nationally representative data. Secondly, appropriate statistical analysis was performed using multilevel analysis to consider the hierarchical nature of the LDHS data and get a reliable estimate. Thirdly, we strongly believe that the study has the potential to provide insight for policymakers and program managers to design appropriate intervention strategies for the problem at both regional and national levels, since it is based on national survey data. Conversely, the study has limitations. Since LDHS data were based on participants' self-reports, there might be recall bias. Also, since this study was cross-sectional, it is difficult to show the temporal relationship between outcome and explanatory variables. We did not address some independent variables, like parasitic infection (such as malaria and intestinal parasitic infestation), in the analysis since these variables are not available in the LDHS data.
Conclusion
In the present study, the prevalence of anemia in women of reproductive age was relatively high. We found that older age, a higher body mass index, the use of modern contraceptive methods, and being from the Northcentral region of Liberia were significantly associated with lower odds of anemia in women of reproductive age. However, being pregnant and having higher parity were significantly associated with a higher prevalence of anemia. Therefore, special emphasis should be placed on high-risk groups such as pregnant and multiparous women.
Table 1. (Continued)
Fig 1. The prevalence of anemia among women of reproductive age in Liberia.
|
Board Thread:General Discussion/@comment-30714040-20170922183405/@comment-25516840-20170922194500
Coolan28 wrote: HEIL JOPEDE wrote: I don't think you have an ego.
|
/*
 * @Author: Shirtiny
 * @Date: 2021-06-23 17:21:45
 * @LastEditTime: 2021-06-24 22:56:47
 * @Description:
 */
import { Slider } from "./shape";
export enum Colors {
red = "#e72528",
blue = "#2a79af",
weakBlue = "#6eb6f4",
geekblue = "#85a5ff",
gold = "#ffd666",
orange = "#ffa940",
cyan = "#13c2c2",
green = "#52c41a",
volcano = "#fa541c",
lime = "#389e0d",
pink = "#eb2f96",
purple = "#722ed1",
darkPurple = "#292F4C",
gray = "#8c8c8c",
}
export class Theme {
colors: typeof Colors;
shapes: {
slider: Slider;
};
constructor() {
this.colors = Colors;
this.shapes = {
slider: new Slider()
};
}
}
|
also to a sinus or vessel acting as a ventral heart. Supra-spiracular line: in caterpillars, margins the spiracles superiorly. Supra-stigmatal line: = supra-spiracular lines.
Neuroptera, serving as oars or organs of locomotion. Swimming paddles: terminal appendages of mosquito pupae. Swoked: smoky, suffused with gray or blackish. Sylvan: species inhabiting forests or woodland areas. Symbiogenesis: the method of origin of social symbiotic relation among ants
|
Centaurea, Michell’s Blue Bonnet Special Florists’ Super-Giant Strain. Blooms are fully double, with clear color and long stems, making this strain extra fine for florists’ cut-flower purposes. Large, deep blue. Trade pkt. 25c; oz. 75c; ¼lb. $2.50; lb. $7.50. Cyclamen,
All-America Selection for 1948—Silver Medal Award. A striking new color combination never before seen in Sensation Cosmos—deep rose petals overlaid with a large, well-defined zone of rich crimson, the first bicolor Cosmos to be developed. Destined for immediate popularity in the nation’s gardens, it is the first postwar novelty. Trade pkt. 50c; ¼oz. 90c; oz. $3.00. HENRY F. MICHELL CO., 516-518 Market St., PHILADELPHIA 5, PA.
|
import sys
from main import Minimizer


def load_hashtable(hash_file):
    """
    Load a minimizer hash table that was previously saved to disk.
    Each line of the file is expected to be in the format
    "key:positions"; a Minimizer object is created for every entry
    and stored in a dictionary keyed by the minimizer string.
    :param hash_file: str (path to the file from which to load the hash table)
    :return: dict (dictionary of minimizers keyed by minimizer string)
    """
    minimizer_table = dict()
    with open(hash_file, "r") as file:
        for line in file:
            parsed_line = line.strip().split(":")
            key = parsed_line[0]
            positions = parsed_line[1]
            mini = Minimizer(positions, key)
            minimizer_table[key] = mini
    return minimizer_table
if __name__ == "__main__":
    """
    Argument for this script should be path to the file
    where hashtable is saved and string which you want to
    be found (which is a minimizer saved in hash.txt).
    For example:
    python MinimizerHashTableQuery.py hash.txt TTTCA
    """
    hash_file, string_to_find = sys.argv[1], sys.argv[2]
    minimizer_dictionary = load_hashtable(hash_file)
    output_file = "out.txt"
    try:
        with open(output_file, "w") as out:
            out.write("Found string: {}, Positions: {}".format(string_to_find, minimizer_dictionary[string_to_find].position))
    except KeyError:
        with open(output_file, "w") as out:
            out.write("Minimizer string {} not found".format(string_to_find))
|
export function selectBook(book) {
  // selectBook is an ActionCreator: it needs to return an action,
  // an object with a type property.
  // Every action has a type (describing the purpose of the action)
  // and a payload (data that describes the action taken).
  // BOOK_SELECTED - keep it uppercase by convention.
  return {
    type: "BOOK_SELECTED",
    payload: book
  };
}
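For context, here is a minimal reducer sketch showing how this action might be consumed. It is a hypothetical illustration - the reducer name and state shape are assumptions, not part of the original snippet:

// Hypothetical reducer: stores the selected book when a BOOK_SELECTED
// action is dispatched; otherwise it returns the existing state unchanged.
export function activeBookReducer(state = null, action) {
  switch (action.type) {
    case "BOOK_SELECTED":
      return action.payload;
    default:
      return state;
  }
}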
|
John H. Kassing v. John M. Durand and Henry C. Durand.
Real Property—Secret Trust—Fraud.
1. He who goes into a court of equity seeking relief, must do so with clean hands.
•2. A secret trust in real estate resting upon an agreement to hinder or delay creditors, will not be enforced.
3. A court of equity will afford no relief to a debtor who has transferred his property for the purpose of defrauding his creditors, and subsequently seeks as against the transferees to recover back the same.
[Opinion filed June 25, 1891.]
Appeal from the Circuit Court of Cook County; the Hon. Lorin C. Collins, Judge, presiding.
John H. Kassing filed his bill in the court below setting forth that about the first of September, 1876, he was the owner in fee simple of lots 4, 5 and 9, in block 14, in Newberry’s addition to Chicago, subject to a mortgage to secure the payment of about $10,000, with interest thereon at the rate of nine per cent per annum, payable semi-annually; that at said time he was indebted to different people, which indebtedness he had not at that time the ready money to pay, although he was the owner of property worth more than enough to pay such indebtedness; that he was at that time acquainted with appellees, John M. and Henry C. Durand, who were then engaged in the wholesale grocery business in the city of Chicago, and he was then indebted to them, or the firm of which they were members, and that said firm held his promissory note therefor for the sum of $550; that being so indebted to appellees, and not having the ready cash at that time to pay the same, he went to them for the purpose of securing the payment of the note to them, and borrowing money from them to pay his indebtedness, and to take and follow their advice in the matter of giving them security; and that he told them exactly how he was financially situated; what property he owned; that he wished to secure the payment of the note to them aforesaid, and to borrow money from them to pay his indebtedness; that it was then understood by them that he was the owner of the real estate heretofore mentioned and that there was then no money due on the said mortgage; and it was then suggested by a lawyer employed by said appellees, that the best way for them to obtain security for the payment of the promissory note held by them and the repayment of any money that they might loan to him, would be for appellees’ firm to enter up judgment upon said note, to issue execution thereon, levy the same upon the real estate hereinbefore mentioned and sell the interest of appellant in said real estate and bid it in in the name of one or both of said appellees, and to wait until there was interest due upon the mortgage and not to pay the same, but to have the mortgage foreclosed, and at the foreclosure sale either one or the other of appellees should bid off said lots covered by the mortgage, in the name of one or the other of them, and take a deed therefor; and it was also suggested by said lawyer that appellant should file a petition in bankruptcy in the District Court of the United States for the Northern District of Illinois, to be declared bankrupt, and to have an assignee appointed, and that said assignee in bankruptcy should sell and convey all the interest of appellant in said real estate, and that appellees, or one of them, would attend the sale thereof to be made by the assignee in bankruptcy and bid off said property and receive a deed therefor, and in that way appellees would be able to have in their own hands the legal title to said real estate, and it could be held by them as security to them for the payment of said note, and for the repayment of any and all moneys they should loan and advance for and on behalf of appellant; and it was then and there agreed between appellant and appellees that the aforesaid things suggested and advised by said lawyer should be done.
That accordingly appellant permitted appellees to obtain the title to said real property in the manner suggested; that in pursuance of said arrangement on the 14th day of October, 1876, a judgment was entered in the Circuit Court of Cook County in favor of John M. Durand and Henry C. Durand, Calvin Durand and Cornelius E. Connolly, as plaintiffs, and against appellant as defendant, for the sum of $562 and costs; that execution was issued upon said judgment, and delivered to the sheriff of Cook County, and the sheriff thereupon levied upon the said real property on the 27th day of November, 1876, and sold appellant’s interest in the same and improvements thereon for the nominal sum of $594, and issued to said John M. Durand a certificate of sale thereof; that afterward appellant, acting under the advice of said appellees and their lawyer, on the 4th day of November, 1876, filed his petition in bankruptcy in the District Court of the United States for the Northern District of Illinois, and was declared a bankrupt, and Robert E. Jenkins was, on or about the 6th day of January, appointed assignee in bankruptcy of appellant’s estate; and that thereafter, to carry out the arrangement with appellees, the interest coupon on said mortgage which next fell due was not paid, payment being withheld for the purpose of creating an excuse for foreclosing the said mortgage, and thereupon the United States Mortgage Company took steps to foreclose said mortgage; advertised for sale the property covered thereby, and sold the same on or about the 28th day of February, 1877; and that upon the suggestion of said appellees and their lawyer, appellant procured one Bernard Niggemyer to attend the sale, and as he is informed and believes, the said Niggemyer, and the said John M. Durand, and the lawyer of appellees, attended the sale, and the said John M. Durand bid off the said property at said sale, with all the improvements thereon, for the sum of $12,800, in the name of the said Henry C. Durand, said John M. Durand being the only bidder therefor, and that thereafter on the 28th day of February, 1877, the said United States Mortgage Company executed and delivered to said Henry C. Durand a warranty deed of the premises so sold at said foreclosure sale, and that thereafter, on or about the 26th day of March, 1877, the said Robert E. Jenkins, as assignee in bankruptcy, sold appellant’s interest in the real estate hereinbefore described, together with appellant’s interest in other real estate, at public auction, to Henry C. Durand, for the sum of $315, and executed and delivered to said Henry C. Durand a deed of the premises so sold. The bill sets forth that after this was done the proceedings in bankruptcy were dismissed, but when is not stated. The bill further alleges that thereafter, on the 19th day of March, 1878, the said premises sold under said execution, not having been redeemed from the said sale made by the sheriff of Cook County, the said sheriff executed and delivered to the said John M. Durand a sheriff’s deed of the premises so sold.
Appellant further sets forth in his bill that all of the aforesaid transactions were made under an arrangement with appellees for the purpose merely of placing the title to appellant’s said real estate in the said appellees as security for the indebtedness of appellant to appellees’ firm, and the money advanced by appellees to make said purchase, and such other moneys as appellees might have to advance for taxes and to protect their lien upon said property, and that lots 4 and 5 were, when he made the arrangement, worth $30,000, and lot 9 was then worth $2,000.
The bill sets forth that appellant remained in possession and control of said premises, rented the same, collected the rents, made repairs on them and paid appellees large sums of money down to the year 1886; that from some time in the year 1876 or 1877 down to 1886 he paid to appellees a large sum of money, the exact amount of which he is unable to state as he kept no account of the same, but trusted entirely to appellees’ keeping a correct account thereof; that such payments were made by him to apply on the indebtedness he was owing to appellees for their advances as aforesaid; that how much money appellees advanced and paid out on account of said premises under their arrangement with appellant, appellant does not know, but that appellees have a full and complete account of the same; that in 1886, appellees notified appellant that he could collect no more rents from said premises, but they would employ a real estate agent to manage the property for them, and that since 1886 appellant has not been allowed by appellees to have anything to do with the premises, but the rents have been collected and paid to appellees, and they are now in possession of the premises to the exclusion of appellant; and they now repudiate the arrangement which they made with him and insist that they took the title to said premises entirely for their own use and benefit and that appellant has no interest or right in or to said premises or claim upon them in any way whatever.
The bill prays for an accounting by appellees as to their receipts and expenditures on account of such premises, and that appellees be compelled to convey said premises to appellant, and to surrender to him all deeds, leases, writings and tax receipts which they or either of them have pertaining to said premises.
A demurrer to the bill was sustained and the bill dismissed; from the decree sustaining the demurrer and dismissing the bill, the complainant below has appealed.
Mr. A. B. Jenks, for appellant.
The first question is, can an absolute conveyance in equity be proven by oral testimony to be a mortgage ?
That proof can be so made seems to be positively settled by the following authorities: 1 Jones on Mortgages, Sec. 293; Miller v. Thomas, 14 Ill. 428; Tillson v. Moulton, 23 Ill. 600; Shaver v. Woodward, 28 Ill. 277; Smith v. Cremer, 71 Ill. 185; Union Mutual Life Insurance Co. v. White, 106 Ill. 67; Workman v. Greening, 115 Ill. 477; Slee v. Manhattan Company, 1 Paige Chy. 48.
Has the statute of frauds any application to a case of this kind, in equity, where a fraud was perpetrated by the persons in obtaining the legal title to real estate from the owner, and who now seek to maintain such legal title as against the owner of the property ?
It is said that “ the correct view appears to be that equity will at all times lend its aid to defeat a fraud notwithstanding the statute of frauds.” Browne on Statute of Frauds, 3d Ed., Sec. 438.
“A simple illustration of the rule that when the statute of frauds has been used as a cover to a fraud, equity will relieve against the fraud, notwithstanding its provisions, is found in a case reported by Viner, and stated by him to have occurred in Lord Nottingham’s time, and to have been the first instance in which any equitable exception to the statute appears. There was a verbal agreement for an absolute conveyance of land, and for a defeasance to be executed by the grantee; but he, having obtained the conveyance, refused to execute the defeasance, and relied upon the statute; but his plea was overruled and he was compelled to execute according to his agreement. Here the attempted fraud consisted, not merely in refusing to do what he agreed, but in deceiving the plaintiff out of his property.” Browne on Stat. of Frauds, 3d Ed., Sec. 441.
“ It was not intended by the adoption of the statute (frauds) to facilitate the perpetration of and to protect fraud, but to prevent it; and the courts have never lent themselves to assist or protect fraud, which the statute did not sanction by its adoption. The courts will not permit the statute to be used as an engine of fraud, as its purpose was its suppression.” Union Mutual Life Ins. Co. v. White, 106 Ill. 67.
“ The question is presented on this record whether, as the transaction has assumed the form of a sale, a court of equity can effectuate the original intention of the parties, by declaring it a mortgage. It is objected that the agreement, resting alone in parol, the statute of frauds will prevent it from being carried into effect. Under Sec. 12 of our Conveyance Act, and upon general equitable principles, this court repeatedly held that deeds in form absolute, might be shown to be mortgages in fact. Such has been repeatedly held, as in the case of Wynkoop v. Corning, 21 Ill. 570, it was said that courts are not estopped from looking into all the facts and circumstances of a deed absolute on its face, to ascertain whether a loan of, and security for, money was really intended.” Reigard v. McNeil, 38 Ill. 400.
“ Courts of equity strongly incline to treat all securities for money, or to indemnify, as mortgages; and when a purchaser of lands, at or before a judicial sale, promises to extend the redemption beyond the time allowed by law, the transaction will be treated as a mortgage of the land sold, the real right of the creditor extending no further than full satisfaction of his debt. All such cases, however, are controlled by the circumstances attending them.” Pensoneau v. Pulliam et al., 47 Ill. 58.
In the case of Slee v. The President and Directors of the Manhattan Company, 1 Paige Chy. Rep. 47, it is said by Judge Betts (page 55): “It is unquestionably the rule of this court and at law, that a deed, however absolute the terms may be upon its face, if really only intended to secure a debt, will be deemed a mortgage, though the defeasance is by parol. 1 Johns. Chy. R. 594; 4 Id. 167; 6 Id. 417; 7 Id. 40. It never loses the character and quality of a mortgage, and the right of redemption, as an inseparable incident, can not be restrained or clogged, even by the stipulation of the parties. Clark v. Henry, 2 Cowen, 324, in error. Lapse of time, in this case, can not affect the remedy; for as against the mortgagor, possession by the mortgagee for any period short of twenty years will not bar the equity of redemption. Anon. 3 Atk. 213; Moore v. Cable, 1 Johns. Chy. R. 385.”
Messrs. M. B. & F. S. Looms, for appellees.
The transaction, being in fraud of creditors, can not be successfully assailed by either of the parties thereto.
In the case of Dunaway v. Robertson et al., 95 Ill. 419, the court say (at page 426): “ The rule is familiar that no man can be permitted to found a claim on his own iniquity—Frustra legis auxilium quaerit qui in legem committit.” The court continues to say: “It was held by this court in Miller v. Marckle, 21 Ill. 152, that where a transaction was tainted with fraud, as between the parties to it, a court will not assist either, but will leave them in the position in which they have placed themselves.” In support of the decision the case of Smith et al. v. Hubbs, 10 Me. (1 Fairfield) 71, was cited, where Mellen, C. J., says: “ Whatever the parties to an action have executed for fraudulent or illegal purposes, the law refuses to lend its aid to enable either party to disturb. Whatever the parties fraudulently or illegally contracted to execute, the law refuses to compel the contractor to execute or pay damages for not executing; but in both cases leaves the parties where it finds them.” The object of the law in the latter case is, as far as possible, to prevent the contemplated wrong, and in the former, to punish the wrongdoer by leaving him to the consequences of his own folly or misconduct. Bolt v. Rogers, 3 Paige, 154, was also cited, where the court say: “Whenever two or more persons are engaged in a fraudulent transaction to injure another, neither law nor equity will interfere to relieve either of those persons, as against the other, from the consequences of their own misconduct.” In conclusion, in the case of Miller v. Marckle, it was said: “The rule we have adopted seems best calculated to frustrate the designs of parties who engage in transactions of a fraudulent character, saying to them most emphatically, keep what you have got, be it notes or mortgages, but seek not our aid to enforce the one or the other, or, on the other hand, to relieve against them.”
The case above cited (Dunaway v. Robertson et al.), like the case at bar, was heard and decided on demurrer to bill of complaint. The bill was filed for the purpose of setting aside certain deeds which the complainant had executed to his father, in the lifetime of the latter, or to compel the other heirs to make a re-conveyance, alleging that “ fearing he (complainant) might in the then future become involved in litigation, which, because of its tediousness and expensiveness, would probably embarrass him, but not for the purpose of avoiding the payment of any debt or obligation, legal or equitable, due from him, or previously contracted by him,” complainant made the deeds. It was held that averment in the bill that the deeds were not made for the purpose of avoiding the payment of any debt or obligation, was but the statement of a conclusion, and did not relieve the conveyances of the fraudulent character attributable to them by reason of their being made under an apprehension of embarrassment from anticipated litigation.
So in the case at bar, the assertion of appellant in his bill “ that he had no intention whatever of hindering or delaying his other creditors,” is but the statement of a conclusion and does not relieve the transaction narrated of the fraudulent character attributable to it by reason of its evident intention to defraud the creditors of appellant. It is not necessary there should be any actual defrauding of creditors intended. It is sufficient if there was an intention to place property beyond their reach for the time being. Phelps v. Curts, 80 Ill. 109; Nesbitt et al. v. Digby et al., 13 Ill. 387; Fitzgerald v. Forristal, 48 Ill. 228; Ryan v. Ryan, 97 Ill. 38.
A debtor in failing circumstances is only allowed to place his property beyond the reach of his creditors by making a general assignment of it; by devoting it unreservedly to the payment of his debts; and not with a view to his advantage in delaying until a favorable time the appropriation of the property for such purposes. Nesbitt et al. v. Digby et al., supra; Phelps v. Curts, supra.
A transaction consisting of a purchase, through a third person, of mortgaged premises at a foreclosure sale, may be deemed fraudulent as to the mortgagor’s creditors, the evidence showing a scheme to keep the land from them by means of the foreclosure and purchase. Simmons v. Johnson, 48 Hun (N. Y.), 131.
A party to a covinous transaction intended to cover the property from creditors, shall not give in evidence his own fraud, whether it be an absolute deed or a mortgage, or however it be mingled with other arrangements between the parties. Gill v. Henry, 95 Pa. St. 388.
An intent to delay creditors must be conclusively presumed from a conveyance which does delay, them. In re Smith, 4 Ben. 1.
A secret trust in real estate, resting upon an agreement made to hinder and delay creditors, will not be enforced. Fast v. McPherson, 98 Ill. 496; Moore, Adm’r, etc., v. Wood, 100 Ill. 451; Tyler v. Tyler, 126 Ill. 525.
The grantor of a deed made to baffle creditors, can not be relieved against its operation, though it was in fact intended as a mortgage. Ybarra v. Lorenzana, 53 Cal. 197; Lill v. Brant, 6 Ill. App. 366.
A conveyance of land made by a debtor to his attorney at the suggestion of the latter, with mutual intent to defraud the. client’s creditors, vests the legal estate as between the parties to the deed. York v. Merritt, 80 N. C. 285.
The fact that judgments were confessed for bona fide debts, does not exclude fraudulent intent, especially where the manner in which the judgments were confessed, the agency resorted to, and the use made of them, stamp the transaction as an attempt to keep creditors at arm’s length, and to enable the debtor to retain possession and control the property. Abey v. Schwab, 31 N. Y. S. R. 139.
If a man, knowing that a scheme is fraudulent, and that the natural outcome of it will be to defraud some innocent person, goes into it solely for the purpose of making money out of it, though he may not be equally in fault with another who is the moving party in the fraud, and influences him by his persuasions and representations, a court will, on the ground of public policy, deny him any relief against the other party. Knight v. Linzey (Mich.), 45 N. W. Rep. 337.
Numerous other cases to the same effect as those already referred to might be cited, establishing beyond question the doctrine that, while transactions of the kind narrated in appellant’s bill of complaint, are, or may be void as to existing creditors, they are perfect and effectual as between the parties, and can not be set aside by either of them in case they become dissatisfied with the transaction.
The alleged agreement of appellees to re-convey constituted an express trust, and, being verbal, was void under the statute of frauds.
The bill of complaint sets up, in direct and express words, a positive verbal agreement on the part of appellees to re-convey to appellant the premises in question; there is no pretense that any contract, agreement or memorandum in writing of any kind was ever made or signed by them, or either of them, to that effect; and nothing is shown to take the case out of the operation of the statute.
The statute provides (Sec. 9, Chap. 59) that “ All declarations or creations of trusts or confidences of any lands, tenements or hereditaments shall be manifested and proved by some writing signed by the party, * * * or else they shall be utterly void and of no effect. Provided, that resulting trust or trusts, created by construction, implication or operation of law, need not be in writing, and the same may be proved by parol.”
There can be no pretense in this case that there was a resulting trust within the exception of the statute, as a resulting trust can not be created unless the money of the cestui que trust (appellant) was used in acquiring title to the property. It can not be created by agreement or contract. Remington v. Campbell, 60 Ill. 516; Sheldon v. Harding, 44 Ill. 68; Holmes v. Holmes, 44 Ill. 168; Roberts v. Ware, 40 Cal. 634.
It must, moreover, be shown that the money was actually paid, directly or indirectly, by appellant. It is not sufficient to show that he requested appellees to do so, and promised to repay them what they paid for the same. Kendall v. Mann, 11 Allen 17, 546; Baxter v. Baxter, 22 Cal. 579.
Appellant does not claim that any of his own money was used in the purchase, either at the sheriff’s sale, the mortgagee’s sale, or the sale under the bankruptcy proceedings, through which several proceedings appellees acquired title, but concedes that appellees furnished the money, but agreed verbally to re-convey on being reimbursed therefor by him.
In the case of Stephenson v. Thompson, 13 Ill. 186, where an agreement existed in parol between A and B, that A should pay for certain lands, and on being reimbursed by B therefor, A should convey them to B, and the lands were sold at sheriff’s sale and bought by A with his own money, and conveyed to him by the sheriff, it was held that the contract between A and B, being verbal, was void under the statute of frauds.
If the bill had sought to establish a resulting trust, by construction, implication or operation of law, it would have been no defense to have alleged that the trust was not created by some instrument in writing. But, as stated, the bill alleges an express trust, and to this allegation the defense that the agreement to re-convey—or declaration of trust—-is not in writing, is applicable. Scott v. Harris, 113 Ill. 447; Biggins v. Biggins, 133 Ill. 211; Adams v. Adams, 79 Ill. 517; Wilson v. McDowell, 78 Ill. 514; Lill v. Brant, 6 Ill. App. 372; Green v. Coates, 73 Mo. 115.
Waterman, J.
The bill filed by appellant is for the enforcement of a contract alleged to have been made by him with appellees.
The first question that arises is, whether the contract as stated is one which a court of equity will enforce. The rule that no one should be permitted to take advantage of his own wrong—Nullus commodum capere potest de injuria sua propria—is familiar.
We are therefore called upon to inquire what it really was that appellant alleges was agreed to be done, and what has been done in pursuance of such agreement.
The allegation is made that he told appellees that he wished among other things to borrow money of them with which to pay his debts, and that they expressed a willingness to loan him money for such purpose; but it does not appear that any attempt to obtain money for such purpose was actually ever made. Practically, according to the bill, the advances and the application for advances were confined to the money necessary to buy in for the use of appellant his property when offered for sale.
That the scheme was one well calculated to enable appellant to defraud his creditors, is apparent.
Lots 4 and 5 were at the time he made this arrangement, worth $30,000, and lot 9 was then worth $2,000. Upon these the only incumbrance shown to have existed was for the sum of $10,620. In this property appellant had an equity of over $20,000, yet he arranges to have it sold, and it was sold to appellees, first on a judgment for $562, then two of the lots were sold for $12,800 on the mortgage. A secret trust, as appellant says, was created in his favor, he apparently having lost this valuable equity.
Next, as a part of the arrangement, and lest some of his creditors might redeem from the judgment sale or seek to reach his secret equity, he filed a petition in bankruptcy. By this last proceeding appellant practically stayed the hands of his creditors, suspended the operation of the ordinary remedies they possessed, and compelled them to go into the bankrupt court in order to reach his property.
By the adjudication of bankruptcy the valuable secret equity which he had in this property passed to his assignee, became a bankrupt asset to be sold, the proceeds thereof to be divided among his creditors.
Appellant also, by the filing of his petition in bankruptcy, placed himself in a position where he might, by the payment of but a percentage upon the claims against him, obtain a discharge that would free him from all his indebtedness.
In this proceeding, in which, under his solemn oath, he summoned his creditors into a court of his own choosing and compelled them there alone to look for the ascertainment and payment of their claims, he was bound to the exercise of the utmost good faith. It nowhere appears in the bill he has filed in this cause, that in the bankruptcy court he at any time disclosed the fact of his ownership of the valuable equity in this property, he now tells us that he then had. So far as appears, his creditors and his assignee were suffered to remain in ignorance of this, a fact which it was his moral and legal duty to at once make known.
That he did not do so, and that his creditors were defrauded by his action and his silence, is apparent from the fact that this valuable equity, amounting to over $20,000, was on the 26th day of March, 1877, sold by Robert E. Jenkins, his assignee, to Henry C. Durand, for his, appellant’s, use and benefit, for the sum of $315.
And the arrangement for this proceeding under which his creditors were swindled out of this great sum, a court of equity is now asked to lend its aid in carrying out.
As to the lots alleged to have been worth the sum of $30,000, the work of secreting all evidence of appellant’s title and interest seems to have been complete February 27, 1877. As to the remaining lot, then worth $2,000, the right of creditors to redeem was finally cut off March 19, 1878.
Appellant began, as he says, to collect rents and pay them over to appellees; this continued until 1886; appellant keeping no account of how much he thus paid over, as he neither kept nor asked for an account of what appellees had paid for or kept on behalf of the premises. Twelve years thus passed away, when appellees saw fit to, as they told him, employ another real estate agent to look after the property; nearly four years after this he filed his bill.
It is a fundamental principle that he who goes into a court of equity seeking relief, must do so with clean hands.
The allegation in the bill that appellant had no intention of hindering or delaying his creditors, is similar to that made in the case of Dunaway v. Robertson, 95 Ill. 419, concerning which the court say: “ This is but the statement of a conclusion and does not control in the case.”
The arrangement for an actual sale of appellant’s property under the circumstances described would be for a fraudulent purpose as regards creditors, notwithstanding any assertions to the contrary. Dunaway v. Robertson, supra.
One of the most common occasions for the enforcement of the rule that he who comes into equity must do so with clean hands, arises in cases -where a debtor has in any manner transferred his property for the purpose of defrauding his creditors and afterward seeks as against the transferee to recover back the property. The door of a court of equity is always shut against such a claimant. Pomeroy’s Equity Jurisprudence, Sec. 401; Wheeler v. Sage, 1 Wall. 518 ; Bolt v. Rogers, 3 Paige, 156 ; Riedle v. Mulhausen, 20 Ill. App. 68; Ryan v. Ryan, 97 Ill. 38; Fitzgerald v. Forristall, 48 Ill. 228 ; Phelps et al. v. Curts et al., 80 Ill. 109 ; Nesbit v. Digby, 13 Ill. 387.
It is true that according to the allegations of the bill, appellees were engaged with appellant in a fraudulent scheme, but in such case the rule is in pari delicto potior est conditio defendentis. Miller v. Marckle, 21 Ill. 152.
A secret trust in real estate resting upon an agreement to hinder or delay creditors, will not be enforced. Fast v. McPherson, 98 Ill. 496; Moore v. Wood, 100 Ill. 451.
The agreement as stated was one which could not fail to hinder and delay creditors; the bankruptcy proceeding alone hindered and delayed them.
Appellant did not make or state in his bill a cause entitling him to relief in a court of equity, and the bill was properly dismissed. Judgment affirmed.
|
User:Ttocserp/sandbox
According to connoisseurs
In a paper for the 2013 Oxford Food Symposium, Tan and Bursa identified the features of the art or craft of making and serving Turkish coffee, according to the traditional procedures:
* Roasting. Ideally the best green Arabica beans are medium-roasted in small quantities over steady heat in a shallow, wrought-iron roasting pan. It is crucial to stop at the right moment, then transfer the beans to the next stage:
* Cooling. The beans are allowed to cool down in a wooden box and absorb excess oil. The kind of wood is claimed to affect the taste, walnut being the best.
* Pounding or grinding. The beans must be reduced to a very fine powder. The fineness of the powder is crucial to the success of Turkish coffee since it affects the foam and mouth feel. (According to one source, the particle size should be 75-125 microns.) Strict connoisseurs insist that the beans must be hand-pounded in a wooden mortar, although it is difficult to do this while achieving a uniform fineness. Consequently it has become more usual to grind them in a brass or copper mill, though this does make for drier particles.
* Brewing. It is essential to use a proper cezve. This vessel is a conical flask, wider at the base than at the neck, and is made of thick forged copper. (A common-sized cezve will make one cup of coffee, and they can easily be ordered online in many western countries.) Cold water, several teaspoons of the ground coffee (at least 7 grams per person) and any sugar are put in the cezve and it is put on the fire. The tapering shape of the vessel encourages the formation of foam and retains the volatile aromas. The coffee should never be allowed to come to a rolling boil, and must not be over-done. "This stage requires close monitoring and delicate timing since a good Turkish coffee has the thickest possible layer of froth at the top". Some think that the metallic copper helps to improve the taste.
* Serving. The cezve has a spout by which the coffee is poured into the serving cup. While the cup design might not seem to have anything to do with the taste of the beverage, connoisseurs say it makes a difference. The best cups are made of porcelain with a thin rim, which affects mouth feel. A long cultural tradition emphasises the pleasure of being served coffee in beautiful cups, which are family heirlooms. The beverage is served together with a glass of water which should be sipped first to cleanse the mouth. Other cultural traditions affect the guest's appreciation of the beverage and the conviviality of the occasion, including story-telling, fortune-telling, and so forth.
While some of these stages may be curtailed in modern coffee drinking, for example the coffee might be purchased already roasted and ground, the rituals and paraphernalia (e.g. the anticipatory smell of the roasting beans) do act on the imagination and have a psychological effect.
xxx
The pelota was an improvised leather boat used in South and Central America for crossing rivers. It was similar in some respects to the bull boat of North America or the coracle of the British Isles, but it often had no wooden frame or internal supporting structure, relying entirely on the stiffness of the hide to keep it afloat. It could therefore be carried on horseback and assembled quickly in an emergency, and making one was a common rural skill. The boat was towed by an animal or by a human swimmer, women being considered especially adept. Pelotas could carry substantial loads (around a quarter of a ton was usual) and even small pieces of artillery. They remained in use well into the twentieth century.
Necessity
General Manuel Belgrano recalled taking a small revolutionary army across the Corriente River in 1811 with nothing but two bad canoes and some pelotas. The river was about a cuadra (80 metres) wide, and unfordable. He noted that most of his men knew how to use a pelota, implying that it was standard rural knowhow.
Not all countrymen knew how to swim, however: it depended on the region. The cavalry troopers of General Paz were from the Province of Corrientes, where everyone did. Crossing a river at night, holding on to the manes or tails of their swimming horses - their arms, ammunition, uniforms and saddles safely dry in pelotas, which they had improvised from rawhide saddle blankets - they surprised and defeated the enemy at the Battle of Caaguazú.
Sources
Professional pelota towers
The Spanish Empire established a postal service linking Buenos Aires in the Atlantic world with Lima in the Pacific. At intervals along this 3,000 mile route, posts were set up where fresh horses could be obtained and there was (very) basic sleeping accommodation. These posts were often beside rivers. At each place a person was put in charge who received no salary but was rewarded with valuable legal exemptions. Private travellers were encouraged to travel with the mail, being forbidden to take their own horses.
Where the rivers were too deep to be forded the postal service appointed pasadores whose function was to carry passengers and mail across in pelotas. They were not allowed to charge much for the mail but were able to recoup themselves from the private travellers. Thus, at some places there were official pelota towers - persons who swam across rivers and pulled the boat with their teeth - whose charges were regulated by law.
The most notable crossing was at the Río del Pasage (today called the Juramento River), which lay on the road between Tucumán and Salta. It could be forded quite easily in the dry season, but when the waters rose it grew wide and deep, with strong currents and eventually, turbulent waves. An artillery officer wrote that the river brought down logs that endangered pelota and swimmer alike; the latter had to be adept to dodge them. He recorded that the service was still functioning in 1833, despite the need for a bridge, for none had been built owing to bureaucratic inertia. At this spot Indian women were celebrated pelota towers. It is not clear when the service ceased to function.
Pelotas in reality, and in European art
Surviving images of pelotas drawn by European artists always depict them with some form of wooden bracing, as if they assumed one must be necessary to give them rigidity. Verbal descriptions by reputable observers, like Azara or Dobrizhoffer, make it clear they seldom possessed one; it would have defeated the object.
Rosas
Shumway, Jeffrey. "A veces saber olvidar es también tener memoria”: La repatriación de Juan Manuel de Rosas, el menemismo, y las heridas de la memoria Argentina." O. Barreneche y A. Bisso (Comps.), El tiempo pasa, la historia queda. Ayer, hoy y mañana son contemporáneos (2010): 93-132.
xxx
1. "Paraguayan War" is the preferred usage in the English language, certainly in serious scholarly writing.
The JSTOR library is a database of nearly all recent high-quality scholarly articles in the English language. The facts speak for themselves:
(Source: JSTOR, interrogation of search engine provided, 29 April 2024, all articles.)
The larger (albeit lower quality) Google Scholar database paints a broadly similar picture:
Likewise, there are clearly more books with "Paraguayan War" in the title than "War of the Triple Alliance". (Source: Google Books, interrogate intitle:field.)
2. The 1864-1870 war is little known outside South America. By itself, the title "War of the Triple Alliance" doesn't tell the international reader anything. Which triple alliance? There have been quite a few in human history. To suppose "Triple Alliance", without context, must mean the South American one, is parochial. "Paraguayan War" at least points the reader to the right continent.
3. The title "War of the Triple Alliance" can be ideologically loaded. It was increasingly hijacked by the revisionists of the 1970s, with their conspiracy theories of an invisible plot to "get" Paraguay. But it was the war that caused the triple alliance, not the other way round. The war actually began and developed in 1864, between Paraguay and Brazil alone; there was no triple alliance then, just a Paraguayan army sacking the Mato Grosso's capital. Not until after Argentina's province of Corrientes was invaded in April 1865 did Argentina make an alliance with Brazil - its traditional enemy.
|
Talk:Kwamibuster/@comment-37547855-20191012073538
so basically...?
all went back to the box and cat noir now thinks marinette isn't ladybug Xc
|
package org.lilyproject.indexer.model.indexerconf;
import java.util.HashMap;
import java.util.Map;
/**
 * Resolves name template parts against a fixed map of variable values,
 * handling conditional, variable and literal template parts.
 */
public class DynamicFieldNameTemplateResolver implements NameTemplateResolver {
private Map<String, Object> values = new HashMap<String, Object>();
public DynamicFieldNameTemplateResolver(Map<String, Object> values) {
this.values = values;
}
@Override
public Object resolve(TemplatePart part) {
if (part instanceof ConditionalTemplatePart) {
ConditionalTemplatePart cPart = (ConditionalTemplatePart)part;
Object condVal = values.get(cPart.getConditional());
if (condVal == null) {
throw new NameTemplateEvaluationException("Variable does not evaluate to a value: " + cPart.getConditional());
}
if (!(condVal instanceof Boolean)) {
throw new NameTemplateEvaluationException("Variable is not a boolean: " + cPart.getConditional());
}
if ((Boolean)condVal) {
return cPart.getTrueString();
} else {
return cPart.getFalseString();
}
} else if (part instanceof VariableTemplatePart) {
VariableTemplatePart varPart = (VariableTemplatePart) part;
Object value = values.get(varPart.getVariable());
if (value == null) {
throw new NameTemplateEvaluationException("Variable does not evaluate to a value: " + varPart.getVariable());
}
return value;
} else if (part instanceof LiteralTemplatePart) {
return ((LiteralTemplatePart)part).getString();
} else {
throw new NameTemplateEvaluationException("Unsupported TemplatePart class: " + part.getClass().getName());
}
}
}
|
Accepted the offer but no response from Human Resources Manager
I interviewed with XYZ Company and got the offer letter. I signed it and sent it back the same evening.
There were 6-8 documents which needed to be filled out, signed, and sent. I filled out all the documents online, but one of them I initially filled out by hand; I later filled it out as a PDF, yet when sending the email I attached the handwritten one. I immediately sent a follow-up email asking them to ignore the previous attachments, since one of the files was not clean, and to use the ones attached to the next email.
Does doing this give a negative impression to the Human Resources manager? Will they revoke the offer over this? I called them this morning, left a voicemail, and got no response.
How long ago did you send them the documents? Resending the documents shouldn't reflect badly on you at all, and they definitely shouldn't pull back the offer over something so inconsequential.
I received the documents on Friday and sent them back on Monday evening. Do Human Resources managers have the right to revoke an offer, or is it the hiring manager who must make that call? I emailed Monday and called this morning and left a voicemail. Does calling immediately make any negative impact? @David
Breathe Nikhil, you're panicking. So far you have not done anything that I would say warrants revoking an offer. Mailing one evening and calling the next morning for an update is an overreaction though, and if you constantly contact them it will certainly look bad. Give them a week to get back to you - 8 documents is not a small amount and they probably have a good deal of paperwork to do before anything else can happen.
Thank you David. Yeah, I know it's an overreaction, but I didn't call for an update - I just called and left a note saying I was checking whether they need anything else from me.
Does doing this give a negative impression to the Human Resources manager?
I would be guessing, but I don't see how this could cause such a negative impact.
Will they revoke the offer over this?
I seriously doubt it.
You already went through the whole interview process and were accepted; sending a sub-optimal document is hardly something that disqualifies someone from a job.
I know you are excited about this new job, but don't torture yourself. At this point you are already in, so don't let this minor incident make you doubt yourself.
|
Program stuck on startup
After the environment is set up, I use only one GPU (GeForce RTX 3080) to run the script.sh from the startup manual, and I find that the program gets stuck on this page for more than ten minutes:
and then the program reports an error
I don't know why this happens, can you help me bro?
@MLuminous I'm sorry to bother you. Have you successfully run the program now? Is it running on Windows?
@MLuminous @charlesCXK @ChrisLiang2020 I'm sorry to bother you, has this problem been resolved? I also encountered an error similar to yours. I need your help, thanks.
|
package com.genexus.specific.java;
import com.genexus.common.interfaces.IExtensionNativeFunctions;
import com.genexus.platform.INativeFunctions;
import com.genexus.util.IThreadLocal;
import com.genexus.util.IThreadLocalInitializer;
public class NativeFunctions implements IExtensionNativeFunctions {
@Override
public IThreadLocal newThreadLocal(IThreadLocalInitializer initializer) {
return new InheritableThreadLocalSun(initializer);
}
class InheritableThreadLocalSun extends ThreadLocal implements com.genexus.util.IThreadLocal
{
private IThreadLocalInitializer initializer;
public InheritableThreadLocalSun()
{
}
public InheritableThreadLocalSun(IThreadLocalInitializer initializer)
{
this.initializer = initializer;
}
// Lazily populates the thread-local value from the initializer on first access.
@SuppressWarnings("unchecked")
public Object get()
{
Object obj = super.get();
if(obj == null && initializer != null)
{
set(initializer.initialValue());
return super.get();
}
return obj;
}
}
@Override
public INativeFunctions getInstance() {
return com.genexus.platform.NativeFunctions.getInstance();
}
}
|
// Copyright (c) 2021 AccelByte Inc. All Rights Reserved.
// This is licensed software from AccelByte Inc, for limitations
// and restrictions contact your company contract manager.
// Code generated; DO NOT EDIT.
package public_player_binary_record
import (
"encoding/json"
"fmt"
"io"
"io/ioutil"
"strings"
"github.com/go-openapi/runtime"
"github.com/go-openapi/strfmt"
"github.com/AccelByte/accelbyte-go-sdk/cloudsave-sdk/pkg/cloudsaveclientmodels"
)
// GetPlayerPublicBinaryRecordsV1Reader is a Reader for the GetPlayerPublicBinaryRecordsV1 structure.
type GetPlayerPublicBinaryRecordsV1Reader struct {
formats strfmt.Registry
}
// ReadResponse reads a server response into the received o.
func (o *GetPlayerPublicBinaryRecordsV1Reader) ReadResponse(response runtime.ClientResponse, consumer runtime.Consumer) (interface{}, error) {
switch response.Code() {
case 200:
result := NewGetPlayerPublicBinaryRecordsV1OK()
if err := result.readResponse(response, consumer, o.formats); err != nil {
return nil, err
}
return result, nil
case 401:
result := NewGetPlayerPublicBinaryRecordsV1Unauthorized()
if err := result.readResponse(response, consumer, o.formats); err != nil {
return nil, err
}
return result, nil
case 404:
result := NewGetPlayerPublicBinaryRecordsV1NotFound()
if err := result.readResponse(response, consumer, o.formats); err != nil {
return nil, err
}
return result, nil
case 500:
result := NewGetPlayerPublicBinaryRecordsV1InternalServerError()
if err := result.readResponse(response, consumer, o.formats); err != nil {
return nil, err
}
return result, nil
default:
data, err := ioutil.ReadAll(response.Body())
if err != nil {
return nil, err
}
return nil, fmt.Errorf("Requested GET /cloudsave/v1/namespaces/{namespace}/users/{userId}/binaries/{key}/public returns an error %d: %s", response.Code(), string(data))
}
}
// NewGetPlayerPublicBinaryRecordsV1OK creates a GetPlayerPublicBinaryRecordsV1OK with default headers values
func NewGetPlayerPublicBinaryRecordsV1OK() *GetPlayerPublicBinaryRecordsV1OK {
return &GetPlayerPublicBinaryRecordsV1OK{}
}
/*GetPlayerPublicBinaryRecordsV1OK handles this case with default header values.
Record retrieved
*/
type GetPlayerPublicBinaryRecordsV1OK struct {
Payload *cloudsaveclientmodels.ModelsPlayerBinaryRecordResponse
}
func (o *GetPlayerPublicBinaryRecordsV1OK) Error() string {
return fmt.Sprintf("[GET /cloudsave/v1/namespaces/{namespace}/users/{userId}/binaries/{key}/public][%d] getPlayerPublicBinaryRecordsV1OK %+v", 200, o.ToJSONString())
}
func (o *GetPlayerPublicBinaryRecordsV1OK) ToJSONString() string {
if o.Payload == nil {
return "{}"
}
b, err := json.Marshal(o.Payload)
if err != nil {
fmt.Println(err)
return fmt.Sprintf("Failed to marshal the payload: %+v", o.Payload)
}
return fmt.Sprintf("%+v", string(b))
}
func (o *GetPlayerPublicBinaryRecordsV1OK) GetPayload() *cloudsaveclientmodels.ModelsPlayerBinaryRecordResponse {
return o.Payload
}
func (o *GetPlayerPublicBinaryRecordsV1OK) readResponse(response runtime.ClientResponse, consumer runtime.Consumer, formats strfmt.Registry) error {
// handle file responses
contentDisposition := response.GetHeader("Content-Disposition")
if strings.Contains(strings.ToLower(contentDisposition), "filename=") {
consumer = runtime.ByteStreamConsumer()
}
o.Payload = new(cloudsaveclientmodels.ModelsPlayerBinaryRecordResponse)
// response payload
if err := consumer.Consume(response.Body(), o.Payload); err != nil && err != io.EOF {
return err
}
return nil
}
// NewGetPlayerPublicBinaryRecordsV1Unauthorized creates a GetPlayerPublicBinaryRecordsV1Unauthorized with default headers values
func NewGetPlayerPublicBinaryRecordsV1Unauthorized() *GetPlayerPublicBinaryRecordsV1Unauthorized {
return &GetPlayerPublicBinaryRecordsV1Unauthorized{}
}
/*GetPlayerPublicBinaryRecordsV1Unauthorized handles this case with default header values.
Unauthorized
*/
type GetPlayerPublicBinaryRecordsV1Unauthorized struct {
Payload *cloudsaveclientmodels.ModelsResponseError
}
func (o *GetPlayerPublicBinaryRecordsV1Unauthorized) Error() string {
return fmt.Sprintf("[GET /cloudsave/v1/namespaces/{namespace}/users/{userId}/binaries/{key}/public][%d] getPlayerPublicBinaryRecordsV1Unauthorized %+v", 401, o.ToJSONString())
}
func (o *GetPlayerPublicBinaryRecordsV1Unauthorized) ToJSONString() string {
if o.Payload == nil {
return "{}"
}
b, err := json.Marshal(o.Payload)
if err != nil {
fmt.Println(err)
return fmt.Sprintf("Failed to marshal the payload: %+v", o.Payload)
}
return fmt.Sprintf("%+v", string(b))
}
func (o *GetPlayerPublicBinaryRecordsV1Unauthorized) GetPayload() *cloudsaveclientmodels.ModelsResponseError {
return o.Payload
}
func (o *GetPlayerPublicBinaryRecordsV1Unauthorized) readResponse(response runtime.ClientResponse, consumer runtime.Consumer, formats strfmt.Registry) error {
// handle file responses
contentDisposition := response.GetHeader("Content-Disposition")
if strings.Contains(strings.ToLower(contentDisposition), "filename=") {
consumer = runtime.ByteStreamConsumer()
}
o.Payload = new(cloudsaveclientmodels.ModelsResponseError)
// response payload
if err := consumer.Consume(response.Body(), o.Payload); err != nil && err != io.EOF {
return err
}
return nil
}
// NewGetPlayerPublicBinaryRecordsV1NotFound creates a GetPlayerPublicBinaryRecordsV1NotFound with default headers values
func NewGetPlayerPublicBinaryRecordsV1NotFound() *GetPlayerPublicBinaryRecordsV1NotFound {
return &GetPlayerPublicBinaryRecordsV1NotFound{}
}
/*GetPlayerPublicBinaryRecordsV1NotFound handles this case with default header values.
Not Found
*/
type GetPlayerPublicBinaryRecordsV1NotFound struct {
Payload *cloudsaveclientmodels.ModelsResponseError
}
func (o *GetPlayerPublicBinaryRecordsV1NotFound) Error() string {
return fmt.Sprintf("[GET /cloudsave/v1/namespaces/{namespace}/users/{userId}/binaries/{key}/public][%d] getPlayerPublicBinaryRecordsV1NotFound %+v", 404, o.ToJSONString())
}
func (o *GetPlayerPublicBinaryRecordsV1NotFound) ToJSONString() string {
if o.Payload == nil {
return "{}"
}
b, err := json.Marshal(o.Payload)
if err != nil {
fmt.Println(err)
return fmt.Sprintf("Failed to marshal the payload: %+v", o.Payload)
}
return fmt.Sprintf("%+v", string(b))
}
func (o *GetPlayerPublicBinaryRecordsV1NotFound) GetPayload() *cloudsaveclientmodels.ModelsResponseError {
return o.Payload
}
func (o *GetPlayerPublicBinaryRecordsV1NotFound) readResponse(response runtime.ClientResponse, consumer runtime.Consumer, formats strfmt.Registry) error {
// handle file responses
contentDisposition := response.GetHeader("Content-Disposition")
if strings.Contains(strings.ToLower(contentDisposition), "filename=") {
consumer = runtime.ByteStreamConsumer()
}
o.Payload = new(cloudsaveclientmodels.ModelsResponseError)
// response payload
if err := consumer.Consume(response.Body(), o.Payload); err != nil && err != io.EOF {
return err
}
return nil
}
// NewGetPlayerPublicBinaryRecordsV1InternalServerError creates a GetPlayerPublicBinaryRecordsV1InternalServerError with default headers values
func NewGetPlayerPublicBinaryRecordsV1InternalServerError() *GetPlayerPublicBinaryRecordsV1InternalServerError {
return &GetPlayerPublicBinaryRecordsV1InternalServerError{}
}
/*GetPlayerPublicBinaryRecordsV1InternalServerError handles this case with default header values.
Internal Server Error
*/
type GetPlayerPublicBinaryRecordsV1InternalServerError struct {
Payload *cloudsaveclientmodels.ModelsResponseError
}
func (o *GetPlayerPublicBinaryRecordsV1InternalServerError) Error() string {
return fmt.Sprintf("[GET /cloudsave/v1/namespaces/{namespace}/users/{userId}/binaries/{key}/public][%d] getPlayerPublicBinaryRecordsV1InternalServerError %+v", 500, o.ToJSONString())
}
func (o *GetPlayerPublicBinaryRecordsV1InternalServerError) ToJSONString() string {
if o.Payload == nil {
return "{}"
}
b, err := json.Marshal(o.Payload)
if err != nil {
fmt.Println(err)
return fmt.Sprintf("Failed to marshal the payload: %+v", o.Payload)
}
return fmt.Sprintf("%+v", string(b))
}
func (o *GetPlayerPublicBinaryRecordsV1InternalServerError) GetPayload() *cloudsaveclientmodels.ModelsResponseError {
return o.Payload
}
func (o *GetPlayerPublicBinaryRecordsV1InternalServerError) readResponse(response runtime.ClientResponse, consumer runtime.Consumer, formats strfmt.Registry) error {
// handle file responses
contentDisposition := response.GetHeader("Content-Disposition")
if strings.Contains(strings.ToLower(contentDisposition), "filename=") {
consumer = runtime.ByteStreamConsumer()
}
o.Payload = new(cloudsaveclientmodels.ModelsResponseError)
// response payload
if err := consumer.Consume(response.Body(), o.Payload); err != nil && err != io.EOF {
return err
}
return nil
}
|
package no.kartverket.glrenderer;
import java.util.ArrayList;
import no.kartverket.geometry.Line;
import no.kartverket.geometry.Pos;
/**
* Created by janvin on 11/05/17.
*/
public class GlLines {
ArrayList<GlLine> lines = new ArrayList<GlLine>();
public GlLines(){
}
/**
 * Replaces the current set of lines with the given geometry lines,
 * using the default scene line color and width.
 *
 * @param lines the lines to render
 */
public void setLines(Line[] lines){
    this.lines.clear();
    for(int i = 0; i < lines.length; i++){
        GlLine l = new GlLine();
        Line line = lines[i];
        l.setPositions(line.p1, line.p2);
        l.setColor(ArScene.LINE_COLOR);
        l.setWidth(ArScene.LINE_WIDTH);
        this.lines.add(l);
    }
}
/**
 * Replaces the current set of lines with the given colored lines,
 * using each line's own color and width.
 *
 * @param lines the colored lines to render
 */
public void setLines(ColorLine[] lines){
    this.lines.clear();
    for(int i = 0; i < lines.length; i++){
        GlLine l = new GlLine();
        ColorLine line = lines[i];
        l.setPositions(line.p1, line.p2);
        l.setColor(line.color);
        l.setWidth(line.width);
        this.lines.add(l);
    }
}
/**
 * Draws all lines with the given model-view-projection matrix.
 *
 * @param mvpMatrix the model-view-projection matrix to apply
 */
public void draw(float[] mvpMatrix) {
for (GlLine line: lines) {
line.draw(mvpMatrix);
}
}
}
|
// +build !windows
package minikube
import (
"bytes"
"fmt"
"os/exec"
"github.com/kyma-project/cli/internal/cli"
"github.com/kyma-project/cli/internal/root"
"github.com/kyma-project/cli/internal/step"
)
// addDevDomainsToEtcHostsOSSpecific removes any stale entries for the given domain
// from the hosts file and appends the new host alias, prompting the user when sudo is required.
func addDevDomainsToEtcHostsOSSpecific(domain string, s step.Step, hostAlias string) error {
notifyUserFunc := func(err error) {
if err != nil {
s.LogInfof("Error: %s", err.Error())
}
s.LogInfof("Execute the following command manually to add domain entries:\n###\n sudo sed -i.bak \"/"+domain+"/d\" "+hostsFile+" && echo '%s' | sudo tee -a /etc/hosts\r\n###\n", hostAlias)
}
s.LogInfo("Adding domain mappings to your 'hosts' file")
if root.IsWithSudo() {
s.LogInfo("You're running CLI with sudo. CLI has to add Kyma domain entries to your 'hosts'. Type 'y' to allow this action")
if !root.PromptUser() {
notifyUserFunc(nil)
return nil
}
}
_, err := cli.RunCmd("sudo",
"sed", "-i.bak",
fmt.Sprintf("/%s/d", domain),
hostsFile)
if err != nil {
notifyUserFunc(err)
return nil
}
cmd := exec.Command("sudo", "tee", "-a", hostsFile)
buf := &bytes.Buffer{}
_, err = fmt.Fprint(buf, hostAlias)
if err != nil {
notifyUserFunc(err)
return nil
}
cmd.Stdin = buf
err = cmd.Run()
if err != nil {
notifyUserFunc(err)
return nil
}
return nil
}
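The `sudo tee -a` trick above (writing to a root-owned file by piping the content through tee's stdin) is worth calling out on its own. Below is a minimal, self-contained sketch of the same pattern; the file path and hosts entry are illustrative values, not taken from the CLI.

package main

import (
	"bytes"
	"log"
	"os/exec"
)

// appendWithSudoTee appends entry to a root-owned file by piping it into
// `sudo tee -a` via stdin, mirroring the pattern used in the function above.
func appendWithSudoTee(path, entry string) error {
	cmd := exec.Command("sudo", "tee", "-a", path)
	cmd.Stdin = bytes.NewBufferString(entry + "\n")
	return cmd.Run()
}

func main() {
	// Hypothetical values; a real caller would pass the computed host alias.
	if err := appendWithSudoTee("/etc/hosts", "192.168.99.100 console.kyma.local"); err != nil {
		log.Fatalf("appending hosts entry failed: %v", err)
	}
}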
It starts from trouble by covering something that I think gets a little bit misunderstood. We covered it before. I want to cover it again. Why? Because again, the audio, the lighting, the video just wasn't, I feel like, up to snuff. So that being the case, let me move this. We have HBO Max. And so I was just looking at something. They gave us a new... they updated it, they didn't really update it. They just made it to where you get more commercials, which I can't, I can't stand anyway. So I'm bothered, I'm upset about that right now. But again, they did a good job, because what did I do? Upgraded it, to pay, what, $5 or $10 more a month, so that now I pay to not listen to ads? So they got me. It worked. So anyway, hope you all are doing well. I want to go ahead and jump into it. This particular part of Romans that Paul is speaking of tends to kind of get lumped in with something that those who are Calvinists believe relates to their point. Now, the point that they try to make it relate to is election, and I believe in election. And it does speak on some issues regarding election. But that's just not the point that Paul is making here. So because of that, I want to go ahead and cover what he's saying. And the reason why we're doing this is, this kind of piggybacks off of a little bit of what we were saying yesterday, what we've said in the past. And so it's appropriate to go ahead and deal with it now. And that is this issue, this distinction between the church and Israel. So that being the case... well, is, is Corey alive? Yes. I believe so. I believe, I believe he's alive. Both. I'm both. Oh, no. Yep. I'm alive and we're live. So let's go and jump into it. Anyway, let's go ahead and jump into nine. Now again, we talked about our hermeneutic before. We talked about how how we read the Bible matters. How we read it can determine where we take it. How we read it can determine how we understand it. And so because of that, it's very important that we try to be as consistent as possible. And remember, guys, and I think, for some people, there is a natural knee-jerk response to just disagree, to just disagree. Because, Corey, I know some of the things, some of the conclusions, that you've come to, and they're different than mine. And so the reason why I'm going to disagree is not because what you're saying might be right or might be wrong. That's not the issue. The issue is because I don't like the conclusion that you come to. I come to a different conclusion. And if I know that this sort of hermeneutic that you employ brings a person to this conclusion, well, then I reject it. Even if the hermeneutic is sound, even if the hermeneutic is proper, I'm just going to reject it. Wait a minute, I'm looking for something. And I almost knocked the screen over. Hold on. I always, I always do this, always. There it is. What I was looking for is actually in my pocket. I'm sorry. I won't tell you what I'm looking for. It's been a rough day. It's been a rough day. Taking care of my daughter; she was getting her ID and locked the keys in the car. It's one of those days. It's just one, and it's still not over with. Still not over with. I've got to leave out again. But hey, part of life, part of life. But anyway, so your hermeneutic matters. And again, I say this again, and I'll keep saying this. Look at the way that you read everything else. Look at the way that you read everything else.
The way that you read everything else should be the same way that you read the Bible. But wait a minute, Corey. The Bible is far more important, far superior, far more powerful than any other book. Correct. That's God's doing. The power behind what you're reading is God's doing; the reading that you're employing, that's not God, that's you. God didn't infuse your reading, your ability, with this knowing; he gives you the ability to understand and receive what it's saying. But how you read any other book... think about it, we can all read other books. We can read stories and come to the same conclusion. We know what happens in those stories. We read this one differently. Why? Because sometimes we have vested interests in what it says. And so we want to avoid that. And by the way, what you'll notice is, what I'm sharing, what we're covering, doesn't change an awful lot of what you believe. You may think it does. But when you step back and look at it, it didn't really change a whole lot. And what we're going to see is that there is a bit of a difference between Israel and the church. And as always, we have to deal with this: Calvinists aren't Christians. Neither are Arminians. Here's my question. Why aren't Calvinists Christians? Now, there's a lot of Calvinists in the church. And I'm not one, but I agree with a lot of Calvinism. Why aren't they? We can't... I believe, I'm going to take your word for it. Your name is Revivalist for Christ. So the question is, can you share with us why Calvinists, those that hold to Calvinism, aren't Christians? That'd be interesting. Because they worship John Calvin. Hey, by show of hands, who worships John Calvin? OK, can I talk to you for a second, brother? This is just me and you. This is just me and you. You're going to have to have more mature arguments than that. You really are. You really are. First of all, let's say if you had an actual point to make, you're not going to make it that way. You're not going to make it that way. What do Calvinists believe? Matter of fact, forget what Calvinists believe. Let's talk about what you believe. Can't even start the Bible study, because you threw a bomb, a grenade, in the middle of the floor, and so we're going to go ahead and jump on it so that no one else can get hurt. What does it take for a person to become a Christian? Revivalist for Christ? What does it take? What is required for a person to become a Christian? If a person has placed their faith in Christ; Christ, who is God, pays a debt that we couldn't pay, and they place their faith in that, in him. They profess that Jesus is Lord. Are they saved? Surely you'd say yes. If you say no, then it would be because you're not a Christian. So therefore, you would say yes. They are Christians. They are saved if they believe that. We don't know if they actually truly believe, but if they believe that, then they're saved, right? All right, cool, we're getting somewhere. See, what you're going to do is you're going to start going down a list. You know why you're going to start going down a list, Revivalist for Christ? Because I think now you realize, uh-oh, I stepped in it. I stepped in it, I threw a bomb out there, and I was just hoping no one would take it on, but we're going to take it on. We're going to take it on. So I just gave you what's required for salvation. Obviously they've realized that they need to be saved, that they're a sinner. Now, that being the case, if they place their faith in Christ, are they saved?
Yes, they are. If you say no, then you're not a Christian, because you don't understand what being a Christian is. But you would say yes. Now, a Calvinist tries to figure out how they became a Christian. So too does an Arminian. So too does a provisionist. So too does someone who's dispensational. So too does someone who doesn't know what they are, okay, and they just feel like they were elected. They can be saved. Please don't say anything like that again. I'm helping you out. I'm helping you out. I'm trying to help you out, okay? All right, so now, that being the case, let's go ahead and jump into... No, wait a second. Let me see what you just said. It's coming up slowly on my end. Calvinists say Jesus didn't die for everybody, but scripture says otherwise. Okay, fine, let's say they're wrong. And by the way, I agree with that. I agree with that. I think that they're wrong on that, but that doesn't make them not a Christian. So you're not saved by what you know or how much you know. You're saved by who you know. So what I just said, I want you to go back in the video, clip that part out, do a screen record, edit it, save it so you can hear what I just said, and save that. You are not saved by what you know or how much you know. You are saved by who you know. Do you know Christ? Jesus says to them, I never knew you, depart from me. Do you know him? So copy that, save that, remember that. That's gold, that's good stuff. That is good stuff. So remember what I just told you on that part, and then also remember this. I just had to go ahead and put the little intro in there again because I think it's awesome. All right, now let's go back to the channel. I mean, to the video, or to the scriptures; got us all out of whack. Anyway, let's start in Romans. Is this Romans nine? Romans nine, let's make sure. Yeah, Romans nine, there we go. Paul... now first of all, let's talk about what Paul has covered. Paul has talked about earlier really the difference in being a Jew or being a Gentile, and is there really any difference? Because he says that we all are under sin, we all sin, we all mess up, we all are just rotten people, we all do bad stuff. And there is no real difference. So he says, what's the benefit of even being a Jew? He says, well, in this way: one specifically, that they have been given the oracles of God. We're gonna deal with that in a little bit. They have been given the oracles of God. Why? Does anybody know why it's important that he makes that distinction, why he brings that out, that they have been given, the Jews have been given, the oracles of God? Does anybody know? Because it's important to know that. Now he goes on also to talk about how we are now in right standing, and we are made to be in right standing because of our faith. The first person to show us this example in trusting God was Abraham, right? Now he wasn't the first one specifically to believe, because obviously Abel did; Hebrews says so. Obviously Noah did; Hebrews says so. And so we've got other folks, but we are out of this lineage coming through Abraham. And there's a reason why we're coming through Abraham. And so we have this faith that saves us. And so now we have been made right. And so we don't have to live a life of sin. And he says, should we keep on sinning, that this grace of God that has been given us may abound? Well, obviously not. God forbid that we should do that. But we still do struggle with sin though. That's what Paul talks about.
The good that I wanna do, I don't do; the bad I wanna stop doing, I continue to do. But then he says, but just to be clear though, all of us that are in Christ Jesus, there is nothing that will separate us. Nothing. And so now we make our way to chapter nine. But before we do, does anybody know why it's important to bring out the fact that these oracles of God were given to the Jews? Does anybody... the reason is because God has given them, initially, initially, initially, and we're gonna see why that's important, initially, the responsibility to spread the gospel. You go and bring it. Initially to spread the gospel, to preach Christ, to give the commands of the Lord. Initially they do, but then something happens. Something happens, and it doesn't really go right. So let's go ahead and jump to chapter nine and we'll cover this. By the way, we also are going to utilize the timelines as well. Let me go ahead and put them on the screen, because it's gonna be very important that we see these. We're gonna utilize these to some degree, because I want to start using this particular tool, the New Testament timeline that I gave you. And if you look at it, where is Romans at? Well, one, in terms of the epistles, it was written 57 AD. And even more than that, if you look at the type of doctrinal points that it primarily brings out, it brings out a soteriological emphasis. Well, it's gonna talk about... since he is an apostle to the Gentiles, he speaks about and is speaking to the Jews, but he also brings in Gentiles. I mean, I'm sorry, I had it backwards. He's an apostle to the Gentiles, but he's gonna bring up an issue regarding the Jews. And so that brings us to Romans nine. So Romans nine, he says, I'm telling you the truth in Christ. Now this is after he's stated that nothing can separate us. This is after he's talked about our salvation being secure. He says, I'm not lying, my conscience testifies with me in the Holy Spirit, that I have great sorrow and unceasing grief in my heart. For I could wish that I myself were accursed, separated from Christ, for the sake of my brethren, my kinsmen according to the flesh, who are Israelites. So who is he having this sorrow for? That part shouldn't be too difficult to understand. He's got this burden, this sorrow, this unease about who? About Israel, about Jews. Now some are gonna wanna say that this is some sort of spiritual Israel. This is not spiritual Israel. Spiritual Israel... first of all, news flash guys, news flash. There's no such thing as spiritual Israel. There's no such thing. There's no scripture that says that, okay? There's no scripture that says so. But if you think that he's speaking about Jews in the spirit, then what you're doing now is what we call allegorizing or spiritualizing the text. You're looking for a deeper meaning when, first of all, in this case, the plain reading is clear. He says that I wish that I myself could be accursed for who? For my brethren. Well, could it be his Gentile brethren? It could, until he says this: my kinsmen according to the flesh, okay? My kinsmen, kata sarka, which is according to the flesh, who are Israelites. So we know he's speaking of actual Jews, ethnic Jews. To whom belongs the adoption as sons, and the glory and the covenants and the giving of the law and the temple service and the promises, whose are the fathers, and from whom is Christ according to the flesh. So he's making it clear he's speaking about Israel. But now notice what he just said.
Notice what he puts in the camp of the Jews. He says, to whom, which refers to these Jews, Israelites, belong the adoption as sons and the glory and the covenants. So are you trying to say, Paul, that these covenants belong to the Jews? Well, yeah, that's what he's saying. That's what he's saying. Now, we covered this before, and I don't wanna lose anyone here, but we talked about it before, that when we looked at all of the covenants, be it the Abrahamic covenant, I mean, the Noahic covenant... even if you feel like there is a such thing, even if you feel like there's a such thing (the cat is in here), if you feel like there's a such thing as an Adamic covenant, that's fine. It's not a named covenant; that's fine. But then the Noahic covenant, the Abrahamic covenant, the old or Mosaic covenant, the Davidic covenant, the new covenant: a covenant might not specifically be for this group over here or that group. It may be specifically surrounding or about Israel, but that doesn't mean that someone else doesn't also reap a benefit, that someone else doesn't get any blessing from that. Remember, specifically the Abrahamic covenant: he makes this covenant with Abraham, and he says, in you all the nations of the earth shall be blessed. So, yeah, the covenant is with Abraham and what he's gonna do with his people, but in that covenant also is a nice little nugget, as a matter of fact, that states that through this, all of the other nations and the people of the world shall be blessed. Not just Jews, but in this case, Gentiles, okay? So, the covenants that were given to them: just because they were given to Israel does not mean that other nations, or other people who are not Israel, won't get blessed by it. Are you with me? Very, very, very important, okay? So, let's go back to it. The covenants, the giving of the law, clearly that's for the Jews, and the temple service, the promises, whose are the fathers, that being these Jewish men, and from whom is Christ according to the flesh, who is over all, God blessed forever. Amen. But it is not as though the word of God has failed. Why is he bringing this up? It is not as though the word of God has failed. Why? I can't see him, but I can hear him. But the Jews have been given the oracles, and so they preached the gospel to who? As Jesus told them to do: in Judea, in Samaria, and now to the rest of the world. But now they're not receiving it. Why? Well, before we continue, before we continue, let's go back to where God brings this up. God says in Deuteronomy chapter 32 verse 21, because Israel, even as early as Deuteronomy, has started to stray, and they're going after other gods, they're just disobedient. And so look what he says. He says, they have made me, this is God speaking, made me jealous with what is not God. These false gods, these idols, things they shouldn't go after. They have provoked me to anger with their idols. So I will make them jealous with those who are not a people. Now, if we go to verse 16, look what they did to him. He says, they made him, God, the Jews made God, jealous with strange gods. So they make God jealous. So what does God say he's gonna do? I'm gonna make them jealous. How so? Well, these very same people that you began preaching to... you guys are gonna start waning, and they will be the dominant group, the influx of people coming in. They will be the believers. And eventually they will be the ones who will preach to you. Now, the problem is, that's not gonna sit well with Jews. Proud as they can be. That's not going to sit well with the Jews.
But Paul makes a statement, and a lot of folks don't catch the meaning of it, when Paul says that we are ministers of the new covenant. Whether it's to be taken as all of us... now, some take it as just Paul and the apostles, or certain apostles, or a lot take it that this statement, ministers of the new covenant, is referring to Gentiles, who will then serve as ministers of the covenant to the Jews. We'll cover this issue of the new covenant in just a little bit, but let's go back to Romans nine. He says, but it is not as though the word of God has failed. So in other words, the word of God towards Israel has not failed. We'll get to that in a little bit. For they are not all Israel who are descended from Israel. Now, by the way, he brings this up. He's gonna bring to mind something that we read about and we forget about as we interpret this, but we shouldn't forget about what he's bringing up when we interpret it. Here's what I mean. Nor are they all children because they are Abraham's descendants, but through Isaac your descendants will be named. That is, it is not the children of the flesh who are children, but the children of the promise are regarded as descendants. So his point is, was Isaac Abraham's only actual child? No, but in terms of how God sees it, he's not of Abraham. He's not of Israel. Even though, hey, listen, if we went and took a DNA test, it would come back with Abraham that Abraham, you are the father. This is your child. You are the Papi. But this is not what God's talking about. So therefore, therefore, he's speaking of, in this case, a physical lineage, guys. Don't miss this point. Verse eight: that is, it is not the children of the flesh who are children of God, but the children of the promise who are regarded as descendants. Now, the tendency would be for us to jump in and to say, yeah, he's talking about us, because we are spiritual descendants. This is not what he's talking about. He's clearly, he's still talking about his kinsmen of the flesh who are Israel. Verse nine: for this is the word of promise, at this time I will come, and Sarah shall have a son. And not only this, but there was Rebecca also, when she had conceived twins by... you're not gonna do it. He's not gonna do it this time. The cat is not gonna do it this time. He came to sniff my water the last time; he came licking on my water. It ain't gonna happen this time. I'm on to you. Can you please leave? Can you? Sorry. I didn't mean to push him that hard. Sorry, but sorry, cat. Sorry about that. Anyway, he shouldn't be in the way. Anyway, throwing me off. Anyway, so his focus is physical Israel. He says, for though the twins were not yet born and had not done anything... now he's gonna get into a point that some people want to make this all about. And it's not about that, although it brings up elements; it brings up the first thing that we ought to remember about God. Above all else, God is sovereign. That part we don't get around. Above all else, God is sovereign. So he says, for though the twins were not yet born and had not done anything good or bad, so that God's purpose according to his choice, and this is where eklogē comes in, which is election (that's why some of your versions might say according to God's purpose, according to his election, same thing), would stand, not because of works, but because of him who calls, it was said to her, the older will serve the younger. Just as it is written, Jacob I have loved, but Esau I have hated. So now, he's going to use this to speak about Israel.
Does that mean, Corey, that we should say that God does not elect? No, we shouldn't say that; God does elect. God does choose. But the point, the purpose, of what he has to say here is not really so much about election; although he's bringing up election, he's doing it in regard to Israel. Because what was his question? What was the statement that he had just raised in verse six? He says, but it is not as though the word of God has failed. So what God has determined to do is going to happen. It's going to happen. God's word does not fail. And if he has a plan for Israel, that plan will come to fruition. We cannot say that Israel is an afterthought, or Israel had their chance, or any of those things. It could not be that, you know what, Israel had all these years and they just sinned, disobeyed God. Well, we do too, because as long as Israel has been disobedient to God, the Gentiles have been too. It's not like the Gentiles could come back and say, yeah, you know, we've been devoted to God. No, no. And so just as God chose with Gentiles, we're going to find out that he's going to do the exact same thing with Jews. And so this nation will not be lost. I mean, will every Jew be saved? No, every Jew will not be saved. So let's keep going. What shall we say then? There is no injustice with God, is there? May it never be. For he says to Moses, I will have mercy on whom I will have mercy, and I will have compassion on whom I will have compassion. So then it does not depend on the man who wills or the man who runs, but on God who has mercy. Now, is that a truth, a universal truth, that we can apply not just to Israel, but also to Gentiles? Absolutely, absolutely. This is a universal point. And so he says it's not dependent on us and our will, but on God and his will. He'll have mercy on whomever he wants to have mercy on. Whomever he doesn't want to have mercy on, he will not. For the scripture says to Pharaoh, for this very purpose I raised you up, to demonstrate my power in you, and that my name might be proclaimed throughout the whole earth. So then he has mercy on whom he desires, and he hardens whom he desires. There's a reason why Paul brings this up. Matter of fact, let me ask you guys a question. Paul's bringing up this issue of having mercy on whomever he wants to and hardening whomever he wants to, okay? But what does that have to do with Israel, Corey? What does that have to do with Israel? Well, we're gonna see that he's gonna have mercy on Israel, but he's also gonna harden Israel, while at the same time having mercy on people who he didn't need to have mercy on, which are the Gentiles. And so if he has mercy on them, he's gonna have mercy on Israel at some point in time. But this hardening part... there's also going to be a hardening that Paul is going to bring up in just a little bit. So there's a reason for that, and we'll see if that's the case in just a little bit. For who... I'm sorry, verse 19. You will say to me then, why does he still find fault? For who resists his will? On the contrary, who are you, O man, who answers back to God? Who are you? Just like Job, to come to God and ask, why would you do such a thing, God? Why do you do it this way? How can I... I can't resist your will. But who are you to answer back to God? The thing molded will not say to the molder, why did you make me like this? Or does the potter have a right over the clay, to make from the same lump one vessel for honorable use and another for common use?
What if God, although willing to demonstrate his wrath and to make his power known, endured with much patience vessels of wrath prepared for destruction? What does that mean? What is he getting at? You're saying, what if God, who wants to demonstrate his wrath and make his power known... what if that same God endured with much patience those very same vessels prepared for destruction? But explain that, Paul; I'm not understanding that. Paul will get there. And he did so to make known the riches of his glory upon vessels of mercy, which he prepared beforehand for glory. And let me just say this at this time. No, that's not how that works. But what God is going to do is, you're going to see him show his mercy on those folks... actually, let me put it this way, how do I put this in a way to kind of keep in touch with this? The people who he's going to have mercy on, they also can't say that they have a right to boast about it. We know that. But what God is going to do is two things. He's going to show his wrath, and he's going to show his love, his mercy. We've talked about it before, that these two things, him being a just God and him being a loving God, show up in the same way; excuse me, if justice is going to be shown and carried out, it's going to be carried out on all of us, and it should be. But he also wants to show mercy and love on us, and he can do so. So he can in turn also take and prepare vessels to be glorified with him. He can take and prepare vessels, as he says, that were prepared beforehand for glory, even us, whom he also called, not among Jews only, but also from among Gentiles. So he's going to call not just Jews, but also Gentiles. Are you with me? So there will be people who have been chosen in every different ethnic category, be they Jew or Gentiles, all the other nations. And so all the people will be able to say, amen, praise God; but nobody who is not in heaven, no one in hell, will be able to say, you made me this way. And so we see how God is going to pass by you, and now it's God's turn to show his glory. It's God's turn to show who he is. It's God's turn to have mercy on whomever he wills. That was his point. God gets to have mercy on whomever he wants to have mercy on, and that's not up to you. If God doesn't want you, that's not on him. So if God does want you, that's still not on you. So now let's continue. He says, even us whom he called, not among Jews only, but also from among the Gentiles. As he says in Hosea... this is it, listen guys. As he says in Hosea, I will call those who were not my people, my people, and her who was not beloved, beloved. Why are you bringing this up, Paul? I thought we were talking about you having this issue with Israel, and about how you wish that you could be accursed for their sake. So why are you bringing this up? It's like Paul just starts throwing some stuff in there. Well, the stuff that he's throwing in there is not just because he wants to throw it in. He's not just saying, you know what, let me just pick this passage and that passage, let me confuse them. No. Remember what we were reading, Deuteronomy 32. In Deuteronomy 32, he says that I will make you Jews jealous. And how am I gonna do so? Well, look what he's saying right here. I'm gonna call a people who are not my people to be my people, and those who are not beloved, I'm gonna call them beloved. Who is he speaking of? Gentiles. So what is he doing?
So think about this for a second, guys. I want you all to picture this. I've got these Jews over here who I've done all these wonderful things for. I've shown up and shown out in their view with all these mighty wonders, and what do they do? Turn their back on me. So what does he say? You have made me jealous because you went after these other gods. So what will I do? Now keep in mind though, guys, when he said this he still had his intentions on bringing Israel back. He still has his intention on bringing Israel back. How was he gonna bring Israel back? Us. What is it? What is gonna be the tool that he uses to bring Israel back, or to make them jealous? Us. That's why he says, I will call those people who are not my people, my people, and those who are not beloved, beloved. And that will do what? Cause jealousy in the ranks of Israel. Now he didn't say how long it's gonna take, but it's a process. It's a process. But look what he says. Let's continue. And it shall be that in the place where it was said to them, you are not my people, there they shall be called sons of the living God. Speaking about us. Isaiah cries out concerning Israel: though the number of the sons of Israel be like the sand of the sea, it is the remnant that will be saved. Stop for a second, hold the presses, turn all the machines off. Look what he just said. He said it is the remnant that will be saved. It is a remnant, a portion, that will be saved. That will be saved. Question: a remnant of who? A remnant of Jews, a remnant of Israel. Are you with me? Spider Monkey, no, it's not the 144,000; it's gonna be more than that. There are gonna be 144,000 during the tribulation that we're gonna see of these Jews, but it's gonna be more than that. But give me a second and I'll try to flesh this out. So he says that there will be a remnant that will be saved. How? That's gonna be interesting. For the Lord will execute his word on the earth thoroughly and quickly. I like that: he'll execute his word on the earth thoroughly and quickly. And just as Isaiah foretold, unless the Lord of Sabaoth had left to us a posterity, we would have become like Sodom and Gomorrah... I mean, and resembled Gomorrah. What does he mean? Well, there's nothing or nobody left of Sodom and Gomorrah, right? But he is going to leave a posterity, in other words, descendants. So he's saying that we're not gonna be like them. There will be a remnant, some descendants, of this group of Israel with God forever. And so that's his point. Again, he says, is it as though God's word... go back to verse... oh yeah, we're in verse 29. Let me make sure I stay in verse 29. Let's go back to verse six. How did he broach this? He says, but it is not as though the word of God has failed. So remember what he's saying? It's not as though the word of God has failed. It has not failed. Uh-oh. What is it? Oh, I'm sorry. 9:29. I'm sorry. I pushed the wrong thing. So just as Isaiah foretold, unless the Lord of Sabaoth had left us a posterity, that is, descendants, we would have become like Sodom and Gomorrah and resembled Gomorrah. What then, or what shall we say then? Look what he says. Shall we say that the Gentiles, who didn't even pursue righteousness, attained it? I mean, we were pursuing it. Us Jews (I'm now playing the part of the Jews), us Jews pursued righteousness. The Gentiles didn't. Yeah, I know we kept messing up, but at least we messed up while we were pursuing. They weren't even pursuing. So are you telling... do you mean to tell me, God...
God, that us Jews who pursued righteousness, we won't get it, but the Gentiles who didn't, they will? Look what he says: even the righteousness which is by faith. But Israel, pursuing a law of righteousness, did not arrive at that law. So the issue was, Israel was trying to gain it by the law, by doing stuff. The Gentiles, once they were told how to do this, got it by faith. Now, the Gentiles aren't the only ones that arrived at it by faith. Obviously the Jews... there are many Jews who beat us to the punch, who were saved by faith before the Gentiles. But by and large, Jews want to do things in a religious fashion. They want to do stuff. Verse 32: why? Because they did not pursue it by faith, but as though it were by works. They stumbled over the stumbling stone. Who's the stumbling stone? That's Jesus. So the Jews stumbled over the very thing that was going to save them. Just as it is written, behold, I lay in Zion a stone of stumbling, a rock of offense. By the way, where is this written at? This is written in Isaiah, Isaiah 28. Isaiah is given this prophecy about this savior that's going to come, this Messiah that's going to come; they won't get it. Remember that, guys, they won't get it. When will the Jews, by and large, understand and receive Isaiah's prophecy correctly? They will in the future. So look what he says. And he who believes in him will not be disappointed. So look what he says in verse one of chapter 10. He says, brethren, my heart's desire and my prayer to God for them is for their salvation. For who? For Israel. His desire and his prayer for them, the them is Israel, is for their salvation. He could not be speaking of Gentiles, because Gentiles are gaining salvation. It's the Jews that are now turning their back. Isn't that funny? In just less than 20... 20, 30 years after Christ's death, and the Jews brought the gospel; remember on the day of Pentecost, that was Jewish. The 12 apostles, Jewish. The people that came, the multitude, Jewish. And then just a few years after that, they began turning away, they began turning away. All day long I have stretched out my hands to a disobedient nation. That's what God has said. So, for I testify about them that they have a zeal for God, but not in accordance with knowledge. For not knowing about God's righteousness and seeking to establish their own, they decided to establish their own righteousness. They did not subject themselves to the righteousness of God. For Christ is the end of the law for righteousness to everyone who believes. So, by the way, the end of the law is Christ. When Jesus said, I didn't come to destroy the law, but to fulfill it, there it is. This word, to end it, the Greek word telos, which is to complete, to finish it, to end it; that's what Christ did. Christ is the end, the completion, of the law. How is that, Corey? Make this part plain. Well, we've covered this before. The end of the law: the law requires that there be atonement for your sin. That's what puts you in right standing. The only way to put you in right standing is that there be propitiation made, and then you place your faith in the propitiation. The propitiation is made for all, I believe, but all won't put their faith in it. The propitiation was made for Israel; Israel just did not put their faith in it. And so Christ, who is that propitiation, he was also the scapegoat. He's also the high priest that mediates. They just didn't put their faith in it. Some did, many did not. And so, he's the end of the law.
And then he says, verse five, for Moses writes that the man who practices the righteousness which is based on the law shall live by that righteousness. In other words, if you want to be saved, you want to be right through the law, fine, fine. If you want to be right through the law, I mean with the law, fine, stay that way. Stay that way. But guess what? You're gonna have a hard time keeping it. You're gonna have a hard time keeping it, which you can't. So, they want to live by that. But the righteousness based on faith speaks as follows: do not say in your heart, who will ascend into heaven (that is, to bring Christ down), or who will descend into the abyss (that is, to bring Christ up from the dead)? But what does it say? Confess with your mouth Jesus as Lord, and believe in your heart that God raised him from the dead, and you will be saved. Why is that important for the Jews? Because the Jews know something. They know that there is but one Lord. And to confess that Jesus is that Lord, that's a big step for them. That's a major statement for them. For with the heart one believes, resulting in righteousness, and with the mouth he confesses, resulting in salvation. Look what he says: for whoever believes in him will not be disappointed. For there is no distinction between Jew and Greek, for the same Lord is Lord of all, abounding in riches for all who call on him. For whoever calls on the name of the Lord will be saved. Be that Jew or Gentile. The problem is, the Jews aren't doing it. Look what he says though. Look what he says though. Now this passage... and it's right to use this passage, it is right to use this passage, as kind of a rallying call for evangelism. It really is. But I want you all to see who this passage is intended for. Paul is talking about Gentiles evangelizing the Jews. Paul is talking about getting the Jews, who originally had this, getting them to hear it. Look what he says. How then will they call on him whom they have not believed? This is speaking about Israel, guys, the Jews. How will they believe in him whom they have not heard? And how will they hear without a preacher? How will they preach unless they are sent? Just as it is written, how beautiful are the feet of those who bring good news of good tidings. However, they did not all heed the good news. For Isaiah says, Lord, who has believed our report? This word report is the word akoē, which is report, okay? Remember this word right here that's highlighted on the right? It's the Greek word akoē, which is a report, a hearing, okay? So faith comes from akoē, the report; not from hearing it over and over, that's not it. Faith doesn't come from that. You hear it over and over again? If that were the case, we'd just drill it over and over. No, that's not where faith comes from. Faith comes from the report, understanding what happened. Now, how many times does that take? Depends on the person. But ultimately it's gonna be, as Paul said earlier, the will of God. He's the one that's gonna do the work in the heart. Not 'cause you keep hearing it and, you know what, it makes sense after a while. It makes sense. How do we know this is not the verb hearing? This is the noun. Because again, let's just go and put it on the screen. Faith comes... now, the faith out of, or comes from, akoē. That is a noun. Matter of fact, if you don't believe me, look underneath my face, my picture. It says that this is a noun, feminine, singular. Oh, okay. I'm sorry. I thought I saw the cat. He's over there asleep that quick. But it's a noun. So faith comes from the noun hearing, not the verb hearing.
So from this hearing... whose report shall we believe? That's the word that's quoted here from Isaiah. And so it says faith comes from the report, and the report by the word of Christ. That's what this is saying. Now, again, can you apply this to our evangelistic efforts here? Sure. The same still holds true. But look at the topic that Paul is covering. He's talking about these Jews hearing this report. Okay. And again, again, if we go and look, it says, whose report... where does this come from? This comes from Isaiah again. Romans, the book of Romans, and the book of Isaiah are tied together. So let me put this on the screen. If we look on here, we look on the screen: Isaiah is prophesying during the Southern Kingdom. They are getting ready, guys, they're on their way to be put out of the land. Captivity is taking place. You're gonna see... now you got this splinter kingdom, the Northern Kingdom and the Southern Kingdom. The Northern Kingdom we call Israel. The Southern Kingdom is Judah. They're both the same in terms of their nationality. Okay. By the way, this is my whole point that I was trying to make with mankind leading a government: two generations, or two kings, after God has chosen David, then here comes Solomon, and then everything falls apart. Same thing would happen if Christians were leading the world. Same thing would happen. I wouldn't care if we had a Christian president, if England had a Christian prime minister, if Trudeau in Canada was replaced by a Christian, if the Ayatollah in Iran all of a sudden became a Christian. Same thing. Gonna fall apart. You know why? Cause it's a man up there. That's why. And just so you ladies don't get out of pocket: if it was a Christian lady leading, same thing. Probably sooner. I'm kidding. Anyway, let's get back on course. But Isaiah and Romans are tied together. I said before that there are certain books that you need to know as you're reading other books. For example, if you're going to read the book of Hebrews, you better be familiar with the book of Leviticus. Deuteronomy as well. Because some of the same language... and really the whole point, the audience in Hebrews are Jews. Jews who have placed their faith in Christ as Messiah, and they are falling back on what they know of the law, which is why you have to understand Leviticus, 'cause the book of Hebrews brings this up. Same thing here. Paul keeps bringing up a lot of quotes, I mean, a lot of quotes, from Isaiah. It's almost like Paul is, I don't know, the second coming of Isaiah. He just keeps bringing up Isaiah, for good reason, for very good reason. Remember what Isaiah brings up? Isaiah brings up a lot of prophecy about Jesus. A lot of prophecy about Jesus's crucifixion, his death, all of that stuff. Okay. He brings up all of that. And so for that reason, it is important to understand Isaiah. The Jews will not accept it, which is why they're going to look back in time, and they'll say he was bruised for our iniquity. They'll use the past tense. Are you with me? So let's go back into it again. Verse 18: but I say, surely they have never heard, have they? Surely they have never heard. Have they ever heard? Have the Jews ever heard? Yes, they have. Look what he says. Indeed they have. Indeed who has? He's not speaking about Gentiles. Heard of what? This report that he just brings up. Surely these Jews have heard the report.
So he says, their voice has gone out into all the earth, and their words to the ends of the world. But I say, surely Israel did not know, did they? First Moses says, I will make you jealous by that which is not a nation. So do y'all see how what Paul is bringing up in Romans 10 is about Israel hearing the gospel? Why does he need Israel to hear the gospel? Because they're not getting saved, which is what he brings up in Romans nine. Are you guys with me? And here it is again. And Isaiah is very bold and says, I was found by those who did not seek me. I was found by those who did not seek me. I became manifest to those who did not ask for me. Are you with me, guys? You follow me? But as for Israel, he says, all day long I have stretched out my hands to a disobedient and obstinate people. I've been called disobedient. I've never been called obstinate, because I wouldn't have understood what obstinate meant. What kind of big words are you using, teacher? But he says it to a disobedient and obstinate people: Israel. Now look what he says though. I say then, God has not rejected his people, has he? What was the answer, guys? Put the question in reverse order. Has God rejected his people? The answer is no. Paul said initially, back in chapter nine, man, I wish I could be accursed for my countrymen Israel. They're not coming to faith. But then in 10 he says, they need to hear the word. They just need to hear the report. Now, who has believed our report, that Isaiah initially brings up? He brings it up here. So we need to get the word to them. We need to get the report to them. And eventually it's gonna happen. Now how is it gonna happen? Remember, God is going to have mercy on whom he wants to have mercy on. So he has not forgotten about his people. That's how he starts off chapter 11. Has God forgotten about his people Israel? No, he has not. May it never be, Paul says. For I too am an Israelite, a descendant of Abraham, of the tribe of Benjamin. Now that is to let us know that even though Israel by and large is not accepting her Messiah, her savior, there still are Jews who are doing so. Paul principally; he's one of them. He's like, listen guys, some of us Jews are getting it. Not a lot of us, that's the problem. But some of us Jews... even today, we have what we call Messianic Jews, those who have placed their faith in Christ. And so there's gonna be kind of a trickling, a drip, a drip of Jews (say that fast), a drip of Jews, each year, all the time. There's always gonna be some Jews who are placing their faith in Christ. Well, you know what? This makes sense. And usually, if you find someone who is open to listening, a Jew, and someone can explain to them how all we're talking about, guys, is the fulfillment of these Jewish prophecies of the Jewish scriptures... that's all we're saying. It's a fulfillment. And they begin to see the Jewishness of Christ and what has happened, and how the atonement that they know of and are familiar with was brought out and made better by Christ as the High Priest, as the sacrificial offering, and the scapegoat. And they see all this Jewish imagery in Christ, and what do they do? They place their faith in him. And that's gonna happen even more so later on. So let's continue. Verse two: God has not rejected his people whom he foreknew. Or do you not know what the scripture says in this passage about Elijah, how he pleads with God against Israel? Lord... look what he says. Remember when Elijah was upset and bothered and kind of just frustrated?
Yes, Alf, I'm gonna take some questions in just a little bit. I'm gonna take some questions in just a little bit. Lord, they have killed your prophets, they have torn down your altars, and I alone am left. That's what we do, right? Don't we do that? Ain't nobody saved like me, Lord. I'm the only one going to church. Well, I'm not the only one going to church, but all these other folks in church, they ain't serious. Don't nobody love me like I love you, Lord. It's just me. And that's how we do sometimes. So that's what Elijah was doing. But what is the divine response to him? I have kept for myself 7,000 men who have not bowed the knee to Baal. In other words, shut up, Elijah. You ain't the only one out here. Get up before I do something to you. I have kept for myself 7,000 men who have not bowed the knee to Baal. In the same way then, look what he says, in the same way then, there has also come to be at the present time a remnant according to God's gracious choice. There's that word, there's that word, that word election, eklogē. That's that word election: by God's gracious election, or choice. But if it is by grace, it is no longer on the basis of works; otherwise, grace is no longer grace. Now, it means something after you get saved; but doing stuff to get saved and to show how great you are, to puff your chest out? No, no, no, no. That righteousness takes you nowhere. That's why Jesus says, unless your righteousness surpasses that of the scribes and Pharisees, you will in no way see the kingdom of heaven. You will in no way be saved if your righteousness is like theirs. Theirs was based off of their ethnicity and what they do, how they look. Verse seven: what then? What Israel is seeking, it has not obtained. Well, we get that part. But those who were chosen obtained it, and the rest were hardened. Wait a second, wait a... hold up. So some folks chose... I mean, some folks obtained it, but now these were the ones that were chosen. Notice he says, but those who were chosen, they obtained it. So the ones who obtained it, they were chosen, and the rest were hardened. Now, this hardened is aorist, past tense, and it's passive, which means they did not do it to themselves. That is... there's no scripture for that. There's no scripture for that, but they were hardened. Who was hardened? Israel. Some were chosen, but the rest, and might I add, the overwhelming majority, were hardened. God gave them... look what he says, verse eight: God gave them a spirit of stupor, eyes not to see and ears not to hear, down to this very day. Let their eyes be darkened to see not, and bend their back forever. No, he's not. I say then, they did not stumble so as to fall. In other words, their stumble wasn't made complete. Their stumbling wasn't just... that's the end of Israel. No. He says, I say, they did not stumble so as to fall, did they? May it never be. But by their... look what it says, guys. This is where you... all right, guys, let me drink my coffee. This is where you begin to get happy. This is where you start wiggling your toes. Are you wiggling them now? Wiggle your little toes and smile. This should make you do it even more so, because this is how we got blessed. Look what he says: may it never be. But by their transgression, salvation has come to the us-es, us Gentiles. Salvation has come to the Gentiles, to make them jealous. Who's the them? Make Israel jealous. So salvation has come to us to make them jealous. That's just what he said in Deuteronomy 32, starting in 16 to 21.
Salvation's come to us to make them jealous. This is how we become ministers of the new covenant, keeping in mind this, and then doing what he said in chapter 10: to give them the gospel. Now, he says, now if their transgression is riches for the world... if them falling is a good thing for us, and their failure is riches for the world, how much more will their fulfillment be? Even more so, huh? But I am speaking to you who are Gentiles. Talking to you Gentiles. Talking to you Gentile sheepses. I'm speaking to you Gentiles. Inasmuch then as I am an apostle of the Gentiles, I magnify my ministry, if somehow I might move to jealousy my fellow countrymen and save some of them. So I wanna... listen, what I'm doing with you guys is gonna help move my fellow countrymen to salvation. For if their rejection is the reconciliation of the world, what will their acceptance be but life from the dead? If the first piece of the dough is holy, the lump is also; and if the root is holy, the branches are too. Now this is where people start missing it. This is where folks start just really missing it. We're gonna talk about these branches. But if some of the branches were broken off, and you, Gentiles, being a wild olive, were grafted in among them, and became partakers with them of the rich root of the olive tree, do not be arrogant towards the branches. By the way, this is not the new covenant. Stop it, stop it, stop it, stop it, stop it. This is not being grafted into the new covenant. You know how I know? Cause he doesn't bring up the new covenant. When he means to bring up the new covenant, he'll bring up the new covenant. You were just grafted into salvation. You were chosen for this, okay? How you got in wasn't on you, and it wasn't about a covenant that God made with you. It's the fact that he decided to bring you in. He chose you, okay? Now, he says, so now do not be arrogant toward the branches. Remember what Jesus said... I brought this up earlier: we have replaced Israel, the church is the new Israel. You might be guilty of this. No, you're not. And I'm gonna show you that also, how we know that the church is not the new Israel. God still has a plan for Israel. God can have a plan for DW and still have a plan for Joan. It doesn't have to be the same plan. Now, the culmination of all of your salvation is gonna be still faith in Christ. But let me ask you guys a question. Carol is 37 years old. In her 37-year life, she's gone different directions that brought her to where she is now. Okay? No, Carol, no, Carol. I'm trying to help you out, Carol. Carol is 37 years old. Cameron is 64. Cameron is 64. In his old age, he came to Christ. Who knows? Maybe the hard way. Joan may have... she was brought up in church. C.G. may have been a Muslim before. Jeff, Jeff was an arms dealer. He got shot in the throat, and so now he's a Christian. My point is, we all end up at the doorstep of Christ and place our faith in him. The collection of Jews, how they got there, is different than how Gentiles got there. They have a history. Matter of fact, they have a vivid history, a written history, of all their failures. But the whole thing is still them placing their faith in Christ. They must still do the exact same thing that we do. We must still do the exact same thing that they did: place our faith in Christ. Okay? There's only one way. There's only one entrance, one door. And no, Lovialias, it's not Peter. Peter's not the key. It is Jesus. Which, by the way, why anyone still listens to that guy, I don't know.
But... so do not be arrogant towards the branches. But if you are arrogant, remember that it is not you who supports the root, but the root supports you. You will say then, branches were broken off so that I might be grafted in. Quite right. They were broken off for their unbelief. Now, this right here, this word, apistia, this does not mean that they used to believe and they stopped believing. This just means they didn't believe. Well, wait a second, Corey. How could they have been branches in the first place? Well, again, God had determined he's gonna do this with the nation of Israel. They never believed. And Israel has been a disobedient, unbelieving nation for the first part, for the most part anyway. Okay? But you stand by your faith. Do not be conceited, but fear. For if God did not spare the natural branches, he will not spare you either. This is not you being broken off. You have no right to salvation except what God gives you. So they didn't believe. Okay? Now, behold then the kindness and severity of God: to those who fell, severity, but to you, God's kindness, if you continue in his kindness. Now wait a minute, Corey, you mean you have to continue? Yes, you have to continue. You have to continue in his kindness in order to stay saved. But guess what? You will continue in his kindness. Wait a minute, Corey, it seems like you're preaching that a person loses salvation. No, I'm not. No, I'm not. But it says if you continue. Yeah, if you continue. Because if you don't continue, then that means you weren't saved to begin with. But what do we know about Christians? We won't cover this again, because we've covered this a lot before, but Christians will continue. You absolutely will continue. But we'll leave that alone for right now. And they also, if they do not continue in their unbelief, if they don't keep their unbelief, they will be grafted in. So what is he gonna do? He's gonna bring people into this olive tree, into this root that he has, that brings us to salvation, and the only way it's done is through faith. How do we get the faith? Well, he already told us how we got the faith. Not of our will, but of his will. For God is able to graft them in again. If you were cut off from what is by nature a wild olive tree, and were grafted contrary to nature into a cultivated olive tree, how much more will these who are the natural branches be grafted into their own olive tree? For I do... now here's the point I wanna get to. For I do not want you, brethren, to be uninformed of this mystery, so that you will not be wise in your own estimation (that's you, Gentiles): that a partial hardening has happened to Israel until the fullness of the Gentiles has come in. That part is important, guys. This partial hardening has come to who? To the Jews. Not spiritual Jews, not spiritual Israel; to actual, physical, ethnic Israel. Has come to them until the Gentiles... or until the fullness, the completion, of the Gentiles has come. Until the fullness of the Gentiles has come in. And this word is eiselthē, which is to come in. So when all the Gentiles have come in, or the fullness of the Gentiles has come in, then he's gonna deal with who? The Jews. So all Israel will be saved. Look what he says, just as it is written: all of Israel will be saved. Not all of those who are Jewish... all of those who call themselves... all of Israel will be saved. I'll come back to that again, Jacob. I'll mark that, to come back again to it. All of Israel will be saved, just as it's written.
And by the way, where is this passage that he's quoting from? Where did he get this? Where did he get this from? Isaiah. Paul loves some Isaiah. The deliverer will come from Zion. He will remove ungodliness from Jacob. He will remove ungodliness from Jacob. What's he talking about? Break this, break here, break here. And who's the real Jew, the one that has faith? Well, dawg, come on, if I look at this branch, none of these branches actually have faith. So cut you folks off and the ones that have faith will be on there. Matter of fact, this is a faith branch. This is a faith tree. Let me start giving some faith to these Gentiles and graft them in as well by their faith, the faith that I'm giving them. Are you with me? Continue. He will remove ungodliness from Jacob. That's what he's talking about about these branches being broken off. This is my covenant with them. Stop the presses one more time. This is my covenant with them. Who's the them? Who is the them? Israel. Which covenant, by the way, is he speaking of? Anybody have any idea of which covenant? He says this is my covenant with them. This is why, this is why, this is why I get in trouble. This is why people get mad at me. I'm sorry if you do. I'm sorry if you do. I didn't write it. I'm just reading it. This covenant, notice I think we have a covenant. No, this is the new covenant. How do I know? How do I know this is the new covenant? He says when I take away their sins. Let's go, he's quoting, let's go to Isaiah 27. Do I even have it up here? No, I don't. But let me type it in. Isaiah 27. Here it is. Therefore, through this Jacob's iniquity, I'm sorry, yeah. Therefore, through this Jacob's, that's Israel's iniquity, will be forgiven. And this will be the full price of the pardoning of his sins. When he makes all the altar stones like pulverized chalk stones, when ashram and incense altars will not stand for their fortified city is isolated. Let's make a, let's make a marinoite one. Yeah, I just want to make sure I'm reading the right one. Okay, let me back up some more. Let me back up some more. Here it is. Let me start in verse one. In that day, the Lord will punish Leviathan, the fleeing serpent with his fierce and great and mighty sword, even Leviathan, the twisted serpent, and he will kill the dragon who lives in the sea. In that day, a vineyard of wine sing, a vineyard of wine sing of it. I the Lord am its keeper. I water it every moment. So that no one will damage it. I guard it night and day. I have no wrath. Let him make peace with me. Let him make peace with me. In those days to come, Jacob will take a root, will take root. Israel will blossom and sprout and they will flee and they will feel the whole world with fruit. Like the striking of him, like the listen that, like the striking of him who has struck them, has he struck them? Or like the slaughter of his slain, have they been slain? You contended with them by banishing them by driving them away with his fierce wind. He expelled them on the day of the East wind. Therefore, through this Jacob's iniquity will be forgiven and this will be the full price of the pardoning of his sins. So now remember this. He's speaking of Israel, Jacob, this scattering that's going to take place which he just talked about. Bring you back. And this pardoning of your sins. And so what he's speaking of guys is this new covenant. We'll probably cover the new covenant, what he said about the new covenant in just a little bit. 
So what we said in verse 28, he says from that standpoint or from the standpoint of the gospel, they are enemies for your sake. But from the standpoint of God's choice, huh, from the standpoint of God's choice, they are beloved for the sake of the fathers. What fathers? Abraham's, the Isaacs, the Jacob's of the world. Cause he just preached, he just spoke about Jacob or Israel. For just as you once were disobedient to God but now have been shown mercy because of their disobedient. So these also now have been disobedient that because of the mercy shown to you, they also may be shown mercy. Meaning this, since God has shown mercy to you, they're also, he's also gonna show mercy to Israel. For God has shut up all disobedience so that he may show mercy on all. Now, I'll go ahead and stop there. But the point is, as he's speaking Romans nine, 10 and 11, remember the whole theme of Romans is a salvific thing. Speaking about how we're saved by faith and so forth but he gets to nine, it's like, man, my people, these Jews, they're just messing up. And I wished, I wish it weren't so. I wish they were also being saved. But he says, God's not through with them. And since God's not through with them, he's showing what he's gonna do the same way that you have been shown or you've been chosen. He says that they will be chosen. So when we look at it at this point, we see that there is no need guys to, matter of fact, let me do this, let me do this. But there is no need to think that there is a different group other than, I mean, that Israel and the church are the same. Now what I mean by that is because some folks might be confused. What do you mean that Israel and the church in the same? Well, anyone that places their faith in Christ is a part of the church. But just because you're part of the church doesn't make you Israel. Israel is different and distinct. Matter of fact, how do I know? Let's go to verse 31 of Jeremiah 31. He says, behold, days are coming, declares the Lord. When I will make a new covenant with the house of Israel, that's what he's making the covenant with, the house of Israel and with the house of Judah. Not like the covenant which I made with their fathers in that day, I took them by the hand, I took them by the hand to bring them out of the land of Egypt which my covenant they broke, although I was a husband to them, declares the Lord. But this is the covenant which I will make with the house of Israel. After those days declares the Lord, I will put my law within them and on their heart I will write it and I will be their God and they shall be my people. Now, is he going to do the same thing? Well, look at that, wait a second, hey man, God is good. Been subscribed for a year, first time catching live. I've got to start sending, if I had a schedule, I'd send it to you. We try to do it at five p.m. I don't know where you are, but five central every time. But anyway, I will put my law within them and on their heart and I will be their God and they shall be my people. Now, so what is he going to do? He's going to put his law, his spirit in their heart and they shall be his people. Now, he's not only going to do that with Israel, but this covenant is with them to do just that. Yes, but not yet. With some, yes. And then this is where the jealousy part comes in. The jealousy part comes in where he also starts doing it with the Gentiles who are going to make them jealous. 
This is what we just read in Romans 9 and 11, how he's going to make them jealous, which was the prophecy that God gave in Deuteronomy 32. Look, he says, they will not teach again each man, his neighbor and each man, his brother saying, know the Lord, for they will all know me from the least of them to the greatest, declares the Lord, for I will forgive, when we read that Paul brought up that Isaiah brought up, for I will forgive their iniquity and their sin. I will remember no more, which is just what Paul quoted from Isaiah 29. Thus says the Lord, here's the important part, just says the Lord, who gives the sun for light, now look at this, who gives the sun for light by day and the fixed order of the moon and the stars for light by night, who stirs up the sea so that its waves roar the Lord of hosts, his name. And look what he says, this fixed order, the sun by day, the moon by night, the stars, the seas and the waves. Question, do we still have the sun, the moon, the stars, the waves, the sea? Do we still have those? Do we still have that? We still do. So look what he says. He says, if this fixed order departs from me, from before me, declares the Lord, then the offspring of Israel also will cease. So if the sun, the moon, the stars, all those things, if they depart, so too will the sons of Israel. So too will the offsprings, they will cease. If all of that stuff ceases, then so too will Israel and their offsprings cease. It's important guys. They will never cease to be a nation from the Lord forever. Israel will always be a nation before the Lord forever. However, Israel is in disobedience right now. Israel is in disobedience. What do you mean? I kind of see your point. Is this kind of like reverse psychology? Yeah. Israel is in disobedience right now. And they're supposed to be. Cause that's some other things that's gonna occur. I won't get into this. We'll come into this in a few weeks when we start talking about more of the end time stuff and the reason why Israel is in the position that it's in. Israel is just obstinate, as he says. They're disobedient and obstinate. Why? Israel is going to play its part in the end time. Israel is going to play her part. And then Israel is going to be crushed at that time. And then why is all that gonna happen? They're gonna look back. Like maybe something broke down and you went back and read the manual or you're going through something. How many of y'all get in trouble? Some things happen and it makes you read the Bible even more so. That's what they're gonna do. What about, good question. What about those who have passed on? He's not going to revive them to give them a pass. No, no. His promise isn't for specific Jews but just for the nation writ large. That's what his promise is for. And so when it happens, they'll be blessed. Now there are some Jews now who are, like I said, they're coming. There are some Jews that are coming to Christ. There are some Jews that are coming to Christ. But still, by and large, no. Now somebody might say, how do we know who actually is a Jew? Ultimately, in many cases, none of us don't because a lot of us, you know, whole lot of intermingling, whole lot of intermarrying. And so this Jew might find out he's not really a Jew. This Chinese guy over here might find out he's really a Jew. But God knows. But the Jews are the ones that are actually practicing it as well. And so that has something to do with it as well just because you say that you are. But oh, by the way, it didn't really matter. 
It's still gonna ultimately be the ones that he regenerates in their heart as well. Those who he chooses. Just like the elect in Israel, just like he says, the elect of the Gentiles as well. Are you with me? So I wanna go ahead and put a pause there because I don't wanna go too further because if I stay here, we'll be here for another hour or two. But I did wanna go ahead and give a few minutes to some questions regarding this. In the meantime, I want to say your guy, Ty, thank you for the super chat. And Jeff, thank you as well. Thank you. Kind words, kind words, kind words. Thank you. So if you have a few quick questions, which they can't be quick, huh? Let's go ahead and jump into a few of those. I would say this though, Laila. Jesus is fair, period point blank. I don't know if I would use that though. I think he's good. I think he's wonderful. I think he's awesome. I think he's amazing. I think he's loving. I think he's kind. I think he's merciful. I think he's gracious. But I think fair would be for him to save everybody. I think fair would be for him to heal and do for this person, like he does for others. I think that would be fair, but he's not fair. Nor does he need to be fair. I don't think he needs to be fair. Of course, if he really were fair, the fair thing to do, the right thing to do is to all of us. I have been, this might be a hard one. I don't know if this is hard. I hope you didn't, Alpha X, I hope you didn't give me a hard one. I have been wrestling with this thought, how can Jesus still be God and be made sin? Okay, here's how. The term made sin, let's look at what it means. Under the old covenant, this sheep, the scapegoat, they would confess all the sins on the head of a goat and send them away. Well, dogs on it, Duranda, you don't miss anymore, but it's okay though, it's okay. It's okay that you miss some. Let me, Duranda, let me make you feel better real, let me make you feel really, really good right now. If that didn't do it, I don't know, if you're not smiling right now, but listen, at least you're here now. At least you're here now, amen. But anyway, the law requires that the sin be on the head of people. And so you're not coming before God with this stain of sin. That's why Paul's sitting at Paul. John says in John 1, when he sees Jesus, look y'all, the Lamb of God who takes away the sins of the earth. In other words, we're gonna confess the sin on him, just like under the old covenant, under the atonement, the sins of the people who were confessed on the scapegoat. You didn't know that was coming, Stevie. But the same way is we became righteous. We took on his righteousness, not that we're righteous like him, nothing good about us. And so what happens is, it's this imputation that God is on. He's crediting righteousness, his righteousness to us, and our sin to him. Are you with me? That's what he's doing. And so he, at that moment, becoming sin is God looking at Christ on the cross, like he looks at us. And then because of our faith looks at us, like he should be looking at Christ. So it's not that he sees good old wonderful Corey, good old awesome Diana. Man, I think Catherine is amazing. She's so special. She's so sweet. That's not how he looks at us. What he does is he looks at the fact that payment was made and taken away and that's all they required. And so now you were in right standing. No, not in that regard. By the way, this is good right here, Diana. This is good right here. She says, how many of you start your day in prayer and worship? 
I encourage you to do that. Do not turn on your phone instead, pray and sing. You know how well that actually works? By the way, this is such good advice that I'm going to edit out this part so that you don't see that Diana put it up there and I'll take credit for it. That's good advice. Because if you do that, it starts your day off, right? How you start your day might be an indication of how you end your day. Start off on the good foot. Sharice, I'm not answering this question. I'm not answering this question because I'm going to do a video over this. I'm going to do a video over this, so. But I will give you a hint though. It's not as important as people make it out to believe. It's not as important as people make it out to believe. And listen, to go back to what Diana said, it's so easy you're laying in bed because how many of you might use your bed? You might use your phone as a clock. What time is it? Ah, it's too early. It's five o'clock already. It's six o'clock already. Ah, and since you got any hint, might as well go ahead and just, well, let's see what's happening on YouTube. Let's see what's happening on Facebook. Let's see what's happening in the Google sphere. It's so easy to do it. You're swiping up, down, left, right, and all. Which I, anyway, you're doing all this stuff. And the next thing you know, you forgot about God. You got up, I'm so tired. But start off with, as she said, this praise is sanctification synergistic or monogistic. I would say probably a little bit of both, which makes it, I guess, in some aspects is synergistic. Some aspects is, because ultimately, without the Holy Spirit, nothing happening. But if it's just, this is where he says this, he uses this term, by the way, the word synergist comes from soon, or soon, which is with, and air gate, which is work. So soon air gate, which is to work together. In our walk, we are working together. In my walk with Christ, we are working together. Now, who's doing most of the work? Who gets the credit? Who's the response? Him. It's almost like he's pushing you. I don't feel like it. Lord, listen, Lord, did you see what he did to me? But he's working through you. So, and he's getting glory through you. As a matter of fact, you don't mind him getting glory. You just don't. No, I got Michael Jordan on my team. Let me shoot the shot. But get a ball to him. That makes all the sense. Give the ball to this guy. You see this guy right here? Let's just, let's set a pick. Let's play defense. We'll shoot every now and then, but the game's on the line, get a ball to him. But if we get to him, and so, but a lot of people, a lot of guys on the Bulls team, we're happy. Hey, we got Michael Jordan doing this. Let's go ahead and, cause we wanna win. Not so much with LeBron. I wish I would hear LeBron is the goat that way. I wish I would hear you guys talk about LeBron. Anyway, I just had to put it out there cause I know some of the young folk are LeBron fans. I'm having fun. One of my best buddies is a LeBron James fan. And he had to return the phone call yet. He hadn't returned the phone call yet. Yellow flop. He hadn't returned the phone. What kind of friend is that? Yeah, when LeBron won last year, you called me. Yeah, yeah, okay. Now what's happening? You don't wanna talk to me. I see what it is. I'ma call him now. As a matter of fact, I'ma call him now. Hey, buddy. When does LeBron play again? Anyway. Yeah. If LeBron wants to win, he should do what Jordan did when he was down 0-3. That's it. Oh, wait, wait a minute. 
Cause this has never happened. Anyway, back to this, I'm kidding. I'm kidding. I'm kidding. I'm kidding. I'm kidding. Thank you, Durin. I appreciate that. Can you help me with what Atonement means? Okay. And Raider for Life. You mean like the Oakland, Los Angeles, Oakland, Las Vegas Raiders? Okay. I can ride with you. I can ride with you. But what does Atonement mean? Atonement, Durinda, is, it's from the Hebrew word, Ka'far. And there are three aspects or elements to it. The first two bring about the last one. It is a covering, a canceling that brings about a reconciliation. All three are needed and the reconciliation is the goal. Okay. It's like this. Let's say, who will I use? Who will I use? I'm gonna use Spider Monkey this time. Spider Monkey stole my colts cup. So now I don't have my colts cup. I just have my prison cup. We used to be cool, Spider Monkey. So I say, I want us to be friends. Spider Monkey says, I want us to be friends again. So what a Spider Monkey do? Stroll up in my face with my colts cup. I want us to be friends too. I enjoyed being your friend. All the while drinking some sort of elderberry juice out of my colts cup. The fame in my colts cup with junk inside it. I say, wait a second. You put some oat milk or soy milk in my colts cup. We got a problem. So what has to happen? Well, Spider Monkey needs to give me back the colts cup. You can't just keep the sin in my face. You cannot keep the sin in my face and then say, let's be friends. Something just beeped. I have no idea what that was. Oh, I see what it is. My computer's talking to me. You can't do that. So a couple of things have to happen. Yeah, elderberry is good. Elderberry is good. I'm pretty sure they're gonna have elderberry milk pretty soon. I'm pretty sure they're gonna have elderberry milk pretty soon, but you can't just keep on bringing the colts cup in my face and think, I'm cool with that. I mean, it's in my face. You can't do that. You can't do that. So get rid of the cup, but don't get rid of the cup. Give me my cup. And so here's how the atonement works. I say, because I'm the offended party, I get to tell you how this works. So I say, I want you to give me my colts cup back. Plus, I want a second colts cup. And I want colts season tickets. Yep. I want two colts cup and a season tickets. It's like, wait a minute, hold up. Wait a minute. I just took your colts cup. So you cannot tell me what it's going to take for me to feel better, for me to forgive you. I tell you what I require for forgiveness. If you want forgiveness, you say, okay, fine. I set the terms, not you. Okay? It's just like us going to Jesus or to the Lord. So you know what? I appreciate what you did on the cross, but I'll tell you what, I'll give you 37 cents for my salvation. Wait a second, I'm not taking that. Why not? Because you don't tell me how much it's going to take for me to save you. You don't tell me how much it's going to take for me to forgive you. Same thing with a cup. So, Spider Monk says, you know what, fine. Here's your old colts cup back. Here is the new colts cup. And here are season tickets to the colts. What do I do? Forgive. We hug it up. Amen. Matter of fact, you want to go to the game? We're reconciled. In a week, two months, a year, five years down the road, you know what I could never say to Spider Monk? Hey, you remember when you took my cup? I'm still mad about that. Can I ever bring that up to Spider Monk again? Can I ever say, Spider Monk, you did that? Now, I know I remember that. 
Spider Monk remembers too. Cost a grip to get those colts tickets. And the colt cup itself was probably $1,000 just for the colts cup. Costs a lot of money. I could never say, I could never bring that up, but hey, this is what you did. Nope, that's not how that works. Why? Because Spider Monk is now in right standing. The debt was paid and I accepted it. So the way the atonement works is God sets the terms. There's a covering. There's a covering of the sin. So all the sin are on the scapegoat and so the sin are going away. There's a covering. The canceling of debt, which is that blood that pays the debt that God requires. Once that's done, especially by faith, there is now reconciliation. Now you are in right standing. You are what we call justified. Mean, funny way of putting it. You are declared right and treated as such. Then I can never, ever, ever, amen monkey moves. There is no condemnation. That's what the atonement brings about. Now under the old covenant, the atonement was a yearly thing but now it's a one-time thing. The Jews wrestle with this in Hebrews but it's a one-time thing done once and for all this is the boldness, the confidence that we could have in approaching him. So that is what the atonement is. It's what it brings about. And unlike the old covenant, under the old way, it had to be done year after year after year after year. And if you got to do it year after year after year, it brings about a level of guilt. For example, if anyone has to, I don't know, go to kidney dialysis or they got to keep going to the doctor for some sort of treatment. They're grateful for the treatment and it keeps them alive. It helps them but they hate having to keep going all the time. You may have some pills you have to take every morning. The older you get, I hadn't gotten there yet but the older you get, the bigger your registry of pills becomes, you're grateful but you hate the fact that you got to keep taking pill after pill after like, ugh. Well, so Hebrews tells us that this guilt that's associated with that is also gone. Don't have to worry about this year after year. Once and for all. And that's how that works. Are you with me? So you never, because think about it, if the atonement now that the Bible says the Paul says that Christ was the end of it, that he completed, if it worked exactly the same way and you had to do it year after year after year, what was the purpose and reason for him doing it in the first place? The first place that was, you could lose being atoned for. Now, if Christ died and you can still lose your atonement, your right standing, well then that was a waste of time. It was a waste of blood and you have not been atoned for. And then it would make God be unjust because what would he do? He'll be bringing back the very thing. It would be like me saying to spider monkey, what about my cup? But you've been forgiven for that. And what, now let me make this complete even better and we'll leave on this. What if though I said to spider monkey, I want to participate in this atonement? So what am I gonna do? I'm going to give you the cup and I'm gonna give you the money to pay for my cold season ticket. I'm gonna do all that. I'll give it to you. You give it back to me and back. That's how this works. And so back in that way, God also provides the atonement to make us reconcile. Therein lies guys, the atonement and he will never, ever, ever hold it up against you. Why? Because he is literally the one that paid the debt. Amen.
|
Musgrave non-dead-centre engine
Musgrave's non-dead-centre engine was a stationary steam engine of unusual design, intended to solve the problem of stopping on dead centre. It was designed in 1887 to serve as a marine engine. It used a pair of linked cylinders to prevent the engine from stopping in a position where no turning force can be applied. At least one engine is known to survive.
Dead centres
The 'dead centre' of a piston engine with cranks is when the piston is at the exact top or bottom of the stroke and so the piston cannot exert any torque on the crankshaft. If a steam engine stops on dead centre, it will be unable to restart from that position.
Several solutions to this have been applied. One of the simplest is to try not to stop in this position; the crudest is to apply a strong arm with a crowbar to turn the engine over a little. Small steam barring engines were also used to move the engine away from dead centre before starting. If the engine has multiple cylinders, most geometries arrange the cranks so that all cylinders are never at dead centre together, and so one cylinder can always be used for starting.
Musgrave's solution was more complex: using two cylinders, additional connecting rod linkages, and geometry to avoid the problem.
Dead centre is rarely a problem for internal combustion engines, as these usually require cranking over to provide cylinder compression and so do not attempt to self-start from stationary. Some large stationary diesel engines that used a compressed-air starting mechanism have suffered from the problem of dead centres and so used a small manual barring gear.
Geometry
In appearance, the engine resembles a 'parallel twin' with two vertical cylinders and a single crankshaft between them, but set perpendicular to the line of the cylinders and sharing a single crankpin.
A parallel twin with this many cylinders would be self-starting from dead centre anyway (assuming the usual crankshaft with cranks at 90°).
The geometry in operation is more like that of a vee-twin engine. The two cylinders work together, but with one leading the other by approx. 30°. The difference in this case is that the cylinders are no longer directly in line with the crankshaft and so use the connecting rod as a form of bellcrank. If one cylinder is at dead centre, the other will be away from it by the amount of this angle.
A vee-twin would offer all the advantages of the Musgrave engine, but would only need two simple connecting rods. The cylinders would no longer be parallel, but that is far from impractical to manufacture, as demonstrated by the even earlier diagonal engine.
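To see concretely why the phase offset avoids dead centre, consider an idealised model in which each cylinder's turning effort on the crank is proportional to the sine of its crank angle (this ignores connecting-rod obliquity and steam admission, so it is only an illustration, not an engineering calculation). The short Python sketch below shows that a single cylinder's effort falls to zero twice per revolution, while a pair of cylinders roughly 30° apart always retains some combined effort:

import numpy as np

PHASE = np.radians(30)  # assumed phase lead of one cylinder over the other
theta = np.linspace(0.0, 2.0 * np.pi, 3600)  # crank angle over one revolution

# Idealised tangential effort of a double-acting cylinder: |sin(crank angle)|.
single = np.abs(np.sin(theta))
pair = single + np.abs(np.sin(theta + PHASE))

print(f"single cylinder, minimum effort: {single.min():.3f}")  # 0.000 -> dead centre
print(f"two cylinders 30 deg apart, min: {pair.min():.3f}")    # stays well above zero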
Connecting rod
The two cylinders are connected to the single crankpin through a complex connecting rod of four separate links, and a rigid mounting point to the frame and cylinders.
The main connecting rod is a large triangular frame, driven by both cylinders and driving the crankpin. Owing to the phase difference between the cylinders, this frame tilts back and forth as the engine rotates and so the cylinder crossheads drive it through two short connecting rods, allowing for some movement side-to-side. A large rocking lever attached to the engine's frame holds the connecting rod roughly central. On the Bolton engine, this lever is extended past the frame and used to drive the condenser air pump.
Similarities to the Ross yoke
A similar mechanism appears to have been invented independently, much later on. This is the Ross yoke, invented by Andy Ross for use with Stirling engines. A pair of parallel cylinders, one containing the (driving) piston and one containing the (driven) displacer, are connected so that they drive back and forth with a suitable phase shift between them.
Marine engines
The design of the engine originated with W.Y. Fleming and P. Ferguson, marine engineers of Glasgow, in 1887. It was intended for use as a marine engine, and at least 23 were supplied to shipbuilders requiring compact engines suitable for the restricted space of engine rooms.
Stationary engines
John Musgrave & Sons of the Globe Ironworks, Bolton, was a mill engine builder supplying the local cotton mills. The firm licensed the design in 1892, then patented further improvements to it in 1893.
Musgrave built up to 50 of these engines, the largest offering 1,500 ihp with quadruple expansion working. Ten of these quadruple-expansion, four-cylinder engines were built, the remainder mostly being two-cylinder compound engines, such as the Park Street Mill engine. The larger engines used Corliss valves.
The non-dead-centre mechanism also evened out the power delivery as the crank rotated, making the engine suitable for driving dynamos for electricity generation. It also ran at a relatively high speed for its day, making it possible to drive dynamos directly. A 500 hp Corliss valve engine was installed for electricity generation in Southport.
A poster in the Science Museum advertises engines built to "Fleming, Ferguson, & Dixon's patent". These are twin-cylinder compound engines with a single semi-rotary valve per cylinder (as for Park Street Mill) and are offered in a range from 8 to 250 ihp, with speeds from 160 to 250 rpm. Their working pressure is not specified, but the same poster also offers Lancashire boilers of up to 200 psi.
All of these engines are of robust construction, with large cast iron frames that have the cylinders cast integrally with them. The Park Street Mill engine is made from two large castings bolted together along a central plane and with the steam passages cored directly into the castings.
The crossheads are of the slipper pattern. This design has asymmetric bearing surfaces and so supports the forces better when the engine is rotating in one direction than in the other. Such crossheads are commonly found on stationary engines that do not need to be reversed. However, in the Musgrave design the two slideways face each other, and so one of them will always be working "in reverse" relative to usual practice.
Patents
* Fleming & Ferguson
* 1887
* 1889
* 1890
* 1891
* Musgrave & Dixon
* Improvements in Triple Expansion Engines, (1893).
Park Street Mill
Only one Musgrave non-dead-centre engine is known to survive, now preserved at the Bolton Steam Museum as part of the Northern Mill Engine Society collection. On steam days these engines (or at least some) may be seen in action. The collection also includes two other engines built by Musgrave's, which are not non-dead-centre engines but much smaller barring engines.
Models
* Science Museum, London
* A small model of a twin-cylinder compound engine is on display.
* Model Engineer magazine
* In 2009 the Model Engineer serialized the construction of a Musgrave engine, from castings supplied by the German firm of Lothar Matrian.
|
Hashmatullah Shahidi
Hashmatullah Shahidi (حشمت الله شاهدي; born 4 November 1994) is an Afghan cricketer and currently the captain of the Afghanistan national cricket team in One Day International (ODI) and Test cricket. He made his ODI debut for Afghanistan against Kenya in October 2013. Shahidi was one of the eleven cricketers to play in Afghanistan's first ever Test match, against India, in June 2018. He became the first Afghan player to score a Test double hundred when he made 200 not out against Zimbabwe on 11 March 2021.
Career
In the final of the 2017–18 Ahmad Shah Abdali 4-day Tournament, batting for Band-e-Amir Region against Speen Ghar Region, he scored 163 runs in the first innings. He hit his first international six in the ODI series against Ireland in Ireland in May 2019, having previously scored 865 runs without hitting a six in an ODI.
In May 2018, he was named in Afghanistan's squad for their inaugural Test match, played against India. He made his Test debut against India, on 14 June 2018. He top scored in the second innings of the match with an unbeaten 36, albeit in a losing cause. Afghanistan lost the one-sided Test within two days. In February 2019, he was named in Afghanistan's Test squad for their one-off match against Ireland in India.
On 11 March 2021 he became the first Afghan player to score a Test double hundred, making 200 not out in Afghanistan's first-innings total of 545 for 4 in the Test against Zimbabwe.
In April 2019, he was named in Afghanistan's squad for the 2019 Cricket World Cup. On 18 June 2019, in the match against England, Hashmatullah scored his 1,000th run in ODIs.
In September 2021, he was named in Afghanistan's squad for the 2021 ICC Men's T20 World Cup.
In October 2023, he was named the captain of Afghanistan for the ODI Cricket World Cup. He managed to score just 18 runs in the first game of the tournament, against Bangladesh. He played a brilliant innings against India, scoring 80 runs off 88 balls. Shahidi played another splendid knock against Pakistan to see his team home, scoring 48 not out off just 45 balls. He was at his best against Sri Lanka, scoring a half century in a must-win game: he came to the bat with Afghanistan at 73-2 and remained not out until the end of the match, alongside Azmatullah.
|
Missing feature: Apex Classes
"Installing this package requires the following feature and its associated permissions: Apex Classes"
This error comes up when I am creating the package installer for Apex in a Salesforce environment online. What do I have to do to fix this so I can start working on Apex? Please reply as fast as possible.
It sounds like you are trying to install a package into a professional edition org.
This will only work with managed packages that have been appropriately certified by Salesforce.
If using Apex is important to you and you need it quickly, the best option is to upgrade your Salesforce edition from Professional or Group Edition to Enterprise Edition or higher.
Re: I have registered as a trial account for 30 days.
From Package install fail: missing feature apex class
Andrew Smith
Director of Product Management, Force.com
The 30 day trials do not support Apex code. If you purchase Enterprise Edition, you can then install this app. If you want to try it out beforehand, sign up for a developer edition org at developer.force.com. Then install the app into that org. That will allow you to try it out before purchase.
So, as per Thomas's and Andrew's suggestions, a Developer Edition org will allow you to try out packages that rely on Apex. You can sign up for your FREE Developer environment.
Let me inform you that I have registered a trial account for 30 days, so I don't know which edition they have provided me. Is there any way to see the edition in Salesforce?
If you mouse over the title bar/tab name, the full edition name will be in the page title. If you want an Apex development/testing environment for free without a time limit, sign up for a Developer Edition org at http://developer.force.com/
|
Uniform Estimates for Averages of Order Statistics of Matrices
We prove uniform estimates for the expected value of averages of order statistics of matrices in terms of their largest entries. As an application, we obtain similar probabilistic estimates for $\ell_p$ norms via real interpolation.
Introduction and Main Results
Combinatorial and probabilistic inequalities play an important role in a variety of areas of mathematics, especially in Banach space theory. In [8] and [9], S. Kwapień and C. Schütt studied combinatorial expressions involving matrices and obtained inequalities in terms of the average of the largest entries of the matrix. To be more precise, they showed that, up to absolute constants,
$$\mathbb{E}_{\pi\in S_n}\max_{1\leq i\leq n}|a_{i\pi(i)}| \simeq \frac{1}{n}\sum_{k=1}^{n}s(k), \tag{1.1}$$
where $s(k)$ is the $k$-th largest entry of the matrix $a$ and $S_n$ the symmetric group. This estimate seems crucial if one wants to compute the projection constant of symmetric Banach spaces and related invariants. Among other things, the authors obtained estimates for the positive projection constant of finite dimensional Orlicz spaces and estimated the order of the projection constant of the Lorentz spaces $\ell^n_{2,1}$. Also, the symmetric sublattices of $\ell_1(c_0)$ as well as the finite dimensional symmetric subspaces of $\ell_1$ were characterized. Further applications and extensions of (1.1) can be found in [9,16,17,11,13], just to mention a few. The main result of this paper is a generalization of (1.1) in the sense that we study the expected value of averages of higher order statistics of a matrix in a more general setting described below. Our method of proof is purely probabilistic in nature, whereas the proof of (1.1) in [8] uses non-trivial combinatorial arguments.
In what follows, given a finite set $G$, we denote the normalized counting measure on $G$ by $\mathbb{P}$, i.e., $\mathbb{P}(E) = |E|/|G|$ for $E \subseteq G$, where $|\cdot|$ denotes the cardinality. $\mathbb{E}$ will always denote the expectation with respect to the normalized counting measure. Moreover, for a vector $x \in \mathbb{R}^n$ with nonnegative entries, we denote its $k$-th largest entry by $k\text{-}\max_{1\leq i\leq n} x_i$.
In particular, $1\text{-}\max_{1\leq i\leq n} x_i$ is the maximal value and $n\text{-}\max_{1\leq i\leq n} x_i$ the minimal value of $x$. Our main result is the following: Theorem 1.1. Let $n, N \in \mathbb{N}$ and $a \in \mathbb{R}^{n\times N}$. Let $G$ be a collection of maps from $I = \{1, \ldots, n\}$ to $J = \{1, \ldots, N\}$ and $C_G > 0$ be a constant only depending on $G$.
Assume that for all $i \in I$, $j \in J$ and all different pairs $(i_1, j_1) \neq (i_2, j_2)$:
(i) $\mathbb{P}(g(i) = j) = 1/N$;
(ii) $\mathbb{P}(g(i_1) = j_1,\ g(i_2) = j_2) \leq C_G/(N(N-1))$.
Then, for any $\ell \leq n$,
$$c_1\,\frac{1}{\ell N}\sum_{k=1}^{\ell N}s(k) \;\leq\; \mathbb{E}\,\frac{1}{\ell}\sum_{k=1}^{\ell} k\text{-}\max_{1\leq i\leq n}|a_{ig(i)}| \;\leq\; c_2\,\frac{1}{\ell N}\sum_{k=1}^{\ell N}s(k), \tag{1.2}$$
where $c_1, c_2 > 0$ are constants depending only on $C_G$. Observe that estimate (1.1) [8, Theorem 1.1] is a special case of our result with the choice $\ell = 1$ and $G = S_n$, and that for $\ell = 1$ and $G = \{1, \ldots, n\}^{\{1,\ldots,n\}}$ we directly obtain [2, Lemma 7]. Note that in this general setting $\mathbb{E}\max_{1\leq i\leq n}|a_{ig(i)}|$ was already studied in [9]. In a slightly different setting, order statistics were considered also in [2,3,4,5,6].
We will now present two natural choices for the set $G$ that appear frequently in the literature (cf. [8,9,16,17,15,13,2,1,7,12]). Example 1. If $N = n$ and $G = S_n$ is the group of permutations of the numbers $\{1, \ldots, n\}$, then $\mathbb{P}(g(i) = j) = 1/n$ for all $i, j$; for different pairs $(i_1, j_1) \neq (i_2, j_2)$ with $i_1 \neq i_2$ and $j_1 \neq j_2$ we have $\mathbb{P}(g(i_1) = j_1,\ g(i_2) = j_2) = \frac{1}{n(n-1)}$, and for different pairs with $i_1 = i_2$ or $j_1 = j_2$ this probability is $0$. This means that $C_G = 1$. Hence, Theorem 1.1 implies
$$\mathbb{E}_{\pi\in S_n}\,\frac{1}{\ell}\sum_{k=1}^{\ell} k\text{-}\max_{1\leq i\leq n}|a_{i\pi(i)}| \simeq \frac{1}{\ell n}\sum_{k=1}^{\ell n}s(k).$$
Another combinatorial inequality that was obtained in [8, Theorem 1.2], and which turned out to be crucial to study and characterize symmetric subspaces of $L_1$ (cf. [16,17,13]), states that for all $1 \leq p \leq \infty$
$$\mathbb{E}_{\pi\in S_n}\Big(\sum_{i=1}^{n}|a_{i\pi(i)}|^p\Big)^{1/p} \simeq \frac{1}{n}\sum_{k=1}^{n}s(k) + \Big(\frac{1}{n}\sum_{k=n+1}^{n^2}s(k)^p\Big)^{1/p}. \tag{1.3}$$
In Section 5, we will use Theorem 1.1 to generalize this result and show that the lower bound in (1.3) can be naturally derived via real interpolation. The upper bound is quite easily obtained and we just follow [8]. Please note that averages of order statistics of matrices naturally appear, as they are strongly related to the K-functional of the interpolation couple $(\ell_1, \ell_\infty)$. Again, two typical choices for the set of maps $G$ are $S_n$ and $\{1, \ldots, n\}^{\{1,\ldots,n\}}$. We will prove the following result: Theorem 1.2. Let $n, N \in \mathbb{N}$, $a \in \mathbb{R}^{n\times N}$, and $1 \leq p < \infty$. Let $G$ be a collection of maps from $I = \{1, \ldots, n\}$ to $J = \{1, \ldots, N\}$ and $C_G > 0$ be a constant only depending on $G$. Assume that for all $i \in I$, $j \in J$ and all different pairs $(i_1, j_1) \neq (i_2, j_2)$ conditions (i) and (ii) of Theorem 1.1 hold. Then
$$\mathbb{E}\Big(\sum_{i=1}^{n}|a_{ig(i)}|^p\Big)^{1/p} \simeq \frac{1}{N}\sum_{k=1}^{N}s(k) + \Big(\frac{1}{N}\sum_{k=N+1}^{nN}s(k)^p\Big)^{1/p},$$
with implied constants depending only on $C_G$. The organization of the paper is as follows. In Section 3, we will prove the lower estimate in (1.2). This is done by reducing the problem to the case of matrices only taking values in $\{0,1\}$ and showing the estimate for this subclass of matrices. In Section 4, we establish the upper bound in (1.2) by passing from averages of order statistics to equivalent Orlicz norms and using an extreme point argument. Section 5 contains the proof of Theorem 1.2.
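As a quick numerical sanity check of Example 1 in the case $\ell = 1$ (an illustrative sketch, not from the paper; the matrix, sample size, and seed below are arbitrary choices), the following Python snippet estimates the left-hand side of (1.1) over random permutations and compares it to the average of the $n$ largest entries:

import numpy as np

rng = np.random.default_rng(0)
n = 50
a = np.abs(rng.standard_normal((n, n)))  # arbitrary nonnegative test matrix

# E_pi max_i |a_{i,pi(i)}|, estimated by sampling random permutations.
trials = 10000
lhs = np.mean([a[np.arange(n), rng.permutation(n)].max() for _ in range(trials)])

# (1/n) * sum of the n largest entries s(1) >= ... >= s(n).
s = np.sort(a.ravel())[::-1]
rhs = s[:n].mean()

print(f"E max |a_(i,pi(i))| ~ {lhs:.4f}")
print(f"(1/n) sum_(k<=n) s(k) = {rhs:.4f}")  # the two agree up to a constant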
Notation and Preliminaries
Throughout this paper we will use |E| to denote the cardinality of a finite set E. By S n we denote the symmetric group on the set {1, . . . , n}. We will denote by ⌊x⌋ and ⌈x⌉ the largest integer m ≤ x and the smallest integer m ≥ x, respectively.
For an arbitrary matrix $a = (a_{ij})_{i,j=1}^{n,N}$, we denote by $(s(k))_{k=1}^{nN}$ the decreasing rearrangement of $(|a_{ij}|)_{i,j=1}^{n,N}$. To avoid confusion, in certain cases we write $(s_a(k))_{k=1}^{nN}$ to emphasize the underlying matrix $a$. Please also recall that the Paley-Zygmund inequality for a non-negative random variable $Z$ and $0 < \theta < 1$ states that
$$\mathbb{P}(Z \geq \theta\,\mathbb{E}Z) \;\geq\; (1-\theta)^2\,\frac{(\mathbb{E}Z)^2}{\mathbb{E}Z^2}. \tag{2.1}$$
For example, the classical $\ell_p$ spaces are Orlicz spaces with $M(t) = p^{-1}t^p$. The closed unit ball of the space $\ell^n_M$ will be denoted by $B^n_M$. We write $\mathrm{ext}(B^n_M)$ for the set of extreme points of $B^n_M$, and $\text{s-conv}(M)$ shall denote the set of points of strict convexity of $M$. We will make use of the characterization of extreme points of $B^n_M$ given in Lemma 2.1. For a detailed and thorough introduction to Orlicz spaces we refer the reader to [14] or [10].
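Since (2.1) drives the second-moment estimates in Section 3, here is a small numerical illustration (not from the paper; the distribution and parameters are arbitrary) checking the Paley-Zygmund bound in Python:

import numpy as np

rng = np.random.default_rng(2)
z = rng.exponential(scale=1.0, size=200_000)  # an arbitrary non-negative Z
theta = 0.5

lhs = np.mean(z >= theta * z.mean())                      # P(Z >= theta * EZ)
rhs = (1 - theta) ** 2 * z.mean() ** 2 / np.mean(z ** 2)  # PZ lower bound
print(f"{lhs:.3f} >= {rhs:.3f}")  # e.g. ~0.607 >= ~0.125 for Exp(1)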
The lower bound
In this section we will prove the lower bound in (1.2). We begin by recalling some notation and assumptions given in Theorem 1.1. Let $a \in \mathbb{R}^{n\times N}$, $I = \{1, \ldots, n\}$, $J = \{1, \ldots, N\}$, and let $G$ be a collection of maps from $I$ to $J$. The matrix $a$ will be fixed throughout the entire section. By $\mathbb{P}$ we denote the normalized counting measure on $G$, i.e., $\mathbb{P}(E) = |E|/|G|$ for $E \subseteq G$. We assume a uniform distribution of the random variable $g \mapsto g(i)$ for each $i \in I$, i.e., condition (i) of Theorem 1.1, and we assume condition (ii) for all different pairs $(i_1, j_1) \neq (i_2, j_2)$, with a constant $C_G \geq 1$ that depends on $G$, but not on $n$ or $N$. Without loss of generality, we will assume that $a$ has only non-negative entries. It is enough to show the lower estimate in (1.2) for matrices $a$ that consist of only the $\ell N$ largest entries, while all others are equal to zero. This is because if we change any entry $a_{i_0 j_0} \leq s(\ell N + 1)$ by setting $a_{i_0 j_0} = 0$, the left hand side in (1.2) remains the same, while $k\text{-}\max_{1\leq i\leq n}|a_{ig(i)}|$ does not increase for any $g \in G$.
3.1. The key ingredients. We will now introduce a bijective function $h$ that determines the ordering of the values of $a$. The crucial point is that this function does not depend on the actual values of the matrix, but merely on their relative size. So let $h : \{1, \ldots, n\cdot N\} \to I \times J$ be a bijective function satisfying $|a_{h(1)}| \geq |a_{h(2)}| \geq \cdots \geq |a_{h(nN)}|$, i.e., $|a_{h(k)}| = s(k)$ for all $k$. Observe that there is possibly more than one choice for $h$, since some of the entries of the matrix $a$ might have the same value. For all $j \in \mathbb{N}$, $1 \leq j \leq n\cdot N$, define the random variable $X_j(g) = |h(\{1, \ldots, j\}) \cap g| = \sum_{k=1}^{j} Y_k$, with $Y_k = \mathbf{1}_{\{h(k) \in g\}}$, where we identify $g$ with its graph $\{(i, g(i)) : i \in I\}$. $X_m$ counts the number of elements in the path $\{(i, g(i)) : i \in I\}$ that intersect with the positions of the $m$ largest entries of $a$. As we will see in Subsection 3.2, the random variables $X_m$ are strongly related to order statistics. In Lemma 3.1, Lemma 3.2, and Lemma 3.3, we investigate crucial properties of the distribution function of $X_m$.
In particular, we record a lower bound for $\mathbb{P}(X_m \geq 1)$ (Lemma 3.1). Proof. By using the inclusion-exclusion principle, we obtain the claimed bound, where the latter inequality is a direct consequence of conditions (i) and (ii) in Theorem 1.1.
Lemma 3.2.
For all $m \in \mathbb{N}$, $1 \leq m \leq n\cdot N$, and all $\theta \in (0,1)$, we have the corresponding small-ball estimate. Proof. The result follows as a consequence of Paley-Zygmund's inequality (cf. (2.1)). Therefore, we need to compute $\mathbb{E}X_m$ and $\mathbb{E}X_m^2$. Note that $\mathbb{E}Y_j = \mathbb{P}(Y_j = 1) = 1/N$ and thus $\mathbb{E}X_m = \sum_{j=1}^{m}\mathbb{E}Y_j = m/N$. Moreover, since $Y_j = Y_j^2$, we can expand $\mathbb{E}X_m^2 = m/N + \sum_{j \neq j'}\mathbb{E}Y_j Y_{j'}$ and bound the cross terms, where the latter inequality is a direct consequence of conditions (i) and (ii) in Theorem 1.1. Inserting those estimates in (2.1), we obtain the result.
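To make the quantities $X_m$ and $\mathbb{E}X_m = m/N$ concrete, here is a small simulation (an illustration under the assumption $G = S_n$, so $N = n$; the ordering $h$ is taken as an arbitrary fixed enumeration of positions, which is all the lemma requires):

import numpy as np

rng = np.random.default_rng(3)
n, m = 30, 100  # track m <= n*n positions

# h({1,...,m}): an arbitrary fixed set of m matrix positions (row-major here).
positions = [(i, j) for i in range(n) for j in range(n)][:m]
pos_set = set(positions)

# X_m(g) = |h({1,...,m}) intersect graph(g)| for random permutations g.
samples = []
for _ in range(20000):
    g = rng.permutation(n)
    samples.append(sum((i, int(g[i])) in pos_set for i in range(n)))
print(np.mean(samples), m / n)  # empirical E X_m vs the exact value m/N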
and, for all $k, m \in \mathbb{N}$ with $2kN \leq m \leq nN$, a corresponding estimate (3.4). On the other hand, if $m \geq N/C_G$, Lemma 3.1 implies the first claim. Now we prove (3.4). Let $k \leq n/2$ and let $m$ be such that $2kN \leq m \leq n\cdot N$. Then Lemma 3.2 with $\theta = 1/2$ implies the assertion.
3.2.
Reduction to two valued matrices. We will now reduce the problem of estimating the expected value of averages of order statistics of general matrices to matrices only taking one value different from zero. To do so, we need some more definitions.
Let A h be the collection of all non-negative real n × N matrices b that satisfy Observe that a m ∈ A h for all 1 ≤ m ≤ n · N . For b ∈ A h and g ∈ G we put Proof. Observe that for every integer k with 1 ≤ k ≤ ℓ, Thus, in order to prove the lemma, it is enough to show that i.e., the assertion of the lemma for m ≤ 2N . Now, let m ≥ 2N + 1 and choose the integer t ≥ 1 such that 2tN + 1 ≤ m ≤ 2(t + 1)N . The sequence k → P(X ℓN ≥ k) is decreasing, hence, noting that t ≤ ℓ,
Then, estimate (3.4) of Lemma 3.3 implies
and the result follows. Proof. Recall that X j (g) = |h({1, . . . , j}) ∩ g|. Hence, for all b ∈ A h , we can write Since a, a ∈ A h , a(h(j)) = s a (j) and a(h(j)) = (ℓN ) −1 ℓN i=1 s a (i) for all j ≤ ℓN , we obtain and where for all 1 ≤ j ≤ ℓN Note 3.3. Conclusion. As we have seen, we can reduce the case of general a to multiples of matrices only taking values zero and one. Before we finally prove the lower bound in the main theorem, we will need another simple lemma.
Now we conclude with
Lemma 3.6. Let b ∈ A h be an (n × N )-matrix consisting of ℓN ones and (n − ℓ)N zeros. Then, for all 1 ≤ k ≤ ℓ/2, Proof. Let k ≤ ℓ/2. Using Lemma 3.2 with θ = 1/2, we obtain Proof of the lower estimate in Theorem 1.1. By Theorem 3.5 we obtain Now take b ∈ A h consisting of ℓN ones and (n − ℓ)N zeros such that Then, by Lemma 3.6 Combining the above estimates yields which is the lower estimate in Theorem 1.1.
The upper bound
We will now prove the upper bound of Theorem 1.1 via an extreme point argument. To do so, we first use the fact that the average of the $j \leq n\cdot N$ largest entries of a matrix $a \in \mathbb{R}^{n\times N}$ is equivalent to an Orlicz norm $\|a\|_{M_j}$ (cf. Lemma 4.1). Then, since the expected value of the average of order statistics defines a norm on $\mathbb{R}^{n\times N}$ as well, it is enough to prove the upper bound in Theorem 1.1 for the extreme points of $B^{nN}_{M_j}$. Recall that, for a vector $(x_i)_{i=1}^{n} \in \mathbb{R}^n$, we denote the decreasing rearrangement of $(|x_i|)_{i=1}^{n}$ by $(x_i^*)_{i=1}^{n}$. We start with the approximation of sums of decreasing rearrangements of vectors $x \in \mathbb{R}^n$ by equivalent Orlicz norms.
The following result is due to C. Schütt (private communication). With his permission we include it here. Lemma 4.1. Let $j \in \mathbb{N}$, $1 \leq j \leq n$. Then, for all $x \in \mathbb{R}^n$, the sum $\sum_{i=1}^{j} x_i^*$ is equivalent to the Orlicz norm $\|x\|_{M_j}$. Proof. Let $x \in \mathbb{R}^n$. We start with the right hand side inequality. Hence, for all $k \geq j$, the required bound follows, and therefore we obtain the claim for all $\alpha < 1/2$. We are now able to prove the upper bound of Theorem 1.1.
Proposition 4.2. Let $a \in \mathbb{R}^{n\times N}$. Then, for all $\ell \leq n$, the upper estimate (4.2) holds. Proof. It is sufficient to show (4.2) for all $a \in \mathrm{ext}(B^{nN}_{M_\ell})$. Therefore, by Lemma 2.1 (2), we only need to consider matrices $a \in \mathbb{R}^{n\times N}$ of the extreme-point form. On the other hand, we also have the complementary estimate, and therefore we obtain the upper estimate in Theorem 1.1. Inequalities (3.6) and (4.4) together complete the proof.
An application of Theorem 1.1
We now present an application and use Theorem 1.1 to prove Theorem 1.2. The proof uses real interpolation and is, what we find, a natural approach to combinatorial inequalities such as (1.3) that were obtained in [8]. Please notice that [8, Theorem 1.2] is a special case of Theorem 1.2 when $G = S_n$.
Let us first recall some basic notions from interpolation theory. A pair $(X_0, X_1)$ of Banach spaces is called a compatible couple if there is some Hausdorff topological space $H$ in which each of $X_0$ and $X_1$ is continuously embedded. For example, $(L_1, L_\infty)$ is a compatible couple, since $L_1$ and $L_\infty$ are continuously embedded into the space of measurable functions that are finite almost everywhere. Of course, any pair $(X, Y)$ for which one of the spaces is continuously embedded in the other is a compatible couple.
For a compatible couple $(X_0, X_1)$ (with corresponding Hausdorff space $H$), we equip $X_0 + X_1$ with the norm
$$\|x\|_{X_0 + X_1} = \inf\{\|x_0\|_{X_0} + \|x_1\|_{X_1} : x = x_0 + x_1,\ x_0 \in X_0,\ x_1 \in X_1\},$$
under which this space becomes a Banach space. This definition is independent of the particular space $H$.
Proof of Theorem 1.2. To show the upper bound we use the same argument as in [8]. For the sake of completeness we include it here. Let $a \in \mathbb{R}^{n\times N}$ and write $a = a' + a''$, where $a'$ contains the $N$ largest entries of $a$ and zeros elsewhere, and $a''$ contains $s(N+1), \ldots, s(nN)$ and zeros elsewhere. Then, using the triangle and Jensen's inequalities, we obtain the upper bound. We will now prove the lower bound. Let $1 \leq p < \infty$ and $\theta = 1 - 1/p$. First, recall that
$$\|a\|_{\theta,p} = \Big(\int_0^\infty \big(t^{-\theta} K(a,t)\big)^p\,\frac{dt}{t}\Big)^{1/p},$$
where
$$K(a,t) = \inf_{a = b + c}\int_G \Big(\|b(g)\|_{\ell_1^n} + t\,\|c(g)\|_{\ell_\infty^n}\Big)\,d\mathbb{P}(g) = \int_G \inf_{a(g) = b(g) + c(g)}\Big(\|b(g)\|_{\ell_1^n} + t\,\|c(g)\|_{\ell_\infty^n}\Big)\,d\mathbb{P}(g).$$
Hence, we obtain the lower bound, where $c_2$ is a positive constant only depending on $C_G$. Taking the $p$-th root concludes the proof.
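For the couple $(\ell_1, \ell_\infty)$ the inner K-functional above has a well-known closed form in terms of the decreasing rearrangement, $K(x,t) = \sum_{i=1}^{\lfloor t\rfloor} x_i^* + (t - \lfloor t\rfloor)\,x_{\lfloor t\rfloor+1}^*$, which is exactly what ties it to averages of order statistics. As an illustration (not part of the paper; helper names and parameters are ad hoc), the Python sketch below checks this closed form against the infimum definition, using the fact that an optimal decomposition truncates the vector at some level:

import numpy as np

def k_closed_form(x, t):
    # K(x, t; l1, l_inf) via the decreasing rearrangement, for 0 <= t <= len(x).
    xs = np.sort(np.abs(x))[::-1]
    m = int(np.floor(t))
    tail = (t - m) * xs[m] if m < len(xs) else 0.0
    return xs[:m].sum() + tail

def k_by_truncation(x, t, grid=4001):
    # Infimum over decompositions x = b + c: an optimal split keeps the excess
    # over a level lam in b and the truncated part in c, so scan over lam.
    ax = np.abs(x)
    lams = np.linspace(0.0, ax.max(), grid)
    return min(np.maximum(ax - lam, 0.0).sum() + t * lam for lam in lams)

rng = np.random.default_rng(1)
x = rng.standard_normal(10)
for t in (0.5, 2.0, 3.7):
    print(t, k_closed_form(x, t), k_by_truncation(x, t))  # values (nearly) agree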
|
// THIS FILE WAS AUTO-GENERATED BY ADKGEN -- DO NOT MODIFY!
//
// Copyright (c)1998-2011 Pearson Education, Inc. or its affiliate(s).
// All rights reserved.
//
using System;
using OpenADK.Library;
namespace OpenADK.Library.au.Common
{
///<summary>
/// Defines the set of values that can be specified whenever an AU025AddressType
/// is used as a parameter to a method or constructor.
///</summary>
/// <remarks>
/// Alternatively, the static
/// <see cref="Wrap"/> method can be called to encapsulate any string value in
/// an AU025AddressType object.
/// <para>Author: Generated by adkgen</para>
/// <para>Version: 2.6</para>
/// <para>Since: 2.3</para>
/// </remarks>
[Serializable]
public class AU025AddressType : SifEnum
{
/// <summary>Mailing address ("0123")</summary>
public static readonly AU025AddressType C0123_MAILING_ADDRESS = new AU025AddressType("0123");
/// <summary>Physical location address ("0765")</summary>
public static readonly AU025AddressType C0765_PHYSICAL_LOCATION = new AU025AddressType("0765");
/// <summary>Other ("9999")</summary>
public static readonly AU025AddressType C9999_OTHER = new AU025AddressType("9999");
/// <summary>Shipping address ("0124")</summary>
public static readonly AU025AddressType C0124_SHIPPING_ADDRESS = new AU025AddressType("0124");
///<summary>Wrap an arbitrary string value in an AU025AddressType object.</summary>
///<param name="wrappedValue">The element/attribute value.</param>
///<remarks>This method does not verify
///that the value is valid according to the SIF Specification</remarks>
public static AU025AddressType Wrap( String wrappedValue ) {
return new AU025AddressType( wrappedValue );
}
private AU025AddressType( string enumDefValue ) : base( enumDefValue ) {}
}
}
|
Thread:Whitestripe99/@comment-32769624-20180703000238
Hi, welcome to ! Thanks for your edit to the Shirai Ryu (Character) page.
Please leave me a message if I can help with anything!
NOTE: This is an automated message.
|
#include "caster.h"
typedef struct Caster_{
struct Caster interface;
char *caster_host;
int caster_port;
void (*onTune)(struct Caster *caster , char *target_id , CasterTune tune);
void (*onCast)(struct Caster *caster , char *target_id , void *data , int size);
char *id;
char *scode;
Kmap *kmap;
Kudp *kudp;
Kprocessor *kprocessor;
} Caster_;
typedef struct CasterItem{
CasterTune tune;
KudpPeer *kpeer;
} CasterItem;
void *caster_looper(Caster *caster);
CasterTune caster_text_to_tune(char *text);
char *caster_tune_to_text(CasterTune tune);
void caster_onSpawn(struct Kudp *kudp);
void caster_onOpen(struct Kudp *kudp , KudpPeer *kpeer , void *arg);
void caster_onData(struct Kudp *kudp , KudpPeer *kpeer , char *data , int size);
void caster_onClose(struct Kudp *kudp , KudpPeer *kpeer);
int caster_key_comperator(void *key_1 , void *key_2);
void caster_key_destructor(void *key);
void caster_value_destructor(void *value);
int caster_start(struct Caster *caster , char *setting_id , char *setting_scode);
Kmap* caster_state(struct Caster *caster);
int caster_stop(struct Caster *caster);
int caster_tune(struct Caster *caster , char *target_id , CasterTune tune);
int caster_cast(struct Caster *caster , void *data , int size , int end);
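/*
 * Keepalive loop, run on the processor thread started in caster_start():
 * every 3 seconds a JSON "PING" message is sent to each target that
 * currently has a connected peer (item->kpeer != NULL).
 */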
void *caster_looper(Caster *caster){
Caster_ *caster_ = (Caster_ *) caster;
if(caster_ == NULL){
return NULL;
}
while(1){
for(int cursor = 0 ; cursor < caster_->kmap->length(caster_->kmap) ; cursor++){
CasterItem *item = caster_->kmap->getvalue(caster_->kmap,cursor);
if(item->kpeer != NULL){
Kmap *response = kson_parse("{}");
response->put(response,"DATA_TYPE",KTYPE_STACK,"PING",KTYPE_STACK);
char *response_text = kson_pack(response);
kmap_free(response);
caster_->kudp->sendto(caster_->kudp,item->kpeer,response_text,(int) (strlen(response_text) + 1),0);
kmemory_free(response_text);
}
}
sleep(3);
}
return NULL;
}
CasterTune caster_text_to_tune(char *text){
if(strcmp(text,"OPEN") == 0){
return CASTER_TUNE_OPEN;
}else if(strcmp(text,"RDWR") == 0){
return CASTER_TUNE_RDWR;
}else if(strcmp(text,"RDONLY") == 0){
return CASTER_TUNE_RDONLY;
}else if(strcmp(text,"WRONLY") == 0){
return CASTER_TUNE_WRONLY;
}else if(strcmp(text,"NONE") == 0){
return CASTER_TUNE_NONE;
}else if(strcmp(text,"WAIT") == 0){
return CASTER_TUNE_WAIT;
}else{
return CASTER_TUNE_CLOSE;
}
}
char *caster_tune_to_text(CasterTune tune){
switch (tune){
case CASTER_TUNE_OPEN:
return "OPEN";
case CASTER_TUNE_RDWR:
return "RDWR";
case CASTER_TUNE_RDONLY:
return "RDONLY";
case CASTER_TUNE_WRONLY:
return "WRONLY";
case CASTER_TUNE_NONE:
return "NONE";
case CASTER_TUNE_WAIT:
return "WAIT";
default:
return "CLOSE";
}
}
void caster_onSpawn(struct Kudp *kudp){}
void caster_onOpen(struct Kudp *kudp , KudpPeer *kpeer , void *arg){
// send request to server
Caster_ *caster_ = kudp->bundle;
if(caster_ == NULL){
return;
}
CasterItem *item = caster_->kmap->get(caster_->kmap,arg);
if(item == NULL || item->kpeer != NULL){
return;
}
item->kpeer = kmemory_copy_struct(kpeer,sizeof(KudpPeer));
Kmap *request = kson_parse("{}");
request->put(request,"DATA_TYPE",KTYPE_STACK,"TUNE",KTYPE_STACK);
request->put(request,"TUNE",KTYPE_STACK,caster_tune_to_text(item->tune),KTYPE_STACK);
request->put(request,"TYPE",KTYPE_STACK,"REQUEST",KTYPE_STACK);
request->put(request,"TARGET_ID",KTYPE_STACK,arg,KTYPE_STACK);
request->put(request,"ID",KTYPE_STACK,caster_->id,KTYPE_STACK);
request->put(request,"SCODE",KTYPE_STACK,caster_->scode,KTYPE_STACK);
char *request_text = kson_pack(request);
kmap_free(request);
kudp->sendto(kudp,kpeer,request_text,(int) strlen(request_text)+1,0);
kmemory_free(request_text);
if(item->tune == CASTER_TUNE_CLOSE){
kudp->close(kudp,kpeer);
}
}
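/*
 * Incoming datagram dispatch: payloads beginning with '{' are parsed as JSON
 * control messages (TUNE request/response, PING/PONG); any other payload is
 * treated as raw cast data and handed to the onCast callback, provided it
 * comes from a known peer whose tune permits reading (RDWR or RDONLY).
 */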
void caster_onData(struct Kudp *kudp , KudpPeer *kpeer , char *data , int size){
Caster_ *caster_ = kudp->bundle;
if(caster_ == NULL){
return;
}
if(data[0] == '{') {
Kmap *request = kson_parse(data);
if (request == NULL) {
return;
}
char *data_type = request->get(request, "DATA_TYPE");
if (data_type == NULL) {
kmap_free(request);
return;
}
if(strcmp(data_type,"TUNE") == 0){
char *tune_text = request->get(request,"TUNE");
char *type_text = request->get(request,"TYPE");
char *target_id_text = request->get(request,"TARGET_ID");
if(strcmp(type_text,"REQUEST") == 0){
CasterTune tune = caster_text_to_tune(tune_text);
if(tune == CASTER_TUNE_OPEN){
// none
}else if(tune == CASTER_TUNE_CLOSE){
kudp->close(kudp,kpeer);
}else if(tune != CASTER_TUNE_WAIT){
CasterItem *item = caster_->kmap->get(caster_->kmap,target_id_text);
if(item != NULL){
item->tune = tune;
Kmap *response = kson_parse("{}");
response->put(response,"DATA_TYPE",KTYPE_STACK,"TUNE",KTYPE_STACK);
response->put(response,"TUNE",KTYPE_STACK,tune_text,KTYPE_STACK);
response->put(response,"TYPE",KTYPE_STACK,"RESPONSE",KTYPE_STACK);
char *response_text = kson_pack(response);
kmap_free(response);
kudp->sendto(kudp,kpeer,response_text,(int) strlen(response_text)+1,0);
kmemory_free(response_text); /* fix: free the packed text; the map was already freed above */
caster_->onTune((struct Caster *) caster_, target_id_text, item->tune);
}
}
}else if(strcmp(type_text,"RESPONSE") == 0){
CasterTune tune = caster_text_to_tune(tune_text);
if(tune == CASTER_TUNE_OPEN){
CasterItem *item = caster_->kmap->get(caster_->kmap,target_id_text);
if(item != NULL){
item->tune = CASTER_TUNE_NONE;
item->kpeer->host = kmemory_copy_string(request->get(request,"HOST"));
item->kpeer->port = atoi(request->get(request,"PORT"));
caster_->onTune((struct Caster *) caster_, target_id_text, item->tune);
}
}else if(tune == CASTER_TUNE_CLOSE){
kudp->close(kudp,kpeer);
}else if(tune != CASTER_TUNE_WAIT){
CasterItem *item = caster_->kmap->get(caster_->kmap,target_id_text);
if(item != NULL){
item->tune = tune;
caster_->onTune((struct Caster *) caster_, target_id_text, item->tune);
}
}
}
}else if(strcmp(data_type,"PING") == 0){
Kmap *response = kson_parse("{}");
response->put(response,"DATA_TYPE",KTYPE_STACK,"PONG",KTYPE_STACK);
char *response_text = kson_pack(response);
kmap_free(response);
kudp->sendto(kudp,kpeer,response_text,(int) (strlen(response_text) + 1),0);
kmemory_free(response_text);
}else if(strcmp(data_type,"PONG") == 0){}
kmap_free(request);
}else{
for(int cursor = 0 ; cursor < caster_->kmap->length(caster_->kmap) ; cursor++){
CasterItem *item = caster_->kmap->getvalue(caster_->kmap,cursor);
if(item->kpeer != NULL && item->kpeer->fd == kpeer->fd){ /* guard: item may not be connected yet */
if(strcmp(item->kpeer->host,kpeer->host) == 0 && item->kpeer->port == kpeer->port && (item->tune == CASTER_TUNE_RDWR || item->tune == CASTER_TUNE_RDONLY)){
caster_->onCast((struct Caster *) caster_, caster_->kmap->getkey(caster_->kmap, cursor), data, size);
}
break;
}
}
}
}
void caster_onClose(struct Kudp *kudp , KudpPeer *kpeer){
Caster_ *caster_ = kudp->bundle;
if(caster_ == NULL){
return;
}
for(int cursor = 0 ; cursor < caster_->kmap->length(caster_->kmap) ; cursor++){
CasterItem *item = caster_->kmap->getvalue(caster_->kmap,cursor);
if(item->kpeer != NULL){
if(item->kpeer->fd == kpeer->fd && item->kpeer->index == kpeer->index){
// getkey()'s result is freed by remove(), so notify the callback before removing the entry
char *target_id = caster_->kmap->getkey(caster_->kmap,cursor);
caster_->onTune((struct Caster *) caster_, target_id, CASTER_TUNE_CLOSE);
caster_->kmap->remove(caster_->kmap,target_id);
break;
}
}
}
}
int caster_key_comperator(void *key_1 , void *key_2){
return strcmp(key_1,key_2) == 0;
}
void caster_key_destructor(void *key){
kmemory_free(key);
}
void caster_value_destructor(void *value){
kmemory_free(value);
}
int caster_start(struct Caster *caster , char *setting_id , char *setting_scode){
Caster_ *caster_ = (Caster_ *) caster;
if(caster_ == NULL || setting_id == NULL || setting_scode == NULL){
return 0;
}
if(caster_->id != NULL || caster_->scode != NULL){
return 0;
}
caster_->id = kmemory_copy_string(setting_id);
caster_->scode = kmemory_copy_string(setting_scode);
caster_->kudp->binder(caster_->kudp,NULL);
caster_->kprocessor->start(caster_->kprocessor);
caster_->kprocessor->post(caster_->kprocessor,(void *(*)(void *)) caster_looper,caster);
return 1;
}
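// Snapshot the current tune of every registered target as a {target_id: tune_text} map.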
Kmap* caster_state(struct Caster *caster){
Caster_ *caster_ = (Caster_ *) caster;
if(caster_ == NULL){
return NULL;
}
Kmap *kmap = kson_parse("{}");
for(int cursor = 0 ; cursor < caster_->kmap->length(caster_->kmap) ; cursor++){
CasterItem *item = caster_->kmap->getvalue(caster_->kmap,cursor);
kmap->put(kmap,kmemory_copy_string(caster_->kmap->getkey(caster_->kmap,cursor)),KTYPE_HEAP,kmemory_copy_string(caster_tune_to_text(item->tune)),KTYPE_HEAP);
}
return kmap;
}
int caster_stop(struct Caster *caster){
Caster_ *caster_ = (Caster_ *) caster;
if(caster_ == NULL){
return 0;
}
caster_->kprocessor->stop(caster_->kprocessor);
caster_->kudp->shutdown(caster_->kudp);
if(caster_->id != NULL){
    kmemory_free(caster_->id);
    caster_->id = NULL;
}
if(caster_->scode != NULL){
    kmemory_free(caster_->scode);
    caster_->scode = NULL;
}
return 1;
}
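// Request a tune change for target_id: OPEN registers the target and binds through the
// caster server, CLOSE tears the link down, and other tunes are negotiated with the peer.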
int caster_tune(struct Caster *caster , char *target_id , CasterTune tune){
Caster_ *caster_ = (Caster_ *) caster;
if(caster_ == NULL){
return 0;
}
if(caster_->id == NULL || caster_->scode == NULL){
return 0;
}
if(tune == CASTER_TUNE_OPEN){
CasterItem *item = kmemory_alloc(sizeof(CasterItem));
item->tune = CASTER_TUNE_OPEN;
item->kpeer = NULL;
caster_->kmap->put(caster_->kmap,kmemory_copy_string(target_id),KTYPE_HEAP,item,KTYPE_HEAP);
caster_->onTune(caster,target_id,item->tune);
if(!caster_->kudp->bind(caster_->kudp,target_id,caster_->caster_host,caster_->caster_port)){
caster_->kmap->remove(caster_->kmap,target_id);
caster_->onTune(caster,target_id,CASTER_TUNE_CLOSE);
}
}else if(tune == CASTER_TUNE_CLOSE){
CasterItem *item = caster_->kmap->get(caster_->kmap,target_id);
if(item != NULL){
item->tune = CASTER_TUNE_CLOSE;
Kmap *request = kson_parse("{}");
request->put(request,"DATA_TYPE",KTYPE_STACK,"TUNE",KTYPE_STACK);
request->put(request,"TUNE",KTYPE_STACK,"CLOSE",KTYPE_STACK);
request->put(request,"TYPE",KTYPE_STACK,"REQUEST",KTYPE_STACK);
char *request_text = kson_pack(request);
kmap_free(request);
caster_->kudp->sendto(caster_->kudp,item->kpeer,request_text,(int) strlen(request_text)+1,0);
kmemory_free(request_text);
caster_->onTune(caster,target_id,item->tune);
}else{
item = kmemory_alloc(sizeof(CasterItem));
item->tune = CASTER_TUNE_CLOSE;
item->kpeer = NULL;
caster_->kmap->put(caster_->kmap,kmemory_copy_string(target_id),KTYPE_HEAP,item,KTYPE_HEAP);
if(!caster_->kudp->bind(caster_->kudp,target_id,caster_->caster_host,caster_->caster_port)){
caster_->kmap->remove(caster_->kmap,target_id);
}
}
}else if(tune != CASTER_TUNE_WAIT){
CasterItem *item = caster_->kmap->get(caster_->kmap,target_id);
if(item == NULL || item->kpeer == NULL || (strcmp(item->kpeer->host,caster_->caster_host) == 0 && item->kpeer->port == caster_->caster_port)){
return 0;
}
item->tune = CASTER_TUNE_WAIT;
Kmap *request = kson_parse("{}");
request->put(request,"DATA_TYPE",KTYPE_STACK,"TUNE",KTYPE_STACK);
request->put(request,"TUNE",KTYPE_STACK,caster_tune_to_text(tune),KTYPE_STACK);
request->put(request,"TYPE",KTYPE_STACK,"REQUEST",KTYPE_STACK);
char *request_text = kson_pack(request);
kmap_free(request);
caster_->kudp->sendto(caster_->kudp,item->kpeer,request_text,(int) strlen(request_text)+1,0);
kmemory_free(request_text);
caster_->onTune(caster,target_id,item->tune);
}
return 1;
}
int caster_cast(struct Caster *caster , void *data , int size , int end){
// broadcast to every target whose tune is WRONLY or RDWR
Caster_ *caster_ = (Caster_ *) caster;
if(caster_ == NULL){
return 0;
}
if(caster_->id == NULL || caster_->scode == NULL){
return 0;
}
for(int cursor = 0 ; cursor < caster_->kmap->length(caster_->kmap) ; cursor++){
CasterItem *item = caster_->kmap->getvalue(caster_->kmap,cursor);
if(item->tune == CASTER_TUNE_RDWR || item->tune == CASTER_TUNE_WRONLY){
caster_->kudp->sendto(caster_->kudp,item->kpeer,data,size,end);
}
}
return 1;
}
Caster *caster_new(
int pool_size,
int buffer_size,
char *caster_host,
int caster_port,
void (*onTune)(struct Caster *caster , char *target_id , CasterTune tune),
void (*onCast)(struct Caster *caster , char *target_id , void *data , int size)
){
Caster_ *caster_ = kmemory_alloc(sizeof(Caster_));
caster_->interface.start = caster_start;
caster_->interface.state = caster_state;
caster_->interface.stop = caster_stop;
caster_->interface.tune = caster_tune;
caster_->interface.cast = caster_cast;
caster_->caster_host = kmemory_copy_string(caster_host);
caster_->caster_port = caster_port;
caster_->onTune = onTune;
caster_->onCast = onCast;
caster_->id = NULL;
caster_->scode = NULL;
caster_->kmap = kmap_new(KCONCURRENCY_CONCURRENT,2.0f,caster_key_comperator,caster_key_destructor,caster_value_destructor);
caster_->kudp = kudp_new(pool_size,buffer_size,caster_onSpawn,caster_onOpen,caster_onData,caster_onClose);
caster_->kprocessor = kprocessor_new(KPROCESSOR_TYPE_THREAD,1,NULL,NULL);
if(caster_->kmap == NULL || caster_->kudp == NULL || caster_->kprocessor == NULL){
caster_free((Caster *) caster_);
return NULL;
}
caster_->kudp->bundle = caster_;
return (Caster *) caster_;
}
void caster_free(Caster *caster){
Caster_ *caster_ = (Caster_ *) caster;
if(caster_ == NULL){
return;
}
kmap_free(caster_->kmap);
kudp_free(caster_->kudp);
kprocessor_free(caster_->kprocessor);
kmemory_free(caster_->caster_host);
kmemory_free(caster_);
}
|
Decelerated Aging
- Bilbo Baggins (The Lord of the Rings)
The power to age at a slower-than-normal rate. Opposite of Accelerated Aging.
Also Called
* Decelerated Aging Process
* Delayed/Retarded/Slowed/Static Aging
* Longevity
Capabilities
Associations
* Age Deceleration
* Age Manipulation
* Enhanced Condition
* Regenerative Healing Factor
* Semi-Immortality/Immortality
* Telomere Regeneration
Limitations
* May still be susceptible to Age Manipulation.
Known Locations
* Geotopia (Ice Age: Collision Course)
* Fountain of Youth (Ice Age: Collision Course); crystal asteroid
|
<?php
namespace App\Http\Controllers;
use Illuminate\Http\Request;
use \App\JenisTemuan;
use \App\Tindak;
class TindakController extends Controller
{
public function index()
{
$jenis_temuan = JenisTemuan::all();
$data_tindak = Tindak::all();
return view('tindak.index',['data_tindak' => $data_tindak, 'jenis_temuan' => $jenis_temuan]);
}
public function create(Request $request)
{
//$this->validate($request,[
//'lhp_id' => 'required',
//'isi_tindak' => 'required',
//]);
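// NOTE: validation above is commented out, so create() mass-assigns every request field.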
Tindak::create($request->all());
return redirect('/tindak');
}
public function edit($id)
{
$jenis_temuan = JenisTemuan::all();
$tindak = Tindak::find($id);
return view('tindak.edit',['jenis_temuan' => $jenis_temuan, 'tindak' => $tindak]);
}
public function update(Request $request,$id)
{
$tindak = Tindak::find($id);
$tindak->update($request->all());
return redirect('/tindak');
}
public function del($id)
{
$tindak = Tindak::find($id);
$tindak->delete();
return redirect('/tindak');
}
}
|
/*
* Copyright (c) 2016 Intel Corporation. All rights reserved.
* See the bottom of this file for the license terms.
*/
/*
This sketch example demonstrates how the BMI160 on the
Intel(R) Curie(TM) module can be used to read accelerometer data
*/
#include "CurieIMU.h"
#include <Servo.h>
#include <CurieBLE.h>
Servo myservoflex; // create servo object to control a servo
Servo myservoextend; // create servo object to control a servo
BLEService motorService("ebbb19a6-c943-44f9-aee0-e180300007f0"); //custom 128-bit UUID service
//motor characteristic
BLEIntCharacteristic motorReading("00211321-dc03-4f55-82b6-14a630bd8e2d",
BLERead|BLENotify|BLEWrite);
//motor properties
BLEIntCharacteristic motorExtendChar("809ba7a9-13ad-4446-a005-bdc12ca93c76", BLERead|BLENotify|BLEWrite);
BLEIntCharacteristic motorContractChar("2fad8a3f-e1a1-47cd-982b-24e13fbe9342", BLERead|BLENotify|BLEWrite);
// twelve servo objects can be created on most boards
float timeMillis=0;
float timeMillisMotionReset=0;
float timeMillisGyroReset=0;
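// IMU state: ax..az = scaled accelerometer axes, gx..gz = scaled gyro axes,
// g*_bias = at-rest gyro offsets, gtot_filt = low-pass filtered gyro magnitude used for motion detection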
float ax, ay, az, atot, gx, gy, gz, gtot, gx_bias, gy_bias, gz_bias, gtot_filt;
int motionDetect=0;
int switchPosition=0; // fingers extended
int autoButton=0;
int autoButtonOld=0;
int extendButton=0;
int extendButtonOld=0;
int extendButtonChange=0;
int delayMode=0;
int timeMillisDelayMode=0;
int extendMotor=50;//50 = fully squeeze bottle; 80 = partially squeeze bottle
int retractMotor=150;//150 = fully extend fingers; 120 = partially extend fingers
int manualExtend = 0; // BLE motorReading command: extend fingers
int manualContract = 1; // BLE motorReading command: contract fingers
int manualRelax = -1; // BLE motorReading command: relax
void setup() {
myservoflex.attach(A1); // attaches the servo
myservoextend.attach(A0); // attaches the servo
//myservoflex.write(40);
//myservoextend.write(40);
//Serial.begin(9600); // initialize Serial communication
//while (!Serial); // wait for the serial port to open
timeMillis=millis();
timeMillisMotionReset=millis();
digitalWrite(13,LOW);
CurieIMU.begin();
// Set the accelerometer range to 2G
CurieIMU.setAccelerometerRange(2);
// Set the gyroscope range to 250 degrees/second
CurieIMU.setGyroRange(250);
BLE.begin();
BLE.setLocalName("LegoHERO");
BLE.setAdvertisedService(motorService); // add the service UUID
motorService.addCharacteristic(motorReading);
motorService.addCharacteristic(motorExtendChar);
motorService.addCharacteristic(motorContractChar);
BLE.addService(motorService); // Add the BLE Battery service
motorReading.setValue(manualRelax);
motorExtendChar.setValue(extendMotor);
motorContractChar.setValue(retractMotor);
extendMotor = motorExtendChar.value();
retractMotor = motorContractChar.value();
BLE.advertise();
//read buttons
autoButton=digitalRead(A3);
extendButton=digitalRead(A2);
// if auto button is pressed down we are in manual mode, start manual mode with motors relaxed
if (autoButton==1)
{myservoflex.write(retractMotor);
myservoextend.write(retractMotor);}
// if auto button is pressed up we are in auto mode, start auto mode with relaxed position
else if (autoButton==0)
{
myservoflex.write(retractMotor);
myservoextend.write(retractMotor);
delayMode=1;
timeMillisDelayMode=millis();}
}
void loop() {
//on loop reset the motor values
extendMotor = motorExtendChar.value();
retractMotor = motorContractChar.value();
// read accelerometer measurements from device, scaled to the configured range
CurieIMU.readAccelerometerScaled(ax, ay, az);
CurieIMU.readGyroScaled(gx, gy, gz);
//calculate absolute value of gyro
gtot=sqrt(sq(gx-gx_bias)+sq(gy-gy_bias)+sq(gz-gz_bias));
//filter gtot so quick noise is not detected as motion
gtot_filt=gtot*0.2 + gtot_filt*0.8;
// open();
// TestX.write(gx-gx_bias);
// TestY.write(gy-gy_bias);
// TestZ.write(gz-gz_bias);
// Serial.println(gx-gx_bias);
//Serial.print("\t");
//Serial.print(gy-gy_bias);
//Serial.print("\t");
//Serial.println(gz-gz_bias);
//Serial.print("\t");
//Serial.println(motionDetect);
//Serial.println(gtot);
//read time
timeMillis=millis();
//read buttons
autoButton=digitalRead(A3);
extendButton=digitalRead(A2);
//reset gyro biases when user is not moving, so that when the user is at rest gtot is 0
if((gtot<5)&&(timeMillis>timeMillisGyroReset+20000)){
gx_bias=gx;
gy_bias=gy;
gz_bias=gz;
timeMillisGyroReset=millis();
}
//delay mode, don't trigger robot to move
if ((autoButton==0)&&(delayMode==1)&&(timeMillis<(timeMillisDelayMode+2000)))
{motionDetect=0;}
else{
delayMode=0;
//detect if auto button has been pressed; autoButton=0 is auto mode (button up), autoButton=1 is manual mode (button down)
if (autoButton!=autoButtonOld){
// if auto button is pressed down we are in manual mode, start manual mode with motors relaxed
if (autoButton==1)
{
motorReading.setValue(manualRelax);
}
// if auto button is pressed up we are in auto mode, start auto mode with fingers extended
else if (autoButton==0)
{myservoflex.write(retractMotor);
myservoextend.write(extendMotor);
switchPosition=0;
delayMode=1;
timeMillisDelayMode=millis();}
autoButtonOld=autoButton;
extendButtonOld=extendButton;
motionDetect=0;
//delay(7000);
}
//detect if robot is in auto mode
if (autoButton==0){
//detect if angular motion is greater than 30 units. If so, keep resetting the reset counter while the user is moving
if (gtot>30){
motionDetect=1;
timeMillisMotionReset=millis();
}
//after moving, if the user is stationary for long enough, trigger motor to move
if ((motionDetect==1)&&(timeMillis>(timeMillisMotionReset+800))){
if (switchPosition==1){
myservoflex.write(retractMotor);
myservoextend.write(extendMotor);
switchPosition=0;
}
else{
myservoflex.write(extendMotor);
myservoextend.write(retractMotor);
switchPosition=1;
}
delayMode=1;
timeMillisDelayMode=millis();
}
}
//detect if extend button is pressed while robot is in manual mode
if ((autoButton==1)&&(extendButton!=extendButtonOld)){
extendButtonOld=extendButton;
extendButtonChange=1;
}
//if extend button is pressed while robot is in manual mode then trigger finger extension if extend button is pressed up or flexion if extend button pressed down
else if((autoButton==1)&&(extendButtonChange==1)&&(extendButton==1)){
/*
myservoflex.write(extendMotor);
myservoextend.write(retractMotor);
*/
motorReading.setValue(manualExtend);
extendButtonChange=0;
}
else if((autoButton==1)&&(extendButtonChange==1)&&(extendButton==0)){
/*
myservoflex.write(retractMotor);
myservoextend.write(extendMotor);
*/
motorReading.setValue(manualContract);
extendButtonChange=0;
}
if (autoButton==1){
//update motors based off of the new characteristic values presented
if ( motorReading.value() == manualContract){
myservoflex.write(retractMotor);
myservoextend.write(extendMotor);
}
if ( motorReading.value() == manualExtend){
myservoflex.write(extendMotor);
myservoextend.write(retractMotor);
}
if ( motorReading.value() == manualRelax){
myservoflex.write(extendMotor);
myservoextend.write(extendMotor );
}
}
}//else for delay mode
}//end of void loop
/*
Copyright (c) 2016 Intel Corporation. All rights reserved.
This library is free software; you can redistribute it and/or
modify it under the terms of the GNU Lesser General Public
License as published by the Free Software Foundation; either
version 2.1 of the License, or (at your option) any later version.
This library is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public
License along with this library; if not, write to the Free Software
Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
|
<?php
namespace app\admin\controller;
use app\common\controller\Adminbase;
use think\facade\Request;
use think\facade\Cache;
use think\Db;
/*
** Menu controller
*/
class Menu extends Adminbase {
private static $_menu = null; // table model instance
// Runs first, before any action
public function initialize() {
parent::initialize();
// Instantiate the table model
self::$_menu = model('Menu');
}
// Menu list
public function index() {
if (Cache::get('menuInfo')) {
$lists = Cache::get('menuInfo');
} else {
$lists = self::$_menu->menuLists();
Cache::set('menuInfo', $lists);
}
$this->assign('menulists', $lists);
return view('index');
}
// Page for adding a menu
public function add() {
$lists = self::$_menu->menuLists();
$this->assign('menulists', $lists);
return view('add');
}
// Page for editing a menu
public function edit() {
$id = input('param.id/d');
if(!$id){
    $this->error('Invalid parameter');
}
$lists = self::$_menu->getMenuDetail($id);
if(!$lists){
    $this->error('Invalid parameter');
}
$list = self::$_menu->menuLists();
$this->assign('menulists', $list);
$this->assign('lists', $lists);
return view('edit');
}
// Create or update a menu
public function save () {
if (Request::isPost()) {
$inputs = Request::post();
if (empty($inputs['id'])) {
// Validate with the model validator
$result = $this->validate($inputs, 'Menu.add');
if(true !== $result){
// Validation failed: output the error message
$this->error($result);
}
// Check the new record for duplicates
$check = $this->checkActionAdd($inputs);
if(true !== $check){
    return $check;
}
// Save the data
if(self::$_menu->data($inputs)->save()){
if (Cache::get('menuInfo')) {
Cache::set('menuInfo', null);
}
return json(['status'=>1, 'msg'=>'Operation successful']);
}
} else {
// Validate with the model validator
$result = $this->validate($inputs, 'Menu.edit');
if(true !== $result){
    // Validation failed: return the error message
    return json(['status'=>0, 'msg'=>$result]);
}
$find = self::$_menu->where(array('id'=>$inputs['id']))->value('id');
if(!$find){
    return json(['status'=>0, 'msg'=>'Invalid parameter']);
}
}
// Check the updated record for duplicates
$check = $this->checkActionUpdate($inputs);
if(true !== $check){
    return $check;
}
// Use the submitted data
// Work out path and topid
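// e.g. id=17 under pid=3 (parent path '0-3') gives path '0-3-17' and topid=3 (illustrative values)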
$id = $inputs['id'];
$pid = $inputs['pid'];
if($pid > 0){
// Fetch the parent menu's path
$prentPath = self::$_menu->where(array('id' => $pid))->value('path');
$pPath = explode('-', $prentPath);
$inputs['path'] = $prentPath . '-' . $id;
// Second segment of e.g. '0-1', i.e. the top-level menu of this branch
$inputs['topid'] = $pPath[1];
}else{
$inputs['path'] = '0-' . $id;
$inputs['topid'] = 0;
}
// Save through the model so that model events are triggered
if(self::$_menu->allowField(true)->save($inputs, array('id'=>$id))){
if (Cache::get('menuInfo')) {
Cache::set('menuInfo', null);
}
return json(['status'=>1, 'msg'=>'Operation successful']);
}
}
}
}
// Delete a menu
public function deletes() {
if (Request::isPost()) {
$id = Request::post('id');
$result = self::$_menu->deletes($id);
if($result['status'] == 1){
    if (Cache::get('menuInfo')) {
        Cache::set('menuInfo', null);
    }
    return json(['status'=>1, 'msg'=>'Deleted successfully']);
}
return json(['status'=>0, 'msg'=>$result['msg']]);
}
}
// For new records: check whether the same module, control and actions combination already exists
private function checkActionAdd($data) {
$checkData = array(
'module' => $data['module'],
'control' => $data['control'],
'actions' => $data['actions']
);
$find = self::$_menu->where($checkData)->find();
if($find){
    return json(['status'=>0, 'msg'=>'An identical record already exists']);
}
}
return true;
}
// For updates: check whether the same module, control and actions combination already exists on another record
private function checkActionUpdate($data) {
$checkData = array(
'module' => $data['module'],
'control' => $data['control'],
'actions' => $data['actions']
);
$find = self::$_menu->where($checkData)->value('id');
if($find && $find != $data['id']){
    return json(['status'=>0, 'msg'=>'An identical record already exists']);
}
}
return true;
}
}
|
User talk:<IP_ADDRESS>
Remember read the help pages and our policies before editing.
Please leave a message here if you have any problem! -- Wikia (Talk) 18:13, November 16, 2009
|
Introduction of combination antiretroviral therapy (cART) has dramatically increased the lifespan of HIV-infected patients and decreased the prevalence of AIDS-related deaths. The lifespan is still lower than that expected for non-HIV-infected individuals because of both AIDS-related and non-AIDS-related mortality. Several studies have reported a higher prevalence of comorbidities such as cardiovascular disease (CVD), type 2 diabetes, and possibly cancer than in the general population. What underlies the increased prevalence of non-AIDS-related comorbidities is not fully understood, but coinfections such as Hepatitis C, cART, late HIV-diagnosis, lifestyle, and inflammation have all been implicated.
Inflammation affects metabolism, and increased inflammation has been demonstrated to be a risk factor for CVD, type 2 diabetes, cancer and overall mortality in the general population. HIV-infection causes chronic immune activation, and HIV-infected patients are characterised by higher inflammatory levels than non-HIV-infected individuals. It has been proposed that accelerated ageing characterises long-term HIV-infection. Urokinase plasminogen activator receptor (uPAR) is a membrane-bound receptor involved in numerous processes such as fibrinolysis, cell migration and cell signalling. uPAR is expressed on activated T cells and macrophages, among other cells, and its expression is upregulated by HIV. Increased levels of the soluble form of uPAR (suPAR) have been found in various infectious, inflammatory, autoimmune and malignant diseases, and suPAR levels generally associate with disease severity. suPAR levels correlate positively with tumor necrosis factor alpha (TNF-α), leukocyte numbers, and C-reactive protein (CRP), and suPAR seems to be a stable marker of inflammation, with low diurnal variation and stable *in vitro* properties. We have previously shown that suPAR was associated with dysmetabolism in HIV-infected patients and with mortality in the pre-cART era. Moreover, increased suPAR levels were associated with type 2 diabetes, CVD, cancer, and mortality in a prospective general population-based study. In this study, we investigated whether HIV-related factors, demography, lifestyle and body composition were associated with inflammation measured by suPAR, thereby exploring mechanisms underlying the decreased life expectancy of HIV-infected patients. We found suPAR to associate with established risk factors for cardiovascular disease and non-AIDS-related mortality, and to reflect aspects of HIV-disease other than immunologic and virologic markers.
Ethics Statement {#s2a}
All patients included gave written informed consent to have an extra blood sample collected during routine HIV-management to be used for future HIV research. The Danish Data Protection Agency approved the storage and collection of samples (protocol: 2007-41-1634); according to Danish law, only the Danish Data Protection Agency needs to approve this. The local Ethics Committee for the Capital Region of Denmark and the Danish Data Protection Agency approved the use of stored blood samples, data from patients' medical records, Dual energy X-ray absorptiometry (DXA) scans, routine blood tests, and the patient-administered questionnaire for this study (protocols: H-4-2012-008 and 2007-58-0015, respectively). The study was performed according to the Declaration of Helsinki.
In 2007, approximately 3780 adults (0.07% of the overall population) were living with HIV in Denmark. Medical care and cART are tax-paid, free of charge, and provided at a few specialised centres. Approximately 37% of HIV-infected patients in Denmark attended the Department of Infectious Diseases, Copenhagen University Hospital, Hvidovre, for disease management in 2007. The criteria for initiating cART in 2007 were: acute HIV-infection, pregnancy, CD4+ cell counts\<300 cells/µL, HIV-related disease, and until 2001 also plasma HIV RNA\>100 000 copies/mL. A routine annual metabolic monitoring programme was introduced at the Department of Infectious Diseases, Copenhagen University Hospital, Hvidovre, in 2004, where patients were enrolled on a first come, first served basis. The metabolic monitoring programme comprises blood measurements, DXA scans and a standardised patient questionnaire.
Patients were recruited from the out-patients clinic at Department of Infectious Diseases, Copenhagen University Hospital, Hvidovre, if they were ≥18 years, and were seen at the out-patient clinic during 2007.
Data Collection {#s2d}
Data were obtained from patients' medical records. Blood tests were performed as part of the routine HIV-management and annual metabolic monitoring. The lower level of detection for HIV RNA was 39 copies/mL. DXA scans and standardised patient questionnaires were performed as part of the metabolic monitoring programme. Blood tests for metabolic monitoring were obtained fasting and include glucose, lactate, lipid and cholesterol levels. The standard patient questionnaire includes information on predisposition to cardiovascular diseases and diabetes; anthropometry; tobacco use; blood pressure; and patient-reported lipodystrophy, as described previously. About 30% of the patients attended the metabolic monitoring programme, but not necessarily all parts of it; 235 patients were DXA scanned and answered the questionnaire. DXA scans and standardised patient questionnaires were not always performed, and not necessarily on the same day as the blood test. We included data from DXA scans and the standardised patient questionnaire if performed within 30 days of blood sampling, see [Figure 1](#pone-0051698-g001){ref-type="fig"}.
::: {#pone-0051698-g001 .fig}
###### Flow chart of study cohort.
235 patients had both been DXA-scanned and answered the questionnaire. **Abbreviations:** DXA: Dual Energy X-ray Absorptiometry; IDU: Intravenous drug use; suPAR: soluble urokinase plasminogen activator receptor.
Nadir CD4+ cell count was defined as the lowest CD4+ cell count ever measured in the patient.
Metabolic Syndrome {#s2e}
Metabolic syndrome was determined according to the updated National Cholesterol Education Programme (NCEP) Adult Treatment Panel (ATP) III, 2004, if sufficient information was available. Three or more of the following five components had to be present to diagnose metabolic syndrome: waist circumference \>102 cm for men, \>88 cm for women; fasting triglycerides ≥150 mg/dl; fasting HDL cholesterol \<40 mg/dl in men, \<50 mg/dl in women; blood pressure ≥130/≥85 mmHg; fasting plasma glucose ≥5.6 mM.
Body Composition evaluated by DXA scans {#s2f}
Patients were DXA-scanned with a Norland XR-36 (Gammatec A/S, Værløse, Denmark). Lipoatrophy was assessed by evaluating the peripheral fat per cent, defined as the fat mass of arms and legs divided by the total mass of arms and legs. Lipodystrophy was assessed by evaluating the ratio of trunk fat per cent (trunk fat mass/total trunk mass ×100) to leg fat per cent (fat mass of legs/total mass of legs ×100).
suPAR Measurements {#s2g}
EDTA-blood samples were taken at routine visits at the outpatient clinic; plasma was separated and stored at −20°C. suPAR was measured in duplicate using the suPARnostic™ ELISA (ViroGates®, Birkerød, Denmark). The suPARnostic^TM^ ELISA has been validated to measure concentrations of 0.6--22 ng/mL. The inter-assay variance of a control sample was 20%, and the intra-assay variance of duplicate measurements was 3.7%. Samples were measured again if the coefficient of variation was \>10%.
We investigated the association of suPAR with demography, HIV-related factors, lifestyle, and body composition using univariate and multiple linear regression. Multiple regression analyses were adjusted for biologically relevant covariates. All multiple analyses were adjusted for age, sex and descent (European vs. non-European). Smoking was assessed as daily smoking vs. non-daily smoking. Patients without HIV-transmission information were not included in the analyses. Patients infected through intravenous drug use (IDU) were analysed separately, since this group of patients differs with respect to lifestyle, comorbidities, treatment compliance and treatment initiation. We tested goodness of fit of the models for normal distribution of residuals and homogeneity of variances. suPAR was transformed using log~2~(x) to obtain normally distributed residuals; results are back-transformed using 2^x^ and are therefore shown as % estimates. Viral load (VL) was transformed using log~10~(x) due to its wide range.
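That is, a regression coefficient β estimated on the log~2~ scale corresponds to a percent difference of (2^β^−1)×100%; for example, β = 0.1 corresponds to an approximately 7% higher suPAR level.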
The statistics programme "Statistical Analysis Systems" (SAS, version 9.2; SAS Institute, Cary, NC, USA) was applied for the analyses. We considered P values less than 0.05 statistically significant.
Cohort Description {#s3a}
We included 1142 HIV-infected patients in this study, see [Figure 1](#pone-0051698-g001){ref-type="fig"}. The associations for patients reporting to be infected through intravenous drug use (N = 143) were analysed separately and are reported in the paragraph "Patients Infected through IDU".
Baseline characteristics of patients are seen in [Table 1](#pone-0051698-t001){ref-type="table"}. Men comprised 75% of the cohort, and 76% were of European descent. The median duration of HIV-infection was 9 years. Eighty-five per cent received antiretroviral treatment, and 99.9% of these were treated with a combination of three or more antiviral drugs; the median treatment duration was 7 years. Thirty-five per cent of patients smoked daily.
::: {#pone-0051698-t001 .table-wrap}
###### Baseline characteristics for HIV-infected patients not infected through intravenous drug use (IDU).
Characteristic Median or % Range (5%; 95% percentiles) N total
---------------------------------- -------- ----------------------------- ---------
Age (years) 44.3 29.5; 64.2 992
Sex (men) 74.9% 992
European descent 76.0% 992
HIV duration (years) 9.2 0.6; 21.8 992
Nadir CD4 (cells/µL) 183.0 9.0; 476.0 959
Nadir CD4\<200 cells/µL 55.6% 959
Current cART 84.7% 990
Never cART 13.0% 990
Total treatment duration (years) 6.7 0.5; 10.8 861
CD4\<350 cells/µL 21.2% 949
HIV RNA (copies/mL) 39.0 39.0; 5110.0 942
HIV RNA≤40 copies mL 72.8% 942
Current daily smoking 35.1% 428
Waist circumference (cm) 91.5 72.0; 110.5 415
BMI (kg/m^2^) 23.9 19.1; 31.7 465
Metabolic syndrome 30.6% 366
**Abbreviations:** cART: Combination antiretroviral treatment; BMI: Body mass index.
suPAR and Demography {#s3b}
Results from univariate and multiple regression analyses are seen in [Table 2](#pone-0051698-t002){ref-type="table"}. suPAR levels were 7% lower in men than women (p = 0.02), when adjusted for age and European descent. Higher age was significantly associated with higher suPAR levels in both uni- (p\<0.001) and multiple regression analyses (p\<0.001). Patients of European descent had 10% higher suPAR levels than patients of other descent (p\<0.001), also when adjusted for sex and age (p = 0.004). More patients of European descent than patients of other descent (38.3% vs. 19.2%), and more men than women (38.9% vs. 19.8%) smoked daily. When further adjusting the analyses for daily smoking in the 428 patients with information on smoking, the estimate for European descent changed from 9% (p = 0.11) to 5% (p = 0.30), and the estimate for men vs. women changed from −7% (p = 0.12) to −10% (p = 0.02).
::: {#pone-0051698-t002 .table-wrap}
###### HIV- and non HIV-related factors influencing suPAR levels.
Factor Univariate: % (95% CI) P N Multiple: % (95% CI) P N
------------------------------------------------------------------------------- -------------------- ---------- ----- --------------------- --------- -----
Age ≥60 vs. \<40 years 20.0 (10.6; 30.2) \<0.001 992 19.1 (9.4; 30.0) \<0.001 992
Age ≥60 vs. 40--50 years 9.3 (1.0; 18.3) 8.4 (0.1; 17.4)
Age ≥60 vs. 50--60 years 5.2 (−3.6; 14.8) 4.7 (−4.0; 14.3)
Sex (men vs. women) −1.8 (−6.9; 3.7) 0.52 992 −7.1 (−12.8; −1.1) 0.02 992
European descent 10.1 (4.4; 16.3) \<0.001 992 9.8 (3.1; 17.0) 0.004 992
HIV duration (years) 0.5 (0.1; 0.8) 0.01 992 0.1 (−0.3; 0.5) 0.75 992
Nadir CD4+ (cells/µL)[\*](#nt103){ref-type="table-fn"} 0.01 (−0.01; 0.02) 0.44 959 −0.01 (−0.02; 0.01) 0.62 958
No current cART[\*](#nt103){ref-type="table-fn"} 9.0 (2.2; 16.2) 0.009 990 17.3 (8.0;27.4) \<0.001 958
Treatment duration (years)[\*\*](#nt104){ref-type="table-fn"} 0.3 (−0.4; 1.0) 0.43 861 −1.4 (−2.3; −0.4) 0.006 845
CD4\<350 vs. 350≥cells/µL[\*\*\*](#nt105){ref-type="table-fn"} 9.1 (2.8; 15.6) 0.004 949 6.6 (−0.1; 13.8) 0.05 907
VL, cART-treated patients (pr. 10-fold)[\*\*\*](#nt105){ref-type="table-fn"} 18.9 (11.8; 26.5) \<0.001 794 20.5 (13.1; 28.4) \<0.001 779
VL, for untreated patients (pr. 10-fold)[\*\*\*](#nt105){ref-type="table-fn"} 3.7 (−6.0; 14.5) 0.46 146 2.4 (−8.7; 14.9) 0.68 128
Daily vs. no daily smoking 26.4 (18.3; 35.1) \<0.001 428 26.7 (18.6; 35.4) \<0.001 428
Waist circumference (cm)[\#](#nt106){ref-type="table-fn"} 0.2 (−0.04; 0.5) 0.09 415 0.3 (0.02; 0.6) 0.03 408
BMI \<20 vs. 20--25 (kg/m^2^)[\#\#](#nt107){ref-type="table-fn"} 2.9 (−13.2; 6.5) 0.01 483 −0.8 (−8.8; 11.4) 0.07 408
BMI \<20 vs. 25--30 (kg/m^2^)[\#\#](#nt107){ref-type="table-fn"} 11.9 (1.0; 23.9) 7.5 (−5.3; 22.0)
BMI \<20 vs. ≥30 (kg/m^2^)[\#\#](#nt107){ref-type="table-fn"} −5.8 (−17.6; 7.9) −6.6 (−22.4; 12.4)
Metabolic syndrome[\#](#nt106){ref-type="table-fn"} 6.6 (−1.8; 15.8) 0.13 366 8.4 (0.2; 17.2) 0.04 338
All multiple analyses are adjusted for sex, age, and European descent.
Multiple analyses are also adjusted for time since HIV-diagnosis.
Multiple analyses are also adjusted for time since HIV-diagnosis, current treatment and nadir CD4+ cell counts.
Multiple analyses are also adjusted for time since HIV-diagnosis, nadir CD4+ cell counts, current treatment, CD4+\<350 vs. ≥350 cells/µL, and log~10~(VL).
Multiple analyses are also adjusted for daily smoking.
Multiple analyses are also adjusted for daily smoking, waist circumference.
**Abbreviations:** cART: Combination antiretroviral treatment; VL: viral load; BMI: Body mass index; CI: Confidence interval.
Impact of HIV-related Factors on suPAR levels {#s3c}
We explored the association of components of HIV-disease with suPAR levels, to examine HIV-disease-specific factors influencing inflammation. Time since HIV-diagnosis was significantly associated with suPAR levels in univariate analysis (estimate = 0.5%, p = 0.01), but not when adjusted for sex, age and European descent (estimate = 0.1%, p = 0.75). There was no association of nadir CD4+ cell count and suPAR levels. Untreated patients had 17% higher suPAR levels than those receiving cART in multiple regression analyses, see [Table 2](#pone-0051698-t002){ref-type="table"} (p\<0.001).
We assessed the association between treatment duration and suPAR levels in patients on current cART. suPAR levels decreased with total treatment duration (estimate = −1% pr. year, p = 0.006), when adjusted for sex, age, European descent, duration of HIV-infection, and nadir CD4+ cell counts. Patients with CD4+ cell counts\<350 cells/µL had 7% higher suPAR levels (p = 0.05) than patients with higher CD4+ cell counts in multiple analyses.
A 10-fold higher VL was associated with 21% higher suPAR levels (p\<0.001) in cART-treated patients, but not in untreated patients (estimate = 2%, p = 0.68), when adjusted for sex, age, European descent, duration of HIV-infection, nadir CD4+ cell counts, and CD4+ cell counts, see [Figure 2](#pone-0051698-g002){ref-type="fig"} and [Table 2](#pone-0051698-t002){ref-type="table"}.
::: {#pone-0051698-g002 .fig}
###### The association of suPAR and viral load according to treatment status.
The figure represents a scatter plot of the association between suPAR and viral load. Circles represent cART-treated patients (N = 838); boxes represent non cART-treated patients (N = 152). The regression line for cART-treated patients is continuous; the regression line for non-cART treated patients is dashed. The lower level of detection of HIV RNA in this study was 39 copies/mL. **Abbreviations:** cART: Combination antiretroviral treatment; suPAR: soluble urokinase plasminogen activator receptor.
Influence of Lifestyle on suPAR levels {#s3d}
Patients smoking daily had 27% higher suPAR levels (p\<0.001) than patients not smoking daily in multiple analysis adjusted for sex, age, and European descent. Large waist circumference was significantly associated with high suPAR levels (estimate = 3%, p = 0.03 pr. 10 cm) adjusted for sex, age, European descent, and daily smoking. BMI was significantly associated with suPAR levels in univariate analyses (p = 0.01) but not when adjusting for sex, age, European descent, waist circumference and daily smoking. Metabolic syndrome could be assessed in 366 patients, and we found that patients with metabolic syndrome had 8% higher suPAR levels (p = 0.04) when adjusting for sex, age, European descent, and daily smoking.
The Influence of DXA-measured Body Composition on suPAR levels {#s3e}
We analysed the association of suPAR with measures of fat mass and lean mass in the subgroup of 283 patients with DXA scans, see [Table 3](#pone-0051698-t003){ref-type="table"}. High suPAR levels were significantly associated with low total lean mass/height (kg/m^2^) (p = 0.02 in multiple analysis). When assessing the leg lean mass specifically, we found a more pronounced association (estimate = −9%, p\<0.001 pr. kg/m^2^) in multiple analyses. There was no significant association between suPAR levels and total fat per cent, total fat mass or regional fat mass distribution, neither in univariate or multiple analyses adjusted for sex, age, European descent, and daily smoking.
::: {#pone-0051698-t003 .table-wrap}
###### Subgroup analyses of body composition and suPAR levels in patients with DXA scan.
Measure Univariate: % (95% CI) P N Multiple: % (95% CI) P N
------------------------------- -------------------- -------------- ----- -------------------- --------- -----
Lean mass/h^2^ (kg/m^2^) −1.8 (−3.4; −0.1) 0.04 283 −2.3 (−4.2; −0.4) 0.02 283
Lean mass~leg~/h^2^ (kg/m^2^) −9.0 (−13.0; −4.9) \<0.001 283 −9.1 (−13.3; −4.8) \<0.001 283
Fat mass/h^2^ (kg/m^2^) 0.1 (−1.4; 1.6) 0.93 283 1.3 (−0.6; 3.3) 0.17 234
Total fat% 0.1 (−0.4; 0.5) 0.76 283 0.4 (−0.2; 1.1) 0.19 234
Limb fat% 0.2 (−0.7; 1.0) 0.69 283 0.8 (−0.4; 2.1) 0.20 234
Trunk fat %/leg fat % −7 (−22; 10) 0.39 283 −8.8 (−23.8; 10.7) 0.37 234
Multiple analyses of lean mass measures are adjusted for sex, age and European descent. Multiple analyses of fat mass measures are adjusted for sex, age, European descent, and daily smoking.
**Abbreviations:** CI: Confidence interval; h = height.
Patients Infected through IDU {#s3f}
Men comprised 55% of this group and 93% were of European descent. The median age was 43 years (5 percentile: 31 years; 95 percentile: 55 years), and the median duration of HIV-infection was 11 years (5 percentile: 11 months; 95 percentile: 22 years). Seventy-five per cent received cART and the median duration was 5 years (5 percentile: 4 months; 95 percentile: 10 years); 93% of cART-treated patients had VL\<400 copies/mL, and 79% had VL≤40 copies/mL. Eleven per cent of patients had CD4+ cell count\<200 cells/µL; 33% had CD4+ cell count\<350 cells/µL. Nadir CD4+ cell count was\<200 cells/µL in 59% of the patients.
Patients infected through IDU had significantly higher suPAR levels (median suPAR level: 4.7 ng/mL, range 21.1 ng/mL), than patients reporting to be infected through other routes (median suPAR level: 2.6 ng/mL, range 11.1 ng/mL, p\<0.001), also when adjusted for sex, age and European descent (estimate = 62%, p\<0.001).
suPAR levels were significantly associated with higher age in multiple analyses (p = 0.02); but there was no significant association with sex or ethnicity (p = 0.63 and p = 0.26, respectively). There was no significant association of duration of HIV-infection and suPAR levels in multiple analysis adjusted for sex, age and European descent (estimate = −0.1%, p = 0.92). cART status did not affect suPAR levels (estimate = 7%, p = 0.74 for not receiving cART), nor did nadir CD4+ cell counts (estimate = −0.005%, p = 0.92 pr. cell/µL), adjusted for sex, age, European descent, HIV duration, nadir CD4+ cell counts and treatment, respectively. Long treatment duration was negatively associated with suPAR levels in multiple analyses adjusted for sex, age, European descent, HIV duration, and nadir CD4+ cell counts (estimate = −4.2%, p = 0.05, 95%CI: −8.2%; −0.1% pr. year). The association with CD4+ cell count and VL was assessed in multiple analyses adjusted for sex, age, European descent, HIV duration, nadir CD4+ cell counts, treatment, log~10~(VL) and CD4+ cell counts, respectively. Patients with CD4+ cell counts≥350 cells/µL had 27% lower suPAR levels (p = 0.03; 95% CI: −44%, −3%) than patients with CD4+ cell counts\<350 cells/µL, and there was no significant association with VL (estimate = 10% pr. 10-fold increase, p = 0.42).
We identified HIV-related and non-HIV-related factors influencing suPAR levels, adding to the growing knowledge of inflammation and HIV-disease in the cART era. Increased suPAR levels were significantly associated with higher age, female sex, metabolic syndrome, daily smoking, low leg muscle mass, and higher waist circumference, but not significantly associated with BMI. These findings are in accordance with previous findings in the general population. European descent was associated with higher suPAR levels in multiple analyses, however not when performing subgroup analyses adjusted for daily smoking. Thus, a higher daily smoking frequency among patients of European descent could explain the association of European descent and higher suPAR levels.
In addition to the inflammatory effects of demography, lifestyle and body composition, we assessed the association of HIV-related factors with suPAR. suPAR levels were 7% higher in patients with low CD4+ cell counts (\<350 cells/µL), but we found no association of suPAR with duration of HIV-infection, nor with nadir CD4+ cell counts. Untreated patients had 17% higher suPAR levels than cART-treated patients (p\<0.001), and low suPAR levels were weakly associated with longer treatment duration (estimate = −1%, p = 0.006 pr. year). This is in agreement with a previous follow-up study that found decreasing suPAR levels after cART-initiation during a 5-year period. For every 10-fold increase in VL, we found 21% higher suPAR levels in multiple regression analysis in cART-treated patients (p\<0.001); however, there was no significant association in patients not receiving treatment (estimate = 2%, p = 0.68), see [Figure 2](#pone-0051698-g002){ref-type="fig"}. These findings could indicate that the dose-response relationship of suPAR and VL observed in cART-treated patients is not only due to VL-induced inflammation, but might be an effect of factors underlying high VL in cART-treated patients. Poor treatment compliance could explain high VL in cART-treated patients. Psychosocial problems or substance abuse are more frequent in patients with low adherence, and the association we find might be an association of suPAR with comorbidities associated with psychosocial problems or substance abuse, resulting in poor cART-adherence and high VL. Supporting this, we found a 72% higher suPAR level in patients reporting to be infected through IDU. However, VL still seems to affect suPAR levels, since we observe higher suPAR in untreated patients, and there could be a VL-threshold inducing systemic immune activation and inflammation reflected in increased suPAR levels.
suPAR was associated differently with HIV-related factors in patients infected through IDU. In this group of patients, we found 27% higher suPAR levels in patients with CD4+ cell counts\<350 cells/µL, but no association with VL. If these patients are still active IDUs, factors other than VL could induce inflammation, such as frequent bacterial infections, thereby decreasing the importance of VL in inducing inflammation. Moreover, Hepatitis C (HCV) is more prevalent in this group of patients, and suPAR levels have been shown to increase with liver fibrosis in HCV-infected patients. Finally, HIV-infected patients infected through IDU have a higher mortality risk and initiate cART later, resulting in lower CD4+ cell gains.
We did not find any association with fat deposition measures by DXA scans. We have previously found increased suPAR levels in HIV-infected patients with clinician-diagnosed lipodystrophy. This divergence could reflect difficulties in assessing lipodystrophy using single DXA scans. Bonnet et al. proposed reference values to define lipodystrophy by DXA scan. Only four of 283 patients in this study had lipodystrophy when applying this definition, indicating that it is not sensitive enough. Previous studies have found C-reactive protein to be more strongly associated with BMI and waist circumference than suPAR, whereas suPAR was found to be more strongly associated with poor outcome. Here, we did not find any significant association with BMI in multivariate analysis, but suPAR increased significantly with waist circumference (0.3% pr. cm), and patients with metabolic syndrome had 8% higher suPAR. Thus, suPAR levels are affected by central adiposity and metabolic syndrome.
For the first time, the association of suPAR and muscle mass was assessed. We found that suPAR was strongly associated with low leg muscle mass. We are not aware of any studies examining the role of suPAR and physical activity; however, numerous studies have demonstrated a strong link between exercise, muscles and inflammation. We consider the association of suPAR and leg muscle mass to be an indirect association, in that decreased leg muscle mass has been shown to be an independent risk factor for 5-year mortality in HIV-infected patients, and increased suPAR levels also associate with mortality. However, we cannot exclude that the association of suPAR with low muscle mass is a physical activity-mediated effect.
The mechanistic role of suPAR in HIV-infection is still not fully understood, due to the numerous functions and complex interplay between suPAR and uPAR and their ligands: uPA, vitronectin, and integrins. Whether suPAR is merely a marker and/or directly causes disease remains to be established. suPAR has been associated with a variety of diseases and with disease progression, but to our knowledge suPAR has only been identified as a causal factor in one disease, focal segmental glomerulosclerosis. In our opinion, suPAR does not reflect disease-specific pathology but, more likely, processes central to a variety of diseases. The results of this study and previous studies suggest that elevated suPAR levels in cART-treated and untreated patients have different causes. In untreated patients, HIV-induced immune activation is likely to increase suPAR levels. In virally suppressed cART-treated patients, pathophysiologic processes associated with age, smoking, long-term immune alterations caused by HIV, and low leg muscle mass could be the prime suPAR inducers. Overall, suPAR could be a marker of accelerated ageing.
There are some limitations to this study. We did not assess the effect of co-infections and co-morbidity, since we did not have complete information. Co-infections have been shown to increase mortality rates, and co-infections could increase suPAR levels. However, HCV co-infection is more prevalent in patients infected via IDU, and these patients were analysed separately.
If we had had complete information about lipid-lowering, anti-diabetic and blood pressure treatment, more patients could possibly have been diagnosed with metabolic syndrome. Furthermore, we assessed the association of suPAR and relatively high CD4+ cell counts (\<350 cells/µL) in multiple regression analyses, and not \<200 cells/µL, since only 54 patients infected by other routes than IDU had CD4+ cell counts\<200 cells/µL. We expect that the association would be significantly stronger if we had evaluated CD4+ cell counts≥ vs. \<200 cells/µL.
Our study adds to the growing knowledge of inflammation and HIV-disease in the cART era by identifying factors that associate with inflammation. We found that increased suPAR levels were significantly associated with established risk factors for mortality and morbidity. Age, HIV-transmission through IDU, metabolic syndrome, daily smoking, and low leg muscle mass were associated with suPAR levels. However, we did not find any association with nadir CD4+ cell count, and suPAR was only 7% higher in individuals with low CD4+ cell counts (\<350 cells/µL). Moreover, high suPAR levels were associated with high VL only in cART-treated patients, indicating that factors causing or segregating with poor viral suppression in cART-treated HIV-infected patients could be associated with increased suPAR levels. Thus, suPAR seems to reflect different aspects of living with HIV than CD4+ cell counts and VL do, and these other aspects of living with HIV appear to add to the increased inflammation in HIV-infected patients.
We want to thank the staff and patients at Department of Infectious Disease, Copenhagen University Hospital, Hvidovre, Denmark, who participated in this study. We are very thankful to Thomas Benfield, MD, PhD, department of Infectious Diseases, Copenhagen University Hospital, Hvidovre, for fruitful discussions of the manuscript, and to Tomasz Pielak for excellent technical assistance.
[^1]: **Competing Interests:**ViroGates A/S donated the suPARnostic® kits for suPAR measurements, but did not have any influence on the design of the study. Jesper Eugen-Olsen and Ove Andersen are inventors of the patent on suPAR and disease risk. Copenhagen University Hospital, Hvidovre owns the patent, which is licensed to ViroGates A/S. Jesper Eugen-Olsen is founder, board member and shareholder in ViroGates A/S. Anne Langkilde, Janne Petersen, Henrik Hedegaard Klausen and Jens Henrik Henriksen declare no conflicts of interest.
[^2]: Conceived and designed the experiments: AL JP HHK JHH JE-O OA. Analyzed the data: AL JP. Wrote the paper: AL JP HHK JHH JE-O OA. Responsible for suPAR measurements: JE-O.
|
Retrieve yerr value from bar object in matplotlib
How can I retrieve a yerr value from an ax.bar object?
A bar chart is created with a single line; each parameter of ax.bar() is a collection, including the yerr value.
bar_list = ax.bar(x_value_list, y_value_list, color=color_list,
tick_label=columns, yerr=confid_95_list, align='center')
Later on, I want to be able to retrieve both the y value as well as the yerr value of each individual bar in the chart.
I iterate through the bar_list collection and I can retrieve the y value, but I don't know how to retrieve the yerr value.
Getting the y value looks like this:
for bar in bar_list:
y_val = bar.get_height()
How can I get the yerr? Is there something like a bar.get_yerr() method? (It isn't bar.get_yerr())
I would like to be able to:
for bar in bar_list:
y_err = bar.get_yerr()
Note that in the above example confid_95_list is already the list of errors. So there is no need to obtain them from the plot.
To answer the question: in the line for bar in bar_list, bar is a Rectangle and thus has no errorbar associated with it.
However bar_list is a bar container with an attribute errorbar, which contains the return of the errorbar creation. You may then get the individual segments of the line collection. Each line goes from yminus = y - y_error to yplus = y + y_error; the line collection only stores the points yminus, yplus. As an example:
import numpy as np
import matplotlib.pyplot as plt

means = (20, 35)
std = (2, 4)
ind = np.arange(len(means))
p = plt.bar(ind, means, width=0.35, color='#d62728', yerr=std)
lc = [i for i in p.errorbar.get_children() if i is not None][0]
for yerr in lc.get_segments():
print (yerr[:,1]) # print start and end point
print (yerr[1,1]- yerr[:,1].mean()) # print error
will print
[ 18. 22.]
2.0
[ 31. 39.]
4.0
So this works well for symmetric errorbars. For asymmetric errorbars, you would additionally need to take the point itself into account.
means = (20, 35)
std = [(2,4),(5,3)]
ind = np.arange(len(means))
p = plt.bar(ind, means, width=0.35, color='#d62728', yerr=std)
lc = [i for i in p.errorbar.get_children() if i is not None][0]
for point, yerr in zip(p, lc.get_segments()):
print (yerr[:,1]) # print start and end point
print (yerr[:,1]- point.get_height()) # print error
will print
[ 18. 25.]
[-2. 5.]
[ 31. 38.]
[-4. 3.]
In the end this seems unnecessarily complicated, because you only retrieve the values that you initially put in, means and std, and you could simply use those values directly for whatever you want to do.
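If you still want something closer to the bar.get_yerr() from the question, you can wrap the logic above in a small helper. This is only a sketch reusing the same BarContainer/LineCollection approach demonstrated above, and the name get_yerrs is made up:
def get_yerrs(container):
    # the LineCollection holding the error bars is among the errorbar children
    lc = [c for c in container.errorbar.get_children() if c is not None][0]
    errs = []
    for rect, seg in zip(container, lc.get_segments()):
        y = rect.get_height()
        lo, hi = seg[:, 1]  # start and end y of the error line
        errs.append((y - lo, hi - y))
    return errs
p = plt.bar(np.arange(2), (20, 35), yerr=(2, 4))
print(get_yerrs(p))  # [(2.0, 2.0), (4.0, 4.0)]
This returns (lower, upper) extents per bar, so it covers the asymmetric case as well.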
|
Talk:Mystic Codes/@comment-33511346-20181130011026/@comment-37389298-20181130224759
For challenge quests and hard boss solos, I'm pretty sure most people would say Atlas Academy Uniform.
|
How to upload large number of files to Amazon S3 efficiently using boto3?
I have 10000s of 10 MB files in my local directory and I'm trying to upload them to a bucket in Amazon S3 using boto3 with a sequential upload approach. The only problem I'm facing here is that it takes a lot of time to upload a large number of files to S3. I want to know whether there are efficient ways (using multithreading or multiprocessing) to upload files to Amazon S3?
import os
import glob

import boto3
import pandas as pd

s3 = boto3.resource('s3')

root_path = "/home/shivraj/folder/"
path = root_path+'folder_raw/' # use your path
dest_path = root_path+'folder_parsed/'
backup_path = root_path+'folder_backup/'
def parse_ivn_files():
src_files_list = glob.glob(path + "*.txt.zip") # .log files in the path files
try:
if src_files_list:
for file_ in src_files_list:
df = pd.read_csv(file_,compression="zip",sep="|", header=None)
file = file_.replace(path,'')
file_name = file.replace(".txt.zip",'')
df.columns=["Date","Time","System_Event","Event_Type","Event_sub_type","Latitude","Longitude","Field_1","Field_2","Field_3","Field_4","Event_Number","Event_Description"]
new_df=df['Event_Description'].str.split(',',expand=True)
large_df = pd.concat([df,new_df],axis=1)
large_df.to_csv(dest_path+file_name+".csv",index=False)
s3.meta.client.upload_file(dest_path+file_name+".csv", 's3-bucket-name-here', 'ivn_parsed/'+file_name+".csv")
s3.meta.client.upload_file(path+file_name+".txt.zip", 's3-bucket-name-here', 'ivn_raw_backup/'+file_name+"_bk.txt.zip")
os.rename(path+file_name+".txt.zip", backup_path+file_name+"_bk.txt.zip")
else:
print("No files in the source folder")
except:
raise FileNotFoundError
Amazon Snowball is a left field option that you should perhaps consider.
You probably need to run your code with multiple threads in parallel. My guess is that at some point you will either bottleneck on CPU utilization (in gzip decompress), disk I/O (reading bytes off of local storage) or network I/O (pushing bytes to S3). Which one you get stuck at determines what you need to move forward. But to get stuck at any of them you'll need to start operating on multiple files in parallel.
Thanks a lot for your inputs. I will look into them.
Before going down the path of multi-threading, you need to analyze your current throughput and available bandwidth. If you have a gigabit connection to the Internet, then yes, you could probably improve your performance by separating the processes of read, compress, and write. But if you're sharing a 25 megabit connection (or slower) with a bunch of other people, you're probably running into bandwidth limitations.
@kdgregory the implications of bandwidth-delay product for TCP suggest that there is almost always some benefit from parallelizing.
@Michael-sqlbot - sure, "some" benefit. The question is whether the benefit will exceed the cost of implementation, which can only be reasonably determined if you know what the available bandwidth is and how fast the current process runs. Two things that the OP still has not seen fit to share.
@kdgregory, touché. Fair points.
I’d go for s4cmd - it’s a nice tool that can upload your files in parallel and has solved some other problems too:
https://github.com/bloomreach/s4cmd
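If you want to stay within boto3, a minimal multithreaded sketch could look like the following; the bucket name, key prefix, directory and helper names are placeholders, and the worker count is something to tune against your CPU, disk and bandwidth:
import os
from concurrent.futures import ThreadPoolExecutor
import boto3
s3_client = boto3.client('s3')  # a single client can be shared across threads
def upload_one(local_path, bucket, key):
    # upload_file performs a managed (multipart, if needed) transfer
    s3_client.upload_file(local_path, bucket, key)
    return key
def upload_dir(directory, bucket, prefix, workers=16):
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [
            pool.submit(upload_one, os.path.join(directory, name), bucket, prefix + name)
            for name in os.listdir(directory)
            if os.path.isfile(os.path.join(directory, name))
        ]
        for f in futures:
            print('uploaded', f.result())  # result() re-raises any upload error
upload_dir('/home/shivraj/folder/folder_parsed/', 's3-bucket-name-here', 'ivn_parsed/')
For parallelism within a single large file, boto3 also exposes TransferConfig(max_concurrency=...) in boto3.s3.transfer, but with many medium-sized files a thread pool across files is usually what helps most.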
|
PHYLLIS C. KINZER, a Taxpayer of the City of Chicago, on Behalf of and for the Benefit of the City of Chicago, Plaintiff-Appellee and Cross-Appellant, v. FIDELITY AND DEPOSIT COMPANY OF MARYLAND, Defendant-Appellant and Cross-Appellee (The City of Chicago et al., Defendants).
First District (2nd Division)
No. 1—94—0245
Opinion filed May 16, 1995.
— Modified on denial of rehearing June 30, 1995.
McNeela & Griffin, Ltd., of Chicago (Cornelius F. Riordan, Edward P. McNeela, and Mary E. Gardner, of counsel), for appellant.
Robert S. Atkins & Associates, of Chicago (Robert S. Atkins, of counsel), for appellee.
PRESIDING JUSTICE SCARIANO
delivered the opinion of the court:
Fidelity and Deposit Company of Maryland (Fidelity) appeals from the grant of partial summary judgment in favor of Phyllis C. Kinzer (Kinzer) holding Fidelity liable under a public employees’ bond (the bond), which Fidelity issued to the City of Chicago (the City) in 1977, and from a grant of summary judgment in favor of Kinzer in the amount of $1,075,000, plus prejudgment interest.
Kinzer cross-appeals from the trial court’s order which limited Fidelity’s liability under the bond to $1,075,000 and adjudicated how the City’s "loss” was to be calculated. She also appeals from the denial of both her motion for attorney fees and costs and her motion for sanctions.
Under the bond, the City is covered for the failure of an employee to perform faithfully the duties of his office or to account properly for all monies and property received by virtue of his position. Insuring agreement 4 of the bond provides a primary layer of coverage of up to $25,000 per employee, and insuring agreement 3 provides a secondary layer of coverage over all employees of up to $975,000.
This is the third time we have reviewed this litigation, which arises out of the conduct of several former city officials who, absent prior appropriation by the city council, entered into contracts and incurred expenses relating to various special events which the City sponsored between 1978 and 1983. See Kinzer v. City of Chicago (1988), 169 Ill. App. 3d 447, 523 N.E.2d 919, modified in part & rev’d in part (1989), 128 Ill. 2d 437, 539 N.E.2d 1216 (hereinafter Kinzer I); Kinzer v. Fidelity & Deposit Co. (1991), 213 Ill. App. 3d 606, 572 N.E.2d 1151 (hereinafter Kinzer II).
A complete recitation of the underlying facts regarding the misconduct of the city officials is contained in the above-cited decisions, and while those facts need not be detailed here, a brief review of the holdings in those prior decisions would be helpful. In Kinzer I, this court held that the City and comptroller Grim violated section 8 — 1—7 of the Illinois Municipal Code (Ill. Rev. Stat. 1985, ch. 24, par. 8 — 1—7 (now 65 ILCS 5/8 — 1—7 (West 1992))) by expending money for city-sponsored events through an account designated "Fund 666” without prior appropriation, and that Grim was strictly liable for any resulting loss he caused the City. (Kinzer I, 169 Ill. App. 3d at 455-58.) The supreme court affirmed our judgment that the expenditures violated section 8 — 1—7, but found that the common law public official immunity doctrine exempted Grim from liability as to any loss incurred by the City. (Kinzer I, 128 Ill. 2d at 445-46.) On our last review of this litigation, we held that Fidelity’s liability was not predicated on its being a "surety” for Grim, but that its liability was broader in scope. (Kinzer II, 213 Ill. App. 3d at 610-11.) The instant appeal treats with the following trial court rulings issued since the previous appeals were decided.
In an order dated November 13, 1991, the trial court granted Kinzer’s motion for partial summary judgment on the issue of Fidelity’s liability under the bond; Fidelity appeals from this order. On March 18, 1992, it orally ruled that Fidelity’s total limit of liability under the bond was $1,075,000, and that the City’s loss would be measured in terms of the total expenditures from Fund 666 minus the "revenue actually recovered” by the City in holding the disputed events, i.e., the net loss. Kinzer maintains that these rulings were in error. On February 10, 1993, the court further ruled that the automatic "cancellation upon discovery” provision (the cancellation provision or section 6(a)) of the bond was valid and that the City would not be held to have discovered the misconduct of its officials until August 2, 1982, the date of Kinzer’s original complaint. Both Kinzer and Fidelity contend that this ruling was in error.
Thereafter, Kinzer filed her motion for summary judgment contending, by using figures calculated by Fidelity’s experts, that net expenditures from Fund 666 exceeded revenues by $1,398,386, and thus she was entitled to $1,075,000, the limit stated in the bond, plus prejudgment interest. Fidelity responded to Kinzer’s motion and filed a cross-motion for summary judgment, maintaining, inter alia, that genuine issues of material fact existed with respect to the amount of "loss” the City sustained, and it contended specifically that the City had, in fact, received a net gain from Fund 666 expenditures of $801,020 and had discovered the misconduct of its officials no later than December 8, 1980, the date on which the 1979 annual report of the comptroller of the City was presented to the city council, thereby triggering the cancellation provision.
In a memorandum opinion and judgment order dated October 13, 1993, the trial court granted Kinzer’s summary judgment motion, and it denied Fidelity’s. Subsequently, Kinzer filed a motion under section 155 of the Insurance Code for attorney fees and costs (215 ILCS 5/155 (West 1992)), as well as under Supreme Court Rule 137 for sanctions (134 Ill. 2d R. 137), both of which were denied, along with Fidelity’s motion to reconsider the court’s summary judgment order, on December 13, 1993. Fidelity filed its notice of appeal on January 12, 1994, and Kinzer filed her notice of cross-appeal on January 14, 1994.
A review of summary judgment orders is de novo, and a reviewing court should make an independent determination as to whether genuine issues of material fact exist with respect to any issue. (Lavat v. Fruin Colnon Corp. (1992), 232 Ill. App. 3d 1013, 1023, 597 N.E.2d 888.) The nonmovant is not required to prove his claim or defense at the summary judgment stage but must present some evidence to raise factual disputes regarding one or more elements of that claim or defense. Further, the evidence presented must be construed strictly against the movant and liberally in favor of the nonmovant. Quality Lighting, Inc. v. Benjamin (1992), 227 Ill. App. 3d 880, 884, 592 N.E.2d 377; Gardner v. Navistar International Transportation Corp. (1991), 213 Ill. App. 3d 242, 250, 571 N.E.2d 1107.
Despite this court’s rulings in Kinzer I and II, Fidelity still claims that the trial court erred in ruling that Fidelity was liable under the bond because: (1) the City exercised its home rule power when it expended the funds at issue, thereby superseding any statutory appropriation requirements, (2) the city council ratified the acts of Grim and the other implicated City officials, and (3) the City’s liability is not predicated upon that of Grim or, in the alternative, (4) is co-extensive with that of Grim and therefore may avail itself of the public official immunity doctrine.
We hold each of these contentions to be irredeemably without merit and not in need of any gloss further than that which we have already given them in Kinzer I and II. People v. Patterson (1992), 154 Ill. 2d 414, 468, 610 N.E.2d 16 (courts generally refuse to reopen what has been decided, and "a rule established as controlling in a particular case will continue to be the law, as long as the facts are the same”); Stallman v. Youngquist (1987), 152 Ill. App. 3d 683, 689, 504 N.E.2d 920, rev’d on other grounds (1988), 125 Ill. 2d 267, 531 N.E.2d 355 (when the evidence on a subsequent appeal is the same or substantially the same as that of a prior appeal, i.e., there is an identity of particular issues, facts and evidence, the adjudications of the prior appeal become the law of the case, and are binding upon that court on a following appeal, regardless of whether the prior decision was right or wrong).
Continuing with its challenge to its liability under the bond, Fidelity asserts that the City officials are not covered by the bond because in failing to execute, as a condition of their office, an "individual bond” they were not "employees” of the City. Ill. Rev. Stat. 1985, ch. 24, par. 3 — 14—3 (now 65 ILCS 5/3 — 14—3 (West 1992)).
As Kinzer aptly demonstrates in her brief, this contention is disingenuous for a number of reasons. Fidelity is, of course, bound by its admissions, and this claimed defense to liability, as the trial judge stated, "contradicts [its] longstanding answers to [Kinzer’s] interrogatories.” It admitted that the bond was the only such bond that covered the comptroller defendants during the relevant time period, and that all three (Coyne, Grim, and Fratto) were principals on the bond during their respective terms in office. Fidelity also failed to raise this defense until more than nine years after Kinzer filed her original complaint; thus, it was clearly within the trial court’s discretion to deny it as untimely. Turner v. Cosmopolitan National Bank (1989), 180 Ill. App. 3d 1022, 1029, 536 N.E.2d 806.
Moreover, the defense lacks merit substantively because the term "individual” does not appear in section 3 — 14—3, nor, contrary to Fidelity’s claim, does the statutory scheme suggest that the use of the term "execute” in that section requires a city comptroller to furnish an "individual bond.” See Ill. Rev. Stat. 1985, ch. 24, pars. 3 — 11—21 through 3 — 11—24 (now 65 ILCS 5/3 — 11—21 through 3 — 11—24 (West 1992)).
Fidelity further contends that the November 12, 1993, partial summary judgment order was not based on Geary’s alleged conduct, but only on that of the three comptrollers, and that therefore the trial court’s order limiting Fidelity’s liability to $1,075,000 should be reduced by $25,000.
However, in failing to argue this issue when it opposed Kinzer’s motion for partial summary judgment, and because its own counsel prepared the March 18, 1992, order which limited that liability to $1,075,000, Fidelity has waived review of this issue on appeal. Boumenot v. North Community Bank (1992), 226 Ill. App. 3d 137, 145, 590 N.E.2d 126; Mendelson v. Lillard (1980), 83 Ill. App. 3d 1088, 1096, 404 N.E.2d 964.
In any event, Geary’s involvement in the misuse of Fund 666 monies is well documented in the record; specifically, his activity was included in Kinzer’s memorandum in support of her August 2, 1989, motion for partial summary judgment, which she renewed on November 2, 1989; and Geary’s deposition transcripts reveal that during his term as Mayor Byrne’s chief of staff between August 1980 and April 1983, he signed and approved numerous purchase requisitions and vouchers for the release of the funds through the medium of Fund 666.
Likewise, in this vein, we hold that Fidelity’s contention that Kinzer failed to offer evidence adequately attributing the loss at issue to each of the City’s employees, thereby not affording the trial court with a basis upon which to calculate the damages properly, is also waived because Fidelity failed to raise it in opposing Kinzer’s motion for summary judgment. Furthermore, the record, in addition to revealing Geary’s involvement, includes a May 1993 affidavit of Kinzer’s expert, indicating that of the $31,957,341 in illegal Fund 666 expenditures, $3,362,701 was paid during Coyne’s term in office (August 1979 through June 1980); $13,465,132 was paid in Grim’s term of office (June 1980 through September 1981); and $15,129,508 was paid during Fratto’s term of office (September 1981 through April 1983).
For the above-stated reasons, we affirm the trial court’s partial summary judgment as to Fidelity’s liability under the bond.
Next, we affirm the trial court’s ruling that the term "loss” as used in the bond should not be interpreted in the singular, i.e., each illegal expenditure did not constitute a separate recoverable loss; rather, the term should be interpreted in the plural, i.e., the illegal expenditures in toto constituted one recoverable loss. We further affirm that the City’s loss must be calculated by setting off the profits generated by the illegal expenditures from the total amount of such expenditures.
Under insuring agreement 3, and subject to other provisions of the bond, Fidelity agreed to indemnify the employees for the use and benefit of the City for:
"Loss caused to the Insured through the failure of any of the Employees, acting alone or in collusion[ ] with others, to perform faithfully his duties or to account properly for all monies and property received by virtue of his position or employment during the bond Period to an amount not exceeding in the aggregate the amount [of $975,000].”
Section 4 of the bond, which is commonly referred to as a "non-reduction in liability” clause, and which contains a limiting proviso, reads in pertinent part:
"Indemnification by [Fidelity] for any loss under Insuring Agreement *** 3 shall not reduce [its] liability for other losses under the applicable Insuring Agreement, Whenever sustained; provided, however, that [its] total liability under [that] Insuring Agreement for any loss caused by any Employee or in which such Employee is concerned or implicated is limited to the [amount of $975,000].”
Section 4 also provides for a similar limitation with respect to insuring agreement 4, which provides a layer of coverage of up to $25,000 per employee.
Kinzer unconvincingly maintains that the City sustained a recoverable "loss” each time an illegal expenditure was made and that section 4 limits Fidelity’s liability as to each loss, not as to the aggregate amount of each loss. Citing authority we do not find persuasive, she asserts that a loss is sustained at the time of the original wrongdoing (American Trust & Savings Bank v. United States Fidelity & Guaranty Co. (Iowa 1988), 418 N.W.2d 853), when funds are first diverted (Fitchburg Savings Bank v. Massachusetts Bonding & Insurance Co. (1931), 274 Mass. 135, 174 N.E. 324), or when the misconduct first occurred (Citizens Bank v. American Insurance Co. (D. Or. 1968), 289 F. Supp. 211). She also relies on Bankers Life Co. v. Aetna Casualty & Surety Co. (Iowa 1985), 366 N.W.2d 166, and Humboldt Trust & Savings Bank v. Fidelity & Casualty Co. (1963), 255 Iowa 524, 122 N.W.2d 358.
Under her reasoning, because no separate expenditure exceeded section 4’s limit of liability, Fidelity would be liable for the total amount of unappropriated Fund 666 expenditures, which according to Kinzer amounts to $31,957,341 plus interest. We decline, however, to equate the term "loss” as used in the bond to each illegal expenditure; instead, we deem it more appropriate to follow the "general [rule] that 'an occurrence is determined by the cause or causes of the resulting injury.’ ” Business Interiors, Inc. v. Aetna Casualty & Surety Co. (10th Cir. 1984), 751 F.2d 361, 363, quoting Appalachian Insurance Co. v. Liberty Mutual Insurance Co. (3d Cir. 1982), 676 F.2d 56, 61, and citing, inter alia, 8B J. Appleman & J. Appleman, Insurance Law & Practice § 4891.25, at 16 (1981).
In the case just cited, because Business Interiors’ loss was the continued dishonesty of an employee, the court reasoned that " 'the probable intent of the employee with regard to the last thirty-nine checks [was] essentially the intent to continue the dishonesty, not to commit an entirely new and different act of dishonesty.’ ” (Brackets in original.) (Business Interiors, Inc., 751 F.2d at 363 (quoting the district court).) Similarly, here, as the trial judge noted, "the totality of circumstances which found their way over a 4-year practice of adopting the [accounting] techniques which had given rise to Fund 666” was "one event” constituting "one loss,” and thus, under section 4, Fidelity’s total liability is limited to $1,075,000; $975,000 under insuring agreement 3, plus $100,000 ($25,000 for each employee) under insuring agreement 4.
Therefore, we conclude that the illegal accounting practice which initially created Fund 666 was the single occurrence which generated each of the resulting illegal expenditures. Other cases, although the terms of the bonds at issue in them vary somewhat from the disputed terms here, support such a result. Roodhouse National Bank v. Fidelity & Deposit Co. (7th Cir. 1970), 426 F.2d 1347 (a series of forgeries occurring over a number of years constituted one loss to which the limit of liability applied); Federal Savings & Loan Insurance Corp. v. Aetna Insurance Co. (N.D. Ill. 1968), 279 F. Supp. 161 (multiple misdeeds of bank director constitute one loss under bond’s limit of liability); see also SEC v. Arkansas Loan & Thrift Corp. (W.D. Ark. 1969), 297 F. Supp. 73, aff’d (8th Cir. 1970), 427 F.2d 1171.
Kinzer also maintains that the trial judge erred when he read a setoff provision into the bond. National Surety Corp. v. Rauscher, Pierce & Co. (5th Cir. 1966), 369 F.2d 572; see also HS Equities, Inc. v. Hartford Accident & Indemnity Co. (2d Cir. 1979), 609 F.2d 669 (district court did not err in refusing to imply a setoff provision in bond Hartford issued to HS), citing National Surety Corp. with approval.
However, we agree with the trial judge, sitting in equity, that the City’s loss must be set off by the profits generated by the illegal Fund 666 expenditures because otherwise the "loss” occasioned by those expenditures would continue to qualify as a loss even if they generated a "handsome return,” resulting in an undeserved windfall to the City. Although not the case here, the possibility of a positive return from the unappropriated funds necessitates the incorporation of the concept of a "gain” into the determination of the City’s actual loss occasioned by the expenditures. Such reasoning is buttressed by the reality that the detriment to the public from the expenditure of unappropriated funds can be measured only by employing the following calculus: the totality of expenditures minus the benefit the public received therefrom. In other words, as a matter of civil law, no harm, no foul.
Such a formula gives the City the benefit of its bargain for the agreement with Fidelity and concomitantly stays true to the factual reality associated with assessing the loss caused by the illegal expenditure of funds. Indeed, in the absence of such a calculus, the insured would be motivated to bury its head in the sand while illegal conduct on the part of its employees, which might prove profitable, is allowed to continue, thereby creating the possibility of not only reaping the benefits of that illegal conduct, but also recovering doubly against its surety. Equity should not, indeed, cannot countenance recovery where, as here, the "loss” is actually not as represented by Kinzer. The only purpose to be served by adopting her position on this issue would be a punitive one, and, therefore, contraindicated in this case. Similar compelling policy considerations were recognized in HS Equities, Inc., but were found not controlling because the serious consequences attached to SEC disciplinary proceedings constituted ample deterrence against the misconduct at issue there. HS Equities, Inc., 609 F.2d at 673.
Fidelity goes too far, however, in arguing that not only is it entitled to a setoff of direct Fund 666 revenues, but that it should be allowed to deduct estimated indirect tax benefits as well. In support of this contention Fidelity submitted the affidavits of two experts and Ernst & Young’s "Analysis of Accounting and Economic Benefits Related [to] Fund 666,” which, using an "input-output” accounting model, indicated that the City received a net economic benefit of $801,020.
In ruling against Fidelity on this issue, the trial judge noted that his instructions were that "the revenue actually recovered by the City” be set off, not "the benefits received” — the language used by Fidelity itself when it memorialized his ruling into a written order. (Emphasis in original.) He further correctly commented:
"The input-output model utilized by Fidelity’s experts may have legitimacy in forecasting the future likelihood of a revenue flow from certain theoretical circumstances but it has no utility in counting cash on hand. Additionally, the introduction of estimated revenues into one side of the balance sheet invites, indeed compels, the acceptance of similarly unverifiable additional expenses such as police, fire, sanitation and utilities into the other side.”
Accordingly, we conclude that (1) the compilation of illegal expenditures amounts to one occurrence of loss under the bond; (2) Fidelity’s total potential liability under the bond for this one occurrence is $1,075,000: $975,000 under insuring agreement 3, plus $100,000 ($25,000 for each employee) under insuring agreement 4; and (3) the complete loss suffered by the City is to be calculated by deducting the "revenue actually recovered by the City” from the total of the illegal funds expended by its officials.
Next, we address whether the bond’s automatic cancellation provision (section 6(a)) is valid, and if so, what level of knowledge constitutes discovery; further, given that level of knowledge, whether as a matter of law the City can be held to have discovered the loss associated with the illegal expenditures on the date Kinzer filed her complaint, August 2, 1982.
The pertinent language of the cancellation provision provides:
"This Bond shall be deemed canceled as to any Employee *** [i]m-mediately upon discovery by the Obligee or the Insured of any act on the part of such Employee which would constitute a liability of the Surety under the applicable Insuring Agreement covering such Employee.”
Provisions like section 6(a) are well recognized as being reasonable; to conclude otherwise would be contrary to fundamental fairness and public policy — the City should not be compensated for losses caused by the misconduct of its employees of which it was aware and did nothing to prevent. (See, e.g., Home Savings & Loan v. Aetna Casualty & Surety Co. (Utah Ct. App. 1991), 817 P.2d 341, petition for cert. filed (1991), 171 Utah Adv. Rep. 67; Newhard, Cook & Co. v. Insurance Co. of North America (8th Cir. 1991), 929 F.2d 1355; Foote, Cone & Belding Communications, Inc. v. Federal Insurance Co. (N.D. Ill. 1990), 749 F. Supp. 892; E. Udolf, Inc. v. Aetna Casualty & Surety Co. (1990), 214 Conn. 741, 573 A.2d 1211; Central Progressive Bank v. Fireman’s Fund Insurance Co. (5th Cir. 1981), 658 F.2d 377; Maryland Casualty Co. v. Clements (1971), 15 Ariz. App. 216, 487 P.2d 437; Ritchie Grocer Co. v. Aetna Casualty & Surety Co. (8th Cir. 1970), 426 F.2d 499; St. Joe Paper Co. v. Hartford Accident & Indemnity Co. (5th Cir. 1967), 376 F.2d 33; see also 13 Couch on Insurance 2d §§ 46:248-249, at 184-85 (M. Rhodes rev. 1982); 11B J. Appleman & J. Appleman, Insurance Law & Practice § 6978, at 464 (1981).)
How aware must the City have been to activate the cancellation provision? When insuring agreement 3 is naturalized into section 6(a), the result is as follows:
"This Bond shall be deemed cancelled as to any Employee [i]m-mediately upon discovery by the [City] of any ['failure of any of the Employees *** to account properly for all monies *** received by virtue of his position’].”
The definition of "discovery” has been construed in many cases to "mean[ ] that time when the insured gains sufficient factual knowledge, not mere suspicion, which would justify a careful and prudent man in charging another with dishonesty.” (Alfalfa Electric Cooperative, Inc. v. Travelers Indemnity Co. (W.D. Okla. 1973), 376 F. Supp. 901, 906, 911-12 (applying the foregoing definition to a number of discovery conditions that were conditions precedent to coverage under a fidelity bond, including a provision that terminated coverage upon discovery by the insured of any dishonest act on the part of its employees).) In this case, rather than discovery of dishonesty, we are concerned with the City’s discovery of the failure of its implicated employees "to account properly” for monies expended for City-sponsored events through the creation and use of "Fund 666.”
Further, in order to trigger a provision which terminates coverage of an employee once the insured discovers that employee’s misconduct, the insured must be "aware of the true nature of the events which have given rise to the allegation.” (General Finance Corp. v. Fidelity & Casualty Co. (8th Cir. 1971), 439 F.2d 981, 987; see also Newhard, Cook & Co., 929 F.2d at 1357; Annot., 23 A.L.R.2d 1065, 1076 (1949).) Additionally, Couch on Insurance states:
"A policy of fidelity insurance may protect the insurer against the insured’s continuing to employ a known defaulter by specifying that the coverage of the policy shall terminate as to any employee upon the discovery of his default. Such a provision is to be strictly construed so that when knowledge of the insured partner is required to make the exclusion operative, actual knowledge of a partner is required ***.” (Emphasis added.) 13 Couch on Insurance 2d § 46:248, at 185 (M. Rhodes rev. 1982).
Accordingly, we conclude that section 6(a) would be brought into play in this case once the City was in possession of actual knowledge which would justify a careful and prudent man in charging the implicated employees with failing "to account properly” for monies expended for City-sponsored events through the creation and use of "Fund 666.”
When did the City here possess the requisite knowledge for the cancellation provision to have applied? Arguing that the City knew before August 2, 1982, that it was expending funds without prior appropriation, Fidelity asserts the following: (1) on December 8, 1980, and July 7, 1981, the 1979 and 1980 annual reports, respectively, were furnished to the city council, both disclosing that funds had not been appropriated for Fund 666 events sponsored during those years, (2) prior to those reports, the city council authorized the planning and execution of various events knowing that as of that time no funds had been appropriated for them, and (3) in May 1982, the Better Government Association (the BGA) publicly announced the results of its investigation of the events, charging the City with spending money on them absent prior appropriation.
Kinzer responds that the City could not have had knowledge of "any act *** which would constitute liability” until after she filed her suit since from the inception of Fund 666 during Mayor Byrne’s administration everyone, including the City’s corporation counsel, had concluded that the acts complained of violated no law, as is evidenced by the fact that the City entered into contracts to cover six additional festivals after this suit was filed. In support, she notes that the City officials and Fidelity were not named as defendants until she filed her second amended complaint in August 1983, and that only after a change in the City administration in the spring of 1983 did the then City comptroller, at the request of the new corporate counsel, commence an investigation of Fund 666.
J. Appleman, Insurance Law and Practice, which we find instructive, states:
"Under a fidelity bond providing that it should terminate upon the discovery by an employer of 'any fraudulent or dishonest act’ on the part of a bonded employee, the quoted words imply positive acts of wrongdoing, the discovery of which releases the insurer from its obligation on the bond. Whether or not the employer prior to the defalcation upon which an action is based had obtained knowledge of dishonest acts is a question of fact.” (Emphasis added.) 11B J. Appleman & J. Appleman, Insurance Law & Practice § 6978, at 465-67 (M. Rhodes rev. 1981).
In light of the standard of review of summary judgments, when the City actually knew of "any act *** which would constitute liability” under the bond or of the "failure of any *** Employee[ ] to account properly for all [Fund 666] monies” is a question of fact, not to be decided as a matter of law. Quality Lighting, Inc., 227 Ill. App. 3d at 884; Gardner, 213 Ill. App. 3d at 250 (in summary judgment motions, once the nonmovant presents some evidence to raise factual disputes regarding an element of the claim, that evidence must be construed strictly against the movant and liberally in favor of the nonmovant).
Therefore, summarizing, we hold that section 6(a) is valid, is governed by an objective view of what the City should have concluded from facts of which it was actually aware, and that when the City "discovered” that the expenditure of Fund 666 monies constituted a liability under the bond is a question of fact.
The answer to the question as to who constitutes "the City” under the bond is necessarily subsumed in determining the issue of when the City is held to have discovered the requisite failure of its implicated employees "to account properly” for the funds at issue in this litigation. Although noting that the language of the bond was ambiguous as to "the official/officials of city government (executive, legislative, administrative, clerical) whose knowledge [would] satisfy Section 6(a)[,]” the trial judge felt that by "[l]inking discovery to the filing of the original complaint [he had] obviated the need to resolve [this ambiguity] because notice of filing [was] served, under law, on multiple city officials and that under those circumstances knowledge of the suit cannot be denied.”
Since the City is a corporation and can act only through and by its agents, the law of agency would necessarily apply in the resolution of this issue.
" 'Generally speaking, a principal is chargeable with notice of facts which are within the knowledge of his agent, acquired within the scope of the agent’s authority. A well recognized exception to this rule iá that, where an agent acquires information which it would be to his advantage to conceal from his principal, it is assumed that he did not impart such knowledge to his principal, and accordingly knowledge would not be imputed to the principal. This exception, however, is qualified by the rule that, if the agent is the sole representative of the principal, or the only person or means by which the principal acts, then the knowledge of the agent will be imputed to the principal, and the general rule applies, and not the exception thereto.’ ” (Puget Sound National Bank v. St. Paul Fire & Marine Insurance Co. (1982), 32 Wash. App. 32, 40, 645 P.2d 1122, 1127.)
Although we are in accord with the above-quoted paragraph, we add to it the caveat that only the "knowledge of key employees” may be imputed to the City. (Emphasis added.) First National Bank v. Transamerica Insurance Co. (8th Cir. 1975), 514 F.2d 981, 986.
Fidelity next contends that since it was not named as a defendant in this action until August 18, 1983, the trial judge erred in assessing against it prejudgment interest from the original date the complaint was filed, which was August 2, 1982.
However, this issue was not raised in Fidelity’s trial court briefs nor in its argument in opposition to Kinzer’s motion for summary judgment, which specifically prayed for prejudgment interest to begin on August 2, 1982, nor was it raised in its motion to reconsider the grant of summary judgment. Further, Fidelity fails to cite any authority for this contention, nor does it respond to Kinzer’s waiver argument here; consequently, we deem this issue waived. In re Gonzalez (1981), 95 Ill. App. 3d 750, 756-57, 420 N.E.2d 720; 134 Ill. 2d R. 341(e)(7).
Five days after the trial judge granted Kinzer’s summary judgment motion but before ruling on Fidelity’s post-trial motion to reconsider, Kinzer filed, under a provision of the Insurance Code (215 ILCS 5/155 (West 1992)), a motion for attorney fees and costs, which the trial judge refused to entertain since "for all intents and purposes[ ] this action is concluded.” Thereafter, the judge also denied Kinzer’s request to amend her complaint to include a section 155 claim.
"In any action by or against a company wherein there is in issue the liability of a company on a policy or policies of insurance or the amount of the loss payable thereunder, or for an unreasonable delay in settling a claim, and it appears to the court that such action or delay is vexatious and unreasonable, the court may allow as part of the taxable costs in the action reasonable attorney fees, [and] other costs.” 215 ILCS 5/155 (West 1992).
The policy behind section 155 is to prevent harm to an insured who encounters an unreasonable and vexatious insurance company. (Meier v. Aetna Life & Casualty Standard Fire Insurance Co. (1986), 149 Ill. App. 3d 932, 943-44, 500 N.E.2d 1096.) Because the "action” here was still open as long as the trial judge retained jurisdiction over the case, the disincentives created by section 155 remained in effect so as to further the policies evinced by that section and the cases that have interpreted it. Accordingly, we hold that the trial judge erred in not entertaining Kinzer’s motion; consequently, we need not address whether the judge abused his discretion in disallowing Kinzer to amend her complaint.
Lastly, Kinzer contends that the trial judge abused his discretion in denying her request under Supreme Court Rule 137 for sanctions.
The determination of whether to grant sanctions is entrusted to the sound discretion of the trial judge and will not be reversed absent an abuse thereof. (Shea, Rogal & Associates v. Leslie Volkswagen, Inc. (1993), 250 Ill. App. 3d 149, 152-55, 621 N.E.2d 77.) In light of the fact that the trial judge had been assigned to this case more than two years prior to denying Kinzer’s motion for sanctions, and thus was well informed and familiar with the facts, issues, and parties therein, we uphold his denial of sanctions.
Reversed and remanded.
HARTMAN and McCORMICK, JJ., concur.
The officials are former mayoral assistant Thomas Geary (Geary) and three former city comptrollers, Raymond Coyne (Coyne), Daniel Grim (Grim), and Anthony Fratto (Fratto).
Two other comptrollers (Coyne and Fratto) were named along with Grim in Kinzer’s second amended complaint, but they were not parties to Kinzer I.
Fidelity additionally argues that the City officials acted in collusion when they failed to account properly for the expended funds, but such a claim lacks merit. "Collusion” is not defined in the bond, but Black’s Law Dictionary defines it as "impl[ying] the existence of fraud of some kind.” (Black’s Law Dictionary 264 (6th ed. 1990).) Further, according to Webster’s Third New International Dictionary, the word "collude” means to "connive with one another: CONSPIRE, PLOT” and the word "collusion” means "secret agreement: secret cooperation for a fraudulent or deceitful purpose.” (Webster’s Third New International Dictionary 446 (1986).) In Kinzer I, the supreme court indicated that Grim acted in "good faith,” and the fact that a comptroller simply followed the practice of his predecessor is insufficient to show collusion. Kinzer I, 128 Ill. 2d at 445.
|
preventChanges Does Not Detect $push/$pull
Steps to reproduce
Create a new Feathers App with a Service using a Database adapter that supports $push (e.g. mongodb, mongoose)
Use preventChanges(true, 'roles') to prevent changes on a field storing an array, e.g. roles
Send a PATCH with the following data: { "$push": { "roles" : "admin" }}
Expected behavior
A BadRequest error is thrown: BadRequest: Field roles may not be patched. (preventChanges)
Actual behavior
The call succeeds with no errors.
System configuration
Module versions: Feathers v4, feathers-hooks-common@latest
NodeJS version: 10
Operating System: Linux
Additional Info
I believe this is caused by preventChanges only checking which fields are directly listed in context.data instead of also checking for things like $push/$pull, which obviously is tricky since only some adapters support those special setters. However, the hook gives the impression that all changes would be prevented, so if this can’t be fixed, there should be a disclaimer somewhere prominent, or perhaps a way to blacklist / force whitelisting not only special query parameters, but also special data parameters in the database adapters, since this can possibly be a quite serious security flaw.
Is the problem still active? How come nobody responded to it?
I haven't tried it in v5 yet, however I cannot see anything in the changelog that would hint at a mitigation of this issue.
It might not affect a large userbase, which might explain the slow response—especially since it can be remedied with some additional checks in hooks if the developer is aware of this behaviour.
What was your way to get around this? Are you up to add a note about this in the docs? That would be really neat!
I wrote a little Hook that allows whitelisting modifiers:
// Use this hook to manipulate incoming or outgoing data.
// For more information on hooks see: http://docs.feathersjs.com/api/hooks.html
const { BadRequest } = require('@feathersjs/errors');
// eslint-disable-next-line no-unused-vars, arrow-body-style
module.exports = (...allowed) => {
return (context) => {
if (!context.params.provider) return context; // internal calls are fine
if (context.type !== 'before') throw new Error('The \'allowUpdateOperators\' hook should only be used as a \'before\' hook');
const fields = Object.keys(context.data);
fields.forEach((field) => {
if (field.startsWith('$') && !allowed.includes(field)) throw new BadRequest(`The update operator '${field}' is not allowed in this context. (allowUpdateOperators)`);
});
return context;
};
};
It’s pretty specific to my use-case, I don’t know how generally applicable it is. I’d be happy to add a note in the docs though if you think it’s useful!
Thanks for the quick answer!
Maybe I oversaw something, but what about this?:
iff(
isProvider('external'),
preventChanges(true, '$push', 'all', 'your', 'other', 'fields')
)
Won't that work?
I think that might work as well. :thinking: Seems a lot simpler for certain! :sweat_smile: Not sure though, it’s been a good while since I’ve actively worked with Feathers.
As a little addendum to my comment from yesterday: obviously that hook only makes sense if there’s another hook running before it that prevents all update operators:
// Use this hook to manipulate incoming or outgoing data.
// For more information on hooks see: http://docs.feathersjs.com/api/hooks.html
const { BadRequest } = require('@feathersjs/errors');
// eslint-disable-next-line no-unused-vars, arrow-body-style
module.exports = (options = {}) => {
return async (context) => {
if (context.type !== 'before') throw new Error('The \'preventUpdateOperators\' hook should only be used as a \'before\' hook');
const fields = Object.keys(context.data);
fields.forEach((field) => {
if (field.startsWith('$')) throw new BadRequest(`The update operator ${field} is not allowed in this context. (preventUpdateOperators)`);
});
return context;
};
};
While your solution with preventChanges(true, '$push'); may be simpler, there’s plenty of these operators and having to list them all might be cumbersome / error prone.
I still think that having logic that handles these cases in the preventChanges hook might make it more in line with user expectations, although obviously it also comes with the tradeoff of cluttering up the hook with database-specific cases. I’m not sure if adapters other than Mongo and Mongoose support these kinds of operators. :thinking:
|
Page:United States Statutes at Large Volume 42 Part 1.djvu/915
SIXTY-SEVENTH CONGRESS. Sess. II. Ch. 356. 1922. 887
6 cents per one hundred leaves. The foregoing rate applies to leaf not exceeding in size the equivalent of five and one-half by five and one-half inches; additional duties in the same proportion shall be assessed on leaf exceeding in size said equivalent.
PAR. 383. Gold leaf, 55 cents per one hundred leaves. The foregoing rate applies to leaf not exceeding in size the equivalent of three and three-eighths by three and three-eighths inches; additional duties in the same proportion shall be assessed on leaf exceeding in size said equivalent.
PAR. 384. Silver leaf, 5 cents per one hundred leaves.
PAR. 385. Tinsel wire, made wholly or in chief value of gold, silver, or other metal, 6 cents per pound and 10 per centum ad valorem; lame or lahn, made wholly or in chief value of gold, silver, or other metal, 6 cents per pound and 20 per centum ad valorem; bullions and metal threads made wholly or in chief value of tinsel wire, lame or lahn, 6 cents per pound and 35 per centum ad valorem; beltings, toys, and other articles made wholly or in chief value of tinsel wire, metal thread, lame or lahn, or of tinsel wire, lame or lahn and india rubber, bullions, or metal threads, not specially provided for, 45 per centum ad valorem; woven fabrics, ribbons, fringes, and tassels, made wholly or in chief value of any of the foregoing, 55 per centum ad valorem.
PAR. 386. Quicksilver, 25 cents per pound: Provided, That the flasks, bottles, or other vessels in which quicksilver is imported shall be subject to the same rate of duty as they would be subjected to if imported empty.
PAR. 387. Azides, fulminates, fulminating powder, and other like articles not specially provided for, 12½ cents per pound.
PAR. 388. Dynamite and other high explosives, put up in sticks, cartridges, or other forms, suitable for blasting, 1½ cents per pound.
PAR. 389. New types, 20 per centum ad valorem.
PAR. 390. Nickel oxide, 1 cent per pound; nickel, and nickel alloy of any kind in which nickel is the component material of chief value, in pigs or ingots, shot, cubes, grains, cathodes, or similar forms, 3 cents per pound; in bars, rods, plates, sheets, strips, strands, castings, wire, tubes, tubing, anodes, or electrodes, 25 per centum ad valorem; and in addition thereto, on all of the foregoing, if cold rolled, cold drawn, or cold worked, 10 per centum ad valorem.
PAR. 391. Bottle caps of metal, collapsible tubes, and sprinkler tops, if not decorated, colored, waxed, lacquered, enameled, lithographed, electroplated, or embossed in color, 30 per centum ad valorem; if decorated, colored, waxed, lacquered, enameled, lithographed, electroplated, or embossed in color, 45 per centum ad valorem.
PAR. 392. Lead-bearing ores and mattes of all kinds, 1½ cents per pound on the lead contained therein: Provided, That such duty shall not be applied to the lead contained in copper mattes unless actually recovered: Provided further, That on importations of lead-bearing ores and mattes of all kinds the duties shall be estimated at the port of entry and a bond given in double the amount of such estimated duties for the transportation of the ores or mattes by common carriers bonded for the transportation of appraised or unappraised merchandise to properly equipped sampling or smelting establishments, whether designated as bonded warehouses or otherwise. On the arrival of the ores or mattes at such establishments they shall be sampled according to commercial methods under the supervision of Government officers, who shall be stationed at such establishments, and who shall submit the samples thus obtained to a Government assayer, designated by the Secretary of the Treasury, who shall make a proper assay of the sample and report the result to
|
# webextension-toggle-soundcloud-reposts
A WebExtension that toggles showing or hiding repost entries in [SoundCloud's home timeline](https://soundcloud.com/stream).
## LICENSE
MIT
|
Dynamically place index in Swift
How do I take the following code and display the result "white" by using the index returned from "find"?
var colors: [String] = ["red", "yellow", "green", "blue", "orange", "purple", "white"]
var pickAColor2 = "white"
find(colors, "\(pickAColor2)")?
I'm looking for something like this to dynamically input the index:
color[\(pickAColor2)] // but this doesn't work...
var colors: [String] = ["red", "yellow", "green", "blue", "orange", "purple", "white"]
var pickAColor2 = "white"
let index = find(colors, pickAColor2) // index has the value 6
colors[index!] // Returns "white"
Make sure to unwrap index with !. Note that find returns nil when the value is not in the array, so force-unwrapping will crash in that case; prefer optional binding (if let) when the color may be absent.
|
Thill-coupling
(No Model.)
S. FORTER.
THILL COUPLING. No. 373,652. Patented Nov. 22, 1887.
UNITED STATES PATENT OFFICE.
SAMUEL FORTER, OF MARYSVILLE, KANSAS.
THILL-COUPLING.
SPECIFICATION forming part of Letters Patent No. 373,652, dated November 22, 1887.
Application filed August 18, 1887. Serial No. 247,274.
To all whom it may concern:
Be it known that I, SAMUEL FORTER, of Marysville, in the county of Marshall and State of Kansas, have invented a new and improved Thill-Coupling, of which the following is a full, clear, and exact description.
The object of my invention is to provide a new and improved thill-coupling, by which the shafts or tongues are securely coupled to the axle of the wagon.
The invention consists of a bolt passing through the apertured ears of the clip and through the shaft end, said bolt being provided on one end with an extension on which is formed a bar extending parallel with the bolt, and provided on its outer end with a curved angular arm adapted to engage the outer face of one of the ears of the clip.
The invention also consists of certain parts and details and combinations of the same, as will be fully described hereinafter, and then pointed out in the claims.
Reference is to be had to the accompanying drawings, forming a part of this specification,- in which similar letters of reference indicate corresponding parts in all the figures.
Figure 1 is a plan view of my improvement. Fig. 2 is a side elevation of the same, and Fig. 3 is a perspective view of the bolt.
On the axle A of the wagon is secured, in any suitable manner, the clip B, of any approved construction, and provided with the apertured ears C and C′, between which is held the apertured end of the shaft or tongue D. Through the apertures in the ears C and C′ and through the end of the shaft passes a bolt, E, slightly pointed at one end and provided at its other end with an extension, F, having an aperture, F′, and continuing to form the bar G, extending parallel with the bolt E and usually resting on the tops of the ears C and C′. On the outer end of the bar G is formed an angular and curved arm, H, fitting tightly over the outer face of the ear C′ of the clip B. When the shaft or tongue D is coupled to the clip B by the bolt E, as shown in Figs.
1 and 2, then the bar G extends across the tops of the ears C and C′, and the curved arm H extends downward and engages the outer face of the ear C′ very tightly, so that the bolt E is prevented from dropping out of the apertured end of the tongue or shaft D and out of the apertured ears C and C′.
If it is desirable to uncouple the shaft or tongue D from the clip B, the operator turns the bolt E by inserting a nail or pin into the aperture F′ of the extension F, and then turning the same until the bolt E, the bar G, and the arm H assume the position shown in dotted lines in Fig. 2; that is, the arm H is disengaged from the ear C′ and extends above the top of said ear, so that the bolt E can now be easily withdrawn from the ears C and C′ and the end of the shaft D. This operation can be performed only when the shaft D is in its lowest position; and when the shaft is in position as shown in Fig. 2 the bolt E cannot be turned sufficiently to disengage the curved arm H from the ear C′.
Having thus described my invention, what I claim as new, and desire to secure by Letters Patent, is:
1. In a thill-coupling, the combination, with a clip having apertured ears, of a bolt provided with a rigid arm extending parallel therewith and nearly to the end thereof, and having its free end bent to form an angular and curved arm fitting tightly over the outer face of one of the ears of the clip, in rear of the aperture of said clip, substantially as herein shown and described.
2. As an improved article of manufacture, a thill-coupling bolt consisting of the bolt E, provided with the extension F, the rigid arm G, extending from said extension parallel with the bolt and nearly to the end thereof, and having its free end bent to form the angular and curved arm H, as set forth.
SAMUEL FORTER.
Witnesses:
W. A. CALDERHEAD, D. P. CLARK.
|
Source Foundation, VivaCell-MTS helping children with disabilities
PanARMENIAN.Net - 40 children with special needs have benefited from the specialized personal assistant service in Yerevan thanks to VivaCell-MTS support in the past 2 years. In order to provide efficient assistance to children with disabilities, the Source Foundation, established in Armenia four years ago, has also included the kids’ families in its target group. As a result, an important change happens not only in the lives of these children but also in their families. Children with disabilities and special needs overcome self-isolation and are no longer limited to contact only with their mothers, enabling the families to return to the normal rhythm of life.
In 2014, VivaCell-MTS started to support different projects of the foundation helping them achieve the goals set. In 2017, special activities were implemented with 40 children for 8 months, 3 days a week, 6-8 hours a day at home, outside, at schools and rehabilitation centers. Specially trained individual tutors worked with the children at development and care centers. The individual tutors’ work was supervised by 3 senior specialists who developed the overall individual plan of working with the child to develop and improve their physical condition, as well as to regulate and make the work more effective in the child-tutor-parent chain.
Marina Parazyan, the director of Source Foundation, says that they have managed to achieve the key objectives of the program. In particular, the Foundation has succeeded in uniting various organizations providing specialized care to children with disabilities, as well as in creating jobs for young specialists in this area, supporting the families in the target group and helping the children.
The development of each child includes care, development of self-service skills, guidance at schools or at different development centers, continuous active work with individual development plans for each child. The team of pedagogue-tutors and coordinators has succeeded in reaching tangible results, ensuring that children are provided with independent, joyful and quality daily occupation, while parents are given the opportunity to rest, study, and return to normal life.
|
# Stochastic Average Gradient:
A Simple Empirical Investigation
Pascal Junior Tikeng Notsawo
<EMAIL_ADDRESS>
DIRO, Université de Montréal, Montréal, Quebec, Canada
###### Abstract
Despite the recent growth of theoretical studies and empirical successes of
neural networks, gradient backpropagation is still the most widely used
algorithm for training such networks. On the one hand, we have deterministic
or full gradient (FG) approaches that have a cost proportional to the amount
of training data used but have a linear convergence rate, and on the other
hand, stochastic gradient (SG) methods that have a cost independent of the
size of the dataset, but have a less optimal convergence rate than the
deterministic approaches. To combine the cost of the stochastic approach with
the convergence rate of the deterministic approach, a stochastic average
gradient (SAG) has been proposed. SAG is a method for optimizing the sum of a
finite number of smooth convex functions. Like SG methods, the SAG method’s
iteration cost is independent of the number of terms in the sum. In this work,
we propose to compare SAG to some standard optimizers used in machine
learning. SAG converges faster than other optimizers on simple toy problems
and performs better than many other optimizers on simple machine learning
problems. We also propose a combination of SAG with the momentum algorithm and
Adam. These combinations empirically achieve higher speed and better
performance than the other methods, especially when the landscape of the
function to optimize presents obstacles or is ill-conditioned (this work is
reproducible at https://github.com/Tikquuss/sag_torch).
## 1 Introduction
In many domains, several problems can be reduced to the minimization of the
sum of a finite number of functions
$g=\frac{1}{n}\sum_{i=1}^{n}f_{i}$
That is
$\operatorname*{minimize}_{x\in\Omega\subset\mathbb{R}^{p}}\ \
g(x)=\frac{1}{n}\sum_{i=1}^{n}f_{i}(x)$ (1)
Gradient descent (Cauchy, 1847; Bottou, 1998; Nemirovski et al., 2009; Duchi
et al., 2011b; Kingma and Ba, 2014) optimizes such functions with a rule of the
form:
$x^{k+1}=x^{k}-\alpha_{k}D^{k}$
where $\alpha_{k}$ is the step size at iteration $k$; and $D^{k}$ a function
of the past gradients $G_{1},\dots,G_{k}$ of $g$ at $x^{1},\dots,x^{k}$,
respectively, or of the estimators of these gradients; such that
$\mathbb{E}[D^{k}|x^{k-1}]=\nabla g(x^{k})$. More specifically, $G_{k}=\nabla
g(x^{k})$ is the gradient of $g$ at $x^{k}$, $D^{k}$ is the parameter update at
time $k$ given the optimization algorithm of choice, and $\\{\alpha_{k},k\geq 0\\}$ is
a predefined deterministic sequence of positive real numbers such that
$\sum_{k=1}^{\infty}\alpha_{k}=\infty$ and
$\sum_{k=1}^{\infty}\alpha_{k}^{2}<\infty$. The first of these two conditions
is to make sure that the total displacement
$\sum_{k=1}^{\infty}\alpha_{k}\nabla g(x^{k})$ can be unbounded, so the
optimal solution can be reached even if we start far away from it. The second
condition (the finite sum of squares) is to decrease fast enough for the
algorithm to converge. For convex functions, gradient descent converges to a
global minimum (if one exists).
Problem 1 is very common in deep learning, where the goal is to minimize the
regularized cost function
$\mathcal{J}(\theta)=\mathbb{E}_{s\sim F}[\ell(s,\theta)]+\lambda
r(\theta)=\int\ell(s,\theta)dF(s)+\lambda r(\theta)$
where the function $\ell(s,\theta)$ measures how well the neural network with
parameters $\theta$ predicts the label of a data sample $s$, $F$ is the
cumulative distribution function of the data distribution, $r(\theta)$ is the
regularizer (e.g. $\ell_{2}$-regularization $\frac{1}{2}\|\theta\|^{2}$), and
$\lambda\in\mathbb{R}_{+}$ the regularization strength. In practice, $F$ is
generally unknown, and the empirical distribution of a given dataset
$\mathcal{D}$ is used. The regularized empirical risk obtained can be written
as a sum of $|\mathcal{D}|$ functions
$\mathcal{J}(\theta)=\frac{1}{|\mathcal{D}|}\sum_{s\in\mathcal{D}}\big{[}\ell(s,\theta)+\lambda
r(\theta)\big{]}$
This is the case, for example, of the least squares regression, with
$\mathcal{D}=\\{(x_{i},y_{i})\in\mathbb{R}^{p}\times\mathbb{R}\\}_{i=1}^{n}\text{
and }\ell((x,y),\theta)=\|x^{T}\theta-y\|^{2}_{2}$
or the logistic regression, where $\ell$ is the negative log-likelihood:

$\mathcal{D}=\\{(x_{i},y_{i})\in\mathbb{R}^{p}\times\\{-1,1\\}\\}_{i=1}^{n}\text{
and }\ell((x,y),\theta)=\log(1+\exp(-yx^{T}\theta))$

(The decision boundary is $x^{T}\theta=0$, i.e. we want $x^{T}\theta>0$ for
$y=1$ and $x^{T}\theta<0$ for $y=-1$, that is,
$yx^{T}\theta>0\Longleftrightarrow\operatorname{sigmoid}(yx^{T}\theta)=1/(1+\exp(-yx^{T}\theta))>1/2$.
To maximize $\operatorname{sigmoid}(yx^{T}\theta)\in[0,1]$, we minimize
$-\log\big{(}\operatorname{sigmoid}(yx^{T}\theta)\big{)}\in\mathbb{R}^{+}$,
which gives the loss function.)
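For concreteness, here is a small NumPy transcription of the logistic loss just
defined and of its gradient with respect to $\theta$; this is only a sketch of
the formula, not code taken from the repository mentioned above:

import numpy as np

def logistic_loss(theta, x, y):
    # ell((x, y), theta) = log(1 + exp(-y * x^T theta)), with y in {-1, +1}
    return np.log1p(np.exp(-y * x.dot(theta)))

def logistic_grad(theta, x, y):
    # d/dtheta log(1 + exp(-y * x^T theta)) = -y * x / (1 + exp(y * x^T theta))
    return -y * x / (1.0 + np.exp(y * x.dot(theta)))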
One of the challenges that gradient-based methods face in practice is ill-
conditioned surfaces, where the Hessian of the function to optimize has some
large positive eigenvalues (i.e. high-curvature directions) and some
eigenvalues close to $0$ (i.e. low-curvature directions). In this case,
vanilla gradient descent bounces back and forth in high curvature directions
and slowly progresses in low curvature directions. In addition to these ill-
conditioned surfaces, there are obstacles such as saddle points and critical
surfaces (cliffs, valleys, plateaus, ravines, and other flat regions), and
extremely sharp or flat minima.
The aim of this work is to empirically investigate the performance of
stochastic average gradient (SAG) (Schmidt et al., 2013) on this type of
problem. We limit ourselves at first to simple toy finite-data
problems where each $f_{i}$ is smooth and convex, although in modern
applications, $n$, the number of data points (or training examples) can be
extremely large (e.g. datasets used to train large-scale deep learning models
like GPT-3 (Brown et al., 2020)), while there is often a large amount of
redundancy between examples. In addition to this basic setting, we will also
be interested in toy cases where the sum $g$ is strongly convex, with the use
of a strongly-convex regularizer such as the squared $\ell_{2}$-norm,
resulting in problems of the form :
$\operatorname*{minimize}_{x\in\mathbb{R}^{p}}\ \
\frac{\lambda}{2}\|x\|^{2}+\frac{1}{n}\sum_{i=1}^{n}f_{i}(x)=\frac{1}{n}\sum_{i=1}^{n}\Big{[}\frac{\lambda}{2}\|x\|^{2}+f_{i}(x)\Big{]}$
(2)
The resulting function $g$ will be strongly convex, provided that the
individual functions $f_{i}$ are convex.
We then extend our investigations to slightly more complex problems where we
optimize deep neural networks on toy datasets. Many deep models are
guaranteed to have an extremely large number of local minima. It has been
shown that this is not necessarily a problem: most local minima are of good
quality (almost equivalent in cost to the global minimum) (Dauphin et al.,
2014). The biggest obstacle to the optimization of $g$ in deep learning
remains the presence of saddle points. In low dimensions (small $p$), local
minima are more common, while in high dimensions, local minima are rare and
saddle points more common. Most of the training time is spent traversing flat
valleys of the Hessian matrix or circumnavigating tall mountains via an
indirect arcing path, and the trajectory of traversing such flat valleys and
circumventing such mountains may be long and result in excessive training
time (Srihari, 2020).
The rest of the paper is organized as follows. We define some terms used in
our work in section 2, then we present SAG in section 3, the related works in
section 4, the convergence analysis and the implementation details in sections
5 and 6 respectively. We finally present the experiments settings and the
results in section 7, then summarise and conclude our work in section 8.
## 2 Definitions
We assume $g:\mathbb{R}^{p}\to\mathbb{R}$ unless otherwise noted. The function
$g$ is convex if for all $x,y\in\operatorname{domain}(g)$ and all $t\in[0,1]$
$g(tx+(1-t)y)\leq tg(x)+(1-t)g(y)$
or equivalently if for all $x,y\in\operatorname{domain}(g)$,
$g(x)\geq g(y)+\nabla g(y)^{T}(x-y)$
if $g$ is differentiable. If the inequality holds strictly (i.e. $<$ rather
than $\leq$) for all $t\in(0,1)$ and $x\neq y$, then we say that $g$ is
strictly convex, so strict convexity implies convexity. Geometrically,
convexity means that the line segment between two points on the graph of $g$
lies on or above the graph itself. If $g$ is convex, then any local minimum of
$g$ in any convex set $X\subset\operatorname{domain}(g)$ is also a global
minimum. Strict convexity means that the line segment lies strictly above the
graph of $g$, except at the segment endpoints. If $g$ is strictly convex, then
at most one local minimum of $g$ in $X$ exists. Consequently, if it exists,
it is the unique global minimum of $g$ in $X$
333https://ai.stanford.edu/~gwthomas/notes/convexity.pdf.
For $\mu>0$, the function $g$ is $\mu$-strongly convex if the function
$x\mapsto g(x)-\frac{\mu}{2}\|x\|^{2}$
is convex, or equivalently if for all $x,y\in\operatorname{domain}(g)$,
$g(x)\geq g(y)+\nabla g(y)^{T}(x-y)+\frac{\mu}{2}\|x-y\|^{2}$
if $g$ is differentiable. Strong convexity doesn’t necessarily require the
function to be differentiable, and the gradient is replaced by the sub-
gradient when the function is non-smooth. Intuitively speaking, strong
convexity means a quadratic lower bound exists on the growth of the function.
This directly implies that a strong convex function is strictly convex since
the quadratic lower bound growth is, of course, strictly greater than the
linear growth 444https://xingyuzhou.org/blog/notes/strong-convexity.
Let $G(x)=\nabla g(x)\in\mathbb{R}^{p}$ and
$\mathcal{H}(x)=\nabla^{2}g(x)\in\mathbb{R}^{p\times p}$ be respectively the
gradient and the local hessian matrix of $g$ at $x$, assuming that $g$ is
twice-differentiable. If $G(x)=0$, then $x$ is a critical/stationary point of
$g$. In this case, the determinant $d(x)$ of $\mathcal{H}(x)$ is equal to the
Gaussian curvature of the surface of $g$ considered as a manifold. The
eigenvalues of $\mathcal{H}(x)$ are the principal curvatures of the $g$ at
$x$, and the eigenvectors are the principal directions of curvature. If
$d(x)>0$, $x$ is a local maximum of $g$ if $\mathcal{H}(x)$ is negative
definite (all its eigenvalues are negative), and a local minimum of $g$ if
$\mathcal{H}(x)$ is a positive definite (all its eigenvalues are positive).
Some local optimums can be very flat (i.e. there is a large enough
neighbourhood of $x$ that contains only local optima) or sharp (the loss
function near $x$ has a high condition number, i.e. very small perturbation of
$x$ can cause large variation in $g$). If $d(x)<0$ (some eigenvalues are
positive and others are negative), $x$ is a saddle point of $g$. If $d(x)=0$
(there is at least one zero eigenvalue, i.e. $\mathcal{H}(x)$ is singular),
we can’t conclude, and the point $x$ could be any of a minimum, maximum or
saddle point. If the hessian matrix of $g$ is positive semi-definite at any
point of $\operatorname{domain}(g)$, then $g$ is convex and the point $x$ such
that $G(x)=0$ is its global minimum. If it is instead negative semi-definite
at any point of $\operatorname{domain}(g)$, then $g$ is concave and the point
$x$ such that $G(x)=0$ is its global maximum.
## 3 Motivation
Gradient descent (Bottou, 1998) is one of the most popular algorithms to
perform optimization and by far the most common way to optimize neural
networks. The full gradient (FG) method (Cauchy, 1847) uses iterations of the form
$x^{k+1}=x^{k}-\alpha_{k}\nabla
g(x^{k})=x^{k}-\frac{\alpha_{k}}{n}\sum_{i=1}^{n}\nabla f_{i}(x^{k})$
FG is generally called batch gradient descent in deep learning since it
calculates the error for each example in the training dataset but only updates
the model after all training examples have been evaluated. Therefore, its cost
per iteration is $\mathcal{O}(n)$.
Assuming that a minimizer $x^{*}$ exists and $g$ is convex, then under
standard assumptions, the sub-optimality achieved on iteration $k$ of the FG
method with a constant step size is given by a sublinear convergence rate
(Nesterov, 2004; Schmidt et al., 2013)
$g(x^{k})-g(x^{*})=\mathcal{O}(1/k)$
When $g$ is strongly convex, the error also satisfies a linear convergence
rate (also known as a geometric or exponential rate because a fixed fraction
cuts the error on each iteration) (Nesterov, 2004; Schmidt et al., 2013)
$g(x^{k})-g(x^{*})=\mathcal{O}(\rho^{k})\text{ for some }\rho<1$
This $\rho$ depends on the condition number of $g$, i.e. on how sensitive the
output of $g$ is on its input 555$L/\mu$ (change in output = condition number
$\times$ change in input). One drawback of the FG approach is that it requires
computing all the gradients at each iteration, which can be tedious when $n$
is very large.
The basic stochastic gradient (SG) method for optimizing (1) uses iterations of the form
$x^{k+1}=x^{k}-\alpha_{k}\nabla f_{i_{k}}(x^{k})$
where at each iteration an index $i_{k}$ is sampled uniformly from the set
$\\{1,\dots,n\\}$. The randomly chosen gradient $\nabla f_{i_{k}}(x^{k})$
yields an unbiased estimate of the true gradient $\nabla g(x^{k})$ :
$\mathbb{E}_{i_{k}\sim\mathcal{U}(\\{1,\dots,n\\})}[\nabla
f_{i_{k}}(x^{k})]=\frac{1}{n}\sum_{i=1}^{n}\nabla f_{i}(x^{k})=\nabla
g(x^{k})$
Under standard assumptions and for a suitably chosen decreasing step-size
sequence $\\{\alpha_{k},k\geq 0\\}$ (Nemirovski et al., 2009; Schmidt et al.,
2013), the SG iterations have an expected sub-optimality for convex objectives
of
$\mathbb{E}[g(x^{k})]-g(x^{*})=\mathcal{O}(1/\sqrt{k})$
and an expected sub-optimality for strongly-convex objectives of
$\mathbb{E}[g(x^{k})]-g(x^{*})=\mathcal{O}(1/k)$
These sublinear rates are slower than the corresponding rates for FG. Under
certain assumptions, these convergence rates are optimal in a model of
computation where the algorithm only accesses the function through unbiased
measurements of its objective and gradient. Thus, we should not expect to be
able to obtain the convergence rates of the FG method if the algorithm only
relies on unbiased gradient measurements. Can we have one gradient per
iteration and achieve the same rate as FG?
Mini-batch gradient descent is a variation of the SG algorithm that splits the
training dataset into small batches used to calculate model error and update
model coefficients. In other words, we select a batch
$\mathcal{B}\subset\\{1,\dots,n\\}$ randomly at each iteration and do the
update as follows:
$x^{k+1}=x^{k}-\frac{\alpha_{k}}{|\mathcal{B}|}\sum_{i\in\mathcal{B}}\nabla
f_{i}(x^{k})$
This allows a trade-off between the cost per iteration and the convergence
rate: either we choose a large batch $\mathcal{B}$ and get a better rate at a
cost of $\mathcal{O}(|\mathcal{B}|)$ per iteration, or we choose a small
batch $\mathcal{B}$ and get a worse rate at a cost of $\mathcal{O}(1)$ per
iteration.
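A minimal sketch of this mini-batch update, where the batch size is the knob
that trades the per-iteration cost against the rate (the `grads(i, x)`
interface, returning $\nabla f_{i}(x)$, is an assumption of ours):

```python
import numpy as np

def minibatch_sgd(grads, n, x0, batch_size, alpha=0.01, n_iters=1000, seed=0):
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iters):
        batch = rng.choice(n, size=batch_size, replace=False)  # random B
        g = np.mean([grads(i, x) for i in batch], axis=0)      # batch gradient
        x = x - alpha * g                                      # O(|B|) per step
    return x
```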
The SAG iterations take the form
$x^{k+1}=x^{k}-\frac{\alpha_{k}}{n}\sum_{i=1}^{n}y_{i}^{k}$
where at each iteration a random index $i_{k}$ is selected (not necessarily
uniformly from $\\{1,\dots,n\\}$ as we will see below) and we set
$y_{i}^{k}=\left\\{\begin{array}[]{ll}\nabla f_{i}(x^{k})&\mbox{if }i=i_{k}\\\
y_{i}^{k-1}&\mbox{otherwise.}\end{array}\right.$
Like the FG method, the step incorporates a gradient with respect to each
function. But, like the SG method, each iteration only computes the gradient
with respect to a single example and the cost of the iterations is independent
of $n$ : we take a step in the direction of the average of $y_{i}^{k}$.
With the mini-batch version of SAG, the update becomes
$y_{i}^{k}=\left\\{\begin{array}[]{ll}\nabla f_{i}(x^{k})&\mbox{if
}i\in\mathcal{B}\\\ y_{i}^{k-1}&\mbox{otherwise.}\end{array}\right.$
## 4 Related works
In the following $D^{k}$ is a function of the past gradients
$G_{1},\dots,G_{k}$ of $g$ at $x^{1},\dots,x^{k}$, respectively, or of the
estimators of these gradients. In the papers introducing these algorithms,
$D^{k}=G_{k}$ in general, i.e. $D^{k}$ is deterministic. But their SG version
can be developed with $D^{k}=\nabla f_{i_{k}}(x^{k})$ for a randomly sampled
$i_{k}\in\\{1,\dots,n\\}$, or their mini-batch version with
$D^{k}=\frac{1}{|\mathcal{B}|}\sum_{i\in\mathcal{B}}\nabla f_{i}(x^{k})$ for a
random sample $\mathcal{B}\subset\\{1,\dots,n\\}$, or their SAG version with
an appropriate choice of past gradients to use and how to use them.
SG methods that incorporate at each iteration $k$ a momentum term
$m^{k}=x^{k}-x^{k-1}=-\alpha_{k-1}D^{k-1}$ use iterations of the form (Polyak,
1964; Sutton, 1986)
$x^{k+1}=x^{k}-\alpha_{k}D^{k}+\beta_{k}m^{k}$
It is common to set all $\beta_{k}=\beta_{1}$ for some constant
$\beta_{1}\in[0,1)$, and in this case, we can rewrite the SG with momentum
(Tseng, 1998) method as
$x^{k+1}=x^{k}-\sum_{j=0}^{k}\alpha_{j}\beta_{1}^{k-j}D^{j}$
The momentum algorithm accumulates an exponentially decaying moving average of
past gradients and continues to move in their direction. Formally, the
momentum algorithm introduces a variable $v$ that plays the role of velocity:
the direction and speed at which the parameters move through parameter space.
The hyperparameter $\beta_{1}$ determines how quickly the contributions of
previous gradients exponentially decay. The above update rule can be rewritten
in terms of the velocity as ($v^{0}=0$):
$v^{k+1}=\beta_{1}v^{k}-\alpha_{k}D^{k}$ $x^{k+1}=x^{k}+v^{k+1}$
since with this we have
$v^{k+1}=-\sum_{j=0}^{k}\alpha_{j}\beta_{1}^{k-j}D^{j}$
The SAG version of momentum becomes
$x^{k+1}=x^{k}+\frac{\alpha_{k}}{n}\sum_{i=1}^{n}y_{i}^{k}$
where at each iteration, a random index $i_{k}$ is selected, and we set
$y_{i}^{k}=\left\\{\begin{array}[]{ll}v_{i}^{k+1}=\beta_{1}v_{i}^{k}-\alpha_{k}D_{i}^{k}&\mbox{if
}i=i_{k},\text{ with }D^{k}=\nabla f_{i_{k}}(x^{k})\\\
y_{i}^{k-1}&\mbox{otherwise.}\end{array}\right.$
Nesterov accelerated gradient or Nesterov momentum (Nesterov, 1983; Sutskever
et al., 2013) is a variant of the momentum algorithm that uses an interim
update $\tilde{x}^{k}=x^{k}+\beta_{1}v^{k}$ to compute the gradient at
each iteration. That is :
$\tilde{x}^{k}=x^{k}+\beta_{1}v^{k}$ $\tilde{D}^{k}=\nabla g(\tilde{x}^{k})$
$v^{k+1}=\beta_{1}v^{k}-\alpha_{k}\tilde{D}^{k}$ $x^{k+1}=x^{k}+v^{k+1}$
The AdaGrad algorithm (Duchi et al., 2011a) individually adapts the learning
rates of all model parameters by scaling them inversely proportional to the
square root of the sum of all of their historical squared values. The update
rule of AdaGrad is given by ($r^{0}=0$, $r^{k}$ accumulates squared gradient,
division and square root are applied element-wise, $\epsilon$ is a very small
number used to avoid divisions by $0$) :
$r^{k+1}=r^{k}+D^{k}\odot D^{k}$
$x^{k+1}=x^{k}-\frac{\alpha_{k}}{\sqrt{r^{k+1}}+\epsilon}\odot D^{k}$
The RMSProp algorithm (Hinton, 2012) modifies AdaGrad to perform better in the
non-convex setting by changing the gradient accumulation into an exponentially
weighted moving average. RMSProp uses an exponentially decaying average to
discard history from the extreme past to converge rapidly after finding a
convex bowl as if it were an instance of the AdaGrad algorithm initialized
within that bowl. Compared to AdaGrad, using the moving average introduces a
new hyperparameter, $\beta_{2}\in(0,1]$, that controls the length scale of the
moving average. The step of squared gradient accumulation is modified as
follows:
$r^{k+1}=\beta_{2}r^{k}+(1-\beta_{2})D^{k}\odot D^{k}$
Adadelta (Zeiler, 2012) is an extension of Adagrad and RMSProp that seeks to
reduce its aggressive, monotonically decreasing learning rate. Instead of
accumulating all past squared gradients, Adadelta restricts the window of
accumulated past gradients to some fixed size ($u^{0}=0$; $u^{k}$ tracks an
exponentially weighted average of the squared updates).
$r^{k+1}=\beta_{2}r^{k}+(1-\beta_{2})D^{k}\odot D^{k}$
$\Delta^{k+1}=\frac{\sqrt{u^{k}+\epsilon}}{\sqrt{r^{k+1}+\epsilon}}\odot D^{k}$
$u^{k+1}=\beta_{2}u^{k}+(1-\beta_{2})\Delta^{k+1}\odot\Delta^{k+1}$
$x^{k+1}=x^{k}-\alpha_{k}\Delta^{k+1}$
Adam (Kingma and Ba, 2017) is a combination of RMSProp and momentum. First, in
Adam, momentum is incorporated directly as an estimate of the gradient’s
first-order moment (with exponential weighting). Second, Adam includes bias
corrections to the estimates of both the first-order moments (the momentum
term) and the (uncentered) second-order moments to account for their
initialization at the origin.
$\tilde{x}^{k}=x^{k}+\beta_{1}v^{k}$ $\tilde{D}^{k}=\nabla g(\tilde{x}^{k})$
$r^{k+1}=\beta_{2}r^{k}+(1-\beta_{2})\tilde{D}^{k}\odot\tilde{D}^{k}$
$v^{k+1}=\beta_{1}v^{k}-\frac{\alpha_{k}}{\sqrt{r^{k+1}}+\epsilon}\odot\tilde{D}^{k}$
$x^{k+1}=x^{k}+v^{k+1}$
The most common Adam iteration update is written in term of momentum as
$\begin{split}&m^{k}=\beta_{1}m^{k-1}+(1-\beta_{1})D^{k}\\\
&r^{k}=\beta_{2}r^{k-1}+(1-\beta_{2})D^{k}\odot D^{k}\\\
&x^{k}=x^{k-1}-\frac{\alpha_{k}}{\sqrt{r^{k}}+\epsilon}\odot m^{k}\end{split}$
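A direct NumPy transcription of this common form (without the corrective
terms discussed below; the function name is ours):

```python
import numpy as np

def adam_step(x, m, r, grad, alpha=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One iteration of the common Adam update displayed above."""
    m = beta1 * m + (1 - beta1) * grad
    r = beta2 * r + (1 - beta2) * grad * grad
    x = x - alpha / (np.sqrt(r) + eps) * m
    return x, m, r
```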
Adamax (Kingma and Ba, 2014, 2017) is a variant of Adam based on infinity
norm.
$\begin{split}&m^{k}=\beta_{1}m^{k-1}+(1-\beta_{1})D^{k}\\\
&u^{k}=\max(\beta_{2}u^{k-1},|D^{k}|+\epsilon)\\\
&x^{k}=x^{k-1}-\frac{\alpha_{k}}{(1-\beta_{1}^{k})u^{k}}\odot
m^{k}\end{split}$
AMSGrad (Reddi et al., 2018) is a version of Adam that keeps a running maximum
of the squared gradients instead of an exponential moving average.
$\begin{split}&m^{k}=\beta_{1}m^{k-1}+(1-\beta_{1})D^{k}\\\
&r^{k}=\beta_{2}r^{k-1}+(1-\beta_{2})D^{k}\odot D^{k}\\\
&\tilde{r}^{k}=\max(\tilde{r}^{k-1},r^{k})\\\
&x^{k}=x^{k-1}-\frac{\alpha_{k}}{\sqrt{\tilde{r}^{k}}+\epsilon}\odot
m^{k}\end{split}$
All Adaptive methods can be summarized as follows (Défossez et al., 2020). As
hyper-parameters, we have $0\leq\beta_{1}<\beta_{2}\leq 1$, and a non negative
sequence $(\alpha_{k})_{k\in\mathbb{N}^{*}}$. We define three vectors
$m_{k},r_{k},x_{k}\in\mathbb{R}^{p}$ iteratively. Given
$x^{0}\in\mathbb{R}^{p}$ as our starting point, $m^{0}=0$, and $r^{0}=0$, we
define for all iterations $k\in\mathbb{N}^{*}$
$\begin{split}&m_{i}^{k}=\beta_{1}m_{i}^{k-1}+D^{k}_{i}\\\
&r_{i}^{k}=\beta_{2}r_{i}^{k-1}+\big{(}D^{k}_{i}\big{)}^{2}\\\
&x_{i}^{k}=x_{i}^{k-1}-\alpha_{k}\frac{m_{i}^{k}}{\sqrt{r_{i}^{k}}+\epsilon}\end{split}$
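A sketch of this unified per-coordinate update (function name ours; note that
it uses weighted sums, not weighted averages). Taking `beta1=0`, `beta2=1`
and a constant `alpha` recovers Adagrad, as noted below:

```python
import numpy as np

def adaptive_update(x, m, r, grad, alpha, beta1, beta2, eps=1e-8):
    """Unified per-coordinate adaptive update (weighted sums of gradients)."""
    m = beta1 * m + grad
    r = beta2 * r + grad * grad
    x = x - alpha * m / (np.sqrt(r) + eps)
    return x, m, r
```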
The parameter $\beta_{1}$ is a heavy-ball style momentum parameter. The
parameter $\beta_{2}$ controls the decay rate of the per-coordinate
exponential moving average of the squared gradients. Taking $\beta_{1}=0$,
$\beta_{2}=1$ and $\alpha_{k}=\alpha$ gives Adagrad (Duchi et al., 2011b). The
original Adam algorithm (Kingma and Ba, 2014) uses a weighted average, rather
than a weighted sum :
$\tilde{m}_{i}^{k}=(1-\beta_{1})\sum_{j=1}^{k}\beta_{1}^{k-j}D_{i}^{j-1}=(1-\beta_{1})m_{i}^{k}$
We can achieve the same definition by taking
$\alpha_{adam}=\alpha\cdot\frac{1-\beta_{1}}{\sqrt{1-\beta_{2}}}$, since
$\frac{\tilde{m}_{i}^{k}}{\sqrt{\tilde{r}_{i}^{k}}}=\frac{1-\beta_{1}}{\sqrt{1-\beta_{2}}}\frac{m_{i}^{k}}{\sqrt{r_{i}^{k}}}$
with
$\tilde{r}_{i}^{k}=(1-\beta_{2})r_{i}^{k}\text{ and
}\tilde{m}_{i}^{k}=(1-\beta_{1})m_{i}^{k}$
The original Adam algorithm further includes two corrective terms to account
for the fact that $m^{k}$ and $r^{k}$ are biased towards $0$ for the first few
iterations. Those corrective terms are equivalent to taking a step-size
$\alpha_{k}$ of the form
$\alpha_{k,adam}=\alpha\cdot\frac{1-\beta_{1}}{\sqrt{1-\beta_{2}}}\cdot\overbrace{\frac{1}{1-\beta_{1}^{k}}}^{\text{
corrective term for }m^{k}}\cdot\underbrace{\sqrt{1-\beta_{2}^{k}}}_{\text{
corrective term for }r^{k}}$
Early work on adaptive methods (e.g. (McMahan and Streeter, 2010)) showed that
Adagrad achieves an optimal rate of convergence of $\mathcal{O}(1/\sqrt{k})$
for convex optimization. Ward et al. (2020) proved that Adagrad converges to a
critical point for non-convex objectives with a rate
$\mathcal{O}(\ln(k)/\sqrt{k})$ when using a scalar adaptive step-size. Défossez
et al. (2020) show a rate of $\mathcal{O}(p\ln(k)/\sqrt{k})$ for Adam, and
show that in expectation, the squared norm of the objective gradient averaged
over the trajectory has an upper-bound which is explicit in the constants of
the problem, parameters of the optimizer, the dimension $p$, and the total
number of iterations $k$.
## 5 SAG convergence rate
We assume that each function $f_{i}$ in (1) is convex and differentiable (this
makes $g$ also convex and differentiable), and that each gradient $\nabla
f_{i}$ is Lipschitz-continuous with constant $L_{i}$, meaning that for all $x$
and $y$ in $\mathbb{R}^{p}$ and each $i$ we have
$\|\nabla f_{i}(x)-\nabla f_{i}(y)\|\leq L_{i}\|x-y\|$ (3)
This makes $\nabla g$ also Lipschitz-continuous with any constant
$L\geq\frac{1}{n}\sum_{i=1}^{n}L_{i}$, such as $\operatorname*{max}_{i}L_{i}$.
Moreover, each gradient $\nabla f_{i}$ is Lipschitz-continuous with any
constant $L\geq\operatorname*{max}_{i}L_{i}$. This is a fairly weak assumption on the
$f_{i}$ functions, and in cases where the $f_{i}$ are twice-differentiable it
is equivalent to saying that the eigenvalues of the hessians of each $f_{i}$
are bounded above by $L$. We will also assume the existence of at least one
minimizer $x^{*}$ that achieves the optimal function value.
In addition to the above basic convex case, we will also consider the case
where the average function $g=\frac{1}{n}\sum_{i=1}^{n}f_{i}$ is strongly-
convex with constant $\mu>0$, meaning that the function $x\mapsto
g(x)-\frac{\mu}{2}\|x\|^{2}$ is convex. For twice-differentiable $g$, this is
equivalent to requiring that the eigenvalues of the hessian of $g$ are bounded
below by $\mu$. This is a stronger assumption that is often not satisfied in
practical applications. Nevertheless, in many applications we are free to
choose a regularizer of the parameters, and thus we can add an
$\ell_{2}$-regularization term as in (2) to transform any convex problem into
a strongly-convex problem (in this case we have $\mu\geq\lambda$). Note that
strong-convexity implies the existence of a unique $x^{*}$ that achieves the
optimal function value.
Let $\bar{x}^{k}=\frac{1}{k}\sum_{i=0}^{k-1}x^{i}$ be the average iterate and
$\sigma^{2}=\frac{1}{n}\sum_{i=1}^{n}\|\nabla f_{i}(x^{*})\|^{2}$ the variance
of the gradient norms at the optimum $x^{*}$. The convergence results consider
two different initializations for the $y_{i}^{0}$ variables:
* •
setting $y_{i}^{0}=0$ for all $i$
* •
or setting them to the centered gradient at the initial point $x^{0}$ :
$y_{i}^{0}=\nabla f_{i}(x^{0})-\nabla g(x^{0})$
The convergence results are expressed in terms of expectations $\mathbb{E}$
with respect to the internal randomization of the algorithm (the selection of
the random variables $i_{k}$), and not with respect to the data which is
assumed to be deterministic and fixed. The $L$ we use in the following is a
Lipschitz-continuous constant common to all $\nabla f_{i}$, as
$\operatorname*{max}_{i}L_{i}$.
###### Theorem 5.1.
With a constant step size of $\alpha=\frac{1}{16L}$, the SAG iterations
satisfy for $k\geq 1$ :
$\mathbb{E}[g(\bar{x}^{k})]-g(x^{*})\leq\frac{32n}{k}C_{0}\in\mathcal{O}\bigg{(}\frac{1}{k}\bigg{)}$
(4)
where if we initialize with $y_{i}^{0}=0$ for all $i$ we have
$C_{0}=g(x^{0})-g(x^{*})+\frac{4L}{n}\|x^{0}-x^{*}\|^{2}+\frac{\sigma^{2}}{16L}$
and if we initialize with $y_{i}^{0}=\nabla f_{i}(x^{0})-\nabla g(x^{0})$ for
all $i$ we have
$C_{0}=\frac{3}{2}\big{[}g(x^{0})-g(x^{*})\big{]}+\frac{4L}{n}\|x^{0}-x^{*}\|^{2}$
Further, if g is $\mu$-strongly convex we have
$\mathbb{E}[g(x^{k})]-g(x^{*})\leq\bigg{(}1-\min\Big{\\{}\frac{\mu}{16L},\frac{1}{8n}\Big{\\}}\bigg{)}^{k}C_{0}\in\mathcal{O}\Bigg{(}\bigg{(}1-\min\Big{\\{}\frac{\mu}{16L},\frac{1}{8n}\Big{\\}}\bigg{)}^{k}\Bigg{)}$
The proof of this theorem is given in Schmidt et al. (2013) [Appendix B] and
involves finding a Lyapunov function for a non-linear stochastic dynamical
system defined on the $y_{i}^{k}$ and $x_{k}$ variables that converges to zero
at the above rates, and showing that this function dominates the expected sub-
optimality $\mathbb{E}[g(x^{k})]-g(x^{*})$. Equation (4) is stated for the
average $\bar{x}^{k}$, but with a trivial change to the proof technique it can
be shown to also hold for any iterate $x^{k}$ where $g(x^{k})$ is lower than
the average function value up to iteration $k$,
$\frac{1}{k}\sum_{i=0}^{k-1}g(x^{i})$. Thus, in addition to $\bar{x}^{k}$, the
result also holds for the best iterate.
The bounds are valid for any $L$ greater than or equal to the minimum $L$
satisfying (3) for each $i$, implying an $\mathcal{O}(1/k)$ and linear
convergence rate for any $\alpha\leq 1/(16L)$, but the bound becomes worse as
$L$ grows. Although initializing each $y_{i}^{0}$ with the centered gradient
may have an additional cost and slightly worsens the dependency on the initial
sub-optimality $(g(x^{0})-g(x^{*}))$, it removes the dependency on the
variance $\sigma^{2}$ of the gradients at the optimum.
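As an illustration of how the constant step size of Theorem 5.1 can be set in
practice, consider least-squares losses $f_{i}(x)=(a_{i}^{T}x-b_{i})^{2}$,
for which $\nabla f_{i}$ is Lipschitz-continuous with $L_{i}=2\|a_{i}\|^{2}$;
this sketch (on toy data of our own making) computes $\alpha=1/(16L)$ with
$L=\operatorname*{max}_{i}L_{i}$:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 10))        # rows a_i of a toy design matrix
L = 2.0 * np.max(np.sum(A * A, axis=1))   # L = max_i L_i with L_i = 2 ||a_i||^2
alpha = 1.0 / (16.0 * L)                  # constant step size of Theorem 5.1
```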
While the theorem is stated in terms of the function values, in the
$\mu$-strongly-convex case we also obtain a convergence rate on the iterates
because we have
$\frac{\mu}{2}\|x^{k}-x^{*}\|^{2}\leq g(x^{k})-g(x^{*})$
The SAG iterations have a worse constant factor because of the dependence on
$n$. An appropriate choice of $x^{0}$ can improve the dependence on $n$ : we
can set $x^{0}$ to the result of $n$ iterations of an appropriate SG method.
In this setting, the expectation of $g(x^{0})-g(x^{*})$ is
$\mathcal{O}(1/\sqrt{n})$ in the convex setting, while both
$g(x^{0})-g(x^{*})$ and $\|x^{0}-x^{*}\|^{2}$ would be in $\mathcal{O}(1/n)$
in the strongly-convex setting.
If we use this initialization of $x^{0}$ and set $y_{i}^{0}=\nabla
f_{i}(x^{0})-\nabla g(x^{0})$, then in terms of $n$ and $k$ the SAG
convergence rates take the form $\mathcal{O}(\sqrt{n}/k)$ and
$\mathcal{O}(\rho^{k}/n)$ in the convex and strongly-convex settings, instead
of the $\mathcal{O}(n/k)$ and $\mathcal{O}(\rho^{k})$ rates implied by the
theorem.
An interesting consequence of using a step-size of $\alpha=1/(16L)$ is that it
makes the method adaptive to the strong-convexity constant $\mu$. For problems
with a higher degree of local strong-convexity around the solution $x^{*}$,
the algorithm will automatically take advantage of this and yield a faster
local rate. This can even lead to a local linear convergence rate if the
problem is strongly-convex near the optimum but not globally strongly-convex.
This adaptivity to the problem difficulty is in contrast to SG methods whose
sequence of step sizes typically depend on global constants and thus do not
adapt to local strong-convexity. We will test this on the Rosenbrock function
in log scale, for which the SG method circles indefinitely around the global
minimum and never reaches it.
## 6 SAG implementation Details
Schmidt et al. (2013) discuss modifications that lead to better practical
performance than this basic algorithm, including ways to reduce the storage
cost, how to handle regularization, how to set the step size, using mini-
batches, and using non-uniform sampling.
begin
    $d=0$ $\ /*$ $d$ is used to track the quantity $\sum_{i=1}^{n}y_{i}$ $*/$
    $y_{i}=0$ for $i=1,2,\dots,n$
    for _$k=0,1,\dots$_ do
        Sample $i$ from $\\{1,2,\dots,n\\}$
        $d=d-y_{i}+\nabla f_{i}(x)$
        $y_{i}=\nabla f_{i}(x)$
        $x=x-\frac{\alpha}{n}d$
    end for
end
Algorithm 1 Basic SAG method for minimizing
$\frac{1}{n}\sum_{i=1}^{n}f_{i}(x)$ with step size $\alpha$
##### Re-weighting on early iterations
When $y_{i}^{0}=0$, the more logical normalization is to divide $d$ by $m$,
the number of data points seen at least once (which converges to $n$ once we
have seen the entire data set):
$x=x-\frac{\alpha}{m}d$
##### Exact and efficient regularization
$x=x-\alpha\bigg{(}\frac{d}{m}+\lambda
x\bigg{)}=(1-\alpha\lambda)x-\frac{\alpha}{m}d$
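Putting Algorithm 1 together with the re-weighting by $m$ and the exact,
efficient $\ell_{2}$-regularization update above, a minimal NumPy
implementation could look as follows (the function name and the
`grads(i, x)` interface are ours; $x$ is assumed to be a 1-D parameter
vector):

```python
import numpy as np

def sag(grads, n, x0, alpha, lam=0.0, n_iters=10000, seed=0):
    """Basic SAG with y_i^0 = 0, re-weighting by m (number of examples
    seen at least once) and exact l2-regularization of strength lam."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    y = np.zeros((n, x.size))      # stored gradients y_i
    d = np.zeros_like(x)           # d tracks sum_i y_i
    seen = np.zeros(n, dtype=bool)
    m = 0
    for _ in range(n_iters):
        i = rng.integers(n)
        if not seen[i]:
            seen[i], m = True, m + 1
        g = grads(i, x)
        d += g - y[i]
        y[i] = g
        x = (1.0 - alpha * lam) * x - (alpha / m) * d
    return x
```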
##### Mini-batches for vectorized computation and reduced storage
$x^{k+1}=x^{k}-\frac{\alpha_{k}}{n}\sum_{i=1}^{n}y_{i}^{k}\text{ with
}y_{i}^{k}=\left\\{\begin{array}[]{ll}\nabla f_{i}(x^{k})&\mbox{if
}i\in\mathcal{B}\\\ y_{i}^{k-1}&\mbox{otherwise.}\end{array}\right.$
##### Structured gradients and just-in-time parameter updates
For many problems the storage cost of $\mathcal{O}(np)$ for the $y_{i}^{k}$
vectors is prohibitive, but we can often use the structure of the gradients
$\nabla f_{i}$ to reduce this cost. For example, let us consider a linearly-
parameterized model of the form
$\operatorname*{minimize}_{x\in\Omega\subset\mathbb{R}^{p}}\ \
g(x)=\frac{1}{n}\sum_{i=1}^{n}f_{i}(a_{i}^{T}x)$ (5)
Since each $a_{i}$ is constant, for these problems we only need to store the
scalar $f_{i}^{\prime}(u_{i}^{k})$ for $u_{i}^{k}=a_{i}^{T}x^{k}$ rather than
the full gradient $f_{i}^{\prime}(u_{i}^{k})a_{i}$. This reduces the storage
cost from $\mathcal{O}(np)$ down to $\mathcal{O}(n)$. Examples of
linearly-parameterized models include the least-squares regression 666
$\ell(s=(x,y),\theta)=h(x^{T}\theta)\textit{ with }h(z)=(z-y)^{2}$ , the
logistic regression 777 $\ell(s=(x,y),\theta)=h(x^{T}\theta)\textit{ with
}h(z)=\log(1+\exp(-yz))$ , etc.
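A sketch of this reduced-storage variant: only the $n$ scalars
$f_{i}^{\prime}(a_{i}^{T}x)$ are kept, and the running direction
$d=\sum_{i}s_{i}a_{i}$ is updated in place (names are ours):

```python
import numpy as np

def sag_linear(fprime, A, x0, alpha, n_iters=10000, seed=0):
    """SAG for g(x) = (1/n) sum_i f_i(a_i^T x); fprime(i, u) returns f_i'(u),
    and A stacks the a_i as rows. Storage is O(n) instead of O(np)."""
    rng = np.random.default_rng(seed)
    n, p = A.shape
    x = np.asarray(x0, dtype=float)
    s = np.zeros(n)            # stored scalars f_i'(a_i^T x)
    d = np.zeros(p)            # d tracks sum_i s_i * a_i
    for _ in range(n_iters):
        i = rng.integers(n)
        s_new = fprime(i, A[i] @ x)
        d += (s_new - s[i]) * A[i]
        s[i] = s_new
        x = x - (alpha / n) * d
    return x
```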
## 7 Experiment settings and results
We will use the following acronyms to designate our algorithms :
* •
sgd : vanilla SGD
* •
momentum : SGD with momentum (Polyak, 1964; Sutton, 1986; Tseng, 1998)
* •
nesterov : Nesterov Accelerated SGD (Nesterov, 1983; Sutskever et al., 2013)
* •
asgd : Averaged SGD proposed by Polyak and Juditsky (1992)
* •
rmsprop : RMSProp (Hinton, 2012)
* •
rmsprop_mom : RMSProp with momentum
* •
rprop : resilient backpropagation algorithm (Riedmiller and Braun, 1993)
* •
adadelta : Adadelta (Zeiler, 2012)
* •
adagrad : Adagrad (Duchi et al., 2011b)
* •
adam : Adam (Kingma and Ba, 2014, 2017)
* •
amsgrad : AMSGrad (Reddi et al., 2018)
* •
adamax : Adamax (Kingma and Ba, 2014)
* •
custom_adam : custom Adam algorithm without amsgrad that includes the two
corrective terms for $m^{k}$ and $r^{k}$
* •
adam_inverse_sqrt : Adam that decays the learning rate based on the inverse
square root of the update number. It also supports a warmup phase where the
learning rate is linearly increased from some initial learning rate
($warmup\\_init\\_lr$) to the configured learning rate ($lr$). Thereafter,
the learning rate is decayed proportionally to the inverse square root of the
number of updates, with a decay factor set to align with the configured
learning rate (see the sketch after this list).
* –
During warmup:
$lrs=linspace(start=warmup\\_init\\_lr,end=lr,steps=warmup\\_updates)$
$lr=lrs[step]$
* –
After warmup:
$lr=\frac{decay\\_factor}{\sqrt{update\\_num}}\text{ where
}decay\\_factor=lr*sqrt(warmup\\_updates)$
* •
adam_cosine : Adam that assigns the learning rate based on a cyclical
schedule that follows the cosine function (Loshchilov and Hutter, 2016). It
also supports a warmup phase where the learning rate is linearly increased
from some initial learning rate ($warmup\\_init\\_lr$) to the configured
learning rate ($lr$). Thereafter, the learning rate follows a cosine
annealing schedule (see the sketch after this list).
* –
During warmup:
$lrs=linspace(start=warmup\\_init\\_lr,end=lr,steps=warmup\\_updates)$
$lr=lrs[step]$
* –
After warmup:
$lr=lr\\_min+0.5*(lr\\_max-lr\\_min)*(1+cos(t\\_curr/t\\_i))$
where $t\\_curr$ is current percentage of updates within the current period
range and $t\\_i$ is the current period range, which is scaled by $t\\_mul$
after every iteration.
* •
sag : SAG (Schmidt et al., 2013)
* •
sag_sgd : combination of SAG and momentum SGD with
$y_{i}^{k}=\left\\{\begin{array}[]{ll}v^{k+1}=\beta_{1}v_{i}^{k}-\alpha_{k}D_{i}^{k}&\mbox{if
}i=i_{k},\text{ with }D^{k}=\nabla f_{i_{k}}(x^{k})\\\
y_{i}^{k-1}&\mbox{otherwise.}\end{array}\right.$
* •
sag_adam : combination of SAG and Adam with
$y_{i}^{k}=\left\\{\begin{array}[]{ll}\frac{m_{i}^{k}}{\sqrt{r_{i}^{k}}+\epsilon}&\mbox{if
}i=i_{k}\\\ y_{i}^{k-1}&\mbox{otherwise.}\end{array}\right.$
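The two learning-rate schedules used by adam_inverse_sqrt and adam_cosine can
be sketched as follows (function names are ours; for the cosine schedule we
include the factor $\pi$ from Loshchilov and Hutter (2016) and omit the
$t\\_mul$ period scaling):

```python
import numpy as np

def inverse_sqrt_lr(step, lr, warmup_init_lr, warmup_updates):
    """Linear warmup, then decay proportional to 1/sqrt(step)."""
    if step < warmup_updates:
        return np.linspace(warmup_init_lr, lr, warmup_updates)[step]
    decay_factor = lr * np.sqrt(warmup_updates)
    return decay_factor / np.sqrt(step)

def cosine_lr(step, lr_min, lr_max, period):
    """Cosine annealing within one period of fixed length."""
    t_curr = (step % period) / period     # fraction of the current period
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + np.cos(np.pi * t_curr))
```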
### 7.1 Test functions for optimization
#### 7.1.1 Rosenbrock function
The vanilla Rosenbrock function is given by
$g_{n}(x)=\sum_{i=1}^{n/2}\big{[}100(x_{2i}-x_{2i-1}^{2})^{2}+(x_{2i-1}-1)^{2}\big{]}$,
with the gradient
$\nabla_{i}g_{n}(x)=200(x_{i}-x_{i-1}^{2})\cdot\mathbb{1}_{i\in
2\mathbb{N}}-\big{[}400x_{i}(x_{i+1}-x_{i}^{2})-2(x_{i}-1)\big{]}\cdot\mathbb{1}_{i\in
2\mathbb{N}-1}$, and
$x^{*}\in\\{(1,\dots,1),(-1,1,\dots,1)\\}\subset\\{x,\nabla g_{n}(x)=0\\}$ 888
When the coordinates range from $0$ to $n-1$,
$g_{n}(x)=\sum_{i=0}^{n/2-1}\big{[}100(x_{2i+1}-x_{2i}^{2})^{2}+(x_{2i}-1)^{2}\big{]}$
and $\nabla_{i}g_{n}(x)=200(x_{i}-x_{i-1}^{2})\cdot\mathbb{1}_{i\in
2\mathbb{N}+1}-\big{[}400x_{i}(x_{i+1}-x_{i}^{2})-2(x_{i}-1)\big{]}\cdot\mathbb{1}_{i\in
2\mathbb{N}}$.. A more involved variant is given by
$g_{n}(x)=\sum_{i=1}^{n-1}\big{[}100(x_{i+1}-x_{i}^{2})^{2}+(x_{i}-1)^{2}\big{]}$,
with the gradient
$\nabla_{i}g_{n}(x)=200(x_{i}-x_{i-1}^{2})\cdot\mathbb{1}_{i>1}-\big{[}400x_{i}(x_{i+1}-x_{i}^{2})-2(x_{i}-1)\big{]}\cdot\mathbb{1}_{i<n}$,
and $x^{*}\in\\{(1,\dots,1)\\}\subset\\{x,\nabla g_{n}(x)=0\\}$ 999 When the
coordinates range from $0$ to $n-1$,
$g_{n}(x)=\sum_{i=0}^{n-2}\big{[}100(x_{i+1}-x_{i}^{2})^{2}+(x_{i}-1)^{2}\big{]}$
and
$\nabla_{i}g_{n}(x)=200(x_{i}-x_{i-1}^{2})\cdot\mathbb{1}_{i>0}-\big{[}400x_{i}(x_{i+1}-x_{i}^{2})-2(x_{i}-1)\big{]}\cdot\mathbb{1}_{i<n-1}$..
The number of stationary points of this function grows exponentially with
dimensionality $n$, most of which are unstable saddle points (Kok and
Sandrock, 2009).
Figure 1: Left) Rosenbrock function in log scale ($n=2$), Center) Contours,
Right) Gradient field (note how pronounced the gradient norm is near the
global minimum, which is important to understand why, even near this global
optimum, many optimizers can fail to reach it)
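A direct NumPy transcription of the vanilla (uncoupled) variant and its
gradient, using the 0-indexed form of footnote 8; a log-scale objective like
the one optimized below could then be taken, for instance, as
$\log(1+g_{n}(x))$ (that particular choice is our assumption):

```python
import numpy as np

def rosenbrock(x):
    """Vanilla Rosenbrock, coordinates 0-indexed; x has even length."""
    return np.sum(100.0 * (x[1::2] - x[0::2] ** 2) ** 2 + (x[0::2] - 1) ** 2)

def rosenbrock_grad(x):
    g = np.empty_like(x)
    g[1::2] = 200.0 * (x[1::2] - x[0::2] ** 2)
    g[0::2] = -400.0 * x[0::2] * (x[1::2] - x[0::2] ** 2) + 2.0 * (x[0::2] - 1)
    return g
```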
We optimized the Rosenbrock function in a logarithmic scale (to create a
ravine, figure 2). The function is unimodal, and the global minimum is very
sharp and surrounded in the direction of the ravine by many local minima. At
the beginning of optimization, we fall very quickly into the ravine because
the surface is well-conditioned. Then, depending on the learning rate and the
optimizer used (as well as the associated hyperparameters), we go down the
ravine very slowly. Indeed, without momentum or velocity, we do not go
directly down to the minimum, since the gradient is almost zero along the
ravine direction but very large in the perpendicular directions: we bounce
from left to right (perpendicular to the ravine) while descending a little,
but very slowly. Moreover, once we are near the minimum, we circle around it
almost indefinitely. With adaptive gradients, we go down to the minimum very
quickly because this direction problem is corrected (due to momentum, the
left-right components perpendicular to the ravine cancel out); if the
learning rate is too small, we will still descend very slowly (small gradient
in the flat ravine direction). Unlike SGD, here we always reach the minimum
(and stay there).
Also, for some learning rates and initializations, there is a double descent
(Nakkiran et al., 2020) in error (euclidean distance between the global
minimum and the current position at a given time) when landing in the ravine.
(a) adadelta and adagrad vs sag
(b) momentum, nesterov and asgd vs sag
(c) rmsprop, rmsprop_mom and rprop vs sag
(d) adam, amsgrad and adamax vs sag and sag_adam
(e) adam, custom_adam, adam_inverse_sqrt, adam_cosine
(f) summary
Figure 2: Rosenbrock function
Adadelta and adagrad were very slow compared to sag. Figure 2(a) shows a
comparative progression of these three algorithms. After 100 000 iterations,
adadelta and adagrad were still going down to the valley, while sag did it in
fewer than 1000 iterations, i.e. 100 times faster. Adadelta eventually
manages to reach the minimum, which sag ultimately never does.
Nesterov is faster on the well-conditioned part of the surface and arrives in
the neighbourhood of the target sooner than sag, momentum and asgd (figure
2(b)). On the other hand, it stabilizes at a higher loss than these methods.
Sag and asgd have almost the same trajectory. Momentum follows the same
trajectory as these two methods at the beginning but stabilizes at a smaller
loss. The combination sag_sgd (with momentum) speeds up the arrival in the
neighbourhood of the minimum but stabilizes at the same level as momentum.
Rmsprop is slower than sag, but ends up with a smaller error than sag (figure
2(c)). Adding momentum to rmsprop (rmsprop_mom) improves its speed
significantly. Rprop is also very fast and gives a smaller error than sag and
sag_sgd.
On the well-conditioned part of the surface, sag is faster than adam, adamax
and amsgrad, but these methods reach the minimum (zero final loss), which is
not the case for sag (figure 2(d)). The sag_adam combination almost reaches
the minimum, but is very chaotic and exhibits periodic jumps similar to the
slingshot mechanism (Thilak et al., 2022). Amsgrad is much slower than adam
and adamax.
Custom_adam, adam_inverse_sqrt and adam_cosine exhibit the same periodic
disruption phenomenon as sag_adam (figure 2(e)).
The methods that succeed in reaching the minimum are rmsprop, rprop, adadelta,
adam, amsgrad, adamax, rmsprop_mom (figure 4). The methods that come close to
it without reaching it are adam_inverse_sqrt, custom_adam, adam_cosine,
sag_adam, momentum. The comparative convergence speeds are presented in
figures 3 and 5, which is an approximation of the number of iterations
performed before reaching stabilization.
Figure 3: Comparative visualization of convergence speeds on the rosenbrock
function Figure 4: Final errors at steady states on the rosenbrock function
Figure 5: Comparative visualization of the progression of each algorithm on
the rosenbrock function
#### 7.1.2 Rastrigin function
The rastrigin function is given by
$g_{n}(x)=na+\sum_{i=1}^{n}\big{[}x_{i}^{2}-a\cos(2\pi
x_{i})\big{]}=na+x^{T}x-a1_{n}^{T}\cos(2\pi x)$ with $a\in\mathbb{R}$. Its
gradient is $\nabla g_{n}(x)=2x+2\pi a\sin(2\pi x)$, and
$x^{*}\in\\{(0,\dots,0)\\}\subset\\{x,\nabla g_{n}(x)=0\\}$.
Figure 6: Left) Rastrigin function in log scale ($A=10,n=2$), Center)
Contours, Right) Gradient field
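The Rastrigin function and its gradient translate directly to NumPy (with
$a=10$ as in figure 6):

```python
import numpy as np

def rastrigin(x, a=10.0):
    return x.size * a + np.sum(x ** 2 - a * np.cos(2 * np.pi * x))

def rastrigin_grad(x, a=10.0):
    return 2.0 * x + 2.0 * np.pi * a * np.sin(2 * np.pi * x)
```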
We optimized the Rastrigin function in a logarithmic scale (to create many
local minima and make the global minimum sharp, figure 7). The function is
unimodal (in terms of global minimum), and the global minimum is very sharp
and surrounded symmetrically by many local minima. At the beginning of
optimization, we fall very quickly into a first local minimum. Then, depending
on the learning rate and the optimizer used (and the associated
hyperparameters), we can move successively from one minimum to another until
we reach the global minimum.
(a) adadelta, adagrad and rprop vs sag
(b) momentum, nesterov and asgd vs sag
(c) rmsprop, rmsprop_mom vs sag
(d) adam, amsgrad and adamax vs sag and sag_adam
(e) adam, custom_adam, adam_inverse_sqrt, adam_cosine
(f) summary
Figure 7: Rastrigin function
Again, adadelta and adagrad are very slow compared to sag. Figure 7(a) shows
a comparative progression of these three algorithms. After 400 000
iterations, adadelta and adagrad were still going down to the valley, while
sag did it in fewer than 1000 iterations, i.e. 400 times faster. Adadelta
manages to reach the minimum, which sag ultimately never does. Rprop performs
very poorly here: it never leaves the first local minimum it falls into. This
is the method that obtains the largest error.
Nesterov is faster to reach the local minimum than sag, momentum and asgd
(figure 7(b)), and stabilizes at the same error as these methods. Sag and
asgd have almost the same trajectory. Momentum follows roughly the same
trajectory as these two methods from the beginning and stabilizes at the same
error. The combination sag_sgd (with momentum) speeds up the arrival in the
neighbourhood of the minimum and stabilizes at a lower error. This means that
it escapes more local minima than the methods with which it is compared.
Rmsprop is slightly slower than sag and ends up with a bigger error than sag
(figure 7(c)). Adding momentum to rmsprop (rmsprop_mom) improves its speed
significantly, but we end up with the same error.
sag is faster than adam, adamax and amsgrad and gets a smaller error than them
(figure 7(d)). The sag_adam combination is much faster with less error. It is
also one of the only methods to approach the global minimum (i.e. to escape so
many obstacles). Amsgrad is much slower than adam and adamax, but ends up with
the same error as them.
Custom_adam is faster than adam_inverse_sqrt, adam_cosine, but ends up with
the same error as them (figure 7(e)).
No method reached the global minimum (figure 9). The methods that come close
to it without reaching it are adam_inverse_sqrt, custom_adam, adam_cosine and
sag_adam. The comparative convergence speeds are presented in figures 8 and
10, which approximate the number of iterations performed before reaching
stabilization.
Figure 8: Comparative visualization of convergence speeds on the rastrigin
function Figure 9: Final errors at steady states on the rastrigin function
Figure 10: Comparative visualization of the progression of each algorithm on
the rastrigin function
### 7.2 Toys machine learning problems
#### 7.2.1 Scikit-learn datasets
We extracted the following datasets from scikit-learn (Pedregosa et al.,
2011). The reader is invited to refer to the official scikit-learn website
101010https://scikit-learn.org/stable/datasets/toy_dataset.html for more
information about these data (sources, …).
* •
wine (classification): recognize the wine class given features such as the
amount of alcohol, magnesium, phenols, colour intensity, etc.
* •
iris (classification): contains sepal and petal lengths and widths for three
classes of plants (Setosa, Versicolour, and Virginica)
* •
digits (classification): handwritten digit classification
* •
boston (regression): predict house prices in Boston from the crime rate,
nitric oxide concentration, number of rooms, distances to employment centers,
tax rates, etc. The output feature is the median value of homes.
* •
diabete (regression): the sklearn diabetes dataset
* •
linnerud (regression): the physical exercise Linnerud dataset
Dataset | # features | # classes | size | train size (80%) | val size (20%)
---|---|---|---|---|---
wine | 13 | 3 | 178 | 142 | 36
iris | 4 | 3 | 150 | 120 | 30
digits | 64 | 10 | 1797 | 1437 | 360

Table 1: Information about the sklearn datasets (classification)

Dataset | # features | # outputs | size | train size (80%) | val size (20%)
---|---|---|---|---|---
boston | 13 | 1 | 506 | 404 | 102
diabete | 10 | 1 | 442 | 353 | 89
linnerud | 3 | 3 | 20 | 16 | 4

Table 2: Information about the sklearn datasets (regression)
We trained a multilayer perceptron with one hidden layer of dimension 50, a
leaky rectified linear unit (Leaky ReLU) activation (with a negative slope of
0.01) (Maas, 2013) and a dropout of probability 0.1 (Srivastava et al., 2014),
for 2000 epochs.
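A PyTorch sketch of this network (the exact placement of the dropout layer is
our assumption; the sizes follow the description above):

```python
import torch.nn as nn

def make_mlp(n_features, n_outputs, hidden=50):
    """One hidden layer of size 50, LeakyReLU(0.01) and dropout 0.1."""
    return nn.Sequential(
        nn.Linear(n_features, hidden),
        nn.LeakyReLU(negative_slope=0.01),
        nn.Dropout(p=0.1),
        nn.Linear(hidden, n_outputs),
    )
```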
The results are presented in the following figures :
* •
wine: 11, 12, 13, 14, 15, 16, 17
* •
iris: 18, 19, 20, 21, 22, 23, 24
* •
digits: 25, 26, 27, 28, 29, 30, 31
* •
boston: 32, 33, 34, 35, 36, 37, 38
* •
linnerud: 39, 40, 41, 42, 43, 44, 45
* •
diabete: 46, 47, 48, 49, 50, 51, 52
Figure 11: adadelta, adagrad, sag (wine)
Figure 12: momentum, nesterov, asgd, sag, sag_sgd (wine)
Figure 13: adam, amsgrad, adamax, sag, sag_adam (wine)
Figure 14: adam, custom_adam, adam_inverse_sqrt, adam_cosine, sag, sag_sgd, sag_adam (wine)
Figure 15: Summary (wine)
Figure 16: Comparative visualization of convergence speeds (wine)
Figure 17: Performances at steady states (wine)
Figure 18: adadelta, adagrad, sag (iris)
Figure 19: momentum, nesterov, asgd, sag, sag_sgd (iris)
Figure 20: adam, amsgrad, adamax, sag, sag_adam (iris)
Figure 21: adam, custom_adam, adam_inverse_sqrt, adam_cosine, sag, sag_sgd, sag_adam (iris)
Figure 22: Summary (iris)
Figure 23: Comparative visualization of convergence speeds (iris)
Figure 24: Performances at steady states (iris)
Figure 25: adadelta, adagrad, sag (digits)
Figure 26: momentum, nesterov, asgd, sag, sag_sgd (digits)
Figure 27: adam, amsgrad, adamax, sag, sag_adam (digits)
Figure 28: adam, custom_adam, adam_inverse_sqrt, adam_cosine, sag, sag_sgd, sag_adam (digits)
Figure 29: Summary (digits)
Figure 30: Comparative visualization of convergence speeds (digits)
Figure 31: Performances at steady states (digits)
Figure 32: adadelta, adagrad, sag (boston)
Figure 33: momentum, nesterov, asgd, sag, sag_sgd (boston)
Figure 34: adam, amsgrad, adamax, sag, sag_adam (boston)
Figure 35: adam, custom_adam, adam_inverse_sqrt, adam_cosine, sag, sag_sgd, sag_adam (boston)
Figure 36: Summary (boston)
Figure 37: Comparative visualization of convergence speeds (boston)
Figure 38: Performances at steady states (boston)
Figure 39: adadelta, adagrad, sag (linnerud)
Figure 40: momentum, nesterov, asgd, sag, sag_sgd (linnerud)
Figure 41: adam, amsgrad, adamax, sag, sag_adam (linnerud)
Figure 42: adam, custom_adam, adam_inverse_sqrt, adam_cosine, sag, sag_sgd, sag_adam (linnerud)
Figure 43: Summary (linnerud)
Figure 44: Comparative visualization of convergence speeds (linnerud)
Figure 45: Performances at steady states (linnerud)
Figure 46: adadelta, adagrad, sag (diabete)
Figure 47: momentum, nesterov, asgd, sag, sag_sgd (diabete)
Figure 48: adam, amsgrad, adamax, sag, sag_adam (diabete)
Figure 49: adam, custom_adam, adam_inverse_sqrt, adam_cosine, sag, sag_sgd, sag_adam (diabete)
Figure 50: Summary (diabete)
Figure 51: Comparative visualization of convergence speeds (diabete)
Figure 52: Performances at steady states (diabete)
#### 7.2.2 TorchVision dataset
We extracted the datasets presented in table 3 from pytorch (Paszke et al.,
2019). The reader is kindly invited to refer to the official pytorch website
111111https://pytorch.org/vision/stable/datasets.html for more information
about these data (sources, …).
Dataset | (# channels, height, width) | # classes | size | train size | val size
---|---|---|---|---|---
mnist | (1, 28, 28) | 10 | 70000 | 60000 | 10000
fashion mnist | (1, 28, 28) | 10 | 70000 | 60000 | 10000
cifar10 | (3, 32, 32) | 10 | 60000 | 50000 | 10000
cifar100 | (3, 32, 32) | 100 | 60000 | 50000 | 10000

Table 3: TorchVision datasets
We trained a classifier having two main successive parts:
* •
A first part consisting of two layers of convolutions neural networks :
* –
(0): Conv2d(# channels, 10, kernel_size=(5, 5), stride=(1, 1))
* –
(1): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1,
ceil_mode=False)
* –
(2): Conv2d(10, 10, kernel_size=(5, 5), stride=(1, 1))
* –
(3): Dropout2d(p=0.1, inplace=False)
* –
(4): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1,
ceil_mode=False)
* •
A second part consisting of a two-layer feed-forward neural network :
* –
(0): Linear(in_features=160, out_features=50, bias=True)
* –
(1): Dropout(p=0.1, inplace=False)
* –
(2): Linear(in_features=50, out_features=10, bias=True)
* –
(3): Dropout(p=0.1, inplace=False)
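The two parts above assemble into the following PyTorch module (a sketch;
`in_features=160` corresponds to 1×28×28 inputs such as mnist and fashion
mnist, so the flattened size would differ for the 3×32×32 cifar inputs):

```python
import torch
import torch.nn as nn

class Classifier(nn.Module):
    def __init__(self, n_channels=1, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(n_channels, 10, kernel_size=5, stride=1),
            nn.MaxPool2d(kernel_size=2, stride=2),
            nn.Conv2d(10, 10, kernel_size=5, stride=1),
            nn.Dropout2d(p=0.1),
            nn.MaxPool2d(kernel_size=2, stride=2),
        )
        self.head = nn.Sequential(
            nn.Linear(160, 50),      # 160 = 10 * 4 * 4 for 28x28 inputs
            nn.Dropout(p=0.1),
            nn.Linear(50, n_classes),
            nn.Dropout(p=0.1),
        )

    def forward(self, x):
        return self.head(torch.flatten(self.features(x), 1))
```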
## 8 Summary and Discussion
In this work, we compared the performance of SAG and several other
optimization algorithms for continuous objectives such as SGD with momentum,
Nesterov Accelerated SGD, Averaged SGD, RMSProp (with and without momentum),
resilient backpropagation algorithm (Rprop), Adadelta, Adagrad, Adam, AMSGrad,
Adamax, Adam with special learning rate decay procedure (inverse square root
of the update number, cyclical schedule that follows the cosine function).
Despite its simple iteration, SAG outperforms the majority of these
algorithms. We proposed two combinations of SAG: one with the momentum
algorithm, which allows control of the importance of each gradient term in
the average used by SAG depending on the iteration during which it was
computed, and another with Adam, where the importance of the square of the
norm of the gradient is also controlled. These two variants allowed us to
empirically improve the speed while obtaining better performance.
##### Limitations
The memory cost used by SAG is very high compared to other algorithms, which
makes it impractical for large scale use.
##### Perspectives
What we presented as an improvement is only an empirical illustration of the
performance of SAG. It would be interesting to evaluate theoretically the
expected convergence rate of all these algorithms. We leave this for future
work.
## Acknowledgement
The authors thank Fabian Bastin who made this work possible, and for
discussion at the early stage of this project during the stochastic
programming (IFT6512) course at UdeM (Université de Montréal). We also thank
Compute Canada for computational resources.
## References
* Bottou (1998) Léon Bottou. Online algorithms and stochastic approximations. In David Saad, editor, _Online Learning and Neural Networks_. Cambridge University Press, Cambridge, UK, 1998. URL http://leon.bottou.org/papers/bottou-98x. revised, oct 2012.
* Brown et al. (2020) Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. _Advances in neural information processing systems_ , 33:1877–1901, 2020.
* Cauchy (1847) Augustin-Louis Cauchy. Analyse mathématique. – méthode générale pour la résolution des systèmes d’équations simultanées. volume 25, pages 536–538, 1847.
* Dauphin et al. (2014) Yann N. Dauphin, Razvan Pascanu, Çaglar Gülçehre, KyungHyun Cho, Surya Ganguli, and Yoshua Bengio. Identifying and attacking the saddle point problem in high-dimensional non-convex optimization. In Zoubin Ghahramani, Max Welling, Corinna Cortes, Neil D. Lawrence, and Kilian Q. Weinberger, editors, _Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, December 8-13 2014, Montreal, Quebec, Canada_ , pages 2933–2941, 2014. URL https://proceedings.neurips.cc/paper/2014/hash/17e23e50bedc63b4095e3d8204ce063b-Abstract.html.
* Duchi et al. (2011a) John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. _J. Mach. Learn. Res._ , 12(null):2121–2159, July 2011a. ISSN 1532-4435.
* Duchi et al. (2011b) John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. _Journal of Machine Learning Research_ , 12(61):2121–2159, 2011b. URL http://jmlr.org/papers/v12/duchi11a.html.
* Défossez et al. (2020) Alexandre Défossez, Léon Bottou, Francis Bach, and Nicolas Usunier. A simple convergence proof of adam and adagrad. _arXiv preprint arXiv: Arxiv-2003.02395_ , 2020.
* Hinton (2012) G. Hinton. Neural networks for machine learning. coursera, video lectures, 2012.
* Kingma and Ba (2014) Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. _International Conference On Learning Representations_ , 2014.
* Kingma and Ba (2017) Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization, 2017.
* Kok and Sandrock (2009) Schalk Kok and Carl Sandrock. Locating and characterizing the stationary points of the extended rosenbrock function. _Evol. Comput._ , 17(3):437–453, sep 2009. ISSN 1063-6560. doi: 10.1162/evco.2009.17.3.437. URL https://doi.org/10.1162/evco.2009.17.3.437.
* Loshchilov and Hutter (2016) Ilya Loshchilov and Frank Hutter. Sgdr: Stochastic gradient descent with warm restarts. _arXiv preprint arXiv: Arxiv-1608.03983_ , 2016.
* Maas (2013) Andrew L. Maas. Rectifier nonlinearities improve neural network acoustic models. 2013.
* McMahan and Streeter (2010) H. B. McMahan and M. Streeter. Adaptive bound optimization for online convex optimization. _Annual Conference Computational Learning Theory_ , 2010.
* Nakkiran et al. (2020) Preetum Nakkiran, Gal Kaplun, Yamini Bansal, Tristan Yang, Boaz Barak, and Ilya Sutskever. Deep double descent: Where bigger models and more data hurt. In _8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020_. OpenReview.net, 2020. URL https://openreview.net/forum?id=B1g5sA4twr.
* Nemirovski et al. (2009) A. Nemirovski, A. Juditsky, G. Lan, and A. Shapiro. Robust stochastic approximation approach to stochastic programming. _SIAM Journal on Optimization_ , 19(4):1574–1609, 2009. doi: 10.1137/070704277. URL https://doi.org/10.1137/070704277.
* Nesterov (1983) Yurii Nesterov. A method for unconstrained convex minimization problem with the rate of convergence $O(1/k^{2})$. 1983.
* Nesterov (2004) Yurii Nesterov. _Introductory lectures on convex optimization: A basic course_. Springer, 2004.
* Paszke et al. (2019) Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. Pytorch: An imperative style, high-performance deep learning library. In _Advances in Neural Information Processing Systems 32_ , pages 8024–8035. Curran Associates, Inc., 2019. URL http://papers.neurips.cc/paper/9015-pytorch-an-imperative-style-high-performance-deep-learning-library.pdf.
* Pedregosa et al. (2011) F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. Scikit-learn: Machine learning in Python. _Journal of Machine Learning Research_ , 12:2825–2830, 2011.
* Polyak and Juditsky (1992) B. T. Polyak and A. B. Juditsky. Acceleration of stochastic approximation by averaging. _SIAM J. Control Optim._ , 30(4):838–855, jul 1992. ISSN 0363-0129. doi: 10.1137/0330046. URL https://doi.org/10.1137/0330046.
* Polyak (1964) B.T. Polyak. Some methods of speeding up the convergence of iteration methods. _USSR Computational Mathematics and Mathematical Physics_ , 4(5):1–17, 1964. ISSN 0041-5553. doi: https://doi.org/10.1016/0041-5553(64)90137-5. URL https://www.sciencedirect.com/science/article/pii/0041555364901375.
* Reddi et al. (2018) Sashank J. Reddi, Satyen Kale, and Sanjiv Kumar. On the convergence of adam and beyond. _International Conference On Learning Representations_ , 2018.
* Riedmiller and Braun (1993) M. Riedmiller and H. Braun. A direct adaptive method for faster backpropagation learning: the rprop algorithm. In _IEEE International Conference on Neural Networks_ , pages 586–591 vol.1, 1993. doi: 10.1109/ICNN.1993.298623.
* Schmidt et al. (2013) Mark W. Schmidt, Nicolas Le Roux, and F. Bach. Minimizing finite sums with the stochastic average gradient. _Mathematical Programming_ , 2013. doi: 10.1007/s10107-016-1030-6.
* Srihari (2020) Sargur N. Srihari. Challenges in neural network optimization. 2020. URL https://cedar.buffalo.edu/~srihari/CSE676/.
* Srivastava et al. (2014) Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. _Journal of Machine Learning Research_ , 15(56):1929–1958, 2014. URL http://jmlr.org/papers/v15/srivastava14a.html.
* Sutskever et al. (2013) Ilya Sutskever, James Martens, George Dahl, and Geoffrey Hinton. On the importance of initialization and momentum in deep learning. In Sanjoy Dasgupta and David McAllester, editors, _Proceedings of the 30th International Conference on Machine Learning_ , volume 28 of _Proceedings of Machine Learning Research_ , pages 1139–1147, Atlanta, Georgia, USA, 17–19 Jun 2013. PMLR. URL http://proceedings.mlr.press/v28/sutskever13.html.
* Sutton (1986) Richard S. Sutton. Two problems with backpropagation and other steepest-descent learning procedures for networks. In _Proceedings of the Eighth Annual Conference of the Cognitive Science Society_. Hillsdale, NJ: Erlbaum, 1986.
* Thilak et al. (2022) Vimal Thilak, Etai Littwin, Shuangfei Zhai, Omid Saremi, Roni Paiss, and Joshua Susskind. The slingshot mechanism: An empirical study of adaptive optimizers and the grokking phenomenon, 2022. URL https://arxiv.org/abs/2206.04817.
* Tseng (1998) Paul Tseng. An incremental gradient(-projection) method with momentum term and adaptive stepsize rule. _SIAM Journal on Optimization_ , 8(2):506–531, 1998. doi: 10.1137/S1052623495294797. URL https://doi.org/10.1137/S1052623495294797.
* Ward et al. (2020) Rachel Ward, Xiaoxia Wu, and Leon Bottou. Adagrad stepsizes: Sharp convergence over nonconvex landscapes. _The Journal of Machine Learning Research_ , 21(1):9047–9076, 2020.
* Zeiler (2012) Matthew D. Zeiler. Adadelta: An adaptive learning rate method, 2012.
|
Roger and Courtnei
Synopsis
Main
Supporting
* Violet (voiced by Jessica DiCicco) - a busty teenage girl who works at Tooter's and is TBD.
* Eddie (voiced by TBD) - TBD
* Lacey Drope (voiced by Kristen Schaal) - a partyholic teenager who TBD.
* The Draculmole (voiced by Charlie Adler) - a vampiric mole-like creature who Courtnei befriends.
* Harvey Har (voiced by Jason Alexander) - a TBD talent agent who TBD.
* R.E.C. (voiced by Tom Kenny) - a friendly robot who is TBD.
* Redpinegal (voiced by Liliana Mumy) - TBD
* Santa Claus (voiced by John DiMaggio) - the jolly guardian of Christmas who loves to TBD.
* Santa Claus (voiced by John DiMaggio) - the jolly guardian of Christmas who loves to TBD.
Antagonists
* Burly Joe (also voiced by John DiMaggio) - a local thug who has an obsesssion for TBD.
* Lucy Six (voiced by Olivia Olson) - a gothic demon who is TBD.
* Salvador Macceti (voiced by André Sogliuzzo) - an elderly mob boss who TBD.
* Michael Macceti (voiced by Jason Spisak) - TBD
* Mason Everscare (voiced by Jim Hanks) - the owner of several media projects who is evil and TBD.
* Adapto (voiced by TBD) - a wizard who can turn women into mindless bimbos by TBD.
* Thorne (also voiced by Khary Payton) - a nightclub owner who TBD.
* Anna/She-Bitch (voiced by TBD) - TBD
* Michael Wallace (voiced by Nolan North) - the owner of a paper company who wants to take TBD.
* [spoof the other office mains a bit]
* The Sin Teens, consisting of:
* Walter TBD (also voiced by Yuri Lowenthal) - TBD
* Gina Gelly (voiced by Maria Canals-Barrera) - TBD
* Peter TBD (voiced by Jason Spisak) - TBD
* Sam TBD (also voiced by Ashley Johnson) - TBD
* Ed TBD (voiced by James Arnold Taylor) - TBD
* Gerald TBD (voiced by John Kassir) - TBD
Episodes
See List of episodes.
Crossover
[Untitled crossover with Game Over]
Tropes
See /Tropes.
Trivia
* VARIANT:
* Worldwide Animation: None.
* Worldwide Animation: None.
|
Deadpool 3: Bloody Sunday
Summary
Deadpool reunited the X-Force to take down an upcoming threat by the name of Mister Sinister.
Cast
* as Nathan Summers/Cable, a mutant from the future who has a frenemyship with Deadpool.
* as Neena Thurman/Domino, a member of the X-Force who has the power of being lucky.
* as Ellie Phimister/Negasonic Teenage Warhead, TBD
* Alison Brie as Vertigo, Nathaniel's bodyguard, whose powers Wade usually mocks.
* T. J. Miller as Weasel, a bartender who is Deadpool's friend.
Soundtrack
* 1) Sunday, Bloody Sunday - U2
* 2) The Next Episode - Snoop Dogg
Quotes
* Deadpool: What's going on, tech weirdos?
* IT Person #1: Actually we are IT...
C'mon, let's roll the tape!
* IT Person #2: We can't, it blew up.
* Deadpool: Fine, I'll do the credits myself.
* Deadpool: Cable, we meet again.
* Deadpool: I did.
* Domino: She's amazing.
* Vanessa: That doesn't answer my question.
* Cable: Well, most look badass.
* Cable: Quiet you!
* Deadpool: Damn, that's really bad writing!
* Deadpool: Yeah
(Mister Sinister is shown) (she comes)
* Mister Sinister: Vertigo!
* Vertigo: Heyo, boss. What do you need me for?
* Mister Sinister: Is the weapon near completion?
* Vertigo: I think so.
* Mister Sinister: Good.
* Vertigo: I think I am hungry... (starts taking her clothes off) for you.
* Mister Sinister: What are you doing?
* Mister Sinister: I am not Doctor Strange!
* Deadpool: You look and sound like him, it's obvious you're him.
* Mister Sinister: But I’m not, I am way eviller than him.
* Deadpool: Prove it!
* Mister Sinister: Fine.
* Mister Sinister: You believe me now?
* Deadpool: Uh, no. Doctor Strange can do that too.
* Mister Sinister: Oh yeah?!
* Deadpool: Yep!
* Mister Sinister: Since I need to work on my weapon to destroy the world, Vertigo, kill him.
* Deadpool: Seriously?!
Trivia
|
OWASP India Advisory Board 2012
For the year 2012, OWASP India announces the following members on its advisory board.
* Pawan Kumar Singh, CISO - Tulip Telecom Ltd.
* Tarun Gupta, Director - Key Accounts, Ericsson
* Sunil Goyal, Chief Operating Officer - Sopra Group
* Sandeep Khare, Asst. Vice President - WNS Global
* Rohit Malik, Sr. Program Manager - Agilent Technologies
* Sanjay Kharb, Assistant Vice President – MakeMyTrip
|
Add set(CMAKE_DEBUG_POSTFIX "d")
This is useful for linking correctly in Visual Studio on Windows, since debug and release binaries get distinct file names.
Relevant YARP code:
https://github.com/robotology/yarp/blob/9a395f6e41e01d8d350b6a4e5e2b2f6c6f913b74/doc/yarp_build_structure.dox#L207
https://github.com/robotology/yarp/blob/0a8aaa2482d029d5668f2482f29ea4b882d0d17d/cmake/YarpSystemCheck.cmake#L197
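For illustration, a minimal sketch of how this could look in a CMakeLists.txt (the project and target names are placeholders, not YARP's actual build files):

    cmake_minimum_required(VERSION 3.12)
    project(example LANGUAGES CXX)

    # Append "d" to output file names of Debug builds so debug and release
    # artifacts can coexist and consumers link the right one in MSVC.
    set(CMAKE_DEBUG_POSTFIX "d")

    add_library(example_lib src/example.cpp)

    # CMAKE_DEBUG_POSTFIX initializes the DEBUG_POSTFIX property for library
    # targets; executables may need it set explicitly:
    add_executable(example_app src/main.cpp)
    set_target_properties(example_app PROPERTIES DEBUG_POSTFIX "d")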
|
Super Smash Bros. Brawl/Peach
* At any point during a jump; you can also press down whilst holding down the jump button.
The second possibility allows Peach to float at any time during the jump, and not just the apex.
* 1) Peach bomber
* 2) Toad
Obviously you'd throw the item if it was a bomb. But throwing the item you get will allow you to stay at a distance.
When struck by an attack, Toad will reflect the attack in the form of spores, leaving Peach unharmed and damaging the opponent.
Taunts
* She pulls out her parasol, twirls it, and says, "Sweet."
* She dances around, singing "La-la-la-la-la-la".
* She twirls around and winks at the camera.
Statistics
* Weight: 4/10 (Medium)
* Offense: 8/10
* Defense: 8/10
* Projectile: 8/10
* Final Smash: 5/10
* Throwing ability: 5/10
* Speed: 6/10
* Overall: 8/10
Final Smash
|
# ----------------------------------------------------------------------
# ConfDBQuery model
# ----------------------------------------------------------------------
# Copyright (C) 2007-2021 The NOC Project
# See LICENSE for details
# ----------------------------------------------------------------------
# Python modules
import threading
import operator
import os
# Third-party modules
from mongoengine.document import Document, EmbeddedDocument
from mongoengine.fields import (
StringField,
UUIDField,
BooleanField,
ListField,
EmbeddedDocumentField,
)
import cachetools
# NOC modules
from noc.core.ip import IP
from noc.core.prettyjson import to_json
from noc.core.text import quote_safe_path
from noc.core.model.decorator import on_delete_check
from noc.sa.interfaces.base import StringParameter, IntParameter, BooleanParameter
class IPParameter(object):
    """Query parameter type that normalizes values to IP prefixes."""

    def clean(self, value):
        return IP.prefix(value)

id_lock = threading.Lock()

# Maps ConfDBQueryParam.type names to the interface parameters used to
# clean and validate raw query arguments
TYPE_MAP = {
    "str": StringParameter(),
    "int": IntParameter(),
    "bool": BooleanParameter(),
    "ip": IPParameter(),
}
class ConfDBQueryParam(EmbeddedDocument):
meta = {"strict": False}
name = StringField()
type = StringField(choices=["str", "int", "bool", "ip"])
default = StringField()
description = StringField()
def __str__(self):
return self.name
def to_json(self):
return {
"name": self.name,
"type": self.type,
"description": self.description,
"default": self.default,
}
def get_parameter(self):
return TYPE_MAP[self.type]
@on_delete_check(
check=[
("cm.InterfaceValidationPolicy", "filter_query"),
("cm.InterfaceValidationPolicy", "rules.query"),
("cm.InterfaceValidationPolicy", "rules.filter_query"),
("cm.ObjectValidationPolicy", "filter_query"),
("cm.ObjectValidationPolicy", "rules.query"),
("cm.ObjectValidationPolicy", "rules.filter_query"),
]
)
class ConfDBQuery(Document):
meta = {
"collection": "confdbqueries",
"strict": False,
"auto_create_index": False,
"json_collection": "cm.confdbqueries",
"json_unique_fields": ["name"],
}
name = StringField(unique=True)
uuid = UUIDField(binary=True)
description = StringField()
source = StringField()
params = ListField(EmbeddedDocumentField(ConfDBQueryParam))
allow_object_filter = BooleanField(default=False)
allow_interface_filter = BooleanField(default=False)
allow_object_validation = BooleanField(default=False)
allow_interface_validation = BooleanField(default=False)
allow_object_classification = BooleanField(default=False)
allow_interface_classification = BooleanField(default=False)
require_raw = BooleanField(default=False)
_id_cache = cachetools.TTLCache(maxsize=100, ttl=60)
def __str__(self):
return self.name
@classmethod
@cachetools.cachedmethod(operator.attrgetter("_id_cache"), lock=lambda _: id_lock)
def get_by_id(cls, id):
return ConfDBQuery.objects.filter(id=id).first()
def get_json_path(self) -> str:
p = [quote_safe_path(n.strip()) for n in self.name.split("|")]
return os.path.join(*p) + ".json"
def query(self, engine, **kwargs):
"""
Run query against ConfDB engine
:param engine: ConfDB engine
:param kwargs: Optional arguments
:return:
"""
params = kwargs.copy()
for p in self.params:
params[p.name] = p.get_parameter().clean(params.get(p.name, p.default))
for ctx in engine.query(self.source, **params):
yield ctx
def any(self, engine, **kwargs):
"""
Run query against ConfDB engine and return True if any result found
:param engine: ConfDB engine
:param kwargs: Optional arguments
:return: True if any result found
"""
return engine.any(self.source, **kwargs)
def to_json(self) -> str:
r = {
"name": self.name,
"$collection": self._meta["json_collection"],
"uuid": self.uuid,
"source": self.source,
"params": [p.to_json() for p in self.params],
"allow_object_filter": self.allow_object_filter,
"allow_interface_filter": self.allow_interface_filter,
"allow_object_validation": self.allow_object_validation,
"allow_interface_validation": self.allow_interface_validation,
"allow_object_classification": self.allow_object_classification,
"allow_interface_classification": self.allow_interface_classification,
"require_raw": self.require_raw,
}
if self.description:
r["description"] = self.description
return to_json(
r,
order=[
"name",
"$collection",
"uuid",
"description",
"source",
"params",
"allow_object_filter",
"allow_interface_filter",
"allow_object_validation",
"allow_interface_validation",
"allow_object_classification",
"allow_interface_classification",
"require_raw",
],
)
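# --- Usage sketch (illustrative only; not part of the model) ---
# Assuming `engine` is an initialized ConfDB engine and a query named
# "Interface Filter" exists in the collection (both names are hypothetical),
# keyword arguments are cleaned through TYPE_MAP before execution:
#
#   q = ConfDBQuery.objects.filter(name="Interface Filter").first()
#   for ctx in q.query(engine, ifname="Gi0/1"):
#       print(ctx)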
|
Zika virus NS3 is a canonical RNA helicase stimulated by NS5 RNA polymerase
Abstract Zika virus is a positive single-strand RNA virus whose replication involved RNA unwinding and synthesis. ZIKV NS3 contains a helicase domain, but its enzymatic activity is not fully characterized. Here, we established a dsRNA unwinding assay based on the FRET effect to study the helicase activity of ZIKV NS3, which provided kinetic information in real time. We found that ZIKV NS3 specifically unwound dsRNA/dsDNA with a 3′ overhang in the 3′ to 5′ direction. The RNA unwinding ability of NS3 significantly decreased when the duplex was longer than 18 base pairs. The helicase activity of NS3 depends on ATP hydrolysis and binding to RNA. Mutations in the ATP binding region or the RNA binding region of NS3 impair its helicase activity, thus blocking viral replication in the cell. Furthermore, we showed that ZIKV NS5 interacted with NS3 and stimulated its helicase activity. Disrupting NS3-NS5 interaction resulted in a defect in viral replication, revealing the tight coupling of RNA unwinding and synthesis. We suggest that NS3 helicase activity is stimulated by NS5; thus, viral replication can be carried out efficiently. Our work provides a molecular mechanism of ZIKV NS3 unwinding and novel insights into ZIKV replication.
INTRODUCTION
Flaviviruses are positive single-strand RNA viruses that include Dengue virus (DENV), Yellow Fever virus (YFV), West Nile virus (WNV), Japanese Encephalitis virus (JEV), Zika virus (ZIKV) as well as some other viruses. As an emergent threat to humans, outbreak of ZIKV in 2016 caused serious concern worldwide (1). Epidemiological and biological studies showed that ZIKV infection is strongly associated with neonatal microcephaly and with Guillain-Barré syndrome in adults (2)(3)(4). Unfortunately, there is not yet any effective treatment for ZIKV infection (5). Previous studies on ZIKV provided only limited information about its replication in the cell, and the detailed mechanisms are still unclear. Therefore, it is of great importance to explore the replication process of ZIKV, which may offer new strategies for ZIKV prevention and therapy.
Similar to other flavivirus family members, ZIKV genomic RNA is approximately 11 kb in length, and encodes three structural proteins (C, prM/M and E protein) and seven nonstructural proteins (NS1, NS2A, NS2B, NS3, NS4A, NS4B and NS5) (6). The structural proteins are the components for assembling viral particles, and the nonstructural proteins perform viral replication in the cell. Two viral proteins, NS3 and NS5, possess all enzymatic functions and dominate viral RNA amplification (7,8). The viral RNA replication process is described briefly as below (9). First, NS5, the RNA-dependent RNA polymerase (RdRp), synthesizes a negative-sense RNA using the positive-sense genomic RNA as the template, resulting in a dsRNA intermediate, which is unwound by NS3 to generate separate negative-sense and positive-sense RNAs. The negative-sense RNA serves as a new template for the production of substantial amounts of positive-sense genomic RNA. After 5′ capping and methylation with the cooperation of NS3 and NS5, the positive-sense RNA is mature and assembled into viral particles.
Except for the first round of RNA synthesis, which directly uses viral genomic (+) RNA as a template, the opening of the dsRNA intermediate by NS3 is the prerequisite step for all the remaining steps in viral RNA synthesis. Undoubtedly, NS3 plays a crucial role in viral replication. Flaviviral NS3 has two structurally separated functional domains, the N-terminal protease domain and the C-terminal helicase domain (10). The N-terminal protease cleaves the viral single-chain polyprotein precursor into individual proteins, and it also cleaves some host factors (11)(12)(13)(14)(15)(16). The C-terminal helicase is responsible for dsRNA unwinding during viral RNA synthesis. Although previous biochemical studies revealed the helicase activity of flaviviral NS3 using the traditional electrophoretic mobility shift assay (EMSA), real-time kinetic information about dsRNA unwinding is lacking due to the limited temporal resolution of the technique (17,18). In addition, the lack of studies combining biochemical analysis of enzymatic activity and functional assays of viral genomic amplification restricts our mechanistic understanding of viral replication in the cell (19). Moreover, unlike the NS3 from other flavivirus members, studies on ZIKV NS3 are still rare. Thus, characterization of ZIKV NS3 enzymatic activity and its related function during viral genome amplification are important issues for understanding the mechanism of ZIKV replication.
Here, we establish a dsRNA/DNA unwinding assay based on fluorescence resonance energy transfer (FRET) to monitor ZIKV NS3 helicase activity in real time. We show that ZIKV NS3 has RNA and DNA helicase activity depending on ATP hydrolysis, and it preferentially unwinds RNA/DNA duplexes with a 3′ overhang in vitro. Several residues (G199, K200, D410 and K431) residing in the ATP binding site and the RNA binding site of NS3 are crucial for its helicase activity, and mutations at these sites abolish viral replication and production. Notably, we found that NS5 stimulates NS3 helicase activity through increasing its dsRNA unwinding velocity. This effect depends on the interaction between NS3 and NS5. Interrupting this interaction abolishes the stimulation of NS3 helicase activity by NS5 in vitro and leads to significant deficiencies in ZIKV replication. Our work systematically characterizes ZIKV NS3 helicase activity and suggests cooperation between NS3 and NS5 in viral replication, shedding light on the mechanism that couples ZIKV RNA unwinding and synthesis.
Clone construction
The ZIKV NS3 or NS5 sequence was amplified using gene-specific primers and then inserted into the indicated vector using the restriction enzyme digestion and ligation method. NS3 mutants were generated by site-directed mutagenesis according to the QuikChange® Site-Directed Mutagenesis manual. All mutations were confirmed by sequencing.
Expression and purification of recombinant proteins
All recombinant proteins were expressed in bacteria. In brief, Escherichia coli BL21 cells, transformed with ZIKV NS3-pET28 or NS5-pET28 plasmids, were cultured at 37°C in Luria-Bertani medium containing kanamycin (100 µg/ml) until the optical density at 600 nm reached 0.8. The cells were induced with 0.4 mM isopropyl-β-D-thiogalactopyranoside (IPTG) at 16°C for 24 h. Bacteria were harvested, then resuspended and sonicated in lysis buffer (500 mM NaCl, 5 mM MgCl2, 20 mM Tris-HCl pH 8.0, 0.5% Triton X-100, 20 mg/l RNase A, 20 mg/l DNase I, 10% glycerol, 1 mM DTT with proteinase inhibitors) on ice. The cell lysate was centrifuged at 18 000 rpm at 4°C for 45 min. His-tagged protein was purified from the supernatant with Ni-NTA agarose beads according to the manual. The eluate was dialyzed with buffer containing 100 mM NaCl, 20 mM Tris-HCl pH 8.0, 10% glycerol and 1 mM DTT. NS5 was further purified by gel filtration chromatography using a Superdex 200 10/300 GL column (GE Healthcare) with a running buffer of 20 mM Tris-HCl, 500 mM NaCl pH 8.0. The collected protein fractions were concentrated to 3 mg/ml by using a membrane concentrator with a molecular weight cutoff of 30 kDa (Millipore).
ATPase activity assay
The ATPase activity was detected with the QuantiChrom™ ATPase/GTPase Assay Kit (BioAssay Systems). The free phosphate released by ATP hydrolysis was quantified with the reagent in the kit to indicate the ATPase activity. Phosphate standards were prepared according to the manual. The reaction was performed in a final volume of 40 µl in 96-well plates. 30 nM wild-type or mutated ZIKV NS3 protein was incubated with various concentrations of ATP in buffer (20 mM Tris, 40 mM NaCl, 4 mM Mg(AcO)2, 0.5 mM EDTA, pH 7.5) at RT. An equal amount of dialysis buffer without protein was used as a control. The ATP hydrolysis reaction was terminated at 30 min by adding 200 µl reagent. After incubation for another 30 min at room temperature, the absorbance was measured at 620 nm on a plate reader (Flexstation3, Molecular Devices). The reaction velocity and ATP concentration were fitted to the Michaelis-Menten equation.
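A Michaelis-Menten fit of this kind can be reproduced in a few lines of Python with SciPy; in the sketch below the concentrations and velocities are placeholders, not values from this study:

    import numpy as np
    from scipy.optimize import curve_fit

    def michaelis_menten(s, vmax, km):
        # v = Vmax * [S] / (Km + [S])
        return vmax * s / (km + s)

    # Placeholder data: ATP concentrations (mmol/l) and reaction velocities
    atp = np.array([0.05, 0.1, 0.2, 0.5, 1.0, 2.0])
    v = np.array([0.9, 1.3, 1.8, 2.2, 2.5, 2.6])

    (vmax, km), _ = curve_fit(michaelis_menten, atp, v, p0=(2.5, 0.1))
    print(f"Vmax = {vmax:.2f}, Km = {km:.3f} mmol/l")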
RNA transcription, transfection and luciferase activity measurement
RNA derived from the ZIKV replicon or infectious DNA clone was transcribed in vitro using the mMESSAGE mMACHINE™ T7 kit (Invitrogen Cat# AM1344) according to the instruction manual. 1 µg DNA template was added into a 20 µl reaction with an additional 2 µl of 30 mM GTP solution. The reaction mixture was incubated at 37°C for 4 h, followed by the addition of 0.5 µl DNase I (RNase free, Takara) to remove the DNA template. The RNA was extracted by the RNA Fast 200 Extraction Kit (Fastagen) and stored at −80°C in aliquots. BHK cells (in 48-well plates) were transfected with 0.5 µg viral replicon RNA using Viafect (Promega) transfection reagent. At 9 h and 36 h post-transfection, cells were lysed with 50 µl 1× Renilla luciferase lysis buffer (Promega). 10 µl lysates were mixed with 30 µl Renilla luciferase substrates to measure the luciferase signals in a luminescence microplate reader (Promega) according to the manufacturer's protocol.
Immunofluorescence staining
Cells were fixed with 4% paraformaldehyde for 10 min and washed three times with PBS, then permeabilized with PBS containing 0.2% Triton X-100. After blocking with 3% BSA, cells were incubated with the J2 anti-dsRNA antibody at 37°C for 2 h, washed three times with PBS, followed by goat anti-mouse IgG conjugated with Alexa Fluor® 488 at 37°C for 1 h. After three washes with PBS, cells were stained with DAPI and mounted with ProLong Gold antifade reagent. Images were captured using a confocal microscope (Zeiss, Germany) at 40× magnification.
Recombinant ZIKV infection
Vero cells were infected with culture medium containing recombinant Zika virus. Forty-eight hours after infection, cells were fixed and stained with an anti-ZIKV envelope protein antibody.
Real-time RT-PCR
ZIKV genomic RNA derived from infectious DNA clones was transfected into BHK-21 cells. RNA was extracted from the culture medium at 10 and 72 h by using the RNA Fast 200 Kit (Fastagen) according to the manufacturer's instructions. Viral load in the supernatants was measured by RT-qPCR in a Bio-Rad real-time thermal cycler CFX96 using a one-step RT-qPCR SYBR Green kit (TIANGEN). The primers for qPCR are 5′-GGTCAGCGTCCTCTCTAATAAACG-3′ and 5′-GCACCCTAGTGTCCACTTTTTCC-3′, targeting the ZIKV NS5 gene. Reaction conditions were as follows: 50°C for 30 min, 95°C for 2 min, followed by 39 cycles of 94°C for 20 s, 58°C for 20 s and 68°C for 20 s. Cq is the quantification cycle as calculated by the Bio-Rad CFX Manager 2.1 based on the MIQE guidelines. Dissociation curves were performed after each PCR run to ensure that a single PCR product had been amplified. The MIQE checklist is provided in Supplementary Table S1.
Newly transcribed RNA labeling and specific-strand RNA quantification
Newly transcribed RNA was labeled as previously described (20,21). Briefly, cells were transfected with viral RNA transcripts derived from infectious clones and then treated with s4U for 3 h in the dark at 9 or 48 h post transfection. Then RNA was extracted by the RNA Fast200 kit. s4U-RNA was conjugated with MTS-biotin-XX at RT for 30 min and purified by the Fast200 kit to remove redundant MTS-biotin. Biotin-labeled RNA was isolated by Dynabeads MyOne Streptavidin C1 magnetic beads (Life Technologies, cat. no. 65001). After three washes, biotin-RNA was eluted and purified to remove DTT and excess salts in elution buffer. 10% input and eluted samples were reverse-transcribed with a tagged (+)-strand-specific primer and quantified by real-time PCR.
Co-Immunoprecipitation
HEK293T cells were co-transfected with plasmids encoding ZIKV NS2ABNS3-myc and NS5-HA and then harvested at 24 h after transfection. Cell lysates were collected, and Myc antibody (MBL) was added into the lysates and incubated for 2 h at 4°C, followed by adding 20 µl of protein A/G beads to the protein solution mixture and incubating for 2 h at 4°C with gentle rotation. The mixture was centrifuged at 12 000 rpm for 2 min, and the supernatant was discarded. Protein A/G beads were washed three times with wash buffer (20 mM Tris-HCl pH 8.0, 0.1 mM EDTA, 300 mM NaCl). Then the beads were resuspended in 1× protein loading buffer and boiled for 10 min. The supernatant was analyzed by western blot. For the co-immunoprecipitation experiment using purified proteins, equal amounts (molar ratio) of purified NS3 and NS5 proteins were incubated for 2 h at 4°C, and then the same co-immunoprecipitation procedure was carried out.
Statistical analysis
The error bars in the figures represent the standard error of the mean of three repeats or duplicates as indicated. The statistical significance of differences between groups was evaluated by the two-tailed unpaired Student's t-test. *P-value < 0.05 was considered statistically significant, **P-value < 0.01 very significant, and ***P-value < 0.001 extremely significant.
ZIKV NS3 has intrinsic ATPase activity
ZIKV NS3 has two functional domains, the N-terminal protease domain (residues 1-174) and the C-terminal helicase domain (NS3H, residues 175-617, Figure 1A). To characterize ZIKV NS3 helicase activity, we expressed and purified full-length NS3 (NS3F, residues 2-617) and its helicase domain (NS3H, residues 175-617) from E. coli. Coomassie Blue staining and Western blotting showed that both proteins had correct molecular weights, consistent with the theoretical calculation (Figure 1B and C). Helicases usually hydrolyze ATP to provide energy for unwinding DNA or RNA duplexes (22)(23)(24). We first examined the ATPase activity of ZIKV NS3 using a colorimetric assay with the Malachite Green reagent, which can detect free inorganic phosphate released in the ATP hydrolysis reaction. The data were presented as a double reciprocal plot that was fitted according to the Michaelis-Menten equation. As shown in Figure 1D, the purified full-length NS3 and its helicase domain had similar ATP hydrolase activity (NS3F: Vmax = 2.76 µmol/l·min and Km = 0.11 mmol/l, NS3H: Vmax = 2.37 µmol/l·min and Km = 0.12 mmol/l). The N-terminal protease domain appears to have a very weak effect on ATPase activity. Adding DNA or RNA substrate (30 nM) into the ATP hydrolysis reaction did not apparently affect NS3 ATPase activity (Figure 1E and Supplementary Figure S1). Thus, similar to other DExH family helicases, ZIKV NS3 has intrinsic ATPase activity without RNA/DNA binding (23,25,26).
ZIKV NS3 selectively unwinds dsDNA/RNA with 3′ overhangs
Next, we tested whether ZIKV NS3 has helicase activity in vitro. An unwinding assay based on the FRET effect was set up to monitor the separation of the double-stranded molecule by ZIKV NS3 in real time (Figure 2A) (27). The two complementary nucleic acid strands were labeled with Cy3 or Black Hole Quencher-2 (BHQ2). When the complementary strands were annealed into double strands, the fluorescence emitted by Cy3 was mostly absorbed by BHQ2, resulting in low Cy3 fluorescent signals. With the opening of the duplexes by helicase, the quenching of Cy3 by BHQ2 is relieved; thus, Cy3 fluorescence increases accordingly (Figure 2A). The FRET-based unwinding assay has evident advantages in its temporal resolution and quantification.
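To make the quantification concrete, one simple way to convert such traces into an unwound fraction is to normalize the Cy3 signal between the quenched (annealed) baseline and the signal of fully separated strands. The sketch below is illustrative only, with placeholder readings, and is not the analysis code used in this study:

    import numpy as np

    def unwound_fraction(trace, f_quenched, f_unwound):
        """Normalize a Cy3 time trace to the fraction of duplex unwound."""
        trace = np.asarray(trace, dtype=float)
        return (trace - f_quenched) / (f_unwound - f_quenched)

    # Placeholder readings (arbitrary units) as the helicase opens the duplex
    trace = [100, 120, 150, 190, 230, 250]
    print(unwound_fraction(trace, f_quenched=100, f_unwound=400))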
To understand the selectivity and polarity of ZIKV NS3 during substrate unwinding, three pairs of complementary RNA/DNA strands were synthesized and annealed to produce dsRNA/DNA with a 3′ overhang, a blunt end or a 5′ overhang (Figure 2B and C). We found that ZIKV NS3 unwound both dsRNA and dsDNA in vitro, even though only dsRNA is its bona fide substrate during viral replication. In addition, ZIKV NS3 preferred to unwind dsRNA and dsDNA that had a 3′ overhang (Figure 2B and C), with weak unwinding activity for substrates with blunt ends or a 5′ overhang. This finding indicated the 3′ to 5′ directionality of the enzyme, suggesting that the 3′ overhang serves as the loading strand. Interestingly, although similar ATPase activity was shown between NS3H and NS3F, NS3H displayed much weaker helicase activity than NS3F at the same concentration (Figure 2B and C). Active DNA/RNA unwinding could be observed only if a high concentration of NS3H proteins (3 µM) was used in the assay (Supplementary Figure S2). This finding suggests that the NS3 protease domain could be an intramolecular cofactor that helps the NS3 helicase domain to open the duplex, probably through either stabilizing the helicase conformation or promoting binding between helicase and the substrate (28).
To avoid ATP-independent dsRNA/DNA unwinding under high helicase-to-substrate ratio conditions (29), we used a 4:1 helicase-to-substrate ratio in the unwinding assay, which is much lower than that used in earlier studies (30)(31)(32)(33). At this ratio, no ATP-independent dsRNA unwinding or protein-induced fluorescence enhancement (PIFE) was observed (Supplementary Figure S3A). This also resulted in a relatively low unwound fraction (about 7%). When the helicase-to-substrate ratio was increased to 100:1, the unwound fraction could be elevated to about 40% (Supplementary Figure S3B).
We further explored whether the length of the substrate affected the unwinding reaction of ZIKV NS3. The data showed that with increasing length of dsRNA, the unwinding activity of NS3 dropped dramatically (Figure 2D). NS3 almost lost its ability to separate the 23 bp RNA duplex. Our unwinding assay is an 'all-or-none' reaction in consideration of its 'end-point' signal detection, while partially separated duplexes do not cause fluorescence changes. Therefore, it is possible that NS3 partially opened the longer RNA duplex but failed to finish the translocation throughout the full-length RNA strand, probably due to insufficient processivity. This is consistent with previous reports that hepatitis C virus (HCV) NS3 would drop or slip backward from the duplex at 18 bp, where it encountered an energy barrier (34,35).
ZIKV NS3 helicase activity relies on its ATPase activity and substrate binding
Does ZIKV NS3 helicase activity rely on ATP hydrolysis? To answer this question, we constructed NS3 ATPase-inactive mutants and examined their helicase activity. Sequence alignment of NS3 proteins from flavivirus family members (Supplementary Figure S4) demonstrates that residues G199 and K200 are highly conserved, and reside in the NS3 ATP binding motif (Figure 3A) (7,17,(36)(37)(38). ATP hydrolysis assays showed that both G199A and K200A mutations significantly impaired NS3 ATPase activity (Figure 3B). Accordingly, when these two mutants were applied in the dsRNA unwinding assay, they displayed attenuated RNA helicase activity with a slower unwinding rate (Figure 3C and D), indicating that NS3 helicase activity relies on ATP hydrolysis.
Additionally, we introduced mutations into the RNA binding region of NS3 to interrupt its recognition of RNA. Structural studies showed that single-strand RNA binds to a positively charged tunnel surrounded by subdomains I, II and III of the NS3 helicase domain (36,37), where residues D410 and K431 of NS3 interacted with each other and formed hydrogen bonds with RNA (Figure 3A). Therefore, we assumed that these two residues are important for the interaction between NS3 and RNA. We mutated D410 and K431 to alanine and tested the enzymatic activity of the mutants. Similar (D410A) or slightly increased (K431A) ATPase activity was detected compared to that of wild-type (WT) NS3 (Figure 3E), implying that these mutations did not inhibit NS3 ATPase activity, which is reasonable because these two residues are far from the ATP binding site. However, these two mutations impaired NS3 association with RNA (Figure 3F). Consistently, in the dsRNA unwinding assay, both mutants showed decreased helicase activity compared to that of WT NS3 (Figure 3G and H). Clearly, disrupting the interaction between NS3 and the RNA substrate also affects its helicase activity. Conclusively, ZIKV NS3 unwinds the RNA duplex in a manner dependent on ATP hydrolysis and RNA binding.
ZIKV NS3 helicase activity is essential for viral replication
We have shown that ZIKV NS3 has ATPase and RNA helicase activity and characterized its enzymatic properties in vitro. During ZIKV replication in the cell, NS3 unwinds the dsRNA intermediate to ensure that RNA synthesis is subsequently carried out by NS5. We hypothesized that the same unwinding mechanism revealed by our in vitro assay would apply to NS3 during ZIKV replication in vivo. To test this idea, we examined viral replication using a luciferase reporter assay based on the ZIKV replicon (39). The ZIKV replicon can mimic viral replication in the cell, and contains the genomic RNA sequence except for the viral structural proteins, which are replaced by the Renilla Luciferase reporter. Thus, the luciferase activity can be detected to monitor viral replication in the cell. In the assay, an immediate luciferase activity (9 h post transfection) comes from the protein translation of the transfected replicon RNA. Only if the replicon successfully replicates in cells can a robust luciferase activity (36 h post transfection) be detected. We introduced NS3 mutations (G199A, K200A, D410A and K431A) that affect NS3 helicase activity in vitro into the ZIKV replicon and detected their luciferase activity. At 9 h post transfection, both WT and mutated replicons showed similar luciferase activity, suggesting similar RNA transfection efficiency and protein translation level. The WT ZIKV replicon showed robust luciferase activity at 36 h post transfection, indicating successful replication in BHK-21 cells (Figure 4A and Supplementary Figure S7). A replication-incompetent mutant, NS5 GAA (RNA polymerase inactive mutation, residues 664-666, GDD to GAA), was used as a negative control and showed very little luciferase signal at 36 h post transfection. As shown in Figure 4A, all NS3-mutated replicons displayed very low luciferase activity at 36 h, implying little to no viral replication.
To confirm that NS3 mutations lead to the failure of viral RNA amplification, we stained the cells with the J2 antibody, which specifically recognized dsRNA, after transfection with the WT or mutated ZIKV replicon. Consistent with the results from the luciferase assay, dsRNA was detected in BHK cells transfected with the WT ZIKV replicon ( Figure 4B). However, few fluorescent signals were observed in the groups transfected with either NS3-mutated replicons or NS5-mutated replicon ( Figure 4B), affirming that the NS3 mutations blocked viral replication at the step of RNA synthesis.
The crucial role of NS3 in ZIKV production was further evaluated via preparation of recombinant ZIKV (rZIKV) (40). An infectious ZIKV cDNA clone was used as a template for viral genomic RNA transcription in vitro (Supplementary Figure S7). Viral RNA was then transfected into BHK cells to produce rZIKV, and rZIKV released into the culture medium was collected and quantified by qPCR. Viral RNA derived from the WT infectious cDNA clone successfully produced rZIKV, which could infect Vero cells again (Figure 4C and D). All NS3 mutations (G199A, K200A, D410A and K431A) failed to produce rZIKV (Figure 4C and D). We labeled and purified newly transcribed RNA containing 4-thiouridine (s4U) and detected viral RNA by RT-qPCR at 48 h post transfection, and found that NS3 mutations severely inhibited viral RNA synthesis in cells (Supplementary Figure S8) (20,21,41). Collectively, these results confirmed that ZIKV NS3 helicase activity is essential for viral replication, which requires ATP hydrolysis and substrate binding.
ZIKV NS3 helicase activity is stimulated by NS5
Flaviviral replication is accomplished by multicomponent complexes, in which two enzymatic proteins, NS3 and NS5, play important roles (42). During viral RNA replication, dsRNA unwinding by NS3 is essential for almost all NS5-dominated RNA synthesis, except for the first round. It is likely that dsRNA unwinding and subsequent RNA synthesis are coupled to each other for efficient viral replication (31,43,44). Thus, the hypothesis that NS5 could act as a cofactor to facilitate NS3 function is reasonable. To test this, we purified ZIKV NS5 proteins and examined whether NS5 affects NS3 helicase activity. NS5 did not influence the ATPase activity of either the full-length NS3 or the NS3 helicase domain (Figure 5A and B). We used half the amount of NS3 (100 nM) in the unwinding assay to examine the effect of NS5, and we found that NS5 specifically stimulated NS3 helicase activity during the opening of dsRNA with a 3′ overhang, but not dsRNA with blunt ends or a 5′ overhang (Figure 5C and D). Similarly, NS5 also specifically stimulated NS3 to unwind dsDNA with a 3′ overhang (Supplementary Figure S5). Additionally, NS5 cannot increase the Cy3 signal by itself, ruling out a PIFE effect derived from NS5 proteins (Supplementary Figure S3A). Interestingly, NS5 selectively facilitated the unwinding of dsDNA, but not dsRNA, by NS3H (Supplementary Figure S6), suggesting specific substrate recognition.
Increasing the kinetics of RNA unwinding means that more duplexes are opened per unit time, which could occur via two possibilities. The first possibility is that NS3 walks 'faster' on the RNA strand with NS5 assistance. The other one is that NS5 improves the processivity of NS3, namely, more NS3 successfully walks through the full-length RNA duplex rather than dropping off the RNA duplex in the middle. We then tested whether NS5 could promote NS3 helicase activity to unwind longer dsRNA and found that no obvious improvement was observed when 20 or 23 bp dsRNA probes were used as substrates (Figure 5E), ruling out the latter possibility. Together, these data suggest that NS5 stimulates NS3 helicase activity through increasing its unwinding velocity rather than improving its processivity on the nucleic acid strand.
The ZIKV NS3-NS5 interaction is crucial for dsRNA unwinding during viral replication
Previous studies reported that the interaction between NS3 and NS5 is important for viral replication in HCV and DENV (45)(46)(47). Does ZIKV NS3 also interact with NS5? We performed co-immunoprecipitation and found that they interacted with each other in cells (Figure 6A). In addition, co-immunoprecipitation using purified ZIKV NS3 and NS5 proteins indicated a direct interaction between them (Figure 6B). Then we tried to determine whether NS5-stimulated NS3 helicase activity depends on their interaction. To this end, we needed to introduce mutations into the NS3 protein that could disrupt the NS3-NS5 interaction without affecting the enzymatic function of NS3. The C-terminal region of the DENV NS3 protein has been reported to be important for NS5 binding, among which the peptide containing residues 566-585 is far from the ATP and RNA binding regions according to the crystal structure (Figure 6C) (48). Therefore, a mutation in this peptide is likely to disrupt the NS3-NS5 interaction while maintaining NS3 enzymatic function. Two conserved residues, N569 and E573, were mutated to alanine (Supplementary Figure S4), and this mutant's interaction with NS5 and its enzymatic activity were tested. The N569A/E573A mutation significantly interrupted the NS3-NS5 interaction, as indicated by co-immunoprecipitation (Figure 6D and E). As we predicted, the N569A/E573A mutation affected neither ATP hydrolysis nor dsRNA unwinding by NS3 (Figure 6F and G). However, NS5 could no longer facilitate the helicase activity of mutated NS3 in the RNA unwinding assay as observed for WT NS3 (Figure 6G), indicating that the promotion of NS3 helicase activity by NS5 relies on the interaction of the two proteins.
The connection of ZIKV NS3 helicase activity to NS5 in vitro suggests that the cooperation between them might be important for viral replication. Thus, disrupting their interaction may lead to defects in viral replication. We performed a ZIKV replicon luciferase assay with the NS3 N569A/E573A mutation and found that the mutation blocked viral replication (Figure 7A and Supplementary Figure S7). Immunostaining for dsRNA verified that viral RNA synthesis was blocked when the NS3 N569/E573 residues were mutated (Figure 7B). Moreover, the NS3 N569A/E573A mutation significantly decreased recombinant ZIKV production because of viral RNA synthesis deficiency (Figure 7C, D and Supplementary Figure S8). Of course, we cannot rule out the possibility that NS3 may also facilitate NS5 polymerase activity during viral RNA synthesis (49). Disrupting the interaction of these proteins might also affect the RNA synthesis step. Thus, although the individual enzymatic functions of NS3 and NS5 remained intact, uncoupling dsRNA unwinding and RNA synthesis by interrupting the NS3-NS5 interaction still results in serious viral replication defects. Combined with our results in vitro, we suggest that NS5 is a key cofactor of NS3, whose facilitation of NS3 helicase activity is crucial for ZIKV replication.
DISCUSSION
[Figure 5 caption, fragment: Quantification of the initial dsRNA unwinding rate of NS3 with/without NS5 in the left panel of (C). The initial unwinding rate is calculated by a linear fit of the first 5 min of the reaction, and the initial unwinding rate of NS3F is defined as 100%. Error bars represent the standard error of triplicate measurements (**P < 0.01). (E) Duplex unwinding assay of NS3F with/without NS5 on dsRNA of different lengths with 3′ overhangs. The unwinding reactions were performed with 100 nM NS3F and 100 nM NS5.]
ZIKV NS3 belongs to the superfamily 2 (SF2) DExH group of helicases, like its homologues in other flavivirus family members (23). Here, we systematically characterized the enzymatic activity of the ZIKV NS3 protein, which plays an essential role in ZIKV replication with the assistance of NS5. We found that NS3 possessed intrinsic ATPase activity, a feature also observed for the HCV NS3 protein (26,50). Earlier reports showed that oligonucleotides such as poly(A) and poly(U) stimulated flaviviral NS3 ATPase activity (31,50). We did not observe apparent stimulation of NS3 ATPase activity by oligonucleotides, probably because the oligonucleotide concentrations we used (30 nM or 3 µM) are much lower than others (several hundred µM). We cannot rule out the possibility that oligonucleotides would stimulate NS3 ATPase activity under some conditions.
[Figure 6 caption, fragment: The RNA is shown in orange, and ATP is shown as a pink dot. (D) The N569A/E573A mutation interrupts the NS3-NS5 interaction. Co-immunoprecipitation was carried out with NS5 and WT or mutated NS3 proteins and then analyzed by western blotting. (E) Quantification of the interaction between NS5 and WT or mutated NS3 in (D) (***P < 0.001). Error bars represent the standard error of triplicate measurements. (F) The N569A/E573A mutation does not affect NS3 ATPase activity. 30 nM enzyme was used to hydrolyze ATP. Error bars represent the standard error of duplicate measurements. (G) dsRNA unwinding assay of WT or N569A/E573A mutated NS3 with/without NS5. 16 bp dsRNA with a 3′ overhang was used as substrate. The reactions were carried out with 100 nM NS3F (or mutants) and 100 nM NS5.]
Our FRET assay directly provided real-time dsRNA/dsDNA unwinding kinetics for ZIKV NS3. We showed that ZIKV NS3 unwound both DNA and RNA duplexes in vitro without specific selectivity. We mainly focus on the unwinding of dsRNA by NS3 because the dsRNA intermediate is the bona fide substrate during flaviviral replication. Our data clearly supported the idea that ZIKV NS3 is a canonical RNA helicase with specific polarity, which requires a 3′ flanked single strand as a loading strand and unwinds dsRNA or dsDNA in the 3′ to 5′ direction (51). On the issue of whether the N-terminal segregated protease domain affects ATPase and helicase activity, earlier studies on NS3 from different Flaviviridae members provided controversial conclusions (13,32,52,53). One study compared ATPase and helicase activity of NS2B-NS3 and the NS3 helicase domain from Murray Valley encephalitis virus, while no apparent difference between them was observed (13). In another report, the DENV NS3 protease domain seemed to inhibit NS3 ATPase activity (32). In contrast, Luo et al. showed that DENV full-length NS2B18NS3 had higher ATPase and helicase activity than the NS3 helicase domain, and the linker between the protease and helicase domains displayed functional significance (53). Similarly, the protease domain of HCV NS3 stimulated its helicase activity (52). We found that although the ZIKV NS3 protease domain does not change NS3 ATPase activity, it is necessary for optimal helicase activity.
We also identified several key residues for the helicase activity of ZIKV NS3. These residues are located in either the ATP binding site or the RNA binding region, whose mutations impair NS3 helicase activity in vitro and severely interrupt viral replication in the cell. Evidently, the unwinding of dsRNA (helicase activity) by NS3 requires energy from ATP hydrolysis (ATPase activity), although its ATPase activity is independent of RNA binding. Notably, some NS3 mutants displayed even more serious defects in viral replication than the relatively mild impacts on NS3 helicase activity that were observed in the RNA unwinding assay. This discrepancy could be because unwinding a much longer viral dsRNA intermediate (∼11 kb) is more difficult than unwinding the 16 bp dsRNA probe used in our unwinding assay (54). Therefore, even a mild compromise in NS3 helicase activity could lead to the abortion of viral replication, suggesting that optimal NS3 activity is necessary for successful viral replication under stringent cellular conditions.
[Figure 7. The NS3-NS5 interaction is crucial for viral replication. (A) Luciferase assay of the ZIKV replicon. BHK-21 cells were transfected with WT or NS3 N569A/E573A-mutated ZIKV replicon RNA, and luciferase activity was measured at 9 h and 36 h post transfection. Error bars represent the standard error of triplicate measurements. (B) Immunostaining to detect viral dsRNA after transfection of the WT or NS3-mutated ZIKV replicon in BHK-21 cells. (C) Quantification of recombinant ZIKV released into the culture medium after transfection of infectious WT or NS3-mutated ZIKV genomic RNA into BHK-21 cells by RT-qPCR (***P < 0.001). Error bars represent the standard error of triplicate measurements. (D) Infection assay with recombinant ZIKV. BHK-21 cells were transfected with WT or mutated infectious ZIKV genomic RNA, and recombinant ZIKV was collected to infect Vero cells again. Infected Vero cells were stained with an antibody specific for the ZIKV E protein. (E) ZIKV NS3 is a canonical RNA helicase stimulated by NS5 RNA polymerase. NS3 first loads onto the single-stranded region of viral dsRNA with a 3′ overhang. Depending on ATP hydrolysis, NS3 unwinds dsRNA in the 3′ to 5′ direction. ZIKV NS5, the RNA-dependent RNA polymerase, can stimulate NS3 helicase activity. The tight coupling of dsRNA unwinding by NS3 and RNA synthesis by NS5 ensures the high efficiency of viral replication.]
Interestingly, it turned out that ZIKV NS3 cannot efficiently open dsRNA longer than 18 bp, implying that it has limited processivity in duplex unwinding. The poor processivity of ZIKV NS3 makes it difficult to imagine that a single helicase alone could unwind the 11 kb long viral dsRNA intermediate. In this case, persistent unwinding of viral dsRNA would require either the cooperation of multiple monomeric/oligomeric NS3 molecules or cofactors that improve the activity of NS3 on the RNA strand (55)(56)(57)(58). Considering that dsRNA unwinding and RNA synthesis are closely connected steps in flavivirus replication, we examined whether the ZIKV polymerase, the NS5 protein, is a cofactor of NS3. We found that NS5 interacts with NS3 and promotes NS3 helicase activity, but not ATPase activity. Kinetic analysis indicated that NS5 could increase NS3 unwinding velocity without improving its processivity on the RNA duplex. Increasing evidence argues that helicases are often regulated by a variety of cofactors in multiple processes (33,59). In particular, as DNA/RNA amplification is usually coupled to duplex unwinding, the facilitation of helicase activity by polymerase is generally observed between DNA polymerases and helicases in eukaryotes, bacteria and viruses (31,(60)(61)(62). Combining our results and other studies, we propose a ZIKV dsRNA unwinding model (Figure 7E). ZIKV NS3 is a canonical helicase that loads on the 3′ overhang of dsRNA, and then unwinds viral dsRNA in the 3′ to 5′ direction in an ATP-dependent manner. ZIKV NS5 stimulates the unwinding activity of NS3, probably by facilitating the transition of NS3 to its catalytically active conformation, or by destabilizing the RNA duplex. The NS3-NS5 interaction may couple the unwinding step to the synthesis step. Thus, NS5 facilitates NS3 in unwinding the viral dsRNA and synthesizes the new RNA strand, which could be an optimized strategy for viral replication.
In summary, our work characterized the helicase activity of ZIKV NS3 and revealed its essential role in ZIKV replication. NS3 helicase activity can be stimulated by NS5, providing novel insight into ZIKV replication. Single-molecule studies of flaviviral NS3 translocation on the RNA strand may offer more in-depth understanding in the future.
SUPPLEMENTARY DATA
Supplementary Data are available at NAR Online.
|
Kiss of Midnight
This is the first book in the Midnight Breed series
Back Cover
Summary
This is the story of Lucan and Gabrielle.
Characters
* Lucan Thorne-
* Gabrielle Maxwell-
* Dante-
* Nikolai-
* Conlan-
* Rio-
* Gideon-
* Tegan-
* Eva-
* Savannah-
* Danika-
* Jamie-
* Kendra-
|
Annotating unmodifiable collections
In my SDK, I want to let users know that a list they are receiving should not be modified. Are there any annotation options for this, or can this only be in the documentation?
I don't think there is an annotation to do this, so I would just add "Immutable" to the class name to make it clear. If you can't change the class name, then I think documentation is the best option you have.
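For example, absent a standard annotation, you can combine Javadoc with an unmodifiable view so that misuse fails fast at runtime (a sketch with made-up class and method names):

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;

    public final class Catalog {
        private final List<String> entries = new ArrayList<>();

        /**
         * Returns the catalog entries.
         *
         * @return an unmodifiable view of the entries; any attempt to
         *         modify it throws UnsupportedOperationException
         */
        public List<String> getEntries() {
            return Collections.unmodifiableList(entries);
        }
    }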
You could use the Checker Framework for this https://checkerframework.org/
|
Dichotomising vs keeping categories in regression
I'm doing an analysis to see if having children is associated with treatment outcome.
The children variable is categorical and takes values 0-7.
When I dichotomise it into 'children' and 'no children', there is no effect.
But if I keep all the levels, there is a significant association with having 3 children.
I'm not sure how to interpret it. Should I keep all the categories? Is this a real finding or a fluke? I know dichotomising is generally not a good idea, but I thought it's warranted with such a variable, where Yes and No are so clear-cut.
For context, this is not my main research question, I'm looking at many potential risk factors and it's just one of them so I don't necessarily need to analyse it too deeply.
Why did you decide to treat number of children as categorical? Wouldn't "having exactly 3 children" be a rather strange risk factor?
@dipetkov I guess that's exactly what I'm asking, I do think it's a strange risk factor but couldn't decide if I should throw away this 'finding'. Or do you mean it should be treated as numeric? I tried that but the association was not significant.
To summarize: if you consider whether a person is a parent or not, the association is not significant; if you look at the association with number of children instead (did you assume any possible relationship is linear?), it's also non-significant. But if you look at having exactly 3 children, then that might be a real finding. The procedure you've described doesn't seem particularly rigorous.
Your summary is correct except that I didn't specifically check if having 3 children is a predictor. I first looked at the variable as it was (numeric) and there was nothing, so I decided to turn it into a factor in case the relationship is indeed non-linear, that's how the 3 children came up. Would very much appreciate if you could describe a more rigorous way to handle this!
This is a fun question, but it may require some discussion in the comments before we arrive at an answer. Here is another observation, based on your parameter estimates: the risk decreases as you have 1, 2 or 3 children, then increases as you have 4 or 5 children, finally decreases again with 6 or 7 children. That does not seem to make a lot of ecological sense. I can picture a U-shaped effect, but not really something that looks like a cubic. I would very much recommend including the number of children as a numeric predictor, and quite possibly its square.
You are completely right that dichotomising is frowned upon (here). But categorising an inherently numerical variable like "number (!) of children" is pretty much the same. It amounts to treating the relationship between $n$ and $n+1$ children as precisely the same as the relationship between any $n$ and $m$ children, and that does not make a whole lot of sense: the outcomes between 1 and 2 children should be more strongly related to each other than the outcomes between 1 and 7 children.
A flexible way to allow for non-linearity, if supported by the data, is to use splines. I would also make (and show) a plot of the raw data.
Can you edit your post to include your data (ideally by pasting in the output of dput(...) applied to your dataframe) and your model?
If there is any association, it maybe because number of children is a proxy for other life choices / circumstances. So another drawback of your analysis is that you seem to have other covariates but you are looking at the association with number of children in isolation from everything else you know about the subjects.
Also @dipetkov According to my answer there's no indication of anything meaningful going on here. I'd advise against nonlinear modelling here because doing even more based on looking at the data first will involve even more "data dredging" and p-values become even less useful after such actions. (I'd say different things if there were a clear indication that something nonlinear is in fact going on.)
@ChristianHennig: yes, your point about changing the model after looking at it invalidating p values is of course correct. To which I would answer that including the number of children as a categorical covariate should never have been done in the first place. I did not need to see the parameter estimates to know that this does not make ecological sense. So I would still advocate changing the model to a linear + square effect (or use splines, but I think that might be overkill here) to account for a potential U-shape, and then look again at the results.
@StephanKolassa Looking at the result I could well imagine a monotonic/linear relationship with just not enough information at >3 children to see it. Chances are this wouldn't turn out significant on these data let alone involving a squared term, but after having seen the given results already everything is "lost" anyway.
@ChristianHennig: we can certainly agree to disagree here. I would just like to state for the record (and the OP) that my position is not quite as strict as yours: if I see the results of an analysis that I can argue is problematic without relying on the results, I would definitely recommend running a correct analysis over interpreting the results of an inappropriate analysis "by the book". YMMV.
@StephanKolassa unfortunately, the output of dput() is massive, not sure how to make it more concise. To you other point, does it still make sense to interpret the observations even if they are not significant? (referring to risk changes as the number of children goes up)
@dipetkov, you're absolutely correct, there are other covariates. I have hundreds of variables and run univariate analyses first to select stepwise the ones I'll keep for the multivariate model. Is that a wrong way to do it?
I would not start interpreting insignificant effects - they might be just random noise, especially since you presumably have very few data points with 6 or 7 children. (And because I still believe an analysis with categorical data makes no sense.)
Yes, I think the kind of univariate selection you describe has drawbacks. It is also a separate question from the one you've posted. You could consider writing a new question that the describes your data, the ultimate purpose of your analysis, your proposed approach and ask for feedback about that.
Forgive me if this is too much of an elementary question, but from your post, I read your research question as being, "Is having children associated with the outcome?", which to me, seems like you are only interested in the dichotomised yes/no response, anyway.
@Trypanosoma that's a good clarification. I'm interested in finding all possible predictors of treatment outcome and children is just one of the possible predictors. There's no established question whether it's yes/no or number of children hence checking both.
I would treat no. of children (n) as a continuous variable. You would expect a priori that the more children you've got, the less difference another one would make. This assumption could be taken into account by using sqrt(n) or log(n+1) as the predictor. Looking at your results, I guess you would get a significant slope out of this.
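For instance, in Python with statsmodels this might look like the following; the data and variable names are made up, and a binary treatment outcome is assumed:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    df = pd.DataFrame({
        "children": rng.integers(0, 8, 500),   # 0-7 children
        "outcome": rng.binomial(1, 0.3, 500),  # 1 = good treatment outcome
    })
    df["log_children"] = np.log(df["children"] + 1)

    # One-parameter, monotone, diminishing-returns effect of children
    fit = smf.logit("outcome ~ log_children", data=df).fit()
    print(fit.summary())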
If you test every number of children separately, you're running seven tests. According to Bonferroni, for making sure that you have a probability of smaller than 5% to find a significant result if in fact nothing is going on, you've got to run every single test at level 0.05/7. This is smaller than the p-value for 3 children. From this point of view nothing significant is going on in either situation. You may well keep all categories, but this doesn't give you a meaningful significance. (Obviously you could find that p-value suspicious and collect much more data to see whether it becomes even more significant, but it may not be worth the hassle.)
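The arithmetic is simple enough to check directly (the p-value below is a placeholder, since the question doesn't report exact numbers):

    alpha, m = 0.05, 7          # family-wise level, number of tests
    threshold = alpha / m       # 0.00714... per-test level
    p_three_children = 0.02     # placeholder p-value for the "3 children" test
    print(threshold, p_three_children < threshold)  # -> 0.0071... False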
Added later: There was some discussion about whether it is wrong to treat the number of children as categorical, actually being a number, and what to do instead.
Generally I agree with the view that it makes sense to use the numerical meaning of the data. However this will usually involve assumptions about the functional form of the relationship. An important principle is that all available subject-matter knowledge should be used before actually analysing the data, as making modelling decisions based on the data will often produce selection bias effects and runs counter to theory behind some standard methods such as the significance tests for the coefficients.
So that first step before seeing the data should be to ask what kind of functional relationship could make sense here. The easiest assumed relationship would be linearity, but linearity may not be realistic, particularly because there may be a much bigger difference between 0 and any number of children than between two nonzero numbers of children (of course I can't tell without knowing the exact background). Also the relationship may be non-monotonic.
An additional problem is the question what can be identified from the data at what precision. Standard errors for 5, 6, 7 (maybe even 4) children suggest that there is not much data for these and whatever can be said will be very imprecise. In fact one could know this already from the plain numbers of observations falling into these categories (taking those into account will hardly bias later analyses as they don't imply information about the regression relationship).
This means that there may not be enough information to nail down confidently any relationship more complex (i.e., requiring more parameters) than a linear one, and making assumptions that require only a low number of parameters to be estimated is certainly desirable. This also means that treating the larger numbers of children as autonomous categories doesn't give useful information either. The standard errors of about 19 suggest that there is hardly any power to detect anything; for sure shown results cannot even exclude the possibility that there is a monotonic or even linear relationship (or none at all), even though the latter may not be realistic.
If relationships don't follow simple functional forms, in fact aggregation of categories can be a sensible choice; depending on the background comparing zero with existing children may be justifiable, and in general even something like "zero", "one or two", "more than two" can work better than either using all numbers as separate categories (particularly with very thin numbers on some categories) or assuming linearity or a more complex functional form. As said before, optimally such decisions should be made before seeing the data (or at least the regression relationship); however looking at the data may reveal a striking specific deviation from an initial assumption/decision, in which case model selection bias is probably less bad than sticking to an inappropriate initial decision at any cost.
From the numbers I currently see, I suspect however that clear evidence for any influence of this variable whatsoever cannot be found in the given data, and I'd be very skeptical if any of the ideas above, applied at this point, led to a just-about-significant p-value, which in that case may well be explained by model selection bias.
Note also, as discussed elsewhere, that looking at p-values often isn't a very good way of doing variable selection.
This is all completely correct, but to be honest, I can't bring myself to upvote it, because while it answers the question, it does not answer the question the OP should be asking, as long as they categorize a numerical predictor in a rather obviously ecologically invalid way...
@StephanKolassa so just to make sure I understand you correctly: number of children is the kind of variable that should never have been treated as categorical. But is it warranted, upon looking at a numeric one and getting no significant results, to dichotomise it? Only for this case, when it seems ecologically plausible that children/no children are very different from each other.
@StephanKolassa From the comments (which may be too many at this point), it seems that this is an XY question as the OP is doing univariate feature selection to choose predictors to include in the multivariate model they want to develop.
@olke: it depends. If you want to do inferential significance testing, then any kind of model selection after seeing the data invalidates the theory on which p values are predicated - you do need to specify your model before doing any fitting. (Hence the discussion between Christian and me.) If you are just doing exploratory analysis, or building a predictive model, pretty much anything goes. There is a strong temptation to overfit, so you really need to include ecological understanding, per my comments - and then you need to validate on new data.
@StephanKolassa I have added to my answer; hope that this is to your liking. ;-)
Thank you, it is, +1!
This is a great example of why it is so difficult to 'hypothesize after the results are known' and why we should make analysis plans before we start, even for observational data. Every analysis we try now will just reflect something we've already seen, and we could probably find an analysis to give whatever answer we want.
IMO we can't say whether or not having children affects your outcome based on this data. Certainly your dataset is not inconsistent with 'no effect', so that is probably how you should explain it, as that's how we normally think about statistical significance. This holds particularly if it also comes from a set of secondary outcomes that you are exploring. We also can't rule out a small effect.
From a programming point of view, for an 'honest' quantification of your results you might try using emmeans to get the base-versus-all-other group contrasts, and use whatever p-value and confidence interval correction is recommended for this kind of comparison. Or just report your initial model with 'number of kids' as a numeric, if this was supported by your theory and is what you tried initially.
Whether or not it makes sense to analyse the number of children in categories like this is up to you. Social factors might make 3 kids qualitatively different from 2; it's down to your population and your theory.
Statistical arguments aside, you could also consider the Bradford Hill criteria to check whether you believe a result is real or meaningful.
Finally, if you don't need to have an opinion about what this data means, then you don't need to overthink it. Just show your readers your model and your results and let them decide. A graph like this might be instructive:
Nice graph! ...
Why not run an ANOVA and see whether adding all the levels improves the overall fit? If it does, then you can worry about which level is driving it.
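A hedged statsmodels sketch of that comparison (df, outcome and children are assumed names, not the OP's actual data):

import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# children as a single numeric slope vs. one dummy per level
fit_numeric = smf.ols("outcome ~ children", data=df).fit()
fit_levels = smf.ols("outcome ~ C(children)", data=df).fit()

# F-test on the nested models: do the extra levels improve the fit?
print(anova_lm(fit_numeric, fit_levels))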
|
import os
from copy import deepcopy
from datetime import datetime

import luigi

from ..cluster_tasks import WorkflowBase
from ..utils import volume_utils as vu
from .. import copy_volume as copy_tasks
from . import downscaling as downscale_tasks


class WriteDownscalingMetadata(luigi.Task):
    tmp_folder = luigi.Parameter()
    output_path = luigi.Parameter()
    scale_factors = luigi.ListParameter()
    dependency = luigi.TaskParameter()
    metadata_format = luigi.Parameter()
    metadata_dict = luigi.DictParameter(default={})
    output_key_prefix = luigi.Parameter(default="")
    scale_offset = luigi.IntParameter(default=0)
    prefix = luigi.Parameter(default="")

    def requires(self):
        return self.dependency

    def _write_log(self, msg):
        log_file = self.output().path
        with open(log_file, "a") as f:
            f.write("%s: %s\n" % (str(datetime.now()), msg))

    def run(self):
        vu.write_format_metadata(self.metadata_format, self.output_path, self.metadata_dict,
                                 self.scale_factors, self.scale_offset, self.output_key_prefix)
        self._write_log("write metadata successful")
    def output(self):
        return luigi.LocalTarget(os.path.join(self.tmp_folder,
                                              "write_downscaling_metadata_%s.log" % self.prefix))


class DownscalingWorkflow(WorkflowBase):
    input_path = luigi.Parameter()
    input_key = luigi.Parameter()
    scale_factors = luigi.ListParameter()
    halos = luigi.ListParameter()
    dtype = luigi.Parameter(default=None)
    int_to_uint = luigi.BoolParameter(default=False)
    metadata_format = luigi.Parameter(default="paintera")
    metadata_dict = luigi.DictParameter(default={})
    # the output path is optional, if not given, we take the same as input path
    output_path = luigi.Parameter(default="")
    output_key_prefix = luigi.Parameter(default="")
    force_copy = luigi.BoolParameter(default=False)
    skip_existing_levels = luigi.BoolParameter(default=False)
    scale_offset = luigi.IntParameter(default=0)

    formats = vu.get_formats()

    @staticmethod
    def validate_scale_factors(scale_factors, metadata_format):
        assert all(isinstance(sf, (int, list, tuple)) for sf in scale_factors)
        ndims = (2, 3) if metadata_format == "ome.zarr" else (3,)
        assert all(len(sf) in ndims for sf in scale_factors if isinstance(sf, (tuple, list)))

    @staticmethod
    def validate_halos(halos, n_scales, ndim=3):
        assert len(halos) == n_scales, "%i, %i" % (len(halos), n_scales)
        # normalize halos
        halos = [[] if halo is None else (ndim * [halo] if isinstance(halo, int) else list(halo))
                 for halo in halos]
        # check halos for correctness
        assert all(isinstance(halo, list) for halo in halos)
        assert all(len(halo) == ndim for halo in halos if halo)
        return halos

    def _is_h5(self):
        h5_exts = (".h5", ".hdf5", ".hdf")
        out_path = self.input_path if self.output_path == "" else self.output_path
        return os.path.splitext(out_path)[1].lower() in h5_exts

    def _is_n5(self):
        n5_exts = (".n5",)
        out_path = self.input_path if self.output_path == "" else self.output_path
        return os.path.splitext(out_path)[1].lower() in n5_exts

    def _is_zarr(self):
        zarr_exts = (".zarr", ".zr")
        out_path = self.input_path if self.output_path == "" else self.output_path
        return os.path.splitext(out_path)[1].lower() in zarr_exts

    def validate_format(self):
        assert self.metadata_format in self.formats,\
            "Invalid format: %s not in %s" % (self.metadata_format, str(self.formats))
        if self.metadata_format == "paintera":
            assert self.output_key_prefix != "",\
                "Need output_key_prefix for paintera data format"
            assert self._is_n5(), "paintera format only supports n5 output"
        elif self.metadata_format == "ome.zarr":
            assert self._is_zarr(), "ome.zarr format only supports zarr output"
        else:
            # for now, we only support a single "setup" and a single
            # time-point for the bdv format
            msg = f"Must not give output_key_prefix for bdv data format, got {self.output_key_prefix}"
            assert self.output_key_prefix == "", msg
            if self.metadata_format in ("bdv", "bdv.hdf5"):
                assert self._is_h5(), "%s format only supports hdf5 output" % self.metadata_format
            elif self.metadata_format == "bdv.n5":
                assert self._is_n5(), "bdv.n5 format only supports n5 output"
            else:
                raise RuntimeError  # this should never happen
    def _link_scale_zero_h5(self, trgt):
        with vu.file_reader(self.input_path) as f:
            if trgt not in f:
                f[trgt] = f[self.input_key]

    def _link_scale_zero_n5(self, trgt):
        with vu.file_reader(self.input_path) as f:
            if trgt not in f:
                os.makedirs(os.path.split(os.path.join(self.input_path, trgt))[0], exist_ok=True)
                src_path = os.path.abspath(os.path.realpath(os.path.join(self.input_path,
                                                                         self.input_key)))
                trgt_path = os.path.abspath(os.path.realpath(os.path.join(self.input_path,
                                                                          trgt)))
                os.symlink(src_path, trgt_path)

    def _have_scale(self, scale):
        key = vu.get_format_key(self.metadata_format, scale, self.output_key_prefix)
        with vu.file_reader(self.input_path) as f:
            return key in f

    def _copy_scale_zero(self, out_path, out_key, dep, dtype, int_to_uint):
        task = getattr(copy_tasks, self._get_task_name("CopyVolume"))
        prefix = "initial_scale"
        dimension_separator = "/" if self.metadata_format == "ome.zarr" else None
        dep = task(tmp_folder=self.tmp_folder, max_jobs=self.max_jobs,
                   config_dir=self.config_dir,
                   input_path=self.input_path, input_key=self.input_key,
                   output_path=out_path, output_key=out_key,
                   int_to_uint=int_to_uint, dtype=dtype,
                   prefix=prefix, dependency=dep, dimension_separator=dimension_separator)
        return dep

    def require_initial_scale(self, out_path, out_key, dep, dtype, int_to_uint):
        """ Link or copy the initial dataset to self.output_key_prefix.

        We copy if input_path != output_path or force_copy is set.
        """
        copy_initial_ds = True if self.force_copy else out_path != self.input_path
        if copy_initial_ds:
            dep = self._copy_scale_zero(out_path, out_key, dep, dtype, int_to_uint)
        else:
            # make a link in the h5 file
            if self.metadata_format in ("bdv", "bdv.hdf5"):
                self._link_scale_zero_h5(out_key)
            # make a link on the file system
            elif self.metadata_format in ("bdv.n5", "ome.zarr", "paintera"):
                self._link_scale_zero_n5(out_key)
            else:
                raise RuntimeError  # this should never happen
        return dep

    def requires(self):
        self.validate_scale_factors(self.scale_factors, self.metadata_format)
        ndim = len(self.scale_factors[0])
        halos = self.validate_halos(self.halos, len(self.scale_factors), ndim)
        self.validate_format()

        out_path = self.input_path if self.output_path == "" else self.output_path
        in_key = vu.get_format_key(self.metadata_format, self.scale_offset, self.output_key_prefix)

        # require the initial scale dataset
        dep = self.require_initial_scale(out_path, in_key, self.dependency, self.dtype, self.int_to_uint)

        dimension_separator = "/" if self.metadata_format == "ome.zarr" else None
        task = getattr(downscale_tasks, self._get_task_name("Downscaling"))
        effective_scale = [1] * ndim
        for scale, (scale_factor, halo) in enumerate(zip(self.scale_factors, halos),
                                                     self.scale_offset + 1):
            out_key = vu.get_format_key(self.metadata_format, scale, self.output_key_prefix)

            if isinstance(scale_factor, int):
                effective_scale = [eff * scale_factor for eff in effective_scale]
            else:
                effective_scale = [eff * sf for sf, eff in zip(scale_factor, effective_scale)]

            # check if this scale already exists.
            # if so, skip it if we have `skip_existing_levels` set to True
            if self.skip_existing_levels and self._have_scale(scale):
                in_key = out_key
                continue

            dep = task(tmp_folder=self.tmp_folder, max_jobs=self.max_jobs,
                       config_dir=self.config_dir,
                       input_path=out_path, input_key=in_key,
                       output_path=out_path, output_key=out_key,
                       scale_factor=scale_factor, scale_prefix="s%i" % scale,
                       effective_scale_factor=effective_scale,
                       halo=halo, dimension_separator=dimension_separator, dependency=dep)
            in_key = out_key

        # task to write the metadata
        dep = WriteDownscalingMetadata(tmp_folder=self.tmp_folder,
                                       output_path=out_path,
                                       output_key_prefix=self.output_key_prefix,
                                       metadata_format=self.metadata_format,
                                       metadata_dict=self.metadata_dict,
                                       scale_factors=self.scale_factors,
                                       scale_offset=self.scale_offset,
                                       dependency=dep, prefix="downscaling")
        return dep

    @staticmethod
    def get_config():
        configs = super(DownscalingWorkflow, DownscalingWorkflow).get_config()
        configs.update({"downscaling": downscale_tasks.DownscalingLocal.default_task_config(),
                        "copy_volume": copy_tasks.CopyVolumeLocal.default_task_config()})
        return configs


# HDF5 is frickin slow, so it seems to be better to do the
# computations in n5 and then copy data to h5
class PainteraToBdvWorkflow(WorkflowBase):
    input_path = luigi.Parameter()
    input_key_prefix = luigi.Parameter()
    output_path = luigi.Parameter()
    dtype = luigi.Parameter(default=None)
    metadata_dict = luigi.DictParameter(default={})
    skip_existing_levels = luigi.BoolParameter(default=True)
    metadata_format = luigi.Parameter("bdv")

    def get_scales(self):
        with vu.file_reader(self.input_path, "r") as f:
            g = f[self.input_key_prefix]
            scale_names = list(g.keys())
        scale_levels = [int(name[1:]) for name in scale_names]
        return list(sorted(scale_levels))

    def requires(self):
        task = getattr(copy_tasks, self._get_task_name("CopyVolume"))
        # get scales that need to be copied
        scales = self.get_scales()
        dep = self.dependency

        prev_scale = None
        scale_factors = []
        for scale in scales:
            in_key = vu.get_format_key("paintera", scale, self.input_key_prefix)
            out_key = self.get_scale_key(self.metadata_format, scale)

            # read the downsampling factors
            with vu.file_reader(self.input_path) as f:
                effective_scale = f[in_key].attrs.get("downsamplingFactors", [1, 1, 1])
            if isinstance(effective_scale, int):
                effective_scale = 3 * [effective_scale]

            if scale > 0:
                assert prev_scale is not None
                scale_factors.append([eff / prev for eff, prev in zip(effective_scale,
                                                                      prev_scale)])
            prev_scale = deepcopy(effective_scale)

            if self.skip_existing_levels and os.path.exists(self.output_path):
                with vu.file_reader(self.output_path, "r") as f:
                    if out_key in f:
                        print("have out_key", out_key)
                        continue

            prefix = "s%i" % scale
            dep = task(tmp_folder=self.tmp_folder, max_jobs=self.max_jobs,
                       config_dir=self.config_dir,
                       input_path=self.input_path, input_key=in_key,
                       output_path=self.output_path, output_key=out_key,
                       prefix=prefix, effective_scale_factor=effective_scale,
                       dtype=self.dtype, dependency=dep)

        # get the metadata for this dataset
        # if we have the `resolution` or `offset` attribute
        # in the dataset, we load them and add them to
        # the metadata dict. However if these keys
        # are already in the metadata dict, the existing values
        # have priority
        metadata_dict = {**self.metadata_dict}
        with vu.file_reader(self.input_path) as f:
            attrs = f[self.input_key_prefix].attrs
            offsets = attrs.get("offset", None)
            resolution = attrs.get("resolution", None)
        if "offsets" not in metadata_dict and offsets is not None:
            metadata_dict.update({"offsets": offsets})
        if "resolution" not in metadata_dict and resolution is not None:
            metadata_dict.update({"resolution": resolution})

        # task to write the metadata
        dep = WriteDownscalingMetadata(tmp_folder=self.tmp_folder,
                                       output_path=self.output_path,
                                       metadata_format=self.metadata_format,
                                       metadata_dict=metadata_dict,
                                       scale_factors=scale_factors,
                                       dependency=dep, prefix="paintera-to-bdv")
        return dep

    @staticmethod
    def get_config():
        configs = super(PainteraToBdvWorkflow, PainteraToBdvWorkflow).get_config()
        configs.update({"copy_volume": copy_tasks.CopyVolumeLocal.default_task_config()})
        return configs
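For orientation, here is a hypothetical invocation of the workflow above (a sketch only: the WorkflowBase fields such as target are assumed from the surrounding project, and the paths are placeholders):

if __name__ == "__main__":
    task = DownscalingWorkflow(
        tmp_folder="./tmp",
        config_dir="./configs",
        max_jobs=8,
        target="local",                       # assumed WorkflowBase parameter
        input_path="/data/volume.n5",         # placeholder path
        input_key="raw/s0",
        scale_factors=[[2, 2, 2], [2, 2, 2]],
        halos=[None, None],
        metadata_format="bdv.n5",
    )
    luigi.build([task], local_scheduler=True)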
|
Detecting a collision between two colliders
I'm developing a game in Unity using UnityScript. I've created two objects, a sphere and a cube, each with its own collider. Now I'm trying to detect a collision between them using the UnityScript function below, but I'm not able to detect the collision. I've also added a rigidbody component to both of them. How can I detect a collision?
function OnCollisionEnter(col : Collision)
{
    if (col.collider.tag == "Cube")
    {
        Debug.Log("collision");
    }
}
this is not enough code
Please add input and output data. Give comments where you think your problem is and what you intend to do. We need enough information to understand and reproduce your issue.
My JavaScript file contains only this much code, with the header '#pragma strict', and I added it to the sphere object.
Are you missing a closing brace for your function? Is this script on the sphere? Does the cube have the "Cube" tag?
Unity wouldn't be able to compile and thus you couldn't test in the editor. I'm assuming that's a c/p error.
Things to verify:
Make sure the script(s) are attached to the game objects
UPDATED: Make sure the class you have this code in inherits from MonoBehaviour (C#, boo)
Make sure collider components are attached to the game objects and that "Is Trigger" isn't checked as OnCollisionEnter won't be fired if it is.
Make sure the gameobject has the "Cube" tag assigned in the inspector
Also, the rigidbody component only adds physics to the objects and is not necessary when simply detecting collisions.
I agree with all of them except 2, because Nirmal uses JavaScript (UnityScript).
Using JavaScript, every script automatically derives from MonoBehaviour. When using C# or Boo you have to explicitly derive from MonoBehaviour.
Thanks for clarifying @BarışÇırıka.
I verified all your possibilities except 2. Still my code isn't working
Did you add rigidbody?
Notes: Collision events are only sent if one of the colliders also has a non-kinematic rigidbody attached. Collision events will be sent to disabled MonoBehaviours, to allow enabling Behaviours in response to collisions.
|
Trouble resampling pandas timeseries from 1min to 5min data
I have a 1 minute interval intraday stock data which looks like this:
import yfinance as yf
import pandas as pd
n = yf.download('^nsei', period= '5d', interval= '1m')
I am trying to resample it to '5m' data like this:
n = n.resample('5T').agg(dict(zip(n.columns, ['first', 'max', 'min', 'last', 'last', 'sum'])))
But it also resamples time ranges that are not in my data. The market data is only available till 03:30 PM, but when I look at the resampled dataframe I find it has tried to resample for the entire 24 hrs.
How do I make the resampling stop at 03:30 PM and move on to the succeeding date?
Right now the dataframe has mostly NaN values due to this. Any suggestions will be welcome.
I am not sure what you are trying to achieve with that agg() function. Assuming 'first' refers to the first quantile and 'last' to the last quantile and you want to calculate some statistics per column, I suggest you do the following:
Get your data:
import yfinance as yf
import pandas as pd
n = yf.download('^nsei', period= '5d', interval= '1m')
Resample your data:
Note: your result is the same as when you resample with n.resample('5T').first() but this means every value in the dataframe
equals the first value from the 5 minute interval consisting of 5
values. A more logical resampling method is to use the mean() or
sum() function as shown below.
If this is data on stock prices it makes more sense to use mean():
resampled_df = n.resample('5T').mean()
To remove resampled hours that are outside of the working stock hours you have 2 options.
Option 1: drop na values:
filtered_df = resampled_df.dropna()
Note: this will not work if you use sum() since the result won't contain missing values but zeros.
Option 2 filter based on start and end hour
Get minimum and maximum time of day where data is available as datetime.time object:
start = n.index.min().time() # 09:15 as datetime.time object
end = n.index.max().time() # 15:29 as datetime.time object
Filter dataframe based on start and end times:
filtered_df = resampled_df.between_time(start, end)
Get the statistics:
statistics = filtered_df.describe()
statistics
Note that describe() will not contain the sum, so in order to add it you could do:
statistics = pd.concat([statistics, filtered_df.agg(['sum'])])
statistics
Output:
The agg() is there to apply an individual aggregation method to each column; I used this so that I can see the 'candlestick' formation, as it is called in stock technical analysis.
I was able to fix the issue by dropping the NaN values.
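For reference, the combined version looks something like this (a sketch assuming the standard yfinance columns):

agg_map = {"Open": "first", "High": "max", "Low": "min",
           "Close": "last", "Adj Close": "last", "Volume": "sum"}
candles = n.resample("5T").agg(agg_map).dropna()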
|
approach by which Central Africa might be uninterruptedly connected with the already settled regions of Southern Africa plainly lies through the Transvaal country, the plateau of Honomatapa, Tete on the Zambesi, and Lake Nyassa. The distance is about six degrees, or 150 leagues, through an elevated region, free from the fevers which infest the coast. A Frenchman, Dr. Emilien Allou, has, in fact, just made the journey from the South-African republic to the Zambesi, bringing back collections of great interest from the number of new species they embrace. Were the republic of the Boers to become a member of the Cape Confederation, England would have only to establish a few stations between the Limpopo and the Zambesi, and at once the tide of immigration, which is now enriching Natal, would begin to pour in from that side. In a few years the Anglo-Saxon influence would pervade Africa, and win for civilization the magnificent region of the great lakes. This peaceful conquest would be in no sense exclusive, for there is plenty of room here for energetic men of every nation.
Let no one imagine that this is all a dream. The future that awaits European stations in this region is assured by the success which has attended the founding of Arab posts in the interior. At Kazeh, in Unyanyembe; at Kawele, on the shore of Tanganyika; and at Kwakasonga, on the Lualaba, the Arabs have permanent residences. They live here in great comfort, owning large houses, herds, poultry, slaves. By the caravans which at stated times go down to the coast, they are supplied with coffee, tea, sugar, arms, and textile fabrics. Even in a far less accessible region, at Nyangwe, at a considerable distance beyond Tanganyika, Cameron found an Arab, Jumat Mericani, making exchange both with Zanguebar and Benguela, that is to say, with the coasts of both oceans.
The natives are extremely gentle and peaceful in disposition; and, though strangers seldom visit this country, save for the purpose of slave hunting, and destroying and depopulating their villages, English travelers have nearly always been able to procure provisions at the ordinary prices. If travelers have been robbed, it has nearly always been by their own carriers. Husbandry is very well understood, and conducted with care, the men laboring in the fields nearly the entire day. When the country is not wasted by war, the population increases, and the jungle is quickly cleared away. A noteworthy instance of this is cited by Cameron. In 1857, when
Burton and Speke were on their way to the interior, on the journey which resulted in the discovery of Lake Tanganyika, they experienced great difficulty in traveling through the Mgunda Mkali country. There was a lack of water, the jungle was almost impassable, and many of their carriers perished in the effort to make their way through it. When Cameron visited this region in 1873, everything was changed. A tribe of the Wanyamwesi, driven back by local wars, had settled here. In the heart of the forest they had built villages, had dug wells, and had converted the jungle into well-cultivated fields. The appearance of the country was delightful, resembling an English park. Hence European settlements would here find in abundance the necessaries of life; and if, as their numbers increased, they were to render less frequent those wars between tribe and tribe, which now desolate the country, progress would be assured, and the well-being of the people would be augmented rapidly. Another instance of the success which awaits the colonist in these regions, long regarded as inaccessible, is supplied by the adventures of M. Bonnat, as lately rehearsed by him at a meeting of the Paris Society of Geography. In 1860, Bonnat was a member of an expedition commanded by Captain Charles Girard, who had resolved to ascend the Niger. Girard having abandoned this enterprise, M. Bonnat, by himself alone, penetrated into the interior of Guinea, and there entered on a very lucrative business career. The village in which he lived was attacked by the Ashantees. Carried away to Coomassie, he was treated very harshly, as were also his two companions in captivity — a German and his wife. Soon the king conceived a liking for Bonnat, and took him into his favor. There he spent five years, experiencing all manner of kind treatment. He learned the language of the natives, and found that they carried on an extensive trade with a large town in the interior, Salaga, which is in commercial relations with the Sahara, and even with Tunis. When the English made war on the Ashantees, the king determined to put Bonnat to death. He was made fast to a tree, and was on the point of being beheaded, when, as luck would have it, the marines entered Coomassie. In 1874 he again set out for Africa, intending to settle at Salaga, of which town he had heard such wonderful stories. He succeeded in ascending the river Volta, despite its rapids, and in overcoming the objections made by the native chiefs; thus he has opened a new route for commerce. He was the first European to visit Salaga, a city of over 40,-
|
Mac: Open dialog (CMD+O) will not appear if all windows are closed
Prerequisites
[X] Put an X between the brackets on this line if you have done all of the following:
Reproduced the problem in Safe Mode: http://flight-manual.atom.io/hacking-atom/sections/debugging/#using-safe-mode
Followed all applicable steps in the debugging guide: http://flight-manual.atom.io/hacking-atom/sections/debugging/
Checked the FAQs on the message board for common solutions: https://discuss.atom.io/c/faq
Checked that your issue isn't already filed: https://github.com/issues?utf8=✓&q=is%3Aissue+user%3Aatom
Checked that there is not already an Atom package that provides the described functionality: https://atom.io/packages
Description
The Mac Open Dialog (CMD+O) will not appear if all windows are closed
Steps to Reproduce
Open Atom in Mac
Close all app windows, but keep the app open
Trigger the Mac Open interface. You can do so by clicking on File --> Open or pressing CMD+O
Expected behavior: The Mac "Open Dialog" appears
Actual behavior: Nothing. The File menu will flash, but the Open Dialog will not appear
Reproduces how often: 100%
Versions
➜ atom --version
Atom : 1.12.7
Electron: 1.3.13
Chrome : 52.0.2743.82
Node : 6.5.0
➜ apm --version
apm 1.12.9
npm 3.10.5
node 4.4.5
python 2.7.13
git 2.11.0
I am using MacOS 10.12.2. I can also comment that I have been witnessing this issue for at least a year. It was just never a blocking issue since there are workarounds (see below).
Additional Information
The workaround is to have an app window open, in which case the Open Dialog will open without error. Or, I use the CLI to open a directory with the atom command.
Thanks for taking the time to contribute!
We noticed that this is a duplicate of #12508. You may want to subscribe there for updates.
Because we treat our issues list as the Atom team's backlog, we close duplicates to focus our work and not have to touch the same chunk of code for the same reason multiple times. This is also why we may mark something as duplicate that isn't an exact duplicate but is closely related.
For information on how to use GitHub's search feature to find out if something is a duplicate before filing, see the How Can I Contribute? section of the Atom CONTRIBUTING guide.
|
Unanswered question with deleted answer not showing
When I go to all unanswered questions (Questions with no answers) and scroll down till I have seen all questions with 8 votes, I don't see this question there.
However, when I then click the matlab tag and look at unanswered questions tagged matlab I find that the top one has 8 votes and 0 answers. The description of this section is: Questions with no upvoted answers
I have looked around but did not find a question like this on meta yet. Furthermore, the question has already been at 7 or 8 votes for at least a day (probably longer).
UPDATE
A moderator confirmed that the question has a deleted answer.
I guess the question is now whether this is a bug, or whether it intentionally does not show the question anymore. If I could choose, I would say this calls for a fix.
UPDATE 2
I just noticed that this question is now showing up under 'all unanswered questions'. Just not sure whether it is because the functionality has changed or because something happened to the question.
Possibly related: http://meta.stackexchange.com/questions/171892/question-doesnt-show-up-in-unanswered-listings?rq=1
There is a deleted answer there.
@Oded I guess I suspected that, so is it a bug that deleted answers prevent it from showing or is this done intentionally?
Not sure, Dennis. That page has been there for quite a while and this may indeed be an unintentional bug.
Confirming this is not intentional. We are looking into it.
|
doubly escape periods in counter names so we dont create invalid json
This fixes an issue created by https://github.com/youtube/vitess/pull/2992, which broke our automation and monitoring that rely on the JSON response of /debug/vars being proper JSON.
cc @nerdatmath @sougou
By the way, I found the original escaping change to be unnecessary, though I can work around it once the JSON is valid. Since we know the format of the keys, we can still split the keys properly while accounting for potential periods. It was a bit involved, but it worked:
# for maps where one of the keys is the username, the username may have
# periods in it. Account for that in how we split on periods below.
def split_counter_name(name, tag_list, key_split_char="."):
    idx = tag_list.index("user")
    if idx == 0:
        # username is the first item, split from the right so its dots stay intact
        tag_data = name.rsplit(key_split_char, len(tag_list) - 1)
    elif idx == len(tag_list) - 1:
        # username is the last item in the list, split from the left
        tag_data = name.split(key_split_char, len(tag_list) - 1)
    else:
        # limited split from the left, up to where the username is
        lparts = name.split(key_split_char, idx)
        # limited split of the remainder from the right, backwards to the username
        rparts = lparts[idx].rsplit(key_split_char, len(tag_list) - idx - 1)
        # contains everything up to but not including the username
        tag_data = lparts[:idx]
        # add rparts on; the first item will be the username with dots in it
        tag_data += rparts
    return tag_data
@bbeaudreault Congrats on pull request #3000 ! :) Sugu got #1000 and I #2000 back then :D
Woohoo!
Another bonus is this year we hit it a month early!
I want to see if the previous change can be rolled back in favor of an internal fix. I'll try to catch @nerdatmath when he's online to chat about it.
Sounds good, thanks
The problem's not in my change, although my change revealed it. The problem is in the String method of Counters. Compare it to the String method of expvar.Map, and you'll see that where they use %q (which does the escaping required to generate valid JSON), we use "%v", which is equivalent to "%s" in this case because keys are always strings, and %s doesn't do any escaping. I'll send a PR to fix that.
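The Go details aside, the underlying point - a key has to be quoted and escaped when emitting JSON by hand - is easy to illustrate in Python, where json.dumps plays the role of Go's %q:

import json

key = 'user.with."dots"'
naive = '{"%s": 1}' % key             # unescaped quotes -> invalid JSON
proper = "{%s: 1}" % json.dumps(key)  # the encoder quotes and escapes the key
print(naive)
print(proper)
json.loads(proper)  # parses fine; json.loads(naive) would raise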
This was fixed more correctly by https://github.com/youtube/vitess/pull/3005
|
Page:American Journal of Sociology Volume 5.djvu/162
Third in the list is "vagrancy," at 8.6 per cent.; a comparison of the figures for the last ten years shows that there has been a steady increase in the number arrested and convicted for this offense. This is partly due to the work of the Charity Organization Society, which has a special officer detailed to oversee the fourteen regular police officers whose duty it is to arrest all beggars and vagrants. Other minor offenses which the judge may dispose of finally, appearing in the table as 1 per cent. or less of the total, are sabbath breaking (chiefly keeping shops open and selling goods in violation of the law), ungovernable child, disorderly persons (abandoning or threatening to abandon family), and "suspicious persons." Concerning this last class the magistrates' report for 1897 says: "Many persons are arrested under suspicious circumstances, such as well-known criminals mysteriously loitering about the streets at night, or frequenting crowded places, or persons having property in their possession for which they can give no good account, nor of themselves. Frequently such arrest is the first step in the detection of some crime, which is investigated, the proper complainant found, a formal complaint taken, and the prisoner held for trial. In many instances such arrest prevents the commission of crime. During the year the total number of such cases amounted to 1,897, of which 1,885 were discharged and 12 cases are pending." It may be that this is a necessary means for the prevention of crime, but it would appear to place an extraordinary power of torture in the hands of the police officer. Doubtless ex-criminals need to be watched for the protection of society, possibly as a deterrent measure, but so long as the police force remains an irresponsible body, moved chiefly by private motives, such power will be more or less abused.
There is no way of ascertaining accurately the number of children and young persons who come into the police courts, as no statistics are published in the report except of those committed to institutions. These are not kept in confinement at the station house, but by the Gerry Society, and an officer of the society comes into court with them to suggest to the magistrate where they should be sent. From observation it appears,
|
In mammals, DNA methylation of cytosine at the 5-position (mC) serves as a major epigenetic mechanism to regulate gene transcription and plays a critical role in many cellular functions, such as genomic imprinting, X-chromosome-inactivation, repressing transposable elements, and regulating transcription^[@CR1],[@CR2]^. Aberrant DNA methylation is a hallmark of many human diseases and cancers. Indeed, abnormal methylation has become a potential biomarker for cancer detection, diagnosis, and prognosis used in liquid biopsy^[@CR3]^. In addition, dysregulation of DNA methylation has been found to be associated with other diseases, such as schizophrenia and autism spectrum disorders^[@CR4],[@CR5]^.
Recent efforts revealed that mC can be oxidized by three mammalian ten eleven translocation (Tet) proteins to form 5-hydroxymethylcytosine (hmC)^[@CR6]^. Sequential oxidation by the Tet enzymes can further convert hmC to 5-formylcytosine (fC) and 5-carboxycytosine (caC), which eventually leads to active DNA demethylation^[@CR7]--[@CR9]^. However, it remains largely unknown whether hmC, fC, and caC merely serve as intermediates in the process of active DNA demethylation or whether they have their own physiological functions^[@CR10]^. Recently, genome-wide mapping of hmC revealed that hmC is present in many tissues and cell types, and is especially enriched in embryonic stem cells and neurons^[@CR11]--[@CR15]^. In addition to hmC, fC was also found as a stable DNA modification in mammalian genomes^[@CR8],[@CR16]--[@CR18]^. These studies indicated that the oxidized forms of mC might have their own physiological functions, and identification of their "readers" and "effectors" might help elucidate their roles in various biological processes^[@CR19]--[@CR23]^.
Methylated cytosine can also remain asymmetric in mammalian genomes, and such sites are referred to as hemi-methylated^[@CR24]^. A prevailing view is that hemi-methylation is transient and occurs, perhaps, by chance, and that the fate of hemi-methylated DNA is to become fully methylated or unmethylated by replication-coupled dilution^[@CR25]^. However, two recent studies revealed that \~10% of CpGs remain stably hemi-methylated in embryonic and trophoblast stem cells^[@CR26],[@CR27]^. In a recent study, Xu and Corces demonstrated that elimination of hemi-methylation caused a reduction in the frequency of CTCF/cohesion interactions at these loci, suggesting hemi-methylation as a stable epigenetic mark regulating CTCF-mediated chromatin interactions^[@CR24]^. Intriguingly, they reported that hemi-methylated sites could be inherited over several cell divisions, suggesting that this DNA modification could happen by design and be maintained as a stable epigenetic state. Furthermore, in a non-CpG context (i.e., CpA, CpT, and CpC) mC is asymmetrical and hemi-hmC can arise via Tet oxidation^[@CR11],[@CR28]--[@CR30]^. As we and others have shown, mCpA is found more often in gene bodies and represses gene transcription through MeCP2 binding^[@CR31]--[@CR33]^. In the context of the palindromic CpG dinucleotide, hmCpG presumably exists in a fully hydroxymethylated form in cells; however, they can transiently become hemi-hmC after semi-conservative DNA replication^[@CR34]^. Following this logic, hemi-fC and hemi-caC should also exist, at least transiently, in mammalian genomes.
Despite acceleration of efforts to map mC and hmC at single base pair resolution in various biological processes and species, it remains a challenge to establish the causality between different types of epigenetic DNA modifications and physiological outcomes. Therefore, the identification of 'readers' and 'effectors' for mC, hmC, fC, and caC in both symmetric and hemi-forms will serve as a critical stepping stone to translate epigenetic signals into biological actions and to decipher the epigenetic 'codes' governing biological processes.
In the past, various types of high-throughput technology, such as protein binding microarrays (PBM), transcription factor (TF) arrays, yeast one-hybrid, SMiLE-seq and high-throughput SELEX, were developed to profile protein-DNA interactions (PDIs) mostly in the absence of any epigenetic modifications^[@CR35]--[@CR39]^. To identify potential readers for mC, our team was among the first to survey the majority of human TFs with 154 symmetrically methylated DNA probes^[@CR38]^. Later, Taipale and colleagues individually screened several hundred TF proteins against symmetrically methylated random DNA libraries using SELEX^[@CR38],[@CR40],[@CR41]^. In addition, generic DNA sequences, carrying symmetrically modified mC, hmC, fC, or caC, were used to pull down and identify potential binding proteins through MS/MS analysis^[@CR20],[@CR42]^.
Although these high-throughput efforts have generated massive amounts of data, none of them can be multiplexed to exhaustively survey both the protein (e.g., the entire human TF family) and DNA spaces (e.g., a random DNA library) simultaneously. Due to these design limitations, no systematic efforts have been reported to identify readers for symmetric-hmC, -fC, or -caC modifications, let alone readers for hemi-mC, -fC, or -caC modifications.
To overcome this huge technology bottleneck, herein we report the invention and application of digital affinity profiling via proximity ligation (i.e., DAPPL) as an all-to-all approach to exhaustively survey human TFs with mixtures of random DNA libraries carrying mC, hmC, fC, or caC modifications in either symmetric- or hemi-form to identify sequence- and modification-specific readers. Using specific DNA fragments to covalently barcode each of the 1239 unique human TFs and co-factors, we could connect the identity of a protein to its captured DNA fragments via proximity ligation in a highly multiplexed reaction (e.g., 192 proteins vs. a mixture of five random DNA libraries). Using this DAPPL approach, we identified numerous readers for symmetric-hmC, -fC, and -caC, and hemi-mC, -hmC, -fC, and -caC. We observed that all four modifications could either enhance or suppress TF-DNA interactions and, in some cases, alter TF-binding specificity. We also observed and experimentally validated that symmetric modifications have a stronger effect in either enhancing or suppressing TF-DNA interactions than hemi-modifications. Finally, in vivo evidence suggested that USF1 and USF2 might regulate transcription via hydroxymethylcytosine-binding activity in weak enhancers in human embryonic stem cells.
Establishment of digital affinity profiling via proximity ligation {#Sec3}
To create an all-to-all approach for unbiased profiling of TF-DNA interactions, we invented the DAPPL approach, which was achieved in five major steps (Fig. [1](#Fig1){ref-type="fig"}). First, human proteins were purified as GST fusions and kept on glutathione beads, while a set of DNA barcode sequences was designed such that any two single-nucleotide mis-incorporations, insertions, and/or deletions would not lead to misassignment of protein identity (Supplementary Fig. [1a-b](#MOESM1){ref-type="media"}). Second, each protein was covalently tethered to a single-stranded DNA (ssDNA) oligo (i.e., Anchor Oligo) through a thiol--maleimide "click" chemistry linkage; the Anchor Oligos of each protein were then annealed to the 3′-end of a unique barcode oligo, and converted to double-stranded DNA (dsDNA) with a DNA polymerization reaction (using DNA polymerase I Klenow fragment) on beads (Supplementary Fig. [1c](#MOESM1){ref-type="media"}). After removal of the free DNA oligos, an aliquot of beads for each barcoded protein was mixed together to generate a protein mixture. Third, a random N-mer dsDNA library was synthesized and capped by a *Bsa*I restriction site to allow for proximity ligation and a fixed sequence (i.e., Primer 2) for PCR priming (Fig. [1](#Fig1){ref-type="fig"}). Fourth, to carry out the DAPPL reactions, the N-mer DNA library was incubated with the barcoded protein-bead mixture, followed by stringent washing steps to remove unbound DNA. After a crosslinking step, Golden Gate Assembly reactions^[@CR43]^ were performed to ligate the DNA barcodes of the proteins to their captured DNA fragments. Finally, the ligated DNA products were PCR-amplified with specific primers and a sequencing library was constructed for next-generation sequencing (Fig. [1](#Fig1){ref-type="fig"}).

Fig. 1 Schematics of identifying epigenetic modification readers by DAPPL. Principle and execution of the digital affinity profiling via proximity ligation (DAPPL) approach. The principle of the DAPPL approach is to utilize the unique DNA barcodes tethered to TF proteins as identifiers to de-convolute the DNA sequences that a given TF captures in a highly multiplexed binding assay. First, each purified TF protein on glutathione beads was individually conjugated with a unique DNA barcode. Second, barcoded TF proteins on beads are mixed and incubated with a mixture of five randomized DNA libraries (the multicolored middle bricks of each DNA represent various randomly synthesized DNA sequence species) carrying either symmetric or hemi-modifications. Third, after removal of unbound DNA fragments, a TF-captured DNA fragment can be ligated to the DNA barcode conjugated on that particular TF via Golden Gate Assembly, due to the close proximity of the TF barcode DNA. Fourth, the ligated products can be PCR-amplified by utilizing the constant sequences attached to the TF barcodes and one end of the DNA library. Finally, the sequences obtained with next-generation sequencing will be de-convoluted and analyzed with our bioinformatics tools (see Supplementary Fig. [3](#MOESM1){ref-type="media"} for more details).
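To make the barcode-design criterion above concrete, here is a minimal Python sketch of the kind of pairwise-separation check it implies; the threshold of 5 is our reading of the stated two-error tolerance, and the barcodes are purely illustrative:

import itertools

def edit_distance(a, b):
    # classic dynamic-programming Levenshtein distance
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

barcodes = ["AAAAAAAA", "CCCCCCCC", "GGGGGGGG"]  # illustrative barcodes
# two errors per barcode stay decodable if all pairs are >= 2*2 + 1 apart
print(all(edit_distance(a, b) >= 5
          for a, b in itertools.combinations(barcodes, 2)))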
To optimize the DAPPL approach, we decided to focus on the human ETS TF subfamily because almost all of them have well-characterized binding consensus sequences that can be used to benchmark the DAPPL approach (Supplementary Fig. [1d](#MOESM1){ref-type="media"}). First, we purified a total of 31 ETS proteins (28 unique) and successfully DNA-barcoded them as examined with both Coomassie and ethidium bromide staining (Supplementary Fig. [1e](#MOESM1){ref-type="media"}; Supplementary Data [1](#MOESM4){ref-type="media"}). We included CRX and HSF1 as additional positive controls, as well as BCAT1, COPE, and GST as negative controls. After generating a bead mixture of all of the barcoded proteins, we incubated the pooled beads with a random 16-mer dsDNA library in triplicate. The ligated DAPPL products were PCR-amplified and subjected to Next-Generation sequencing (Supplementary Fig. [1d](#MOESM1){ref-type="media"}).
A total of \~33.8 million reads were obtained, 50.0% of which precisely contained all the elements of a DAPPL product (Supplementary Fig. [2a](#MOESM1){ref-type="media"}). Any errors in the TF barcode sequences, Primer 1, Primer 2, ligation site, library ID, or the length of UMI would disqualify a DAPPL product (Supplementary Fig. [2b](#MOESM1){ref-type="media"}). The median number of qualified unique reads per protein was 60,220 (Fig. [2a](#Fig2){ref-type="fig"}). As expected, the three negative control proteins, namely COPE, BCAT1, and GST, consistently captured the lowest numbers of reads across all three replicates (Fig. [2a](#Fig2){ref-type="fig"}). To determine the binding consensus for a given TF, we evaluated its 6-mer frequencies by sliding a 6-mer window along the 16-mer sequences it captured (Supplementary Fig. [3](#MOESM1){ref-type="media"}). To remove possible noise signals resulting from nonspecific binding to either the glutathione beads or the GST tag, we compared the 6-mer subsequence frequencies of each TF with those obtained with the barcoded GST included as a negative control. Those 6-mer subsequences that were found preferentially enriched by a given TF were extracted and mapped back to the original 16-mer sequences, which were then used to deduce candidate consensus sequences using HOMER motif analysis^[@CR44]^. Only those motifs with a false discovery rate below 0.05 were considered significant (Supplementary Fig. [3](#MOESM1){ref-type="media"}).

Fig. 2 Development of the DAPPL approach using ETS members as a benchmark. **a** Distribution of unique molecular identifier (UMI) reads per protein obtained from the three replicated DAPPL (digital affinity profiling via proximity ligation) reactions. The three negative control proteins, COPE, BCAT1, and GST, consistently showed the lowest read numbers in all three replicates, while ELF2 showed the highest read number across all three. **b** Comparison of the 6-mer frequencies of ELF2 in three independent DAPPL reactions. **c** Boxplot analysis of pairwise Pearson correlation coefficients among the triplicates. The minima of these three comparisons are 0.6947, 0.6789, and 0.698, respectively. The maxima are 0.9918, 0.9971, and 0.9897. The medians are 0.9427, 0.9408, and 0.9346. The 3rd quartiles are 0.96, 0.9612, and 0.9544. The 1st quartiles are 0.9046, 0.8975, and 0.8945. The lower whisker values are 0.8314, 0.8107, and 0.8157. **d** Examples of scatterplot analysis for ETS members, the two positive and two negative control proteins. The red dots are the enriched 6-mers used to identify the consensus sequences for these proteins. The recovered consensus sequences are also shown. **e** Examples of binding kinetics of three ETS members to the same consensus sequence obtained with the OCTET system. *X*-axis represents time of the kinetics studies, and *Y*-axis the nanometer shift used to define the biosensor surface changes. The *K*~D~ values shown in the figure are the average ± standard error of the *K*~D~ values obtained at four different TF concentrations, shown in different colors. Source data are provided as a Source Data file. **f** Comparison of the enriched 6-mers in cycles 2 and 1. The color represents the average frequency of the enriched 6-mers associated with a particular protein normalized by those obtained with GST. **g** Breakdown of the TFs on the basis of consensus similarity changes in cycle 2 of the DAPPL reactions. **h** A few examples of consensus sequences are shown. Source data are provided as a Source Data file.
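A schematic Python version of the sliding-window 6-mer count described above (the reads are invented; the actual pipeline additionally normalizes each TF's frequencies against the barcoded-GST control):

from collections import Counter

def kmer_frequencies(reads, k=6):
    # slide a k-mer window along each captured 16-mer and tally frequencies
    counts = Counter()
    for seq in reads:
        for i in range(len(seq) - k + 1):
            counts[seq[i:i + k]] += 1
    total = sum(counts.values())
    return {kmer: c / total for kmer, c in counts.items()}

reads = ["ACTTCCGGTTACGTAA", "TTACTTCCGGAACGTA"]  # invented reads for one TF
top = sorted(kmer_frequencies(reads).items(), key=lambda kv: -kv[1])[:3]
print(top)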
To benchmark the DAPPL approach, we first examined its reproducibility by comparing the frequencies of 6-mers bound by the same proteins between the triplicates. We found that the vast majority of them showed high reproducibility. An example, ELF2, is shown in Fig. [2b](#Fig2){ref-type="fig"}. Next, we calculated the Pearson correlation coefficients (PCC) of the 6-mer frequencies for each protein between replicates 1 versus 2, 1 versus 3, and 2 versus 3. All of them showed high PCC values, ranging from 0.701 to 0.997 (Fig. [2c](#Fig2){ref-type="fig"}).
To extract the 6-mers that were enriched for each protein, we compared the 6-mer frequencies between a given TF and the GST negative control. For the ETS members and the two positive controls from different TF subfamilies, a large number of 6-mers were highly enriched (marked in red, Fig. [2d](#Fig2){ref-type="fig"}). In contrast, the two negative controls (COPE and BCAT1) failed to produce any enriched 6-mers, suggesting that the DNA sequences they captured were likely due to nonspecific interactions. The enriched 6-mers were used to identify the binding consensus, and all of the tested ETS members, as well as CRX and HSF1, produced significant consensus sequences. A heatmap was also generated to help visualize the 6-mer enrichment for each protein tested (Supplementary Fig. [4](#MOESM1){ref-type="media"}). To assess the quality of the obtained consensus sequences, we employed Tomtom to compare the similarity of the obtained consensus sequences with those archived in CIS-BP. The resultant *P* values ranged from 3.7 × 10^−9^ to 3.1 × 10^−3^, indicating that DAPPL reactions could faithfully recover the known motifs with a high success rate and high quality (Supplementary Data [2](#MOESM5){ref-type="media"}, [3](#MOESM6){ref-type="media"}).
To address the concern of potential competition during the DAPPL reactions, we randomly selected 14 ETS members to examine their binding kinetics to the same consensus sequence (i.e., 5′-ACTTCCGG) using a label-free, real-time method (i.e., OCTET). The observed *K*~on~ and *K*~off~ ranged from 5.2 × 10^3^ M^−1^s^−1^ to 2.1 × 10^5^ M^−1^s^−1^, and 2.0 × 10^−5^ s^−1^ to 2.1 × 10^−3^ s^−1^, respectively, and the deduced *K*~D~ ranged from 1.8 to 132 nM (Fig. [2e](#Fig2){ref-type="fig"} and Supplementary Fig. [5a--n](#MOESM1){ref-type="media"}). Importantly, no significant correlation was observed between the read number and the obtained affinity values (Supplementary Fig. [5o](#MOESM1){ref-type="media"}). For example, among the 14 ETS members with measured affinity values, ETS2 showed the strongest affinity (1.8 nM), which was 73-fold stronger than SPDEF (132 nM), while the average read numbers of the two differed only 15-fold (=509,195/33,322). This analysis indicated that DAPPL reactions were sensitive enough to detect binding events for TFs with relatively low affinity.
To evaluate whether multiple rounds of selection could further improve the performance of the DAPPL reactions, we implemented a two-cycle selection and performed the DAPPL reactions at the end of the second cycle, a strategy similar to SELEX^[@CR38],[@CR40],[@CR41]^. After next-generation sequencing, the same approach was applied to identify consensus sequences for all the proteins. We compared the frequency of the binding 6-mers (i.e., red dots in Fig. [2d](#Fig2){ref-type="fig"}) between cycle 1 and cycle 2 and found that these 6-mers were indeed enriched in cycle 2 for every ETS protein, as well as the two positive control TFs (Fig. [2f](#Fig2){ref-type="fig"} and Supplementary Fig. [6](#MOESM1){ref-type="media"}). Using the same approach described above, we were able to determine consensus sequences for all 28 ETS TFs and the two positive controls with the enriched 6-mers obtained in cycle 2. To compare the quality of the consensus sequences of cycle 1 versus cycle 2, we calculated the similarity between these consensus sequences and the known sequences in CIS-BP. Judged by the similarity scores, the consensus sequences obtained in cycle 2 showed only marginal improvement (\~53%), suggesting that although the binding sequences were enriched in cycle 2, the DAPPL approach was sensitive enough to generate significant and reliable consensus sequences in one cycle of screening (Fig. [2g](#Fig2){ref-type="fig"}). Examples of consensus sequences with similarity scores from high to low are illustrated in Fig. [2h](#Fig2){ref-type="fig"}. One plausible explanation is that the ligation step in DAPPL served as another layer of selection, resulting in further enrichment of the high-affinity binding sequences.
Taken together, we developed an all-to-all approach (i.e., DAPPL) that allowed us to identify protein-DNA interactions in vitro in a highly multiplexed fashion. The quantitative evaluation of DAPPL consensus sequences against those already known has successfully benchmarked the DAPPL approach. Our experiments have suggested that a single round of selection is sufficient to produce high quality consensus sequences, laying the foundation for identifying readers for various epigenetic modifications on cytosine.
Epigenetic modification-associated TF-binding activities {#Sec4}
To harness the power of DAPPL, we applied it to identify TF-DNA interactions for 1543 human TFs (1239 of which are unique) and co-factors (902 TFs and 337 co-factors) (Supplementary Fig. [1f](#MOESM1){ref-type="media"} and Supplementary Data [1](#MOESM4){ref-type="media"}) in the context of four epigenetic modifications, including mC, hmC, fC, and caC in both symmetric and hemi-forms. Because complete modifications of hmC, fC, or caC cannot be obtained with an enzymatic reaction, four random DNA oligo libraries were first synthesized, each of which carried a CpG site with methylation, hydroxymethylation, formylation, or carboxylation flanked by 8-mer random sequences (Supplementary Table [1](#MOESM1){ref-type="media"}). Next, they were converted to double-stranded, symmetric and hemi-libraries via Klenow reactions in the presence or absence of the corresponding modified dCTPs, respectively. An equal amount of the four symmetric and hemi-modified libraries (i.e., mC, hmC, fC, and caC) was mixed separately to generate two mixtures of the symmetric and hemi-modified libraries, respectively. An unmodified library of the same design was also synthesized and added to the two library mixtures in an equimolar ratio (Fig. [1](#Fig1){ref-type="fig"}).
The two library mixtures carrying the symmetric or hemi-modifications were then separately incubated with the TF mixtures to carry out the DAPPL reactions, as described above. Using the same computational approach, 44, 97, 84, and 107 TFs were identified that recognized specific DNA consensus sequences containing symmetric modifications of mC, hmC, fC, and caC, respectively (Fig. [3a](#Fig3){ref-type="fig"} and Supplementary Data [4](#MOESM7){ref-type="media"}, [5](#MOESM8){ref-type="media"}). Similarly, 99, 103, 118, and 139 TFs were identified to recognize specific DNA consensus sequences containing hemi-modifications of mC, hmC, fC, and caC, respectively (Fig. [3b](#Fig3){ref-type="fig"} and Supplementary Data [4](#MOESM7){ref-type="media"}, [5](#MOESM8){ref-type="media"}). The corresponding heatmaps can be found in Supplementary Data [6](#MOESM9){ref-type="media"}.

Fig. 3 Identification of readers for various epigenetic modifications. **a** Venn diagram analysis of identified readers for mC, hmC, fC, and caC modifications in symmetric form. **b** Venn diagram analysis of identified readers for mC, hmC, fC, and caC modifications in hemi-form. **c** Venn diagram analysis between symmetric and hemi-modification readers in the context of mCpG, hmCpG, fCpG, and caCpG modifications. **d** The identified binding activities to both symmetric and hemi-mC, -hmC, -fC, and -caC modifications are observed for all the major TF subfamilies, shown in different colors. More TFs were found to interact with hemi-modified DNA consensus sequences than symmetrically modified sequences. **e** No significant preference for a particular symmetric or hemi-modification was observed within each major TF subfamily, shown in different colors. **f** Highly conserved homologs often recognized highly similar consensus sequences and shared the preference for the same modifications. For example, MEIS1/2/3 all recognized a similar consensus carrying caC in a sequence of 5′-TGTcaCG in both symmetric and hemi-forms. The scatterplots display the frequencies of 6-mer subsequences obtained with MEIS1/2/3 (*y*-axis) and the GST tag control (*x*-axis). Letter E in the consensus sequences represents caC. The top three 6-mer sequences are highlighted. Source data are provided as a Source Data file.
Venn diagram analysis revealed that many TFs could recognize both symmetric and hemi-modifications (Fig. [3c](#Fig3){ref-type="fig"}). Interestingly, the identified binding activities to both symmetric- and hemi-mC, -hmC, -fC, and -caC modifications are observed across all the major TF subfamilies (Fig. [3d](#Fig3){ref-type="fig"} and Supplementary Data [4](#MOESM7){ref-type="media"}, [5](#MOESM8){ref-type="media"}). Overall, more TFs were found to interact with hemi-modified DNA consensus sequences than symmetrically modified sequences. Furthermore, no significant preference for a particular symmetric or hemi-modification type was observed within each major TF subfamily (Fig. [3e](#Fig3){ref-type="fig"}).
Perhaps not surprisingly, we observed that highly conserved homologs often recognized highly similar consensus sequences and shared the same preference for a particular modification, indicating that these discovered activities are conserved among closely related paralogs. For example, MEIS1/2/3 all recognized a similar consensus carrying caC in a sequence of 5′-TGTcaCG in both symmetric and hemi-forms (Fig. [3f](#Fig3){ref-type="fig"}).
Impacts of epigenetic modification on TF-DNA interactions {#Sec5}
To examine how different epigenetic modifications could affect TF-DNA interactions, we compared 6-mer subsequence frequencies obtained with each modified library against the unmodified counterparts. Three major trends were observed across all four epigenetic modifications as illustrated in Fig. [4a](#Fig4){ref-type="fig"}. First, DNA-binding activity was enhanced for 50 TFs by a particular modification. For example, many 6-mer subsequences captured by HMBOX1 were significantly enriched in the symmetric carboxylated library, suggesting that carboxylation enhanced HMBOX1-DNA interactions. Second, DNA binding was suppressed for 95 TFs by a particular modification. For instance, the 6-mer frequencies of OVOL2 were found to be significantly lower with all four symmetric modification libraries, suggesting that these modifications significantly suppressed OVOL2-DNA interactions. In the third scenario, 25 interactions between TFs and DNA were not significantly affected by any modifications, showing similar binding strength regardless of the modifications. Moreover, some modifications could enhance and suppress the binding activity of a particular TF depending on the sequence context (labeled as bimodal in Fig. [4a](#Fig4){ref-type="fig"}). For example, formylation could enhance the binding strength of ETS1 in a consensus of 5′-CGGAfCGTA, while reducing it in a different consensus of 5′-CCGGAAGT (Supplementary Data [7](#MOESM10){ref-type="media"}).Fig. 4Three scenarios of symmetric modification impact on TF-DNA interactions.**a** A given TF-DNA interaction can be enhanced (red bricks), suppressed (blue bricks), or unaffected (gray bricks) by any of the four symmetric epigenetic modifications. Note that some modifications could enhance and suppress TF-DNA interactions depending on the sequence context (bimodal). The same scenarios were also observed for hemi-modifications (Supplementary Data [7](#MOESM10){ref-type="media"}, [8](#MOESM11){ref-type="media"}, and Supplementary Fig. [7](#MOESM1){ref-type="media"}). **b** Validation of modification-enhanced (i.e., TBX2 and HMBOX1) or suppressed (i.e., OVOL2 and DMTF1) TF-DNA interactions using electrophoretic mobility shift assays (EMSA). The sequences of the DNA probes used in the EMSA are shown in each box with the modified C labeled in red. The protein concentrations of TBX2, HMBOX1, OVOL2, and DMTF1 were 26.2, 355.6, 563.3, and 156 nM, respectively.
Interestingly, different modifications exhibited different impacts on the same proteins. For example, MEIS2 was found to prefer binding methylated and carboxylated DNA, while hydroxymethylation and formylation reduced its binding activity (Row 32; Fig. [4a](#Fig4){ref-type="fig"}). Crx, on the other hand, showed strong binding activity to carboxylated DNA while the other three modifications greatly suppressed its binding activity (Row 35; Fig. [4a](#Fig4){ref-type="fig"}). Indeed, this phenomenon was observed among many TFs as summarized in Fig. [4a](#Fig4){ref-type="fig"}. Overall, 14, 1, 7, and 28 TFs were found to prefer mC, hmC, fC, and caC modifications, respectively; whereas methylation, hydroxymethylation, formylation, and carboxylation suppressed binding activities of 21, 24, 25, and 25 TFs, respectively. On the other hand, 9, 8, 6, and 2 TFs were insensitive respectively to methylation, hydroxymethylation, formylation, or carboxylation (i.e., "No preference" in Fig. [4a](#Fig4){ref-type="fig"}). The same major trends were also observed for hemi-modifications (Supplementary Data [8](#MOESM11){ref-type="media"} and Supplementary Fig. [7](#MOESM1){ref-type="media"}).
To experimentally validate the observed preferences, purified TBX2, HMBOX1, OVOL2, and DMTF1 were subjected to EMSA analysis because each of them showed a distinct response to the various epigenetic modifications. Using the consensus sequences obtained from our bioinformatics analysis, TBX2 demonstrated stronger binding to methylated and formylated consensus sequences than to the unmodified counterpart, in agreement with the DAPPL results. However, neither hydroxymethylation nor carboxylation enhanced TBX2's binding to the same consensus sequence (Fig. [4b](#Fig4){ref-type="fig"}). In the case of HMBOX1, carboxylation in a consensus of 5′-CGACTAA substantially enhanced its binding activity as compared with the unmodified, methylated, hydroxymethylated, and formylated counterparts (Fig. [4b](#Fig4){ref-type="fig"}). On the other hand, OVOL2 preferred the unmodified DNA consensus, whereas all the other modifications reduced or completely inhibited its binding activity (Fig. [4b](#Fig4){ref-type="fig"}). Similarly, all four modifications were confirmed to suppress DMTF1's binding activity as compared with the unmodified counterpart, although carboxylation showed the least effect (Fig. [4b](#Fig4){ref-type="fig"}). Overall, all of the EMSA assays validated our DAPPL results, suggesting that different modifications could impose differential impacts on the binding activities of the same TFs.
Impacts of epigenetic modification on TF-binding specificity {#Sec6}
We also observed that the carboxylated consensus (5′-caCGACTAA) identified for HMBOX1 was significantly different from its known consensus (5′-TAACTA)^[@CR38],[@CR45]^, suggesting that carboxylation could alter TF-binding specificity. This phenomenon was observed across all four modifications. For example, IRX2 preferred 5′-mCGTTA, which is substantially different from its known unmodified consensus 5′-TTACACG (Fig. [5a](#Fig5){ref-type="fig"}). Similarly, FOXC2 and HOMEZ showed altered consensus sequences in the presence of different modifications (Fig. [5a](#Fig5){ref-type="fig"}). On the other hand, while the binding strength of TBX2 and RFX2 was enhanced by methylation and carboxylation, respectively, neither altered their binding specificity (Fig. [5b](#Fig5){ref-type="fig"}).Fig. 5Impact of symmetric epigenetic modifications on TF-binding specificity.**a** Examples of deviated consensus sequences across all four epigenetic modifications. Known consensus sequences (boxed) in the absence of any modifications are compared with those carrying a modified cytosine. Letter E in the consensus sequences represents modified cytosine. **b** Examples (i.e., TBX2 and RFX2) of TF-binding specificity not affected by cytosine modifications, although with enhanced binding affinity. **c** Summary of the impact of different modifications on TF-binding specificity. Green or purple bricks indicate that the consensus sequences generated from the modified DNA libraries are different from or similar to those from the unmodified DNA libraries, respectively. Source data are provided as a Source Data file.
To systematically examine the impact of the four modifications on TF-binding specificity, we determined the consensus sequences that TFs preferentially bound in modified form, both symmetric and hemi-. Specifically, we extracted those 6-mers enriched in modified libraries as compared to the unmodified libraries and constructed the modification-preferred consensus sequences (Supplementary Data [7](#MOESM10){ref-type="media"}, [8](#MOESM11){ref-type="media"}). We next compared the sequence similarity between these consensus sequences and those obtained with unmodified libraries, as well as those archived in CIS-BP^[@CR46]^. For example, 13 symmetric-mC-preferred consensus sequences were identified, four (30.8%) of which are significantly different from the unmodified counterparts. Similarly, 26.3% of the caC-preferred consensus sequences are significantly different (Fig. [5c](#Fig5){ref-type="fig"}). Overall, 27.5% of the modification-preferred consensus sequences are significantly different from the unmodified counterparts.
A similar observation was made for hemi-modification-preferred consensus sequences. For example, hemi-modifications altered the binding specificity of NR2E1, YBX1, HMBOX1, and JDP2 (Supplementary Fig. [8a](#MOESM1){ref-type="media"}). On the other hand, the binding specificity of PKNOX2 and EGR2 was not affected by hydroxymethylation (Supplementary Fig. [8b](#MOESM1){ref-type="media"}). Overall, 35.85% of the consensus sequences with the four hemi-modifications were significantly different from the unmodified binding consensus sequences (Supplementary Fig. [8c](#MOESM1){ref-type="media"}).
Stronger effects imposed by symmetric modifications {#Sec7}
One interesting phenomenon observed was that the extent of modification-enhanced or -suppressed binding activity was greater for symmetric than hemi-modifications in general. For instance, although methylation in both symmetric and hemi-forms enhanced HOMEZ's binding activity, the enriched 6-mers showed steeper slope for symmetric methylation (red lines; Fig. [6a](#Fig6){ref-type="fig"}), suggesting that symmetric methylation strengthened HOMEZ-DNA interaction more than hemi-methylation. In the modification-suppressed cases, symmetric modifications also showed a stronger effect. For example, the slope of methylation-suppressed 6-mers of OVOL2 was flatter for symmetric methylation, suggesting that symmetric mCpG nearly abolishes the interactions while hemi-methylation was partially tolerated (Fig. [6b](#Fig6){ref-type="fig"}).Fig. 6Symmetric modifications have a greater impact on TF-DNA interactions than hemi-modifications.**a** Symmetric modifications enhance TF's binding to modified consensus sequences more than hemi-modifications. The enriched 6-mers (red dots) show a steeper slope for symmetric methylation than hemi-methylation, although methylation in both symmetric and hemi-forms enhanced HOMEZ's binding activity. In 16 (53.33%) of the 30 modification-enhanced cases symmetric modifications showed higher slopes than the hemi-modifications. Letter E in the consensus sequences represents modified cytosine. **b** Hemi-modifications suppress TF's binding to modified consensus sequences less than symmetric modifications. The suppressed 6-mers (blue dots) show a flatter slope for symmetric methylation than hemi-methylation for OVOL2. In 58 (96.7%) of the 60 modification-suppressed cases symmetric modifications showed flatter slopes than the hemi-modifications. **c** Binding kinetics and affinity studies. Electrophoretic mobility shift assays (EMSA) were used to obtain semi-quantitative measurements of interactions between modified DNA motifs and the corresponding TFs at various concentrations. The DNA probe for HOMEZ's EMSA validation was designed by assembling the six 6-mer sequences with the highest frequency. The OCTET system was employed to determine binding kinetics and affinity values for DNA-TF interactions as described above. The red and black arrows indicate when the DNA probe immobilized on an OCTET biosensor was dipped into TF solution and wash buffer, respectively. The deduced *K*~D~ values are listed for each assay performed at various TF concentrations shown in different colors. DNA oligo sequences used for the EMSA and OCTET were shown in Supplementary Table [1](#MOESM1){ref-type="media"}. Source data are provided as a Source Data file.
To systematically and quantitatively assess this phenomenon, we identified 30 TF-DNA interactions that were enhanced by both symmetric and hemi-modifications and 60 interactions suppressed by both symmetric and hemi-modifications (Fig. [4](#Fig4){ref-type="fig"}, Supplementary Fig. [7](#MOESM1){ref-type="media"} and Supplementary Data [7](#MOESM10){ref-type="media"}, [8](#MOESM11){ref-type="media"}). Next, we quantified the TF-binding preference for a given modification by deducing the slope in the corresponding scatterplot as illustrated in Fig. [6a](#Fig6){ref-type="fig"}. Note that a larger slope corresponds to a stronger preference for that modification. Overall, in 16 (53.33%) of the 30 modification-enhanced cases, symmetric modifications showed higher slopes than the hemi-modifications (Fig. [6a](#Fig6){ref-type="fig"}). Similarly, in 58 (96.7%) of the 60 modification-suppressed cases, symmetric modifications showed lower slopes than the hemi-modifications in all four forms (Fig. [6b](#Fig6){ref-type="fig"}). These results suggest that symmetric modifications can strongly enhance or suppress TF-DNA interactions while the effects of hemi-modifications are more modest.
To quantitatively confirm the above observation, we selected HOMEZ, TBX2, TBX3, TBX20, and STAT5A, as modification-enhanced cases, to determine their binding kinetics and affinity to DNA fragments with symmetric or hemi-modifications. Taking HOMEZ as an example, we synthesized three versions of DNA oligos carrying unmodified C, mC, or hmC, and then converted them to symmetric and hemi-modified dsDNA probes in the sequence context of 5′-TATCGATA. Note that the "pure" symmetric sequences were generated by selecting probes such that the only modified C to be incorporated was at the desired modification site. Using the EMSA assays as a semi-quantitative measurement, the symmetric modifications appeared to have a stronger effect than the hemi counterparts (Fig. [6c](#Fig6){ref-type="fig"}). Next, we quantitatively determined the binding kinetics and affinity using OCTET instrumentation. After the *K*~on~ and *K*~off~ values were determined at different protein concentrations, the *K*~D~ values for the DNA probes carrying symmetric-mC, hemi-mC, symmetric-hmC, and hemi-hmC were deduced to be 243, 553, 135, and 345 nM, respectively, while the *K*~D~ value with the unmodified probe was 5.90 µM. Similarly, STAT5A, TBX2, TBX3, and TBX20 all showed stronger affinity to DNA probes carrying symmetric-caC and -fC than those carrying the respective hemi-modifications (Fig. [6c](#Fig6){ref-type="fig"}, and Supplementary Fig. [9](#MOESM1){ref-type="media"}). As an example of a modification-suppressed case, binding and affinity studies also confirmed that symmetrically methylated consensus sequences suppressed the binding activities of OVOL2 more than the hemi-modified sequences (Fig. [6c](#Fig6){ref-type="fig"}). Taken together, our binding kinetics and affinity studies confirmed that symmetric epigenetic modifications had a stronger impact in either enhancing or suppressing TF-DNA interactions than hemi-modifications.
Potential function of hmC readers in human stem cells {#Sec8}
To explore potential physiological functions of the identified epigenetic modification readers, we decided to focus on hmC readers because, among the oxidized forms of mC, hmC is the most prevalent in tissues, in particular human embryonic stem cells. Since the existing hmC map of H1 is of low coverage, we first employed a selective chemical labeling method^[@CR47]^ to extensively map hmC peaks in human embryonic stem cell H1. A total of 3892 peaks containing hmC were identified, 1783 of which are located in open chromatin regions. Please note that these hmC peaks might contain partially modified sites due to the heterogeneity of the cell population.
Of the 143 unique hmC readers (symmetric and/or hemi) identified in this study, seven have available ChIP-seq data in H1 cells^[@CR48]^. Therefore, we superimposed the available ChIP-seq peaks with the hmC peaks to identify peaks of overlap (Fig. [7a](#Fig7){ref-type="fig"}). We found that the numbers of overlapping peaks were 229, 68, 36, and 46 for USF1, USF2, TAF7, and ATF2, respectively (Fig. [7b](#Fig7){ref-type="fig"}). The observed overlap is significant, as determined with random permutations of the ChIP-seq peaks (Fig. [7b](#Fig7){ref-type="fig"}; see Methods and Materials for more details). Although peaks of overlap were also observed for NRF1, RXRA, and SRF, none of them were significant using the same criteria. Moreover, using the overlapping peaks, we recovered consensus sequences for both USF1 and USF2 that were significantly similar to those obtained from DAPPL assays (Fig. [7c](#Fig7){ref-type="fig"}), suggesting that the two TFs recognize hmC in a similar sequence context in H1 cells.Fig. 7Potential roles of hmCG readers in human embryonic stem cells.**a** A genomic region of overlapping hmC and USF1's ChIP-seq peaks. The identified USF1's hmC consensus found in the peak region is underlined. **b** Statistical significance of the overlapping peaks. 5000 random assignments of TF ChIP-seq peaks in the genome produced a distribution of the numbers of peaks overlapping with hmC peaks. The observed numbers of overlapping peaks are indicated with red arrows for USF1, USF2, TAF7, and ATF2. **c** Comparison of the consensus sequences obtained with DAPPL and those from sequences in the overlapping peaks. Letter E in the consensus sequences represents modified cytosine. **d** Distribution of chromatin states of the overlapping peaks (blue bars) compared with all of the ChIP-seq peaks (red bars). The overlapping peaks of both USF1 and USF2 were enriched in weak enhancer regions with significant *P* values. *P* values were calculated by two-sided Chi-square without adjustments. Source data are provided as a Source Data file.
To examine possible functions of hmC-dependent TF-DNA interactions, we investigated the chromatin status of the overlapping peaks of USF1 and USF2 in H1 cells. Using chromatin status annotation (i.e., ChromHMM), we found that the overlapping peaks of both TFs showed different profiles compared to the overall ChIP-seq peaks, and were mostly enriched in weak enhancer regions (Fig. [7d](#Fig7){ref-type="fig"}). Furthermore, removal of ChIP-seq peaks with moderate to high methylation did not affect this observation. These results suggested that USF1 and USF2 might regulate transcription via hmC-binding activity in weak enhancers in H1 cells.
In this study, we developed an all-to-all approach, DAPPL, to enable highly multiplexed profiling of protein-DNA interactions in the context of DNA epigenetic modifications. To benchmark this approach, we focused on the ETS subfamily because almost all of its members have well-characterized consensus sequences. On the basis of our results and analyses, we demonstrated that the DAPPL approach could reproducibly recover all known consensus sequences with good quality in a single round of selection. Of note, many TFs, such as the bZIP family members, are known to form homo- or heterodimers before they can bind DNA^[@CR49]^. While the DAPPL approach cannot be used to identify consensus sequences for heterodimers, it might work for homodimers because some TFs are obligatory homodimers. For example, full-length ETS1 is not known to bind DNA as a monomer^[@CR50],[@CR51]^. However, the two ETS1 isoforms both identified specific DNA motifs in our pilot DAPPL assays (Supplementary Data [2](#MOESM5){ref-type="media"}). A likely explanation is that such TFs might have already formed homodimers before they were captured on the glutathione beads.
Conceptually, the establishment of this all-to-all approach should revolutionize traditional high-throughput screening, because most of the current high-throughput approaches come in the form of one-to-all. DAPPL represents a technology breakthrough because its multiplicity (e.g., 192 proteins vs. 2.15 × 10^10^ DNA species) far exceeds any existing multiplexed methods, such as dye-based approaches^[@CR52]^. Our DAPPL technology allowed us to assay a mixture of five randomized DNA libraries carrying four different cytosine modifications and unmodified cytosine, and the unique advantage of such a competition assay allowed us to directly compare binding preferences of a given TF (e.g., Fig. [4](#Fig4){ref-type="fig"} and Supplementary Fig. [7](#MOESM1){ref-type="media"}).
We believe that DAPPL could have a wide range of applications and can be adapted by the scientific community for various studies. For example, DAPPL could be adapted to study protein-RNA interactions. Given the >100 different RNA modifications identified so far, we expect that adapting DAPPL to the study of RNA posttranscriptional modifications will have a long-lasting impact. Moreover, DAPPL can be applied to rapidly identify small-molecule binders for drug targets using DNA-encoded small-molecule libraries (i.e., DELs), some of which have reached a complexity of over a trillion compounds. We anticipate that the simultaneous forward and counter screenings in a single test tube enabled by DAPPL will greatly speed up drug discovery while reducing unwanted toxicity, improving the success rate of clinical trials.
To our satisfaction, our DAPPL approach recovered several known methylated-DNA readers, such as MeCP2, MBD4, MEIS1, HOMEZ, TBX2/3/20, RFX2, and PKNOX2^[@CR20],[@CR41],[@CR53]^. In two recent studies, DNA baits carrying mC, hmC, fC, or caC modifications were used to pull down potential binding proteins in cell lysates to identify potential modification readers^[@CR20],[@CR42]^. Although the binding specificity of these readers was not known, quite a few of them were also recovered in our study, such as the hmC reader MeCP2, seven fC readers (CNBP, CSDA, GTF2I, NRF1, PURA, FOXP4, and p53), and seven caC readers (ZBTB7B, CTCF, NRF1, SMARCC1, ZBTB7A, ZBTB7B, and ZNF187)^[@CR20],[@CR23],[@CR42]^. Except for MeCP2, CTCF, and p53, we identified the consensus sequences carrying the corresponding modifications for the rest of these readers, demonstrating the advantage of the DAPPL approach.
We also compared our results with the report by Yin et al.^[@CR41]^. First, 88 and 265 proteins were found to recognize methylated motifs in this study and in Yin et al., respectively^[@CR41]^; 43 were identified in both studies (*P* = 0.0078). Both studies found that Homeodomain TFs prefer methylated DNA motifs. On the other hand, methylation was observed to significantly suppress the binding activity of the ETS family in both studies. In Mann et al., methylation was reported to enhance CEBPα and CEBPβ's binding activities, and inhibit CREB, ATF4, JUN, JUND, CEBPδ, and CEBPγ. Of these eight proteins, CEBPβ, CEBPδ, CEBPγ, ATF4, and JUN were also assayed in this study. We identified significant motifs for CEBPβ and CEBPγ using the symmetric-methylated DNA library but did not find any significant enhancement or suppression by methylation. Although both proteins were categorized as "MethylPlus" in Yin et al., the effects were only marginal^[@CR41]^. Therefore, our study, in agreement with many previous studies, suggests that cytosine-modification enhancement or suppression of TF-DNA interactions might be a widespread phenomenon in humans.
Because the TFs were sampled against the five libraries with different modifications simultaneously in the DAPPL assays, we were able to predict whether a modification in a specific sequence context would enhance or suppress a TF-DNA interaction. Using both EMSA and quantitative kinetics studies, all of the tested predictions were successfully verified (Fig. [6](#Fig6){ref-type="fig"} and Supplementary Fig. [9](#MOESM1){ref-type="media"}). Interestingly, Yin et al. also employed bioinformatics analysis to predict those TF-DNA interactions that could be enhanced (i.e., MethylPlus) or suppressed by symmetric DNA methylation (i.e., MethylMinus), although no experimental validation was provided^[@CR41]^. Of the 14 TFs identified as methylation-enhanced cases in our study, three proteins, PKNOX2, MEIS2, and MEIS3, were also identified as MethylPlus; whereas the other 11 TFs were not included in the study by Yin et al. One of these 11 proteins, HOMEZ, was confirmed to bind the hemi- and symmetric-mC-carrying consensus sequences 10- and 25-fold more strongly than the unmodified counterpart, respectively (Fig. [6c](#Fig6){ref-type="fig"}). Of the 19 methylation-suppressed TFs identified in this study, 11 were also included in the Yin et al. study. Eight of them were also scored as MethylMinus. OVOL2's binding activity to symmetric-methylated DNA was undetectable, while its binding affinity to hemi-methylated DNA dropped to 15 µM, 3-fold weaker than the unmethylated counterpart (Fig. [6c](#Fig6){ref-type="fig"}).
Compared with intensive studies of symmetric modifications, the biological roles of hemi epigenetic modifications remain obscure and underexplored^[@CR25]^. Emerging evidence suggests that hemi-mC is a stable epigenetic mark because a small fraction of hemi-mC could be inherited, and it has been associated with transcriptional regulation^[@CR24]^. Moreover, in a non-CpG context (e.g., CpA, CpT, or CpC) mC is asymmetrical and mCpA, in particular, is often found in gene bodies and correlated with gene transcription in mammalian stem cells and neurons^[@CR31],[@CR33]^. However, the molecular functions of such hemi epigenetic marks remain largely unknown because of a lack of readers. Previously, only a handful of proteins, such as Dnmt1 and UHRF1, were reported to recognize hemi-mC in cells^[@CR54],[@CR55]^. We identified 18, 11, 15, and 24 proteins that could preferentially recognize consensus sequences carrying hemi-mC, -hmC, -fC, and -caC, respectively (Supplementary Data [8](#MOESM11){ref-type="media"} and Supplementary Fig. [7](#MOESM1){ref-type="media"}). Consistent with results observed with symmetric modifications, many hemi-modifications could either enhance or suppress TF-DNA interactions, although to a lesser extent as evidenced by kinetics studies. We believe that our results and analyses will pave the way for the elucidation of the physiological roles of these types of epigenetic marks.
Transcription factor clones and protein purification {#Sec11}
Using the existing human ORF expression library, each TF protein was expressed in yeast, purified as an N-terminal GST fusion, and captured on glutathione beads^[@CR56]^. A step-by-step protocol describing the protein purification with a high-throughput method can be found at Protocol Exchange^[@CR57]^. In brief, each yeast strain was grown in 800 µL of SC-Ura/glucose liquid medium overnight as a primary culture. Fifty microliters of saturated yeast culture were inoculated into 8 mL of SC-Ura/raffinose (Sigma-Aldrich) liquid medium and grown until the OD~600~ reached 0.6, followed by induction with 2% galactose (Sigma-Aldrich) for 6 h at 30 °C. The harvested yeast cell pellets were stored at −80 °C or immediately lysed in lysis buffer (50 mM Tris-HCl at pH 7.5 with 100 mM NaCl, 1 mM EGTA, 0.01% Triton X-100, 0.1% beta-mercaptoethanol, 1 mM PMSF, and Roche protease inhibitor tablet [Roche]). Next, the cell lysates were centrifuged at 3,500 × g for 10 min at 4 °C; the supernatants were transferred to a plate with prewashed glutathione beads (GE Healthcare), and incubated overnight at 4 °C to capture the GST fusion proteins. Subsequently, the beads were washed three times with wash buffer I (50 mM HEPES at pH 7.5 with 500 mM NaCl, 1 mM EGTA, 10% glycerol, and 0.1% beta-mercaptoethanol) and three times with wash buffer II (50 mM HEPES at pH 7.5 with 100 mM NaCl, 1 mM EGTA, 10% glycerol, and 0.1% beta-mercaptoethanol). All TFs conjugated to glutathione beads were stored at −80 °C until use. As a negative control, the GST tag alone was expressed and purified alongside the transcription factors.
When necessary, the TF proteins were eluted from glutathione beads with elution buffer (50 mM HEPES at pH 8.0 with 100 mM NaCl, 50 mM KAc, 5 mM MgCl~2~, 40 mM glutathione, 10% glycerol, and 0.05% Tween-20). For the TF-DNA-binding kinetics assay, glycerol (a high refractive index component) was left out of the elution buffer (50 mM HEPES at pH 8.0 with 100 mM NaCl, 50 mM KAc, 5 mM MgCl~2~, 40 mM glutathione, and 0.05% Tween-20).
Conjugation of anchor oligo to TF proteins in a 96-well format {#Sec12}
A step-by-step protocol describing the conjugation of DNA oligos to TF proteins can be found at Protocol Exchange^[@CR57]^. To barcode TF proteins with DNA, a common oligo (i.e., the anchor oligo) was first covalently conjugated to each TF protein. Before the conjugation assay, the maleimide-2,5-dimethylfuran cycloadduct on the 5′-end of the anchor oligo (Genelink) was converted to maleimide via a retro-Diels-Alder reaction. After cooling to room temperature, the lyophilized oligos were dissolved in PBS buffer to a final concentration of 50 µM and added to each purified protein on glutathione beads arrayed in 96-well plates. The conjugation was achieved using a "click" chemistry reaction between the sulfhydryl group on cysteine residues of the proteins and a maleimide group tethered to the 5′-end of the anchor oligo at room temperature for 1 h. The free oligos were removed with three stringent washes (50 mM HEPES at pH 7.5 with 100 mM NaCl, 1 mM EGTA, 10% glycerol, and 0.1% beta-mercaptoethanol) and the anchor oligo-conjugated TFs were stored at −80 °C until use.
Assignment of DNA barcodes to TF proteins {#Sec13}
A step-by-step protocol describing the assignment of DNA barcodes to TF proteins can be found at Protocol Exchange^[@CR57]^. To assign a unique DNA barcode to each protein, a collection of 2000 address oligos (Integrated DNA Technologies) was synthesized, each containing a *Bsa*I recognition site and a cutting site (GGTCTCCGACT) at the 5′-end, an 8-nt random sequence as a unique molecular identifier (UMI), a 7–11-nt unique DNA barcode sequence, and a 20-nt constant sequence complementary to the anchor oligo (Supplementary Fig. [1a](#MOESM1){ref-type="media"} and Supplementary Table [1](#MOESM1){ref-type="media"}). Next, the address oligos were individually annealed to the anchor oligos conjugated on the TF proteins in a 96-well format, followed by a Klenow polymerase reaction (1× NEB Buffer 2 with 0.6 mM dNTP mix, 1 U Klenow [New England Biolabs], 12 µM address oligos) at 37 °C for 30 min to synthesize the complementary strands. Free address oligos were removed with three washes in the same washing buffer described above. One-fiftieth of the bead slurry of each protein was taken from two adjacent plates (i.e., 192 = 2 × 96) and pooled. A total of eight protein pools were generated for the DAPPL reactions.
DNA library preparation {#Sec14}
The 16-mer random region of template DNA oligos (Integrated DNA Technologies) was synthesized in an A:C:G:T ratio of 30:30:20:20 because this ratio is known to provide a more equal distribution of the four bases^[@CR58]--[@CR60]^. The 16 random nucleotides were designed to be flanked by two constant sequences: one for amplifying the DAPPL product (i.e., 5′-GGGAGAAGGTCATCAAGAGG) and the other with a *Bsa*I restriction site and its cutting site (i.e., 5′-GGCATGCAGCCACTATAAGCTTCGAAGACTTGAGACCAT). The double-stranded 16-mer DNA library was generated by annealing the template oligo with a complementary primer oligo in a 1:1 ratio, followed by a Klenow polymerase reaction. By design, after *Bsa*I digestion the sticky ends of the 16-mer DNA library were complementary to those of the proteins' DNA barcodes such that they could be annealed and ligated when brought in close proximity.
To create DNA libraries carrying four different epigenetic modifications, including mCpG, hmCpG, fCpG, and caCpG, the template oligos (Genelink) were first synthesized to encode a 5′ Primer 2 (5′-CACATCCTTCACATTAATCC), an 18-mer sequence with a modified CpG in the middle flanked by two 8-mer random sequences, a short sequence encoding the library identity, and a *Bsa*I recognition site and its cutting site (Supplementary Table [1](#MOESM1){ref-type="media"}). These oligos were then converted into dsDNA libraries in the presence or absence of the modified dCTPs to create symmetric or hemi-libraries, respectively, using the Klenow reactions as described above. Each library was purified with a QIAquick PCR Purification kit (Qiagen) according to the manufacturer's instructions. The design of each modified library is shown in Supplementary Table [1](#MOESM1){ref-type="media"}. In addition, an unmodified DNA library of the same design was also created as a reference library.
Finally, the four libraries with four different symmetric modifications and the unmodified library were mixed in an equimolar ratio to form the mixture of symmetric modification libraries. A mixture of the hemi-modification libraries was created using the same method in parallel.
Establishment of DAPPL method {#Sec15}
A step-by-step protocol describing the procedure of the DAPPL assay can be found at Protocol Exchange^[@CR57]^. To develop and optimize the digital affinity profiling via proximity ligation (DAPPL) reactions, we incubated each TF mixture on beads with 200 nmol of the 16-mer DNA library in a TF-binding buffer (10 mM Tris-HCl at pH 7.5 with 50 mM NaCl, 1 mM DTT, and 4% glycerol) at room temperature for 30 min. After three stringent washes in binding buffer and PBS buffer, the protein-DNA complexes on the beads were crosslinked with 0.1% formaldehyde for 10 min and quenched with Tris-Glycine buffer (pH 7.5). Next, the beads were washed in TBST buffer (0.01% Tween-20) three times and equilibrated in 1× T4 DNA ligase buffer. A Golden Gate Assembly reaction was performed by adding a master reaction mixture (227 µL of ddH~2~O with 30 μL of 10× T4 DNA ligase reaction buffer, 30 μg bovine serum albumin, 20 µL of 100 U of *Bsa*I [New England Biolabs], and 20 µL of 600 U of T4 DNA ligase [Enzymatics]) to the beads and incubating for 1 h at 37 °C^[@CR43]^. After Proteinase K (New England Biolabs) treatment and phenol/chloroform (ThermoFisher Scientific) extraction, the ligated DAPPL products were subjected to 15–18 cycles of PCR amplification (New England Biolabs) to prepare the sequencing libraries, followed by a gel extraction step. The number of PCR cycles was determined on the basis of the final quantity of the PCR products, which was about 500 ng. A next-generation sequencing library was constructed and deep-sequenced on an Illumina NextSeq500 sequencer.
DAPPL to discover readers for different epigenetic DNA modifications {#Sec16}
The optimized protocols described above were applied to identify potential readers for symmetric and hemi-modifications using 200 nmol of the mixture of symmetric or hemi-modification libraries, respectively. Four random DNA libraries were synthesized, each of which carried a CpG site with mC, hmC, fC, or caC flanked by two 8-mer random sequences. A three-nucleotide barcode was also added to each sequence to identify the modification (Supplementary Table [1](#MOESM1){ref-type="media"}). To ensure symmetric modifications at the middle CpG sites, the complementary strands of each library were synthesized with Klenow polymerase in the presence of the corresponding modified dCTPs (i.e., 5-methyl-dCTP; 5-hydroxymethyl-dCTP; 5-formyl-dCTP; 5-carboxy-dCTP). For comparison, an unmodified library of the same design was also synthesized (Supplementary Table [1](#MOESM1){ref-type="media"}). An equal amount of the five DNA libraries was pooled together to generate the symmetric modification DNA reaction mixture (Fig. [1](#Fig1){ref-type="fig"}). In parallel, the four hemi-modified DNA libraries and the unmodified library were also synthesized and mixed in an equimolar ratio. The two library mixtures carrying the symmetric or hemi-modifications were then separately incubated with the TF mixtures to carry out the DAPPL reactions, as described above. A light crosslinking step was also included before the proximity ligation, as described above. Since formaldehyde-based crosslinking is known to prefer primary amines on G/C/A nucleotides^[@CR61]^, it is unlikely for the Schiff base to react with the methyl, hydroxymethyl, formyl, or carboxyl moieties on the modified cytosine. Note that we did not remove the modifications before deep-seq; the type of modification could be distinguished based on the built-in library barcodes (Supplementary Table [1](#MOESM1){ref-type="media"}).
Generation of PWM models {#Sec17}
Because each DAPPL product carried the barcode sequence of the TF that captured this sequence (see Fig. [1](#Fig1){ref-type="fig"}), all of the TF-captured sequences were partitioned by the TFs (via the TF barcodes). Scripts were written in R; the seqinr and Biostrings packages were used to read/write the fasta/fastq files and match TF barcodes. As this information was kept throughout the entire computational analysis, the sequencing reads could be readily mapped back to each TF. Since each TF was fused with GST, we first excluded the DNA-binding activity contributed by the GST tag. To do so, we extracted 6-mers with a sliding window of length 6, moving along each sequencing read one nucleotide at a time. The 6-mer occurrences were obtained for the sequences bound by a particular TF and, similarly, for the sequences bound by the GST proteins, which were included in each multiplexed binding assay as a negative control. We then determined the 6-mers that were enriched for a given TF by comparing their occurrences with their GST counterparts.
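As an illustration, the sliding-window extraction can be sketched in a few lines of base R (a minimal sketch with our own names, not the authors' actual script; `reads` is assumed to hold the captured sequences for one TF barcode):

```r
# Count 6-mer occurrences in a set of reads (illustrative sketch).
count_kmers <- function(reads, k = 6) {
  kmers <- unlist(lapply(reads, function(r) {
    n <- nchar(r)
    if (n < k) return(character(0))
    substring(r, 1:(n - k + 1), k:n)   # slide one nucleotide at a time
  }))
  table(kmers)                          # occurrences of each observed 6-mer
}
```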
The count of the 6-mers bound by GST was referred to as:

$$\mathbf{X} = \{x_1, x_2, \ldots, x_{4096}\}$$

The count of the 6-mers bound by a TF was referred to as:

$$\mathbf{Y} = \{y_1, y_2, \ldots, y_{4096}\}$$

The occurrence pairs of the 6-mers were referred to as:

$$\mathbf{P} = \{p_1(x_1, y_1), \ldots, p_{4096}(x_{4096}, y_{4096})\}$$
We defined the enriched 6-mers for each protein using the following criteria. First, the slope (i.e., the ratio of the 6-mer frequency of a given protein over GST) had to be greater than 1. Second, we selected the top quartile of 6-mers with slopes >1. Finally, only the 6-mers whose frequencies were greater than the median were selected. Therefore, the enriched 6-mers were determined using the following equations:

$$\mathbf{M} = \left\{(x_i, y_i) \mid x_i < y_i\right\}$$

$$\mathbf{N} = \left\{(x_i, y_i) \;\middle|\; \frac{y_i}{x_i} > \mathrm{Q}_3\left(\left\{\frac{y_m}{x_m} \,\middle|\, (x_m, y_m) \in \mathbf{M}\right\}\right)\right\}$$

$$p_i \in \left\{(x_i, y_i) \mid y_i > \mathrm{median}\left(\{y_n \mid (x_n, y_n) \in \mathbf{N}\}\right)\right\}$$

where $\mathrm{Q}_3$ denotes the third quartile of the slopes and median the median value of the 6-mer frequencies.
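A compact R sketch of these three criteria (our own illustrative code; `x` and `y` are assumed to be the GST and TF 6-mer frequency vectors in the same 6-mer order, with pseudocounts so that `x > 0`):

```r
# Return the indices of enriched 6-mers per the criteria above (sketch).
enriched_6mers <- function(x, y) {
  m <- which(y > x)                         # set M: slope > 1
  slopes <- y[m] / x[m]
  n <- m[slopes > quantile(slopes, 0.75)]   # set N: top quartile of slopes (Q3)
  n[y[n] > median(y[n])]                    # keep 6-mers above the median frequency
}
```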
The collection of the enriched 6-mers was used to construct the binding consensus sequences (i.e., motifs) using HOMER (<http://biowhat.ucsd.edu/HOMER>). The enriched 6-mers were first mapped back to the original library sequences and these sequences were used as input for HOMER. Each motif was associated with a *P* value. To determine the cutoff of the *P* values in identifying significant motifs, we performed 400 rounds of random sampling without replacement of the GST-bound sequences, with a sample size of 10,000 per library. Each of the 400 runs generated a top *P* value. These 400 *P* values were considered a null distribution and the *P* value at the 95th percentile (i.e., a 0.05 false discovery rate) was used as the cutoff to call significant motifs. The modified-motif logos were rendered using a customized R script calling the ggseqlogo package.
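The empirical cutoff can be sketched as follows (a hypothetical wrapper `homer_top_p()` stands in for the actual HOMER invocation; the percentile follows the rule stated above, and whether it is taken on raw *P* values or a significance score depends on how the HOMER output is stored):

```r
# Null distribution of top motif P values from 400 samplings without
# replacement of GST-bound sequences (homer_top_p() is hypothetical).
null_p <- replicate(400, {
  samp <- sample(gst_sequences, 10000, replace = FALSE)
  homer_top_p(samp)
})
p_cutoff <- quantile(null_p, 0.95)   # 95th-percentile cutoff, per the text
```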
Identification of modification-dependent TF-DNA interactions {#Sec18}
We identified sequences that were specific to either modified or unmodified sequences by comparing the sequence frequencies from the modified versus unmodified library for a given protein. To correct possible library-specific bias, we first performed Lowess data normalization^[@CR62]^ for the GST data from the different modified libraries. Here, **X** = {*x*~*1*~*, x*~*2*~*, ... x*~*n*~} and **Y** = {*y*~*1*~*, y*~*2*~*, ... y*~*n*~} were used to denote the 6-mer frequencies obtained from unmodified versus modified DNA libraries for GST proteins. A log~2~-based scatterplot of 6-mer intensity (**A** = log~2~(**X** + **Y**)) versus ratio (**M** = log~2~(**X**/**Y**)) was plotted. A locally weighted linear regression was used to calculate a regression curve from the corresponding scatterplot. This curve was then used to correct systematic deviations of 6-mer frequencies between two libraries for each TF.
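A sketch of this correction in base R (assuming pseudocounted frequency vectors; `lowess()` fits the trend on the GST scatter, which is then subtracted from each TF's log-ratios; variable names are ours):

```r
# Fit the GST trend on the intensity-vs-ratio scatter defined above.
A_gst <- log2(x_gst + y_gst)
M_gst <- log2(x_gst / y_gst)
fit <- lowess(A_gst, M_gst)

# Subtract the interpolated GST trend from a TF's log-ratios.
adjust_ratios <- function(x_tf, y_tf) {
  A <- log2(x_tf + y_tf)
  M <- log2(x_tf / y_tf)
  M - approx(fit$x, fit$y, xout = A, rule = 2)$y
}
```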
We then compared the adjusted 6-mer frequencies obtained from the modified and unmodified libraries. By plotting the 6-mer frequencies obtained with modified versus unmodified DNA libraries, we identified those sequences that were preferentially bound by TFs in either modified or unmodified form. We next estimated the expected deviation of the 6-mer frequencies of the GST proteins from the diagonal line ($y = x$), which represents the difference between the modified and unmodified sequences. We calculated the distance ($\mathrm{Dist}_{\mathrm{GST}_i}$) between each 6-mer bound by GST and the diagonal line $y = x$, and set the distance cutoff as:

$$\mathrm{Dist}_{\mathrm{cutoff}} = \mathrm{avg}\left(\mathbf{Dist}_{\mathbf{GST}}\right) + 6 \times \mathrm{sd}\left(\mathbf{Dist}_{\mathbf{GST}}\right)$$

where avg() and sd() denote the average and standard deviation of the distance distribution.
Please note that the corresponding *P* value is 1 × 10^−5^ at an S.D. of 6. If a 6-mer bound by a TF is located outside the distance cutoff in the scatterplot (i.e., *Dist*~*TFi*~ > *Dist*~*cutoff*~), it was considered a candidate with either enhanced or suppressed binding to the TF. After identifying the candidate 6-mers, we calculated the log~2~-based slope for each TF. If the log~2~-based slope was greater than 0.5 or smaller than −0.5, those 6-mers were used to construct the modification-specific consensus sequences.
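In code, the cutoff and candidate selection might look like this (a sketch with our own variable names; perpendicular distance to the diagonal on a log~2~ scatter is assumed):

```r
# Distance of each point to the diagonal y = x (log2 frequencies assumed).
dist_diag <- function(lx, ly) abs(ly - lx) / sqrt(2)

d_gst  <- dist_diag(log2(x_gst), log2(y_gst))
cutoff <- mean(d_gst) + 6 * sd(d_gst)            # avg + 6 s.d., as above

d_tf <- dist_diag(log2(x_tf), log2(y_tf))
cand <- which(d_tf > cutoff)                      # candidate 6-mers
keep <- cand[abs(log2(y_tf[cand] / x_tf[cand])) > 0.5]  # |log2 slope| > 0.5
```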
Quantifying similarity between motifs {#Sec19}
Tomtom^[@CR63]^ was used to quantify the similarity between two motifs. Two motifs with a *P* value < 0.01 were considered similar. When multiple motifs were associated with a TF in CIS-BP, we used the smallest *P* value. Note that we did not treat the modified cytosine differently when assessing motif similarity using Tomtom.
Quantification of the TF-binding preference {#Sec20}
We used the slope of the enriched 6-mers in the scatterplot as a proxy for evaluating binding preference to either modified or unmodified sequences. The counts of the *N* enriched 6-mers in the unmodified library were referred to as:

$$\mathbf{X} = \{x_1, x_2, \ldots, x_N\}$$

The counts of the enriched 6-mers in the modified library were referred to as:

$$\mathbf{Y} = \{y_1, y_2, \ldots, y_N\}$$

The slope was calculated as:

$$S = \frac{1}{N}\sum_{i=1}^{N} \frac{y_i}{x_i}$$
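The corresponding one-liner (illustrative):

```r
# Preference score S: mean per-6-mer ratio of modified (y) over
# unmodified (x) counts, per the equation above.
slope_score <- function(x, y) mean(y / x)
```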
Electrophoretic mobility shift assay {#Sec21}
The oligos for the DNA probes used in the EMSA assays were designed and synthesized by Integrated DNA Technologies (IDT) and Genelink, as shown in Supplementary Table [1](#MOESM1){ref-type="media"}. They were converted to dsDNA with a Cy5 end-labeled T7 primer using the Klenow polymerase reaction in the presence or absence of the desired modified dCTPs. The resulting dsDNAs were purified with a QIAquick PCR Purification Kit.
To perform EMSA assays, a Cy5-labeled DNA probe at 5 nM was incubated with its corresponding TF protein(s) in a binding buffer (25 mM HEPES at pH 8.0 with 100 mM NaCl, 5% (v/v) glycerol, 50 mM KAc, 5 mM MgCl~2~, 1 mM DTT and 0.1 mg/mL bovine serum albumin) at room temperature (∼22 °C) for one hour. The reaction mixture was then loaded onto a 5% Criterion TBE Gel (BioRad), and electrophoresed for 90 min at 80 V in 0.5× TBE buffer. Formation of protein-DNA complexes was detected by scanning the TBE gel using an Odyssey CLx Imaging System (LI-COR Image Studio Lite Ver 5.2, LI-COR Biosciences).
TF-DNA-binding kinetics and affinity measurement with the OCTET {#Sec22}
The TF-DNA-binding kinetic assays were performed using an OCTET RED96 device, equipped with SAX (High Precision Streptavidin Biosensor) biosensor tips (FortéBio). Biotin-labeled dsDNA probes were generated using the same method as described above but with a biotin-labeled T7 primer. Each DNA probe was then diluted to a final concentration of 0.1 µM in the DNA-binding buffer (50 mM HEPES at pH 8.0 with 100 mM NaCl, 50 mM KAc, 5 mM MgCl~2~, 40 mM glutathione, 5 mg/mL BSA, and 0.05% Tween-20). Each TF protein of interest was purified from large yeast cultures and serial two-fold dilutions were made in the DNA-binding buffer. The binding kinetics of TF-DNA interactions were measured according to the manufacturer's instructions (FortéBio). In short, a streptavidin-coated biosensor was first immersed in the DNA-binding buffer for 10 min to establish the baseline, followed by dipping into the DNA probe well to capture the biotinylated DNA probe on the biosensor. After another approximately 600-s baseline step in the binding buffer, the biosensor was dipped into a TF protein well to obtain the on-curve until signal saturation. The off-curve was obtained by transferring the biosensor to a well containing fresh binding buffer until the off-curve became flat. Data collection, data analysis, and curve-fitting were performed using FortéBio's Data Acquisition 7.1 and Data Analysis 7.1, based on which the *K*~on~ and *K*~off~ values were determined for each binding assay. The *K*~D~ values were deduced by taking the ratio *K*~off~/*K*~on~.
Probes used for the OCTET experiments in Fig. [6c](#Fig6){ref-type="fig"} and Supplementary Fig. [9b](#MOESM1){ref-type="media"} are listed in Supplementary Table [1](#MOESM1){ref-type="media"}.
Genome-wide hmC profiling of human embryonic stem cell H1 {#Sec23}
Human embryonic stem cell H1 was purchased from WiCell Research Institute (WiCell) and the ethics approval was obtained from the Robert-Koch Institute, Berlin, Germany. Genomic DNA was isolated from human embryonic stem cell H1 (WiCell) with standard protocols. The cell pellet of two million H1 cells was suspended in 500 µL digestion buffer (100 mM Tris-HCl at pH 8.5 with 5 mM EDTA, 1% SDS, 200 mM NaCl) on ice and then treated with Proteinase K at 55 °C overnight. After phenol/chloroform/isoamyl alcohol (25:24:1, saturated with 10 mM Tris, pH 8.0, and 1 mM EDTA) extraction, the aqueous phase was transferred to a test tube, mixed with an equal volume of isopropanol, and stored at −80 °C overnight to precipitate the genomic DNA. After spinning at 21,000 × *g* for 10 min at 4 °C, the DNA pellet was washed with 75% ethanol, air-dried, and dissolved in nuclease-free water. For hmC-seq, hmC-specific chemical labeling and enrichment of hmC-containing DNA fragments were performed using a previously described method^[@CR47]^. DNA libraries were prepared following the Illumina protocol 'Preparing Samples for ChIP Sequencing of DNA' (Illumina) using genomic DNA or hmC-captured DNA and then subjected to deep sequencing on an Illumina HiSeq 2000.
Mapping overlapping regions between hmC and TF ChIP-seq peaks {#Sec24}
For the hmC-seq data, we aligned the fastq files to the reference genome and used MACS2 for peak calling separately on the two replicates (settings: macs2 callpeak -t input_file -f BAM -g hs -n output_prefix -B -q 0.01). We used the IDR (Irreproducible Discovery Rate) framework to measure consistency between the two replicates; shared regions with an IDR below 0.05 were regarded as consistent and reproducible peaks. In total, we identified 3892 hmCG-modified regions. We downloaded all narrowPeak files for TF ChIP-seq datasets in the H1-hESC cell line from ENCODE and UCSC, which revealed seven TFs with binding sites overlapping the hmCG-modified regions. The numbers of overlapping regions were compared with those obtained from 5000 simulations of randomly selected genomic regions with the same width distribution as the ChIP-seq peaks. USF1, USF2, TAF7, and ATF2 were found to have enriched overlapping regions. Furthermore, significant motifs were identified for USF1 and USF2 using these overlapping regions (motifs were called with MEME).
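A sketch of this permutation test with GenomicRanges (the helper `random_regions()` is hypothetical and stands in for drawing same-width regions at random from the genome; object names are illustrative):

```r
library(GenomicRanges)

# Observed number of TF ChIP-seq peaks overlapping hmC peaks.
obs <- sum(countOverlaps(tf_peaks, hmc_peaks) > 0)

# Null distribution from 5000 random placements of same-width regions.
# random_regions() is a hypothetical helper, not a GenomicRanges function.
null <- replicate(5000,
  sum(countOverlaps(random_regions(tf_peaks), hmc_peaks) > 0))

p_emp <- (sum(null >= obs) + 1) / (length(null) + 1)   # empirical P value
```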
Next, we investigated the chromatin status of TF binding under hmCG modification. After combining our data with the WGBS dataset, we regarded TF-binding regions with average methylation below 0.2 that did not overlap hmCG-modified regions as background. Using the ChromHMM annotation of H1-hESC, we found that TF binding under hmCG modification occurred in different chromatin states (the R package GenomicRanges was used for the overlap analysis). The ChromHMM annotation was downloaded from the UCSC Genome Browser (<http://hgdownload.soe.ucsc.edu/goldenPath/hg19/encodeDCC/wgEncodeAwgSegmentation/wgEncodeAwgSegmentationChromhmmH1hesc.bed.gz>). Then, liftOver was used to convert the annotation file from hg19 to GRCh38.
Reporting summary {#Sec25}
Further information on research design is available in the [Nature Research Reporting Summary](#MOESM12){ref-type="media"} linked to this article.
Peer Review File
Description of Additional Supplementary Files
Supplementary Data 1
Supplementary Data 2
Supplementary Data 3
Supplementary Data 4
Supplementary Data 5
Supplementary Data 6
Supplementary Data 7
Supplementary Data 8
Source data {#Sec27}
**Peer review information** *Nature Communications* thanks the anonymous reviewers for their contribution to the peer review of this work. Peer reviewer reports are available.
These authors contributed equally: Guang Song, Guohua Wang, Ximei Luo.
**Supplementary information** is available for this paper at 10.1038/s41467-021-20950-w.
This work was supported in part by the NIH (RO1 AG061852-03, RO1MH122451-01, RO1HL149961-01, R24 AA025017, U01CA200469, and P50AA026116 to H.Z.; R01GM111514 to H.Z. and J.Q.; R01EY029548 and R01EY024580 to J.Q.; R35NS116843 to H.S.; P01NS097206 to H.S. and P.J.; NS051630 and NS111602 to P.J.; and NIH Chemical Biology Interface Training Grant T32GM080189). We would like to thank Dr. Jef D. Boeke for his support and insightful discussion during this project and Dr. Jun Shen for providing human embryonic stem cells. Finally, we would like to thank Jessica Dunn and Eric Johansen for proofreading this manuscript.
G.S. conducted the majority of the wet-lab experiments with help from Y.C., Q.S., and C.M. X.L. and G.W. made the major contributions to developing the bioinformatics algorithms and data analyses with help from J.W. J.B., H.S., and P.J. provided critical reagents and support for this study. H.Z. and J.Q. conceived the idea, supervised this study, and wrote the manuscript with help from all the authors.
All the raw and processed sequence data that support the findings of this study have been deposited in the NCBI Gene Expression Omnibus (GEO) database under accession code [GSE160457](https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE160457). Source data are provided with this paper.
All of the computer programs and scripts used are publicly available at <https://github.com/HitTracy/DAPPL>. A released version of the code is available via DOI (10.5281/zenodo.4308235).
The authors declare no competing interests.
|

# Tally Blaster 
WiFi-based tally light using a NodeMCU and 2 NeoPixel Mini PCBs. Currently supports vMix; support for more systems like Blackmagic, Roland, etc. is planned.

Case by Elvin Media
## Features
- Camera tally LED & Host tally LED
- Brightness control
- Locate function
- Save settings permanent (App not needed during production)
- Autodiscover tally nodes using Zeroconf/Bonjour
- Configure app to control and monitor all nodes
- vMix support
### How It Works
- When your ESP starts up, it goes into Station mode and tries to connect to a previously saved Access Point
- if this is unsuccessful (or no network was saved previously), it moves the ESP into Access Point mode and spins up a DNS and WebServer (default ip <IP_ADDRESS>); see the sketch after this list
- using any wifi enabled device with a browser (computer, phone, tablet), connect to the newly created Access Point (ESP*)
- because of the Captive Portal and the DNS server you will either get a 'Join to network' type of popup or get any domain you try to access redirected to the configuration portal
- choose the access point that connects to your vMix (must have DHCP), enter the password, click save
- start Configuration App and scan for Tally Nodes
- configure your nodes with the vMix IP and tally number.
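A minimal sketch of the connection flow, using the WiFiManager library listed under *Flashing NodeMCU* (the AP name "ESP-Tally" is illustrative, not the firmware's actual value):

```cpp
#include <ESP8266WiFi.h>
#include <WiFiManager.h>  // tzapu's WiFiManager

void setup() {
  WiFiManager wm;
  // Try the saved Access Point first; on failure, open a captive-portal AP.
  if (!wm.autoConnect("ESP-Tally")) {
    ESP.restart();  // portal closed without valid credentials
  }
  // From here the node runs in Station mode on the configured network.
}

void loop() {}
```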
### Needed Hardware
- NodeMCU v3
- 2 NeoPixels Mini PCB
- Usb cable
- USB powersupply or Powerbank
- WiFi connection to your vMix computer
### Flashing NodeMCU (V3)
- Download latest release
- Unpack zip
- Open vmix-tally.ino with arduino IDE
- Install the extra libraries (WiFiManager by tzapu 2.0.3-alpha, Adafruit NeoPixel by Adafruit 1.5.0, WebSockets by Markus Sattler 2.2.0)
- Flash NodeMCU
## NodeMCU pin layout
- Connect 1 NeoPixel input to 1 NeoPixel output (+,signal,-)
- Connect + of first NeoPixel to pin VU
- Connect - of first NeoPixel to ground
- Connect signal of first NeoPixel to D2
- Power using USB (see the minimal sketch below)
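A minimal wiring check for this layout (assumed values; red = PROGRAM per the color table below):

```cpp
#include <Adafruit_NeoPixel.h>

#define PIXEL_PIN   D2  // signal of the first NeoPixel, per the list above
#define PIXEL_COUNT 2   // camera tally + host tally

Adafruit_NeoPixel pixels(PIXEL_COUNT, PIXEL_PIN, NEO_GRB + NEO_KHZ800);

void setup() {
  pixels.begin();
  pixels.setBrightness(64);               // brightness control
  pixels.fill(pixels.Color(255, 0, 0));   // red = PROGRAM
  pixels.show();
}

void loop() {}
```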
## Current plans
- Add 3d printer case design
- Stabilize, optimise and improve software
- Improve documentation
- Configure using webinterface
- [Ask vMix for ZeroConf broadcast to allow auto connect](https://forums.vmix.com/posts/t23873-Zeroconf---Bonjour)
- Create demonstration video
# Led colors
- CONNECTED: LEDS off
- PREVIEW: Green
- PROGRAM: Red
- LOCATE: White Blinking
- CONNECTING WIFI: Purple Blinking
- CONNECTING VMIX: Orange Blinking
# Sponsor
This project will grow faster with your help. Donations and sponsorships allow me to spend more time on this project and help me pay for licenses and hardware to test, coffee, debugging pizza, and release beers. Want your logo on this page? Please contact me.
[Github Sponsor](https://github.com/sponsors/ruudboon). One-time donations are welcome through [PayPal](https://www.paypal.me/ruudboon).
# Tally Blaster Control app

### Inspired by
* [Arduino-vMix-tally](https://github.com/ThomasMout/Arduino-vMix-tally)
* [WiFiManager](https://github.com/tzapu/WiFiManager)
* [Automatic Update Server](https://www.instructables.com/id/Set-Up-an-ESP8266-Automatic-Update-Server/)
|
Supported sites not downloading
DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
[X] I understand that I will be blocked if I intentionally remove or skip any mandatory* field
Checklist
[X] I'm reporting a bug unrelated to a specific site
[X] I've verified that I have updated yt-dlp to nightly or master (update instructions)
[X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
[X] I've checked that all URLs and arguments with special characters are properly quoted or escaped
[X] I've searched known issues and the bugtracker for similar issues including closed ones. DO NOT post duplicates
[X] I've read the guidelines for opening an issue
Provide a description that is worded well enough to be understood
While I can get a YouTube video to download most of the time, some other supported sites do not work.
A couple of other sites also seemed to fail, but the following one does for sure.
Provide verbose output that clearly demonstrates the problem
[X] Run your yt-dlp command with -vU flag added (yt-dlp -vU <your command line>)
[ ] If using API, add 'verbose': True to YoutubeDL params instead
[X] Copy the WHOLE output (starting with [debug] Command-line config) and insert it below
Complete Verbose Output
yt-dlp -vU https://www.pornhub.com/view_video.php?viewkey=65379a39825a8
[debug] Command-line config: ['-vU', 'https://www.pornhub.com/view_video.php?viewkey=65379a39825a8']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version<EMAIL_ADDRESS>from yt-dlp/yt-dlp [7ea278792] (zip)
[debug] Python 3.10.12 (CPython x86_64 64bit) - Linux-6.8.0-49-generic-x86_64-with-glibc2.35 (OpenSSL 3.0.2 15 Mar 2022, glibc 2.35)
[debug] exe versions: ffmpeg 4.4.2 (setts), ffprobe 4.4.2, rtmpdump 2.4
[debug] Optional libraries: Cryptodome-3.11.0, brotli-1.0.9, certifi-2020.06.20, mutagen-1.45.1, pyxattr-0.7.2, requests-2.25.1, secretstorage-3.3.1, sqlite3-3.37.2, urllib3-1.26.5, websockets-9.1
[debug] Proxy map: {}
[debug] Request Handlers: urllib
[debug] Loaded 1837 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version<EMAIL_ADDRESS>from yt-dlp/yt-dlp
yt-dlp is up to date<EMAIL_ADDRESS>from yt-dlp/yt-dlp)
[PornHub] Extracting URL: https://www.pornhub.com/view_video.php?viewkey=65379a39825a8
[PornHub] 65379a39825a8: Downloading pc webpage
ERROR: [PornHub] 65379a39825a8: Unable to extract title; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
File "/usr/local/bin/yt-dlp/yt_dlp/extractor/common.py", line 742, in extract
ie_result = self._real_extract(url)
File "/usr/local/bin/yt-dlp/yt_dlp/extractor/pornhub.py", line 305, in _real_extract
'twitter:title', webpage, default=None) or self._html_search_regex(
File "/usr/local/bin/yt-dlp/yt_dlp/extractor/common.py", line 1382, in _html_search_regex
res = self._search_regex(pattern, string, name, default, fatal, flags, group)
File "/usr/local/bin/yt-dlp/yt_dlp/extractor/common.py", line 1346, in _search_regex
raise RegexNotFoundError(f'Unable to extract {_name}')
Are you in a region (e.g., a US state) where this site is blocking access? See #9889
|
import map file along with bootstrap-theme.css
otherwise Chrome will complain that it failed to parse the source map
Thanks!
|
Thread:Cuteanimal17177/@comment-27147599-20190103121657/@comment-36910229-20190104221901
Lol
I’m hoping she goes for as much as I’m thinking, with who she’s bred to... -,-‘
|
import me.dalsat.avro.Author;
import me.dalsat.avro.Post;
import java.time.Instant;
public class Main {

    public static void main(String[] args) {
        // Build an Author record using the Avro-generated builder API.
        var author = Author.newBuilder()
                .setUserName("jdoe")
                .setDisplayName("John Doe")
                .setRegistrationDate(Instant.now())
                .build();
        System.out.println(author);

        // Build a Post that embeds the Author record.
        var post = Post.newBuilder()
                .setTitle("Avro Example")
                .setContent("Hello post!")
                .setPublicationDate(Instant.now())
                .setAuthor(author)
                .build();
        System.out.println(post);
    }
}
|
Calvin Harris
Early life and career
2006–2008: Career beginnings and I Created Disco
Main article: I Created Disco
2008–2010: Ready for the Weekend
Main article: Ready for the Weekend (album)
2011–2013: 18 Months and international prominence
Main article: 18 Months
2013–2015: Motion and "How Deep Is Your Love"
Main article: Motion (Calvin Harris album)
2016–2017: Single releases and Funk Wav Bounces Vol. 1
2018–present: Upcoming sixth studio album
Musical style
According to Harris, his primary influences are Jamiroquai and Fatboy Slim.[117]
Endorsements
Other ventures
Philanthropy
Personal life
Discography
Main article: Calvin Harris discography
* I Created Disco (2007)
* Ready for the Weekend (2009)
* 18 Months (2012)
* Motion (2014)
* Funk Wav Bounces Vol. 1 (2017)
Concert tours
* Groove Armada: Soundboy Rock tour (2007)
* Faithless: To All New Arrivals tour (2007)
* Ready for the Weekend tour (2009–10)
* Deadmau5 and Skrillex: Unhooked tour (2010)
* Rihanna: Last Girl on Earth Tour (2010–2011)
* Rihanna: Loud Tour (2011)
* Greater Than Tour (with Tiësto), UK and Ireland (2013)
Awards and nominations
Main article: List of awards and nominations received by Calvin Harris
|
use sqlparser::{
ast::{ColumnDef, ColumnOption::ForeignKey, Ident, ObjectName, Statement::CreateTable},
dialect::GenericDialect,
parser::Parser,
};
use std::collections::HashMap;
pub fn map_relationships(sql: &str) -> HashMap<String, Vec<String>> {
    let dialect = GenericDialect;
    let ast = Parser::parse_sql(&dialect, sql).unwrap();
    let mut relationships = HashMap::new();
    ast.iter().for_each(|statement| {
        if let CreateTable {
            name: ObjectName(identifiers),
            columns,
            ..
        } = statement
        {
            // TODO read up on idents, the extraction here may be incorrect in general
            if let Some(&Ident { ref value, .. }) = identifiers.get(0) {
                let fks = get_foreign_keys_for_columns(&columns);
                relationships.insert(value.to_owned(), fks);
            }
        }
    });
    relationships
}

fn get_foreign_keys_for_columns(columns: &[ColumnDef]) -> Vec<String> {
    columns
        .iter()
        .flat_map(|column| {
            column
                .options
                .iter()
                .filter_map(|option_def| match &option_def.option {
                    ForeignKey {
                        foreign_table: ObjectName(identifiers),
                        ..
                    } => {
                        // TODO read up on idents, the extraction here may be incorrect in general
                        identifiers.get(0).map(|id| id.value.clone())
                    }
                    _ => None,
                })
        })
        .collect::<Vec<String>>()
}

#[cfg(test)]
mod test {
    use crate::map_relationships;
    use std::collections::HashMap;

    #[test]
    fn should_return_event_types_as_foreign_key() {
        let sql = r#"
            create table event_types(id serial primary key, event_type text);
            create table events (id serial primary key, event_type integer references event_types);
            create table team (id serial primary key);
            create table dev (id serial primary key, team_id integer references team);
        "#;

        let mut expected = HashMap::new();
        expected.insert("events".to_owned(), vec!["event_types".to_owned()]);
        expected.insert("team".to_owned(), vec![]);
        expected.insert("dev".to_owned(), vec!["team".to_owned()]);
        expected.insert("event_types".to_owned(), vec![]);

        let keys = map_relationships(sql);
        assert_eq!(keys, expected);
    }
}
|
Monitoring task output within a system
ABSTRACT
A computer-implemented method according to one embodiment includes simulating, for a predetermined time period, output of a first task that periodically runs within a system to create a simulated output, comparing the simulated output to actual output of the first task for the predetermined time period, and generating an alert in response to determining that the simulated output does not match the actual output for the predetermined time period.
BACKGROUND
The present invention relates to recurring tasks within a system, and more specifically, this invention relates to monitoring recurring tasks and determining errors associated with the recurring tasks.
Recurring tasks are commonly implemented within systems in order to perform one or more actions (e.g., creating snapshots for backup purposes, etc.). However, if a recurring task fails, a user is currently unaware of details of the failure, how to fix the failure, or what a current state of the recurring task should be if the recurring task did not fail.
SUMMARY
A computer-implemented method according to one embodiment includes simulating, for a predetermined time period, output of a first task that periodically runs within a system to create a simulated output, comparing the simulated output to actual output of the first task for the predetermined time period, and generating an alert in response to determining that the simulated output does not match the actual output for the predetermined time period.
According to another embodiment, a computer program product for monitoring task output within a system includes a computer readable storage medium having program instructions embodied therewith, where the computer readable storage medium is not a transitory signal per se, and where the program instructions are executable by a processor to cause the processor to perform a method including simulating, for a predetermined time period, output of a first task that periodically runs within a system to create a simulated output, utilizing the processor, comparing, utilizing the processor, the simulated output to actual output of the first task for the predetermined time period, and generating an alert in response to determining that the simulated output does not match the actual output for the predetermined time period, utilizing the processor.
A system according to another embodiment includes a processor, and logic integrated with the processor, executable by the processor, or integrated with and executable by the processor, where the logic is configured to simulate, for a predetermined time period, output of a first task that periodically runs within a system to create a simulated output, compare the simulated output to actual output of the first task for the predetermined time period, and generate an alert in response to determining that the simulated output does not match the actual output for the predetermined time period.
Other aspects and embodiments of the present invention will become apparent from the following detailed description, which, when taken in conjunction with the drawings, illustrate by way of example the principles of the invention.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 illustrates a network architecture, in accordance with one embodiment.
FIG. 2 shows a representative hardware environment that may be associated with the servers and/or clients of FIG. 1, in accordance with one embodiment.
FIG. 3 illustrates a tiered data storage system in accordance with one embodiment.
FIG. 4 illustrates a method for monitoring task output within a system, in accordance with one embodiment.
FIG. 5 illustrates an exemplary implementation of a monitoring task, in accordance with one embodiment.
DETAILED DESCRIPTION
The following description discloses several preferred embodiments of systems, methods and computer program products for monitoring task output within a system. Various embodiments provide a method to simulate task output, compare the simulated output to actual task output, and generate an alert if the simulated output does not match the actual output.
The following description is made for the purpose of illustrating the general principles of the present invention and is not meant to limit the inventive concepts claimed herein. Further, particular features described herein can be used in combination with other described features in each of the various possible combinations and permutations.
Unless otherwise specifically defined herein, all terms are to be given their broadest possible interpretation including meanings implied from the specification as well as meanings understood by those skilled in the art and/or as defined in dictionaries, treatises, etc.
It must also be noted that, as used in the specification and the appended claims, the singular forms “a,” “an” and “the” include plural referents unless otherwise specified. It will be further understood that the terms “includes” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
The following description discloses several preferred embodiments of systems, methods and computer program products for monitoring task output within a system.
In one general embodiment, a computer-implemented method includes simulating, for a predetermined time period, output of a first task that periodically runs within a system to create a simulated output, comparing the simulated output to actual output of the first task for the predetermined time period, and generating an alert in response to determining that the simulated output does not match the actual output for the predetermined time period.
In another general embodiment, a computer program product for monitoring task output within a system includes a computer readable storage medium having program instructions embodied therewith, where the computer readable storage medium is not a transitory signal per se, and where the program instructions are executable by a processor to cause the processor to perform a method including simulating, for a predetermined time period, output of a first task that periodically runs within a system to create a simulated output, utilizing the processor, comparing, utilizing the processor, the simulated output to actual output of the first task for the predetermined time period, and generating an alert in response to determining that the simulated output does not match the actual output for the predetermined time period, utilizing the processor.
In another general embodiment, a system includes a processor, and logic integrated with the processor, executable by the processor, or integrated with and executable by the processor, where the logic is configured to simulate, for a predetermined time period, output of a first task that periodically runs within a system to create a simulated output, compare the simulated output to actual output of the first task for the predetermined time period, and generate an alert in response to determining that the simulated output does not match the actual output for the predetermined time period.
FIG. 1 illustrates an architecture 100, in accordance with one embodiment. As shown in FIG. 1, a plurality of remote networks 102 are provided including a first remote network 104 and a second remote network 106. A gateway 101 may be coupled between the remote networks 102 and a proximate network 108. In the context of the present architecture 100, the networks 104, 106 may each take any form including, but not limited to a LAN, a WAN such as the Internet, public switched telephone network (PSTN), internal telephone network, etc.
In use, the gateway 101 serves as an entrance point from the remote networks 102 to the proximate network 108. As such, the gateway 101 may function as a router, which is capable of directing a given packet of data that arrives at the gateway 101, and a switch, which furnishes the actual path in and out of the gateway 101 for a given packet.
Further included is at least one data server 114 coupled to the proximate network 108, and which is accessible from the remote networks 102 via the gateway 101. It should be noted that the data server(s) 114 may include any type of computing device/groupware. Coupled to each data server 114 is a plurality of user devices 116. User devices 116 may also be connected directly through one of the networks 104, 106, 108. Such user devices 116 may include a desktop computer, lap-top computer, hand-held computer, printer or any other type of logic. It should be noted that a user device 111 may also be directly coupled to any of the networks, in one embodiment.
A peripheral 120 or series of peripherals 120, e.g., facsimile machines, printers, networked and/or local storage units or systems, etc., may be coupled to one or more of the networks 104, 106, 108. It should be noted that databases and/or additional components may be utilized with, or integrated into, any type of network element coupled to the networks 104, 106, 108. In the context of the present description, a network element may refer to any component of a network.
According to some approaches, methods and systems described herein may be implemented with and/or on virtual systems and/or systems which emulate one or more other systems, such as a UNIX system which emulates an IBM z/OS environment, a UNIX system which virtually hosts a MICROSOFT WINDOWS environment, a MICROSOFT WINDOWS system which emulates an IBM z/OS environment, etc. This virtualization and/or emulation may be enhanced through the use of VMWARE software, in some embodiments.
In more approaches, one or more networks 104, 106, 108, may represent a cluster of systems commonly referred to as a “cloud.” In cloud computing, shared resources, such as processing power, peripherals, software, data, servers, etc., are provided to any system in the cloud in an on-demand relationship, thereby allowing access and distribution of services across many computing systems. Cloud computing typically involves an Internet connection between the systems operating in the cloud, but other techniques of connecting the systems may also be used.
FIG. 2 shows a representative hardware environment associated with a user device 116 and/or server 114 of FIG. 1, in accordance with one embodiment. Such figure illustrates a typical hardware configuration of a workstation having a central processing unit 210, such as a microprocessor, and a number of other units interconnected via a system bus 212.
The workstation shown in FIG. 2 includes a Random Access Memory (RAM) 214, Read Only Memory (ROM) 216, an I/O adapter 218 for connecting peripheral devices such as disk storage units 220 to the bus 212, a user interface adapter 222 for connecting a keyboard 224, a mouse 226, a speaker 228, a microphone 232, and/or other user interface devices such as a touch screen and a digital camera (not shown) to the bus 212, communication adapter 234 for connecting the workstation to a communication network 235 (e.g., a data processing network) and a display adapter 236 for connecting the bus 212 to a display device 238.
The workstation may have resident thereon an operating system such as the Microsoft Windows® Operating System (OS), a MAC OS, a UNIX OS, etc. It will be appreciated that a preferred embodiment may also be implemented on platforms and operating systems other than those mentioned. A preferred embodiment may be written using XML, C, and/or C++ language, or other programming languages, along with an object oriented programming methodology. Object oriented programming (OOP), which has become increasingly used to develop complex applications, may be used.
Now referring to FIG. 3, a storage system 300 is shown according to one embodiment. Note that some of the elements shown in FIG. 3 may be implemented as hardware and/or software, according to various embodiments. The storage system 300 may include a storage system manager 312 for communicating with a plurality of media on at least one higher storage tier 302 and at least one lower storage tier 306. The higher storage tier(s) 302 preferably may include one or more random access and/or direct access media 304, such as hard disks in hard disk drives (HDDs), nonvolatile memory (NVM), solid state memory in solid state drives (SSDs), flash memory, SSD arrays, flash memory arrays, etc., and/or others noted herein or known in the art. The lower storage tier(s) 306 may preferably include one or more lower performing storage media 308, including sequential access media such as magnetic tape in tape drives and/or optical media, slower accessing HDDs, slower accessing SSDs, etc., and/or others noted herein or known in the art. One or more additional storage tiers 316 may include any combination of storage memory media as desired by a designer of the system 300. Also, any of the higher storage tiers 302 and/or the lower storage tiers 306 may include some combination of storage devices and/or storage media.
The storage system manager 312 may communicate with the storage media 304, 308 on the higher storage tier(s) 302 and lower storage tier(s) 306 through a network 310, such as a storage area network (SAN), as shown in FIG. 3, or some other suitable network type. The storage system manager 312 may also communicate with one or more host systems (not shown) through a host interface 314, which may or may not be a part of the storage system manager 312. The storage system manager 312 and/or any other component of the storage system 300 may be implemented in hardware and/or software, and may make use of a processor (not shown) for executing commands of a type known in the art, such as a central processing unit (CPU), a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), etc. Of course, any arrangement of a storage system may be used, as will be apparent to those of skill in the art upon reading the present description.
In more embodiments, the storage system 300 may include any number of data storage tiers, and may include the same or different storage memory media within each storage tier. For example, each data storage tier may include the same type of storage memory media, such as HDDs, SSDs, sequential access media (tape in tape drives, optical disk in optical disk drives, etc.), direct access media (CD-ROM, DVD-ROM, etc.), or any combination of media storage types. In one such configuration, a higher storage tier 302, may include a majority of SSD storage media for storing data in a higher performing storage environment, and remaining storage tiers, including lower storage tier 306 and additional storage tiers 316 may include any combination of SSDs, HDDs, tape drives, etc., for storing data in a lower performing storage environment. In this way, more frequently accessed data, data having a higher priority, data needing to be accessed more quickly, etc., may be stored to the higher storage tier 302, while data not having one of these attributes may be stored to the additional storage tiers 316, including lower storage tier 306. Of course, one of skill in the art, upon reading the present descriptions, may devise many other combinations of storage media types to implement into different storage schemes, according to the embodiments presented herein.
According to some embodiments, the storage system (such as 300) may include logic configured to receive a request to open a data set, logic configured to determine if the requested data set is stored to a lower storage tier 306 of a tiered data storage system 300 in multiple associated portions, logic configured to move each associated portion of the requested data set to a higher storage tier 302 of the tiered data storage system 300, and logic configured to assemble the requested data set on the higher storage tier 302 of the tiered data storage system 300 from the associated portions.
Of course, this logic may be implemented as a method on any device and/or system or as a computer program product, according to various embodiments.
Now referring to FIG. 4, a flowchart of a method 400 is shown according to one embodiment. The method 400 may be performed in accordance with the present invention in any of the environments depicted in FIGS. 1-3 and 5, among others, in various embodiments. Of course, more or less operations than those specifically described in FIG. 4 may be included in method 400, as would be understood by one of skill in the art upon reading the present descriptions.
Each of the steps of the method 400 may be performed by any suitable component of the operating environment. For example, in various embodiments, the method 400 may be partially or entirely performed by one or more servers, computers, or some other device having one or more processors therein. The processor, e.g., processing circuit(s), chip(s), and/or module(s) implemented in hardware and/or software, and preferably having at least one hardware component may be utilized in any device to perform one or more steps of the method 400. Illustrative processors include, but are not limited to, a central processing unit (CPU), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), etc., combinations thereof, or any other suitable computing device known in the art.
As shown in FIG. 4, method 400 may initiate with operation 402, where output of a first task that periodically runs within a system is simulated for a predetermined time period to create a simulated output. In one embodiment, the system may include a single computing device, or a plurality of computing devices. For example, the system may include a single node of a networked plurality of nodes. In another example, the system may include a cloud computing environment. In yet another example, the system may include a distributed storage environment. In still another example, the system may include a tiered storage environment.
Additionally, in one embodiment, the first task may include an application running within the system that analyzes data within the system and outputs resulting data, based on the analysis. In another embodiment, the first task may perform an analysis of data within the system, may perform maintenance of data within the system, may create a backup of data within the system, etc. In yet another embodiment, the first task may run according to a predetermined schedule. For example, the first task may run cyclically in response to a first schedule.
Further, in one embodiment, the first task may create a snapshot of data within the system. For example, the first task may include an application that analyzes data within the system and creates snapshot data, based on the analysis. The system may include one or more storage nodes of a distributed storage environment, and the first task may create a backup of data stored on the one or more storage nodes.
Further still, in one embodiment, the first task may store the snapshot at a location within the system. In another embodiment, the output may be simulated by a second task separate from the first task. In yet another embodiment, the second task may run according to a second schedule different from the first schedule. For example, a period of the second schedule may be longer than a period of the first schedule. In another example, the first task may be run daily, and the second task may be run weekly.
Also, in one embodiment, the second task may run in response to one or more criteria being met. For example, the second task may run in response to the creation of a predetermined number of instances of output (e.g., files, etc.) by the first task within the system. In another embodiment, the second task may analyze the first task in order to determine one or more details of the output of the first task.
For example, the second task may query the first task to obtain one or more parameters of the first task. The one or more parameters may be sent as metadata from the first task to the second task. The one or more parameters may include a naming scheme used by the first task to name its output, a location used by the first task to store its output, one or more size limitations followed by the first task when creating its output, a schedule by which the first task creates instances of output, etc.
In addition, in one embodiment, the second task may analyze historical output of the first task in order to determine the one or more details of the historical output of the first task. For example, the one or more details of the historical output of the first task may include a naming scheme used by the first task to name the output of the first task. For instance, the naming scheme may include a method by which the first task has named each of a plurality of historical instances of output by the first task.
In another example, the one or more details of the historical output of the first task may include a storage location used by the first task to store the output of the first task. For instance, the storage location may include a location used by the first task to store historical instances of output by the first task. In yet another example, the one or more details of the historical output of the first task may include a size of the historical output of the first task. For instance, the size may include an average size of a predetermined number of historical instances of output by the first task.
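As a concrete illustration of deriving these details, the short Python sketch below infers a naming pattern, a storage location, and an average size from historical instances. It is a minimal, hypothetical sketch; the timestamp-style naming convention and the tuple layout of historical are assumptions for illustration, not part of the embodiment.

import re
from statistics import mean

def derive_details(historical):
    """Infer output details from past instances.

    `historical` is assumed to be a list of (name, location, size_bytes)
    tuples collected from earlier runs of the first task (hypothetical layout).
    """
    names = [name for name, _, _ in historical]
    # If every historical name embeds a timestamp such as 20240101-0300,
    # generalize it into a format string that a simulator can reuse.
    if names and all(re.search(r"\d{8}-\d{4}", n) for n in names):
        name_pattern = re.sub(r"\d{8}-\d{4}", "{ts:%Y%m%d-%H%M}", names[0])
    else:
        name_pattern = None  # no recognizable pattern; query the task instead
    location = historical[0][1]  # location used for past output
    avg_size = int(mean(size for _, _, size in historical))  # average historical size
    return name_pattern, location, avg_size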
Furthermore, in one embodiment, the second task may simulate the output of the first task for the predetermined time period, based on the determined one or more details of the historical output of the first task. For example, the output may include a plurality of instances of output by the first task. In another example, the second task may determine a number of instances of output to simulate for the first task, based on a schedule by which the first task creates instances of output. For instance, the second task may determine a number of instances of output that should be created by the first task during the predetermined time period, based on a pattern derived from the schedule. The second task may also determine a specific date and time that each instance of output should be created by the first task during the predetermined time period, based on a pattern derived from the schedule.
Further still, in one example, for each of the determined number of instances of output to simulate for the first task, the second task may determine a timestamp for the instance of the simulated output, based on a pattern derived from the schedule by which the first task creates instances of output. In another example, for each of the determined number of instances of output to simulate for the first task, the second task may determine a name for the instance of the simulated output, based on a pattern derived from the naming scheme used by the first task to name its output and/or the method by which the first task has named each of a plurality of historical instances of output by the first task.
For instance, the second task may determine that historical instances of output by the first task all include a date and time that the instance of output was created. In response, the second task may determine a date and time that the instance of the simulated output should have been created by the first task, and may determine the name for the instance of the simulated output, based on the determined date and time. In another example, the second task may determine a pattern where historical instances of output by the first task all include a name of the system where the instance of output was created. In response, the second task may determine and add the name of the system to the name for the instance of the simulated output, according to the pattern.
Also, in one embodiment, for each of the determined number of instances of output to simulate for the first task, the second task may determine a storage location for the instance of the simulated output, based on a pattern derived from the location used by the first task to store its output and/or the location used by the first task to store historical instances of output by the first task. In another embodiment, for each of the determined number of instances of output to simulate for the first task, the second task may determine a size for the instance of the simulated output, based on a pattern derived from one or more size limitations followed by the first task when creating its output and/or an average size of a predetermined number of historical instances of output by the first task.
Additionally, in one embodiment, each instance of the simulated output may include only details including a name, storage location, and size determined for the instance, without having to create the snapshot itself (as done by the first task). For example, each instance of the simulated output may include only metadata (e.g., timestamp, name, storage location, size, etc.) associated with the output of the first task, and not the output itself. For instance, if the output of the first task is a system snapshot, the simulated output may include only metadata (e.g., timestamp, name, storage location, size, etc.) associated with the snapshot, and not the snapshot itself.
In this way, the second task may simulate the output of the first task without disturbing the running of the first task or interrupting a state of the system.
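To make the simulation step concrete, the following Python sketch builds metadata-only stand-ins for the instances the first task should have produced during the time period. It is a minimal sketch under the assumptions above (a fixed period and a timestamp-based name pattern such as the one derived earlier); it is an illustration, not the claimed implementation.

from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class SimulatedInstance:
    """Metadata only: no snapshot data is ever created by the simulation."""
    name: str
    timestamp: datetime
    location: str
    size_bytes: int

def simulate_output(start, end, period, name_pattern, location, avg_size):
    """Derive the instances the first task should have produced in [start, end)."""
    expected, ts = [], start
    while ts < end:
        expected.append(SimulatedInstance(
            name=name_pattern.format(ts=ts),  # follows the observed naming pattern
            timestamp=ts,                     # when this instance should exist
            location=location,                # where the first task stores output
            size_bytes=avg_size,              # expected size = historical average
        ))
        ts += period  # next run per the first task's schedule
    return expected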
Further, method 400 may proceed with operation 404, where the simulated output is compared to actual output of the first task for the predetermined time period. In one embodiment, the actual output of the first task may be analyzed over the predetermined time period in order to determine details of each instance of the actual output over the predetermined time period. For example, metadata such as a name, storage location, and size may be determined for each instance of the actual output of the first task.
Further still, in one embodiment, a number of instances of the simulated output may be compared to a number of instances of the actual output for the predetermined time period. For example, a number of instances of the simulated output within a predetermined storage location may be compared to a number of instances of the actual output within the predetermined storage location for the predetermined time period.
Also, in one embodiment, the details (e.g., metadata, etc.) of each instance of the simulated output may be compared to the details (e.g., metadata, etc.) of each instance of the actual output for the predetermined time period. For example, a name of each instance of the simulated output may be compared to a name of each instance of the actual output for the predetermined time period. In another example, a size of each instance of the simulated output may be compared to a size of each instance of the actual output for the predetermined time period. In yet another example, a storage location of each instance of the simulated output may be compared to a storage location of each instance of the actual output for the predetermined time period.
In addition, method 400 may proceed with operation 406, where an alert is generated in response to determining that the simulated output does not match the actual output for the predetermined time period. In one embodiment, it may be determined that the simulated output does not match the actual output in response to determining that a number of instances of the simulated output does not equal a number of instances of the actual output for the predetermined time period. In another embodiment, it may be determined that the simulated output does not match the actual output in response to determining that one or more details (e.g., metadata, etc.) of each instance of the simulated output do not match details (e.g., metadata, etc.) of each instance of the actual output for the predetermined time period.
For example, it may be determined that the simulated output does not match the actual output in response to determining that each name of all instances of the simulated output do not have a corresponding name within all instances of the actual output for the predetermined time period. For instance, one or more names may be determined to be different, missing, etc. In another example, it may be determined that the simulated output does not match the actual output in response to determining that a storage location of each instance of the simulated output does not match a storage location of a corresponding instance of the actual output for the predetermined time period. For instance, one or more instances of output may be determined to be at a different location, may be determined to be missing, etc.
In yet another example, it may be determined that the simulated output does not match the actual output in response to determining that a size of each instance of the simulated output does not match a size of a corresponding instance of the actual output for the predetermined time period. A size of an instance of the simulated output may match a size of a corresponding instance of the actual output if the size of the instance of the simulated output is within a predetermined percentage of the size of the corresponding instance of the actual output.
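A comparison along these lines can be sketched as follows. This assumes actual is a mapping from instance name to (location, size_bytes) gathered from the system, and the 10% size tolerance is a hypothetical stand-in for the "predetermined percentage"; both are assumptions for illustration.

def compare(simulated, actual, size_tolerance=0.10):
    """Return a list of human-readable mismatches between simulated and actual output."""
    problems = []
    if len(simulated) != len(actual):
        problems.append(f"expected {len(simulated)} instances, found {len(actual)}")
    for inst in simulated:
        if inst.name not in actual:
            problems.append(f"missing instance: {inst.name}")  # name has no counterpart
            continue
        location, size = actual[inst.name]
        if location != inst.location:
            problems.append(f"{inst.name}: stored at {location}, expected {inst.location}")
        if abs(size - inst.size_bytes) > size_tolerance * inst.size_bytes:
            problems.append(f"{inst.name}: size {size} outside tolerance of {inst.size_bytes}")
    return problems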
Furthermore, in one embodiment, the alert may include a message to one or more users (e.g., an email, text message, pop-up image, etc.). For example, the alert may list the predetermined time period, and may describe differences between the simulated output and the actual output for the predetermined time period. In another embodiment, the first task and/or the system may be automatically restarted in response to determining that the simulated output does not match the actual output.
Further still, in one embodiment, one or more updates and/or patches may be automatically searched for/applied to the first task in response to determining that the simulated output does not match the actual output. In another embodiment, anti-malware software may be automatically applied to the system and/or updated in response to determining that the simulated output does not match the actual output. In yet another embodiment, a schedule of the first task may be automatically adjusted in response to determining that the simulated output does not match the actual output. For example, in response to determining that one or more instances of the simulated output are missing from the actual output, a schedule of the first task may be adjusted so that the first task runs more frequently. In this way, the first task may be run with an increased frequency in order to compensate for missing output from the first task.
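Tying the pieces together, a monitoring loop in this style might look like the sketch below. The alert and remediation hooks are passed in as callables because the embodiment leaves the specific actions open (message, restart, patch, reschedule); this is illustrative glue code, not the patented method itself.

def monitor(simulated, actual, alert_fn, remediate_fn):
    """Alert and remediate when simulated output does not match actual output."""
    problems = compare(simulated, actual)
    if problems:
        # The alert describes each difference found for the time period.
        alert_fn("task output mismatch: " + "; ".join(problems))
        remediate_fn()  # e.g., restart the task or increase its run frequency

# Example wiring: print the alert and do nothing on remediation.
# monitor(expected, found, print, lambda: None)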
In this way, the second task may monitor the first task to ensure that the first task is running and producing output as intended. If the second task determines that the first task is not running as intended, the second task may notify one or more users and may dynamically perform one or more actions to correct the first task.
FIG. 5 illustrates an exemplary implementation 500 of a monitoring task 502, according to one exemplary embodiment. As shown, a snapshot task 504 running within a system creates a first snapshot 506A according to a predetermined schedule with a first name 516A and a first size 518A at a first creation time 508A and stores the first snapshot 506A at a first storage location 514A. The first name 516A, first size 518A, first creation time 508A, and first storage location 514A are stored as metadata within the first snapshot 506A along with the first snapshot data 520A created by the snapshot task 504.
Further, in one embodiment, the monitoring task 502 may request information from the snapshot task 504, such as a naming scheme for the snapshot task 504, the storage location where the snapshot task 504 stores snapshots, one or more size limitations followed by the snapshot task 504 when creating snapshots, a schedule by which the snapshot task 504 creates snapshots, etc. In another embodiment, the monitoring task 502 may also identify and analyze historical snapshots created by the snapshot task 504 (e.g., snapshots created by the snapshot task 504 prior to the first snapshot 506A, etc.) in order to determine details (e.g., metadata, etc.) of the snapshots.
Further still, the monitoring task 502 may create a first simulated snapshot 510A for a first time period 522A, based on the information requested from the snapshot task 504 and/or patterns derived from details of the snapshots obtained by analyzing historical snapshots created by the snapshot task 504. The first simulated snapshot 510A does not include snapshot data, and includes metadata such as a first simulated creation time 512A, a first simulated size 524A, a first simulated name 526A, and a first simulated storage location 528A.
Also, after creating the first simulated snapshot 510A, the monitoring task 502 may compare details of all simulated snapshots created during the first time period 522A to details of all snapshots created during the first time period 522A. For example, the monitoring task 502 may compare details of the first simulated snapshot 510A (e.g., metadata including the first simulated name 526A, the first simulated size 524A, the first simulated creation time 512A, and the first simulated storage location 528A) to details of the first snapshot 506A (e.g., metadata including the first name 516A, first size 518A, first creation time 508A, and first storage location 514A).
Upon determining that the details of all simulated snapshots created during the first time period 522A match (or are within a predetermined range of) details of all snapshots created during the first time period 522A, the monitoring task 502 may confirm that the snapshot task 504 is operating correctly for the first time period 522A.
Additionally, at a time after the first time period 522A, the snapshot task 504 creates a second snapshot 506B according to the predetermined schedule with a second name 516B and a second size 518B at a second creation time 508B and stores the second snapshot 506B at a second storage location 514B. The second name 516B, second size 518B, second creation time 508B, and second storage location 514B are stored as metadata within the second snapshot 506B along with the second snapshot data 520B created by the snapshot task 504.
Further still, the monitoring task 502 may create a second simulated snapshot 510B for a second time period 522B, based on the information requested from the snapshot task 504 and/or patterns derived from details of the snapshots obtained by analyzing historical snapshots created by the snapshot task 504. The second simulated snapshot 510B does not include snapshot data, and includes metadata such as a second simulated creation time 512B, a second simulated size 524B, a second simulated name 526B, and a second simulated storage location 528B.
Also, after creating the second simulated snapshot 510B, the monitoring task 502 may compare details of all simulated snapshots created during the second time period 522B to details of all snapshots created during the second time period 522B. For example, the monitoring task 502 may compare details of the first simulated snapshot 510A (e.g., metadata including the first simulated name 526A, the first simulated size 524A, the first simulated creation time 512A, and the first simulated storage location 528A) to details of the first snapshot 506A (e.g., metadata including the first name 516A, first size 518A, first creation time 508A, and first storage location 514A). Additionally, the monitoring task 502 may compare details of the second simulated snapshot 510B (e.g., metadata including the second simulated name 526B, the second simulated size 524B, the second simulated creation time 512B, and the second simulated storage location 528B) to details of the second snapshot 506B (e.g., metadata including the second name 516B, second size 518B, second creation time 508B, and second storage location 514B).
Upon determining that the details of all simulated snapshots created during the second time period 522B match (or are within a predetermined range of) details of all snapshots created during the second time period 522B, the monitoring task 502 may confirm that the snapshot task 504 is operating correctly for the second time period 522B.
Additionally, at a time after the second time period 522B, due to a failure of the snapshot task 504 (or unintentional and/or malicious activity within the system), the snapshot task 504 fails to create a snapshot according to the predetermined schedule at a third time. As a result, during a third time period 522C, only the first snapshot 506A and the second snapshot 506B have been created by the snapshot task 504.
Further still, the monitoring task 502 may create a third simulated snapshot 510C for the third time period 522C, based on the information requested from the snapshot task 504 and/or patterns derived from details of the snapshots obtained by analyzing historical snapshots created by the snapshot task 504. The third simulated snapshot 510C does not include snapshot data, and includes metadata such as a third simulated creation time 512C, a third simulated size 524C, a third simulated name 526C, and a third simulated storage location 528C.
Also, after creating the third simulated snapshot 510C, the monitoring task 502 may compare details of all simulated snapshots created during the third time period 522C to details of all snapshots created during the third time period 522C. For example, the monitoring task 502 may compare details of the first simulated snapshot 510A (e.g., metadata including the first simulated name 526A, the first simulated size 524A, the first simulated creation time 512A, and the first simulated storage location 528A) to details of the first snapshot 506A (e.g., metadata including the first name 516A, first size 518A, first creation time 508A, and first storage location 514A).
Additionally, the monitoring task 502 may compare details of the second simulated snapshot 510B (e.g., metadata including the second simulated name 526B, the second simulated size 524B, the second simulated creation time 512B, and the second simulated storage location 528B) to details of the second snapshot 506B (e.g., metadata including the second name 516B, second size 518B, second creation time 508B, and second storage location 514B).
However, during the comparison, the monitoring task 502 may determine that a third snapshot having details matching the third simulated snapshot 510C does not exist for the third time period 522C. In response to this determination, the monitoring task 502 may send an alert that includes details of the third simulated snapshot 510C. In response to this determination, the monitoring task 502 may also patch the snapshot task 504, update anti-malware software within the system, restart the snapshot task 504, re-schedule the snapshot task 504, etc.
In this way, the monitoring task 502 may detect issues with the snapshot task 504 and dynamically report/fix those issues.
Periodic Task Integrity Analyzer
Periodic tasks exist within a system and may periodically perform an operation within the system. As with any other task, these periodic tasks may fail for numerous reasons, which may cause irregularities in future operations by the tasks. A user may not know what failed or what the final (current) state of the task should be.
In response, the state of the job may be calculated as it should be if the job had run without any interruption or failure, where the state of the job is of a type that can be checked without running the job and changing the state of the machine (for instance, checking that objects that should have been created by a task were in fact created).
In one embodiment, an image may be simulated (without disturbing a run of a task), where the image includes details of what the state should be (for example, all the objects that should have been created by the task) at a predetermined time if there were no interruptions and the task ran successfully.
A mechanism may run periodically and may calculate, for each periodic task, when it should have run. The task may be periodic or progressive, so the run times may be calculated based on a predetermined schedule for the task. The mechanism may also create an image of the correct state in which a machine should currently be (e.g., by running a “what if” mechanism, etc.).
The simulation may be done by running a mock task which may show the result of the original calculations. For example, a snapshot scheduler creates a snapshot in a volume in a periodic manner (every hour, for example). In one embodiment, after a few runs, the machine is down for a certain time period and the snapshot scheduler fails to create snapshots for that time period.
After the machine is brought back up from its downtime, a “what if” mechanism may start. The “what if” mechanism may also run periodically, but may run according to a different schedule than the snapshot scheduler. Based on data retrieved from the snapshot scheduler, the “what if” mechanism may know when the snapshot scheduler should have run, and may simulate the snapshots created on the machine by creating an image with the snapshot names that should have been created. These simulated snapshots may be compared to the snapshots actually created by the snapshot scheduler. If the “what if” mechanism finds that snapshots are missing, it may send a message (e.g., an alert) to the user.
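The hourly-snapshot example can be worked through in a few lines of Python. The snapshot names and the downtime window below are hypothetical, but the mechanics match the description: compute the names that should exist, compare them to what exists, and report the gap.

from datetime import datetime, timedelta

# An hourly snapshot scheduler; the machine was down from 02:00 to 04:00.
start = datetime(2024, 1, 1, 0, 0)
expected = [f"snap-{start + timedelta(hours=h):%Y%m%d-%H%M}" for h in range(6)]
actual = {"snap-20240101-0000", "snap-20240101-0100",
          "snap-20240101-0400", "snap-20240101-0500"}
missing = [name for name in expected if name not in actual]
print("alert: missing snapshots ->", missing)
# -> ['snap-20240101-0200', 'snap-20240101-0300']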
These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein includes an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which includes one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions.
Moreover, a system according to various embodiments may include a processor and logic integrated with and/or executable by the processor, the logic being configured to perform one or more of the process steps recited herein. By integrated with, what is meant is that the processor has logic embedded therewith as hardware logic, such as an application specific integrated circuit (ASIC), a FPGA, etc. By executable by the processor, what is meant is that the logic is hardware logic; software logic such as firmware, part of an operating system, part of an application program; etc., or some combination of hardware and software logic that is accessible by the processor and configured to cause the processor to perform some functionality upon execution by the processor. Software logic may be stored on local and/or remote memory of any memory type, as known in the art. Any processor known in the art may be used, such as a software processor module and/or a hardware processor such as an ASIC, a FPGA, a central processing unit (CPU), an integrated circuit (IC), a graphics processing unit (GPU), etc.
It will be clear that the various features of the foregoing systems and/or methodologies may be combined in any way, creating a plurality of combinations from the descriptions presented above.
It will be further appreciated that embodiments of the present invention may be provided in the form of a service deployed on behalf of a customer to offer service on demand.
While various embodiments have been described above, it should be understood that they have been presented by way of example only, and not limitation. Thus, the breadth and scope of a preferred embodiment should not be limited by any of the above-described exemplary embodiments, but should be defined only in accordance with the following claims and their equivalents.
What is claimed is:
1. A computer-implemented method, comprising: simulating, for a predetermined time period, output of a first task that periodically runs within a system to create a simulated output; comparing the simulated output to actual output of the first task for the predetermined time period; and generating an alert in response to determining that the simulated output does not match the actual output for the predetermined time period.
2. The computer-implemented method of claim 1, wherein the output is simulated by a second task separate from the first task that runs according to a second schedule different from a first schedule of the first task.
3. The computer-implemented method of claim 1, further comprising analyzing the first task in order to determine one or more details of the output of the first task.
4. The computer-implemented method of claim 1, further comprising querying the first task to obtain one or more parameters of the first task, the one or more parameters including a naming scheme used by the first task to name its output, a location used by the first task to store its output, one or more size limitations followed by the first task when creating its output, and a schedule by which the first task creates instances of output.
5. The computer-implemented method of claim 1, further comprising analyzing historical output of the first task in order to determine one or more details of the historical output of the first task, where the one or more details of the historical output of the first task include: a naming scheme used by the first task to name the output of the first task, including a method by which the first task has named each of a plurality of historical instances of output by the first task, a storage location used by the first task to store the output of the first task, including a location used by the first task to store historical instances of output by the first task, a size of the historical output of the first task, including an average size of a predetermined number of historical instances of output by the first task.
6. The computer-implemented method of claim 1, further comprising simulating the output of the first task for the predetermined time period, based on one or more determined details of historical output of the first task.
7. The computer-implemented method of claim 1, further comprising: determining a number of instances of output to simulate for the first task, based on a schedule by which the first task creates instances of output; determining a specific date and time that each instance of output should be created by the first task during the predetermined time period, based on the schedule; and for each of the instances of output to simulate for the first task: determining a timestamp for the instance of the simulated output, based on a schedule by which the first task creates instances of output, determining a name for the instance of the simulated output, based on a method by which the first task has named each of a plurality of historical instances of output by the first task, determining a storage location for the instance of the simulated output, based on a location used by the first task to store historical instances of output by the first task, and determining a size for the instance of the simulated output, based on an average size of a predetermined number of historical instances of output by the first task.
8. The computer-implemented method of claim 1, wherein the output of the first task is a system snapshot, and the simulated output includes only metadata associated with the system snapshot, and not the system snapshot itself.
9. The computer-implemented method of claim 1, further comprising comparing a number of instances of the simulated output to a number of instances of the actual output within a predetermined storage location for the predetermined time period.
10. The computer-implemented method of claim 1, further comprising comparing metadata of each instance of the simulated output to metadata of each instance of the actual output for the predetermined time period.
11. The computer-implemented method of claim 1, wherein it is determined that the simulated output does not match the actual output in response to determining that a number of instances of the simulated output does not equal a number of instances of the actual output for the predetermined time period.
12. The computer-implemented method of claim 1, wherein it is determined that the simulated output does not match the actual output in response to determining that metadata of each instance of the simulated output does not match metadata of each instance of the actual output for the predetermined time period.
13. The computer-implemented method of claim 1, wherein it is determined that the simulated output does not match the actual output in response to determining that: each name of all instances of the simulated output do not have a corresponding name within all instances of the actual output for the predetermined time period, a storage location of each instance of the simulated output does not match a storage location of a corresponding instance of the actual output for the predetermined time period, or a size of each instance of the simulated output does not match a size of a corresponding instance of the actual output for the predetermined time period.
14. The computer-implemented method of claim 1, wherein the alert lists the predetermined time period, and describes differences between the simulated output and the actual output for the predetermined time period.
15. The computer-implemented method of claim 1, further comprising automatically restarting the first task in response to determining that the simulated output does not match the actual output.
16. The computer-implemented method of claim 1, further comprising automatically searching for and applying one or more updates to the first task in response to determining that the simulated output does not match the actual output.
17. The computer-implemented method of claim 1, further comprising automatically updating anti-malware software within the system in response to determining that the simulated output does not match the actual output.
18. The computer-implemented method of claim 1, further comprising automatically adjusting a schedule of the first task in response to determining that the simulated output does not match the actual output.
19. A computer program product for monitoring task output within a system, the computer program product comprising a computer readable storage medium having program instructions embodied therewith, wherein the computer readable storage medium is not a transitory signal per se, the program instructions executable by a processor to cause the processor to perform a method comprising: simulating, for a predetermined time period, output of a first task that periodically runs within a system to create a simulated output, utilizing the processor; comparing, utilizing the processor, the simulated output to actual output of the first task for the predetermined time period; and generating an alert in response to determining that the simulated output does not match the actual output for the predetermined time period, utilizing the processor.
20. A system, comprising: a processor; and logic integrated with the processor, executable by the processor, or integrated with and executable by the processor, the logic being configured to: simulate, for a predetermined time period, output of a first task that periodically runs within a system to create a simulated output; compare the simulated output to actual output of the first task for the predetermined time period; and generate an alert in response to determining that the simulated output does not match the actual output for the predetermined time period.
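To make the claimed flow concrete, the following is a minimal, hypothetical sketch of the simulate/compare/alert loop recited in claims 1 and 7–14. Every identifier below is illustrative only and is not drawn from the specification:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta


@dataclass(frozen=True)
class OutputRecord:
    """Metadata describing one instance of task output (cf. claims 7-8)."""
    name: str
    timestamp: datetime
    location: str
    size: int


def simulate_outputs(start, end, every, name_fmt, location, avg_size):
    """Project the instances a scheduled task should create during the
    predetermined time period, deriving the name, location, and size of
    each instance from the task's history (claim 7)."""
    records, t = [], start
    while t <= end:
        records.append(OutputRecord(name_fmt.format(t=t), t, location, avg_size))
        t += every
    return records


def outputs_match(simulated, actual):
    """A count mismatch or any metadata mismatch means failure (claims 9-13)."""
    if len(simulated) != len(actual):
        return False
    actual_by_name = {a.name: a for a in actual}
    return all(
        s.name in actual_by_name
        and actual_by_name[s.name].location == s.location
        and actual_by_name[s.name].size == s.size
        for s in simulated
    )


def monitor(simulated, actual, alert):
    """Claim 1's final step: alert, with details, on any mismatch (claim 14)."""
    if not outputs_match(simulated, actual):
        alert(f"Expected {len(simulated)} output instance(s), found {len(actual)}; "
              f"metadata differences detected for the monitored period.")
```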
|
Tennessee facts for kids
Tennessee
State of Tennessee
Nickname(s):
The Volunteer State
Motto(s):
Agriculture and Commerce
Anthem: Nine songs
Map of the United States with Tennessee highlighted
Country United States
Before statehood Southwest Territory
Admitted to the Union June 1, 1796 (16th)
Capital
(and largest city)
Nashville
Largest metro Greater Nashville (combined and metro)
Memphis (urban)
Legislature General Assembly
• Upper house Senate
• Lower house House of Representatives
Area
• Total 42,143 sq mi (109,247 km2)
• Land 41,217 sq mi (106,846 km2)
• Water 926 sq mi (2,401 km2) 2.2%
Area rank 36th
Dimensions
• Length 440 mi (710 km)
• Width 120 mi (195 km)
Elevation
900 ft (270 m)
Highest elevation 6,643 ft (2,025 m)
Lowest elevation 178 ft (54 m)
Population
(2020)
• Total 6,916,897
• Rank 16th
• Density 167.8/sq mi (64.8/km2)
• Density rank 20th
• Median household income
$53,320
• Income rank
42nd
Demonym(s) Tennessean
Big Bender (archaic)
Volunteer (historical significance)
Language
• Official language English
Time zones
East Tennessee UTC−05:00 (Eastern)
• Summer (DST) UTC−04:00 (EDT)
Middle and West UTC−06:00 (Central)
• Summer (DST) UTC−05:00 (CDT)
USPS abbreviation
TN
ISO 3166 code US-TN
Trad. abbreviation Tenn.
Latitude 34°59′ N to 36°41′ N
Longitude 81°39′ W to 90°19′ W
Tennessee state symbols
Living insignia
Amphibian Tennessee cave salamander
Bird Mockingbird
Bobwhite quail
Butterfly Zebra swallowtail
Fish Channel catfish
Smallmouth bass
Flower Iris
Passion flower
Tennessee echinacea
Insect Firefly
Lady beetle
Honey bee
Mammal Tennessee Walking Horse
Raccoon
Reptile Eastern box turtle
Tree Tulip poplar
Eastern red cedar
Inanimate insignia
Beverage Milk
Dance Square dance
Firearm Barrett M82
Food Tomato
Fossil Pterotrigonia (Scabrotrigonia) thoracica
Gemstone Tennessee River pearl
Mineral Agate
Poem "Oh Tennessee, My Tennessee" by William Lawrence
Rock Limestone
Slogan "Tennessee—America at its best"
Tartan Tennessee State Tartan
State route marker
State quarter (released in 2002)
Tennessee, officially the State of Tennessee, is a state in the Southeastern region of the United States. Tennessee is the 36th largest by area and the 16th most populous of the 50 states. It is bordered by Kentucky to the north, Virginia to the northeast, North Carolina to the east, Georgia, Alabama, and Mississippi to the south, Arkansas to the southwest, and Missouri to the northwest. Tennessee is geographically, culturally, and legally divided into three Grand Divisions of East, Middle, and West Tennessee. Nashville is the state's capital and largest city, and anchors its largest metropolitan area. Tennessee's population as of the 2020 United States census is approximately 6.9 million.
Tennessee is rooted in the Watauga Association, a 1772 frontier pact generally regarded as the first constitutional government west of the Appalachian Mountains. Its name is derived from "Tanasi", a Cherokee town in the eastern part of the state that existed before the first European American settlement. Tennessee was initially part of North Carolina, and later the Southwest Territory, before its admission to the Union as the 16th state on June 1, 1796. It earned the nickname "The Volunteer State" early in its history due to a strong tradition of military service. A slave state until the American Civil War, Tennessee was politically divided, with its western and middle parts supporting the Confederacy and the eastern region harboring pro-Union sentiment. As a result, Tennessee was the last state to secede and the first readmitted to the Union after the war.
During the 20th century, Tennessee transitioned from a predominantly agrarian society to a more diversified economy. This was aided in part by massive federal investment in the Tennessee Valley Authority (TVA) and the city of Oak Ridge, which was established during World War II to house the Manhattan Project's uranium enrichment facilities for the construction of the world's first atomic bombs. These were dropped on Imperial Japan at the end of the war. After the war, the Oak Ridge National Laboratory became a key center of scientific research. In 2016, the element tennessine was named for the state, largely in recognition of the roles played by Oak Ridge, Vanderbilt University, and the University of Tennessee in its discovery. Tennessee has also played a major role in the development of many forms of popular music, including country, blues, rock and roll, soul, and gospel.
Tennessee has diverse terrain and landforms, and from east to west, contains a mix of cultural features characteristic of Appalachia, the Upland South, and the Deep South. The Blue Ridge Mountains along the eastern border reach some of the highest elevations in eastern North America, and the Cumberland Plateau contains many scenic valleys and waterfalls. The central part of the state is marked by cavernous bedrock and irregular rolling hills, and level, fertile plains define West Tennessee. The state is twice bisected by the Tennessee River, and the Mississippi River forms its western border. Its economy is dominated by the health care, music, finance, automotive, chemical, electronics, and tourism sectors, and cattle, soybeans, corn, poultry, and cotton are its primary agricultural products. The Great Smoky Mountains National Park, the nation's most visited national park, is in eastern Tennessee.
Etymology
Monument near the old site of Tanasi in Monroe County
The meaning and origin of the word are uncertain. Some accounts suggest it is a Cherokee modification of an earlier Yuchi word. It has been said to mean "meeting place", "winding river", or "river of the great bend". According to ethnographer James Mooney, the name "can not be analyzed" and its meaning is lost.
Geography
See also: List of counties in Tennessee
Map of Tennessee
Tennessee borders eight other states: Kentucky and Virginia to the north; North Carolina to the east; Georgia, Alabama, and Mississippi on the south; Arkansas and Missouri on the Mississippi River to the west. Tennessee ties Missouri as the state bordering the most other states. The state is trisected by the Tennessee River.
The highest point in the state is Clingmans Dome at 6,643 feet (2,025 m). Clingmans Dome, which lies on Tennessee's eastern border, is the highest point on the Appalachian Trail, and is the third highest peak in the United States east of the Mississippi River. The state line between Tennessee and North Carolina crosses the summit. The state's lowest point is the Mississippi River at the Mississippi state line: 178 feet (54 m). The geographical center of the state is located in Murfreesboro.
The state of Tennessee is geographically, culturally, economically, and legally divided into three Grand Divisions: East Tennessee, Middle Tennessee, and West Tennessee. The state constitution allows no more than two justices of the five-member Tennessee Supreme Court to be from one Grand Division and a similar rule applies to certain commissions and boards.
Tennessee features six principal physiographic regions: the Blue Ridge, the Appalachian Ridge and Valley Region, the Cumberland Plateau, the Highland Rim, the Nashville Basin, and the Gulf Coastal Plain. Tennessee is home to the most caves in the United States, with over 10,000 documented caves to date.
East Tennessee
Map of Tennessee highlighting East Tennessee
The Blue Ridge area lies on the eastern edge of Tennessee, bordering North Carolina. This region of Tennessee is characterized by the high mountains and rugged terrain of the western Blue Ridge Mountains, which are subdivided into several subranges, namely the Great Smoky Mountains, the Bald Mountains, the Unicoi Mountains, the Unaka Mountains and Roan Highlands, and the Iron Mountains.
The average elevation of the Blue Ridge area is 5,000 feet (1,500 m) above sea level. Clingmans Dome, the state's highest point, is located in this region. The Blue Ridge area was never more than sparsely populated, and today much of it is protected by the Cherokee National Forest, the Great Smoky Mountains National Park, and several federal wilderness areas and state parks.
Stretching west from the Blue Ridge for approximately 55 miles (89 km) is the Ridge and Valley region, in which numerous tributaries join to form the Tennessee River in the Tennessee Valley. This area of Tennessee is covered by fertile valleys separated by wooded ridges, such as Bays Mountain and Clinch Mountain. The western section of the Tennessee Valley, where the depressions become broader and the ridges become lower, is called the Great Valley. In this valley are numerous towns and two of the region's three urban areas, Knoxville, the third largest city in the state, and Chattanooga, the fourth largest city in the state. The third urban area, the Tri-Cities, comprising Bristol, Johnson City, and Kingsport and their environs, is located to the northeast of Knoxville.
The Cumberland Plateau rises to the west of the Tennessee Valley; this area is covered with flat-topped mountains separated by sharp valleys. The elevation of the Cumberland Plateau ranges from 1,500 to about 2,000 feet (460 to about 610 m) above sea level.
East Tennessee has several important transportation links with Middle and West Tennessee, as well as the rest of the nation and the world, including several major airports and interstates. Knoxville's McGhee Tyson Airport (TYS) and Chattanooga's Chattanooga Metropolitan Airport (CHA), as well as the Tri-Cities' Tri-Cities Regional Airport (TRI), provide air service to numerous destinations. I-24, I-81, I-40, I-75, and I-26 along with numerous state highways and other important roads, traverse the Grand Division and connect Chattanooga, Knoxville, and the Tri-Cities, along with other cities and towns such as Cleveland, Athens, and Sevierville.
Middle Tennessee
Map of Tennessee highlighting Middle Tennessee
West of the Cumberland Plateau is the Highland Rim, an elevated plain that surrounds the Nashville Basin. The northern section of the Highland Rim, known for its high tobacco production, is sometimes called the Pennyroyal Plateau; it is located primarily in Southwestern Kentucky. The Nashville Basin is characterized by rich, fertile farm country and great diversity of natural wildlife.
Middle Tennessee was a common destination of settlers crossing the Appalachians from Virginia in the late 18th century and early 19th century. An important trading route called the Natchez Trace, created and used for many generations by American Indians, connected Middle Tennessee to the lower Mississippi River town of Natchez. The route of the Natchez Trace was used as the basis for a scenic highway called the Natchez Trace Parkway.
Some of the last remaining large American chestnut trees grow in this region. They are being used to help breed blight-resistant trees.
Middle Tennessee is one of the primary state population and transportation centers along with the heart of state government. Nashville (the capital), Clarksville, and Murfreesboro are its largest cities. Fifty percent of the US population is within 600 miles (970 km) of Nashville. Interstates I-24, I-40, and I-65 service the Division, meeting in Nashville.
West Tennessee
Map of Tennessee highlighting West Tennessee
West of the Highland Rim and Nashville Basin is the Gulf Coastal Plain, which includes the Mississippi embayment. The Gulf Coastal Plain is, in terms of area, the predominant land region in Tennessee. It is part of the large geographic land area that begins at the Gulf of Mexico and extends north into southern Illinois. In Tennessee, the Gulf Coastal Plain is divided into three sections that extend from the Tennessee River in the east to the Mississippi River in the west.
The easternmost section, about 10 miles (16 km) in width, consists of hilly land that runs along the western bank of the Tennessee River. To the west of this narrow strip of land is a wide area of rolling hills and streams that stretches all the way to the Mississippi River; this area is called the Tennessee Bottoms or bottom land. In Memphis, the Tennessee Bottoms end in steep bluffs overlooking the river. To the west of the Tennessee Bottoms is the Mississippi Alluvial Plain, less than 300 feet (91 m) above sea level. This area of lowlands, flood plains, and swamp land is sometimes referred to as the Delta region. Memphis is the economic center of West Tennessee and the state's second-largest city.
Most of West Tennessee remained Indian land until the Chickasaw Cession of 1818, when the Chickasaw ceded their land between the Tennessee River and the Mississippi River. The portion of the Chickasaw Cession that lies in Kentucky is known today as the Jackson Purchase.
Climate
Autumn in Tennessee. Roadway to Lindsey Lake in David Crockett State Park, located a half mile west of Lawrenceburg.
Most of the state has a humid subtropical climate, with the exception of some of the higher elevations in the Appalachians, which are classified as having a mountain temperate climate or a humid continental climate due to cooler temperatures. The Gulf of Mexico is the dominant factor in the climate of Tennessee, with winds from the south being responsible for most of the state's annual precipitation. Generally, the state has hot summers and mild to cool winters with generous precipitation throughout the year, with highest average monthly precipitation generally in the winter and spring months, between December and April. The driest months, on average, are August to October. On average the state receives 50 inches (130 cm) of precipitation annually. Snowfall ranges from 5 inches (13 cm) in West Tennessee to over 16 inches (41 cm) in the higher mountains in East Tennessee.
Summers in the state are generally hot and humid, with most of the state averaging a high of around 90 °F (32 °C) during the summer months. Winters tend to be mild to cool, increasing in coolness at higher elevations. Generally, for areas outside the highest mountains, the average overnight lows are near freezing for most of the state.
While the state is far enough from the coast to avoid any direct impact from a hurricane, its location makes it likely to be impacted by the remnants of tropical cyclones, which weaken over land and can cause significant rainfall, such as Tropical Storm Chris in 1982 and Hurricane Opal in 1995.
The state averages around 50 days of thunderstorms per year, some of which can be severe with large hail and damaging winds. Tornadoes are possible throughout the state, with West and Middle Tennessee the most vulnerable. Occasionally, strong or violent tornadoes occur, such as the devastating April 2011 tornadoes that killed 20 people in North Georgia and Southeast Tennessee. On average, the state has 15 tornadoes per year. Tornadoes in Tennessee can be severe, and Tennessee leads the nation in the percentage of total tornadoes which have fatalities.
Fog is a persistent problem in parts of the state, especially in East Tennessee.
Major cities
See also: List of municipalities in Tennessee
The capital is Nashville, though Knoxville, Kingston, and Murfreesboro have all served as state capitals in the past. Memphis has the second-largest population of any city in the state. Nashville's 13-county metropolitan area has been the state's largest since c. 1990. Chattanooga and Knoxville, both in the eastern part of the state near the Great Smoky Mountains, each has approximately one-third of the population of Memphis or Nashville. The city of Clarksville is a fifth significant population center, 45 miles (72 km) northwest of Nashville. Murfreesboro is the sixth-largest city in Tennessee, with 108,755 residents.
History
Early history
Reconstruction of Fort Loudon, the first British settlement in Tennessee
The area now known as Tennessee was first inhabited by Paleo-Indians nearly 12,000 years ago. The names of the cultural groups that inhabited the area between first settlement and the time of European contact are unknown, but several distinct cultural phases have been named by archaeologists, including Archaic (8000–1000 BC), Woodland (1000 BC–1000 AD), and Mississippian (1000–1600 AD), whose chiefdoms were the cultural predecessors of the Muscogee people who inhabited the Tennessee River Valley before Cherokee migration into the river's headwaters.
The first recorded European excursions into what is now called Tennessee were three expeditions led by Spanish explorers, namely Hernando de Soto in 1540, Tristan de Luna in 1559, and Juan Pardo in 1567. Pardo recorded the name "Tanasqui" from a local Indian village, which evolved to the state's current name.
At that time, Tennessee was inhabited by tribes of Muscogee and Yuchi people. Possibly because of European diseases devastating the Indian tribes, which would have left a population vacuum, and also from expanding European settlement in the north, the Cherokee moved south from the area now called Virginia. As European colonists spread into the area, the Indian populations were forcibly displaced to the south and west, including all Muscogee and Yuchi peoples, the Chickasaw and Choctaw, and ultimately, the Cherokee in 1838.
The first British settlement in what is now Tennessee was built in 1756 by settlers from the colony of South Carolina at Fort Loudoun, near present-day Vonore. Hostilities erupted between the British and the neighboring Overhill Cherokees, and a siege of Fort Loudoun ended with its surrender on August 7, 1760. The following morning, Captain Paul Demeré and a number of his men were killed in an ambush nearby, and most of the rest of the garrison was taken prisoner.
In the 1760s, long hunters from Virginia explored much of East and Middle Tennessee, and the first permanent European settlers began arriving late in the decade. The vast majority of 18th-century settlers were English or of primarily English descent, while nearly 20% were Scotch-Irish. These settlers formed the Watauga Association, a community built on lands leased from the Cherokee peoples.
The frontier fort on the banks of the Watauga River served as a 1780 staging area for the Overmountain Men in preparation to trek over the Appalachian Mountains, to engage, and to later defeat the British Army at the Battle of Kings Mountain in South Carolina.
Three counties of the Washington District (now part of Tennessee) broke off from North Carolina in 1784 and formed the State of Franklin. Efforts to obtain admission to the Union failed, and the counties (now numbering eight) had re-joined North Carolina by 1789. North Carolina ceded the area to the federal government in 1790, after which it was organized into the Southwest Territory.
Statehood (1796)
Tennessee was admitted to the Union on June 1, 1796 as the 16th state. It was the first state created from territory under the jurisdiction of the United States federal government.
Cherokee Chief John Ross tried to defend the Cherokees' rights in court
During the administration of U.S. President Martin Van Buren, nearly 17,000 Cherokees—along with approximately 2,000 black slaves owned by Cherokees—were uprooted from their homes between 1838 and 1839 and were forced by the U.S. military to march from "emigration depots" in Eastern Tennessee (such as Fort Cass) toward the more distant Indian Territory west of Arkansas (now the state of Oklahoma). During this relocation an estimated 4,000 Cherokees died along the way west. In the Cherokee language, the event is called Nunna daul Isunyi—"the Trail Where We Cried." The Cherokees were not the only American Indians forced to emigrate as a result of the Indian removal efforts of the United States, and so the phrase "Trail of Tears" is sometimes used to refer to similar events endured by other American Indian peoples, especially among the "Five Civilized Tribes". The phrase originated as a description of the earlier emigration of the Choctaw nation.
Civil War and Reconstruction
The Tennessee legislature ratified an agreement to enter a military league with the Confederate States on May 7, 1861. On June 8, 1861, voters approved a second referendum calling for secession, becoming the last state to do so.
Many major battles of the American Civil War were fought in Tennessee—most of them Union victories. Ulysses S. Grant and the U.S. Navy captured control of the Cumberland and Tennessee rivers in February 1862. They held off the Confederate counterattack at Shiloh in April. Memphis fell to the Union in June, following a naval battle on the Mississippi River in front of the city. The capture of Memphis and Nashville gave the Union control of the western and middle sections; this control was confirmed at the Battle of Murfreesboro in early January 1863 and by the subsequent Tullahoma Campaign.
The Battle of Franklin, November 30, 1864
Confederates held East Tennessee. Confederate troops led by General James Longstreet attacked General Burnside's Fort Sanders at Knoxville and lost, a major blow to Confederate momentum in East Tennessee, though Longstreet won the Battle of Bean's Station a few weeks later. The Confederates besieged Chattanooga during the Chattanooga Campaign in early fall 1863, but were driven off by Grant in November.
The last major battles came when the Confederates invaded Middle Tennessee in November 1864 and were checked at Franklin, then completely dispersed by George Thomas at Nashville in December. Meanwhile, the civilian Andrew Johnson was appointed military governor of the state by President Abraham Lincoln.
When the Emancipation Proclamation was announced, Tennessee was mostly held by Union forces. Thus, Tennessee was not among the states enumerated in the Proclamation, and the Proclamation did not free any slaves there. Nonetheless, enslaved African Americans escaped to Union lines to gain freedom without waiting for official action. Old and young, men, women and children camped near Union troops. Thousands of former slaves ended up fighting on the Union side, nearly 200,000 in total across the South.
Tennessee's legislature approved an amendment to the state constitution prohibiting slavery on February 22, 1865. It also ratified the Thirteenth Amendment to the United States Constitution (abolishing slavery in every state) on April 7, 1865.
In 1864, Andrew Johnson was elected Vice President under Abraham Lincoln. He became President after Lincoln's assassination in 1865.
Through violence and intimidation against freedmen and their allies, White Democrats regained political power in Tennessee and other states across the South in the late 1870s and 1880s.
Over the next decade, the state legislature passed increasingly restrictive laws to control African Americans. Tens of thousands of taxpaying citizens were without representation for decades into the 20th century. Disfranchising legislation accompanied Jim Crow laws passed in the late 19th century, which imposed segregation in the state. In 1900, African Americans made up nearly 24% of the state's population, and numbered 480,430 citizens who lived mostly in the central and western parts of the state.
20th century
A group of workers at Norris Dam construction camp site. The TVA was formed as part of Roosevelt's New Deal legislation.
On August 18, 1920, Tennessee became the thirty-sixth and final state necessary to ratify the Nineteenth Amendment to the United States Constitution, which provided women the right to vote. Disfranchising voter registration requirements continued to keep most African Americans and many poor whites, both men and women, off the voter rolls.
In 1953 state legislators amended the state constitution, removing the poll tax. In many areas both blacks and poor whites still faced subjectively applied barriers to voter registration that did not end until after passage of national civil rights legislation, including the Voting Rights Act of 1965.
Tennessee celebrated its bicentennial in 1996. With a yearlong statewide celebration entitled "Tennessee 200", it opened a new state park (Bicentennial Mall) at the foot of Capitol Hill in Nashville.
The state has had major disasters, such as the Great Train Wreck of 1918, one of the worst train accidents in U.S. history, and the Sultana explosion on the Mississippi River near Memphis, the deadliest maritime disaster in U.S. history.
21st century
In April and May 2010, flooding in Middle Tennessee devastated Nashville and other parts of Middle Tennessee. In 2011, parts of East Tennessee, including Hamilton County and Apison in Bradley County, were devastated by the April 2011 tornado outbreak.
Demographics
Historical population
Census Pop. %±
1790 35,691
1800 105,602 195.9%
1810 261,727 147.8%
1820 422,823 61.6%
1830 681,904 61.3%
1840 829,210 21.6%
1850 1,002,717 20.9%
1860 1,109,801 10.7%
1870 1,258,520 13.4%
1880 1,542,359 22.6%
1890 1,767,518 14.6%
1900 2,020,616 14.3%
1910 2,184,789 8.1%
1920 2,337,885 7.0%
1930 2,616,556 11.9%
1940 2,915,841 11.4%
1950 3,291,718 12.9%
1960 3,567,089 8.4%
1970 3,923,687 10.0%
1980 4,591,120 17.0%
1990 4,877,185 6.2%
2000 5,689,283 16.7%
2010 6,346,105 11.5%
2020 6,910,840 8.9%
Source: 1910–2020
The 2020 United States census reported Tennessee's population at 6,910,840, an increase of 564,735, or 8.90%, since the 2010 census. Between 2010 and 2019, the state received a natural increase of 143,253 (744,274 births minus 601,021 deaths), and an increase from net migration of 338,428 people into the state. Immigration from outside the U.S. resulted in a net increase of 79,086, and migration within the country produced a net increase of 259,342. Tennessee's center of population is in Murfreesboro in Rutherford County.
According to the 2010 census, 6.4% of Tennessee's population were under age 5, 23.6% were under 18, and 13.4% were 65 or older. In recent years, Tennessee has been a top source of domestic migration, receiving an influx of people relocating from places such as California, the Northeast, and the Midwest due to the low cost of living and booming employment opportunities. In 2019, about 5.5% of Tennessee's population was foreign-born. Of the foreign-born population, approximately 42.7% were naturalized citizens and 57.3% non-citizens. The foreign-born population consisted of approximately 49.9% from Latin America, 27.1% from Asia, 11.9% from Europe, 7.7% from Africa, 2.7% from Northern America, and 0.6% from Oceania.
With the exception of a slump in the 1980s, Tennessee has been one of the fastest-growing states in the nation since 1970, benefiting from the larger Sun Belt phenomenon. The state has been a top destination for people relocating from Northeastern and Midwestern states. This time period has seen the birth of new economic sectors in the state and has positioned the Nashville and Clarksville metropolitan areas as two of the fastest-growing regions in the country.
Ethnicity
See also: African Americans in Tennessee
Ethnic composition as of the 2020 census
Race and ethnicity          Alone   Total
Non-Hispanic White/Anglo    70.9%   74.6%
African American            15.7%   17.0%
Hispanic or Latino           6.9%     -
Asian                        1.9%    2.5%
Native American              0.2%    2.0%
Pacific Islander             0.1%    0.1%
Other                        0.1%    0.3%
Historical racial composition   1940    1970    1990    2000    2010
White                          82.5%   83.9%   83.0%   80.2%   77.6%
Black                          17.4%   15.8%   16.0%   16.4%   16.7%
Asian                            -      0.1%    0.7%    1.0%    1.4%
Native                           -      0.1%    0.2%    0.3%    0.3%
Native Hawaiian and
other Pacific Islander           -       -       -      0.1%
Other race                       -       -      0.2%    1.0%    2.2%
Two or more races                -       -       -      1.1%    1.7%
In 2020, 6.9% of the total population was of Hispanic or Latino origin (they may be of any race), up from 4.6% in 2010. Between 2000 and 2010, the Hispanic population in Tennessee grew by 134.2%, the third-highest rate of any state. In 2020, Non-Hispanic or Latino Whites were 70.9% of the population, compared to 57.7% of the population nationwide. In 2010, the five most common self-reported ethnic groups in the state were American (26.5%), English (8.2%), Irish (6.6%), German (5.5%), and Scotch-Irish (2.7%). Most Tennesseans who self-identify as having American ancestry are of English and Scotch-Irish ancestry. An estimated 21–24% of Tennesseans are of predominantly English ancestry.
Religion
Religious affiliation (2014)
Evangelical Protestantism
52%
Unaffiliated
14%
Mainline Protestantism
13%
Historically Black Protestantism
8%
Catholic
6%
Other Christianity
3%
Other faiths
3%
Judaism
1%
Islam
1%
Since colonization, Tennessee has always been, and remains, predominantly Christian. About 81% of the population identifies as Christian, with Protestants making up 73% of the population. Of the Protestants in the state, Evangelical Protestants compose 52% of the population, Mainline Protestants 13%, and Historically Black Protestants 8%. Roman Catholics make up 6%, Mormons 1%, and Orthodox Christians less than 1%. The largest denominations by number of adherents are the Southern Baptist Convention, the United Methodist Church, the Roman Catholic Church, and the Churches of Christ. Muslims and Jews each make up about 1% of the population, and adherents of other religions make up about 3% of the population. About 14% of Tennesseans are non-religious, with 11% identifying as "Nothing in particular", 3% as agnostics, and 1% as atheists.
Tennessee is included in most definitions of the Bible Belt, and is ranked as one of the nation's most religious states. Several Protestant denominations have their headquarters in Tennessee, including the Southern Baptist Convention and National Baptist Convention (in Nashville); the Church of God in Christ and the Cumberland Presbyterian Church (in Memphis); and the Church of God and the Church of God of Prophecy (in Cleveland); and the National Association of Free Will Baptists (in Antioch). Nashville has publishing houses of several denominations.
Economy
Black Angus bull in Van Buren County, Tennessee
Major outputs for the state include textiles, cotton, cattle, and electrical power.
Tennessee has over 82,000 farms, roughly 59 percent of which accommodate beef cattle.
Although cotton was an early crop in Tennessee, large-scale cultivation of the fiber did not begin until the 1820s with the opening of the land between the Tennessee and Mississippi Rivers. The upper wedge of the Mississippi Delta extends into southwestern Tennessee, and it was in this fertile section that cotton took hold.
Soybeans are also heavily planted in West Tennessee, focusing on the northwest corner of the state.
Major corporations with headquarters in Tennessee include FedEx, AutoZone and International Paper, all based in Memphis; Pilot Corporation and Regal Entertainment Group, based in Knoxville; Eastman Chemical Company, based in Kingsport; the North American headquarters of Nissan Motor Company, based in Franklin; Hospital Corporation of America and Caterpillar Financial, based in Nashville; and Unum, based in Chattanooga. Tennessee is also the location of the Volkswagen assembly plant in Chattanooga, a $2 billion polysilicon production facility by Wacker Chemie in Bradley County, and a $1.2 billion polysilicon production facility by Hemlock Semiconductor in Clarksville.
Tourism
Graceland main entrance sign
Tourism contributes billions of dollars each year to the state's economy and Tennessee is ranked among the Top 10 destinations in the US.
Some of the top tourist attractions in the state are: the Great Smoky Mountains National Park, Graceland, Dollywood, Beale Street, Lower Broadway, the Ryman Auditorium, Gaylord Opryland Resort, Lookout Mountain, the Ocoee River, and the Tennessee Aquarium.
Music
Johnny Cash and The Tennessee Three
Tennessee has played a critical role in the development of many forms of American popular music, including rock and roll, blues, country, and rockabilly.
Beale Street in Memphis is considered by many to be the birthplace of the blues, with musicians such as W. C. Handy performing in its clubs as early as 1909. Memphis is also home to Sun Records, where musicians such as Elvis Presley, Johnny Cash, Carl Perkins, Jerry Lee Lewis, Roy Orbison, and Charlie Rich began their recording careers, and where rock and roll took shape in the 1950s.
The 1927 Victor recording sessions in Bristol generally mark the beginning of the country music genre and the rise of the Grand Ole Opry in the 1930s helped make Nashville the center of the country music recording industry.
Three brick-and-mortar museums recognize Tennessee's role in nurturing various forms of popular music: the Memphis Rock N' Soul Museum, the Country Music Hall of Fame and Museum in Nashville, and the International Rock-A-Billy Museum in Jackson. Moreover, the Rockabilly Hall of Fame, an online site recognizing the development of rockabilly in which Tennessee played a crucial role, is based in Nashville.
Transportation
Airports
Major airports within the state include Memphis International Airport (MEM), Nashville International Airport (BNA), McGhee Tyson Airport (TYS) in Alcoa, Chattanooga Metropolitan Airport (CHA), Tri-Cities Regional Airport (TRI), and McKellar-Sipes Regional Airport (MKL) in Jackson. Because Memphis International Airport is the major hub for FedEx Corporation, it hosts the world's largest air cargo operation.
Railroads
For passenger rail service, Memphis and Newbern are served by the Amtrak City of New Orleans line on its run between Chicago, Illinois, and New Orleans, Louisiana. Nashville is served by the Music City Star commuter rail service.
Cargo services in Tennessee are primarily served by CSX Transportation, which has a hump yard in Nashville called Radnor Yard. Norfolk Southern Railway operates lines in East Tennessee, through cities including Knoxville and Chattanooga, and operates a classification yard near Knoxville, the John Sevier Yard. BNSF operates a major intermodal facility in Memphis.
Tribal governance
The Mississippi Band of Choctaw Indians is the only federally recognized Native American tribe in the state. It owns 79 acres (32 ha) in Henning, which the tribe placed into federal trust in 2012 and governs directly.
State symbols
Mockingbird
Education
See also: List of school districts in Tennessee and List of high schools in Tennessee
Education in Tennessee is administered by the Tennessee Department of Education. The state Board of Education consists of eleven members: nine members, one from each congressional district; a student member; and the executive director of the Tennessee Higher Education Commission (THEC), who serves as an ex-officio nonvoting member. Public primary and secondary education systems are operated by county, city, or special school districts to provide education at the local level, and operate under the direction of the Tennessee Department of Education. The state also has many private schools.
The state enrolls approximately 1 million K–12 students in 137 districts. In 2020, the four-year high school graduation rate was 89.6%, a decrease of 0.1% from the previous year. According to the most recent data, Tennessee spends $9,544 per student, the 8th lowest in the nation.
Colleges and universities
Vanderbilt University in Nashville is consistently ranked as one of the top research institutions in the nation
Public higher education is under the oversight of the Tennessee Higher Education Commission (THEC) which provides guidance to the state's two public university systems. The University of Tennessee system operates four primary campuses in Knoxville, Chattanooga, Martin, and Pulaski; a Health Sciences Center in Memphis; and an aerospace research facility in Tullahoma. The Tennessee Board of Regents (TBR), also known as The College System of Tennessee, operates 13 community colleges and 27 campuses of the Tennessee Colleges of Applied Technology (TCAT). Until 2017, the TBR also operated six public universities in the state, but now only provides administrative support.
In 2014, the Tennessee General Assembly created the Tennessee Promise, which allows in-state high school graduates to enroll in two-year post-secondary education programs such as associate degrees and certificates at community colleges and trade schools in Tennessee tuition-free, funded by the state lottery, if they meet certain requirements. The Tennessee Promise was created as part of then-governor Bill Haslam's "Drive to 55" program, which set a goal of increasing the number of college-educated residents to at least 55% of the state's population. The program has also received national attention, with multiple states having since created similar programs modeled on the Tennessee Promise.
There are currently 107 private institutions in Tennessee. Vanderbilt University in Nashville is consistently ranked as one of the nation's leading research institutions. In addition, Nashville is often labeled as the "Athens of the South" due to the many colleges and universities in the city. Tennessee is also home to six historically Black colleges and universities (HBCUs).
Sports
Tennessee's major professional sports franchises. Clockwise from upper left: Tennessee Titans, Nashville Predators, Nashville SC, and Memphis Grizzlies.
Tennessee is home to four major professional sports franchises: the Tennessee Titans have played in the National Football League (NFL) since 1997, the Nashville Predators have played in the National Hockey League (NHL) since 1998, the Memphis Grizzlies have played in the National Basketball Association (NBA) since 2001, and Nashville SC has played in Major League Soccer (MLS) since 2020.
The state is also home to eight minor league teams. Four of these are Minor League Baseball clubs. The Nashville Sounds, which began play in 1978, and Memphis Redbirds, which began in 1998, compete in the Triple-A East, the highest level below Major League Baseball. The Tennessee Smokies, which have played continuously since 1972, and Chattanooga Lookouts, which have played continuously since 1976, are members of the Double-A South. Tennessee has three minor league soccer teams. Memphis 901 FC has played in the second-tier USL Championship since 2019. Chattanooga Red Wolves SC has been a member of the third-tier USL League One since 2019. Founded in 2009, Chattanooga FC began playing in the third-tier National Independent Soccer Association in 2020. The state has one minor league ice hockey team: the Knoxville Ice Bears, which began play in 2002 and are members of the Southern Professional Hockey League.
The state is home to 12 NCAA Division I programs. Four of these participate in the top level of college football, the Football Bowl Subdivision. In Knoxville, the Tennessee Volunteers college teams play in the Southeastern Conference (SEC) of the National Collegiate Athletic Association (NCAA). In Nashville, the Vanderbilt Commodores are also members of the SEC. The Memphis Tigers are members of the American Athletic Conference, and Murfreesboro's Middle Tennessee Blue Raiders play in Conference USA. Nashville is also home to the Belmont Bruins and Tennessee State Tigers, both members of the Ohio Valley Conference (OVC), and the Lipscomb Bisons, members of the ASUN Conference. Tennessee State plays football in Division I's second level, the Football Championship Subdivision, while Belmont and Lipscomb do not have football teams. The OVC also includes the Austin Peay Governors from Clarksville, the UT Martin Skyhawks from Martin, and the Tennessee Tech Golden Eagles from Cookeville. The Chattanooga Mocs and Johnson City's East Tennessee State Buccaneers are full members, including football, of the Southern Conference.
Tennessee is also home to the Bristol Motor Speedway, which features NASCAR Cup Series racing two weekends a year, routinely selling out more than 160,000 seats on each date. The Nashville Superspeedway in Lebanon, which previously held Nationwide and IndyCar races until it was shut down in 2011, reopened to host the NASCAR Cup Series in 2021. Tennessee's only graded stakes horserace, the Iroquois Steeplechase, is held in Nashville each May. The WGC Invitational is a PGA Tour golf tournament that has been held in Memphis since 1958.
|
NEW email notification for MFA settings update
As a precaution, users should be notified whenever their account details pertaining to security are updated, in order to let them respond to what could be a security breach. The easiest way to do this is to send an email to their registered address.
Closes #67
Cool, I adapted it along the lines of your comment too @ScopeyNZ - however I didn't see the point in dependencies, as the execution already fetches a named service. Leaving a member variable would still have state, whether it be given or installed via Injector.
Broken build was from a race condition. A rerun has made the checks green.
Addressed, and tests passed.
|
In this paper, five different approaches for reduced-order modeling of brittle fracture in geomaterials, specifically concrete, are presented and compared. Four of the five methods rely on machine learning (ML) algorithms to approximate important aspects of the brittle fracture problem. In addition to the ML algorithms, each method incorporates different physics-based assumptions in order to reduce the computational complexity while maintaining the physics as much as possible. This work specifically focuses on using the ML approaches to model a 2D concrete sample under low strain rate pure tensile loading conditions with 20 preexisting cracks present. A high-fidelity finite element-discrete element model is used to produce both a training dataset of 150 simulations and an additional 35 simulations for validation. Results from the ML approaches are directly compared against the results from the high-fidelity model. Strengths and weaknesses of each approach are discussed, and the most important conclusion is that a combination of physics-informed and data-driven features is necessary for emulating the physics of crack propagation, interaction, and coalescence. All of the models presented here have runtimes that are orders of magnitude faster than the original high-fidelity model and pave the path for developing accurate reduced-order models that could be used to inform larger length-scale models with important sub-scale physics that often cannot be accounted for due to computational cost.
|
A very showy and attractive annual with richly colored funnel or petunia-shaped blossoms. Each flower, regardless of color, is beautifully veined or penciled.
SHASTA DAISIES p
ALASKA—A splendid hardy perennial, producing extra large flowers of pure glistening white, with broad overlapping petals. Pkt. 10c; ⅛ oz. 50c.
STATICE
A very valuable plant, either for borders or rock gardens. If cut and dried in a cool place will retain its true color and last for months. (18 inch).
SINUATA—Mixed. (Annual.) Pkt. 10c; ¼ oz. 25c.
STOCKS a
unsightly views and fences.
STELLA—Flowers well formed and of golden yellow color with black center. Pkt. 10c; ¼ oz. 25c.
DOUBLE CALIFORNIA—Rich golden yellow flowers, with green center. Pkt. 10c; ¼ oz. 15c.
RUSSIAN GIANT—Huge golden yellow heads on plants 8 to 10 feet in height. Pkt. 10c.
DWARF SINGLE RED—A very desirable plant for backgrounds and screening unsightly corners. Pkt. 10c; ⅛ oz. 20c.
Giant Flowering SWEET PEAS
B-B SPENCER MIXED a—The very best varieties of the Spencer large type of Sweet Peas are contained in this wonderful mixture. Pkt. 10c; oz. 15c; ¼ lb. 40c; lb. $1.25.
EARLY FLOWERING SPENCER a
A new race of Sweet Peas that is becoming more popular each year. The blossoms are equal to the late flowering Spencer and at least three weeks earlier. They are excellent for greenhouse as well as outside planting.
RUFFLED SPENCER MIXED—The extra frilliness of the blooms gives them a double appearance; wide range of colors. Pkt. 10c; ¼ oz. 25c.
HARDY SWEET PEAS p
TEXAS BLUE BONNET
An annual growing about 18 inches and flowering in the late Spring and early Summer. Beautiful dark blue flowers tinged with white. Pkt. 10c; ¼ oz. 25c.
TRITOMA (Red Hot Poker) p
|
Puchek
Puchek was a male Rodian who worked for the Nikto pirate Nim Abek. He resided on Abek's Station.
Appearances
* Secrets of the Sisar Run
|
Talk:Stenden University South Africa
Move discussion in progress
There is a move discussion in progress on Talk:Stenden University South Africa which affects this page. Please participate on that page and not in this talk page section. Thank you. —RMCD bot 22:32, 20 July 2017 (UTC)
|
// Copyright (c) Dolittle. All rights reserved.
// Licensed under the MIT license. See LICENSE file in the project root for full license information.
using System.Threading;
using Dolittle.SDK.Events.Filters.Internal;
using Dolittle.SDK.Events.Processing;
using Microsoft.Extensions.Logging;
namespace Dolittle.SDK.Events.Filters
{
    /// <summary>
    /// Represents the builder for building public event filters.
    /// </summary>
    public class PublicEventFilterBuilder
    {
        static readonly PublicEventFilterProtocol _protocol = new PublicEventFilterProtocol();

        PartitionedFilterEventCallback _callback;

        /// <summary>
        /// Initializes a new instance of the <see cref="PublicEventFilterBuilder"/> class.
        /// </summary>
        /// <param name="filterId">The <see cref="FilterId" />.</param>
        public PublicEventFilterBuilder(FilterId filterId) => FilterId = filterId;

        /// <summary>
        /// Gets the <see cref="FilterId" /> of the filter that this builder builds.
        /// </summary>
        public FilterId FilterId { get; }

        /// <summary>
        /// Defines a callback for the filter.
        /// </summary>
        /// <param name="callback">The callback that will be called for each event.</param>
        public void Handle(PartitionedFilterEventCallback callback)
            => _callback = callback;

        /// <summary>
        /// Builds and registers an instance of <see cref="PublicEventFilterProcessor" />.
        /// </summary>
        /// <param name="eventProcessors">The <see cref="IEventProcessors" />.</param>
        /// <param name="converter">The <see cref="IEventProcessingConverter" />.</param>
        /// <param name="loggerFactory">The <see cref="ILoggerFactory" />.</param>
        /// <param name="cancellation">The <see cref="CancellationToken" />.</param>
        public void BuildAndRegister(
            IEventProcessors eventProcessors,
            IEventProcessingConverter converter,
            ILoggerFactory loggerFactory,
            CancellationToken cancellation)
        {
            ThrowIfCallbackIsMissing();

            var filter = new PublicEventFilterProcessor(FilterId, _callback, converter, loggerFactory);
            eventProcessors.Register(filter, _protocol, cancellation);
        }

        void ThrowIfCallbackIsMissing()
        {
            if (_callback == default) throw new MissingFilterCallback(FilterId, ScopeId.Default);
        }
    }
}
|
[BUG]: in the DFP pipeline, dfencoder sometimes fails to train, and when it does the training pipeline errors out.
Version
23.01
Which installation method(s) does this occur on?
No response
Describe the bug.
When training for individual usernames, it is possible that training for a specific user will not converge, even though the input data is technically valid.
A robust pipeline would recover from this error and move on to the next username, perhaps marking the failed username in some way for inference.
What the v23.01 pipeline does, however, is error out.
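For illustration, here is a minimal sketch of the kind of per-user guard being asked for. This is hypothetical, not the shipped Morpheus code; the names simply mirror the traceback below:

```python
import logging

from dfencoder import AutoEncoder  # the package that raises in the log below

logger = logging.getLogger(__name__)


def train_user_model(user_id, train_df, epochs=30):
    """Train a per-user model; return None instead of raising so that one
    bad user cannot take down the whole training pipeline."""
    try:
        model = AutoEncoder()  # default hyperparameters, for brevity
        model.fit(train_df, epochs=epochs)
        return model
    except RuntimeError as exc:
        # e.g. "torch.cat(): expected a non-empty list of Tensors"
        logger.warning("Training failed for user '%s'; skipping: %s", user_id, exc)
        return None
```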
Minimum reproducible example
No response
Relevant log output
====Registering Pipeline====
====Building Pipeline====
====Building Segment: linear_segment_0====
====Building Segment Complete!====
====Building Pipeline Complete!====
Starting! Time:<PHONE_NUMBER>.5476243
====Registering Pipeline Complete!====
====Starting Pipeline====
====Pipeline Started====
Added source: <from-multi-file-0; MultiFileSource(filenames=['/workspace/examples/data/dfp/tcp-open-training-data/GoodBad.JSON'])>
└─> fsspec.OpenFiles
Added stage: <dfp-file-batcher-1; DFPFileBatcherStage(date_conversion_func=functools.partial(<function date_extractor at 0x7f53f59a6040>, filename_regex=re.compile('(?P<year>\\d{4})-(?P<month>\\d{1,2})-(?P<day>\\d{1,2})T(?P<hour>\\d{1,2})(:|_)(?P<minute>\\d{1,2})(:|_)(?P<second>\\d{1,2})(?P<microsecond>\\.\\d{1,6})?Z')), period=D, sampling_rate_s=0, start_time=None, end_time=None)>
└─ fsspec.OpenFiles -> List[fsspec.core.OpenFiles]
Added stage: <dfp-s3-to-df-2; DFPFileToDataFrameStage(schema=DataFrameInputSchema(json_columns=[], column_info=[DateTimeColumn(name='timestamp', dtype=<class 'datetime.datetime'>, input_name='timestamp'), RenameColumn(name='username', dtype=<class 'str'>, input_name='username'), ColumnInfo(name='A', dtype=<class 'int'>), ColumnInfo(name='B', dtype=<class 'int'>), ColumnInfo(name='C', dtype=<class 'int'>), ColumnInfo(name='D', dtype=<class 'int'>), ColumnInfo(name='E', dtype=<class 'int'>), ColumnInfo(name='F', dtype=<class 'int'>), ColumnInfo(name='G', dtype=<class 'int'>)], preserve_columns=None, row_filter=None), filter_null=True, file_type=FileTypes.JSON, parser_kwargs={'lines': False, 'orient': 'records'}, cache_dir=/workspace/.cache/dfp)>
└─ List[fsspec.core.OpenFiles] -> cudf.DataFrame
Added stage: <dfp-split-users-3; DFPSplitUsersStage(include_generic=False, include_individual=True, skip_users=[], only_users=None)>
└─ cudf.DataFrame -> dfp.DFPMessageMeta
Added stage: <dfp-rolling-window-4; DFPRollingWindowStage(min_history=300, min_increment=300, max_history=60d, cache_dir=/workspace/.cache/dfp)>
└─ dfp.DFPMessageMeta -> dfp.MultiDFPMessage
Added stage: <dfp-preproc-5; DFPPreprocessingStage(input_schema=DataFrameInputSchema(json_columns=[], column_info=[ColumnInfo(name='timestamp', dtype=<class 'datetime.datetime'>), ColumnInfo(name='username', dtype=<class 'str'>), ColumnInfo(name='A', dtype=<class 'int'>), ColumnInfo(name='B', dtype=<class 'int'>), ColumnInfo(name='C', dtype=<class 'int'>), ColumnInfo(name='D', dtype=<class 'int'>), ColumnInfo(name='E', dtype=<class 'int'>), ColumnInfo(name='F', dtype=<class 'int'>), ColumnInfo(name='G', dtype=<class 'int'>), IncrementColumn(name='logcount', dtype=<class 'int'>, input_name='timestamp', groupby_column='username', period='D')], preserve_columns=re.compile('(_batch_id)'), row_filter=None))>
└─ dfp.MultiDFPMessage -> dfp.MultiDFPMessage
Added stage: <dfp-training-6; DFPTraining(model_kwargs=None, epochs=30, validation_size=0.1)>
└─ dfp.MultiDFPMessage -> morpheus.MultiAEMessage
Added stage: <dfp-mlflow-model-writer-7; DFPMLFlowModelWriterStage(model_name_formatter={user_id}, experiment_name_formatter=dfp/tcp-open/training/{reg_model_name}, databricks_permissions=None)>
└─ morpheus.MultiAEMessage -> morpheus.MultiAEMessage
_download_method is dask
NoneType: None
Creating dask cluster...
Creating dask cluster... Done. Dashboard: http://<IP_ADDRESS>:8787/status
dfs details: len(dfs)=1
NoneType: None
dfs details: dfs[0].head(2)= timestamp username A B C D E F G
0 2022-10-25 23:50:56+00:00 gooduser 21489 0 0 0 1498 3 0
1 2022-10-25 23:50:56+00:00 gooduser 212 1 0 0 234 49152 0
NoneType: None
S3 objects to DF complete. Rows: 1813, Cache: miss, Duration: 1596.8961715698242 ms
Stopping dask cluster...
Batch split users complete. Input: 1813 rows from 2022-10-25 23:50:56+00:00 to 2022-10-25 23:53:52+00:00. Output: 2 users, rows/user min: 583, max: 1230, avg: 906.50. Duration: 5.40 ms
Stopping dask cluster... Done.
Rolling window complete for baduser in 20.33 ms. Input: 583 rows from 2022-10-25 23:50:56+00:00 to 2022-10-25 23:53:52+00:00. Output: 583 rows from 2022-10-25 23:50:56+00:00 to 2022-10-25 23:53:52+00:00
Preprocessed 583 data for logs in 2022-10-25 23:50:56+00:00 to 2022-10-25 23:53:52+00:00 in 23.207426071166992 ms
Rolling window complete for gooduser in 28.53 ms. Input: 1230 rows from 2022-10-25 23:50:56+00:00 to 2022-10-25 23:53:52+00:00. Output: 1230 rows from 2022-10-25 23:50:56+00:00 to 2022-10-25 23:53:52+00:00
Training AE model for user: 'baduser'...
data set details, train_df.shape=(524, 7), validation_df.shape=(59, 7)
Preprocessed 1230 data for logs in 2022-10-25 23:50:56+00:00 to 2022-10-25 23:53:52+00:00 in 22.50814437866211 ms
E20230406 20:21:02.794371 1191 context.cpp:124] linear_segment_0/dfp-training-6; rank: 0; size: 1; tid:<PHONE_NUMBER>04992: set_exception issued; issuing kill to current runnable. Exception msg: RuntimeError: torch.cat(): expected a non-empty list of Tensors
At:
/opt/conda/envs/morpheus/lib/python3.8/site-packages/dfencoder/autoencoder.py(1041): get_anomaly_score_losses
/opt/conda/envs/morpheus/lib/python3.8/site-packages/dfencoder/autoencoder.py(1000): get_anomaly_score_with_losses
/opt/conda/envs/morpheus/lib/python3.8/site-packages/dfencoder/autoencoder.py(774): fit
/workspace/examples/digital_fingerprinting/production/morpheus/dfp/stages/dfp_training.py(95): on_data
/workspace/examples/digital_fingerprinting/production/morpheus/dfp/stages/dfp_training.py(108): node_fn
E20230406 20:21:02.796510 1114 runner.cpp:189] Runner::await_join - an exception was caught while awaiting on one or more contexts/instances - rethrowing
E20230406 20:21:02.797950 1114 instance.cpp:265] segment::Instance - an exception was caught while awaiting on one or more nodes - rethrowing
E20230406 20:21:02.797995 1114 instance.cpp:223] pipeline::Instance - an exception was caught while awaiting on segments - rethrowing
Exception occurred in pipeline. Rethrowing
Traceback (most recent call last):
File "/opt/conda/envs/morpheus/lib/python3.8/site-packages/morpheus/pipeline/pipeline.py", line 315, in join
await self._mrc_executor.join_async()
File "/workspace/examples/digital_fingerprinting/production/morpheus/dfp/stages/dfp_training.py", line 95, in on_data
model.fit(train_df, epochs=self._epochs, val=validation_df)
File "/opt/conda/envs/morpheus/lib/python3.8/site-packages/dfencoder/autoencoder.py", line 774, in fit
mse_loss, bce_loss, cce_loss, _ = self.get_anomaly_score_with_losses(df_for_loss_stats)
File "/opt/conda/envs/morpheus/lib/python3.8/site-packages/dfencoder/autoencoder.py", line 1000, in get_anomaly_score_with_losses
mse, bce, cce = self.get_anomaly_score_losses(df)
File "/opt/conda/envs/morpheus/lib/python3.8/site-packages/dfencoder/autoencoder.py", line 1037, in get_anomaly_score_losses
cce_loss_slice = torch.cat(cce_loss_slice_of_each_feat, dim=1) # merge the tensors into one (n_records * n_features) tensor
RuntimeError: torch.cat(): expected a non-empty list of Tensors
====Pipeline Complete====
Full env printout
(there is no scripts folder in ./external/utilities in my installation, v23.01)
Other/Misc.
I've sent a notebook and data to reproduce this to awood.
Code of Conduct
[X] I agree to follow Morpheus' Code of Conduct
[X] I have searched the open bugs and have found no duplicates for this bug report
It appears that the fit function in dfencoder encounters an error when no categorical feature is present in the feature set, affecting both 23.01 and 23.03. Specifically, the get_anomaly_score_losses function is responsible for this behavior.
We have identified this as a bug and will schedule a fix for the issue in our next release to add support for use cases where no categorical features are in use.
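As an illustration of the empty-list case, a guard of roughly this shape inside get_anomaly_score_losses would tolerate a feature set with no categorical columns. This is a sketch under that assumption, not the actual patch:

```python
import torch


def merge_cce_losses(cce_loss_slice_of_each_feat, n_records):
    """Merge per-feature CCE loss slices into one (n_records, n_features)
    tensor, tolerating a model built with zero categorical features."""
    if not cce_loss_slice_of_each_feat:
        # No categorical features: return an empty (n_records, 0) tensor so
        # downstream reductions still receive a well-formed tensor.
        return torch.zeros((n_records, 0))
    return torch.cat(cce_loss_slice_of_each_feat, dim=1)
```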
|
package genstub
import (
"bytes"
"os"
"path/filepath"
"strings"
"text/template"

"github.com/bcbchain/bcbchain/smccheck/parsecode"
)
// templateText is the text/template source rendered into the generated contract stub factory file.
const templateText = `package stub
import (
"github.com/bcbchain/sdk/sdk"
"contract/stubcommon/common"
"contract/stubcommon/types"
"fmt"
"github.com/bcbchain/bclib/tendermint/tmlibs/log"
{{- range $i,$directionName := $.DirectionNames}}
{{getName $i $.PackageNames}}{{replace (version $i $.Versions)}} "contract/{{getOrgID $i $.OrgIDs}}/stub/{{$directionName}}/v{{version $i $.Versions}}/{{$directionName}}"
{{- end}}
)
func NewStub(smc sdk.ISmartContract, logger log.Logger) types.IContractStub {
switch common.CalcKey(smc.Message().Contract().Name(), smc.Message().Contract().Version()) {
{{- range $j,$contractName := $.ContractNames}}
case "{{$contractName}}{{replace (version $j $.Versions)}}":
return {{getName $j $.PackageNames}}{{replace (version $j $.Versions)}}.New(logger)
{{- end}}
default:
logger.Fatal(fmt.Sprintf("NewStub error, contract=%s,version=%s", smc.Message().Contract().Name(), smc.Message().Contract().Version()))
}
return nil
}
func NewIBCStub(smc sdk.ISmartContract, logger log.Logger) types.IContractIBCStub {
switch common.CalcKey(smc.Message().Contract().Name(), smc.Message().Contract().Version()) {
{{- range $j,$contractName := $.ContractNames}}
case "{{$contractName}}{{replace (version $j $.Versions)}}":
return {{getName $j $.PackageNames}}{{replace (version $j $.Versions)}}.NewIBC(logger)
{{- end}}
default:
logger.Fatal(fmt.Sprintf("NewIBCStub error, contract=%s,version=%s", smc.Message().Contract().Name(), smc.Message().Contract().Version()))
}
return nil
}
`
// OrgContracts holds index-aligned slices of per-contract metadata fed to the factory template.
type OrgContracts struct {
OrgIDs []string
DirectionNames []string
ContractNames []string
PackageNames []string
Versions []string
}
// res2factory flattens the parse results into the index-aligned slices the template consumes.
func res2factory(reses []*parsecode.Result) OrgContracts {
sLen := len(reses)
factory := OrgContracts{
OrgIDs: make([]string, 0, sLen),
DirectionNames: make([]string, 0, sLen),
ContractNames: make([]string, 0, sLen),
PackageNames: make([]string, 0, sLen),
Versions: make([]string, 0, sLen),
}
for _, res := range reses {
factory.OrgIDs = append(factory.OrgIDs, res.OrgID)
factory.DirectionNames = append(factory.DirectionNames, res.DirectionName)
factory.ContractNames = append(factory.ContractNames, res.ContractName)
factory.PackageNames = append(factory.PackageNames, res.PackageName)
factory.Versions = append(factory.Versions, res.Version)
}
return factory
}
// GenConStFactory generates the contract stub factory Go source and writes it to outDir.
func GenConStFactory(reses []*parsecode.Result, outDir string) {
if err := os.MkdirAll(outDir, os.FileMode(0750)); err != nil {
panic(err)
}
filename := filepath.Join(outDir, "contractstubfactory.go")
funcMap := template.FuncMap{
"version": func(index int, versions []string) string {
return versions[index]
},
"replace": func(version string) string {
return strings.Replace(version, ".", "", -1)
},
"getName": func(index int, packageNames []string) string {
return packageNames[index]
},
"getOrgID": func(index int, orgIDs []string) string {
return orgIDs[index]
},
}
tmpl, err := template.New("contractStubFactory").Funcs(funcMap).Parse(templateText)
if err != nil {
panic(err)
}
factory := res2factory(reses)
var buf bytes.Buffer
if err = tmpl.Execute(&buf, factory); err != nil {
panic(err)
}
if err := parsecode.FmtAndWrite(filename, buf.String()); err != nil {
panic(err)
}
}
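For orientation, a hypothetical caller might look like the sketch below. The parsecode.Result fields are exactly the ones res2factory reads above; the genstub import path and all field values are placeholders for illustration, not taken from the real build.

package main

import (
	"path/filepath"

	"github.com/bcbchain/bcbchain/smccheck/genstub" // hypothetical import path
	"github.com/bcbchain/bcbchain/smccheck/parsecode"
)

func main() {
	// Only the fields consumed by res2factory are set; the values are illustrative.
	reses := []*parsecode.Result{{
		OrgID:         "orgExample",
		DirectionName: "mytoken",
		ContractName:  "mytoken",
		PackageName:   "mytoken",
		Version:       "1.0",
	}}
	genstub.GenConStFactory(reses, filepath.Join("build", "stub"))
}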
|
feat(icons): add medium AI icon
approving but please add a newline at the end of the file as well
You mean two new lines? because there is already one empty line at the end of the file
no, you're good
|
Extract Array Element From JSON Using T-SQL
I have the following response string from the Google Geocoding API stored in a SQL Server database:
{
"results":[
{
"address_components":[
{
"long_name":"Khalifa City",
"short_name":"Khalifa City",
"types":[
"political",
"sublocality",
"sublocality_level_1"
]
},
{
"long_name":"Abu Dhabi",
"short_name":"Abu Dhabi",
"types":[
"locality",
"political"
]
},
{
"long_name":"Abu Dhabi",
"short_name":"Abu Dhabi",
"types":[
"administrative_area_level_1",
"political"
]
},
{
"long_name":"United Arab Emirates",
"short_name":"AE",
"types":[
"country",
"political"
]
}
],
...
}
],
"status":"OK"
}
My task is to extract Country and City from the above JSON. I checked the data and it seems the Geocoding API does not always return 4 elements in the address_components node, so I need to get the array element whose types contains administrative_area_level_1 for the city, for example. Logically it should be something like this:
JSON_QUERY([Json], '$.results[0].address_components<where types = administrative_area_level_1>.short_name')
This is how I have addressed this in the past. You can run this within SSMS:
DECLARE @json AS VARCHAR(1000) = '{ "results":[ { "address_components":[
{ "long_name":"Khalifa City", "short_name":"Khalifa City", "types":[ "political", "sublocality", "sublocality_level_1" ] },
{ "long_name":"Abu Dhabi", "short_name":"Abu Dhabi", "types":[ "locality", "political" ] },
{ "long_name":"Abu Dhabi", "short_name":"Abu Dhabi", "types":[ "administrative_area_level_1", "political" ] },
{ "long_name":"United Arab Emirates", "short_name":"AE", "types":[ "country", "political" ] }
] } ], "status":"OK" }';
SELECT
Addresses.long_name, Addresses.short_name, Addresses.[types]
FROM OPENJSON ( @json, '$.results' ) WITH (
addresses NVARCHAR(MAX) '$.address_components' AS JSON
) AS j
CROSS APPLY (
SELECT * FROM OPENJSON ( j.addresses ) WITH (
long_name VARCHAR(50) '$.long_name',
short_name VARCHAR(50) '$.short_name',
[types] NVARCHAR(MAX) '$.types' AS JSON
) AS Names
CROSS APPLY OPENJSON ( [types] ) AS [Types]
WHERE [Types].[value] = 'administrative_area_level_1'
) AS Addresses;
Returns
+-----------+------------+------------------------------------------------+
| long_name | short_name | types |
+-----------+------------+------------------------------------------------+
| Abu Dhabi | Abu Dhabi | [ "administrative_area_level_1", "political" ] |
+-----------+------------+------------------------------------------------+
If I understand the question correctly and you want to parse the input JSON (even when the $.results JSON array has more than one item), the following approach may help:
JSON:
DECLARE @json nvarchar(max) = N'{
"results":[
{
"address_components":[
{"long_name":"Khalifa City", "short_name":"Khalifa City", "types":["political", "sublocality", "sublocality_level_1"]},
{"long_name":"Abu Dhabi", "short_name":"Abu Dhabi", "types":["locality", "political"]},
{"long_name":"Abu Dhabi", "short_name":"Abu Dhabi", "types":["administrative_area_level_1", "political"]},
{"long_name":"United Arab Emirates", "short_name":"AE", "types":["country", "political"]}
]
}
],
"status":"OK"
}'
Statement:
SELECT j2.long_name, j2.short_name
FROM OPENJSON(@json, '$.results') j1
CROSS APPLY OPENJSON(j1.value, '$.address_components') WITH (
long_name varchar(100) '$.long_name',
short_name varchar(100) '$.short_name',
types nvarchar(max) '$.types' AS JSON
) j2
CROSS APPLY OPENJSON(j2.types) j3
WHERE j3.[value] = 'administrative_area_level_1'
Output:
long_name short_name
----------------------
Abu Dhabi Abu Dhabi
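Since the original task needs the country as well as the city, the same pattern extends to both fields in one pass. A sketch building on the @json variable declared above (the City/Country aliases and the conditional-aggregation shape are my own, not part of either answer):

SELECT
    MAX(CASE WHEN j3.[value] = 'administrative_area_level_1' THEN j2.long_name END) AS City,
    MAX(CASE WHEN j3.[value] = 'country' THEN j2.long_name END) AS Country
FROM OPENJSON(@json, '$.results') j1
CROSS APPLY OPENJSON(j1.value, '$.address_components') WITH (
    long_name varchar(100) '$.long_name',
    types nvarchar(max) '$.types' AS JSON
) j2
CROSS APPLY OPENJSON(j2.types) j3
WHERE j3.[value] IN ('administrative_area_level_1', 'country');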
|
Write the following list of characters into a correctly formed sentence: Busstop(variouslines):40metres.Train:PlazaMiserere,600metres.
Bus stop (various lines): 40 metres. Train: Plaza Miserere, 600 metres.