// IMSAG/junit5-kubernetes: core/src/main/java/com/github/jeanbaptistewatenberg/junit5kubernetes/core/wait/WaitStrategy.java
package com.github.jeanbaptistewatenberg.junit5kubernetes.core.wait;

import java.time.Duration;

public abstract class WaitStrategy<T> implements IWaitStrategy<T> {

    // Default timeout applied when no explicit Duration is supplied.
    private Duration timeout = Duration.ofSeconds(30);

    public WaitStrategy() {
    }

    public WaitStrategy(Duration timeout) {
        this.timeout = timeout;
    }

    public Duration getTimeout() {
        return timeout;
    }
}
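To illustrate how this abstract class might be used, here is a hedged sketch of a concrete subclass. The `IWaitStrategy` interface is not shown in the source, so the single `waitUntilReady(T)` method below is an assumption (the real junit5-kubernetes interface may differ), and `ConditionWaitStrategy` is a hypothetical example, not part of the library; the abstract class is repeated so the block is self-contained.

```java
import java.time.Duration;
import java.time.Instant;
import java.util.function.BooleanSupplier;

// Assumed shape of the interface; the real one may differ.
interface IWaitStrategy<T> {
    void waitUntilReady(T resource);
}

abstract class WaitStrategy<T> implements IWaitStrategy<T> {

    private Duration timeout = Duration.ofSeconds(30);

    public WaitStrategy() {
    }

    public WaitStrategy(Duration timeout) {
        this.timeout = timeout;
    }

    public Duration getTimeout() {
        return timeout;
    }
}

// Hypothetical subclass: polls a boolean condition until it becomes true
// or the deadline derived from getTimeout() passes.
class ConditionWaitStrategy extends WaitStrategy<BooleanSupplier> {

    public ConditionWaitStrategy() {
        super();
    }

    public ConditionWaitStrategy(Duration timeout) {
        super(timeout);
    }

    @Override
    public void waitUntilReady(BooleanSupplier condition) {
        Instant deadline = Instant.now().plus(getTimeout());
        while (!condition.getAsBoolean()) {
            if (Instant.now().isAfter(deadline)) {
                throw new IllegalStateException("Timed out after " + getTimeout());
            }
            try {
                Thread.sleep(100); // back off between polls
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                throw new IllegalStateException("Interrupted while waiting", e);
            }
        }
    }
}
```

In a real strategy the polled condition would query the Kubernetes API (pod status, log lines, etc.) rather than a plain `BooleanSupplier`.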
How Do Instructors' Attendance Policies Influence Student Achievement in Principles of Microeconomics? This paper examines how an instructor's attendance policy influences student performance in Principles of Microeconomics. This study asked students in several different microeconomics classes at a medium-sized regional university what sort of attendance policy they were subject to: was there a grade incentive for coming to class (i.e., bonus points), was there a grade penalty for not coming to class (i.e., deduction of points, missed assignments, etc.), was there some combination of the previous two, or was there simply no attendance policy? We expected a variety of results because the six classes surveyed were taught by five different instructors, each with slightly different attendance policies. While there are a few papers showing a positive correlation between required attendance and course performance, this paper seeks to understand more about the impact of the type of attendance policy employed. Data are collected from a student survey and from the university's registrar. The main empirical evidence is gathered from a two-stage regression analysis with student absenteeism as the dependent variable in the first equation and a student's final grade (on a 4.0 scale) as the dependent variable in the second equation. We find that, everything else equal, students seem more motivated to come to class when they expect a positive reward, and they are more likely to miss class if they expect a negative punishment. Also, student attendance is a small but significant determinant of a student's course performance after controlling for other relevant factors.
This invention relates to the preparation of ceramic materials with reduced oxygen levels from polycarbosilanes by the pyrolysis of a mixture of a polycarbosilane, a hydrosilylation catalyst, and an unsaturated compound selected from the group consisting of reactive diolefins, reactive alkynes, polyolefins, vinylsilanes, and unsaturated siloxanes where the mixture is rendered infusible prior to pyrolysis by heating to relatively low temperatures in an inert atmosphere. This invention is especially well suited for the production of ceramic fibers from polycarbosilanes.
Generally, in preparing a shaped ceramic article such as a fiber from a preceramic polymer by pyrolysis at elevated temperatures, it is necessary, prior to pyrolysis, to render the shaped article infusible. Otherwise the shaped article will melt upon pyrolysis and thus the desired shape will be destroyed. The most common method of rendering the shaped article infusible has been an oxidation treatment. This method has the disadvantage of incorporating large amounts of oxygen in the resulting ceramic article. For example, standard grade Nicalon ceramic fibers, prepared from polycarbosilanes by Nippon Carbon Company Ltd, Tokyo, Japan, normally contain about 10-15 weight percent oxygen. High oxygen content results in decreased thermal stability of the ceramic materials at elevated temperatures.
Ceramic materials prepared from polycarbosilanes are known in the art. Verbeek et al. in German Application Publication No. 2,236,078, which is hereby incorporated by reference, prepared ceramic materials by firing a polycarbosilane prepared by the pyrolysis of monosilanes at elevated temperatures in an inert atmosphere. Linear, high molecular weight polymers such as polyethylene oxide, polyisobutylene, polymethylmethacrylate, polyisoprene, and polystyrene were reported to improve the fiber spinning characteristics of the polycarbosilanes. The polycarbosilane fibers were rendered infusible prior to pyrolysis by either thermal, oxidation, sulfidation, or hydrolysis treatment. The ceramic fibers were reported to contain between 0 and 30 weight percent oxygen but no details were given.
Yajima et al. in U.S. Pat. Nos. 4,052,430 (Oct. 4, 1977) and 4,100,233 (July 11, 1978), which are both hereby incorporated by reference, prepared ceramic materials by the pyrolysis of polycarbosilanes in an inert atmosphere or in a vacuum at an elevated temperature. The polycarbosilanes were prepared by thermally decomposing and polycondensing polysilanes. Polycarbosilane fibers were treated for 2-48 hours at 350.degree.-800.degree. C. under vacuum prior to pyrolysis to remove low molecular weight material. In some cases the fibers were first exposed to an oxidizing atmosphere at 50.degree.-400.degree. C. to form an oxide layer on the fibers and then treated under vacuum at 350.degree.-800.degree. C. The oxygen content of the resulting ceramic fibers was not reported.
Yajima et al. in U.S. Pat. Nos. 4,220,600 (Sept. 2, 1980) and 4,283,376 (Aug. 11, 1981), which are both hereby incorporated by reference, prepared ceramic materials by the pyrolysis of polycarbosilanes partly containing siloxane bonds at an elevated temperature under an inert atmosphere or a vacuum. These polycarbosilanes were prepared by heating polysilanes in the presence of about 0.01 to 15 weight percent of a polyborosiloxane in an inert atmosphere. Polycarbosilane fibers were rendered infusible prior to pyrolysis either by treatment with an oxidizing atmosphere at about 50.degree.-400.degree. C. to form an oxide layer on the fiber surface or by irradiation with gamma-rays or an electron beam under an oxidizing or non-oxidizing atmosphere. The oxygen content of the resulting ceramic fibers was in the range of 0.01 to 10 weight percent by chemical analysis. Oxygen in the form of silica could be further removed from the ceramic fiber by treatment in a hydrofluoric acid solution.
Iwai et al. in U.S. Pat. No. 4,377,677 (Mar. 22, 1983), which is hereby incorporated by reference, also produced ceramic materials by the pyrolysis of polycarbosilanes at elevated temperatures under an inert atmosphere or vacuum. The polycarbosilanes of Iwai were prepared by heating a polysilane at 50.degree.-600.degree. C. in an inert gas, distilling out a low molecular weight polycarbosilane fraction and then polymerizing the distilled fraction at 250.degree. to 500.degree. C. in an inert atmosphere. Polycarbosilane fibers were rendered infusible prior to pyrolysis by heating at relatively low temperatures in air. The oxygen content of the resulting ceramic fibers was not reported.
Schilling et al. in U.S. Pat. No. 4,414,403 (Nov. 8, 1983), which is hereby incorporated by reference, produced ceramic material by the pyrolysis of branched polycarbosilanes at elevated temperatures under an inert atmosphere or vacuum. The branched polycarbosilanes were prepared by reacting monosilanes with an active metal in an inert solvent at elevated temperatures where at least some of the monosilanes contained vinyl groups or halomethyl groups capable of forming branching during the polymerization. Methods of rendering the material infusible were not discussed.
Yajima et al., J. Mat. Sci., 13, 2569 (1978), Yajima, Bull. Amer. Ceram. Soc., 62, 893 (1983), and Hasegawa et al., J. Mat. Sci., 18, 3633 (1983) also discuss polycarbosilanes which are useful as preceramic polymers for preparing silicon carbide ceramics. In the Bull. Amer. Ceram. Soc. article, Yajima prepared ceramic fibers from polycarbosilanes which had been rendered infusible prior to pyrolysis by heating in air at 190.degree. C. The resulting fibers contained 15.5 weight percent oxygen, most of which was thought to be incorporated into the fiber during the curing step.
What has been discovered is a new method of rendering preceramic polycarbosilane polymers infusible prior to pyrolysis which results in a significantly reduced oxygen content in the ceramic materials produced from the pyrolysis of these infusible polycarbosilane polymers. This method represents a significant advance in the art of preparing ceramic materials or articles, especially in the art of preparing ceramic fibers.
Q:
Can a publisher publish a manuscript without the author's permission?
This is a follow-on from Is it ethical to withdraw a paper after acceptance in order to resubmit to a better journal?
If a journal is willing to publish your submitted manuscript "as is", can you prevent them? Clearly if they want you to make changes you have the right to say no, but once the manuscript is accepted can you really withdraw it against the publisher's wish? Further, who has the final say on copy edits and type setting?
In my field we electronically sign a copyright transfer when the manuscript is submitted that comes into effect if it is accepted for publication.
A:
Of course I'm not a lawyer, but I'd distinguish between two thresholds:
Before the paper can be published, you need to grant legal permission via a license or copyright transfer. If you haven't done that yet, then the publisher can't force you to let them publish the paper. That gives a narrow window in which you could still block publication after the paper is accepted (but whether you could ethically do so depends on your reason for objecting).
Once the published version has appeared (even just on the publisher's website), there's nothing you can do without a powerful reason. At that point, you would be retracting a published paper, which is a far more serious act.
In between these thresholds, I don't know what would happen. I'd guess that if you asked the publisher not to publish a paper you had already signed a copyright transfer for, then they would probably agree. After all, publishing a paper against the author's wishes could look bad, even if they were legally entitled to do so. However, if you didn't have a very good reason (such as a major error in the paper), then the publisher would be rather unhappy. I wouldn't be surprised if they asked you to cover any copyediting or typesetting costs, and this sort of unprofessional behavior would be terrible for your reputation.
Further, who has the final say on copy edits and type setting?
In principle this depends on the publishing agreement. In practice, the ones I've seen usually give the publisher final say, but the publisher usually defers to the author about anything intellectually substantive during the proofreading stage. (On the other hand, the author gets more or less no input into matters of style such as font choice, British vs. American spelling, etc.)
A:
Further, who has the final say on copy edits and type setting?
Short answer: everyone. In more detail:
Acceptance of a paper means that the editorial board approves it for publication. Once the editorial board signs off on the paper, some combination of the author(s), the editors and the publishers must arrive at a mutually agreeable final draft.
In my experience, the role that the editors play here is highly variable: sometimes they work directly with the authors on the copyediting. E.g., the American Mathematical Monthly is incredibly picky (relative to other math journals, anyway) about copyediting issues, and they surprised me by withholding final acceptance of a recent paper of mine until I had (myself, under their very specific instructions) completed all the copyediting and formatting. And they seemed serious about this: even a change to the bibliography had to be submitted and uploaded as a separate revision. It should be evident that I was not completely happy that the acceptance of the paper was held over my head during a discussion of the copyediting, but that's one way to play it and I'm sure they have their reasons.
(The MONTHLY has, I believe, by far the highest circulation of any mathematics journal. It is published by the Mathematical Association of America, which is the more teaching-oriented of the two professional societies for mathematics in the US. On the other hand, there are three selective MAA journals, and of these the MONTHLY is by far the most "serious". Long story short: many roads lead to them, and they are forced to be very selective indeed in what they publish, although they select for different things than a top research journal.)
I had another experience in which the final editorial acceptance in a prestigious journal was made conditional on the submission of a new draft containing less "pompous language".
More typically, the copyediting and formatting is either left to the authors themselves or done by an employee of the publishing company (who in many cases does this for papers in multiple academic disciplines and thus cannot have high-level subject-area knowledge most of the time). In the end, both the authors and the publisher have the final say: both parties must approve the final draft in order for it to be published, and the documentation of this mutual approval is the publication contract.
Of course in practice this mutual approval is done in an asymmetric way between the parties: the publisher sends you a form in which everything has been spelled out in advance, in the pushy manner of big corporations everywhere. But if there are clauses in the contract that the authors have a problem with, they are certainly entitled to ask, and in my experience some minor "concessions" (i.e., changes to the boilerplate agreement) are often made by the publisher. A big part of the asymmetry is that the authors generally have a much larger stake in the publication of the paper than the publisher does, so insisting that one be able to refer to a paper in the bibliography by [Cl14] rather than [3] or one will take one's wares elsewhere looks like a strange arrangement of priorities, but if you really do feel strongly about it you are entitled to ask and who knows -- maybe you'll get your way. Asking them to mess with aspects of the typesetting that are part of the journal's standard style seems less kosher to me: one would reasonably expect the journal to want to keep its standard style, and if this was really important to you, you should probably have brought it up earlier.
Making sure that one really does send in the copyright form last of all is a good tip. I stumbled on this point recently when dealing with one of the world's largest scientific publishing companies. They kept doing something weird in the proofs, I kept pointing out their mistakes and though I took pains to indicate in every correspondence that I was not giving my final approval, after a few go-rounds they didn't get back to me, and eventually I noticed that the paper was published online...still with one strange typesetting mistake that was not in the version I sent to them. Next time I'll save the copyright form until the end.
A:
If a journal is willing to publish your submitted manuscript "as is", can you prevent them? Clearly if they want you to make changes you have the right to say no, but once the manuscript is accepted can you really withdraw it against the publisher's wish?
You can withdraw a manuscript at any time unless it is officially published (usually online). I cannot see that any journal could stop you from doing so, and signing copyright forms should not cause problems, since those forms usually concern the work done by the publisher to get the manuscript into publishable form: by this I mean copy-editing and type-setting, not generally the review process. Exactly what is covered by the agreement must be checked in each individual case (journal/publisher).
The fact that you can withdraw a manuscript does not mean it is necessarily a frictionless process. A case such as this falls outside legal terms and into the ethical realm, where you can do whatever you legally can or want, but it may not reflect very positively on you. To withdraw a manuscript from a journal that has put in a lot of effort, including, most likely, that of unpaid reviewers and scientific editors, with the excuse that you want to go for a higher-ranking journal seems at least morally wrong. You should have thought about that much earlier.
Further, who has the final say on copy edits and type setting?
The journal will likely have rules for how things should look and be expressed. You have the opportunity to agree or disagree with any changes the journal makes through its copy-editing and type-setting. However, when it comes to journal style, it supersedes your own views, and an editor also has the right to remove material that can be considered offensive, rude, or unethical in some way. The latter is to protect the journal's reputation. Hence, you cannot expect your view to be final in such extreme cases.
Despite what I have just said, there are of course overzealous editors who do not know where to draw the line. Because human interaction is also involved in publishing, everything may not happen as you expect, but such extreme circumstances are usually exceptions.
Phenotype and Metabolic Disorders in Polycystic Ovary Syndrome The polycystic ovary syndrome (PCOS) is one of the most frequent endocrinopathies in women. Its incidence is assessed at 6-8% of the female population of reproductive age. It is characterised by oligomenorrhea (Oligo), hyperandrogenism (HA), and the presence of polycystic ovaries (PCO). Carbohydrate and lipid metabolism is disturbed in many women with PCOS. The pathogenesis of PCOS is still unexplained. Following the main criteria of diagnosis (Rotterdam Consensus 2003), Dewailly, Welt, and Pehlivanov divided patients with PCOS into 4 phenotype groups: A, B, C, and D. In our study of 93 patients with PCOS, we found the most frequent appearance (60.2%) of phenotype A; an increased androstenedione concentration in the groups with HA (A, B, C); an increased HOMA-β index and insulin concentration at 30 min of an oral 75 g glucose tolerance test (OGTT) in the group of obese women with BMI > 30 kg/m2; and high levels of total testosterone, total cholesterol, and LDL cholesterol in group A with the classic phenotype of PCOS (Oligo + HA + PCO), increasing the risk of development of cardiovascular diseases, type 2 diabetes, or metabolic syndrome. The average androstenedione concentration could be a good diagnostic and prognostic parameter. Introduction Polycystic ovary syndrome (PCOS) is one of the most common female endocrine disorders, affecting approximately 6-8% of women of reproductive age. It is one of the main causes of infertility resulting from chronic anovulation. This syndrome was first described in 1935 by Stein and Leventhal, who observed menstrual disorders and polycystic ovaries (the "billiard ball" sign on ultrasound examination) in some patients.
The most common irregularities of PCOS include elevated serum levels of free testosterone (T), androstenedione, and dehydroepiandrosterone sulfate (DHEAS), an excessive amount of luteinizing hormone (LH), an elevated LH/FSH ratio, an increase in LH peak pulse frequency and its response to GnRH (gonadotropin-releasing hormone), and a change in LH pulse frequency. Insulin resistance, obesity, dyslipidemia, elevated laboratory findings associated with inflammation, high blood pressure, and an increased risk of cardiovascular diseases are common symptoms of PCOS. Although research is extensive, the pathogenesis remains uncertain. Criteria for Defining PCOS. In 1990, at a consensus workshop sponsored by the NIH/NICHD, PCOS was defined as menstrual disorders and hyperandrogenism after exclusion of other endocrine disorders such as hyperprolactinemia, thyroid gland disorders, and congenital adrenal hyperplasia. At the conference in Hamburg, additional diagnostic criteria for PCOS were added: acne, hirsutism, elevated blood levels of androgens, and increased insulin resistance. Today's definition of PCOS was established by a consensus workshop sponsored by ESHRE/ASRM in Rotterdam in May 2003. It includes menstrual disorders (oligoovulation/anovulation), clinical and/or biochemical evidence of hyperandrogenism, and polycystic ovaries on ultrasound examination (at least 10 follicles 2-9 mm in size or an ovarian volume greater than 10 mL). In 2005, Azziz introduced a modification of the NIH criteria for PCOS regarding androgen excess and ovarian dysfunction (irregularity or absence of menses accompanied by visualization of polycystic ovaries). Phenotype. Regarding the Rotterdam criteria for PCOS, both endocrine and clinical, we distinguish 4 different phenotypes of the syndrome, as shown in Table 1. In 2005, Chang et al. proposed another division of the phenotypes into 3 groups (see Table 2). 1.3. Objective.
The objective of the study was to define hormonal, biochemical, and metabolic abnormalities among women with PCOS arising from a group of 4 standard phenotypes and then to identify the coexistence of endocrine and biochemical abnormalities in 4 groups of patients who are at increased risk of metabolic diseases. Material and Methods We identified 93 women who met the current ESHRE/ASRM criteria for PCOS and evaluated their hormonal and biochemical profiles. The physical examination included measurements of waist and hip circumferences and calculation of the body mass index (BMI). We also evaluated the extent of hyperandrogenism (hirsutism, using the Ferriman-Gallwey hirsutism evaluation system; acne, using a subjective 0-10 scale) and used both the insulin resistance index HOMA-IR with the HOMA-β cell index (assessing beta-cell function) and the QUICK (Quantitative Insulin Sensitivity Check) index, regarding insulin resistance. These indexes are computed from the fasting levels of glucose (G0) and insulin (I0). The presented results were prepared by classifying the women into four groups (Rotterdam Study) according to their phenotype (A, B, C, D). The obtained results were analyzed in terms of these 4 groups and additionally in terms of the three basic criteria: menstrual disorders (oligoovulation/anovulation), clinical and/or biochemical evidence of hyperandrogenism, and polycystic ovaries on ultrasound examination. In each patient, we determined the serum levels of high-sensitivity C-reactive protein (hsCRP), androgens (testosterone, androstenedione, dehydroepiandrosterone sulfate), estradiol (E2), 17-hydroxyprogesterone (17-HOP), LH, FSH (follicle-stimulating hormone), sex hormone binding globulin (SHBG), lipids, aminotransferases, and glucose and insulin fasting and after ingestion of 75 g glucose (measurements at the 30th, 60th, and 120th min after 75 g glucose ingestion).
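The formulas for these indexes appear to have been dropped during extraction. For reference, the standard published definitions are given below; this is an assumption, since the study may use slightly different variants or units. Here I0 is fasting insulin in µU/mL, G0 is fasting glucose in mmol/L for HOMA (mg/dL in the usual QUICKI convention, which the text calls the QUICK index):

```latex
\mathrm{HOMA\text{-}IR} = \frac{I_0 \times G_0}{22.5}, \qquad
\mathrm{HOMA\text{-}\beta} = \frac{20 \times I_0}{G_0 - 3.5}, \qquad
\mathrm{QUICKI} = \frac{1}{\log_{10} I_0 + \log_{10} G_0}
```

Note that HOMA-IR rises with insulin resistance while QUICKI falls, which matches the opposite tendencies of the two indexes reported later in the Discussion.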
An ultrasound examination of the reproductive system, in particular the ovaries, was performed on each patient. The examinations were carried out during the first phase of the menstrual cycle (1st-10th day of the cycle) in all women with PCOS, except for the patients who had been taking medications for any reason during the last 3 months. In the recruiting process, we had to exclude patients with coexisting thyroid gland disorders, diabetes mellitus, ovarian and adrenal tumors, congenital adrenal hyperplasia, and other important pathologies. We determined the serum levels of thyroid-stimulating hormone (TSH), adrenocorticotropic hormone (ACTH), 17-hydroxyprogesterone (17-HOP), and metoclopramide-stimulated prolactin (PRL). We also performed ultrasound examinations of both the thyroid gland and the abdominal cavity, chest radiography, and collection of 24-hour urine steroid derivatives, 17-hydroxycorticosteroids (17-OHCS) and 17-ketosteroids (17-KS), in terms of normal production and suppression with a small amount of Dexamethasone (4 × 0.5 mg) for two days. Diagnostic criteria for diabetes mellitus were based on the fasting serum glucose level and the serum glucose level in the oral glucose tolerance test (OGTT) after a 75 g glucose load. A serum level of 17-OHP above 5 mg/dL was highly suspicious for congenital adrenal hyperplasia. The tests were repeated for results above 3.5 mg/dL. The diagnosis of thyroid gland disorders was established by determining TSH, free thyroxine (FT4), and antithyroid autoantibodies. The diagnosis of hirsutism was established by gathering at least six points using the Ferriman-Gallwey hirsutism evaluation system. Acne was diagnosed in those patients who received four or more points using the subjective 0-10 scale, in which we assessed the face, back, and other body parts (max. 20 points). Blood was collected from the fasting patients in the morning, between the 1st and 10th day of their menstrual cycle.
The hormonal blood tests were performed using both radioimmunoassay (RIA) and IRMA methods and an immunochemiluminescent method with the Immulite 2000 (DPC, Los Angeles, CA) device at the Endocrine Clinic of the Medical Centre of Postgraduate Education (CMKP). Normality of the analyzed variables was tested with the Shapiro-Wilk test. Comparisons between groups were performed by ANOVA or Kruskal-Wallis tests. To compare two independent groups of observations, the Mann-Whitney U test was used. The significance level alpha was chosen at 0.05. The statistical analysis was performed using Statistica 9.0 PL. The prevalence of PCOS in the three phenotypical groups in our trial was comparable to the results presented by Chang and Hassa et al. The most common patients were those who met all three diagnostic criteria for PCOS, although there were some differences in the percentages among the cited authors (group I: 69.7% and 84.85%). The differences could have resulted from the numerical differences among the test populations and the diagnostic criterion for hyperandrogenism affecting the prevalence of acne. The presented trial did not show statistically significant differences in patients' average age, weight, height, circumference of waist and hips, waist-to-hip ratio, and body mass index (P > 0.05) among the four described phenotypical groups (A, B, C, D). However, there were statistically significant differences regarding the average ovary volume, prevalence of hirsutism, acne, and the length of the menstrual cycle (P < 0.05), which was in compliance with the recruitment and classification of the individuals into the specific phenotypical groups (Table 4). The highest serum levels of testosterone and dehydroepiandrosterone sulfate (DHEAS) were observed in women who presented clinical evidence of hyperandrogenism (groups A, B, and C) (Table 5).
Analyzing each group with hyperandrogenism alone, the levels of serum testosterone were statistically higher (P < 0.04) in group A compared to group D. Despite the observed higher levels of serum testosterone in groups B and C, the B/D comparison was not statistically significant. However, statistically higher levels (P < 0.05) of serum testosterone and DHEAS were observed in group C in comparison to the group without signs of hyperandrogenism (group D). The levels of androstenedione were very clearly elevated (P < 0.004) in all three groups with hyperandrogenism (A, B, C) when compared to group D, without the signs of hyperandrogenism. Similarly, when we compared each group with hyperandrogenism, the levels of androstenedione were statistically significantly higher (A/D P < 0.005; B/D P < 0.013; C/D P < 0.004). The highest levels of androstenedione were present in phenotypical group B (mean 525.17 ± 146.08 ng/dL), which presents no morphological signs of polycystic ovaries on ultrasound examination; the group B androstenedione levels were also significantly higher than in group C (P < 0.05). Similarly to androstenedione, the levels of 17-HOP were significantly higher (P < 0.03) in all three groups with hyperandrogenism (A, B, C) when compared to group D (Table 5). A resembling statistical correlation was observed when the levels of 17-HOP in groups A and D were compared (P < 0.02). Higher levels of LH and estradiol (E2) were observed in group A in comparison to group C (without menstrual disorders), and the average values of these parameters differed with statistical significance (P < 0.05). The highest average levels of LH and E2 were detected in group B (without morphological signs of PCOS on ultrasound examination), although these results were not significant, probably because of the greater variability of these parameters in both groups.
Elevated values of the LH/FSH ratio were noticed among women in groups A and C when compared to group D (without hyperandrogenism). Because of the small number of women in group D, this observation was not statistically proven and has been left as an object of further investigation. The greatest levels of 17-KS in urine (steroid metabolites) were measured in group C, without menstrual disorders (17.28 ± 4.2 mg/dL; norm: 4.2-16.3 mg/dL). The difference between the levels of 17-KS in urine in group C and groups A, B, and D was statistically significant (P < 0.05). Higher levels of 17-OHCS in urine were observed in groups B and D, in which obese women (BMI 30 kg/m2 and greater) predominated (P < 0.003), in comparison to groups A and C (in which there was a slight majority of slim women; BMI < 27 kg/m2). In the observed population, the lowest levels of insulin in the 30th minute after a 75 g glucose load (OGTT) were seen among women in group A, and the levels were statistically lower (P < 0.001) in comparison to the other three groups (B, C, and D), where there was the biggest percentage of obese women (BMI > 30 kg/m2). Similar results were noticed regarding fasting insulin, but they could not be confirmed statistically (Table 6). The highest levels of insulin in the 30th minute (94.44 ± 50.27 µU/mL) were present in groups B and D, with the biggest percentage of obese women (P < 0.0005), when compared to groups A and C (51.66 ± 25.84 µU/mL). The HOMA-β index that assesses beta-cell function was statistically higher in groups B and D in comparison to group A (P < 0.05) and groups A + C (P < 0.01). However, the HOMA-IR and QUICK indexes showed no statistical significance.
Discussion The presented division into four standard phenotypical groups (ESHRE/ASRM) significantly emphasizes the existing differences in PCOS, which is not a homogeneous disorder but the union of three coexisting elements: menstrual disorder, hyperandrogenism, and presence of polycystic ovaries. Moreover, we also observed disorders in lipid and carbohydrate metabolism. Our analysis of 93 women with PCOS showed that the elevated levels of insulin in the 30th minute after a 75 g glucose load in the group Oligo + HA (B), in comparison to the average 30th-minute insulin level in the groups of women who present polycystic ovaries on ultrasound examination (A, C, and D), may be related to the slightly increased prevalence of obesity and hypertriglyceridemia among these individuals. Similar observations were noticed regarding the beta-cell function index (HOMA-β), which reached the highest levels in groups B and D when compared to groups A and A + C, which seems to be connected with the prevalence of obesity. The HOMA-IR and QUICK indexes, regarding insulin resistance, showed no statistical significance in the analyzed groups, although there was some upward tendency of the HOMA-IR index accompanied by a decrease of the QUICK index in groups B and D, with the elevated number of obese women. These data require further investigation. In the analyzed groups, there were no statistically significant differences among the presented phenotypical groups B, C, and D in total cholesterol, HDL cholesterol, LDL cholesterol, or triglycerides, although the levels of total cholesterol and LDL cholesterol were commonly higher than normal, especially in phenotypical group A (21 of 56 individuals), which is suggestive of intensified dysregulation and an increased risk of cardiovascular and metabolic diseases among women with the complete phenotype of PCOS.
The greatest number of abnormalities in phenotypic group A was also seen by other authors. Androstenedione levels were clearly elevated in all three groups with hyperandrogenism (A, B, C) when compared with group D, which lacked signs of hyperandrogenism. When the particular hyperandrogenic groups were compared with group D, the androstenedione levels were statistically significantly higher. Measurement of androstenedione levels therefore seems to be a crucial diagnostic and predictive factor among women with menstrual disorders or polycystic ovaries on ultrasound examination. The elevated serum androstenedione levels in group B, with no signs of polycystic ovaries on ultrasound examination, in comparison with phenotypic group C, require further investigation. The higher-than-normal urinary 17-KS levels in group C (without menstrual disorders) could indicate an increased role of the ovaries in androgen production in this group, compared with the groups with such disorders (A, B, D); nevertheless, further studies are necessary. The highest urinary 17-OHCS levels were observed in groups B and D (in which obese women with BMI > 30 kg/m² predominated) when compared with groups A and C. This might indicate a greater role of the adrenal gland in androgen production among obese women and an increased risk of insulin resistance in obese individuals with PCOS.

Conclusions

In the performed study of 93 women with PCOS, the following correlations were determined. The major prevalence of phenotypic group A was confirmed, which is similar to the results presented by other authors. The increased level of androstenedione strongly correlates with the clinical degree of hyperandrogenism; it appears that androstenedione could be a crucial diagnostic and predictive factor among women with menstrual disorders or the presence of polycystic ovaries on ultrasound examination.
Elevated levels of the indexes that correlate with the degree of insulin resistance, such as the beta-cell function index (HOMA-β) or the insulin level at the 30th minute after a 75 g glucose load, are met more often among obese women with BMI > 30 kg/m² than in slim individuals without coexisting hyperandrogenism or polycystic ovaries on ultrasound examination. In phenotypic group A, elevated levels of total testosterone and androstenedione and significantly higher levels of total cholesterol and LDL cholesterol were noticed, which may point to an increased risk of cardiovascular and metabolic diseases. Because of the small number of women in the investigated group and the short duration of the trial, this thesis needs to be the subject of further studies. The presented division into four phenotypic groups and the observed correlations contribute to a greater understanding of the core of the polycystic ovary syndrome and to better recognition of its pathogenesis.
/// <reference path="../../typings/tsd.d.ts"/>
/**
 * This is a helper class to build a Bootstrap multiselect with support for custom awesome checkboxes
*
* https://github.com/davidstutz/bootstrap-multiselect/issues/576
*/
class awesomeMultiselect {
// used for generating unique label/ids
static instanceNumber = 1;
/**
* Build dropdown
* @param object
*/
static build(object: JQuery, customizeOptions?: (opts: any) => void): void {
const opts = {
templates: {
li: '<li><div class="checkbox"><label></label></div></li>'
}
};
if (customizeOptions) {
customizeOptions(opts);
}
object.multiselect(opts);
if (object.data('instanceData')) {
throw new Error("Object was already initialized. Use rebuild instead.");
}
object.data('instanceData', awesomeMultiselect.instanceNumber);
awesomeMultiselect.instanceNumber++;
awesomeMultiselect.fixAwesomeCheckboxes(object);
}
/**
* Update multiselect
* @param object
*/
static rebuild(object: JQuery): void {
if (!object.data('instanceData')) {
throw new Error("Please initialize multiselect using awesomeMultiselect.build before calling rebuild");
}
object.multiselect('rebuild');
awesomeMultiselect.fixAwesomeCheckboxes(object);
}
private static fixAwesomeCheckboxes(object: JQuery) {
var instanceId = <number> object.data('instanceData');
$('.multiselect-container .checkbox', object.parent()).each(function (index) {
const $self = $(this);
const id = 'multiselect-' + instanceId + "-" + index;
const $input = $self.find('input');
const $label = $self.find('label');
// check if DOM was already modified
if (!$label.attr('for')) {
$input.detach();
$input.prependTo($self);
$self.click(e => e.stopPropagation());
}
$label.attr('for', id);
$input.attr('id', id);
});
}
}
export = awesomeMultiselect;
package ipfs
import (
"context"
"encoding/json"
"net/http"
"strings"
"github.com/hashicorp/vault/logical"
"github.com/hashicorp/vault/logical/framework"
ipfs "github.com/ipfs/go-ipfs-api"
)
func (b *backend) statusPaths() []*framework.Path {
return []*framework.Path{
// The order of these paths matters: more specific ones need to be near
// the top, so that path matching does not short-circuit.
{
Pattern: "status",
HelpSynopsis: "Return the IPFS backend's status",
Callbacks: map[logical.Operation]framework.OperationFunc{
logical.ReadOperation: b.pathStatusGet,
},
},
{
Pattern: "status/peers",
HelpSynopsis: "Return the IPFS backend node's peer infos",
Callbacks: map[logical.Operation]framework.OperationFunc{
logical.ReadOperation: b.pathStatusPeersRead,
logical.ListOperation: b.pathStatusPeersList,
},
},
{
Pattern: "status/peers/",
HelpSynopsis: "Return the IPFS backend node's peer list",
Callbacks: map[logical.Operation]framework.OperationFunc{
logical.ListOperation: b.pathStatusPeersList,
},
},
}
}
type Status struct {
Peers int `json:"peers"`
}
type StatusPeers struct {
Peers *ipfs.SwarmConnInfos
}
func (b *backend) pathStatusGet(ctx context.Context, req *logical.Request, d *framework.FieldData) (*logical.Response, error) {
if err := validateFields(req, d); err != nil {
return nil, logical.CodedError(http.StatusUnprocessableEntity, err.Error())
}
sh := ipfs.NewShell(ipfsAddr)
peers, err := sh.SwarmPeers(ctx)
if err != nil {
return nil, logical.CodedError(http.StatusNotFound, err.Error())
}
object := &Status{
Peers: len(peers.Peers),
}
var data map[string]interface{}
jsonBytes, err := json.Marshal(object)
if err != nil {
return nil, logical.CodedError(http.StatusInternalServerError, err.Error())
}
if err := json.Unmarshal(jsonBytes, &data); err != nil {
return nil, logical.CodedError(http.StatusInternalServerError, err.Error())
}
return &logical.Response{
Data: data,
}, nil
}
func (b *backend) pathStatusPeersRead(ctx context.Context, req *logical.Request, d *framework.FieldData) (*logical.Response, error) {
if err := validateFields(req, d); err != nil {
return nil, logical.CodedError(http.StatusUnprocessableEntity, err.Error())
}
sh := ipfs.NewShell(ipfsAddr)
peers, err := sh.SwarmPeers(ctx)
if err != nil {
return nil, logical.CodedError(http.StatusNotFound, err.Error())
}
object := StatusPeers{
Peers: peers,
}
var data map[string]interface{}
jsonBytes, err := json.Marshal(object)
if err != nil {
return nil, logical.CodedError(http.StatusInternalServerError, err.Error())
}
if err := json.Unmarshal(jsonBytes, &data); err != nil {
return nil, logical.CodedError(http.StatusInternalServerError, err.Error())
}
return &logical.Response{
Data: data,
}, nil
}
func (b *backend) pathStatusPeersList(ctx context.Context, req *logical.Request, d *framework.FieldData) (*logical.Response, error) {
if err := validateFields(req, d); err != nil {
return nil, logical.CodedError(http.StatusUnprocessableEntity, err.Error())
}
sh := ipfs.NewShell(ipfsAddr)
peers, err := sh.SwarmPeers(ctx)
if err != nil {
return nil, logical.CodedError(http.StatusNotFound, err.Error())
}
// Restructure SwarmConnInfos to strings
peersList := make([]string, 0, len(peers.Peers))
for _, peer := range peers.Peers {
infos := []string{
peer.Addr,
peer.Peer,
}
peersList = append(peersList, strings.Join(infos, "/ipfs/"))
}
return logical.ListResponse(peersList), nil
}
/* Save real delta-encoded array operator in DICT. */
static void saveRealDeltaOp(DICT *dict, int cnt, float *array, int op) {
int i;
for (i = cnt - 1; i > 0; i--) {
array[i] -= array[i - 1];
}
saveRealArrayOp(dict, cnt, array, op);
}
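The loop in saveRealDeltaOp rewrites the array in place so that each element after the first holds its difference from its predecessor before saveRealArrayOp writes it out. The transform and its inverse can be sketched standalone (Python used for brevity; this illustrates only the delta arithmetic, not the surrounding DICT encoding):

```python
def delta_encode(values):
    """Replace each element (except the first) with its difference from
    the predecessor, mirroring saveRealDeltaOp's backwards in-place loop."""
    out = list(values)
    for i in range(len(out) - 1, 0, -1):  # walk backwards, as the C code does
        out[i] -= out[i - 1]
    return out

def delta_decode(deltas):
    """Inverse transform: running prefix sums restore the original values."""
    out = list(deltas)
    for i in range(1, len(out)):
        out[i] += out[i - 1]
    return out

vals = [10.0, 12.5, 12.5, 20.0]
enc = delta_encode(vals)          # [10.0, 2.5, 0.0, 7.5]
assert delta_decode(enc) == vals  # round-trips exactly for these values
```

Walking backwards matters: iterating forwards would subtract already-modified predecessors and corrupt the result, which is why the C loop counts `i` down from `cnt - 1`.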
//
// Generated by classdumpios 1.0.1 (64 bit) (iOS port by DreamDevLost)(Debug version compiled Sep 26 2020 13:48:20).
//
// Copyright (C) 1997-2019 <NAME>.
//
#import <objc/NSObject.h>
#import "CNXPCDataMapperService-Protocol.h"
@class CNAccessAuthorization, CNContactStore, CNContactsEnvironment, CNiOSAddressBookDataMapper, NSString, NSXPCConnection;
@protocol CNContactsLogger, CNScheduler;
@interface ContactsService : NSObject <CNXPCDataMapperService>
{
CNContactStore *_contactStore; // 8 = 0x8
CNiOSAddressBookDataMapper *_dataMapper; // 16 = 0x10
NSXPCConnection *_connection; // 24 = 0x18
id <CNScheduler> _workQueue; // 32 = 0x20
id <CNContactsLogger> _logger; // 40 = 0x28
CNContactsEnvironment *_environment; // 48 = 0x30
CNAccessAuthorization *_accessAuthorization; // 56 = 0x38
}
- (void).cxx_destruct; // IMP=0x000000010000abec
@property(readonly, nonatomic) CNAccessAuthorization *accessAuthorization; // @synthesize accessAuthorization=_accessAuthorization;
@property(readonly, nonatomic) CNContactsEnvironment *environment; // @synthesize environment=_environment;
@property(readonly, nonatomic) id <CNContactsLogger> logger; // @synthesize logger=_logger;
@property(readonly, nonatomic) id <CNScheduler> workQueue; // @synthesize workQueue=_workQueue;
@property(readonly, nonatomic) __weak NSXPCConnection *connection; // @synthesize connection=_connection;
@property(readonly, nonatomic) CNiOSAddressBookDataMapper *dataMapper; // @synthesize dataMapper=_dataMapper;
@property(readonly, nonatomic) CNContactStore *contactStore; // @synthesize contactStore=_contactStore;
- (void)authorizedKeysForContactKeys:(id)arg1 withReply:(CDUnknownBlockType)arg2; // IMP=0x000000010000aaf8
- (void)verifyIndexWithReply:(CDUnknownBlockType)arg1; // IMP=0x000000010000aa78
- (void)reindexSearchableItemsWithIdentifiers:(id)arg1 withReply:(CDUnknownBlockType)arg2; // IMP=0x000000010000aa6c
- (void)writeFavoritesPropertyListData:(id)arg1 toPath:(id)arg2 withReply:(CDUnknownBlockType)arg3; // IMP=0x000000010000a650
- (void)favoritesEntryDictionariesAtPath:(id)arg1 withReply:(CDUnknownBlockType)arg2; // IMP=0x000000010000a330
- (_Bool)shouldNotReportFavoritesError:(id)arg1; // IMP=0x000000010000a1d0
- (void)reportFavoritesIssue:(id)arg1; // IMP=0x000000010000a168
- (void)currentHistoryAnchorWithReply:(CDUnknownBlockType)arg1; // IMP=0x0000000100009f48
- (void)currentHistoryTokenWithReply:(CDUnknownBlockType)arg1; // IMP=0x0000000100009d3c
- (void)executeChangeHistoryClearRequest:(id)arg1 withReply:(CDUnknownBlockType)arg2; // IMP=0x0000000100009af4
- (void)changeHistoryWithFetchRequest:(id)arg1 withReply:(CDUnknownBlockType)arg2; // IMP=0x00000001000098a8
- (void)unregisterChangeHistoryClientIdentifier:(id)arg1 forContainerIdentifier:(id)arg2 withReply:(CDUnknownBlockType)arg3; // IMP=0x0000000100009614
- (void)registerChangeHistoryClientIdentifier:(id)arg1 forContainerIdentifier:(id)arg2 withReply:(CDUnknownBlockType)arg3; // IMP=0x0000000100009380
- (void)userActivityForContact:(id)arg1 withReply:(CDUnknownBlockType)arg2; // IMP=0x0000000100009080
- (void)contactWithUserActivityUserInfo:(id)arg1 keysToFetch:(id)arg2 withReply:(CDUnknownBlockType)arg3; // IMP=0x0000000100008d24
- (void)setBestMeIfNeededForGivenName:(id)arg1 familyName:(id)arg2 email:(id)arg3 withReply:(CDUnknownBlockType)arg4; // IMP=0x00000001000089b4
- (void)setMeContact:(id)arg1 forContainer:(id)arg2 withReply:(CDUnknownBlockType)arg3; // IMP=0x00000001000086dc
- (void)setMeContact:(id)arg1 withReply:(CDUnknownBlockType)arg2; // IMP=0x0000000100008524
- (void)setDefaultAccountIdentifier:(id)arg1 withReply:(CDUnknownBlockType)arg2; // IMP=0x00000001000082fc
- (void)defaultContainerIdentifierWithReply:(CDUnknownBlockType)arg1; // IMP=0x00000001000081f0
- (void)subgroupsOfGroupWithIdentifier:(id)arg1 withReply:(CDUnknownBlockType)arg2; // IMP=0x0000000100008080
- (void)groupsMatchingPredicate:(id)arg1 withReply:(CDUnknownBlockType)arg2; // IMP=0x0000000100007f10
- (void)accountsMatchingPredicate:(id)arg1 withReply:(CDUnknownBlockType)arg2; // IMP=0x0000000100007da0
- (void)policyForContainerWithIdentifier:(id)arg1 withReply:(CDUnknownBlockType)arg2; // IMP=0x0000000100007c30
- (void)serverSearchContainersMatchingPredicate:(id)arg1 withReply:(CDUnknownBlockType)arg2; // IMP=0x0000000100007ac0
- (void)containersMatchingPredicate:(id)arg1 withReply:(CDUnknownBlockType)arg2; // IMP=0x0000000100007950
- (void)executeSaveRequest:(id)arg1 withReply:(CDUnknownBlockType)arg2; // IMP=0x0000000100007750
- (void)meContactIdentifiersWithReply:(CDUnknownBlockType)arg1; // IMP=0x0000000100007554
- (void)identifierWithReply:(CDUnknownBlockType)arg1; // IMP=0x0000000100007420
- (void)progressiveContactsForFetchRequest:(id)arg1 progressHandler:(id)arg2 reply:(CDUnknownBlockType)arg3; // IMP=0x0000000100007058
- (void)encodedContactsAndCursorForFetchRequest:(id)arg1 withReply:(CDUnknownBlockType)arg2; // IMP=0x0000000100006bf0
- (void)contactsForFetchRequest:(id)arg1 withMatchInfoReply:(CDUnknownBlockType)arg2; // IMP=0x000000010000684c
- (void)sectionListOffsetsForSortOrder:(long long)arg1 reply:(CDUnknownBlockType)arg2; // IMP=0x0000000100006620
- (void)contactCountForFetchRequest:(id)arg1 withReply:(CDUnknownBlockType)arg2; // IMP=0x0000000100006368
- (void)unifiedContactCountWithReply:(CDUnknownBlockType)arg1; // IMP=0x0000000100006200
- (void)performWorkServicingSPI:(CDUnknownBlockType)arg1 authenticationFailureHandler:(CDUnknownBlockType)arg2; // IMP=0x0000000100006154
- (void)performServicingRequestWork:(CDUnknownBlockType)arg1; // IMP=0x0000000100005d30
- (void)performWorkWithContactStore:(CDUnknownBlockType)arg1; // IMP=0x0000000100005c18
- (void)performAsyncWorkWithDataMapper:(CDUnknownBlockType)arg1; // IMP=0x0000000100005abc
- (void)performWorkWithDataMapper:(CDUnknownBlockType)arg1; // IMP=0x0000000100005998
- (_Bool)clientAllowedToUseSPI:(id *)arg1; // IMP=0x0000000100005638
- (void)configureServiceWithOptions:(id)arg1; // IMP=0x000000010000521c
- (id)initWithDataMapper:(id)arg1 workQueue:(id)arg2 environment:(id)arg3 connection:(id)arg4 accessAuthorization:(id)arg5; // IMP=0x0000000100005074
- (id)initWithWorkQueue:(id)arg1 connection:(id)arg2; // IMP=0x0000000100004ed4
// Remaining properties
@property(readonly, copy) NSString *debugDescription;
@property(readonly, copy) NSString *description;
@property(readonly) unsigned long long hash;
@property(readonly) Class superclass;
@end
1. Field of the Invention
The present invention is concerned with an elasticized material, a method of making the same and articles made therefrom. More particularly, the present invention is concerned with a composite elastic material comprising at least one elastic web, such as a nonwoven web of elastomeric fibers, bonded to one or more webs of gatherable material, such as one or more webs of a nonwoven, non-elastic material.
2. Description of the Related Art
Composite fabrics comprising at least one layer of nonwoven textile fabric mechanically secured to an elastic layer are known. For example, U.S. Pat. No. 4,446,189 discloses textile laminate materials comprising an inner layer of elastic material, such as a polyurethane foam of a thickness of about 0.025 inches, needle punched at a plurality of locations to a nonwoven textile fabric layer. The needle punched superposed layers are then stretched within the elastic limits of the elastic layer to permanently stretch the nonwoven fabric layer material needle punched thereto. When the elastic layer is allowed to relax and return to substantially its condition prior to being stretched, the nonwoven fabric layer is stated to exhibit increased bulk by virtue of the relaxation of its permanently stretched fibers.
U.S. Pat. No. 4,209,563 discloses a method of making an elastic material which includes continuously forwarding relatively elastomeric fibers and elongatable but relatively non-elastic fibers onto a forming surface and bonding at least some of the fiber crossings to form a coherent cloth which is subsequently mechanically worked, as by stretching, following which it is allowed to relax. As described by the patentee at column 8, line 19 et seq, the elastic modulus of the cloth is substantially reduced after the stretching, resulting in the permanently stretched non-elastic filaments relaxing and looping to increase the bulk and improve the feel of the fabric (column 9, lines 9-14 and FIG. 3). Forwarding of the filaments to the forming surface is positively controlled, which the patentee (column 7, line 19 et seq) contrasts to the use of air streams to convey the fibers as used in meltblowing operations. Bonding of the filaments to form the coherent cloth may utilize embossing patterns or smooth, heated roll nips, as set forth at column 9, line 44 et seq.
U.S. Pat. No. 3,316,136 discloses a composite fabric comprising a layer of an elastic or resilient material and an overlaying layer of fabric, for example, a woven fabric. The elastic fabric may be a polyurethane foam or a nylon woven to impart stretchability or the like and, as is disclosed in the paragraph bridging columns 1 and 2 of the patent, an adhesive may be applied in a predetermined pattern to the elastic material which is then stretched, and while in a stretched or elongated state, the overlying fabric is contacted therewith and held in pressure engagement for a time sufficient to ensure adhesion of the two layers. When the applied adhesive is dry, tension on the backing material is released causing the overlying non-elastic fabric to gather in the areas outlined by the adhesive.
U.S. Pat. No. 3,687,797 discloses the manufacture of a resilient cellulosic wadding product attained by laminating paper and a prestretched polyurethane foam material. An adhesive is applied in a desired pattern as illustrated in the drawings and the paper is laminated to either side of the prestretched polyurethane foam material. The paper layers may be wet to reduce their resistance to being compressed by retraction of the prestretched polyurethane foam after lamination of the paper layers thereto, thereby providing a creped effect as illustrated in FIGS. 3 and 4 of the patent.
U.S. Pat. No. 2,957,512 concerns a method of producing elastic composite sheet materials and discloses that a reticulated, fibrous web formed of an elastomeric material such as rubber, including butadiene-styrene copolymers, may be utilized as the elastic ply of a composite material, as disclosed at column 3, lines 18-24. At column 5, lines 39-48, the patent discloses, with reference to FIG. 7 of the drawings, that a relaxed sheet material ply may have a fibrous web of elastomeric material of smaller area than the sheet material stretched so as to conform it in area to the area of the sheet material and the plies bonded together at spaced points or areas. Upon allowing the fibrous elastomeric ply to relax, the composite body is stated to assume the structure shown in FIG. 7, which is described at column 5, line 15 et seq as showing a fibrous web of elastomeric material 50 bonded at spaced areas or lines 56 to a ply 55 of a creped or corrugated flexible sheet material, which may be paper or a synthetic resin material. The structures of the patented invention are stated to be particularly well suited for the manufacture of foundation garments, bathing garments, elastic stockings, ankle braces, belts, garters, galluses and the like.
U.S. Pat. No. 4,426,420 discloses hydraulically entangled spunlaced fabrics and a method of making them which includes (see the Example, column 6) drawing a potentially elastomeric fiber, and allowing it to relax between the draw and wind-up steps. |
Population genetics of Drosophila amylase. II. Geographic patterns in D. pseudoobscura. Morph frequencies of three related polymorphisms were determined in ten natural populations of Drosophila pseudoobscura. They are the well-known inversion polymorphism of the third chromosome and the polymorphism for alpha-amylase produced by the structural gene Amy (which resides on the third chromosome). The third polymorphism was for tissue-specific expression of Amy in adult midguts; a total of 13 different patterns of activity have been observed. The preceding paper (Powell and Lichtenfels 1979) reports evidence that the variation in Amy expression is under polygenic control. Here we show that the polymorphism for midgut patterns occurs in natural populations and is not an artifact of laboratory rearing.--From population to population, Amy allele frequencies and frequencies of inversions belonging to different phylads vary coordinately. The geographic variation in alpha-amylase midgut activity patterns is uncorrelated with that for the other two types of polymorphisms. Furthermore, no correlation was detected between activity pattern(s) and Amy genotype(s) when both were assayed in the same individual.--These results imply that whatever the evolutionary-ecological forces are that control frequencies of the structural gene variants, they are not the same factors that control the frequencies of polymorphic genetic factors responsible for the tissue-specific expression of the enzyme. |
It is typical for a provider of multimedia content to provide multimedia content to a digital television (DTV) cable headend facility. Such a headend facility is a control center of a DTV cable system, where incoming signals are amplified, converted, processed, and combined into a common cable for transmission to customers.
Indeed, some standards have been developed for a multimedia content provider to effectively interface with a headend facility for each multimedia asset (e.g., a movie) that it receives. For example, such standards have been developed for providers of video-on-demand (VOD) and subscription VOD assets to interface with headend facilities.
In particular, an organization called CABLELABS™ has developed an Asset Definition Interface (ADI) for VOD and subscription VOD. CABLELABS (Cable Television Laboratories, Inc.) is a non-profit research and development consortium of the cable television industry.
Its VOD ADI defines a standard interface that allows multimedia content providers to communicate VOD assets and information about these VOD assets into a cable headend facility. VOD assets include the multimedia content (e.g., movies). “Metadata” is included in the asset descriptor provided by the ADI. That includes content metadata, rights metadata, content identification, operational information, and business/pricing metadata.
Generally, metadata is data about data. Typically, metadata describes how, when, and by whom a particular set of data was collected, and how the data is formatted.
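As a purely hypothetical sketch of the distinction, the metadata categories named above (content, identification, rights, operational, business/pricing) might accompany an asset like this — the field names and values below are invented for illustration and are not taken from the ADI specification:

```python
# Hypothetical asset-metadata record; field names and values are illustrative only.
asset_metadata = {
    "content": {"title": "Example Movie", "runtime_minutes": 118},
    "identification": {"asset_id": "EXAMPLE-0001"},
    "rights": {"window_start": "2024-01-01", "window_end": "2024-06-30"},
    "operational": {"encoding": "MPEG-2", "bitrate_kbps": 3750},
    "business": {"suggested_price_usd": 4.99},
}

def describe(meta):
    """Metadata is data about data: this summarizes the asset
    without touching the multimedia content it describes."""
    return f'{meta["content"]["title"]} ({meta["content"]["runtime_minutes"]} min)'

print(describe(asset_metadata))  # Example Movie (118 min)
```

The point of a standard interface like the ADI is that a headend can consume such a descriptor uniformly, regardless of which content provider supplied the asset.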
However, no standard interface has been developed for providers of other types of assets to effectively and easily communicate to a cable headend facilities and then ultimately to the user of such a headend. |
Q:
Is it possible that some people can play one instrument but not another?
I took piano lessons for six-seven years, and became rather advanced despite having poor practice etiquette, even learning some of Bach's two-part inventions, but when I tried to learn guitar, I never really seemed to make much progress, and ended up quitting after failing to learn more than three chords.
Is it possible that a person may be able to play one instrument (like piano) but not another (like guitar)? Or, is it possible, in a case like mine, that poor practice didn't impede my ability to learn the piano but did affect my ability to learn the guitar?
A:
Depends on what you mean by "may be able". Different instruments, music styles, and practice material pose different hurdles and provide different motivation for different people. That's not specific to playing music but applies to any skill.
The less discipline you have, the more you depend on the upcoming hurdles and short-term rewards matching your current inclinations.
It is quite possible that, had you started on the guitar first, the piano would not have provided enough additional incentive for learning another instrument either.
A:
If you have the drive and dedication to get over the initial awkward and difficult learning curve then I don't see why you can't play any instrument you want. When I first started guitar at the age of 15 I played for probably about 3 weeks or so and then "quit" because I was getting so frustrated and felt like I'd never be able to get it. After about 3 months I went back to it and made up my mind that I'd stick with it and I did. Now 19+ years later, I'm still playing.
I will say that guitar is a much more difficult instrument to get started with initially than piano. Even a young child can play a C major chord on piano on the first try with ease when shown which keys to press. That is a totally different beast when you try to play a C major chord on guitar for the first time. Your fingers may not stretch far enough, your finger tips hurt from pressing the strings, the notes are muted or buzzing because you're not pressing hard enough, etc. It can take minutes of tweaking and readjusting just to get ONE lousy chord to sound half-way good. Yes, this happens to ALL new guitar players.
My main instrument is drums and I've encountered people who think they can't play drums because they can't do the coordination. But those people have never actually tried to learn it. Sure the coordination is very weird at first and trying to get your limbs to play something different isn't easy. Like trying to play straight quarter notes with the right hand on the hihat and then play a double on the bass drum with the right foot but the hand automatically follows the foot. Those two lock up together and won't separate. It's definitely weird and difficult in the beginning. But with practice you do get better at it.
The key thing to be aware of when first learning an instrument is that over time through repetition your body develops "muscle memory" and this is what allows you to play the instrument with ease. The initial hurdle is hard to get over because every chord is a struggle and it doesn't ever seem like it's going to get easier. But once you develop that muscle memory it becomes second nature like walking or riding a bike and you don't have to really think about how to play a C major, G minor, etc. You can then focus your attention on making music. That's when playing becomes less tedious and a whole lot more fun.
Did you take lessons on guitar also or did you try to learn on your own? Having a teacher show you correct technique right off the bat will likely help you progress faster and your muscles will learn the right way to play. I'm completely self taught and had played for several years with bad technique that I later discovered and had to correct...and that was extremely frustrating! lol
So in summary, if you can find a good guitar teacher that will be a great help to get you going in the right direction. But most importantly you have to stick with it and practice consistently. Over time you will get it. Everyone goes through those same problems you experienced, without exception! :) |
BSA-coated nanoparticles for improved SERS-based intracellular pH sensing. Local microenvironment pH sensing is one of the key parameters for the understanding of many biological processes. As a noninvasive and high sensitive technique, surface-enhanced Raman spectroscopy (SERS) has attracted considerable interest in the detection of the local pH of live cells. We herein develop a facile way to prepare Au-(4-MPy)-BSA (AMB) pH nanosensor. The 4-MPy (4-mercaptopyridine) was used as the pH sensing molecule. The modification of the nanoparticles with BSA not only provides a high sensitive response to pH changes ranging from pH 4.0 to 9.0 but also exhibits a high sensitivity and good biocompatibility, stability, and reliability in various solutions (including the solutions of high ionic strength or with complex composition such as the cell culture medium), both in the aggregation state or after long-term storage. The AMB pH nanosensor shows great advantages for reliable intracellular pH analysis and has been successfully used to monitor the pH distribution of live cells and can address the grand challenges in SERS-based pH sensing for practical biological applications. |
Rhodococcus equi pneumonia in a heart transplant recipient in Korea, with emphasis on microbial diagnosis. Rhodococcus equi is an opportunistic pathogen that usually causes infection in immunocompromised hosts. A heart transplant recipient who had been treated with amphotericin B for pulmonary aspergillosis showed newly developed multiple nodules with a central necrotic area in the right lower lobes. Cultures of several blood samples and an aspirate of the lung nodule yielded a Gram-positive coccobacillary bacterium, which was initially reported as a Corynebacterium species, but was later identified as R. equi by API CORYNE (bioMerieux SA, Marcy l'Etoile, France) and by demonstrating the production of 'equi factor'. The identification was subsequently confirmed by an R. equi-specific polymerase chain reaction (PCR). The patient was successfully treated with ciprofloxacin and azithromycin for 14 weeks. This is the first documented case of R. equi infection in Korea. There is a possibility of underestimation of R. equi infections due to the misidentification of the organism as a contaminating diphtheroid. Because R. equi will not respond to the conventional empirical therapy, the microbiology laboratory should identify R. equi in a timely manner. R. equi-specific PCR will be a useful confirmatory test in human infection. |
#ifndef lint
static char Rcs_Id[] = "$Id: tree.c,v 1.56 1995/01/08 23:23:49 geoff Exp $";
#endif
/*
* tree.c - a hash style dictionary for user's personal words
*
* <NAME>, 1983
* Hash support added by <NAME>, 1987
*
* Copyright 1987, 1988, 1989, 1992, 1993, <NAME>, <NAME>, CA
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
*
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* 3. All modifications to the source code must be clearly marked as
* such. Binary redistributions based on modified source code
* must be clearly marked as modified versions in the documentation
* and/or other materials provided with the distribution.
* 4. All advertising materials mentioning features or use of this software
* must display the following acknowledgment:
* This product includes software developed by <NAME> and
* other unpaid contributors.
* 5. The name of <NAME> may not be used to endorse or promote
* products derived from this software without specific prior
* written permission.
*
* THIS SOFTWARE IS PROVIDED BY <NAME> AND CONTRIBUTORS ``AS IS'' AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL <NAME> OR CONTRIBUTORS BE LIABLE
* FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*/
/*
* $Log: tree.c,v $
* Revision 1.56 1995/01/08 23:23:49 geoff
* Support PDICTHOME for DOS purposes.
*
* Revision 1.55 1994/10/25 05:46:27 geoff
* Fix a comment that looked to some compilers like it might be nested.
*
* Revision 1.54 1994/01/25 07:12:15 geoff
* Get rid of all old RCS log lines in preparation for the 3.1 release.
*
*/
#include <ctype.h>
#include <errno.h>
#include "config.h"
#include "ispell.h"
#include "proto.h"
#include "msgs.h"
void treeinit P((char *p, char *LibDict));
static FILE *trydict P((char *dictname, char *home, char *prefix, char *suffix));
static void treeload P((FILE * dictf));
void treeinsert P((char *word, int wordlen, int keep));
static struct dent *tinsert P((struct dent * proto));
struct dent *treelookup P((ichar_t * word));
#if SORTPERSONAL != 0
static int pdictcmp P((struct dent * *enta, struct dent **entb));
#endif /* SORTPERSONAL != 0 */
void treeoutput P((void));
VOID *mymalloc P((unsigned int size));
void myfree P((VOID * ptr));
#ifdef REGEX_LOOKUP
char *do_regex_lookup P((char *expr, int whence));
#endif /* REGEX_LOOKUP */
static int cantexpand = 0; /* NZ if an expansion fails */
static struct dent *pershtab; /* Aux hash table for personal dict */
static int pershsize = 0; /* Space available in aux hash table */
static int hcount = 0; /* Number of items in hash table */
/*
* Hash table sizes. Prime is probably a good idea, though in truth I
* whipped the algorithm up on the spot rather than looking it up, so
* who knows what's really best? If we overflow the table, we just
* use a double-and-add-1 algorithm.
*/
static int goodsizes[] = { 53, 223, 907, 3631 };
static char personaldict[MAXPATHLEN];
static FILE *dictf;
static int newwords = 0;
void treeinit(p, LibDict)
char *p;       /* Value specified in -p switch */
char *LibDict; /* Root of default dict name */
{
int abspath; /* NZ if p is abs path name */
char *h; /* Home directory name */
char seconddict[MAXPATHLEN]; /* Name of secondary dict */
FILE *secondf; /* Access to second dict file */
/*
** If -p was not specified, try to get a default name from the
** environment. After this point, if p is null, then the value in
** personaldict is the only possible name for the personal dictionary.
** If p is non-null, then there is a possibility that we should
** prepend HOME to get the correct dictionary name.
*/
if(p == NULL)
p = getenv(PDICTVAR);
/*
** if p exists and begins with '/' we don't really need HOME,
** but it's not very likely that HOME isn't set anyway (on non-DOS
** systems).
*/
if((h = getenv(HOME)) == NULL)
{
#ifdef PDICTHOME
h = PDICTHOME;
#else /* PDICTHOME */
return;
#endif /* PDICTHOME */
}
if(p == NULL)
{
/*
* No -p and no PDICTVAR. We will use LibDict and DEFPAFF to
* figure out the name of the personal dictionary and where it
* is. The rules are as follows:
*
* (1) If there is a local dictionary and a HOME dictionary,
* both are loaded, but changes are saved in the local one.
* The dictionary to save changes in is named "personaldict".
* (2) Dictionaries named after the affix file take precedence
* over dictionaries with the default suffix (DEFPAFF).
* (3) Dictionaries named with the new default names
* (DEFPDICT/DEFPAFF) take precedence over the old ones
* (OLDPDICT/OLDPAFF).
* (4) Dictionaries aren't combined unless they follow the same
* naming scheme.
* (5) If no dictionary can be found, a new one is created in
* the home directory, named after DEFPDICT and the affix
* file.
*/
dictf = trydict(personaldict, (char *)NULL, DEFPDICT, LibDict);
secondf = trydict(seconddict, h, DEFPDICT, LibDict);
if(dictf == NULL && secondf == NULL)
{
dictf = trydict(personaldict, (char *)NULL, DEFPDICT, DEFPAFF);
secondf = trydict(seconddict, h, DEFPDICT, DEFPAFF);
}
if(dictf == NULL && secondf == NULL)
{
dictf = trydict(personaldict, (char *)NULL, OLDPDICT, LibDict);
secondf = trydict(seconddict, h, OLDPDICT, LibDict);
}
if(dictf == NULL && secondf == NULL)
{
dictf = trydict(personaldict, (char *)NULL, OLDPDICT, OLDPAFF);
secondf = trydict(seconddict, h, OLDPDICT, OLDPAFF);
}
if(personaldict[0] == '\0')
{
if(seconddict[0] != '\0')
(void)strcpy(personaldict, seconddict);
else
(void)sprintf(personaldict, "%s/%s%s", h, DEFPDICT, LibDict);
}
if(dictf != NULL)
{
treeload(dictf);
(void)fclose(dictf);
}
if(secondf != NULL)
{
treeload(secondf);
(void)fclose(secondf);
}
}
else
{
/*
** Figure out if p is an absolute path name. Note that beginning
** with "./" and "../" is considered an absolute path, since this
** still means we can't prepend HOME.
*/
abspath = (*p == '/' || strncmp(p, "./", 2) == 0 || strncmp(p, "../", 3) == 0);
if(abspath)
{
(void)strcpy(personaldict, p);
if((dictf = fopen(personaldict, "r")) != NULL)
{
treeload(dictf);
(void)fclose(dictf);
}
}
else
{
/*
** The user gave us a relative pathname. We will try it
** locally, and if that doesn't work, we'll try the home
** directory. If neither exists, it will be created in
** the home directory if words are added.
*/
(void)strcpy(personaldict, p);
if((dictf = fopen(personaldict, "r")) != NULL)
{
treeload(dictf);
(void)fclose(dictf);
}
else if(!abspath)
{
/* Try the home */
(void)sprintf(personaldict, "%s/%s", h, p);
if((dictf = fopen(personaldict, "r")) != NULL)
{
treeload(dictf);
(void)fclose(dictf);
}
}
/*
* If dictf is null, we couldn't open the dictionary
* specified in the -p switch. Complain.
*/
if(dictf == NULL)
{
(void)fprintf(stderr, CANT_OPEN, p);
perror("");
return;
}
}
}
if(!lflag && !aflag && access(personaldict, 2) < 0 && errno != ENOENT)
{
(void)fprintf(stderr, TREE_C_CANT_UPDATE, personaldict);
(void)sleep((unsigned)2);
}
}
/*
* Try to open a dictionary. As a side effect, leaves the dictionary
* name in "filename" if one is found, and leaves a null string there
* otherwise.
*/
static FILE *trydict(filename, home, prefix, suffix)
char *filename; /* Where to store the file name */
char *home;     /* Home directory */
char *prefix;   /* Prefix for dictionary */
char *suffix;   /* Suffix for dictionary */
{
FILE *dictf; /* Access to dictionary file */
if(home == NULL)
(void)sprintf(filename, "%s%s", prefix, suffix);
else
(void)sprintf(filename, "%s/%s%s", home, prefix, suffix);
dictf = fopen(filename, "r");
if(dictf == NULL)
filename[0] = '\0';
return dictf;
}
static void treeload(loadfile)
register FILE *loadfile; /* File to load words from */
{
char buf[BUFSIZ]; /* Buffer for reading pers dict */
while(fgets(buf, sizeof buf, loadfile) != NULL)
treeinsert(buf, sizeof buf, 1);
newwords = 0;
}
void treeinsert(word, wordlen, keep)
char *word;  /* Word to insert - must be canonical */
int wordlen; /* Length of the word buffer */
int keep;
{
register int i;
struct dent wordent;
register struct dent *dp;
struct dent *olddp;
#ifndef NO_CAPITALIZATION_SUPPORT
struct dent *newdp;
#endif
struct dent *oldhtab;
int oldhsize;
ichar_t nword[INPUTWORDLEN + MAXAFFIXLEN];
#ifndef NO_CAPITALIZATION_SUPPORT
int isvariant;
#endif
/*
* Expand hash table when it is MAXPCT % full.
*/
if(!cantexpand && (hcount * 100) / MAXPCT >= pershsize)
{
oldhsize = pershsize;
oldhtab = pershtab;
for(i = 0; i < sizeof goodsizes / sizeof(goodsizes[0]); i++)
{
if(goodsizes[i] > pershsize)
break;
}
if(i >= sizeof goodsizes / sizeof goodsizes[0])
pershsize += pershsize + 1;
else
pershsize = goodsizes[i];
pershtab = (struct dent *)calloc((unsigned)pershsize, sizeof(struct dent));
if(pershtab == NULL)
{
(void)fprintf(stderr, TREE_C_NO_SPACE);
/*
* Try to continue anyway, since our overflow
* algorithm can handle an overfull (100%+) table,
* and the malloc very likely failed because we
* already have such a huge table, so small mallocs
* for overflow entries will still work.
*/
if(oldhtab == NULL)
exit(1); /* No old table, can't go on */
(void)fprintf(stderr, TREE_C_TRY_ANYWAY);
cantexpand = 1; /* Suppress further messages */
pershsize = oldhsize; /* Put things back */
pershtab = oldhtab; /* ... */
newwords = 1; /* And pretend it worked */
}
else
{
/*
* Re-insert old entries into new table
*/
for(i = 0; i < oldhsize; i++)
{
dp = &oldhtab[i];
if(dp->flagfield & USED)
{
#ifdef NO_CAPITALIZATION_SUPPORT
(void)tinsert(dp);
#else
newdp = tinsert(dp);
isvariant = (dp->flagfield & MOREVARIANTS);
#endif
dp = dp->next;
#ifdef NO_CAPITALIZATION_SUPPORT
while(dp != NULL)
{
(void)tinsert(dp);
olddp = dp;
dp = dp->next;
free((char *)olddp);
}
#else
while(dp != NULL)
{
if(isvariant)
{
isvariant = dp->flagfield & MOREVARIANTS;
olddp = newdp->next;
newdp->next = dp;
newdp = dp;
dp = dp->next;
newdp->next = olddp;
}
else
{
isvariant = dp->flagfield & MOREVARIANTS;
newdp = tinsert(dp);
olddp = dp;
dp = dp->next;
free((char *)olddp);
}
}
#endif
}
}
if(oldhtab != NULL)
free((char *)oldhtab);
}
}
/*
** We're ready to do the insertion. Start by creating a sample
** entry for the word.
*/
if(makedent(word, wordlen, &wordent) < 0)
return; /* Word must be too big or something */
if(keep)
wordent.flagfield |= KEEP;
/*
** Now see if word or a variant is already in the table. We use the
** capitalized version so we'll find the header, if any.
**/
(void)strtoichar(nword, word, sizeof nword, 1);
upcase(nword);
if((dp = lookup(nword, 1)) != NULL)
{
/* It exists. Combine caps and set the keep flag. */
if(combinecaps(dp, &wordent) < 0)
{
free(wordent.word);
return;
}
}
else
{
/* It's new. Insert the word. */
dp = tinsert(&wordent);
#ifndef NO_CAPITALIZATION_SUPPORT
if(captype(dp->flagfield) == FOLLOWCASE)
(void)addvheader(dp);
#endif
}
newwords |= keep;
}
static struct dent *tinsert(proto)
struct dent *proto; /* Prototype entry to copy */
{
ichar_t iword[INPUTWORDLEN + MAXAFFIXLEN];
register int hcode;
register struct dent *hp; /* Next trial entry in hash table */
register struct dent *php; /* Prev. value of hp, for chaining */
if(strtoichar(iword, proto->word, sizeof iword, 1))
(void)fprintf(stderr, WORD_TOO_LONG(proto->word));
#ifdef NO_CAPITALIZATION_SUPPORT
upcase(iword);
#endif
hcode = hash(iword, pershsize);
php = NULL;
hp = &pershtab[hcode];
if(hp->flagfield & USED)
{
while(hp != NULL)
{
php = hp;
hp = hp->next;
}
hp = (struct dent *)calloc(1, sizeof(struct dent));
if(hp == NULL)
{
(void)fprintf(stderr, TREE_C_NO_SPACE);
exit(1);
}
}
*hp = *proto;
if(php != NULL)
php->next = hp;
hp->next = NULL;
return hp;
}
struct dent *treelookup(word)
register ichar_t *word;
{
register int hcode;
register struct dent *hp;
char chword[INPUTWORDLEN + MAXAFFIXLEN];
if(pershsize <= 0)
return NULL;
(void)ichartostr(chword, word, sizeof chword, 1);
hcode = hash(word, pershsize);
hp = &pershtab[hcode];
while(hp != NULL && (hp->flagfield & USED))
{
if(strcmp(chword, hp->word) == 0)
break;
#ifndef NO_CAPITALIZATION_SUPPORT
while(hp->flagfield & MOREVARIANTS)
hp = hp->next;
#endif
hp = hp->next;
}
if(hp != NULL && (hp->flagfield & USED))
return hp;
else
return NULL;
}
#if SORTPERSONAL != 0
/* Comparison routine for sorting the personal dictionary with qsort */
static int pdictcmp(enta, entb)
struct dent **enta;
struct dent **entb;
{
/* The parentheses around *enta and *entb below are NECESSARY!
** Otherwise the compiler reads it as *(enta->word), or
** enta->word[0], which is illegal (but pcc takes it and
** produces wrong code).
**/
return casecmp((*enta)->word, (*entb)->word, 1);
}
#endif
void treeoutput()
{
register struct dent *cent; /* Current entry */
register struct dent *lent; /* Linked entry */
#if SORTPERSONAL != 0
int pdictsize; /* Number of entries to write */
struct dent **sortlist; /* List of entries to be sorted */
register struct dent **sortptr; /* Handy pointer into sortlist */
#endif
register struct dent *ehtab; /* End of pershtab, for fast looping */
if(newwords == 0)
return;
if((dictf = fopen(personaldict, "w")) == NULL)
{
(void)fprintf(stderr, CANT_CREATE, personaldict);
return;
}
#if SORTPERSONAL != 0
/*
** If we are going to sort the personal dictionary, we must know
** how many items are going to be sorted.
*/
pdictsize = 0;
if(hcount >= SORTPERSONAL)
sortlist = NULL;
else
{
for(cent = pershtab, ehtab = pershtab + pershsize; cent < ehtab; cent++)
{
for(lent = cent; lent != NULL; lent = lent->next)
{
if((lent->flagfield & (USED | KEEP)) == (USED | KEEP))
pdictsize++;
#ifndef NO_CAPITALIZATION_SUPPORT
while(lent->flagfield & MOREVARIANTS)
lent = lent->next;
#endif
}
}
for(cent = hashtbl, ehtab = hashtbl + hashsize; cent < ehtab; cent++)
{
if((cent->flagfield & (USED | KEEP)) == (USED | KEEP))
{
/*
** We only want to count variant headers
** and standalone entries. These happen
** to share the characteristics in the
** test below. This test will appear
** several more times in this routine.
*/
#ifndef NO_CAPITALIZATION_SUPPORT
if(captype(cent->flagfield) != FOLLOWCASE && cent->word != NULL)
#endif
pdictsize++;
}
}
sortlist = (struct dent **)malloc(pdictsize * sizeof(struct dent *));
}
if(sortlist == NULL)
{
#endif
for(cent = pershtab, ehtab = pershtab + pershsize; cent < ehtab; cent++)
{
for(lent = cent; lent != NULL; lent = lent->next)
{
if((lent->flagfield & (USED | KEEP)) == (USED | KEEP))
{
toutent(dictf, lent, 1);
#ifndef NO_CAPITALIZATION_SUPPORT
while(lent->flagfield & MOREVARIANTS)
lent = lent->next;
#endif
}
}
}
for(cent = hashtbl, ehtab = hashtbl + hashsize; cent < ehtab; cent++)
{
if((cent->flagfield & (USED | KEEP)) == (USED | KEEP))
{
#ifndef NO_CAPITALIZATION_SUPPORT
if(captype(cent->flagfield) != FOLLOWCASE && cent->word != NULL)
#endif
toutent(dictf, cent, 1);
}
}
#if SORTPERSONAL != 0
return;
}
/*
** Produce dictionary in sorted order. We used to do this
** destructively, but that turns out to fail because in some modes
** the dictionary is written more than once. So we build an
** auxiliary pointer table (in sortlist) and sort that. This
** is faster anyway, though it uses more memory.
*/
sortptr = sortlist;
for(cent = pershtab, ehtab = pershtab + pershsize; cent < ehtab; cent++)
{
for(lent = cent; lent != NULL; lent = lent->next)
{
if((lent->flagfield & (USED | KEEP)) == (USED | KEEP))
{
*sortptr++ = lent;
#ifndef NO_CAPITALIZATION_SUPPORT
while(lent->flagfield & MOREVARIANTS)
lent = lent->next;
#endif
}
}
}
for(cent = hashtbl, ehtab = hashtbl + hashsize; cent < ehtab; cent++)
{
if((cent->flagfield & (USED | KEEP)) == (USED | KEEP))
{
#ifndef NO_CAPITALIZATION_SUPPORT
if(captype(cent->flagfield) != FOLLOWCASE && cent->word != NULL)
#endif
*sortptr++ = cent;
}
}
/* Sort the list */
qsort((char *)sortlist, (unsigned)pdictsize, sizeof(sortlist[0]),
(int(*)P((const void *, const void *)))pdictcmp);
/* Write it out */
for(sortptr = sortlist; --pdictsize >= 0;)
toutent(dictf, *sortptr++, 1);
free((char *)sortlist);
#endif
newwords = 0;
(void)fclose(dictf);
}
VOID *mymalloc(size)
unsigned int size;
{
return malloc((unsigned)size);
}
void myfree(ptr)
VOID *ptr;
{
if(hashstrings != NULL && (char *)ptr >= hashstrings &&
(char *)ptr <= hashstrings + hashheader.stringsize)
return; /* Can't free stuff in hashstrings */
free(ptr);
}
#ifdef REGEX_LOOKUP
/* Check the hashed dictionary for words matching the regex.  Return */
/* a matching string if found, else return NULL. */
char *do_regex_lookup(expr, whence)
char *expr; /* regular expression to use in the match */
int whence; /* 0 = start at the beg with new regex, else */
            /* continue from cur point w/ old regex */
{
static struct dent *curent;
static int curindex;
static struct dent *curpersent;
static int curpersindex;
static char *cmp_expr;
char dummy[INPUTWORDLEN + MAXAFFIXLEN];
ichar_t *is;
if(whence == 0)
{
is = strtosichar(expr, 0);
upcase(is);
expr = ichartosstr(is, 1);
cmp_expr = REGCMP(expr);
curent = hashtbl;
curindex = 0;
curpersent = pershtab;
curpersindex = 0;
}
/* search the dictionary until the word is found or the words run out */
for(; curindex < hashsize; curent++, curindex++)
{
if(curent->word != NULL && REGEX(cmp_expr, curent->word, dummy) != NULL)
{
curindex++;
/* Everybody's gotta write a weird expression once in a while! */
return curent++->word;
}
}
/* Try the personal dictionary too */
for(; curpersindex < pershsize; curpersent++, curpersindex++)
{
if((curpersent->flagfield & USED) != 0 && curpersent->word != NULL &&
REGEX(cmp_expr, curpersent->word, dummy) != NULL)
{
curpersindex++;
/* Everybody's gotta write a weird expression once in a while! */
return curpersent++->word;
}
}
return NULL;
}
#endif /* REGEX_LOOKUP */
|
Walter Hawkins, 61, a Grammy Award-winning singer, preacher and composer whose popular recordings and performances made him one of contemporary gospel music's most prominent figures, died July 11 of pancreatic cancer at his home in Ripon, Calif.
An ordained bishop in the Church of God in Christ denomination, he won a Grammy Award in 1980 for his performance of "The Lord's Prayer." He made recordings with artists as varied as Irish singer Van Morrison and Danish harmonica player Lee Oskar, and his compositions were covered by musicians including soul singer Aretha Franklin and American Idol winner Ruben Studdard.
Bishop Hawkins was a member of one of modern gospel's leading families, which in the 1960s created a sound that reached beyond church walls onto the radio and into secular concert venues.
The Hawkins family often performed in bellbottoms and loud colors, using drums, guitars and unbridled emotion to kindle an interest in gospel music among a wider audience, especially young people.
Their single "Oh Happy Day," an 18th-century hymn arranged by Bishop Hawkins' brother Edwin, became the first gospel song to climb the mainstream charts and won Edwin a Grammy Award in 1970 for best soul gospel performance. In the early 1970s, Bishop Hawkins emerged from his brother's shadow to found the Love Center Church in their home town, Oakland, Calif.
He served as pastor and formed a choir whose "Love Alive" series of recordings -- often featuring the soaring soprano of his former wife, Grammy winner Tramaine Hawkins -- sold millions of copies in the 1970s, '80s and '90s and consistently topped Billboard's gospel charts. As a lead vocalist, the preacher was known as an operatic tenor who could start a song velvet-voiced and calm and then build to full-throated passion, reaching impossibly high notes as he sang in praise of God.
"He had a voice that would make you want to know who he was," said the Rev. Dr. Jerome Bell, who represents Tramaine Hawkins and is a pastor at the Maryland Family Christian Center in Forestville.

Bishop Hawkins's last performance was at the Kennedy Center in April. Clearly weakened by his illness, he nevertheless gave a rousing, emotional performance as a soloist with the National Symphony Orchestra and the Washington Performing Arts Society's Men and Women of the Gospel Choir.
Walter Lee Hawkins was born in Oakland on May 18, 1949, the seventh of eight children who grew up in the projects. His father was a longshoreman who liked country music; his mother was a pianist who encouraged her children to sing.
Walter grew up playing the piano at local churches. In 1968, he sang with a youth choir in Berkeley, Calif., under his brother Edwin's direction. They recorded an album and expected to sell a few hundred copies as a fundraiser for an upcoming trip to the District, but one of their songs -- "Oh Happy Day" -- caught the eye of a local DJ, who played it on the radio.
Almost overnight, "Oh Happy Day" became an international hit, selling an estimated 7 million copies. The Hawkins family, including Edwin, Walter and younger sister Lynette, toured widely and took its church music into nightclubs.
"They were the first gospel group to travel like rock stars," Bell said. "They traveled with lights and sound, they traveled with a show -- wardrobe cases would come off of the truck and you knew something was getting ready to happen."
In addition to changing the perception of gospel, the fame that followed "Oh Happy Day" launched Bishop Hawkins's career. |
Bandgap voltage references are one of the main building blocks used in electronic circuits. Bandgap voltage references may be used in a myriad of applications, including cell phones, MP3 players, personal digital assistants, cameras, video recorders, and others.
Simply stated, a bandgap voltage reference receives a power supply and generates an output voltage. The bandgap voltage reference may be designed to provide an output voltage that is stable over temperature, or it may be designed to provide an output voltage that varies over temperature, for example to compensate for a change caused by temperature in another circuit or circuit element.
The output of the reference voltage may be used for a number of purposes. For example, a reference voltage output that is stable over temperature, that is, has a low temperature coefficient, can be placed across an external resistor to generate a current that is stable over temperature. Also, a reference voltage output can be used along with a regulator circuit to provide a regulated power supply.
Conventional bandgap circuits provide output voltages on the order of the bandgap of silicon or higher; that is, they provide output voltages at or above approximately 1.26 volts, though this value depends on the specific processing technology used. However, many modern circuits require a voltage less than the bandgap of silicon. For example, many newer technologies provide devices that have excessive leakage when their drain voltages are higher than approximately 1 volt. Also, lower voltages are often used where it is particularly desirable to save power. Another drawback of conventional circuits is that their temperature characteristics cannot be adjusted without changing their output voltage.
Thus, what is needed are circuits, methods, and apparatus that provide bandgap voltage references having output voltages less than the bandgap of silicon. It is also desirable that the output voltage and temperature coefficient be independently adjustable. |
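As a rough illustration of the sub-bandgap idea motivated above, the sketch below models a current-mode reference in Python: a CTAT current (V_BE/R1) and a PTAT current (ΔV_BE/R2) are summed and dropped across an output resistor R3, so the R1/R2 ratio sets the temperature coefficient while R3/R1 independently sets the output level below 1.26 V. The device model, resistor ratios and emitter-area ratio N are all made-up illustrative assumptions, not values from this description.

```python
import math

K_OVER_Q = 8.617e-5  # Boltzmann constant over electron charge, in V/K
N = 8                # assumed emitter-area ratio of the bipolar pair

def v_be(t_kelvin):
    """Crude linear CTAT model: ~0.6 V at 300 K with a -2 mV/K slope."""
    return 1.2 - 2.0e-3 * t_kelvin

def v_out(t_kelvin, r1_over_r2, r3_over_r1):
    delta_v_be = K_OVER_Q * t_kelvin * math.log(N)    # PTAT voltage
    v_sum = v_be(t_kelvin) + r1_over_r2 * delta_v_be  # near-zero-TC sum
    return r3_over_r1 * v_sum                         # scaled below 1.26 V

# Zero-TC condition: 2 mV/K (CTAT slope) == (R1/R2) * (k/q) * ln(N)
r1_over_r2 = 2.0e-3 / (K_OVER_Q * math.log(N))

print(v_out(250, r1_over_r2, 0.5))  # ~0.6 V at -23 C
print(v_out(350, r1_over_r2, 0.5))  # ~0.6 V at +77 C: flat over temperature
```

Because the R3/R1 scale factor multiplies an already temperature-flat sum, the output level can be retuned without disturbing the temperature coefficient — the independent adjustability this background calls for.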
(Reuters) - Vertex Pharmaceuticals Inc said its experimental drug improved lung function in adults with cystic fibrosis in a mid-stage trial, sending its shares 54 percent higher in after-hours trading.
The company said on Thursday the favorable results were seen in patients who took the drug, known as VX-661, in combination with its already approved treatment for cystic fibrosis, Kalydeco (ivacaftor) in a short 28-day study.
The Phase II study involved 128 people with cystic fibrosis who had two copies of the most common gene mutation responsible for the debilitating condition, in which the lungs are unable to flush out salt and therefore become flooded with mucus and can become easily infected.
Patients treated in the two highest dose groups had a mean relative increase in lung function of between 7.5 percent and 9 percent, compared with patients given a placebo.
The results were “impressive,” Wells Fargo analyst Brian Abrahams said in a research note. He added that they “meaningfully increase the likelihood of success” for the company’s program to develop combination treatments for cystic fibrosis.
He has estimated the market for a dual cystic fibrosis treatment at over $3 billion.
Vertex said the experimental drug was generally well tolerated when used alone and in combination with Kalydeco.
Vertex shares were up 54 percent at $81.55 in after-hours trading from their closing of $52.87 Thursday on the Nasdaq. |
Experimental design for equipment qualification and matching

Process qualifications are required as new equipment is added to a fab to ensure good product quality. For Ion Implanters, dose-matching of machines is a necessary first step in this procedure. Split-lot qualifications are also commonly run prior to releasing a new machine to production. The design of these experiments is crucial to efficient qualification. The use of a fractional factorial design ensures that a lot is split differently at each implant layer. Statistical analysis can independently check each implant for significant differences in electrical parameters. These methods have many advantages over more traditional approaches.
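To make the design idea concrete, here is a small hypothetical sketch — the layer names and the choice of a 2^(3-1) half-fraction with generator C = A·B are illustrative assumptions, not taken from the abstract. Each design column assigns a wafer group to the reference (-1) or new (+1) implanter at one implant layer; the columns' balance and mutual orthogonality are what let each implant be checked independently.

```python
from itertools import product

# Build the half-fraction: full factorial in layers A and B,
# with the assignment at layer C aliased to the product A*B.
runs = [{"layer_A": a, "layer_B": b, "layer_C": a * b}
        for a, b in product((-1, 1), repeat=2)]
for r in runs:
    print(r)

def col(name):
    return [r[name] for r in runs]

# Each column is balanced (equal splits between machines) and
# orthogonal to the others, so each implant layer's effect on an
# electrical parameter can be estimated independently.
for name in ("layer_A", "layer_B", "layer_C"):
    assert sum(col(name)) == 0
assert sum(x * y for x, y in zip(col("layer_A"), col("layer_B"))) == 0
```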
Esri’s mapping programs already layer census and income data on top of geographical data. The company has used government data on the projected rise in sea levels to create an interactive map of what will happen, for example, should a hurricane hit the town of Gloucester, Mass. The digital map shows how flooding will affect specific buildings, roads, houses, schools, and low-income and older residents.
White House officials hope that if city planners and homeowners around the country see such vivid digital projections of the impact of climate change in their backyards, it could melt political resistance to climate policy and create a new impetus for action. In 2012 as North Carolina was creating a development plan, the state legislature voted to disregard scientific projections that climate change would cause rising sea levels.
“If people in North Carolina had had this initiative, that decision would have been less likely,” Mr. Holdren told reporters at the White House.
Google also hopes to combine its mapping technology with the government climate data. “What if we could make information about sea-level rise, extreme heat and drought as simple to digest and interactive as using Google Maps to get directions?” said Rebecca Moore, the engineering manager of Google Earth, who was also at the White House. “That is not possible, but we think it’s possible to get a lot closer. There’s the possibility to create a living, breathing dashboard in a way people can understand and relate to.”
White House officials said they hoped to help recreate the success of desktop and mobile apps and software that were built by private companies using government data, like on the real estate sites Trulia, Redfin and Zillow. Those apps use information from the Bureau of Labor Statistics and the Census Bureau to help buyers make more informed decisions about buying a home.
But the research and projections on climate change are vastly more complex than simple housing, labor and census statistics. Although a number of scientific reports have reached the consensus that carbon pollution from the burning of fossil fuels has warmed the planet — leading to a future of rising sea levels, melting land ice, an increase in the most damaging types of hurricanes, and drought in some places and deluges in others — scientists warn against trying to use that data to model precisely what will happen.
“The essence of dealing with climate change is not so much about identifying specific impacts at a specific time in the future, it’s about managing risk,” Christopher B. Field, the director of the department of global ecology at Stanford University, said in February. |
Political Economy of Discretionary Transfers: A Dynamic Panel Data Analysis of Indian States

Intergovernmental transfers are a major instrument for ensuring the smooth functioning of fiscal federalism in India, but the mechanism of Central transfers seems confusing and overlapping. Although a formula-based practice is mandated by the Indian Constitution, there are several breaks in the practice: while predetermined formulas are used for some transfers, there is considerable discretion in allocating other classes of transfers. This paper focuses on the determinants that influence the quantum of discretionary transfers to sub-national governments from a political economy perspective. Taking a panel data set of 28 states for the period 2001 to 2011 and using the Arellano-Bover/Blundell-Bond system estimation model, the paper observes that the chosen variables robustly explain the disparity in Central fund disbursement under the non-formulaic discretionary head. The study analyses the results separately for special category states (SCS) and non-special category states (NSCS), and in combination. The findings reveal that the chosen variables have different outcomes for SCS and NSCS; however, when SCS and NSCS states are combined, variables such as fiscal capacity, fiscal performance and coalition status are found to be significant.
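As a brief, entirely illustrative aside on why a system estimator is used for dynamic panels at all (the parameters and data below are simulated, not from the study): with a short panel, the ordinary within (fixed-effects) estimator of a dynamic model y_it = ρ·y_{i,t−1} + α_i + ε_it suffers the well-known Nickell bias, which is what motivates Arellano-Bover/Blundell-Bond-type GMM estimators.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, rho = 500, 5, 0.6          # 500 units, 5 usable periods, true rho

# Simulate y_it = rho * y_{i,t-1} + alpha_i + e_it with unit effects.
alpha = rng.normal(size=N)
y = np.zeros((N, T + 1))
y[:, 0] = alpha / (1 - rho)      # start each unit near its long-run mean
for t in range(1, T + 1):
    y[:, t] = rho * y[:, t - 1] + alpha + rng.normal(size=N)

# Within (fixed-effects) estimate of rho: demean within units, regress.
y_lag, y_cur = y[:, :-1], y[:, 1:]
yl = y_lag - y_lag.mean(axis=1, keepdims=True)
yc = y_cur - y_cur.mean(axis=1, keepdims=True)
rho_fe = (yl * yc).sum() / (yl ** 2).sum()

print(round(rho_fe, 3))  # substantially below the true 0.6 (Nickell bias)
```

With only five periods per state, the demeaning mechanically correlates the lagged regressor with the error, pulling the estimate well below the true value; GMM estimators of the kind cited in the abstract are designed to avoid exactly this.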
#pragma once
#include <chrono>
#include <cstdint>
#include <string>
#include "envoy/config/core/v3/http_uri.pb.h"
#include "envoy/config/core/v3/protocol.pb.h"
#include "envoy/grpc/status.h"
#include "envoy/http/codes.h"
#include "envoy/http/filter.h"
#include "envoy/http/message.h"
#include "envoy/http/metadata_interface.h"
#include "envoy/http/query_params.h"
#include "common/http/exception.h"
#include "common/http/status.h"
#include "absl/strings/string_view.h"
#include "absl/types/optional.h"
#include "nghttp2/nghttp2.h"
namespace Envoy {
namespace Http {
namespace Utility {
// This is a wrapper around dispatch calls that may throw an exception or may return an error status
// while exception removal is in migration.
// TODO(#10878): Remove this.
Http::Status exceptionToStatus(std::function<Http::Status(Buffer::Instance&)> dispatch,
Buffer::Instance& data);
/**
* Well-known HTTP ALPN values.
*/
class AlpnNameValues {
public:
const std::string Http10 = "http/1.0";
const std::string Http11 = "http/1.1";
const std::string Http2 = "h2";
const std::string Http2c = "h2c";
};
using AlpnNames = ConstSingleton<AlpnNameValues>;
} // namespace Utility
} // namespace Http
namespace Http2 {
namespace Utility {
struct SettingsEntryHash {
size_t operator()(const nghttp2_settings_entry& entry) const {
return absl::Hash<decltype(entry.settings_id)>()(entry.settings_id);
}
};
struct SettingsEntryEquals {
bool operator()(const nghttp2_settings_entry& lhs, const nghttp2_settings_entry& rhs) const {
return lhs.settings_id == rhs.settings_id;
}
};
// Limits and defaults for `envoy::config::core::v3::Http2ProtocolOptions` protos.
struct OptionsLimits {
// disable HPACK compression
static const uint32_t MIN_HPACK_TABLE_SIZE = 0;
// initial value from HTTP/2 spec, same as NGHTTP2_DEFAULT_HEADER_TABLE_SIZE from nghttp2
static const uint32_t DEFAULT_HPACK_TABLE_SIZE = (1 << 12);
// no maximum from HTTP/2 spec, use unsigned 32-bit maximum
static const uint32_t MAX_HPACK_TABLE_SIZE = std::numeric_limits<uint32_t>::max();
// TODO(jwfang): make this 0, the HTTP/2 spec minimum
static const uint32_t MIN_MAX_CONCURRENT_STREAMS = 1;
// defaults to maximum, same as nghttp2
static const uint32_t DEFAULT_MAX_CONCURRENT_STREAMS = (1U << 31) - 1;
// no maximum from HTTP/2 spec, total streams is unsigned 32-bit maximum,
// one-side (client/server) is half that, and we need to exclude stream 0.
// same as NGHTTP2_INITIAL_MAX_CONCURRENT_STREAMS from nghttp2
static const uint32_t MAX_MAX_CONCURRENT_STREAMS = (1U << 31) - 1;
// initial value from HTTP/2 spec, same as NGHTTP2_INITIAL_WINDOW_SIZE from nghttp2
// NOTE: we only support increasing window size now, so this is also the minimum
// TODO(jwfang): make this 0 to support decrease window size
static const uint32_t MIN_INITIAL_STREAM_WINDOW_SIZE = (1 << 16) - 1;
// initial value from HTTP/2 spec is 65535, but we want more (256MiB)
static const uint32_t DEFAULT_INITIAL_STREAM_WINDOW_SIZE = 256 * 1024 * 1024;
// maximum from HTTP/2 spec, same as NGHTTP2_MAX_WINDOW_SIZE from nghttp2
static const uint32_t MAX_INITIAL_STREAM_WINDOW_SIZE = (1U << 31) - 1;
// CONNECTION_WINDOW_SIZE is similar to STREAM_WINDOW_SIZE, but for connection-level window
// TODO(jwfang): make this 0 to support decrease window size
static const uint32_t MIN_INITIAL_CONNECTION_WINDOW_SIZE = (1 << 16) - 1;
// nghttp2's default connection-level window equals to its stream-level,
// our default connection-level window also equals to our stream-level
static const uint32_t DEFAULT_INITIAL_CONNECTION_WINDOW_SIZE = 256 * 1024 * 1024;
static const uint32_t MAX_INITIAL_CONNECTION_WINDOW_SIZE = (1U << 31) - 1;
// Default limit on the number of outbound frames of all types.
static const uint32_t DEFAULT_MAX_OUTBOUND_FRAMES = 10000;
// Default limit on the number of outbound frames of types PING, SETTINGS and RST_STREAM.
static const uint32_t DEFAULT_MAX_OUTBOUND_CONTROL_FRAMES = 1000;
// Default limit on the number of consecutive inbound frames with an empty payload
// and no end stream flag.
static const uint32_t DEFAULT_MAX_CONSECUTIVE_INBOUND_FRAMES_WITH_EMPTY_PAYLOAD = 1;
// Default limit on the number of inbound frames of type PRIORITY (per stream).
static const uint32_t DEFAULT_MAX_INBOUND_PRIORITY_FRAMES_PER_STREAM = 100;
// Default limit on the number of inbound frames of type WINDOW_UPDATE (per DATA frame sent).
static const uint32_t DEFAULT_MAX_INBOUND_WINDOW_UPDATE_FRAMES_PER_DATA_FRAME_SENT = 10;
};
/**
* Validates settings/options already set in |options| and initializes any remaining fields with
* defaults.
*/
envoy::config::core::v3::Http2ProtocolOptions
initializeAndValidateOptions(const envoy::config::core::v3::Http2ProtocolOptions& options);
envoy::config::core::v3::Http2ProtocolOptions
initializeAndValidateOptions(const envoy::config::core::v3::Http2ProtocolOptions& options,
bool hcm_stream_error_set,
const Protobuf::BoolValue& hcm_stream_error);
} // namespace Utility
} // namespace Http2
namespace Http {
namespace Utility {
/**
* Given a fully qualified URL, splits the string_view provided into scheme,
* host and path with query parameters components.
*/
class Url {
public:
bool initialize(absl::string_view absolute_url, bool is_connect_request);
absl::string_view scheme() { return scheme_; }
absl::string_view hostAndPort() { return host_and_port_; }
absl::string_view pathAndQueryParams() { return path_and_query_params_; }
private:
absl::string_view scheme_;
absl::string_view host_and_port_;
absl::string_view path_and_query_params_;
};
class PercentEncoding {
public:
/**
* Encodes string view to its percent encoded representation. Non-visible ASCII is always escaped,
* in addition to a given list of reserved chars.
*
* @param value supplies string to be encoded.
* @param reserved_chars list of reserved chars to escape. By default the escaped chars in
* https://github.com/grpc/grpc/blob/master/doc/PROTOCOL-HTTP2.md#responses are used.
* @return std::string percent-encoded string.
*/
static std::string encode(absl::string_view value, absl::string_view reserved_chars = "%");
/**
* Decodes string view from its percent encoded representation.
 * @param value supplies string to be decoded.
* @return std::string decoded string https://tools.ietf.org/html/rfc3986#section-2.1.
*/
static std::string decode(absl::string_view value);
private:
// Encodes string view to its percent encoded representation, with start index.
static std::string encode(absl::string_view value, const size_t index,
const absl::flat_hash_set<char>& reserved_char_set);
};
/**
* Append to x-forwarded-for header.
* @param headers supplies the headers to append to.
* @param remote_address supplies the remote address to append.
*/
void appendXff(RequestHeaderMap& headers, const Network::Address::Instance& remote_address);
/**
* Append to via header.
* @param headers supplies the headers to append to.
* @param via supplies the via header to append.
*/
void appendVia(RequestOrResponseHeaderMap& headers, const std::string& via);
/**
* Update authority with the specified hostname and copy the original authority to the forwarded
* host header.
* @param headers headers where authority should be updated.
* @param hostname hostname that authority should be updated with.
*/
void updateAuthority(RequestHeaderMap& headers, absl::string_view hostname);
/**
* Creates an SSL (https) redirect path based on the input host and path headers.
* @param headers supplies the request headers.
* @return std::string the redirect path.
*/
std::string createSslRedirectPath(const RequestHeaderMap& headers);
/**
* Parse a URL into query parameters.
* @param url supplies the url to parse.
* @return QueryParams the parsed parameters, if any.
*/
QueryParams parseQueryString(absl::string_view url);
/**
* Parse a URL into query parameters.
* @param url supplies the url to parse.
* @return QueryParams the parsed and percent-decoded parameters, if any.
*/
QueryParams parseAndDecodeQueryString(absl::string_view url);
/**
 * Parse a request body into query parameters.
* @param body supplies the body to parse.
* @return QueryParams the parsed parameters, if any.
*/
QueryParams parseFromBody(absl::string_view body);
/**
* Parse query parameters from a URL or body.
* @param data supplies the data to parse.
* @param start supplies the offset within the data.
* @param decode_params supplies the flag whether to percent-decode the parsed parameters (both name
* and value). Set to false to keep the parameters encoded.
* @return QueryParams the parsed parameters, if any.
*/
QueryParams parseParameters(absl::string_view data, size_t start, bool decode_params);
/**
* Finds the start of the query string in a path
* @param path supplies a HeaderString& to search for the query string
* @return absl::string_view starting at the beginning of the query string,
* or a string_view starting at the end of the path if there was
* no query string.
*/
absl::string_view findQueryStringStart(const HeaderString& path);
/**
* Parse a particular value out of a cookie
* @param headers supplies the headers to get the cookie from.
* @param key the key for the particular cookie value to return
* @return std::string the parsed cookie value, or "" if none exists
**/
std::string parseCookieValue(const HeaderMap& headers, const std::string& key);
/**
* Produce the value for a Set-Cookie header with the given parameters.
* @param key is the name of the cookie that is being set.
* @param value the value to set the cookie to; this value is trusted.
* @param path the path for the cookie, or the empty string to not set a path.
 * @param max_age the length of time for which the cookie is valid, or zero
 *        to create a session cookie.
 * @param httponly true if the cookie should have HttpOnly appended.
* @return std::string a valid Set-Cookie header value string
*/
std::string makeSetCookieValue(const std::string& key, const std::string& value,
const std::string& path, const std::chrono::seconds max_age,
bool httponly);
/**
* Get the response status from the response headers.
* @param headers supplies the headers to get the status from.
* @return uint64_t the response code or throws an exception if the headers are invalid.
*/
uint64_t getResponseStatus(const ResponseHeaderMap& headers);
/**
* Determine whether these headers are a valid Upgrade request or response.
* This function returns true if the following HTTP headers and values are present:
* - Connection: Upgrade
* - Upgrade: [any value]
*/
bool isUpgrade(const RequestOrResponseHeaderMap& headers);
/**
* @return true if this is a CONNECT request with a :protocol header present, false otherwise.
*/
bool isH2UpgradeRequest(const RequestHeaderMap& headers);
/**
* Determine whether this is a WebSocket Upgrade request.
* This function returns true if the following HTTP headers and values are present:
* - Connection: Upgrade
* - Upgrade: websocket
*/
bool isWebSocketUpgradeRequest(const RequestHeaderMap& headers);
/**
* @return Http1Settings An Http1Settings populated from the
* envoy::config::core::v3::Http1ProtocolOptions config.
*/
Http1Settings parseHttp1Settings(const envoy::config::core::v3::Http1ProtocolOptions& config);
Http1Settings parseHttp1Settings(const envoy::config::core::v3::Http1ProtocolOptions& config,
const Protobuf::BoolValue& hcm_stream_error);
struct EncodeFunctions {
// Function to modify locally generated response headers.
std::function<void(ResponseHeaderMap& headers)> modify_headers_;
// Function to rewrite locally generated response.
std::function<void(ResponseHeaderMap& response_headers, Code& code, std::string& body,
absl::string_view& content_type)>
rewrite_;
// Function to encode response headers.
std::function<void(ResponseHeaderMapPtr&& headers, bool end_stream)> encode_headers_;
// Function to encode the response body.
std::function<void(Buffer::Instance& data, bool end_stream)> encode_data_;
};
struct LocalReplyData {
// Tells if this is a response to a gRPC request.
bool is_grpc_;
// Supplies the HTTP response code.
Code response_code_;
// Supplies the optional body text which is returned.
absl::string_view body_text_;
// gRPC status code to override the httpToGrpcStatus mapping with.
const absl::optional<Grpc::Status::GrpcStatus> grpc_status_;
// Tells if this is a response to a HEAD request.
bool is_head_request_ = false;
};
/**
* Create a locally generated response using filter callbacks.
* @param is_reset boolean reference that indicates whether a stream has been reset. It is the
* responsibility of the caller to ensure that this is set to false if onDestroy()
* is invoked in the context of sendLocalReply().
* @param callbacks supplies the filter callbacks to use.
* @param local_reply_data struct which keeps data related to generate reply.
*/
void sendLocalReply(const bool& is_reset, StreamDecoderFilterCallbacks& callbacks,
const LocalReplyData& local_reply_data);
/**
* Create a locally generated response using the provided lambdas.
* @param is_reset boolean reference that indicates whether a stream has been reset. It is the
* responsibility of the caller to ensure that this is set to false if onDestroy()
* is invoked in the context of sendLocalReply().
* @param encode_functions supplies the functions to encode response body and headers.
* @param local_reply_data struct which keeps data related to generate reply.
*/
void sendLocalReply(const bool& is_reset, const EncodeFunctions& encode_functions,
const LocalReplyData& local_reply_data);
struct GetLastAddressFromXffInfo {
// Last valid address pulled from the XFF header.
Network::Address::InstanceConstSharedPtr address_;
// Whether this is the only address in the XFF header.
bool single_address_;
};
/**
* Retrieves the last IPv4/IPv6 address in the x-forwarded-for header.
* @param request_headers supplies the request headers.
* @param num_to_skip specifies the number of addresses at the end of the XFF header
* to ignore when identifying the "last" address.
* @return GetLastAddressFromXffInfo information about the last address in the XFF header.
* @see GetLastAddressFromXffInfo for more information.
*/
GetLastAddressFromXffInfo getLastAddressFromXFF(const Http::RequestHeaderMap& request_headers,
uint32_t num_to_skip = 0);
/**
* Remove any headers nominated by the Connection header
* Sanitize the TE header if it contains unsupported values
*
* @param headers the client request headers
* @return whether the headers were sanitized successfully
*/
bool sanitizeConnectionHeader(Http::RequestHeaderMap& headers);
/**
* Get the string for the given http protocol.
* @param protocol for which to return the string representation.
* @return string representation of the protocol.
*/
const std::string& getProtocolString(const Protocol p);
/**
* Extract host and path from a URI. The host may contain port.
* This function doesn't validate if the URI is valid. It only parses the URI with following
* format: scheme://host/path.
 * @param uri the input URI string.
 * @param host the output host string.
 * @param path the output path string.
*/
void extractHostPathFromUri(const absl::string_view& uri, absl::string_view& host,
absl::string_view& path);
/**
 * Takes the path component from a file:/// URI and returns a local path for file access.
* @param file_path if we have file:///foo/bar, the file_path is foo/bar. For file:///c:/foo/bar
* it is c:/foo/bar. This is not prefixed with /.
* @return std::string with absolute path for local access, e.g. /foo/bar, c:/foo/bar.
*/
std::string localPathFromFilePath(const absl::string_view& file_path);
/**
* Prepare headers for a HttpUri.
*/
RequestMessagePtr prepareHeaders(const envoy::config::core::v3::HttpUri& http_uri);
/**
* Serialize query-params into a string.
*/
std::string queryParamsToString(const QueryParams& query_params);
/**
* Returns string representation of StreamResetReason.
*/
const std::string resetReasonToString(const Http::StreamResetReason reset_reason);
/**
* Transforms the supplied headers from an HTTP/1 Upgrade request to an H2 style upgrade.
 * Changes the method to CONNECT and moves the Upgrade header's value to a :protocol header.
* @param headers the headers to convert.
*/
void transformUpgradeRequestFromH1toH2(RequestHeaderMap& headers);
/**
* Transforms the supplied headers from an HTTP/1 Upgrade response to an H2 style upgrade response.
* Changes the 101 upgrade response to a 200 for the CONNECT response.
* @param headers the headers to convert.
*/
void transformUpgradeResponseFromH1toH2(ResponseHeaderMap& headers);
/**
* Transforms the supplied headers from an H2 "CONNECT"-with-:protocol-header to an HTTP/1 style
* Upgrade response.
* @param headers the headers to convert.
*/
void transformUpgradeRequestFromH2toH1(RequestHeaderMap& headers);
/**
* Transforms the supplied headers from an H2 "CONNECT success" to an HTTP/1 style Upgrade response.
* The caller is responsible for ensuring this only happens on upgraded streams.
* @param headers the headers to convert.
*/
void transformUpgradeResponseFromH2toH1(ResponseHeaderMap& headers, absl::string_view upgrade);
/**
 * The non-template implementation of resolveMostSpecificPerFilterConfig; see
 * resolveMostSpecificPerFilterConfig for docs.
*/
const Router::RouteSpecificFilterConfig*
resolveMostSpecificPerFilterConfigGeneric(const std::string& filter_name,
const Router::RouteConstSharedPtr& route);
/**
* Retrieves the route specific config. Route specific config can be in a few
* places, that are checked in order. The first config found is returned. The
* order is:
* - the routeEntry() (for config that's applied on weighted clusters)
* - the route
* - and finally from the virtual host object (routeEntry()->virtualhost()).
*
* To use, simply:
*
* const auto* config =
* Utility::resolveMostSpecificPerFilterConfig<ConcreteType>(FILTER_NAME,
* stream_callbacks_.route());
*
* See notes about config's lifetime below.
*
 * @param filter_name The name of the filter whose route config should be
 * fetched.
* @param route The route to check for route configs. nullptr routes will
* result in nullptr being returned.
*
* @return The route config if found. nullptr if not found. The returned
* pointer's lifetime is the same as the route parameter.
*/
template <class ConfigType>
const ConfigType* resolveMostSpecificPerFilterConfig(const std::string& filter_name,
const Router::RouteConstSharedPtr& route) {
static_assert(std::is_base_of<Router::RouteSpecificFilterConfig, ConfigType>::value,
"ConfigType must be a subclass of Router::RouteSpecificFilterConfig");
const Router::RouteSpecificFilterConfig* generic_config =
resolveMostSpecificPerFilterConfigGeneric(filter_name, route);
return dynamic_cast<const ConfigType*>(generic_config);
}
/**
 * The non-template implementation of traversePerFilterConfig; see
 * traversePerFilterConfig for docs.
*/
void traversePerFilterConfigGeneric(
const std::string& filter_name, const Router::RouteConstSharedPtr& route,
std::function<void(const Router::RouteSpecificFilterConfig&)> cb);
/**
* Fold all the available per route filter configs, invoking the callback with each config (if
* it is present). Iteration of the configs is in order of specificity. That means that the callback
* will be called first for a config on a Virtual host, then a route, and finally a route entry
* (weighted cluster). If a config is not present, the callback will not be invoked.
*/
template <class ConfigType>
void traversePerFilterConfig(const std::string& filter_name,
const Router::RouteConstSharedPtr& route,
std::function<void(const ConfigType&)> cb) {
static_assert(std::is_base_of<Router::RouteSpecificFilterConfig, ConfigType>::value,
"ConfigType must be a subclass of Router::RouteSpecificFilterConfig");
traversePerFilterConfigGeneric(
filter_name, route, [&cb](const Router::RouteSpecificFilterConfig& cfg) {
const ConfigType* typed_cfg = dynamic_cast<const ConfigType*>(&cfg);
if (typed_cfg != nullptr) {
cb(*typed_cfg);
}
});
}
/**
* Merge all the available per route filter configs into one. To perform the merge,
* the reduce function will be called on each two configs until a single merged config is left.
*
* @param reduce The first argument for this function will be the config from the previous level
* and the second argument is the config from the current level (the more specific one). The
* function should merge the second argument into the first argument.
*
* @return The merged config.
*/
template <class ConfigType>
absl::optional<ConfigType>
getMergedPerFilterConfig(const std::string& filter_name, const Router::RouteConstSharedPtr& route,
std::function<void(ConfigType&, const ConfigType&)> reduce) {
static_assert(std::is_copy_constructible<ConfigType>::value,
"ConfigType must be copy constructible");
absl::optional<ConfigType> merged;
traversePerFilterConfig<ConfigType>(filter_name, route,
[&reduce, &merged](const ConfigType& cfg) {
if (!merged) {
merged.emplace(cfg);
} else {
reduce(merged.value(), cfg);
}
});
return merged;
}
struct AuthorityAttributes {
  // Whether the parsed authority is a pure IP address (IPv4/IPv6). If true, the
  // host is an IP address rather than an FQDN.
bool is_ip_address_{};
// If parsed authority has host, that is stored here.
absl::string_view host_;
// If parsed authority has port, that is stored here.
absl::optional<uint16_t> port_;
};
/**
 * Parses the passed authority to determine whether it is a valid FQDN or an IPv4/IPv6
 * address, and extracts the host and optional port.
 * @param host host/authority
 * @return AuthorityAttributes the parse result, including whether the host is an IP
 *         address, the host itself and the optional port
*/
AuthorityAttributes parseAuthority(absl::string_view host);
} // namespace Utility
} // namespace Http
} // namespace Envoy
|
def load_metadata(filename):
    """Read image metadata, supporting both the old and new pyexiv2 APIs."""
    try:
        # Old pyexiv2 (0.1.x) API.
        metadata = pyexiv2.Image(filename)
        metadata.readMetadata()
    except AttributeError:
        # New pyexiv2 (0.2+) API: pyexiv2.Image no longer exists.
        metadata = pyexiv2.ImageMetadata(filename)
        metadata.read()
return metadata |
Spatio-temporal dynamics of information processing in the Brain: Recent advances, current limitations and future challenges Understanding how the brain transforms sensory information into adapted motor behavior requires tracking the flow of information across the brain. One question of great importance is to what extent the various required cognitive operations overlap in time (can a response begin to be prepared before the end of stimulus evaluation?). Symmetrically, it is essential to understand the degree of localization of the elementary cognitive operations (are the motor areas purely motor, or do they also intervene in sensory processing?). After a brief statement of the current theoretical questions, we present recent data regarding these issues. The general logic is to track the information flow backward, starting from the response up to the stimulus. We then present some technical limitations hampering more precise investigations and conclude with the challenges the next few years hold for real advancement on these topics. |
// Test source from repo Keneral/aframeworks:
// compile/slang/tests/F_v15_non_root_kernel/v15_non_root_kernel.rs
// -target-api 15
#pragma version(1)
#pragma rs java_package_name(android.renderscript.cts)
void foo(const int *in) {
}
|
SURVEY ON PREDICTION OF HEART MORBIDITY USING DATA MINING TECHNIQUES Data mining is the non-trivial extraction of implicit, previously unknown and potentially useful information from data. Data mining technology provides a user-oriented approach to discovering novel and hidden patterns in data. This paper presents the various existing techniques and the issues and challenges associated with them. The discovered knowledge can be used by healthcare administrators to improve the quality of service, and by medical practitioners to reduce the number of adverse drug effects and to suggest less expensive, therapeutically equivalent alternatives. In this paper we discuss the popular data mining techniques, namely Decision Trees, Naive Bayes and Neural Networks, that are used for the prediction of disease. |
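As a rough, self-contained illustration of one technique named in the abstract above (Naive Bayes), the sketch below implements a toy Gaussian Naive Bayes classifier in plain Python. The two "clinical" features and all data points are invented purely for illustration; a real heart-morbidity study would use vetted clinical datasets and an established library.

```python
import math

def fit_gnb(X, y):
    """Estimate per-class feature means, variances and priors."""
    model = {}
    for c in sorted(set(y)):
        rows = [x for x, label in zip(X, y) if label == c]
        n = len(rows)
        means = [sum(col) / n for col in zip(*rows)]
        # Small epsilon guards against zero variance.
        varis = [sum((v - m) ** 2 for v in col) / n + 1e-9
                 for col, m in zip(zip(*rows), means)]
        model[c] = (means, varis, n / len(y))
    return model

def predict_gnb(model, x):
    """Return the class with the highest Gaussian log-posterior for x."""
    best, best_lp = None, float("-inf")
    for c, (means, varis, prior) in model.items():
        lp = math.log(prior)
        for v, m, s2 in zip(x, means, varis):
            lp += -0.5 * math.log(2 * math.pi * s2) - (v - m) ** 2 / (2 * s2)
        if lp > best_lp:
            best, best_lp = c, lp
    return best

# Invented "patient" data: [resting heart rate, cholesterol] -> 1 = at risk.
X = [[62, 180], [58, 170], [65, 190], [88, 260], [92, 250], [85, 270]]
y = [0, 0, 0, 1, 1, 1]
model = fit_gnb(X, y)
print(predict_gnb(model, [60, 175]))  # low-risk profile -> 0
print(predict_gnb(model, [90, 255]))  # high-risk profile -> 1
```

The fit/predict split mirrors how library classifiers such as scikit-learn structure the same workflow.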
Consider yourself up on eyewear fashion and design? You will fall in love with Alexander Gray at OC Mart Mix in Costa Mesa. It's a treasure trove of the world's best indie frames in one spot.
Proud dad and owner David Gonzales named Alexander Gray after the middle names of his two young sons, and he also owns Fred Segal Eyes and the Spectacle in Santa Monica. Gonzales is a legend in the eyewear industry for seeking out iconic (not mass produced) optical companies, redesigning interesting eyeglasses into sunglasses and finding the right frame for the right face.
He's a stickler when it comes to finding the most flattering frame.
"I will not sell something that I know will not look good," Gonzales says.
At Alexander Gray, Gonzales specializes in design-driven, technologically advanced brands (mostly handmade in Japan). Since Orange County is the epicenter for the world's premium optical companies — Barton Perreira, SALT, Oakley and Initium Eyewear, to name just a few — it was a natural fit for Gonzales to open a hip shop that carries only the highest-quality independent brands like Victory Collection, a small company that's reissuing popular styles from the '40s and '50s; KBL Eyewear, a New York-based husband-and-wife team who design artistic, unique and timeless frames; as well as Dita and more brands arriving soon.
Prices range from $135 to $500.
Alexander Gray is open 11 a.m. to 7 p.m. Tuesday through Saturday and noon to 5 p.m. Sunday at SouthCoast Collection, 3313 Hyland Ave. Suite C in Costa Mesa. For more information, call (949) 284-0564.
Peter Blake Gallery just opened a pop-up gallery in South Laguna, open from 6 to 10 p.m. Wednesday to Sunday through summer. It's really an extension of his main gallery show, which explores trends in current California abstraction. Peter Blake Gallery features important artists including Lita Albuquerque (currently showing at Laguna Art Museum), Peter Alexander, Charles Arnoldi, Tony DeLap, Chris Gwaltney, Scot Heywood, Ed Moses, Andy Moses and Guillermo Bert.
The gallery is at 2894 South Coast Hwy. in Laguna Beach.
The wait is almost over! The first Katsuya by Philippe Starck and master sushi chef Katsuya Uechi is opening Friday in the former Hush location in Laguna Beach. Katsuya is sure to be a hot spot, serving award-winning cuisine in an indoor and outdoor stylish setting designed by the world-renowned Starck.
Katsuya will feature Robata-style dining, prepared over an open flame, innovative sushi and traditional Japanese platters. Signature dishes will include crispy rice with spicy tuna, yellowtail sashimi with jalapeño and miso-marinated black cod and a delicious kid's menu. And starting July 25, it will serve lunch. Katsuya is at 858 S. Coast Hwy. in Laguna Beach. For more information, visit http://www.sbe.com.
Plan on experiencing a free mini-concert of "Mary Poppins" at 1 p.m. Wednesday at South Coast Plaza's Carousel Court. Cast members from the smash hit musical will be performing numbers from the show, which opens July 14 at the Segerstrom Center for the Arts.
And before or after the show, take advantage of lunch specials at South Coast Plaza's eateries that include the fast-casual noodle bar at AnQi, DG Burger, Lawry's Carvery or Signature Kitchen at Macy's Home Store; or prix-fixe fine dining choices at Marché Moderne, Charlie Palmer at Bloomingdale's, Pizzeria Ortica, Morton's the Steakhouse and Royal Khyber Fine Indian Cuisine.
Tickets to see "Mary Poppins" are available online at http://www.scfta.org by calling (714) 556-2787 and at the box office at 600 Town Center Drive in Costa Mesa.
It's no secret that Lucca Café in Irvine is one of my favorite restaurants in Orange County. Chef and co-owner Cathy Pavlos and husband/sommelier Elliott Pavlos bring us an unexpected Euro bistro and wine bar serving lunch, dinner and weekend brunch that's focused on the season's best ingredients (free-range, hormone-free meats and locally grown organic produce).
Whether you are stopping in for an artisan cheese plate, handmade charcuterie, an antipasto feast or a tasty summer salad with yellow beets, heirloom tomatoes, burrata, fresh basil, basil pesto and Champagne vinaigrette, you will be treated to a culinary experience featuring small plates inspired by the south of France, Italy and Greece.
At Lucca, Chef Cathy is following the trend of upscale New York restaurants that bundle "lunches" so that guests can order an efficient and delicious trio of small plates (a smaller version of its European dinner menu).
Weekly business bundles are available 11 a.m. to 2 p.m. Monday through Friday. The options can change, but expect wonderful choices like wild baby arugula with roasted beets, shaved fennel, goat cheese, caramelized onions and vanilla bean vinaigrette; an all-natural grilled chicken salad sandwich on croissant with tarragon, shallot, celery, cherries, almonds, butter lettuce, market tomatoes and avocado; and to finish, maybe a red velvet-white chocolate cake pop! The cost is $17.95.
It's open for lunch 11 a.m. to 2 p.m. Monday through Friday; open for dinner 5 to 9 p.m. nightly; and open for brunch 9:30 a.m. to 2 p.m. Saturday and Sunday.
Lucca Café is at 6507 Quail Hill Parkway in Irvine at the Quail Hill Village Center. For more information, call (949) 725-1773 or visit luccacafe.net. |
/* Review of common sorting algorithms.
*
* In the routines of this class, we sort a given array, a, in increasing order.
*/
public class Sort2 {
public static void shell(int[] a) {
int len = a.length;
int i, j, k;
int tmp;
for (k = len / 2; k > 0; k /= 2) {
for (i = k; i < len; i++) {
tmp = a[i];
for (j = i; j >= k; j -= k) {
if (tmp < a[j - k])
a[j] = a[j - k];
else
break;
}
a[j] = tmp;
}
}
return;
}
    public static void heap(int[] a) {
        // Build a max-heap, then repeatedly move the largest element to the end.
        int len = a.length;
        for (int i = len / 2 - 1; i >= 0; i--)
            percDown(a, i, len);
        for (int i = len - 1; i > 0; i--) {
            swap(a, 0, i);
            percDown(a, 0, i);
        }
    }
    private static void percDown(int[] a, int i, int n) {
        // Sift a[i] down within a[0..n-1] to restore the max-heap property.
        int tmp = a[i];
        int child;
        while (i * 2 + 1 < n) {
            child = i * 2 + 1;
            if (child + 1 < n && a[child + 1] > a[child])
                child++;
            if (a[child] > tmp)
                a[i] = a[child];
            else
                break;
            i = child;
        }
        a[i] = tmp;
    }
public static void merge(int[] a) {
int len = a.length;
int[] tmp = new int[len];
merge(a, tmp, 0, a.length - 1);
}
private static void merge(int[] a, int[] tmp, int left, int right) {
int center = (left + right) / 2;
        // Base case must be left >= right; testing (left == center) would
        // return early on two-element ranges and leave them unsorted.
        if (left >= right)
            return;
merge(a, tmp, left, center);
merge(a, tmp, center + 1, right);
merge2(a, tmp, left, center, right);
return;
}
private static void merge2(int[] a, int[] tmp, int leftPos, int leftEnd,
int rightEnd) {
int rightPos = leftEnd + 1;
int numElem = rightEnd - leftPos + 1;
int pos = leftPos;
while (leftPos <= leftEnd && rightPos <= rightEnd) {
if (a[leftPos] < a[rightPos])
tmp[pos++] = a[leftPos++];
else
tmp[pos++] = a[rightPos++];
}
while (leftPos <= leftEnd)
tmp[pos++] = a[leftPos++];
while (rightPos <= rightEnd)
tmp[pos++] = a[rightPos++];
for (int i = 0; i < numElem; i++, rightEnd--)
// stupid mistake:
// a[rightEnd--] = tmp[rightEnd--];
a[rightEnd] = tmp[rightEnd];
return;
}
public static int cutoff = 3;
public static void quick(int[] a) {
int len = a.length;
quick(a, 0, len - 1);
}
private static void quick(int[] a, int left, int right) {
int i, j;
int pivot;
int len = right - left + 1;
if (len > cutoff) {
pivot = median3(a, left, right);
i = left;
j = right - 1;
while (true) {
while (a[++i] < pivot) {}
while (a[--j] > pivot) {}
if (i < j)
swap(a, i, j);
else
break;
}
swap(a, i, right - 1);
quick(a, left, i - 1);
quick(a, i + 1, right);
}
else {
insertion(a, left, right);
}
}
private static int median3(int[] a, int left, int right) {
int center = (left + right) / 2;
// we don't have to check whether left == center, since when
// the length of a is smaller than cutoff, we will use insertion
// sort instead.
if (a[left] > a[center])
swap(a, left, center);
if (a[left] > a[right])
swap(a, left, right);
if (a[center] > a[right])
swap(a, center, right);
swap(a, center, right - 1);
return a[right - 1];
}
private static void swap(int[] a, int i, int j) {
int len = a.length;
if (i >= len || j >= len) {
System.out.println("Out of index");
return;
}
int tmp = a[i];
a[i] = a[j];
a[j] = tmp;
return;
}
public static void insertion(int[] a) {
insertion(a, 0, a.length - 1);
}
public static void insertion(int[] a, int start, int end) {
int tmp;
int i, j;
for (i = start + 1; i <= end; i++) {
tmp = a[i];
for (j = i; j > start; j--) {
if (tmp < a[j - 1])
a[j] = a[j - 1];
else
break;
}
a[j] = tmp;
}
return;
}
public static void print(int[] a) {
int len = a.length;
for (int i = 0; i < len; i++)
System.out.print(a[i] + " ");
System.out.println();
}
public static void main(String args[]) {
int[] a = {2, 10, 6, 1, 15, 3, 11, 8, 7, 9};
//Sort2.insertion(a);
//Sort2.shell(a);
//Sort2.merge(a);
Sort2.quick(a);
Sort2.print(a);
}
} |
Dugald Bruce Lockhart
Background and education
A member of the Bruce Lockhart family, Lockhart was born in Fiji in 1968, the son of James Robert Bruce Lockhart (1941-2018), a diplomat, spy, artist, and author, and Felicity A. Smith. His grandfather, J. M. Bruce Lockhart, was an intelligence officer. His great-grandfather, J. H. Bruce Lockhart, and his great-uncles Rab Bruce Lockhart and Logie Bruce Lockhart, were all public school headmasters who played rugby union for Scotland. Another forebear, Sir Robert Bruce Lockhart, was an author and adventurer. His late uncle Sandy Bruce-Lockhart, Baron Bruce-Lockhart, was a politician.
Dugald Bruce Lockhart trained for a career in acting at the Royal Academy of Dramatic Art.
Career
Lockhart began as a stage actor, working with the Royal Shakespeare Company, the National Theatre and others. Since 1998, he has acted mostly with Propeller, an all-male theatre company of which he is now an associate director. He is also an associate of the Teatre Akadèmia Theatre Company in Barcelona and has directed Romeo and Juliet and As You Like It in Catalan, using new translations by Miqel Desclot. He teaches and directs at drama schools in London, including the Royal Central School of Speech and Drama, LAMDA, and the Italia Conti Academy of Theatre Arts.
He played David Cameron in The Three Lions, a comedy written by William Gaminara, a role for which he was nominated as best actor by The Stage at the Edinburgh Festival of 2013. Lockhart returned to the role when the play was later staged at the St James Theatre, London, in 2015, and stayed with the production when it moved on to the Liverpool Playhouse.
He is the author of a handbook for actors called Heavy Pencil, which is available on Amazon.
Family
Lockhart is married to the actress Penelope Rawlins. They have a son called Mackenzie, born in May 2015. |
Shielding requirements of a 3T MRI examination room to limit radiated emission Taking the emission requirements from IEC 60601-1-2 as a starting point, a detailed analysis of the shielding requirements of an examination room for a 3T Magnetic Resonance Imaging scanner has been made. A 3D Finite Element model of the scanner and shielding room has been built, and the influence of slits on the shielding performance has been investigated. Comparison between simulations of shielding effectiveness according to the IEEE Std 299 near-field method and simulations of shielding effectiveness according to the IEC 60601-1-2 far-field method shows excellent agreement. In addition, we have simulated near-field measurements with magnetic loop antennas. |
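The shielding-effectiveness comparison in the abstract above ultimately rests on a field ratio expressed in decibels. As a minimal sketch (assuming, on my part, the standard field-ratio definition SE = 20 log10(|E_incident| / |E_transmitted|) used in IEEE Std 299 style measurements):

```python
import math

def shielding_effectiveness_db(field_incident, field_transmitted):
    """Shielding effectiveness in dB from the incident/transmitted field ratio."""
    return 20.0 * math.log10(field_incident / field_transmitted)

# A shield that attenuates the field by a factor of 1000 provides 60 dB.
print(shielding_effectiveness_db(1.0, 0.001))
```

The same formula applies to magnetic-field ratios, which is what loop-antenna measurements of the kind simulated in the paper would record.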
#include "multilinetextbox.h"
#include "fontbase.h"
#include "document.h"
#include "graphicsport.h"
#include "woopsifuncs.h"
#include "stringiterator.h"
#include "woopsitimer.h"
#include "woopsikey.h"
#include "woopsi.h"
using namespace WoopsiUI;
MultiLineTextBox::MultiLineTextBox(s16 x, s16 y, u16 width, u16 height, const WoopsiString& text, s16 maxRows, GadgetStyle* style) : ScrollingPanel(x, y, width, height, style) {
_hAlignment = TEXT_ALIGNMENT_HORIZ_CENTRE;
_vAlignment = TEXT_ALIGNMENT_VERT_CENTRE;
_topRow = 0;
_opensKeyboard = true;
_borderSize.top = 3;
_borderSize.right = 3;
_borderSize.bottom = 3;
_borderSize.left = 3;
Rect rect;
getClientRect(rect);
_document = new Document(getFont(), "", rect.width);
_canvasWidth = rect.width;
_flags.draggable = true;
_flags.doubleClickable = true;
_maxRows = maxRows;
calculateVisibleRows();
// Set maximum rows if value not set
if (_maxRows == 0) {
_maxRows = _visibleRows + 1;
}
_cursorPos = 0;
_showCursor = true;
setText(text);
}
void MultiLineTextBox::drawText(GraphicsPort* port) {
// Early exit if there is no text to display
if (_document->getLineCount() == 0) return;
// Determine the top and bottom rows within the graphicsport's clip rect.
// We only draw these rows in order to increase the speed of the routine.
Rect rect;
port->getClipRect(rect);
s32 regionY = -_canvasY + rect.y; // Y co-ord of the visible region of this canvas
s32 topRow = getRowContainingCoordinate(regionY);
s32 bottomRow = getRowContainingCoordinate(regionY + rect.height);
// Early exit checks
if ((topRow < 0) && (bottomRow < 0)) return;
if ((bottomRow >= _document->getLineCount()) && (topRow >= _document->getLineCount())) return;
// Prevent overflows
if (topRow < 0) topRow = 0;
if (bottomRow >= _document->getLineCount()) bottomRow = _document->getLineCount() - 1;
// Draw lines of text
s32 currentRow = topRow;
// Draw all rows in this region
while (currentRow <= bottomRow) {
drawRow(port, currentRow);
currentRow++;
}
}
void MultiLineTextBox::drawRow(GraphicsPort* port, s32 row) {
u8 rowLength = _document->getLineTrimmedLength(row);
s16 textX = getRowX(row) + _canvasX;
s16 textY = getRowY(row) + _canvasY;
if (isEnabled()) {
port->drawText(textX, textY, _document->getFont(), _document->getText(), _document->getLineStartIndex(row), rowLength, getTextColour());
} else {
port->drawText(textX, textY, _document->getFont(), _document->getText(), _document->getLineStartIndex(row), rowLength, getDarkColour());
}
}
void MultiLineTextBox::drawContents(GraphicsPort* port) {
drawText(port);
// Draw the cursor
drawCursor(port);
}
void MultiLineTextBox::drawBorder(GraphicsPort* port) {
port->drawFilledRect(0, 0, getWidth(), getHeight(), getBackColour());
// Stop drawing if the gadget indicates it should not have an outline
if (isBorderless()) return;
port->drawBevelledRect(0, 0, getWidth(), getHeight(), getShadowColour(), getShineColour());
}
void MultiLineTextBox::getCursorCoordinates(s16& x, s16& y) const {
u32 cursorRow = 0;
x = 0;
y = 0;
// Only calculate the cursor position if the cursor isn't at the start of the text
if (_cursorPos > 0) {
// Calculate the row in which the cursor appears
cursorRow = _document->getLineContainingCharIndex(_cursorPos);
// Cursor line offset gives us the distance of the cursor from the start of the line
u8 cursorLineOffset = _cursorPos - _document->getLineStartIndex(cursorRow);
StringIterator* iterator = _document->getText().newStringIterator();
iterator->moveTo(_document->getLineStartIndex(cursorRow));
// Sum the width of each char in the row to find the x co-ord
for (s32 i = 0; i < cursorLineOffset; ++i) {
x += getFont()->getCharWidth(iterator->getCodePoint());
iterator->moveToNext();
}
delete iterator;
}
// Add offset of row to calculated value
x += getRowX(cursorRow);
// Calculate y co-ord of the cursor
y = getRowY(cursorRow);
}
void MultiLineTextBox::drawCursor(GraphicsPort* port) {
// Get the cursor co-ords
if (_showCursor) {
s16 cursorX = 0;
s16 cursorY = 0;
getCursorCoordinates(cursorX, cursorY);
// Adjust for canvas offsets
cursorX += _canvasX;
cursorY += _canvasY;
// Draw cursor
port->drawFilledXORRect(cursorX, cursorY, _document->getFont()->getCharWidth(getCursorCodePoint()), _document->getFont()->getHeight());
}
}
u32 MultiLineTextBox::getCursorCodePoint() const {
if (_cursorPos < _document->getText().getLength()) {
return _document->getText().getCharAt(_cursorPos);
} else {
return ' ';
}
}
// Calculate values for centralised text
u8 MultiLineTextBox::getRowX(s32 row) const {
Rect rect;
getClientRect(rect);
u8 rowLength = _document->getLineTrimmedLength(row);
u8 rowPixelWidth = _document->getFont()->getStringWidth(_document->getText(), _document->getLineStartIndex(row), rowLength);
// Calculate horizontal position
switch (_hAlignment) {
case TEXT_ALIGNMENT_HORIZ_CENTRE:
return (rect.width - rowPixelWidth) >> 1;
case TEXT_ALIGNMENT_HORIZ_LEFT:
return 0;
case TEXT_ALIGNMENT_HORIZ_RIGHT:
return rect.width - rowPixelWidth;
}
// Will never be reached
return 0;
}
s16 MultiLineTextBox::getRowY(s32 row) const {
// If the amount of text exceeds the size of the gadget, force
// the text to be top-aligned
if (_visibleRows <= _document->getLineCount()) {
return row * _document->getLineHeight();
}
// All text falls within the textbox, so obey the alignment
// options
s16 textY = 0;
s16 startPos = 0;
s32 canvasRows = 0;
s32 textRows = 0;
Rect rect;
getClientRect(rect);
// Calculate vertical position
switch (_vAlignment) {
case TEXT_ALIGNMENT_VERT_CENTRE:
// Calculate the maximum number of rows
canvasRows = _canvasHeight / _document->getLineHeight();
textY = row * _document->getLineHeight();
// Get the number of rows of text
textRows = _document->getLineCount();
// Ensure there's always one row
if (textRows == 0) textRows = 1;
// Calculate the start position of the block of text
startPos = ((canvasRows - textRows) * _document->getLineHeight()) >> 1;
// Calculate the row Y co-ordinate
textY = startPos + textY;
break;
case TEXT_ALIGNMENT_VERT_TOP:
textY = row * _document->getLineHeight();
break;
case TEXT_ALIGNMENT_VERT_BOTTOM:
textY = rect.height - (((_document->getLineCount() - row) * _document->getLineHeight()));
break;
}
return textY;
}
void MultiLineTextBox::calculateVisibleRows() {
Rect rect;
getClientRect(rect);
_visibleRows = rect.height / _document->getLineHeight();
}
void MultiLineTextBox::setTextAlignmentHoriz(TextAlignmentHoriz alignment) {
_hAlignment = alignment;
markRectsDamaged();
}
void MultiLineTextBox::setTextAlignmentVert(TextAlignmentVert alignment) {
_vAlignment = alignment;
markRectsDamaged();
}
bool MultiLineTextBox::cullTopLines() {
// Ensure that we have the correct number of rows
if ((_document->getLineCount() > _maxRows) && (_maxRows > -1)) {
_document->stripTopLines(_document->getLineCount() - _maxRows);
return true;
}
return false;
}
void MultiLineTextBox::limitCanvasHeight() {
_canvasHeight = _document->getPixelHeight();
Rect rect;
getClientRect(rect);
if (_canvasHeight < rect.height) _canvasHeight = rect.height;
}
void MultiLineTextBox::limitCanvasY() {
Rect rect;
getClientRect(rect);
// Ensure that the visible portion of the canvas is not less than the
// height of the viewer window
if (_canvasY + _canvasHeight < rect.height) {
jumpToTextBottom();
}
}
void MultiLineTextBox::jumpToTextBottom() {
Rect rect;
getClientRect(rect);
jump(0, -(_canvasHeight - rect.height));
}
void MultiLineTextBox::jumpToCursor() {
// Get the co-odinates of the cursor
s16 cursorX;
s16 cursorY;
getCursorCoordinates(cursorX, cursorY);
// Work out which row the cursor falls within
s32 cursorRow = _document->getLineContainingCharIndex(_cursorPos);
s16 rowY = getRowY(cursorRow);
// If the cursor is outside the visible portion of the canvas, jump to it
Rect rect;
getClientRect(rect);
if (rowY + _document->getLineHeight() + _canvasY > rect.height) {
// Cursor is below the visible portion of the canvas, so
// jump down so that the cursor's row is the bottom row of
// text
jump(0, -(rowY + _document->getLineHeight() - rect.height));
} else if (rowY + _canvasY < 0) {
// Cursor is above the visible portion of the canvas, so
// jump up so that the cursor's row is the top row of text
jump(0, -cursorY);
}
}
void MultiLineTextBox::setText(const WoopsiString& text) {
_document->setText(text);
cullTopLines();
limitCanvasHeight();
jumpToTextBottom();
markRectsDamaged();
if (raisesEvents()) {
_gadgetEventHandler->handleValueChangeEvent(*this);
}
}
void MultiLineTextBox::appendText(const WoopsiString& text) {
_document->append(text);
cullTopLines();
limitCanvasHeight();
jumpToTextBottom();
markRectsDamaged();
if (raisesEvents()) {
_gadgetEventHandler->handleValueChangeEvent(*this);
}
}
void MultiLineTextBox::removeText(const u32 startIndex) {
removeText(startIndex, _document->getText().getLength() - startIndex);
}
void MultiLineTextBox::removeText(const u32 startIndex, const u32 count) {
_document->remove(startIndex, count);
limitCanvasHeight();
limitCanvasY();
moveCursorToPosition(startIndex);
markRectsDamaged();
if (raisesEvents()) {
_gadgetEventHandler->handleValueChangeEvent(*this);
}
}
void MultiLineTextBox::insertText(const WoopsiString& text, const u32 index) {
_document->insert(text, index);
cullTopLines();
limitCanvasHeight();
moveCursorToPosition(index + text.getLength());
markRectsDamaged();
if (raisesEvents()) {
_gadgetEventHandler->handleValueChangeEvent(*this);
}
}
void MultiLineTextBox::setFont(FontBase* font) {
_style.font = font;
_document->setFont(font);
cullTopLines();
limitCanvasHeight();
limitCanvasY();
markRectsDamaged();
if (raisesEvents()) {
_gadgetEventHandler->handleValueChangeEvent(*this);
}
}
const u16 MultiLineTextBox::getPageCount() const {
if (_visibleRows > 0) {
return (_document->getLineCount() / _visibleRows) + 1;
} else {
return 1;
}
}
const u16 MultiLineTextBox::getCurrentPage() const {
// Calculate the top line of text
s32 topRow = -_canvasY / _document->getLineHeight();
// Return the page on which the top row falls
if (_visibleRows > 0) {
return topRow / _visibleRows;
} else {
return 1;
}
}
void MultiLineTextBox::onResize(u16 width, u16 height) {
// Ensure the base class resize method is called
ScrollingPanel::onResize(width, height);
// Resize the canvas' width
Rect rect;
getClientRect(rect);
_canvasWidth = rect.width;
_canvasHeight = rect.height;
_canvasX = 0;
_canvasY = 0;
calculateVisibleRows();
// Re-wrap the text
_document->setWidth(rect.width);
_document->wrap();
bool raiseEvent = cullTopLines() && raisesEvents();
limitCanvasHeight();
limitCanvasY();
if (raiseEvent) _gadgetEventHandler->handleValueChangeEvent(*this);
}
const u32 MultiLineTextBox::getTextLength() const {
return _document->getText().getLength();
}
void MultiLineTextBox::showCursor() {
if (!_showCursor) {
_showCursor = true;
markRectsDamaged();
}
}
void MultiLineTextBox::hideCursor() {
if (_showCursor) {
_showCursor = false;
markRectsDamaged();
}
}
void MultiLineTextBox::insertTextAtCursor(const WoopsiString& text) {
insertText(text, getCursorPosition());
jumpToCursor();
}
void MultiLineTextBox::moveCursorToPosition(const s32 position) {
GraphicsPort* port = newGraphicsPort(false);
// Erase existing cursor
drawCursor(port);
// Force position to within confines of string
if (position < 0) {
_cursorPos = 0;
} else {
s32 len = (s32)_document->getText().getLength();
_cursorPos = len > position ? position : len;
}
// Draw cursor in new position
drawCursor(port);
delete port;
}
void MultiLineTextBox::onClick(s16 x, s16 y) {
startDragging(x, y);
// Move cursor to clicked co-ords
Rect rect;
getClientRect(rect);
// Adjust x and y from screen co-ords to canvas co-ords
s16 canvasRelativeX = x - getX() - rect.x - _canvasX;
s16 canvasRelativeY = y - getY() - rect.y - _canvasY;
moveCursorToPosition(getCharIndexAtCoordinates(canvasRelativeX, canvasRelativeY));
}
void MultiLineTextBox::onDoubleClick(s16 x, s16 y) {
if (_opensKeyboard) woopsiApplication->showKeyboard(this);
}
void MultiLineTextBox::onKeyPress(Pad::KeyCode keyCode) {
processPhysicalKey(keyCode);
}
void MultiLineTextBox::onKeyRepeat(Pad::KeyCode keyCode) {
processPhysicalKey(keyCode);
}
void MultiLineTextBox::moveCursorUp() {
s16 cursorX = 0;
s16 cursorY = 0;
getCursorCoordinates(cursorX, cursorY);
// Get the midpoint of the cursor. We use the midpoint to ensure that
// the cursor does not drift off to the left as it moves up the text, which
// is a problem when we use the left edge as the reference point and the
// font is proportional
cursorX += _document->getFont()->getCharWidth(_document->getText().getCharAt(_cursorPos)) >> 1;
// Locate the character above the midpoint
s32 index = getCharIndexAtCoordinates(cursorX, cursorY - _document->getLineHeight());
moveCursorToPosition(index);
jumpToCursor();
}
void MultiLineTextBox::moveCursorDown() {
s16 cursorX = 0;
s16 cursorY = 0;
getCursorCoordinates(cursorX, cursorY);
// Get the midpoint of the cursor. We use the midpoint to ensure that
// the cursor does not drift off to the left as it moves down the text, which
// is a problem when we use the left edge as the reference point and the
// font is proportional
cursorX += _document->getFont()->getCharWidth(_document->getText().getCharAt(_cursorPos)) >> 1;
// Locate the character below the midpoint
s32 index = getCharIndexAtCoordinates(cursorX, cursorY + _document->getLineHeight());
moveCursorToPosition(index);
jumpToCursor();
}
void MultiLineTextBox::moveCursorLeft() {
if (_cursorPos > 0) {
moveCursorToPosition(_cursorPos - 1);
}
jumpToCursor();
}
void MultiLineTextBox::moveCursorRight() {
if (_cursorPos < (s32)_document->getText().getLength()) {
moveCursorToPosition(_cursorPos + 1);
}
jumpToCursor();
}
void MultiLineTextBox::processPhysicalKey(Pad::KeyCode keyCode) {
switch (keyCode) {
case Pad::KEY_CODE_LEFT:
moveCursorLeft();
break;
case Pad::KEY_CODE_RIGHT:
moveCursorRight();
break;
case Pad::KEY_CODE_UP:
moveCursorUp();
break;
case Pad::KEY_CODE_DOWN:
moveCursorDown();
break;
default:
// Not interested in other keys
break;
}
}
void MultiLineTextBox::handleKeyboardPressEvent(WoopsiKeyboard& source, const WoopsiKey& key) {
processKey(key);
}
void MultiLineTextBox::handleKeyboardRepeatEvent(WoopsiKeyboard& source, const WoopsiKey& key) {
processKey(key);
}
void MultiLineTextBox::processKey(const WoopsiKey& key) {
if (key.getKeyType() == WoopsiKey::KEY_BACKSPACE) {
// Delete the character before the cursor
if (_cursorPos > 0) removeText(_cursorPos - 1, 1);
} else if (key.getValue() != '\0') {
// Not modifier; append value
insertTextAtCursor(key.getValue());
}
}
s32 MultiLineTextBox::getRowContainingCoordinate(s16 y) const {
s32 row = -1;
// Locate the row containing the character
for (s32 i = 0; i < _document->getLineCount(); ++i) {
// Abort search if we've found the row below the y co-ordinate
if (getRowY(i) > y) {
if (i == 0) {
// If the co-ordinate is above the text, we return the top
// row
row = 0;
} else {
// Row within the text, so return the previous row - this is
// the row that contains the co-ordinate.
row = i - 1;
}
break;
}
}
// If the co-ordinate is below the text, row will still be -1.
// We need to set it to the last row
if (row == -1) row = _document->getLineCount() - 1;
return row;
}
u32 MultiLineTextBox::getCharIndexAtCoordinate(s16 x, s32 rowIndex) const {
// Locate the character within the row
s32 startIndex = _document->getLineStartIndex(rowIndex);
s32 stopIndex = _document->getLineLength(rowIndex);
s32 width = getRowX(rowIndex);
s32 index = -1;
StringIterator* iterator = _document->getText().newStringIterator();
iterator->moveTo(startIndex);
width += _document->getFont()->getCharWidth(iterator->getCodePoint());
for (s32 i = 0; i < stopIndex; ++i) {
if (width > x) {
if (i == 0) {
// If the co-ordinate is on the left of the text, we add nothing
// to the index
index = startIndex;
} else {
// Character within the row.
// This is the character that contains the co-ordinate.
index = startIndex + i;
}
break;
}
iterator->moveToNext();
width += _document->getFont()->getCharWidth(iterator->getCodePoint());
}
delete iterator;
// If the co-ordinate is past the last character, index will still be -1.
// We need to set it to the last character
if (index == -1) {
if (rowIndex == _document->getLineCount() - 1) {
// Index past the end point of the text, so return an index
// just past the text
index = startIndex + stopIndex;
} else {
// Index at the end of a row, so return the last index of the
// row
index = startIndex + stopIndex - 1;
}
}
return index;
}
u32 MultiLineTextBox::getCharIndexAtCoordinates(s16 x, s16 y) const {
s32 rowIndex = getRowContainingCoordinate(y);
return getCharIndexAtCoordinate(x, rowIndex);
}
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.jena.permissions.model.impl;
import java.util.Comparator;
import java.util.HashSet;
import java.util.Iterator;
import java.util.Set;
import java.util.SortedSet;
import java.util.TreeSet;
import org.apache.jena.graph.Node;
import org.apache.jena.graph.Triple;
import org.apache.jena.permissions.SecurityEvaluator;
import org.apache.jena.permissions.SecurityEvaluator.Action;
import org.apache.jena.permissions.impl.ItemHolder;
import org.apache.jena.permissions.impl.SecuredItemInvoker;
import org.apache.jena.permissions.model.SecuredContainer;
import org.apache.jena.permissions.model.SecuredModel;
import org.apache.jena.permissions.utils.ContainerFilter;
import org.apache.jena.permissions.utils.PermStatementFilter;
import org.apache.jena.rdf.model.Container;
import org.apache.jena.rdf.model.Literal;
import org.apache.jena.rdf.model.Property;
import org.apache.jena.rdf.model.RDFNode;
import org.apache.jena.rdf.model.Statement;
import org.apache.jena.shared.AddDeniedException;
import org.apache.jena.shared.AuthenticationRequiredException;
import org.apache.jena.shared.DeleteDeniedException;
import org.apache.jena.shared.ReadDeniedException;
import org.apache.jena.shared.UpdateDeniedException;
import org.apache.jena.util.iterator.ExtendedIterator;
import org.apache.jena.util.iterator.WrappedIterator;
import org.apache.jena.vocabulary.RDF;
/**
* Implementation of SecuredContainer to be used by a SecuredItemInvoker proxy.
*/
public class SecuredContainerImpl extends SecuredResourceImpl implements SecuredContainer {
/**
* Constructor
*
* @param securedModel the Secured Model to use.
* @param container The container to secure.
* @return The SecuredResource
*/
public static SecuredContainer getInstance(final SecuredModel securedModel, final Container container) {
if (securedModel == null) {
throw new IllegalArgumentException("Secured securedModel may not be null");
}
if (container == null) {
throw new IllegalArgumentException("Container may not be null");
}
// check that resource has a securedModel.
Container goodContainer = container;
if (goodContainer.getModel() == null) {
container.asNode();
goodContainer = securedModel.createBag();
}
final ItemHolder<Container, SecuredContainer> holder = new ItemHolder<>(goodContainer);
final SecuredContainerImpl checker = new SecuredContainerImpl(securedModel, holder);
// if we are going to create a duplicate proxy, just return this
// one.
if (goodContainer instanceof SecuredContainer) {
if (checker.isEquivalent((SecuredContainer) goodContainer)) {
return (SecuredContainer) goodContainer;
}
}
return holder.setSecuredItem(new SecuredItemInvoker(container.getClass(), checker));
}
// the item holder that contains this SecuredContainer.
private final ItemHolder<? extends Container, ? extends SecuredContainer> holder;
/**
* Constructor
*
* @param securedModel the Secured Model to use.
* @param holder The item holder that will contain this SecuredContainer
*/
protected SecuredContainerImpl(final SecuredModel securedModel,
final ItemHolder<? extends Container, ? extends SecuredContainer> holder) {
super(securedModel, holder);
this.holder = holder;
}
/**
* Returns the Object as an RDFNode. If it is a node return it otherwise convert
* it as a literal
*
* @param o the object to convert.
* @return an RDFNode
*/
protected RDFNode asObject(Object o) {
return o instanceof RDFNode ? (RDFNode) o : holder.getBaseItem().getModel().createTypedLiteral(o);
}
/**
* Create an RDFNode (Literal) from a string value and language.
*
* @param value the value
* @param language the language
* @return a Literal RDFNode.
*/
protected RDFNode asLiteral(String value, String language) {
return holder.getBaseItem().getModel().createLiteral(value, language);
}
/**
* @sec.graph Update
* @sec.triple Create SecTriple( this, RDF.li, o );
* @throws UpdateDeniedException
* @throws AddDeniedException
* @throws AuthenticationRequiredException if user is not authenticated and is
* required to be.
*/
@Override
public SecuredContainer add(final boolean o)
throws AddDeniedException, UpdateDeniedException, AuthenticationRequiredException {
return add(asObject(o));
}
/**
* @sec.graph Update
* @sec.triple Create SecTriple( this, RDF.li, o );
* @throws UpdateDeniedException
* @throws AddDeniedException
* @throws AuthenticationRequiredException if user is not authenticated and is
* required to be.
*/
@Override
public SecuredContainer add(final char o)
throws AddDeniedException, UpdateDeniedException, AuthenticationRequiredException {
return add(asObject(o));
}
/**
* @sec.graph Update
* @sec.triple Create SecTriple( this, RDF.li, o );
* @throws UpdateDeniedException
* @throws AddDeniedException
* @throws AuthenticationRequiredException if user is not authenticated and is
* required to be.
*/
@Override
public SecuredContainer add(final double o)
throws AddDeniedException, UpdateDeniedException, AuthenticationRequiredException {
return add(asObject(o));
}
/**
* @sec.graph Update
* @sec.triple Create SecTriple( this, RDF.li, o );
* @throws UpdateDeniedException
* @throws AddDeniedException
* @throws AuthenticationRequiredException if user is not authenticated and is
* required to be.
*/
@Override
public SecuredContainer add(final float o)
throws AddDeniedException, UpdateDeniedException, AuthenticationRequiredException {
return add(asObject(o));
}
/**
* @sec.graph Update
* @sec.triple Create SecTriple( this, RDF.li, o );
* @throws UpdateDeniedException
* @throws AddDeniedException
* @throws AuthenticationRequiredException if user is not authenticated and is
* required to be.
*/
@Override
public SecuredContainer add(final long o)
throws AddDeniedException, UpdateDeniedException, AuthenticationRequiredException {
return add(asObject(o));
}
/**
* @sec.graph Update
* @sec.triple Create SecTriple( this, RDF.li, o );
* @throws UpdateDeniedException
* @throws AddDeniedException
* @throws AuthenticationRequiredException if user is not authenticated and is
* required to be.
*/
@Override
public SecuredContainer add(final Object o)
throws AddDeniedException, UpdateDeniedException, AuthenticationRequiredException {
return add(asObject(o));
}
/**
* @sec.graph Update
* @sec.triple Create SecTriple( this, RDF.li, o );
* @throws UpdateDeniedException
* @throws AddDeniedException
* @throws AuthenticationRequiredException if user is not authenticated and is
* required to be.
*/
@Override
public SecuredContainer add(final RDFNode o)
throws AddDeniedException, UpdateDeniedException, AuthenticationRequiredException {
checkUpdate();
final int pos = holder.getBaseItem().size();
checkAdd(pos, o.asNode());
holder.getBaseItem().add(o);
return holder.getSecuredItem();
}
/**
* @sec.graph Update
* @sec.triple Create SecTriple( this, RDF.li, o );
* @throws UpdateDeniedException
* @throws AddDeniedException
* @throws AuthenticationRequiredException if user is not authenticated and is
* required to be.
*/
@Override
public SecuredContainer add(final String o)
throws AddDeniedException, UpdateDeniedException, AuthenticationRequiredException {
return add(o, "");
}
/**
* @sec.graph Update
* @sec.triple Create SecTriple( this, RDF.li, o );
* @throws UpdateDeniedException
* @throws AddDeniedException
* @throws AuthenticationRequiredException if user is not authenticated and is
* required to be.
*/
@Override
public SecuredContainer add(final String o, final String l)
throws AddDeniedException, UpdateDeniedException, AuthenticationRequiredException {
return add(asLiteral(o, l));
}
/**
* @sec.graph Update
* @sec.triple Create SecTriple( this, RDF.li, o );
* @throws UpdateDeniedException
* @throws AddDeniedException
* @throws AuthenticationRequiredException if user is not authenticated and is
* required to be.
*/
protected void checkAdd(final int pos, final Literal literal)
throws AddDeniedException, UpdateDeniedException, AuthenticationRequiredException {
checkAdd(pos, literal.asNode());
}
protected void checkAdd(final int pos, final Node node)
throws AddDeniedException, UpdateDeniedException, AuthenticationRequiredException {
checkCreate(new Triple(holder.getBaseItem().asNode(), RDF.li(pos).asNode(), node));
}
/**
* @sec.graph Read
* @sec.triple Read SecTriple( this, RDF.li, o );
*
* if {@link SecurityEvaluator#isHardReadError()} is true and the
* user does not have read access then {@code false} is returned.
*
* @throws ReadDeniedException
* @throws AuthenticationRequiredException if user is not authenticated and is
* required to be.
*/
@Override
public boolean contains(final boolean o) throws ReadDeniedException, AuthenticationRequiredException {
return contains(asObject(o));
}
/**
* @sec.graph Read
* @sec.triple Read SecTriple( this, RDF.li, o );
*
* if {@link SecurityEvaluator#isHardReadError()} is true and the
* user does not have read access then {@code false} is returned.
*
* @throws ReadDeniedException
* @throws AuthenticationRequiredException if user is not authenticated and is
* required to be.
*/
@Override
public boolean contains(final char o) throws ReadDeniedException, AuthenticationRequiredException {
return contains(asObject(o));
}
/**
* @sec.graph Read
* @sec.triple Read SecTriple( this, RDF.li, o );
*
* if {@link SecurityEvaluator#isHardReadError()} is true and the
* user does not have read access then {@code false} is returned.
*
* @throws ReadDeniedException
* @throws AuthenticationRequiredException if user is not authenticated and is
* required to be.
*/
@Override
public boolean contains(final double o) throws ReadDeniedException, AuthenticationRequiredException {
return contains(asObject(o));
}
/**
* @sec.graph Read
* @sec.triple Read SecTriple( this, RDF.li, o );
*
* if {@link SecurityEvaluator#isHardReadError()} is true and the
* user does not have read access then {@code false} is returned.
*
* @throws ReadDeniedException
* @throws AuthenticationRequiredException if user is not authenticated and is
* required to be.
*/
@Override
public boolean contains(final float o) throws ReadDeniedException, AuthenticationRequiredException {
return contains(asObject(o));
}
/**
* @sec.graph Read
* @sec.triple Read SecTriple( this, RDF.li, o );
*
* if {@link SecurityEvaluator#isHardReadError()} is true and the
* user does not have read access then {@code false} is returned.
*
* @throws ReadDeniedException
* @throws AuthenticationRequiredException if user is not authenticated and is
* required to be.
*/
@Override
public boolean contains(final long o) throws ReadDeniedException, AuthenticationRequiredException {
return contains(asObject(o));
}
/**
* @sec.graph Read
* @sec.triple Read SecTriple( this, RDF.li, o );
*
* if {@link SecurityEvaluator#isHardReadError()} is true and the
* user does not have read access then {@code false} is returned.
*
* @throws ReadDeniedException
* @throws AuthenticationRequiredException if user is not authenticated and is
* required to be.
*/
@Override
public boolean contains(final Object o) throws ReadDeniedException, AuthenticationRequiredException {
return contains(asObject(o));
}
/**
* @sec.graph Read
* @sec.triple Read SecTriple( this, RDF.li, o );
*
* if {@link SecurityEvaluator#isHardReadError()} is true and the
* user does not have read access then {@code false} is returned.
*
* @throws ReadDeniedException
* @throws AuthenticationRequiredException if user is not authenticated and is
* required to be.
*/
@Override
public boolean contains(final RDFNode o) throws ReadDeniedException, AuthenticationRequiredException {
// iterator checks reads
final SecuredNodeIterator<RDFNode> iter = iterator();
try {
while (iter.hasNext()) {
if (iter.next().asNode().equals(o.asNode())) {
return true;
}
}
return false;
} finally {
iter.close();
}
}
/**
* @sec.graph Read
* @sec.triple Read SecTriple( this, RDF.li, o );
*
* if {@link SecurityEvaluator#isHardReadError()} is true and the
* user does not have read access then {@code false} is returned.
*
* @throws ReadDeniedException
* @throws AuthenticationRequiredException if user is not authenticated and is
* required to be.
*/
@Override
public boolean contains(final String o) throws ReadDeniedException, AuthenticationRequiredException {
return contains(o, "");
}
/**
* @sec.graph Read
* @sec.triple Read SecTriple( this, RDF.li, o );
*
* if {@link SecurityEvaluator#isHardReadError()} is true and the
* user does not have read access then {@code false} is returned.
*
* @throws ReadDeniedException
* @throws AuthenticationRequiredException if user is not authenticated and is
* required to be.
*/
@Override
public boolean contains(final String o, final String l)
throws ReadDeniedException, AuthenticationRequiredException {
return contains(asLiteral(o, l));
}
protected int getAddIndex() {
int pos = -1;
final ExtendedIterator<Statement> iter = holder.getBaseItem().listProperties();
try {
while (iter.hasNext()) {
pos = Math.max(pos, getIndex(iter.next().getPredicate()));
}
} finally {
iter.close();
}
return pos + 1;
}
protected static int getIndex(final Property p) {
if (p.getNameSpace().equals(RDF.getURI()) && p.getLocalName().startsWith("_")) {
try {
return Integer.parseInt(p.getLocalName().substring(1));
} catch (final NumberFormatException e) {
// acceptable;
}
}
return -1;
}
/**
* An iterator of statements that have predicates that start with '_' followed
* by a number and for which the user has the specified permission.
*
* @param perm the permission to check
* @return an ExtendedIterator of statements.
*/
protected ExtendedIterator<Statement> getStatementIterator(final Action perm) {
return holder.getBaseItem().listProperties().filterKeep(new ContainerFilter())
.filterKeep(new PermStatementFilter(perm, this));
}
/**
* An iterator of statements that have predicates that start with '_' followed
* by a number and for which the user has the specified permissions.
*
* @param perm the permissions to check
* @return an ExtendedIterator of statements.
*/
protected ExtendedIterator<Statement> getStatementIterator(final Set<Action> perm) {
return holder.getBaseItem().listProperties().filterKeep(new ContainerFilter())
.filterKeep(new PermStatementFilter(perm, this));
}
@Override
public boolean isAlt() {
return holder.getBaseItem().isAlt();
}
@Override
public boolean isBag() {
return holder.getBaseItem().isBag();
}
@Override
public boolean isSeq() {
return holder.getBaseItem().isSeq();
}
/**
* @sec.graph Read
* @sec.triple Read on each triple ( this, rdf:li_? node ) returned by iterator;
*
* if {@link SecurityEvaluator#isHardReadError()} is true and the
* user does not have read access then an empty iterator is
* returned.
*
* @throws ReadDeniedException
* @throws AuthenticationRequiredException if user is not authenticated and is
* required to be.
*/
@Override
public SecuredNodeIterator<RDFNode> iterator() {
// listProperties calls checkRead();
SecuredStatementIterator iter = listProperties();
try {
SortedSet<Statement> result = new TreeSet<>(new ContainerComparator());
while (iter.hasNext()) {
Statement stmt = iter.next();
if (stmt.getPredicate().getOrdinal() > 0) {
result.add(stmt);
}
}
return new SecuredNodeIterator<>(getModel(),
new StatementRemovingIterator(result.iterator()).mapWith(s -> s.getObject()));
} finally {
iter.close();
}
}
/**
* @param perms the Permissions required on each node returned
* @sec.graph Read
* @sec.triple Read + perms on each triple ( this, rdf:li_? node ) returned by
* iterator;
*
* if {@link SecurityEvaluator#isHardReadError()} is true and the
* user does not have read access then an empty iterator is
* returned.
*
* @throws ReadDeniedException
* @throws AuthenticationRequiredException if user is not authenticated and is
* required to be.
*/
protected SecuredNodeIterator<RDFNode> iterator(final Set<Action> perms) {
checkRead();
final Set<Action> permsCopy = new HashSet<>(perms);
permsCopy.add(Action.Read);
final ExtendedIterator<RDFNode> ni = getStatementIterator(permsCopy).mapWith(o -> o.getObject());
return new SecuredNodeIterator<>(getModel(), ni);
}
/**
* @sec.graph Update
* @sec.triple Delete s as triple;
* @throws UpdateDeniedException
* @throws DeleteDeniedException
* @throws AuthenticationRequiredException if user is not authenticated and is
* required to be.
*/
@Override
public SecuredContainer remove(final Statement s)
throws UpdateDeniedException, DeleteDeniedException, AuthenticationRequiredException {
checkUpdate();
checkDelete(s.asTriple());
holder.getBaseItem().remove(s);
return holder.getSecuredItem();
}
/**
* @sec.graph Read
* @throws ReadDeniedException
* @throws AuthenticationRequiredException if user is not authenticated and is
* required to be.
*/
@Override
public int size() throws ReadDeniedException, AuthenticationRequiredException {
checkRead();
return holder.getBaseItem().size();
}
static class ContainerComparator implements Comparator<Statement> {
@Override
public int compare(Statement arg0, Statement arg1) {
return Integer.valueOf(arg0.getPredicate().getOrdinal()).compareTo(arg1.getPredicate().getOrdinal());
}
}
static class StatementRemovingIterator extends WrappedIterator<Statement> {
private Statement stmt;
public StatementRemovingIterator(Iterator<? extends Statement> base) {
super(base);
}
@Override
public Statement next() {
stmt = super.next();
return stmt;
}
@Override
public void remove() {
stmt.remove();
super.remove();
}
}
}
Effects of neonatal deafferentation on the superficial laminae of the superior colliculus. The effect of neonatal unilateral enucleation, or of enucleation combined with visual cortex ablation, on the neurons of the superior colliculus has been studied. Enucleation alone leads to a shrinkage of neurons in the upper portion of the stratum griseum superficiale but does not affect those in the lower half. The ratio of asymmetric/symmetric synaptic terminals is decreased from 85/15 to 75/25. A small number of abnormal synapses is also found. When both the eye and the contralateral visual cortex are removed at birth, neurons throughout the stratum griseum superficiale are reduced in size by 20-25%. The synaptic ratio is reduced further to 55/45, and about 15% of the post-synaptic structures encountered have abnormal presynaptic profiles attached. It is concluded that the extent of transneuronal atrophy and synaptic disruption is proportional to the amount of afferent input removed. Also, the synaptic modifications made in the upper layers of the superior colliculus appear to be local in nature, and there is no evidence of significant sprouting or new growth of the remaining axons.
Association mapping of winter hardiness and yield traits in faba bean (Vicia faba L.) Abstract. Improving frost tolerance and winter hardiness while retaining desirable agronomic features are the main objectives in winter faba bean (Vicia faba L.) breeding programs, especially in cool temperate regions of Europe. In this study, 189 single-seed-descent lines of winter faba bean from the Göttingen Winter Bean Population were evaluated in field trials (winter hardiness and yield traits). Seven traits were examined (three winter-hardiness traits and four yield traits) and scored. Of the 189 genotypes, 11 lines were identified as winter hardy and having high seed yield. The highest repeatability (h²) estimates were found for leaf frost susceptibility (0.86) among the winter-hardiness traits and for days to flowering (0.95) among the yield traits. In total, 25 putative quantitative trait loci (QTLs) were identified: for winter survival rate (one QTL), 1000-seed weight (one QTL), field plant height (two QTLs), days to flowering (nine QTLs), and seed yield (12 QTLs), based on the association mapping approach using 156 single nucleotide polymorphism (SNP) markers. Candidate genes were identified for QTLs by using synteny between Vicia faba and Medicago truncatula. The SNP markers identified in this study may be used to accelerate breeding programs in faba bean to improve winter hardiness and yield traits.
# The guess depends on the parity of the number of distinct characters
# in the input name (ported from Python 2 to Python 3).
name = input()

# Count occurrences of each character.
count = {}
for ch in name:
    try:
        count[ch] += 1
    except KeyError:
        count[ch] = 1

# An odd number of distinct characters means "male" per the problem statement.
male = len(count) % 2 == 1

if male:
    print("IGNORE HIM!")
else:
    print("CHAT WITH HER!")
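Since only the number of distinct characters matters, the same decision can be sketched more compactly with a `set`. The `decide` helper below is a name introduced here for illustration, not part of the original script:

```python
def decide(name: str) -> str:
    """Odd number of distinct characters -> "IGNORE HIM!", even -> "CHAT WITH HER!"."""
    return "IGNORE HIM!" if len(set(name)) % 2 == 1 else "CHAT WITH HER!"

print(decide("abcde"))  # 5 distinct characters -> IGNORE HIM!
print(decide("abab"))   # 2 distinct characters -> CHAT WITH HER!
```

Building the full occurrence dictionary, as the script above does, is harmless but unnecessary; `set(name)` captures exactly the distinct-character count the decision needs.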
/**
* @author Luis Delucio
*
*/
@Getter
@Setter
@ToString
public class Order {
private String id;
@SerializedName("payment_plan_id")
private String paymentPlanId;
@SerializedName("customer_id")
private String customerId;
@SerializedName("creation_date")
private Date creationDate;
@SerializedName("external_order_id")
private String externalOrderId;
private String description;
private BigDecimal amount;
private String status;
@SerializedName("total_amount_paid")
private BigDecimal totalAmountPaid;
@SerializedName("number_of_payments_made")
private Integer numberOfPaymentsMade;
@SerializedName("maximun_number_of_payments")
private Integer maximunNumberOfPayments;
@SerializedName("limit_date")
private Date limitDate;
@SerializedName("first_pay_limit_date")
private Date firstPayLimitDate;
@SerializedName("first_pay_amount")
private BigDecimal firstPayAmount;
@SerializedName("total_amount_to_pay")
private BigDecimal totalAmountToPay;
private String reference;
@SerializedName("barcode_url")
private String barcodeUrl;
/**
* The order's description. Required.
*/
public Order description(final String description) {
this.description = description;
return this;
}
/**
* The order's amount. Required.
*/
public Order amount(final BigDecimal amount) {
this.amount = amount;
return this;
}
/**
* The ID of the payment plan to use for the order. Required.
*/
public Order paymentPlanId(final String paymentPlanId) {
this.paymentPlanId = paymentPlanId;
return this;
}
/**
* The order's external ID. Optional.
*/
public Order externalOrderId(final String externalOrderId) {
this.externalOrderId = externalOrderId;
return this;
}
}
import { Operation } from './op';
declare const disjunct: Operation;
export { disjunct };
/**
 * Default implementation that defines no additional fields and carries no
 * extra logic; it simply provides the default, do-nothing behavior.
*/
class NoOpExtension implements GraphExtension {
@Override
public boolean isRequireNodeField() {
return false;
}
@Override
public boolean isRequireEdgeField() {
return false;
}
@Override
public int getDefaultNodeFieldValue() {
return 0;
}
@Override
public int getDefaultEdgeFieldValue() {
return 0;
}
@Override
public void init(Graph graph, Directory dir) {
// noop
}
@Override
public GraphExtension create(long byteCount) {
// noop
return this;
}
@Override
public boolean loadExisting() {
// noop
return true;
}
@Override
public void setSegmentSize(int bytes) {
// noop
}
@Override
public void flush() {
// noop
}
@Override
public void close() {
// noop
}
@Override
public long getCapacity() {
return 0;
}
@Override
public GraphExtension copyTo(GraphExtension extStorage) {
// noop
return extStorage;
}
@Override
public String toString() {
return "NoExt";
}
@Override
public boolean isClosed() {
return false;
}
}
The three nations started formal negotiations to tweak the trade agreement this month. The U.S. came into the talks seeking a major overhaul in the deal that removed most barriers to trade between the countries. It went into effect in 1994.
As a candidate, Trump railed against NAFTA and other free trade deals, saying they sapped manufacturing jobs from American workers as companies sought to pay lower wages abroad.
Mexican and Canadian officials have cast NAFTA as a success that needs only moderate revisions to keep up with changing economies.
The countries are two of the three largest trading partners with the United States.
Earlier this year, Trump said he decided not to terminate NAFTA and instead renegotiate it after calls with both countries' leaders.
At the start of the renegotiation talks, U.S. Trade Rep. Robert Lighthizer said the Trump administration believes "NAFTA has fundamentally failed many, many Americans and needs major improvement."
The development of the role of the clinical nurse specialist in the UK. The modern development of specialist nurses started in the early 1970s. This article provides a brief history of their evolution and, in particular, outlines some of the key factors in their emergence as key care providers in health care today. There is a constant threat that their clinical nursing background and key nursing skills will be sacrificed or substituted by a more medically oriented approach. The future, therefore, is one of striking the right balance between nursing and medical sub-specialism and holistic patient care.
import { Box, Button, Paper, TextField } from '@material-ui/core';
import React, { useState } from 'react';
import { testApi } from '../api/oktapi';
import {
WS_OKTA_API_TOKEN_KEY,
WS_OKTA_BASE_URL_KEY,
WS_OKTA_SETTINGS_VALID,
} from '../constants';
interface SettingsProps {}
export default function Settings(props: SettingsProps) {
const [settingsValid, setSettingsValid] = useState(
localStorage.getItem(WS_OKTA_SETTINGS_VALID) === 'true'
);
const [baseURL, setBaseURL] = useState(
localStorage.getItem(WS_OKTA_BASE_URL_KEY) ?? ''
);
const [apiKey, setApiKey] = useState(
localStorage.getItem(WS_OKTA_API_TOKEN_KEY) ?? ''
);
const handleTest = async (event: React.MouseEvent<HTMLButtonElement>) => {
const status = await testApi(baseURL, apiKey);
setSettingsValid(status);
};
  const handleSave = async (event: React.MouseEvent<HTMLButtonElement>) => {
    const status = await testApi(baseURL, apiKey);
    // Keep the UI state in sync with the validation result.
    setSettingsValid(status);
    if (status) {
      localStorage.setItem(WS_OKTA_BASE_URL_KEY, baseURL);
      localStorage.setItem(WS_OKTA_API_TOKEN_KEY, apiKey);
      localStorage.setItem(WS_OKTA_SETTINGS_VALID, 'true');
    } else {
      localStorage.setItem(WS_OKTA_SETTINGS_VALID, 'false');
    }
  };
return (
<Paper>
<Box component="form" sx={{ mt: 1, padding: 8 }}>
<TextField
margin="normal"
required
fullWidth
id="oktatenant"
label="Okta Tenant"
name="oktatenant"
defaultValue={baseURL}
onChange={(event) => {
setBaseURL(event.target.value);
}}
/>
<TextField
margin="normal"
required
fullWidth
name="apikey"
label="API Key"
id="apikey"
defaultValue={apiKey}
onChange={(event) => {
setApiKey(event.target.value);
}}
/>
<Box sx={{ display: 'flex' }}>
<Button
fullWidth
variant="outlined"
// sx={{ mt: 3, mb: 2, mr: 2 }}
onClick={handleTest}
>
{settingsValid ? 'Valid.' : 'Test'}
</Button>
<Button
fullWidth
type="submit"
variant="contained"
// sx={{ mt: 3, mb: 2 }}
disabled={!settingsValid}
onClick={handleSave}
>
Save
</Button>
</Box>
</Box>
</Paper>
);
}
from django.contrib import admin
# Register your models here.
from .models import Product_type
from .models import Product_details
admin.site.register(Product_type)
admin.site.register(Product_details)
#include "printer.hxx"
#include "sexpr.hxx"
#include "error.hxx"
#include "recumark.hxx"
#include "format.hxx"
namespace escheme
{
static char buffer[MAX_IMAGE_LENGTH];
void PRINTER::newline( SEXPR outport )
{
PIO::put(outport, '\n');
}
void PRINTER::print_list( SEXPR outport, const SEXPR n, QuoteStyle style )
{
if (markedp(n))
{
PIO::put(outport, "<recursive>...");
return;
}
RECURSIVE_MARKER rm(n);
SEXPR s = n;
PIO::put(outport, '(');
while ( anyp(s) )
{
print_sexpr(outport, getcar(s), style);
SEXPR tail = getcdr(s);
if (nullp(tail))
{
break;
}
else if (consp(tail))
{
PIO::put(outport, ' ');
s = tail;
if (markedp(s))
{
PIO::put(outport, "(...)");
break;
}
}
else
{
PIO::put(outport, " . ");
print_sexpr(outport, tail, style);
break;
}
}
PIO::put(outport, ')');
}
void PRINTER::print_vector( SEXPR outport, const SEXPR n, QuoteStyle style )
{
if (markedp(n))
{
PIO::put(outport, "#(...)");
return;
}
RECURSIVE_MARKER rm(n);
PIO::put(outport, "#(");
for (UINT32 i = 0; i < getvectorlength(n); ++i)
{
if (i != 0)
PIO::put(outport, ' ');
print_sexpr(outport, vectorref(n, i), style);
}
PIO::put(outport, ')');
}
void PRINTER::print_string( SEXPR outport, const char* p, QuoteStyle style )
{
if ( style == QUOTE )
PIO::put( outport, '"' );
while ( *p )
PIO::put( outport, *p++ );
if ( style == QUOTE )
PIO::put( outport, '"' );
}
void PRINTER::print_sexpr( SEXPR outport, const SEXPR n, QuoteStyle style )
{
if (nullp(n))
{
PIO::put(outport, "()");
}
else
{
switch (nodekind(n))
{
case n_cons:
print_list(outport, n, style);
break;
case n_vector:
print_vector(outport, n, style);
break;
case n_symbol:
print_string( outport, getname(n), NO_QUOTE );
break;
case n_fixnum:
PIO::put( outport, format("%d", getfixnum(n)) );
break;
case n_flonum:
PIO::put( outport, format("%f", getflonum(n)) );
break;
case n_string:
print_string( outport, getstringdata(n), style );
break;
case n_char:
if ( style == QUOTE )
{
const char ch = getcharacter(n);
if (ch == '\n')
{
PIO::put(outport, "#\\newline");
}
else if (ch == ' ')
{
PIO::put(outport, "#\\space");
}
else if (ch == '\t')
{
PIO::put(outport, "#\\tab");
}
else
{
PIO::put( outport, format( "#\\%c", ch ) );
}
}
else
{
PIO::put( outport, format( "%c", getcharacter(n) ) );
}
break;
case n_func:
case n_apply:
case n_callcc:
case n_eval:
case n_map:
case n_foreach:
case n_force:
PIO::put( outport, format( "{primitive:%s}", getprimname(n) ) );
break;
case n_port:
PIO::put( outport, format( "{port:%p}", n->id() ) );
break;
case n_string_port:
PIO::put( outport, format( "{string-port:%p}", n->id() ) );
break;
case n_closure:
PIO::put( outport, format( "{closure:%p}", n->id() ) );
break;
case n_continuation:
PIO::put( outport, format( "{continuation:%p}", n->id() ) );
break;
case n_bvec:
if ( 0 )
{
PIO::put( outport, format( "{byte-vector:%p}", n->id() ) );
}
else
{
PIO::put(outport, "#(");
for ( auto i = 0; i < getbveclength(n); )
{
const auto b = (unsigned)bvecref(n, i);
PIO::put(outport, format( "%d", b) );
i += 1;
if ( i < getbveclength(n) )
PIO::put( outport, ' ' );
}
PIO::put( outport, ')' );
}
break;
case n_environment:
PIO::put( outport, format( "{environment:%p}", n->id() ) );
break;
case n_promise:
PIO::put( outport, format( "{promise:%p}", n->id() ) );
break;
case n_code:
PIO::put( outport, format( "{code:%p}", n->id() ) );
break;
case n_dict:
PIO::put( outport, format( "{dict:%p}", n->id() ) );
break;
case n_assoc_env:
PIO::put( outport, format( "{assoc-env:%p}", n->id() ) );
break;
case n_free:
ERROR::severe( format( "{free-cell:%p}", n->id() ) );
break;
default:
{
ERROR::severe( format("bad node (%p, %d) during printing", n->id(), (int)nodekind(n)) );
}
break;
}
}
}
void PRINTER::print( SEXPR outport, const SEXPR n, QuoteStyle style )
{
print_sexpr( outport, n, style );
}
void PRINTER::print( const SEXPR s, QuoteStyle style )
{
print_sexpr( PIO::stdout_port, s, style );
}
}
import { Controller, Post, Body } from '@nestjs/common';
import { CreateUserDTO } from './dto/create-user.dto';
import { UsersService } from './users.service';
import { User } from './user.entity';
@Controller('users')
export class UsersController {
constructor(private usersService: UsersService) {}
@Post()
async create(@Body() createUserDTO: CreateUserDTO): Promise<User> {
return await this.usersService.createUser(createUserDTO);
}
}
PERFORMANCE EVALUATION OF MULTI-USER DETECTION FOR UPLINK WIRELESS COMMUNICATIONS WITH VARIOUS MULTIPLE ACCESS SCHEMES The growing demand for capacity in wireless communications is the driving force behind improving established networks and deploying new worldwide mobile standards. Multiple Access Interference (MAI) and Inter-Symbol Interference (ISI) limit the capacity of the system, and conventional detection approaches fall short in several respects. This paper proposes that these problems can be avoided if Parallel Interference Cancellation (PIC) multi-user detection with feedback (PICF MUD) is adopted in various multiple access schemes. In a DS-CDMA system using a PIC receiver with four stages, the Eb/No required to achieve a BER of 10⁻³ is 8 dB, whereas the single-stage PIC receiver with feedback requires only 2 dB to achieve the same BER. The proposed PICF MUD is extended to various multiple access systems to provide better BER performance.
The job market is tough. I've been pounding the pavement, interviewing and trying to land that new job just like the rest of you. It's absolutely exhausting. There are cover letters to personalize for each application, interviews (where you have to prove how much you know about their company in addition to showing off your skills) and, oh yes, homework. It's sort of mind-boggling how much you have to know. Well, mediabistro.com is making it easier for you.
The newest courses in the On Demand section are all about helping you land the job of your dreams. From lessons about the various edit tests that you will encounter (let me just tell you now, there are a lot) to blogs that will help you find that perfect job (hint: MediaJobsDaily), these courses are jam-packed with information.
With prices starting as low as $15 a month, I couldn't imagine a better deal right now. So go in and check out what you've been missing. You could learn how to craft that perfect cover letter that will get you in the door every time.
Performance, Training, Quality Assurance, and Reimbursement of Emergency Physician-Performed Ultrasonography at Academic Medical Centers Objective. To determine the current state of bedside emergency physician-performed ultrasonography in terms of prevalence, training, quality assurance, and reimbursement at emergency medicine residency programs. Methods. The link to a 10-question Web-based survey was e-mailed to ultrasound/residency directors at 122 emergency medicine residency programs in the United States. Results. The overall response rate was 84%. Ninety-two percent of programs reported 24-hour emergency physician-performed ultrasonography availability. Fifty-one percent of programs reported that a credentialing/privileging plan was in place at their hospital, and 71% of programs had a quality assurance/image review procedure in place. Emergency medicine specialty-specific guidelines of 150 ultrasonographic examinations and 40 hours of didactic instruction were met by 39% and 22% of residencies, respectively, although only 13.7% of programs were completing the 300 examinations recommended by the American Institute of Ultrasound in Medicine. Sixteen programs (16%) reported that they were currently billing for emergency physician-performed ultrasonography; of those not billing, 10 (12%) planned to bill within 1 year, and 32 (37%) planned to bill at some point in the future. Conclusions. Performance of and training in emergency physician-performed ultrasonography at academic medical centers continue to increase. The number of emergency medicine residency programs meeting specialty-specific guidelines has more than doubled in the last 4 years, but only a small number are meeting American Institute of Ultrasound in Medicine guidelines. Although only 16% of programs reported that they were currently billing for emergency physician-performed ultrasonography, most had plans to bill in the future.
Geochemistry of Kasnau-Matasukh lignites, Nagaur Basin, Rajasthan (India)

The distribution and vertical variation of geochemical components in the Kasnau-Matasukh lignites of Nagaur Basin, Rajasthan, were investigated using microscopy, proximate and ultimate analyses, Rock-Eval pyrolysis, X-ray diffraction and Fourier Transform Infrared analyses, and major/minor/trace element determination. The relationships of elements with ash content and with macerals have also been discussed. These lignites are stratified, black, and dominantly composed of huminite group macerals with subordinate amounts of the liptinite and inertinite groups. They are classified as type-III kerogen and are mainly gas prone in nature. The concentration (in vol%) of mineral matter is seen to increase towards the upper part of the seam, and so are the concentrations (in wt%) of volatile matter, elemental carbon and sulphur. The common minerals present in these lignites are mixed-layer clays, chlorite, and quartz, as identified by X-ray diffraction study. Compared with the world average for brown coal, the bulk concentration of Cu is anomalously high in most of the samples, while Cd is 2-3 times higher and Zn is high in one band. Based on their interrelationships, different pyrite forms are noticed to have different preferential enrichments of various elements. The concentration of disseminated pyrite is greater than that of the other pyrite forms, followed by discrete pyrite grains and massive pyrite.

Introduction

Coal is an organo-clastic sedimentary rock composed of lithified plant debris. The inorganic constituents occurring in different forms in coal of all ranks are collectively known as 'mineral matter'. Mineral matter may occur as crystalline solids, dissolved salts in pore waters, organometallic compounds, and discrete and disseminated grains.
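The abstract's comparison against world averages for brown coal is conventionally expressed as a concentration coefficient: the measured concentration of an element divided by its world average (Clarke value), in the sense of Ketris and Yudovich cited later in the text. A minimal sketch of that ratio follows; the numeric values are hypothetical placeholders, not measurements from this study:

```python
def concentration_coefficient(measured_ppm: float, world_average_ppm: float) -> float:
    """Ratio of an element's measured concentration to its world average
    (Clarke value) in brown coal; values well above 1 indicate enrichment."""
    return measured_ppm / world_average_ppm

# Hypothetical placeholder values: an element at 45 ppm against a 15 ppm
# world average is enriched three-fold.
cc = concentration_coefficient(45.0, 15.0)
print(round(cc, 1))  # 3.0
```

A statement such as "Cd is 2-3 times higher" corresponds to a concentration coefficient of roughly 2-3 under this definition.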
The type and quantity of mineral matter in coal depend on the nature of the vegetal matter, the mode of accumulation (allochthonous or autochthonous), the tectonic framework of the depositional basin, hydrological conditions, climatic conditions and the geomorphology of the hinterland. There are different phases during which mineral matter gets into coals: (i) inherent in the organic matter of plants, (ii) syngenetic minerals added during the stage of peat development, and (iii) secondary or epigenetic minerals deposited through circulating waters during the coalification process. Contributions on the significance of mineral matter and trace elements in coal have been made by many researchers. Ward et al. discussed the trace elements and mineral matter of New South Wales coals, Australia, in the light of quantitative data obtained through X-ray diffraction study. Mineralogical analysis of these Australian coals was further used for seam correlation (). A detailed account of the significance of mineral matter in coal is provided by Ward. Li et al. used electron microprobe study to determine the occurrence of non-mineral inorganic elements in macerals of low-rank coals, while Dai et al. studied the mineralogical and geochemical anomalies of Late Permian coals from southern China and evaluated the influence of hydrothermal fluids and terrigenous materials. They also carried out a study on the elements and phosphorus minerals in the Jurassic coals of the Tibetan Plateau (a, b). While working on mineral matter in coal, Ren discussed the significance of trace elements in coal for revealing geologic information about coal-bearing sequence formation, depositional conditions and the regional tectonic history of the basin. Trace elements accumulate in coal in two ways: through plants and animals, and through geologic processes during the peat and post-peatification stages ().
The impact of trace elements on the environment depends upon their modes of occurrence (mobility), concentration, and toxicity (Finkelman 1995), and thus the study of trace elements would help in formulating strategies to combat pollution related to coal combustion. There are certain elements, such as As, Be, Cd, Cr, Co, Cu, Pb, Mn, Hg, Mo, Ni, Sr, U, V, and Zn, which are environmentally more sensitive and have an impact on the environment when released into the atmosphere, especially after their combustion in thermal power plants (Pickhardt 1989; Singh et al. 2012, 2014). The mean abundance of elements in coal is also important for geochemical comparisons (Ketris and Yudovich 2009). Silicates, carbonates and sulphates are the major minerals in coal and host most of the elements, but some elements such as Ge, B, Br, Be, and Cl are associated with the organic matter (Finkelman 1995). Though various workers have contributed to the geological aspects of the Kasnau-Matasukh lignites of Nagaur basin, no work has been carried out on the petrological and geochemical aspects of these lignites. Therefore, the present study has been undertaken to examine the distribution and variation of the geochemical constituents, including major/minor/trace elements, vertically along the seam profile of these lignites. Further, the inter-relationships among the geochemical constituents and also with petrographic elements have been discussed. This would help in planning the strategy for their utilization.

Geological setting

The sedimentary tract of Rajasthan is spread over a large area of 120000 km² and forms the eastern flank of the Indus shelf. The entire sedimentary tract has been sub-divided into four basins: (i) Palana-Nagaur basin, (ii) Jaisalmer basin, (iii) Barmer basin, and (iv) Sanchor basin (Jodha 2008). The present investigation has been carried out on the Kasnau-Matasukh lignites.
These lignites occur in the Palana-Nagaur basin, which is an E-W trending elongated basin. It extends for 200 km in length and 50 km in width. The Kasnau-Matasukh block is located in Jayal Tehsil of Nagaur district. The Nagaur basin has several disconnected small sub-basins of the Palana Formation, indicating an undulating paleotopography. The Nagaur basin is linked to a 5-6 m wide channel and is hence known as a link basin (Lal and Regar 1991). Though the structural features indicate low tectonic disturbance in the area, gravity survey has indicated the presence of a gravity low of 3-4 km width trending NW-SE, indicating the presence of a link channel broadening towards the SE. The magnetic survey of the area also substantiates the subsurface structures delineated through the gravity survey and shows NW-SE elongation of the magnetic contours (Lal and Regar 1991). Geological work in and around Nagaur was initially taken up by Blanford, who correlated the Jodhpur set with the Vindhyans because of their close resemblance. The interpretation of exploration data of the Nagaur basin has been incorporated in the reports of Mukhopadhyay (1974-1975), Munshi (1975-1977) and Faruqi (1978-1979, 1982-1983). The lignite-bearing Lower Tertiary sediments of the Palana Formation are deposited unconformably over the Nagaur Formation. The presence of one lignite seam, intersected by a number of dirt bands, has been identified in the block. The Tertiary sequence of rocks comprises three formations: Palana, Marh and Jogira, in ascending order (Jodha 2009). The area has scanty outcrops. The lignite occurrences in the Nagaur basin are associated with the Palana Formation of Paleocene age. The seams have been reported between 50 m and 150 m depths in the Palana Formation (Jodha 2009). Based on the recovered palynomorphs comprising pteridophytes, angiosperms, algae and fungi, Kulshrestha et al. and Shah and Kar have assigned a Paleocene age to this Formation.
The general stratigraphic succession of the rocks in the basin is given in Table 1 and the general geological map is shown in Fig. 1. The litholog (after Ghose 1983) and megascopic profile of the Kasnau-Matasukh lignite seam, prepared for this study, are shown in Fig. 2a, while the geological section is shown in Fig. 2b.

Method of study

Lignite samples have been collected from the Kasnau-Matasukh mine of Nagaur basin of Rajasthan (Fig. 1) following the pillar sampling method (Schopf 1960), so that the full lignite seam thickness may be reconstructed in the laboratory. The samples have been crushed and reduced in quantity through quartering and coning to prepare eight composite samples, which were subjected to various analyses. The samples were ground to pass 18 mesh size to prepare polished mounts for petrography, and further ground to pass 70 mesh size for the various chemical analyses, such as proximate, ultimate, Rock-Eval pyrolysis and major/minor/trace element analyses. Maceral analysis has been carried out to see the distribution of huminite, liptinite and inertinite group macerals. This was performed under reflected light using a Leitz Orthoplan-Pol microscope equipped with a Wild Photoautomat MPS 45 in the Coal and Organic Petrology Laboratory, Department of Geology, Banaras Hindu University. The line-to-line and point-to-point spacing was maintained at 0.4 mm, and more than 600 counts have been taken on each sample following the methodology given by Taylor et al.; huminite macerals have been termed and described as per ICCP (1994), while the ICCP classification has been followed for inertinite macerals. The vitrinite/huminite reflectance (VRo) was measured at the National Metallurgical Laboratory, Jamshedpur, following ISO 7404-5:2009 (standards used: spinel, yttrium aluminium garnet, zirconia). On each sample, a minimum of 200 measurements was taken.
The proximate analysis has been carried out as per BIS, while the elemental analysis (C, H, N, O, and S) has been performed at CMPDI, Ranchi, on an Elementar Analysensysteme Vario-III as per ASTM D5373-08. The pyrolysis has been carried out on a high-precision Rock-Eval-6 (make: Vinci Technologies, France) at the R & D department, Oil India Ltd, Duliajan (Assam) on fourteen lignite samples (represented as four composite samples). This is a programmed pyrolysis system and is well suited to assessing the source rock potential for hydrocarbons. The significance of this technique is that the coal samples, as such, are analyzed to determine the various components. The analysis is performed under controlled temperature, and the coal samples are heated in the absence of oxygen. The produced compounds are quantitatively assessed. During heating, the oxygenated compounds released from the mineral matter present in the coal are excluded. The quantitative measurement of the various fractions of volatile/non-volatile organic compounds, the source rock potential and the degree of maturation of the lignite samples is obtained through this analysis. The pyrolysis of the Kasnau-Matasukh lignite samples has been carried out following the procedures of Espitalié et al. (1977, 1984, 1986). The samples were heated in an open pyrolysis system under non-isothermal conditions, and the recorded FID signal is divided into two surfaces, S1 and S2, which are expressed in mg HC/g of coal. The method is completed by combustion (oxidation) of the residual rock recovered after pyrolysis at 850°C under nitrogen. This is required to avoid incomplete combustion. The released CO and CO2 are monitored online through an infra-red cell. This complementary data acquisition helps in the determination of total organic carbon (TOC) and total mineral or inorganic carbon (TMC or TIC).
The elements Fe, Ca, Mg, Mn, K, Na, Cu, Co, Ni, Cr, Zn, Pb and As have been determined on whole coal samples in the Department of Botany, BHU, Varanasi. For the determination of these elements, the coal samples have been digested with 2.5 mL of HNO3 and HClO4 in a 10:1 ratio on a hot water plate following the method of Eaton et al. The mixture is then filtered using Whatman filter paper (No. 41), and the digested samples are rinsed with 1% conc. HNO3. The digest is then transferred to a separate test tube and the volume made up to 20 mL. The digested samples have been used for analyzing the concentrations of various elements on an Atomic Absorption Spectrophotometer (AAS, Model Perkin Elmer Analyst 800), and the standards used in the analysis were AccuStandard solutions obtained from Merck, KGaA, Darmstadt, Germany. Data for major, minor and trace elements reported in the present paper are means of three independent observations. The measured values have shown relative standard deviations of less than 5% for all the elements in the analyzed samples. Fourier Transform Infrared spectra have been recorded on an FTIR spectrophotometer (PerkinElmer Spectrum version 10.03.05) using KBr pellets (transmission mode) in the Department of Chemistry, Banaras Hindu University. A coal:KBr mixture at a 1:100 ratio has been used, and 20 scans have been taken at a spectral resolution of 4 cm⁻¹ over the range 400-4000 cm⁻¹. X-ray diffraction data have been obtained with the help of a computer-controlled X-ray diffractometer (Panalytical X'Pert High Score (Plus) v39 database) in the Department of Geology, Banaras Hindu University. The operating parameters in the present study are: start angle 2°; target Cu Kα radiation; stop angle 60°; step size 0.0250; and 2-theta configuration.

Petrographic characteristics

These lignites are stratified in nature and are of black color.
Huminite is the main component in these lignites, formed due to anaerobic preservation of lignocellulose material in the mire. Liptinite and inertinite macerals occur in low concentrations. Huminite (83.9 %-92.5 %; av. 87.3 %, mineral matter free basis) is largely contributed by detrohuminite and telohuminite. Detrohuminite is represented by densinite (19.2 %-42.5 %; av. 31.7 %, mineral matter free basis) and attrinite (0 %-13.3 %; av. 5.9 %, mineral matter free basis), while telohuminite is represented by ulminite-A (24.9 %-38.6 %; av. 30.5 %, mineral matter free basis), ulminite-B (13.4 %-29.1 %; av. 18.2 %, mineral matter free basis) and textinite, which occurs in very low amount (<1 %). The liptinite group (5.7 %-13.2 %; av. 10.9 %) and inertinite group (0.2 %-4.0 %; av. 1.9 %) are low in concentration (Table 2). Mineral matter ranges between 3.5 % and 12.0 % (av. 7.7 %). The vertical variation of group macerals and mineral matter from the base of the seam is shown in Fig. 3. Though there is no specific trend of variation, huminite shows a high concentration in the upper part while liptinite shows a reverse trend. Inertinite is less at the bottom. Mineral matter is more towards the upper part of the seam. The variation has environmental implications. The clastic mineral matter relates directly to the water cover in the basin and, therefore, it increases with the increase in water cover during the formation of the upper part of the lignite seam. This is also supported by the occurrence of a high concentration of huminite group macerals during this period.

Chemical attributes
These lignites have a high volatile matter content (52.6 %-67.0 %, daf basis; av. 58.3 %) with moderate ash yield (3.0 %-18.2 %; av. 8.4 %). The ultimate analysis (av. values on daf basis) shows that these lignites contain 54.0 % carbon, 5.4 % hydrogen, 0.8 % nitrogen, 35.9 % oxygen and 3.8 % sulfur (Table 2). The vertical variation of the chemical components along the seam profile is shown in Fig. 4.
Volatile matter shows an increasing trend towards the upper part of the seam while fixed carbon shows a reverse trend. Carbon and sulfur show an increasing trend towards the upper part while other ultimate components like hydrogen, nitrogen and oxygen do not show any definite trend. Variable concentrations and dimensions of undecomposed, partly decomposed and completely decomposed wood have been noticed in the Kasnau-Matasukh lignites. This appears to have affected the proximate and ultimate composition of this lignite from bottom to top, because these three components differ in their organic geochemical constitution.

Hydrocarbon potential
These lignites of the Nagaur basin have attained a thermal maturity indicated by vitrinite reflectance (VRo) between 0.23 % and 0.30 % (Table 3), which puts them as 'low rank C' coals as per ISO 11760. The analytical results of Rock-Eval pyrolysis of Kasnau-Matasukh lignites show that S1 values (free hydrocarbons distilled out of the samples at initial heating to 300 °C) vary from 1.16 to 2.83 mg HC/g. Considering 1 mg HC/g as its cut-off value, this lignite may be considered a good source rock. Similarly, S2 values (hydrocarbons generated through thermal cracking, which actually indicate the quantity of hydrocarbons that the lignite may potentially produce) are many-fold higher than the free hydrocarbons (oil already generated in the lignite, occurring as free hydrocarbons in the samples) and vary from 48.65 to 87.84 mg HC/g (av. 68.99 mg HC/g). Taking 5 as its cut-off value, this also indicates a good source rock for hydrocarbon generation. The S3 values vary from 24.34 to 29.89 mg CO2/g and represent the trapped carbon dioxide which is released during the pyrolysis up to a temperature of 390 °C. This is also proportional to the oxygen present in the Kasnau-Matasukh lignites of the Nagaur basin.
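The two cut-off values above amount to a simple screening rule. The sketch below encodes it; the thresholds (S1 > 1 mg HC/g, S2 > 5 mg HC/g) are those quoted in the text, while the function name and test samples are illustrative only:

```python
def good_source_rock(s1, s2):
    """Screen a sample with the S1/S2 cut-offs used in the text.

    s1: free hydrocarbons (mg HC/g), cut-off 1.0
    s2: crackable hydrocarbons (mg HC/g), cut-off 5.0
    """
    return s1 > 1.0 and s2 > 5.0

# Extremes of the measured Kasnau-Matasukh ranges
assert good_source_rock(1.16, 48.65)     # lowest measured pair
assert good_source_rock(2.83, 87.84)     # highest measured pair
assert not good_source_rock(0.5, 3.0)    # hypothetical lean sample
```

Every Kasnau-Matasukh sample passes both cut-offs, which is the basis of the "good source rock" assessment in the text.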
Total organic carbon (TOC) content of these lignite samples exhibits a wide range from 3.27 % to 43.92 % with an average of 31.08 %, while the total inorganic carbon (TIC) values show a similar trend and range from 1.85 % to 2.73 %. The vertical variation of the Rock-Eval data from the base of the lignite seam is shown in Fig. 5. It is evident from this figure and the data (Table 4) that the S1 value is low at the bottom of the seam (Fig. 5). Total organic carbon content has an increasing trend towards the middle of the seam and decreases in the upper part. The total inorganic carbon (TIC) values also have a similar trend and are derived from the carbonates. In this seam, the sulfur content (varying from 3.2 % to 4.4 %) maintains a strong negative correlation (r = -0.81; P value = 0.399) with the TOC content and also with TIC (r = -0.78; P value = 0.433). The organic matter (OM, obtained by deducting the ash content from hundred) shows a variation from 81.8 % to 97 %. Coal acts as a good source rock for hydrocarbon generation. The H/C ratio, in the 0.8-0.9 range, is a good indicator of a source rock having hydrocarbon potential (Powell and Boreham 1994). Certain coals with low liptinite content have hydrogen-rich vitrinite which generates oil (Bertrand 1989; Singh 2012). The generated hydrocarbon products have a finite storage capacity, and until this capacity is exceeded, no oil expulsion takes place (Powell 1978; McAuliffe 1979; Durand 1983; Tissot and Welte 1984). Singh et al. have studied the hydrocarbon potential of lignites of the Cambay basin and Bikaner basin (India), while the sub-bituminous coals of east Kalimantan (Indonesia) have also been investigated for their liquid hydrocarbon potential. The cross plot of hydrogen index (HI) with oxygen index (OI) and Tmax of Kasnau-Matasukh lignite (Fig. 6) indicates its immaturity.
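The hydrogen and oxygen indices used in the Fig. 6 cross plot follow the standard Rock-Eval normalization of S2 and S3 by TOC. A minimal sketch using the average S2 and TOC values quoted above; since no average S3 is reported, the mid-range value is used here as an assumption:

```python
def hydrogen_index(s2, toc):
    """HI = 100 * S2 / TOC, in mg HC / g TOC (standard Rock-Eval convention)."""
    return 100.0 * s2 / toc

def oxygen_index(s3, toc):
    """OI = 100 * S3 / TOC, in mg CO2 / g TOC."""
    return 100.0 * s3 / toc

hi = hydrogen_index(68.99, 31.08)   # av. S2 and TOC from the text
oi = oxygen_index(27.0, 31.08)      # S3 taken mid-range: an assumption
assert 221 < hi < 223               # ~222 mg HC/g TOC
assert 85 < oi < 89                 # ~87 mg CO2/g TOC
```

These index values, plotted against Tmax or against each other, are what place the samples in the kerogen-type fields of the cross plots.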
This lignite falls closer to the zone of organic-rich type-III kerogen, which is formed under topogenous conditions, as also revealed by a cross plot between total organic carbon (TOC) and sulfur content (Fig. 7), a plot proposed by Jasper et al. The high volatile matter contents (Table 2) are an indication of good hydrocarbon potential as per Cudmore and Davis et al. These lignites have the potential of generating mainly gaseous hydrocarbons. The cross plot between vitrinite reflectance and HI (Fig. 8) also indicates that these lignites are mainly gas prone. The details of the maturity and oil generating potential of the lignites of the entire Bikaner-Nagaur basin have been discussed in detail by Singh et al.

X-ray diffraction (XRD) and Fourier transform infrared (FTIR) studies
XRD spectra of whole coal and low temperature ash samples of the Kasnau-Matasukh lignites are shown in Fig. 9a, b. The minerals in these coals were identified by comparing 'd' values as per Lindholm. The common minerals identified from the XRD spectrum of the whole coal sample are biotite, gypsum, chlorite, goethite/laumontite, quartz, barite, dolomite, haematite and marcasite. The minerals identified in the low temperature ash include goethite/laumontite, anorthite, quartz, haematite and mixed clay. Kaolinite, illite and chlorite are the major mixed clay minerals. Haematite, goethite and marcasite are the major iron-containing minerals while gypsum, dolomite and laumontite are calcium rich. FTIR spectra are useful for the identification of minerals associated with the coal structures (Karr 1978). The peaks in the FTIR spectra of coal between 1100 and 400 cm-1 are of quartz and of clay minerals of the kaolinite, illite and montmorillonite groups.

Geochemistry of major/minor and trace elements
The study of trace elements in coal has been given more impetus during the last few decades owing to their environmental implications.
Mode of occurrence of major, minor and trace elements in coal may be known through direct and indirect methods (Eskenzy and Stefanova 2007). In the lignite samples of Kasnau-Matasukh, the mode of occurrence of elements has been studied through the indirect method. Here, correlation coefficients of the elements with ash yield, petrographic content and also among themselves have been calculated. The concentration of elements in the analysed samples has been compared with the world averages in lignite, taken from Valkovic and Clarke and, for the rest of the elements, from the brown coal values of Ketris and Yudovich. As we can see from Table 6, the concentration of Cu is very high in all the bands, over 70 times the world average in brown coals in the KM-7 band. Similarly, Cd is 2-3 times high in almost all the bands while Zn is high in the KM-3 band. The rest of the elements have a normal concentration in the Kasnau-Matasukh lignites. The vertical variation of the various major/minor and trace elements along the lignite seam profile is shown in Fig. 10. Though there is no prominent trend in the distribution of these elements, the concentration of elements like Mn, Na, Cu, Ni, Co, Cr, Pb and Cd is higher towards the upper part of the seam, as revealed in Fig. 10. Sulfur concentration is high in the Kasnau-Matasukh lignites. Pyrite is formed in coal from H2S and Fe in solution, which involves bacterial reduction of SO4 to H2S at pH 7-4.5 (Ryan and Ledda 1997). It occurs in various forms in the Kasnau-Matasukh lignites. As analyzed under the microscope, disseminated pyrite dominates in these lignites (av. 41 %) over the other pyrite forms, followed by discrete pyrite grains (av. 23.8 %) and massive pyrite (av. 11.7 %) (Table 7). Some photomicrographs of these pyrite forms are shown in Fig. 11. The clustered framboidal pyrites are more common in the middle part of the seam while single framboids are more towards the upper part.
Based on the values of the correlation coefficient, preferential enrichment of Ni, Pb and Co is seen in pyrite. Finkelman also reported the association of Co with sulfides, though it is also found associated with clays and organic matter. Co may also occur as siegenite and cattierite. Dale et al. reported the occurrence of Co associated with silicates in Australian coals. Framboidal pyrite has shown preferential enrichment of Cu, Pb, Co, Cr and Ni; disseminated pyrite shows an affinity with Ni and Co, while discrete pyrite grains show affinity with Pb and Co. Similarly, massive pyrite has a close affinity with Fe and Zn, while pyrite occurring as fissure and crack fillings has affinity with Cd and Mg. Cadmium is normally associated with sphalerite (ZnS), though it is also found in other sulphides (Finkelman 1994). This has also been documented by Swaine, Goodarzi, and Dale et al. Ash yield shows a strong affinity with Mn (r = 0.728) and Na (r = 0.744) among the major elements and with Pb (r = 0.786) and Co (r = 0.65) among the trace elements. Eskenzy also reported a positive correlation of ash content with Mn and Co, and observed the association of Pb with organic as well as inorganic fractions in Bulgarian coal. On the other hand, the inertinite maceral group has shown a strong affinity with Mg and Zn, while huminite has a strong affinity with Mn, and liptinite relates well with Cu and Cr. These elements could either be associated with the organic molecules or with the minerals occurring intergrown with the macerals. Some elements, on the other hand, could be related to those minerals which occur as surface blanketing or as superficial mounting over the surface of the macerals. While working on the Shenbei Tertiary lignites of China, Ren et al. reported Cr, Co, Ni, Cu, V and Zn to be associated with organic macromolecules and suggested their enrichment during the coal-forming or early diagenesis process.
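The elemental affinities quoted throughout this section (e.g., r = 0.728 for ash-Mn) are Pearson correlation coefficients. A minimal, dependency-free implementation is sketched below; the example data are synthetic, not values from Tables 6-8:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Synthetic check: a perfectly anticorrelated linear pair gives r = -1
assert abs(pearson_r([3.2, 3.8, 4.4], [40.0, 31.0, 22.0]) + 1.0) < 1e-9
```

Note that with only four composite samples, even an |r| around 0.8 carries a sizeable p-value, so the reported correlations should be read as indicative rather than conclusive.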
Eskenzy and Stefanova believe that the organically bound parts of the elements are generally higher in low-rank coals. Due to lack of evidence regarding the mode of occurrence of Ni in coal, its relation is yet to be precisely established (Finkelman 1994). It may be organically bound or it could also be associated with sulfides. Dale et al. reported Ni from both monosulphides and organic matter. Ni relates with Na, K and Co in the Kasnau-Matasukh lignites, which is in agreement with the work of Singh et al. (2015a, b) on the nearby Barsingsar and Gurha lignites of the Bikaner-Nagaur basin. KM-3, which is a matrix-rich stratified band, contains high concentrations of Zn, Cu and Cd. Zinc is considered a notorious contaminant and occurs in all coals in an HCl-soluble phase. As revealed in the correlation matrix among the elements in the Kasnau-Matasukh lignites, Cd shows a strong affinity with Cu while Pb has a strong affinity with Co and Ni. Pb also has an affinity with sulfides, especially pyrite. Cr relates strongly with Cu and occurs in sulphides, while Co maintains a positive affinity with Na and Mn (Table 8).

Conclusion
1. These lignites are predominantly composed of the huminite group of macerals, while liptinite and inertinite macerals occur in lesser concentration. Huminite shows a high concentration in the upper part, indicating anaerobic degradation during that period. Mineral matter is more towards the upper part of the seam, indicating a wet environment.
2. Volatile matter content is high while ash yield is moderate. Sulfur content of these lignites is moderately high. There is an increase in volatile matter, carbon and sulfur contents towards the upper part of the seam.
3. S1 values are low at the bottom, while S2 values are higher in the middle part of the seam and decrease towards both the top and the bottom. Total organic carbon content is higher in the middle part of the seam and decreases towards the top. The study reveals that these lignites are type-III kerogen and are mainly gas prone.
4. The XRD study reveals the presence of mixed clay minerals including kaolinite, illite and chlorite. The peaks in the FTIR spectra between 1100 and 400 cm-1 further support the presence of these clay minerals.
5. The concentration of Cu is very high in all the samples, over 70 times the world average in the KM-7 band. Similarly, Cd is 2-3 times high in almost all the samples while Zn is high in the KM-3 band. The concentration of elements like Mn, Na, Cu, Ni, Co, Cr, Pb and Cd is higher towards the upper part of the seam. Preferential enrichment of Ni, Pb and Co is seen in pyrite.
6. Ash content shows a strong affinity with Mn among the major elements and with Co among the trace elements. On the other hand, the inertinite maceral has an affinity with Mg and Zn, huminite with Mn, and liptinite with Cu and Cr. Cadmium shows a strong affinity with Mg and Cu while Pb has a strong affinity with Mn, Na, Co and Ni. Chromium relates strongly with Cu, Pb with pyrite, Co with Na and Mn, and Ni with Na, K and Co.
Nevertheless, the results warrant further study for formulating any strategy for the proper utilization of the Kasnau-Matasukh lignites of Rajasthan.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Cohesion-driven mixing and segregation of dry granular media
Granular segregation is a common, yet still puzzling, phenomenon encountered in many natural and engineering processes. Here, we experimentally investigate the effect of particle cohesion on segregation in dry monodisperse and bidisperse systems using a rotating drum mixer. Chemical silanization, glass surface functionalization via a silane coupling agent, is used to produce cohesive dry glass particles. The cohesive force between the particles is controlled by varying the reaction duration of the silanization process, and is measured using an in-house device specifically designed for this study. The effects of the cohesive force on flow and segregation are then explored and discussed. For monosized particulate systems, while cohesionless particles perfectly mix when tumbled, highly cohesive particles segregate. For bidisperse mixtures of particles, adequate cohesion tuning reduces segregation and enhances mixing. Based on these results, a simple scheme is proposed to describe the system's mixing behaviour, with important implications for the control of segregation or mixing in particulate industrial processes.
Mixing of granular materials is important in several particulate processes such as concrete preparation, chemical formulation and pharmaceutical engineering 1,2. Under the presence of shear, granular materials in these processes often segregate owing to differences in particle properties such as size and density, causing them to sink or rise within the bed and to eventually accumulate, leading to a variation of the bulk composition and density 3,4. This segregation behavior can be a difficult problem in some processes, critically degrading the quality of the final product (e.g., solid dosage form preparation and food processes), and, in other cases, can be used for benefit (e.g., recycling and separation processes).
Thus, it is important to understand the particulate segregation mechanism to efficiently prevent, trigger, enhance or reduce it. Among the several size segregation mechanisms 5, kinetic sieving is the dominant mechanism causing segregation in gravity-driven free-surface flows 6,7, where the small particles move effectively, shifting downwards, as they are more likely to move into the voids between larger particles. This mechanism performs well when the volume fractions of large and small particles are equal 8. Using numerical simulations, Hong et al. 9 considered the interplay between size and mass in a binary mixture of spherical particles, and predicted the existence of the reversed Brazil nut effect (RBN). Khakhar and co-workers 10 showed that the segregation rate of dry particles is independent of the filling level of the rotating drum. Alonso et al. 11 investigated the segregation of particles due to differences in size and density in a two-dimensional horizontal rotating cylinder, and proposed an expression for a segregation index to predict whether a large particle tends to float or sink in a bed of fine particles. More recently, Hongyi et al. 12 applied unsteady flows to strongly segregating granular materials in an attempt to control the segregation pattern and enhance mixing. A typical approach to prevent size segregation is by making the particulate components' sizes as close as possible 13, or by using cohesive particles 14. However, cohesion between particles is difficult to introduce and control. As a consequence, several studies regard particles as non-cohesive, but cohesion is always present in most industrial particulate processes and thus has to be taken into account. In some other studies, cohesion was introduced using various viscous liquids with different capillary forces 14. Jarray et al.
19 investigated the effect of liquid-induced cohesion on granular flow in a rotating drum and showed that capillary cohesion increases the angle of the slope and decreases the granular temperature at the free flowing surface. Shi et al. 22 performed experiments on various size fractions of cohesive limestone powders and showed that cohesion dominates the flow behaviour for fine particles. Li and McCarthy 14 showed that segregation in granular systems can be controlled by adding moisture. Roy et al. 23 studied the effect of wet cohesion on granular flow and proposed a rheological model that predicts various features such as shear thinning behavior. Chou et al. 24 demonstrated that the segregation index in wet granular materials decreases with an increase of the angle of the slope in a rotating drum, regardless of the volume or viscosity of the added liquid. Apart from studies on segregation in wet systems with cohesion due to capillary forces, there are fewer experimental investigations focusing on dry cohesive systems, especially studies where the cohesive force can be adjusted. One way to modify the glass surface and obtain dry cohesive particles is the so-called silanization. The original purpose of chemical silanization is to change the hydrophobicity of the glass surface 25. Here, we use extensive silanization to modify the contact cohesion of millimetre-size glass particles in a controlled way, with the goal of linking this micro-cohesion change to the macroscopic flow and mixing behavior. In industry and within the scientific community, several geometries are used for particulate mixing 1,4,26,27. In this study, we employ a rotating drum to study the flow and segregation of cohesive and non-cohesive granular materials. This apparatus is extensively used in experiments and as a model system due to its simple geometry compared to other mixing devices.
To our knowledge, this is the first paper that experimentally investigates the effect of adjustable cohesive forces on the segregation of dry granular systems in a rotary shearing device. We characterize the cohesive particles' properties using different experimental approaches, which include atomic force microscopy (AFM) for surface scanning of the particles, bulk flowability in the drum, and cohesion force measurement using an in-house setup. The effects of the silanization reaction duration on the surface properties of the particles are presented, and the cohesive and adhesive forces between particles are measured and discussed. Then, cohesive particles are mixed with non-cohesive particles in a rotating drum mixer, and the segregation is investigated under different combinations of cohesive forces and particle sizes, concluding that the cohesion-dependency of segregation may be manipulated to mitigate or enhance particulate mixing.

Results and Discussions
Characterization of the surface and cohesive properties of the particles. Typically, to investigate the effect of cohesion on particulate segregation and flow, researchers use capillary forces. When particles are in contact with each other in the presence of a small amount of liquid, the interstitial liquid forms a liquid bridge between the particles and the cohesion is dominated by capillary forces (see Fig. 1(b)). This approach has several limitations. For instance, the liquid distribution, and hence the capillary force distribution, within the particulate system is not uniform, especially under dynamic conditions 31,32. Additionally, tuning the capillary force is not straightforward as it depends on both liquid and particulate properties. To avoid these complications, we use silanization, where cohesiveness is only due to immediate chemical bonding with no formation of capillary bridges. In this case, cohesion between particles exists only upon contact.
If the particles do not touch, the cohesion force is zero due to the very short range of the attractive force. This can be seen in Fig. 1(c,d), where silanized particles are perfectly dry and capillary bridges are absent, making their cohesive behaviour as close as possible to that of real dry cohesive powders. Figure 1(e) shows the cohesive (F_AA) and adhesive (F_AB) forces between two particles of radius 0.85 mm as a function of silanization reaction time. Here, 'A' particles are cohesive and 'B' particles are non-cohesive. The cohesive and adhesive forces increase with silanization reaction duration. The adhesive force between non-cohesive and cohesive particles is on the order of half the cohesive force between two cohesive particles. Highly cohesive particles, with forces of about 0.11 mN, are obtained when the silanization reaction duration is longest (i.e., all the heptane in the silane solution evaporates during the silanization process). This increased cohesive force can be attributed to the formation of more covalent bonds (-Si-O-Si-) as the reaction progresses. Figure 2 shows microscopic and AFM measurements of dry and silane-treated glass particles. Cohesive silanized particles seem to have smoother surfaces with fewer irregularities than non-silanized particles. Fewer irregularities imply a more uniform coating, which is an important factor in determining the cohesive force between particles. Since the thickness of the silane coating is in the range of 30 nm, it is assumed that the main elastic properties of the individual particles are unchanged after silanization. In addition to direct cohesive measurements, the cohesive force can be correlated with the angle of the flow (i.e., dynamic angle of repose) at the bulk level in a rotating drum. We show in Fig. 3(a) the smoothed angle of the flow in a rotating drum as a function of time for non-cohesive and cohesive particles.
The particles are of radius 0.85 mm and the rotation speed of the drum is 25 rpm (i.e., Froude number Fr = 0.21). As the drum rotates, particles are lifted to the upper part of the bed and the angle of the slope increases until it reaches a maximum; then avalanches start to occur. For dry particles, successive avalanches happen with large amplitude, followed by a relatively steady flow with small avalanche amplitude variations. For highly cohesive particles, successive avalanches persist as the drum rotates. This is because cohesive forces dominate the flow, and particles become closely packed and start to flow as a bulk. For medium cohesive particles, the amplitude of the avalanches is the lowest, indicating liquid-like flow behavior. When the cohesion is low, the dynamic angle of repose is close to that of the non-cohesive case. Figure 3(b) shows the dynamic angle of repose, averaged over the last 20 seconds, versus the cohesive force. The dynamic angle of repose increases with the cohesive force, qualitatively confirming the results obtained by the microbalance setup in Fig. 2. By analogy to the flow of wet particles in the presence of capillary forces 18,19, we can infer that the increase of the dynamic angle of repose arises because the cohesive forces prevent the rolling and cascading of individual particles in favor of bulk sliding. These experiments show that the cohesive force between particles can be adjusted using the silanization reaction duration, and can also be characterized at the bulk level using the angle of the flow in a rotating drum. We then conduct a set of experiments with different particle sizes and different cohesive forces, as shown in Table 1. In all cases, the drum contains a 50-50 % by weight (w/w) mixture (i.e., 50 % of particles 'A' and 50 % of particles 'B') of glass particles with the same density of 2500 kg/m3. We plot in Fig.
4(a,b) the mixing index for monosized particles of radius r_A = 0.85 mm as a function of time for cohesionless (case 1) and cohesive particles (case 4), respectively. The mixing index curves are smoothed using bisquare weighting to reduce the fluctuation of the mixing index. The raw values obtained directly from image post-processing, before smoothing, are shown as dots in Fig. 4 and in the Supplementary Information file, Appendix A. A mixing index (MI) close to 1 means the system is mixed, while an MI value close to 0 indicates a segregated system. The mixing index in Fig. 4(a) increases at first and then, after a few rotations of the drum, remains relatively constant at around MI = 0.9, indicating good mixing of the system. However, as shown in Fig. 4(b), when the transparent particles are cohesive, a lower MI value of around 0.6 is obtained and segregation occurs between monosized black and transparent particles. This can be explained by the clustering of the transparent particles by cohesion, which pushes them outside and keeps the black particles within the core region of the bed, where the flow is quasistatic. By performing discrete element method (DEM) simulations of a mixture of cohesive and non-cohesive monosized particles in a rotating drum, Yazdani and Hashemabadi 33 observed similar behaviour, where the granular system tended to segregate and cohesive particles moved towards the outer layer of the rotating granular media. We show in Fig. 4(c) the mixing index (MI) versus time for a bidisperse system composed of non-cohesive transparent particles of radius r_A = 0.85 mm mixed with non-cohesive red particles of radius r_B = 1.25 mm (case 5).
In contrast with the previous case, when the system is composed of bidisperse particles, after a few rotations of the drum the mixing index reaches a steady state at about 0.5, indicating segregation, where larger particles are found in the upper region of the bed and near the wall of the drum, and smaller particles are concentrated in the core of the bed surrounded by the large particles. When the same experiment is performed with cohesive particles, the results are different. Figure 4(d) shows the mixing index (MI) of case 8, where particles of radius r_A = 0.85 mm are cohesive and mixed with non-cohesive larger particles of radius r_B = 1.25 mm. The value of the mixing index in this case is higher than for case 5, indicating better mixing, with fewer large particles found in the outer region and more of them in the core of the bed. This demonstrates the ability of cohesion to improve dry particulate mixing. The cohesive force clumps the small particles together, decreasing their probability of moving into gaps between larger particles, which is similar to the explanation given by Li and McCarthy 14 for the case of cohesion due to moisture between the particles. We report in Fig. 5 three plots of the mixing index measurements for all the cases described in Table 1 as a function of time. In Fig. 5(a), we show cases 1 to 4, which correspond to systems where the particles have the same size. In all four cases, the mixing index reaches a steady state after approximately 7 revolutions of the drum (i.e., 20 seconds). The final mixing index value decreases as the cohesive force increases, confirming that cohesion reduces mixing. We also notice that the higher the cohesiveness, the more time is needed by the system to reach the steady state. Figure 5(b) shows the mixing index for a system composed of particles of radii r_A = 0.85 mm and r_B = 1.25 mm for different cohesive forces (cases 5 to 8).
Again, after a few rotations of the drum, the mixing index curves reach a plateau. When the cohesion is low (case 6), MI reaches slightly higher values than in case 5, where particles are not cohesive. As the cohesive force increases, segregation reduces, indicating that cohesion plays a key role in improving the mixing of particulate systems through the formation of clusters. We infer that cohesion makes the small cohesive particles clump together, impeding them from passing through the gaps between the large ones. By performing DEM simulations, Aarons et al. 34 arrived at the same conclusion. They found that the tendency of the small particles to form agglomerates and their ability to move between the big particles are the main mechanisms that control cohesive particulate mixing. Since the ability of the small cohesive particles to mitigate segregation depends on the size of the large particles, we carried out the same experiments with larger non-cohesive particles of radius r = 2 mm for different cohesive forces (cases 9 to 12). The mixing index for these cases is shown in Fig. 5(c). Similarly to Fig. 5(a,b), a plateau is observed after 20 seconds of drum rotation. For the non-cohesive system (case 9), the value of the mixing index at the plateau is about 0.48, slightly lower than the value obtained in case 5 (i.e., the non-cohesive combination of 1.25 mm and 0.85 mm particles), which is in accordance with the findings of previous work 24, stating that a greater difference in size of cohesionless particles increases the degree of segregation. After 7 rotations of the drum, low and medium cohesive particles (cases 10 and 11, respectively) reach almost the same level of mixing as the cohesionless case (i.e., case 9), indicating that the cohesion is not strong enough to prevent segregation of the 2 mm particles. Only highly cohesive particles (case 12) are able to reduce segregation of the 2 mm particles.
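The mixing index discussed above is extracted from image post-processing, and its exact definition is not reproduced in this excerpt. A common variance-based (Lacey-type) index follows the same 0-to-1 convention and can serve as an illustrative stand-in; this is an assumption, not the authors' definition:

```python
def lacey_index(cells, p):
    """Variance-based mixing index: 1 = fully mixed, 0 = fully segregated.

    cells: local fraction of species A in each sampling cell
    p:     overall fraction of species A in the bed
    The fully-mixed variance is taken as 0 (large-cell limit) for simplicity.
    """
    n = len(cells)
    s2 = sum((c - p) ** 2 for c in cells) / n   # measured variance
    s2_seg = p * (1.0 - p)                      # fully segregated limit
    return 1.0 - s2 / s2_seg

assert lacey_index([0.5, 0.5, 0.5, 0.5], 0.5) == 1.0   # perfectly mixed bed
assert lacey_index([1.0, 1.0, 0.0, 0.0], 0.5) == 0.0   # fully segregated bed
```

With this convention, the plateau values of roughly 0.9 (well mixed) and 0.5-0.6 (segregated) reported for the drum experiments have a direct statistical reading as reduced concentration variance across the bed.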
All these results show that the mobility of the non-cohesive particles depends strongly on the cohesive force between the particles, which, more importantly, can be tuned to control segregation. Next, we present diagrams to visually identify zones where systems tend to mix or segregate. We plot in Fig. 6(a) the mixing index change resulting from cohesion, ΔMI = MI_coh − MI_non-coh, averaged over the last 10 seconds of the experiment, as a function of the cohesive work change (i.e., surface energy change per particle) when the system goes from a segregated to a mixed state, ΔW = W_m − W_s, where MI_coh and MI_non-coh are the averaged mixing indices for the cohesive and the non-cohesive case, respectively, and W_m and W_s are the cohesive work per particle of the perfectly mixed system and the segregated system, respectively. When the mixture is bidisperse (circle symbols in Fig. 6(a)), the mixing degree increases with ΔW, and the cohesion of the particles provides work (i.e., surface energy) that reduces the segregation of the system. ΔW = 0 mN/m means either that W_s = W_m, or that all particles are non-cohesive. In case W_s = W_m, the adhesive work exerted by the cohesive small particles on the large particles is counterbalanced by the cohesive work between the small particles. Either way, the system will segregate by size. This suggests that, as long as ΔW is sufficiently large, the mixing of the bidisperse system is enhanced. This is in line with the work of Chaudhuri et al. 38, who examined computationally the effect of adhesive forces in a binary system in the absence of cohesive forces and found that adhesion favors the mixing process. However, we expect that further increasing ΔW will reach a point where the system becomes perfectly mixed.
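The two quantities on the axes of Fig. 6(a) are plain differences; the following sketch only illustrates the bookkeeping (the MI and work values below are hypothetical, not measured values from the paper):

```python
def mixing_gain(mi_coh, mi_non_coh):
    """Mixing-index change due to cohesion: dMI = MI_coh - MI_non-coh."""
    return mi_coh - mi_non_coh

def work_change(w_mixed, w_segregated):
    """Cohesive work change per particle: dW = W_m - W_s (mN/m)."""
    return w_mixed - w_segregated

# Hypothetical bidisperse reading: cohesion raises MI and supplies surface energy
dmi = mixing_gain(0.62, 0.50)
dw = work_change(1.8, 1.2)
print(dmi > 0 and dw > 0)  # -> True: cohesion improves mixing in this reading
```

A point with dMI > 0 and dW > 0 falls in the mixing-enhancing zone of the bidisperse diagram.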
Beyond this point, cohesion will start reducing the mixing of the system, because relatively large clumps of small particles, larger than the individual large particles, will form and migrate to the outer region of the bed by size segregation. This was emphasized by Sarkar and Wassgren 39, who stated that further increasing the cohesion forces may start hindering the mixing. On the contrary, for monodisperse particles (square symbols in Fig. 6(a)), the mixing decreases with ΔW, and cohesion induces segregation. For ΔW = 0 mN/m (i.e., when the segregation and mixing work are balanced), the monodisperse system is well mixed. To quantify the importance of cohesion in comparison to the weight of the particles, we plot in Fig. 6(b) the mixing index change due to cohesion, ΔMI = MI_coh − MI_non-coh, as a function of the granular Bond number Bo_g. The granular Bond number is the ratio of the maximum cohesive force, max(F_AA, F_AB), to the effective weight of the particles. Figure 6(b) shows that for bidisperse systems (circle symbols), the mixing increases with Bo_g. When Bo_g > 1, the maximum cohesive force is greater than the effective weight of the particles, and mixing occurs. When Bo_g < 1, the interparticle cohesive forces are weak compared to the effective weight, and thus cannot prevent segregation. We expect that when the Bond number exceeds a critical value Bo_g^c, the bidisperse system will start segregating because the size of the clustered particles will exceed that of the large particles. We define Bo_g^c as the Bond number where the maximum cohesive force is equal to the weight of the large particles, which gives Eq. (3) (see Supplementary file for more details, Appendix B). Unlike the bidisperse case, for monosized particles (square symbols in Fig. 6), segregation is enhanced when Bo_g increases due to cohesion. Here, Bo_g < 1 indicates mixing.
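As a rough numerical sketch of the Bond-number criterion (the pull-off force values and the choice of the small-particle weight as the "effective weight" are illustrative assumptions, not values from the paper):

```python
import math

RHO = 2500.0  # glass particle density, kg/m^3 (Table 2)
G = 9.81      # gravitational acceleration, m/s^2

def particle_weight(radius_m, rho=RHO):
    """Weight of a spherical glass particle, in newtons."""
    return rho * (4.0 / 3.0) * math.pi * radius_m**3 * G

def granular_bond(f_aa, f_ab, effective_weight):
    """Bo_g = max(F_AA, F_AB) / effective particle weight."""
    return max(f_aa, f_ab) / effective_weight

w_small = particle_weight(0.85e-3)           # r_A = 0.85 mm
bo = granular_bond(1.0e-4, 0.8e-4, w_small)  # hypothetical pull-off forces, N
# Bo_g > 1: cohesion exceeds the particle's weight, so mixing is expected
# for the bidisperse system (up to the critical Bond number Bo_g^c)
print(round(bo, 2))
```

With the assumed 0.1 mN cohesive force, Bo_g is of order one, i.e., right in the regime where cohesion starts to matter against gravity.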
This is because the cohesive forces are not strong enough to counterbalance the weight of the equally sized particles of the perfectly mixed system. When Bo_g > 1, the system starts segregating due to agglomeration of the small glass particles. Notice that in this case Bo_g^c = 1, which also indicates that cohesion will reduce mixing. This is supported by the numerical simulations of Sarkar and Wassgren 39, who showed that cohesive interactions between monosized particles promote agglomerate formation, which decreases mixing. Similarly, Halidan et al. 40 showed numerically that mixing of monosized particles is better at moderate and low interparticle cohesion because the cohesive bonds between particles are easy to break. Comparing the results given by the cohesive work change approach (Fig. 6(a)) with those given by the Bond number approach (Fig. 6(b)), both give similar results, but the former seems to be more accurate in predicting the mixing degree of the system.

Summary

Using controlled chemical silanization, it was possible to generate glass particle surfaces with adjustable cohesion. The cohesive force between particles was measured using an in-house experimental device and also characterized at the bulk level using the dynamic angle of repose in a rotating drum. We determined experimentally that an increase in cohesive forces increases the dynamic angle of repose, and that the smooth flow of cohesionless particles transforms into an irregular flow of a bulk of clumped particles. We then investigated the effect of cohesive forces on the degree of segregation in the rotary drum. While sufficiently strong cohesion of the small particles improves mixing for bidisperse systems, it causes segregation for monosized particles.
Therefore, one should be able to switch between size segregation and mixing of monosized particles simply by turning the cohesive forces between the particles on or off. For bidisperse systems, avoiding segregation requires a balance between size segregation and cohesive clustering of the small particles. Our experiments illustrate the importance of cohesion for the controlled minimization of particle segregation during large-scale processing of powdery materials, capturing the dependence of mixing on particle size and on the adhesive/cohesive forces between particles. The developed state diagrams can be extended to other size ratios, and may serve as a useful tool for controlling particle mixing in other devices or processes (e.g., slow chute flow, slowly vibrating beds) where shearing is not so strongly localized. Additionally, it would be of great interest to investigate whether similar mixing behaviour is obtained when the large particles are cohesive. It would also be important to extend our analysis to other weight fractions of large and fine particles.

Methods

Glass surface pretreatment: silanization process. Silanization is based on the adsorption, self-assembly and covalent binding of silane molecules onto the surface of glass particles. Silanes are coupling agents that can interact with both organic and inorganic materials. The silanization process has been used by many scientists for the surface hydrophobization of glass and other materials 19. It has also been applied to fine silica particles as rheological additives for adhesives, resins and paints. The chemical compounds used for silanization were: silanization solution 5% (V/V, 5% in volume of dimethyldichlorosilane in heptane, Selectophore), hydrochloric acid (HCl, 0.1 mol), acetone and ethanol.
The procedure for making the glass particles cohesive is as follows. First, to ensure that the surface of the silica glass particles was free of contamination, they were cleaned for at least one hour by immersion in freshly prepared HCl solution under agitation using a rotor-stator homogenizer. They were then rinsed thoroughly with deionized water and oven dried for 3 hours at 60 °C. Afterwards, the freshly cleaned samples of glass particles were immersed in the silanization solution under low agitation speed at room temperature for various durations (30, 60 and 90 min) to obtain samples of particles with different cohesive forces. During this process, the inorganic functional groups of the silane react with the OH groups formed after cleaning with HCl and form Si-OH groups. Finally, the treated glass particles were allowed to air-dry under a fume hood for 24 hours, forming a nanometric thin coating around the surface of the particle. The cohesive force of the formed coating depends mainly on the duration of the chemical silanization reaction and on the silane concentration.

Cohesion force measurement. The cohesive force between the particles was measured using an in-house setup that we specifically designed for this study at the Laboratory of Physics and Physical Chemistry of Foods (Wageningen University, NL). The setup consists of a micro-balance (Sartorius) and a micro-positioner, as shown in Fig. 7. The micro-positioner was driven by a DC electric motor with an adjustable speed varying from 0.05 to 1 mm/min, and was mounted above the micro-balance. The whole setup was installed on an optical table to reduce mechanical vibrations. One particle was fixed on the balance by double-sided tape while the other particle was glued to a flexible thin rod connected to the micro-positioner.
In order to stick the upper particle to the flexible rod without contaminating the whole particle surface with glue, the tip of the rod was first wetted with about 0.05 microliters of instant liquid resin. After about 20 seconds, the liquid resin became more viscous, and the upper side of the particle was then precisely put in contact with the glue. After another 30 seconds, the particle was firmly connected to the rod. To measure the adhesive force, the upper and lower particles were first aligned as shown in Fig. 7. The balance was zeroed, subtracting the weight of one glass particle. Then, the top particle was moved downward at a speed of 1 mm/min. Upon the first mechanical contact between the particles, the balance shows a positive weight. The stage is then moved upward slowly (0.05 mm/min). When the two particles separate, the maximum absolute value obtained is recorded as the cohesive pull-off force between the particles. The stronger the particle-particle cohesive force, the stronger the snap-in force. This measurement was repeated (more than 3 times) for several particles of different sizes and cohesive forces. The ideal accuracy of the micro-balance would be about 10⁻⁸ mN; however, airflow around the setup and mechanical vibrations reduce the accuracy of our measurements to about 10⁻⁵ mN.

Surface characterization: AFM measurement. The AFM observations were performed with a Dimension Fast Scan atomic force microscope (Bruker's ScanAsyst and PeakForce Tapping AFM, Bruker Corporation, UK), located in the NanoLab NL, University of Twente. All AFM images were collected in tapping mode with a silicon nitride probe and a scanning area of 2 × 2 μm. The glass particle samples were glued onto double-sided silica tape on top of a glass substrate before the tapping measurements.

The drum apparatus.
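The pull-off reading described above amounts to finding the largest negative excursion of the recorded balance trace during retraction; a minimal sketch (the trace values and the sign convention are assumptions for illustration):

```python
def pull_off_force(readings_mN):
    """Return the cohesive pull-off force as the maximum absolute
    negative balance reading recorded while retracting the particle."""
    negatives = [r for r in readings_mN if r < 0]
    return max(abs(r) for r in negatives) if negatives else 0.0

# Hypothetical trace: approach (positive contact load), then retraction
trace = [0.00, 0.12, 0.30, 0.18, 0.05, -0.04, -0.11, -0.02, 0.00]
print(pull_off_force(trace))  # -> 0.11
```

If the trace never dips below zero (no snap-off event), the function reports zero cohesion.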
The drum is made of a cylinder of inner radius R = 60.5 mm and length L = 22 mm, held between two circular Plexiglass (PMMA) plates of 5 mm thickness to allow optical access. For a quasi-two-dimensional rotating drum, Jain et al. 3 argued that the length of the drum in the axial direction should be larger than 6.4 r, with r the average radius of the particles, for the friction of the front and back walls to have a negligible effect on the flow characteristics. The drum length used in this study is indeed larger than 6.4 r for all the particles used. The drum was placed on a horizontal rotating axis driven by a variable-speed motor. Images of the rotating drum were recorded using a MotionBLITZ EoSens high-speed camera working at 120 fps. Experiments were performed in the cascading regime, at a drum rotation speed ω = 25 rpm, corresponding to a Froude number Fr = 0.21, defined as Fr = ω√(R/g), whose square is the ratio of centrifugal to gravitational acceleration; here ω is the rotation speed of the drum, R its inner radius and g the acceleration due to gravity. Experiments were conducted using a selected set of borosilicate glass particles of density 2500 kg/m³. The drum was less than half filled with the same mass of particles (i.e., 125 g of particles). The characteristics of the drum and the glass particles are summarized in Table 2.

Image post-processing and mixing index computation. The images acquired using the high-speed camera were post-processed to obtain the dynamic angle of repose and the mixing index in the rotating drum. The dynamic angle of repose was obtained using the particle tracking package TrackMate within the FIJI ImageJ distribution 44. First, we removed the background in the image sequence and adjusted the light to enhance particle detection. Particle outlines (i.e., spots) that stand out from the background were segmented and identified based on the difference of Gaussians 45, with an estimated particle diameter of 8 pixels for a particle radius of r = 0.85 mm.
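Using Fr = ω√(R/g), the form consistent with the reported Fr = 0.21 for ω = 25 rpm and R = 60.5 mm, the number can be checked directly (a sketch; the definition's prefactor is inferred here from the reported value):

```python
import math

def froude(rpm, radius_m, g=9.81):
    """Fr = omega * sqrt(R / g); its square is the ratio of centrifugal
    to gravitational acceleration in the rotating drum."""
    omega = 2.0 * math.pi * rpm / 60.0  # rotation speed in rad/s
    return omega * math.sqrt(radius_m / g)

print(round(froude(25, 0.0605), 2))  # -> 0.21, as reported for this drum
```

Fr well below 1 confirms the drum operates far from the centrifuging regime, consistent with the cascading regime stated in the text.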
The detected spots were converted into a table of positions and visualized using ParaView 46. The dynamic angle of repose was computed by linear regression of the positions of the particles on the free surface of the bed using an in-house Python code. For the computation of the mixing index, particles with different colors were used in the experiments to identify cohesive, non-cohesive, big and small particles during image post-processing, using variable tolerances in the FIJI ImageJ software. Binary images with the different particle species were then obtained and converted into continuum density fields using a method similar to the coarse-graining (CG) method 47. The main difference between the two methods is that the CG is applied to the pixel data rather than to the particle positions. This fully removes the need for a particle size and position detection algorithm, leading to a significant reduction in analysis time. To obtain the density field, we assumed the mass to be equal to 1, which gives a CG density function with N_A and N_B, respectively, the number of particles "A" and "B" in the bulk.

Table 2. Characteristics of the drum and the glass particles.
  Drum, R × L (mm): 60.5 × 22
  Drum rotation speed (rpm): 25
  Glass particle radii r (mm): 0.85, 1.25 and 2
  Particle density (kg/m³): 2500
  Filling level of the drum (%): 35

Another way of calculating the number of contacts per particle was proposed by Chandratilleke et al. 52 using the coordination number; they also showed that this number is useful to identify the mechanisms of mixing and demixing in particulate systems.
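The pixel-based coarse graining described above can be sketched as a Gaussian smoothing of the two binary species masks followed by a local concentration ratio (the smoothing width, the toy image, and the use of concentration as the field entering the mixing index are illustrative assumptions; the paper's exact MI formula is not reproduced here):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def concentration_field(mask_a, mask_b, sigma=4.0):
    """Coarse-grain two binary pixel masks (species A and B) into smooth
    density fields and return the local concentration of species A.
    Each occupied pixel carries unit mass, as assumed in the text."""
    rho_a = gaussian_filter(mask_a.astype(float), sigma)
    rho_b = gaussian_filter(mask_b.astype(float), sigma)
    total = rho_a + rho_b
    return np.where(total > 0, rho_a / np.where(total > 0, total, 1.0), np.nan)

# Toy "image": left half species A, right half species B (fully segregated)
a = np.zeros((64, 64)); a[:, :32] = 1
b = np.zeros((64, 64)); b[:, 32:] = 1
c = concentration_field(a, b)
# Concentration is ~1 deep in the A region and ~0 deep in the B region,
# crossing ~0.5 at the interface
print(round(float(c[32, 4]), 2), round(float(c[32, 60]), 2))
```

Working on pixels rather than detected particle centers is what removes the detection step mentioned in the text.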
The free binding energy, or work of adhesion, needed to separate unit areas of two surfaces or media from contact to infinity in a vacuum can be related to the adhesion force F_AB between two particles 53. F_AB and F_AA denote, respectively, the adhesion and cohesion forces existing at a single interparticle contact, r_A and r_B are the radii of particles "A" and "B", respectively, and W_AB refers to the energy per unit area of the "A"-"B" interface. In a bulk of mixed particles, each reference particle "A" shares Z_AB adhesive bonds and Z_AA cohesive bonds; from these, the average bonding work per particle of the perfectly mixed system, W_m, is obtained (see Fig. 8). For a completely segregated system, the number of adhesive bonds is very low compared to the number of cohesive bonds (see Fig. 8(e)), and therefore the former can be neglected. The average bonding work of the segregated system can then be written as W_s = (N_A Z_AA W_AA + N_B Z_BB W_BB)/(N_A + N_B). Here, for a segregated system (Fig. 8(e)), Z_AA = Z_BB = Z = 6. The difference ΔW = W_m − W_s gives the average cohesive work change (i.e., energy change per contact area) per particle when the system goes from a segregated to a mixed state. Notice that for a monodisperse system, if F_AB = F_AA, ΔW is equal to zero.
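Assuming the JKR pull-off relation F = (3/2)·π·R*·W connects the measured force to the interfacial energy (JKR is the standard choice for such contacts; whether the paper uses exactly this prefactor is an assumption here, as is the force value below), the work of adhesion can be backed out of a measured pull-off force:

```python
import math

def effective_radius(r_a, r_b):
    """Reduced radius R* = r_a * r_b / (r_a + r_b) of two contacting spheres."""
    return r_a * r_b / (r_a + r_b)

def work_of_adhesion(pull_off_force_N, r_a, r_b):
    """Invert the JKR pull-off relation F = (3/2) * pi * R* * W for W (J/m^2)."""
    return 2.0 * pull_off_force_N / (3.0 * math.pi * effective_radius(r_a, r_b))

# Hypothetical pull-off force between a 0.85 mm and a 1.25 mm particle
w_ab = work_of_adhesion(1.0e-4, 0.85e-3, 1.25e-3)
print(f"{w_ab:.3f} J/m^2")  # -> 0.042 J/m^2 for these assumed numbers
```

The same inversion applied to an A-A contact (r_A = r_B) gives W_AA, the cohesive term entering W_m and W_s.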
Plasma cell balanitis: clinical and histopathological features--response to circumcision. OBJECTIVE--To evaluate the clinicopathological features and the response to circumcision in patients with plasma cell balanitis. SUBJECTS AND METHOD--32 uncircumcised men with penile lesions typical of plasma cell balanitis. Twenty specimens were available for histopathology. RESULTS--Lesions involved the prepuce and glans in 17, the prepuce only in 10, and in 5 were localised to the glans alone or extended to the coronal sulcus. Histopathology showed variable features but was consistent with the diagnosis of plasma cell balanitis. Haemosiderin pigment could be detected in only three specimens, from patients with a shorter duration of the disease. Twenty-seven patients were treated with circumcision and no recurrence was noticed in 3 years of follow-up. CONCLUSION--Circumcision is an effective treatment modality in plasma cell balanitis. The absence of haemosiderin pigment in the majority of tissue sections is difficult to explain but may be related to a longer duration of the disease.
/*
* Copyright (c) 2020, <NAME>. All Rights Reserved.
*
* This file is part of BoofCV (http://boofcv.org).
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package boofcv.alg.feature.detect.selector;
import boofcv.struct.image.GrayF32;
import boofcv.testing.BoofStandardJUnit;
import georegression.struct.point.Point2D_I16;
import org.ddogleg.struct.DogArray;
import org.ddogleg.struct.FastAccess;
import org.ddogleg.struct.FastArray;
import org.jetbrains.annotations.Nullable;
import org.junit.jupiter.api.Nested;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;
/**
* @author <NAME>
*/
class TestConvertFeatureSelectLimitToIntensity extends BoofStandardJUnit {
	@Test
	void callTheFunction() {
		Dummy<Point2D_I16> dummy = new Dummy<>();
		var intensity = new GrayF32(30, 40);
		FeatureSelectLimitIntensity<Point2D_I16> wrapped = new ConvertLimitToIntensity<>(dummy);
		var detected = new DogArray<>(Point2D_I16::new);
		var selected = new FastArray<>(Point2D_I16.class);

		wrapped.select(intensity, -1, -1, true, null, detected, 100, selected);

		assertEquals(intensity.width, dummy.width);
		assertEquals(intensity.height, dummy.height);
		assertEquals(100, dummy.limit);
	}

	@Nested
	public class CheckNoImage extends ChecksFeatureSelectLimitIntensity.NoImage {
		@Override public FeatureSelectLimitIntensity<IntensityPoint> createAlgorithm() {
			var selector = new FeatureSelectN<IntensityPoint>();
			var alg = new ConvertLimitToIntensity<>(selector);
			alg.setSampler(new SampleIntensityPoint());
			return alg;
		}
	}

	private static class Dummy<Point> implements FeatureSelectLimit<Point> {
		int width, height, limit;

		@Override
		public void select( int imageWidth, int imageHeight, @Nullable FastAccess<Point> prior,
							FastAccess<Point> detected, int limit, FastArray<Point> selected ) {
			this.width = imageWidth;
			this.height = imageHeight;
			this.limit = limit;
		}
	}
}
|
The identification of plankton tropical status in the Wonokromo, Dadapan and Juanda extreme water estuary

The Wonokromo, Dadapan and Juanda estuaries are extreme waters located around Surabaya. This is because of the large intake of organic material, which provides nutrients for plankton growth. In addition, the waters are dynamic owing to physico-chemical, geological and biological processes controlled by the tides and the freshwater run-off from the rivers that empty into them. The objective of this study was to identify the plankton present in these extreme waters in relation to brightness and ammonia level. The study was conducted in January 2017. The three sampling locations were the Wonokromo, Dadapan and Juanda estuaries. Each station consisted of three points at distances of 400, 700, and 1000 meters from the coastline. The brightness in the Wonokromo, Dadapan, and Juanda environments was 60, 40, and 100 cm, respectively. The ammonia level in the Wonokromo, Dadapan, and Juanda estuaries was 0.837, 0.626, and 0.396 mg/L, respectively. Nine classes of phytoplankton were found at the three locations (bacillariophyceae, dynophyceae, chlorophyceae, cyanophyceae, crysophyceae, euglenoidea, trebouxlophyceae, mediophyceae, and nitachiaceae) and five classes of zooplankton (maxillopoda, hexanuplia, copepoda, malacostraca, and oligotrichea). The density of plankton in the Wonokromo, Dadapan and Juanda environments was 37.64, 63.80, and 352.85 cells/L, respectively.

Introduction

The fertility of waters can be seen from the presence of plankton. Sofarini stated that the presence and density of phytoplankton is one indicator of the fertility of the aquatic environment. Hemraj et al. also stated that plankton are a bio-indicator of the water conditions of coastal lagoons. This is assessed by measuring the concentration of the chlorophyll produced by plankton.
The plankton density in waters is influenced by a number of environmental parameters and physiological characteristics. The composition and density of plankton change at various levels in response to changes in physical, chemical, and biological conditions. These include physical and chemical factors such as light intensity, dissolved oxygen, temperature stratification, and the availability of nutrients (nitrogen and phosphorus); the biological aspects are animal predation, death, and decomposition. The east coast of Surabaya is located in the northwestern part of the Madura Strait, covering the area between Tambak Wedi village and the Dadapan river estuary (the border area with Sidoarjo). The waters of the east coast of Surabaya can be regarded as extreme waters due to several factors. One factor is that approximately 10 rivers empty into them, making the coastal waters of Surabaya's east coast an estuary area with a large intake of organic material as nutrients for the growth of plankton. Moreover, these coastal waters can be considered very dynamic, because various physico-chemical, geological and biological processes are controlled by the tides and the freshwater run-off from the rivers that empty into them. Winarni et al. studied the community structure on the east coast of Surabaya and found seven types of sea cucumbers with high to low distribution indexes, none very high. Affandi et al. mentioned that there were four species of kupang (local name) and one lorjuk (local name) species in the east coast waters of Surabaya, with a distribution that did not occur spontaneously or randomly but was closely related to preferences or the selection of suitable habitats. This research is a preliminary study that provides basic data and will be continued in several parts.
The research aims to assess the extreme coastal waters of Surabaya, with their extreme conditions and high organic material content, in terms of plankton dominance, diversity and density. Extreme waters with good fertility will have a variety of species and a good plankton density as well. Therefore, a study on the identification and fertility levels of plankton in the extreme waters of the east coast of Surabaya was carried out. Seawater, as public waters, may at any time experience changes in condition and quality caused by surrounding environmental factors, weather and climate, as well as human activities conducted on these waters. Changes that occur in sea water cause fluctuations in water quality parameters such as temperature, salinity, water brightness (turbidity), degree of acidity (pH) and dissolved oxygen (DO). Changes in water quality influence the sustainability of aquatic organisms. Plankton are among the organisms most affected by extreme water conditions. Some types of plankton are able to survive and grow in extreme waters; plankton can adapt to extreme environments. Seckback explained that some algae are able to survive under conditions of low temperature, high temperature and high salinity. Tolerant organisms that can adapt to extreme waters will also adapt more easily to culture conditions. Hemraj et al. stated that coastal lagoons are characterized by extreme environments, especially high salinity, which leads to community changes. Studies have been performed which have used plankton as an indicator of extreme environmental fluctuations: two different communities of phytoplankton and zooplankton were identified, with salinity and nutrients being the main factors affecting species distribution, and polychaete and gastropod larvae were positive indicators of high salinity fluctuations.

Methodology

The study was conducted in January 2017. The research included field observations and laboratory analysis.
Sampling was carried out in the coastal waters of Surabaya at three stations, each consisting of three points. The identification of the plankton samples was performed at the Dry Laboratory of the Faculty of Fisheries and Marine at Airlangga University, Surabaya, where the water samples were also measured. The research material was the plankton found in the Wonokromo, Juanda and Dadapan waters. The materials used were 4% formalin, Lugol's solution and Walne nutrient medium (NaNO3, Na2, ...); the identification guides used were Algae; Identifying Marine Phytoplankton; and Plankton: A Guide to their Ecology and Monitoring for Water Quality. Plankton and water samples were obtained from the three stations in the Wonokromo, Juanda and Dadapan waters. Each station consisted of 3 water sampling points at distances of 200, 500 and 800 meters from the coast.

Sample collection

Sampling of plankton and water was done at each sampling location. Microalgae (phytoplankton) were sampled using a plankton net with 10 μm mesh, and zooplankton using a plankton net with 80 μm mesh. First, 50 liters of surface water were filtered through the plankton net at each station, and the filtered water was put into a sample container. The coordinates of each sampling station were recorded. Bottles were labeled with information such as the time, date, point and sampling station. Two sample bottles were collected at each site: one sample was preserved using Lugol's solution (the sample for density calculation and plankton diversity), while the other contained no preservative but was stored in an ice-cooled box (the sample for plankton isolation). Identification of the samples was performed at the laboratory of the Faculty of Fisheries and Marine at Airlangga University. Plankton were analysed by dripping 1 ml of sample water onto a haemacytometer and observing it under a microscope at 100× magnification.
Water quality parameters such as dissolved oxygen (DO), temperature, brightness, current and pH were measured directly at the sampling site.

Identification and Calculation of Plankton Density

Plankton were identified based on their morphological form observed under a microscope. Plankton density was calculated using the formula from the cited reference, and the Dominance Index and Diversity Index were likewise calculated using formulas from the cited references.

Result and Discussion

The research data show that the extreme character of the waters at Wonokromo, Dadapan and Juanda is reflected in the water brightness. The brightness at the three stations was 60 cm at Wonokromo, 40 cm at Dadapan and 100 cm in the Juanda waters. Ammonia at the three stations was measured at 0.837 mg/L in the waters of Wonokromo, 0.626 mg/L at Dadapan and 0.396 mg/L at Juanda.

Figure 2. Location of sample waters.

The development of phytoplankton is determined by the intensity of sunlight, temperature and nutrients. High phytoplankton growth may not always be beneficial for aquatic conditions; it can also cause a population explosion (blooming), which can produce harmful toxic substances. The structure of the phytoplankton community is a collection of populations that live in a particular region or habitat and interact or have reciprocal relationships within a particular zone; it is described by the diversity index, dominance index, uniformity index and species richness index. The Shannon-Wiener species diversity index is a mathematical calculation that summarizes information about the number of individuals in each species, the number of species and the total number of individuals within a community. The uniformity index (evenness) describes how evenly the individuals of the phytoplankton species are distributed in a community.
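The indices described above have standard forms (Shannon-Wiener H' = -Σ p_i ln p_i, evenness E = H'/ln S, and Simpson dominance D = Σ p_i²); a small sketch with hypothetical counts, not data from this study:

```python
import math

def shannon_wiener(counts):
    """H' = -sum(p_i * ln p_i) over the species proportions p_i."""
    n = sum(counts)
    return -sum((c / n) * math.log(c / n) for c in counts if c > 0)

def evenness(counts):
    """E = H' / ln(S), with S the number of species present."""
    s = sum(1 for c in counts if c > 0)
    return shannon_wiener(counts) / math.log(s) if s > 1 else 0.0

def simpson_dominance(counts):
    """D = sum(p_i^2); values near 1 indicate a single dominant species."""
    n = sum(counts)
    return sum((c / n) ** 2 for c in counts)

counts = [120, 80, 40, 10]  # hypothetical individuals per species at one station
print(round(shannon_wiener(counts), 2),
      round(evenness(counts), 2),
      round(simpson_dominance(counts), 2))
```

Higher H' and E with low D indicates a diverse, evenly distributed community; the reverse pattern flags dominance by one species.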
Simpson's dominance index represents the presence or absence of a dominant species in a community; the disappearance of the dominant species causes a change in the biotic community and its physical environment. The richness index is used to determine the number of taxa and the concentration of biota in a community (Margalef, 1951 in Romimohtarto, 2001). From the sampling sites, we found 10 classes of phytoplankton (bacillariophyceae, dynophyceae, chlorophyceae, cyanophyceae, crysophyceae, euglenoidea, trebouxlophyceae, mediophyceae, nitachiaceae) and 5 classes of zooplankton (maxillopoda, hexanuplia, copepoda, malacostraca, oligotrichea). The density of plankton was 37.64 cells/liter in Wonokromo, 63.80 cells/liter in Dadapan, and 352.85 cells/liter in the Juanda waters. Phytoplankton contain chlorophyll, which functions in photosynthesis to produce organic matter and oxygen in the water. However, certain phytoplankton may degrade the quality of the waters if their numbers are excessive (blooming). High populations of toxic phytoplankton can have negative consequences for aquatic ecosystems, such as reduced oxygen in the water, which can cause the death of various aquatic creatures. Phytoplankton can be found throughout the water mass from the surface down to the depth at which the intensity of sunlight can still be used in the process of photosynthesis. Phytoplankton are the largest flora component and play the role of primary producer in a body of water.

Conclusion

Our gratitude goes out to the Faculty of Fisheries and Marine, who funded this research. We would also like to thank our team of researchers, both the technicians in the laboratory and those in the field. We hope that this research can be beneficial for future studies on this topic. The brightness in the Wonokromo, Dadapan, and Juanda environments was 60, 40, and 100 cm, respectively. The ammonia level in the Wonokromo, Dadapan, and Juanda estuaries was 0.837, 0.626, and 0.396 mg/L, respectively.
Nine classes of phytoplankton were found at the three locations (bacillariophyceae, dynophyceae, chlorophyceae, cyanophyceae, crysophyceae, euglenoidea, trebouxlophyceae, mediophyceae, and nitachiaceae) and five classes of zooplankton (maxillopoda, hexanuplia, copepoda, malacostraca, and oligotrichea). The density of plankton in the Wonokromo, Dadapan and Juanda environments was 37.64, 63.80, and 352.85 cells/L, respectively.
// Copyright 2017 The Abseil Authors.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// https://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//
// -----------------------------------------------------------------------------
// File: uniform_real_distribution.h
// -----------------------------------------------------------------------------
//
// This header defines a class for representing a uniform floating-point
// distribution over a half-open interval [a,b). You use this distribution in
// combination with an Abseil random bit generator to produce random values
// according to the rules of the distribution.
//
// `absl::uniform_real_distribution` is a drop-in replacement for the C++11
// `std::uniform_real_distribution` [rand.dist.uni.real] but is considerably
// faster than the libstdc++ implementation.
//
// Note: the standard-library version may occasionally return `1.0` when
// default-initialized. See https://bugs.llvm.org//show_bug.cgi?id=18767
// `absl::uniform_real_distribution` does not exhibit this behavior.
#ifndef ABSL_RANDOM_UNIFORM_REAL_DISTRIBUTION_H_
#define ABSL_RANDOM_UNIFORM_REAL_DISTRIBUTION_H_
#include <cassert>
#include <cmath>
#include <cstdint>
#include <istream>
#include <limits>
#include <type_traits>
#include "absl/meta/type_traits.h"
#include "absl/random/internal/fast_uniform_bits.h"
#include "absl/random/internal/generate_real.h"
#include "absl/random/internal/iostream_state_saver.h"
namespace absl {
ABSL_NAMESPACE_BEGIN
// absl::uniform_real_distribution<T>
//
// This distribution produces random floating-point values uniformly distributed
// over the half-open interval [a, b).
//
// Example:
//
// absl::BitGen gen;
//
// // Use the distribution to produce a value between 0.0 (inclusive)
// // and 1.0 (exclusive).
// double value = absl::uniform_real_distribution<double>(0, 1)(gen);
//
template <typename RealType = double>
class uniform_real_distribution {
public:
using result_type = RealType;
class param_type {
public:
using distribution_type = uniform_real_distribution;
explicit param_type(result_type lo = 0, result_type hi = 1)
: lo_(lo), hi_(hi), range_(hi - lo) {
// [rand.dist.uni.real] preconditions 2 & 3
assert(lo <= hi);
// NOTE: For integral types, we can promote the range to an unsigned type,
// which gives full width of the range. However for real (fp) types, this
// is not possible, so value generation cannot use the full range of the
// real type.
assert(range_ <= (std::numeric_limits<result_type>::max)());
assert(std::isfinite(range_));
}
result_type a() const { return lo_; }
result_type b() const { return hi_; }
friend bool operator==(const param_type& a, const param_type& b) {
return a.lo_ == b.lo_ && a.hi_ == b.hi_;
}
friend bool operator!=(const param_type& a, const param_type& b) {
return !(a == b);
}
private:
friend class uniform_real_distribution;
result_type lo_, hi_, range_;
static_assert(std::is_floating_point<RealType>::value,
"Class-template absl::uniform_real_distribution<> must be "
"parameterized using a floating-point type.");
};
uniform_real_distribution() : uniform_real_distribution(0) {}
explicit uniform_real_distribution(result_type lo, result_type hi = 1)
: param_(lo, hi) {}
explicit uniform_real_distribution(const param_type& param) : param_(param) {}
// uniform_real_distribution<T>::reset()
//
// Resets the uniform real distribution. Note that this function has no effect
// because the distribution already produces independent values.
void reset() {}
template <typename URBG>
result_type operator()(URBG& gen) { // NOLINT(runtime/references)
return operator()(gen, param_);
}
template <typename URBG>
result_type operator()(URBG& gen, // NOLINT(runtime/references)
const param_type& p);
result_type a() const { return param_.a(); }
result_type b() const { return param_.b(); }
param_type param() const { return param_; }
void param(const param_type& params) { param_ = params; }
result_type(min)() const { return a(); }
result_type(max)() const { return b(); }
friend bool operator==(const uniform_real_distribution& a,
const uniform_real_distribution& b) {
return a.param_ == b.param_;
}
friend bool operator!=(const uniform_real_distribution& a,
const uniform_real_distribution& b) {
return a.param_ != b.param_;
}
private:
param_type param_;
random_internal::FastUniformBits<uint64_t> fast_u64_;
};
// -----------------------------------------------------------------------------
// Implementation details follow
// -----------------------------------------------------------------------------
template <typename RealType>
template <typename URBG>
typename uniform_real_distribution<RealType>::result_type
uniform_real_distribution<RealType>::operator()(
URBG& gen, const param_type& p) { // NOLINT(runtime/references)
using random_internal::GeneratePositiveTag;
using random_internal::GenerateRealFromBits;
using real_type =
absl::conditional_t<std::is_same<RealType, float>::value, float, double>;
while (true) {
const result_type sample =
GenerateRealFromBits<real_type, GeneratePositiveTag, true>(
fast_u64_(gen));
const result_type res = p.a() + (sample * p.range_);
if (res < p.b() || p.range_ <= 0 || !std::isfinite(p.range_)) {
return res;
}
// else sample rejected, try again.
}
}
template <typename CharT, typename Traits, typename RealType>
std::basic_ostream<CharT, Traits>& operator<<(
std::basic_ostream<CharT, Traits>& os, // NOLINT(runtime/references)
const uniform_real_distribution<RealType>& x) {
auto saver = random_internal::make_ostream_state_saver(os);
os.precision(random_internal::stream_precision_helper<RealType>::kPrecision);
os << x.a() << os.fill() << x.b();
return os;
}
template <typename CharT, typename Traits, typename RealType>
std::basic_istream<CharT, Traits>& operator>>(
std::basic_istream<CharT, Traits>& is, // NOLINT(runtime/references)
uniform_real_distribution<RealType>& x) { // NOLINT(runtime/references)
using param_type = typename uniform_real_distribution<RealType>::param_type;
using result_type = typename uniform_real_distribution<RealType>::result_type;
auto saver = random_internal::make_istream_state_saver(is);
auto a = random_internal::read_floating_point<result_type>(is);
if (is.fail()) return is;
auto b = random_internal::read_floating_point<result_type>(is);
if (!is.fail()) {
x.param(param_type(a, b));
}
return is;
}
ABSL_NAMESPACE_END
} // namespace absl
#endif // ABSL_RANDOM_UNIFORM_REAL_DISTRIBUTION_H_
|
N = int(input())
a = [int(i) for i in input().split()]
a.append(a[N - 1] + 1)  # sentinel differs from a[N-1], so the final run is flushed

runs = []    # for each maximal run of equal values, its length minus one
counter = 0
series = 0
while counter < N:
    if a[counter + 1] == a[counter]:
        series += 1
    else:
        runs.append(series)
        series = 0
    counter += 1

# a maximal run of m equal elements contributes (m - 1 + 1) // 2 = m // 2 changes
changes = [(i + 1) // 2 for i in runs]
print(sum(changes))
|
Effect of vitamin C and its derivatives on collagen synthesis and crosslinking by normal human fibroblasts Vitamin C (VitC) plays a critical role in the maintenance of a normal mature collagen network in humans (antiscurvy properties) by preventing the autoinactivation of lysyl and prolyl hydroxylase, two key enzymes in collagen biosynthesis. In this study two in vitro models were designed to evaluate the effects of VitC on collagen biosynthesis and crosslinking at cellular and tissue levels. It was shown that VitC induced a dose-dependent increase in collagen type I deposits by normal human fibroblasts (NHF) cultured in monolayer, and enhanced extracellular matrix contraction by NHF in a lattice model, in a non-cytotoxic range of concentrations (10⁻³ M, 10⁻⁴ M, 10⁻⁵ M). Exogenous VitC supply could thus contribute to the maintenance of optimal collagenic density in the dermis and locally strengthen the collagen network. Vitamin C-phosphate (VitC-P) and vitamin C-glucoside (VitC-Glu), two VitC derivatives presenting higher chemical stability in aqueous solution, were also tested in our two models, and showed similar biological properties, but with different potencies. These two compounds can be considered as provitamins for skin, and could thus advantageously substitute for VitC in VitC-based anti-ageing products, as they allow the development of stable, easy-to-use formulations. |
package app
import (
"encoding/json"
"net/http"
"sort"
"github.com/imdevlab/g"
"github.com/bsed/trace/web/internal/misc"
"github.com/bsed/trace/web/internal/session"
"github.com/labstack/echo"
"go.uber.org/zap"
)
type Stat struct {
Name string `json:"name"`
Count int `json:"count"`
Apdex float64 `json:"apdex"`
AverageElapsed float64 `json:"average_elapsed"`
ErrorPercent float64 `json:"error_percent"`
ExPercent int `json:"ex_percent"`
totalElapsed float64
errCount float64
exCount int
satisfaction float64
tolerate float64
	Alive   int `json:"alive"`   // number of alive nodes
	Unalive int `json:"unalive"` // number of dead nodes
}
// UserSetting returns the user's app display setting and list of app names.
func UserSetting(user string) (int, []string) {
q := misc.StaticCql.Query(`SELECT app_show,app_names FROM account WHERE id=?`, user)
var appShow int
var appNameS string
err := q.Scan(&appShow, &appNameS)
if err != nil {
return 1, nil
}
appNames := make([]string, 0)
err = json.Unmarshal([]byte(appNameS), &appNames)
if err != nil {
return 1, nil
}
return appShow, appNames
}
// QueryApis queries all APIs under an application.
func QueryApis(c echo.Context) error {
appName := c.FormValue("app_name")
if appName == "" {
return c.JSON(http.StatusOK, g.Result{
Status: http.StatusBadRequest,
ErrCode: g.ParamInvalidC,
Message: g.ParamInvalidE,
})
}
q := `SELECT api FROM app_apis WHERE app_name=?`
iter := misc.StaticCql.Query(q, appName).Iter()
var api string
apis := make([]string, 0)
for iter.Scan(&api) {
apis = append(apis, api)
}
if err := iter.Close(); err != nil {
g.L.Warn("close iter error:", zap.Error(err))
}
return c.JSON(http.StatusOK, g.Result{
Status: http.StatusOK,
Data: apis,
})
}
func QueryAll(c echo.Context) error {
return c.JSON(http.StatusOK, g.Result{
Status: http.StatusOK,
Data: allAppNames(),
})
}
func QueryAllWithSetting(c echo.Context) error {
li := session.GetLoginInfo(c)
appShow, appNames := UserSetting(li.ID)
ans := make([]string, 0)
	if appShow == 1 { // show all applications
ans = allAppNames()
} else {
ans = appNames
}
return c.JSON(http.StatusOK, g.Result{
Status: http.StatusOK,
Data: ans,
})
}
func allAppNames() []string {
q := misc.StaticCql.Query(`SELECT app_name FROM apps `)
iter := q.Iter()
appNames := make([]string, 0)
var appName string
for iter.Scan(&appName) {
appNames = append(appNames, appName)
}
if err := iter.Close(); err != nil {
g.L.Warn("access database error", zap.Error(err), zap.String("query", q.String()))
}
sort.Strings(appNames)
return appNames
}
|
#!/usr/bin/env python
#
# Copyright (c) 2015 Max Vilimpoc
#
# References:
# http://stackoverflow.com/questions/24196932/how-can-i-get-the-ip-address-of-eth0-in-python
# https://github.com/intel-iot-devkit/upm/blob/master/examples/python/rgb-lcd.py
# Permission is hereby granted, free of charge, to any person obtaining
# a copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish,
# distribute, sublicense, and/or sell copies of the Software, and to
# permit persons to whom the Software is furnished to do so, subject to
# the following conditions:
#
# The above copyright notice and this permission notice shall be
# included in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
# LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
# WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
import socket
import fcntl
import struct
import pyupm_i2clcd as lcd
def get_ip_address(ifname):
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
return socket.inet_ntoa(fcntl.ioctl(
s.fileno(),
0x8915, # SIOCGIFADDR
struct.pack('256s', ifname[:15])
)[20:24])
# Initialize Jhd1313m1 at 0x3E (LCD_ADDRESS) and 0x62 (RGB_ADDRESS)
myLcd = lcd.Jhd1313m1(0, 0x3E, 0x62)
# Clear
myLcd.clear()
# Green
myLcd.setColor(255, 255, 0)
# Zero the cursor
myLcd.setCursor(0,0)
# Print it.
ip_address = get_ip_address('wlan0')
myLcd.write(ip_address)
|
Power Spectral Analysis of Heart Rate Variability: Normal Values of Subjects over 60 Years Old Power spectral analysis of heart rate variability (HRV) provides a non-invasive method of estimating cardiac autonomic nerve activity. It has been reported that HRV decreases with age. The purpose of this study was to assess the values of and determine the reliability of HRV in healthy older people. The study found lower and highly variable values of HRV. It was concluded that the reliability of HRV in older subjects might need to be reinvestigated and only normalized values of HF and LF might be useful. Larger study groups and different recording periods of HRV are needed. |
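The normalized LF/HF values the abstract refers to can be sketched as follows. This is a generic illustration using one common convention — LFnu = LF/(LF + HF) × 100, i.e. each band expressed as a fraction of total power with VLF already excluded — which is not necessarily the exact formulation used in the study:

```python
def normalized_power(lf, hf):
    """Normalized LF and HF power in percent, assuming the common
    convention LFnu = LF / (LF + HF) * 100 (VLF already excluded)."""
    total = lf + hf
    lf_nu = lf / total * 100.0
    hf_nu = hf / total * 100.0
    return lf_nu, hf_nu

# Hypothetical band powers in ms^2 (illustrative values only)
lf_nu, hf_nu = normalized_power(300.0, 100.0)
```

By construction the two normalized values always sum to 100, which is why they are less sensitive to the large between-subject variation in absolute power reported in older subjects.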
Prognostic value of pretreatment lymphocyte-to-monocyte ratio in patients with advanced oral cavity cancer Abstract Introduction The lymphocyte-to-monocyte ratio (LMR) has been reported as a prognostic factor in many cancers, but the data are to date limited for its use in oral cavity cancer. The purpose of this study was to evaluate the prognostic value of LMR in advanced-stage oral cavity cancer. Methods Data from 211 advanced-stage oral cancer patients treated with curative intent between January 2009 and December 2015 were obtained from the hospital information system. Pretreatment LMR and other hematologic parameters were recorded and an LMR cutoff value was calculated. Overall survival between the groups above (high LMR) and below (low LMR) the cutoff was compared, and hazard ratios from univariate and multivariate analyses using a Cox proportional hazards model were calculated. Results Overall survival and disease-specific survival were better in the high LMR group. The 5-year overall survival rates were 31.6% and 15% in the high LMR and low LMR groups, respectively. Multivariate analysis using a Cox proportional hazards model showed that treatment modality and LMR were the only factors associated with overall survival. Conclusion Low LMR was associated with poor survival outcome in patients with advanced-stage oral cavity cancer. Level of Evidence 2b. | INTRODUCTION Cancer of the oral cavity is one of the most common head and neck cancers. The age-standardized rates (ASR) for this cancer reported by the US National Cancer Institute in 2015 were 11 cases per 100,000 persons and 2.5 deaths per 100,000 persons. 1 The most recent ASR in Songkhla, a province in southern Thailand, for oral cavity cancer was 7.3 per 100,000 persons. 2 One study reported that localized oral cavity cancer had 5-year overall survival rates of 60-70%, but these decreased by 50% when there was nodal involvement.
3 When comparing treatments begun in early-stage vs advanced-stage cancers, the survival rate decreased from 74% to 33%. 4 The mortality rate of oral cavity cancers is influenced by many factors such as smoking, alcohol drinking, use of smokeless tobacco, initial staging and treatment modality. 5,6 However, there are also varying treatment outcomes among patients with the same staging and treatments, indicating other factors are involved, such as differences in the biological activity of the tumor. Many recent studies have investigated inflammatory cells in the peripheral blood that are associated with tumor growth and treatment outcomes. One study reported that lymphocytes played an important role in the tumor microenvironment and in inhibiting tumor growth. 7 Another study found that lower levels of lymphocytes were related to poorer treatment outcomes. 8 In addition to lymphocytes, other studies have found that monocytes promoted tumor proliferation by producing pro-inflammatory cytokines and promoting tumor angiogenesis. 9,10 Many studies have investigated the prognostic value of the lymphocyte-tomonocyte ratio (LMR) in various solid tumors such as hepatocellular carcinoma, lung cancer, cervical cancer, esophageal cancer, and head and neck cancers including nasopharynx, oropharynx, larynx and hypopharynx and found that lower LMRs were related to poorer survival outcomes. The mainstay treatment of oral cavity cancer is surgery, which is different from other head and neck cancers that rely more on radiation. The prognostic value of LMR in cancers varies depending on tumor location but its utility in the assessment of oral cavity cancer has not yet been established, especially in advanced-stage oral cavity cancer which has poorer overall survival compared to early-stage cancer. In the present study, the main objective was to evaluate the prognostic value of LMR in advanced-stage oral cavity cancer. 
| Data collection All data from the records of patients with stage III or IV squamous cell carcinoma of the oral cavity during the study period were extracted from the database. Cancer staging was based on the American Joint Committee on Cancer (AJCC) system. | Statistical analysis Descriptive data are shown as frequency, percentage or mean with standard deviation. A receiver operating characteristic (ROC) curve was plotted to evaluate the predictive ability of pretreatment LMR. The cutoff value was derived from the maximum value of Youden's index (J), which identifies the point that yields the highest sensitivity plus specificity minus one on an ROC curve. 20 The index values range from 0 to 1. The cutoff value calculated from this method provides the best tradeoff between sensitivity and specificity to maximize the effectiveness of a diagnostic biomarker. This index gives equal weight to false positive and false negative values. The Chi-square test was used to examine relationships between clinical characteristics and LMR. Overall survival and disease-specific survival were plotted using the Kaplan-Meier method and log-rank tests were used to identify differences in overall survival rates. A Cox proportional hazards model was used to identify the effect of each factor on overall survival. All statistical analyses were done with the R program version 1.3.1073. FIGURE 1 ROC curve to identify the optimal cutoff value for lymphocyte-to-monocyte ratio. | Optimal cutoff value for lymphocyte-to-monocyte ratio An ROC curve was plotted as shown in Figure 1 to ascertain the best cutoff value from Youden's index analysis for the lymphocyte-to-monocyte ratio. The area under the curve for this LMR was 0.594 with specificity and sensitivity of 0.570 and 0.614, respectively. The optimal cutoff value, which yielded the highest sensitivity plus specificity, was 4.
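The cutoff selection described above can be sketched as follows. The candidate thresholds and their operating points here are illustrative, except that the sensitivity (0.614) and specificity (0.570) at cutoff 4 are the values reported in the text:

```python
def youden_cutoff(thresholds, sensitivities, specificities):
    """Return the threshold maximizing Youden's index
    J = sensitivity + specificity - 1, together with that J value."""
    best_j, best_t = -1.0, None
    for t, se, sp in zip(thresholds, sensitivities, specificities):
        j = se + sp - 1.0
        if j > best_j:
            best_j, best_t = j, t
    return best_t, best_j

# Illustrative ROC operating points; the point at threshold 4 uses
# the sensitivity/specificity reported for the LMR cutoff.
cutoff, j = youden_cutoff(
    thresholds=[3, 4, 5],
    sensitivities=[0.80, 0.614, 0.40],
    specificities=[0.30, 0.570, 0.75],
)
```

With these numbers the maximum J is 0.614 + 0.570 − 1 = 0.184, reached at the threshold of 4.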
The patients were then divided into two groups for analysis. There were no statistically significant differences between the two groups in any of the major baseline categories, as shown in Table 1. | Survival outcome Overall survival and disease-specific survival were better in the high LMR group, as shown in the Kaplan-Meier survival curves. | Survival analysis Univariate and multivariate analyses using Cox proportional hazards models showed that treatment modality and LMR were associated with overall survival while age, gender, tumor subsite and stage were not, as shown in Table 2. | DISCUSSION This study aimed to evaluate the prognostic value of the LMR in patients with advanced-stage oral cavity cancer. Treatment modality and LMR were found to be significant prognostic factors for overall survival. Various studies have reported that treatment modality affected the survival rates of head and neck surgery patients, but this study also found that LMR was another independent prognostic factor for advanced-stage oral cavity cancer. After and oropharyngeal cancers. Their study found that only the LMR was an independent prognostic factor. 13 In the current study, more than 60 percent of the patients were treated with surgery and LMR was also associated with overall survival in these patients, but not in patients who were treated with CCRT. This finding might be due to the fact that most patients in the CCRT group had poor overall survival due to advanced-stage, unresectable disease and/or other comorbidities that made them ineligible for surgery. In 2015, Nishijima et al. did a systematic review and meta-analysis in non-hematologic malignancies such as lung cancer, breast cancer, esophageal cancer, nasopharyngeal cancer and others, and found that a low LMR was related to poor survival outcomes. 16 As noted in the Nishijima meta-analysis, many cutoff values have been used to divide patients between high and low LMR groups, ranging from 2 to 5.26.
One study reported that the hazard ratios of LMR were 1.73 with 95% CI 1.55-1.93. 16 The current study used a cutoff value of 4 derived from a Youden's index analysis. This method has the main advantage over most other methods of balancing between false positive and false negative values to maximize the effectiveness of diagnostic biomarkers and this method has been used by many studies to acquire optimal cutoff values. 11,13,15 One study reported the cutoff value of 4.29 for LMR in tongue cancer, 21 which was higher than the cutoff value in the current study. These different results may be due to a relatively decreased number of lymphocytes and increased monocytes related to the advanced-stage disease patients in our study, that resulted in a low LMR. 13 Using this cutoff, the low LMR group had overall worse survival than the high LMR group in patients with advanced-stage oral cavity cancer. The hazard ratio for low LMR in this study was 1.44 with a 95% CI 1.02-2.04, which was less than the hazard ratio found in the meta-analysis. Based on this study's findings and earlier studies, there appears to be a significant association between LMR and survival outcome in advanced-stage oral cancer patients. The tumor microenvironment is known be an important factor influencing tumor progression or suppression. Tumor-infiltrating lymphocytes (TILs) play a major antitumor role by activation of cellular and humoral immune responses. Various studies have reported that a low number of TILs was associated with poor survival outcomes. 7,9,17,22 Tumor-associated macrophages (TAMs), which are derived from circulating monocytes, accelerate tumor growth by producing pro-inflammatory cytokines and promoting tumor angiogenesis. 10 High TAMs have been associated with poor prognoses, 23 so it seems possible that the LMR may reflect the host immune system in the tumor microenvironment which could suppress or accelerate tumor progression. The study had some limitations. 
First, it was a retrospective study so the possibility of selection bias was unavoidable. Second, the sample size was low because this was a single-center study examining a relatively uncommon disease. Third, this study could not elucidate a relationship between circulating monocytes and TAMs due to a problem in obtaining tissue specimens for immunohistochemistry. Further prospective studies with larger sample sizes including tissue specimens to analyze the relationship between circulating monocytes and TAMs are needed. | CONCLUSION A low LMR was associated with poor survival outcome in patients with advanced-stage oral cavity cancer, especially in patients who had surgical treatment. ACKNOWLEDGMENTS We would like to acknowledge Mrs. Nannapat Pruphetkeaw for her assistance with data analysis, and Mr. David L. Patterson for editing the manuscript for English. |
// Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#include "quant.h"
#include <cmath>
#include <cstring>
#include <fstream>
#include <iostream>
#include <memory>
#include <mutex>
#include <string>
#include "seq_file.h"
using paddle::framework::proto::VarType;
float compute_loss(float* a, float* b, int emb_size) {
float sum = 0;
for (size_t i = 0; i < emb_size; i++) {
sum += (a[i] - b[i]) * (a[i] - b[i]);
}
return sum;
}
float* transfer(
float* in, float* out, float min, float max, int emb_size, int bits) {
float scale = (max - min) / pow(2, bits);
for (size_t i = 0; i < emb_size; i++) {
float x = in[i];
int val = round((x - min) / (max - min) * (pow(2, bits) - 1));
val = std::max(0, val);
val = std::min((int)pow(2, bits) - 1, val);
out[i] = val * scale + min;
}
return out;
}
char* quant(
float* in, char** out, float min, float max, int emb_size, int bits) {
float scale = (max - min) / pow(2, bits);
for (size_t i = 0; i < emb_size; ++i) {
float x = in[i];
int val = round((x - min) / (max - min) * (pow(2, bits) - 1));
val = std::max(0, val);
val = std::min((int)pow(2, bits) - 1, val);
    (*out)[i] = val;  // store the quantized value at index i
}
return *out;
}
float* dequant(
char* in, float* out, float min, float max, int emb_size, int bits) {
float scale = (max - min) / pow(2, bits);
for (size_t i = 0; i < emb_size; ++i) {
float x =
scale * (((int)in[i] + (int)pow(2, bits)) % (int)pow(2, bits)) + min;
out[i] = x;
}
return out;
}
void greedy_search(float* in,
float& xmin,
float& xmax,
float& loss,
size_t emb_size,
int bits) {
int b = 200;
float r = 0.16;
xmin = 2147483647;
xmax = -2147483648;
float cur_min = xmin;
float cur_max = xmax;
for (size_t i = 0; i < emb_size; i++) {
xmin = std::min(xmin, in[i]);
xmax = std::max(xmax, in[i]);
}
cur_min = xmin;
cur_max = xmax;
  // Heap buffer instead of a non-standard variable-length array.
  std::unique_ptr<float[]> out(new float[emb_size]);
  loss = compute_loss(
      in, transfer(in, out.get(), cur_min, cur_max, emb_size, bits), emb_size);
  float stepsize = (cur_max - cur_min) / b;
  float min_steps = b * (1 - r) * stepsize;
  while (cur_min + min_steps < cur_max) {
    float loss_l = compute_loss(
        in,
        transfer(in, out.get(), cur_min + stepsize, cur_max, emb_size, bits),
        emb_size);
    float loss_r = compute_loss(
        in,
        transfer(in, out.get(), cur_min, cur_max - stepsize, emb_size, bits),
        emb_size);
if (loss_l < loss) {
cur_min = cur_min + stepsize;
if (loss_l < loss_r) {
loss = loss_l;
xmin = cur_min;
}
} else {
cur_max = cur_max - stepsize;
if (loss_r < loss) {
loss = loss_r;
xmax = cur_max;
}
}
}
}
|
Carbapenemase-Producing Enterobacteriaceae Recovered from the Environment of a Swine Farrow-to-Finish Operation in the United States ABSTRACT Carbapenem-resistant Enterobacteriaceae (CRE) present an urgent threat to public health. While use of carbapenem antimicrobials is restricted for food-producing animals, other -lactams, such as ceftiofur, are used in livestock. This use may provide selection pressure favoring the amplification of carbapenem resistance, but this relationship has not been established. Previously unreported among U.S. livestock, plasmid-mediated CRE have been reported from livestock in Europe and Asia. In this study, environmental and fecal samples were collected from a 1,500-sow, U.S. farrow-to-finish operation during 4 visits over a 5-month period in 2015. Samples were screened using selective media for the presence of CRE, and the resulting carbapenemase-producing isolates were further characterized. Of 30 environmental samples collected from a nursery room on our initial visit, 2 (7%) samples yielded 3 isolates, 2 sequence type 218 (ST 218) Escherichia coli and 1 Proteus mirabilis, carrying the metallo--lactamase gene blaIMP-27 on IncQ1 plasmids. We recovered on our third visit 15 IMP-27-bearing isolates of multiple Enterobacteriaceae species from 11 of 24 (46%) environmental samples from 2 farrowing rooms. These isolates each also carried blaIMP-27 on IncQ1 plasmids. No CRE isolates were recovered from fecal swabs or samples in this study. As is common in U.S. swine production, piglets on this farm receive ceftiofur at birth, with males receiving a second dose at castration (≈day 6). This selection pressure may favor the dissemination of blaIMP-27-bearing Enterobacteriaceae in this farrowing barn. The absence of this selection pressure in the nursery and finisher barns likely resulted in the loss of the ecological niche needed for maintenance of this carbapenem resistance gene. |
import io.swagger.annotations.ApiModel;
import lombok.Data;

import java.time.Instant;

/**
 * DTO for storing a user's activity.
 */
@ApiModel("User tracking")
@Data
public class TrackerDTO {
private String sessionId;
private String userLogin;
private String ipAddress;
private String page;
private Instant time;
} |
Etta Hannah, from Lerwick in Shetland, went for her first eye health check at the town's Specsavers store leading to doctors making the devastating discovery.
A five-year-old girl was diagnosed as having a large brain tumour on her spinal cord after attending a routine eye check-up.
Etta Hannah, from Lerwick in Shetland, went for her first eye health check at the town's Specsavers store before starting school this year.
The examination of the back of her eye raised cause for concern and she was referred for an emergency hospital appointment and a CT scan.
The scan at the Gilbert Bain Hospital revealed Etta had extreme swelling of the optic nerve which could have resulted in the loss of her sight.
Further tests and an MRI scan at the Royal Aberdeen Children's Hospital uncovered a large brain tumour at the back of Etta's neck, which had spread down her spine.
She has now had surgery to reduce the size of the tumour and is embarking on 18 months of low-level chemotherapy.
Etta's father Robert Hannah said: "I had noticed some changes in Etta's behaviour and a lack of appetite, but I would never have thought it could be a brain tumour.
"We made several appointments with the doctor and health visitors but with no diagnosis, and being told I was overreacting, I had to go with my parental gut instinct and keep looking for ways to figure it out.
"If you feel there is something not right, an eye test could help rule out any concerns. Finally we have an answer and Etta can get the treatment she needs."
The family's experience was revealed by Specsavers at the start of national eye health awareness week, as the firm stressed the importance of having regular eye health checks.
Thomas Bruin, store director of Specsavers in Lerwick, who carried out the initial test, said: "We can detect several underlying health conditions from an eye health check, it's not simply changes in prescription.
"During national eye health week this week, and throughout the year, we're encouraging everyone in Scotland to stop and think about their eye health and book that all important test.
"We recommend getting your eyes checked every two years and, as tests are free through the NHS in Scotland, there really is no reason to delay.
"Etta's case, although very rare, is an example of just how vital an eye exam can be."
The schoolgirl's mother Jennifer Murray said: "I'm so grateful to Thomas at Specsavers, his early detection and perseverance in arranging our appointment with the eye specialist doctor made all the difference.
"I believe he helped save my daughter's sight." |
// Copyright 2022 Google LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// https://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// [START slides_image_merging]
import com.google.api.client.http.HttpRequestInitializer;
import com.google.api.client.http.javanet.NetHttpTransport;
import com.google.api.client.json.gson.GsonFactory;
import com.google.api.services.drive.Drive;
import com.google.api.services.drive.model.File;
import com.google.api.services.slides.v1.Slides;
import com.google.api.services.slides.v1.SlidesScopes;
import com.google.api.services.slides.v1.model.BatchUpdatePresentationRequest;
import com.google.api.services.slides.v1.model.BatchUpdatePresentationResponse;
import com.google.api.services.slides.v1.model.Request;
import com.google.api.services.slides.v1.model.Response;
import com.google.api.services.slides.v1.model.ReplaceAllShapesWithImageRequest;
import com.google.api.services.slides.v1.model.SubstringMatchCriteria;
import com.google.auth.http.HttpCredentialsAdapter;
import com.google.auth.oauth2.GoogleCredentials;
import java.io.IOException;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
/* Class to demonstrate the use of Slides Image Merging API */
public class ImageMerging {
/**
* Changes specified texts into images.
*
* @param templatePresentationId - id of the presentation.
* @param imageUrl - Url of the image.
* @param customerName - Name of the customer.
* @return merged presentation id
* @throws IOException - if credentials file not found.
*/
public static BatchUpdatePresentationResponse imageMerging(String templatePresentationId,
String imageUrl,
String customerName) throws IOException {
/* Load pre-authorized user credentials from the environment.
TODO(developer) - See https://developers.google.com/identity for
guides on implementing OAuth2 for your application. */
GoogleCredentials credentials = GoogleCredentials.getApplicationDefault()
.createScoped(Arrays.asList(SlidesScopes.PRESENTATIONS,
SlidesScopes.DRIVE));
HttpRequestInitializer requestInitializer = new HttpCredentialsAdapter(
credentials);
// Create the slides API client
Slides service = new Slides.Builder(new NetHttpTransport(),
GsonFactory.getDefaultInstance(),
requestInitializer)
.setApplicationName("Slides samples")
.build();
// Create the drive API client
Drive driveService = new Drive.Builder(new NetHttpTransport(),
GsonFactory.getDefaultInstance(),
requestInitializer)
.setApplicationName("Slides samples")
.build();
// Duplicate the template presentation using the Drive API.
String copyTitle = customerName + " presentation";
File content = new File().setName(copyTitle);
File presentationFile =
driveService.files().copy(templatePresentationId, content).execute();
String presentationId = presentationFile.getId();
// Create the image merge (replaceAllShapesWithImage) requests.
List<Request> requests = new ArrayList<>();
requests.add(new Request()
.setReplaceAllShapesWithImage(new ReplaceAllShapesWithImageRequest()
.setImageUrl(imageUrl)
.setImageReplaceMethod("CENTER_INSIDE")
.setContainsText(new SubstringMatchCriteria()
.setText("{{company-logo}}")
.setMatchCase(true))));
// Execute the requests.
BatchUpdatePresentationRequest body =
new BatchUpdatePresentationRequest().setRequests(requests);
BatchUpdatePresentationResponse response =
service.presentations().batchUpdate(presentationId, body).execute();
int numReplacements = 0;
try {
// Count total number of replacements made.
for (Response resp : response.getReplies()) {
numReplacements += resp.getReplaceAllShapesWithImage().getOccurrencesChanged();
}
// Prints the merged presentation id and count of replacements.
System.out.println("Created merged presentation with ID: " + presentationId);
System.out.println("Replaced " + numReplacements + " shapes instances with images.");
} catch (NullPointerException ne) {
System.out.println("Text not found to replace with image.");
}
return response;
}
}
// [END slides_image_merging]
Sorry about that. New episode will be dropping tonight. If not, tomorrow, I swear. My laptop was broken-as-fuck for a while plus just other stuff and we dropped the ball a little bit.
But we’ll be getting back to a regular schedule, as best as we can manage. We posted it on the FB page, but, yeah, we recorded a new episode last night. I’m gonna do everything I can to keep up with wrangling people for recordings consistently again. Life has just been a lot lately.
The Let’s Play stuff is on indefinite hiatus as no one has a consistent schedule to do that anymore besides myself. If we did try to mix it up (which trust me, I will continue to look more into), seriously, you’d be surprised how fucking difficult it is, when ensuring at least decent audio/video quality, to record gameplay videos remotely.
I’m gonna try and have us record some bonus shit soon too.
We haven’t abandoned y’all, basically, that’s what I’m getting at. I love you motherfuckers.
- Dillon
// deno scripts
// import specific files (not mod.ts) otherwise compile errors with --unstable
import {existsSync} from "https://deno.land/std/fs/exists.ts";
import {parse as parseYaml} from "https://deno.land/std/encoding/yaml.ts";
import {getNearestFileWithPrefix} from "../fs/mod.ts";
import {basename, dirname, parse} from "https://deno.land/std/path/mod.ts";
export const isInsideDocker = (): boolean => {
return existsSync("/.dockerenv");
};
export const getNearestComposeFile = (): string | undefined => {
return getNearestFileWithPrefix("docker-compose.yml");
};
// the name of the parent directory which is used as the stack prefix
export const getComposeProject = (composeFile? : string): string | undefined => {
composeFile = composeFile
? composeFile
: getNearestComposeFile();
if (!composeFile) {
throw "No compose file found, cannot guess docker-compose project";
}
return parse(parse(composeFile).dir).name;
};
export const guessDockerComposeNetworkName = (composeFile? : string): string | undefined => {
if (!composeFile) {
composeFile = getNearestComposeFile();
}
if (!composeFile) {
throw "No compose file found, cannot guess docker-compose network";
}
// get first network name
const composeYaml: {
networks: {
[key: string]: any;
};
} = parseYaml(new TextDecoder("utf-8").decode(Deno.readFileSync(composeFile)))as any;
const firstNetworkName = Object.keys(composeYaml.networks)[0];
const dir = basename(dirname(composeFile));
return `${dir}_${firstNetworkName}`;
};
/**
* Make sure we're inside a named docker-compose service
* defaulting to a sh shell
* @param args service: docker-compose service name
*/
export const ensureInsideService = async (args : {
service: string;
shell: string;
runArgs?: string[];
debug?: boolean;
}): Promise<boolean> => {
if (!args) {
throw "Missing args";
}
if (!args.service) {
throw "Missing 'service' field in args";
}
let {service, debug, shell, runArgs} = args;
// inside docker, assume it's the desired service
if (existsSync("/.dockerenv")) {
if (debug) {
console.log("/.dockerenv found, assuming already inside docker-compose stack");
}
return true;
}
const composeFile = getNearestComposeFile();
if (!composeFile) {
throw "No compose file found, cannot guess docker-compose network prefix";
}
const composeProject = getComposeProject(composeFile);
if (debug) {
console.log(`Found docker-compose project: ${composeProject}`);
}
let cmd = ["docker-compose", "run"];
cmd = runArgs
? cmd.concat(runArgs)
: cmd;
cmd = cmd.concat([
service, shell
? shell
: "sh"
]);
const p = Deno.run({cmd: cmd, cwd: parse(composeFile).dir, env: Deno.env.toObject()});
await p.status();
return true;
};
// Source repo: maureennduta/ng2-amrs
import { Injectable } from '@angular/core';
import { AppSettingsService } from '../app-settings/app-settings.service';
import { Observable, from } from 'rxjs';
import { HttpClient, HttpHeaders } from '@angular/common/http';
@Injectable()
export class ProgramReferralResourceService {
constructor(
private http: HttpClient,
private appSettingsService: AppSettingsService
) {}
public getUrl(): string {
return (
this.appSettingsService.getEtlRestbaseurl().trim() + 'patient-referral'
);
}
public saveReferralEncounter(payload): Observable<any> {
    if (!payload) {
      // from(null) throws at runtime; emit a single null value instead
      return from([null]);
    }
const headers = new HttpHeaders({ 'Content-Type': 'application/json' });
return this.http.post(this.getUrl(), JSON.stringify(payload), { headers });
}
}
import * as React from "react";
import { IEmojiProps } from "../../styled";
const SvgPersonSport201 = (props: IEmojiProps) => (
<svg viewBox="0 0 72 72" width="1em" height="1em" {...props}>
<g fill="#debb90">
<circle cx={34.318} cy={12.488} r={2.86} />
<path d="M28.599 24.785l-1.621 39.084h3.146l3.24-23.832h1.908l3.24 23.832h3.146l-1.62-39.084s.19 3.908 1.334 5.72a5.732 5.732 0 005.815 2.764s4.29-.572 5.053-2.097a1.89 1.89 0 000-1.811c-.096-.286-2.574.667-2.574.667l-2.765.477-2.001-1.049-1.24-1.43-1.906-6.959-3.337-1.239-9.247-.19-3.24 4.194-.477 4.099-1.62 1.144-6.388-5.91s-1.43 1.62-.476 2.955 3.622 5.052 5.624 5.434a3.969 3.969 0 004.194-1.335 11.805 11.805 0 001.812-5.434z" />
</g>
<circle cx={18.907} cy={17.093} r={1.907} fill="#ea5a47" />
<circle cx={38.907} cy={6.093} r={1.907} fill="#92d3f5" />
<circle cx={52.907} cy={26.093} r={1.907} fill="#b1cc33" />
<g fill="none" stroke="#000" strokeWidth={2}>
<circle cx={34.5} cy={12.3} r={3} strokeMiterlimit={10} />
<path
strokeLinecap="round"
strokeLinejoin="round"
d="M39.084 24.785l2.48 37.178a1.615 1.615 0 01-1.43 1.906 2.04 2.04 0 01-1.812-1.906l-2.765-20.02c-.095-1.048-.667-1.906-1.239-1.906-.476 0-1.048.858-1.24 1.907l-2.764 20.019a2.04 2.04 0 01-1.81 1.906 1.615 1.615 0 01-1.43-1.906l2.478-37.178"
/>
<path
strokeLinecap="round"
strokeLinejoin="round"
d="M18.113 23.832l3.527 4.003c1.716 2.002 3.527 1.43 3.908-1.143l.382-2.002c.476-2.574 2.478-4.957 4.575-5.148a34 34 0 017.627 0c2.097.286 4.194 2.574 4.575 5.148l.382 2.002c.476 2.573 2.478 4.29 4.575 3.717 2.097-.476 3.813-.953 3.813-.953"
/>
</g>
</svg>
);
export default SvgPersonSport201;
1. Field of the Invention
Embodiments of the invention relate to a display device, and more particularly, to a liquid crystal display (“LCD”) device and a fabricating method thereof. Although embodiments of the invention are suitable for a wide scope of applications, they are particularly suitable for improving transmittance in an LCD device.
2. Discussion of the Related Art
Liquid crystal displays (“LCDs”) control electric fields applied to liquid crystal cells to modulate light incident to the liquid crystal cells, thereby displaying an image. LCDs are classified into either vertical electric field-type LCDs or horizontal electric field-type LCDs, depending upon the direction of an electric field that drives the liquid crystal material.
In vertical electric field-type LCDs, when a voltage is applied to pixel electrodes and common electrodes opposing each other on upper and lower substrates, an electric field is applied across the liquid crystal material between the electrodes. Vertical electric field-type LCDs have the disadvantage of a narrow viewing angle.
In horizontal electric field-type LCDs, when a voltage is applied to pixel electrodes and common electrodes arranged on the same substrate, an electric field is applied across the liquid crystal material between the electrodes. Horizontal electric field-type LCDs have the advantage of a wide viewing angle, as compared to vertical electric field-type LCDs.
Horizontal electric field-type LCDs include a thin film transistor substrate joined to a color filter substrate such that the substrates face each other, spacers to maintain a cell gap between the two substrates and liquid crystal material within the cell gap. The thin film transistor substrate includes signal lines and thin film transistors to generate a horizontal electric field in each cell, and an alignment film is positioned over the signal lines and the thin film transistors for aligning the liquid crystal material. The color filter substrate includes color filters to render colors, a black matrix to prevent light leakage and an alignment film positioned over the color filters and the black matrix for aligning the liquid crystal material.
FIG. 1 is a view illustrating a thin film transistor substrate of a horizontal electric field-type LCD according to the related art. As shown in FIG. 1, the related art LCD thin film transistor substrate includes: gate lines 2 and data lines 4 crossing each other to define pixel regions; thin film transistors 6 at each crossing of an associated one of the gate lines 2 and an associated one of the data lines 4; first common lines 16a and second common lines 16b each extending in parallel to the gate lines 2 in the pixel regions; common electrodes 18, each connected to the first common lines 16a while extending over the pixel region with finger portions 18b; and pixel electrodes 14, extending over the pixel region, that are individually connected to a drain electrode of an associated one of the thin film transistors 6 and alternately arranged with finger portions 18b of the common electrodes 18.
The first and second common lines 16a and 16b are formed at the same time as the gate lines 2 using the same non-transparent metal as the gate lines 2. The first and second common lines 16a and 16b are connected to the common electrodes 18 and supply a common voltage to the common electrodes 18.
The liquid crystal display shown in FIG. 1 further includes connecting lines 16c to connect the first non-transparent common lines 16a with the second non-transparent common lines 16b. The connecting lines 16c extend in parallel to the data lines 4 and are made of a non-transparent metal, like the first and second common lines 16a and 16b, to prevent light leakage from the pixel regions during driving of the liquid crystal display.
In response to a scan pulse of the gate line 2, the thin film transistor 6 applies a data signal from the data line 4 to the pixel electrode 14 in the pixel region. For this operation, the thin film transistor 6 includes a gate electrode 8 connected to the gate line 2, a source electrode 10 connected to the data line 4 and the drain electrode 12 connected to the pixel electrode 14. The thin film transistor 6 further includes an active layer (not shown) forming a channel between the source electrode 10 and the drain electrode 12 above the gate electrode 8, and ohmic contact layers (not shown) to allow ohmic connection to the active layer by the source electrode 10 and the drain electrode 12.
The common electrode 18 is connected to the first common line 16a through a contact hole 17, and includes a base portion 18a extending in parallel to the gate line 2 and a plurality of finger portions 18b extending from the base portion 18a into the pixel region. The common electrode 18 is made of a transparent metal.
The pixel electrode 14 includes a first pixel electrode 14a connected to the drain electrode 12 of the thin film transistor 6 through the contact hole 13 and extending in parallel to the gate line 2, and a plurality of second pixel electrodes 14b extending from the first pixel electrode 14a into the pixel region and being arranged alternately with the finger portions 18b of the common electrode 18. The pixel electrode 14 is made of the same transparent metal as the common electrode 18. The first pixel electrode 14a overlaps the second non-transparent common lines 16b, with an insulating layer (not shown) therebetween, to form a storage capacitor.
A horizontal electric field can be applied between the second pixel electrode 14b, which receives a data signal through the thin film transistor 6, and the finger portion 18b of the common electrode 18, which receives a common voltage through the first common line 16a. This horizontal electric field leads to rotation of liquid crystal molecules that were initially aligned in a horizontal direction in the pixel region, due to dielectric anisotropy. Further, the transmittance of light transmitted through the pixel region is varied depending on the degree of rotation of the liquid crystal molecules such that a gray scale can be implemented. However, in related art LCDs, transmittance deterioration occurs at an end portion of the second pixel electrode 14b and at an end portion of the finger portion 18b of the common electrode 18, as shown in regions A and B of FIG. 1.
FIG. 2 is a view illustrating the phenomenon of transmittance deterioration occurring in region A when an electric field is applied. As shown in FIG. 2, the liquid crystal molecules 20 in the region A are driven not only by an electric field applied between the finger portion 18b of the common electrode 18 and the second pixel electrode 14b, but also by an electric field applied between the base portion 18a of the common electrode 18 and the second pixel electrode 14b. Meanwhile, polarizing plates with transmission axes crossing at right angles, to control light transmittance, are respectively mounted in upper and lower parts of the liquid crystal display. During driving of the liquid crystal molecules 20, in the regions A and B where the transmission axes of the polarizing plates do not correspond to the alignment of the liquid crystal molecules 20, light is not transmitted, as compared to the remaining regions, and therefore contrast and brightness degrade.
# Assumed third-party imports for this snippet: scikit-learn (LabelEncoder)
# and pyemd (emd_samples).
from sklearn.preprocessing import LabelEncoder
from pyemd import emd_samples


def calc_dist_distance(p_data, c_data, dtype):
    # Earth mover's distance between two samples; nominal data is
    # label-encoded to integers first. Returns -1 on any failure.
    try:
        if dtype == "nominal":
            le = LabelEncoder()
            le.fit(p_data)
            p_data = le.transform(p_data)
            c_data = le.transform(c_data)
        return emd_samples(p_data, c_data)
    except Exception:
        return -1
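For intuition about what `emd_samples` computes: when the two samples have the same size, the one-dimensional earth mover's distance reduces to the mean absolute difference between the sorted samples. Here is a pure-Python sketch of that special case (a simplification: pyemd's `emd_samples` bins the data into histograms first, so its values will generally differ):

```python
def emd_1d_equal_size(a, b):
    # Equal-size 1-D earth mover's distance: optimal transport pairs the
    # i-th smallest value of one sample with the i-th smallest of the other.
    assert len(a) == len(b)
    return sum(abs(x - y) for x, y in zip(sorted(a), sorted(b))) / len(a)
```

For example, `emd_1d_equal_size([0, 1], [1, 2])` is 1.0: every unit of mass has to move a distance of 1.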
import java.util.*;
class WordBreak {
public boolean wordBreak(String s, Set<String> dict) {
assert s.length() != 0;
boolean[] dp = new boolean[s.length()+1];
dp[0] = true;
for (int i = 1; i <= s.length() ; i ++) {
dp[i] = false;
for (int j = 0 ; j < i ; j ++) {
if (dp[j] && dict.contains(s.substring(j,i))) {
dp[i] = true;
break;
}
}
}
return dp[s.length()];
}
public static void main(String[] args) {
WordBreak wb = new WordBreak();
String s = "lintcodexyc";
Set<String> dict = new HashSet<String>();
dict.add("lint");
dict.add("code");
dict.add("xyc");
System.out.println(wb.wordBreak(s, dict));
}
}
# Source repo: anubhab-code/Competitive-Programming
s , p = list(map(int,input().split()))
data = sorted(list(map(int,input().split())))
min_ = 100000
for i in range(p-s+1):
temp = data[i:i+s]
if max(temp)-min(temp) < min_:
min_ = max(temp) - min(temp)
print(min_)
from uer.encoders.transformer_encoder import TransformerEncoder
from uer.encoders.rnn_encoder import RnnEncoder
from uer.encoders.rnn_encoder import LstmEncoder
from uer.encoders.rnn_encoder import GruEncoder
from uer.encoders.rnn_encoder import BirnnEncoder
from uer.encoders.rnn_encoder import BilstmEncoder
from uer.encoders.rnn_encoder import BigruEncoder
from uer.encoders.cnn_encoder import GatedcnnEncoder
str2encoder = {"transformer": TransformerEncoder, "rnn": RnnEncoder, "lstm": LstmEncoder,
"gru": GruEncoder, "birnn": BirnnEncoder, "bilstm": BilstmEncoder, "bigru": BigruEncoder,
"gatedcnn": GatedcnnEncoder}
__all__ = ["TransformerEncoder", "RnnEncoder", "LstmEncoder", "GruEncoder", "BirnnEncoder",
"BilstmEncoder", "BigruEncoder", "GatedcnnEncoder", "str2encoder"]
Yael in Judges 4

In recent years several literary studies of Jud 4 and 5 have been published, and many of the semantic relationships in the text have been clarified. In these studies attention has also been paid to the expressive power of a number of Hebrew words in Jud 4 and 5, and to the ambiguity of certain textual elements. Yet in my opinion, one aspect has been overlooked so far: the significance of the proper name Yael in the context of Jud 4.
In making liquid-resistant sleeves for protective apparel, the fabric for each sleeve must be cut and folded so as to form a sleeve-like shape. In addition, various sections of the sleeve fabric must be overlapped and joined, thereby resulting in the formation of one or more sleeve seams. Oftentimes, a sleeve seam is formed by overlapping particular edges of the sleeve fabric, and stitching the edges together. Such stitched-seam sleeves are particularly desirable because they are both comfortable and relatively inexpensive to produce. In forming such stitched seams, however, one or more sewing needles pierce the fabric, thereby forming a series of needle holes. And while these needle holes may be quite small, they still may serve as passageways through which a liquid undesirably may pass from the exterior to the interior of the sleeve.
In an effort to reduce the problem of liquid-permeation through stitched seams of protective-apparel sleeves, U.S. Pat. No. 4,991,232 provided a surgical gown in which each of the sleeves has an inner seam-stitched ply and an outer seam-stitched ply, with each ply made of, for example, a hydrophobic fabric, and with the seams being circumferentially offset.
More recently, users in various segments of the protective-apparel market have requested protective apparel in which the sleeves deliver a further-enhanced level of liquid resistance. In an effort to provide such an enhanced level, more than merely offsetting the seams has been required. Specifically, one or more seam sealants, such as a heat-applied tape, glue, and/or other similar materials are applied to the stitched seams. Such sealants are undesirable, however, for many reasons, including because they add to the manufacturing costs and steps involved in making such sleeves, and because they reduce the comfort of the sleeves.
Accordingly, there is a need to provide protective apparel which not only offers a further-enhanced level of liquid resistance, but also provides the high level of comfort and relatively low manufacturing expense associated with stitched-seam sleeves.
[Image caption: Douglas Slocombe setting up his camera on the Ealing Studios film Cage of Gold in 1950. Image copyright: Getty Images]
British cinematographer Douglas Slocombe has died at the age of 103, his family has said.
Slocombe shot 80 films, from classic Ealing comedies such as The Lavender Hill Mob and Kind Hearts and Coronets, to three Indiana Jones adventures.
In 1939 he filmed some of the earliest fighting of World War Two in Poland.
Indiana Jones director Steven Spielberg said Slocombe - who won Baftas for the Great Gatsby, The Servant, and Julia - "loved the action of filmmaking".
He said the cinematographer was "a great collaborator and a beautiful human being".
"Dougie Slocombe was facile, enthusiastic, and loved the action of filmmaking. Harrison Ford was Indiana Jones in front of the camera, but with his whip-smart crew, Dougie was my behind the scenes hero for the first three Indy movies," Spielberg added.
Oscar nods
Slocombe's other work included The Italian Job and Jesus Christ Superstar.
Among his own favourites was Kind Hearts and Coronets, the Ealing Studios classic of 1949, starring Alec Guinness and Joan Greenwood.
A decade earlier, as a young newsreel cameraman, London-born Slocombe had shot parts of the Nazi invasion of Poland.
The quality of that footage, which was used in the documentary Lights Out in Europe, persuaded Ealing to employ him.
Steven Spielberg chose Slocombe, then nearing 70, to shoot Harrison Ford in Raiders of the Lost Ark and then two further Indiana Jones films in the 1980s.
Slocombe was nominated for an Oscar on three occasions, including for Raiders, and was given a lifetime achievement award by the British Society of Cinematographers in 1996. He was made an OBE in the New Year Honours list in 2008 for services to the film industry.
'Amiability and intelligence'
by Vincent Dowd, BBC World Service
With Dougie Slocombe's passing at 103 we've lost a link to several eras of Britain's cinematic history. His typically humane account of how as a news cameraman he escaped from wartime Poland by horse and trap and then by train would make a film in itself. He became, as he said, 'last man standing' of the great craftsmen who helped Michael Balcon turn Ealing Studios into a force to be reckoned with. Of the films he shot there he most loved the dark humour of Kind Hearts and Coronets - but he told me he was also proud of how Hue and Cry (1947) found black and white beauty in bombed-out London post-war. When Ealing closed, he went on to an extraordinary array of 1960s and 70s films: from The Servant (again, London in gorgeous monochrome) to the explosively colourful Jesus Christ Superstar in 1973.
Four years later, Steven Spielberg drafted him in to film scenes for Close Encounters of the Third Kind: they got on so well that he asked Dougie to be cinematographer on the first of three Indiana Jones movies. Dougie was already 70 when he started the job. I didn't know him until he was 100 and almost blind, but in the long conversations I had with him, his memory remained pin-sharp. The amiability and intelligence which helped make him one of the world's great cameramen were still there. His energy amazed. When he was 102, I telephoned to ask if I could come round the next day to interview him about an aspect of the Ealing years. Dougie said he'd be delighted. But it would have to be in the morning because in the afternoon he was booked in to record a five-hour TV interview - in French.
His other films included Whisky Galore, The Man in the White Suit, Rollerball and Never Say Never Again.
Speaking to the BBC last year, Slocombe recalled working under the Ealing Studio mogul, Sir Michael Balcon, as well as filming on location in a city still scarred by bomb damage.
"I think I'm the last man standing," he said. "All the major technicians and the producers and directors are gone - and that famous repertory company of actors and actresses."
Slocombe's daughter said he died in hospital in London.
/**
 * Propagate the state and state transition matrix to the next measurement time.
 * @param t0 previous time
 * @param x array containing the state and state transition matrix at the previous time
 * @param tf next time
 * @return array containing the state and state transition matrix at time tf
 */
public double[] propagate(double t0, double[] x, double tf) {
rk8.setStepSize(tf-t0);
double[] out = rk8.step(t0, x, eom);
return out;
}
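The `rk8` integrator and `eom` equations of motion referenced above are not shown in this excerpt. As a rough illustration of the same single-step pattern (set the step size to the whole interval, then take one integrator step), here is a fixed-step classical RK4 sketch in Python; the names and the use of RK4 instead of an 8th-order method are assumptions, not the original implementation:

```python
def rk4_step(f, t0, x, h):
    # One classical fourth-order Runge-Kutta step of size h.
    k1 = f(t0, x)
    k2 = f(t0 + h / 2, [xi + h / 2 * ki for xi, ki in zip(x, k1)])
    k3 = f(t0 + h / 2, [xi + h / 2 * ki for xi, ki in zip(x, k2)])
    k4 = f(t0 + h, [xi + h * ki for xi, ki in zip(x, k3)])
    return [xi + h * (a + 2 * b + 2 * c + d) / 6
            for xi, a, b, c, d in zip(x, k1, k2, k3, k4)]


def propagate(f, t0, x, tf):
    # Mirror of the Java method: one integrator step spanning the interval.
    return rk4_step(f, t0, x, tf - t0)
```

Taking a single step over the whole measurement interval only works well when the interval is short relative to the dynamics, which is presumably why the Java version sets the step size to exactly `tf - t0`.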
Secure event signature protocol for peer-to-peer massive multiplayer online games using bilinear pairing

With the development of the Internet, multiplayer online games are rapidly replacing traditional single-player games. The peer-to-peer architecture, which is suitable for massive multiplayer online games, is being considered as a replacement for the traditional client-server architecture. Because current solutions cannot completely prevent cheating to gain unfair advantages in these games, we summarize the problems existing in some event signature protocols for peer-to-peer online games and propose a new secure event signature protocol. The security of the proposed protocol rests on discrete logarithms and bilinear pairing. Our protocol provides higher security than some current event signature protocols, although its efficiency is lower than theirs. Copyright © 2012 John Wiley & Sons, Ltd.
Interventions for Treating Fractures of the Distal Femur in Adults

Introduction

This summary is based on a Cochrane review (Griffin, Parsons, Zbaeda, & McArthur, 2015) that explores how to treat distal femur fractures in adults. Distal femur fractures are most commonly seen as a fall-related injury in older adults, but they can also stem from high-energy impacts in younger people, and occur in people who have previously undergone total knee replacement (Elsoe, Ceccotti, & Larsen, 2018).
// Assumed import for this excerpt; Element, Context, Value, Function,
// Variable, List and InvalidSyntaxError are project-local classes not shown here.
import java.util.regex.Pattern;

/**
 * Class that parses expressions
 *
 * @author ramilmsh
 */
public class Expression {
/**
* Expression types
*/
public enum Type {
BOOLEAN(Pattern.compile("True|true|False|false"), Value::process),
FUNCTION(Pattern.compile("[a-zA-Z_]+(\\?)?|[*+-/%~]"), Function::process),
VARIABLE(Pattern.compile(":[a-zA-Z_]+[a-zA-Z0-9_]*"), Variable::process),
NUMBER(Pattern.compile("-?[0-9]+\\.?[0-9]*"), Value::process),
STRING(Pattern.compile("\"[a-zA-Z0-9_]*"), Value::process),
LIST_START(Pattern.compile("\\["), List::process),
COMMAND_START(Pattern.compile("\\("), Function::process);
private Pattern pattern;
private java.util.function.Function<Context, Element> processor;
/**
* Create a type
*
* @param pattern: regular expression for the type
* @param processor: processor that creates an Element from input
*/
Type(Pattern pattern, java.util.function.Function<Context, Element> processor) {
this.pattern = pattern;
this.processor = processor;
}
/**
* Match expression with type
*
* @param exp: expression
* @return type of the expression
*/
public static Type match(String exp) {
for (Type type : Type.values()) {
if (type.pattern.matcher(exp).matches()) return type;
}
return null;
}
}
/**
* Parse the input
*
* @param context: context
* @return resolvable Element
* @throws InvalidSyntaxError: if there has been a syntax error
*/
public static Element parse(Context context) throws InvalidSyntaxError {
Type type = Type.match(context.tokenizer.peek());
if (type == null)
throw new InvalidSyntaxError("Unexpected token " + context.tokenizer.peek());
return type.processor.apply(context);
}
}
#!/usr/bin/python
# %run getSQLdata.py SXT_datamatrix Mouse_Cells_ALL
import pymysql
import sys
import numpy as np
import pandas as pd
import time
import datetime
import os

# getSQLdata pulls all data from the specified database and data table
# into a list of equal-length lists
dir = '/Users/elizabethasmith/Desktop/sql'
os.chdir(dir)
myList = []


def main():
    print('Retrieving data from the ' + sys.argv[2] + ' table from the ' + sys.argv[1] + ' MySQL database...\n')
    # generate a reference to the database, stored as connection; database specified by arguments
    connection = pymysql.connect(host='localhost', port=3306, user='root', passwd='', db=sys.argv[1])
    cursor = connection.cursor()
    myquery = 'SELECT * FROM ' + sys.argv[1] + '.' + sys.argv[2]
    cursor.execute(myquery)
    num_fields = len(cursor.description)
    field_names = [i[0] for i in cursor.description]
    for row in cursor:
        rowList = []
        for item in row:
            rowList.append(item)
        myList.append(rowList)
    dataset = np.array(myList)
    ts = time.time()
    st = datetime.datetime.fromtimestamp(ts).strftime('%Y-%m-%d_%H-%M-%S')
    datasetname = sys.argv[1] + '_' + sys.argv[2] + '_' + st
    global df
    df = pd.DataFrame(myList, columns=field_names)
    df.to_pickle(datasetname)
    if myList == []:
        print('I did not find any data there. Are you sure you have the correct name?')
    else:
        print('Success! You just pulled out ' + str(dataset.shape[0]) + ' rows, each of which contained ' + str(dataset.shape[1]) + ' columns, of data.\n')
        print('You also pulled out ' + str(num_fields) + ' field names.\n')
        print('These data have been saved as a pickled pandas DataFrame called ' + datasetname + ' locally at ' + dir + '.\n')
        print('The DataFrame is also present as a global variable called "df".\n')
    cursor.close()
    connection.close()


# boilerplate to call the function and begin the program
if __name__ == '__main__':
    main()
The 47-Foot Motor Lifeboat is designed as a first-response rescue resource for the Coast Guard in high seas, surf and heavy weather environments. These boats are built to withstand the most severe conditions at sea and are capable of effecting a rescue even under the most difficult circumstances. They are self-bailing, self-righting, almost unsinkable, and have a long cruising radius for their size. The 47' MLB is the replacement for the aging 44' MLB fleet.
There are (presently) 117 operational, with more being added monthly. The total (to be delivered over 5 years) will be about 200.
RECLAMATION OF A SALINE SODIC SOIL IN THE NKWALINI VALLEY

An area of saline sodic soil was intensively drained and treatments of gypsum (31 t/ha) or sulphur (6 t/ha) were applied, while control plots received no ameliorants. Physical and chemical soil analyses showed gypsum to have an ameliorative effect slightly superior to that of sulphur. Both treatments were more beneficial than the control but differences were not always as great as expected. For the plant and first ratoon sugarcane crops grown on the experiment, average yields were 100, 99 and 82 tc/ha for the gypsum, sulphur and control treatments, respectively.

Introduction

Excessive salt levels in the soil have for some years been recognised as a problem in the dry, irrigated regions of the South African sugar industry (Maud3, von der Meden7). The saline sodic (saline alkali) condition is by far the most common, but saline non-sodic and non-saline sodic soils are also found. Salinisation of soils in the sugarcane areas is generally caused by improper water management, i.e. inadequate drainage is provided for irrigation excesses and this results in a rise in the level of the water table. Salts dissolved in the groundwater reach the soil surface by capillary movement and then accumulate there. It is usually soils of naturally poor drainage that are affected in this way, and in many instances these are marginally affected by salts in the virgin state. In reclaiming sodic soils, improvement or maintenance of soil permeability is the prime consideration. This is usually achieved by supplying calcium ions to the soil, either directly by adding gypsum (calcium sulphate) or indirectly by acidifying the soil with sulphur or sulphuric acid to dissolve free calcium carbonate.
Techniques of reclamation have been thoroughly discussed in the literature (US Salinity Lab Staff6; Szabolcs5; Kovda, van den Berg and Hagen2) but, because of a general lack of local practical experience, it was decided that it would be worthwhile, as a first step, to test commonly used ameliorants in a small field trial. Selecting a site for reclamation work presented difficulties since, in such soils, marked variations in salt level frequently occur over very short distances. Nevertheless, a site was selected for an experiment in the Nkwalini Valley (Zululand) which represented a typical example of a saline sodic condition. The climate in this area is described as semi-arid, the mean annual rainfall being approximately 700 mm, so that intensive irrigation is required to achieve satisfactory sugarcane yields. Experimental procedure The soil is of the Nyoka series (Swartland form) and consists of a very dark grey brown porous sandy clay loam (± 25% clay) about 36 cm in depth, overlying a B horizon of dark to yellowish brown, slowly permeable sandy clay (± 40% clay). The transition between A and B horizons is gradual and the profile depth generally exceeds 0,9 metres. The underlying material consists of water-worn gravel and weathering Middle Ecca sediments, the latter being the dominant parent material in this area. The highly saline sodic condition that exists (see Table 1) is believed to have arisen primarily from a high water table which, in turn, was caused by seepage from an unlined dam up the slope from the area. At the time when the experiment was laid down, much of the soil was in a highly dispersed state, with little plant growth before treatment (Fig. 1). [TABLE 1: Average chemical status of the soil prior to treatment — exchangeable Na (meq %) and SARse.] [FIGURE 1: Appearance of the experiment site prior to application of treatments.] 
Three replications of the following treatments were tested: Drainage alone as control (C); Drainage + gypsum at 31 t/ha (G); Drainage + sulphur at 6 t/ha (S). The gypsum (CaSO4.2H2O) requirement was determined from the exchangeable Na level of the worst affected areas of the site. The aim was to displace the exchangeable Na in excess of 2,0 meq/100 g with Ca. An amount of sulphur that was chemically equivalent to the gypsum was used. The ameliorants were of fine texture, 90% and 100% of the gypsum and sulphur, respectively, passing a 100 mesh sieve, and both were at least 98% pure. As the salt status of the soil varied quite markedly across the site, it was considered necessary to group together the three worst, the three intermediate and the three least affected plots. One of each of the three treatments was allocated at random to a plot in each group to minimise bias in the results towards any specific treatment. In December 1970, flexible plastic pipe drains (50 mm diam.) were installed at a spacing of 8,5 metres and an average depth of about 0,85 metres. Ten parallel drains formed the boundaries of the nine plots and these extended down the 36 metre |
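The gypsum dose follows from simple stoichiometry: every milliequivalent of exchangeable Na per 100 g of soil above the 2,0 meq target must be displaced by an equivalent amount of Ca, and the equivalent weight of CaSO4.2H2O is about 86 g. A rough sketch of the calculation — the exchangeable Na level, bulk density and reclamation depth used below are assumed illustrative values, not figures from the paper:

```python
GYPSUM_EQ_WT = 172.17 / 2  # g of CaSO4.2H2O per equivalent (= mg per meq)

def gypsum_requirement_t_ha(exch_na, target_na, depth_m, bulk_density_t_m3):
    """Gypsum (t/ha) needed to displace exchangeable Na above the target.

    exch_na and target_na are in meq/100 g soil.
    """
    excess = exch_na - target_na                      # meq per 100 g soil
    soil_mass = 10000 * depth_m * bulk_density_t_m3   # tonnes of soil per ha
    # meq/100 g -> meq/tonne (x1e4), then mg gypsum -> kg (x1e-6)
    kg_per_tonne = excess * 1e4 * GYPSUM_EQ_WT * 1e-6
    return kg_per_tonne * soil_mass / 1000            # t/ha

# With an assumed 8 meq/100 g excess Na over the 36 cm A horizon at an
# assumed bulk density of 1.3 t/m3, the dose lands near the 31 t/ha used here.
print(round(gypsum_requirement_t_ha(10.0, 2.0, 0.36, 1.3), 1))
```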
Tuesday's Amarillo City Council meeting had a little drama to it, but the lid blew off at the end. "I asked for Jarrett's resignation," Councilman Mark Nair said late Tuesday, hours after taking the oath of office, in reference to City Manager Jarrett Atkinson. "I feel like we need a change, the people need a change, and this is what they elected me to do," Nair said. Nair asked about staffing issues during an executive session following the commission's regular public meeting Tuesday afternoon. City Attorney Marcus Norris said the topics could not be discussed due to state law requiring the posting of agenda items 72 hours in advance. The council reconvened in open session to discuss items to place on next Tuesday's agenda, because a discussion of future agenda items stays on each week's posted agenda for work session topics, Councilman Elisha Demerson said. That's when Nair asked for Atkinson's resignation. Atkinson did not return a call for comment by press time Tuesday night, nor did Mayor Paul Harpole. Councilman Brian Eades expressed shock. The resignation request follows the resignation of Deputy City Manager Vicki Covey, Burkett said. Burkett said he did not know when Covey submitted the resignation or the effective date, but he said he had requested that she resign. Burkett said he made the request due to her failed oversight of the Animal Control Department, now known as Animal Management and Welfare. Nair, Burkett and Demerson were the three councilmen needed - a majority - to place Atkinson's resignation on next week's agenda. But the council didn't stop with Atkinson. Burkett said he asked for next week's agenda to include discussion and potential action to remove the entire Amarillo Economic Development Corp. board of directors. 
The board is appointed by the council and serves at its pleasure. AEDC board member and former board Chairman Ginger Nelson learned of the development late Tuesday, she said. "My reaction is sadness to the extreme overreaction by the new council members to this situation," Nelson said. "What Amarillo needs is a positive vision and leaders who will cast a positive vision, not leaders who are caught up in the venom and misinformation spoken by a minority." Nelson said she would not resign her post. "They have the authority, I understand, to reappoint us (the board), but I believe I have to take a stand," Nelson said. "The decisions that have been made in the past are not poor decisions. They are good decisions and I stand by them." Before the drama, the council made some progress on issues during its regular meeting Tuesday, but a group of hotel owners presented a petition asking it to cancel part of the planned downtown redevelopment project. At the end of the meeting, hotelier Dipak Patel presented the council a petition with unverified signatures representing 75 completed or under-construction hotels and motels. The petition asked the city to do away with the proposed convention hotel downtown to be built with private investments of $45 million from Supreme Bright Amarillo II, a NewcrestImage company. It will receive incentives to build in what had formerly been considered an undesirable location. Those incentives include rebates of hotel occupancy taxes, the state portion of sales taxes and property taxes for a limited time. There is also a performance assurance of up to $2 million if the hotel can't meet performance goals for up to 42 months after the hotel has been opened nine months. "All these owners pay their taxes - hard workers," he said. 
"Why are you helping downtown hotel against us?"(If we don't) get justice, we go further."The petition says the hotel and motel owners feel like they would be subsidizing competition."It is the taxes collected from these properties that you have pledged to the voters of the city of Amarillo which will pay for the Downtown Development including a hotel which will directly compete against the owners of hotels who have and always will continue to make Amarillo a viable tourist and convention town," the petition states.The downtown redevelopment catalyst project includes the hotel, a parking garage and a ballpark/event venue. Money from the hotel occupancy tax collected from customers of all hotels in the city goes to promote tourism such as funding budget shortfalls at the Amarillo Civic Center and Globe-News Center for the Performing Arts. It also funds the Amarillo Convention and Visitors Council. Some of that money will go to pay debt issued to build the event center but will not reduce the amount of funding available for existing functions, Atkinson said.Patel said believes the convention hotel itself will compete with the Civic Center for convention business, not complement it, he said. The civic center could lose revenue, he said.In other business, a project to attract a $41 million wind turbine tower manufacturing plant was on the agenda in two ways: approval of the Amarillo Economic Development Corp. pursuing a financial incentive agreement and approval of a 100 percent tax abatement on some of the property involved for 10 years.GRI Renewable Industries, a subsidiary of the Spain-based Gestamp Wind Steel Inc. promises to create up to 330 jobs over time. That would make for a $13 million annual payroll from a 200,000 square-foot plant it would build on 48 acres of donated land at AEDC's CenterPort Business Park. In return, the AEDC would not only donate the land but give a grant of up to $3.3 million based on the payroll reached. 
That amounts to about $10,000 per job. GRI already has wind tower plants in Spain, Brazil, China, Turkey, India and South America. The Texas Enterprise Fund has committed $1.8 million for the project. Burkett said he'd asked to see the contract for the project but hadn't received it, and another newly elected councilman agreed. "While 300 jobs is always enticing, being a new kid on the block, I agree with Councilman Burkett," Demerson said. "We don't have a contract, so why are we acting?" However, AEDC President and CEO Buzz David said the contract is in a very preliminary form because AEDC needed the council's blessing to proceed with negotiations of the details. When it is complete, the council members will have an opportunity to approve it, he said. AEDC Vice President for Business Development Brian Jennings said the contract will include "clawback" provisions for the city to recover money if the company doesn't meet performance standards and a guarantee of payment from the parent company. "If they do not perform, they owe us money," Jennings said. While GRI is trying to make a deal in Amarillo, there was competition from locations in New Mexico, Kansas and Oklahoma, David said. The city's central location in the heart of the country's best wind energy production and its superior rail and highway transportation gave it a boost, as did the possibility of training through Amarillo College and an attractive business climate, Jennings said. It would put the GRI factory nearer to its customers. "You drive around and see all those towers and none of them were built here," David said. Council members unanimously approved proceeding on finalizing the project, then took up the proposed abatement. 
Demerson questioned why they had to do that before seeing the final contract. "They look at it as a package, what would it look like at the end," said City Attorney Marcus Norris, noting the abatement would attach to the contract. Some concern was raised by resident Claudia Stravato and Burkett about the impact of tax abatements on public education, but the Amarillo Independent School District can't grant abatements. So in this case, only the city, Potter County, Amarillo College and the Panhandle Groundwater Conservation District will be asked for abatements. Resident Craig Gaultierre criticized the incentive process. "There's not a contract to read the fine print on," he said. "Tax abatements - they don't work." He also criticized the process followed by the AEDC, which involves negotiating with a company in private before voting on a project. "They don't even have a public comment period," he said. Burkett apologized to representatives of the company. "We've had some trust issues in this town, and it has nothing to do with you," he said. Resident Cindy Bulla endorsed the process. "We are competing for every job we can get with cities giving huge incentives," she said. Jennings said the abatement would not apply to everything at GRI, including the original value of the land at $1.9 million and business property such as inventory at up to $37 million. That would generate an estimated $142,000 in tax revenue. The vote to approve the abatement was 4-1 with Demerson voting no. "I wanted to send a message to AEDC," Demerson said in an interview. "I want them to know we won't accept what they give us and rubber stamp it." |
The influence of the constructivist perspective on the teaching of statistics and the importance of teaching with data In 1985, the North American Group of the Group for the Psychology of Mathematics Education affirmed the notion that the constructivist perspective would provide guidelines for instructors to facilitate student learning. The constructivist view of the student as an active learner as well as the present day emphases on the automation of the calculations and the student learning of the concepts has motivated the use of multimedia in the online teaching of statistics. Here, I examined the influence of the constructivist perspective on the teaching of statistics, the influence of the constructivist perspective on the use of computer assisted calculations and on the use of automated simulations and the significance of teaching with data. Based on this examination, one may improve upon the teaching of statistics by accounting for the omission of elements of sound and video that were observed for past online courses, by providing instruction on how data is produced for an empirical study, by providing small data sets to show the influence upon the statistic of outliers, skew and variance heterogeneity through the use of manual calculations, by providing large real datasets so that students may draw substantive conclusions about the statistics, and by using automated simulations to convey the statistical properties of the sampling distribution of the statistic. |
/*
* This header is generated by classdump-dyld 1.0
* on Sunday, September 27, 2020 at 11:39:54 AM Mountain Standard Time
* Operating System: Version 14.0 (Build 18A373)
* Image Source: /System/Library/PrivateFrameworks/AVFCore.framework/AVFCore
* classdump-dyld is licensed under GPLv3, Copyright © 2013-2016 by <NAME>.
*/
#import <AVFCore/AVFCore-Structs.h>
#import <AVFCore/AVDateRangeMetadataGroup.h>
@class AVDateRangeMetadataGroupInternal, NSDate, NSArray;
@interface AVMutableDateRangeMetadataGroup : AVDateRangeMetadataGroup {
AVDateRangeMetadataGroupInternal* _mutablePriv;
}
@property (nonatomic,copy) NSDate * startDate;
@property (nonatomic,copy) NSDate * endDate;
@property (nonatomic,copy) NSArray * items;
-(NSArray *)items;
-(id)mutableCopyWithZone:(NSZone*)arg1 ;
-(id)copyWithZone:(NSZone*)arg1 ;
-(id)_initWithTaggedRangeMetadataDictionary:(id)arg1 items:(id)arg2 ;
-(void)setStartDate:(NSDate *)arg1 ;
-(void)setItems:(NSArray *)arg1 ;
-(NSDate *)endDate;
-(NSDate *)startDate;
-(void)setEndDate:(NSDate *)arg1 ;
@end
|
threshold_value = 0.0001 |
/* Generated by RuntimeBrowser
Image: /System/Library/PrivateFrameworks/DictionaryUI.framework/DictionaryUI
*/
@interface DUEntryViewController : UIViewController {
DUDefinitionValue * _definitionValue;
}
@property (retain) DUDefinitionValue *definitionValue;
+ (id)entryViewControllerWithDefinitionValue:(id)arg1;
- (void).cxx_destruct;
- (id)definitionValue;
- (id)initWithDefinitionValue:(id)arg1;
- (void)setDefinitionValue:(id)arg1;
- (void)viewDidLoad;
@end
|
#!/usr/bin/python3
from numpy import array
from numpy.linalg import eig
from scipy.sparse import diags
import numpy as np
import sys
import ast
# build a tridiagonal matrix from its three diagonals; the main and off
# diagonals are passed as Python list literals in argv[3] and argv[4]
args = sys.argv
maind = ast.literal_eval(args[3])
second = ast.literal_eval(args[4])
k = array([
second, maind, second
])
offset = [-1, 0, 1]
A = diags(k, offset).toarray()
values, vectors = eig(A)
for val in values:
print(val)
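For the symmetric Toeplitz case the script handles — a constant main diagonal d and constant off-diagonals a — the eigenvalues have the closed form d + 2a·cos(kπ/(n+1)), which makes a handy sanity check for the numerical result:

```python
import numpy as np

def tridiag_toeplitz_eigs(n, d, a):
    """Exact eigenvalues of the n x n tridiagonal Toeplitz matrix
    with main diagonal d and off-diagonals a."""
    k = np.arange(1, n + 1)
    return np.sort(d + 2 * a * np.cos(k * np.pi / (n + 1)))

n, d, a = 5, 2.0, -1.0
A = np.diag([d] * n) + np.diag([a] * (n - 1), 1) + np.diag([a] * (n - 1), -1)
numeric = np.sort(np.linalg.eigvals(A).real)
print(np.allclose(numeric, tridiag_toeplitz_eigs(n, d, a)))  # -> True
```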
|
Spermine uptake is necessary to induce haemoglobin synthesis in murine erythroleukaemia cells. To determine whether intracellular uptake of spermine is necessary to induce haemoglobin synthesis in murine erythroleukaemia (MEL) DS 19 cells, we used single-step selection for resistance to N1,N12-bis(ethyl)spermine (BESM), a cytotoxic spermine analogue, to isolate clones deficient in polyamine transport. The cells were approximately 500-fold more resistant to BESM than parental cells and were unable to accumulate BESM, putrescine, spermidine or spermine. Addition of spermine to the polyamine-transport-deficient cells failed to induce haemoglobin synthesis. Hexamethylene-1,6-bisacetamide, a well-known differentiating agent, induced haemoglobin synthesis in both parental and resistant cells. Polyamine-transport-deficient cells transfected with DNA purified from the parental cell line were further selected for their ability to grow in the presence of alpha-difluoromethylornithine and putrescine. The transfectants had an active transport system for polyamines, and spermine added to their culture medium accumulated inside the cells and induced haemoglobin production. These findings indicate that intracellular spermine uptake is required to induce haemoglobin production in MEL cells. |
package cn.sll.installapk;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;
import java.util.Scanner;
public class Main {
private static final String adbPath = Windows.getAdbPath();// new File("adb/adb.bat").getAbsolutePath();
//private static final String adbPath =new File("adb/adb.bat").getAbsolutePath();
public static void main(String[] args) {
if(!checkEnv())return;
String apkPath = "";//D:\Download\pandownload\596[PRO].apk
String deviceID = "";
if (args.length != 1) {
            System.out.println("Please input apk path:");
Scanner scanner = new Scanner(System.in);
if (scanner.hasNext()) apkPath = scanner.next();
} else {
apkPath = args[0];
}
// apkPath=fixPathSpace(apkPath);
System.out.println("apk path:"+apkPath);
String devices = CommandUtils.run(adbPath + " devices").data;
System.out.println("devices:" + devices);
String[] devicesSplit = devices.split("\n");
List<String> deviceIDs = new ArrayList<>();
for (String d : devicesSplit) {
if (d.startsWith("List")) continue;
int index = d.indexOf("\tdevice");
if (index == -1) continue;
String device = d.substring(0, index);
deviceIDs.add(device);
}
if (deviceIDs.size() > 1) {
System.out.println("\n######Device list#######");
for (int i = 0; i < deviceIDs.size(); i++) {
System.out.println(String.format("%s.deviceID:%s model:%s", i, deviceIDs.get(i), CommandUtils.run(String.format("%s -s %s shell getprop ro.product.model", adbPath, deviceIDs.get(i))).data));
}
System.out.println("Please choose device:");
Scanner scanner = new Scanner(System.in);
if (scanner.hasNext()) {
int s = scanner.nextInt();
deviceID = deviceIDs.get(s);
} else {
return;
}
        } else if (deviceIDs.size() == 1) {
            deviceID = deviceIDs.get(0);
        } else {
            System.out.println("Error: no connected device found!");
            return;
        }
System.out.println("start install apk...");
File tempApk=toTempApkFile(apkPath);
if(tempApk==null)return;
try {
String data = CommandUtils.run(String.format("%s -s %s install -r -d %s", adbPath, deviceID, tempApk.getAbsolutePath())).data;
System.out.println("result:" + data);
System.out.println("please input any key to exit....");
Scanner scanner = new Scanner(System.in);
scanner.next();
}catch (Exception e){
e.printStackTrace();
}finally {
tempApk.delete();
}
}
private static boolean checkEnv() {
if(!OSInfo.isWindows()){
System.out.println("Error: Only support windows OS!");
return false;
}
return true;
}
private static File toTempApkFile(String apkPath){
try {
File tempFile = new File(System.getProperty("java.io.tmpdir"), "_" + System.currentTimeMillis() + ".apk");
tempFile.createNewFile();
tempFile.deleteOnExit();
StreamUtils.copy(new FileInputStream(apkPath),new FileOutputStream(tempFile),true);
return tempFile;
}catch (Exception e){
e.printStackTrace();
}
return null;
}
}
|
The Future Looms: Weaving Women and Cybernetics Ada was not really Ada Byron, but Ada Lovelace, and her father was never Prime Minister: these are the fictions of William Gibson and Bruce Sterling, whose book The Difference Engine sets its tale in a Victorian England in which the software she designed was already running; a country in which the Luddites were defeated, a poet was Prime Minister, and Ada Lovelace still bore her maiden name. And one still grander: Queen of Engines. Moreover she was still alive. Set in the mid-1850s, the novel takes her into a middle-age she never saw: the real Ada died in 1852 while she was still in her thirties. Ill for much of her life with unspecified disorders, she was eventually diagnosed as suffering from cancer of the womb, and she died after months of extraordinary pain. Ada Lovelace, with whom the histories of computing and women's liberation are first directly woven together, is central to this paper. Not until a century after her death, however, did women and software make their respective and irrevocable entries on to the scene. After the military imperatives of the 1940s, neither would ever return to the simple service of man, beginning instead to organize, design and arouse themselves, and so acquiring unprecedented levels of autonomy. In later decades, both women and computers begin to escape the isolation they share in the home and office with the establishment of their own networks. These, in turn, begin to get in touch with each other in the 1990s. This convergence of woman and machine is one of the preoccupations of the cybernetic feminism endorsed here, a |
mod = 1000000007
eps = 10**-9
def main():
import sys
input = sys.stdin.readline
for t in range(int(input())):
N = int(input())
A = list(map(int, input().split()))
S = input().rstrip('\n')
A.reverse()
S = S[::-1]
A0 = [0] * 60
A1 = [0] * 60
zero = 0
one = 0
for i, a in enumerate(A):
if S[i] == "0":
for j in range(60):
A0[j] ^= ((a >> j & 1) << zero)
zero += 1
else:
for j in range(60):
A1[j] ^= ((a >> j & 1) << one)
one += 1
B = [0] * 60
for i in range(60):
B[i] = A0[i] + (A1[i] << zero)
rnk = 0
for i in range(zero):
#print(B)
for j in range(rnk, 60):
if B[j] >> i & 1:
B[j], B[rnk] = B[rnk], B[j]
b = B[rnk]
#print(b)
for jj in range(60):
if jj == rnk:
continue
if B[jj] >> i & 1:
B[jj] ^= b
rnk += 1
break
cnt0 = 0
cnt1 = zero
r = -1
win = 0
for s in S:
if s == "0":
if r+1 >= len(B):
break
if B[r+1] >> cnt0 & 1:
r += 1
cnt0 += 1
else:
rr = -1
for j in range(60):
if B[j] >> cnt1 & 1:
rr = j
if rr > r:
win = 1
break
cnt1 += 1
print(win)
if __name__ == '__main__':
main()
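The core primitive in the solution above is a greedy linear basis over GF(2) — Gaussian elimination on integers viewed as bit-vectors. A standalone sketch of that idea, independent of the game logic:

```python
BITS = 60

def build_basis(values):
    """Greedy GF(2) linear basis: basis[i] holds a vector whose
    highest set bit is i, or 0 if no such pivot exists yet."""
    basis = [0] * BITS
    for v in values:
        for i in reversed(range(BITS)):
            if not (v >> i) & 1:
                continue
            if basis[i] == 0:
                basis[i] = v   # new pivot at bit i
                break
            v ^= basis[i]      # eliminate bit i and keep reducing
        # v reduced to 0 means it was dependent on earlier values
    return basis

def in_span(basis, x):
    """True iff x is the XOR of some subset of the inserted values."""
    for i in reversed(range(BITS)):
        if (x >> i) & 1 and basis[i]:
            x ^= basis[i]
    return x == 0
```

For example, {5, 10, 15} has rank 2 (15 = 10 XOR 5), so 15 is in the span while 9 is not.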
|
/**
* Fetches entities from the datastore using the event names.
*
* @param eventNames List containing the names of the entities to fetch from the Datastore
* @return List containing the requested entities
*/
public static List<Entity> fetchIDsFromDataStore(List<String> eventNames) {
Filter idFilter = new FilterPredicate("eventName", FilterOperator.IN, eventNames);
Query query = new Query("Event").setFilter(idFilter);
DatastoreService datastore = DatastoreServiceFactory.getDatastoreService();
PreparedQuery results = datastore.prepare(query);
List<Entity> events =
new ArrayList<Entity>(results.asList(FetchOptions.Builder.withDefaults()));
return events;
} |
Learning to be a normal mother: empowerment and pedagogy in postpartum classes. This qualitative study, which was conducted in the summer of 1992, presents the findings of how six first-time mothers and two public health nurses experienced pedagogical practices within postpartum classes offered by two public health units in Ontario, Canada. How concerns and aspirations of new mothers were constructed and mediated in the postpartum class are analyzed using concepts from poststructuralist and feminist methodologies. This study goes beyond an analysis of individual teaching and learning styles and discusses how social structures of isolation, investment in a medical discourse, and processes of normalization construct an individual's experiences and practices of mothering, which in turn influence pedagogical practices in postpartum classes. Issues of empowerment, language, support, and knowledge exchange are discussed. |
A Model for Supply Chain Strengthening of Developing Country Public Health Programs Supply Chain Evolution Introduction to a Framework for Supply Chain Strengthening of Developing Country Public Health Programs A Supply Chain Evolution Model demonstrates to countries how to implement and sustain an integrated supply chain. It illustrates how public health systems can move through a process management trajectory that leads to improved supply chain management capacity, from ad hoc to organized to integrated to extended stages. In the earlier stages, health system managers have little understanding of what their supply chain system looks like, how it is operating, and how to manage various supply chains as one cohesive system that interacts with its broader environment. As MOHs and donor partners coordinate and carry out efforts to define, measure, and manage public health supply chain processes, those supply chains can evolve. In the later stages, the flow of information and visibility into supply and demand improves at all levels of the supply chain. Roles and responsibilities of personnel are clarified and validated. In the integrated and extended stages, health system managers increasingly understand how their system operates, ways to use resources more efficiently, how to manage and align supply chain actors to achieve common goals, and, ultimately, ways to interact more effectively with the broader environment in which the supply chain is situated. This paper has been reviewed and is supported by USAID's Office of Population and Reproductive Health of the Global Health Division and supports the broad objectives of the U.S. Global Health Initiative. Cover photo: Clockwise from upper left: family planning commodities; healthcare provider in Ethiopia at refrigerator; child in Liberia receiving routine health services; delivery truck outside a clinic in Liberia. JSI. USAID | DELIVER PROJECT John Snow, Inc. 
1616 Fort Myer Drive, 11th Floor Arlington, VA 22209 USA Phone: 703-528-7474 Fax: 703-528-7480 Email: [email protected] Internet: deliver.jsi.com |
A Two-Bit Weighted Bit-Flipping Decoding Algorithm for LDPC Codes This letter proposes a novel two-bit weighted bit-flipping algorithm (TB-WBFA) for low-density parity-check (LDPC) codes on the binary symmetric channel. The proposed TB-WBFA produces reliability bits for the bit-decision results and syndrome values at bit and check nodes, respectively. The reliability bits are exchanged between bit and check nodes as the decoding proceeds. The message update in check nodes is carefully devised with simple bitwise operations, which are especially suitable for high-speed operations. It will be demonstrated that the proposed TB-WBFA achieves a significant performance improvement over existing bit-flipping decoding algorithms by conducting performance comparisons for LDPC codes under various setups, i.e., different rates, lengths, and structures. |
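For context, the baseline that weighted variants improve on is Gallager-style hard-decision bit-flipping: recompute the syndrome, count failed parity checks per bit, and flip the worst offender. A minimal sketch of that baseline (plain bit-flipping — not the two-bit weighted algorithm of the letter), exercised on a small (7,4) Hamming code rather than an LDPC code:

```python
import numpy as np

def bit_flip_decode(H, y, max_iters=50):
    """Hard-decision bit-flipping over a binary parity-check matrix H.

    y is the received hard-decision vector; returns the corrected word,
    or the last iterate if no codeword is reached within max_iters."""
    y = y.copy()
    for _ in range(max_iters):
        syndrome = H @ y % 2
        if not syndrome.any():
            break                   # all parity checks satisfied
        fails = H.T @ syndrome      # failed-check count per bit
        y[np.argmax(fails)] ^= 1    # flip the most suspicious bit
    return y

# (7,4) Hamming parity-check matrix, used only as a small test case
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
```

Every single-bit error on the all-zero codeword is corrected by this loop; the weighted and multi-bit-reliability variants refine the flipping criterion to do better on longer, sparser codes.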
### Given the SAXS profiles of the target and individual states, calculate the SAXS discrepancy scores
### @<NAME>, <EMAIL>
import numpy as np
### load the target SAXS profiles, including both intensities and errors
f_saxs_data = np.loadtxt("avg_native.dat")
f_saxs = np.transpose(f_saxs_data)[1]
f_saxs_err = np.transpose(f_saxs_data)[2]
N = 51
scores = []
### calculate the reduced chi^2 SAXS discrepancy scores between each state and the target
for i in range(500):
state_score = []
for j in range(100):
s_saxs = np.loadtxt("STATE" + str(i) + "_" + str(j) + ".txt")
s_saxs = np.transpose(s_saxs)[1]
#s_saxs = (f_saxs[0]/s_saxs[0])*s_saxs
sum = 0.
for k in range(N):
sum = sum + ( (f_saxs[k] - s_saxs[k])/f_saxs_err[k] )**2
sum = sum/(N-1)
state_score.append(sum)
scores.append([i, np.mean(state_score), np.std(state_score)])
### save the SAXS discrepancy scores
np.savetxt("ProteinG_discrepancy.txt", np.array(scores))
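The discrepancy computed in the inner loop above is a reduced chi-square with N-1 degrees of freedom; factored into a small helper (a sketch, not part of the original script) it is easier to test in isolation:

```python
import numpy as np

def reduced_chi2(target, model, err, dof_offset=1):
    """Reduced chi-square between a target profile and a model profile."""
    target = np.asarray(target, dtype=float)
    model = np.asarray(model, dtype=float)
    err = np.asarray(err, dtype=float)
    resid = (target - model) / err
    return float(np.sum(resid ** 2) / (len(target) - dof_offset))
```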
|
On the stabilization of steady continuous adjoint solvers in the presence of unsteadiness, in shape optimization Adjoint-based shape optimization using unsteady solvers is costly and/or memory demanding. When mild unsteadiness is present or the flow in/around the optimized shape is not expected to be time-varying, steady primal and adjoint solvers can be used instead. However, in such a case, convergence difficulties caused by flow unsteadiness must be properly treated. In this article, the steady primal and the corresponding (continuous) adjoint solvers are both stabilized by implementing the recursive projection method (RPM). This is carried out in the adjointOptimisation library of OpenFOAM, developed and made publicly available by the group of authors. Upon completion of the optimization using steady solvers, unsteady re-evaluations of the optimized solutions confirm a reduction in the time-averaged objective function. In complex cases, in which the RPM may not necessarily ensure convergence of the adjoint solver on its own, the controlled damping of the adjoint transposed-convection (ATC) term is additionally implemented. This is demonstrated in the shape optimization of a motorbike fairing where averaged primal fields over a number of iterations of the steady flow solver are used for the solution of the adjoint equations. Cases in which the RPM is, on its own, sufficient in ensuring convergence of the adjoint solver are additionally studied by using a controlled ATC damping, to assess its impact on the computed sensitivity derivatives. Comparisons show that controlled/mild ATC damping is harmless and greatly contributes to robustness. |
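The recursive projection method can be illustrated on a toy linear fixed-point iteration x ← Mx + b: the plain sweep diverges whenever M has an eigenvalue outside the unit circle, but solving the few unstable directions directly (Newton on the small projected system) while sweeping the rest restores convergence. This is only a linear sketch of the idea, not the OpenFOAM implementation:

```python
import numpy as np

def rpm_fixed_point(M, b, V, x0, iters=60):
    """Recursive projection method for the fixed point of x = M x + b.

    V: orthonormal basis (columns) of the unstable subspace of M."""
    x = x0.astype(float).copy()
    m = V.shape[1]
    for _ in range(iters):
        f = M @ x + b                 # one plain fixed-point sweep
        p = f - V @ (V.T @ f)         # stable part: keep the sweep
        # unstable part: solve the small projected system exactly
        A = np.eye(m) - V.T @ M @ V
        z = np.linalg.solve(A, V.T @ (M @ p + b))
        x = p + V @ z
    return x

M = np.diag([1.2, 0.5])           # 1.2 > 1: the plain iteration diverges
b = np.array([1.0, 1.0])
V = np.array([[1.0], [0.0]])      # unstable eigendirection of M
x = rpm_fixed_point(M, b, V, np.zeros(2))
```

Here the iterate converges to the true fixed point (I - M)^(-1) b = (-5, 2), which the unstabilized sweep never reaches.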