/**
 * Action for exporting an image of a RuleFlow.
 */
public class ExportImageAction extends ActionDelegate implements IEditorActionDelegate {

    private IEditorPart editor;

    public void run(IAction action) {
        execute();
    }

    public void setActiveEditor(IAction action, IEditorPart targetEditor) {
        editor = targetEditor;
    }

    private void execute() {
        ExportImageDialog dialog = new ExportImageDialog(
            editor.getSite().getWorkbenchWindow().getShell());
        dialog.setOriginalFile(((IFileEditorInput) editor.getEditorInput()).getFile());
        dialog.open();
        IPath path = dialog.getResult();
        if (path == null) {
            return;
        }
        IWorkspace workspace = ResourcesPlugin.getWorkspace();
        final IFile file = workspace.getRoot().getFile(path);
        WorkspaceModifyOperation op = new WorkspaceModifyOperation() {
            public void execute(final IProgressMonitor monitor) throws CoreException {
                try {
                    ByteArrayOutputStream out = new ByteArrayOutputStream();
                    ((GenericModelEditor) editor).createImage(out, SWT.IMAGE_PNG);
                    file.create(new ByteArrayInputStream(out.toByteArray()), true, monitor);
                    out.close();
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        };
        try {
            new ProgressMonitorDialog(editor.getSite().getWorkbenchWindow().getShell())
                .run(false, true, op);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
Silsesquioxane Materials as Sun Protection Factor Ingredients and as Films for Greenhouse Covers Using polyhedral oligomeric silsesquioxanes (POSS) derived from the hydrolytic condensation of (3-methacryloxypropyl)trimethoxysilane (MPMS), vinyltrimethoxysilane (VMS), and (3-glycidoxypropyl)trimethoxysilane (GPMS), three hybrid nanofilms, f-MP (film-MPMS-POSS), f-GP and f-VP, were prepared using sol-gel and crosslinking processes. The average transparency (AT) and absorption coefficients (AC) of the films were measured in the range of 280-2500 nm. Two film transparency applications are described in this work: 1) the AT values of the POSS films in the ultraviolet B (UV-B) range (280-320 nm), a skin cancer-causing region of the spectrum, and 2) the AT values in the visible light (VIS) region (400-750 nm) and the near infrared (NIR) region (750-2500 nm), which provide energy for crop growth and improve the efficiency of photosynthesis. The AT values of the POSS films in the UV-B range are only about 13%, indicating that these films can provide a physical barrier to block UV-B absorption by the skin and are therefore possible POSS materials for sunscreen ingredients. The AT values of f-MP, f-GP and f-VP in the VIS region are 95.13%, 89.16% and 91.60%, respectively, and their AT values in the NIR region (750-2500 nm) are 95.39%, 93.11% and 90.50%, respectively. These high-AT films are good candidates for greenhouse covers. The AT values of the three films across the 280-2500 nm region differ and exhibit varied selectivity for the absorbed spectra due to the dissimilar sizes of the organic branches covalently bonded to the silica network in the film structure.
Q: Absurd edit war The question and its edit war So this edit war happened. Here's a quick recap: A poster asked a very clear question, two different ways. A user edited the question to eliminate the second phrasing. The OP added the second phrasing back in. A user who was subsequently elected to a moderator position after the edit in question deleted the second phrasing of the question. The original poster restored the second phrasing of the question. A moderator deleted the alternate phrasing again. The OP restored the alternate phrasing and posted a comment to the effect that he thought the alternate wording provided additional clarity into what he was asking. A moderator deleted the alternate phrasing and locked the question to prevent any more revisions or comments. What message might users discern from this edit pattern? The first deletion by the moderator-to-be does seem like a legitimate attempt at improving the question. Everything should be as simple as possible, but not any simpler. I think we all agree with that. However, the original poster apparently felt that the revisions made the question "too simple", i.e. that clarity was lost. The question I ask here is: what message might question-askers take away from not being able to ask questions the way they want? Am I alone in thinking that question-askers might be irritated by repeated "revisions"? Perhaps the only thing OPs will be able to take away from situations like this is that they aren't in control of their own questions? Perhaps they may feel that this is a site where others presume to know their own question better than they do? I don't think any of the moderators wanted to send messages like these, but I'm confident that these are the messages that are coming across. If this is what moderation looks like, I want to hear more about why it's a useful strategy. Three lines of redundant content (note: whether the content is redundant is certainly arguable) don't seem worth fighting over to me. Wouldn't it make sense to either let the (arguably too verbose) original question stand, or at least to propose a "third-way" edit that provides a new possibility, instead of just repeatedly rolling back the OP's own question? Why focus on the moderator behavior here? Comments and answers to date have brought up the valid point that my summary mentions only the actions of moderators (or future moderators), not of all the question editors. I apologize for not making this clear in my initial post. Let me explain why I wrote this question here. First, I wrote this question when the post was locked. The link on the lock notice specifically recommends that people raise issues with a locked question here on Meta. Second, a very large number of the rollbacks were made by moderators or moderators-to-be. I recognize that editing questions is not specifically a duty of moderators, but of all high-rep users. However, I do believe that moderators are (and should be) held to the highest standards and in general should serve as exemplars to the community. This question was in my mind a rare example of (some of) the moderators not meeting this admittedly lofty standard. I do think it is worth examining how we can avoid situations like this in the future. A: I was involved in an edit war at EE.SE earlier this year, over this answer. The ensuing meta discussion is here.
Based on this experience, IMO if a particular edit is rolled back once by the OP, no further edits in that vein should be made, ever, except by the OP him/herself (or by someone who has their unambiguous blessing). The spurned editor then has three paths forward: downvote; vote to close, if appropriate; or invite the OP to chat. If OPs are okay with the general thrust of other people's edits, they have the option of tweaking those edits by making further edits of their own. If an OP objects so strongly to the change made by an edit that they desire to rescind it wholesale, then courtesy dictates that one should not try to force that change into their post. As an additional recommendation: whenever I make what feels like a substantial revision to someone's post, whether excising a large portion or undertaking a major rewrite or whatever, I usually post a comment after I make the edit inviting the OP to review my edit and change anything they don't like, or roll back the entire thing if they wish. I do this to let them know that I know I've majorly changed their post, and that if they're not happy with what I've done, I won't be upset if they change/undo it. A: I thought long and hard about whether or not I should comment, but because the post reads as if the moderating team enforced this edit war, which is simply not true, I think it is necessary to clear up a few things. First and foremost, I agree that this kind of disagreement over the state of a post should be avoided at all costs, as it sends the wrong message about the community. However, blaming this on one party alone is equally wrong. Now before you read my argumentation, you should know that I rolled back the post twice. Let's take it from the beginning. The very first draft of the question did not have the redundancy; it was later introduced as a means of clarification upon request in a comment. It then underwent some formatting changes, in which the homework-like list was introduced. The original start of the dispute then removed that assignment-like list, probably in an attempt to reshape the post as a terminology question. This was done by an at-the-time ordinary user. The OP then reintroduced the second paragraph. I somehow must have come across the post, and realised that it was the same question in different words. I had a look at the edit history and decided to roll back, as it was the same edit I would have made myself. I always try to be as concise as possible, but that's my personal feeling about it; I am sorry that this was not the best choice here. I usually don't think twice about a post after I roll back an edit. If the OP wants it different than the community, it's just so much easier to walk away. In this case the extra paragraph doesn't even hurt anyone. However, in this case some other circumstances led to the post showing up again, starting with the OP rolling back the edit. I am not watching any of the posts; I treat them as they come along. I forgot about the post and couldn't have cared less. However, while checking the review queues (and I only do that if they are in double digits), there was a suggested edit to remove the second paragraph. At the time it seemed like a good idea, hence it was approved by me and someone else, without a counter vote. I regret approving this without checking the revision history and noticing that this was turning into a content dispute. The OP then rolled back the edit and commented Hello community. Can we not remove the "In other words" part of my question?
I know it is redundant. I'd like to state it twice in slightly different ways. Sorry if that irks those out there who like conciseness. But here is what I regret the most: not reading the comment when it popped up. Instead, a day later I found this post in the close-review queue, with three "Homework" close reason votes already on it. Without (again) further checking the history of the post, I rolled it back to the version that I had previously approved as a suggested edit. I did this as a means to remove the "homework-ish" looking portion and keep it from getting closed. I further voted to leave it open and de-queued it. Yes, I completely failed to inform the OP that the second paragraph might be the reason why it was about to be closed, and I also did not justify this edit further. I really should have done this, or maybe even better, just stayed out of it from the start. I really regret this, and also not checking the full edit history of the post. Because of my failures, this eventually led to the dispute being carried on, while I was somewhat unaware of its long history. The OP then rolled back the edit and commented OK. this has become a challenge. Rolling back for the fourth time. I'll eventually wear out. The post then had a flag on it, asking to roll back and lock it due to a content dispute. And this is when another moderator stepped in, did that, and after a while this meta post was opened. And here we are. There were a total of six people plus the OP involved in how this came about. Only two of them were moderators, and the last one had to step up to resolve the content dispute. I think we can all agree that this outcome is far from desirable. Since I was elected as a moderator, this is the first time something has escalated like this. It does somewhat show that this is not the norm. Calling it absurd is probably legitimate, but not in the way it was presented here (and certainly not in the first version of the meta post). I agree with Brian's (hBy2Py) answer about how such things should ideally be handled (in the future). This especially means being a lot more thorough in checking posts before approving edits and also being more transparent and communicative about edits. However, this is obviously no guarantee that things like this won't happen again. We are all human and we make mistakes. I apologise for the inconvenience this caused. A: Leaving the tone of the question aside, there is a series of misconceptions or pieces of misinformation that I wish to point out. As a moderator, one might think that I am biased, but I will let the community judge what I write. Here's a quick recap [...] A moderator deleted the second phrasing of the question. (1) I was the first editor of the question. At the time I edited it (24 Sep '16), I was not a moderator (I was elected 18 Oct '16). I am personally frustrated that I am being implicated as being complicit in "mod abuse" when I wasn't even a mod. recap continues... (2) The edit by Melanie was not mentioned in the recap. The first deletion by the moderators does seem like a legitimate attempt at improving the question. [...] If this is what moderation looks like, I want to hear more about why it's a useful strategy. (3) Editing the content of posts is not a moderator duty. This entire post strongly implies that Martin and I were acting as moderators when we edited the question, and that deletion of this paragraph constitutes "moderation". This is categorically false. Anybody over 2,000 reputation, moderator or not, can edit posts.
Anybody below 2,000 reputation, moderator or not, can suggest edits to posts. The editing of content is a community job. This is evident especially in light of points (1) and (2), where the editors were not moderators. That said, yes, there is some fault, and the fault lies with the editors. The fault is that they were negligent in checking the edit history or the comments on the question. Martin has already talked about this. However, I insist on pointing out that it is a fault with the editors, and not the moderators. To the community: in the future, if this sort of thing happens, regardless of who the editors are, please raise a custom flag for moderator attention (or ping us in chat if we're there). The most likely explanation is that we didn't notice something going on. We are always ready to accept community input, but at the same time that doesn't mean we're doormats, which is why I feel compelled to issue this rather defensive response.
Low-Complexity Detection for Index Modulation Multiple Access Index modulation multiple access (IM-MA) has recently been proposed to apply the IM concept to the uplink multiple access system, where multiple users transmit their own signals via selected time slots. However, the computational complexity of the optimal maximum-likelihood (ML) detection in IM-MA is tremendously high when the number of users or time slots is large. In this letter, we propose a low-complexity detection method for IM-MA, inspired by the log-likelihood ratio (LLR) algorithm. In addition, because of the heavy search burden of computing all LLR values, we further propose a suboptimal method to determine the permutation set, which records the number of users allocated to each time slot. Simulation results and the complexity analysis verify that the proposed detection performs close to the optimal ML detection with reduced computational complexity.
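For context, LLR-based detectors for index modulation typically rank the time slots (or subcarriers) by an activity log-likelihood ratio; a representative form from the OFDM-IM literature, shown here only as an illustration and not as the specific detector proposed in the letter, is

\lambda(\alpha) = \ln\frac{K}{N-K} + \frac{|y(\alpha)|^{2}}{N_{0}} + \ln\!\left(\frac{1}{M}\sum_{m=1}^{M}\exp\!\left(-\frac{|y(\alpha)-h(\alpha)s_{m}|^{2}}{N_{0}}\right)\right),

where K of the N slots are active, y(\alpha) and h(\alpha) are the received sample and channel coefficient of slot \alpha, s_m ranges over the M-ary constellation, and N_0 is the noise variance; the slots with the largest \lambda(\alpha) are declared active.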
// xfcd/src/main/java/org/amv/trafficsoft/datahub/VertxEventBusReactorAdapter.java
package org.amv.trafficsoft.datahub;

import io.vertx.core.Vertx;
import io.vertx.core.eventbus.MessageConsumer;
import io.vertx.core.eventbus.MessageProducer;
import io.vertx.core.json.Json;
import io.vertx.core.streams.Pump;
import io.vertx.ext.reactivestreams.ReactiveReadStream;
import io.vertx.ext.reactivestreams.ReactiveWriteStream;
import org.reactivestreams.Publisher;
import org.reactivestreams.Subscriber;
import reactor.core.publisher.Flux;

import static java.util.Objects.requireNonNull;

public class VertxEventBusReactorAdapter<E> {

    private final Vertx vertx;

    public VertxEventBusReactorAdapter(Vertx vertx) {
        this.vertx = requireNonNull(vertx);
    }

    public <T extends E> void publish(Class<T> clazz, Publisher<T> publisher) {
        requireNonNull(clazz);
        requireNonNull(publisher);

        ReactiveReadStream<Object> rrs = ReactiveReadStream.readStream();

        Flux.from(publisher)
                .map(Json::encode)
                .subscribe(rrs);

        MessageProducer<Object> messageProducer = vertx.eventBus().publisher(clazz.getName());

        Pump pump = Pump.pump(rrs, messageProducer);
        pump.start();

        rrs.endHandler(event -> {
            pump.stop();
        });
    }

    public <T extends E> void subscribe(Class<T> clazz, Subscriber<T> subscriber) {
        requireNonNull(clazz);
        requireNonNull(subscriber);

        final MessageConsumer<String> consumer = vertx.eventBus().consumer(clazz.getName());

        ReactiveWriteStream<String> rws = ReactiveWriteStream.writeStream(vertx);

        Pump pump = Pump.pump(consumer.bodyStream(), rws);

        Flux.from(rws)
                .doOnSubscribe(subscription -> {
                    pump.start();
                })
                .doOnComplete(() -> {
                    pump.stop();
                    rws.close();
                })
                .map(json -> Json.decodeValue(json, clazz))
                .subscribe(subscriber);
    }
}
""" 472. Concatenated Words """ class Solution: def findAllConcatenatedWordsInADict(self, words): """ :type words: List[str] :rtype: List[str] """ self.dic, self.memo, res = set(words), set(), [] for i in words: if self.dfs(0,0,i): res.append(i) return res def dfs(self, beg, count, s): if beg == len(s) and count>1: return True for i in range(beg,len(s)): if s[beg:] in self.memo: return True if s[beg:i+1] in self.dic and self.dfs(i+1,count+1,s): if i != len(s)-1: self.memo |= {s[beg:]} return True return False """ 没有memoization 慢一点 """ class Solution: def findAllConcatenatedWordsInADict(self, words): """ :type words: List[str] :rtype: List[str] """ self.dic, res = set(words), [] for i in words: if self.dfs(0,0,i): res.append(i) return res def dfs(self, beg, count, s): if beg == len(s) and count>1: return True for i in range(beg,len(s)): if s[beg:i+1] in self.dic and self.dfs(i+1,count+1,s): return True return False """ 最快的,从尾部向前,如果尾部不在,肯定不在,省去了先看前面,再查后面的时间 """ class Solution: def findAllConcatenatedWordsInADict(self, words): """ :type words: List[str] :rtype: List[str] """ res = [] words_dict = set(words) for word in words: words_dict.remove(word) if self.check(word, words_dict) is True: res.append(word) words_dict.add(word) return res def check(self, word, d): if word in d: return True for i in range(len(word),0, -1): if word[:i] in d and self.check(word[i:], d): return True return False class Solution: def findAllConcatenatedWordsInADict(self, words: List[str]) -> List[str]: S = set(words) ans = [] for word in words: if not word: continue stack = [0] seen = {0} M = len(word) while stack: node = stack.pop() if node == M: ans.append(word) break for j in xrange(1, M - node + 1): # just start from 1 if word[node:node+j] in S and node + j not in seen \ and not (node==0 and node+j==M): # that is, the word must be broken but not a complete one stack.append(node + j) seen.add(node + j) return ans class Solution: def findAllConcatenatedWordsInADict(self, words: List[str]) -> List[str]: d = set(words) def dfs(word): for i in range(1, len(word)): prefix = word[:i] suffix = word[i:] if prefix in d and suffix in d: return True if prefix in d and dfs(suffix): return True if suffix in d and dfs(prefix): return True return False res = [] for word in words: if dfs(word): res.append(word) return res #Suffix Trie class Solution: def findAllConcatenatedWordsInADict(self, words: List[str]) -> List[str]: class Trie: def __init__(self): self.child = collections.defaultdict(lambda:Trie) self.isleaf = False trie = Trie() for word in words: if not word: continue t = trie for w in word: if w not in t.child: t.child[w] = Trie() t = t.child[w] t.isleaf = True def find(t, word, index, cnt): while index < len(word): if word[index] not in t.child: return False t = t.child[word[index]] if t.isleaf and find(trie, word, index+1, cnt+1): return True index += 1 return index == len(word) and cnt >= 1 and t.isleaf res = [] for word in words: if find(trie, word, 0, 0): res += word, return res
/* ptr_svc - emulated timeout - 1134 read operation complete */

static t_stat ptr_svc (UNIT *uptr)
{
    CLRBIT(ptr_dsw, PTR1134_DSW_READER_BUSY);
    SETBIT(ptr_dsw, PTR1134_DSW_READER_NOT_READY);

    if (IS_ONLINE(uptr)) {
        ptr_char = getc(uptr->fileref);
        uptr->pos++;

        if (! feof(uptr->fileref))
            CLRBIT(ptr_dsw, PTR1134_DSW_READER_NOT_READY);
    }

    SETBIT(ptr_dsw, PTR1134_DSW_READER_RESPONSE);
    SETBIT(ILSW[4], ILSW_4_1134_TAPE);
    calc_ints();

    return SCPE_OK;
}
Alteration of Liver Biomarkers in Patients with SARS-CoV-2 (COVID-19) Introduction Coronavirus disease 2019 (COVID-19) emerged in China and spread worldwide. In this study, we assessed the characteristics of liver markers in patients with COVID-19 to provide new insights for improving clinical treatment. Patients and Methods We recruited 279 patients with confirmed COVID-19; the liver biomarker and complete blood count data were those recorded on the day the patients were admitted to the hospital. Results The average LDH value was 621.29 U/L in all patients with COVID-19, and CPK was 286.90 U/L. The average AST was 44.03 U/L in all patients, and ALT was 31.14 U/L. The AST/ALT ratio was 1.64 in all patients. CRP was increased in 79.93% of all patients. The average ALT and AST values of patients with elevated ALT were significantly increased in comparison to patients with normal ALT (P-value = 0.001), while the AST/ALT ratio was significantly decreased compared to patients with normal ALT (P-value = 0.014). In addition, the average LDH of patients with elevated ALT was significantly increased compared to patients with normal ALT (P-value = 0.014). Conclusion Hepatic injury and abnormal liver enzymes related to COVID-19 infection are an acute, non-specific inflammatory alteration. Introduction In December 2019, coronavirus disease 2019 (COVID-19) was first reported in China, and it has since led to major health concerns worldwide. 1 COVID-19 is caused by a novel coronavirus named severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). 2,3 SARS-CoV-2 is classified as a single-stranded, positive-sense RNA virus belonging to the genus Betacoronavirus. 4 This virus is closely related to two bat-derived SARS-like coronaviruses (88-99% similarity), bat-SL-CoVZXL21 and bat-SL-CoVZC45, while it is more distant from Middle East respiratory syndrome coronavirus (MERS-CoV, approximately 50% similarity) and SARS-CoV (approximately 79% similarity). 4 The outstanding symptoms of COVID-19 are acute atypical pneumonia and pulmonary damage, which can progress to multiple organ failure, leading to death in patients with underlying co-morbidities. 2,3,5 However, ordinary symptoms of respiratory system infection were found in non-severe cases. 2,3,5 The most common underlying diseases are cardiovascular disease and hypertension among adult patients, followed by diabetes mellitus. 5,6 Adults older than 18 years of age are the most common patients infected with SARS-CoV-2, 5 and there are some cases of children aged between 2 and 15 years. 7 In some cases, liver dysfunction following COVID-19 has also been observed, which could indicate a possibility of hepatic injury caused by COVID-19. The liver is constantly exposed to viruses, antigens, bacteria, and their products with inflammatory potential, which leads to hepatic injury. Several factors such as excessive alcohol consumption, exposure to toxins, viral infections and bile duct obstruction can cause hepatic injury. 8 Viral agents such as hepatitis B virus (HBV), hepatitis C virus (HCV) and hepatitis E virus (HEV) are common causes of varying degrees of hepatic injury. Furthermore, some reports demonstrated that SARS-infected patients and MERS-infected patients had increased liver enzyme values and various degrees of hepatic injury. 12,13 Li et al 14 reported that the C-reactive protein (CRP) level in patients with an elevated alanine aminotransferase (ALT) level was significantly higher.
They suggested that cytokine storm syndrome may cause COVID-19-related hepatic injury. 14 However, the mechanism and reason for COVID-19-related hepatic injury are still unclear. At this time, COVID-19-related hepatic injury remains controversial and there is little research on this issue. Establishing the relationship between COVID-19 and its related hepatic injury could reduce practical clinical problems and improve clinical treatment. Therefore, in this study, we assessed the characteristics of markers of liver function in patients with COVID-19 to provide new insights into better following these patients. Methods and Materials Patient Selection In this retrospective study, we recruited 279 patients with confirmed COVID-19 from 23 February to 19 March 2020 at Imam-Reza teaching and treatment hospital, Tabriz, Iran. This hospital was isolated and all wards of the hospital admitted patients with COVID-19 infection. All the patients were hospitalized and COVID-19 infection had been confirmed with real-time PCR and chest CT scan. All patients had no history of liver diseases or chronic viral hepatitis infection, and their liver disease was attributed to COVID-19. Throat and nasopharyngeal swabs were collected from patients suspected of COVID-19 for RT-PCR diagnosis. Pneumonia was diagnosed based on the Infectious Diseases Society of America/American Thoracic Society (IDSA/ATS) guidelines. 15 Briefly, pneumonia was diagnosed in patients who had at least one of the clinical symptoms of fever, cough, pleuritic chest pain, or dyspnea, together with coarse crackles on auscultation, elevated inflammatory biomarkers, or new inflammation on chest CT. Pulmonary radiology showed that the lesions of COVID-19 in all patients progressed more than 50% within 24-48 hours. Treatment was based on a combination of azithromycin and hydroxychloroquine plus supportive therapy. Baseline Data Collection All laboratory reports were retrospectively extracted from the hospital information system (HIS). The liver function biomarker and complete blood count (CBC) data were those recorded on the day the patients were admitted to the hospital. The liver function tests included ALT, AST, and lactate dehydrogenase (LDH). Statistical Analysis Microsoft Excel version 2016 was used for statistical analysis. All quantitative analyses are represented by the mean and standard deviation (STDEV). Student's t, Mann-Whitney U, and Chi-square tests were used to compare the differences between the COVID-19 patients in the elevated ALT group and the normal ALT group. In addition, Spearman correlation coefficients were used to describe the strength and direction of the relationship between the ALT, AST, CPK and LDH variables. Significance was set as a P-value < 0.05. Results The average age of all patients with COVID-19 was almost 59 years, and 164 (58.78%) patients were male and 115 (41.22%) were female. The average LDH value was 621.29 U/L in all patients with COVID-19, and creatine phosphokinase (CPK) was 286.90 U/L. The average AST was 44.03 U/L in all patients and ALT was 31.14 U/L. The AST/ALT ratio was 1.64 in all patients. CRP was increased in 79.93% of all patients. Patients' red blood cell values, platelet values and their indexes were normal. Furthermore, patients' white blood cell values were normal. The results are shown in Table 1.
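The group comparisons and correlations described above could be reproduced along the following lines; this is a minimal sketch with hypothetical placeholder data, since the study's actual spreadsheet workflow and patient-level values are not available:

# Sketch: comparing liver markers between elevated-ALT and normal-ALT groups.
# All arrays are hypothetical placeholders, not the study's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alt_elevated = rng.normal(89, 25, 41)    # placeholder ALT values, elevated group (n=41)
alt_normal = rng.normal(21, 8, 238)      # placeholder ALT values, normal group (n=238)
ast_elevated = rng.normal(105, 30, 41)   # placeholder AST values, elevated group

t_stat, p_t = stats.ttest_ind(alt_elevated, alt_normal, equal_var=False)
u_stat, p_u = stats.mannwhitneyu(alt_elevated, alt_normal, alternative="two-sided")
rho, p_rho = stats.spearmanr(alt_elevated, ast_elevated)  # e.g. ALT vs AST correlation

print(f"t-test p={p_t:.3g}, Mann-Whitney p={p_u:.3g}, Spearman rho={rho:.2f} (p={p_rho:.3g})")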
In total, for the 279 patients, laboratory results showed that 41 (14.70%) patients had an elevated ALT value, while only 9.75% (4 of 41) of those with elevated ALT had a normal AST value. Seven patients (17.07%) with elevated ALT were recruited from the intensive care unit (ICU), while 22 patients (9.24%) with normal ALT were recruited from the ICU. There were more male patients in the elevated ALT group than in the normal ALT group (P-value = 0.017). Furthermore, the average age of patients with elevated ALT was slightly lower than that of patients with normal ALT (P-value = 0.054). The average ALT and AST values of patients with elevated ALT were significantly increased in comparison with patients with normal ALT (approximately 89 U/L and 105 U/L vs 21 U/L and 33 U/L, respectively; P-value < 0.001), while the AST/ALT ratio was significantly decreased in comparison to patients with normal ALT (1.25 vs 1.71, P-value = 0.014). Also, the average LDH of patients with elevated ALT was significantly increased compared to patients with normal ALT (approximately 866 U/L vs 578 U/L, P-value = 0.014; Figure 1). In addition, the ALT values were positively correlated with the AST values in patients with elevated ALT (P < 0.001). Also, there was a positive correlation of AST values with CPK and LDH (P < 0.05) and of CPK values with AST and LDH values (P < 0.05) in patients with elevated ALT (Figure 2), while all values were positively correlated with each other in patients with normal ALT (P < 0.05). Furthermore, we classified the patients by degree of ALT elevation (1X ALT, 1.5X ALT, 2X ALT, 3X ALT and more than 3X ALT) and found that AST values were significantly different between groups (P < 0.001); AST values were significantly increased in the 3X ALT and more-than-3X ALT groups (P = 0.033 and P < 0.001, respectively). The blood differential cell results showed that the average neutrophil percentage of patients with elevated ALT was significantly increased in comparison to patients with normal ALT (P-value = 0.003), while the average lymphocyte percentage was significantly decreased compared to patients with normal ALT (P-value = 0.009). Discussion COVID-19 infection mainly causes pulmonary symptoms, but this infection may simultaneously lead to injuries of other organs such as the cardiac muscle, kidneys, and liver. 7,16,17 As shown in this study, an increase in markers of hepatic injury occurred in COVID-19 patients. Liver enzymes including ALT and AST are useful biomarkers of hepatic dysfunction in patients. Most liver diseases initially cause mild symptoms, but they must be detected early. Liver dysfunction can be crucially important in some diseases. Hepatic injury can be assessed by tests that are associated with liver function (e.g. albumin), liver cellular integrity (e.g. ALT and AST), and conditions linked to the biliary tract (e.g. alkaline phosphatase (ALP) and gamma-glutamyl transferase (GGT)). The liver is constantly exposed to viruses, antigens, bacteria, and their products with inflammatory potential, which leads to hepatic dysfunction. Numerous factors such as excessive alcohol consumption, exposure to toxins, viral infections and bile duct obstruction can cause hepatic injury. 8 Viral agents such as HBV, HCV and HEV are common causes of varying degrees of hepatic insufficiency.
Furthermore, some reports demonstrated that SARS-infected patients and MERS-infected patients had increased liver enzyme values and various degrees of hepatic injury. 12,13 In our study, we found that 41 of 279 COVID-19 patients had elevated values of liver enzymes (e.g. ALT and AST). Similarly, in recent studies researchers revealed that transaminases (e.g. ALT and AST) were increased in patients with COVID-19. 14,16 Chen et al 16 demonstrated that 43 of 99 COVID-19 patients had various liver enzyme abnormalities (i.e. ALT and AST) and one patient had severely increased hepatic enzymes (ALT 7590 U/L, AST 1445 U/L) leading to hepatic dysfunction. Moreover, our results showed that the LDH value was increased in COVID-19 patients and was significantly increased in patients with elevated ALT in comparison to patients with normal ALT. These results are consistent with other studies, 7,14,21,22 and this phenomenon may be due to the cell apoptosis induced by COVID-19 infection or to the use of antiviral compounds. (Abbreviations used in Table 1: ALT, alanine aminotransferase; AST, aspartate transaminase; CPK, creatine phosphokinase; CRP, C-reactive protein; HB, hemoglobin; HCT, hematocrit; LDH, lactate dehydrogenase; MCH, mean corpuscular hemoglobin; MCHC, mean corpuscular hemoglobin concentration; MCV, mean corpuscular volume; MPV, mean platelet volume; PDW, platelet distribution width; P-LCR, platelet-large cell ratio; RBC, red blood cell; RDW, red cell distribution width; WBC, white blood cell.) Some studies suggested that increased levels of LDH are an indicator for evaluating the anti-influenza activity of antiviral compounds. 23,24 Also, Mori et al 23 suggested that influenza virus-infected cells are responsible for LDH leakage due to the release of virus particles from host cells, because antiviral drugs such as ribavirin, carbodine, 3-deazaguanine and pyrazofurine inhibited the LDH leakage. Uchide et al 25 also suggested that the apoptotic cell degeneration induced by influenza virus infection is associated with an increased LDH value. In addition, LDH is one of the diagnostic markers of lung diseases and liver diseases. Therefore, the significant increase in LDH values in patients with elevated ALT may be due to the simultaneous involvement of the lungs and liver. CRP, an inflammatory response to factors released by macrophages and adipocytes, is produced in the liver. 26 In the present study, an increase of CRP was found in almost 80% of COVID-19 patients. As in patients with other respiratory viral infections such as the avian influenza H7N9 and H1N1 influenza strains, 27,28 CRP is increased in COVID-19 patients, and researchers reported that increased CRP is an observed clinical characteristic of most patients with COVID-19 infection. 16,29,30 Additionally, we found that there were no differences in CRP values between the elevated ALT group and the normal ALT group. In addition, CRP values were not significantly associated with ALT and AST. This observation was consistent with Guan et al 31 and Li et al 14 who reported the rates of abnormal ALT and AST among COVID-19 patients. In contrast to our results, however, Li et al 14 found that CRP values were closely associated with hepatic injury in COVID-19 patients: CRP was significantly higher in the elevated ALT group and the increase of CRP was associated with ALT level. These differences may be the result of a large number of samples or of using a lower-sensitivity diagnostic CRP kit in our study.
Furthermore, Zhao et al 32 suggested that ACE2 is the putative receptor of SARS-CoV-2. Additionally, Chai et al 17 demonstrated that specific ACE2 expression in the liver as well as in the lungs may underlie hepatic injury in COVID-19 patients. Some other studies suggested that the systemic inflammatory response, the drugs used in the treatment of COVID-19 infection, and pneumonia-associated hypoxia may cause hepatic injury in COVID-19 patients. 30,33,34 According to the pathological report of Xu et al, 33 distinguishing hepatic dysfunction caused by drugs used in treatment from that caused by COVID-19 infection itself is difficult, because some of the patients had received a regimen of lopinavir/ritonavir and interferon α-2b. There were some limitations in this study: some of the patients died due to COVID-19 infection, and, owing to the overload of patients with COVID-19, assessment of other risk factors and of conditions that might change during follow-up was not possible. Therefore, we focused on risk factors that present in the early identification of hepatic injury. We expect these results could help medical settings to manage hepatic injury caused by COVID-19 infection. Conclusion These results suggest that hepatic injury and abnormal values of liver enzymes related to COVID-19 infection are an acute, non-specific inflammatory alteration. In addition, the elevation of liver enzymes may be related to general immune activation and cytokine storm.
Quantifying the Impact of Lifting Community Nonpharmaceutical Interventions for COVID-19 During Vaccination Rollout in the United States Abstract Using a mathematical model, we estimated the potential impact on mortality and total infections of completely lifting community nonpharmaceutical interventions when only a small proportion of the population has been fully vaccinated in 2 states in the United States. Lifting all community nonpharmaceutical interventions immediately is predicted to result in twice as many deaths over the next 6 months as a more moderate reopening allowing 70% of prepandemic contacts. More than a year after the start of the global coronavirus disease 2019 pandemic, the situation is evolving rapidly, with vaccines available to all individuals over 16 years old and variants of concern rapidly emerging throughout the world. In particular, more transmissible variants such as B.1.1.7 have been increasing their presence in the United States. There is a renewed enthusiasm within communities that life will soon return to normal. These positive expectations are fueled by evidence of high efficacy of COVID-19 vaccines and progress toward mass vaccination. Three highly effective vaccines (Pfizer, Moderna, and J&J) have been issued Emergency Use Authorization and distributed across the United States, with ~32% of Americans fully vaccinated by May 4, 2021. In parallel with the massive vaccination effort, multiple states are also considering the pace at which community nonpharmaceutical interventions (NPIs), for example, mask mandates, school closures, and closure or reduced-capacity operations of businesses, can be relaxed. The Centers for Disease Control and Prevention (CDC) has repeatedly warned that this should be a slow process and that vigilance is required in light of the spread of more infectious and virulent severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) variants. In this piece, we used a mathematical model to quantify the potential negative impact that rapid dismantling of existing NPIs could have on the population-level effectiveness of vaccination programs and the potential fourth epidemic wave that could result from these measures. We quantified these effects in 2 states that have had very different management approaches to the COVID-19 pandemic, Florida and Washington. METHODS Here we leveraged an age-structured deterministic model of SARS-CoV-2 transmission and vaccination that we previously developed. For each of the 16 age groups in the model, we track susceptible, exposed, asymptomatic, presymptomatic, symptomatic, and recovered individuals classed by disease severity. Symptomatic individuals have 1 of 3 fates: they become mildly symptomatic, hospitalized in a non-intensive care unit ward, or hospitalized requiring intensive care.
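As a rough illustration of how a "fraction of prepandemic contacts" enters such a model, here is a toy, single-population SEIR-style sketch with a constant vaccination inflow; the structure and every parameter value are illustrative placeholders and do not reproduce the authors' calibrated 16-age-group model:

# Toy SEIR-with-deaths sketch: the contact fraction scales the transmission rate.
# All parameters are placeholders, not the paper's calibrated values.
def simulate_deaths(contact_frac, days=180, population=7_700_000, beta0=0.25,
                    sigma=1 / 4, gamma=1 / 7, ifr=0.005,
                    vax_per_day=30_000, init_infectious=5_000, frac_recovered=0.25):
    S = population * (1 - frac_recovered) - init_infectious
    E, I, R, D = 0.0, float(init_infectious), population * frac_recovered, 0.0
    for _ in range(days):
        beta = beta0 * contact_frac                       # fewer contacts -> lower beta
        new_inf = beta * S * I / population
        new_vax = min(vax_per_day, max(S - new_inf, 0.0))  # vaccinate remaining susceptibles
        dS = -new_inf - new_vax
        dE = new_inf - sigma * E
        dI = sigma * E - gamma * I
        dR = gamma * I * (1 - ifr) + new_vax
        dD = gamma * I * ifr
        S, E, I, R, D = S + dS, E + dE, I + dI, R + dR, D + dD
    return D

for frac in (0.3, 0.5, 0.7, 1.0):
    print(f"{frac:.0%} of prepandemic contacts -> ~{simulate_deaths(frac):,.0f} deaths")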
We calibrated our model for each of the 2 states by considering state-specific demographics, infection prevalence, proportion of the population previously infected, vaccination rates, and vaccinated proportions with 1 and 2 doses of vaccine in different age groups as of May 4, 2021 (Supplementary Table 1). We assumed levels of vaccine efficacy against COVID-19 consistent with phase 3 trial results and that vaccine efficacy would be maintained against more transmissible strains. Based on parameter sets from preestablished distributions (full details in the Supplementary Data) and 1000 simulations, we report the mean number of deaths per million (deaths/1M) and 95% uncertainty intervals (UIs) (description in the Supplementary Data). We evaluated public health outcomes under 4 different scenarios, where we assumed that the ensemble of NPIs in place after May 4, 2021, would result in 30% (resembling lockdown), 50%, 70%, or 100% (lifting all NPIs) of prepandemic nonhousehold physical contacts, that is, interactions sufficient for transmission of infection in the absence of masking, hereafter "contacts." We chose as a reference a reopening scenario where some NPIs are maintained, resulting in 70% of prepandemic contacts (a 30% reduction in nonhousehold contacts). This could be achieved, for example, by continuing to enforce mask mandates as well as some additional restrictions such as moderately reduced capacity in indoor spaces. We explored 2 scenarios: 1 in which the ensemble of circulating variants does not result in increased viral transmission and a second assuming that more transmissible variants become more prevalent, resulting in 20% increased viral transmission (full description of the methods in the Supplementary Data). DISCUSSION Here, we used recent numbers for vaccination rates and proportions vaccinated in Washington and Florida to provide a simple yet useful quantification of the impact of partial or total lifting of NPIs while vaccines are being rolled out. Our results suggest that under current transmission levels, a full reopening of society that restores prepandemic levels of physical interaction could result in at least 3 times more deaths as compared with a partial reopening where mask mandates and some moderate restrictions are kept in place until a larger proportion of the population has been vaccinated. Additionally, it is plausible that uncontrolled viral transmission will facilitate the establishment of new variants, some of which are known to be more virulent. Our results suggest that if a new, more transmissible variant like B.1.1.7 becomes more prevalent, resulting in 20% increased viral transmission, lifting all NPIs would result in twice as many deaths as a partial reopening even if existing vaccines are equally effective against this variant. These results buttress and provide quantitative evidence in support of the case being made by a number of other authors that complete reopening of society is premature. Our results are in line with several modeling studies to date that have suggested that the population effectiveness of COVID-19 vaccination will be limited if the epidemic is not controlled using other means during rollout. We previously demonstrated that if an epidemic outbreak were to occur during the vaccination rollout, before a substantial proportion of the active population was immunized, it would substantially decrease the population impact of vaccination both in terms of transmission and mortality reduction.
In an earlier analysis, we showed that in the absence of emerging variants, physical interactions should never increase beyond 70% of the pre-COVID-19 levels in order to prevent a new epidemic wave in 2021. Our work has several limitations. We assumed that the vaccination rate would remain constant throughout the ensuing 6-month period. Increasing numbers of vaccine doses are expected to be available in the coming weeks or months, and rollout might accelerate. We assumed a fixed level of NPIs, although NPI utilization during the pandemic has been variable in space and time and new NPIs could potentially be imposed in the face of expanding numbers of infections. We also assumed that vaccines would remain highly efficacious against new variants, but studies suggest that decreased vaccine efficacy against certain variants is possible, which may increase the projected gap between scenarios with and without emerging variants. We assumed that a more transmissible variant would result in a 20% increased overall transmission, but this percentage will be highly dependent on the competition between circulating strains, their fitness vis-à-vis vaccines, and potential cross-immunity. As of May 4, 2021, genetic sequencing data suggest that B.1.1.7 was 39% and 64% prevalent in Washington and Florida, respectively. Assuming B.1.1.7 is 50% more transmissible, this would result in a 19% and 32% viral transmission increase, respectively. In this sense, our results for Florida are conservative. This highlights the need for close monitoring of the prevalence of emerging variants, for evaluation of their infectivity and virulence, and for studies estimating possible decreases in vaccine efficacy. All this information should be taken into account when decisions regarding decreasing or lifting NPIs are being made. The need to lift NPIs is urgent, particularly in light of impacts on the education system and the economy. Here, we demonstrate that as vaccines are rolled out, it is imperative to gradually lift the NPIs currently in place in order to safeguard the population impact of vaccination. A risk-stratified approach that takes into account the level of preexisting immunity as well as the proportion of the population vaccinated is needed to safely remove all restrictions. Supplementary Data Supplementary materials are available at Open Forum Infectious Diseases online. Consisting of data provided by the authors to benefit the reader, the posted materials are not copyedited and are the sole responsibility of the authors, so questions or comments should be addressed to the corresponding author.
Effects of guanethidine and methyldopa on a standardized test for renin responsiveness. A standardized test for renin responsiveness, employing the dual stimulus of upright posture and the loop diuretic furosemide, was applied to 19 hypertensive patients in the untreated state and during therapy with the antihypertensive agents guanethidine and methyldopa. During therapy with guanethidine, 6 of 10 patients with "low-renin essential hypertension" experienced elevations of plasma renin activity to levels ordinarily diagnostic of "normal-renin" hypertension (P less than 0.05), whereas methyldopa had no significant effect on plasma renin activity in either "low-renin" or "normal-renin" patients. It is suggested that methyldopa has a negligible influence on renin responsiveness when stimulated under the above conditions and that it may be used during assessment of plasma renin activity in hypertensive patients whose blood pressure is too severely elevated for temporary withdrawal of therapy.
MALE MATE CHOICE AND THE EVOLUTION OF FEMALE PLUMAGE COLORATION IN THE HOUSE FINCH The house finch (Carpodacus mexicanus) is a sexually dichromatic passerine in which males display colorful plumage and females are generally drab brown. Some females, however, have a subdued version of the same pattern of ornamental coloration seen in males. In previous research, I found that female house finches use male coloration as an important criterion when choosing mates and that the plumage brightness of males is a reliable indicator of male nest attentiveness. Male house finches invest substantially in the care of young and, like females, stand to gain by choosing high-quality mates. I therefore hypothesized that a female's plumage brightness might be correlated with her quality and be the basis for male mate choice. In laboratory mate choice experiments, male house finches showed a significant preference for the most brightly plumaged females presented. Observations of a wild population of house finches, however, suggest that female age is the primary criterion in male choice and that female plumage coloration is a secondary criterion. In addition, yearling females tended to have more brightly colored plumage than older females, and there was no relationship between female plumage coloration and overwinter survival, reproductive success, or condition. These observations fail to support the idea that female plumage coloration is an indicator of individual quality. Male mate choice for brightly plumaged females may have evolved as a correlated response to selection on females to choose brightly colored males.
Time to pay for quality. CMS will partner with Premier in a trial project to give financial bonuses to hospitals that deliver the best care. The Centers for Medicare and Medicaid Services is poised to announce a new pilot program in which it will partner with the healthcare alliance Premier to reward top-performing hospitals with recognition and, more importantly, added dollars. Though a relatively risk-free step for the nation's largest payer, it marks a seminal moment for the burgeoning pay-for-quality movement in healthcare.
import { ERROR_BIZ } from '@alicloud/console-fetcher-interceptor-res-biz';

import createFetcher from './factory';

const fetcher = createFetcher();

fetcher.sealInterceptors();

export * from '@alicloud/fetcher';
export default fetcher;
export { ERROR_BIZ, createFetcher };

export type {
  // Override the corresponding types from @alicloud/fetcher
  IConsoleFetcherConfig as FetcherConfig,
  IConsoleFetcher as Fetcher,
  // Newly added types
  IConsoleFetcherInterceptorOptions as FetcherInterceptorOptions,
  IConsoleApiOptions as FetcherConsoleApiOptions,
  IFnConsoleApi as FetcherFnOpenApi,
  IFnConsoleApi as FetcherFnInnerApi,
  IFnConsoleApi as FetcherFnContainerApi,
  IFnConsoleApiMulti as FetcherFnOpenApiMulti,
  IConsoleApiMultiAction as FetcherOpenApiMultiAction
} from './types';
1. Field of Disclosure
The present disclosure of invention relates to a deposition apparatus, a method of manufacturing an organic light emitting display device (OLEDD), and an organic light emitting display device. The disclosure relates more particularly to a deposition apparatus that is expected to significantly reduce its maintenance time, to a method of manufacturing an organic light emitting display device (OLEDD), and to an organic light emitting display device manufactured with use of the deposition apparatus.
2. Description of Related Technology
From among various display devices, the organic light emitting display device (OLEDD) typically features a wide viewing angle, excellent contrast, and fast response time, and is thus spotlighted as a next-generation display apparatus. An organic light emitting display device generally has a structure in which an intermediate layer including an emission layer is interposed between a first electrode and a second electrode facing each other. Here, the first electrode, the second electrode, and the intermediate layer may be formed by using any of various methods, e.g., different kinds of deposition methods. In one manufacturing method for an organic light emitting display device using a deposition method, a fine metal mask (FMM) having an opening whose pattern is identical or similar to that of a desired intermediate layer is closely attached to a substrate on which the intermediate layer is to be formed, and materials for forming the intermediate layer and/or other layers are deposited through the close-contact mask and onto the substrate, thereby forming the intermediate layer having a predetermined pattern. However, this deposition method using a FMM requires a large-scale (large-dimensioned) FMM to manufacture a correspondingly large-scaled organic light emitting display apparatus on a correspondingly large-scaled substrate. Alternatively, such a large-scaled FMM may be used to manufacture a plurality of smaller organic light emitting display devices while using a large-scaled “mother” substrate. In the case of a large-scaled FMM, when it is lowered toward the face-up working surface of the substrate, the FMM may warp (e.g., droop) under its own weight and then warp further as it contacts the face-up working surface of the substrate, and thus it becomes difficult to consistently form an intermediate layer having a precise preset pattern. Furthermore, a significant period of time may be wasted in aligning and closely attaching a large-scaled FMM to a corresponding large-scaled substrate at the start of the process and in nondestructively separating the FMM and the face-up working surface of the substrate from each other after the deposition completes. As a result, the overall manufacturing time increases and production efficiency deteriorates. It is to be understood that this background of the technology section is intended to provide useful background for understanding the here disclosed technology and, as such, the technology background section may include ideas, concepts or recognitions that were not part of what was known or appreciated by those skilled in the pertinent art prior to corresponding invention dates of subject matter disclosed herein.
/********************************************************************\
 * winundocrand.h -- Windows random number generator                *
 *                                                                  *
 * Copyright (C) 2008 <NAME>                                        *
 *                                                                  *
\********************************************************************/
/** @file winundocrand.h
    @brief Windows random number generator
    @author Copyright (C) 2008 <NAME>

    based on work by
    @author Copyright (C) 1996, 1997, 1998 Theodore Ts'o
    @author Copyright (C) 2004-2008 <NAME> <<EMAIL>>

    Use, modification, and distribution are subject to the Boost Software
    License, Version 1.0. (See accompanying file LICENSE_1_0.txt or a copy
    at <http://www.boost.org/LICENSE_1_0.txt>.)
*/

#ifndef KL_WINUNDOCRAND_H
#define KL_WINUNDOCRAND_H

#include "randomstream.h"
#include "noncopyable.h"

#include <stdexcept>

#define WINVER 0x0501
#define _WIN32_WINNT 0x0501
#include <windows.h>

namespace kashmir {
namespace system {

class WinUndocRand : public user::randomstream<WinUndocRand>, noncopyable
{
public:
    WinUndocRand() : hLib(LoadLibrary("ADVAPI32.DLL")), pfn(0)
    {
        if (!hLib)
            throw std::runtime_error("failed to load ADVAPI32.DLL.");

        pfn = (BOOLEAN (APIENTRY *)(void*, ULONG))
            GetProcAddress(hLib, "SystemFunction036");

        if (!pfn)
        {
            FreeLibrary(hLib);
            throw std::runtime_error("failed to get ADVAPI32!RtlGenRandom address.");
        }
    }

    ~WinUndocRand()
    {
        FreeLibrary(hLib);
    }

    void read(char* buffer, std::size_t count)
    {
        if (!pfn(buffer, count))
            throw std::runtime_error("system failed to generate random data.");
    }

private:
    HMODULE hLib;
    BOOLEAN (APIENTRY *pfn)(void*, ULONG);
};

}}

#endif
Biogas Power Generation from Palm Oil Mill Effluent (POME): Techno-Economic and Environmental Impact Evaluation Using palm oil mill effluent (POME) to produce biogas is an alternative and sustainable way to control GHG emissions from POME while also providing economic benefits. The increasing area of oil palm plantations encourages an increase in palm oil production and in the generation of POME in Indonesia. This could increase potential GHG emissions and global warming. In contrast, biogas power plants fed by POME are less attractive for economic investment in Indonesia. However, as the world's largest palm oil producer, Indonesia still lacks techno-economic and environmental studies of biogas power generation from POME. This study aimed to evaluate the technical, economic, and environmental aspects of biogas power generation from POME at the study site (Bangka Island, Indonesia). The results show that the biogas plant at the study site can reduce the COD level of POME by up to 91% and produce biogas at 325,292 m3/month, with a 55% methane content. The biogas can be converted into electrical energy at 696,163 kWh/month. The operation of this biogas plant can reduce GHG emissions by 1131 tons CO2-eq/month, with low profitability (NPV of IDR 1,281,136,274, IRR of 6.75%, and a payback period of 10.8 years). The evaluation shows that the main problem at the plant is an insufficient supply of POME, which could be overcome by purchasing POME from other palm oil mills. Furthermore, using the mesophilic anaerobic degradation process at the study site is feasible; however, a technological shift from closed lagoons to more efficient bioreactors is urgently needed to increase process efficiency and economic benefits.
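A back-of-the-envelope sketch of the biogas-to-electricity arithmetic implied by the reported figures; the conversion factors (energy content of methane and generator electrical efficiency) are assumptions chosen for illustration, not values taken from the study:

# Rough energy sketch around the reported plant figures (assumed conversion factors).
biogas_m3_per_month = 325_292
ch4_fraction = 0.55           # reported methane content
kwh_per_m3_ch4 = 10.0         # assumed energy content of methane, ~10 kWh per m3
electrical_efficiency = 0.39  # assumed gas-engine electrical efficiency

ch4_m3 = biogas_m3_per_month * ch4_fraction
electricity_kwh = ch4_m3 * kwh_per_m3_ch4 * electrical_efficiency
print(f"~{electricity_kwh:,.0f} kWh/month vs. the reported 696,163 kWh/month")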
package frc.lib.control;

/**
 * A class to act as a record for storing controller input.
 */
public class ControllerDriveInput {

    private double m_fwd;
    private double m_stf;
    private double m_rot;

    /**
     * A class to act as a record for storing controller input.
     * @param fwd Forward (-1...1)
     * @param stf Strafe (-1...1)
     * @param rot Rotation (-1...1)
     */
    public ControllerDriveInput(double fwd, double stf, double rot) {
        this.m_fwd = fwd;
        this.m_stf = stf;
        this.m_rot = rot;
    }

    public double getFwd() {
        return this.m_fwd;
    }

    public double getStf() {
        return this.m_stf;
    }

    public double getRot() {
        return this.m_rot;
    }

    public ControllerDriveInput setFwd(double fwd) {
        this.m_fwd = fwd;
        return this;
    }

    public ControllerDriveInput setStf(double stf) {
        this.m_stf = stf;
        return this;
    }

    public ControllerDriveInput setRot(double rot) {
        this.m_rot = rot;
        return this;
    }
}
package com.sctrcd.qzr.web.resources;

import javax.xml.bind.annotation.XmlRootElement;

import org.springframework.hateoas.ResourceSupport;

import com.fasterxml.jackson.annotation.JsonInclude;
import com.fasterxml.jackson.annotation.JsonInclude.Include;

/**
 * Provides details about a quiz. Not much detail here, as the main purpose is
 * to add links pointing at the various REST endpoints.
 *
 * @author <NAME>
 */
@JsonInclude(Include.NON_EMPTY)
@XmlRootElement(name = "question")
public class QuizResource extends ResourceSupport {

    private String title;

    public QuizResource() {
    }

    public QuizResource(String title) {
        this.title = title;
    }

    public String getTitle() {
        return title;
    }

    public void setTitle(String title) {
        this.title = title;
    }

    public String toString() {
        return "Quiz: " + title;
    }
}
Cellular immunity to heterotopic corneal allografts in the rat. Although the immune nature of corneal allograft rejection has been recognized for over thirty years, the specific mechanisms involved in such reactions remain obscure. We investigated the cellular immune responses of PVG (RT1c) rats that were grafted with fully allogeneic ACI (RT1a) skin, ACI cornea, or PVG cornea to the chest wall or given sham grafts. Cell-mediated lymphocytotoxicity (CML) was tested at 10 days posttransplant by placing recipient spleen cells in culture with irradiated ACI stimulator cells, and six days later measuring specific lysis of 51Cr-labeled target lymphoblasts at several effector-to-target ratios. Effector cells from animals receiving allogeneic skin or cornea grafts lysed targets from the donor (ACI) strain at levels significantly (P less than 0.01) above those obtained using effectors from control (sham grafted or syngeneic corneal graft recipient) animals. Significant lysis was also seen using target cells from PVG.1A (RT1a) or PVG.R1 (RT1r1) congenic rats, which differ from recipients only at the RT1 complex and the RT1.A (class I antigen) region, respectively. Stimulator cells from PVG.1A and PVG.R1 animals also permitted detection of specific responses in secondary CML, but syngeneic PVG stimulators did not, indicating that in vitro restimulation of effector cells can be met by using stimulator cells bearing only allogeneic class I major histocompatibility complex (MHC) antigens. These results indicate that corneal allografts evoke specific cellular immune responses in the rat, and that class I MHC antigens act as effective targets for these responses.
Persistence of transgenic and non-transgenic extracellular DNA in soil and bacterial transformation. The fate of transgenic and non-transgenic extracellular DNA in soil is highly relevant because the soil extracellular DNA pool represents a genetic reservoir that can serve as a source of nutrients for heterotrophic microorganisms or of genetic information for recipient eukaryotic and prokaryotic cells. Several studies have clearly shown that extracellular DNA can persist in soil for long periods while retaining sufficient molecular integrity. Recent microcosm studies under laboratory conditions have shown that extracellular DNA molecules can be leached downward or drawn upward by capillarity. The persistence and movement of extracellular DNA in soil suggest that its genetic information can be taken up by microorganisms that are separated in time and space from the DNA source. Several authors have studied the persistence and transformation efficiency of extracellular DNA in soil and demonstrated a sharp discrepancy between its biological efficiency and its persistence: fragments of target DNA were detected in soil after long periods, yet no transformants were obtained, probably because the genetic information originally present in the complete DNA molecule is lost through degradation. It is also important to underline that the frequency of gene transfer in soil is markedly limited by the small number of bacteria able to develop competence, a physiological state that is reached only under certain conditions. Furthermore, dilution of the transgene within the soil extracellular DNA pool drastically decreases the chances of its uptake. Nevertheless, the evolutionary importance of transformation remains a valid reason to continue investigating the fate of extracellular DNA in soil.
Dilute Element Compounds: A Route to Enriching Inorganic Functional Materials. The development of functional materials calls for ever-enriching the inorganic material database. Doping is an effective way of achieving this purpose. Herein, we propose the concept of dilute element compounds (DECs), which contain a small amount of a dopant element distributed in a host crystal structure in an ordered manner. Different from dilute alloys or solid solutions, the DECs could be more resistant to segregation and are ideal for dispersing functional elements for applications such as single-atom catalysts. It is also expected that the DECs will serve as a route to discovering new inorganic functional materials by controlling phase transitions and tuning intrinsic properties of the host materials with applications including, but not limited to, thermoelectrics and solid-state electrolytes for secondary batteries. As an initial work, we quantify the diluteness of DECs and find the limits of diluteness in existing DECs. We further provide a classification scheme for the DECs to guide future discoveries.
Recently I was leaving a building, and as I approached the heavy glass exterior door, I could see someone about to enter from the other side. We both paused and looked at each other for a moment too long, waiting for the other to act. And then I thought I ought to hold the door for the other person, so I pushed on it to open it. It soon became clear, as the other person made a hesitant half-step, that by holding it open from the inside without passing through the doorway, I was blocking the door. Finally I realized I’d need to walk through first, so I did, making sure to hold it open after I was through. Walking left and standing right on escalators is one of the most generally understood rules of etiquette in Toronto. But it's not always so clear, says Ed Keenan. ( Carlos Osorio / Toronto Star ) It felt awkward, as if I’d actually delayed the person I was trying to do a small favour for. “Sorry,” I said as they passed. “No, no, I’m sorry,” they said. And we went on our way. It wasn’t until I was reading the new Toronto Public Etiquette Guide by Dylan Reid and the editors of Spacing magazine that I realized I had enacted a particularly stereotypical series of Toronto etiquette situations. There was a version of the “Canadian standoff,” which according to the book happens at a door when two people are “each waiting for the other to go through first, resulting in general paralysis.” There was the dance of opening the door itself, since “Toronto culture enjoys an intricate yet unspoken etiquette of door-holding, a place where the inhabitants’ politeness, love of efficiency and avoidance of speaking to strangers meet in a kind of creative tension.” Article Continued Below And then there was my unnecessary apology, and the needlessly apologetic response to it, which the book makes clear is one of the primary Toronto customs. “The essential verbal lubricants of Toronto public life are ‘sorry’ and ‘excuse me.’” These can mean all kinds of things when said in differing tones — from “you are in my way” to “I apologize” to “you ought to apologize.” Living in the big city means sharing space together, often in close quarters, and in this book Reid and his fellow contributors attempt to outline the Ps and Qs of what’s expected in this particular city, where, as they say, getting places efficiently while generally avoiding spoken interactions with strangers appear to be priorities. In keeping with Spacing’s longstanding obsessions, the book heavily covers the situations you encounter on transit, on sidewalks, when cycling and driving and in public parks and includes other sections on general neighbourliness and Toronto historical anecdotes. For example, the section about how we keep to the right and pass on the left while walking on the sidewalk is fairly straightforward. That’s how we drive, too, after all, and it mimics our general behaviour (stand right, walk past on the left) on escalators. But this is followed by the story of how a widely mocked, seldom enforced 1944 bylaw codified it, threatening fines on those who did not keep to the right when walking. As someone who has lived most of my life here, it’s hard to know how many of these customs are common to most big cities and how many are purely local. One tip in particular notes that Toronto drivers do not honk their horns as often as those elsewhere (which I certainly noticed the first time I drove into Manhattan). 
Elsewhere Reid notes that some of his correspondents who moved here from abroad were surprised at the strength of the expectation that a dinner guest would bring wine or flowers — even if the host protests that a guest should bring nothing. Offering seats on the bus or subway to those who need them disabled, pregnant people should go without saying, but doesn’t, writes Ed Keenan. ( David Cooper ) The thing about etiquette guides, of course, is that the places where you find yourself nodding your head most aggressively in agreement are perhaps the most obvious — outlining the practices that offenders are most likely to already know are wrong even as they ignore the rules. Riding bikes on the sidewalk, for example. Adults shouldn’t do it at all. And if they do, then they certainly shouldn’t, as the book says, ring their bell at pedestrians to clear the way. And offering seats on the subway to those who need them — elderly, disabled, pregnant people — should go without saying, but doesn’t. But I was happy to see with it here an addendum that I think is often not understood: on a very crowded transit vehicle, when a seat opens up, you can offer it to others, but if there are no takers, you should sit down. Standing next to an empty seat just blocks access to it while also making the aisle more crowded for everyone else. “An empty seat on a crowded transit vehicle is a true waste of good space.” One tip in particular notes that Toronto drivers do not honk their horns as often as those elsewhere. ( Steve Russell ) Of course, there are lots of situations where the rules of etiquette aren’t clear. Do you line up while waiting for the bus or streetcar? Well, it depends. At some stations and stops, people always form rigid lines, at others, crowds are the custom. The best the book can do in this case is paraphrase a former prime minister (“queue if necessary, but not necessarily queue”) and advise people to pay attention to those around you. Article Continued Below Which is not bad advice in general, since being aware of those around you and trying to make them comfortable, is what etiquette, generally, is all about. Reid’s book is available at Spacing’s store at 401 Richmond Street or online at spacing.ca. But it’s such a fun topic to think about and debate, I wonder what rules of Toronto urban etiquette Star readers most want widely understood? What are your tips and suggestions? If you moved here from somewhere else, what set of expectations and conventions most surprised you? If you send me your Toronto etiquette notes by email to [email protected] or share them on social media with the hashtag #TorontoNice, I’ll collect the best and most interesting. And if there are enough good ones, I’ll write a follow-up column to share the manners lessons around.
/** * Copyright 2005-2015 The Kuali Foundation * * Licensed under the Educational Community License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.opensource.org/licenses/ecl2.php * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.kuali.rice.krad.uif.field; import org.apache.commons.lang.StringUtils; import org.kuali.rice.krad.uif.component.Component; import org.kuali.rice.krad.uif.view.View; /** * Field that encloses an image element * * @author Kuali Rice Team (<EMAIL>) */ public class ImageField extends FieldBase { private static final long serialVersionUID = -7994212503770623408L; private String source; private String altText; private String height; private String width; private boolean captionHeaderAboveImage; private String captionHeaderText; private HeaderField captionHeader; private String cutlineText; private MessageField cutline; public ImageField() { super(); } public void performFinalize(View view, Object model, Component parent) { super.performFinalize(view, model, parent); if (StringUtils.isNotBlank(captionHeaderText)) { captionHeader.setHeaderText(captionHeaderText); } if (StringUtils.isNotBlank(cutlineText)) { cutline.setMessageText(cutlineText); } } public String getSource() { return this.source; } public void setSource(String source) { this.source = source; } public String getAltText() { return this.altText; } public void setAltText(String altText) { this.altText = altText; } public String getHeight() { return this.height; } public void setHeight(String height) { this.height = height; } public void setWidth(String width) { this.width = width; } public String getWidth() { return width; } public String getCaptionHeaderText() { return captionHeaderText; } public void setCaptionHeaderText(String captionHeaderText) { this.captionHeaderText = captionHeaderText; } public HeaderField getCaptionHeader() { return captionHeader; } public void setCaptionHeader(HeaderField captionHeader) { this.captionHeader = captionHeader; } public String getCutlineText() { return cutlineText; } public void setCutlineText(String cutlineText) { this.cutlineText = cutlineText; } public MessageField getCutline() { return cutline; } /** * A cutline is the text describing the image in detail (this is also often confusingly called a caption). */ public void setCutline(MessageField cutline) { this.cutline = cutline; } public boolean isCaptionHeaderAboveImage() { return captionHeaderAboveImage; } public void setCaptionHeaderAboveImage(boolean captionHeaderAboveImage) { this.captionHeaderAboveImage = captionHeaderAboveImage; } }
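In KRAD the nested captionHeader and cutline components are typically populated from the UIF bean definitions, so performFinalize() only copies the convenience text properties into them. A property-level sketch with hypothetical paths and captions:

// Illustrative only -- the values are made up; in practice these properties are
// usually set in the UIF XML bean definition for the view rather than in code.
ImageField image = new ImageField();
image.setSource("/static/images/campus-aerial.png");
image.setAltText("Aerial photograph of the campus");
image.setWidth("600");
image.setHeight("400");
image.setCaptionHeaderText("Main Campus");
image.setCutlineText("Aerial view of the main campus, looking north.");
image.setCaptionHeaderAboveImage(true);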
#include "de_feat.h" static const struct de_feat *de_cur_features; static const int sun50iw2_de_num_chns[] = { /* DISP0 */ 4, /* DISP1 */ 2, }; static const int sun50iw2_de_num_vi_chns[] = { /* DISP0 */ 1, /* DISP1 */ 1, }; static const int sun50iw2_de_num_layers[] = { /* DISP0 CH0 */ 4, /* DISP0 CH1 */ 4, /* DISP0 CH2 */ 4, /* DISP0 CH3 */ 4, /* DISP1 CH0 */ 4, /* DISP1 CH1 */ 4, }; static const int sun50iw2_de_is_support_vep[] = { /* DISP0 CH0 */ 1, /* DISP0 CH1 */ 0, /* DISP0 CH2 */ 0, /* DISP0 CH3 */ 0, /* DISP1 CH0 */ 0, /* DISP1 CH1 */ 0, }; static const int sun50iw2_de_is_support_smbl[] = { /* CH0 */ 0, /* CH1 */ 0, }; static const int sun50iw2_de_supported_output_types[] = { /* DISP0 */ #if defined(TCON1_DRIVE_PANEL) DE_OUTPUT_TYPE_LCD, #else DE_OUTPUT_TYPE_HDMI, #endif /* DISP1 */ DE_OUTPUT_TYPE_TV, }; static const int sun50iw2_de_is_support_wb[] = { /* DISP0 */ 1, /* DISP1 */ 0, }; static const int sun50iw2_de_is_support_scale[] = { /* DISP0 CH0 */ 1, /* DISP0 CH1 */ 1, /* DISP0 CH2 */ 1, /* DISP0 CH3 */ 1, /* DISP1 CH0 */ 1, /* DISP1 CH1 */ 1, }; static const int sun50iw2_de_scale_line_buffer[] = { /* DISP0 */ 4096, /* DISP1 */ 2048, }; static const struct de_feat sun50iw2_de_features = { .num_screens = DE_NUM, .num_devices = DEVICE_NUM, .num_chns = sun50iw2_de_num_chns, .num_vi_chns = sun50iw2_de_num_vi_chns, .num_layers = sun50iw2_de_num_layers, .is_support_vep = sun50iw2_de_is_support_vep, .is_support_smbl = sun50iw2_de_is_support_smbl, .is_support_wb = sun50iw2_de_is_support_wb, .supported_output_types = sun50iw2_de_supported_output_types, .is_support_scale = sun50iw2_de_is_support_scale, .scale_line_buffer = sun50iw2_de_scale_line_buffer, }; static const int sun50iw1_de_num_chns[] = { /* DISP0 */ 4, /* DISP1 */ 2, }; static const int sun50iw1_de_num_vi_chns[] = { /* DISP0 */ 1, /* DISP1 */ 1, }; static const int sun50iw1_de_num_layers[] = { /* DISP0 CH0 */ 4, /* DISP0 CH1 */ 4, /* DISP0 CH2 */ 4, /* DISP0 CH3 */ 4, /* DISP1 CH0 */ 4, /* DISP1 CH1 */ 4, }; static const int sun50iw1_de_is_support_vep[] = { /* DISP0 CH0 */ 1, /* DISP0 CH1 */ 0, /* DISP0 CH2 */ 0, /* DISP0 CH3 */ 0, /* DISP1 CH0 */ 0, /* DISP1 CH1 */ 0, }; static const int sun50iw1_de_is_support_smbl[] = { /* CH0 */ 1, /* CH1 */ 0, }; static const int sun50iw1_de_supported_output_types[] = { /* DISP0 */ DE_OUTPUT_TYPE_LCD, /* DISP1 */ DE_OUTPUT_TYPE_HDMI, }; static const int sun50iw1_de_is_support_wb[] = { /* DISP0 */ 1, /* DISP1 */ 0, }; static const int sun50iw1_de_is_support_scale[] = { /* DISP0 CH0 */ 1, /* DISP0 CH1 */ 1, /* DISP0 CH2 */ 1, /* DISP0 CH3 */ 1, /* DISP1 CH0 */ 1, /* DISP1 CH1 */ 1, }; static const int sun50iw1_de_scale_line_buffer[] = { /* DISP0 */ 4096, /* DISP1 */ 2048, }; static const struct de_feat sun50iw1_de_features = { .num_screens = DE_NUM, .num_devices = DEVICE_NUM, .num_chns = sun50iw1_de_num_chns, .num_vi_chns = sun50iw1_de_num_vi_chns, .num_layers = sun50iw1_de_num_layers, .is_support_vep = sun50iw1_de_is_support_vep, .is_support_smbl = sun50iw1_de_is_support_smbl, .is_support_wb = sun50iw1_de_is_support_wb, .supported_output_types = sun50iw1_de_supported_output_types, .is_support_scale = sun50iw1_de_is_support_scale, .scale_line_buffer = sun50iw1_de_scale_line_buffer, }; static const int sun8iw11_de_num_chns[] = { /* DISP0 */ 4, /* DISP1 */ 2, }; static const int sun8iw11_de_num_vi_chns[] = { /* DISP0 */ 1, /* DISP1 */ 1, }; static const int sun8iw11_de_num_layers[] = { /* DISP0 CH0 */ 4, /* DISP0 CH1 */ 4, /* DISP0 CH2 */ 4, /* DISP0 CH3 */ 4, /* DISP1 CH0 */ 4, /* DISP1 CH1 
*/ 4, }; static const int sun8iw11_de_is_support_vep[] = { /* DISP0 CH0 */ 1, /* DISP0 CH1 */ 0, /* DISP0 CH2 */ 0, /* DISP0 CH3 */ 0, /* DISP1 CH0 */ 0, /* DISP1 CH1 */ 0, }; static const int sun8iw11_de_is_support_smbl[] = { /* CH0 */ 1, /* CH1 */ 0, }; static const int sun8iw11_de_supported_output_types[] = { /* tcon0 */ DE_OUTPUT_TYPE_LCD, /* tcon0 */ DE_OUTPUT_TYPE_LCD, /* tcon2 */ DE_OUTPUT_TYPE_TV | DE_OUTPUT_TYPE_HDMI | DE_OUTPUT_TYPE_VGA, /* tcon3 */ DE_OUTPUT_TYPE_TV | DE_OUTPUT_TYPE_HDMI | DE_OUTPUT_TYPE_VGA, }; static const int sun8iw11_de_is_support_wb[] = { /* DISP0 */ 1, /* DISP1 */ 0, }; static const int sun8iw11_de_is_support_scale[] = { /* DISP0 CH0 */ 1, /* DISP0 CH1 */ 1, /* DISP0 CH2 */ 1, /* DISP0 CH3 */ 1, /* DISP1 CH0 */ 1, /* DISP1 CH1 */ 1, }; static const int sun8iw11_de_scale_line_buffer[] = { /* DISP0 */ 2048, /* DISP1 */ 2048, }; static const struct de_feat sun8iw11_de_features = { .num_screens = DE_NUM, .num_devices = DEVICE_NUM, .num_chns = sun8iw11_de_num_chns, .num_vi_chns = sun8iw11_de_num_vi_chns, .num_layers = sun8iw11_de_num_layers, .is_support_vep = sun8iw11_de_is_support_vep, .is_support_smbl = sun8iw11_de_is_support_smbl, .is_support_wb = sun8iw11_de_is_support_wb, .supported_output_types = sun8iw11_de_supported_output_types, .is_support_scale = sun8iw11_de_is_support_scale, .scale_line_buffer = sun8iw11_de_scale_line_buffer, }; static const int default_de_num_chns[] = { /* DISP0 */ 4, /* DISP1 */ 2, }; static const int default_de_num_vi_chns[] = { /* DISP0 */ 1, /* DISP1 */ 1, }; static const int default_de_num_layers[] = { /* DISP0 CH0 */ 4, /* DISP0 CH1 */ 4, /* DISP0 CH2 */ 4, /* DISP0 CH3 */ 4, /* DISP1 CH0 */ 4, /* DISP1 CH1 */ 4, }; static const int default_de_is_support_vep[] = { /* DISP0 CH0 */ 1, /* DISP0 CH1 */ 0, /* DISP0 CH2 */ 0, /* DISP0 CH3 */ 0, /* DISP1 CH0 */ 0, /* DISP1 CH1 */ 0, }; static const int default_de_is_support_smbl[] = { /* CH0 */ 1, /* CH1 */ 0, }; static const int default_de_supported_output_types[] = { /* DISP0 */ DE_OUTPUT_TYPE_LCD, /* DISP1 */ DE_OUTPUT_TYPE_HDMI, }; static const int default_de_is_support_wb[] = { /* DISP0 */ 1, /* DISP1 */ 0, }; static const int default_de_is_support_scale[] = { /* DISP0 CH0 */ 1, /* DISP0 CH1 */ 1, /* DISP0 CH2 */ 1, /* DISP0 CH3 */ 1, /* DISP1 CH0 */ 1, /* DISP1 CH1 */ 1, }; static const int default_de_scale_line_buffer[] = { /* DISP0 */ 4096, /* DISP1 */ 2048, }; static const struct de_feat default_de_features = { .num_screens = DE_NUM, .num_devices = DEVICE_NUM, .num_chns = default_de_num_chns, .num_vi_chns = default_de_num_vi_chns, .num_layers = default_de_num_layers, .is_support_vep = default_de_is_support_vep, .is_support_smbl = default_de_is_support_smbl, .is_support_wb = default_de_is_support_wb, .supported_output_types = default_de_supported_output_types, .is_support_scale = default_de_is_support_scale, .scale_line_buffer = default_de_scale_line_buffer, }; int de_feat_get_num_screens(void) { return de_cur_features->num_screens; } int de_feat_get_num_devices(void) { return de_cur_features->num_devices; } int de_feat_get_num_chns(unsigned int disp) { return de_cur_features->num_chns[disp]; } int de_feat_get_num_vi_chns(unsigned int disp) { return de_cur_features->num_vi_chns[disp]; } int de_feat_get_num_ui_chns(unsigned int disp) { return de_cur_features->num_chns[disp] - de_cur_features->num_vi_chns[disp]; } int de_feat_get_num_layers(unsigned int disp) { unsigned int i, index = 0, num_channels = 0; int num_layers = 0; if (disp >= de_feat_get_num_screens()) return 0; for 
(i = 0; i < disp; i++) index += de_feat_get_num_chns(i); num_channels = de_feat_get_num_chns(disp); for (i = 0; i < num_channels; i++, index++) num_layers += de_cur_features->num_layers[index]; return num_layers; } int de_feat_get_num_layers_by_chn(unsigned int disp, unsigned int chn) { unsigned int i, index = 0; if (disp >= de_feat_get_num_screens()) return 0; if (chn >= de_feat_get_num_chns(disp)) return 0; for (i = 0; i < disp; i++) index += de_feat_get_num_chns(i); index += chn; return de_cur_features->num_layers[index]; } int de_feat_is_support_vep(unsigned int disp) { unsigned int i, index = 0, num_channels = 0; int is_support_vep = 0; if (disp >= de_feat_get_num_screens()) return 0; for (i = 0; i < disp; i++) index += de_feat_get_num_chns(i); num_channels = de_feat_get_num_chns(disp); for (i = 0; i < num_channels; i++, index++) is_support_vep += de_cur_features->is_support_vep[index]; return is_support_vep; } int de_feat_is_support_vep_by_chn(unsigned int disp, unsigned int chn) { unsigned int i, index = 0; if (disp >= de_feat_get_num_screens()) return 0; if (chn >= de_feat_get_num_chns(disp)) return 0; for (i = 0; i < disp; i++) index += de_feat_get_num_chns(i); index += chn; return de_cur_features->is_support_vep[index]; } int de_feat_is_support_smbl(unsigned int disp) { return de_cur_features->is_support_smbl[disp]; } int de_feat_is_supported_output_types(unsigned int disp, unsigned int output_type) { return de_cur_features->supported_output_types[disp] & output_type; } int de_feat_is_support_wb(unsigned int disp) { return de_cur_features->is_support_wb[disp]; } int de_feat_is_support_scale(unsigned int disp) { unsigned int i, index = 0, num_channels = 0; int is_support_scale = 0; if (disp >= de_feat_get_num_screens()) return 0; for (i = 0; i < disp; i++) index += de_feat_get_num_chns(i); num_channels = de_feat_get_num_chns(disp); for (i = 0; i < num_channels; i++, index++) is_support_scale += de_cur_features->is_support_scale[index]; return is_support_scale; } int de_feat_is_support_scale_by_chn(unsigned int disp, unsigned int chn) { unsigned int i, index = 0; if (disp >= de_feat_get_num_screens()) return 0; if (chn >= de_feat_get_num_chns(disp)) return 0; for (i = 0; i < disp; i++) index += de_feat_get_num_chns(i); index += chn; return de_cur_features->is_support_scale[index]; } int de_feat_get_scale_linebuf(unsigned int disp) { return de_cur_features->scale_line_buffer[disp]; } int de_feat_init(void) { #if defined(CONFIG_ARCH_SUN50IW2) de_cur_features = &sun50iw2_de_features; #elif defined(CONFIG_ARCH_SUN50IW1) de_cur_features = &sun50iw1_de_features; #elif defined(CONFIG_ARCH_SUN8IW11) de_cur_features = &sun8iw11_de_features; #else de_cur_features = &default_de_features; #endif return 0; }
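The per-channel tables above are flat arrays indexed by a channel's global position: the accessors first sum the channel counts of all lower-numbered displays and then add the local channel index. A small illustrative dump routine using only the functions defined above (the function name and printf formatting are arbitrary):

#include <stdio.h>
#include "de_feat.h"

/* Illustrative only: walk every display/channel and print its capabilities. */
static void de_feat_dump(void)
{
	int disp, chn;

	de_feat_init();
	for (disp = 0; disp < de_feat_get_num_screens(); disp++) {
		printf("disp%d: %d channels (%d video), line buffer %d\n",
		       disp, de_feat_get_num_chns(disp),
		       de_feat_get_num_vi_chns(disp),
		       de_feat_get_scale_linebuf(disp));
		for (chn = 0; chn < de_feat_get_num_chns(disp); chn++)
			printf("  ch%d: %d layers, scale=%d, vep=%d\n", chn,
			       de_feat_get_num_layers_by_chn(disp, chn),
			       de_feat_is_support_scale_by_chn(disp, chn),
			       de_feat_is_support_vep_by_chn(disp, chn));
	}
}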
//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\ // \\ // Centre for Speech Technology Research \\ // University of Edinburgh, UK \\ // Copyright (c) 1996,1997 \\ // All Rights Reserved. \\ // Permission is hereby granted, free of charge, to use and distribute \\ // this software and its documentation without restriction, including \\ // without limitation the rights to use, copy, modify, merge, publish, \\ // distribute, sublicense, and/or sell copies of this work, and to \\ // permit persons to whom this work is furnished to do so, subject to \\ // the following conditions: \\ // 1. The code must retain the above copyright notice, this list of \\ // conditions and the following disclaimer. \\ // 2. Any modifications must be clearly marked as such. \\ // 3. Original authors' names are not deleted. \\ // 4. The authors' names are not used to endorse or promote products \\ // derived from this software without specific prior written \\ // permission. \\ // THE UNIVERSITY OF EDINBURGH AND THE CONTRIBUTORS TO THIS WORK \\ // DISCLAIM ALL WARRANTIES With REGARD TO THIS SOFTWARE, INCLUDING \\ // ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS, IN NO EVENT \\ // SHALL THE UNIVERSITY OF EDINBURGH NOR THE CONTRIBUTORS BE LIABLE \\ // FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES \\ // WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN \\ // AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, \\ // ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF \\ // THIS SOFTWARE. \\ // \\ //\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\ // \\ // Author: <NAME> (<EMAIL>@cstr.ed.ac.uk) \\ // -------------------------------------------------------------------- \\ // Window which displays an Item_Content object. \\ // \\ //\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\//\\ package cstr.est.awt ; import java.lang.*; import java.util.*; import java.awt.*; import cstr.est.*; public class Item_Content_Window extends Container { protected Item_Content content; protected Item item; public Item_Content_Window(Item_Content cont) { super(); setLayout(new GridLayout(0,2)); content=cont; item=cont.getItem(); update(); } public void update() { removeAll(); if (content != null) { String [] names = getFeatureNames(content); Label l; for(int i=0; i<names.length; i++) { add(l=new Label(names[i], Label.RIGHT)); l.setBackground(Color.white); add(l=new Label(item.getS(names[i],"NULL"), Label.LEFT)); l.setBackground(Color.white); } } } protected String [] getFeatureNames(Item_Content content) { Vector names = new Vector(100); content.getFeatures().getPaths(null, names, false, true); String [] s = new String[names.size()]; for (int i=0; i<s.length; i++) s[i] = (String)names.elementAt(i); return s; } }
Bis(1,10-phenanthroline-2 N,N)(phenylacetato-O)copper(II) phenylacetate hexahydrate In the title compound, (C8H7O2)6H2O, the Cu atom is in a distorted square-pyramidal coordination environment. The six crystallographically independent uncoordinated water molecules are interconnected by hydrogen bonds, completing dodecawater (H2O)12 clusters which are hydrogen bonded to the carboxylate groups of phenylacetate anions, building up one-dimensional anionic chains propagating along. Between the cationic and anionic chains are hydrogen bonds from water molecules to the carboxylate O atoms belonging to the phenylacetato ligands. Experimental Crystal data [Cu(C 8 H 7 O 2 ) (C 12 Construction of supramolecular architectures with interesting physical properties has grown rapidly owing to their potential use as new functional materials. The most efficient and widely used approach for designing such materials is the self-assembly of organic ligands and metal ions (;). Here, we report a Cu(II) complex (C 8 H 7 O 2 ).6H 2 O from the self-assembly of Cu(OH) 2, phenylacetatic acid and phenanthroline. The title compound consists of [Cu (C 12 previously reported by us () and all the bonding parameters are normal (). As far as the phenylacetato ligand is concerned, the phenyl plane is found to be nearly perpendicular to the single bonded carbon backbone (dihedral angle: 89.5 °), which is significantly larger than the corresponding one of 68.6 ° in the non-coordinating phenylacetate anion, and the carboxylate group is twisted from the single bonded carbon backbone by 70.4 ° in the former coordinating one, and is considerably larger the 60.7 ° in the non-coordinating anion. As expected, the C-O bond distance for the coordinating oxygen atom is 1.281, which is longer than those for non-coordinating ones (1.247-1.254 ). The complex cations are distributed in such a way that the symmetry-related phenanthroline ligands are oriented antiparallel with a mean interplanar distance of 3.39, indicating a significant face-to-face - stacking interaction (). Owing to such intercationic - stacking interactions and weak intercationic C-HO interactions with the uncoordinating carboxylate oxygen atom, two centrosymmetrically related complex cations form dimers, which are further assembled via interdimeric - stacking interactions into 1D chains extending along the direction. Furthermore, the resulting chains are arranged in planes parallel to, between which the lattice water molecules and the phenylacetate anions are sandwiched. Out of the six crystallographically distinct lattice water molecules, three water molecules together with their centrosymmetry-related partners are hydrogen bonded to one another to generate chair-like hexawater clusters (Fig.2) gave a blue precipitate, which was separated by centrifugation and washed with water until no Cl anions were detectable in the supernatant. The collected blue precipitate was transferred to a mixture of ethanol and water (1:1 V/V, 10 mL), to which phenanthroline (0.198 g, 1.00 mmol) and phenylacetic acid (0.136 g, 1.00 mmol) were added successively. The resulting blue solution (pH = 7.52) was allowed to stand at room temperature. Blue blocklike crystals were grown by slow evaporation for over 7 days. Refinement All H atoms bound to C were positioned geometrically and refined as riding, with C-H = 0.93 and U iso (H) = 1.2U eq (C). 
Hydrogen atoms attached to O were located in a difference Fourier map and refined isotropically, with the O-H distances restrained to 0.85 and with U iso (H) = 1.2U eq (O). Fig. 1 Special details Geometry. All esds (except the esd in the dihedral angle between two l.s. planes) are estimated using the full covariance matrix. The cell esds are taken into account individually in the estimation of esds in distances, angles and torsion angles; correlations between esds in cell parameters are only used when they are defined by crystal symmetry. An approximate (isotropic) treatment of cell esds is used for estimating esds involving l.s. planes. Refinement. Refinement of F 2 against ALL reflections. The weighted R-factor wR and goodness of fit S are based on F 2, conventional R-factors R are based on F, with F set to zero for negative F 2. The threshold expression of F 2 > 2sigma(F 2 ) is used only for calculating R-factors(gt) etc. and is not relevant to the choice of reflections for refinement. R-factors based on F 2 are statistically about twice as large as those based on F, and R-factors based on ALL data will be even larger. (7 161.44 Symmetry codes: (i) −x+1, −y+2, −z+1; (ii) x−1, y+1, z; (iii) −x+2, −y, −z+1; (iv) −x+1, −y+1, −z+1; (v) x, y−1, z.
<filename>src/codechef/medium/section1/chglstgt/solution_test.go package main import "testing" func runSample(t *testing.T, n int, S string, expect int) { res := solve(n, []byte(S)) if res != expect { t.Errorf("Sample %d %s, expect %d, but got %d", n, S, expect, res) } } func TestSample1(t *testing.T) { n := 7 S := "ABCCBDA" expect := 4 runSample(t, n, S, expect) }
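For completeness, the same single known sample can be expressed table-driven, which makes it easier to append further cases later. This is only a restructuring sketch of the test above; no new expected values are introduced, since the other CodeChef samples are not given here.

func TestSamples(t *testing.T) {
	cases := []struct {
		n      int
		s      string
		expect int
	}{
		{7, "ABCCBDA", 4}, // same sample as TestSample1
	}
	for _, c := range cases {
		runSample(t, c.n, c.s, c.expect)
	}
}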
<reponame>dengbp/im-app_server package com.yr.net.app.common.aspect; import com.yr.net.app.configure.AppProperties; import com.yr.net.app.monitor.entity.SysLog; import com.yr.net.app.monitor.service.ILogService; import com.yr.net.app.tools.HttpContextUtil; import com.yr.net.app.tools.IPUtil; import lombok.extern.slf4j.Slf4j; import org.apache.commons.lang3.StringUtils; import org.apache.shiro.SecurityUtils; import org.aspectj.lang.ProceedingJoinPoint; import org.aspectj.lang.annotation.Around; import org.aspectj.lang.annotation.Aspect; import org.aspectj.lang.annotation.Pointcut; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.stereotype.Component; import javax.servlet.http.HttpServletRequest; /** * AOP 记录用户操作日志 * */ @Slf4j @Aspect @Component public class LogAspect { @Autowired private AppProperties appProperties; @Autowired private ILogService logService; @Pointcut("@annotation(com.yr.net.app.common.annotation.Log)") public void pointcut() { // do nothing } @Around("pointcut()") public Object around(ProceedingJoinPoint point) throws Throwable { Object result = null; long beginTime = System.currentTimeMillis(); // 执行方法 result = point.proceed(); // 获取 request HttpServletRequest request = HttpContextUtil.getHttpServletRequest(); // 设置 IP 地址 String ip = IPUtil.getIpAddr(request); // 执行时长(毫秒) long time = System.currentTimeMillis() - beginTime; if (appProperties.isOpenAopLog()) { // 保存日志 String token = (String) SecurityUtils.getSubject().getPrincipal(); String username = ""; if (StringUtils.isNotBlank(token)) { username = null;//JWTUtil.getUsername(token); } SysLog log = new SysLog(); log.setUsername(username); log.setIp(ip); log.setTime(time); logService.saveLog(point, log); } return result; } }
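The pointcut fires on any Spring-managed method carrying the project's @Log annotation, so enabling the audit trail is a matter of annotating the endpoint. A sketch with hypothetical controller, service and DTO names:

// Illustrative usage -- UserController, userService and UserVo are hypothetical.
import com.yr.net.app.common.annotation.Log;

@RestController
public class UserController {

    @Log
    @GetMapping("/api/user/list")
    public List<UserVo> listUsers() {
        // While AppProperties.isOpenAopLog() returns true, the aspect records the
        // caller's IP, username and the elapsed time around this call.
        return userService.findAll();
    }
}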
# Note: this method appears to belong to a ricecooker SushiChef subclass; it assumes
# `from ricecooker.classes import nodes` at module level and a local helper
# `download_all_languages()` defined elsewhere in the chef.
def construct_channel(self, **kwargs):
    channel_info = self.channel_info
    channel = nodes.ChannelNode(
        source_domain=channel_info['CHANNEL_SOURCE_DOMAIN'],
        source_id=channel_info['CHANNEL_SOURCE_ID'],
        title=channel_info['CHANNEL_TITLE'],
        thumbnail=channel_info.get('CHANNEL_THUMBNAIL'),
        description=channel_info.get('CHANNEL_DESCRIPTION'),
        language="en",
    )
    # Populate the channel tree with topic/content nodes for each language.
    download_all_languages(channel)
    return channel
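For context, a hedged sketch of the chef class this method would typically sit on, following the ricecooker SushiChef convention. The domain, source id and title are placeholders, and the real chef would also define the download helper used above.

# Hypothetical wrapper -- assumes the ricecooker package and the method above.
from ricecooker.chefs import SushiChef

class LanguagesChef(SushiChef):
    channel_info = {
        'CHANNEL_SOURCE_DOMAIN': 'example.org',        # placeholder
        'CHANNEL_SOURCE_ID': 'languages-demo',         # placeholder
        'CHANNEL_TITLE': 'Languages Demo Channel',     # placeholder
    }

    # construct_channel(self, **kwargs) as shown above

if __name__ == '__main__':
    LanguagesChef().main()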
A blind student’s birthday was ruined after restaurant owners told her to tie up her guide dog outside or take her business elsewhere. Holly Scott-Gardner was told by staff at PGR Coventry, in Priory Row, that it was against the restaurant’s policy to allow her guide dog, Isla, into the building. The first year Coventry University student explained to the pizzeria’s staff several times it is illegal to refuse her entry and that they could be fined. The 22-year-old filmed the conversation with two members of staff, believed to be the restaurant’s owner and manager. Holly uploaded the video to Facebook and it has since been shared more than 30,000 times. Holly told the Telegraph: “I just think it’s ridiculous and really disrespectful. Most people are receptive. “They’ve probably got away with it in the past. She added: “The argument went on for about five minutes. I only filmed about two minutes of it. “It makes no difference if it’s a franchise or not. A friend of Holly’s phoned the restaurant on Monday and was allegedly told guide dogs were not allowed in due to the open kitchen plan. Holly spent the rest of her birthday speaking to advisors at Guide Dogs in Leamington Spa. Last night Majed Bahgozen sent a message via Facebook to Holly apologising for the afternoon&apos;s events. It read: "We at PGR are truly sorry for this incident that happened this early afternoon. "Some things said and done were completely misunderstood and not looked into properly, as we know we mustn&apos;t allow any pets in restaurants, due to health and safety guidelines, but we truly didn&apos;t understand what guide dogs purposes were and for that we accept all the blame from this. "We made a horrible mistake. We are so sorry to have upset you on your birthday and personally I would love to do anything for you to be welcomed back at PGR." Another message from Mr Bahgozen read: "I completely apologise for my ignorance of the law concerning guide dogs. "It&apos;s completely my fault and I would like my staff to have training regarding this issue. "We have nothing against service dogs but as I said I was unaware of the law. "I would like to offer Holly and 10 of her friends a free meal to celebrate her birthday and a £1000 donation to a charity of her choice." This isn’t the first time that Holly, originally from York, has been denied entrance to a place because of Isla. This week, she was also told she couldn’t take Isla into European Mini Market in Far Gosford Street. The Disability Discrimination Act means business owners must waive certain policies like no dogs on the premises for assistance dogs.
package fr.eni.encheres.servlet; import java.io.IOException; import java.util.ArrayList; import java.util.HashMap; import java.util.List; import java.util.Map; import javax.servlet.RequestDispatcher; import javax.servlet.ServletException; import javax.servlet.annotation.WebServlet; import javax.servlet.http.HttpServlet; import javax.servlet.http.HttpServletRequest; import javax.servlet.http.HttpServletResponse; import fr.eni.encheres.bll.BllException; import fr.eni.encheres.bll.UtilisateurManager; /** * Servlet implementation class CreerProfile */ @WebServlet("/CreerProfile") public class CreerProfile extends HttpServlet { private static final long serialVersionUID = 1L; /** * @see HttpServlet#HttpServlet() */ public CreerProfile() { super(); // TODO Auto-generated constructor stub } /** * @see HttpServlet#doGet(HttpServletRequest request, HttpServletResponse response) */ protected void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException { if(request.getAttribute("pseudo")!= null) { RequestDispatcher rd = null; rd = request.getRequestDispatcher("/WEB-INF/profil.jsp"); rd.forward(request, response); }else { RequestDispatcher rd = request.getRequestDispatcher("/WEB-INF/profil.jsp"); rd.forward(request, response); } } /** * @see HttpServlet#doPost(HttpServletRequest request, HttpServletResponse response) */ protected void doPost(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException { UtilisateurManager user = UtilisateurManager.getInstance(); HashMap<String, String> formulaire = new HashMap<String, String>(); formulaire.put("pseudo",request.getParameter("pseudo") ); formulaire.put("nom", request.getParameter("nom")); formulaire.put("prenom", request.getParameter("prenom")); formulaire.put("telephone", request.getParameter("telephone")); formulaire.put("rue", request.getParameter("rue")); formulaire.put("codePostal", request.getParameter("codePostal")); formulaire.put("ville", request.getParameter("ville")); formulaire.put("password", request.getParameter("password")); formulaire.put("confirmation", request.getParameter("confirmation")); formulaire.put("email", request.getParameter("email")); try { user.creerCompte(formulaire); doGet(request, response); } catch (BllException e) { // TODO Auto-generated catch block request.setAttribute("erreurCreation", true); request.setAttribute("messageErreurCrea", e.getMessage()); request.setAttribute("pseudo", formulaire.get("pseudo")); request.setAttribute("nom", formulaire.get("nom")); request.setAttribute("prenom", formulaire.get("prenom")); request.setAttribute("telephone", formulaire.get("telephone")); request.setAttribute("rue", formulaire.get("rue")); request.setAttribute("codePostal", formulaire.get("codePostal")); request.setAttribute("ville", formulaire.get("ville")); request.setAttribute("password", <PASSWORD>("password")); request.setAttribute("email", formulaire.get("email")); request.setAttribute("confirmation", formulaire.get("password")); doGet(request, response); } } }
Debra Monk: Life and career. Monk was born in Middletown, Ohio. She was voted "Best Personality" by her graduating class at Wheaton High School in Silver Spring, Maryland. In 1973, she graduated from Frostburg State University. In 1975, Monk was awarded a Master of Fine Arts from Southern Methodist University in Dallas, Texas. Monk first garnered attention in theatrical circles as one of the co-writers and co-stars of the musical Pump Boys and Dinettes (1982). She won the Tony Award for Best Featured Actress in a Play for her performance in Redwood Curtain (1993). She was nominated for a Tony Award for her roles in Picnic (1994), Steel Pier (1997), and Curtains (2007). In 2000, she won an Obie Award for The Time of the Cuckoo. She returned to the stage in Steppenwolf Theatre Company's production of Visiting Edna by David Rabe in September 2016. Monk appeared on the Food Network cooking show Barefoot Contessa, where she cooked Roasted Chicken, Arugula and Bread Salad, and Tri-Berry Crumble. Monk has appeared in over 30 films since the early 1990s. She made her film debut in the movie version of Prelude to a Kiss, playing Aunt Dorothy. She later appeared in The Bridges of Madison County and The Devil's Advocate. On television, she won a Primetime Emmy Award for Outstanding Guest Actress in a Drama Series for a recurring role as Katie Sipowicz in the ABC series NYPD Blue. She also guest-starred on Law & Order, Desperate Housewives, The Closer, and Girls. Monk had recurring roles in A Nero Wolfe Mystery (2001-02), Grey's Anatomy (2006-11), and Damages (2007-12).
// woolts/wool-browser: workspaces/ui/src/index.ts
import { div } from 'wool/dom';

// -- Elements --

export const text = text => div();

export const el = (attrs, child) => div();

export const column = (attrs, children) => div();

export const row = (attrs, children) => div();

export const layout = (attrs, child) => div();

// -- Attributes --

type Color = RGB | RGBA | Hex;
type RGB = [number, number, number];
type RGBA = [number, number, number, number];
type Hex = string;

export const color = (color: Color) => {};
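The Color union above distinguishes hex strings from RGB/RGBA tuples. An illustrative helper, not part of the actual wool API, that narrows the union and produces a CSS colour string:

// Illustrative only -- not exported by the package above.
const toCss = (c: Color): string => {
  if (typeof c === 'string') return c;                         // Hex, e.g. '#ff8800'
  if (c.length === 3) return `rgb(${c[0]}, ${c[1]}, ${c[2]})`;
  return `rgba(${c[0]}, ${c[1]}, ${c[2]}, ${c[3]})`;
};

// toCss('#ff8800')      -> '#ff8800'
// toCss([255, 136, 0])  -> 'rgb(255, 136, 0)'
// toCss([0, 0, 0, 0.5]) -> 'rgba(0, 0, 0, 0.5)'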
A Taiwanese trade representative has reiterated the need to set up a trade office in Cambodia, but the government said yesterday that its stance on the one-China policy remains intact and a Taiwanese-representative office is not allowed in the Kingdom. John Tang, director of the Taiwan Trade Center in Vietnam, said a trade office would increase investor confidence in Taiwan, thereby encouraging them to invest in Cambodia, which is not related to the Taiwanese government’s ongoing dispute with China. Tang was speaking on the sidelines of a trade delegation meeting from Taiwan, comprising 60 businesses and jointly hosted by the Cambodia Chamber of Commerce, which is looking to expand business interests in the Kingdom beyond the $750 million trade achieved with Taiwan last year. According to the Taiwanese media, the Taiwan External Trade Development Council (TAITRA) announced the opening of an office in Phnom Penh last July to facilitate business interactions with Cambodia. But the proposal was nixed by Prime Minister Hun Sen, who said Taiwan was only a province of China and that no representative office from there would be allowed in the Kingdom. Tang said the lack of a political relationship between the Cambodian and Taiwanese governments and the Kingdom’s economic dependence on the Chinese make it difficult to change Hun Sen’s decision. “Right now, the Cambodian government is so close and so dependent on China. Even though trade between our two countries is growing very fast, they still do not change their mind,” he said. Phay Siphan, spokesman for the Council of Ministers, said there were no restrictions on the Taiwanese engaging in business interests, but when it came to a Taiwanese trade office, there was no deviating from the Hun Sen’s directive to follow a one-China policy. “We welcome business activity. But we must not allow anyone to polarise it. We are officially engaged with nations that are only recognised by the United Nations,” Siphan said. China and Taiwan have been at loggerheads over the latter’s claim to establish an independent nation separate from the mainland. But proponents of the one-China policy advocate that both Taiwan and mainland China are inseparable parts of a single Chinese nation. Siphan said that investor confidence did not depend on them using the flag of Taiwan in Cambodia and that would not be allowed. Nuong Meng Tech, director general for the Cambodia Chamber of Commerce (CCC), said he was glad that these trade missions were increasing trade and that the CCC will help setup partnerships with local companies, but that the government’s policy was not impacting Taiwanese business in Cambodia. “I don’t think it [impact on business] is too much. If people want to invest they can come and invest,” he added. John Yu, sales manager at San-Shen Agricultural Machinery Science and Technology and sells rice driers in Cambodia, said Taiwanese companies had little information about the Cambodian market and a trade office will help reduce the obstacles of doing business in the Kingdom. “There is difficulty. But we have to overcome [on our own],” Yu said. “There are business and cultural difficulties. “We must come here [ourselves] to know this country. In Taiwan, we do not know about Cambodia. We have to increase business confidence,” he added.
1. Field of the Invention The present invention relates to a method suitably used for producing reduced metals, such as reduced iron and the like, by heating a metal oxide, such as iron oxide or the like, together with a reducing agent in a combustion furnace, and also to an apparatus for reducing metal oxides. 2. Description of the Related Art In order to produce reduced iron, i.e., metallic iron, a method is known in which iron oxide is reduced by being heated together with carbonaceous material in a furnace. The furnace known as being used in this case is an electric furnace in which heating is performed by means of electrical energy, and a combustion furnace in which heating is performed by means of combustion heat evolved from fuel. For example, a method designed to use a combustion furnace is known as disclosed in Japanese Unexamined Patent Application Publication No. 11-061216 and so on. In this method, agglomerates obtained from iron oxide and carbonaceous material, i.e., iron oxide pellets filled with carbonaceous material, are heated by a burner in a rotary hearth furnace, whereby reduced iron is produced. Of the accompanying drawings, FIG. 3 shows, as a schematic view, an apparatus taken up in explaining the production method of reduced iron using the rotary hearth furnace noted above. This apparatus is equipped with a rotary hearth furnace 1 that is constituted of a ring-shaped rotary hearth 2 and a furnace body 3 mounted to cover the rotary hearth 2. By driving means (not shown), the rotary hearth 2 can rotate, i.e., revolve, at appropriate speeds. Carbonaceous material-filled iron oxide pellets 7 are supplied to the rotary hearth furnace 1 through a feed hopper 5 for feedstock charge disposed in the furnace so that they are placed on the rotary hearth 2 and then heated and reduced while the rotary hearth is being traveled in the direction of rotation in the furnace. The pellets 7 thus reduced are taken out of the furnace by discharge means 8 located downstream in the direction of rotation. In the apparatus shown here, such discharge means are structured to be belt conveyor-type discharge means. In the rotary hearth furnace 1, a plurality of burners 4 are employed as heating means that are positioned on the inner wall surface of the furnace body 3 and along the direction of rotation. Thus, the pellets 7 can be substantially uniformly heated in the furnace. Exhaust gas, i.e., combustion gas, evolved by burner heating is exhausted via an exhaust gas line 6 arranged at a proper portion of the furnace body 3. Subsequently, the exhaust gas is subjected to heat removal by a waste heat-recovery unit (not shown), such as a heat exchanger or the like, followed by temperature control using a temperature control unit and then by dust removal using a bag filter. The exhaust gas after being so treated is released in the air. However, when iron oxide is reduced by burner heating as mentioned above, general-purpose fuel such as commercially available gas, heavy oil, pulverized coal or the like must be used in large amounts. Namely, mass consumption of combustion heat evolved from the general-purpose fuel is necessary, and as a result, is responsible for poor cost performance. On the other hand, a method in which organic matter is carbonized by heating is known as disclosed in Japanese Unexamined Patent Application Publication No. 2001-3062. In this method, dry-distilled gas generated while organic matter is being heated is utilized as fuel for a burner used to heat the organic matter. 
Another method is known as shown in FIG. 4. In the method of FIG. 4, an externally heated kiln 10 is used as a carbonization furnace, and feedstock to be carbonized, i.e., organic matter, is put into the carbonization furnace, i.e., the kiln. After dry-distilled gas generated from the carbonization feedstock is allowed to burn in a combustion furnace, part of the resulting combustion gas is released outside via a temperature control tower and a bag filter, while the remaining gas is supplied to a heat exchanger 11 disposed in the carbonization furnace 10 so that this gas is utilized to heat the carbonization feedstock. However, the amount of heat generated by combustion of the dry-distilled gas is larger than that needed for carbonizing the organic matter. For this reason, the amount of heat having been generated cannot be wholly utilized to advantage and is partly wasted. Japanese Unexamined Patent Application Publication No. 2000-309780 discloses a method in which large amounts of heat are supplied to a waste material such that the latter is caused to undergo dry distillation and thermal decomposition, and the resulting thermal decomposition solid products and gaseous products are reused, respectively, as fuel. However, the above publication fails to disclose how these solid and gaseous decomposition products are utilized. The publication also discloses using kilns in two stages with a view to avoiding the formation of carbonaceous solid products and gaseous products such as hydrogen, lower hydrocarbons and the like. In such an instance, equipment and facilities are so complicated as to present low cost performance. Moreover, the content of carbonaceous solid products is small so that the resulting thermal decomposition solids are difficult to be used as reducing agents. The present inventors have conducted extensive research in solving the above-mentioned problems of the conventional art. As a result of this research, it has been found that when a carbonization furnace and a reduction furnace are combined together, reduced metals can be produced with a sharp cut in production cost. More specifically, it has been found that when dry-distilled gas generated during carbonization of organic matter is used as fuel for burner heating in metal reduction, the consumption of general-purpose fuel, such as commercially available gas, heavy oil, pulverized coal or the like, can be greatly saved. This saving in the consumption of general-purpose fuel appears to be attributed to the fact that the metal reduction requires much heat unlike the carbonization of organic matter. Namely, it has been found that when both carbonization equipment and metal reduction equipment are considered as a whole, the overall thermal efficiency can be enhanced with consequential considerable cutting in the production cost of a reduced metal. With regard to the case where a carbonization furnace and a reduction furnace are used as combined, it has also been found that when a metal oxide is placed in advance in the carbonization furnace, a heat medium such as sand or the like, usually employed in the latter furnace is not required so that no sand separation is needed. Nor are extra process steps necessary for mixing carbonaceous matter and a metal oxide. Hence, feedstock such as a metal oxide, a reducing agent and the like can be prepared with good efficiency, and when both carbonization equipment and metal reduction equipment are considered as a whole, the production cost of a reduced metal can be markedly cut down. 
The present invention has been completed based on the foregoing findings. Accordingly, one object of the present invention is to provide a method of producing reduced metals, which can yield excellent cost performance, and an apparatus for reducing metals oxides. Another object of the invention is to provide a method of producing reduced metals, such as reduced iron, etc., which can yield excellent thermal efficiency and minimum consumption of combustion heat from general-purpose fuel, and an apparatus for reducing metals oxides. Yet another object of the invention is to provide a method of producing reduced metals, which can prepare feedstock, such as metal oxides, reducing agents and the like, with good efficiency, and an apparatus for reducing metals oxides. According to one aspect of the present invention, a method of producing reduced metals from metal oxides is provided which comprises the step of: heating a mixture comprising a metal oxide and a reducing agent by means of a burner, thereby reducing the metal oxide to a reduced metal; wherein dry-distilled gas generated during carbonization of an organic matter-containing component, such as town waste or industrial waste, or solid fuel obtained by treatment thereof, is used as fuel for the burner. Preferably, in this method, the sensible heat of exhaust gas evolved by the burner is used as heat for carbonizing the organic matter-containing component. Also preferably, carbide derived by carbonizing the organic matter-containing component is used as the reducing agent. According to another aspect of the present invention, a method of producing reduced metals from metal oxides comprises the steps of: carbonizing an organic matter-containing component to prepare a carbide; and heating a mixture comprising a metal oxide and the carbide, thereby reducing said metal oxide to a reduced metal; wherein said metal oxide is fed together with said organic matter-containing component to carbonization furnace as heat media. Furthermore, in this method, a metal oxide is caused to coexist as a heat medium when the organic matter-containing component is carbonized in a carbonization furnace, and a mixture of carbide taken out of the carbonization furnace and an organic matter-containing component is reduced in a reduction furnace. According yet to another aspect of the invention, an apparatus for reducing metal oxides is provided which comprises: a carbonization furnace for carbonizing an organic matter-containing component, thereby generating dry-distilled gas; a reduction furnace, such as a movable hearth furnace, for heating a mixture comprising a metal oxide and a reducing agent by means of a burner, thereby reducing the metal oxide; and a line for supplying the dry-distilled gas to the burner as fuel therefor from the carbonization furnace. Preferably, in this apparatus, a line for exhausting combustion gas generated by the burner is connected to the carbonization furnace for heat exchange to be performed. Also preferably, a line for supplying carbide taken out of the carbonization furnace is connected to the reduction furnace.
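The thermal argument behind the combined apparatus can be illustrated with a simple balance: the dry-distilled gas carries more combustion heat than carbonization itself consumes, and the surplus offsets general-purpose fuel at the reduction-furnace burner. The sketch below is not from the patent; every number is an assumed, order-of-magnitude placeholder, included only to show the form of the balance.

# Illustrative heat balance -- all values are hypothetical placeholders.
waste_feed_t_per_h = 10.0            # organic matter fed to the carbonization furnace
dry_gas_heat_gj_per_t = 6.0          # assumed combustion heat of dry-distilled gas per tonne of feed
carbonization_need_gj_per_t = 2.5    # assumed heat needed to carbonize one tonne of feed
reduction_need_gj_per_h = 30.0       # assumed burner duty of the reduction furnace

surplus_gj_per_h = waste_feed_t_per_h * (dry_gas_heat_gj_per_t - carbonization_need_gj_per_t)
fraction_covered = surplus_gj_per_h / reduction_need_gj_per_h
print(f"dry-distilled gas could cover ~{fraction_covered:.0%} of the burner duty")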
<gh_stars>1-10 #ifndef _TEST_HELPER_H #define _TEST_HELPER_H #include "test_helper.h" #include <json/json.h> #include <fstream> #include <iostream> #define ASSERT_JSON_STREQ(a,b) \ { \ if (strcmp(a,b) != 0) \ { \ FAIL () << "Expected string: \n" \ << a \ << "actual string \n" \ << b \ << "\n"; \ } \ } #define ASSERT_INVALID_CONTEXT_EXIST(mold,name,disir_context_type,error) \ { \ if (assert_invalid_context (mold,name, disir_context_type, error) == false) \ { \ FAIL () << "No invalid context of type " \ << disir_context_type \ << " with error " \ << error << "\n"; \ } \ } #define ASSERT_INVALID_CONTEXT_COUNT(mold,count) \ { \ if (invalid_context_count (mold) != count) \ { \ FAIL () << "Wrong invalid context count\n" \ << "Asserted count " \ << count \ << ", got " \ << invalid_context_count (mold); \ } \ } namespace testing { class JsonDioTestWrapper : public testing::DisirTestWrapper { protected: const std::string m_json_references_path = CMAKE_PROJECT_SOURCE_DIR "/test/plugins/json/json/"; const std::string m_override_entries_path = CMAKE_PROJECT_SOURCE_DIR "/test/plugins/json/override_test_data/"; // Variables enum disir_status status; std::map<std::string, struct disir_mold*> m_override_reference_molds; const std::string m_mold_base_dir = "/tmp/json_test/mold/"; public: static void SetUpTestCase (); static void TearDownTestCase (); static struct disir_instance *instance; static struct disir_mold *libdisir_mold; static struct disir_config *libdisir_config; bool GetJsonObject (Json::Value& root, std::string path); bool assert_invalid_context (struct disir_mold *mold, const char *name, const char *context_type, const char *errormsg); int invalid_context_count (struct disir_mold *mold); void read_override_mold_references (); void teardown_override_mold_references (); void copy_override_file (const char *namespace_entry, const char *name); void emplace_mold (const char *namespace_entry, const char *name); void emplace_mold_from_string (const char *namespace_entry, const char *name, std::string& mold); //! \brief checks whether a context with context_name is valid or not bool check_context_validity (struct disir_config *config, const char *context_name); //! \brief checks whether a context with context_name is valid or not bool check_context_validity (struct disir_context *context, const char *context_name); }; } #endif
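A sketch of how the helper macros above would appear inside a test body. The fixture usage, mold source and expected strings are hypothetical; the only point is the call shape of the two assertion macros.

// Illustrative test -- the mold setup and expected strings are made up.
TEST_F (JsonDioTestWrapper, InvalidKeyvalIsReported)
{
    struct disir_mold *mold = nullptr;

    // Assume a fixture helper (or a libdisir read call) has populated `mold`
    // from one of the reference files under m_json_references_path.
    ASSERT_INVALID_CONTEXT_COUNT (mold, 1);
    ASSERT_INVALID_CONTEXT_EXIST (mold, "keyval_without_default",
                                  "KEYVAL", "Missing default entry");
}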
BATHYMETRIC AND GEOLOGICAL PROPERTIES OF THE ADRIATIC SEA Advance in the visualization of the bathymetric and geological data from charted to digital maps and models opened the possibility to analyse data within Geographic Information System (GIS) functionalities. In this paper, bathymetric and geological properties of the Adriatic Sea were analysed using the General Bathymetric Chart of the Ocean (GEBCO) 2020 digital bathymetric model (DBM) and data from the European Marine Observation and Data Network (EMODnet). The bathymetric analysis includes depth statistics, area and volume calculation, hypsometry, and analysis of the heterogeneity of bathymetric data from the GEBCO 2020 DBM within the limits of the Adriatic defined by the International Hydrographic Organization (IHO) and 3 sub-basins delineated according to the bathymetry. The geological analysis includes seabed substrate map from EMODnet data and kilometre-scale seabed variability in the Adriatic. The GEBCO 2020 DBM shows that the Adriatic Sea is a shallow sea with a mean depth of -253 metres and over 50% of area shallower than 100 metres. The area of the Adriatic Sea is 138 516 km2 with a total volume of 35 521 km3. Patterns describing morphological variability coincide with the heterogeneity of the underlying source data of the GEBCO 2020 digital bathymetry model and major structures in the Adriatic Sea. Introduction Topography of the seabed is shaped by past and present geological processes. A digital bathymetric model (DBM) is a digital terrain model that represents the topography of the seabed (). Bathymetric data as an input parameter or project framework, underpin for almost all maritime activities. The first bathymetric map of the Adriatic Sea was constructed from sparse echosounder records in 1969 (Giorgetti and Mosseti, 1969) with the purpose of detecting and describing key morphological features. Modern bathymetric methods provide a complete cover of the seabed with submeter resolution, but such a highresolution bathymetric model of the Adriatic Sea is not available. Apart from digital navigational charts and scientific or industrial research on a local scale, a digital bathymetric model based on acoustic bathymetric methods has been generated only for the Italian part of the Adriatic (). Alternative data sources for bathymetric data are publicly available digital bathymetric models with uneven and/or unknown accuracy. However, advances in the visualization of seabed topography from charted maps to digital bathymetric models have opened the possibility to manipulate and analyse bathymetric data in a digital environment using Geographic Information System (GIS) functionalities (). Analyses of morphological variability of the seabed have been used in previous research for habitat mapping where biodiversity is linked to structural diversity () and in geomorphology where terrain variability reflects geomorphic processes (). In this research, the status of the bathymetric surveying in the Adriatic Sea was evaluated. A review of geomorphological evolution and tectonics of the Adriatic Basin was done. Present information about the depth, area, and volume of the Adriatic Sea is mostly given without reference to the used bathymetry set or the applied limits of the Adriatic. In this paper, a General Bathymetric Chart of the Ocean (GEBCO) 2020 digital bathymetric model and IHO limits of the Adriatic Sea were used to calculate statistics of depth in the Adriatic: mean, standard deviation, median, maximum as well as area and volume. 
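To make the depth-statistics step concrete, the short sketch below (not the authors' code) shows how such summary values can be pulled from a bathymetric grid with numpy, assuming a 2-D array of depths in metres with negative values below sea level and NaN on land; the array name depth_grid and the synthetic demo data are hypothetical.

import numpy as np

def depth_statistics(depth_grid):
    """Summary statistics for the marine (finite, negative) cells of a DBM tile."""
    marine = depth_grid[np.isfinite(depth_grid) & (depth_grid < 0)]
    return {
        "mean_m": float(marine.mean()),
        "median_m": float(np.median(marine)),
        "std_m": float(marine.std()),
        "max_depth_m": float(marine.min()),  # most negative value, i.e. the deepest cell
        "n_cells": int(marine.size),
    }

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    demo = -np.abs(rng.normal(250, 300, size=(100, 100)))  # synthetic depths in metres
    print(depth_statistics(demo))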
Data from the European Marine Observation and Data Network (EMODnet) were used to construct a map of seabed substrate. Terrain variability was analysed on a kilometre scale by calculating the Terrain Ruggedness Index from the GEBCO 2020 digital bathymetric model. Status of the bathymetric survey in the Adriatic There are three major providers of bathymetric data on the global and regional level: the government (hydrographic institutes), the academic sector and industry. Six countries lie on the Adriatic coast: Italy, Slovenia, Croatia, Bosnia and Herzegovina, Montenegro and Albania. Coastal states are obligated to ensure safety of navigation under the Safety of Life at Sea SOLAS convention in all navigable waters out to the limit of the continental shelf (Exclusive Economic Zone). Bathymetric surveys must meet the conditions defined by the International Hydrographic Organization IHO. According to IHO standards for hydrographic surveying (IHO, 2020a) 37% of the Adriatic is adequately measured, 47% requires re-survey and 16% has never been systematically surveyed (see Figure 1). The academic sector conducts bathymetric surveys as part of scientific marine research. Industry finances bathymetric surveys to exploit marine resources. The qual-ity of data in scientific or industrial research is defined by the objectives of the project. Access to bathymetric data in the Adriatic Sea is limited, but some data held by hydrographic institutes or academic sector are part of publicly available bathymetric models which will be further discussed in section 4.1. Present information about the depth, area, and volume of the Adriatic Sea from encyclopaedias and published work is presented in Table 1. These data are given without reference to the used bathymetry dataset or applied limits of the Adriatic. Geology of the Adriatic The Adriatic Sea bathymetry is characterized by strong transversal and longitudinal asymmetries (; Russo and Artegian, 1996). The transversal asymmetries consist of different topography of coastal areas caused by the difference in orography between the opposite coastlands, with the Dinarides along the eastern coast close to the shoreline and the Apennines more distant from the shoreline on the other side. The northwestern, Italian part of the Adriatic coast is low, with sediment-loaded beaches, which originate from strong Pleistocene to Holocene river discharge (Danovaro and Boero, 2019). The middle and southern Italian parts of the Adriatic are rugged and rocky. The eastern Adriatic coast is highly indented with more than 1200 islands, islets, and rocks (Duplani ). Islands along the eastern coast follow the morphology of the coast and spread following the west-east direction line of the Dinarides. The tectonic evolution of the peri-Mediterranean area consists of two happenings. The first is continental rifting which began in the Triassic to Lower Jurassic and continued until the second happening -collision that has begun in the Upper Cretaceous. The Adriatic Carbonate Platform (AdCP) (see Figure 2) covers the territory of Italy, Slovenia, Croatia, Bosnia and Herzegovina, Montenegro, and Albania. It includes sediments from the Lower Jurassic to the end of the Cre-. The middle sequence, Middle Permian -Middle Triassic (Bruane Formation -Bake Otarije Formation), is characterized by carbonates and evaporites of the Middle and Upper Permian (Bruane Formation), clasts and carbonates of the Lower Triassic and limestones and volcanic rocks of the Middle Triassic. 
The boundary between the Upper and Middle Triassic (Bake Otarije Formation) is characterized by a phase of emergence. Emersion and volcanism are a consequence of regional events associated with rifting in the Middle Triassic. The beginning of the third sequence is marked by rifting. The Middle and Upper Triassic are characterized by shallow-sea sedimentation, a thick sequence of carbonates including the Upper Triassic main dolomites and Lower Jurassic limestones (). By the Middle Triassic, the area of the future Adriatic Carbonate Platform corresponded to the area of the northern border of Gondwana. Volcanism led to the creation of a vast shallow space in southern Tethys called the Southern Tethian Megaplatform (STM) (). The disintegration of STM, i.e., the separation of the Adriatic Carbonate Platform (AdCP) from the Apennine and Apulia Carbonate Platforms, occurred in the Lower Jurassic by forming a seabed connecting the Ionian Basin with the Belluno and Umbria Marche pelagic basins (). From the Upper Triassic to the Paleogene (Eocene), shallow-water carbonate sedimentation predominates in the Adriatic Basin. In the Upper Cretaceous, the carbonate platform gradually disintegrates (;Veli, 2007). Tectonic movements accompanied by intensive clastic sedimentation (Veli, 2007) in the Middle and Upper Eocene and Lower Oligocene created the space for the present Adriatic Basin (Prelogovi and Kranjec, 1983;Veli, 2007). Tectonic movements never stopped and have been followed by frequent earthquakes (see Figure 3). Tectonically, they are the most active on the margins of the Adriatic Basin, and they occur in wider zones of parallel, mostly reverse faults. Recent movements include marine transgression with more pronounced lowering of the coast and the further deepening of depressions (Veli, 2007). By the end of the Cretaceous, large parts of the platform emerged so the sedimentary space of the platform was greatly reduced, and transgression over the paleorelief occurred during the Eocene. The deposition of Liburnian deposits and foraminiferal limestones was influenced by strong tectonics, i.e., the formation of inland basins. The resulting sequence of 200 m of carbonate was only an introduction to flysch deposition and the final filling of the basin with carbonate-clastic sediments, Promina deposits and Jelar breccias (). The sedimentation in the Adriatic Basin during the Quaternary was strongly influenced by glacial -interglacial cycles. Glacials were associated with lowered sea levels. For example, in the Late Miocene, global sea levels dropped due to the Antarctic glacial that caused a large extension of ice sheets and ice volumes increased beyond those of the present day. Due to that, the sea level drop may have been partly responsible for the isolation of the Mediterranean Basin. During this period, the Atlantic Ocean was no longer linked to the Mediterranean Sea at Gibraltar. At the beginning of the Pliocene, the Adriatic Sea occupied much more space than today and had a higher sea level. From this point, it started to form a shape as we know it today. During the Early Pliocene, the climate was warmer, which is indicated by planktonic and benthic foraminifera community composition found in the sediment and deposition in the deeper sea environment. Late Pliocene is characterised by moderate to cold climate, which is evident in reduced diversity and quantities of planktonic foraminifera (Biber ≈2.5 Ma). 
Pliocene sediments are thicker along the Italian offshore part of the Adriatic, because of the vicinity of depositional environment and rapid subsidence during the late Pliocene. This caused steep slopes with lot of sediment rich in organic matter that was transported from inland. Ice volume in the Pleistocene, on a global scale, was about three times higher than it is today and ice sheets were 2 km thicker. The Adriatic Sea was formed within its present borders following the Last Glacial (Wrm) when the exposed land was significantly larger. Pleistocene deposits comprise permeable and impermeable deposits, mostly sands, silty sands, clays, claystone, and clayey marls and are the thickest (100 m -1400 m). The Adriatic Basin is divided into depressions formed in the Miocene and Pliocene (see Figure 10). The Po Depression is located on the mainland, between the Southern Alps and the Apennines, and in the east, it ends under the Adriatic Sea. The Po Depression is filled with Pliocene -Pleistocene sediments with thicknesses in some places greater than 10 000 m. Pliocene -Pleistocene sediments are covered with Holocene sediments (Veli, 2007). Three depressions were formed in the Miocene: Dugi Otok Depression, South Adriatic -Albanian Depression and the Molise Depression. Other depressions appeared in the Pliocene: Venetian Depression, Po Depression, Marche -Abruzzi Depression, Middle Adriatic Depression, Bradan Depression, and the Adriatic -Ionian Depression (Veli, 2007). Depressions were not marked with constant limits of distribution and sedimentation conditions, which is expressed through unequal filling of sedimentation space and discordant relations between individual lithological units and asymmetry of depressions (Veli, 2007). Submarine structural features are created by complex tectonics. The diapir structure near the island of Jabuka is presented in Figure 4. Figure 5 shows an illustrated cross-section of the western edge of the Adriatic Carbonate Platform, which is characterized by submersion (). An example of faults near the islands of Bra and Hvar and fold-propagation type of folding in the roof of reverse faults is presented in Figure 6 (). Digital bathymetric model GEBCO 2020 The GEBCO 2020 digital bathymetric model is a continuous, global topography and bathymetry model with a resolution of 15 arc seconds. It was produced through the Nippon Foundation-General Bathymetric Chart of the Oceans (GEBCO) project: "Sea- urements, but in the east, it is mainly based on data from SRTM 15+ base layer with bathymetry data derived from gravity. The location of bathymetric soundings, in situ measured depths, that are part of SRTM 15+base layer is marked in orange. The GEBCO and EMODnet models are exchanging data, and it is evident that direct measurements in the GEBCO TID grid mostly originate from the EMODnet database (see Figure 8). The bathymetry of the Italian side of the Adriatic which is based on singlebeam data has been compiled by the Marine Institute CNR-ISMAR Bologna to illustrate the main geological features of the Western Adriatic Basin (). Contour lines were manually drawn between survey lines every 1 m from -5 m to -150 m and every 20 m from -150 m. The uniform grid in the western Adriatic (resolution 200 m) that was base of the EMODnet/GEBCO grid was interpolated from contour data (URL 1). High resolution multibeam data in the eastern Adriatic are a product of scientific research. 
A composite digital terrain model DTM on the eastern side of the Adriatic was compiled from chart data from the Hydrographic Institute of the Republic of Croatia. fies the type of source data that the corresponding grid cells from the GEBCO Grid are based on. As seen in Figure 7, in the west part of the Adriatic Basin digital bathymetry model is based on direct meas- EMODnet bathymetry portal (URL 1) provides source data reference for every cell through metadata with quality indicators and link to data source holders. The quality of underlying directly measured source data in the Adriatic Sea is presented in Table 2. The possibility to detect different morphological features from a digital bathymetry model depends on the underlying source data, the interpolation method and the resolution of the grid. Global bathymetry models derived from gravity give an overview of seabed topography, but in segments that are based on high resolution bathymetry (i.e., multibeam data) minor specific subsea geomorphological structures such as landslide and submarine canyons can be detected (see Figure 9). Limits of the Adriatic Sea The Adriatic Sea is the northernmost arm of the Mediterranean Sea. The spatial boundaries of the world's oceans and seas have been defined by the International Hydrographic Organization (IHO) in S-23 publication: Limits of oceans and seas. They have been digitized and made available online (URL2) in the form of a shapefile in the WGS 84 coordinate system. The GIS based analyses of the digital bathymetric model The digital bathymetric model GEBCO 2020 is a regular grid with 15 arc minute resolution and depth assigned to the centre of the cell (pixel). Geographic coordinates refer to WGS 84 ellipsoid and depths refer to the Statistics of the GEBCO 2020 digital bathymetric model: mean, median, standard deviation and maximum depth were calculated using the "Zonal statistics" tool. The "Zonal statistics" algorithm calculates a raster statistic for each feature of an overlapping polygon that is defined by the limits of Adriatic Basin and sub-basins. For further analyses, the GEBCO 2020 grid was transformed and projected to Lambert azimuthal equalarea projection with the parameters specified in the European Terrestrial Reference System (ETRS) 1989, with a pixel size of 330 metres which approximately equals a resolution of 15 arc minutes at the 45 parallel. This is recommended by the EU INSPIRE Directive for statistical analysis of data spanning large parts of Europe when true area representations are required. The vertical depth profiles running through the Adriatic Sea and over specific morphologic features: Middle Adriatic Pit, Palagrua Sill and South Adriatic Pit were constructed directly from the GEBCO 2020 bathymetric model. The "Profile tool" plugin analyses the pixel values across a defined profile, identifies pixels with a change in value and extracts the location of the intersection between the profile and the pixel (Northing N, Easting E), the distance from the starting point of a profile (d) and depth value (D). The area and volume of the Adriatic Basin and three sub-basins were calculated by summing the values of individual cells (pixels). The area distribution of depth over the Adriatic Sea is presented with a hypsometric curve. A hypsometric curve is a graph that represents the area or percentage of cells with depth values in the defined interval. The tool "Hypsometry curve" was used to construct a graph at the 1-metre depth interval. 
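The cell-summing and hypsometry steps just described can be sketched in a few lines. This is an illustration under stated assumptions rather than the workflow actually run in QGIS: numpy is assumed, depth_grid is an equal-area grid of negative depths in metres with NaN on land, and the 330 m pixel size follows the reprojection described above.

import numpy as np

def area_volume(depth_grid, cell_size_m=330.0):
    """Basin area (km2) and volume (km3) by summing equal-area marine cells."""
    marine = depth_grid[np.isfinite(depth_grid) & (depth_grid < 0)]
    cell_area_km2 = (cell_size_m / 1000.0) ** 2
    area_km2 = marine.size * cell_area_km2
    volume_km3 = float((-marine).sum()) * cell_size_m ** 2 / 1e9  # depth (m) x cell area (m2) -> km3
    return area_km2, volume_km3

def hypsometric_distribution(depth_grid, interval_m=1.0):
    """Share of basin area (%) falling in each depth interval."""
    marine = depth_grid[np.isfinite(depth_grid) & (depth_grid < 0)]
    edges = np.arange(np.floor(marine.min()), 0.0 + interval_m, interval_m)
    counts, edges = np.histogram(marine, bins=edges)
    return edges[:-1], 100.0 * counts / marine.size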
Terrain variability was calculated by applying the terrain ruggedness index (TRI). TRI represents a local variation in seabed morphology around the central pixel (see Figure 11). It is calculated using the method described by Riley et al. for the area defined by an n x n pixel grid, where n refers to the number of pixels. TRI is calculated as the square root of the sum of the square of the difference between a central pixel and its surrounding cells. A TRI value is assigned to the centre cell of the n x n pixel grid. QGIS uses a 3x3 pixel window to calculate TRI (QGIS Project 2020). Riley formula (), following the notation in Figure 11 for a n=3 is given in Equation 1 as follows: Where: TRI: Terrain Ruggedness Index; D 0,0 : Depth of the central cell; D i,j : Depth of the neighbouring cell. The pixel (cell) resolution of the GEBCO 2020 digital bathymetric model is 330 m, the largest difference between the centres of neighbouring cells is 467 metres and size of pixel window used in the calculation is 990 metres covering an area of 0.98 km 2. The result is a raster (grid) with the same pixel (cell) resolution of 330 metres as GEBCO DBM but containing only the central cells of a 3x3 pixel window with the TRI index value assigned to the cells instead of depth. The terrain ruggedness index is influenced by the heterogeneity of input datasets, the size of the analysed neighbourhood, the amount of sediment superimposed on the underlying relief and the bedrock structure (). Distribution of sediments All geological data regarding the distribution of sediments are based on data from the European Marine Observation and Data Network (EMODnet). The EMODnet Geology Project (URL8) is one of seven projects that bring together information on Geology, Chemistry, Biology, Physics, Bathymetry, Seabed Habitats, and Human Activities in the European marine environment. It has been continuously developed through 3 phases since 2009 when 14 organizations from 14 countries demonstrated the idea of compiling and harmonising geologi- cal information to provide map information and supporting data for parts of the regional seas of Europe. Various geological data are presented through the map: seabed substrates, sediment accumulation rate, seabed lithology, stratigraphy and geomorphology, coastal behaviour, mineral occurrences, geological events and probabilities and submerged landscapes. The seabed substrate map of the European marine areas includes the Mediterranean Sea at a 1: 250 000 scale. The map is collated and harmonized from seabed substrate information within the EMODnet-Geology project. Where necessary, the existing seabed substrate classifications have been translated to a scheme that is supported by EUNIS (European nature information system website). This EMODnet reclassification scheme includes at least five seabed substrate classes. Four substrate classes are based on the modified Folk triangle (mud to sandy mud; sand; coarse sediment; and mixed sediment) and one additional substrate class (rock and boulders) was included by the project team. If the original seabed substrate dataset has enabled more detailed substrate classification with 7 classes, then 16 substrate classes might be available. Geological data for the European seas are collected by national organizations using a range of tools and techniques. The main providers of data for EMODnet Geology are national geological surveys that began in the 1970s and 1980s, and continued later. 
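As a schematic illustration of the kind of rule-based reclassification described above — not the EMODnet rules themselves, whose exact thresholds are an assumption here — a toy Folk-triangle-style classifier from gravel, sand and mud percentages might look as follows.

def classify_substrate(gravel_pct, sand_pct, mud_pct):
    """Toy Folk-triangle-style split into simplified substrate classes."""
    if abs(gravel_pct + sand_pct + mud_pct - 100.0) > 1e-6:
        raise ValueError("fractions must sum to 100")
    if gravel_pct >= 30:
        return "coarse sediment"
    if gravel_pct >= 5:
        return "mixed sediment"
    # little or no gravel: split on the sand-to-mud ratio
    if sand_pct >= 9 * mud_pct:
        return "sand"
    if sand_pct >= mud_pct:
        return "muddy sand"
    if mud_pct >= 9 * sand_pct:
        return "mud"
    return "sandy mud"

print(classify_substrate(2, 30, 68))  # -> "sandy mud"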
Bathymetry The Adriatic Sea is a semi-enclosed sea located between two mountain chains: the Apennines and Dinarides. It is the northernmost part of the Mediterranean Sea connected through the Otranto Strait with the Ionian Sea. In this study, IHO limits version 3 (IHO, 1953; Flanders Institute, 2018) with an underlying coastline layer from ESRI World 2014 have been adopted as the limits of the Adriatic. The GEBCO 2020 digital bathymetric model (DBM) has been used as a bathymetric source and all pixels with values smaller than zero that fall within the defined boundaries have been included in the GIS based analyses. General statistics of depth: mean, median, standard deviation, and maximum as well as area and volume of the Adriatic Sea and sub-basins are presented in Table 3. The mean depth of the Adriatic derived from the GEB-CO grid is -253 metres with a standard deviation of 347 metres. In the GEBCO 2020 grid, the deepest point of the Adriatic is in the South Adriatic Pit (=17.91°, =42.11°) and has a depth of -1244 metres. The area of the Adriatic Sea equals 138 516 km 2 and the volume of the Adriatic Basin is 35 521 km 3. The total area comes out to 0.02% and the volume 0.03% smaller when summarizing the separate sub-ba- The hypsometric curve shows the area distribution of depth in 1 m depth intervals over the Adriatic Sea (see Figure 12). The Adriatic Sea is a shallow sea with and over 50% of the complete area shallower than 100 metres. As can be noticed from Figure 12, the curve has expressed peaks from 0 m to 130 m depth. These peaks happen when the bathymetry grid is generated from contour data (Marks and Smith, 2006; Jakobbson et. al. 2019). Vertical profiles along the Adriatic Sea have been derived from the GEBCO 2020 digital bathymetric model to demonstrate the topography of the seabed. The location of the profiles is presented in Figure 10: vertical cross-section aa' is running longitudinally across the Adriatic Sea, bb' across the Middle Adriatic Pit (MAP), cc' across Palagrua Sill and dd' across the South Adriatic Pit (SAP). The North, Middle, and South Sub-basins of the Adriatic Sea have different depth range and seabed topography (see Figure 13a). The North Adriatic Sub-basin is the shallowest part of the Adriatic Sea that comprises only 4% volume of the whole basin, with an average depth of -43 metres. The area of North Adriatic runs gently from the flooded Po River paleodelta, that flows seaward to the south-east during glacial periods, transporting and depositing large quantities of sandy and silty detritus in deltas and prodeltas, to the line connecting the cities of Ancona and Zadar. Turbidites were initiated from these river delta accumulations. The Middle Adriatic Sub-basin is a transitional zone between the shallow north part and the deepest part of the Adriatic in the South Adriatic Pit. From the northwest to the southeast of the Middle Adriatic sub-basin, the topography of seabed descends to the Middle Adriatic Pit with a maximum depth of -283 m (see Figure 13b). Further, the seabed rises to Palagrua Sill, a natural step lying near the line connecting Monte Gargano and Split with a depth of about -180 m (see Figure 13c). The Middle Sub-basin contains 14% of the total volume of the Adriatic with an average depth of -110 metres. The South Sub-basin extends from Palagrua Sill to the strait of Otranto. It contains more than 80% of the volume of the Adriatic with an average depth of -498 m, however the median depth is only -295 m due to terrain configuration. 
Both coasts, east, and west, have a narrow part of seabed shallower than 200 m, then a steep conti- nental slope goes down to a -1244 m deep and relatively flat South Adriatic Pit (see Figure 13d). From the South Adriatic Pit, the topography of the seabed slowly rises, forming the -750m deep Otranto Sill, a natural barrier between the Adriatic and the Ionian Sea. Geology A seabed substrate map of the Adriatic Sea has been developed through the EMODnet project, using data from Italy, Croatia, Slovenia, Montenegro, and Albania (see Figure 12). The Project aimed to deliver layers compiled on a map with a scale of 1:100 000, but due to insufficient input data, the final product is a combination of 1:100 000 and 1:250 000 scale. The large scale corresponds to the inner coastal area of the Croatian part while the smaller scale corresponds to the rest of the Adriatic Sea that is mainly tied to the offshore. The smallest cartographic unit (SCU), set up by the MESH project (Foster-Smith. et al., 2007), for 1:100 000 scale is 0.05 km 2 (5 hectares) and for 1:250 000 scale is 0.3 km 2 (30 hectares). The seabed substrate classification schema based on the hierarchy of Folk was also adapted from the MESH project (). The Folk 7 classes were adopted for the Adriatic Sea as follows: rock and boulders, coarse sediment, mixed sediment, mud, sandy mud, muddy sand, and sand. From a geological point of view, the geomorphological structure of the Adriatic Sea is quite recent because the present shape of the coast has been formed by changes in the sea level in the Holocene. The Adriatic Sea is a regional structural depression with a number of synclinorium and anticlinorium, consisting of two parts with different characteristics of Holocene sediments, separated by the Kornati -Pescara line: the Northern Adriatic sandy area and the Southern Adriatic with sand, silt, and mixed sediments. Deep basins with depths over 200 metres in the area of Jabuka and Palagrua Island, as well as a seabed in the narrow belt between the islands of Jabuka, Bievo, Suac, Lastovo and Palagrua and the South Adriatic Pit, are covered with silt (Favro & Kovai, 2010;Kuica, 2013). As observed in Figure 14, the EMODnet geology map of the Adriatic corresponds quite well with these findings. The terrain ruggedness index (TRI) expresses the amount of elevation difference between adjacent cells of the digital bathymetry model DBM. The terrain ruggedness index calculated from GEBCO 2020 DBM over the area of the Adriatic Sea is presented in Figure 15. Inconsistencies in bathymetric source data coverage with different resolutions are readily apparent in the terrain ruggedness index (TRI) (). Artificial sharp lines (see Figure 15a) are a product of blending direct soundings in the digital bathymetry model derived from gravity. In the northwest part of the Adriatic, the digital bathymetry model GEBCO 2020 is constructed by interpolation from contour lines (see Figure 15b). The TRI index equals zero between contours and slightly changes the value along the contour line. TRI depends on the data upon which the digital bathymetry model is interpolated and will represent the true picture of seabed roughness only in areas where high-resolution survey data are available (see Figure 7 and Figure 8). There is an evident change in TRI value spreading parallel with the west Adriatic coast presented in yellow and green in Figure 15. 
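To make the index behind Figure 15 concrete: following the Riley et al. (1999) definition referenced earlier as Equation 1, TRI at a cell is the square root of the summed squared depth differences between that cell and its eight neighbours, TRI = sqrt( sum over i,j of (D_i,j - D_0,0)^2 ). A minimal numpy sketch of that calculation is given below; it is an illustration, not the QGIS implementation used in the study, and the along-coast pattern discussed next comes from applying this kind of window to the GEBCO grid.

import numpy as np

def terrain_ruggedness_index(depth_grid):
    """TRI per Riley et al. (1999): 3x3 window, square root of summed squared differences."""
    tri = np.full(depth_grid.shape, np.nan)  # edge cells are left undefined
    for i in range(1, depth_grid.shape[0] - 1):
        for j in range(1, depth_grid.shape[1] - 1):
            window = depth_grid[i - 1:i + 2, j - 1:j + 2]
            diff = window - depth_grid[i, j]
            tri[i, j] = np.sqrt(np.nansum(diff ** 2))
    return tri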
As discussed in Trincardi and Ridnete and Trincardi, this area is characterized by the presence of Late Holocene mud wedge deposits extending over 600 km along the coast of Italy from Po valley to Gargano peninsula, where muddy sediment was supplied by Apennine rivers through lateral advection. These coloured lines are denoting the modern forest of the mud wedge where sediment accumulates at the highest rate. In Figure 15c, an irregular pattern of contour lines evident in TRI represents the area of incised valleys on the north Adriatic shelf formed during the subaerial exposure of this area during the Last Glacial Maximum and the early stages of the post-glacial sea-level rise. The extreme change in TRI values around the margin of the South Adriatic Pit is a result of bathymetry gradient and southward flowing bottom water masses that are moving bottom deposits and are impacting the upper part of the slope. These currents are cascading down the slope with high energy and as such are creating bottom deposits and erosional features (furrows and scours) (Verdicchio and Trincardi, 2006). The continental shelf is not smooth at the subregional scale. Geomorphological and geological processes have contributed to produce different degrees of roughness on the seabed. Usually, greater seabed roughness is associ-ated with a consolidated substrate, although heterogeneous patterns can reflect old aeolian processes or presentday tidal currents (Smith, 2014;). Over the various glacial-interglacial cycles, the present Adriatic continental shelf was exposed as land, where river valleys formed deeper channels and depressions. Past sedimentary processes, faulting and folding, erosion, and deposition have all contributed to the roughness of the Adriatic seabed. Conclusion Bathymetric and geological properties of the Adriatic Sea were analysed using the General Bathymetric Chart of the Ocean GEBCO 2020 digital bathymetric model and data from the European Marine Observation and Data Network (EMODnet) portal. Less than 20% of the Adriatic Sea has not yet been systematically surveyed according to modern standards for the safety of navigation. However, access to survey data is limited, and publicly available digital bathymetric models in the Adriatic, particularly along the eastern coast, are based on gravity-predicted bathymetry augmented with the available in-situ soundings. Present information regarding statistics of the depth, area, and volume of the Adriatic Sea is mostly given without reference to the bathymetry source or the limits of the Adriatic Sea. This research adopted the limits of the Adriatic Sea version 3, defined by IHO and Flanders Institute and included all GEBCO 2020 cells smaller than zero in the analyses. The area of the Adriatic Sea is 138 516 km 2 and the basin volume is 35 521 km 3. The Adriatic Sea is a shallow sea with a mean depth of -253 m, standard deviation of depth is 347 m and more than 50% of the complete area of Adriatic is shallower than 100 m. The maximum depth of the GEBCO 2020 digital bathymetric model in the Adriatic Sea, located in the South Adriatic Pit (SAP), is -1244 metres. The Adriatic Basin is divided into three sub-basins regarding bathymetry. The North Sub-basin extends up to the line connecting Zadar and Ancona. It is the shallowest part of the Adriatic that covers ~25% of the area and compromises only 4% of the volume with a mean depth of -43 metres. 
The Middle Adriatic is a transitional zone between the shallow northern part and the deepest part of the Adriatic in the South Adriatic Pit (SAP) that comprises 14% of the basin volume. The topography of the seabed in the Middle Adriatic is characterised by two morphological structures: the Middle Adriatic Pit with a maximum depth of -283 metres and Palagrua Sill with a depth of about -180 metres, a natural step before the steep slope to the South Adriatic Pit in the South Adriatic. The South Adriatic Sub-basin extends from Palagrua Sill up to the Strait of Otranto. It is the deepest part (-1244 m) of the Adriatic that comprises 82% of the total volume. Kilometre scale variability of the seabed morphology was analysed by calculating the Terrain Ruggedness Index (TRI). Patterns of the Terrain Ruggedness Index reflect the heterogeneity of the source data from which the GEBCO 2020 grid was calculated. Along the west coast of the Adriatic, the TRI pattern coincides with contour lines that are manually drawn from singlebeam data and reveals incised valleys formed in past cycles. TRI has the highest value in the marginal area of the South Adriatic Pit because of the depth gradient and movement of the bottom deposit by the circulation of water masses. EMODnet seabed substrate data have been shown in the form of GIS layers that contain Folk 7 class hierarchy that is created from seabed substrate granulometry point data. Granulometric features of seabed substrates do not show a direct connection with TRI. However, the distribution of seabed substrates in interaction with sea currents and waves indirectly impacts bathymetric data by consequently altering the elevation difference between the adjacent cells of a digital bathymetric model.
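As a quick arithmetic cross-check on the figures restated above (not part of the original analysis): dividing the basin volume by its area gives 35 521 km3 / 138 516 km2 ≈ 0.256 km, roughly 256 m, which is consistent with the reported mean depth of -253 m once rounding and the treatment of partly marine coastal cells are allowed for.

volume_km3, area_km2 = 35_521, 138_516
print(round(1000 * volume_km3 / area_km2))  # ~256 m, close to the reported mean depth of 253 m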
#ifndef DYALOG_NOTICEMESSAGE_H
#define DYALOG_NOTICEMESSAGE_H

#include "../MessageAbstract.h"

/**
 * @inherit
 */
class NoticeMessage : public MessageAbstract
{
public:
    /// @inherit
    std::string getMessageType() final { return "Notice"; };

    /// @inherit
    unsigned int getMessageLevel() final { return 200; };
};

#endif //DYALOG_NOTICEMESSAGE_H
Radiation-stimulated diffusion in single memory cells A model of radiation-stimulated diffusion (RSD) of phosphorus (P) in Si is proposed to interpret the results of high-dose γ-irradiation of complementary metal-oxide-semiconductor (CMOS) memory transistors. The devices are irradiated with doses ≥ 1 Mrad. The degradation of the parameters of the memory transistors is explained on the basis of a model of RSD of impurities in the n–p junctions and a spreading of the impurity profiles. The RSD of P in Si occurs as a result of the ionization-induced decrease in the potential barriers for diffusion hopping. A possible increase in the diffusion coefficient caused by the irradiation is estimated. An approach for predicting the operating lifetime (exploitation resource) of devices under extreme conditions is considered; in this case, a similarity-transformation method is used. The approach is illustrated by a numerical example.
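The kind of estimate described here can be illustrated with a simple Arrhenius picture: if ionization lowers the effective hopping barrier by ΔE, the diffusion coefficient is enhanced by roughly exp(ΔE / kT). The barrier lowering and temperature below are assumed values chosen for illustration only, not figures from the paper.

import math

k_B = 8.617e-5   # Boltzmann constant, eV/K
T = 300.0        # assumed temperature, K
delta_E = 0.1    # assumed ionization-induced barrier lowering, eV (hypothetical)

enhancement = math.exp(delta_E / (k_B * T))
print(f"D_irradiated / D_thermal ~ {enhancement:.0f}")  # ~48x under these assumptions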
Doctors will see increased demand during Olympics despite preparations Health services in London will see demand for care rise by 3-5% during the Olympic Games, despite the new system that the Health Protection Agency (HPA) has put in place to monitor and respond to outbreaks of infectious diseases and environmental hazards.1 Brian McCloskey, the agency's Olympics lead and director for London, said that the increase would be in the order of the surge that the NHS sees in a mild winter. "It is not as much as the increased numbers that create a crisis in the health service; it is much less than that," he said. Much of the predicted increase would be related to excessive alcohol consumption and acute conditions rather than infectious diseases, McCloskey added. At the Olympic Games in Athens in 2004 and in Sydney in 2000 only 1% of admissions to hospital
Intracellular Brucella melitensis in the bone marrow A 51-year-old woman presented with a 4-week history of recurrent fever. She had received outpatient care at a local private clinic. Chest radiography had shown pneumonia, which had been treated with 10 days of roxithromycin. The antibiotic treatment was ineffectual, with fever persisting, and she was therefore referred to our hospital. Admission tests showed a hemoglobin concentration of 108 g/l, platelet count 212 × 10⁹/l and leucocytes 5.27 × 10⁹/l, with 2.02 × 10⁹/l neutrophils, 2.7 × 10⁹/l lymphocytes and 0.55 × 10⁹/l monocytes. C-reactive protein and erythrocyte sedimentation rate were elevated to 21.6 mg/l and 24 mm/h, respectively. Serum procalcitonin was 0.062 ng/ml (normal range 0–0.05). Urinalysis showed protein and blood. Urine culture was positive for Escherichia coli (extended-spectrum β-lactamase positive), but the therapeutic effect of targeted antibacterial drugs was inadequate. An initial blood culture was negative. A bone marrow aspirate showed 51% macrophages with erythrophagocytosis (top images). In addition, many macrophages contained a variable number of fine, pink sand-like particles consistent with intracellular bacteria (bottom images, black arrows). Following these observations, a more detailed history was taken. The patient had been working with livestock, specifically sheep. A second blood culture yielded slow-growing Gram-negative coccobacilli, which were later identified as Brucella melitensis. Four weeks' treatment with rifampin, doxycycline and levofloxacin was effective and the patient remained well at follow-up at 1 year. Human brucellosis, also known as Malta fever or Mediterranean fever, commonly occurs as a result of working with infected livestock or through consumption of unpasteurised dairy products. The clinical manifestations are varied and nonspecific, so brucellosis may imitate other conditions and be misdiagnosed. The clinical diagnosis of this condition is therefore a challenge. Our report highlights the important role of a bone marrow film and a detailed clinical history in making the diagnosis.
def addIoVal(self, db, key, val):
    dups = set(self.getIoVals(db, key))
    with self.env.begin(db=db, write=True, buffers=True) as txn:
        cnt = 0
        cursor = txn.cursor()
        if cursor.set_key(key):
            cnt = cursor.count()
        result = False
        if val not in dups:
            if cnt > MaxForks:
                raise DatabaseError("Too many recovery forks at key = "
                                    "{}.".format(key))
            val = (b'%06x.' % (cnt)) + val
            result = txn.put(key, val, dupdata=True)
        return result
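The method above depends on LMDB's duplicate-sorted sub-databases: put(..., dupdata=True) stores another value under the same key, and cursor.set_key()/count() report how many duplicates a key holds (getIoVals, MaxForks and DatabaseError are defined elsewhere in the surrounding module, which is not shown here). A minimal standalone sketch of that pattern with the lmdb package — an illustration, not the class this method belongs to — follows.

import lmdb

env = lmdb.open("/tmp/iodup-demo", max_dbs=2, map_size=10 * 1024 * 1024)
db = env.open_db(b"vals", dupsort=True)        # duplicate-sorted sub-database

with env.begin(db=db, write=True) as txn:
    txn.put(b"key", b"000000.first", dupdata=True)
    txn.put(b"key", b"000001.second", dupdata=True)

with env.begin(db=db) as txn:
    cursor = txn.cursor()
    if cursor.set_key(b"key"):                 # position at the first duplicate for this key
        print(cursor.count())                  # -> 2 values stored under b"key"
env.close()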
<reponame>vt-technologies-us/cryptobeet # CryptoCurrency Price Predictor base on LSTM-GRU Neural Network # Author: @VT-tech # Copyright 2018 The VT tech co. All Rights Reserved. # # Licensed under the Apache License, Version 1.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://github.com/vt-technologies-us/CryptoBeet/blob/master/LICENSE # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # ============================================================================== # Bring in all of the public InBeet interface into this # module. import datetime import json import pickle import matplotlib.pyplot as plt import numpy as np import pandas as pd import tables from sklearn.preprocessing import MinMaxScaler from standard import config class DataManager: _USD_book = np.array([[1, 1, np.inf], [1, 1, -np.inf]]) def __init__(self, **kwargs): self.start_date = kwargs.get('start_date', None) self.end_date = kwargs.get('end_date', None) self.max_size_of_database = kwargs.get('max_size_of_database', np.inf) self._price_or_buy_sell = kwargs.get('price_or_buy_sell', 'buy_sell') self._plot_from_last = kwargs.get('plot_from_last', 1000) self._train_test_ratio = kwargs.get('train_test_ratio', 0.85) self.look_back = kwargs.get('look_back', 10) self.stride = kwargs.get('stride', config.stride) self.bad_coins = kwargs.get('bad_coins', config.bad_coins) # self.scaler = MinMaxScaler(feature_range=(-1, 1)) def initialize(self, file_address): self._load_databases(file_address) self._remove_null_data() self._scale_data() self._prepare_x_y() def _load_databases(self, file_address): with tables.File(file_address) as f: self.coins = list(f.root._v_children) for coin in self.bad_coins: if coin in self.coins: self.coins.remove(coin) # shapes = list(x.shape[0] for x in f.root) max_len_coin = max(self.coins, key=lambda x: getattr(f.root, x).shape[0]) max_len = min(getattr(f.root, max_len_coin).shape[0], self.max_size_of_database) # only manage between start time and end time data_coins = {} # todo one of them if self.start_date and self.end_date: period = getattr(f.root, max_len_coin).get_where_list( f'(timestamp > {self.start_date.timestamp()}) & (timestamp < {self.end_date.timestamp()})') drop_coins = [] for coin in self.coins: p_coin = getattr(f.root, coin).get_where_list( f'(timestamp > {self.start_date.timestamp()}) & (timestamp < {self.end_date.timestamp()})') if p_coin.shape[0] > 0: data_coins[coin] = getattr(f.root, coin)[p_coin] else: drop_coins.append(coin) for coin in drop_coins: self.coins.remove(coin) else: period = np.arange(min(max_len, self.max_size_of_database)) for coin in self.coins: p_coin = np.arange(min(getattr(f.root, coin).shape[0], self.max_size_of_database)) data_coins[coin] = getattr(f.root, coin)[p_coin] self.num_coins = len(self.coins) if np.isfinite(self.max_size_of_database): period = period[:self.max_size_of_database] times = getattr(f.root, max_len_coin)[period]['timestamp'] self._book_orders = {c: np.full((times.shape[0], 50, 3), np.nan) for c in self.coins} if self._price_or_buy_sell == 'price': data = np.full((times.shape[0], self.num_coins), np.nan) elif self._price_or_buy_sell == 'buy_sell': data = np.full((times.shape[0], 
self.num_coins * 2), np.nan) indices = [0] * self.num_coins for i in range(times.shape[0]): for k in range(len(indices)): # todo use all data while times[i] - data_coins[self.coins[k]][indices[k]]['timestamp'] > 150: indices[k] += 1 j = indices[k] if abs(times[i] - data_coins[self.coins[k]][j]['timestamp']) < 150: if self._price_or_buy_sell == 'price': data[i, k] = data_coins[self.coins[k]][j]['ticker'][6] elif self._price_or_buy_sell == 'buy_sell': data[i, 2 * k] = data_coins[self.coins[k]][j]['book'][0, 0] data[i, 2 * k + 1] = data_coins[self.coins[k]][j]['book'][25, 0] self._book_orders[self.coins[k]][i] = data_coins[self.coins[k]][j]['book'] indices[k] += 1 if self._price_or_buy_sell == 'price': cols = self.coins elif self._price_or_buy_sell == 'buy_sell': cols = pd.MultiIndex.from_tuples(zip(np.repeat(self.coins, 2), ('buy', 'sell') * 2 * len(self.coins))) times = np.array([pd.Timestamp(datetime.datetime.fromtimestamp(t)) for t in times]) self.df = pd.DataFrame(data, columns=cols, index=times) def _remove_null_data(self): drop_columns = [] for c in self.df: if self._price_or_buy_sell == 'price': if np.isfinite(self.df[c].values[0]): drop_columns.append(c) elif self._price_or_buy_sell == 'buy_sell': if not all(np.isfinite(self.df[c[0]].values[0])): drop_columns.append(c) # mask = [c not in np.unique(np.asarray(drop_columns)[:, 0]) for c in self.coins] # self._book_orders = self._book_orders[mask] if drop_columns: self.df = self.df.drop(columns=drop_columns) for drop_coin in np.unique(np.asarray(drop_columns)[:, 0]): self.coins.remove(drop_coin) self._book_orders.pop(drop_coin, None) for coin in self.coins: x = self.df[coin].values for idx in np.unique(np.where(~np.isfinite(x))[0]): x[idx] = x[idx - 1] * (1 + np.random.randn() * 0.0005) for idx in np.unique(np.where(x < 0)[0]): # change == -1 to < 0 x[idx] = x[idx - 1] * (1 + np.random.randn() * 0.0005) self.df[coin] = x def _scale_data(self): self.scalers = {c: MinMaxScaler(feature_range=(-1, 1)) for c in self.coins} self.df_scaled = self.df.copy() for coin, sc in self.scalers.items(): x = self.df_scaled[coin].values x_scaled = sc.fit_transform(x) self.df_scaled[coin] = x_scaled def _scale_new_data(self): self.df_scaled = self.df.copy() for coin, sc in self.scalers.items(): x = self.df_scaled[coin].values x_scaled = sc.transform(x) self.df_scaled[coin] = x_scaled def _prepare_x_y(self): self.ds = {c: None for c in self.coins} for coin in self.ds: self.ds[coin] = self.create_datasets(self.df_scaled[coin].values) def create_datasets(self, dataset): sequence_length = self.look_back * self.stride + 1 seq_dataset = [] for i in range(len(dataset) - sequence_length + 1): seq_dataset.append(dataset[i: i + sequence_length: self.stride]) # seq_dataset = np.array(seq_dataset) seq_dataset = np.array(seq_dataset, dtype=np.float16) data_x = seq_dataset[:, :-1] data_y = seq_dataset[:, -1] return data_x, data_y def _prepare_x_y_test(self, len): ds = {c: None for c in self.coins} for coin in ds: dataset = self.df_scaled[coin].values[-self.look_back * self.stride - len + 1:] dataset = np.concatenate((dataset, np.full((len, dataset.shape[1]), np.nan))) ds[coin] = self.create_datasets(dataset) return ds def get_book_orders(self, time): book_orders = dict() for c in self._book_orders: book_orders[c] = self._book_orders[c][time] return book_orders def new_data(self, times, book_orders): if not isinstance(times, list): times = [times] for coin in book_orders: book_orders[coin] = np.expand_dims(book_orders[coin], axis=0) times = np.array([t if 
isinstance(times[0], pd.Timestamp) else pd.Timestamp(t * 1e9) for t in times]) new_len = times.shape[0] data = {} for coin, book_order in self._book_orders.items(): self._book_orders[coin] = np.concatenate((self._book_orders[coin], book_orders[coin]), axis=0) data[coin, 'buy'] = book_orders[coin][:, 0, 0] data[coin, 'sell'] = book_orders[coin][:, 25, 0] # data[coin] = book_orders[coin][:, [0, 25], 0] new_df = pd.DataFrame(data, index=times) # remove null datas for coin in self.coins: x = new_df[coin].values for idx in np.where(~np.isfinite(x))[0]: if idx == 0: x[idx] = self.df[coin].values[-1] * (1 + np.random.randn() * 0.0005) else: x[idx] = x[idx - 1] * (1 + np.random.randn() * 0.0005) for idx in np.where(x == -1)[0]: if idx == 0: x[idx] = self.df[coin].values[-1] * (1 + np.random.randn() * 0.0005) else: x[idx] = x[idx - 1] * (1 + np.random.randn() * 0.0005) new_df[coin] = x self.df = self.df.append(new_df) # Scale Data new_df_scaled = new_df.copy() for coin, sc in self.scalers.items(): x = new_df_scaled[coin].values x_scaled = sc.transform(x) new_df_scaled[coin] = x_scaled self.df_scaled = self.df_scaled.append(new_df_scaled) # create new dataset return self._prepare_x_y_test(new_len) def get_train(self, coin): train_len = int(self._train_test_ratio * self.df[coin].shape[0]) # return self.ds[coin][0][:train_len], self.ds[coin][1][:train_len] return { 't': self.df.index[self.look_back * self.stride:train_len + self.look_back * self.stride], 'x': self.ds[coin][0][:train_len], 'y': self.ds[coin][1][:train_len]} def get_test(self, coin): train_len = int(self._train_test_ratio * self.df[coin].shape[0]) return { 't': self.df.index[train_len + self.look_back * self.stride:], 'x': self.ds[coin][0][train_len:], 'y': self.ds[coin][1][train_len:]} def plot_train_test(self, coin, ax, bounded=False): if bounded: train_len = min(int(self._train_test_ratio * self.df.shape[0]), self._plot_from_last) else: train_len = int(self._train_test_ratio * self.df.shape[0]) data = self.df[coin].values times = self.df.index train_data = np.full(data.shape, np.nan) test_data = np.full(data.shape, np.nan) train_data[:train_len] = data[:train_len] test_data[train_len:] = data[train_len:] ax.plot(times, train_data, 'r', label='training set') ax.plot(times, test_data, 'b', label='predicted price/test set') ax.legend(loc='upper left') # ax.set_xlabel('Time in 5 minutes') ax.set_ylabel('Price') # ax.set_xlim(0, 1000) def plot_trends(self, ax, bounded=False): if bounded: plot_len = min(self.df.shape[0], self._plot_from_last) else: plot_len = self.df.shape[0] for coin in self.coins: data = self.df[coin].values / self.df[coin].values[0] * 100 line = ax.plot(self.df.index, data, label=coin) plt.setp(line, linewidth=.5) line = ax.plot(self.df.index, np.full(self.df.index.shape, 100), label='USD') plt.setp(line, linewidth=.5) ax.legend(loc='upper left') # ax.set_xlabel('Time in 5 minutes') ax.set_ylabel('Price') # ax.set_xlim(0, 1000) return ax def load(self, filename): f = open(filename, 'rb') tmp_dict = pickle.load(f) f.close() self.__dict__.update(tmp_dict) def save(self, filename): f = open(filename, 'wb') pickle.dump(self.__dict__, f, 2) f.close() def save_for_tarding(self, filename): train_len = int(self._train_test_ratio * self.df.shape[0]) self.save_scalers(filename) with open('models/start_trading_time.json', 'w') as f: json.dump(self.df.index[train_len - 1].isoformat(), f) config.start_date_test = self.df.index[train_len - 1] save_data = dict() save_data['dataframe'] = self.df[train_len - self.look_back * self.stride 
- 1:train_len] save_data['book_orders'] = dict() for coin, book_order in self._book_orders.items(): save_data['book_orders'][coin] = book_order[train_len - self.look_back * self.stride - 1:train_len] f = open(f'models/last_data_{filename}.pkl', 'wb') pickle.dump(save_data, f, 2) f.close() def load_for_tarding(self, filename): self.load_scalers(filename) f = open(f'models/last_data_{filename}.pkl', 'rb') load_data = pickle.load(f) f.close() self.df = load_data['dataframe'] self._book_orders = load_data['book_orders'] self.df_scaled = self.df.copy() for coin, sc in self.scalers.items(): x = self.df_scaled[coin].values x_scaled = sc.transform(x) self.df_scaled[coin] = x_scaled self.coins = list(self.scalers.keys()) self.df.drop(columns=set(self.bad_coins).intersection(self.coins)) for coin in self.bad_coins: if coin in self.coins: self.coins.remove(coin) self._book_orders.pop(coin, None) self.scalers.pop(coin, None) self.num_coins = len(self.coins) def copy_for_trading(self, dm): train_len = int(dm._train_test_ratio * dm.df.shape[0]) self.scalers = dm.scalers self.df = dm.df[train_len - dm.look_back * dm.stride - 1:train_len].copy() self._book_orders = dict() for coin, book_order in dm._book_orders.items(): self._book_orders[coin] = book_order[train_len - dm.look_back * dm.stride - 1:train_len] self.df_scaled = self.df.copy() for coin, sc in self.scalers.items(): x = self.df_scaled[coin].values x_scaled = sc.transform(x) self.df_scaled[coin] = x_scaled self.coins = list(self.scalers.keys()) self.df.drop(columns=set(self.bad_coins).intersection(self.coins)) for coin in self.bad_coins: if coin in self.coins: self.coins.remove(coin) self._book_orders.pop(coin, None) self.scalers.pop(coin, None) self.num_coins = len(self.coins) def save_scalers(self, filename): f = open(f'models/scalers_{filename}.pkl', 'wb') pickle.dump(self.scalers, f, 2) f.close() def load_scalers(self, filename): f = open(f'models/scalers_{filename}.pkl', 'rb') scalers = pickle.load(f) f.close() self.scalers = scalers if __name__ == '__main__': dm = DataManager( # price_or_buy_sell='price', # max_size_of_database=1000, # start_date=config.start_date_train, # end_date=config.end_date_train, ) dm.initialize(config.db_path, ) dm.plot_trends(plt.figure(figsize=(20, 10)).add_subplot(111)) # dm.save('dm.pkl') plt.show()
In order to explore the possibility that curcumin treats multiple myeloma (MM) via the inhibition of angiogenesis, the expressions of brain-derived neurotrophic factor (BDNF) and its specific receptor in human MM cells and endothelial cells were detected by reverse transcriptase-polymerase chain reaction (RT-PCR). The angiogenic activity was evaluated by endothelial cell migration assay and tubule formation assay in vitro. The results showed that exogenous BDNF significantly induced endothelial cell tubule formation and endothelial cell migration; both effects were inhibited by curcumin. Furthermore, BDNF was detected in the MM cells and TrkB in the endothelial cells, and curcumin depressed the mRNA expression of BDNF and TrkB in a dose- and time-dependent manner. It is concluded that BDNF is a novel angiogenesis protein. Curcumin interrupts the interaction between multiple myeloma cells and endothelial cells by reducing TrkB expression in endothelial cells and inhibiting BDNF production in multiple myeloma cells, ultimately resulting in inhibition of angiogenesis. This is probably one part of the mechanism by which curcumin treats MM via the inhibition of angiogenesis.
A Multicenter Evaluation of the Feasibility, Patient/Provider Satisfaction, and Value of Virtual Spine Consultation During the COVID-19 Pandemic Objective To assess the feasibility, patient/provider satisfaction, and perceived value of telehealth spine consultation after rapid conversion from traditional in-office visits during the COVID-19 pandemic. Methods Data were obtained for patients undergoing telehealth visits with spine surgeons in the first 3 weeks after government restriction of elective surgical care at 4 sites (March 23, 2020, to April 17, 2020). Demographic factors, technique-specific elements of the telehealth experience, provider confidence in diagnostic and therapeutic assessment, patient/surgeon satisfaction, and perceived value were collected. Results A total of 128 unique visits were analyzed. New (74 ), preoperative (26 ), and postoperative (28 ) patients were assessed. A total of 116 (91%) visits had successful connection on the first attempt. Surgeons felt very confident 101 times (79%) when assessing diagnosis and 107 times (84%) when assessing treatment plan. The mean and median patient satisfaction was 89% and 94%, respectively. Patient satisfaction was significantly higher for video over audio-only visits (P < 0.05). Patient satisfaction was not significantly different with patient age, location of chief complaint (cervical or thoracolumbar), or visit type (new, preoperative, or postoperative). Providers reported that 76% of the time they would choose to perform the visit again in telehealth format. Sixty percent of patients valued the visit cost as the same or slightly less than an in-office consultation. Conclusions This is the first study to demonstrate the feasibility and high patient/provider satisfaction of virtual spine surgical consultation, and appropriate reimbursement and balanced regulation for spine telehealth care is essential to continue this existing work. INTRODUCTION T elehealth is an emerging platform that had relatively limited utilization among spine surgeons before the COVID-19 pandemic and the resultant shutdown of traditional face-to-face care. Reasons for this included the challenge inherent to any transformative change to traditional methods for providing health care, burdensome regulatory restraints such as the need for multistate medical licensure, and inconsistent or uncertain insurance reimbursement. Further, reliability and patient satisfaction of the telehealth evaluation was unknown. All of these created real or perceived prohibitive functional barriers to telehealth care for spine surgeons. Although limitations mentioned above have tempered enthusiasm and acceptance of telehealth among surgical subspecialists in the civilian sector, the military and Veterans Administration have been early adopters. Their entrenched hub-and-spoke organizational structure, which covers broad regions with varying degrees of resources, makes for a perfect environment to realize the unique benefits of telehealth. Despite the aforementioned challenges, surgical and nonsurgical specialty groups have already reported successful results with telehealth consultation within their fields. In addition, telehealth for evaluation and management of patients with spinal cord injuries continues to be increasingly explored by interdisciplinary teams. However, the authors are unaware of any studies investigating feasibility, patient/provider satisfaction, or perceived value with telehealth as a vehicle for evaluating and treating spinal disease. 
Because of restrictions limiting in-office visits during the COVID-19 pandemic and broadly reduced barriers to practicing telehealth, many health systems quickly adopted telehealth platforms to continue delivering the best care possible to their patients. The mass migration to the telehealth platform was further enabled by Centers for Medicare and Medicaid Services decisions to provide expanded reimbursement and decreased restriction on the format for telehealth services. A secondary effect of the changes in telehealth regulations was that video conferences in the patient's home were broadly authorized. This added a level of empathetic connection between the patient and provider at a time when the world was forced to be socially distanced. In some ways, this was a process of clinical care coming full circle. From the original house calls of the past, COVID-19 had ushered in a chance to see patients where and how they live. But questions remained, especially at the provider level, most commonly from spine providers who were nave to the use of telehealth. Reliably establishing a diagnosis for spinal disease is challenging even with traditional in-office examination. An often-mentioned concern among spine surgeons with telehealth is the inability to perform a physical examination and the possibility that this may result in missed diagnosis and inappropriate treatment. However, Heflin et al 16 showed that physical examination has limited specificity for diagnosis of cervical myelopathy. Likewise, Fogarty et al 17 demonstrated in a systematic review that physical examination, specifically a present Hoffman's sign, adds little to the diagnosis of cervical myelopathy over and above imaging and history. The limited incremental diagnostic value of physical examination has been reported for other common spinal conditions. 18,19 Further, there is no responsible application of telehealth for managing spine surgical disease that does not include an in-person examination before a surgical intervention. Adoption of telehealth as a routine element of outpatient care does not invoke an all-or-nothing condition on physical examination. There are elements of gross neurological assessment that can be replicated virtually, such as gait assessment and single-leg heel rise. Further, the forced experience with telehealth that has arisen from the COVID-19 pandemic may serve to reinforce the fact that spine surgical diagnosis is a multifactorial phenomenon that is most influenced by history and imaging/testing, with physical examination having a greater impact on assessing severity of disease as opposed to presence. In the end, consideration of telehealth as a viable augment to routine clinical practice for the spine surgeon requires an empiric assessment of the ability for this medium to generate usable information that supports clinical decision-making and/or tracking outcomes. Faced with a void in the literature on the topic of virtual spine consultation and a real need to find alternative, effective means for communicating with and caring for our patients during the COVID-19 experience incidentally created an incubator for a natural experiment on the feasibility of telehealth for assessing spine disease. The purpose of this study was to assess the feasibility, patient and provider satisfaction, and perceived value of the rapid conversion from traditional in-office to telehealth visits for spine consultation during the COVID-19 pandemic. 
The life-altering and unprecedented experience associated with this pandemic has produced a call to arms for our medical community, and many have stepped up as heroes and disruptive innovators. Although most spine providers have not been directly involved in care of patients with COVID-19, many have applied this time away from elective surgery to explore telehealth as a new format for delivering care to their patients in need. Coming out of this experience we may better understand the feasibility and eventually best practices for incorporating telehealth as a care platform that can rival traditional in-office visitation for certain patients, conditions, or situations. Study Design This study was initiated as an institutional review board exempt quality-improvement project at 4 health care institutions within the first week of restrictions imposed on elective surgical care as a result of the COVID-19 pandemic. Three institutions were in the Midwest and one was a US military medical center on the east coast. Overall, 10 fellowship-trained surgeons including 8 orthopedic spine surgeons and 2 neurosurgeons performed telehealth visits. The data from each site were later deidentified and combined under an institutional review board-approved retrospective observational protocol. Data were retrospectively collected using a standardized data collection tool and surveys for patients undergoing telehealth visits with a spine surgeon between March 23, 2020, and April 17, 2020. Inclusion and Exclusion Criteria Patients were eligible if they underwent telehealth consultation with a participating spine surgeon via video or phone between March 23, 2020, and April 17, 2020. Outcomes For each telehealth visit, the following elements were recorded: demographic factors, patient satisfaction, surgeon satisfaction, technique-specific elements of the telehealth experience such as audio and video quality, provider confidence in diagnostic and therapeutic assessment, and patient perceived value. Both phone and video telehealth visits were included. Both new and established patient visits were included. Patient and provider satisfaction was assessed using a modified Agency for Healthcare Research and Quality telehealth questionnaire,20 which used a 5-point Likert response scale (Figures 1 and 2). A single question regarding the patient perceived value for a telehealth spine consultation visit in comparison to an in-office visit was recorded. The patient survey was conducted via telephone after the visit by a coordinator or resident/fellow. Each of the 13 patient satisfaction questions was reported as means, and top-box and top-2-box percentages were calculated. Further, the questions were grouped into 3 domains: technical, provider-specific, and patient experience. Overall satisfaction for each domain was calculated as the sum of item scores divided by the total possible score, multiplied by 100. This method accounted for visits in which a telephone-only format meant the question regarding video did not apply. Statistical Analysis Descriptive statistics were used to describe the baseline characteristics of the enrolled cohort of patients. Means were reported for all variables, and median and interquartile range (IQR) (25th to 75th percentile) were reported for the skewed (nonnormal) data obtained from the satisfaction survey. To compare patient satisfaction between subgroups, the Mann-Whitney U test and analysis of variance were used for 2-group and multiple-group comparisons, respectively, as the data were nonparametric.
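The domain scoring described above is simple arithmetic, but the handling of not-applicable items (for example, the video question on an audio-only visit) is easy to get wrong. The sketch below is a minimal, hypothetical illustration of that calculation and of the top-2-box convention; the 5-point scale and the exclusion of inapplicable items are taken from the text, while the class, method, and variable names are ours.

import java.util.List;

// Minimal sketch of the domain-score calculation described in the Methods:
// overall domain satisfaction = (sum of answered item scores / maximum possible score) * 100,
// where unanswered (not-applicable) items are excluded from both numerator and denominator.
public class DomainScore {

    // Likert responses are 1-5; a null entry marks a not-applicable question
    // (e.g., the video-quality item on an audio-only visit).
    static double domainSatisfaction(List<Integer> responses) {
        int sum = 0;
        int maxPossible = 0;
        for (Integer r : responses) {
            if (r == null) {
                continue; // skip items that did not apply to this visit
            }
            sum += r;
            maxPossible += 5; // top of the 5-point Likert scale
        }
        return maxPossible == 0 ? 0.0 : 100.0 * sum / maxPossible;
    }

    // Top-2-box: share of answered items rated 4 ("agree") or 5 ("strongly agree").
    static double top2Box(List<Integer> responses) {
        long answered = responses.stream().filter(r -> r != null).count();
        long top2 = responses.stream().filter(r -> r != null && r >= 4).count();
        return answered == 0 ? 0.0 : 100.0 * top2 / answered;
    }

    public static void main(String[] args) {
        // Hypothetical technical-domain responses for one audio-only visit (video item not applicable).
        List<Integer> technical = java.util.Arrays.asList(5, 4, null, 5);
        System.out.printf("domain satisfaction: %.0f%%%n", domainSatisfaction(technical)); // 93%
        System.out.printf("top-2-box: %.0f%%%n", top2Box(technical)); // 100%
    }
}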
Statistical analysis was performed using JASP (Amsterdam, the Netherlands). Subject Cohort A total of 143 unique patient visits were recorded. Of those, 15 were excluded because of incomplete or incorrectly completed patient surveys, resulting in a total of 128 unique visits available for analysis. The mean age was 55.1 (standard deviation, 14.9) years, and 69 (53.9%) were male. Video telehealth visits were used in 90 (70.3%) and audio-only in 38 (29.7%). The telehealth visits were conducted for new patients in 74 (58%), preoperative patients in 26 (20%), and postoperative patients in 28 (22%). The region of disease was cervical in 35 (27.3%) and thoracolumbar in 93 (72.7%). Feasibility Of the 128 visits with provider-reported data, 116 (91%) reported a successful connection on the first attempt and zero reported an unsuccessful connection resulting in cancellation of the visit. Of the remaining 12 visits with initial unsuccessful connections, 7 (58%) resulted in a delay of less than or equal to 15 minutes, 4 (33%) resulted in a delay of greater than 15 minutes, and 1 (8%) visit had to be converted to audio-only from video. Surgeons self-reported their level of confidence in diagnosis and treatment plan for each patient encounter. Of 128 visits, surgeons answered ">75% confident or as confident as if I had seen the patient in-office" 101 times (79%) when assessing their diagnosis and 107 times (84%) when assessing their treatment plan. For confidence in diagnosis, surgeons reported confidence at less than 75% for 27 visits (21%). Of those, 13 (48%) were attributed to the telehealth-specific format and 14 (52%) were attributed to need for additional information such as imaging or other evaluation, which may have been a similar feature of an in-office visit. For confidence in treatment plan, surgeons reported confidence at less than 75% for 21 visits (16%). Of those, 11 (52%) were attributed to the telehealth-specific format and 10 (48%) were attributed to the need for additional information such as imaging or other evaluation. Thus, the majority of visits occurred successfully from a technical perspective. For those visits that did not lead to a confident diagnosis and treatment plan, in approximately one-half of cases, this was related to insufficient information, which may have hindered an in-office examination to a similar degree. Mean patient satisfaction did not significantly vary based on prior use of telehealth or with surgeon experience over the time frame of this study. Patient satisfaction did not differ significantly between surgeons reporting prior experience in telehealth compared with surgeons reporting no prior experience (P 0.192). Further, a significant learning curve effect was not witnessed for surgeons unaccustomed to using a telehealth format. Patient satisfaction did not significantly change between the first 3 patients seen and the final 3 patients seen during the study period. Technical Domain. The 4 questions regarding technical satisfaction (questions 1 through 4) all had a mean score >4.5 out of possible 5 and median of 5 due to right skew of results. The mean and median overall technical result was 93%; 95% (IQR: 87%e100%) ( Table 1). The top-2-box analysis (strongly agree and agree) was all greater than 91% (91%e99%) for the 4 questions relating to the technical factor ( Figure 2). Provider-Specific Domain. Patients reported at least a mean of 4.3 or greater out of 5 for the 5 provider domain questions (questions 5 through 9). 
The overall mean score of the 5 provider-related questions was 92% and the median score was 96% (IQR: 84%-100%) (Table 1). Patient Experience Domain. The patient experience domain was based on 4 questions (questions 10 through 13). Patients agreed or strongly agreed (>79%) with 3 of the questions: 10, 12, and 13. The mean score for these 3 questions was 4.2 or greater out of 5. The question with the lowest top-box and top-2-box for the complete 13-question survey was question 11, a question within the patient experience domain. This question was phrased, "I liked seeing the provider this way as much as seeing him/her in person." A total of 58% of patients agreed or strongly agreed with this statement (Figure 2). This question was an outlier in terms of top-2-box responses, where all other questions ranged from 80% to 99% for top-2-box (Figure 2). Some patients who had overall excellent satisfaction scores commented during the interview that they simply preferred in-person contact with the provider, despite a productive telehealth experience. Provider Satisfaction The results of the provider satisfaction survey are detailed in Figure 3. In general, the satisfaction was high across the entire survey with 1 notable exception, question 7. The top-box and top-2-box responses for question 7 were selected 26% and 30% of the time, respectively (Figure 4). This question asked, "I would have preferred to see this patient in person, instead of via telehealth." A corollary was asked in the patient survey (question 11), and it was an outlier in that survey. However, the wording was substantively different, in that it did not include the word "instead." Thus, the question asked of the providers created a competition between telehealth and in-office examination, implying a need for determining a superiority of the two. As a result, this question has the furthest deviation from the norm of all questions asked in the provider survey. For purposes of summarizing results, it may be more accurate to report the inverse response to this question. A total of 41% of providers selected "Strongly Disagree" and 65% of providers selected "Disagree" or "Strongly Disagree" for this question. Demonstrating that this deviation was likely an artifact of the question, as opposed to a real concern with the concept of telehealth, a single summary provider survey question asked, "I would choose to perform this visit as a telehealth visit, after the COVID restrictions are lifted." Answer "Yes" was selected for 74% of visits and "No" for 26%. Similarly, patients were asked on the 5-point Likert scale to answer the question "Based on my experience, I would choose to use telehealth again" (question 13), and 80% selected "Agree" or "Strongly Agree." The Mann-Whitney U test was used to assess whether the type of visit affected provider confidence in diagnosis and/or treatment. Postoperative visits were compared with combined new and preoperative visits, and confidence was grouped as either >75% or <75%. No significant difference was noted in provider confidence between postoperative and new/preoperative visits for diagnosis (P > 0.005) or treatment (P > 0.005). Patient Perception of Value Patients were asked to assess their perception of the value of a telehealth visit, with the question "Compared to an in-office exam, how much do you think a telehealth visit should cost?"
Of 128 patients, 14 (11%) replied "Nothing," 32 (25%) replied "Much less than an in-office visit (less than 50%)," 53 (41%) reported "A little less than an in-office visit (50%-99%)," 24 (19%) reported "Same as an in-office visit," and 5 (4%) reported "More than an in-office visit" (Figure 4). DISCUSSION This is the first study to demonstrate excellent feasibility and high patient/provider satisfaction, as well as perceived value of virtual spine surgical consultation. The COVID-19 pandemic has stressed the global health care system in an unprecedented way. The only comparator to this cataclysmic event is war. Every war, whether the enemy is a military foe or a pathogenic microbe, presents immense opportunities to innovate and educate. The traumatic events are so painful that we owe it to ourselves and our progeny to learn every lesson we can. A lesson learned from the response to the ongoing pandemic is that telehealth is a meaningful platform for spine surgical care. The modifications in regulation and insurance approval, which have greatly deterred the use of this technology in the past, must be reassessed going forward. The empathetic connection that is fostered by in-home consultation must remain a protected opportunity to realize the maximum benefit from telehealth consultation. A limitation of telehealth is the lack of a hands-on examination, an area where further research is needed. Lastly, in regard to physical examination, there is room to look at this as another opportunity to innovate. The rapid assimilation of telehealth consultation across medical specialties was supported in part by the ubiquitous access to technology that supports internet real-time communication. It may be that additional readily available technology can be used to gain unique and better insight into the physical assessment of spinal function in our patients. Wearable technologies that can monitor gait (e.g., cycle, cadence, step length, and velocity) have been a source of great research interest recently. Traditional elements of physical examination that may or may not have a direct impact on diagnosis and prognosis may be replaced by data from wearable technology that does provide a direct impact. One overarching lesson learned from our COVID-19 pandemic experience is that health care needs to adapt and innovate to remain effective. Spine surgeons are certainly well aware of the need and value of exploring innovative technology in the operating room (i.e., implants, biologics, and imaging modalities), but changes in the basic process of clinical care delivery have not been a focus in the past. Exploration of and ultimate conversion to telehealth as a part of routine clinical practice for spine providers represents a new mandate to innovate. Although the social distancing required by the COVID-19 pandemic induced a nationwide need for alternate means of connecting with patients, in our internet-driven age, there existed other environmental factors, which also may benefit from the ability to access patients at a distance, for instance congested metropolitan areas or sparsely populated rural ones. The prior successful military experience with telehealth for many subspecialized fields serves as a good example of the relevance of this platform of care beyond the unique times in which we now find ourselves.
1,2,4 Before and even at the beginning of the COVID-19 pandemic, the concept of virtual spine consultation invoked many perceived concerns, especially for those who had previously not engaged in this platform of care. Specific to spine surgery, criticisms, and critiques have included difficulty with physical examination, poor confidence in diagnosis and/or treatment, technical connection issues, and poor patient satisfaction or perceived value. This study empirically assessed each of these concerns. The data collection tools were specifically created to explore the validity of the aforementioned concerns, because they were potent barriers to acceptance of telehealth in the early days of the COVID-19 pandemiceinduced restrictions on elective spine surgical care. The results of this study should provide a level of confidence and comfort to providers not familiar with virtual spine consultation. This study provides a cursory assessment of the general feasibility and impact of telehealth-mediated spine care. It demonstrates that patients and surgeons are overwhelmingly capable of effectively communicating in this format and that these visitations result in meaningful benefit to patients as evidenced by high patient satisfaction and provider confidence in the diagnosis and treatment plan. Validation of the accuracy of diagnosis obtained can and should be compared with the "gold standard" in-office clinical examination, and our group plans to perform this follow-on study. Further, we are actively assessing the practicality and utility of virtual physical examination for spine patients. In the end, another name for "perceived concern" is a myth, and the best way to myth-bust is to empirically assess. The results of this study should start to put to rest the myths mentioned above and allay concerns among providers nave to the concept of virtual consultation for spinal disease. As a means for assessing satisfaction, we applied a common convention of top-box and top-2-box assessment. Clinic satisfaction surveys often are skewed toward the positive end, and as such industry consulting (e.g., Press-Ganey) on this aspect of patient experience and customer experience, in general, dichotomously assess satisfaction based on reporting the highest score possible or the highest 2 ratings possible and considering all other scores failure. This convention was applied and reported in our results, and it seems appropriate for most of the patient and provider survey questions asked. However, 1 notable exception was question 11 and question 7 for the patient and provider surveys, respectively. In these questions, despite strongly favorable responses for the other questions, indicating an overall high satisfaction with the technical format and experience, patients and providers retained a relatively strong preference for in-person visitation. This was a new experience for patients and providers, alike. As telehealth becomes a more common platform of care, one would expect that this nostalgic conception will come in line with the other measured elements of the encounter. In testament to this, despite the reduced rate of top-box and top-2-box responses for questions 11 and 7, when patients and providers were asked directly if they would choose to use telehealth again, they responded almost 80% in the affirmative. This study is certainly subject to limitations. 
The largest limitation is that during the time period of the study, patients were not able to see the provider for an in-office visit due to mandated guidelines. When in-office visits and telehealth are equally available, patient perception of telehealth may be different. The methods of this study and the reality of the situation do not allow for validation against in-office examination. Further, the variation in technical platforms used, which were largely site-specific, and the organic support (i.e., pre-existing telehealth activities and telehealth on-site experts to participate in "rooming" and addressing patient technical concerns) available are covariants that cannot be independently assessed by our study. Thus, this work is preliminary. It does not establish best practice, but does report several methods, each of which was associated with a high rate of patient/provider satisfaction. Comments from patients, which do not translate to quantifiable metrics for statistical analysis, highlighted that specific elements of the encounter are favored by patients. One specific one is the ability to "share screens" and demonstrate in real-time key imaging findings. Despite this limitation, most patients and providers reported that they would use telehealth again in the future. CONCLUSIONS This work demonstrates that patients new and established to a spine surgeon can adequately be assessed and provided highquality medical information that supports a definitive diagnosis and treatment plan. As a testament to the feasibility and value of virtual spine consultation, patients who had no other access to spine surgical care during the early days of the COVID-19 pandemic other than going to already overburdened and potentially dangerous emergency rooms have been seen within the context of this quality assurance project and already progressed to successful intervention (e.g., injections and surgery) to address their semiurgent spine needs. Spinal disease is a biopsychosocial phenomenon, and telehealth provided a vehicle for empathetic connection capable of allaying patient concerns during these unprecedented times. Patients very much appreciated the opportunity to have some connection with a care network at this time. Based on the results of this study, myths that have tempered adoption of telehealth as a part of the routine process of outpatient care have been dispelled. However, at the same time, if virtual spine consultation is to persist as a standard care platform, it is important that insurers and regulators not reinstate restrictions that quench the newfound interest in telehealth.
Predictive Value of the CHA2DS2-VASc Score for Mortality in Hospitalized Acute Coronary Syndrome Patients With Chronic Kidney Disease Background Chronic kidney disease (CKD) patients have a high prevalence of coronary artery disease and a high risk of cardiovascular events. The present study assessed the value of the CHA2DS2-VASc score for predicting mortality among hospitalized acute coronary syndrome (ACS) patients with CKD. Methods This was a retrospective cohort study that included CKD patients who were hospitalized for ACS from January 2015 to May 2020. The CHA2DS2-VASc score for each eligible patient was determined. Patients were stratified into two groups according to CHA2DS2-VASc score: <6 (low) and ≥6 (high). The primary endpoint was all-cause mortality. Results A total of 313 eligible patients were included in the study, with a mean CHA2DS2-VASc score of 4.55 ± 1.68. A total of 220 and 93 patients were assigned to the low and high CHA2DS2-VASc score groups, respectively. The most common reason for hospitalization was unstable angina (39.3%), followed by non-ST-elevation myocardial infarction (35.8%) and ST-elevation myocardial infarction (24.9%). A total of 67.7% of the patients (212/313) received coronary reperfusion therapy during hospitalization. The median follow-up time was 23.0 months (interquartile range: 12-38 months). A total of 94 patients (30.0%) died during follow-up. The high score group had a higher mortality rate than the low score group (46.2% vs. 23.2%, respectively; p < 0.001). The cumulative incidence of all-cause death was higher in the high score group than in the low score group (Log-rank test, p < 0.001). Multivariate Cox regression analysis indicated that CHA2DS2-VASc scores were positively associated with all-cause mortality (hazard ratio: 2.02, 95% confidence interval: 1.26-3.27, p < 0.001). Conclusion The CHA2DS2-VASc score is an independent predictive factor for all-cause mortality in CKD patients who are hospitalized with ACS. This simple and practical scoring system may be useful for the early identification of patients with a high risk of death. INTRODUCTION Chronic kidney disease (CKD) is an important contributor to morbidity and mortality from non-communicable diseases and has become a considerable public health issue. Patients with CKD have a high prevalence of coronary artery disease, and many of these patients die from cardiovascular disease, especially those with acute coronary syndrome (ACS). The early identification of high-risk ACS patients is important for assessing prognosis and guiding treatment. Current international guidelines recommend Global Registry of Acute Coronary Events (GRACE) scores to predict the cumulative risk of death and myocardial infarction. However, derivations of GRACE scores are based on unselected and generalizable patients, and the calculation of GRACE scores is relatively complicated, which may limit its application in CKD patients, especially those with end-stage renal disease. The CHA2DS2-VASc score is used to assess the combination of congestive heart failure, hypertension, diabetes, prior stroke, vascular disease, and age. It is an easily calculated scoring system that can assess the risk of stroke in patients with atrial fibrillation. All of these risk factors have been proven to be associated with cardiovascular prognosis. Recent studies also used CHA2DS2-VASc scores to predict poor prognosis in patients with cardiovascular disease, regardless of atrial fibrillation.
The risk factors that are included in this scoring system are also common in CKD patients with coronary artery disease. The objective of the present study was to evaluate the predictive value of CHA2DS2-VASc scores in hospitalized ACS patients with CKD. Study Design and Population This was a retrospective cohort study that included CKD patients who were hospitalized for ACS from January 2015 to May 2020. We consecutively enrolled patients in the Cardiology Department, China-Japan Friendship Hospital. Cases were identified using International Classification of Diseases-Clinical Modification code 9. All enrolled patients were confirmed to have at least one major coronary artery with more than 50% stenosis, determined by coronary angiography. Data on demographics, medical history, and laboratory tests were abstracted from electronic medical records. The glomerular filtration rate was estimated according to serum creatinine and the Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) equation. Chronic kidney disease was defined by an estimated glomerular filtration rate <60 ml/min/1.73 m2, including dialysis. Coronary reperfusion therapy included percutaneous transluminal coronary angioplasty (PTCA) ± stenting, PTCA alone, or coronary artery bypass grafting. The study conformed to the Declaration of Helsinki and was approved by the Research Ethical Review Committee of China-Japan Friendship Hospital (2020-112-K71). CHA2DS2-VASc Score For each patient, the CHA2DS2-VASc score was calculated at admission based on the following scoring system: 1 point each for congestive heart failure, hypertension, 65-74 years of age, diabetes mellitus, vascular disease, and female sex, and 2 points each for ≥75 years of age and prior stroke or transient ischemic attack. We performed a receiver operating characteristic analysis that showed that the best cut-off value of the CHA2DS2-VASc score to predict mortality was ≥6 with 45.7% sensitivity and 77.2% specificity. Therefore, the CHA2DS2-VASc score was classified as <6 and ≥6. The patients were not further divided into more than these two groups because of the relatively small sample size. Follow-Up and Outcome The primary outcome of the study was all-cause mortality, which was the rate of death from any cause from the date of admission until the occurrence of endpoint events or until the latest follow-up date (June 1-July 1, 2021). Clinical events were ascertained by longitudinally tracking patients' medical records or through telephone interviews. Statistical Analysis Continuous variables are expressed as the mean ± standard deviation or median and interquartile range and compared using t-tests or the Mann-Whitney U-test when appropriate. Categorical variables are expressed as frequencies and percentages and were compared using the chi-square test or Fisher's exact test. Univariate and multivariate Cox regression analyses were performed to determine risk factors for all-cause death, and the hazard ratio (HR) and 95% CI were calculated. Variables with values of p < 0.10 in the univariate analysis were included in the multivariate analysis. Time-dependent survival between groups was evaluated using Kaplan-Meier curves and the Log-rank test. Stratified analyses were performed using the following variables: age (≥65 vs. <65 years), sex, hyperlipidemia, diabetes, prior myocardial infarction, hemodialysis, main diagnosis, left ventricular ejection fraction (≥50 vs. <50%), and reperfusion therapy. Multiplicative interactions were calculated in each subgroup.
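Because every component of the score is a routine chart variable, the calculation itself is easy to automate. The following is a minimal sketch of the scoring rule as described above (1 point per risk factor, 2 points for age ≥75 years and for prior stroke/TIA); the class and field names are illustrative and are not taken from the study.

// Minimal sketch of the CHA2DS2-VASc calculation described above.
// Field and method names are illustrative only.
public class Cha2ds2VascCalculator {

    public static int score(int age,
                            boolean female,
                            boolean congestiveHeartFailure,
                            boolean hypertension,
                            boolean diabetes,
                            boolean priorStrokeOrTia,
                            boolean vascularDisease) {
        int score = 0;
        if (congestiveHeartFailure) score += 1;
        if (hypertension)           score += 1;
        if (diabetes)               score += 1;
        if (vascularDisease)        score += 1;
        if (female)                 score += 1;
        if (priorStrokeOrTia)       score += 2;
        if (age >= 75)              score += 2;
        else if (age >= 65)         score += 1;
        return score;
    }

    public static void main(String[] args) {
        // A 78-year-old woman with hypertension, diabetes, and a prior stroke:
        // 2 (age) + 1 (sex) + 1 (HTN) + 1 (DM) + 2 (stroke) = 7, i.e., the high (>=6) group in this study.
        int s = score(78, true, false, true, true, true, false);
        System.out.println("CHA2DS2-VASc = " + s + (s >= 6 ? " (high-risk group)" : " (low-risk group)"));
    }
}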
All statistical analyses were performed using SPSS 27.0 software (IBM Corp., Armonk, NY, USA). Two-tailed values of p < 0.05 were considered statistically significant. RESULTS A total of 313 eligible patients were recruited in the study. Baseline characteristics are presented in Table 1. Among these patients, the mean CHA 2 DS 2 -VASC score was 4.55 ± 1.68. A total of 220 patients (70.3%) had a low CHA 2 DS 2 -VASc score (<6 points), and 93 (29.7%) had a high CHA 2 DS 2 -VASc score (≥6 points). The high CHA 2 DS 2 -VASC score group included patients who were older and had a higher prevalence of comorbidities, including diabetes mellitus, heart failure, and cerebrovascular disease. Patients who were diagnosed with non-ST-elevation myocardial infarction (35.8%) and unstable angina pectoris (39.3%) were more common than patients who were diagnosed with ST-elevation myocardial infarction (24.9%). Among the 313 patients, 67.7% received coronary reperfusion therapy, including PTCA ± stenting (n = 187), PTCA (n = 15), and coronary artery bypass grafting (n = 10). Accordingly, in-hospital treatment was comparable between the two groups. The median follow-up time was 23.0 months (interquartile range: 12-38 months). During the follow-up period, a total of 94 patients (30.0%) died. High CHA 2 DS 2 -VASC scores were associated with a higher risk of mortality (46.2 vs. 23.2%, p < 0.001). Kaplan-Meier curves for patients who were stratified by CHA 2 DS 2 -VASC scores are presented in Figure 1. The cumulative incidence of all-cause mortality (Log-rank test, p < 0.001) was higher in the high CHA 2 DS 2 -VASC score group than in the low CHA 2 DS 2 -VASC score group. We performed Cox univariate and multivariate analyses using the low CHA 2 DS 2 -VASc score group as the reference group. The HR for all-cause mortality was 2.49 (95% CI: 1.66-3.74, p < 0.001). After adjusting for hypertension, diabetes, prior myocardial infarction, and CKD stage, the HR of all-cause mortality was 2.029 (95% CI: 1.33-3.10, p = 0.001). The HR of all-cause mortality was largely unchanged after adding all other variables with p < 0.10 in the univariate analysis (HR: 2.027, 95% CI: 1.26-3.27, p < 0.001). The univariate analysis of factors that were related to all-cause mortality is presented in Table 2. The multivariate analyses between the CHA 2 DS 2 -VASc score group and outcomes are shown in Table 3. A significant between-group difference in outcome was found in the subgroup analyses of sex . A similar result was found for death in the subgroup analyses of hemodialysis. No significant interactions were found between the other subgroups and CHA 2 DS 2 -VASC scores for the prediction of all-cause mortality. The results of the subgroup analyses are shown in Figure 2. DISCUSSION The present study found that CHA 2 DS 2 -VASC scores were associated with worse clinical outcome in CKD patients with ACS. High scores (≥6) were an independent predictor of allcause mortality and may be useful for risk stratification. The subgroup analyses indicated that high scores were a slightly better predictor of all-cause mortality in men than in women and in patients who did not undergo hemodialysis. Compared with patients with low scores, patients with high scores were more often older and women and had a higher prevalence of comorbidities. Additionally, patients with high scores were less likely to receive reperfusion therapy in clinical practice. Acute coronary syndrome is a common critical cardiovascular disease and primary focus of cardiologists. 
Benefiting from the application of stents, the mortality rate of ACS has gradually decreased over the past decade. Patients with coronary disease and CKD, especially end-stage kidney disease, have a very high risk of cardiovascular events. The high rate of all-cause mortality in the present study aligns with the highrisk feature of these patients in previous studies. Despite having worse outcomes after a cardiovascular event, patients with CKD are often excluded from the majority of ACS or heart failure cardiovascular outcome trials. The reasons for this are likely multifactorial, such as the potential for diminished effects of medical treatment and coronary intervention in trials, complex pathophysiological mechanisms that contribute to cardiovascular disease, safety concerns, and trial recruitment difficulties. Therefore, clinical evidence from the general population may not be suitable for this specific patient population. The Framingham risk score is the most well-validated coronary artery disease risk prediction tool, but it has been shown to have poor overall accuracy in predicting cardiac events in individuals with CKD. Data from GRACE indicated that the GRACE risk score underestimates the risk of major events in end-stage kidney disease patients who undergo dialysis. Moreover, the inclusion of multiple types of variables and relatively complex calculation significantly limit clinical utility of the GRACE risk score. The CHA 2 DS 2 -VASC score is a validated and extensively used score to estimate thromboembolic risk in patients with atrial fibrillation, consisting of several cardiovascular risk factors. Among these factors, old age, hypertension, diabetes, and heart failure have been proven to influence the prognosis of cardiovascular disease (4,5,. Prior stroke is also associated with a high risk of major adverse cardiovascular and cerebrovascular events. Sex differences in the epidemiology, manifestation, pathophysiology, and outcome of cardiovascular disease have been observed in previous studies. Therefore, all components of the CHA 2 DS 2 -VASc score have a close association with the prognosis of cardiovascular disease. Tufan Cinar et al. evaluated 267 patients with mechanical mitral valve thrombosis and found that a CHA 2 DS 2 -VASc score ≥ 2.5 was associated with a higher risk of prosthetic valve thrombosis. Several recent studies evaluated the predictive value of the CHA 2 DS 2 -VASc score for clinical outcome. A large real-world cohort study reported that CHA 2 DS 2 -VASc scores were significantly associated with mortality in heart failure patients. Hsu et al. reported the predictive value of CHA 2 DS 2 -VASc scores for all-cause mortality and cardiovascular mortality in CKD patients without ACS. A similar study found that CHA 2 DS 2 -VASc scores were strongly associated with 1-year mortality and cardiovascular risk in hemodialysis patients. Studies that investigated patients with ST-elevation myocardial infarction showed that CHA 2 DS 2 -VASc scores were an independent predictor of no-reflow and an independent predictor of inhospital and long-term mortality in patients who underwent primary percutaneous coronary intervention. Although the association between CHA 2 DS 2 -VASc score and clinical outcome in ACS patients without CKD or CKD patients without ACS have been estimated, the value of these scores in ACS patients with CKD is unclear. 
In the present study, we found a significant association between CHA 2 DS 2 -VASc scores and allcause mortality in ACS patients with CKD, which may be useful for the risk stratification of these patients. The mean CHA 2 DS 2 -VASc score in the present study was significantly higher than in patients without CKD in a previous study, which may help explain the high mortality in ACS patients with CKD. Variables that are included in the CHA 2 DS 2 -VASC score can be readily found in patients' medical histories. Furthermore, CHA 2 DS 2 -VASC scores may be useful for quickly identifying very high-risk ACS patients with CKD. The present study has limitations. This was a single-center, retrospective study. We were unable to control the variables that were included in the analyses given the study's observational design. In addition to traditional cardiovascular risk factors (e.g., diabetes and hypertension), non-traditional CKD-related CVD risk factors (e.g., mineral and bone disease abnormalities, vascular calcification, inflammation, and oxidative stress) may also play an important role in the prognosis of cardiovascular disease. However, we focused on the prognostic value of the CHA 2 DS 2 -VASc scoring system in ACS patients with CKD, based on variables that were readily obtained from the patients' medical records. Another limitation was that the sample size was not sufficiently large to evaluate prognostic value in dialysis and nondialysis populations separately. Future studies should integrate CHA 2 DS 2 -VASc scores with non-traditional CKD-related CVD risk factors and develop and validate novel CVD risk prediction scores for the CKD population and dialysis population. In conclusion, CHA 2 DS 2 -VASc scores were an independent predictive factor for mortality in ACS patients with CKD. The CHA 2 DS 2 -VASc scoring system is a simple and practical method for identifying very high-risk ACS patients among the CKD population. Further studies are needed to evaluate whether CHA 2 DS 2 -VASc scoring can improve the management and outcome of this high-risk population. DATA AVAILABILITY STATEMENT The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation. ETHICS STATEMENT The studies involving human participants were reviewed and approved by the Research Ethical Review Committee of China-Japan Friendship Hospital. Written informed consent for participation was not required for this study in accordance with the national legislation and the institutional requirements.
use kube::{client::APIClient, config::Configuration}; use crate::schematic::component::Component; use crate::workload_type::*; use std::collections::BTreeMap; #[test] fn test_replicated_service_kube_name() { let cli = APIClient::new(mock_kube_config()); let rs = ReplicatedService { name: "de".into(), component_name: "hydrate".into(), instance_name: "dehydrate".into(), namespace: "tests".into(), definition: Component { ..Default::default() }, params: BTreeMap::new(), client: cli, owner_ref: None, }; assert_eq!("dehydrate", rs.kube_name().as_str()); } /// This mock builds a KubeConfig that will not be able to make any requests. fn mock_kube_config() -> Configuration { Configuration { base_path: ".".into(), client: reqwest::Client::new(), } }
package test.tmp; import org.testng.annotations.Factory; public class B { @Factory public Object[] f() { return new Object[] { new A(), new A() }; } }
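The factory above returns instances of a test class A that is not shown here. A minimal, hypothetical version of such a class (purely for illustration; not the original A) could look like this, with TestNG running the test methods of each instance the factory produces:

package test.tmp;

import org.testng.annotations.Test;

// Hypothetical companion class for the factory above; the original A is not shown in the snippet.
public class A {

    @Test
    public void verify() {
        // Each A instance returned by B#f() runs this test method independently.
        System.out.println("Running test on instance " + this);
    }
}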
/* Programming Challenge Description: Take the following example: Portfolio Ticker Name Quantity VOD Vodafone 10 GOOG Google 15 MSFT Microsoft 12 Benchmark Ticker Name Quantity VOD Vodafone 6 GOOG Google 10 MSFT Microsoft 25 Passive portfolio management is act of trying to make our portfolio "look like" the benchmark. In the example above, we currently hold 10 shares of Vodafone but there are 16 shares in the benchmark, so we need to buy 6 shares of Vodafone in order to match it. The Problem: You will receive a string in the following format Portfolio:Benchmark where Portfolio is a string in the format of question 1 and 2 and Benchmark is also a string in the same format. Calculate the transactions you need to make for your portfolio from question 2 to match the benchmark. Build a string in the following format for each transaction [transaction type, ticker, quantity] In the example above, my string would say [SELL, GOOG, 5.00], [BUY, MSFT, 13.00], [BUY, VOD, 6.00] Order alphabetically by ticker. Recommendation Create an object to hold the transactions as it will be used in further problems Quantities must be formatted to 2 decimal places All numbers are positive Remember to copy your code for the following question. Input: The portfolio and benchmark is read from standard input. You will receive a string in the following format Portfolio:Benchmark. The Portfolio will be represented in the following format: ticker, name, quantity and each holding is separated by the '@' symbol: VOD,Vodafone,10@GOOG,Google,15@MSFT,Microsoft,12 Benchmark is also a string in the same format. Output: Print the transaction list to standard out. Build a string in the following format for each transaction and order each transaction alphabetically by ticker. [transaction type, ticker, quantity], [transaction type, ticker, quantity] Test 1 Test Input Download Test InputBLK,BlackRock,65@JPM,JPMorgan,78@BK,Bank of New York Mellon,13@WFC,Wells Fargo & Co,25:BLK,BlackRock,52@JPM,JPMorgan,19@BK,Bank of New York Mellon,64@WFC,Wells Fargo & Co,125 Expected Output Download Test Output[BUY, BK, 51.00], [SELL, BLK, 13.00], [SELL, JPM, 59.00], [BUY, WFC, 100.00] Test 2 Test Input Download Test InputVOD,Vodafone,10@GOOG,Google,15@MSFT,Microsoft,12:VOD,Vodafone,16@GOOG,Google,10@MSFT,Microsoft,25 Expected Output Download Test Output[SELL, GOOG, 5.00], [BUY, MSFT, 13.00], [BUY, VOD, 6.00] */ import java.io.*; import java.util.*; import java.text.*; import java.math.*; import java.util.regex.*; class Portfolio{ String ticker; String name; String quantity; public Portfolio(String ticker, String name, String quantity) { this.ticker=ticker; this.name=name; this.quantity=quantity; } String get_info() { return ticker+", "+name+", "+quantity; } double get_quantity() { double quant=Double.parseDouble(quantity); return quant; } } class Benchmark{ String ticker; String name; String quantity; public Benchmark(String ticker, String name, String quantity) { this.ticker=ticker; this.name=name; this.quantity=quantity; } String get_info() { return ticker+", "+name+", "+quantity; } double get_quantity() { double quant=Double.parseDouble(quantity); return quant; } } class Transaction{ String type; String ticker; String quantity; public Transaction(String type, String ticker, String quantity) { this.type=type; this.ticker=ticker; this.quantity=quantity; } String get_info() { return type+", "+ticker+", "+quantity; } } public class Main { public static String SEPARATOR = "@"; public static final String COLON = ":"; /* * Complete the function below. 
* * Note: The questions in this test build upon each other. We recommend you * copy your solutions to your text editor of choice before proceeding to * the next question as you will not be able to revisit previous questions. */ static String generateTransactions(String input) { String[] div=input.split(COLON,2); String div1=div[0]; String div2=div[1]; String[] arr1=div1.split(SEPARATOR); String[] arr2=div2.split(SEPARATOR); //splitting the string w.r.t @ Arrays.sort(arr1); Arrays.sort(arr2); //Sorting the array according to teaser String result=""; Portfolio[] portfolios=new Portfolio[arr1.length]; Benchmark[] benchmarks=new Benchmark[arr2.length]; Transaction[] transactions=new Transaction[portfolios.length]; for(int i=0;i<arr1.length;i++) { String[] temp=arr1[i].split(","); int j=0; portfolios[i]=new Portfolio(temp[j++],temp[j++],temp[j++]); } for(int i=0;i<arr2.length;i++) { String[] temp=arr2[i].split(","); int j=0; benchmarks[i]=new Benchmark(temp[j++],temp[j++],temp[j++]); } int count=0; //Assumption is that there is no exclusive ticker in portfolio or benchmark for(int i=0;i<portfolios.length;i++) { if(portfolios[i].ticker.equals(benchmarks[i].ticker)) { double pfquant=portfolios[i].get_quantity(); double bmquant=benchmarks[i].get_quantity(); double quant=Math.abs(pfquant-bmquant); String quantity=String.format("%.2f",quant); if(pfquant>bmquant) { transactions[count++]=new Transaction("SELL",portfolios[i].ticker,quantity); } else { transactions[count++]=new Transaction("BUY",portfolios[i].ticker,quantity); } } } for(Transaction tn : transactions) { result=result+"["+tn.get_info()+"], "; } result=result.substring(0,result.length()-2); //dropping the last comma in the string return result; } public static void main(String[] args) throws IOException{ Scanner in = new Scanner(System.in); String res; String _input; try { _input = in.nextLine(); } catch (Exception e) { _input = null; } res = generateTransactions(_input); System.out.println(res); } }
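As the comment in the code notes, this solution assumes the sorted portfolio and benchmark arrays line up ticker-for-ticker. Any index where the tickers differ leaves a null slot that the printing loop would dereference, a ticker whose quantities already match still emits a spurious [BUY, ..., 0.00] entry, and an empty result would make the trailing substring call fail. A hedged alternative is to key holdings by ticker in a TreeMap, which removes the ordering assumption and skips zero-difference tickers; the sketch below is illustrative only and is not the challenge's reference solution.

import java.util.Map;
import java.util.TreeMap;

// Illustrative alternative to generateTransactions(): key quantities by ticker so the
// portfolio and benchmark do not have to sort into the same order, and skip tickers
// whose quantities already match. Tickers present only in the portfolio (absent from
// the benchmark) would still need an additional pass to SELL them entirely.
class MapBasedTransactions {

    static Map<String, Double> parse(String holdings) {
        Map<String, Double> byTicker = new TreeMap<>(); // TreeMap keeps tickers alphabetical
        for (String holding : holdings.split("@")) {
            String[] parts = holding.split(",");
            byTicker.put(parts[0], Double.parseDouble(parts[2]));
        }
        return byTicker;
    }

    static String generateTransactions(String input) {
        String[] halves = input.split(":", 2);
        Map<String, Double> portfolio = parse(halves[0]);
        Map<String, Double> benchmark = parse(halves[1]);

        StringBuilder result = new StringBuilder();
        for (Map.Entry<String, Double> target : benchmark.entrySet()) {
            double held = portfolio.getOrDefault(target.getKey(), 0.0);
            double diff = target.getValue() - held;
            if (diff == 0) {
                continue; // already matching the benchmark, no transaction needed
            }
            if (result.length() > 0) {
                result.append(", ");
            }
            result.append(String.format("[%s, %s, %.2f]",
                    diff > 0 ? "BUY" : "SELL", target.getKey(), Math.abs(diff)));
        }
        return result.toString();
    }

    public static void main(String[] args) {
        String input = "VOD,Vodafone,10@GOOG,Google,15@MSFT,Microsoft,12:"
                + "VOD,Vodafone,16@GOOG,Google,10@MSFT,Microsoft,25";
        // Prints: [SELL, GOOG, 5.00], [BUY, MSFT, 13.00], [BUY, VOD, 6.00]
        System.out.println(generateTransactions(input));
    }
}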
import { isNode } from 'detect-node-es'; export const isBackend = isNode || typeof window === 'undefined';
package seedu.address.model;

/**
 * Represents a memory model of a postal code.
 */
public class PostalData {
    private final String postal;
    private final double x;
    private final double y;

    public PostalData(String postal, double x, double y) {
        this.postal = postal;
        this.x = x;
        this.y = y;
    }

    public String getPostal() {
        return this.postal;
    }

    public double getX() {
        return this.x;
    }

    public double getY() {
        return this.y;
    }

    @Override
    public String toString() {
        return "postal:" + postal + " X:" + x + " Y:" + y;
    }

    @Override
    public boolean equals(Object other) {
        if (other == this) {
            return true;
        }
        if (!(other instanceof PostalData)) {
            return false;
        }
        PostalData otherPostalData = (PostalData) other;
        return otherPostalData.getPostal().equals(getPostal())
                && otherPostalData.getX() == getX()
                && otherPostalData.getY() == getY();
    }
}
# --------------------------------------------------------
# General settings
# --------------------------------------------------------
MODULE = "archiver"
# X-Road instances in Estonia: ee-dev, ee-test, EE
INSTANCE = "sample"
APPDIR = '/opt/archive'

# --------------------------------------------------------
# MongoDB settings
# --------------------------------------------------------
MONGODB_SERVER = "#NA"
MONGODB_PORT = "27017"
MONGODB_USER = '{0}_{1}'.format(MODULE, INSTANCE)
MONGODB_PWD = "<PASSWORD>"
# MONGODB_QUERY_DB = "query_db_sample"
MONGODB_QUERY_DB = 'query_db_{0}'.format(INSTANCE)
# MONGODB_AUTH_DB = "auth_db"
# or MONGODB_AUTH_DB = "admin"
MONGODB_AUTH_DB = "auth_db"

# --------------------------------------------------------
# Module settings
# --------------------------------------------------------
# Amount of operational monitoring clean_data logs to be archived (in days).
X_DAYS_AGO = 180  # in days
# Minimum queries to be archived, default
MINIMUM_TO_ARCHIVE = 100000
# Total queries to be archived, default
TOTAL_TO_ARCHIVE = 150000
# Raw messages archive directory
# RAW_MESSAGES_ARCHIVE_DIR = "/srv/archive/sample"
RAW_MESSAGES_ARCHIVE_DIR = '/srv/archive/{0}'.format(INSTANCE)
# Clean data archive directory
# CLEAN_DATA_ARCHIVE_DIR = "/srv/archive/sample"
CLEAN_DATA_ARCHIVE_DIR = '/srv/archive/{0}'.format(INSTANCE)

# --------------------------------------------------------
# Configure logger
# --------------------------------------------------------
# Ensure match with external logrotate settings
# LOGGER_NAME = "archiver"
LOGGER_NAME = '{0}'.format(MODULE)
# LOGGER_PATH = "/opt/archive/sample/logs"
LOGGER_PATH = '{0}/{1}/logs'.format(APPDIR, INSTANCE)
# LOGGER_FILE = "log_archiver_sample.json"
LOGGER_FILE = 'log_{0}_{1}.json'.format(MODULE, INSTANCE)
# LOGGER_LEVEL = logging.DEBUG  # Deprecated

# --------------------------------------------------------
# Configure heartbeat
# --------------------------------------------------------
# Ensure match with external application monitoring settings
# HEARTBEAT_PATH = "/opt/archive/archiver/sample/heartbeat"
HEARTBEAT_PATH = '{0}/{1}/heartbeat'.format(APPDIR, INSTANCE)
# HEARTBEAT_FILE = "heartbeat_archiver_sample.json"
HEARTBEAT_FILE = 'heartbeat_{0}_{1}.json'.format(MODULE, INSTANCE)

# --------------------------------------------------------
# End of settings
# --------------------------------------------------------
package problems.easy; import java.util.HashSet; import java.util.Set; /** * Problem: https://leetcode.com/problems/happy-number/ * Time Complexity: * Space Complexity: */ class Solution202 { //from solutions with Set public boolean isHappy(int n) { Set<Integer> set = new HashSet<>(); while (n != 1 && !set.contains(n)) { set.add(n); n = String.valueOf(n).chars() .mapToObj(c -> String.valueOf((char) c)) .map(Integer::parseInt) .map(t -> t * t) .mapToInt(Integer::intValue) .sum(); } return n == 1; } public boolean isHappy2(int n) { int[] count = new int[1000]; int k = 0; while (k < 1000) { if (n == 1) { return true; } if (n <= 1000) { if (count[n] > 0) { return false; } count[n]++; } k++; n = String.valueOf(n).chars() .mapToObj(c -> String.valueOf((char) c)) .map(Integer::parseInt) .map(t -> t * t) .mapToInt(Integer::intValue) .sum(); } return false; } }
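Both methods above detect the eventual cycle by remembering previously seen values (a HashSet in the first, a bounded counting array in the second). A common constant-space alternative, sketched below as a variant rather than a replacement, applies Floyd's tortoise-and-hare cycle detection to the digit-square sequence:

// Variant of isHappy() using Floyd's cycle detection: O(1) extra space,
// since unhappy numbers always fall into a repeating cycle that excludes 1.
class Solution202Floyd {

    private static int squareDigitSum(int n) {
        int sum = 0;
        while (n > 0) {
            int digit = n % 10;
            sum += digit * digit;
            n /= 10;
        }
        return sum;
    }

    public boolean isHappy(int n) {
        int slow = n;
        int fast = squareDigitSum(n);
        // The sequence either reaches the fixed point 1 or enters a cycle;
        // slow and fast must meet inside that cycle.
        while (fast != 1 && slow != fast) {
            slow = squareDigitSum(slow);
            fast = squareDigitSum(squareDigitSum(fast));
        }
        return fast == 1;
    }
}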
An Enhanced Image Reconstruction Tool for Computed Tomography on GPUs The algebraic reconstruction technique (ART) is an iterative algorithm for CT (i.e., computed tomography) image reconstruction that delivers better image quality with less radiation dosage than the industry-standard filtered back projection (FBP). However, the high computational cost of ART requires researchers to turn to high-performance computing to accelerate the algorithm. Alas, existing approaches for ART suffer from inefficient design of compressed data structures and computational kernels on GPUs. Thus, this paper presents our enhanced CUDA-based CT image reconstruction tool based on the algebraic reconstruction technique (ART) or cuART. It delivers a compression and parallelization solution for ART-based image reconstruction on GPUs. We address the under-performing, but popular, GPU libraries, e.g., cuSPARSE, BRC, and CSR5, on the ART algorithm and propose a symmetry-based CSR format (SCSR) to further compress the CSR data structure and optimize data access for both SpMV and SpMV_T via a column-indices permutation. We also propose sorting-based and sorting-free blocking techniques to optimize the kernel computation by leveraging the sparsity patterns of the system matrix. The end result is that cuART can reduce the memory footprint significantly and enable practical CT datasets to fit into a single GPU. The experimental results on a NVIDIA Tesla K80 GPU illustrate that our approach can achieve up to 6.8x, 7.2x, and 5.4x speedups over counterparts that use cuSPARSE, BRC, and CSR5, respectively.
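For readers unfamiliar with the baseline data structure the abstract refers to, the sketch below shows a plain, single-threaded compressed sparse row SpMV. It is only meant to make the CSR terminology concrete; it is not cuART's SCSR format or its GPU kernels, and the array and method names are ours.

// Plain CSR sparse matrix-vector multiply (y = A * x), shown only to make the
// CSR terminology in the abstract concrete; SCSR and the GPU kernels described
// above are optimizations on top of this baseline idea.
class CsrSpmv {

    // CSR stores a sparse matrix as three arrays:
    //   rowPtr[i]..rowPtr[i+1]-1  -> index range of row i's nonzeros
    //   colIdx[k]                 -> column of the k-th nonzero
    //   values[k]                 -> value of the k-th nonzero
    static double[] multiply(int[] rowPtr, int[] colIdx, double[] values, double[] x) {
        int rows = rowPtr.length - 1;
        double[] y = new double[rows];
        for (int i = 0; i < rows; i++) {
            double sum = 0.0;
            for (int k = rowPtr[i]; k < rowPtr[i + 1]; k++) {
                sum += values[k] * x[colIdx[k]];
            }
            y[i] = sum;
        }
        return y;
    }

    public static void main(String[] args) {
        // 3x3 example: [[10, 0, 0], [0, 20, 30], [0, 0, 40]]
        int[] rowPtr = {0, 1, 3, 4};
        int[] colIdx = {0, 1, 2, 2};
        double[] values = {10, 20, 30, 40};
        double[] y = multiply(rowPtr, colIdx, values, new double[]{1, 2, 3});
        System.out.println(java.util.Arrays.toString(y)); // [10.0, 130.0, 120.0]
    }
}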
/** * Default translator that converts exceptions into {@link OAuth2Exception}s. The output matches the OAuth 2.0 * specification in terms of error response format and HTTP status code. * * <p> * * @author Dave Syer * */ public class DefaultWebResponseExceptionTranslator implements WebResponseExceptionTranslator<OAuth2Exception> { private ThrowableAnalyzer throwableAnalyzer = new DefaultThrowableAnalyzer(); @Override public ResponseEntity<OAuth2Exception> translate(Exception e) throws Exception { // Try to extract a SpringSecurityException from the stacktrace Throwable[] causeChain = throwableAnalyzer.determineCauseChain(e); Exception ase = (OAuth2Exception) throwableAnalyzer.getFirstThrowableOfType(OAuth2Exception.class, causeChain); if (ase != null) { return handleOAuth2Exception((OAuth2Exception) ase); } ase = (AuthenticationException) throwableAnalyzer.getFirstThrowableOfType(AuthenticationException.class, causeChain); if (ase != null) { return handleOAuth2Exception(new UnauthorizedException(e.getMessage(), e)); } ase = (AccessDeniedException) throwableAnalyzer .getFirstThrowableOfType(AccessDeniedException.class, causeChain); if (ase instanceof AccessDeniedException) { return handleOAuth2Exception(new ForbiddenException(ase.getMessage(), ase)); } ase = (HttpRequestMethodNotSupportedException) throwableAnalyzer.getFirstThrowableOfType( HttpRequestMethodNotSupportedException.class, causeChain); if (ase instanceof HttpRequestMethodNotSupportedException) { return handleOAuth2Exception(new MethodNotAllowed(ase.getMessage(), ase)); } return handleOAuth2Exception(new ServerErrorException(HttpStatus.INTERNAL_SERVER_ERROR.getReasonPhrase(), e)); } private ResponseEntity<OAuth2Exception> handleOAuth2Exception(OAuth2Exception e) throws IOException { int status = e.getHttpErrorCode(); HttpHeaders headers = new HttpHeaders(); headers.set("Cache-Control", "no-store"); headers.set("Pragma", "no-cache"); if (status == HttpStatus.UNAUTHORIZED.value() || (e instanceof InsufficientScopeException)) { headers.set("WWW-Authenticate", String.format("%s %s", OAuth2AccessToken.BEARER_TYPE, e.getSummary())); } ResponseEntity<OAuth2Exception> response = new ResponseEntity<OAuth2Exception>(e, headers, HttpStatus.valueOf(status)); return response; } public void setThrowableAnalyzer(ThrowableAnalyzer throwableAnalyzer) { this.throwableAnalyzer = throwableAnalyzer; } @SuppressWarnings("serial") private static class ForbiddenException extends OAuth2Exception { public ForbiddenException(String msg, Throwable t) { super(msg, t); } @Override public String getOAuth2ErrorCode() { return "access_denied"; } @Override public int getHttpErrorCode() { return 403; } } @SuppressWarnings("serial") private static class ServerErrorException extends OAuth2Exception { public ServerErrorException(String msg, Throwable t) { super(msg, t); } @Override public String getOAuth2ErrorCode() { return "server_error"; } @Override public int getHttpErrorCode() { return 500; } } @SuppressWarnings("serial") private static class UnauthorizedException extends OAuth2Exception { public UnauthorizedException(String msg, Throwable t) { super(msg, t); } @Override public String getOAuth2ErrorCode() { return "unauthorized"; } @Override public int getHttpErrorCode() { return 401; } } @SuppressWarnings("serial") private static class MethodNotAllowed extends OAuth2Exception { public MethodNotAllowed(String msg, Throwable t) { super(msg, t); } @Override public String getOAuth2ErrorCode() { return "method_not_allowed"; } @Override public int getHttpErrorCode() 
{ return 405; } } }
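For context on how a translator like this is typically used: in Spring Security OAuth2, the authorization-server endpoints accept a custom WebResponseExceptionTranslator. The snippet below is a hedged illustration of that wiring, not part of the class above, and it assumes the standard AuthorizationServerConfigurerAdapter API from the spring-security-oauth2 project.

import org.springframework.context.annotation.Configuration;
import org.springframework.security.oauth2.config.annotation.web.configuration.AuthorizationServerConfigurerAdapter;
import org.springframework.security.oauth2.config.annotation.web.configuration.EnableAuthorizationServer;
import org.springframework.security.oauth2.config.annotation.web.configurers.AuthorizationServerEndpointsConfigurer;

// Illustrative wiring only (assumes the Spring Security OAuth2 authorization-server API);
// it shows where a custom WebResponseExceptionTranslator such as the class above is plugged in.
@Configuration
@EnableAuthorizationServer
public class AuthorizationServerConfig extends AuthorizationServerConfigurerAdapter {

    @Override
    public void configure(AuthorizationServerEndpointsConfigurer endpoints) {
        // Replace the default translator with the customized one defined above.
        endpoints.exceptionTranslator(new DefaultWebResponseExceptionTranslator());
    }
}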
package gcaptcha import ( "bytes" "image" "image/color" "image/draw" "image/png" "io/ioutil" "sort" "strings" ) import ( "github.com/golang/freetype" "github.com/golang/freetype/truetype" "github.com/sanxia/glib" ) /* ================================================================================ * 文字图片 * qq group: 582452342 * email : <EMAIL> * author : 美丽的地球啊 - mliu * ================================================================================ */ type ( textImage struct { title string texts []string //外部数据源 option ImageOption itemMap map[int]string //数据映射 cellMap map[int]string //文字映射 colors []*image.Uniform width int height int count int } ) /* ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ * 初始化文字图 * ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ */ func NewTextImage(title string, texts []string, count int) IImage { textImage := &textImage{ option: ImageOption{ FontSize: 12, }, } textImage.title = title textImage.texts = texts textImage.count = count //init textImage.itemMap = make(map[int]string, 0) textImage.cellMap = make(map[int]string, 0) textImage.colors = make([]*image.Uniform, 0) textImage.colors = append(textImage.colors, &image.Uniform{color.RGBA{120, 120, 50, 255}}) textImage.colors = append(textImage.colors, &image.Uniform{color.RGBA{120, 126, 60, 255}}) textImage.colors = append(textImage.colors, &image.Uniform{color.RGBA{120, 132, 40, 255}}) textImage.colors = append(textImage.colors, &image.Uniform{color.RGBA{120, 120, 50, 255}}) textImage.colors = append(textImage.colors, &image.Uniform{color.RGBA{120, 127, 14, 255}}) return textImage } func (s *textImage) SetOption(option ImageOption) { s.option = option } /* ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ * 获取图片数据 * ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ */ func (s *textImage) GetImage() ([]byte, error) { var imageBuffer bytes.Buffer headerHeight := s.option.HeaderHeight width := s.option.CellWidth height := s.option.CellHeight texts := s.shuffle() s.width = s.count*(width+s.option.Gap) + s.option.Gap + ((s.count - 1) * s.option.Padding) s.height = 1*(height+s.option.Gap) + s.option.Gap + headerHeight + (2 * s.option.Padding) //偏移点 graphics := image.NewRGBA(image.Rect(0, 0, s.width, s.height)) offsetPoint := image.Point{s.option.Padding, s.option.Padding} //背景图 if s.option.Backgroud != "" { backgroundImage, _ := glib.GetImageFile(s.option.Backgroud) draw.Draw(graphics, graphics.Bounds(), backgroundImage, image.ZP, draw.Over) } else { white := color.RGBA{255, 255, 255, 255} draw.Draw(graphics, graphics.Bounds(), &image.Uniform{white}, image.ZP, draw.Src) } //标题图 if len(s.title) > 0 { if titleImage, err := s.getTitleImage(); err == nil { draw.Draw(graphics, titleImage.Bounds().Add(offsetPoint), titleImage, image.ZP, draw.Over) } } //文字图 offsetPoint = image.Point{s.option.Padding, offsetPoint.Y} if len(s.title) > 0 { offsetPoint = image.Point{s.option.Padding, offsetPoint.Y + headerHeight} } textImage, _ := s.getTextImage(texts) draw.Draw(graphics, textImage.Bounds().Add(offsetPoint), textImage, image.ZP, draw.Over) if err := png.Encode(&imageBuffer, graphics); err != nil { return nil, err } return imageBuffer.Bytes(), nil } /* ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ * 获取标题图 * ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ */ func (s *textImage) getTitleImage() (image.Image, error) { graphics := 
image.NewRGBA(image.Rect(0, 0, s.width, s.height)) draw.Draw(graphics, graphics.Bounds(), image.Transparent, image.ZP, draw.Src) font, _ := s.getFont(s.option.FontPath) ctx := freetype.NewContext() ctx.SetDPI(72) ctx.SetFontSize(s.option.FontSize) ctx.SetFont(font) ctx.SetClip(graphics.Bounds()) ctx.SetDst(graphics) ctx.SetSrc(image.Black) space := float64(12) pt := freetype.Pt(2, 14) for _, s := range s.title { if _, err := ctx.DrawString(string(s), pt); err != nil { return nil, err } pt.X += ctx.PointToFixed(space) } return graphics, nil } /* ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ * 获取文字图 * ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ */ func (s *textImage) getTextImage(texts []string) (image.Image, error) { graphics := image.NewRGBA(image.Rect(0, 0, s.width, s.height)) draw.Draw(graphics, graphics.Bounds(), image.Transparent, image.ZP, draw.Src) font, _ := s.getFont(s.option.FontPath) ctx := freetype.NewContext() ctx.SetDPI(72) ctx.SetFont(font) ctx.SetClip(graphics.Bounds()) ctx.SetDst(graphics) textPoint := freetype.Pt(2, 5) flags := make(map[int]bool, 0) var nextIndex int newTexts := strings.Join(texts, "") for _, text := range newTexts { colorIndex := glib.RandInt(len(s.colors)) ctx.SetSrc(s.colors[colorIndex]) fontSize := glib.RandIntRange(int(s.option.FontSize), int(s.option.FontSize)+2) offsetX := 14 + glib.RandIntRange(-2, 2) offsetY := glib.RandIntRange(12, 18) if string(text) == "#" || string(text) == "b" { fontSize = glib.RandIntRange(int(s.option.FontSize)-6, int(s.option.FontSize)-2) flags[nextIndex] = true } if flags[nextIndex] { offsetY = glib.RandIntRange(8, 14) } if nextIndex > 0 && flags[nextIndex-1] { offsetX = 8 + glib.RandIntRange(-5, 0) } ctx.SetFontSize(float64(fontSize)) textPoint.X += ctx.PointToFixed(float64(offsetX)) textPoint.Y = ctx.PointToFixed(float64(offsetY)) if _, err := ctx.DrawString(string(text), textPoint); err != nil { return nil, err } nextIndex++ } return graphics, nil } /* ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ * 随机文字 * ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ */ func (s *textImage) shuffle() []string { //随机打散texts到cellMap for index, text := range s.texts { s.itemMap[index] = text } index := glib.RandIntRange(0, s.count) s.cellMap[index] = s.itemMap[index] count := s.count //第一个已提前写入,所以是大于1 for count > 1 { index = glib.RandIntRange(0, len(s.itemMap)) for { if _, ok := s.cellMap[index]; !ok { break } else { index = glib.RandIntRange(0, len(s.itemMap)) } } s.cellMap[index] = s.itemMap[index] count-- } return s.GetText() } /* ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ * 获取文字宽度 * ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ */ func (s *textImage) getTextWidth(fontSize int) int { font, _ := s.getFont(s.option.FontPath) ctx := freetype.NewContext() ctx.SetDPI(72) ctx.SetFontSize(float64(fontSize)) ctx.SetFont(font) space := float64(fontSize) return int(ctx.PointToFixed(space)) } /* ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ * 获取字体 * ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ */ func (s *textImage) getFont(fontPath string) (*truetype.Font, error) { absolutePath := glib.GetAbsolutePath(fontPath) fontBytes, err := ioutil.ReadFile(absolutePath) if err != nil { return nil, err } font, err := freetype.ParseFont(fontBytes) if err != nil { return nil, err 
	}

	return font, nil
}

/* ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 * Get the text in cell order
 * ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 */
func (s *textImage) GetText() []string {
	// Keep the map keys in ascending order so the output order is stable.
	keys := make([]int, 0)
	for keyIndex := range s.cellMap {
		keys = append(keys, keyIndex)
	}
	sort.Ints(keys)

	texts := make([]string, 0)
	for _, key := range keys {
		texts = append(texts, string(s.cellMap[key]))
	}

	return texts
}
Two separate reports for the government have raised the possibility that millions of people may have to work longer to qualify for a state pension.

An analysis for the Department for Work and Pensions (DWP) has suggested that workers under the age of 30 may not get a pension until the age of 70. A second report, by John Cridland, proposes that those under the age of 45 may have to work a year longer, to 68. The government is due to make a decision on both reports by May.

Ministers are under pressure to address the expected rise in the cost of pensions, which stems from longer life expectancy and the increasing ratio of pensioners to workers. But at least six million people face the prospect of having to work longer.

"This report is going to be particularly unwelcome for anyone in their early 40s, as they're now likely to see their state pension age pushed back another year," said Tom McPhail, head of retirement at Hargreaves Lansdown. "For those in their 30s and younger, it reinforces the expectation of a state pension from age 70, which means an extra two years of work."

In an extreme scenario, experts from the Government Actuary's Department (GAD) said the state pension age could be raised as high as 70 as soon as 2054. Under existing plans, the state pension age is due to rise to 68 for those born after 1978. The "extreme" scenario involves an assumption that people spend 32% of their adult life in retirement. The conventional assumption until now has been that people will spend 33.3% of their lives in retirement.

In the worst-case situation, the GAD calculations also suggest that the change in the retirement age from 67 to 68 could be pulled forward by as much as 16 years. So while that increase is not due to happen until 2044, it could be brought in as soon as 2028, affecting those now in their late 50s.

'Extra year'

Former pensions minister Steve Webb was highly critical of the GAD's scenario. "This is not what parliament voted for and is clearly driven by the Treasury. It is one thing asking people to work longer to make pensions affordable, but it is another to hike up pension ages because the Treasury sees it as an easy way to raise money," he said.

However, the other report, by the former CBI chief John Cridland, foresees more modest changes. He recommends bringing the change from 67 to 68 forward by seven years, from 2046 to 2039. That would mean anyone currently under the age of 45 having to work an extra year. The changes are due to be phased in gradually, over a two-year period in each case.

In addition Mr Cridland said there should be no up-rating from 68 to 69 before 2047 at the earliest, and that the pension age should never rise by more than one year in each ten-year period. He also suggests that the so-called triple lock be ended in the next parliament. Up to now the triple lock has guaranteed that the state pension rises each year by inflation, earnings or 2.5%, whichever is the highest. However, by linking the rise in pension payouts to earnings alone, the bill for pensions would fall from 6.7% of GDP to 5.9% of GDP by 2066.

[Video caption: Baroness Ros Altmann tells Today there shouldn't be a 'magic age' for receiving pensions]

Mr Cridland also recommends:
A large-scale comparison of social media coverage and mentions captured by the two altmetric aggregators - Altmetric.com and PlumX

The increased social media attention to scholarly articles has resulted in efforts to create platforms & services to track and measure the social media transactions around scholarly articles in different social platforms (such as Twitter, Blog, Facebook) and academic social networks (such as Mendeley, Academia and ResearchGate). Altmetric.com and PlumX are two popular aggregators that track social media activity around scholarly articles from a variety of social platforms and provide the coverage and transaction data to researchers for various purposes. However, some previous studies have shown that the social media data captured by the two aggregators have differences in terms of coverage and magnitude of mentions. This paper aims to revisit the question by doing a large-scale analysis of social media mentions of a data sample of 1,785,149 publication records (drawn from multiple disciplines, demographics and publishers). The results obtained show that PlumX tracks a wider set of sources and more articles as compared to Altmetric.com. However, the coverage and average mentions of the two aggregators vary across different social media platforms, with Altmetric.com recording higher mentions in Twitter and Blog, and PlumX recording higher mentions in Facebook and Mendeley, for the same set of articles. The coverage and average mentions captured by the two aggregators across different document types, disciplines and publishers are also analyzed.

Introduction

The newer forms of social media metrics (aka altmetrics) about scholarly articles, collected from different social media platforms and academic social networks, present useful insight into the importance and impact of the articles. Altmetrics are now being collected and analyzed for a variety of purposes, ranging from early impact assessment to measuring the correlations between altmetrics and citations. Several studies have proposed that altmetrics could be an alternative to citations for assessment of the impact of research (such as Costas, Zahedi, & Wouters, 2015; Huang, Wang, & Wu, 2018; Thelwall & Nevill, 2018; Thelwall, 2017, 2018). Owing to the increased attention on social media harvesting, Web crawlers, and RSS feeds. The data harvesting is updated at different time intervals, ranging from daily to monthly, based on the different licensing policies of the harvested platforms. Plum Analytics refreshes PlumX every 3-4 hours to keep it most up to date. The data can be accessed through end-user interfaces, widgets, and APIs of Plum Analytics. Both aggregators, Altmetric.com and PlumX, provide metrics based on data collected from various social media, bibliographic and policy document sources. Tables 1 and 2 list the social media, bibliographic and other sources tracked by the two aggregators. Table 1 lists a total of 33 social media sources tracked by the two aggregators. Out of these 33 sources, 14 sources are captured by Altmetric.com, whereas PlumX tracks 28 sources. The platforms/sources tracked by Altmetric.com are Twitter, Facebook, Youtube, Reddit, F1000, Blog, Mendeley, Stack Overflow, Wikipedia, News, CiteULike, LinkedIn, Google+, and Pinterest. Out of these 14 sources, PlumX tracks all except five - F1000, LinkedIn, Google+, Stack Overflow, and Pinterest. PlumX additionally tracks 19 social media sources that are not tracked by Altmetric.com.
These sources are bit.ly, Figshare, Github, Slideshare, SoundCloud, SourceForge, Vimeo, Stack Exchange, Goodreads, Amazon, Delicious, Dryad, Dspace, SSRN, EBSCO, ePrints, AritiiRead eBooks, Ariti Library, and WorldCat. Table 2 lists the bibliographic and policy document sources tracked by the two aggregators. Here, PlumX has better coverage and also provides citation metrics. PlumX covers a total of 16 sources whereas Altmetric.com tracks only 6 sources. Both aggregators have only one bibliographic source in common, which is the policy document source.

Related Work

Several previous studies have analyzed the data from different altmetric aggregators for different purposes, ranging from assessing their accuracy to finding how much they agree on altmetric counts for the same set of scholarly articles. Zahedi, Fenner, & Costas explored the agreement/disagreement among metric scores across three altmetric providers, namely Mendeley, Lagotto, and Altmetric.com. They analyzed 30,000 DOIs for the year 2013 in five common sources and analyzed possible reasons for the differences. They found that Altmetric.com reports more tweets as compared to Lagotto and concluded that the data capture procedure of Altmetric.com, which includes tweets, public retweets, and comments in real time, could be a probable reason for such differences. In later studies, Zahedi & Costas (2018a, 2018b) analyzed 31,437 PLoS ONE DOIs and explored the differences in metrics provided by four aggregators: Crossref Event Data (CED), Altmetric.com, Lagotto, and Plum Analytics. They focused on the process of data collection used by different aggregators and on how different aggregators define metrics from the data collected. The results showed that Mendeley (r>0.8) and Twitter (0.5≤r≤0.9) have good agreement across aggregators, whereas Facebook (0.1≤r≤0.3) and Wikipedia (0.2≤r≤0.8) have the lowest agreement. They attributed this to the methods of tracking and processing data - for example, the effect of direct data collection versus collection through third-party APIs; the aggregation of data based on different versions, identifiers, types, etc.; and the impact of the frequency of update. They recommended that one should not rely only on the aggregator showing a higher count for the metric. Meschede & Siebenlist explored the relationship between the metrics across the two aggregators PlumX and Altmetric.com (inter-correlation), as well as between the metrics within each aggregator itself (intra-correlation). They analyzed a sample of 5,000 journal articles from six disciplines ('Computer Science, Engineering and Mathematics', 'Natural Sciences', 'Multidisciplinary', 'Medicine and Health Sciences', 'Arts, Humanities and Social Sciences' and 'Life Sciences') and analyzed them for the eight common sources ('Facebook', 'Blogs', 'Google+', 'News', 'Reddit', 'Twitter', 'Wikipedia' and 'Mendeley') in both aggregators. The study showed that PlumX (99%) has higher overall coverage of the data chosen for analysis as compared to Altmetric.com (39%). The intra-correlations between the metrics within the same platforms are weak. They further observed that PlumX and Altmetric.com are highly inter-correlated in terms of Mendeley and Wikipedia (with correlation coefficient values 0.97 and 0.82 respectively) but weakly correlated for other sources - Facebook (0.29), Blogs (0.46), Google+ (0.28), News (0.11), Twitter (0.49), and Reddit (0.41).
Ortega (2018a) analyzed the difference in altmetric indicator counts in Crossref Event Data, Altmetric.com, and PlumX, using a sample of 67,000 papers. For each platform, the difference in metrics across aggregators was quantified in terms of counting differences. The counting difference was computed by taking the sum of the differences in metrics provided by two aggregators at the document level and dividing it by the number of publications that have non-zero altmetric events and occur in both aggregators. They concluded that different aggregators should be used for data from different platforms, such as PlumX for Mendeley reads and Altmetric.com for tweets, news & blogs. In another study, Ortega (2018b) grouped different altmetrics into three groups - social media, usage, and citations - using principal component analysis (PCA). In this study, data from Altmetric.com, Scopus and PlumX for a set of 3,793 articles published in 2013 was used. Considering that the earlier studies provided evidence that some specific aggregators perform better for some specific data sources, they collected different indicators from different aggregators. These included tweets, Facebook mentions, news, blogs etc. from Altmetric.com; citations from Scopus; Wikipedia mentions from CED; and views & Mendeley reads from PlumX. The results showed that instead of using a single metric, such as the altmetric score, one should consider the relatedness of metrics and their impact across different disciplines for evaluating research. Ortega (2018c) examined the emergence and evolution of five altmetrics (downloads, views, tweets, readers, and blog mentions) along with bibliometric citations from the publication date of a document. The study also investigated the evolution of the relationships among these metrics by analyzing 5,185 papers from PlumX on a month-to-month basis. The results showed that over a document's entire life cycle, altmetric mentions appear quickly, whereas citations appear slowly. Based on the relationship analysis of the metrics, the study suggested that reader counts influence citations. A series of studies (Ortega, 2019a, 2019b, 2020b) analyzed the coverage of news and blog sources in three aggregators, namely Crossref Event Data, Altmetric.com, and PlumX, by taking 100,000 Crossref DOIs. The results showed that the overlap of these sources across aggregators is comparatively low (Ortega, 2019a). For example, Altmetric.com has higher coverage for blogs (37.8%), but only 7.8% of the publication set is commonly covered by the three aggregators. The coverage in one aggregator might be high for the same set of articles, but the lower overlap ratio shows that the sources covered by the aggregators vary widely. The main objective of the study in Ortega (2020b) was to explore altmetric biases with respect to country, language and subjects with a dataset of 100,000 DOIs. The author retrieved the sources which covered the randomly selected publication set and categorized them based on their region, language and interest level. It shows that Altmetric.com is the most heterogeneous aggregator geographically and linguistically. However, PlumX has more coverage of local news events, particularly for the USA. Their conclusion serves as evidence that English is the most prevalent language. News and blog sources are mostly from general interest, social science and humanities disciplines.
From this same dataset, Ortega (2019b) extracted the blog and news links to verify the validity, coverage, and presence of the tracked blog mentions and news mentions of the scholarly articles. There were 51,000 news & blog links found in this extraction process, which were checked for their existence, and it was found that almost one-third of the links were broken. This elaborate longitudinal study concluded that these mentions should be audited periodically, as the aggregators are dependent on third-party providers. Bar-Ilan, Halevi, & Milojević analyzed the altmetric data of 2,728 JASIST articles and reviews, provided by Mendeley, Altmetric.com and PlumX, at two different points in time, 2017 and 2018. They observed an increase over time in the overlap in coverage of documents with Mendeley readers across the three sources. There were 874 papers commonly covered in all sources in 2017, which increased to 1,021 papers in 2018. Further, an increase in Mendeley reader counts and citations was also observed. They suggested using more than one aggregator to obtain altmetric indicators and comparing them in order to get reliable altmetrics. Ortega (2020a) performed a meta-analysis over a set of 107 altmetric articles related to five altmetric aggregators, namely Altmetric.com, Mendeley, PlumX, Lagotto, and ImpactStory, published during 2012-2019. The dataset consisted of papers that had either computed or published data useful in the computation of three metrics: coverage, platform-wise coverage, and average mentions. The usage percentage of all the aggregators was explored. Altmetric.com (54%) was found to be the most prevalent provider, followed by Mendeley (18%) and PlumX (17%). The analysis showed that Altmetric.com tracks more events for Twitter, News, and Blogs, whereas PlumX performs well on the Facebook and Mendeley platforms. The results exhibited a gradual increase in tweet capture by PlumX.

Data

Since the focus of the work is comparing the coverage of research publications and altmetrics provided by two popular social media aggregators - Altmetric.com and PlumX - we analyzed the variation in coverage for the whole world's research output for the year 2016. The complete set of research publications indexed in Web of Science (WoS) for the year 2016 was downloaded. The download was performed in the month of Sep. 2019. Since WoS does not allow downloading more than 100,000 records at a time, the data was collected based on Web of Science Categories (WC). The WC-based data collection has an inherent problem of duplication, since in WoS a paper is generally tagged under many WCs. Due to this duplication, a total of 3,545,720 records were initially obtained for all WCs taken together, comprising the standard 67 fields, including TI (title), PY (publication year), DI (DOI), DT (document type), SO (publication name), DE (author keywords) and AB (abstract). After the removal of duplicate entries and some erroneous records, we were left with 1,785,149 publication records. The altmetric data for the publication records found in the two aggregators - Altmetric.com and PlumX - was obtained thereafter. In order to obtain altmetric data from Altmetric.com, a DOI look-up was performed for all the DOIs in the WoS data. Out of 1,785,149 publication records, a total of 902,990 records were found to be covered by Altmetric.com, which is about 50.58% of the total data.
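As a purely illustrative aside, the de-duplication and coverage computation described above might be sketched as follows with pandas; the file names and column labels here are assumptions, not the authors' actual pipeline.

```python
import pandas as pd

# Hypothetical export of the WC-wise WoS downloads concatenated into one file.
wos = pd.read_csv("wos_2016_all_wc.csv", usecols=["DI", "TI", "PY", "DT", "SO"])

# Remove duplicates caused by papers being tagged under multiple WCs,
# and drop records without a DOI.
wos = wos.dropna(subset=["DI"]).drop_duplicates(subset="DI")

# DOIs covered by each aggregator (again, hypothetical files).
altmetric_dois = set(pd.read_csv("altmetric_lookup.csv")["doi"].str.lower())
plumx_dois = set(pd.read_csv("plumx_dashboard.csv")["DOI"].str.lower())

dois = wos["DI"].str.lower()
print("Altmetric.com coverage: %.2f%%" % (100 * dois.isin(altmetric_dois).mean()))
print("PlumX coverage:         %.2f%%" % (100 * dois.isin(plumx_dois).mean()))
```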
The data obtained from Altmetric.com had 46 fields, including DOI, Title, Twitter mentions, Facebook mentions, News mentions, Altmetric Attention Score, OA Status, Subjects (FoR), Publication Date, URI, etc. The data from Altmetric.com was downloaded in the month of Sep. 2019. Since we did not have API access for PlumX, we contacted the PlumX team to provide us with access to PlumX data for the 1,785,149 publication records we had. They agreed to provide us with the data and created dashboard access for us for the publication records concerned. Out of the 1,785,149 publication records, a total of 1,661,477 publication records were found to be covered in PlumX, which constitutes about 93.07% of the whole data. PlumX provides metrics in five categories, from a wide range of source platforms. The PlumX data was downloaded in the month of Nov. 2019. This data included fields like DOI, Title, Year, Repo URL, Researcher Name(s), Captures:Readers:Mendeley, Social Media:Tweets:Twitter, Social Media:Shares, Likes & Comments:Facebook, etc. This data also has a field named Plum stable URL, which redirects to the page from which one can get the actual tweets, blogs, etc. For our analysis, we used the data for four platforms - Twitter, Facebook, Mendeley, and Blog - as covered by both aggregators.

Methodology

In this exploratory analysis, the data obtained from the three different data sources has been analyzed on six aspects, for the two aggregators: variation in coverage, difference in magnitude of mentions, correlations in mention values, variation across document types, variation by discipline and variation across publishers. First of all, the coverage of scholarly articles in different social media platforms by the two aggregators was compared. The altmetric data for the articles in consideration was obtained from the two aggregators corresponding to the four social media platforms: Twitter, Facebook, Blogs and Mendeley. The percentage of articles covered in the four social media platforms as per the data from the two aggregators was identified. Secondly, the magnitude of mentions in the four social media platforms for the articles was analyzed and the difference in the magnitude of mentions in the data drawn from the two aggregators was computed. Statistical measures (mean and median) were computed for the differences in values from each of these platforms. Thirdly, the correlation between mention values for different social media platforms, as drawn from the two aggregators, was computed. For computing correlations, the options were to compute the Pearson correlation or the Spearman rank correlation. However, as it has been observed in previous studies (such as Thelwall & Nevill, 2018) that altmetric data are highly skewed, we used the Spearman rank correlation, which is more suitable for such skewed data. The Spearman Rank Correlation Coefficient (SRCC) was computed between the different types of mentions available from the two aggregators. The built-in function 'corr' available in the pandas module of the Python programming language was used for this purpose, with the value 'spearman' passed as a parameter to the function (a minimal example of this step is sketched below). The value of the SRCC lies between -1 and +1, with positive values indicating positive correlation, a value of 0 indicating no correlation, and negative values indicating negative correlation. Fourthly, the difference in coverage and mentions captured by the two aggregators for the four social media platforms was analyzed across different document types.
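A minimal sketch of the correlation step described above: the paper only states that pandas' corr function was used with the 'spearman' option, so the merged table and column names below are hypothetical.

```python
import pandas as pd

# Hypothetical merged table of per-article mentions from both aggregators,
# restricted to the commonly covered articles.
df = pd.read_csv("common_articles.csv")

# SRCC between the two aggregators' tweet counts for the same articles.
srcc = df["twitter_altmetric"].corr(df["twitter_plumx"], method="spearman")
print("Twitter SRCC between the aggregators: %.3f" % srcc)

# Pairwise SRCC across several mention columns at once.
print(df[["twitter_altmetric", "twitter_plumx",
          "mendeley_altmetric", "mendeley_plumx"]].corr(method="spearman"))
```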
The document type of articles was taken from the 'DT' tag in the WoS record file. The values for 'DT' include journal articles, proceedings papers, book chapters, reviews, book reviews, editorial material, etc. The variation in coverage levels and magnitude of mentions for different social media platforms was thus obtained for these document types. Fifthly, the difference in coverage and mentions across different disciplines was computed by grouping the publication records into different disciplines. Each publication record was grouped into one of the fourteen major disciplinary categories as per the scheme proposed in (). The Web of Science Category (WC) field information for each publication record is examined and, based on its value, the publication record is assigned to one of the fourteen broad disciplinary categories. These fourteen broad disciplinary categories are as follows: Agriculture ( 'MDPI', 'Cambridge University Press', and 'Emerald' across the two aggregators. The publisher information was obtained from the 'PU' tag of the WoS records. In the PU field, the same publishers are present in different forms. For example, Elsevier has variants such as Elsevier Science Ltd, Elsevier Masson, and Elsevier Science Bv. All such variants represent the same parent publisher, Elsevier. To capture all such variants of any publisher, we employed partial string matches in the PU field (a brief sketch of this step is given below). In this way, all publication records for the different publishers are obtained and the differences in their coverage and in mention counts are computed across the two aggregators.

Results

The altmetric data captured by the two aggregators for the large sample of articles was analysed to identify differences in coverage and magnitude of mentions across different platforms. The differences in mentions captured by the two aggregators were also analysed across different disciplines, document types and publishers.

Difference in coverage of the two aggregators

First of all, the difference in coverage of the two altmetric aggregators was identified. It was observed that out of the total set of 1,785,149 publication records, a total of 902,990 records were found to be covered by Altmetric.com (which is about 50.58% of the total data), and a total of 1,661,477 publication records were found to be covered in PlumX (which constitutes about 93.07% of the whole data). Figure 1 shows the overlap of coverage of the two aggregators. It can be seen that a total of 879,981 articles are commonly covered by the two aggregators. About 97.5% of articles covered by Altmetric.com are also covered by PlumX, whereas Altmetric.com covers only 53% of articles tracked by PlumX. The PlumX aggregator has 47% of articles uniquely covered. Thus, it is observed that PlumX has a higher overall coverage of articles (including uniquely covered articles) as compared to Altmetric.com. We have tried to find out whether the difference in coverage of the two aggregators is similar across different platforms. Figure 2 shows a bar chart of the article coverage of the two aggregators for four different platforms - Twitter, Facebook, Mendeley and Blog. It can be observed that PlumX has better coverage for the Mendeley platform, whereas Altmetric.com has an edge over PlumX in coverage in the Twitter and Blog platforms. The coverage of the Facebook platform by the two aggregators is almost the same. The magnitude of the coverage difference between the two aggregators is highest for Mendeley and lowest for Facebook.
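As a brief aside before continuing with the coverage results, the publisher-normalisation step described in the Methodology above might look roughly like this; the parent-publisher list and column names are illustrative assumptions.

```python
import pandas as pd

# Hypothetical WoS records with a 'PU' (publisher) column.
wos = pd.read_csv("wos_records.csv")

parent_publishers = ["Elsevier", "Springer", "Wiley", "Taylor & Francis",
                     "IEEE", "PLoS", "MDPI", "Cambridge Univ Press", "Emerald"]

def normalise_publisher(pu: str) -> str:
    """Map a raw PU value such as 'Elsevier Science Bv' to its parent publisher."""
    pu_lower = str(pu).lower()
    for parent in parent_publishers:
        if parent.lower() in pu_lower:   # partial string match
            return parent
    return "Other"

wos["publisher"] = wos["PU"].apply(normalise_publisher)
print(wos["publisher"].value_counts().head())
```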
Thus, while PlumX has an overall higher coverage of articles, Altmetric.com has better coverage in two of the four platforms analysed.

Difference in magnitude of mentions

The mean and median values of the number of mentions for the four platforms as tracked by the two aggregators were computed. Table 3 shows the number of articles tracked by the two aggregators across the different platforms, along with the mean and median values of mentions. It can be observed that the mean value of mentions for the Twitter and Blog platforms is higher in Altmetric.com, whereas the mean value of mentions for Facebook and Mendeley is higher in PlumX. In the case of the Facebook platform, PlumX has significantly higher mean and median mention values as compared to Altmetric.com. It may, however, be noted that these values are for different numbers of articles tracked by the two aggregators. A more useful comparison of the mention values would require comparing the mentions for the set of articles commonly covered by the two aggregators. Therefore, we have compared the values of mentions captured by the two aggregators across different platforms for the same set of commonly covered articles. The difference in mention values for the papers in Altmetric.com and PlumX is computed. Figure 3 shows the mean of the differences in mentions in the four platforms as tracked by the two aggregators. It is observed that in the case of the Twitter and Blog platforms, the mean value of the differences is positive, indicating that Altmetric.com captures a higher number of mentions as compared to PlumX in these platforms. In the Facebook and Mendeley platforms, PlumX appears to have a higher number of mentions tracked as compared to Altmetric.com. In order to gain further insight into the difference in magnitude of mentions, we have also plotted the frequency of the differences in mentions across the different platforms for the two aggregators. Figures 4(a)-(d) present the frequency values of the differences in mentions between the two aggregators for Twitter, Facebook, Mendeley and Blog, respectively. Figure 4(a) shows the histogram for the differences in the Twitter platform. These differences are for 565,445 commonly covered articles for the Twitter platform, with at least one tweet captured in both aggregators. It can be seen that a good percentage (approximately 50%) of papers have a tweet difference equal to zero, indicating that both aggregators record the same number of tweets for these papers. However, the distribution is inclined towards the positive side, indicating that Altmetric.com captures more tweets than PlumX for a good number of the articles. We looked at some examples to verify this and found it to be valid. One example paper, titled "When the Great Power Gets a Vote: The Effects of Great Power Electoral Interventions on Election Results", has 24,318 tweets captured by Altmetric.com, whereas PlumX captured only 493 tweets for this paper. Figure 4(b) shows the histogram of the article-level differences in the Facebook platform. Here the plot is created for 71,437 commonly covered articles in the Facebook platform that have non-zero Facebook mentions in both aggregators. It is observed that in approximately 16% of the articles, the difference in mentions is zero. However, the distribution is clearly inclined towards the negative side, indicating that PlumX captures more mentions per article as compared to Altmetric.com for the majority of the articles.
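As a short aside, the per-article differences underlying Figures 3 and 4 could be computed and plotted with something like the following; the input table and column names are assumptions.

```python
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("common_articles.csv")

# Positive differences mean Altmetric.com recorded more mentions than PlumX.
df["tweet_diff"] = df["twitter_altmetric"] - df["twitter_plumx"]

# Articles with at least one tweet captured by both aggregators.
common = df[(df["twitter_altmetric"] > 0) & (df["twitter_plumx"] > 0)]
print("mean difference:", common["tweet_diff"].mean())

common["tweet_diff"].clip(-50, 50).hist(bins=101)   # clip the long tails for readability
plt.xlabel("Altmetric.com tweets - PlumX tweets")
plt.ylabel("number of articles")
plt.show()
```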
One example article to mention would be the article titled "CRISPR gene-editing tested in a person for the first time", which has 62,290 mentions captured by PlumX but only 341 mentions captured by Altmetric.com. The histogram for the differences in mentions in the Mendeley platform is shown in Figure 4(c). Here, the plot is made for a total of 830,520 commonly covered articles in the Mendeley platform that have at least one read recorded in both aggregators. In this case too, it is seen that the pattern is inclined more towards the negative side, indicating that PlumX captures more reads per article than Altmetric.com for the majority of the articles. About 25% of articles have the same number of mentions recorded by the two aggregators. One example would be the article titled "Mastering the game of Go with deep neural networks and tree search", which has 39,621 reads captured by PlumX but only 7,900 reads captured by Altmetric.com. Figure 4(d) plots the histogram of the differences in Blog mentions for the 14,387 commonly covered articles in the Blog platform, with non-zero mentions captured by both aggregators. In this case, it is observed that more than 40% of articles have this difference equal to zero. The pattern, however, is inclined towards the positive side, indicating that Altmetric.com captures more mentions as compared to PlumX for a good number of articles. One example article to mention would be the article titled "Planet Hunters IX. KIC 8462852 - where's the flux?", which has 95 mentions captured by Altmetric.com but only 12 mentions captured by PlumX. Thus, a perusal of Figures 4(a) to (d) indicates that Altmetric.com captures more mentions per article in the case of the Twitter and Blog platforms, whereas PlumX captures more mentions per article for the Mendeley and Facebook platforms.

Correlations in mentions

We have also computed the correlation between the mention values for different platforms across the two aggregators. The Spearman Rank Correlation Coefficient (SRCC) between mentions is computed for the articles commonly covered by the two aggregators. Table 4 shows the SRCC values for the article mentions across the two aggregators. It can be observed that the correlation values for the Twitter and Mendeley platforms are 0.823 and 0.95, respectively, indicating strong correlation. In the case of the Facebook and Blog platforms, these values are 0.272 and 0.424, respectively, indicating lower correlation. Thus, it can be said that there is more agreement in the mention-based ranks of articles in the Twitter and Mendeley platforms between the two aggregators. The mention values differ in a more random manner in the other two platforms. The intra-platform correlations across the two aggregators are also shown, each of which is less than 0.5, indicating weak positive rank correlations across the platforms in the two aggregators.

Variations across document types

It would be interesting to check whether the coverage and mentions in different platforms as captured by the two aggregators vary across different document types. We have, therefore, analysed the coverage and mentions for the articles of different document types. These document types correspond to Article, Book, Book Chapter, Proceedings Paper and Review, as defined by the Web of Science. Table 5 shows the coverage and average mentions for the Twitter, Facebook, Mendeley and Blog platforms for the different document types. It is observed that Altmetric.com has better coverage in the Twitter and Blog platforms for almost all document types.
In the case of average mention values too, Altmetric.com has higher values across almost all document types in the Twitter and Blog platforms. The PlumX aggregator is found to have better coverage and average mention values across almost all document types in the Facebook and Mendeley platforms. Thus, looking at the results across document types, it is seen that the overall trend of better coverage of Altmetric.com for Twitter and Blog, and of PlumX for Facebook and Mendeley, appears to hold across different document types.

Variations across disciplines

The variations in coverage and mentions between the two aggregators are also analysed across different disciplines. We have used the grouping of the data into fourteen broad disciplines. Table 6 presents the coverage and average mention values in the four platforms. It is observed that in the Twitter platform, Altmetric.com has better coverage and higher mention values than PlumX for almost all disciplines. In the case of the Facebook platform, PlumX has higher mention values for almost all disciplines. The coverage, however, is not higher for PlumX in Facebook for all disciplines, as disciplines like MED, AH, SS, BIO, and AGR have higher coverage by Altmetric.com. In the Mendeley platform, PlumX has higher coverage in all disciplines, but in terms of reads, PlumX captures more reads only for the MED, SS, BIO, GEO, and MUL disciplines. In the case of the Blog platform, Altmetric.com has better coverage than PlumX across almost all disciplines, and the average mention values of Altmetric.com are higher except for the PHY, ENV, MAT and ENG disciplines. Thus, the analysis of data across different disciplines shows an overall trend of better coverage of Altmetric.com for Twitter and Blog and of PlumX for Facebook and Mendeley, except in the case of some disciplines where slightly different patterns are observed.

Variations across Publishers

We have also tried to see if the patterns of variation in coverage and mentions in the two aggregators change across different publishers. In order to analyse this, articles for the 16 most frequent publishers in the data are identified and analysed. Table 7 presents the coverage and average mention values for the data for these publishers in the four platforms for the two aggregators. In terms of the number of journals for which data is covered, the PlumX aggregator has an edge over Altmetric.com. For example, PlumX covers 76 more journals of Springer than Altmetric.com, 34 more journals of Elsevier and 31 more journals of Taylor & Francis. It can be further observed that PlumX covers more than 90% of publication records for all publishers except Cambridge Univ Press (84.1%), whereas the coverage of Altmetric.com varies significantly, between 25% and 86%. Altmetric.com has its minimum coverage of about 25% for IEEE and its highest coverage of 86.88% for PLoS publications. In terms of coverage and mentions for the four platforms, it is found that Altmetric.com has higher coverage in Twitter for almost all publishers. In the Facebook platform, Altmetric.com shows higher coverage for all publishers except PLoS, Hindawi, and MDPI. In the Mendeley platform, the coverage and average reads captured by PlumX are higher for all publishers except Springer, IEEE, Taylor & Francis, ACM and Hindawi. In the case of the Blog platform, Altmetric.com has in general better coverage and average mention values than PlumX, though for the IEEE, IOP, Hindawi, and Emerald publishers the PlumX aggregator captures more mentions.
Thus, in general it is observed that Altmetric.com has better coverage of Twitter and PlumX has better coverage of Facebook, irrespective of the publisher. However, in the case of the Mendeley and Blog platforms, the coverage and mention values of the two aggregators do not show the same patterns for all the publishers.

Discussion

The article has presented a comparative analysis of two well-known altmetric aggregators, namely Altmetric.com and PlumX, for four platforms - Twitter, Facebook, Mendeley and Blog. The variations in coverage and mention values captured by the two aggregators for different platforms are analyzed across different document types, disciplines and publishers as well. The results show that PlumX has an overall higher and wider coverage than Altmetric.com, with PlumX tracking about 93% of articles as compared to Altmetric.com tracking about 50% of articles. Some previous studies (Meschede & Siebenlist, 2018; Ortega, 2018b, 2019a; Zahedi & Costas, 2018b) have also found that PlumX has higher coverage of articles, with close to 95% of articles tracked. However, the coverage differentiation is not the same across all four platforms. In the case of the Twitter and Blog platforms, Altmetric.com has better coverage than PlumX. Ortega (2018a) found a similar pattern for Twitter, with Altmetric.com tracking the Twitter platform better than PlumX. In the case of the Blog platform, Ortega (2020b) noted that Altmetric.com has better coverage than PlumX. In the case of the Mendeley platform, PlumX has higher coverage than Altmetric.com. One possible reason for this may be that PlumX and Mendeley are from the same parent company, and hence the data capture process is better integrated. The coverage of the Facebook platform by the two aggregators is found to be quite similar, with PlumX having an edge over Altmetric.com. One possible reason for Altmetric.com recording slightly fewer Facebook mentions is its policy of recording posts only from public pages. Ortega (2020a) found that in collecting mentions from Facebook and Mendeley, Altmetric.com performed poorly as compared to PlumX. It was found in previous studies (Zahedi & Costas, 2018b; Ortega, 2018a) that, in general, Altmetric.com captures more mentions per article in the case of the Twitter and Blog platforms, whereas PlumX captures more mentions per article for the Mendeley and Facebook platforms. In terms of correlations between the two aggregators for mentions in the same platform, Twitter and Mendeley achieve higher correlation values, indicating a similar magnitude of mentions captured by the two aggregators for the commonly covered set of articles. The correlation values are in the lower range in the case of the Facebook and Blog platforms, indicating higher differences in the mentions captured by the two aggregators in these platforms. These findings for Mendeley show agreement with previous studies, where this platform was noted to have the highest inter-correlations across the aggregators (Zahedi & Costas, 2018b). However, for the Twitter platform, it contradicts the finding of Meschede & Siebenlist, where Twitter was found to be in the lower similarity group along with other platforms like Facebook, Blog, etc. The variations of coverage and mention values of the two aggregators in the four platforms across different document types, disciplines and publishers show interesting patterns. Out of these three aspects, only discipline has been explored earlier, by Ortega (2020b) for Blog and News mentions. The variations by publisher and document types have not been explored earlier.
The results show that, in general, Altmetric.com has better coverage and higher mention values than PlumX for the Twitter platform across different document types, disciplines and publishers. Similarly, PlumX is seen to have better coverage and higher mention values than Altmetric.com in the case of Facebook, irrespective of the document type, discipline and publisher. However, in the case of the Mendeley and Blog platforms, the coverage and average mention values of the two aggregators do not show a consistent pattern across all the document types, disciplines and publishers. In some cases, PlumX has better coverage and higher mentions than Altmetric.com, while in several other cases Altmetric.com has better coverage and higher mentions. Thus, the variations in coverage and mentions across document types, disciplines and publishers are more clearly seen in the case of the Mendeley and Blog platforms. The present study thus presents a comprehensive account of the variations in coverage and mention values captured by the two aggregators - Altmetric.com and PlumX - across four different platforms. Further, the study is perhaps the first effort to analyse the variations across different document types and publishers. The analytical results are interesting and useful, with some contradicting and several others agreeing with the findings of previous studies, as illustrated above. The results have practical implications in terms of suggesting the use of a specific aggregator for data from different platforms.

Conclusion

The article presents the following useful results and conclusions. Firstly, PlumX has an overall higher coverage than Altmetric.com and tracks a wider number of platforms. Secondly, Altmetric.com captures more mentions per article in the case of the Twitter and Blog platforms, whereas PlumX captures more mentions per article in the case of the Mendeley and Facebook platforms. Thirdly, Altmetric.com and PlumX agree more in their mention values for the Twitter and Mendeley platforms, but the mention values differ more in the case of the Facebook and Blog platforms, as observed from the correlation values. Fourthly, Altmetric.com is found to have better coverage of Twitter whereas PlumX has better coverage of Facebook, across different document types, disciplines and publishers. In the case of the Mendeley and Blog platforms, variations in the patterns of coverage and magnitude of mentions are observed across different document types, disciplines and publishers. Overall, the analytical results present a comprehensive account of the variations in coverage and mentions of the two aggregators across four different platforms.
Standards for comprehensive sexual health services for young people under 25 years This document is a first response to the need to develop sexual health services for young people on a single site whilst awaiting research from pilot studies of 'one stop shops' suggested in the Sexual Health and HIV strategy. It is a document which is intended to be a tool to use for those wishing to set up a service providing testing for sexually transmitted infections and provision of contraceptive services for those under 25 years. It is not intended that such a service would replace existing specialist or general practice care but complement it, allowing clients to choose the service most appropriate and acceptable to them, with close links and clear pathways of care for referral between services. This paper should be used as a template when initiating and monitoring a clinic but some of the standards may not be achievable without significant financial input. However, economic limitations should not detract from striving to achieve the best possible care for those most at risk from sexually transmitted infections and unwanted pregnancies. For example, not all clinics will be able to provide the recommended tests for the diagnosis for gonorrhoea and chlamydia immediately, but should work towards achieving them. Although the upper age limit in this document is defined as 25 years, some providers may wish to limit clinics to those under 20 depending on local needs. Detailed information on specific issues such as consent and confidentiality, provision of contraception, investigation of non-sexually transmitted vaginal infections and sexually transmitted infection management and diagnosis are referenced and we recommend these are accessed by the users of this document. Many of the references themselves are live documents available on the worldwide web, and are constantly updated. The Sexual Health and HIV Strategy has now been published and these standards are aimed at those who wish to provide a level 2 sexual health service for young people wherever the setting e.g. genitourinary outreach clinic, contraceptive services, general practice. This document is a starting point to be reviewed and updated as new research becomes available, as the Sexual Health Strategy is implemented and with further input from providers of care (family planning, general practice, genitourinary medicine, gynaecology and paediatrics) and service users. All service providers must maintain a high quality of care and have networks both with those who provide more specialized services (Level 3) and Level 1 services. This document is an initial attempt to ensure that there is equity of clinical provision wherever a Level 2 sexual health service is provided and should be a useful tool for those setting up or monitoring services.
// Called repeatedly when this Command is scheduled to run
@Override
protected void execute() {
    /*
     * The logic below is currently disabled. When re-enabled, it first drives the
     * talon until the encoder reaches encoderTarget (direction given by the sign
     * of the target), then keeps rotating slowly until the colour sensor reports
     * the colour we want under the sensor.
     *
    if (encoderDone == false) {
        if (((Math.signum(encoderTarget) == 1) && (Robot.controlPanelSubsystem.readEncoderRaw() <= encoderTarget))
                || ((Math.signum(encoderTarget) == -1) && (Robot.controlPanelSubsystem.readEncoderRaw() >= encoderTarget))) {
            Robot.controlPanelSubsystem.moveTalonInDirection(encoderTarget, 0.5);
        } else {
            encoderDone = true;
        }
    } else if (Robot.controlPanelSubsystem.getSuspectedColor(Robot.controlPanelSubsystem.getSeenColor()) != colorWantedUnderSensor) {
        Robot.controlPanelSubsystem.moveTalonInDirection(encoderTarget, 0.2);
    }
    */
}
/** * Utility class that provides window decorations, custom border and resize * handler for borders useful for window resizing. * * @author Dafe Simonek */ final class DecorationUtils { /** No instances, utils class. */ private DecorationUtils () { } /** Creates and returns border suitable for decorating separate windows * in window system. * * @return Border for separate windows */ public static Border createSeparateBorder () { return new SeparateBorder(); } /** Creates and returns handler of window resizing, which works in given * insets. * @return The handler for resizing. */ public static ResizeHandler createResizeHandler (Insets insets) { return new ResizeHandler(insets); } /** Simple border with line and the space */ private static class SeparateBorder extends AbstractBorder { public Insets getBorderInsets (Component c) { return new Insets(3, 3, 3, 3); } public void paintBorder (Component c, Graphics g, int x, int y, int width, int height) { g.setColor(Color.DARK_GRAY); g.drawRect(x, y, width - 1, height - 1); } } // end of SeparateBorder /** Takes care about resizing of the window on mouse drag, * with proper resize cursors. * * Usage: Attach handler as mouse and mouse motion listener to the content pane of * the window:<br> * <code>rootPaneContainer.getContentPane().addMouseListener(resizeHandler);</code> * <code>rootPaneContainer.getContentPane().addMouseMotionListener(resizeHandler);</code> * */ static class ResizeHandler extends MouseAdapter implements MouseMotionListener { private Insets insets; private int cursorType; private boolean isPressed = false; /** Window resize bounds, class fields to prevent from allocating new * objects */ private Rectangle resizedBounds = new Rectangle(); private Rectangle movedBounds = new Rectangle(); private Point startDragLoc; private Rectangle startWinBounds; /** holds minimum size of the window being resized */ private Dimension minSize; public ResizeHandler (Insets insets) { this.insets = insets; } public void mouseDragged(MouseEvent e) { check(e); Window w = SwingUtilities.getWindowAncestor((Component)e.getSource()); if (Cursor.DEFAULT_CURSOR == cursorType) { // resize only when mouse pointer in resize areas return; } Rectangle newBounds = computeNewBounds(w, getScreenLoc(e)); if (!w.getBounds().equals(newBounds)) { w.setBounds(newBounds); } } public void mouseMoved(MouseEvent e) { check(e); Component comp = (Component)e.getSource(); movedBounds = comp.getBounds(movedBounds); cursorType = getCursorType(movedBounds, e.getPoint()); comp.setCursor(Cursor.getPredefinedCursor(cursorType)); } public void mousePressed(MouseEvent e) { isPressed = true; startDragLoc = getScreenLoc(e); Window w = SwingUtilities.getWindowAncestor((Component)e.getSource()); startWinBounds = w.getBounds(); resizedBounds.setBounds(startWinBounds); minSize = w.getMinimumSize(); } public void mouseReleased(MouseEvent e) { isPressed = false; startDragLoc = null; startWinBounds = null; minSize = null; } public void mouseExited(MouseEvent e) { Component comp = (Component)e.getSource(); comp.setCursor(Cursor.getDefaultCursor()); } private int getCursorType (Rectangle b, Point p) { int leftDist = p.x - b.x; int rightDist = (b.x + b.width) - p.x; int topDist = p.y - b.y; int bottomDist = (b.y + b.height) - p.y; boolean isNearTop = topDist >= 0 && topDist <= insets.top; boolean isNearBottom = bottomDist >= 0 && bottomDist <= insets.bottom; boolean isNearLeft = leftDist >= 0 && leftDist <= insets.left; boolean isNearRight = rightDist >= 0 && rightDist <= insets.right; boolean 
isInTopPart = topDist >= 0 && topDist <= insets.top + 10; boolean isInBottomPart = bottomDist >= 0 && bottomDist <= insets.bottom + 10; boolean isInLeftPart = leftDist >= 0 && leftDist <= insets.left + 10; boolean isInRightPart = rightDist >= 0 && rightDist <= insets.right + 10; if (isNearTop && isInLeftPart || isInTopPart && isNearLeft) { return Cursor.NW_RESIZE_CURSOR; } if (isNearTop && isInRightPart || isInTopPart && isNearRight) { return Cursor.NE_RESIZE_CURSOR; } if (isNearBottom && isInLeftPart || isInBottomPart && isNearLeft) { return Cursor.SW_RESIZE_CURSOR; } if (isNearBottom && isInRightPart || isInBottomPart && isNearRight) { return Cursor.SE_RESIZE_CURSOR; } if (isNearTop) { return Cursor.N_RESIZE_CURSOR; } if (isNearLeft) { return Cursor.W_RESIZE_CURSOR; } if (isNearRight) { return Cursor.E_RESIZE_CURSOR; } if (isNearBottom) { return Cursor.S_RESIZE_CURSOR; } return Cursor.DEFAULT_CURSOR; } private Rectangle computeNewBounds (Window w, Point dragLoc) { if (startDragLoc == null) { throw new IllegalArgumentException("Can't compute bounds when startDragLoc is null"); //NOI18N } int xDiff = dragLoc.x - startDragLoc.x; int yDiff = dragLoc.y - startDragLoc.y; resizedBounds.setBounds(startWinBounds); switch (cursorType) { case Cursor.E_RESIZE_CURSOR: resizedBounds.width = startWinBounds.width + (dragLoc.x - startDragLoc.x); break; case Cursor.W_RESIZE_CURSOR: resizedBounds.width = startWinBounds.width - xDiff; resizedBounds.x = startWinBounds.x + xDiff; break; case Cursor.N_RESIZE_CURSOR: resizedBounds.height = startWinBounds.height - yDiff; resizedBounds.y = startWinBounds.y + yDiff; break; case Cursor.S_RESIZE_CURSOR: resizedBounds.height = startWinBounds.height + (dragLoc.y - startDragLoc.y); break; case Cursor.NE_RESIZE_CURSOR: resize(resizedBounds, 0, yDiff, xDiff, -yDiff, minSize); break; case Cursor.NW_RESIZE_CURSOR: resize(resizedBounds, xDiff, yDiff, -xDiff, -yDiff, minSize); break; case Cursor.SE_RESIZE_CURSOR: resize(resizedBounds, 0, 0, xDiff, yDiff, minSize); break; case Cursor.SW_RESIZE_CURSOR: resize(resizedBounds, xDiff, 0, -xDiff, yDiff, minSize); break; default: System.out.println("unknown cursor type : " + cursorType); //throw new IllegalArgumentException("Unknown/illegal cursor type: " + cursorType); //NOI18N break; } return resizedBounds; } private static void resize (Rectangle rect, int xDiff, int yDiff, int widthDiff, int heightDiff, Dimension minSize) { rect.x += xDiff; rect.y += yDiff; rect.height += heightDiff; rect.width += widthDiff; // keep size at least at minSize rect.height = Math.max(rect.height, minSize.height); rect.width = Math.max(rect.width, minSize.width); } private Point getScreenLoc (MouseEvent e) { Point screenP = new Point(e.getPoint()); SwingUtilities.convertPointToScreen(screenP, (Component) e.getSource()); return screenP; } /* Checks that handler is correctly attached to the window */ private void check(MouseEvent e) { Object o = e.getSource(); if (!(o instanceof Component)) { throw new IllegalArgumentException("ResizeHandler works only with Component, not with " + o); //NOI18N } Window w = SwingUtilities.getWindowAncestor((Component)o); if (w == null) { throw new IllegalStateException("Can't find and resize the window, not attached."); //NOI18N } } } // end of ResizeHandler }
Heritage Harbor is a planned vacation community in Ottawa near Starved Rock State Park, located less than two hours from downtown Chicago. Illinois agents should become well-versed in second home and investment property knowledge to help buyers find a place in the state to purchase. Asking the right questions and understanding what a buyer is specifically looking for can help ensure vacation properties are in the correct neighborhood, the right distance and with the perfect amenities. It’s no secret that Illinois isn’t a popular vacation spot for its own residents, but that doesn’t mean local agents shouldn’t be well-versed in selling second homes. Although the majority of Chicagoans searching for a secluded oasis will search east into Michigan or north into Wisconsin, southern communities in the state have plenty to offer the vacation home seeker. Tammy Barry with Heritage Harbor is all too familiar with the lack of knowledge of the market in one of the state’s most beloved getaway spots: Starved Rock State Park. Located just outside of Ottawa in central Illinois, Starved Rock is known for its beautiful canyon-like formations, old tree growth and network of lake and river amenities. In 2007, just before the market crashed, the vision to create a vacation oasis less than two hours from downtown Chicago blossomed. Despite the tumbling market, the team behind Heritage Harbor stuck with their intentions and master plan to develop a nature-oriented lifestyle with homes of various sizes, a 160-slip marina, three swimming pools, a restaurant and more. One big obstacle for the team, though, was getting both agents and homebuyers to become familiar with the vacation resort. Tammy Barry spoke with Inman News about the most important considerations for agents working with vacation home and investment property buyers in the Chicagoland area. Agents who are feeling stale or stuck with their current client relationships can use the opportunity to sell vacation homes as a way to reconnect. Chances are, that family that purchased a home two years ago in Downers Grove isn’t looking for a new property, but they may be considering a second home. Barry suggests reconnecting to discuss options and open up as a resource for past clients. But agents who are looking for new clients or new agents seeking to broaden their base can use their knowledge of vacation rental properties as a conversation starter over cocktails or coffee. She suggests simply asking “What do you have going on this weekend?” The question can begin to stir the pot and lead you down the right path to discuss more in-depth about vacation properties, she says. Sunday summer traffic from Michigan and Wisconsin is all too familiar for any Chicagoan who has taken a weekend getaway to popular destinations in these states. But having a location that is a bit more “off the beaten path” eliminates some of the stress of traveling during non-peak times. Do you plan to rent out the property? What type of amenities are you looking for? How many seasons do you plan to live there? What’s your budget? What kind of lifestyle do you want? How close do you need to be to amenities? Do you want a fixer upper or to be a part of a new community? These are questions agents need to ask buyers looking for a vacation or investment property, Barry says. Many vacation home buyers also want to know what type of residents live in a specific community.
// src/i18n/languages/ko.ts
import { AUDIFICATION, GUIDE_DOGE, I18n, VISUALIZATION } from '../types';
import { en } from './en';

export const ko: I18n = {
  ...en,
  [GUIDE_DOGE.TITLE]: 'Guide-Doge',
  [GUIDE_DOGE.VISUALIZATION]: '데이터 시각화',
  [GUIDE_DOGE.AUDIFICATION]: '데이터 청각화',
  [AUDIFICATION.INSTRUCTIONS]: [
    '<kbd>SPACE</kbd>를 눌러 청각화된 음향을 재생하고 <kbd>SHIFT</kbd> + <kbd>SPACE</kbd>를 눌러 거꾸로 재생합니다.',
    '<kbd>X</kbd> 또는 <kbd>Y</kbd>를 눌러 정의역 또는 치역을 읽습니다.',
    '<kbd>L</kbd>을 눌러 범례 항목들을 읽습니다.',
    '<kbd>UP</kbd> 또는 <kbd>DOWN</kbd>을 눌러 범례 항목을 변경합니다.',
    '<kbd>0</kbd> ... <kbd>9</kbd>를 눌러 재생 위치를 이동합니다.',
  ].join(' <br/>'),
  [AUDIFICATION.DOMAIN]: '정의역은 %(domain_min)s 부터 %(domain_max)s 까지이며 한 음표는 %(domain_unit)s을 나타냅니다.',
  [AUDIFICATION.RANGE]: '치역은 %(range_min)s 부터 %(range_max)s 까지입니다.',
  [AUDIFICATION.ACTIVE_POINT]: '%(y)s, %(x)s.',
};
Welcome to Bauhaus Brew Labs! We assume you're here because you not only enjoy delicious, world-class local beer, but you also enjoy the combination of delicious beer with your favorite tunes and über-awesome friends. Well, you’ve come to the right place – welcome to the family! We are a growing family of forward drinkers who celebrate the joy of art and craft in everyday life – the Bauhaus way. We’ve designed our beers to do nothing less than ignite your senses and make your wilder dreams come true.* *Note: Bauhaus will not be held responsible if you attempt to speak German, mistakenly purchase a fanny pack, finally buy that pogo stick, high-five a stranger, adopt a litter of kittens, claim to know karate or attempt to name all variations of wurst after tasting our beer for the first time. On the other hand, we wouldn't be surprised if you did.
"There's a lot more demand for people who want to just improve themselves than anyone would have guessed," says Salman Khan, founder of the wildly popular free educational video series that bears his name and author of the new book The One World Schoolhouse: Education Reimagined (Twelve).

Khan, a 36-year-old Bangladeshi American, first put together a couple of video tutorials in 2004 to help his young cousins learn math. The videos proved so popular on YouTube that two years later he launched the nonprofit Khan Academy to offer free online lectures and tutorials that are now used by more than 6 million students each month. More than 3,000 individual videos, covering mathematics, physics, history, economics, and other subjects, have drawn more than 200 million views, generating significant funding from both the Bill & Melinda Gates Foundation and Google. Khan Academy is one of the best-known names in online education and has grown to include not just tutorials but complete course syllabi and a platform to track student progress.

Reason TV Editor in Chief Nick Gillespie sat down with Khan in October to discuss how American education can be radically transformed, why technology is so widely misused in K-12, and how massive amounts of taxpayer money never make it inside conventional public classrooms.

reason: Talk a little bit about the videos and the enormous growth in their audience during the last few years.

Salman Khan: People who look at the videos will see someone writing on a digital blackboard. And you'll hear a voice. For a lot of the videos it'll be my voice, working through things, thinking through things—very conversational. It started with me making it for my cousins. It soon became clear that people who were not my cousins were watching them. They just kind of took on a life of their own.

Many things have surprised me over the last several years. The biggest thing is that when I made these things I assumed these were for my cousins; they were pretty motivated students. I made them for what I would have wanted if I were 12 years old or 13 or 18 years old. I said: Well, maybe this will be for the subset of people who are really motivated, whatever that means. They'll actually seek out knowledge on the Internet, and then they'll find it useful. It didn't take long to realize that the feedback we were getting was from people who were not the traditionally motivated: kids who were about to fail classes, kids who were thinking about dropping out, people who were going back to school. And they were saying [the videos] make me understand the intuition, the big picture, and I'm starting to get excited about math. So the big realization is—and I think this surprised frankly everybody—there's a lot more demand from people who just want to improve themselves than anyone would have guessed.

reason: In the book you mention that New York state spends about $18,000 per public school student per year. Clearly New York state is not known for great schools or great outcomes. We're spending $18,000 a year for flat results over the past 40 years for public schools. What's wrong with the status quo?

Khan: The reason I highlighted that in the book is that a lot of times people make it sound like it's a money issue. The problem is you can never say you're spending too much on education. It's such an important thing; if you can get a dollar of value in education, it's worth it.

reason: Although that's not what's been happening.

Khan: Exactly.
And when you look at the $18,000 number (or even in the lower districts that spend less, $8,000 or $9,000), and you multiply that by how many students are in a classroom—someplace between 20 and 30—you get a fairly large number. You get something [in the range of] $300,000, $400,000, $500,000. When you do that very simple back-of-the-envelope calculation, you realize how little of that money is actually touching the student. Very little of that is going to the teacher. Very little is going directly for the facilities. Most of that is going for layers of administration. We can actually professionalize teachers as they are, turn it into a career that pays as well as doctors. The money is there. There just has to be major restructuring in how you spend that money. reason: Is there any reason to believe that if we tripled what we pay teachers we would have teachers that are 200 percent better? Khan: I don’t know. I think the general sense is that there’s a lot of lip service being given to teachers: Oh, we need to respect you. We want the best of the best to be doing this. But society’s not sending that economic signal. In engineering I used to say: How come more people are going into finance than engineering? Well, look at the salaries, and you get a very clear picture of why. Now that’s actually changing in engineering. Engineers can do just as well as or better than people in finance. I think that has to happen in teaching. We are already getting a lot of great talent in teaching, but we’ll get even more people who aspire to do this. And it will change the dynamic in the classroom to where the students say, I wish I had a chance of becoming that person who I have the privilege to be with in this room. That completely changes the dynamic of the classroom. I think that’s possible. A lot of the excuses—oh, we can’t have technology; it’s too expensive—those are a round-off error compared to the amount of money that’s being spent even on things like textbooks and whatever else. reason: One of the things you emphasize is that there are multiple ways and multiple sites of education. Talk a bit about how we have to start reimagining education so it’s not something that happens eight-and-a-half months a year in a brick building with bad air conditioning.
East Malling Research Station History A research station was established on the East Malling site in 1913 at the impetus of local fruit growers. The original buildings are still in use today. Some of the finest and most important research on perennial crops has been conducted on the site, resulting in East Malling’s worldwide reputation. Some of the better-known developments have been achieved in the areas of plant raising, fruit plant culture (especially the development of rootstocks), fruit breeding, ornamental breeding, fruit storage and the biology and control of pests and diseases. From 1990 a division of Horticulture Research International (HRI) was on the site. HRI closed in 2009. In 2016, East Malling Research became part of the National Institute of Agricultural Botany (NIAB) group. Apple rootstocks In 1912, Ronald Hatton initiated the work of classification, testing and standardisation of apple tree rootstocks. With the help of Dr Wellington, Hatton sorted out the incorrect naming and mixtures then widespread in apple rootstocks distributed throughout Europe. These verified and distinct apple rootstocks are called the "Malling series". The most widely used was the M9 rootstock. Structure It is situated east of East Malling, and north of the Maidstone East Line. The western half of the site is in East Malling and Larkfield and the eastern half is in Ditton. It is just south of the A20, and between junctions 4 and 5 of the M20 motorway. Function Today the Research Centre also acts as a business enterprise centre supported by leading local businesses including QTS Analytical and Network Computing Limited. The conference centre trades as East Malling Ltd, being incorporated on 17 February 2004.
// repo: jezzi23/code-indexer
#include "file_mapped_io.h"

#include <iostream>

#include "Windows.h"
#include "types.h"
#include "utils.h"

internal_ const u64 file_fill_size = 1024ULL;

FileMapper::FileMapper(const char* file_path) {
  // Default to 0 so file_size is never read uninitialized if CreateFile fails.
  u64 file_size = 0;
  HANDLE file_handle = CreateFile(file_path,
                                  GENERIC_READ | GENERIC_WRITE,
                                  FILE_SHARE_READ,
                                  NULL,
                                  OPEN_ALWAYS,
                                  FILE_ATTRIBUTE_NORMAL,
                                  NULL);
  if (file_handle == INVALID_HANDLE_VALUE) {
    std::cerr << "File handle creation failed: ";
    std::cerr << GetLastError() << std::endl;
  } else if (ERROR_ALREADY_EXISTS == GetLastError()) {
    // The file already existed; use its current size for the mapping.
    LARGE_INTEGER file_size_tmp;
    GetFileSizeEx(file_handle, &file_size_tmp);
    file_size = file_size_tmp.QuadPart;
  }

  u32 low_order_val, high_order_val;
  if (file_size == 0) {
    unpack(file_fill_size, high_order_val, low_order_val);
  } else {
    unpack(file_size, high_order_val, low_order_val);
  }
  HANDLE map_handle = CreateFileMapping(file_handle,
                                        NULL,
                                        PAGE_READWRITE,
                                        high_order_val,
                                        low_order_val,
                                        NULL);
  if (map_handle == NULL) {
    std::cerr << "File map handle creation failed: ";
    std::cerr << GetLastError() << std::endl;
  }
  //TODO: Can file_handle be closed here?
  handle = { map_handle, file_size ? file_size : file_fill_size };
}

FileMapper::~FileMapper() {
  CloseHandle(handle.file_handle);
}

u64 FileMapper::getFileSize() {
  return handle.file_size;
}

void* FileMapper::map(u64 byte_offset, u32 length) {
  u32 low_order;
  u32 high_order;
  unpack(byte_offset, high_order, low_order);
  void* result = MapViewOfFile(handle.file_handle,
                               FILE_MAP_WRITE,
                               high_order,
                               low_order,
                               length);
  if (result == NULL) {
    std::cerr << "Mapping file to memory failed: ";
    std::cerr << GetLastError() << std::endl;
  }
  return result;
}

void FileMapper::unmap(void* mapped_mem, u32 length) {
  BOOL success = UnmapViewOfFile(mapped_mem);
  if (!success) {
    std::cerr << "Unmapping file from memory failed: ";
    std::cerr << GetLastError() << std::endl;
  }
}
The following background information may present examples of specific aspects of the prior art (e.g., without limitation, approaches, facts, or common wisdom) that, while expected to be helpful to further educate the reader as to additional aspects of the prior art, is not to be construed as limiting the present invention, or any embodiments thereof, to anything stated or implied therein or inferred thereupon. The following is an example of a specific aspect in the prior art that, while expected to be helpful to further educate the reader as to additional aspects of the prior art, is not to be construed as limiting the present invention, or any embodiments thereof, to anything stated or implied therein or inferred thereupon. By way of educational background, another aspect of the prior art generally useful to be aware of is that a flatbread is a simple bread made with flour or corn, water, and salt, and then thoroughly rolled into flattened dough. A tortilla is a type of flatbread. Typically, the tortilla is a type of thin flatbread made from finely ground wheat flour. Tortillas are commonly prepared with meat to make dishes such as tacos, burritos, and enchiladas. It is well known that the preparation of tortillas involves numerous manipulations of dough, cooking pans, and finished tortillas. In one common preparation, a dough composition made from flour or corn, water, and salt is mixed and formed into a generally large ball shape. Next, the dough is divided into smaller 1½″ balls. Each ball is flattened and rolled into an approximate 7″ circle with a rolling pin. The circle of dough may then be cooked in a greased griddle for approximately one minute on each side. Typically, it is at the stage of cooking, filling, and serving that difficulties may arise with preparation of the tortilla. The griddle is hot and has hot oil therein. An inadequate utensil makes grasping the tortilla difficult without breaking it. Also, the tortilla must be filled and rolled while still hot to achieve optimal taste. This can be difficult without the proper utensil. Even though the above-cited methods for holding a flatbread meet some of the needs of the market, a utensil and method for holding a flatbread that enables the flatbread to be clamped, penetrated, flipped, dipped, and rolled during preparation is still desired.
Kingship, Society and the Church in Anglo-Saxon Yorkshire Some hard work with a magnifying glass and some far-fetched speculation (mine) in the search room about a Midlands Don Quixote and his broken-down nag Rocinante, and you will see that the Speaking Steed being ridden towards Hutton is not a horse or a donkey. What the animal utters (Maaa) is not as important as what it is: An Ass. For was Hutton not elected a Fellow of the Society of Antiquaries of Scotland in 1781? Did the election not gratify and delight him beyond measure? Did he not plaster F.A.S.S. (Fellow, Antiquaries Society of Scotland) over everything he wrote? paste it up in his shop window? print it on his bookplates? Who would have thought it? We have been told for two centuries now that visual satire could only emanate from the Great Metropolis, but it too was Made in Birmingham.
// incomplete declarations for the postcss-values-parser
declare module 'postcss-values-parser' {
  export type Node = Root | ChildNode;

  interface NodeBase {
    next(): ChildNode | void;
    prev(): ChildNode | void;
  }

  interface ContainerBase extends NodeBase {
    nodes: ChildNode[];
    first?: ChildNode;
    last?: ChildNode;
  }

  export interface Root extends ContainerBase {
    type: 'root';
  }

  export type ChildNode =
    | AtWord
    | Comment
    | Func
    | Interpolation
    | Numeric
    | Operator
    | Punctuation
    | Quoted
    | UnicodeRange
    | Word;

  export type Container = Root | Func | Interpolation;

  export interface AtWord extends ContainerBase {
    type: 'atrule';
    parent: Container;
    name: string;
    params: string;
  }

  export interface Comment extends ContainerBase {
    type: 'comment';
    parent: Container;
    inline: boolean;
    text: string;
  }

  export interface Func extends ContainerBase {
    type: 'func';
    parent: Container;
    isColor: boolean;
    isVar: boolean;
    name: string;
    params: string;
  }

  export interface Interpolation extends ContainerBase {
    type: 'interpolation';
    parent: Container;
    params: string;
    prefix: string;
  }

  export interface Numeric extends NodeBase {
    type: 'numeric';
    parent: Container;
    unit: string;
    value: string;
  }

  export interface Operator extends NodeBase {
    type: 'operator';
    parent: Container;
    value: string;
  }

  export interface Punctuation extends NodeBase {
    type: 'punctuation';
    parent: Container;
    value: string;
  }

  export interface Quoted extends NodeBase {
    type: 'quoted';
    parent: Container;
    quote: string;
    value: string;
    contents: string;
  }

  export interface UnicodeRange extends NodeBase {
    type: 'unicodeRange';
    parent: Container;
    name: string;
  }

  export interface Word extends NodeBase {
    type: 'word';
    parent: Container;
    isColor: boolean;
    isHex: boolean;
    isUrl: boolean;
    isVariable: boolean;
    value: string;
  }

  interface ParseOptions {
    ignoreUnknownWords?: boolean;
    interpolation?: boolean | InterpolationOptions;
    variables?: VariablesOptions;
  }

  interface InterpolationOptions {
    prefix: string;
  }

  interface VariablesOptions {
    prefixes: string[];
  }

  export function parse(css: string, options?: ParseOptions): Root;
}
def _check_password_history(self, password):
    """Raise if the new password matches one of the user's recent passwords."""
    crypt = self._crypt_context()
    for rec_id in self:
        recent_passes = rec_id.company_id.password_history
        if recent_passes < 0:
            # A negative setting means: check against the full stored history.
            recent_passes = rec_id.password_history_ids
        else:
            recent_passes = rec_id.password_history_ids[
                0: recent_passes - 1
            ]
        if recent_passes.filtered(
                lambda r: crypt.verify(password, r.password_crypt)):
            raise PassError(
                _(u'Cannot use the %d most recent passwords') %
                rec_id.company_id.password_history
            )
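The check above relies on a passlib CryptContext to compare the candidate password against stored hashes. A minimal standalone sketch of the same idea follows; it assumes passlib is installed, and the in-memory history list, the keep_last window and the pbkdf2_sha512 scheme are illustrative choices rather than part of the original module.

from passlib.context import CryptContext

crypt = CryptContext(schemes=["pbkdf2_sha512"])

# Hypothetical stored history: newest hash first, as in password_history_ids.
history_hashes = [crypt.hash(p) for p in ("winter2023", "spring2023", "summer2022")]

def violates_history(candidate, hashes, keep_last=2):
    """Return True if candidate matches any of the keep_last newest hashes."""
    recent = hashes if keep_last < 0 else hashes[:keep_last]
    return any(crypt.verify(candidate, h) for h in recent)

print(violates_history("winter2023", history_hashes))      # True: inside the recent window
print(violates_history("summer2022", history_hashes))      # False: outside the window
print(violates_history("brand-new-pass", history_hashes))  # False: never used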
<filename>spring-vaadin/src/main/java/org/vaadin/spring/i18n/Translator.java /* * Copyright 2014 The original authors * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.vaadin.spring.i18n; import org.springframework.context.MessageSource; import org.springframework.util.StringUtils; import org.vaadin.spring.internal.ClassUtils; import java.io.IOException; import java.io.ObjectInputStream; import java.io.Serializable; import java.lang.reflect.Field; import java.lang.reflect.InvocationTargetException; import java.lang.reflect.Method; import java.util.HashMap; import java.util.Locale; import java.util.Map; import static org.vaadin.spring.internal.ClassUtils.visitClassHierarchy; /** * This translator class has been designed with Vaadin UIs in mind, but it works with other classes as well. The idea is that * a UI component is composed of other components (e.g. labels and text fields) that contain properties that need to be translated (e.g. captions, descriptions). * These translatable properties can be mapped to {@link org.springframework.context.MessageSource} keys by using the {@link org.vaadin.spring.i18n.TranslatedProperty} annotation. * A {@code Translator} instance would then be created for the UI component (the target object). When the {@link #translate(java.util.Locale, org.springframework.context.MessageSource)} method * is invoked, the translator will go through all fields and getter methods that are annotated and update the translated properties with values from the {@code MessageSource}. * <p/> * For example, a label with a translatable value would be defined like this: * <code> * <pre> * &#64;TranslatedProperty(property = "value", key = "myLabel.value") * private Label myLabel; * </pre> * </code> * When translating the UI, the translator would first get the message with the key {@code myLabel.value} from the {@code MessageSource}, and then * invoke {@link com.vaadin.ui.Label#setValue(String)}, passing in the message as the only parameter. * <p/> * The UI component is itself responsible for creating the translator and invoking the {@link #translate(java.util.Locale, org.springframework.context.MessageSource)} method * when necessary (normally on initial component setup and when the locale changes). * * @author <NAME> (<EMAIL>) * @see org.vaadin.spring.i18n.TranslatedProperties * @see org.vaadin.spring.i18n.TranslatedProperty */ public class Translator implements Serializable { private final Object target; private transient Map<TranslatedProperty, Field> translatedFields; private transient Map<TranslatedProperty, Method> translatedMethods; /** * Creates a new translator. * * @param target the object that will be translated, never {@code null}. 
*/ public Translator(Object target) { this.target = target; analyzeTargetClass(); } private void readObject(ObjectInputStream io) throws IOException, ClassNotFoundException { io.defaultReadObject(); analyzeTargetClass(); } private void analyzeTargetClass() { translatedFields = new HashMap<TranslatedProperty, Field>(); translatedMethods = new HashMap<TranslatedProperty, Method>(); visitClassHierarchy(new ClassUtils.ClassVisitor() { @Override public void visit(Class<?> clazz) { analyzeFields(clazz); analyzeMethods(clazz); } }, target.getClass()); } private void analyzeMethods(Class<?> clazz) { for (Method m : clazz.getDeclaredMethods()) { if (m.getParameterTypes().length == 0 && m.getReturnType() != Void.TYPE) { if (m.isAnnotationPresent(TranslatedProperty.class)) { translatedMethods.put(m.getAnnotation(TranslatedProperty.class), m); } else if (m.isAnnotationPresent(TranslatedProperties.class)) { for (TranslatedProperty annotation : m.getAnnotation(TranslatedProperties.class).value()) { translatedMethods.put(annotation, m); } } } } } private void analyzeFields(Class<?> clazz) { for (Field f : clazz.getDeclaredFields()) { if (f.isAnnotationPresent(TranslatedProperty.class)) { translatedFields.put(f.getAnnotation(TranslatedProperty.class), f); } else if (f.isAnnotationPresent(TranslatedProperties.class)) { for (TranslatedProperty annotation : f.getAnnotation(TranslatedProperties.class).value()) { translatedFields.put(annotation, f); } } } } /** * Translates the target object using the specified locale and message source. * * @param locale the locale to use when fetching messages, never {@code null}. * @param messageSource the message source to fetch messages from, never {@code null}. */ public void translate(Locale locale, MessageSource messageSource) { translateFields(locale, messageSource); translatedMethods(locale, messageSource); } private void translateFields(Locale locale, MessageSource messageSource) { for (Map.Entry<TranslatedProperty, Field> fieldEntry : translatedFields.entrySet()) { final Field field = fieldEntry.getValue(); final TranslatedProperty annotation = fieldEntry.getKey(); field.setAccessible(true); Object fieldValue; try { fieldValue = field.get(target); } catch (IllegalAccessException e) { throw new RuntimeException("Could not access field " + field.getName()); } if (fieldValue != null) { setPropertyValue(fieldValue, annotation.property(), messageSource.getMessage(annotation.key(), null, annotation.defaultValue(), locale)); } } } private void translatedMethods(Locale locale, MessageSource messageSource) { for (Map.Entry<TranslatedProperty, Method> methodEntry : translatedMethods.entrySet()) { final Method method = methodEntry.getValue(); final TranslatedProperty annotation = methodEntry.getKey(); method.setAccessible(true); Object methodValue; try { methodValue = method.invoke(target); } catch (IllegalAccessException e) { throw new RuntimeException("Could not access method " + method.getName()); } catch (InvocationTargetException e) { throw new RuntimeException("Could not invoke method " + method.getName(), e); } if (methodValue != null) { setPropertyValue(methodValue, annotation.property(), messageSource.getMessage(annotation.key(), null, annotation.defaultValue(), locale)); } } } private static void setPropertyValue(Object target, String propertyName, String value) { final String setterMethodName = "set" + StringUtils.capitalize(propertyName); try { final Method setterMethod = target.getClass().getMethod(setterMethodName, String.class); setterMethod.invoke(target, 
value); } catch (NoSuchMethodException e) { throw new IllegalArgumentException("No public setter method found for property " + propertyName); } catch (InvocationTargetException e) { throw new RuntimeException("Could not invoke setter method for property " + propertyName, e); } catch (IllegalAccessException e) { throw new RuntimeException("Could not access setter method for property " + propertyName); } } }
Mumbaikars are fed up of excavations in the name of either the metro, or road repairs, or sewerage and drainage work, water supply etc. Now residents of Matunga West, near DG Ruparel College, are complaining of continuous digging without heed to time of day or night. mid-day has found that the digging work is also being carried out though traffic police's permission for it has lapsed. The Brihanmumbai Municipal Corporation is laying a sewerage system at the site, using a new micro-tunneling technology. However, this is being done in violation of the traffic department permission, as it is being carried out on the ever busy Senapati Bapat Road even during peak hours. The traffic department had initially permitted the work only at night. Locals allege that it is also causing traffic congestion on Senapati Bapat Road. The initial permission for carrying out this work which is spread over a 1 km area, was from October 12 to December 12, as per the NOC of the traffic department, a copy of which is with mid-day. mid-day visited the junction of Bal Govindas Road and Senapati Bapat Road, where the work is on. The NOC from the Mumbai Traffic Police states that the work should be carried out between 12.00 am and 6.00 am which clearly was not the case, with the work going on in full swing at 5.00 pm. Ghanshyam Mulani, an activist, local resident and a BJP south Central Mumbai District President (Sindhi Cell), said, "I have complained to the senior authorities of traffic police and also written to government officials but still there is no respite. I demand that action be taken against the traffic authorities for not following up with the issues and allowing gross violations time and again. This is completely irresponsible on the part of the BMC to carry out work in such a manner." Traffic Police Inspector from Dadar, Sujata Shejale said, "I am not aware off hand if the permissions were renewed. But generally they (BMC) will not work without permissions, in case of something like this. I will ask officers to get it checked."
Effect of Substitution in Perovskite-type Oxide Superconductor Several kinds of ions were substituted into perovskite-type BaPb1-xBixO3 (BPB), and the changes thereby brought about in electrical and superconducting properties have been studied. For B site substitution, only Bi is effective to bring about superconductivity with Tc above 3 K. Meanwhile, Cs, Rb and Sr were substituted into A site. These resulted in a reduction in lattice constants, accompanied by a decrease in Tc, when the substitution amount exceeded about 10% of A site ions.
Fiber based products used as packages must both be able to protect the packed product from outer influences as well as withstand the influence of the packed product. One way to achieve the desired protection is to provide the package with a barrier. In the case of perishable products, such as oxygen sensitive products, oxygen barrier characteristics of the package are required to provide extended shelf life for the packaged product. By limiting the exposure of oxygen-sensitive products to oxygen, the quality and shelf life of many products are maintained and enhanced. For instance, by limiting the oxygen exposure of oxygen-sensitive food products in a packaging system, the quality of the food product can be maintained and spoilage retarded. In addition, such packaging also keeps the product in inventory longer, thereby reducing costs incurred from waste and having to restock. Barriers against for example liquids and/or grease can be used in order to increase the package's resistance of the packed product. Barriers are normally created by coating the fiber based substrate with a composition which gives the substrate barrier properties. Different coatings can be applied depending on the needed properties of the barrier. The most commonly used materials when forming a barrier on a fiber based product, are polyethylene (PE), polypropylene (PP), polyethylene terephthalate (PET), ethylene vinyl alcohol (EVOH) or ethylene vinyl acetate (EVA). EVOH is normally used in order to create oxygen barriers and PE or PET is normally used in order to create a liquid and/or vapor barrier. The polymers are normally either laminated or extrusion coated to the fiber based product. However, the thickness of a polymer layer which gives a product barrier properties normally need to be very thick and it is quite costly to produce such barrier product. Another commonly used material in order to produce a barrier is aluminum. A layer comprising aluminum is above all used in order to improve the oxygen and light barrier of a paper or paperboard product. The thickness of an aluminum layers is normally quite thin, often around 7-9 μm. Aluminum gives excellent barrier properties but it strongly increases the carbon dioxide load of the product. Furthermore, aluminum decreases the possibility to recycle the package. There is therefore a demand from both producers and end users to avoid the use of aluminum layers in paper or paperboard products in order to decrease the carbon dioxide load of a product. There is still a need for an improved fiber based product with barrier properties which is both more economical beneficial to produce and which can be produced at a low carbon dioxide load.
// Generate the nth Fibonacci number (fib(0) = 0, fib(1) = 1).
fn fib(n: u64) -> u128 {
    let (mut a, mut b): (u128, u128) = (0, 1);
    for _ in 0..n {
        let next = a + b;
        a = b;
        b = next;
    }
    a
}
Talking About Classical Music In the spacious, public foyer of Londons Southbank Centre, Europes largest arts centre, a wall-sized advert trails the concerts of the venues four resident orchestras with the slogan a classical music season exclusively for pretty much everyone. Orthodox marketing practice might well blanche at the use of exclusively to describe classical music. Inclusivity and accessibility are the contemporary watchwords of a musical genre long dogged by cultural stereotypes, particularly surrounding (middle) class and (old) age. But the slogans deliberate oxymoron is surely self-aware and provocative, aiming to stop readers in their tracks, to play on classical musics image problem, and ultimately, of course, to attract concertgoers. More broadly, then, the slogan underlines the importance of language to how classical music is perceived today, and the sensitivities that influence and regulate that association. As a marketing ploy, exclusively here is both an invitationthe music these orchestras produce is for you, dear readerand a qualified reminder of classical musics elite credentials. Potential concertgoers are invited to imagine a special or premier event, not one that is cliquish or exclusory. How such language frames classical music is the central theme of this chapter. Language is used in myriad ways to contextualise and set expectations about classical music, but many such forms currently slip under musicologys radar, despite being essential to how the genre is perceived: from programme notes, liner notes, and reviews that steer audiences experiences, to bluffers guides and the efforts of marketers to promote and demystify classical music. Consider also the rise of social media, societys keen appropriation of classical music, and oral media such as podcasts and radio, and the work required to understand how perceptions of classical music are shaped in the broadest sense becomes clear. To appreciate this argument is also to begin to make the case for public musicology, a bidirectional process that recognises and attaches greater significance to public-musicological artefacts (such as liner notes and radio) and considers how musicology can make music relevant and useful in the public sphere. This nascent field is particularly pertinent to classical music, with its grand history and exclusive image. This chapter focusses on one of the most public forms of musicology to classify and critique how BBC Radio 3 and Classic FM speak about the music they broadcast. To survey the types and range of language they use is to reveal not only how the genre is portrayed on the radio today, but also the assumptions about what classical music is, and what it is supposed or presumed to do. In turn, the chapter will offer an account of how Radio 3 and Classic FM fulfil different but overlapping roles in todays classical music industry. Figures show that these stations reach 1.89 and 5.36 million listeners per quarter respectively, making radio by far the most popular way in which people access classical music. Radio is therefore a meaningful way to critique the dilemmascrises, as some commentators would have itclassical music faces. Indeed, radio itself, and particularly Classic FM, has been criticised heavily over the years, as we shall see. Such views are historically engrained, but how credible or true are they today? Might radio, in fact, be less a symptom of certain parts of classical musics supposed malaise, and more a cure? 
Admittedly, examining radio as a conduit for musical understanding and enjoyment is challenging: the complete task would be as much philosophical and linguistic as cultural and musicological. This chapter is intended to be a midpoint that builds on recent musicology and sociology on both radio and the state of classical music, and which looks ahead to consider how public musicology might respond to the modern realities of classical music. A study of the vocabulary Radio 3 and Classic FM use to characterise classical music is therefore framed by two field-scoping sections: on public musicology itself and, first, on the intense debates that encircle the genre today.
/** * Return true iff input folder time is between compaction.timebased.min.time.ago and * compaction.timebased.max.time.ago. */ private boolean folderWithinAllowedPeriod(Path inputFolder, DateTime folderTime) { DateTime currentTime = new DateTime(this.timeZone); PeriodFormatter periodFormatter = getPeriodFormatter(); DateTime earliestAllowedFolderTime = getEarliestAllowedFolderTime(currentTime, periodFormatter); DateTime latestAllowedFolderTime = getLatestAllowedFolderTime(currentTime, periodFormatter); if (folderTime.isBefore(earliestAllowedFolderTime)) { log.info(String.format("Folder time for %s is %s, earlier than the earliest allowed folder time, %s. Skipping", inputFolder, folderTime, earliestAllowedFolderTime)); return false; } else if (folderTime.isAfter(latestAllowedFolderTime)) { log.info(String.format("Folder time for %s is %s, later than the latest allowed folder time, %s. Skipping", inputFolder, folderTime, latestAllowedFolderTime)); return false; } else { return true; } }
Respiratory variation in carotid peak systolic velocity predicts volume responsiveness in mechanically ventilated patients with septic shock: a prospective cohort study Background The evaluation of fluid responsiveness in patients with hemodynamic instability remains to be challenging. This investigation aimed to determine whether respiratory variation in carotid Doppler peak velocity (CDPV) predicts fluid responsiveness in patients with septic shock and lung protective mechanical ventilation with a tidal volume of 6 ml/kg. Methods We performed a prospective cohort study at an intensive care unit, studying the effect of 59 fluid challenges on 19 mechanically ventilated patients with septic shock. Pre-fluid challenge CDPV and other static or dynamic measurements were obtained. Fluid challenge responders were defined as patients whose stroke volume index increased more than 15 % on transpulmonary thermodilution. The area under the receiver operating characteristic curve (AUROC) was compared for each predictive parameter. Results Fluid responsiveness rate was 51 %. The CDPV had an AUROC of 0.88 (95 % confidence interval (CI) 0.770.95); followed by stroke volume variation (0.72, 95 % CI 0.630.88), passive leg raising (0.69, 95 % CI 0.560.80), and pulse pressure variation (0.63, 95 % CI 0.490.75). The CDPV was a statistically significant superior predictor when compared with the other parameters. Sensitivity, specificity, and positive and negative predictive values were also the highest for CDPV, with an optimal cutoff at 14 %. There was good correlation between CDPV and SVI increment after the fluid challenge (r = 0.84; p < 0.001). Conclusions CDPV can be more accurate than other methods for assessing fluid responsiveness in patients with septic shock receiving lung protective mechanical ventilation. CDPV also has a high correlation with SVI increase after fluid challenge. Background In a patient with acute hemodynamic instability, a fluid challenge will cause an increase in stroke volume, according to the Frank-Starling curve. This increase in stroke volume has a salutary effect because it improves tissue perfusion. In contrast, higher hydrostatic pressures in the vascular system predispose the patient to edema, organic dysfunction, and increased risk of inhospital mortality. Relative hypovolemia has been described in the setting of septic shock. However, only 50 % of patients with hemodynamic instability are fluid responsive. Therefore, expeditious fluid resuscitation is advised, and clinicians must always weigh the benefits and risks of intravenous fluids. Currently, both static and dynamic parameters are utilized for prediction of fluid responsiveness. Static parameters (e.g., central venous pressure and pulmonary artery occlusion pressure) are much less reliable than dynamic parameters, which are based on respirophasic variation in stroke volume (e.g., pulse pressure variation and changes in aortic blood flow). Most common dynamic parameters are invasive (arterial and/or central venous cannulation is required) and expensive. Echocardiography is a well-established method for evaluating fluid responsiveness. Nevertheless, measurement of left ventricular outflow tract velocities for the estimation of stroke volume is labor intense, requires specific training for adequate performance, and is not easily reproducible or obtainable. 
Thus, alternative methods, including brachial or carotid artery velocity, have been examined as surrogates for stroke volume in the non-septic shock patient population. Moreover, most predictive indices for volume responsiveness are not validated in patients receiving lung protective ventilatory strategies. The aim of this study was to determine if respiratory variation in carotid Doppler peak velocity (CDPV) can predict fluid responsiveness in patients with septic shock and lung protective mechanical ventilation. Patients This was a single-center, prospective, cohort study. Inclusion criteria were mechanical ventilation, septic shock, and hemodynamic instability for which the attending intensivist determined the need for fluid challenge based on signs of inadequate tissue perfusion according to Surviving Sepsis Campaign recommendations. The investigation was conducted in a medical/surgical intensive care unit and tertiary academic hospital from May 2014 through October 2014. Exclusion criteria were age under 18 years, non-septic origin of shock, known heart failure, valvular disease or arrhythmia, intra-abdominal hypertension, peripheral arterial disease, common carotid artery stenosis greater than 50 % (systolic peak velocity >182 cm/s and/or diastolic velocity >30 cm/s by Doppler ultrasound), spontaneous respiratory efforts, and utilization of colloids other than albumin for the fluid challenge. Volume controlled mechanical ventilation was performed with tidal volumes at 6 ml/kg of predicted body weight. We usually administer fluid challenges with normal saline at a 7 mL/kg dose over a 30-min period and perform thermodilution before and after each challenge. The Institutional Review Board at Hospital Civil de Guadalajara deemed the investigation to be of minimal risk and waived the need for written consent. Measurements and volume responsiveness Before each fluid challenge, carotid peak systolic velocity was measured with a Micromaxx System (Sonosite, WA, USA), using a 5-10-MHz linear array transducer. After procuring a longitudinal view of the common carotid artery, pulsed Doppler analysis at 2 cm from the bifurcation was performed. The sample volume was positioned at the center of the vessel, with angulation at no more than 60°. Maximum and minimum peak systolic velocities were obtained in a single respiratory cycle (Fig. 1), and the CDPV was calculated with the following formula: 100 × (MaxCDPV − MinCDPV) / [(MaxCDPV + MinCDPV) / 2], expressed as a percentage. Two investigators with previous formal training in critical care ultrasound estimated the CDPV. These investigators were blinded to each other's results and to all other variables. The mean of both measurements obtained by the two investigators was used. In addition, the same investigators evaluated the adequate procurement of transthoracic echocardiographic windows for estimation of the stroke volume. Pulse pressure variation (PPV) was calculated with the formula: PPV (%) = 100 × (Pp max − Pp min) / [(Pp max + Pp min) / 2], with pressures measured from a femoral arterial catheter with the v2.6e monitor (Philips Healthcare, Eindhoven, the Netherlands). The passive leg raising (PLR) test was performed as previously reported before each challenge by placing the patient's head and upper torso upright at 45°. This was followed by a flat supine position and raising both legs to a 45° angle from the bed, while measuring the SVI before and after the maneuver. The highest SVI from the first 3 min after the test was taken, and the percentage increase in SVI with the PLR was recorded.
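The variation indices above share the same form: the max–min swing over one respiratory cycle, normalized by the mid-cycle mean and expressed as a percentage. A minimal sketch of that calculation is shown below; it is illustrative only (the example velocities and pressures are invented, not study data), and it assumes the mean-denominator form of the formulas as written above.

def respirophasic_variation(maximum, minimum):
    """Percent swing over one respiratory cycle, normalized by the cycle mean."""
    mean = (maximum + minimum) / 2.0
    return 100.0 * (maximum - minimum) / mean

# Hypothetical single-cycle measurements.
cdpv = respirophasic_variation(maximum=82.0, minimum=68.0)  # carotid peak velocity, cm/s
ppv = respirophasic_variation(maximum=48.0, minimum=40.0)   # pulse pressure, mmHg

print(f"CDPV = {cdpv:.1f}%")  # 18.7 %, above the 14 % cutoff reported later
print(f"PPV  = {ppv:.1f}%")   # 18.2 %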
Inferior vena cava diameter (IVC-d) measurement was performed with a two-dimensional view at a subxyphoidal long axis, approximately 2 cm caudal to the hepatic vein inlet. Maximum and minimum diameters over a single respiratory cycle were recorded, and respiratory variation in inferior vena cava diameter (D-IVC) was calculated with the formula: 100 × (Max IVC-d − Min IVC-d) / [(Max IVC-d + Min IVC-d) / 2]. Transpulmonary thermodilution was performed before and after each fluid challenge with the Pulse Contour Cardiac Output system (Pulsion Medical Systems, Munich, Germany) to obtain an automated SVI, stroke volume variation (SVV), and other variables. Patients with an increase of more than 15 % in the SVI after the fluid challenge were classified as "responders", and those with an increase of less than 15 % in the SVI or those with no increase were classified as "non-responders." Statistical analysis Continuous variables were reported as the mean (standard deviation) if they were normally distributed or the median (interquartile range) if they were not normally distributed, using the Shapiro-Wilk test. Preload indices were compared in responders and non-responders using the Mann-Whitney test. Categorical variables were expressed as the number of measurements (%) and were compared by the chi-squared test. For analyzing the trend in response at repeated fluid challenges per patient, we used the Cochran-Armitage test. We constructed receiver operating characteristic (ROC) curves for static and dynamic indices of preload to determine the ability to predict fluid responsiveness, and their area under the curve was compared using the Hanley-McNeil test. Optimal cutoff values were obtained with the greatest sum of sensitivity and specificity using the Youden index. The relationship between preload indices and changes in SVI after the fluid challenge was estimated with Spearman's correlation coefficient test. We determined inter-observer reproducibility for CDPV by using the Bland-Altman Plot, described as mean bias. Inter-rater agreement was calculated with the kappa statistic and a 95 % confidence interval (CI). Assuming a fluid responsiveness rate of 50 %, we determined that 36 measurements would be needed to detect differences of 0.30 between the area under the receiver operating characteristic (AUROC) curve of central venous pressure (0.55) and CDPV (0.85), with an 80 % power and type I error of 5 %. For all tests, p values were two-sided, and a p value lower than 0.05 was considered statistically significant. We used Med-Calc (Ver 13.2, Mariakerke, Belgium) for calculating the sample size and for the statistical analysis. Results A total of 59 fluid challenges were performed in 19 patients, with a responsiveness rate of 51 %. In eight patients (40 %), the velocity-time integral at the left ventricle outflow tract was not obtained due to an unfavorable transthoracic echocardiographic window. Baseline characteristics of the patients are shown in Table 1. Predictors of fluid responsiveness The CDPV, SVV, SVI increment following the PLR test, and PPV were significantly higher in responders than in non-responders. There was no significant difference in the D-IVC or in any of the static parameters (Table 2). Among dynamic variables, CDPV had the highest AUROC (0.88, p < 0.001; 95 % CI 0.77-0.95) (Table 3), with an optimum cutoff value of greater than 14 % based on the Youden index.
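The statistical workflow just described (ROC curves compared across indices, with the optimal cutoff chosen by the Youden index) can be reproduced with standard tools. The sketch below is a generic illustration on synthetic data, not the study dataset; it assumes NumPy and scikit-learn are available, and it uses the >15 % ΔSVI responder definition from the text.

import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)

# Synthetic stand-in for the study data: one row per fluid challenge.
n = 59
delta_svi = rng.normal(15, 12, n)                  # % change in SVI after the challenge
responder = (delta_svi > 15).astype(int)           # responder = SVI increase > 15 %
cdpv = 8 + 0.4 * delta_svi + rng.normal(0, 3, n)   # pre-challenge CDPV, loosely correlated

# AUROC for the index, as in the Hanley-McNeil comparisons.
print("AUROC:", round(roc_auc_score(responder, cdpv), 2))

# Youden index: the cutoff maximizing sensitivity + specificity - 1.
fpr, tpr, thresholds = roc_curve(responder, cdpv)
best = np.argmax(tpr - fpr)
print("Optimal cutoff: CDPV >", round(thresholds[best], 1),
      "| sensitivity", round(tpr[best], 2),
      "| specificity", round(1 - fpr[best], 2))

In the same spirit, the Bland-Altman inter-observer check reduces to the mean of the paired differences between the two readers (the bias) and the bias ± 1.96 standard deviations of those differences (the limits of agreement).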
Using the Hanley-McNeil test, CDPV was significantly superior to the other variables (p = 0.03 versus SVV, p = 0.01 versus PLR, p = 0.001 versus PPV, and p < 0.001 versus D-IVC; Fig. 2). CDPV showed the highest sensitivity and specificity, as well as positive and negative predictive values (Table 4). Because it may be arguable which cutoff point of increase in SVI is truly clinically meaningful, we calculated ROC curves of the main indices taking an increase in SVI >10 % instead of >15 % as the cutoff. The results were similar, as CDPV maintained the greatest AUROC (0.90, p < 0.001). Responders had a significantly higher median rise in the SVI after the fluid challenge compared to nonresponders (42 versus 9.3 %, p < 0.001), even though pre-challenge SVI was not different (16 ml/m2 versus 17 ml/m2, p = 0.09). As seen in Table 5, on repeated-measures analysis, there was no significant trend between the progressive number of fluid challenges per patient and responsiveness rate (p = 0.29) or CDPV (p = 0.32). Median time between fluid challenges per patient was 4 h (IQR 3.2-5). The presence of acute respiratory distress syndrome or acute kidney injury was not associated with a lack of response to fluid challenge. There was no newly detected carotid stenosis or diminished ejection fraction. The mean time to obtain CDPV was 54 s (SD, 3.9 s). Prediction of the hemodynamic effects of fluid challenge Only CDPV was positively correlated with a fluid challenge-induced change in the SVI, and CDPV had the highest correlation coefficient (r = 0.84, p < 0.001, 95 % CI 0.74-0.90). A regression formula for predicting a rise in the SVI after fluid challenge was obtained (Fig. 3). The correlation between SVV and SVI increase due to fluid challenge was low (r = 0.24, p = 0.058, 95 % CI −0.009-0.47, r2 = 0.06). There was no significant correlation between the other indices and change in the SVI. Reproducibility and agreement of CDPV Bland-Altman analysis showed good concordance between estimation of CDPV by the two investigators, with a mean bias of 0.2 and limits of agreement between −1.9 and 2.3 (Fig. 4). The inter-observer variability was good, with a kappa statistic of 0.87 (95 % CI 0.84-0.91). [Table 1 legend: tidal volume 6 (6.0-6.3) ml/kg in both groups; data expressed as median (interquartile range) or, where marked, mean (standard deviation); abbreviations: BSA body surface area, ARDS acute respiratory distress syndrome, AKI acute kidney injury, MAP mean arterial pressure, HR heart rate, UO urinary output, ScvO2 central venous oxygen saturation, SVI stroke volume index, NE norepinephrine, SOFA Sequential Organ Failure Assessment score (range 0-24, higher scores indicating a greater risk of mortality), PEEP positive end-expiratory pressure.] Discussion The principal finding of this study is that CDPV is easily obtainable and more accurate than conventional methods (central venous pressure, respiratory variation in inferior vena cava diameter, pulse pressure variation) for assessing fluid responsiveness in mechanically ventilated patients with septic shock. Furthermore, the CDPV has a high correlation with SVI increase after fluid challenge. To our knowledge, this is the first investigation that utilizes the CDPV as a predictor of fluid responsiveness in patients with lung protective mechanical ventilation and septic shock.
In our study, up to 40 % of the patient population had technically difficult echocardiographic apical views, which limited the measurement of the velocity time integral at the left ventricle outflow tract. Hence, alternative non-invasive and practical methods for assessment of fluid responsiveness in septic shock should be investigated. This is a limitation of point-of-care echocardiography in the ICU because the procurement of different acoustic windows varies and presents different degrees of difficulty for mastery. Moreover, Young et al. demonstrated that TTE failed to evaluate the ejection fraction in 69 % of the patients in the ICU. Furthermore, measurement of carotid peak flow can be rapidly performed with less difficulty than for other echocardiographic variables. Other authors have explored the applicability of echocardiography in mechanically ventilated patients. Feissel et al. reported high accuracy, sensitivity (100 %), and specificity (89 %) of respiratory variation in aortic blood velocity (cutoff value higher than 12 %) for prediction of fluid responsiveness in septic patients receiving mechanical ventilation. Similarly, Monnet et al. showed the respiratory variation in aortic peak velocity (cutoff value higher than 13 %) as a predictor of fluid responsiveness with an AUROC of 0.82 and sensitivity and specificity of 80 and 72 %, respectively. However, these studies utilized an invasive method such as transesophageal echocardiography, and patients with an inadequate aortic blood flow signal were excluded. Moreover, not all patients had septic shock, and they were ventilated with tidal volumes greater than 6 mL/kg. Monge Garcia et al. demonstrated that the variation in brachial artery peak velocity was a good predictor of fluid responsiveness, with a sensitivity and specificity of 74 and 95 %, respectively. The AUROC was 0.88, similar to our method. In contrast to our study, only half of the patients in their study were septic, and they used the Flo Trac/Vigileo system. This system is a non-calibrated monitoring device for which the accuracy for tracking changes in the cardiac index has come under question. A recent study compared the arterial pressure waveform-derived cardiac index provided by the Vigileo system and pulse contour-derived cardiac index provided by the PiCCO device. The former device performed poorly, with lack of response to therapeutic interventions (volume expansion and vasopressor administration). We used the PiCCO system in this investigation. The preferential diversion of blood flow toward the carotid arteries, away from the peripheral arteries, is a relevant pathophysiological consideration in patients suffering from shock. Considering these facts as well as the flaws in radial artery-based monitoring, Song et al. evaluated peak velocity variation at the carotid artery. They showed an AUROC of 0.85, with a threshold value for fluid responsiveness of 11 % (sensitivity and specificity of 0.83 and 0.82). These results are similar to our study. However, their study population primarily consisted of coronary artery disease patients. In comparison to Song's study, we showed a higher correlation between CDPV and a fluid challenge-induced SVI increase (r = 0.84 versus r = 0.63). This finding could be explained by the higher mean age in their study. Perhaps their patients have lower vessel compliance and/or reduced cardiac reserve with concomitant coronary artery disease. Recently, Marik et al. 
evaluated the blood flow changes in the carotid artery after a PLR maneuver as a predictor of fluid responsiveness in 34 hemodynamically unstable patients. Among these patients, 65 % presented with severe sepsis/septic shock, and 56 % required mechanical ventilation. The increase in carotid blood flow of greater than or equal to 20 % after PLR was found to have a sensitivity and specificity of 94 and 86 %, respectively. The AUROC curve was, however, not estimated. Their method differed from our study because they not only measured the systolic peak velocity but also calculated the variation in blood flow, which is more labor intensive because it requires measurement of the vessel diameter. In addition, they coupled Doppler estimations with a PLR maneuver. In contrast, we obtained an adequate discriminatory performance with a simpler method, using only the peak systolic velocities at a single respiratory cycle, which showed good interobserver agreement and reproducibility. There has been a recent interest in the response to fluid administration over time in patients with shock. Nunes et al. highlighted the limited success of volume resuscitation in patients with circulatory shock after initial resuscitation (>6 h). Thus, a fluid challenge response is not always sustained. We found similar results, as the time from diagnosis of septic shock had a median of 22 h. Another relevant finding in the aforementioned study is that after a fluid challenge, the cardiac index with crystalloid (500 mL infused over a 30-min period) decreased toward baseline values 60 min after infusion, even in responders. Hence, we performed our analysis with the number of "measurements" rather than the number of "patients." Most patients, who are classified as responders, could have received additional fluid challenges at any time for different clinical contexts (e.g., fluid balance and challenges, presence of organic failure, and different vasopressor dosages). The performance of PPV and SVV for prediction of fluid responsiveness has been previously reported, with both sensitivity and specificity higher than 90 %. However, in our study, its predictive accuracy was lower than expected. A possible explanation for this difference could be the aforementioned preferential diversion of blood flow toward the carotid arteries and away from the peripheral arteries, as well as that the fluid challenges in the study were administrated to mechanically ventilated patients with tidal volumes greater than or equal to 8 mL/kg, whereas this variable is not specified in the other studies. We followed a lung protective ventilator strategy with tidal volumes of 6 mL/kg in all patients, and high tidal volume influences the hemodynamic effects of a fluid challenge. Limitations This study has some limitations. As a respirophasic dynamic index, CDPV does not apply to patients with spontaneous breathing, arrhythmias, valvular disorders, significant heart failure, and common carotid stenosis. Nonetheless, measurement of CDPV is a reliable method, without the inherent risk of central artery cannulation, which is present for thermodilution or pulse contour analysis systems. Additionally, there is no need to raise the patient's legs, which is a time-consuming maneuver. Further, it is discouraged and/or unreliable in postsurgical, abdominal hypertension, or fractured patients. We performed point-of-care echocardiography to address ejection fraction in all patients. 
However, we did not record the ratio of early transmitral flow velocity to the early diastolic velocity of the mitral annulus (E/E'); therefore, the incidence of elevated left ventricular filling pressures was unknown. This could be a possible bias in the study, as an elevated E/E' (>15) is negatively correlated with lower performance at prediction of fluid responsiveness. In order to minimize time, we performed carotid measurements on a single respiratory cycle; therefore, we do not know if the accuracy could have been improved with the average of three respiratory cycles. Although the reliability of PiCCO system has been found to be good in heterogeneous groups of patients, it is still questioned, mainly on the time-dependent accuracy on recalibrations and its variability on agreement at different vasopressor dose. These data were not addressed in our study. External validity is limited, as physicians involved at estimation of CDPV were trained in critical care sonography for more than 1 year. Also, due to the observational nature of the study, management was not CDPV-guided, and the spectrum of patients was narrow, including septic shock patients only. As long as there are clinical trials addressing these issues, our results should be interpreted cautiously. For statistical comparison between ROC curves with the Hanley-McNeil test, "measurements" are required. Therefore, we do not consider the relatively small sample size to be a limitation of our study. Conclusions In this single-center study, we showed that CDPV could be more accurate than other methods for assessing fluid responsiveness in patients with septic shock receiving lung protective mechanical ventilation. The CDPV also has a high correlation with SVI increase after fluid challenge.
# Read three integers A, B, C; align their parities with at most one "+1 to two
# of them" step, then count how many "+2 to one of them" steps bring the smaller
# two values up to the maximum.
A, B, C = map(int, input().split())
count = 0

# If exactly one value has a different parity, spend one step adding 1 to the
# other two so that all three share the same parity.
if (A % 2 == 1 and B % 2 == 0 and C % 2 == 0) or (A % 2 == 0 and B % 2 == 1 and C % 2 == 1):
    count += 1
    B += 1
    C += 1
elif (A % 2 == 0 and B % 2 == 1 and C % 2 == 0) or (A % 2 == 1 and B % 2 == 0 and C % 2 == 1):
    count += 1
    A += 1
    C += 1
elif (A % 2 == 0 and B % 2 == 0 and C % 2 == 1) or (A % 2 == 1 and B % 2 == 1 and C % 2 == 0):
    count += 1
    A += 1
    B += 1

# All values now share the same parity; raise the two smaller ones to the
# maximum in steps of +2.
if max(A, B, C) == A:
    count = count + (A - B) // 2 + (A - C) // 2
elif max(A, B, C) == B:
    count = count + (B - A) // 2 + (B - C) // 2
else:
    count = count + (C - A) // 2 + (C - B) // 2

print(count)
/**
 * Sometimes we want to specify the relName and relAlias of the created CRelation,
 * such as when replacing a relation with a newly created one; however, the caller
 * should make sure that the relAlias is unique.
 */
public CRemoteRelation addRemoteRelation(String relName, String relAlias) {
    CRemoteRelation relation = new CRemoteRelation(relName, relAlias, this, namingService);
    namingService.register(relation.getQualifiedName(), relation);
    relationAliases.add(relation.getRelAlias());
    return relation;
}
For the first time, scientists have demonstrated that a component of cannabis reduces seizures in children with a rare form of epilepsy, marking a significant step in efforts to use marijuana and its derivatives to treat serious medical conditions. The company that sponsored the Phase 3 trial, GW Pharmaceuticals, had already announced some of the results, but researchers said the full peer-reviewed study, published Wednesday in the New England Journal of Medicine, validated the importance of the research. They also pointed out that the drug, cannabidiol, helped some patients more than others and was associated with a range of sometimes severe side effects, a significant finding because some families have been treating their children on their own in states where recreational marijuana use is legal. Cannabidiol, which GW has branded as Epidiolex, is a non-hallucinogenic component of marijuana that can be purified and administered in oil. READ MORE: The DEA is looking for candidates to grow marijuana for research — but will it find any takers? For the trial, researchers enrolled 120 children from 2 to 18 years old with Dravet syndrome, a rare genetic form of epilepsy that kills up to 20 percent of patients by the time they are 20. There are no drugs approved specifically for Dravet. During the study, the patients stayed on their normal treatment regimen, and half of them also received cannabidiol while the remainder were given a placebo. Over a 14-week treatment period, the median number of convulsive seizures in the cannabidiol group decreased from 12.4 to 5.9 per month; for the placebo group, the number went from 14.9 to 14.1. In the cannabidiol group, 43 percent of patients had their number of seizures cut in half or more, compared with 27 percent in the placebo group. And 5 percent of patients taking cannabidiol saw their seizures disappear, compared with none in the placebo group. Common side effects seen in the cannabidiol group included vomiting, fatigue, fever, drowsiness, and diarrhea. Eight patients in the group withdrew from the trial because of the severity of the side effects. In an editorial published with the study, Dr. Samuel Berkovic of the University of Melbourne called the trial “welcome” and “the beginning of solid evidence for the use of cannabinoids in epilepsy.” But he noted that it needs to be replicated and that other studies will be required to know if cannabinoids — the different components of cannabis — can help with other forms of epilepsy and to treat adults. As desperate families have sought to treat their children with cannabis or cannabidiol on their own, experts have cautioned that it can be risky. Researchers don’t know, for example, how cannabidiol will interact with other medications, and they know even less about how adding THC — a hallucinogenic cannabinoid — to the mix might affect children with epileptic syndromes. They also don’t know the long-term effects of taking cannabidiol. This article is reproduced with permission from STAT. It was first published on May 24, 2017. Find the original story here.
The new Wild West The Arctic, long considered an almost worthless backwater, is primed to become one of the most important regions in the world as its ice melts over the next few decades. Unlike every other maritime area in the world, there is no overarching legal treaty governing the Arctic. Instead, the Arctic Council, made up of Canada, Denmark, Finland, Iceland, Norway, Russia, Sweden, and the U.S., oversees and coordinates policy. But the Arctic Council has no regulatory power. The countries only use the Council to communicate on policy and research and each member state is free to pursue its own policies within their declared Arctic boundaries. According to a presentation by the Council on Foreign Relations, the Arctic is of primary strategic significance to the five bordering Arctic Ocean states — the U.S. (red), Canada (orange), Russia (grey), Norway (blue), and Denmark (green). Opening Up The 1.1 million square miles of open water north of accepted national boundaries — dubbed the Arctic Ocean "donut hole" — is considered the high sea and is therefore beyond the Arctic states' jurisdictions. As the Arctic ice melts, the area is predicted to become a center of strategic competition and economic activity. Last year, China signed a free trade agreement with Iceland and sent an icebreaker to the region despite having no viable claims in the Arctic. [Chart: The Arctic summer sea ice is melting rapidly — Council on Foreign Relations] Wildly rich The region is stocked with valuable oil, gas, mineral, and fishery reserves. The U.S. estimates that a significant proportion of the Earth's untapped petroleum — including about 15% of the world's remaining oil, up to 30% of its natural gas deposits, and about 20% of its liquefied natural gas — are stored in the Arctic seabed. And in terms of preparation, America is lagging behind its potential competitors. In front is Russia, which symbolically placed a Russian flag on the bottom of the Arctic Ocean near the North Pole in 2007. The country, one-fifth of which lies within the Arctic Circle, has by far the most developed oil fields in the region. Russia's increasing advantage CFR notes that many observers "consider Russia, which is investing tens of billions of dollars in its northern infrastructure, the most dominant player in the Arctic." Shipping throughout the Arctic will also take on unprecedented importance as the ice recedes — and the Kremlin has a plan for taking advantage of this changing geography. Russia wants the Northern Sea Route, where traffic jumped from four vessels in 2010 to 71 in 2013, to eventually rival the Suez Canal as a passage between Europe and Asia. And it could: The Northern Sea Route from Europe to Asia takes only 35 days, compared to a 48-day journey between the continents via the Suez Canal. 'A new Cold War' Because of the Arctic's potential resources and trade impact, countries are stepping up military development in the region. For years, Norway has been conducting "Operation Cold Response." This year, the military exercise brought in more than 16,000 troops from 15 participating NATO members. A U.S. Arctic Roadmap promotes naval security, the development of operational experience in an Arctic environment, and the bolstering of naval readiness and capability. The Navy has accelerated its plan after noting that it is "inadequately prepared to conduct sustained maritime operations in the Arctic."
[Photo: The US Navy attack submarine USS Annapolis (SSN 760) rests in the Arctic Ocean after surfacing through three feet of ice during Ice Exercise 2009 on March 21, 2009.]

Russia, meanwhile, has reinvigorated its naval buildup along its northern coast. "Russia, the only non-NATO littoral Arctic state, has made a military buildup in the Arctic a strategic priority, restoring Soviet-era airfields and ports and marshaling naval assets," the CFR presentation explains. "In late 2013, President Vladimir Putin instructed his military leadership to pay particular attention to the Arctic, saying Russia needed 'every lever for the protection of its security and national interests there.' He also ordered the creation of a new strategic military command in the Russian Arctic by the end of 2014."

CFR notes that while most experts dismiss the prospects for armed aggression in the Arctic, "some defense analysts and academics assert that territorial disputes and a competition for resources have primed the Arctic for a new Cold War."
<filename>pkg/snowflake/file_format.go package snowflake import ( "database/sql" "encoding/json" "fmt" "strings" "github.com/jmoiron/sqlx" ) // FileFormatBuilder abstracts the creation of SQL queries for a Snowflake file format type FileFormatBuilder struct { name string db string schema string formatType string compression string recordDelimiter string fieldDelimiter string fileExtension string skipHeader int skipBlankLines bool dateFormat string timeFormat string timestampFormat string binaryFormat string escape string escapeUnenclosedField string trimSpace bool fieldOptionallyEnclosedBy string nullIf []string errorOnColumnCountMismatch bool replaceInvalidCharacters bool validateUTF8 bool emptyFieldAsNull bool skipByteOrderMark bool encoding string enableOctal bool allowDuplicate bool stripOuterArray bool stripNullValues bool ignoreUTF8Errors bool binaryAsText bool preserveSpace bool stripOuterElement bool disableSnowflakeData bool disableAutoConvert bool comment string } // QualifiedName prepends the db and schema and escapes everything nicely func (ffb *FileFormatBuilder) QualifiedName() string { var n strings.Builder n.WriteString(fmt.Sprintf(`"%v"."%v"."%v"`, ffb.db, ffb.schema, ffb.name)) return n.String() } // WithFormatType adds a comment to the FileFormatBuilder func (ffb *FileFormatBuilder) WithFormatType(f string) *FileFormatBuilder { ffb.formatType = f return ffb } // WithCompression adds compression to the FileFormatBuilder func (ffb *FileFormatBuilder) WithCompression(c string) *FileFormatBuilder { ffb.compression = c return ffb } // WithRecordDelimiter adds a record delimiter to the FileFormatBuilder func (ffb *FileFormatBuilder) WithRecordDelimiter(r string) *FileFormatBuilder { ffb.recordDelimiter = r return ffb } // WithFieldDelimiter adds a field delimiter to the FileFormatBuilder func (ffb *FileFormatBuilder) WithFieldDelimiter(f string) *FileFormatBuilder { ffb.fieldDelimiter = f return ffb } // WithFileExtension adds a file extension to the FileFormatBuilder func (ffb *FileFormatBuilder) WithFileExtension(f string) *FileFormatBuilder { ffb.fileExtension = f return ffb } // WithSkipHeader adds skip header to the FileFormatBuilder func (ffb *FileFormatBuilder) WithSkipHeader(n int) *FileFormatBuilder { ffb.skipHeader = n return ffb } // WithSkipBlankLines adds skip blank lines to the FileFormatBuilder func (ffb *FileFormatBuilder) WithSkipBlankLines(n bool) *FileFormatBuilder { ffb.skipBlankLines = n return ffb } // WithDateFormat adds date format to the FileFormatBuilder func (ffb *FileFormatBuilder) WithDateFormat(s string) *FileFormatBuilder { ffb.dateFormat = s return ffb } // WithTimeFormat adds time format to the FileFormatBuilder func (ffb *FileFormatBuilder) WithTimeFormat(s string) *FileFormatBuilder { ffb.timeFormat = s return ffb } // WithTimestampFormat adds timestamp format to the FileFormatBuilder func (ffb *FileFormatBuilder) WithTimestampFormat(s string) *FileFormatBuilder { ffb.timestampFormat = s return ffb } // WithBinaryFormat adds binary format to the FileFormatBuilder func (ffb *FileFormatBuilder) WithBinaryFormat(s string) *FileFormatBuilder { ffb.binaryFormat = s return ffb } // WithEscape adds escape to the FileFormatBuilder func (ffb *FileFormatBuilder) WithEscape(s string) *FileFormatBuilder { ffb.escape = s return ffb } // WithEscapeUnenclosedField adds escape unenclosed field to the FileFormatBuilder func (ffb *FileFormatBuilder) WithEscapeUnenclosedField(s string) *FileFormatBuilder { ffb.escapeUnenclosedField = s return ffb } // 
WithTrimSpace adds trim space to the FileFormatBuilder func (ffb *FileFormatBuilder) WithTrimSpace(n bool) *FileFormatBuilder { ffb.trimSpace = n return ffb } // WithFieldOptionallyEnclosedBy adds field optionally enclosed by to the FileFormatBuilder func (ffb *FileFormatBuilder) WithFieldOptionallyEnclosedBy(s string) *FileFormatBuilder { ffb.fieldOptionallyEnclosedBy = s return ffb } // WithNullIf adds null if to the FileFormatBuilder func (ffb *FileFormatBuilder) WithNullIf(s []string) *FileFormatBuilder { ffb.nullIf = s return ffb } // WithErrorOnColumnCountMismatch adds error on column count mistmatch to the FileFormatBuilder func (ffb *FileFormatBuilder) WithErrorOnColumnCountMismatch(n bool) *FileFormatBuilder { ffb.errorOnColumnCountMismatch = n return ffb } // WithReplaceInvalidCharacters adds replace invalid characters to the FileFormatBuilder func (ffb *FileFormatBuilder) WithReplaceInvalidCharacters(n bool) *FileFormatBuilder { ffb.replaceInvalidCharacters = n return ffb } // WithValidateUTF8 adds validate utf8 to the FileFormatBuilder func (ffb *FileFormatBuilder) WithValidateUTF8(n bool) *FileFormatBuilder { ffb.validateUTF8 = n return ffb } // WithEmptyFieldAsNull adds empty field as null to the FileFormatBuilder func (ffb *FileFormatBuilder) WithEmptyFieldAsNull(n bool) *FileFormatBuilder { ffb.emptyFieldAsNull = n return ffb } // WithSkipByteOrderMark adds skip byte order mark to the FileFormatBuilder func (ffb *FileFormatBuilder) WithSkipByteOrderMark(n bool) *FileFormatBuilder { ffb.skipByteOrderMark = n return ffb } // WithEnableOctal adds enable octal to the FileFormatBuilder func (ffb *FileFormatBuilder) WithEnableOctal(n bool) *FileFormatBuilder { ffb.enableOctal = n return ffb } // WithAllowDuplicate adds allow duplicate to the FileFormatBuilder func (ffb *FileFormatBuilder) WithAllowDuplicate(n bool) *FileFormatBuilder { ffb.allowDuplicate = n return ffb } // WithStripOuterArray adds strip outer array to the FileFormatBuilder func (ffb *FileFormatBuilder) WithStripOuterArray(n bool) *FileFormatBuilder { ffb.stripOuterArray = n return ffb } // WithStripNullValues adds strip null values to the FileFormatBuilder func (ffb *FileFormatBuilder) WithStripNullValues(n bool) *FileFormatBuilder { ffb.stripNullValues = n return ffb } // WithIgnoreUTF8Errors adds ignore UTF8 errors to the FileFormatBuilder func (ffb *FileFormatBuilder) WithIgnoreUTF8Errors(n bool) *FileFormatBuilder { ffb.ignoreUTF8Errors = n return ffb } // WithBinaryAsText adds binary as text to the FileFormatBuilder func (ffb *FileFormatBuilder) WithBinaryAsText(n bool) *FileFormatBuilder { ffb.binaryAsText = n return ffb } // WithPreserveSpace adds preserve space to the FileFormatBuilder func (ffb *FileFormatBuilder) WithPreserveSpace(n bool) *FileFormatBuilder { ffb.preserveSpace = n return ffb } // WithStripOuterElement adds strip outer element to the FileFormatBuilder func (ffb *FileFormatBuilder) WithStripOuterElement(n bool) *FileFormatBuilder { ffb.stripOuterElement = n return ffb } // WithDisableSnowflakeData adds disable Snowflake data to the FileFormatBuilder func (ffb *FileFormatBuilder) WithDisableSnowflakeData(n bool) *FileFormatBuilder { ffb.disableSnowflakeData = n return ffb } // WithDisableAutoConvert adds disbale auto convert to the FileFormatBuilder func (ffb *FileFormatBuilder) WithDisableAutoConvert(n bool) *FileFormatBuilder { ffb.disableAutoConvert = n return ffb } // WithEncoding adds encoding to the FileFormatBuilder func (ffb *FileFormatBuilder) WithEncoding(e string) 
*FileFormatBuilder { ffb.encoding = e return ffb } // WithComment adds a comment to the FileFormatBuilder func (ffb *FileFormatBuilder) WithComment(c string) *FileFormatBuilder { ffb.comment = c return ffb } // FileFormat returns a pointer to a Builder that abstracts the DDL operations for a file format. // // Supported DDL operations are: // - CREATE FILE FORMAT // - ALTER FILE FORMAT // - DROP FILE FORMAT // - SHOW FILE FORMATS // - DESCRIBE FILE FORMAT // // [Snowflake Reference](https://docs.snowflake.com/en/sql-reference/sql/create-file-format.html) func FileFormat(name, db, schema string) *FileFormatBuilder { return &FileFormatBuilder{ name: name, db: db, schema: schema, } } // Create returns the SQL query that will create a new file format. func (ffb *FileFormatBuilder) Create() string { q := strings.Builder{} q.WriteString(`CREATE`) q.WriteString(fmt.Sprintf(` FILE FORMAT %v`, ffb.QualifiedName())) q.WriteString(fmt.Sprintf(` TYPE = '%v'`, ffb.formatType)) if ffb.compression != "" { q.WriteString(fmt.Sprintf(` COMPRESSION = '%v'`, ffb.compression)) } if ffb.recordDelimiter != "" { q.WriteString(fmt.Sprintf(` RECORD_DELIMITER = '%v'`, ffb.recordDelimiter)) } if ffb.fieldDelimiter != "" { q.WriteString(fmt.Sprintf(` FIELD_DELIMITER = '%v'`, ffb.fieldDelimiter)) } if ffb.fileExtension != "" { q.WriteString(fmt.Sprintf(` FILE_EXTENSION = '%v'`, ffb.fileExtension)) } if ffb.skipHeader > 0 { q.WriteString(fmt.Sprintf(` SKIP_HEADER = %v`, ffb.skipHeader)) } if ffb.dateFormat != "" { q.WriteString(fmt.Sprintf(` DATE_FORMAT = '%v'`, ffb.dateFormat)) } if ffb.timeFormat != "" { q.WriteString(fmt.Sprintf(` TIME_FORMAT = '%v'`, ffb.timeFormat)) } if ffb.timestampFormat != "" { q.WriteString(fmt.Sprintf(` TIMESTAMP_FORMAT = '%v'`, ffb.timestampFormat)) } if ffb.binaryFormat != "" { q.WriteString(fmt.Sprintf(` BINARY_FORMAT = '%v'`, ffb.binaryFormat)) } if ffb.escape != "" { q.WriteString(fmt.Sprintf(` ESCAPE = '%v'`, EscapeString(ffb.escape))) } if ffb.escapeUnenclosedField != "" { q.WriteString(fmt.Sprintf(` ESCAPE_UNENCLOSED_FIELD = '%v'`, ffb.escapeUnenclosedField)) } if ffb.fieldOptionallyEnclosedBy != "" { q.WriteString(fmt.Sprintf(` FIELD_OPTIONALLY_ENCLOSED_BY = '%v'`, EscapeString(ffb.fieldOptionallyEnclosedBy))) } if len(ffb.nullIf) > 0 { nullIfStr := "'" + strings.Join(ffb.nullIf, "', '") + "'" q.WriteString(fmt.Sprintf(` NULL_IF = (%v)`, nullIfStr)) } else if strings.ToUpper(ffb.formatType) != "XML" { q.WriteString(` NULL_IF = ()`) } if ffb.encoding != "" { q.WriteString(fmt.Sprintf(` ENCODING = '%v'`, ffb.encoding)) } // set boolean values if ffb.formatType == "CSV" { q.WriteString(fmt.Sprintf(` SKIP_BLANK_LINES = %v`, ffb.skipBlankLines)) q.WriteString(fmt.Sprintf(` TRIM_SPACE = %v`, ffb.trimSpace)) q.WriteString(fmt.Sprintf(` ERROR_ON_COLUMN_COUNT_MISMATCH = %v`, ffb.errorOnColumnCountMismatch)) q.WriteString(fmt.Sprintf(` REPLACE_INVALID_CHARACTERS = %v`, ffb.replaceInvalidCharacters)) q.WriteString(fmt.Sprintf(` VALIDATE_UTF8 = %v`, ffb.validateUTF8)) q.WriteString(fmt.Sprintf(` EMPTY_FIELD_AS_NULL = %v`, ffb.emptyFieldAsNull)) q.WriteString(fmt.Sprintf(` SKIP_BYTE_ORDER_MARK = %v`, ffb.skipByteOrderMark)) } else if ffb.formatType == "JSON" { q.WriteString(fmt.Sprintf(` TRIM_SPACE = %v`, ffb.trimSpace)) q.WriteString(fmt.Sprintf(` ENABLE_OCTAL = %v`, ffb.enableOctal)) q.WriteString(fmt.Sprintf(` ALLOW_DUPLICATE = %v`, ffb.allowDuplicate)) q.WriteString(fmt.Sprintf(` STRIP_OUTER_ARRAY = %v`, ffb.stripOuterArray)) q.WriteString(fmt.Sprintf(` STRIP_NULL_VALUES = %v`, 
ffb.stripNullValues)) q.WriteString(fmt.Sprintf(` REPLACE_INVALID_CHARACTERS = %v`, ffb.replaceInvalidCharacters)) q.WriteString(fmt.Sprintf(` IGNORE_UTF8_ERRORS = %v`, ffb.ignoreUTF8Errors)) q.WriteString(fmt.Sprintf(` SKIP_BYTE_ORDER_MARK = %v`, ffb.skipByteOrderMark)) } else if ffb.formatType == "AVRO" || ffb.formatType == "ORC" { q.WriteString(fmt.Sprintf(` TRIM_SPACE = %v`, ffb.trimSpace)) } else if ffb.formatType == "PARQUET" { q.WriteString(fmt.Sprintf(` BINARY_AS_TEXT = %v`, ffb.binaryAsText)) q.WriteString(fmt.Sprintf(` TRIM_SPACE = %v`, ffb.trimSpace)) } else if ffb.formatType == "XML" { q.WriteString(fmt.Sprintf(` IGNORE_UTF8_ERRORS = %v`, ffb.ignoreUTF8Errors)) q.WriteString(fmt.Sprintf(` PRESERVE_SPACE = %v`, ffb.preserveSpace)) q.WriteString(fmt.Sprintf(` STRIP_OUTER_ELEMENT = %v`, ffb.stripOuterElement)) q.WriteString(fmt.Sprintf(` DISABLE_SNOWFLAKE_DATA = %v`, ffb.disableSnowflakeData)) q.WriteString(fmt.Sprintf(` DISABLE_AUTO_CONVERT = %v`, ffb.disableAutoConvert)) q.WriteString(fmt.Sprintf(` SKIP_BYTE_ORDER_MARK = %v`, ffb.skipByteOrderMark)) } if ffb.comment != "" { q.WriteString(fmt.Sprintf(` COMMENT = '%v'`, EscapeString(ffb.comment))) } return q.String() } // ChangeComment returns the SQL query that will update the comment on the file format. func (ffb *FileFormatBuilder) ChangeComment(c string) string { return fmt.Sprintf(`ALTER FILE FORMAT %v SET COMMENT = '%v'`, ffb.QualifiedName(), c) } // RemoveComment returns the SQL query that will remove the comment on the file format. func (ffb *FileFormatBuilder) RemoveComment() string { return fmt.Sprintf(`ALTER FILE FORMAT %v UNSET COMMENT`, ffb.QualifiedName()) } // ChangeCompression returns the SQL query that will update the compression on the file format. func (ffb *FileFormatBuilder) ChangeCompression(c string) string { return fmt.Sprintf(`ALTER FILE FORMAT %v SET COMPRESSION = '%v'`, ffb.QualifiedName(), c) } // ChangeRecordDelimiter returns the SQL query that will update the record delimiter on the file format. func (ffb *FileFormatBuilder) ChangeRecordDelimiter(c string) string { return fmt.Sprintf(`ALTER FILE FORMAT %v SET RECORD_DELIMITER = '%v'`, ffb.QualifiedName(), c) } // ChangeDateFormat returns the SQL query that will update the date format on the file format. func (ffb *FileFormatBuilder) ChangeDateFormat(c string) string { return fmt.Sprintf(`ALTER FILE FORMAT %v SET DATE_FORMAT = '%v'`, ffb.QualifiedName(), c) } // ChangeTimeFormat returns the SQL query that will update the time format on the file format. func (ffb *FileFormatBuilder) ChangeTimeFormat(c string) string { return fmt.Sprintf(`ALTER FILE FORMAT %v SET TIME_FORMAT = '%v'`, ffb.QualifiedName(), c) } // ChangeTimestampFormat returns the SQL query that will update the timestamp format on the file format. func (ffb *FileFormatBuilder) ChangeTimestampFormat(c string) string { return fmt.Sprintf(`ALTER FILE FORMAT %v SET TIMESTAMP_FORMAT = '%v'`, ffb.QualifiedName(), c) } // ChangeBinaryFormat returns the SQL query that will update the binary format on the file format. func (ffb *FileFormatBuilder) ChangeBinaryFormat(c string) string { return fmt.Sprintf(`ALTER FILE FORMAT %v SET BINARY_FORMAT = '%v'`, ffb.QualifiedName(), c) } // ChangeErrorOnColumnCountMismatch returns the SQL query that will update the error_on_column_count_mismatch on the file format. 
func (ffb *FileFormatBuilder) ChangeErrorOnColumnCountMismatch(c bool) string { return fmt.Sprintf(`ALTER FILE FORMAT %v SET ERROR_ON_COLUMN_COUNT_MISMATCH = %v`, ffb.QualifiedName(), c) } // ChangeValidateUTF8 returns the SQL query that will update the error_on_column_count_mismatch on the file format. func (ffb *FileFormatBuilder) ChangeValidateUTF8(c bool) string { return fmt.Sprintf(`ALTER FILE FORMAT %v SET VALIDATE_UTF8 = %v`, ffb.QualifiedName(), c) } // ChangeEmptyFieldAsNull returns the SQL query that will update the error_on_column_count_mismatch on the file format. func (ffb *FileFormatBuilder) ChangeEmptyFieldAsNull(c bool) string { return fmt.Sprintf(`ALTER FILE FORMAT %v SET EMPTY_FIELD_AS_NULL = %v`, ffb.QualifiedName(), c) } // ChangeEscape returns the SQL query that will update the escape on the file format. func (ffb *FileFormatBuilder) ChangeEscape(c string) string { return fmt.Sprintf(`ALTER FILE FORMAT %v SET ESCAPE = '%v'`, ffb.QualifiedName(), c) } // ChangeEscapeUnenclosedField returns the SQL query that will update the escape unenclosed field on the file format. func (ffb *FileFormatBuilder) ChangeEscapeUnenclosedField(c string) string { return fmt.Sprintf(`ALTER FILE FORMAT %v SET ESCAPE_UNENCLOSED_FIELD = '%v'`, ffb.QualifiedName(), c) } // ChangeFileExtension returns the SQL query that will update the FILE_EXTENSION on the file format. func (ffb *FileFormatBuilder) ChangeFileExtension(c string) string { return fmt.Sprintf(`ALTER FILE FORMAT %v SET FILE_EXTENSION = '%v'`, ffb.QualifiedName(), c) } // ChangeFieldDelimiter returns the SQL query that will update the FIELD_DELIMITER on the file format. func (ffb *FileFormatBuilder) ChangeFieldDelimiter(c string) string { return fmt.Sprintf(`ALTER FILE FORMAT %v SET FIELD_DELIMITER = '%v'`, ffb.QualifiedName(), c) } // ChangeFieldOptionallyEnclosedBy returns the SQL query that will update the field optionally enclosed by on the file format. func (ffb *FileFormatBuilder) ChangeFieldOptionallyEnclosedBy(c string) string { return fmt.Sprintf(`ALTER FILE FORMAT %v SET FIELD_OPTIONALLY_ENCLOSED_BY = '%v'`, ffb.QualifiedName(), c) } // ChangeNullIf returns the SQL query that will update the null if on the file format. func (ffb *FileFormatBuilder) ChangeNullIf(c []string) string { nullIfStr := "" if len(c) > 0 { nullIfStr = "'" + strings.Join(c, "', '") + "'" } return fmt.Sprintf(`ALTER FILE FORMAT %v SET NULL_IF = (%v)`, ffb.QualifiedName(), nullIfStr) } // ChangeEncoding returns the SQL query that will update the encoding on the file format. func (ffb *FileFormatBuilder) ChangeEncoding(c string) string { return fmt.Sprintf(`ALTER FILE FORMAT %v SET ENCODING = '%v'`, ffb.QualifiedName(), c) } // ChangeSkipHeader returns the SQL query that will update the skip header on the file format. func (ffb *FileFormatBuilder) ChangeSkipHeader(c int) string { return fmt.Sprintf(`ALTER FILE FORMAT %v SET SKIP_HEADER = %v`, ffb.QualifiedName(), c) } // ChangeSkipBlankLines returns the SQL query that will update SKIP_BLANK_LINES on the file format. func (ffb *FileFormatBuilder) ChangeSkipBlankLines(c bool) string { return fmt.Sprintf(`ALTER FILE FORMAT %v SET SKIP_BLANK_LINES = %v`, ffb.QualifiedName(), c) } // ChangeTrimSpace returns the SQL query that will update TRIM_SPACE on the file format. func (ffb *FileFormatBuilder) ChangeTrimSpace(c bool) string { return fmt.Sprintf(`ALTER FILE FORMAT %v SET TRIM_SPACE = %v`, ffb.QualifiedName(), c) } // ChangeEnableOctal returns the SQL query that will update ENABLE_OCTAL on the file format. 
func (ffb *FileFormatBuilder) ChangeEnableOctal(c bool) string { return fmt.Sprintf(`ALTER FILE FORMAT %v SET ENABLE_OCTAL = %v`, ffb.QualifiedName(), c) } // ChangeAllowDuplicate returns the SQL query that will update ALLOW_DUPLICATE on the file format. func (ffb *FileFormatBuilder) ChangeAllowDuplicate(c bool) string { return fmt.Sprintf(`ALTER FILE FORMAT %v SET ALLOW_DUPLICATE = %v`, ffb.QualifiedName(), c) } // ChangeStripOuterArray returns the SQL query that will update STRIP_OUTER_ARRAY on the file format. func (ffb *FileFormatBuilder) ChangeStripOuterArray(c bool) string { return fmt.Sprintf(`ALTER FILE FORMAT %v SET STRIP_OUTER_ARRAY = %v`, ffb.QualifiedName(), c) } // ChangeStripNullValues returns the SQL query that will update STRIP_NULL_VALUES on the file format. func (ffb *FileFormatBuilder) ChangeStripNullValues(c bool) string { return fmt.Sprintf(`ALTER FILE FORMAT %v SET STRIP_NULL_VALUES = %v`, ffb.QualifiedName(), c) } // ChangeReplaceInvalidCharacters returns the SQL query that will update REPLACE_INVALID_CHARACTERS on the file format. func (ffb *FileFormatBuilder) ChangeReplaceInvalidCharacters(c bool) string { return fmt.Sprintf(`ALTER FILE FORMAT %v SET REPLACE_INVALID_CHARACTERS = %v`, ffb.QualifiedName(), c) } // ChangeIgnoreUTF8Errors returns the SQL query that will update IGNORE_UTF8_ERRORS on the file format. func (ffb *FileFormatBuilder) ChangeIgnoreUTF8Errors(c bool) string { return fmt.Sprintf(`ALTER FILE FORMAT %v SET IGNORE_UTF8_ERRORS = %v`, ffb.QualifiedName(), c) } // ChangeSkipByteOrderMark returns the SQL query that will update SKIP_BYTE_ORDER_MARK on the file format. func (ffb *FileFormatBuilder) ChangeSkipByteOrderMark(c bool) string { return fmt.Sprintf(`ALTER FILE FORMAT %v SET SKIP_BYTE_ORDER_MARK = %v`, ffb.QualifiedName(), c) } // ChangeBinaryAsText returns the SQL query that will update BINARY_AS_TEXT on the file format. func (ffb *FileFormatBuilder) ChangeBinaryAsText(c bool) string { return fmt.Sprintf(`ALTER FILE FORMAT %v SET BINARY_AS_TEXT = %v`, ffb.QualifiedName(), c) } // ChangePreserveSpace returns the SQL query that will update PRESERVE_SPACE on the file format. func (ffb *FileFormatBuilder) ChangePreserveSpace(c bool) string { return fmt.Sprintf(`ALTER FILE FORMAT %v SET PRESERVE_SPACE = %v`, ffb.QualifiedName(), c) } // ChangeStripOuterElement returns the SQL query that will update STRIP_OUTER_ELEMENT on the file format. func (ffb *FileFormatBuilder) ChangeStripOuterElement(c bool) string { return fmt.Sprintf(`ALTER FILE FORMAT %v SET STRIP_OUTER_ELEMENT = %v`, ffb.QualifiedName(), c) } // ChangeDisableSnowflakeData returns the SQL query that will update DISABLE_SNOWFLAKE_DATA on the file format. func (ffb *FileFormatBuilder) ChangeDisableSnowflakeData(c bool) string { return fmt.Sprintf(`ALTER FILE FORMAT %v SET DISABLE_SNOWFLAKE_DATA = %v`, ffb.QualifiedName(), c) } // ChangeDisableAutoConvert returns the SQL query that will update DISABLE_AUTO_CONVERT on the file format. func (ffb *FileFormatBuilder) ChangeDisableAutoConvert(c bool) string { return fmt.Sprintf(`ALTER FILE FORMAT %v SET DISABLE_AUTO_CONVERT = %v`, ffb.QualifiedName(), c) } // Drop returns the SQL query that will drop a file format. func (ffb *FileFormatBuilder) Drop() string { return fmt.Sprintf(`DROP FILE FORMAT %v`, ffb.QualifiedName()) } // Describe returns the SQL query that will describe a file format.. 
func (ffb *FileFormatBuilder) Describe() string { return fmt.Sprintf(`DESCRIBE FILE FORMAT %v`, ffb.QualifiedName()) } // Show returns the SQL query that will show a file format. func (ffb *FileFormatBuilder) Show() string { return fmt.Sprintf(`SHOW FILE FORMATS LIKE '%v' IN SCHEMA "%v"."%v"`, ffb.name, ffb.db, ffb.schema) } type fileFormatShow struct { CreatedOn sql.NullString `db:"created_on"` FileFormatName sql.NullString `db:"name"` DatabaseName sql.NullString `db:"database_name"` SchemaName sql.NullString `db:"schema_name"` FormatType sql.NullString `db:"type"` Owner sql.NullString `db:"owner"` Comment sql.NullString `db:"comment"` FormatOptions sql.NullString `db:"format_options"` } type fileFormatOptions struct { Type string `json:"TYPE"` Compression string `json:"COMPRESSION,omitempty"` RecordDelimiter string `json:"RECORD_DELIMITER,omitempty"` FieldDelimiter string `json:"FIELD_DELIMITER,omitempty"` FileExtension string `json:"FILE_EXTENSION,omitempty"` SkipHeader int `json:"SKIP_HEADER,omitempty"` DateFormat string `json:"DATE_FORMAT,omitempty"` TimeFormat string `json:"TIME_FORMAT,omitempty"` TimestampFormat string `json:"TIMESTAMP_FORMAT,omitempty"` BinaryFormat string `json:"BINARY_FORMAT,omitempty"` Escape string `json:"ESCAPE,omitempty"` EscapeUnenclosedField string `json:"ESCAPE_UNENCLOSED_FIELD,omitempty"` TrimSpace bool `json:"TRIM_SPACE,omitempty"` FieldOptionallyEnclosedBy string `json:"FIELD_OPTIONALLY_ENCLOSED_BY,omitempty"` NullIf []string `json:"NULL_IF,omitempty"` ErrorOnColumnCountMismatch bool `json:"ERROR_ON_COLUMN_COUNT_MISMATCH,omitempty"` ValidateUTF8 bool `json:"VALIDATE_UTF8,omitempty"` SkipBlankLines bool `json:"SKIP_BLANK_LINES,omitempty"` ReplaceInvalidCharacters bool `json:"REPLACE_INVALID_CHARACTERS,omitempty"` EmptyFieldAsNull bool `json:"EMPTY_FIELD_AS_NULL,omitempty"` SkipByteOrderMark bool `json:"SKIP_BYTE_ORDER_MARK,omitempty"` Encoding string `json:"ENCODING,omitempty"` EnabelOctal bool `json:"ENABLE_OCTAL,omitempty"` AllowDuplicate bool `json:"ALLOW_DUPLICATE,omitempty"` StripOuterArray bool `json:"STRIP_OUTER_ARRAY,omitempty"` StripNullValues bool `json:"STRIP_NULL_VALUES,omitempty"` IgnoreUTF8Errors bool `json:"IGNORE_UTF8_ERRORS,omitempty"` BinaryAsText bool `json:"BINARY_AS_TEXT,omitempty"` PreserveSpace bool `json:"PRESERVE_SPACE,omitempty"` StripOuterElement bool `json:"STRIP_OUTER_ELEMENT,omitempty"` DisableSnowflakeData bool `json:"DISABLE_SNOWFLAKE_DATA,omitempty"` DisableAutoConvert bool `json:"DISABLE_AUTO_CONVERT,omitempty"` } func ScanFileFormatShow(row *sqlx.Row) (*fileFormatShow, error) { r := &fileFormatShow{} err := row.StructScan(r) return r, err } func ParseFormatOptions(fileOptions string) (*fileFormatOptions, error) { ff := &fileFormatOptions{} err := json.Unmarshal([]byte(fileOptions), ff) return ff, err }
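For orientation, here is a minimal usage sketch of the builder defined above. It is not part of the original file: the import path is a placeholder for wherever the pkg/snowflake package lives in your module, and the database, schema, and format names are invented for the example, so the printed DDL is only approximate.

package main

import (
	"fmt"

	"example.com/your/module/pkg/snowflake" // placeholder path for the pkg/snowflake package shown above
)

func main() {
	// Assemble a CSV file format in the (hypothetical) MYDB.PUBLIC schema.
	ff := snowflake.FileFormat("MY_CSV_FORMAT", "MYDB", "PUBLIC").
		WithFormatType("CSV").
		WithCompression("AUTO").
		WithFieldDelimiter(",").
		WithSkipHeader(1).
		WithNullIf([]string{"NULL", ""})

	// Create() renders the CREATE FILE FORMAT statement, e.g. something like:
	// CREATE FILE FORMAT "MYDB"."PUBLIC"."MY_CSV_FORMAT" TYPE = 'CSV' COMPRESSION = 'AUTO' FIELD_DELIMITER = ',' SKIP_HEADER = 1 NULL_IF = ('NULL', '') ...
	fmt.Println(ff.Create())

	// The Change*/Drop helpers render the corresponding ALTER and DROP statements.
	fmt.Println(ff.ChangeComment("loads raw CSV extracts"))
	fmt.Println(ff.Drop())
}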
Chloroplast-mediated regulation of nuclear genes in Arabidopsis thaliana in the absence of light stress. Chloroplast signaling involves mechanisms to relay information from chloroplasts to the nucleus, to change nuclear gene expression in response to environmental cues. Aside from reactive oxygen species (ROS) produced under stress conditions, changes in the reduction/oxidation state of photosynthetic electron transfer components or coupled compounds in the stroma and the accumulation of photosynthesis-derived metabolites are likely origins of chloroplast signals. We attempted to investigate the origin of the signals from chloroplasts in mature Arabidopsis leaves by differentially modulating the redox states of the plastoquinone pool and components on the reducing side of photosystem I, as well as the rate of CO2 fixation, while avoiding the production of ROS by excess light. Differential expression of several nuclear photosynthesis genes, including a set of Calvin cycle enzymes, was recorded. These responded to the stromal redox conditions under prevailing light conditions but were independent of the redox state of the plastoquinone pool. The steady-state CO2 fixation rate was reflected in the orchestration of the expression of a number of genes encoding cytoplasmic proteins, including several glycolysis genes and the trehalose-6-phosphate synthase gene, and also the chloroplast-targeted chaperone DnaJ. Clearly, in mature leaves, the redox state of the compounds on the reducing side of photosystem I is of greater importance in light-dependent modulation of nuclear gene expression than the redox state of the plastoquinone pool, particularly at early signaling phases. It also became apparent that photosynthesis-mediated generation of metabolites or signaling molecules is involved in the relay of information from chloroplast to nucleus.
def isLocalhost(ip):
    """Return True if the given IP string refers to the local host."""
    # `settings` is assumed to be the project's configuration module, imported
    # elsewhere, exposing `localhost_IP` (e.g. '127.0.0.1').
    return ip in (settings.localhost_IP, 'localhost')
/*************************************************************************** * File: bit.c * * * * Much time and thought has gone into this software and you are * * benefitting. We hope that you share your changes too. What goes * * around, comes around. * * * * This code was written by <NAME> and inspired by <NAME>, * * and has been used here for OLC - OLC would not be what it is without * * all the previous coders who released their source code. * * * ***************************************************************************/ /* The code below uses a table lookup system that is based on suggestions from <NAME>. There are many routines in handler.c that would benefit with the use of tables. You may consider simplifying your code base by implementing a system like below with such functions. -<NAME> */ #if defined(macintosh) #include <types.h> #else #include <sys/types.h> #endif #include <stdio.h> #include <stdlib.h> #include <string.h> #include <time.h> #include "merc.h" struct flag_stat_type { const struct flag_type *structure; bool stat; }; /***************************************************************************** Name: flag_stat_table Purpose: This table catagorizes the tables following the lookup functions below into stats and flags. Flags can be toggled but stats can only be assigned. Update this table when a new set of flags is installed. ****************************************************************************/ const struct flag_stat_type flag_stat_table[] = { /* { structure stat }, */ { area_flags, FALSE }, { sex_flags, TRUE }, { exit_flags, FALSE }, { door_resets, TRUE }, { room_flags, FALSE }, { sector_flags, TRUE }, { type_flags, TRUE }, { extra_flags, FALSE }, { wear_flags, FALSE }, { act_flags, FALSE }, { affect_flags, FALSE }, { apply_flags, TRUE }, { wear_loc_flags, TRUE }, { wear_loc_strings, TRUE }, { weapon_flags, TRUE }, { container_flags, FALSE }, { liquid_flags, TRUE }, /*spellsong trap flags*/ { trap_triggers, FALSE }, { trap_effects, FALSE }, /* ROM specific flags: */ { material_type, TRUE }, { form_flags, FALSE }, { part_flags, FALSE }, { ac_type, TRUE }, { size_flags, TRUE }, { position_flags, TRUE }, { off_flags, FALSE }, { imm_flags, FALSE }, { res_flags, FALSE }, { vuln_flags, FALSE }, { weapon_class, TRUE }, { weapon_type, FALSE }, { gate_flags, FALSE }, { 0, 0 } }; /***************************************************************************** Name: is_stat( table ) Purpose: Returns TRUE if the table is a stat table and FALSE if flag. Called by: flag_value and flag_string. Note: This function is local and used only in bit.c. ****************************************************************************/ bool is_stat( const struct flag_type *flag_table ) { int flag; for (flag = 0; flag_stat_table[flag].structure; flag++) { if ( flag_stat_table[flag].structure == flag_table && flag_stat_table[flag].stat ) return TRUE; } return FALSE; } /* * This function is Russ Taylor's creation. Thanks Russ! * All code copyright (C) <NAME>, permission to use and/or distribute * has NOT been granted. Use only in this OLC package has been granted. */ /***************************************************************************** Name: flag_lookup( flag, table ) Purpose: Returns the value of a single, settable flag from the table. Called by: flag_value and flag_string. Note: This function is local and used only in bit.c. 
****************************************************************************/ int flag_lookup (const char *name, const struct flag_type *flag_table) { int flag; for (flag = 0; flag_table[flag].name[0] != '\0'; flag++) { if ( !str_cmp( name, flag_table[flag].name ) && flag_table[flag].settable ) return flag_table[flag].bit; } return NO_FLAG; } /***************************************************************************** Name: flag_value( table, flag ) Purpose: Returns the value of the flags entered. Multi-flags accepted. Called by: olc.c and olc_act.c. ****************************************************************************/ int flag_value( const struct flag_type *flag_table, char *argument) { char word[MAX_INPUT_LENGTH]; int bit; int marked = 0; bool found = FALSE; if ( is_stat( flag_table ) ) { one_argument( argument, word ); if ( ( bit = flag_lookup( word, flag_table ) ) != NO_FLAG ) return bit; else return NO_FLAG; } /* * Accept multiple flags. */ for (; ;) { argument = one_argument( argument, word ); if ( word[0] == '\0' ) break; if ( ( bit = flag_lookup( word, flag_table ) ) != NO_FLAG ) { SET_BIT( marked, bit ); found = TRUE; } } if ( found ) return marked; else return NO_FLAG; } /***************************************************************************** Name: flag_string( table, flags/stat ) Purpose: Returns string with name(s) of the flags or stat entered. Called by: act_olc.c, olc.c, and olc_save.c. ****************************************************************************/ char *flag_string( const struct flag_type *flag_table, int bits ) { static char buf[512]; int flag; buf[0] = '\0'; for (flag = 0; flag_table[flag].name[0] != '\0'; flag++) { if ( !is_stat( flag_table ) && IS_SET(bits, flag_table[flag].bit) ) { strcat( buf, " " ); strcat( buf, flag_table[flag].name ); } else if ( flag_table[flag].bit == bits ) { strcat( buf, " " ); strcat( buf, flag_table[flag].name ); break; } } return (buf[0] != '\0') ? 
buf+1 : "none"; } const struct flag_type area_flags[] = { { "none", AREA_NONE, FALSE }, { "changed", AREA_CHANGED, TRUE }, { "added", AREA_ADDED, TRUE }, { "loading", AREA_LOADING, FALSE }, { "", 0, 0 } }; const struct flag_type sex_flags[] = { { "male", SEX_MALE, TRUE }, { "female", SEX_FEMALE, TRUE }, { "neutral", SEX_NEUTRAL, TRUE }, { "random", 3, TRUE }, /* ROM */ { "none", SEX_NEUTRAL, TRUE }, { "", 0, 0 } }; const struct flag_type exit_flags[] = { { "door", EX_ISDOOR, TRUE }, { "closed", EX_CLOSED, TRUE }, { "locked", EX_LOCKED, TRUE }, { "pickproof", EX_PICKPROOF, TRUE }, { "passproof", EX_PASSPROOF, TRUE }, { "hidden", EX_HIDDEN, TRUE }, { "", 0, 0 } }; const struct flag_type door_resets[] = { { "open and unlocked", 0, TRUE }, { "closed and unlocked", 1, TRUE }, { "closed and locked", 2, TRUE }, { "close,lock,pass", 3, TRUE }, { "close,lock,pass,pick", 4, TRUE }, { "closed,hidden", 5, TRUE }, { "closed,hidden,locked", 6, TRUE }, { "closed,hidden,locked,pass", 7, TRUE }, { "closed,hidden,locked,pass,pick", 8, TRUE }, { "closed,hidden,locked,pick", 9, TRUE }, { "", 0, 0 } }; const struct flag_type room_flags[] = { { "dark", ROOM_DARK, TRUE }, { "no_mob", ROOM_NO_MOB, TRUE }, { "indoors", ROOM_INDOORS, TRUE }, { "private", ROOM_PRIVATE, TRUE }, { "safe", ROOM_SAFE, TRUE }, { "solitary", ROOM_SOLITARY, TRUE }, { "pet_shop", ROOM_PET_SHOP, TRUE }, { "mount_shop", ROOM_MOUNT_SHOP, TRUE }, { "no_recall", ROOM_NO_RECALL, TRUE }, { "imp_only", ROOM_IMP_ONLY, TRUE }, { "gods_only", ROOM_GODS_ONLY, TRUE }, { "heroes_only", ROOM_HEROES_ONLY, TRUE }, { "newbies_only", ROOM_NEWBIES_ONLY, TRUE }, { "law", ROOM_LAW, TRUE }, { "no_teleport", ROOM_NOTELEPORT, TRUE }, { "library", ROOM_LIBRARY, TRUE }, { "funnel", ROOM_FUNNEL, TRUE }, { "", 0, 0 } }; const struct flag_type sector_flags[] = { { "inside", SECT_INSIDE, TRUE }, { "city", SECT_CITY, TRUE }, { "field", SECT_FIELD, TRUE }, { "forest", SECT_FOREST, TRUE }, { "hills", SECT_HILLS, TRUE }, { "mountain", SECT_MOUNTAIN, TRUE }, { "noswim", SECT_WATER_NOSWIM, TRUE }, { "air", SECT_AIR, TRUE }, { "desert", SECT_DESERT, TRUE }, { "jungle", SECT_JUNGLE, TRUE }, { "swamp", SECT_SWAMP, TRUE }, { "", 0, 0 } }; const struct flag_type type_flags[] = { { "light", ITEM_LIGHT, TRUE }, { "scroll", ITEM_SCROLL, TRUE }, { "wand", ITEM_WAND, TRUE }, { "staff", ITEM_STAFF, TRUE }, { "weapon", ITEM_WEAPON, TRUE }, { "treasure", ITEM_TREASURE, TRUE }, { "armor", ITEM_ARMOR, TRUE }, { "potion", ITEM_POTION, TRUE }, { "furniture", ITEM_FURNITURE, TRUE }, { "trash", ITEM_TRASH, TRUE }, { "container", ITEM_CONTAINER, TRUE }, { "drink-container", ITEM_DRINK_CON, TRUE }, { "key", ITEM_KEY, TRUE }, { "food", ITEM_FOOD, TRUE }, { "money", ITEM_MONEY, TRUE }, { "boat", ITEM_BOAT, TRUE }, { "npc corpse", ITEM_CORPSE_NPC, TRUE }, { "pc corpse", ITEM_CORPSE_PC, FALSE }, { "fountain", ITEM_FOUNTAIN, TRUE }, { "pill", ITEM_PILL, TRUE }, { "clothing", ITEM_CLOTHING, TRUE }, { "protect", ITEM_PROTECT, TRUE }, { "map", ITEM_MAP, TRUE }, { "portal", ITEM_PORTAL, TRUE }, { "trap", ITEM_TRAP, TRUE }, /*spellsong for traps*/ { "", 0, 0 } }; const struct flag_type extra_flags[] = { { "glow", ITEM_GLOW, TRUE }, { "hum", ITEM_HUM, TRUE }, { "dark", ITEM_DARK, TRUE }, { "lock", ITEM_LOCK, TRUE }, { "evil", ITEM_EVIL, TRUE }, { "invis", ITEM_INVIS, TRUE }, { "magic", ITEM_MAGIC, TRUE }, { "nodrop", ITEM_NODROP, TRUE }, { "bless", ITEM_BLESS, TRUE }, { "anti-good", ITEM_ANTI_GOOD, TRUE }, { "anti-evil", ITEM_ANTI_EVIL, TRUE }, { "anti-neutral", ITEM_ANTI_NEUTRAL, TRUE }, { "noremove", 
ITEM_NOREMOVE, TRUE }, { "inventory", ITEM_INVENTORY, TRUE }, { "nopurge", ITEM_NOPURGE, TRUE }, { "rot-death", ITEM_ROT_DEATH, TRUE }, { "vis-death", ITEM_VIS_DEATH, TRUE }, { "nolongdesc", ITEM_NOLONG, TRUE }, { "burnproof", ITEM_BURN_PROOF, TRUE }, { "rare", ITEM_RARE, TRUE }, { "nouncurse", ITEM_NO_UNCURSE, TRUE }, { "good", ITEM_GOOD, TRUE }, { "", 0, 0 } }; const struct flag_type wear_flags[] = { { "take", ITEM_TAKE, TRUE }, { "finger", ITEM_WEAR_FINGER, TRUE }, { "neck", ITEM_WEAR_NECK, TRUE }, { "body", ITEM_WEAR_BODY, TRUE }, { "head", ITEM_WEAR_HEAD, TRUE }, { "legs", ITEM_WEAR_LEGS, TRUE }, { "feet", ITEM_WEAR_FEET, TRUE }, { "hands", ITEM_WEAR_HANDS, TRUE }, { "arms", ITEM_WEAR_ARMS, TRUE }, { "shield", ITEM_WEAR_SHIELD, TRUE }, { "about", ITEM_WEAR_ABOUT, TRUE }, { "waist", ITEM_WEAR_WAIST, TRUE }, { "wrist", ITEM_WEAR_WRIST, TRUE }, { "wield", ITEM_WIELD, TRUE }, { "hold", ITEM_HOLD, TRUE }, { "two-hands", ITEM_TWO_HANDS, TRUE }, { "ear", ITEM_WEAR_EAR, TRUE }, /*Start Spellsong ADD*/ { "floating", ITEM_WEAR_FLOATING, TRUE }, { "bicep", ITEM_WEAR_BICEP, TRUE }, { "face", ITEM_WEAR_FACE, TRUE }, { "ankle", ITEM_WEAR_ANKLE, TRUE }, { "shoulders", ITEM_WEAR_SHOULDERS, TRUE },/*end spellsong add*/ { "", 0, 0 } }; const struct flag_type act_flags[] = { { "npc", ACT_IS_NPC, FALSE }, { "sentinel", ACT_SENTINEL, TRUE }, { "scavenger", ACT_SCAVENGER, TRUE }, { "aggressive", ACT_AGGRESSIVE, TRUE }, { "stay-area", ACT_STAY_AREA, TRUE }, { "wimpy", ACT_WIMPY, TRUE }, { "pet", ACT_PET, TRUE }, { "mount", ACT_MOUNT, TRUE }, { "train", ACT_TRAIN, TRUE }, { "practice", ACT_PRACTICE, TRUE }, { "undead", ACT_UNDEAD, TRUE }, { "cleric", ACT_CLERIC, TRUE }, { "mage", ACT_MAGE, TRUE }, { "thief", ACT_THIEF, TRUE }, { "warrior", ACT_WARRIOR, TRUE }, { "noalign", ACT_NOALIGN, TRUE }, { "nopurge", ACT_NOPURGE, TRUE }, { "healer", ACT_IS_HEALER, TRUE }, { "gain", ACT_GAIN, TRUE }, { "update-always", ACT_UPDATE_ALWAYS, TRUE }, { "no-kill", ACT_NO_KILL, TRUE }, { "", 0, 0 } }; const struct flag_type affect_flags[] = { { "blind", AFF_BLIND, TRUE }, { "invisible", AFF_INVISIBLE, TRUE }, { "detect evil", AFF_DETECT_EVIL, TRUE }, // used to be detect-evil, Eris { "detect-invis", AFF_DETECT_INVIS, TRUE }, { "detect-magic", AFF_DETECT_MAGIC, TRUE }, { "detect-hidden", AFF_DETECT_HIDDEN, TRUE }, { "protect good", AFF_PROTECT_GOOD, TRUE }, { "sanctuary", AFF_SANCTUARY, TRUE }, { "faerie-fire", AFF_FAERIE_FIRE, TRUE }, { "infrared", AFF_INFRARED, TRUE }, { "curse", AFF_CURSE, TRUE }, { "slow", AFF_SLOW, TRUE }, { "poison", AFF_POISON, TRUE }, { "preservation", AFF_PRESERVATION, TRUE }, { "protect evil", AFF_PROTECT_EVIL, TRUE }, /* { "paralysis", AFF_PARALYSIS, FALSE }, Pulled by Eris 30 April 2000 */ { "sneak", AFF_SNEAK, TRUE }, { "hide", AFF_HIDE, TRUE }, { "sleep", AFF_SLEEP, TRUE }, { "charm", AFF_CHARM, TRUE }, { "flying", AFF_FLYING, TRUE }, { "pass-door", AFF_PASS_DOOR, TRUE }, { "haste", AFF_HASTE, TRUE }, /* ROM: */ { "calm", AFF_CALM, TRUE }, { "plague", AFF_PLAGUE, TRUE }, { "weaken", AFF_WEAKEN, TRUE }, { "dark-vision", AFF_DARK_VISION, TRUE }, { "berserk", AFF_BERSERK, TRUE }, { "swim", AFF_SWIM, TRUE }, { "regeneration", AFF_REGENERATION, TRUE }, { "web", AFF_WEB, TRUE }, { "", 0, 0 } }; /* * Used when adding an affect to tell where it goes. 
* See addaffect and delaffect in act_olc.c */ const struct flag_type apply_flags[] = { { "none", APPLY_NONE, TRUE }, { "strength", APPLY_STR, TRUE }, { "dexterity", APPLY_DEX, TRUE }, { "intelligence", APPLY_INT, TRUE }, { "wisdom", APPLY_WIS, TRUE }, { "constitution", APPLY_CON, TRUE }, { "sex", APPLY_SEX, TRUE }, { "class", APPLY_CLASS, TRUE }, { "level", APPLY_LEVEL, TRUE }, { "age", APPLY_AGE, TRUE }, { "height", APPLY_HEIGHT, TRUE }, { "weight", APPLY_WEIGHT, TRUE }, { "mana", APPLY_MANA, TRUE }, { "hp", APPLY_HIT, TRUE }, { "move", APPLY_MOVE, TRUE }, { "gold", APPLY_GOLD, TRUE }, { "experience", APPLY_EXP, TRUE }, { "ac", APPLY_AC, TRUE }, { "hitroll", APPLY_HITROLL, TRUE }, { "damroll", APPLY_DAMROLL, TRUE }, { "saving-para", APPLY_SAVING_PARA, TRUE }, { "saving-rod", APPLY_SAVING_ROD, TRUE }, { "saving-petri", APPLY_SAVING_PETRI, TRUE }, { "saving-breath", APPLY_SAVING_BREATH, TRUE }, { "saving-spell", APPLY_SAVING_SPELL, TRUE }, { "alignment", APPLY_ALIGN, TRUE }, { "", 0, 0 } }; /* * What is seen. */ const struct flag_type wear_loc_strings[] = { { "in the inventory", WEAR_NONE, TRUE }, { "as a light", WEAR_LIGHT, TRUE }, { "on the left finger", WEAR_FINGER_L, TRUE }, { "on the right finger", WEAR_FINGER_R, TRUE }, { "around the neck (1)", WEAR_NECK_1, TRUE }, { "around the neck (2)", WEAR_NECK_2, TRUE }, { "on the body", WEAR_BODY, TRUE }, { "over the head", WEAR_HEAD, TRUE }, { "on the legs", WEAR_LEGS, TRUE }, { "on the feet", WEAR_FEET, TRUE }, { "on the hands", WEAR_HANDS, TRUE }, { "on the arms", WEAR_ARMS, TRUE }, { "as a shield", WEAR_SHIELD, TRUE }, { "about the shoulders", WEAR_ABOUT, TRUE }, { "around the waist", WEAR_WAIST, TRUE }, { "on the left wrist", WEAR_WRIST_L, TRUE }, { "on the right wrist", WEAR_WRIST_R, TRUE }, { "wielded", WEAR_WIELD, TRUE }, { "held in the hands", WEAR_HOLD, TRUE }, { "in the left ear", WEAR_EAR_L, TRUE },/*start spellsong add*/ { "in the right ear", WEAR_EAR_R, TRUE }, { "floating nearby", WEAR_FLOATING, TRUE }, { "on the left bicep", WEAR_BICEP_L, TRUE }, { "on the right bicep", WEAR_BICEP_R, TRUE }, { "over the face", WEAR_FACE, TRUE }, { "on the left ankle", WEAR_ANKLE_L, TRUE }, { "on the right ankle", WEAR_BICEP_R, TRUE }, { "over the shoulders", WEAR_SHOULDERS, TRUE },/*end spellsong add*/ { "", 0 } }; const struct flag_type wear_loc_flags[] = { { "none", WEAR_NONE, TRUE }, { "light", WEAR_LIGHT, TRUE }, { "lfinger", WEAR_FINGER_L, TRUE }, { "rfinger", WEAR_FINGER_R, TRUE }, { "neck1", WEAR_NECK_1, TRUE }, { "neck2", WEAR_NECK_2, TRUE }, { "body", WEAR_BODY, TRUE }, { "head", WEAR_HEAD, TRUE }, { "legs", WEAR_LEGS, TRUE }, { "feet", WEAR_FEET, TRUE }, { "hands", WEAR_HANDS, TRUE }, { "arms", WEAR_ARMS, TRUE }, { "shield", WEAR_SHIELD, TRUE }, { "about", WEAR_ABOUT, TRUE }, { "waist", WEAR_WAIST, TRUE }, { "lwrist", WEAR_WRIST_L, TRUE }, { "rwrist", WEAR_WRIST_R, TRUE }, { "wielded", WEAR_WIELD, TRUE }, { "hold", WEAR_HOLD, TRUE }, { "lear", WEAR_EAR_L, TRUE },/*start spellsong add*/ { "rear", WEAR_EAR_R, TRUE }, { "floating", WEAR_FLOATING, TRUE }, { "lbicep", WEAR_BICEP_L, TRUE }, { "rbicep", WEAR_BICEP_R, TRUE }, { "face", WEAR_FACE, TRUE }, { "lankle", WEAR_ANKLE_L, TRUE }, { "rankle", WEAR_BICEP_R, TRUE }, { "shoulders", WEAR_SHOULDERS, TRUE },/*end spellsong add*/ { "", 0, 0 } }; const struct flag_type weapon_flags[] = { { "hit", 0, TRUE }, { "slice", 1, TRUE }, { "stab", 2, TRUE }, { "slash", 3, TRUE }, { "whip", 4, TRUE }, { "claw", 5, TRUE }, { "blast", 6, TRUE }, { "pound", 7, TRUE }, { "crush", 8, TRUE }, { "grep", 9, TRUE 
}, { "bite", 10, TRUE }, { "pierce", 11, TRUE }, { "suction", 12, TRUE }, { "beating", 13, TRUE }, /* ROM */ { "digestion", 14, TRUE }, { "charge", 15, TRUE }, { "slap", 16, TRUE }, { "punch", 17, TRUE }, { "wrath", 18, TRUE }, { "magic", 19, TRUE }, { "divine-power", 20, TRUE }, { "cleave", 21, TRUE }, { "scratch", 22, TRUE }, { "peck-pierce", 23, TRUE }, { "peck-bash", 24, TRUE }, { "chop", 25, TRUE }, { "sting", 26, TRUE }, { "smash", 27, TRUE }, { "shocking-bite", 28, TRUE }, { "flaming-bite", 29, TRUE }, { "freezing-bite", 30, TRUE }, { "acidic-bite", 31, TRUE }, { "chomp", 32, TRUE }, { "", 0, TRUE } }; const struct flag_type container_flags[] = { { "closeable", 1, TRUE }, { "pickproof", 2, TRUE }, { "closed", 4, TRUE }, { "locked", 8, TRUE }, { "", 0, 0 } }; const struct flag_type liquid_flags[] = { { "water", 0, TRUE }, { "beer", 1, TRUE }, { "wine", 2, TRUE }, { "ale", 3, TRUE }, { "dark-ale", 4, TRUE }, { "whisky", 5, TRUE }, { "lemonade", 6, TRUE }, { "firebreather", 7, TRUE }, { "local-specialty", 8, TRUE }, { "slime-mold-juice", 9, TRUE }, { "milk", 10, TRUE }, { "tea", 11, TRUE }, { "coffee", 12, TRUE }, { "blood", 13, TRUE }, { "salt-water", 14, TRUE }, { "cola", 15, TRUE }, { "mocha latte", 16, TRUE }, { "vodka", 17, TRUE }, { "honey mead", 18, TRUE }, { "green tea", 19, TRUE }, { "fruit juice", 20, TRUE }, { "hot chocolate", 21, TRUE }, { "", 0, 0 } }; /***************************************************************************** ROM - specific tables: ****************************************************************************/ const struct flag_type form_flags[] = { { "edible", FORM_EDIBLE, TRUE }, { "poison", FORM_POISON, TRUE }, { "magical", FORM_MAGICAL, TRUE }, { "decay", FORM_INSTANT_DECAY, TRUE }, { "other", FORM_OTHER, TRUE }, { "animal", FORM_ANIMAL, TRUE }, { "sentient", FORM_SENTIENT, TRUE }, { "undead", FORM_UNDEAD, TRUE }, { "construct", FORM_CONSTRUCT, TRUE }, { "mist", FORM_MIST, TRUE }, { "intangible", FORM_INTANGIBLE, TRUE }, { "biped", FORM_BIPED, TRUE }, { "centaur", FORM_CENTAUR, TRUE }, { "insect", FORM_INSECT, TRUE }, { "spider", FORM_SPIDER, TRUE }, { "crustacean", FORM_CRUSTACEAN, TRUE }, { "worm", FORM_WORM, TRUE }, { "blob", FORM_BLOB, TRUE }, { "mammal", FORM_MAMMAL, TRUE }, { "bird", FORM_BIRD, TRUE }, { "reptile", FORM_REPTILE, TRUE }, { "snake", FORM_SNAKE, TRUE }, { "dragon", FORM_DRAGON, TRUE }, { "amphibian", FORM_AMPHIBIAN, TRUE }, { "fish", FORM_FISH, TRUE }, { "cold-blood", FORM_COLD_BLOOD, TRUE }, { "", 0, 0 } }; const struct flag_type part_flags[] = { { "head", PART_HEAD, TRUE }, { "arms", PART_ARMS, TRUE }, { "legs", PART_LEGS, TRUE }, { "heart", PART_HEART, TRUE }, { "brains", PART_BRAINS, TRUE }, { "guts", PART_GUTS, TRUE }, { "hands", PART_HANDS, TRUE }, { "feet", PART_FEET, TRUE }, { "fingers", PART_FINGERS, TRUE }, { "ear", PART_EAR, TRUE }, { "eye", PART_EYE, TRUE }, { "long-tongue", PART_LONG_TONGUE, TRUE }, { "eyestalks", PART_EYESTALKS, TRUE }, { "fins", PART_TENTACLES, TRUE }, { "wings", PART_FINS, TRUE }, { "tail", PART_WINGS, TRUE }, { "claws", PART_CLAWS, TRUE }, { "fangs", PART_FANGS, TRUE }, { "horns", PART_HORNS, TRUE }, { "scales", PART_SCALES, TRUE }, { "tusks", PART_TUSKS, TRUE }, { "", 0, 0 } }; const struct flag_type ac_type[] = { { "pierce", AC_PIERCE, TRUE }, { "bash", AC_BASH, TRUE }, { "slash", AC_SLASH, TRUE }, { "exotic", AC_EXOTIC, TRUE }, { "", 0, 0 } }; const struct flag_type size_flags[] = { { "tiny", SIZE_TINY, TRUE }, { "small", SIZE_SMALL, TRUE }, { "medium", SIZE_MEDIUM, TRUE }, { "large", 
SIZE_LARGE, TRUE }, { "huge", SIZE_HUGE, TRUE }, { "giant", SIZE_GIANT, TRUE }, { "", 0, 0 }, }; const struct flag_type weapon_class[] = { { "exotic", 0, TRUE }, { "sword", 1, TRUE }, { "dagger", 2, TRUE }, { "spear", 3, TRUE }, { "mace", 4, TRUE }, { "axe", 5, TRUE }, { "flail", 6, TRUE }, { "whip", 7, TRUE }, { "polearm", 8, TRUE }, { "", 0, 0 } }; const struct flag_type weapon_type[] = { { "flaming", WEAPON_FLAMING, TRUE }, { "frost", WEAPON_FROST, TRUE }, { "vampiric", WEAPON_VAMPIRIC, TRUE }, { "sharp", WEAPON_SHARP, TRUE }, { "vorpal", WEAPON_VORPAL, TRUE }, { "two-hands", WEAPON_TWO_HANDS, TRUE }, /*spellsong add*/ { "poison", WEAPON_POISON, TRUE }, { "shocking", WEAPON_SHOCKING, TRUE }, { "acid", WEAPON_ACID, TRUE }, { "serrated", WEAPON_SERRATED, TRUE }, { "none", 0, TRUE }, { "" , 0, 0 } }; const struct flag_type off_flags[] = { { "area-attack", OFF_AREA_ATTACK, TRUE }, { "backstab", OFF_BACKSTAB, TRUE }, { "bash", OFF_BASH, TRUE }, { "berserk", OFF_BERSERK, TRUE }, { "disarm", OFF_DISARM, TRUE }, { "dodge", OFF_DODGE, TRUE }, { "fade", OFF_FADE, TRUE }, { "fast", OFF_FAST, TRUE }, { "kick", OFF_KICK, TRUE }, { "kick-dirt", OFF_KICK_DIRT, TRUE }, { "parry", OFF_PARRY, TRUE }, { "rescue", OFF_RESCUE, TRUE }, { "tail", OFF_TAIL, TRUE }, { "trip", OFF_TRIP, TRUE }, { "crush", OFF_CRUSH, TRUE }, { "assist-all", ASSIST_ALL, TRUE }, { "assist-align", ASSIST_ALIGN, TRUE }, { "assist-race", ASSIST_RACE, TRUE }, { "assist-player", ASSIST_PLAYERS, TRUE }, { "assist-guard", ASSIST_GUARD, TRUE }, { "assist-vnum", ASSIST_VNUM, TRUE }, { "", 0, 0 } }; const struct flag_type imm_flags[] = { { "summon", IMM_SUMMON, TRUE }, { "charm", IMM_CHARM, TRUE }, { "magic", IMM_MAGIC, TRUE }, { "weapon", IMM_WEAPON, TRUE }, { "bash", IMM_BASH, TRUE }, { "pierce", IMM_PIERCE, TRUE }, { "slash", IMM_SLASH, TRUE }, { "fire", IMM_FIRE, TRUE }, { "cold", IMM_COLD, TRUE }, { "lightning", IMM_LIGHTNING, TRUE }, { "acid", IMM_ACID, TRUE }, { "poison", IMM_POISON, TRUE }, { "negative", IMM_NEGATIVE, TRUE }, { "holy", IMM_HOLY, TRUE }, { "energy", IMM_ENERGY, TRUE }, { "mental", IMM_MENTAL, TRUE }, { "disease", IMM_DISEASE, TRUE }, { "drowning", IMM_DROWNING, TRUE }, { "light", IMM_LIGHT, TRUE }, { "", 0, 0 } }; const struct flag_type res_flags[] = { { "charm", RES_CHARM, TRUE }, { "magic", RES_MAGIC, TRUE }, { "weapon", RES_WEAPON, TRUE }, { "bash", RES_BASH, TRUE }, { "pierce", RES_PIERCE, TRUE }, { "slash", RES_SLASH, TRUE }, { "fire", RES_FIRE, TRUE }, { "cold", RES_COLD, TRUE }, { "lightning", RES_LIGHTNING, TRUE }, { "acid", RES_ACID, TRUE }, { "poison", RES_POISON, TRUE }, { "negative", RES_NEGATIVE, TRUE }, { "holy", RES_HOLY, TRUE }, { "energy", RES_ENERGY, TRUE }, { "mental", RES_MENTAL, TRUE }, { "disease", RES_DISEASE, TRUE }, { "drowning", RES_DROWNING, TRUE }, { "light", RES_LIGHT, TRUE }, { "", 0, 0 } }; const struct flag_type vuln_flags[] = { { "magic", VULN_MAGIC, TRUE }, { "weapon", VULN_WEAPON, TRUE }, { "bash", VULN_BASH, TRUE }, { "pierce", VULN_PIERCE, TRUE }, { "slash", VULN_SLASH, TRUE }, { "fire", VULN_FIRE, TRUE }, { "cold", VULN_COLD, TRUE }, { "lightning", VULN_LIGHTNING, TRUE }, { "acid", VULN_ACID, TRUE }, { "poison", VULN_POISON, TRUE }, { "negative", VULN_NEGATIVE, TRUE }, { "holy", VULN_HOLY, TRUE }, { "energy", VULN_ENERGY, TRUE }, { "mental", VULN_MENTAL, TRUE }, { "disease", VULN_DISEASE, TRUE }, { "drowning", VULN_DROWNING, TRUE }, { "light", VULN_LIGHT, TRUE }, { "wood", VULN_WOOD, TRUE }, { "silver", VULN_SILVER, TRUE }, { "iron", VULN_IRON, TRUE }, { "", 0, 0 } }; const struct 
flag_type material_type[] = /* not yet implemented */ { { "none", 0, TRUE }, { "", 0, 0 } }; const struct flag_type position_flags[] = { { "dead", POS_DEAD, FALSE }, { "mortal", POS_MORTAL, FALSE }, { "incap", POS_INCAP, FALSE }, { "stunned", POS_STUNNED, FALSE }, { "sleeping", POS_SLEEPING, TRUE }, { "resting", POS_RESTING, TRUE }, { "sitting", POS_SITTING, TRUE }, { "fighting", POS_FIGHTING, FALSE }, { "standing", POS_STANDING, TRUE }, { "", 0, 0 } }; const struct flag_type gate_flags[] = { { "normal", GATE_NORMAL, TRUE }, { "no_curse", GATE_NOCURSE, TRUE }, { "go_with", GATE_FOLLOW, TRUE }, { "buggy", GATE_BUGGY, TRUE }, { "random", GATE_RANDOM, TRUE }, { "", 0, 0 } }; const struct flag_type furniture_flags[] = { { "sit_on", FURN_SIT_ON, TRUE }, { "sit_in", FURN_SIT_IN, TRUE }, { "sit_at", FURN_SIT_AT, TRUE }, { "sit_by", FURN_SIT_BY, TRUE }, { "rest_on", FURN_REST_ON, TRUE }, { "rest_in", FURN_REST_IN, TRUE }, { "rest_at", FURN_REST_AT, TRUE }, { "rest_by", FURN_REST_BY, TRUE }, { "sleep_on", FURN_SLEEP_ON, TRUE }, { "sleep_in", FURN_SLEEP_IN, TRUE }, { "sleep_at", FURN_SLEEP_AT, TRUE }, { "sleep_by", FURN_SLEEP_BY, TRUE }, { "stand_on", FURN_STAND_ON, TRUE }, { "stand_in", FURN_STAND_IN, TRUE }, { "stand_at", FURN_STAND_AT, TRUE }, { "stand_by", FURN_STAND_BY, TRUE }, { "lay_on", FURN_LAY_ON, TRUE }, { "set_on", FURN_SET_ON, TRUE }, { "", 0, 0 } }; /**************************************************** * SPELLSONG DEFINES FOR TRAP OBJECTS * ****************************************************/ const struct flag_type trap_triggers[] = { { "entry", TRAP_ENTRY, TRUE }, { "up", TRAP_UP, TRUE }, { "down", TRAP_DOWN, TRUE }, { "east", TRAP_EAST, TRUE }, { "west", TRAP_WEST, TRUE }, { "south", TRAP_SOUTH, TRUE }, { "north", TRAP_NORTH , TRUE }, { "open", TRAP_OPEN, TRUE }, { "object", TRAP_OBJECT, TRUE }, { "none", 0, TRUE }, { "" , 0, 0 } }; const struct flag_type trap_effects[] = { { "dispel", TRAP_AFF_DISPEL, TRUE }, { "acid", TRAP_AFF_ACID, TRUE }, { "poison", TRAP_AFF_POISON, TRUE }, { "fire", TRAP_AFF_FIRE, TRUE }, { "lightning", TRAP_AFF_LIGHTNING, TRUE }, { "sleep", TRAP_AFF_SLEEP, TRUE }, { "teleport", TRAP_AFF_TELEPORT, TRUE }, { "cleave", TRAP_AFF_CLEAVE, TRUE }, { "bludgeon", TRAP_AFF_BLUDGEON, TRUE }, { "none", 0, TRUE }, { "" , 0, 0 } };
/**
 * @file src/examples/random_number_generator.cpp
 *
 * @brief Demos the random number generator.
 *
 * @date July, 2014
 **/
/*****************************************************************************
** Includes
*****************************************************************************/

#include <iostream>
#include "../../include/ecl/time/random_number_generator.hpp"

/*****************************************************************************
** Main
*****************************************************************************/

int main() {
	ecl::RandomNumberGenerator<float> random_number_generator;
	int n = 5;

	std::cout << std::endl;
	std::cout << "***********************************************************" << std::endl;
	std::cout << "                     Uniform Numbers" << std::endl;
	std::cout << "***********************************************************" << std::endl;
	std::cout << std::endl;
	std::cout << "Default (0, 1):" << std::endl;
	for (int i = 0; i < n; ++i) { std::cout << " " << random_number_generator.uniform(); }
	std::cout << std::endl;
	for (int i = 0; i < n; ++i) { std::cout << " " << random_number_generator.uniform(); }
	std::cout << std::endl;
	std::cout << "Over Range (-5, 5): " << std::endl;
	for (int i = 0; i < n; ++i) { std::cout << " " << random_number_generator.uniform(5.0); }
	std::cout << std::endl;
	for (int i = 0; i < n; ++i) { std::cout << " " << random_number_generator.uniform(5.0); }
	std::cout << std::endl;
	std::cout << std::endl;
	std::cout << "***********************************************************" << std::endl;
	std::cout << "                        Gaussian" << std::endl;
	std::cout << "***********************************************************" << std::endl;
	std::cout << std::endl;
	std::cout << "std = 1, avg = 0: " << std::endl;
	for (int i = 0; i < n; ++i) { std::cout << " " << random_number_generator.gaussian(1.0); }
	std::cout << std::endl;
	for (int i = 0; i < n; ++i) { std::cout << " " << random_number_generator.gaussian(1.0); }
	std::cout << std::endl;
	std::cout << "std = 1, avg = 3: " << std::endl;
	for (int i = 0; i < n; ++i) { std::cout << " " << random_number_generator.gaussian(1.0, 3.0); }
	std::cout << std::endl;
	for (int i = 0; i < n; ++i) { std::cout << " " << random_number_generator.gaussian(1.0, 3.0); }
	std::cout << std::endl;
	std::cout << "std = 3, avg = 5: " << std::endl;
	for (int i = 0; i < n; ++i) { std::cout << " " << random_number_generator.gaussian(3.0, 5.0); }
	std::cout << std::endl;
	for (int i = 0; i < n; ++i) { std::cout << " " << random_number_generator.gaussian(3.0, 5.0); }
	std::cout << std::endl;
	std::cout << std::endl;
	std::cout << "***********************************************************" << std::endl;
	std::cout << "                         Passed" << std::endl;
	std::cout << "***********************************************************" << std::endl;
	std::cout << std::endl;

	return 0;
}
Gaussian Optimality for Derivatives of Differential Entropy Using Linear Matrix Inequalities

Let Z be a standard Gaussian random variable, X be independent of Z, and t be a strictly positive scalar. For the derivatives in t of the differential entropy of X + √t Z, McKean noticed that Gaussian X achieves the extreme for the first and second derivatives, among distributions with a fixed variance, and he conjectured that this holds for general orders of derivatives. This conjecture implies that the signs of the derivatives alternate. Recently, Cheng and Geng proved that this alternation holds for the first four orders. In this work, we employ the technique of linear matrix inequalities to show that: firstly, Cheng and Geng's method may not generalize to higher orders; secondly, when the probability density function of X + √t Z is log-concave, McKean's conjecture holds for orders up to at least five. As a corollary, we also recover Toscani's result on the sign of the third derivative of the entropy power of X + √t Z, using a much simpler argument.

There have been numerous generalizations of the entropy power inequality (EPI). Costa considered the case where X is perturbed by an independent standard Gaussian Z, and showed that the entropy power N(X + √t Z) = e^{2h(X + √t Z)}/(2πe) is concave in t for t > 0, that is, d²/dt² N(X + √t Z) ≤ 0. Toscani further showed that the third derivative d³/dt³ N(X + √t Z) is nonnegative when the density is log-concave, with an argument involving the Fisher information J(X + √t Z). Cheng and Geng conjectured that the derivatives of the differential entropy h(X + √t Z) alternate in sign:

Conjecture 1 (Cheng and Geng). For t > 0 and n ≥ 1, (−1)^{n−1} d^n/dt^n h(X + √t Z) ≥ 0.

The above conjecture is equivalent to hypothesizing that the Fisher information of X + √t Z is completely monotone, thus admitting a very simple characterization using the Laplace transform: there exists a finite Borel measure μ(·) such that J(X + √t Z) = ∫₀^∞ e^{−ts} μ(ds).

Back in 1966, McKean also studied the derivatives in t of h(X + √t Z), and noticed that Gaussian X achieves the minimum of d/dt h(X + √t Z) and −d²/dt² h(X + √t Z), subject to Var(X) = σ². McKean then implicitly made the following conjecture that Gaussian optimality holds generally:

Conjecture 2 (McKean). Subject to Var(X) = σ², Gaussian X with variance σ² achieves the minimum of (−1)^{n−1} d^n/dt^n h(X + √t Z) for t > 0 and n ≥ 1.

Hence, McKean's conjecture implies the one by Cheng and Geng. Compared with the progress made by Cheng and Geng on Conjecture 1, there has been little progress on Conjecture 2. Most of the existing results are on the second derivative of the differential entropy (or the mutual information), and on generalizing the EPI to other settings. For example: Guo et al. represent the derivatives in the signal-to-noise ratio of the mutual information in terms of the minimum mean-square estimation error, building on de Bruijn's identity; Wibisono and Jog study the mutual information along the density flow defined by the heat equation and show that it is a convex function of time if the initial distribution is log-concave; Wang and Madiman recover the proof of the EPI via rearrangements; Courtade generalizes Costa's EPI to non-Gaussian additive perturbations; and König and Smith propose a quantum version of the EPI.

In this paper, we work on Conjecture 2. The main results are to show that Conjecture 2 holds for orders up to at least five under the log-concavity condition, and to introduce the technique of linear matrix inequalities. The paper is organized as follows: in Section 2, we obtain the formulae for the derivatives of the differential entropy h(X + √t Z) (Theorem 1) and show that McKean's conjecture holds for higher orders up to at least five under the log-concavity condition (Corollary 1).
As a corollary, we recover Toscani's result on the third derivative of the entropy power, using the Cauchy-Schwartz inequality, which is much simpler. In Section 3, we introduce the linear matrix inequality approach, and transform the above two conjectures to the feasibility check of semidefinite programming problems. With this approach, we can easily obtain the coefficients in Theorem 1. Then, we show that the direct generalization of the method by Cheng and Geng might not work for orders higher than four for proving Conjecture 1. In Section 4, we prove the main theorem of Section 2. Main Results We first introduce the notation that is used throughout this paper. When the functions are single-variate, we use d d for its derivative. For the multi-variate cases, we use ∂ ∂ for the partial derivative. To simplify the notation, for the derivatives of a general single-variate function g(y), we also use g (y), g (y) and g (y) to represent the first, second and third derivatives, respectively; and g (n) (y) denotes the n-th derivative for n ≥ 1. In the rest of the paper, let Z be a standard Gaussian random variable, and X be independent of Z. Denote According to, Y t has nice properties: The probability density function f (y, t) of Y t exists, is strictly positive and infinitely differentiable; The differential entropy h (Y t ) exists. Denote f n := ∂ n ∂y n f (y, t), where it is understood that f n and T n are functions of (y, t). We also present some properties of f (y, t) in the following lemma. The proof can be found in, say, and Propositions 1 and 2 in. Lemma 1. For t > 0, the probability density function f (y, t) satisfies the following properties: The heat equation holds: The expectation of the product of the T i, E exists, and lim |y|→∞ f ∏ i T i = 0, ∀t > 0. In Lemma 1,part, in writing E, we think of each T i as a function of (Y t, t). Notice that, given X and Z, the differential entropy h(X + √ tZ) is a function of t. The formulae for the first and second derivatives of h(X + √ tZ) are presented in the following lemma. According to Stam, the first equality is due to de Bruijn, and the right-hand side is actually the Fisher information (page 671 of ); the second one is due to McKean, Toscani and Villani ; the Gaussian optimality is due to McKean. Lemma 2. For the first and second derivatives of the differential entropy h(X + √ tZ), the following expressions hold for t > 0: Subject to VarX = 2, Gaussian X with variance 2 minimizes h (X + √ tZ) and −h (X + √ tZ). By standard manipulations, one has Thus, it is straightforward to rewrite the derivatives as For the third and fourth derivatives, one can refer to Theorems 1 and 2 in, where they were represented by the f i. Notice that these representations are not unique, and the ones in are sufficient for identifying the signs. Instead, in Theorem 1, we use the T i, and this will facilitate our proof of the Gaussian optimality in Corollary 1. Theorem 1. For t > 0, the derivatives of the differential entropy h(X + √ tZ) can be expressed as: −2h 2h The proof to this theorem is left to Section 4. The existence of such expressions and how to obtain the coefficients are left to Section 3, where the method of linear matrix inequalities is introduced. Log-Concave Case Lemma 2 already ensures the optimality of Gaussians, subject to Var(X) = 2, for the first and second derivatives. For higher ones, we do not know if we can show the optimality based on the expressions in Theorem 1. 
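For later reference, two of the displays in this part of the paper can be written out explicitly. Writing T_n for the n-th derivative in y of log f(y, t) (the convention consistent with the later statement that log-concavity of f is equivalent to T_2 ≤ 0), the rewritten first and second derivatives of Lemma 2 take the following standard form; the constants are our reconstruction under the 1/2-scaled heat equation used here and should be checked against the published version. The second display is the log-concavity inequality used in the next subsection, with the λ made explicit.

```latex
% Lemma 2 rewritten in terms of T_1 and T_2 (reconstruction):
\frac{d}{dt}\, h\!\left(X+\sqrt{t}\,Z\right) = \tfrac{1}{2}\,\mathbb{E}\!\left[T_1^{2}\right],
\qquad
\frac{d^{2}}{dt^{2}}\, h\!\left(X+\sqrt{t}\,Z\right) = -\tfrac{1}{2}\,\mathbb{E}\!\left[T_2^{2}\right].

% Log-concavity of f, for all x, y in the domain and 0 < \lambda < 1:
f\!\left(\lambda x + (1-\lambda)\,y\right) \;\ge\; f(x)^{\lambda}\, f(y)^{1-\lambda}.
```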
Here, we impose the constraint of log-concavity on f (y, t) and summarize the results in Corollaries 1-3. A nonnegative function f () is logarithmically concave (or log-concave for short) if its domain is convex and it satisfies the inequality for all x, y in the domain and 0 < < 1. If f is strictly positive, this is equivalent to saying that the logarithm of the function is concave (Section 2.5 of ). In our case, assuming that f (y, t) is log-concave in y is equivalent to T 2 ≤ 0. Examples of log-concave distributions include the Gaussian, exponential, Laplace, and the Gamma with parameter larger than one. Notice that, if the probability density function of X is log-concave, then so is that of X + √ tZ (Section 3.5.2 of ). Proof. Let X G be Gaussian with mean and variance 2. The probability density function of The key observation is that the second derivative of the logarithm in the Gaussian case is Hence, from Equation, the derivatives of the differential entropy in the Gaussian case are Now, if one can show the following chain of inequalities: then one is done. For inequality (b), the log-concavity condition, namely T 2 ≤ 0, suffices. This can be proved using Lemma 2: Notice that where the last equality is due to Lemma 1. Now, from Equation, Combining this with Lemma 2, one has This part is finished by noticing that E > 0 from Equation. For inequality (a), we show each case of n using Theorem 1 and the condition T 2 ≤ 0. For n = 3, where the inequality is due to Now, the proof is finished. The following corollary deals with the fifth-order case in, under the log-concavity assumption. The proof follows directly from Corollary 1 and Equation. Corollary 2. If the probability density function of X + √ tZ is log-concave, then the fifth derivative of the differential entropy is strictly positive: h Regarding the entropy power, it is already known that N (X + √ tZ) ≥ 0 from the connection with Fisher information, and N (X + √ tZ) ≤ 0 according to. For the third derivative, Toscani showed that N (X + √ tZ) ≥ 0, under the log-concavity assumption. Here, we simplify Toscani's proof, using a Cauchy-Schwartz argument. Corollary 3. If the probability density function of X + √ tZ is log-concave, then the third derivative of the entropy power is nonnegative: Proof. For brevity, let h := h (X + √ tZ), and, similarly, we omit the arguments for higher orders. Routine manipulations yield that Thus, it suffices to show 2h in the form of the T i : according to Lemma 2 and Equation, 2h = E; from Equation,. Now, under the log-concavity condition, namely T 2 ≤ 0, from the Cauchy-Schwartz inequality for random variables, we have: Thus, we have The proof is finished by noticing that E ≥ E 2 ≥ 0, which implies that the right-hand side is nonnegative. Linear Matrix Inequalities In this section, we introduce the method of linear matrix inequalities (LMI), and transform the proof of Conjectures 1 and 2 to the feasibility problem of LMI. This transformation also enables us to find the right coefficients in Theorem 1. Recall that, in, the authors first obtained the fourth derivative as the following (Equation in ) Then, with some equalities (from integration by parts), they showed this derivative can be expressed as the negative of a sum of squares (Theorem 2 in ): 70, 000 Hence, the fourth derivative is nonpositive. The sum of squares has a natural connection with positive semidefinite matrices. 
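For reference, the general form of the linear matrix inequality used throughout the next section can be written out as follows (a direct transcription of the surrounding description: the m × m symmetric matrices F_0, F_i, G_j are given, the x_i are real and the y_j are nonnegative):

```latex
F(x, y) \;=\; F_0 \;+\; \sum_{i=1}^{I} x_i\, F_i \;+\; \sum_{j=1}^{J} y_j\, G_j \;\succeq\; 0,
\qquad x_i \in \mathbb{R}, \quad y_j \ge 0,
```

where F(x, y) ⪰ 0 means that F(x, y) is positive semidefinite, and the feasibility problem is to decide whether such x_i and y_j exist.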
The right-hand side of Equation can be written as −E, where u is the column vector with coordinates and F is a positive semidefinite matrix. Thus, the method in is actually to verify the existence of a suitable positive semidefinite matrix F. This can be cast as the feasibility of a linear matrix inequality. A linear matrix inequality (Chapter 2 of ) has the form where the m m symmetric matrices F 0, F i, G j, i = 1,..., I, j = 1,..., J are given, variables x i are real and y j ' are nonnegative, and the notation F(x, y) 0 means F(x, y) is positive semidefinite. The feasibility problem refers to identifying if there exists a set of x i and y j such that F(x, y) is positive semidefinite. To reformulate the method used by Cheng and Geng as an LMI feasibility problem, using the fourth derivative as an illustrative example, the main idea is: first, transform the original expression of the derivative to the form −2h Then, transform the equalities resulting from integration by parts to the form Finally, try to find a set of variables One can notice that there is no matrix G j in the above statement. This is mainly because only equalities were available in. When one imposes inequality constraints, for example T 2 ≤ 0, as in this paper, then one will be able to construct matrices G j. Before we proceed to introduce the details on constructing those matrices, the following observations are clear regarding ): (a) the sum-order of derivatives for each entry of u is four, for example, the sum-order of f 2 1 f 2 / f 3 is 1 2 + 2 = 4; (b) the highest order of a single term in the entries of u is four, namely f 4 / f ; (c) the sum-order of each entry in the fourth derivative is eight, which is twice that of u. In the following, we take the fourth derivative as an example, and show how to construct these matrices F 0 (Section 3.3), F i (Sections 3.1 and 3.2), and G j (Section 3.4). We decide to use the T k as the entries of u, instead of the f k, the motivation for which is clear from the proof of Corollary 1 and the desire to exploit the assumption T 2 ≤ 0. Based on the above observation and the expressions in Equation, our vector u is Thus, F 0, F i, G j are 5 5 symmetric matrices. Here, we mention that the expressions appearing as coordinates in u correspond to the integer partitions of four. The organization of this section is as follows: Sections 3.1-3.3 deal with the sign of the fourth derivative with only equality constraints (see Conjecture 1); Section 3.4 further incorporates the inequality constraints, namely T 2 ≤ 0; Section 3.5 shows the manipulation for the optimality of Gaussian inputs (see Conjecture 2). In Section 3.6, we consider the sign and the Gaussian optimality for the fifth derivative. Matrices F i from Multiple Representations The matrices F i are such that E = 0. A trivial case is to notice that different products of the form u(i)u(j) may map to the same term, for example That is, T 2 2 T 4 1 admits multiple representations as u(i)u(j). It is easy to construct the corresponding matrix F 1 such that u T F 1 u = 0: For the fourth derivative, only one term has multiple representations. There is none for the third derivative, and three for the fifth (F 1, F 2 and F 3 in Section 3.6). Matrices F i from Integration by Parts The equalities of the type E = 0 used in are from integration by parts. Here, we list them one by one. 
Notice that all the possible terms with sum-order eight and highest-order four are the following (the numbers in the left column are indices): Denote this vector as w. These terms are arranged in the order such that the first (fourteen) terms can be expressed as u(i)u(j) for some i and j, while the last term(s) cannot be. We call the first terms the quadratic part w qua, and the last term(s) the non-quadratic part w non. Thus, w = (w qua, w non ). It is not difficult to conclude that, for non-repetition, one only needs to perform integration by parts on the entries whose highest-order term is of power one. All of these entries are (eight in total): Taking T 4 T 3 T 1 as an example, one can show that (Equation, see the end of this subsection) In addition, this can be written as E = 0, where There are eight equalities in total and hence there are vectors c 1,..., c 8. We put each c i as the i-th row of C ∈ R 815, and write those equalities as The entries can be found in Equations -. We need to extract matrices F from these eight equalities E = 0, such that E ≡ 0. The main problem is that c T k w may contain entries that are not expressible as u(i)u(j). In particular, for the fourth derivative, this happens when c k = 0. One needs to do some work to cancel these entries. The general method, which can also be used in higher-order cases, is stated below: 1. Firstly, since w = (w qua, w non ), we separate the blocks of C accordingly, w qua w non ] = 0. In particular, for the first row of C 21, the matrix is Notice a scaling of a factor of two is added here just for conciseness, and this does not affect the feasibility of. Similarly, the other five matrices, corresponding to the remaining rows of C 21, are 3. Thirdly, for C 11 and C 12, the equalities are E = 0. Notice w non cannot be expressed in a quadratic form. Supposing that we can find a column vector z such that z T C 12 = 0, then E = E = 0. The vector z actually lies in the null space of C T 12, and it suffices to find the basis. One way is to do the QR decomposition: where U is upper-triangular. The null-space of C T 12 has the same dimensions as the number of rows of 0 above, and a basis as the last several columns of Q-in particular, for the fourth derivative Hence, one takes z as the second column of Q, which is (after scaling for conciseness) z T = −2, 1. Then, one calculates z T C 11 w qua = −4T 4 T 3 T 1 + T 4 T 2 2 − 2T 2 3 T 2 1 + T 3 T 2 2 T 1, and the corresponding matrix F 8 (scaled by a factor of two) is The rest of this subsection is devoted to calculating the equalities obtained from integration by parts. This is similar to that in, except in the form of the T i. To begin, we need the following lemma. Lemma 3. Let A be a linear combination of terms of products of the T i, then, for n ≥ 2, Proof. From calculus, where (a) is due to Lemma 1, and (b) is due to Equation. Now, using Lemma 3, one obtains the following equalities: With these equalities, matrix can be constructed. Matrix F 0 from the Derivative Suppose we have already obtained the fourth derivative in the form (see Equation later) −2h where d 1 ∈ R 14, d 2 ∈ R 1. Then, similar to F 8, we can find the matrix F 0 such that −2h To cancel the non-quadratic term d T 2 w non, we solve for z T 2 C 12 = d T 2 (the solution z 2 should exist, otherwise it is not possible to find a quadratic form and the LMI approach fails). Then, since E = 0, we have −2h Now, F 0 can be constructed from d T 1 − z T C 11. The details are as follows. 
First, we need to express the derivative using the entries of w. This can be done recursively using the following lemma. Lemma 4. Let A be a linear combination of terms of products of the T i. The following equalities hold: The proof is left to Appendix A. Now, with Equation : and Equation, one can easily obtain that For the fourth derivative, One solves for z 2 such that z T 2 C 12 = d T 2 and obtains has nonzero entries at locations, with values, respectively. Furthermore, F 0 (scaled by a factor of two) is found as By the end of this subsection, it is easy to see that Cheng and Geng's method can be reformulated as identifying if there exist x 1,..., x 8 ∈ R such that We use the convex optimization package to identify the feasibility of the above LMI problem, and it turns out to be feasible as it should be according to Equation. Matrices G j from Log-Concavity Recall that, in, there is no matrix G j, since there is no inequality constraint. In this paper, we consider the log-concave case T 2 ≤ 0, thus introducing inequality constraints. For the fourth order, T 2 ≤ 0 actually implies that the following entries in w are nonpositive: It is clear that the powers of T 2 are odd, and the others are even. To transform these nonpositive terms into matrices G j, the first two terms, T 3 2 T 2 1 and T 2 T 6 1 are trivial, since they can be expressed by u(i)u(j) directly: For the term T 2 T 2 3, the idea is similar to the third part in Section 3.2. One first finds z 3 ∈ R 2 such that z T 3 C 12 w non = T 2 T 2 3, namely z T 3 C 12 = 1. The solution is z T 3 = 0, 1/2. Then, At this point, we are done with the procedure for calculating all these matrices F 0, the F i and the G j. To show the negativity of the fourth derivative, it suffices to find a set of variables x i ∈ R and y j ≥ 0 such that Remark 2. The matrix G 2 is actually redundant, since we know that E ≡ − 1 7 E ≤ 0, which is already included in the matrices F i (in particular, matrix F 7 in Section 3.2). Including G 2 will not affect the feasibility check. MatrixF 0 for Gaussian Optimality However, to show the optimality of the Gaussian, the above formulation is not enough. According to inequality (a) in Equation, it would suffice to show that Thus, one needs to calculate the matrix F 0 such that The procedure is the same as that in Section 3.3. In particular, for the fourth derivative, since n = 4 is even, we directly have the quadratic form E = uu. It is straightforward to construct the matrixF 0 (scaled by a factor of two) her Again, we use the convex optimization package to check the feasibility. It turns out to be feasible and the solution helps us to identify the coefficients in Equation. Fifth Derivative For the fifth derivative, we omit the details of the manipulations since they are routine, and just provide the matrices here. For brevity, we only list out the nonzero entries of the upper-triangular part of a symmetric matrix. These matrices (with scaling) are For the sign of the fifth derivative, we used the convex optimization package to solve the following LMI problem, but could not find a feasible solution x 1,..., x 16 ∈ R. This suggests to us that a direct generalization of Cheng and Geng's method may not work for the fifth derivative. 
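The feasibility checks in this and the following subsections are described as being run with a convex optimization package. Purely as an illustration of what such a check involves (this is not the authors' code: the function names are ours, the matrices are assumed to be supplied as symmetric NumPy arrays of equal size, and the solver choice is arbitrary), the two computational ingredients, a null-space basis for C_12^T obtained via QR and the LMI feasibility test itself, might be sketched in Python as follows:

```python
import numpy as np
import cvxpy as cp

def null_space_basis(C12, tol=1e-10):
    # Basis of the null space of C12^T, i.e. all z with z^T C12 = 0.
    # Mirrors the QR construction described above: with the complete
    # factorization C12 = Q R, the columns of Q beyond rank(R) span that
    # null space.  (For general rank-deficient inputs, a pivoted
    # factorization or scipy.linalg.null_space(C12.T) is safer.)
    C12 = np.asarray(C12, dtype=float)
    if C12.ndim == 1:                      # treat a plain vector as one column
        C12 = C12[:, None]
    Q, R = np.linalg.qr(C12, mode="complete")
    rank = int(np.sum(np.abs(np.diag(R)) > tol))
    return Q[:, rank:]                     # each column z satisfies z @ C12 ~ 0

def lmi_feasible(F0, F_list, G_list=()):
    # Feasibility of  F0 + sum_i x_i F_i + sum_j y_j G_j  being positive
    # semidefinite, with the x_i free reals and the y_j nonnegative.
    x = cp.Variable(len(F_list)) if len(F_list) else None
    y = cp.Variable(len(G_list), nonneg=True) if len(G_list) else None
    expr = cp.Constant(np.asarray(F0, dtype=float))
    for i, Fi in enumerate(F_list):
        expr = expr + x[i] * Fi
    for j, Gj in enumerate(G_list):
        expr = expr + y[j] * Gj
    problem = cp.Problem(cp.Minimize(0), [expr >> 0])
    problem.solve(solver=cp.SCS)           # solver choice is illustrative
    return problem.status in (cp.OPTIMAL, cp.OPTIMAL_INACCURATE)
```

With the matrices listed in this section, a call such as lmi_feasible(F0, F_matrices) answers the sign question without the G_j, and passing the G_j as well corresponds to the log-concave variants.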
Instead, if we consider the log-concavity constraint T 2 ≤ 0 and check the optimality of Gaussian inputs, then we have a new matrixF 0 (similar to Section 3.5) and several matrices G j as the following: Now, one would like to find x 1,..., x 16 ∈ R and y 1,..., y 5 ∈ R + such that This can be solved by the convex optimization package. Again, the solution helps us to arrive at Equation. Proof of Theorem 1 Proof. For the third derivative, according to Equation, we have For the fourth derivative, according to Equation : Adding multiples of the left-hand sides of the equations: where (a) is due to Equation, and (b) is due to Equation. For the fifth derivative, For each term above on the right-hand side: According to Equation, For the second term, Then, adding multiples of the left-hand sides of Equations -, we have 2h On the Derivatives We are not able to say anything conclusive about the sign of the fifth derivative of the differential entropy h(X + √ tZ). If we impose the log-concavity condition, namely T 2 ≤ 0, then the fifth derivative is at least 4! E. This motivates us to consider the following problem: Without additional constraints, what are the values c 5 > 0 such that If one finds such a value c 5, then so long as E ≥ 0, the sign of the fifth derivative is determined. This condition is much weaker than T 2 ≤ 0. For the computational part, one only needs to construct the matrixF 0 such that 2h 0 u], and then solve the problem (see Section 3.6 for the matrices F i ) It turns out that c 5 = 0.13 works, while c 5 = 0.125 fails. The authors guess that c 5 ∈ works, but, at the moment, can just partly confirm this with limited simulation. Notice that the third derivative of the entropy power N(X + √ tZ) was shown to be nonnegative under the log-concavity condition, and we recover this in Corollary 3. We also considered the fourth derivative, but failed to obtain the sign because we were unable to apply the Cauchy-Schwartz inequality as we did for the third derivative. Possible Proofs To prove Conjecture 1, besides the method proposed in, we are also considering the following ways: the first one is constructive and inspired by Equation. Given a random variable X, if we can construct a proper measure () such that Equation holds, then one proves Conjecture 1. However, this is difficult even when X is binary symmetric, which is a very simple random variable. The second one is recursive. Suppose one can find a formula for the n-th derivative such that then it is clear that However, this fails for n = 2 (see Equation and Theorem 1). Instead, one may expect that and then If further one can show that E = E for some C k n +1, then one finishes the proof. Notice here that a clever observation is needed for this way to work. Applications The topic of Gaussian optimality has wide applications, for example in. In this work, besides the Gaussian optimality, we also have some new observations. In, the derivatives in the signal-noise ratio (snr) of I(X; √ snrX + Z) are studied. In particular, the first four derivatives are obtained in the language of the minimum mean-square error (Equations - in Corollary 1 of ). However, it is not clear whether some of these derivatives are signed or not. With some standard manipulations, it is not difficult to show that By letting t = 1/ √ snr, one can easily connect the minimum mean-square error formulae in with the signs of the derivatives of h(X + √ tZ) in t. 
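Written out, the connection alluded to above is the following (our reading of the substitution is √t = 1/√snr, that is, t = 1/snr):

```latex
I\!\left(X;\, \sqrt{\mathrm{snr}}\,X + Z\right)
\;=\; h\!\left(X + \sqrt{t}\,Z\right) \;-\; \tfrac{1}{2}\,\log\!\left(2\pi e\, t\right),
\qquad t = \frac{1}{\mathrm{snr}},
```

so that, up to the chain rule, every derivative of the mutual information in snr is a derivative of h(X + √t Z) in t.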
The verification of Conjectures 1 and 2 would imply the bounding and extremal properties of Equations - in, and thus deepen our understanding of the minimum mean-square error estimation under the additive-Gaussian setting. In addition, notice that the probability density function f (y, t) of Y = X + √ tZ is the solution of the heat equation ∂ ∂t f (y, t) = 1 2 ∂ 2 ∂y 2 f (y, t) with the initial condition that f (y, 0) = f X (y). Hence, Conjectures 1 and 2, if true, reveal the properties of the differential entropy of functions that satisfy the heat equation. For more results related to diffusion equations, one may refer to. Conclusions In this paper, we studied two conjectures on the derivatives of the differential entropy of a general random variable with added Gaussian noise. Regarding the conjecture on the signs of the derivatives made by Cheng and Geng, we introduced the linear matrix inequality approach to provide evidence that their original method might not generalize to orders higher than four. Instead, we considered imposing an additional constraint, namely the log-concavity assumption, and showed the optimality of Gaussian random variables for orders three, four and five. Thus, we made progress on McKean's conjecture, under a mild condition.
Fabrication of high internal phase Pickering emulsions with calcium-crosslinked whey protein nanoparticles for β-carotene stabilization and delivery.

Whey protein isolate (WPI) nanoparticles were fabricated with Ca2+-induced cross-linking and used as an effective particle stabilizer for high internal phase Pickering emulsion (HIPPE) formulation, aiming to improve the chemical stability and bioaccessibility of β-carotene (BC). The Ca2+ concentration dominated the characteristics of the WPI nanoparticles. Spherically shaped and homogeneously dispersed WPI nanoparticles with a Z-average diameter of approximately 150.0 nm were obtained at a 5.0 mM Ca2+ concentration. No cytotoxicity was observed for the WPI nanoparticles even at a concentration of 10.0 mg mL⁻¹. HIPPE (oil fraction 80.0%, w/w) could be successfully prepared with WPI nanoparticles at a concentration as low as 0.2% (w/w) and was stable for at least 2 months at room temperature. A higher WPI nanoparticle concentration resulted in more solid-like HIPPEs. BC exhibited appreciably higher retention in HIPPEs than in bulk oil during 30 days of storage at 50 °C. Moreover, BC bioaccessibility was appreciably improved with the HIPPE delivery system. Both the chemical stability and the bioaccessibility of BC increased with the increase of the WPI nanoparticle concentration from 0.2 to 1.0% (w/w). The results obtained in this study may facilitate the fabrication of edible and biocompatible protein-based nanoparticle stabilizers for HIPPE formulation with more innovative and tailored functionalities.
// Copyright (c) Microsoft Corporation. // Licensed under the MIT license. import { GeneratedClient } from "./generated/generatedClient"; import { Service } from "./generated/operations"; import { Table } from "./generated/operations"; import { TableEntity, ListTablesOptions, ListTableEntitiesOptions, GetTableEntityResponse, ListEntitiesResponse, CreateTableEntityOptions, UpdateTableEntityOptions, UpsertTableEntityOptions, UpdateMode, TableEntityQueryOptions, DeleteTableEntityOptions, CreateTableOptions, GetTableEntityOptions, ListTableItemsResponse, CreateTableEntityResponse, CreateTableItemResponse } from "./models"; import { TableServiceClientOptions, GetStatisticsOptions, GetStatisticsResponse, GetPropertiesOptions, GetPropertiesResponse, SetPropertiesOptions, ServiceProperties, SetPropertiesResponse, DeleteTableOptions, DeleteTableResponse, DeleteTableEntityResponse, UpdateEntityResponse, UpsertEntityResponse, GetAccessPolicyOptions, GetAccessPolicyResponse, SetAccessPolicyResponse, SetAccessPolicyOptions } from "./generatedModels"; import { QueryOptions as GeneratedQueryOptions, TableDeleteEntityOptionalParams } from "./generated/models"; import { getClientParamsFromConnectionString } from "./utils/connectionString"; import { TablesSharedKeyCredential } from "./TablesSharedKeyCredential"; import { serialize, deserialize, deserializeObjectsArray } from "./serialization"; /** * A TableServiceClient represents a Client to the Azure Tables service allowing you * to perform operations on the tables and the entities. */ export class TableServiceClient { private table: Table; private service: Service; /** * Creates a new instance of the TableServiceClient class. * * @param {string} url The URL of the service account that is the target of the desired operation., such as * "https://myaccount.table.core.windows.net". You can append a SAS, * such as "https://myaccount.table.core.windows.net?sasString". * @param {TablesSharedKeyCredential} credential TablesSharedKeyCredential used to authenticate requests. Only Supported for Browsers * @param {TableServiceClientOptions} options Optional. Options to configure the HTTP pipeline. * * Example using an account name/key: * * ```js * const account = "<storage account name>" * const sharedKeyCredential = new TablesSharedKeyCredential(account, "<account key>"); * * const tableServiceClient = new TableServiceClient( * `https://${account}.table.core.windows.net`, * sharedKeyCredential * ); * ``` */ // eslint-disable-next-line @azure/azure-sdk/ts-naming-options constructor( url: string, credential: TablesSharedKeyCredential, // eslint-disable-next-line @azure/azure-sdk/ts-naming-options options?: TableServiceClientOptions ); /** * Creates a new instance of the TableServiceClient class. * * @param {string} url The URL of the service account that is the target of the desired operation., such as * "https://myaccount.table.core.windows.net". You can append a SAS, * such as "https://myaccount.table.core.windows.net?sasString". * @param {TableServiceClientOptions} options Optional. Options to configure the HTTP pipeline. 
* Example appending a SAS token: * * ```js * const account = "<storage account name>"; * const sasToken = "<SAS token>"; * * const tableServiceClient = new TableServiceClient( * `https://${account}.table.core.windows.net?${sasToken}`, * ); * ``` */ // eslint-disable-next-line @azure/azure-sdk/ts-naming-options constructor(url: string, options?: TableServiceClientOptions); constructor( url: string, credentialOrOptions?: TablesSharedKeyCredential | TableServiceClientOptions, // eslint-disable-next-line @azure/azure-sdk/ts-naming-options options?: TableServiceClientOptions ) { const credential = credentialOrOptions instanceof TablesSharedKeyCredential ? credentialOrOptions : undefined; const clientOptions = (!(credentialOrOptions instanceof TablesSharedKeyCredential) ? credentialOrOptions : options) || {}; if (credential) { clientOptions.requestPolicyFactories = (defaultFactories) => [ ...defaultFactories, credential ]; } const client = new GeneratedClient(url, clientOptions); this.table = client.table; this.service = client.service; // TODO: Add the required policies and credential pipelines #9909 } /** * Retrieves statistics related to replication for the Table service. It is only available on the * secondary location endpoint when read-access geo-redundant replication is enabled for the account. * @param options The options parameters. */ public getStatistics(options?: GetStatisticsOptions): Promise<GetStatisticsResponse> { return this.service.getStatistics(options); } /** * Gets the properties of an account's Table service, including properties for Analytics and CORS * (Cross-Origin Resource Sharing) rules. * @param options The options parameters. */ public getProperties(options?: GetPropertiesOptions): Promise<GetPropertiesResponse> { return this.service.getProperties(options); } /** * Sets properties for an account's Table service endpoint, including properties for Analytics and CORS * (Cross-Origin Resource Sharing) rules. * @param properties The Table Service properties. * @param options The options parameters. */ public setProperties( properties: ServiceProperties, options?: SetPropertiesOptions ): Promise<SetPropertiesResponse> { return this.service.setProperties(properties, options); } /** * Creates a new table under the given account. * @param tableName The name of the table. * @param options The options parameters. */ public createTable( tableName: string, options?: CreateTableOptions ): Promise<CreateTableItemResponse> { return this.table.create({ tableName }, { ...options, responsePreference: "return-content" }); } /** * Operation permanently deletes the specified table. * @param tableName The name of the table. * @param options The options parameters. */ public deleteTable( tableName: string, options?: DeleteTableOptions ): Promise<DeleteTableResponse> { return this.table.delete(tableName, options); } /** * Queries tables under the given account. * @param options The options parameters. */ public async listTables(options?: ListTablesOptions): Promise<ListTableItemsResponse> { const { _response, xMsContinuationNextTableName: nextTableName, value = [] } = await this.table.query(options); return Object.assign([...value], { _response, nextTableName }); } /** * Returns a single entity in a table. * @param tableName The name of the table. * @param partitionKey The partition key of the entity. * @param rowKey The row key of the entity. * @param options The options parameters. 
*/ public async getEntity<T extends object>( tableName: string, partitionKey: string, rowKey: string, options?: GetTableEntityOptions ): Promise<GetTableEntityResponse<T>> { const { queryOptions, ...getEntityOptions } = options || {}; const { _response } = await this.table.queryEntitiesWithPartitionAndRowKey( tableName, partitionKey, rowKey, { ...getEntityOptions, queryOptions: this.convertQueryOptions(queryOptions || {}) } ); const tableEntity = deserialize<TableEntity<T>>(_response.parsedBody); return { ...tableEntity, _response }; } /** * Queries entities in a table. * @param tableName The name of the table. * @param options The options parameters. */ public async listEntities<T extends object>( tableName: string, options?: ListTableEntitiesOptions ): Promise<ListEntitiesResponse<T>> { const queryOptions = this.convertQueryOptions(options?.queryOptions || {}); const { _response, xMsContinuationNextPartitionKey: nextPartitionKey, xMsContinuationNextRowKey: nextRowKey, value } = await this.table.queryEntities(tableName, { ...options, queryOptions }); const tableEntities = deserializeObjectsArray<TableEntity<T>>(value || []); return Object.assign([...tableEntities], { _response, nextPartitionKey, nextRowKey }); } /** * Insert entity in a table. * @param tableName The name of the table. * @param entity The properties for the table entity. * @param options The options parameters. */ public createEntity<T extends object>( tableName: string, entity: TableEntity<T>, options?: CreateTableEntityOptions ): Promise<CreateTableEntityResponse> { const { queryOptions, ...createTableEntity } = options || {}; return this.table.insertEntity(tableName, { ...createTableEntity, queryOptions: this.convertQueryOptions(queryOptions || {}), tableEntityProperties: serialize(entity), responsePreference: "return-no-content" }); } /** * Deletes the specified entity in a table. * @param tableName The name of the table. * @param partitionKey The partition key of the entity. * @param rowKey The row key of the entity. * @param options The options parameters. */ public deleteEntity( tableName: string, partitionKey: string, rowKey: string, options?: DeleteTableEntityOptions ): Promise<DeleteTableEntityResponse> { const { etag = "*", queryOptions, ...rest } = options || {}; const deleteOptions: TableDeleteEntityOptionalParams = { ...rest, queryOptions: this.convertQueryOptions(queryOptions || {}) }; return this.table.deleteEntity(tableName, partitionKey, rowKey, etag, deleteOptions); } /** * Update an entity in a table. * @param tableName The name of the table. * @param entity The properties of the entity to be updated. * @param mode The different modes for updating the entity: * - Merge: Updates an entity by updating the entity's properties without replacing the existing entity. * - Replace: Updates an existing entity by replacing the entire entity. * @param options The options parameters. 
*/ public updateEntity<T extends object>( tableName: string, entity: TableEntity<T>, mode: UpdateMode, options?: UpdateTableEntityOptions ): Promise<UpdateEntityResponse> { if (!entity.PartitionKey || !entity.RowKey) { throw new Error("PartitionKey and RowKey must be defined"); } const { etag = "*", ...updateOptions } = options || {}; if (mode === "Merge") { return this.table.mergeEntity(tableName, entity.PartitionKey, entity.RowKey, { tableEntityProperties: entity, ifMatch: etag, ...updateOptions }); } if (mode === "Replace") { return this.table.updateEntity(tableName, entity.PartitionKey, entity.RowKey, { tableEntityProperties: entity, ifMatch: etag, ...updateOptions }); } throw new Error(`Unexpected value for update mode: ${mode}`); } /** * Upsert an entity in a table. * @param tableName The name of the table. * @param entity The properties for the table entity. * @param mode The different modes for updating the entity: * - Merge: Updates an entity by updating the entity's properties without replacing the existing entity. * - Replace: Updates an existing entity by replacing the entire entity. * @param options The options parameters. */ public upsertEntity<T extends object>( tableName: string, entity: TableEntity<T>, mode: UpdateMode, options?: UpsertTableEntityOptions ): Promise<UpsertEntityResponse> { if (!entity.PartitionKey || !entity.RowKey) { throw new Error("PartitionKey and RowKey must be defined"); } const { queryOptions, etag = "*", ...upsertOptions } = options || {}; if (mode === "Merge") { return this.table.mergeEntity(tableName, entity.PartitionKey, entity.RowKey, { tableEntityProperties: entity, queryOptions: this.convertQueryOptions(queryOptions || {}), ...upsertOptions }); } if (mode === "Replace") { return this.table.updateEntity(tableName, entity.PartitionKey, entity.RowKey, { tableEntityProperties: entity, queryOptions: this.convertQueryOptions(queryOptions || {}), ...upsertOptions }); } throw new Error(`Unexpected value for update mode: ${mode}`); } /** * Retrieves details about any stored access policies specified on the table that may be used with * Shared Access Signatures. * @param tableName The name of the table. * @param options The options parameters. */ public getAccessPolicy( tableName: string, options?: GetAccessPolicyOptions ): Promise<GetAccessPolicyResponse> { return this.table.getAccessPolicy(tableName, options); } /** * Sets stored access policies for the table that may be used with Shared Access Signatures. * @param tableName The name of the table. * @param acl The Access Control List for the table. * @param options The options parameters. */ public setAccessPolicy( tableName: string, options?: SetAccessPolicyOptions ): Promise<SetAccessPolicyResponse> { return this.table.setAccessPolicy(tableName, options); } private convertQueryOptions(query: TableEntityQueryOptions): GeneratedQueryOptions { const { select, ...queryOptions } = query; const mappedQuery: GeneratedQueryOptions = { ...queryOptions }; if (select) { mappedQuery.select = select.join(","); } return mappedQuery; } /** * * Creates an instance of TableServiceClient from connection string. * * @param {string} connectionString Account connection string or a SAS connection string of an Azure storage account. * [ Note - Account connection string can only be used in NODE.JS runtime. 
] * Account connection string example - * `DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=accountKey;EndpointSuffix=core.windows.net` * SAS connection string example - * `BlobEndpoint=https://myaccount.table.core.windows.net/;QueueEndpoint=https://myaccount.queue.core.windows.net/;FileEndpoint=https://myaccount.file.core.windows.net/;TableEndpoint=https://myaccount.table.core.windows.net/;SharedAccessSignature=sasString` * @param {TableServiceClientOptions} [options] Options to configure the HTTP pipeline. * @returns {TableServiceClient} A new TableServiceClient from the given connection string. */ public static fromConnectionString( connectionString: string, // eslint-disable-next-line @azure/azure-sdk/ts-naming-options options?: TableServiceClientOptions ): TableServiceClient { const { url, options: clientOptions } = getClientParamsFromConnectionString( connectionString, options ); return new TableServiceClient(url, clientOptions); } }
/*
 * Copyright (c) 2018 Oracle and/or its affiliates. All rights reserved.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package io.helidon.security.webserver;

import java.util.Optional;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.regex.Pattern;

import io.helidon.common.http.Http;
import io.helidon.common.http.MediaType;
import io.helidon.config.Config;
import io.helidon.security.Security;
import io.helidon.security.SecurityContext;
import io.helidon.security.util.TokenHandler;
import io.helidon.webserver.Routing;
import io.helidon.webserver.WebServer;

import org.junit.jupiter.api.BeforeAll;

import static org.hamcrest.CoreMatchers.is;
import static org.hamcrest.MatcherAssert.assertThat;

/**
 * Unit test for {@link WebSecurity}.
 */
public class WebSecurityProgrammaticTest extends WebSecurityTests {
    private static String baseUri;

    @BeforeAll
    public static void initClass() throws InterruptedException {
        WebSecurityTestUtil.auditLogFinest();
        myAuditProvider = new UnitTestAuditProvider();

        Config config = Config.create();

        Security security = Security.builderFromConfig(config)
                .addAuditProvider(myAuditProvider).build();

        Routing routing = Routing.builder()
                .register(WebSecurity.from(security)
                                  .securityDefaults(
                                          SecurityHandler.newInstance()
                                                  .queryParam(
                                                          "jwt",
                                                          TokenHandler.builder()
                                                                  .tokenHeader("BEARER_TOKEN")
                                                                  .tokenPattern(Pattern.compile("bearer (.*)"))
                                                                  .build())
                                                  .queryParam(
                                                          "name",
                                                          TokenHandler.builder()
                                                                  .tokenHeader("NAME_FROM_REQUEST")
                                                                  .build())))
                .get("/noRoles", WebSecurity.secure())
                .get("/user[/{*}]", WebSecurity.rolesAllowed("user"))
                .get("/admin", WebSecurity.rolesAllowed("admin"))
                .get("/deny", WebSecurity.rolesAllowed("deny"), (req, res) -> {
                    res.status(Http.Status.INTERNAL_SERVER_ERROR_500);
                    res.send("Should not get here, this role doesn't exist");
                })
                .get("/auditOnly", WebSecurity
                        .audit()
                        .auditEventType("unit_test")
                        .auditMessageFormat(AUDIT_MESSAGE_FORMAT)
                )
                .get("/{*}", (req, res) -> {
                    Optional<SecurityContext> securityContext = req.context().get(SecurityContext.class);
                    res.headers().contentType(MediaType.TEXT_PLAIN.withCharset("UTF-8"));
                    res.send("Hello, you are: \n" + securityContext
                            .map(ctx -> ctx.getUser().orElse(SecurityContext.ANONYMOUS).toString())
                            .orElse("Security context is null"));
                })
                .build();

        server = WebServer.create(routing);
        long t = System.currentTimeMillis();
        CountDownLatch cdl = new CountDownLatch(1);
        server.start().thenAccept(webServer -> {
            long time = System.currentTimeMillis() - t;
            System.out.println("Started server on localhost:" + webServer.port() + " in " + time + " millis");
            cdl.countDown();
        });

        // we must wait for server to start, so other tests are not triggered until it is ready!
        assertThat("Timeout while waiting for server to start!", cdl.await(5, TimeUnit.SECONDS), is(true));

        baseUri = "http://localhost:" + server.port();
    }

    @Override
    String serverBaseUri() {
        return baseUri;
    }
}
Detecting context-dependent sentences in parallel corpora

In this article, we provide several approaches to the automatic identification of parallel sentences that require sentence-external linguistic context to be correctly translated. Our long-term goal is to automatically construct a test set of context-dependent sentences in order to evaluate machine translation models designed to improve the translation of contextual, discursive phenomena. We provide a discussion and critique that show that current approaches do not allow us to achieve our goal, and suggest that for now evaluating individual phenomena is likely the best solution.

Introduction

Recent work in Machine Translation (MT) has focused on using information beyond the current sentence boundary to aid translation (Libovický & Helcl, 2017). The aim of these contextual MT systems is to remedy the flaw of traditional MT of translating sentences independently of each other, and in particular to improve the translation of discourse phenomena. Despite the progress made in incorporating linguistic context into MT (), these gains are often not observable using automatic evaluation metrics, such as BLEU (), and manual analysis of translations is often anecdotal. Whilst producing contrastive sentence pairs to be reranked by MT models is a promising strategy for evaluation (Rios ), constructing the test sets is often time-consuming, and the resulting examples can be unrepresentative of real data. Moreover, the distinction between examples that need extra-sentential context to be translated and those that do not is often lacking. A very useful addition to the available test suites would therefore be a test set of real, attested examples that require extra-sentential linguistic context to be correctly translated, as this would enable us to evaluate the progress made by contextual MT models specifically on the most difficult examples. Since manually identifying real sentences is very time-consuming, our long-term goal is to automatically construct such a test set. In this paper, we aim to show that designing and implementing a method of automatically detecting these sentences in a parallel corpus remains problematic, as shown by a reflection on what such a method would entail and by preliminary experiments using tools currently at our disposal to implement it. We begin by discussing the types of phenomena we wish to identify (Section 2) and existing work on evaluating discourse phenomena (Section 3). We then define the goals and principles such an identification method would adhere to (Section 4). Finally, in Section 5, we critique two possible approaches to the problem, suggesting theoretical limitations of each approach. Our hope is for this work to provide the basis for discussion on modelling contextual phenomena in a multilingual setting, with a view to automatically identifying context-dependent sentences in the long term.

Context-dependent phenomena

In practice, many sentences can be correctly translated in isolation, without surrounding context, which explains why most MT systems today translate sentences independently of each other. However, certain phenomena, mostly related to discourse, whose scope is by definition at the discourse rather than the sentence level, cannot be systematically and correctly translated without extra context.
Examples include anaphoric pronoun translation (Hardmeier & Federico, 2010;Guillou, 2016;Loaiciga Sanchez, 2017), lexical disambiguation (Carpuat & Wu, 2007;Rios ) and cross-lingual discourse connective prediction (Meyer & Popescu-Belis, 2012). These phenomena have a common characteristic: they are cross-lingually ambiguous and can only be disambiguated with the help of linguistic context. This linguistic context can appear within the sentence containing the ambiguous element or elsewhere in the text, in which case we refer to it as extra-sentential linguistic context. Cross-lingual ambiguity occurs because of mismatches in the language systems of the source and target languages, such that there are several translations possible out of context and only one correct in context. 1 The ambiguity can be morphological, syntactic, semantic and/or discursive. One example of the morphological level is the translation of anaphoric pronouns, which poses difficulties in MT due to structural differences in gender marking cross-lingually. For example, the French translation of English it is ambiguous between variants il (masc.) and elle (fem.), depending on the gender of the French noun with which the pronoun corefers. On the syntactic level, ambiguity can arise from an inherent ambiguity in the source language that is not preserved in the target language. For example, English 'green chestnuts and pears' is ambiguous between French 'des marrons verts et des poires' (only the chestnuts are green) and 'des marrons et des poires verts' (chestnuts and pears are green). Cross-lingual semantic ambiguity is where semantic ambiguity in the source language is not preserved in the target language and a choice must be made between the different meanings. For example, the English word spade is ambiguous between French bche 'gardening implement' and pique 'suit of cards' (Cf. work on word sense disambiguation in MT by Carpuat & Wu ). 2 Finally, at the discursive level, elements such as discourse connectives are often language-specific and are often expressed differently cross-lingually (Cf. work on implicitation of discourse connectives by Meyer & Webber ). Although the examples given above are relatively well-studied phenomena, particularly in a monolingual setting (coreference resolution, word sense disambiguation, discourse relation labelling, etc.), this cannot be seen as an exhaustive list of context-dependent phenomena. Ideally, it would be useful to study translation quality on all types of context-dependent sentences, not just those in a pre-defined list, especially as they are dependent on the particular language pair. Evaluating discourse in MT Evaluating discourse and other context-dependent phenomena in MT poses a problem for two main reasons. 3 Firstly, most sentences do not require context to be translated. When they do, few words are affected by an incorrect translation relative to the total number of words in the dataset, despite the fact that these errors can be seriously detrimental to the understanding of the translation. 4 Secondly, the correct translation of certain discourse phenomena, including anaphoric pronoun prediction, depends on previously made translation choices (ensuring translation coherence), ruling out metrics that rely on comparing surface forms of the predicted translation with a reference translation. 
As interest in contextual MT surges, the question of how to correctly evaluate the impact of the added context has not been far behind, with different solutions for evaluation, both manual and automatic, being proposed, aimed to overcome the problems described above. In terms of manual evaluation, the aim has been to construct a corpus containing only examples of interest in order to combat the sparsity problem cited above. Isabelle et al. provides such a set of test examples designed to test different well-known problems faced by MT systems, including discourse phenomena. Two solutions have been proposed for automatic evaluation of specific phenomenon. The first, adopted by shared task organisers for both the cross-lingual pronoun prediction task at WMT'16 and DiscoMT'17 (;) and the cross-lingual word sense disambiguation (WSD) task at SemEval-2013(Lefever & Hoste, 2013, is to change the nature of the task, and to evaluate the models' ability to translate solely the word of interest, whilst the rest of the translation is imposed for all contestants. This alleviates the second problem of translation coherence. The second automatic method, which also involves avoiding comparison of different models' translations, is to evaluate the capacity of MT models to rerank different hypotheses. Presenting models with contrastive pairs of examples and comparing the models on their ability to rank the correct hypothesis higher than an incorrect one is a way of indirectly evaluating them (Cf. for grammatical errors (Rios ) for WSD and () for coreference and lexical coherence/cohesion. Aside from (), which imposes that the disambiguating context occur in the previous sentence, the other automatic evaluation methods do not control for the fact that the disambiguating context can appear within the current sentence or beyond the sentence boundary. 5 This means that many examples in the sets can be resolved using sentence-internal context and therefore do not directly evaluate the ability of contextual models to use context beyond the current sentence. This notably proved problematic for the evaluation of the 2016 pronoun task, of which the highest performing model did not use any extra-sentential context (achieved higher scores based on the inter-sentential examples alone). A useful complement to these test suites would therefore be a method of automatically constructing a test set of sentences that require linguistic context to be correctly translated. The advantages of such a method would be its automatic nature, given the difficulty of manually finding representative examples of context-dependent phenomena and the fact that it could potentially find more diverse phenomena than a human annotator is capable of finding. Automatic context-dependent sentence detection Our long-term goal is to propose and develop a method of identifying real corpus examples that are cross-lingually ambiguous and necessarily require extra-sentential context (as opposed to intrasentential context) to be correctly translated. 6 In theory, such a method would separate parallel sentences for which all information needed to produce the target sentence is found within the source sentence (non-context-dependent) from those for which part of the information can only be found in the surrounding sentences (context-dependent). 
Goals and principles To achieve our goal, the ideal method would adhere to a certain number of principles to ensure (i) the unbiased nature of the test set, (ii) diversity and a large coverage of the phenomena detected and (iii) easy transferability to other language pairs. Although these properties may not be mutually attainable, attempting to adhere to these three properties is key to developing a detection method. (i) Unbiased test set A test set should be inherently unbiased towards a certain MT model or a certain type of model if it is to be used to fairly evaluate and compare models. This means that, ideally, the detection method itself should not rely on an existing MT model whose goal is to accomplish a task that the test set is designed to test. In our specific case, this means that any use of contextual MT models would violate this principle. (ii) Diversity and large coverage of phenomena A number of cases have been previously identifed in the literature as requiring context to be correctly translated, for example anaphoric pronouns, lexical ambiguity, discourse connectives, other cases of lexical cohesion. However, in practice, the main focus has been on only a couple of these phenomena, namely anaphoric pronoun resolution, and to a lesser extent lexical ambiguity. It is therefore interesting to keep the method as generic as possible, giving us the opportunity of identifying new context-dependent phenomena. (iii) Easy transferability to other language pairs To ensure that similar test sets can be easily produced for other language pairs, the detection method should be independent or at least only weakly dependent on the language pair. Since the majority of contextual phenomena depend on the language systems of the source and target language, this third point complements the previous point concerning the diversity of linguistic phenomena; the less a priori knowledge of the language pair required, the more adaptable the method will be to new language pairs, for which we do not have such knowledge. The question is, is such a method currently possible? Comparison of methods An ideal method would be one relying on complete and comparable representations of the source sentence and of the target sentence both with and without linguistic context. Intuitively, for contextdependent sentences to be correctly translated, the information present in the representation of the target sentence would be impossible to reconstruct from the representation of the source sentence, unless the information from the context is also included. We look at two different approaches for simulating this idealised scenario, working (i) at the sentence level and (ii) at the word level. Modelling at the sentence level Following promising work on distributional representations of words (;), recent work has emerged on the distributional representation of larger units of text, such as sentences. These representations are meant to encode generic, often semantic information about the sentence in fixed-size vectors. If sentence embeddings can encode information about a sentence, can they provide the necessary framework to determine whether or not a target sentence is translatable from its source sentence alone, ignoring its context? 
A positive answer to this question would require the following to be true: (i) a neural network model can be trained to predict the target sentence embedding from the source sentence embedding; a poor prediction for a given source embedding would be a sign that all the information necessary to produce its corresponding target embedding is not present in the source embedding; (ii) a second model trained to predict the target sentence embedding from a joint embedding of the source sentence and its context (source-or target-side) would predict a better target embedding for this context-dependent sentence. The problem with this method is the number of assumptions that are made: (i) the sentence embedding fully represents the sentence, (ii) a mapping can be learnt between source and target sentence embeddings, (iii) we have a reliable metric to evaluate whether the contextually predicted sentence embedding is significantly more similar to the real target sentence embedding than the non-contextual one. Preliminary exploratory experiments in this direction which aimed to learn the mapping between DOC2VEC embeddings () in the source and target languages using a small feedforward neural network confirmed that these assumptions were too great. One fundamental flaw with such an approach is that we have little control over the type of information stored in the representation, and no guarantee that this information will be useful for predicting cross-lingual ambiguity. With no control over the type of information modelled, evaluating whether the predicted embedding is sufficiently similar to the true target representation is also an open problem, and makes the method untractable. Given an imperfect representation of a sentence, judging whether a prediction is more similar to the target representation than another is impossible without knowing on what criteria we base the similarity. The approach could only really work with a near-perfect representation of all the information in a sentence, or more control over what kind of information is stored. Given our very generic aim to identify all types of context-dependent phenomena, this approach is not yet feasible. The problem is almost circular; if we had a method to perfectly map the representation of a sentence in context from one language to another, machine translation itself would be a solved task. Modelling at the word level Given the problem of obtaining sufficiently complete sentence-level embedding representations, a reasonable compromise is to try to work at the word level. We therefore consider a second, reduced approach, this time assuming that the ambiguity arises from a single word in the source sentence and only affects its translation in the target sentence. 7 We therefore also need to make the assumption that we have a method to identify sentences containing an ambiguous element. This splits the problem into two steps: (i) identifying sentences containing ambiguous elements, and (ii) separating the sentences which do not need extra context to be translated from those that do. Given that methods exist to detect specific phenomena in corpora, e.g. anaphoric pronouns () and semantically ambiguous words (Rios ), we suppose that new methods can be developed for more phenomena. This reduces the task to identifying whether the disambiguating context is found within the sentence, in the neighbouring sentences or cannot be found in the text at all. 
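As a concrete, purely illustrative sketch of the kind of word-level probe developed in the next paragraph (this is not the CONTEXT2VEC setup used in the experiments reported below; the model, the sentences and the candidate forms are placeholders, and each candidate is assumed to map to a single subword token):

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

MODEL = "camembert-base"  # placeholder French masked language model
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForMaskedLM.from_pretrained(MODEL)

def candidate_probs(text, candidates):
    # Probability of each candidate form at the [MASK] position in `text`.
    enc = tok(text.replace("[MASK]", tok.mask_token), return_tensors="pt")
    mask_pos = (enc["input_ids"][0] == tok.mask_token_id).nonzero()[0, 0]
    with torch.no_grad():
        probs = model(**enc).logits[0, mask_pos].softmax(dim=-1)
    out = {}
    for cand in candidates:
        ids = tok.encode(cand, add_special_tokens=False)
        # only meaningful when the candidate is a single subword token
        out[cand] = probs[ids[0]].item() if len(ids) == 1 else None
    return out

# Does adding the previous sentence shift probability mass towards the
# pronoun that agrees in gender with the antecedent?
print(candidate_probs("[MASK] dort profondément.", ["Il", "Elle"]))
print(candidate_probs("La chatte miaule. [MASK] dort profondément.", ["Il", "Elle"]))
```

If the preceding sentence genuinely disambiguates the pronoun, one would hope to see the probability mass shift towards the agreeing form in the second call; the limitations discussed below explain why, in practice, the picture is rarely this clean.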
An approach at the word level would typically look at the probability of the ambiguous target word given just the current sentence and also given sentence-external context, compared to the probability of the alternative, incorrect solution(s). For example, take the English sentence "the cat meows and it sleeps" and its French translation le chat miaule et il dort, where the target word is il and the incorrect alternative is elle: we would expect the target word to have a higher probability than the alternative word, regardless of the addition of extra context, since coreference is resolved within the sentence. In a sentence where the antecedent appears in the previous sentence, we would expect the probability of the target word to increase with the addition of this previous sentence, relative to the probability of the alternative translation.

Yet again, this method suffers from strong assumptions about the capacity of current NLP models to use contextual information to make such predictions. The assumptions were confirmed to be false by exploratory experiments using the tool CONTEXT2VEC (), which can be used like a language model to make predictions about the form of a target word given a certain context. Three limitations were observed: (i) the intrinsic probability of a word, as determined by its frequency, has a very large effect on its probability in context, making it very complicated to assess the effect of adding context, (ii) the capacity of generic language models to model complex and structured problems such as coreference chains is insufficient, even for simple, short utterances, and (iii) in light of the second limitation, all context, even if not directly relevant to the translation of the ambiguous word, has an effect on the probability of the word. We have little control over which information is considered important by the model, particularly if we wish to keep the approach as general as possible.

Conclusion

We have described and motivated a theoretically interesting task: identifying sentences that are cross-lingually ambiguous and dependent on extra-sentential linguistic context. Beyond translation, this could have a wider impact on NLP applications, including dialogue generation and understanding. Through a reflection on the prerequisites for such a detection method, and by exploring two different approaches to the problem, we have found that the task is very ambitious. The limitations identified have shown us that as long as complete and robust representations of all the information within sentences are not achievable, the task of identifying context-dependent sentences using a method that is agnostic to the type of phenomenon is unlikely to be attainable. For now it appears that detecting contextual phenomena is better performed on a per-phenomenon basis.
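To make the word-level comparison described in the "Modelling at the word level" section more concrete, the following is a minimal sketch of the decision it implies. It is not the authors' actual experimental setup: the scoring function score_word is a hypothetical stand-in for whatever contextual predictor is available (CONTEXT2VEC was used in the experiments above; any language-model-style scorer could be plugged in), and the margin threshold is an arbitrary illustrative value.

```python
# Sketch of the word-level context-dependence test, under the assumptions stated above.
# `score_word(word, context)` is a HYPOTHETICAL scorer returning a probability-like value
# for `word` given the textual `context`; it is not a call to any real library.
from typing import Callable, List

Scorer = Callable[[str, str], float]  # (candidate word, context string) -> score


def is_context_dependent(sentence: str,
                         previous_sentence: str,
                         target: str,
                         alternatives: List[str],
                         score_word: Scorer,
                         margin: float = 0.1) -> bool:
    """Return True if the target word is only preferred over its alternatives
    once the previous sentence is added as extra-sentential context."""
    # Preference for the correct word using the current sentence alone.
    local_pref = score_word(target, sentence) - max(
        score_word(alt, sentence) for alt in alternatives)

    # Preference once the previous sentence is prepended as context.
    extended = previous_sentence + " " + sentence
    contextual_pref = score_word(target, extended) - max(
        score_word(alt, extended) for alt in alternatives)

    # Context-dependent: the sentence alone does not clearly prefer the target,
    # but adding the previous sentence does (by at least `margin`).
    return local_pref < margin and contextual_pref >= margin
```

For the cat example, the local preference for il should already be large because the coreference is resolved within the sentence, so the function would return False; for a sentence whose antecedent lies in the previous sentence, only the contextual preference should clear the margin. The three limitations listed above (frequency effects, weak coreference modelling, sensitivity to irrelevant context) explain why this simple comparison is unreliable in practice.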
import requests import jwt import gzip import platform import hashlib from collections import defaultdict from pathlib import Path from datetime import datetime, timedelta import time import json from typing import List from enum import Enum, auto from .resources import * from .__version__ import __version__ as version ALGORITHM = 'ES256' BASE_API = "https://api.appstoreconnect.apple.com" class UserRole(Enum): ADMIN = auto() FINANCE = auto() TECHNICAL = auto() SALES = auto() MARKETING = auto() DEVELOPER = auto() ACCOUNT_HOLDER = auto() READ_ONLY = auto() APP_MANAGER = auto() ACCESS_TO_REPORTS = auto() CUSTOMER_SUPPORT = auto() class HttpMethod(Enum): GET = 1 POST = 2 PATCH = 3 DELETE = 4 class APIError(Exception): def __init__(self, error_string, status_code=None): try: self.status_code = int(status_code) except (ValueError, TypeError): pass super().__init__(error_string) class Api: def __init__(self, key_id, key_file, issuer_id, submit_stats=True): self._token = None self.token_gen_date = None self.exp = None self.key_id = key_id self.key_file = key_file self.issuer_id = issuer_id self.submit_stats = submit_stats self._call_stats = defaultdict(int) if self.submit_stats: self._submit_stats("session_start") self._debug = False token = self.token # generate first token # def __del__(self): # if self.submit_stats: # self._submit_stats("session_end") def _generate_token(self): try: key = open(self.key_file, 'r').read() except IOError as e: key = self.key_file self.token_gen_date = datetime.now() exp = int(time.mktime((self.token_gen_date + timedelta(minutes=20)).timetuple())) return jwt.encode({'iss': self.issuer_id, 'exp': exp, 'aud': 'appstoreconnect-v1'}, key, headers={'kid': self.key_id, 'typ': 'JWT'}, algorithm=ALGORITHM).decode('ascii') def _get_resource(self, Resource, resource_id): url = "%s%s/%s" % (BASE_API, Resource.endpoint, resource_id) if Resource.suffix: url += Resource.suffix payload = self._api_call(url) data = payload.get('data', {}) if isinstance(data, list): data = data[0] return Resource(data, self) def _get_resource_from_payload_data(self, payload): try: resource_type = resources[payload.get('type')] except KeyError: raise APIError("Unsupported resource type %s" % payload.get('type')) return resource_type(payload, self) def get_related_resource(self, full_url): payload = self._api_call(full_url) data = payload.get('data') if data is None: return None elif type(data) == dict: return self._get_resource_from_payload_data(data) def get_related_resources(self, full_url): payload = self._api_call(full_url) data = payload.get('data', []) for resource in data: yield self._get_resource_from_payload_data(resource) def _create_resource(self, Resource, args): attributes = {} for attribute in Resource.attributes: if attribute in args and args[attribute] is not None: attributes[attribute] = args[attribute] relationships_dict = {} for relation in Resource.relationships.keys(): if relation in args and args[relation] is not None: relationships_dict[relation] = {} if Resource.relationships[relation].get('multiple', False): relationships_dict[relation]['data'] = [] relationship_objects = args[relation] if type(relationship_objects) is not list: relationship_objects = [relationship_objects] for relationship_object in relationship_objects: relationships_dict[relation]['data'].append({ 'id': relationship_object.id, 'type': relationship_object.type }) else: relationships_dict[relation]['data'] = { 'id': args[relation].id, 'type': args[relation].type } post_data = { 'data': { 'attributes': 
attributes, 'relationships': relationships_dict, 'type': Resource.type } } url = "%s%s" % (BASE_API, Resource.endpoint) if self._debug: print(post_data) payload = self._api_call(url, HttpMethod.POST, post_data) return Resource(payload.get('data', {}), self) def _modify_resource(self, resource, args): attributes = {} for attribute in resource.attributes: if attribute in args and args[attribute] is not None: if type(args[attribute]) == list: value = list(map(lambda e: e.name if isinstance(e, Enum) else e, args[attribute])) elif isinstance(args[attribute], Enum): value = args[attribute].name else: value = args[attribute] attributes[attribute] = value relationships = {} if hasattr(resource, 'relationships'): for relationship in resource.relationships: if relationship in args and args[relationship] is not None: relationships[relationship] = {} relationships[relationship]['data'] = [] for relationship_object in args[relationship]: relationships[relationship]['data'].append( { 'id': relationship_object.id, 'type': relationship_object.type } ) post_data = { 'data': { 'attributes': attributes, 'id': resource.id, 'type': resource.type } } if len(relationships): post_data['data']['relationships'] = relationships url = "%s%s/%s" % (BASE_API, resource.endpoint, resource.id) if self._debug: print(post_data) payload = self._api_call(url, HttpMethod.PATCH, post_data) return type(resource)(payload.get('data', {}), self) def _delete_resource(self, resource: Resource): url = "%s%s/%s" % (BASE_API, resource.endpoint, resource.id) self._api_call(url, HttpMethod.DELETE) def _get_resources(self, Resource, filters=None, sort=None, full_url=None): class IterResource: def __init__(self, api, url): self.api = api self.url = url self.index = 0 self.total_length = None self.payload = None def __getitem__(self, item): items = list(self) return items[item] def __iter__(self): return self def __repr__(self): return "Iterator over %s resource" % Resource.__name__ def __len__(self): if not self.payload: self.fetch_page() return self.total_length def __next__(self): if not self.payload: self.fetch_page() if self.index < len(self.payload.get('data', [])): data = self.payload.get('data', [])[self.index] self.index += 1 return Resource(data, self.api) else: self.url = self.payload.get('links', {}).get('next', None) self.index = 0 if self.url: self.fetch_page() if self.index < len(self.payload.get('data', [])): data = self.payload.get('data', [])[self.index] self.index += 1 return Resource(data, self.api) raise StopIteration() def fetch_page(self): self.payload = self.api._api_call(self.url) self.total_length = self.payload.get('meta', {}).get('paging', {}).get('total', 0) url = full_url if full_url else "%s%s" % (BASE_API, Resource.endpoint) url = self._build_query_parameters(url, filters, sort) return IterResource(self, url) def _build_query_parameters(self, url, filters, sort = None): separator = '?' 
if type(filters) is dict: for index, (filter_name, filter_value) in enumerate(filters.items()): filter_name = "filter[%s]" % filter_name url = "%s%s%s=%s" % (url, separator, filter_name, filter_value) separator = '&' if type(sort) is str: url = "%s%ssort=%s" % (url, separator, sort) return url def _api_call(self, url, method=HttpMethod.GET, post_data=None): headers = {"Authorization": "Bearer %s" % self.token} if self._debug: print("%s %s" % (method.value, url)) if self._submit_stats: endpoint = url.replace(BASE_API, '') if method in (HttpMethod.PATCH, HttpMethod.DELETE): # remove last bit of endpoint which is a resource id endpoint = "/".join(endpoint.split('/')[:-1]) request = "%s %s" % (method.name, endpoint) self._call_stats[request] += 1 if method == HttpMethod.GET: r = requests.get(url, headers=headers) elif method == HttpMethod.POST: headers["Content-Type"] = "application/json" r = requests.post(url=url, headers=headers, data=json.dumps(post_data)) elif method == HttpMethod.PATCH: headers["Content-Type"] = "application/json" r = requests.patch(url=url, headers=headers, data=json.dumps(post_data)) elif method == HttpMethod.DELETE: r = requests.delete(url=url, headers=headers) else: raise APIError("Unknown HTTP method") if self._debug: print(r.status_code) content_type = r.headers['content-type'] if content_type in ["application/json", "application/vnd.api+json"]: payload = r.json() if 'errors' in payload: raise APIError( payload.get('errors', [])[0].get('detail', 'Unknown error'), payload.get('errors', [])[0].get('status', None) ) return payload elif content_type == 'application/a-gzip': # TODO implement stream decompress data_gz = b"" for chunk in r.iter_content(1024 * 1024): if chunk: data_gz = data_gz + chunk data = gzip.decompress(data_gz) return data.decode("utf-8") else: if not 200 <= r.status_code <= 299: raise APIError("HTTP error [%d][%s]" % (r.status_code, r.content)) return r def _submit_stats(self, event_type): """ this submits anonymous usage statistics to help us better understand how this library is used you can opt-out by initializing the client with submit_stats=False """ payload = { 'project': 'appstoreconnectapi', 'version': version, 'type': event_type, 'parameters': { 'python_version': platform.python_version(), 'platform': platform.platform(), 'issuer_id_hash': hashlib.sha1(self.issuer_id.encode()).hexdigest(), # send anonymized hash } } if event_type == 'session_end': payload['parameters']['endpoints'] = self._call_stats requests.post('https://stats.ponytech.net/new-event', json.dumps(payload)) @property def token(self): # generate a new token every 15 minutes if (self._token is None) or (self.token_gen_date + timedelta(minutes=15) < datetime.now()): self._token = self._generate_token() return self._token # Users and Roles def modify_user_account( self, user: User, allAppsVisible: bool = None, provisioningAllowed: bool = None, roles: List[UserRole] = None, visibleApps: List[App] = None, ): """ :reference: https://developer.apple.com/documentation/appstoreconnectapi/modify_a_user_account :return: a User resource """ return self._modify_resource(user, locals()) def list_users(self, filters=None, sort=None): """ :reference: https://developer.apple.com/documentation/appstoreconnectapi/list_users :return: an iterator over User resources """ return self._get_resources(User, filters, sort) def list_invited_users(self, filters=None, sort=None): """ :reference: https://developer.apple.com/documentation/appstoreconnectapi/list_invited_users :return: an iterator over 
UserInvitation resources """ return self._get_resources(UserInvitation, filters, sort) # TODO: implement POST requests using Resource def invite_user(self, all_apps_visible, email, first_name, last_name, provisioning_allowed, roles, visible_apps=None): """ :reference: https://developer.apple.com/documentation/appstoreconnectapi/invite_a_user :return: a UserInvitation resource """ post_data = {'data': {'attributes': {'allAppsVisible': all_apps_visible, 'email': email, 'firstName': first_name, 'lastName': last_name, 'provisioningAllowed': provisioning_allowed, 'roles': roles}, 'type': 'userInvitations'}} if visible_apps is not None: visible_apps_relationship = list(map(lambda a: {'id': a, 'type': 'apps'}, visible_apps)) visible_apps_data = {'visibleApps': {'data': visible_apps_relationship}} post_data['data']['relationships'] = visible_apps_data payload = self._api_call(BASE_API + "/v1/userInvitations", HttpMethod.POST, post_data) return UserInvitation(payload.get('data'), {}) def read_user_invitation_information(self, user_invitation_id: str): """ :reference: https://developer.apple.com/documentation/appstoreconnectapi/read_user_invitation_information :return: a UserInvitation resource """ return self._get_resource(UserInvitation, user_invitation_id) # Beta Testers and Groups def create_beta_tester(self, email: str, firstName: str = None, lastName: str = None, betaGroups: BetaGroup = None, builds: Build = None) -> BetaTester: """ :reference: https://developer.apple.com/documentation/appstoreconnectapi/create_a_beta_tester :return: an BetaTester resource """ return self._create_resource(BetaTester, locals()) def delete_beta_tester(self, betaTester: BetaTester) -> None: """ :reference: https://developer.apple.com/documentation/appstoreconnectapi/delete_a_beta_tester :return: None """ return self._delete_resource(betaTester) def list_beta_testers(self, filters=None, sort=None): """ :reference: https://developer.apple.com/documentation/appstoreconnectapi/list_beta_testers :return: an iterator over BetaTester resources """ return self._get_resources(BetaTester, filters, sort) def read_beta_tester_information(self, beta_tester_id: str): """ :reference: https://developer.apple.com/documentation/appstoreconnectapi/read_beta_tester_information :return: a BetaTester resource """ return self._get_resource(BetaTester, beta_tester_id) def create_beta_group(self, app: App, name: str, publicLinkEnabled: bool = None, publicLinkLimit: int = None, publicLinkLimitEnabled: bool = None) -> BetaGroup: """ :reference:https://developer.apple.com/documentation/appstoreconnectapi/create_a_beta_group :return: a BetaGroup resource """ return self._create_resource(BetaGroup, locals()) def modify_beta_group(self, betaGroup: BetaGroup, name: str = None, publicLinkEnabled: bool = None, publicLinkLimit: int = None, publicLinkLimitEnabled: bool = None) -> BetaGroup: """ :reference: https://developer.apple.com/documentation/appstoreconnectapi/modify_a_beta_group :return: a BetaGroup resource """ return self._modify_resource(betaGroup, locals()) def delete_beta_group(self, betaGroup: BetaGroup): return self._delete_resource(betaGroup) def list_beta_groups(self, filters=None, sort=None): """ :reference: https://developer.apple.com/documentation/appstoreconnectapi/list_beta_groups :return: an iterator over BetaGroup resources """ return self._get_resources(BetaGroup, filters, sort) def read_beta_group_information(self, beta_group_ip): """ :reference: 
https://developer.apple.com/documentation/appstoreconnectapi/read_beta_group_information :return: an BetaGroup resource """ return self._get_resource(BetaGroup, beta_group_ip) def add_build_to_beta_group(self, beta_group_id, build_id): post_data = {'data': [{ 'id': build_id, 'type': 'builds'}]} payload = self._api_call(BASE_API + "/v1/betaGroups/" + beta_group_id + "/relationships/builds", HttpMethod.POST, post_data) return BetaGroup(payload.get('data'), {}) # App Resources def read_app_information(self, app_ip): """ :reference: https://developer.apple.com/documentation/appstoreconnectapi/read_app_information :param app_ip: :return: an App resource """ return self._get_resource(App, app_ip) def read_app_infos(self, app_ip): """ :reference: https://developer.apple.com/documentation/appstoreconnectapi/list_all_app_infos_for_an_app :param app_ip: :return: an App resource """ return self._get_resource(AppInfos, app_ip) def list_apps(self, filters=None, sort=None): """ :reference: https://developer.apple.com/documentation/appstoreconnectapi/list_apps :return: an iterator over App resources """ return self._get_resources(App, filters, sort) def list_prerelease_versions(self, filters=None, sort=None): """ :reference: https://developer.apple.com/documentation/appstoreconnectapi/list_prerelease_versions :return: an iterator over PreReleaseVersion resources """ return self._get_resources(PreReleaseVersion, filters, sort) def list_beta_app_localizations(self, filters=None): """ :reference: https://developer.apple.com/documentation/appstoreconnectapi/list_beta_app_localizations :return: an iterator over BetaAppLocalization resources """ return self._get_resources(BetaAppLocalization, filters) def read_beta_app_localization_information(self, beta_app_id: str): """ :reference: https://developer.apple.com/documentation/appstoreconnectapi/read_beta_app_localization_information :return: an BetaAppLocalization resource """ return self._get_resource(BetaAppLocalization, beta_app_id) def create_beta_app_localization(self, app: App, locale: str, description: str = None, feedbackEmail: str = None, marketingUrl: str = None, privacyPolicyUrl: str = None, tvOsPrivacyPolicy: str = None): """ :reference: https://developer.apple.com/documentation/appstoreconnectapi/create_a_beta_app_localization :return: an BetaAppLocalization resource """ return self._create_resource(BetaAppLocalization, locals()) def list_app_encryption_declarations(self, filters=None): """ :reference: https://developer.apple.com/documentation/appstoreconnectapi/list_app_encryption_declarations :return: an iterator over AppEncryptionDeclaration resources """ return self._get_resources(AppEncryptionDeclaration, filters) def list_beta_license_agreements(self, filters=None): """ :reference: https://developer.apple.com/documentation/appstoreconnectapi/list_beta_license_agreements :return: an iterator over BetaLicenseAgreement resources """ return self._get_resources(BetaLicenseAgreement, filters) # Build Resources def list_builds(self, filters=None, sort=None): """ :reference: https://developer.apple.com/documentation/appstoreconnectapi/list_builds :return: an iterator over Build resources """ return self._get_resources(Build, filters, sort) # TODO: handle fields on get_resources() def build_processing_state(self, app_id, version): return self._api_call(BASE_API + "/v1/builds?filter[app]=" + app_id + "&filter[version]=" + version + "&fields[builds]=processingState") # TODO: implement POST requests using Resource def 
set_uses_non_encryption_exemption_setting(self, build_id, uses_non_encryption_exemption_setting): post_data = {'data': {'attributes': {'usesNonExemptEncryption': uses_non_encryption_exemption_setting}, 'id': build_id, 'type': 'builds'}} payload = self._api_call(BASE_API + "/v1/builds/" + build_id, HttpMethod.PATCH, post_data) return Build(payload.get('data'), {}) def list_build_beta_details(self, filters=None): """ :reference: https://developer.apple.com/documentation/appstoreconnectapi/list_build_beta_details :return: an iterator over BuildBetaDetail resources """ return self._get_resources(BuildBetaDetail, filters) def create_beta_build_localization(self, build: Build, locale: str, whatsNew: str = None): """ :reference: https://developer.apple.com/documentation/appstoreconnectapi/create_a_beta_build_localization :return: a BetaBuildLocalization resource """ return self._create_resource(BetaBuildLocalization, locals()) def modify_beta_build_localization(self, beta_build_localization: BetaBuildLocalization, whatsNew: str): """ :reference: https://developer.apple.com/documentation/appstoreconnectapi/modify_a_beta_build_localization :return: a BetaBuildLocalization resource """ return self._modify_resource(beta_build_localization, locals()) def list_beta_build_localizations(self, filters=None): """ :reference: https://developer.apple.com/documentation/appstoreconnectapi/list_beta_build_localizations :return: an iterator over BetaBuildLocalization resources """ return self._get_resources(BetaBuildLocalization, filters) def list_beta_app_review_details(self, filters=None): """ :reference: https://developer.apple.com/documentation/appstoreconnectapi/list_beta_app_review_details :return: an iterator over BetaAppReviewDetail resources """ return self._get_resources(BetaAppReviewDetail, filters) def submit_app_for_beta_review(self, build: Build) -> BetaAppReviewSubmission: """ :reference: https://developer.apple.com/documentation/appstoreconnectapi/submit_an_app_for_beta_review :return: a BetaAppReviewSubmission resource """ return self._create_resource(BetaAppReviewSubmission, locals()) def list_beta_app_review_submissions(self, filters=None): """ :reference: https://developer.apple.com/documentation/appstoreconnectapi/list_beta_app_review_submissions :return: an iterator over BetaAppReviewSubmission resources """ return self._get_resources(BetaAppReviewSubmission, filters) def read_beta_app_review_submission_information(self, beta_app_id: str): """ :reference: https://developer.apple.com/documentation/appstoreconnectapi/read_beta_app_review_submission_information :return: an BetaAppReviewSubmission resource """ return self._get_resource(BetaAppReviewSubmission, beta_app_id) # Provisioning def list_bundle_ids(self, filters=None, sort=None): """ :reference: https://developer.apple.com/documentation/appstoreconnectapi/list_bundle_ids :return: an iterator over BundleId resources """ return self._get_resources(BundleId, filters, sort) def list_certificates(self, filters=None, sort=None): """ :reference: https://developer.apple.com/documentation/appstoreconnectapi/list_and_download_certificates :return: an iterator over Certificate resources """ return self._get_resources(Certificate, filters, sort) def list_devices(self, filters=None, sort=None): """ :reference: https://developer.apple.com/documentation/appstoreconnectapi/list_devices :return: an iterator over Device resources """ return self._get_resources(Device, filters, sort) def register_new_device(self, name: str, platform: str, udid: str) -> 
Device: """ :reference: https://developer.apple.com/documentation/appstoreconnectapi/register_a_new_device :return: a Device resource """ return self._create_resource(Device, locals()) def modify_registered_device(self, device: Device, name: str = None, status: str = None) -> Device: """ :reference: https://developer.apple.com/documentation/appstoreconnectapi/modify_a_registered_device :return: a Device resource """ return self._modify_resource(device, locals()) def list_profiles(self, filters=None, sort=None): """ :reference: https://developer.apple.com/documentation/appstoreconnectapi/list_and_download_profiles :return: an iterator over Profile resources """ return self._get_resources(Profile, filters, sort) # Reporting def download_finance_reports(self, filters=None, split_response=False, save_to=None): # setup required filters if not provided for required_key, default_value in ( ('regionCode', 'ZZ'), ('reportType', 'FINANCIAL'), # vendorNumber is required but we cannot provide a default value # reportDate is required but we cannot provide a default value ): if required_key not in filters: filters[required_key] = default_value url = "%s%s" % (BASE_API, FinanceReport.endpoint) url = self._build_query_parameters(url, filters) response = self._api_call(url) if split_response: res1 = response.split('Total_Rows')[0] res2 = '\n'.join(response.split('Total_Rows')[1].split('\n')[1:]) if save_to: file1 = Path(save_to[0]) file1.write_text(res1, 'utf-8') file2 = Path(save_to[1]) file2.write_text(res2, 'utf-8') return res1, res2 if save_to: file = Path(save_to) file.write_text(response, 'utf-8') return response def download_sales_and_trends_reports(self, filters=None, save_to=None): # setup required filters if not provided default_versions = { 'SALES': '1_0', 'SUBSCRIPTION': '1_2', 'SUBSCRIPTION_EVENT': '1_2', 'SUBSCRIBER': '1_2', 'NEWSSTAND': '1_0', 'PRE_ORDER': '1_0', } default_subtypes = { 'SALES': 'SUMMARY', 'SUBSCRIPTION': 'SUMMARY', 'SUBSCRIPTION_EVENT': 'SUMMARY', 'SUBSCRIBER': 'DETAILED', 'NEWSSTAND': 'DETAILED', 'PRE_ORDER': 'SUMMARY', } for required_key, default_value in ( ('frequency', 'DAILY'), ('reportType', 'SALES'), ('reportSubType', default_subtypes.get(filters.get('reportType', 'SALES'), 'SUMMARY')), ('version', default_versions.get(filters.get('reportType', 'SALES'), '1_0')), # vendorNumber is required but we cannot provide a default value ): if required_key not in filters: filters[required_key] = default_value url = "%s%s" % (BASE_API, SalesReport.endpoint) url = self._build_query_parameters(url, filters) response = self._api_call(url) if save_to: file = Path(save_to) file.write_text(response, 'utf-8') return response
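A short usage sketch of the Api client defined above may be helpful. The import path assumes the module is packaged under the name appstoreconnect; the key ID, issuer ID, key path, bundle ID and UDID below are placeholders, not real values.

```python
# Illustrative usage of the Api client defined above (placeholder credentials).
from appstoreconnect import Api  # assumes the module above is packaged as `appstoreconnect`

api = Api(key_id="ABC123DEFG",
          key_file="/path/to/AuthKey_ABC123DEFG.p8",
          issuer_id="00000000-0000-0000-0000-000000000000",
          submit_stats=False)  # opt out of the library's anonymous usage statistics

# list_apps returns an iterator; dict filters map to filter[...] query parameters.
for app in api.list_apps(filters={'bundleId': 'com.example.myapp'}):
    print(app.id)

# Other calls mirror the methods shown above, e.g. registering a device.
device = api.register_new_device(name="Test iPhone",
                                 platform="IOS",
                                 udid="placeholder-udid")
```

The token property regenerates the JWT automatically every 15 minutes, so a single Api instance can be reused across many calls.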
Regulation of interferon gene expression: mechanism of action of the If-1 locus We have examined the mechanism of action of the If-1 interferon (IFN) regulatory locus. This locus controls the level of circulating IFN produced in inbred mice in response to intravenous injection of Newcastle disease virus. Mice carrying the If-1h (high) allele show circulating IFN levels 10- to 15-fold higher than those carrying the If-1l (low) allele. In this report we show that induced splenocytes from If-1h and If-1l mice produce IFN at levels which are in the same proportions as those found in the circulation. Higher levels of IFN-specific mRNA were observed in splenocyte populations from If-1h animals. This was due to increased transcription of IFN genes. At the same time, the high- and low-producing populations showed no significant difference in the number of IFN mRNA-containing cells. We conclude that the effect of If-1 in the spleen is to control the levels of transcription of the IFN genes in individual induced splenocytes.
Mitigating cavitation on high head orifice spillways ABSTRACT A large hydropower potential lies in the Himalayan region owing to its perennial rivers. North-east India, especially Arunachal Pradesh, is blessed with the Brahmaputra and its tributaries, which give the state a rich hydropower potential. Heavily sediment-laden rivers are the main challenge to tapping hydropower in this region. Orifice spillways are the obvious choice as they serve the dual function of flood disposal and flushing of sediment through the reservoir. The crest of the spillway is placed as near to the river bed as possible, resulting in a high head over the spillway crest. Owing to this high head, the arrangement is susceptible to cavitation damage caused by very high velocity flows. Cavitation damage and its prevention are of increasing concern in designing and operating high head spillways. Proper design of the spillway to achieve acceptable pressures for all operating conditions, improvement of the spillway surface, use of cavitation-resistant materials and aeration of the flow are a few techniques to prevent cavitation erosion on the spillway surface. Three high head spillway case studies are discussed, in which the head over the spillway is more than 50 m and the spillway surface is prone to cavitation damage due to velocities of the order of 30 to 50 m/s. This paper describes the choice of cavitation prevention method depending upon the hydraulic and functional constraints of each particular case.
import React, { Component, MouseEvent, ReactNode } from "react";
import styled from "styled-components";

const SpanElement = styled.span`
  width: 30%;
  background-color: #adecad;
  margin-left: 2%;
  padding-left: 1%;
  padding-top: 0.5%;
  @media (max-width: 768px) {
    width: auto;
    margin-left: 1%;
  }
`;

interface Props {
  children: ReactNode;
  onClick?: (e: MouseEvent) => void;
}

class Span extends Component<Props> {
  // eslint-disable-next-line @typescript-eslint/explicit-module-boundary-types
  render() {
    return (
      <SpanElement onClick={this.props.onClick}>
        {this.props.children}
      </SpanElement>
    );
  }
}

export default Span;
Multi-degree-of-freedom adjustable plasma cutting device The utility model relates to a multi-degree-of-freedom adjustable plasma cutting device which comprises a frame. A support seat is fixed on the frame, and the support seat is connected with a connection rod through a rotating shaft. A fixed plate is fixedly connected with the front end of the connection rod through a bolt, the lower end of the fixed plate is fixedly connected with a limit block, and the front end of the fixed plate is fixedly connected with an adjusting plate. A U-shaped block is connected onto the adjusting plate, a cutting support is connected in the U-shaped block, a cutting head is fixedly connected onto the cutting support through a fixed block, and the cutting head penetrates through the cutting support to extend beyond it. A semicircular through hole is arranged in the limit block, and two directional wheels are fixedly connected onto the limit block. Two quarter-circular grooves are symmetrically arranged on the adjusting plate, and two through holes corresponding to the groove holes are arranged on the U-shaped block. A key groove is arranged on each of the two sides of the U-shaped block. By adjusting the direction of the cutting head through multiple angles, the device is capable of cutting round steel; cutting is simple and quick, pollution of the onsite environment is small, and cutting quality is good.
The agency will name its target asteroid in 2019, a year before the ARM spacecraft is scheduled to launch. It's eyeing Itokawa, Bennu and 2008 EV5 at the moment, but it's still looking for more viable candidates. Once it finds its target, the spacecraft will spend 400 days circling the asteroid to test a technique that could prevent one from crashing into Earth. No, not drilling and embedding an explosive into it, but using the spacecraft's gravitational field to alter the asteroid's orbit. After that, the vehicle will deploy a robotic arm to the surface of the celestial object to dig up a boulder, and then embark on a six-year journey to bring the sample rock to the moon's orbit. The mission doesn't end there, though. In the mid-2020s, after the boulder reaches its destination, NASA will send a manned spacecraft aboard the SLS rocket to rendezvous with and collect samples from it. At the moment, the agency's planning to deploy a two-person crew to the site for 25 days, but that might change when the time comes. That will serve as some sort of trial phase for the astronauts, giving them ideas on how to best collect and return samples from Mars. [image credit: GETTY/Elenarts]
A Conceptual Model of Spirituality in Music Education This article aims to describe the phenomenon of spirituality in music education by means of a model derived from the academic literature on the topic. Given the centrality of lived experience within this literature, we adopted a hermeneutic phenomenological theoretical framework to describe the phenomenon. The NCT (noticing, collecting, and thinking) model was used for the qualitative document analysis. Atlas.ti 7, computer-aided qualitative data analysis software, was used to support and organize the inductive qualitative data analysis process. After data saturation, we used Van Manen's lifeworld existentials (corporeality, relationality, spatiality, and temporality) to help organize the many quotes, codes, and categories that emerged from analyzing the literature. The resulting model assigns codes to quotes and codes to categories, which in turn appear within one of these four lifeworlds. This article not only offers a working conceptual model of spirituality in music education but may also help to foster an awareness of spiritual experience in pedagogical contexts and thus contribute to what Van Manen calls pedagogic thoughtfulness and tact.