text
stringlengths
20
1.01M
url
stringlengths
14
1.25k
dump
stringlengths
9
15
lang
stringclasses
4 values
source
stringclasses
4 values
Lead Image © Sergey Nivens, 123RF.com Improving Docker security now and in the future Caution! Security [1] seems to be lagging behind the pace of other developments in the Docker camp. Although increasing numbers of enterprises are using Docker at the data center, the technologies administrators use to safeguard containers are only slowly establishing themselves. In many cases, it is precisely the features that make Docker popular that also open up vulnerabilities (Figure 1). What the Kernel Does Not Isolate Docker relies on the Linux kernel's ability to create reciprocally isolated environments in which applications run. These containers are lean because they share the same kernel but are executed in separate run-time environments, thanks to cgroups [2] and namespaces [3], which define which resources a container can use. At the same time, the container itself only sees certain processes and network functions. Although an attacker will find it difficult to interact with the host's kernel from a hijacked virtual machine, container isolation does not provide the same defenses. The attacker can reach critical kernel subsystems such as SELinux [4] and cgroups, as well as /sys and /proc, which means attackers can potentially work around the host's security features. At the same time, containers use the same user namespace as their host systems. In other words, if the process is running with root privileges in a container, it keeps these privileges when it interacts with the kernel or a mounted filesystem. Admins are thus well advised not to run software with root privileges in containers; in fact, this is something that is only rarely necessary. Risks: Docker's Daemon The Docker daemon needs root privileges, however. It manages the containers on the host and needs to talk to the kernel that provides the isolated environments. Users who interact with the daemon are thus given privileged access to the system. This situation would be particularly critical if a hosting service provider were to offer containers as a self-service option via a web interface. Although using the docker group is an option, it is unlikely to improve security, because the group members can create containers in which, first, a root shell can be run and, second, the host's root filesystem can be mounted. The only option here is to regulate access strictly to the Docker service to avoid undesirable privilege escalation. Moreover, Docker's daemon can communicate through a REST API via HTTP(S). If you use this function [5], you will want to restrict access to the API to a trusted network or restrict it through SSL client certificates or the like. Countermeasures New versions of Docker come with features for mitigating the effect of the described attack scenarios. Docker mounts read-only the /sys filesystem and important files in /proc. The container cannot write to them, thus preventing privileged processes in containers from manipulating the host system. The kernel breaks down root privileges into capabilities [6] and assigns capabilities individually to processes. By default, Docker blocks important capabilities, to keep privileged processes in the container from creating mischief. These capabilities include network configuration, the ability to load kernel modules, or access to the audit subsystem. If a special application needs the blocked capabilities, Docker allows them for an individual container. Buy this article as PDF (incl. VAT)
https://www.admin-magazine.com/Articles/Improving-Docker-security-now-and-in-the-future
CC-MAIN-2020-34
en
refinedweb
/> list,.py - main.py Source Code Links Open levenshtein.py in your favourite editor and enter or paste this. levenshtein.py import math # ANSI terminal codes for colours BLUE = "\x1B[94m" RESET = "\x1B[0m" class Levenshtein(object): """ Implements the Levenshtein Word Distance Algorithm for calculating the minimum number of changes needed to transform one word into another. """ def __init__(self): """ Just creates costs, and other attributes with default values. """ self.insert_cost = 1 self.delete_cost = 1 self.substitute_cost = 1 self.grid = [] self.source_word = "" self.target_word = "" self.minimum_cost = -1 def calculate(self): """ Creates a grid for the given words and iterates rows and columns, calculating missing values. """ self.__init_grid() for sourceletter in range(0, len(self.source_word)): for targetletter in range(0, len(self.target_word)): if self.target_word[targetletter] != self.source_word[sourceletter]: total_substitution_cost = self.grid[sourceletter][targetletter] + self.substitute_cost else: total_substitution_cost = self.grid[sourceletter][targetletter] total_deletion_cost = self.grid[sourceletter][targetletter+1] + self.delete_cost total_insertion_cost = self.grid[sourceletter+1][targetletter] + self.insert_cost self.grid[sourceletter+1][targetletter+1] = min(total_substitution_cost, total_deletion_cost, total_insertion_cost) self.minimum_cost = self.grid[len(self.source_word)][len(self.target_word)] def print_grid(self): """ Prints the target and source words and all transformation costs in a grid """ print(" ", end="") for t in self.target_word: print(BLUE + "%5c" % t + RESET, end="") print("\n") for row in range(0, len(self.grid)): if row > 0: print(BLUE + "%3c" % self.source_word[row - 1] + RESET, end="") else: print(" ", end="") for column in range(0, len(self.grid[row])): print("%5d" % self.grid[row][column], end="") print("\n") def print_cost(self): """ This is a separate function to allow printing just the cost if you are not interested in seeing the grid """ if self.minimum_cost >= 0: print("Minimum cost of transforming \"%s\" to \"%s\" = %d" % (self.source_word, self.target_word, self.minimum_cost)) else: print("Costs not yet calculated") def __init_grid(self): """ Sets values of first row and first column to 1, 2, 3 etc. Other values initialized to 0 """ del self.grid[:] # Don't forget we need one more row than the letter count of the source # and one more column than the target word letter count for row in range(0, (len(self.source_word) + 1)): self.grid.append([0] * (len(self.target_word) + 1)) self.grid[row][0] = row if row == 0: for column in range(0, (len(self.target_word) + 1)): self.grid[row][column] = column self.minimum_cost = -1 The __init__ method simply creates a few attributes in our class with default values. The calculate method is central to this whole project, and after calling __init_grid it iterates the source word letters and then the target word letters in a nested loop. Within the inner loop it calculates the substitution, deletion and insertion cost according to the rules described above, finally setting the grid value to the minimum of these. After the loop we set the minimum cost to the bottom right value. The print_grid method is necessarily rather fiddly, and prints the words and grid in the format shown above. The separate print_cost method is much simpler and just prints the cost with a suitable message, or a warning if it has not yet been calculated. The __init_grid function called in calculate firstly empties the grid for those circumstances where we are reusing the object. It then loops the rows and columns, setting the first values to 1, 2, 3... and the rest to 0. Finally minimum_cost is set to -1 as an indicator that the costs have not yet been calculated, and is used in print_cost. Let's now move on to main.py. main.py import levenshtein def main(): """ Simple demo of the Levenshtein class. """ print("-----------------------------") print("| codedrome.com |") print("| Levenshtein Word Distance |") print("-----------------------------\n") source_words = ["banama", "banama", "levinstein"] target_words = ["banana", "elephant", "levenshtein"] lp = levenshtein.Levenshtein() for i in range(0, len(source_words)): lp.source_word = source_words[i] lp.target_word = target_words[i] lp.calculate() lp.print_grid() lp.print_cost() print("") main() Firstly we create two lists of word pairs to run the algorithm on, and then create a Levenshtein object. Then we iterate the lists, setting the words and calling the methods. Run the code with this command. Running the program python3.7 main.py.
https://www.codedrome.com/levenshtein-word-distance-in-python/
CC-MAIN-2020-34
en
refinedweb
You. But there needs to be an interaction between the code running in the sandbox and the code that created the sandbox, so the sandboxed code can control or react to things that happen in the controlling application. Sandboxed code needs to be able to call code outside the sandbox. Now, there are various methods of allowing cross-appdomain calls, the two main ones being .NET Remoting with MarshalByRefObject, and WCF named pipes. I'm not going to cover the details of setting up such mechanisms here, or which you should choose for your specific situation; there are plenty of blogs and tutorials covering such issues elsewhere. What I'm going to concentrate on here is the more general problem of running fully-trusted code within a sandbox, which is required in most methods of app-domain communication and control. In my last post, I mentioned that when you create a sandboxed appdomain, you can pass in a list of assembly strongnames that run as full-trust within the appdomain: // get the Assembly object for the assembly Assembly assemblyWithApi = ... // get the StrongName from the assembly's collection of evidence StrongName apiStrongName = assemblyWithApi.Evidence.GetHostEvidence<StrongName>(); // create the sandbox AppDomain sandbox = AppDomain.CreateDomain( "Sandbox", null, appDomainSetup, restrictedPerms, apiStrongName); Any assembly that is loaded into the sandbox with a strong name the same as one in the list of full-trust strong names is unconditionally given full-trust permissions within the sandbox, irregardless of permissions and sandbox setup. This is very powerful! You should only use this for assemblies that you trust as much as the code creating the sandbox. So now you have a class that you want the sandboxed code to call: // within assemblyWithApi public class MyApi { public static void MethodToDoThings() { ... } } // within the sandboxed dll public class UntrustedSandboxedClass { public void DodgyMethod() { ... MyApi.MethodToDoThings(); ... } } However, if you try to do this, you get quite an ugly exception: MethodAccessException: Attempt by security transparent method 'UntrustedSandboxedClass.DodgyMethod()' to access security critical method 'MyApi.MethodToDoThings()' failed. So the solution is easy, right? Make MethodToDoThings SafeCritical, then the transparent code running in the sandbox can call the api: MethodToDoThings [SecuritySafeCritical] public static void MethodToDoThings() { ... } However, this doesn't solve the problem. When you try again, exactly the same exception is thrown; MethodToDoThings is still running as Critical code. What's going on? By default, a fully-trusted assembly always runs Critical code, irregardless of any security attributes on its types and methods. This is because it may not have been designed in a secure way when called from transparent code - as we'll see in the next post, it is easy to open a security hole despite all the security protections .NET 4 offers. When exposing an assembly to be called from partially-trusted code, the entire assembly needs a security audit to decide what should be transparent, safe critical, or critical, and close any potential security holes. This is where AllowPartiallyTrustedCallersAttribute (APTCA) comes in. Without this attribute, fully-trusted assemblies run Critical code, and partially-trusted assemblies run Transparent code. When this attribute is applied to an assembly, it confirms that the assembly has had a full security audit, and it is safe to be called from untrusted code. All code in that assembly runs as Transparent, but SecurityCriticalAttribute and SecuritySafeCriticalAttribute can be applied to individual types and methods to make those run at the Critical or SafeCritical levels, with all the restrictions that entails. SecurityCriticalAttribute SecuritySafeCriticalAttribute So, to allow the sandboxed assembly to call the full-trust API assembly, simply add APCTA to the API assembly: [assembly: AllowPartiallyTrustedCallers] and everything works as you expect. The sandboxed dll can call your API dll, and from there communicate with the rest of the application. That's the basics of running a full-trust assembly in a sandboxed appdomain, and allowing a sandboxed assembly to access it. The key is AllowPartiallyTrustedCallersAttribute, which is what lets partially-trusted code call a fully-trusted assembly. However, an assembly with APTCA applied to it means that you have run a full security audit of every type and member in the assembly. If you don't, then you could inadvertently open a security hole. I'll be looking at ways this can happen in my next post.
http://geekswithblogs.net/simonc/archive/2013/05/16/.net-security-part-3.aspx
CC-MAIN-2020-34
en
refinedweb
Attributes are one of the key features of modern C++ which allows the programmer to specify additional information to the compiler to enforce constraints(conditions), optimise certain pieces of code or do some specific code generation. In simple terms, an attribute acts as an annotation or a note to the compiler which provides additional information about the code for optimization purposes and enforcing certain conditions on it. Introduced in C++11, they have remained one of the best features of C++ and are constantly being evolved with each new version of C++. Syntax: // C++11 [[attribute-list]] // C++17 [[using attribute-namespace:attribute-list]] // Upcoming C++20 [[contract-attribute-token contract-level identifier : expression]] Except for some specific ones, most of the attributes can be applied with variables, functions, classes, structures etc. Purpose of attributes in C++ - To enforce constraints on the code: Here constraint refers to a condition, that the arguments of a particular function must meet for its execution (precondition). In previous versions of C++, the code for specifying constraints was written in this mannerchevron_rightfilter_none It increases the readability of your code and avoids the clutter that is written inside the function for argument checking.chevron_rightfilter_none - To give additional information to the compiler for optimisation purposes: Compilers are very good at optimization but compared to humans they still lag at some places and propose generalized code which is not very efficient. This mainly happens due to the lack of additional information about the “problem” which humans have. To reduce this problem to an extent C++ standard has introduced some new attributes that allow specifying a little more to the compiler rather than the code statement itself. Once such example is that of likely.chevron_rightfilter_none When the statement is preceded by likely compiler makes special optimizations with respect to that statement which improves the overall performance of the code. Some examples of such attributes are [carries_dependency], [likely], [unlikely] - Suppressing certain warnings and errors that programmer intended to have in his code: It happens rarely but sometimes the programmer intentionally tries to write a faulty code which gets detected by the compiler and is reported as an error or a warning. One such example is that of an unused variable which has been left in that state for a specific reason or of a switch statement where the break statements are not put after some cases to give rise to fall-through conditions. In order to circumvent errors and warnings on such conditions, C++ provides attributes such as [maybe_unused] and [fallthrough] that prevent the compiler from generating warnings or errors.chevron_rightfilter_none List of standard attributes in C++ C++11 - noreturn: indicates that the function does not return a value Usage: [[noreturn]] void f(); While looking at the code above, the question arises what is the point of having noreturn when the return type is actually void? If a function has a void type, then it actually returns to the caller without a value but if the case is such that the function never returns back to the caller (for example an infinite loop) then adding a noreturn attribute gives hints to the compiler to optimise the code or generate better warnings. Example:chevron_rightfilter_none Warning: main.cpp: In function 'void f()': main.cpp:8:1: warning: 'noreturn' function does return } ^ - deprecated: Indicates that the name or entity declared with this attribute has become obsolete and must not be used for some specific reason. This attribute can be applied to namespaces, functions, classes structures or variables. Usage: [[deprecated("Reason for deprecation")]] // For Class/Struct/Union struct [[deprecated]] S; // For Functions [[deprecated]] void f(); // For namespaces namespace [[deprecated]] ns{} // For variables (including static data members) [[deprecated]] int x; Example:chevron_rightfilter_none Warning: main.cpp: In function 'int main()': main.cpp:26:9: warning: 'void gets(char*)' is deprecated: Susceptible to buffer overflow [-Wdeprecated-declarations] gets(a); ^ - nodiscard: The entities declared with nodiscard should not have their return values ignored by the caller. Simply saying if a function returns a value and is marked nodiscard then the return value must be utilized by the caller and not discarded. Usage : // Functions [[nodiscard]] void f(); // Class/Struct declaration struct [[nodiscard]] my_struct{}; The main difference between nodiscard with functions and nodiscard with struct/class declaration is that in case of function, nodiscard applies to that particular function only which is declared no discard, whereas in case of class/struct declaration nodiscard applies to every single function that returns the nodiscard marked object by value. Example:chevron_rightfilter_none Warning: prog.cpp:5:21: warning: 'nodiscard' attribute directive ignored [-Wattributes] [[nodiscard]] int f() ^ prog.cpp:10:20: warning: 'nodiscard' attribute directive ignored [-Wattributes] class[[nodiscard]] my_class{}; ^ - maybe_unused: Used to suppress warnings on any unused entities (For eg: An unused variable or an unused argument to a function). Usage: //Variables [[maybe_used]] bool log_var = true; //Functions [[maybe_unused]] void log_without_warning(); //Function arguments void f([[maybe_unused]] int a, int b); Example:chevron_rightfilter_none - fallthrough: [[fallthrough]] indicates that a fallthrough in a switch statement is intentional. Missing a break or return in a switch statement is usually considered a programmer’s error but in some cases fallthrough can result in some very terse code and hence it is used. Note: Unlike other attributes a fallthrough requires a semicolon after it is declared. Example:chevron_rightfilter_none - likely: For optimisation of certain statements that have more probability to execute than others. Likely is now available in latest version of GCC compiler for experimentation purposes. Examplechevron_rightfilter_none - no_unique_address: Indicates that this data member need not have an address distinct from all other non-static data members of its class. This means that if the class consist of an empty type then the compiler can perform empty base optimisation on it. Example:chevron_rightfilter_none - expects: It specifies the conditions (in form of contract) that the arguments must meet for a particular function to be executed. Usage: return_type func ( args...) [[expects : precondition]] Example:chevron_rightfilter_none Violation of the contract results in invocation of violation handler or if not specified then std::terminate() C++14 C++17 Upcoming C++20 attributes Difference between standard and non standard attributes Changes since C++11 - Ignoring unknown attributes: Since C++17, one of the major changes introduced for the attribute feature in C++ were regarding the clarification of unknown attributes by the compiler. In C++11 or 14, if an attribute was not recognized by the compiler, then it would produce an error and prevent the code from getting compiled. As a workaround, the programmer had to remove the attribute from the code to make it work. This introduced a major issue for portability. Apart from the standard attributes none of the vendor-specific attributes could be used, as the code would break. This prevented the actual use of this feature. As a solution, the standard made it compulsory for all the compilers to ignore the attributes that were not defined by them. This allowed the programmers to use vendor-specific attributes freely in their code and ensure that the code was still portable. Most of the compilers supporting C++17 now ignore the undefined attributes and produce a warning on when encountered. This allows the programmers to make the code more flexible as now, they can specify multiple attributes for the same operation under different vendor’s namespaces. (Support: MSVC(NOT YET), GCC, CLANG (YES)) Example:chevron_rightfilter_none - Use of attribute namespaces without repetition: In C++17 some of the rules regarding the use of “non-standard” attributes were relaxed. One such case is that of prefixing namespaces with a subsequent non-standard attribute. In C++11 or 14 when multiple attributes were written together each one of them had to be prefixed with their enclosing namespaces which gave rise to the pattern of code as shown below.chevron_rightfilter_none Looking at the code above, it can be seen that it seems bloated and cluttered. So the committee decided to “simplify the case when using multiple attributes” together. As of now, it is not mandatory for the programmer to prefix the namespace again and again with subsequent attributes being used together. This gives rise to the pattern of code shown below which looks clean and understandable.chevron_rightfilter_none - Multiple attributes over a particular piece of code: Several attributes can now be applied to a certain piece of code in C++. The compiler, in that case, evaluates each of the attributes in the order they are written. This allows the programmers to write pieces of code that can contain multiple constraints. Example:chevron_rightfilter_none Difference between attributes in C++ and C# There is a notable difference between attributes in C# and C++. In the case of C#, the programmer can define new attributes by deriving from System.Attribute; whereas in C++, the meta information is fixed by the compiler and cannot be used to define new user-defined attributes. This restriction is placed to prevent the language from evolving into a new form which could have made the language more complicated. Attention reader! Don’t stop learning now. Get hold of all the important DSA concepts with the DSA Self Paced Course at a student-friendly price and become industry ready. Recommended Posts: - Tags vs Elements vs Attributes in HTML - How to Get a List of Class Attributes in Python? - Class and Instance Attributes in Python - Accessing Attributes and Methods in Python -++ - 25 Facts about LTE Technology - Difference between Neo4j and MongoDB - Difference between Neo4j and Redis - Header files in C/C++ with Examples - Trigraphs.
https://www.geeksforgeeks.org/attributes-in-c/?ref=rp
CC-MAIN-2020-34
en
refinedweb
Normalization Blocks¶ When training deep neural networks there are a number of techniques that are thought to be essential for model convergence. One important area is deciding how to initialize the parameters of the network. Using techniques such as Xavier initialization, we can can improve the gradient flow through the network at the start of training. Another important technique is normalization: i.e. scaling and shifting certain values towards a distribution with a mean of 0 (i.e. zero-centered) and a standard distribution of 1 (i.e. unit variance). Which values you normalize depends on the exact method used as we’ll see later on. Figure 1: Data Normalization (Source) Why does this help? Some research has found that networks with normalization have a loss function that’s easier to optimize using stochastic gradient descent. Other reasons are that it prevents saturation of activations and prevents certain features from dominating due to differences in scale. Data Normalization¶ One of the first applications of normalization is on the input data to the network. You can do this with the following steps: Step 1 is to calculate the mean and standard deviation of the entire training dataset. You’ll usually want to do this for each channel separately. Sometimes you’ll see normalization on images applied per pixel, but per channel is more common. Step 2 is to use these statistics to normalize each batch for training and for inference too. Tip: A BatchNorm layer at the start of your network can have a similar effect (see ‘Beta and Gamma’ section for details on how this can be achieved). You won’t need to manually calculate and keep track of the normalization statistics. Warning: You should calculate the normalization means and standard deviations using the training dataset only. Any leakage of information from you testing dataset will effect the reliability of your testing metrics. When using pre-trained models from the Gluon Model Zoo you’ll usually see the normalization statistics used for training (i.e. statistics from step 1). You’ll want to use these statistics to normalize your own input data for fine-tuning or inference with these models. Using transforms.Normalize is one way of applying the normalization, and this should be used in the Dataset. import mxnet as mx from mxnet.gluon.data.vision.transforms import Normalize image_int = mx.nd.random.randint(low=0, high=256, shape=(1,3,2,2)) image_float = image_int.astype('float32')/255 # the following normalization statistics are taken from gluon model zoo normalizer = Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]) image = normalizer(image_float) image Activation Normalization¶ We don’t have to limit ourselves to normalizing the inputs to the network either. A similar idea can be applied inside the network too, and we can normalize activations between certain layer operations. With deep neural networks most of the convergence benefits described are from this type of normalization. MXNet Gluon has 3 of the most commonly used normalization blocks: BatchNorm, LayerNorm and InstanceNorm. You can use them in networks just like any other MXNet Gluon Block, and are often used after Activation Blocks. Watch Out: Check the architecture of models carefully because sometimes the normalization is applied before the Activation. Advanced: all of the following methods begin by normalizing certain input distribution (i.e. zero-centered with unit variance), but then shift by (a trainable parameter) beta and scale by (a trainable parameter) gamma. Overall the effect is changing the input distribution to have a mean of beta and a variance of gamma, also allowing to the network to ‘undo’ the effect of the normalization if necessary. Batch Normalization¶ One of the most popular normalization techniques is Batch Normalization, usually called BatchNorm for short. We normalize the activations across all samples in a batch for each of the channels independently. See Figure 1. We calculate two batch (or local) statistics for every channel to perform the normalization: the mean and variance of the activations in that channel for all samples in a batch. And we use these to shift and scale respectively. Tip: we can use this at the start of a network to perform data normalization, although this is not exactly equivalent to the data normalization example seen above (that had fixed normalization statistics). With BatchNorm the normalization statistics depend on the batch, so could change each batch, and there can also be a post-normalization shift and scale. Warning: the estimates for the batch mean and variance can themselves have high variance when the batch size is small (or when the spatial dimensions of samples are small). This can lead to instability during training, and unreliable estimates for the global statistics. Warning: it seems that BatchNorm is better suited to convolutional networks (CNNs) than recurrent networks (RNNs). We expect the input distribution to the recurrent cell to change over time, so normalization over time doesn’t work well. LayerNorm is better suited for this case. When you do need to use BatchNorm on sequential data, make sure the axis parameter is set correctly. With data in NTC format you should set axis=2 (or axis=-1 equivalently). See Figure 2. As an example, we’ll apply BatchNorm batch normalization with the mx.gluon.nn.BatchNorm block. It can be created and used just like any other MXNet Gluon block (such as Conv2D). Its input will typically be unnormalized activations from the previous layer, and the output will be the normalized activations ready for the next layer. Since we’re using data in NCHW format we can use the default axis. net = mx.gluon.nn.BatchNorm() We still need to initialize the block because it has a number of trainable parameters, as we’ll see later on. net.initialize() We can now run the network as we would during training (under autograd.record context scope). Remember: BatchNorm runs differently during training and inference. When training, the batch statistics are used for normalization. During inference, a exponentially smoothed average of the batch statistics that have been observed during training is used instead. Warning: BatchNorm assumes the channel dimension is the 2nd in order (i.e. axis=1). You need to ensure your data has a channel dimension, and change the axis parameter of BatchNorm if it’s not the 2nd dimension. A batch of greyscale images of shape (100,32,32) would not work, since the 2nd dimension is height and not channel. You’d need to add a channel dimension using data.expand_dims(1) in this case to give shape (100,1,32,32). with mx.autograd.record(): output = net(data) loss = output.abs() loss.backward() print(output) We can immediately see the activations have been scaled down and centered around zero. Activations are the same for each channel, because each channel was normalized independently. We can do a quick sanity check on these results, by manually calculating the batch mean and variance for each channel. batch_means = data.mean(axis=1, exclude=True) batch_vars = (data - batch_means.reshape(1, -1, 1, 1)).square().mean(axis=1, exclude=True) print('batch_means:', batch_means.asnumpy()) print('batch_vars:', batch_vars.asnumpy()) And use these to scale the first entry in data, to confirm the BatchNorm calculation of -1.324 was correct. print("manually calculated:", ((data[0][0][0][0] - batch_means[0])/batch_vars[0].sqrt()).asnumpy()) print("automatically calculated:", output[0][0][0][0].asnumpy()) As mentioned before, BatchNorm has a number of parameters that update throughout training. 2 of the parameters are not updated in the typical fashion (using gradients), but instead are updated deterministically using exponential smoothing. We need to keep track of the average mean and variance of batches during training, so that we can use these values for normalization during inference. Why are global statistics needed? Often during inference, we have a batch size of 1 so batch variance would be impossible to calculate. We can just use global statistics instead. And we might get a data distribution shift between training and inference data, which shouldn’t just be normalized away. Advanced: when using a pre-trained model inside another model (e.g. a pre-trained ResNet as a image feature extractor inside an instance segmentation model) you might want to use global statistics of the pre-trained model during training. Setting use_global_stats=True is a method of using the global running statistics during training, and preventing the global statistics from updating. It has no effect on inference mode. After a single step (specifically after the backward call) we can see the running_mean and running_var have been updated. print('running_mean:', net.running_mean.data().asnumpy()) print('running_var:', net.running_var.data().asnumpy()) You should notice though that these running statistics do not match the batch statistics we just calculated. And instead they are just 10% of the value we’d expect. We see this because of the exponential average process, and because the momentum parameter of BatchNorm is equal to 0.9 : i.e. 10% of the new value, 90% of the old value (which was initialized to 0). Over time the running statistics will converge to the statistics of the input distribution, while still being flexible enough to adjust to shifts in the input distribution. Using the same batch another 100 times (which wouldn’t happen in practice), we can see the running statistics converge to the batch statsitics calculated before. for i in range(100): with mx.autograd.record(): output = net(data) loss = output.abs() loss.backward() print('running_means:', net.running_mean.data().asnumpy()) print('running_vars:', net.running_var.data().asnumpy()) Beta and Gamma¶ As mentioned previously, there are two additional parameters in BatchNorm which are trainable in the typical fashion (with gradients). beta is used to shift and gamma is used to scale the normalized distribution, which allows the network to ‘undo’ the effects of normalization if required. Advanced: Sometimes used for input normalization, you can prevent beta shifting and gamma scaling by setting the learning rate multipler (i.e. lr_mult) of these parameters to 0. Zero centering and scaling to unit variance will still occur, only post normalization shifting and scaling will prevented. See this discussion post for details. We haven’t updated these parameters yet, so they should still be as initialized. You can see the default for beta is 0 (i.e. not shift) and gamma is 1 (i.e. not scale), so the initial behaviour is to keep the distribution unit normalized. print('beta:', net.beta.data().asnumpy()) print('gamma:', net.gamma.data().asnumpy()) We can also check the gradient on these parameters. Since we were finding the gradient of the sum of absolute values, we would expect the gradient of gamma to be equal to the number of points in the data (i.e. 16). So to minimize the loss we’d decrease the value of gamma, which would happen as part of a trainer.step. print('beta gradient:', net.beta.grad().asnumpy()) print('gamma gradient:', net.gamma.grad().asnumpy()) Inference Mode¶ When it comes to inference, BatchNorm uses the global statistics that were calculated during training. Since we’re using the same batch of data over and over again (and our global running statistics have converged), we get a very similar result to using training mode. beta and gamma are also applied by default (unless explicitly removed). output = net(data) print(output) Layer Normalization¶ An alternative to BatchNorm that is better suited to recurrent networks (RNNs) is called LayerNorm. Unlike BatchNorm which normalizes across all samples of a batch per channel, LayerNorm normalizes across all channels of a single sample. Some of the disadvantages of BatchNorm no longer apply. Small batch sizes are no longer an issue, since normalization statistics are calculated on single samples. And confusion around training and inference modes disappears because LayerNorm is the same for both modes. Warning: similar to having a small batch sizes in BatchNorm, you may have issues with LayerNorm if the input channel size is small. Using embeddings with a large enough dimension size avoids this (approx >20). Warning: currently MXNet Gluon’s implementation of LayerNorm is applied along a single axis (which should be the channel axis). Other frameworks have the option to apply normalization across multiple axes, which leads to differences in LayerNorm on NCHW input by default. See Figure 3. Other frameworks can normalize over C, H and W, not just C as with MXNet Gluon. Remember: LayerNorm is intended to be used with data in NTC format so the default normalization axis is set to -1 (corresponding to C for channel). Change this to axis=1 if you need to apply LayerNorm to data in NCHW format. As an example, we’ll apply LayerNorm to a batch of 2 samples, each with 4 time steps and 2 channels (in NTC format). data = mx.nd.arange(start=0, stop=2*4*2).reshape(2, 4, 2) print(data) With MXNet Gluon we can apply layer normalization with the mx.gluon.nn.LayerNorm block. We need to call initialize because LayerNorm has two learnable parameters by default: beta and gamma that are used for post normalization shifting and scaling of each channel. net = mx.gluon.nn.LayerNorm() net.initialize() output = net(data) print(output) We can see that normalization has been applied across all channels for each time step and each sample. We can also check the parameters beta and gamma and see that they are per channel (i.e. 2 of each in this example). print('beta:', net.beta.data().asnumpy()) print('gamma:', net.gamma.data().asnumpy()) Instance Normalization¶ Another less common normalization technique is called InstanceNorm, which can be useful for certain tasks such as image stylization. Unlike BatchNorm which normalizes across all samples of a batch per channel, InstanceNorm normalizes across all spatial dimensions per channel per sample (i.e. each sample of a batch is normalized independently). Watch out: InstanceNorm is better suited to convolutional networks (CNNs) than recurrent networks (RNNs). We expect the input distribution to the recurrent cell to change over time, so normalization over time doesn’t work well. LayerNorm is better suited for this case. As an example, we’ll apply InstanceNorm instance normalization with the mx.gluon.nn.InstanceNorm block. We need to call initialize because InstanceNorm has two learnable parameters by default: beta and gamma that are used for post normalization shifting and scaling of each channel. net = mx.gluon.nn.InstanceNorm() net.initialize() output = net(data) print(output) We can also check the parameters beta and gamma and see that they are per channel (i.e. 2 of each in this example). print('beta:', net.beta.data().asnumpy()) print('gamma:', net.gamma.data().asnumpy())
https://mxnet.apache.org/versions/1.6/api/python/docs/tutorials/packages/gluon/training/normalization/index.html
CC-MAIN-2020-34
en
refinedweb
Python utilities for Manubot: Manuscripts, open and automated Project description Python utilities for Manubot: Manuscripts, open and automated Manubot is a workflow and set of tools for the next generation of scholarly publishing. This repository contains a Python package with several Manubot-related utilities, as described in the usage section below. Package documentation is available at (auto-generated from the Python source code). · PMCID: PMC6611653,webpage} ... Manubot: the manuscript bot for scholarly writing optional arguments: -h, --help show this help message and exit --version show program's version number and exit subcommands: All operations are done through subcommands: {process,cite,webpage} process process manuscript content cite citation to CSL command line utility webpage deploy Manubot outputs to a webpage directory tree \ --skip-citations \ --content-directory=content \ --output-directory=output See manubot process --help for documentation of all command line arguments: usage: manubot process [-h] --content-directory CONTENT_DIRECTORY --output-directory OUTPUT_DIRECTORY [--template-variables-path TEMPLATE_VARIABLES_PATH] --skip-citations [- file containing template variables for jinja2. Serialization format is inferred from the file extension, with support for JSON, YAML, and TOML. If the format cannot be detected, the parser assumes JSON. Specify this argument multiple times to read multiple files. Variables can be applied to a namespace (i.e. stored under a dictionary key) like `--template-variables-path=namespace=path_or_url`. Namespaces must match the regex `[a-zA- Z_][a-zA-Z0-9_]*`. --skip-citations Skip citation and reference processing. Support for citation and reference processing has been moved from `manubot process` to the pandoc-manubot-cite filter. Therefore this argument is now required. If citation- tags.tsv is found in content, these tags will be inserted in the markdown output using the reference- link syntax for citekey aliases. Appends content/manual-references*.* paths to Pandoc's metadata.bibliography field. -*.*. These files are stored in the Pandoc metadata bibliography field, such that they can be loaded by pandoc-manubot-cite. Cite manubot cite is a command line utility to create CSL JSON items for one or more citation keys. Citation keys should be in the format source:identifier. For example, the following example generates CSL JSON for four references: manubot cite doi:10.1098/rsif.2017.0387 pubmed:29424689 pmc:PMC5640425 arxiv:1806.05726 The following terminal recording demonstrates the main features of manubot cite: Additional usage information is available from manubot cite --help: usage: manubot cite [-h] [--render] [--csl CSL] [--bibliography BIBLIOGRAPHY] [--format {plain,markdown,docx,html,jats}] [--output OUTPUT] [--allow-invalid-csl-data] [--log-level {DEBUG,INFO,WARNING,ERROR,CRITICAL}] citekeys [citekeys ...] Retrieve bibliographic metadata for one or more citation keys. positional arguments: citekeys One or more (space separated) citation keys. --bibliography BIBLIOGRAPHY File to read manual reference metadata. Specify multiple times to load multiple files. Similar to pandoc --bibliography. - Pandoc filter This package creates the pandoc-manubot-cite Pandoc filter, providing access to Manubot's cite-by-ID functionality from within a Pandoc workflow. Currently, this filter is experimental and subject to breaking changes at any point. usage: pandoc-manubot-cite [-h] [--input [INPUT]] [--output [OUTPUT]] target_format Pandoc filter for citation by persistent identifier. Filters are command-line programs that read and write a JSON-encoded abstract syntax tree for Pandoc. Unless you are debugging, run this filter as part of a pandoc command by specifying --filter=pandoc-manubot-cite. positional arguments: target_format output format of the pandoc command, as per Pandoc's --to option optional arguments: -h, --help show this help message and exit --input [INPUT] path read JSON input (defaults to stdin) --output [OUTPUT] path to write JSON output (defaults to stdout) Other Pandoc filters exist that do something similar: pandoc-url2cite, pandoc-url2cite-hs, & pwcite. Currently, pandoc-manubot-cite supports the most types of persistent identifiers. We're interested in creating as much compatibility as possible between these filters and their syntaxes. Manual references Manual references are loaded from the references and bibliography Pandoc metadata fields. key. Webpage The manubot webpage command populates a webpage directory with Manubot output files. usage: manubot webpage [-h] [--checkout [CHECKOUT]] [--version VERSION] [--timestamp] [--no-ots-cache | --ots-cache OTS_CACHE] [--log-level {DEBUG,INFO,WARNING,ERROR,CRITICAL}] Update the webpage directory tree with Manubot output files. This command should be run from the root directory of a Manubot manuscript that follows the Rootstock layout, containing `output` and `webpage` directories. HTML and PDF outputs are copied to the webpage directory, which is structured as static source files for website hosting. optional arguments: -h, --help show this help message and exit --checkout [CHECKOUT] branch to checkout /v directory contents from. For example, --checkout=upstream/gh-pages. --checkout is equivalent to --checkout=gh-pages. If --checkout is ommitted, no checkout is performed. --version VERSION Used to create webpage/v/{version} directory. Generally a commit hash, tag, or 'local'. When omitted, version defaults to the commit hash on CI builds and 'local' elsewhere. --timestamp timestamp versioned manuscripts in webpage/v using OpenTimestamps. Specify this flag to create timestamps for the current HTML and PDF outputs and upgrade any timestamps from past manuscript versions. --no-ots-cache disable the timestamp cache. --ots-cache OTS_CACHE location for the timestamp cache (default: ci/cache/ots). --log-level {DEBUG,INFO,WARNING,ERROR,CRITICAL} Set the logging level for stderr logging Development Environment Create a development environment using: conda create --name manubot-dev --channel conda-forge \ python=3.8 pandoc=2.8 conda activate manubot-dev # assumes conda >= 4.4 pip install --editable ".[all]" Commands Below are some common commands used for development. They assume the working directory is set to the repository's root, and the conda environment is activated. # run the test suite pytest # reformat Python files according to the black style rules (required to pass CI) black . # detect any flake8 linting violations flake8 # regenerate the README codeblocks for --help messages python manubot/tests/test_readme.py # generate the docs portray as_html --overwrite --output_dir=docs # process the example testing manuscript manubot process \ --content-directory=manubot/process/tests/manuscripts/example/content \ --output-directory=manubot/process/tests/manuscripts/example/output \ --skip-citations \ --log-level=INFO Release instructions This section is only relevant for project maintainers. Travis CI deployments are used to upload releases to PyPI. To create a new release, bump the __version__ in manubot/__init__.py. Then, set the TAG and OLD_TAG environment variables: TAG=v$(python setup.py --version) # fetch tags from the upstream remote # (assumes upstream is the manubot organization remote) git fetch --tags upstream master # get previous release tag, can hardcode like OLD_TAG=v0.3.1 OLD_TAG=$(git describe --tags --abbrev=0) The following commands can help draft release notes: # check out a branch for a pull request as needed git checkout -b "release-$TAG" # create release notes file if it doesn't exist touch "release-notes/$TAG.md" # commit list since previous tag echo "\n\nCommits\n-------\n" >> "release-notes/$TAG.md" git log --oneline --decorate=no $OLD_TAG..HEAD >> "release-notes/$TAG.md" # commit authors since previous tag echo "\n\nCode authors\n------------\n" >> "release-notes/$TAG.md" git log $OLD_TAG..HEAD --format='%aN <%aE>' | sort --unique >> "release-notes/$TAG.md" After a commit with the above updates is part of upstream:master, for example after a PR is merged, use the GitHub inteferace to create a release with the new "Tag version". Monitor GitHub Actions and PyPI for successful deployment of the release..
https://pypi.org/project/manubot/0.4.0/
CC-MAIN-2020-34
en
refinedweb
Components and supplies Apps and online services About this project What if you don't have a Pi or beagle borne board and you still want to setup a server that can be used for Home automation projects, then this is the project for you. In this project, we will be setting up an arduino server that is powered by the WIZ750SR serial to Ethernet module. Then we will be sending the data commands which are received from arduino(serially) to the node mcu powered home automation device via internet. We will be using a Tenda router for this project, but feel free to use any router you like. So lets start! First of all this is the list of all the components required :1. Components Required - Arduino Uno - WIZ750SR serial to Ethernet module - NodeMCU - Tenda Router for connection setup - Breadboard - Jumper wires - Cables to power the Arduino and NodeMCU. The images of these components are shown below: So we can now move on to setup our WIZ750SR module.2. Configuration of Module Now, We need WIZ S2E configuration tool in our laptop or pc to setup our module. Here is the download link for the same: This is the git link. You can also download the software from WIZnet's official website. Once you have downloaded and installed it, open the tool and set it up following the image below: We can get our IP by simply going to our command prompt window and typing "ipconfig". This completes the setup of our module, and hence our server. Now it will be ready for transmission once we have run our code into the arduino. Now, lets configure our receiving end, i.e. the node MCU.3. Node MCU Configuration For configuring this, there is a really easy way available to us now. That is using the arduino IDE. Download the additional esp8266 package using the additional package download in arduino ide. Now, its easy to code for both arduino and node MCU. The codes are attached below. Now we will see how our System will work using a block diagram shown below. We have used this tenda router as an access point for this project as mentioned above. So, now you can build your own Ethernet powered Arduino Server for home automation. The code I have given is for simple LED control using this method. You can utilize this for home automation by using relays with the node mcu. The schematic of the following is shown below, So, the project is over here, feel free to comment or message me privately if you have any doubt with the project, Thank You! Code Node mcu codeC/C++ #include <ESP8266WiFi.h> const char*<button>Turn On </button></a>"); client.println("<a href=\"/LED=OFF\"\"><button>Turn Off </button></a><br />"); client.println("</html>"); delay(1); Serial.println("Client disonnected"); Serial.println(""); } Arduino codeC/C++ #include <SoftwareSerial.h> SoftwareSerial mySerial(10, 11); //RX,TX int LEDPIN = 13; void setup() { pinMode(LEDPIN, OUTPUT); Serial.begin(9600); // communication with the host computer //while (!Serial) { ; } // Start the software serial for communication with the ESP8266 mySerial.begin(115200); Serial.println(""); Serial.println("Remember to to set Both NL & CR in the serial monitor."); Serial.println("Ready"); Serial.println(""); } void loop() { // listen for communication from the ESP8266 and then write it to the serial monitor if ( mySerial.available() ) { Serial.write( mySerial.read() ); } // listen for user input and send it to the ESP8266 if ( Serial.available() ) { mySerial.write( Serial.read() ); } } Schematics Author Dhairya Parikh - 5 projects - 103 followers Published onJuly 3, 2018 Members who respect this project you might like
https://create.arduino.cc/projecthub/dhairya-parikh/an-ethernet-powered-server-for-home-automation-0f87ee?ref=tag&ref_id=relay&offset=19
CC-MAIN-2021-49
en
refinedweb
The abstract base class for comparative chart views. More... #include <pqComparativeContextView.h> The abstract base class for comparative chart views. It handles the layout of the individual chart views in the comparative view. Definition at line 44 of file pqComparativeContextView.h. Constructor: type :- view type. group :- SManager registration group. name :- SManager registration name. view :- View proxy. server:- server on which the proxy is created. parent:- QObject parent. Reimplemented from pqContextView. Returns the context view proxy associated with this object. Reimplemented from pqContextView. Returns the proxy of the root plot view in the comparative view. Called when the layout on the comparative vis changes. Definition at line 98 of file pqComparativeContextView.h.
https://kitware.github.io/paraview-docs/latest/cxx/classpqComparativeContextView.html
CC-MAIN-2021-49
en
refinedweb
Java toString example. * @author alvin alexander, alvinalexander.com */ public class JavaEnumToStringExample { public static void main(String[] args) { // loop through the enum values, calling the // enum toString method for each enum constant for (Day d: Day.values()) { System.out.println(d); } } } enum Day { SUNDAY, MONDAY, TUESDAY, WEDNESDAY, THURSDAY, FRIDAY, SATURDAY } This enum toString example class iterates through the Day enum with the enum for loop shown here: // this enum for loop iterates through each constant in the // Day enum, implicitly calling the toString method for each enum // constant that is passed into the println method: for (Day d: Day.values()) { System.out.println(d); } Java enum toString program output The output from the enum toString example looks like this: SUNDAY MONDAY TUESDAY WEDNESDAY THURSDAY FRIDAY SATURDAY Calling the enum toString method explicitly In the enum for loop code above, I mentioned that the toString method is being called implicitly. If you prefer to call the enum toString method explicitly, you can just change the print statement, as shown here: for (Day d: Day.values()) { System.out.println(d.toString()); } Summary Again, nothing major here, I just wanted to explore the enum toString behavior while I was in the neighborhood. If you have any questions, comments, or more complicated "enum toString" examples you'd like to share, feel free to leave a note in the comments section below. Java enum toString example - What's Related As I finish up my Java enum series, including this Java enum toString example, here's a collection of the Java enum tutorials I've put together during the last week. I hope you find them helpful:
https://alvinalexander.com/java/using-java-enum-tostring-tutorial/
CC-MAIN-2021-49
en
refinedweb
When you run a query in the bq command-line tool, tables or views referenced by the query. Access to the tables or views can be granted at the following levels, listed in order of range of resources allowed (largest to smallest): - at a high level in the Google Cloud resource hierarchy such as the project, folder, or organization level - at the dataset level - at the table level The following predefined IAM roles in BigQuery, see Predefined roles and permissions. Performing dry runs You can perform a dry run for a query job by using: - The --dry_runflag with the querycommand in the bqcommand-line tool - The dryRunparameter in the job configuration when you use the API or client libraries Performing a dry run To perform a dry run: Console Go to the BigQuery page in the Google Cloud Console. Enter your query in the Query editor. If the query is valid, then a check mark automatically appears along with the amount of data that the query will process. If the query is invalid, then an exclamation point appears along with an error message. bConfiguration type. Go Before trying this sample, follow the Go setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Go API reference documentation. import ( "context" "fmt" "io" "cloud.google.com/go/bigquery" ) // queryDryRun demonstrates issuing a dry run query to validate query structure and // provide an estimate of the bytes scanned. func queryDryRun(w io.Writer, projectID string) error { // projectID := "my-project-id" ctx := context.Background() client, err := bigquery.NewClient(ctx, projectID) if err != nil { return fmt.Errorf("bigquery.NewClient: %v", err) } defer client.Close() q := client.Query(` SELECT queryDryRun() { // Runs a dry query of the U.S. given names dataset for the state of Texas. const query = `SELECT name FROM \`bigquery-public-data.usa_names.usa_1910_2013\` WHERE state = 'TX' LIMIT 100`; // For all options, see const options = { query: query, // Location must match that of the dataset(s) referenced in the query. location: 'US', dryRun: true, }; // Run the query as a job const [job] = await bigquery.createQueryJob(options); // Print the status and statistics console.log('Status:'); console.log(job.metadata.status); console.log('\nJob Statistics:'); console.log(job.metadata.statistics); } Before trying this sample, follow the Node.js setup instructions in the BigQuery quickstart using client libraries. For more information, see the BigQuery Node.js API reference documentation. Python To perform a dry run using the Python client library, set the QueryJobConfig.dry_run property to True. Client.query() always returns a completed QueryJob when provided a dry run query configuration.))
https://cloud.google.com/bigquery/docs/dry-run-queries?hl=he
CC-MAIN-2021-49
en
refinedweb
One of the things so rarely covered in advanced deep learning books is the specifics of shaping data to input into a network. Along with shaping data is the need to alter the internals of a network to accommodate the new data. The final version of this example is Chapter_3_3.py, but for this exercise, start with the Chapter_3_wgan.py file and follow these steps: - We will start by changing the training set of data from MNIST to CIFAR by swapping out the imports like so: from keras.datasets import mnist #remove or leavefrom keras.datasets import cifar100 #add - At the start of the class, we will change the image size parameters from 28 x 28 grayscale to 32 x 32 color like so: class WGAN(): def __init__(self): ...
https://www.oreilly.com/library/view/hands-on-deep-learning/9781788994071/7cfa4b90-629a-476f-ae80-a5294aa0fecb.xhtml
CC-MAIN-2021-49
en
refinedweb
Writing single-node HPX applications¶ HPX is a C++ Standard Library for Concurrency and Parallelism. This means that it implements all of the corresponding facilities as defined by the C++ Standard. Additionally, HPX implements functionalities proposed as part of the ongoing C++ standardization process. This section focuses on the features available in HPX for parallel and concurrent computation on a single node, although many of the features presented here are also implemented to work in the distributed case. Using LCOs¶ Lightweight Control Objects (LCOs) provide synchronization for HPX applications. Most of them are familiar from other frameworks, but a few of them work in slightly different ways adapted to HPX. The following synchronization objects are available in HPX: future queue object_semaphore barrier Channels¶ Channels combine communication (the exchange of a value) with synchronization (guaranteeing that two calculations (tasks) are in a known state). A channel can transport any number of values of a given type from a sender to a receiver: hpx::lcos::local::channel<int> c; hpx::future<int> f = c.get(); HPX_ASSERT(!f.is_ready()); c.set(42); HPX_ASSERT(f.is_ready()); std::cout << f.get() << std::endl; Channels can be handed to another thread (or in case of channel components, to other localities), thus establishing a communication channel between two independent places in the program: void do_something(hpx::lcos::local::receive_channel<int> c, hpx::lcos::local::send_channel<> done) { // prints 43 std::cout << c.get(hpx::launch::sync) << std::endl; // signal back done.set(); } void send_receive_channel() { hpx::lcos::local::channel<int> c; hpx::lcos::local::channel<> done; hpx::apply(&do_something, c, done); // send some value c.set(43); // wait for thread to be done done.get().wait(); } Note how hpx::lcos::local::channel::get without any arguments returns a future which is ready when a value has been set on the channel. The launch policy hpx::launch::sync can be used to make hpx::lcos::local::channel::get block until a value is set and return the value directly. A channel component is created on one locality and can be sent to another locality using an action. This example also demonstrates how a channel can be used as a range of values: // channel components need to be registered for each used type (not needed // for hpx::lcos::local::channel) HPX_REGISTER_CHANNEL(double) void channel_sender(hpx::lcos::channel<double> c) { for (double d : c) hpx::cout << d << std::endl; } HPX_PLAIN_ACTION(channel_sender) void channel() { // create the channel on this locality hpx::lcos::channel<double> c(hpx::find_here()); // pass the channel to a (possibly remote invoked) action hpx::apply(channel_sender_action(), hpx::find_here(), c); // send some values to the receiver std::vector<double> v = {1.2, 3.4, 5.0}; for (double d : v) c.set(d); // explicitly close the communication channel (implicit at destruction) c.close(); } Composable guards¶ Composable guards operate in a manner similar to locks, but are applied only to asynchronous functions. The guard (or guards) is automatically locked at the beginning of a specified task and automatically unlocked at the end. Because guards are never added to an existing task’s execution context, the calling of guards is freely composable and can never deadlock. To call an application with a single guard, simply declare the guard and call run_guarded() with a function (task): hpx::lcos::local::guard gu; run_guarded(gu,task); If a single method needs to run with multiple guards, use a guard set: boost::shared<hpx::lcos::local::guard> gu1(new hpx::lcos::local::guard()); boost::shared<hpx::lcos::local::guard> gu2(new hpx::lcos::local::guard()); gs.add(*gu1); gs.add(*gu2); run_guarded(gs,task); Guards use two atomic operations (which are not called repeatedly) to manage what they do, so overhead should be extremely low. The following guards are available in HPX: conditional_trigger counting_semaphore dataflow event mutex once recursive_mutex spinlock spinlock_no_backoff trigger Extended facilities for futures¶ Concurrency is about both decomposing and composing the program from the parts that work well individually and together. It is in the composition of connected and multicore components where today’s C++ libraries are still lacking. The functionality of std::future offers a partial solution. It allows for the separation of the initiation of an operation and the act of waiting for its result; however, the act of waiting is synchronous. In communication-intensive code this act of waiting can be unpredictable, inefficient and simply frustrating. The example below illustrates a possible synchronous wait using futures: #include <future> using namespace std; int main() { future<int> f = async([]() { return 123; }); int result = f.get(); // might block } For this reason, HPX implements a set of extensions to std::future (as proposed by __cpp11_n4107__). This proposal introduces the following key asynchronous operations to hpx::future, hpx::shared_future and hpx::async, which enhance and enrich these facilities. The standard also omits the ability to compose multiple futures. This is a common pattern that is ubiquitous in other asynchronous frameworks and is absolutely necessary in order to make C++ a powerful asynchronous programming language. Not including these functions is synonymous to Boolean algebra without AND/OR. In addition to the extensions proposed by N4313, HPX adds functions allowing users to compose several futures in a more flexible way. High level parallel facilities¶ In preparation for the upcoming C++ Standards, there are currently several proposals targeting different facilities supporting parallel programming. HPX implements (and extends) some of those proposals. This is well aligned with our strategy to align the APIs exposed from HPX with current and future C++ Standards. At this point, HPX implements several of the C++ Standardization working papers, most notably N4409 (Working Draft, Technical Specification for C++ Extensions for Parallelism), N4411 (Task Blocks), and N4406 (Parallel Algorithms Need Executors). Using parallel algorithms¶ A parallel algorithm is a function template described by this document which is declared in the (inline) namespace hpx::parallel::v1. Note For compilers that do not support inline namespaces, all of the namespace v1 is imported into the namespace hpx::parallel. The effect is similar to what inline namespaces would do, namely all names defined in hpx::parallel::v1 are accessible from the namespace hpx::parallel as well. All parallel algorithms are very similar in semantics to their sequential counterparts (as defined in the namespace std) with an additional formal template parameter named ExecutionPolicy. The execution policy is generally passed as the first argument to any of the parallel algorithms and describes the manner in which the execution of these algorithms may be parallelized and the manner in which they apply user-provided function objects. The applications of function objects in parallel algorithms invoked with an execution policy object of type hpx::execution::sequenced_policy or hpx::execution::sequenced_task_policy execute in sequential order. For hpx::execution::sequenced_policy the execution happens in the calling thread. The applications of function objects in parallel algorithms invoked with an execution policy object of type hpx::execution::parallel_policy or hpx::execution::parallel_task_policy are permitted to execute in an unordered fashion in unspecified threads, and are indeterminately sequenced within each thread. Important It is the caller’s responsibility to ensure correctness, such as making sure that the invocation does not introduce data races or deadlocks. The applications of function objects in parallel algorithms invoked with an execution policy of type hpx::execution::parallel_unsequenced_policy is, in HPX, equivalent to the use of the execution policy hpx::execution::parallel_policy. Algorithms invoked with an execution policy object of type hpx::parallel::v1::execution_policy execute internally as if invoked with the contained execution policy object. No exception is thrown when an hpx::parallel::v1::execution_policy contains an execution policy of type hpx::execution::sequenced_task_policy or hpx::execution::parallel_task_policy (which normally turn the algorithm into its asynchronous version). In this case the execution is semantically equivalent to the case of passing a hpx::execution::sequenced_policy or hpx::execution::parallel_policy contained in the hpx::parallel::v1::execution_policy object respectively. Parallel exceptions¶ During the execution of a standard parallel algorithm, if temporary memory resources are required by any of the algorithms and no memory is available, the algorithm throws a std::bad_alloc exception. During the execution of any of the parallel algorithms, if the application of a function object terminates with an uncaught exception, the behavior of the program is determined by the type of execution policy used to invoke the algorithm: If the execution policy object is of type hpx::execution::parallel_unsequenced_policy, hpx::terminateshall be called. If the execution policy object is of type hpx::execution::sequenced_policy, hpx::execution::sequenced_task_policy, hpx::execution::parallel_policy, or hpx::execution::parallel_task_policy, the execution of the algorithm terminates with an hpx::exception_listexception. All uncaught exceptions thrown during the application of user-provided function objects shall be contained in the hpx::exception_list. For example, the number of invocations of the user-provided function object in for_each is unspecified. When hpx::parallel::v1::for_each is executed sequentially, only one exception will be contained in the hpx::exception_list object. These guarantees imply that, unless the algorithm has failed to allocate memory and terminated with std::bad_alloc, all exceptions thrown during the execution of the algorithm are communicated to the caller. It is unspecified whether an algorithm implementation will “forge ahead” after encountering and capturing a user exception. The algorithm may terminate with the std::bad_alloc exception even if one or more user-provided function objects have terminated with an exception. For example, this can happen when an algorithm fails to allocate memory while creating or adding elements to the hpx::exception_list object. Executor parameters and executor parameter traits¶ HPX introduces the notion of execution parameters and execution parameter traits. At this point, the only parameter that can be customized is the size of the chunks of work executed on a single HPX thread (such as the number of loop iterations combined to run as a single task). An executor parameter object is responsible for exposing the calculation of the size of the chunks scheduled. It abstracts the (potentially platform-specific) algorithms of determining those chunk sizes. The way executor parameters are implemented is aligned with the way executors are implemented. All functionalities of concrete executor parameter types are exposed and accessible through a corresponding hpx::parallel::executor_parameter_traits type. With executor_parameter_traits, clients access all types of executor parameters uniformly: std::size_t chunk_size = executor_parameter_traits<my_parameter_t>::get_chunk_size(my_parameter, my_executor, [](){ return 0; }, num_tasks); This call synchronously retrieves the size of a single chunk of loop iterations (or similar) to combine for execution on a single HPX thread if the overall number of tasks to schedule is given by num_tasks. The lambda function exposes a means of test-probing the execution of a single iteration for performance measurement purposes. The execution parameter type might dynamically determine the execution time of one or more tasks in order to calculate the chunk size; see hpx::execution::auto_chunk_size for an example of this executor parameter type. Other functions in the interface exist to discover whether an executor parameter type should be invoked once (i.e., it returns a static chunk size; see hpx::execution::static_chunk_size) or whether it should be invoked for each scheduled chunk of work (i.e., it returns a variable chunk size; for an example, see hpx::execution::guided_chunk_size). Although this interface appears to require executor parameter type authors to implement all different basic operations, none are required. In practice, all operations have sensible defaults. However, some executor parameter types will naturally specialize all operations for maximum efficiency. HPX implements the following executor parameter types: hpx::execution::auto_chunk_size: Loop iterations are divided into pieces and then assigned to threads. The number of loop iterations combined is determined based on measurements of how long the execution of 1% of the overall number of iterations takes. This executor parameter type makes sure that as many loop iterations are combined as necessary to run for the amount of time specified. hpx::execution::static_chunk_size: Loop iterations are divided into pieces of a given size and then assigned to threads. If the size is not specified, the iterations are, if possible, evenly divided contiguously among the threads. This executor parameters type is equivalent to OpenMP’s STATIC scheduling directive. hpx::execution::dynamic_chunk_size: Loop iterations are divided into pieces of a given size and then dynamically scheduled among the cores; when a core finishes one chunk, it is dynamically assigned another. If the size is not specified, the default chunk size is 1. This executor parameter type is equivalent to OpenMP’s DYNAMIC scheduling directive. hpx::execution::guided_chunk_size: Iterations are dynamically assigned to cores in blocks as cores request them until no blocks remain to be assigned. This is similar to dynamic_chunk_sizeexcept that the block size decreases each time a number of loop iterations is given to a thread. The size of the initial block is proportional to number_of_iterations / number_of_cores. Subsequent blocks are proportional to number_of_iterations_remaining / number_of_cores. The optional chunk size parameter defines the minimum block size. The default minimal chunk size is 1. This executor parameter type is equivalent to OpenMP’s GUIDED scheduling directive. Using task blocks¶ The define_task_block, run and the wait functions implemented based on N4411 are based on the task_block concept that is a part of the common subset of the Microsoft Parallel Patterns Library (PPL) and the Intel Threading Building Blocks (TBB) libraries. These implementations adopt a simpler syntax than exposed by those libraries— one that is influenced by language-based concepts, such as spawn and sync from Cilk++ and async and finish from X10. They improve on existing practice in the following ways: The exception handling model is simplified and more consistent with normal C++ exceptions. Most violations of strict fork-join parallelism can be enforced at compile time (with compiler assistance, in some cases). The syntax allows scheduling approaches other than child stealing. Consider an example of a parallel traversal of a tree, where a user-provided function compute is applied to each node of the tree, returning the sum of the results: template <typename Func> int traverse(node& n, Func && compute) { int left = 0, right = 0; define_task_block( [&](task_block<>& tr) { if (n.left) tr.run([&] { left = traverse(*n.left, compute); }); if (n.right) tr.run([&] { right = traverse(*n.right, compute); }); }); return compute(n) + left + right; } The example above demonstrates the use of two of the functions, hpx::parallel::define_task_block and the hpx::parallel::task_block::run member function of a hpx::parallel::task_block. The task_block function delineates a region in a program code potentially containing invocations of threads spawned by the run member function of the task_block class. The run function spawns an HPX thread, a unit of work that is allowed to execute in parallel with respect to the caller. Any parallel tasks spawned by run within the task block are joined back to a single thread of execution at the end of the define_task_block. run takes a user-provided function object f and starts it asynchronously—i.e., it may return before the execution of f completes. The HPX scheduler may choose to run f immediately or delay running f until compute resources become available. A task_block can be constructed only by define_task_block because it has no public constructors. Thus, run can be invoked directly or indirectly only from a user-provided function passed to define_task_block: void g(); void f(task_block<>& tr) { tr.run(g); // OK, invoked from within task_block in h } void h() { define_task_block(f); } int main() { task_block<> tr; // Error: no public constructor tr.run(g); // No way to call run outside of a define_task_block return 0; } Extensions for task blocks¶ Using execution policies with task blocks¶ HPX implements some extensions for task_block beyond the actual standards proposal N4411. The main addition is that a task_block can be invoked with an execution policy as its first argument, very similar to the parallel algorithms. An execution policy is an object that expresses the requirements on the ordering of functions invoked as a consequence of the invocation of a task block. Enabling passing an execution policy to define_task_block gives the user control over the amount of parallelism employed by the created task_block. In the following example the use of an explicit par execution policy makes the user’s intent explicit: template <typename Func> int traverse(node *n, Func&& compute) { int left = 0, right = 0; define_task_block( execution::par, // execution::parallel_policy [&](task_block<>& tb) { if (n->left) tb.run([&] { left = traverse(n->left, compute); }); if (n->right) tb.run([&] { right = traverse(n->right, compute); }); }); return compute(n) + left + right; } This also causes the hpx::parallel::v2::task_block object to be a template in our implementation. The template argument is the type of the execution policy used to create the task block. The template argument defaults to hpx::execution::parallel_policy. HPX still supports calling hpx::parallel::v2::define_task_block without an explicit execution policy. In this case the task block will run using the hpx::execution::parallel_policy. HPX also adds the ability to access the execution policy that was used to create a given task_block. Using executors to run tasks¶ Often, users want to be able to not only define an execution policy to use by default for all spawned tasks inside the task block, but also to customize the execution context for one of the tasks executed by task_block::run. Adding an optionally passed executor instance to that function enables this use case: template <typename Func> int traverse(node *n, Func&& compute) { int left = 0, right = 0; define_task_block( execution::par, // execution::parallel_policy [&](auto& tb) { if (n->left) { // use explicitly specified executor to run this task tb.run(my_executor(), [&] { left = traverse(n->left, compute); }); } if (n->right) { // use the executor associated with the par execution policy tb.run([&] { right = traverse(n->right, compute); }); } }); return compute(n) + left + right; } HPX still supports calling hpx::parallel::v2::task_block::run without an explicit executor object. In this case the task will be run using the executor associated with the execution policy that was used to call hpx::parallel::v2::define_task_block.
https://hpx-docs.stellar-group.org/branches/master/html/manual/writing_single_node_hpx_applications.html
CC-MAIN-2021-49
en
refinedweb
Multithreaded Thread Synchronization 1. Methods involved - wait (): Once this method is executed, the current thread is blocked and the synchronization monitor is released - notify(): Once this method is executed, a process being waited is waked up, and if more than one thread is waiting, a higher priority thread is waked up - notifyAll(): Once this method is executed, all wait ed threads will be awakened 2. Description - 1.wait(), notify(), notifyAll() three methods must be used in the synchronization block or synchronization method - Callers of the 2.wait(), notify(), and notifyAll() methods must synchronize the code block or synchronization monitor calls in the synchronization method at the same time. Otherwise, IllegalMonitorSTATEexception exception will occur - 3.wait(), notify(), notifyAll() are defined in the java.lang.Object class 3. Examples An example of thread communication: printing 1-100 with two threads. Alternating printing with thread1 and thread2 class Number implements Runnable{ private int number = 1; @Override public void run() { while(true){ synchronized (this) { // Wake Up Process notify(); if (number <= 100) { System.out.println(Thread.currentThread().getName() + ":" + number); number++; try { //Causes the thread calling the following wait() method to become blocked wait(); //wait releases the lock } catch (InterruptedException e) { e.printStackTrace(); } } else { break; } } } } } public class CommunicationTest { public static void main(String[] args) { Number number = new Number(); Thread t1 = new Thread(number); Thread t2 = new Thread(number); t1.setName("Thread 1"); t2.setName("Thread 2"); t1.start(); t2.start(); } } Interview Question: What are the similarities and differences between sleep () and wait ()? - Same point: once a method is executed, the current thread can be blocked - The differences are as follows: 1) The positions of the two method declarations are different: sleep () is declared in the Thread class and wait () in the Object class. - The scope of the call varies: sleep () can be called in any scenario you want. wait() must be used in a synchronization block or synchronization method - About whether to release the synchronization monitor: If both methods are used in the synchronization block or synchronization method, sleep () will not release the lock, wait () will release the lock - The location of the two method declarations is different: sleep () in the Thread class and wait () in the Object class. Classic example: producer/consumer problem The Productor delivers the product to the Clerk and the Customer takes the product from the clerk, thinking that the clerk can only hold a fixed number of products at a time. (e.g. 20) If a producer tries to produce more products, the shop assistant will ask the producer to stop and notify the producer to continue production if there is space in the store. If there is no product in the store, the shop assistant will tell the consumer to wait and if there is a product in the store, notify the consumer to take the product. Two problems may arise here: - When a producer is faster than a consumer, the consumer will miss some data and miss it. - When consumers are faster than producers, consumers get the same data Analysis: - 1. Is it a multi-threaded problem? Yes, producer thread, consumer thread - 2. Is there shared data? Yes, the shop assistant (or product) - 3. How to resolve thread security? There are three ways to synchronize - 4. Do you want to design communication between threads? package day9.tenone; class Clerk{ //product private int produceCount = 0; //yield a product public synchronized void produceProduct() { if (produceCount < 20){ produceCount++; System.out.println(Thread.currentThread().getName()+":Start production" +produceCount+ "Products"); notify(); }else { try { wait(); } catch (InterruptedException e) { e.printStackTrace(); } } } //Consumer Products public synchronized void consumeProduct() { if (produceCount > 0){ System.out.println(Thread.currentThread().getName()+":Start Consuming" +produceCount+ "Products"); produceCount--; notify(); }else { //wait for try { wait(); } catch (InterruptedException e) { e.printStackTrace(); } } } } class Producer extends Thread{ //Producer private Clerk clerk; public Producer(Clerk clerk){ this.clerk = clerk; } @Override public void run() { System.out.println(getName()+":Start production...."); while (true){ try { sleep(10); } catch (InterruptedException e) { e.printStackTrace(); } clerk.produceProduct(); } } } class Consumer extends Thread{ private Clerk clerk; public Consumer(Clerk clerk){ this.clerk = clerk; } @Override public void run() { System.out.println(getName()+":Start Consumer Products...."); while (true){ try { sleep(10); } catch (InterruptedException e) { e.printStackTrace(); } clerk.consumeProduct(); } } } public class ProductTest { public static void main(String[] args) { Clerk clerk = new Clerk(); Producer p1 = new Producer(clerk); p1.setName("Producer 1"); Consumer c1 = new Consumer(clerk); c1.setName("Consumer 1"); p1.start(); c1.start(); } }
https://programmer.group/java-thread-communication.html
CC-MAIN-2021-49
en
refinedweb
The documentation you are viewing is for Dapr v1.4 which is an older version of Dapr. For up-to-date documentation, see the latest version. Secret store components Dapr integrates with secret stores to provide apps and other components with secure storage and access to secrets such as access keys and passwords. Each secret store component has a name and this name is used when accessing a secret. As with other building block components, secret store components are extensible and can be found in the components-contrib repo. A secret store in Dapr is described using a Component file with the following fields: apiVersion: dapr.io/v1alpha1 kind: Component metadata: name: secretstore namespace: default spec: type: secretstores.<NAME> version: v1 metadata: - name: <KEY> value: <VALUE> - name: <KEY> value: <VALUE> ... The type of secret store is determined by the type field, and things like connection strings and other metadata are put in the .metadata section. Different supported secret stores will have different specific fields that would need to be configured. For example, when configuring a secret store which uses AWS Secrets Manager the file would look like this: apiVersion: dapr.io/v1alpha1 kind: Component metadata: name: awssecretmanager namespace: default spec: type: secretstores.aws.secretmanager version: v1 metadata: - name: region value: "[aws_region]" - name: accessKey value: "[aws_access_key]" - name: secretKey value: "[aws_secret_key]" - name: sessionToken value: "[aws_session_token]" secret-store.yaml, run: kubectl apply -f secret-store.yaml Supported secret stores Visit the secret stores reference for a full list of supported secret stores. Related links Feedback Was this page helpful? Glad to hear it! Please tell us how we can improve. Sorry to hear that. Please tell us how we can improve.
https://v1-4.docs.dapr.io/operations/components/setup-secret-store/
CC-MAIN-2021-49
en
refinedweb
HPX V1.5.0 (Sep 02, 2020)¶ General changes¶ The main focus of this release is on APIs and C++20 conformance. We have added many new C++20 features and adapted multiple algorithms to be fully C++20 conformant. As part of the modularization we have begun specifying the public API of HPX in terms of headers and functionality, and aligning it more closely to the C++ standard. All non-distributed modules are now in place, along with an experimental option to completely disable distributed features in HPX. We have also added experimental asynchronous MPI and CUDA executors. Lastly this release introduces CMake targets for depending projects, performance improvements, and many bug fixes. We have added the C++20 features hpx::jthreadand hpx::stop_token. hpx::condition_variable_anynow exposes new functions supporting hpx::stop_token. We have added hpx::stable_sortbased on Francisco Tapia’s implementation. We have adapted existing synchronization primitives to be fully conformant C++20: hpx::barrier, hpx::latch, hpx::counting_semaphore, and hpx::binary_semaphore. We have started using customization point objects (CPOs) to make the corresponding algorithms fully conformant to C++20 as well as to make algorithm extension easier for the user. all_of/ any_of/ none_of, copy, count, destroy, equal, fill, find, for_each, generate, mismatch, move, reduce, transform_reduceare using those CPOs (all in namespace hpx). We also have adapted their corresponding hpx::rangesversions to be conforming to C++20 in this release. We have adapted support for co_awaitto C++20, in addition to hpx::futureit now also supports hpx::shared_future. We have also added allocator support for futures returned by co_return. It is no longer in the experimentalnamespace. We added serialization support for std::variantand std::tuple. result_ofand is_callableare now deprecated and replaced by invoke_resultand is_invocableto conform to C++20. We continued with the modularization, making it easier for us to add the new experimental HPX_WITH_DISTRIBUTED_RUNTIMECMake option (see below) . An significant amount of headers have been deprecated. We adapted the namespaces and headers we could to be closer to the standard ones (Public API). Depending code should still compile, however warnings are now generated instructing to change the include statements accordingly. It is now possible to have a basic CUDA support including a helper function to get a future from a CUDA stream and target handling. They are available under the hpx::cuda::experimentalnamespace and they can be enabled with the -DHPX_WITH_ASYNC_CUDA=ONCMake option. We added a new hpx::mpi::experimentalnamespace for getting futures from an asynchronous MPI call and a new minimal MPI executor hpx::mpi::experimental::executor. These can be enabled with the -DHPX_WITH_ASYNC_MPI=OnCMake option. A polymorphic executor has been implemented to reduce compile times as a function accepting executors can potentially be instantiated only once instead of multiple times with different executors. It accepts the function signature as a template argument. It needs to be constructed from any other executor. Please note, that the function signatures that can be scheduled using then_execute, bulk_sync_execute, bulk_async_executeand bulk_then_executeare slightly different (See the comment in PR #4514 for more details). The underlying executor of block_executorhas been updated to a newer one. We have added a parameter to auto_chunk_sizeto control the amount of iterations to measure. All executor parameter hooks can now be exposed through the executor itself. This will allow to deprecate the .with()functionality on execution policies in the future. This is also a first step towards simplifying our executor APIs in preparation for the upcoming C++23 executors (senders/receivers). We have moved all of the existing APIs related to resiliency into the namespace hpx::resiliency::experimental. Please note this is a breaking change without backwards-compatibility option. We have converted all of those APIs to be based on customization point objects. Two new executors have been added to enable easy integration of the existing resiliency features with other facilities (like the parallel algorithms): replay_executorand replicate_executor. We have added performance counters type information ( aggregating, monotonically increasing, average count, average timer, etc.). HPX threads are now re-scheduled on the same worker thread they were suspended on to avoid cache misses from moving from one thread to the other. This behavior doesn’t prevent the thread from being stolen, however. We have added a new configuration option hpx.exception_verbosityto allow to control the level of verbosity of the exceptions (3 levels available). broadcast_to, broadcast_from, scatter_toand scatter_fromhave been added to the collectives, modernization of gather_hereand gather_therewith futures taken by rvalue references. See the breaking change on all_to_allin the next section. None of the collectives need supporting macros anymore (e.g. specifying the data types used for a collective operation using HPX_REGISTER_ALLGATHERand similar is not needed anymore). New API functions have been added: a) to get the number of cores which are idle ( hpx::get_idle_core_count) and b) returning a bitmask representing the currently idle cores ( hpx::get_idle_core_mask). We have added an experimental option to only enable the local runtime, you can disable the distributed runtime with HPX_WITH_DISTRIBUTED_RUNTIME=OFF. You can also enable the local runtime by using the --hpx:localruntime option. We fixed task annotations for actions. The alias hpx::promiseto hpx::lcos::promiseis now deprecated. You can use hpx::lcos::promisedirectly instead. hpx::promisewill refer to the local-only promise in the future. We have added a prepare_checkpointAPI function that calculates the amount of necessary buffer space for a particular set of arguments checkpointed. We have added hpx::upgrade_lockand hpx::upgrade_to_unique_lock, which make hpx::shared_mutex(and similar) usable in more flexible ways. We have changed the CMake targets exposed to the user, it now includes HPX::hpx, HPX::wrap_main( int mainas the first HPX thread of the application, see Starting the HPX runtime), HPX::plugin, HPX::component. The CMake variables HPX_INCLUDE_DIRSand HPX_LIBRARIESare deprecated and will be removed in a future release, you should now link directly to the HPX::hpxCMake target. A new example is demonstrating how to create and use a wrapping executor ( quickstart/executor_with_thread_hooks.cpp) A new example is demonstrating how to disable thread stealing during the execution of parallel algorithms ( quickstart/disable_thread_stealing_executor.cpp) We now require for our CMake build system configuration files to be formatted using cmake-format. We have removed more dependencies on various Boost libraries. We have added an experimental option enabling unity builds of HPX using the -DHPX_WITH_UNITY_BUILD=OnCMake option. Many bug fixes. Breaking changes¶ HPX now requires a C++14 capable compiler. We have set the HPX C++ standard automatically to C++14 and if it needs to be set explicitly, it should be specified through the CMAKE_CXX_STANDARDsetting as mandated by CMake. The HPX_WITH_CXX*variables are now deprecated and will be removed in the future. Building and using HPX is now supported only when using CMake V3.13 or later, Boost V1.64 or newer, and when compiling with clang V5, gcc V7, or VS2019, or later. Other compilers might still work but have not been tested thoroughly. We have added a hpx::init_paramsstruct to pass parameters for HPX initialization e.g. the resource partitioner callback to initialize thread pools (Using the resource partitioner). The all_to_allalgorithm is renamed to all_gather, and the new all_to_allalgorithm is not compatible with the old one. We have moved all of the existing APIs related to resiliency into the namespace hpx::resiliency::experimental. Closed issues¶ Issue #4918 - Rename distributed_executors module Issue #4900 - Adding JOSS status badge to README Issue #4897 - Compiler warning, deprecated header used by HPX itself Issue #4886 - A future bound to an action executing on a different locality doesn’t capture exception state Issue #4880 - Undefined reference to main build error when HPX_WITH_DYNAMIC_HPX_MAIN=OFF Issue #4877 - hpx_main might not able to start hpx runtime properly Issue #4850 - Issues creating templated component Issue #4829 - Spack package & HPX_WITH_GENERIC_CONTEXT_COROUTINES Issue #4820 - PAPI counters don’t work Issue #4818 - HPX can’t be used with IO pool turned off Issue #4816 - Build of HPX fails when find_package(Boost) is called before FetchContent_MakeAvailable(hpx) Issue #4813 - HPX MPI Future failed Issue #4811 - Remove HPX::hpx_no_wrap_main target before 1.5.0 release Issue #4810 - In hpx::for_each::invoke_projected the hpx::util::decay is misguided Issue #4787 - transform_inclusive_scan gives incorrect results for non-commutative operator Issue #4786 - transform_inclusive_scan tries to implicitly convert between types, instead of using the provided conv function Issue #4779 - HPX build error with GCC 10.1 Issue #4766 - Move HPX.Compute functionality to experimental namespace Issue #4763 - License file name Issue #4758 - CMake profiling results Issue #4755 - Building HPX with support for PAPI fails Issue #4754 - CMake cache creation breaks when using HPX with mimalloc Issue #4752 - HPX MPI Future build failed Issue #4746 - Memory leak when using dataflow icw components Issue #4731 - Bug in stencil example, calculation of locality IDs Issue #4723 - Build fail with NETWORKING OFF Issue #4720 - Add compatibility headers for modules that had their module headers implicitly generated in 1.4.1 Issue #4719 - Undeprecate some module headers Issue #4712 - Rename HPX_MPI_WITH_FUTURES option Issue #4709 - Make deprecation warnings overridable in dependent projects Issue #4691 - Suggestion to fix and enhance the thread_mapper API Issue #4686 - Fix tutorials examples Issue #4685 - HPX distributed map fails to compile Issue #4680 - Build error with HPX_WITH_DYNAMIC_HPX_MAIN=OFF Issue #4679 - Build error for hpx w/ Apex on Summit Issue #4675 - build error with HPX_WITH_NETWORKING=OFF Issue #4674 - Error running Quickstart tests on OS X Issue #4662 - MPI initialization broken when networking off Issue #4652 - How to fix distributed action annotation Issue #4650 - thread descriptions are broken…again Issue #4648 - Thread stacksize not properly set Issue #4647 - Rename generated collective headers in modules Issue #4639 - Update deprecation warnings in compatibility headers to point to collective headers Issue #4628 - mpi parcelport totally broken Issue #4619 - Fully document hpx_wrap behaviour and targets Issue #4612 - Compilation issue with HPX 1.4.1 and 1.4.0 Issue #4594 - Rename modules Issue #4578 - Default value for HPX_WITH_THREAD_BACKTRACE_DEPTH Issue #4572 - Thread manager should be given a runtime_configuration Issue #4571 - Add high-level documentation to new modules Issue #4569 - Annoying warning when compiling - pls suppress or fix it. Issue #4555 - HPX_HAVE_THREAD_BACKTRACE_ON_SUSPENSION compilation error Issue #4543 - Segfaults in Release builds using sleep_for Issue #4539 - Compilation Error when HPX_MPI_WITH_FUTURES=ON Issue #4537 - Linking issue with libhpx_initd.a Issue #4535 - API for checking if pool with a given name exists Issue #4523 - Build of PR #4311 (git tag 9955e8e) fails Issue #4519 - Documentation problem Issue #4513 - HPXConfig.cmake contains ill-formed paths when library paths use backslashes Issue #4507 - User-polling introduced by MPI futures module should be more generally usable Issue #4506 - Make sure force_linking.hpp is not included in main module header Issue #4501 - Fix compilation of PAPI tests Issue #4497 - Add modules CI checks Issue #4489 - Polymorphic executor Issue #4476 - Use CMake targets defined by FindBoost Issue #4473 - Add vcpkg installation instructions Issue #4470 - Adapt hpx::future to C++20 co_await Issue #4468 - Compile error on Raspberry Pi 4 Issue #4466 - Compile error on Windows, current stable: Issue #4453 - Installing HPX on fedora with dnf is not adding cmake files Issue #4448 - New std::variant serialization broken Issue #4438 - Add performance counter flag is monotically increasing Issue #4436 - Build problem: same code build and works with 1.4.0 but it doesn’t with 1.4.1 Issue #4429 - Function descriptions not supported in distributed Issue #4423 - –hpx:ini=hpx.lock_detection=0 has no effect Issue #4422 - Add performance counter metadata Issue #4419 - Weird behavior for –hpx:print-counter-interval with large numbers Issue #4401 - Create module repository Issue #4400 - Command line options conflict related to performance counters Issue #4349 - –hpx:use-process-mask option throw an exception on OS X Issue #4345 - Move gh-pages branch out of hpx repo Issue #4323 - Const-correctness error in assignment operator of compute::vector Issue #4318 - ASIO breaks with C++2a concepts Issue #4317 - Application runs even if –hpx:help is specified Issue #4063 - Document hpxcxx compiler wrapper Issue #3983 - Implement the C++20 Synchronization Library Issue #3696 - C++11 constexpr support is now required Issue #3623 - Modular HPX branch and an alternative project layout Issue #2836 - The worst-case time complexity of parallel::sort seems to be O(N^2). Closed pull requests¶ PR #4936 - Minor documentation fixes part 2 PR #4935 - Add copyright and license to joss paper file PR #4934 - Adding Semicolon in Documentation PR #4932 - Fixing compiler warnings PR #4931 - Small documentation formatting fixes PR #4930 - Documentation Distributed HPX applications localvv with local_vv PR #4929 - Add final version of the JOSS paper PR #4928 - Add HPX_NODISCARD to enable_user_polling structs PR #4926 - Rename distributed_executors module to executors_distributed PR #4925 - Making transform_reduce conforming to C++20 PR #4923 - Don’t acquire lock if not needed PR #4921 - Update the release notes for the release candidate 3 PR #4920 - Disable libcds release PR #4919 - Make cuda event pool dynamic instead of fixed size PR #4917 - Move chrono functionality to hpx::chrono namespace PR #4916 - HPX_HAVE_DEPRECATION_WARNINGS needs to be set even when disabled PR #4915 - Moving more action related files to actions modules PR #4914 - Add alias targets with namespaces used for exporting PR #4912 - Aggregate initialize CPOs PR #4910 - Explicitly specify hwloc root on Jenkins CSCS builds PR #4908 - Fix algorithms documentation PR #4907 - Remove HPX::hpx_no_wrap_main target PR #4906 - Fixing unused variable warning PR #4905 - Adding specializations for simple for_loops PR #4904 - Update boost to 1.74.0 for the newest jenkins configs PR #4903 - Hide GITHUB_TOKEN environment variables from environment variable output PR #4902 - Cancel previous pull requests builds before starting a new one with Jenkins PR #4901 - Update public API list with updated algorithms PR #4899 - Suggested changes for HPX V1.5 release notes PR #4898 - Minor tweak to hpx::equal implementation PR #4896 - Making generate() and generate_n conforming to C++20 PR #4895 - Update apex tag PR #4894 - Fix exception handling for tasks PR #4893 - Remove last use of std::result_of, removed in C++20 PR #4892 - Adding replay_executor and replicate_executor PR #4889 - Restore old behaviour of not requiring linking to hpx_wrap when HPX_WITH_DYNAMIC_HPX_MAIN=OFF PR #4887 - Making sure remotely thrown (non-hpx) exceptions are properly marshaled back to invocation site PR #4885 - Adapting hpx::find and friends to C++20 PR #4884 - Adapting mismatch to C++20 PR #4883 - Adapting hpx::equal to be conforming to C++20 PR #4882 - Fixing exception handling for hpx::copy and adding missing tests PR #4881 - Adds different runtime exception when registering thread with the HPX runtime PR #4876 - Adding example demonstrating how to disable thread stealing during the execution of parallel algorithms PR #4874 - Adding non-policy tests to all_of, any_of, and none_of PR #4873 - Set CUDA compute capability on rostam Jenkins builds PR #4872 - Force partitioned vector scan tests to run serially PR #4870 - Making move conforming with C++20 PR #4869 - Making destroy and destroy_n conforming to C++20 PR #4868 - Fix miscellaneous header problems PR #4867 - Add CPOs for for_each PR #4865 - Adapting count and count_if to be conforming to C++20 PR #4864 - Release notes 1.5.0 PR #4863 - adding libcds-hpx tag to prepare for hpx1.5 release PR #4862 - Adding version specific deprecation options PR #4861 - Limiting executor improvements PR #4860 - Making fill and fill_n compatible with C++20 PR #4859 - Adapting all_of, any_of, and none_of to C++20 PR #4857 - Improve libCDS integration PR #4856 - Correct typos in the documentation of the hpx performance counters PR #4854 - Removing obsolete code PR #4853 - Adding test that derives component from two other components PR #4852 - Fix mpi_ring test in distributed mode by ensuring all ranks run hpx_main PR #4851 - Converting resiliency APIs to tag_invoke based CPOs PR #4849 - Enable use of future_overhead test when DISTRIBUTED_RUNTIME is OFF PR #4847 - Fixing ‘error prone’ constructs as reported by Codacy PR #4846 - Disable Boost.Asio concepts support PR #4845 - Fix PAPI counters PR #4843 - Remove dependency on various Boost headers PR #4841 - Rearrange public API headers PR #4840 - Fixing TSS problems during thread termination PR #4839 - Fix async_cuda build problems when distributed runtime is disabled PR #4837 - Restore compatibility for old (now deprecated) copy algorithms PR #4836 - Adding CPOs for hpx::reduce PR #4835 - Remove using util::result_of from namespace hpx PR #4834 - Fixing the calculation of the number of idle cores and the corresponding idle masks PR #4833 - Allow thread function destructors to yield PR #4832 - Fixing assertion in split_gids and memory leaks in 1d_stencil_7 PR #4831 - Making sure MPI_CXX_COMPILE_FLAGS is interpreted as a sequence of options PR #4830 - Update documentation on using HPX::wrap_main PR #4827 - Update clang-newest configuration to use clang 10 PR #4826 - Add Jenkins configuration for rostam PR #4825 - Move all CUDA functionality to hpx::cuda::experimental namespace PR #4824 - Add support for building master/release branches to Jenkins configuration PR #4821 - Implement customization point for hpx::copy and hpx::ranges::copy PR #4819 - Allow finding Boost components before finding HPX PR #4817 - Adding range version of stable sort PR #4815 - Fix a wrong #ifdef for IO/TIMER pools causing build errors PR #4814 - Replace hpx::function_nonser with std::function in error module - PR #4808 - Make internal algorithms functions const PR #4807 - Add Jenkins configuration for running on Piz Daint PR #4806 - Update documentation links to new domain name PR #4805 - Applying changes that resolve time complexity issues in sort PR #4803 - Adding implementation of stable_sort PR #4802 - Fix datapar header paths PR #4801 - Replace boost::shared_array<T> with std::shared_ptr<T[]> if supported PR #4799 - Fixing #include paths in compatibility headers PR #4798 - Include the main module header (fixes partially #4488) PR #4797 - Change cmake targets PR #4794 - Removing 128bit integer emulation PR #4793 - Make sure global variable is handled properly PR #4792 - Replace enable_if with HPX_CONCEPT_REQUIRES_ and add is_sentinel_for constraint PR #4790 - Move deprecation warnings from base template to template specializations for result_of etc. structs PR #4789 - Fix hangs during assertion handling and distributed runtime construction PR #4788 - Fixing inclusive transform scan algorithm to properly handle initial value PR #4785 - Fixing barrier test PR #4784 - Fixing deleter argument bindings in serialize_buffer PR #4783 - Add coveralls badge PR #4782 - Make header tests parallel again PR #4780 - Remove outdated comment about hpx::stop in documentation PR #4776 - debug print improvements PR #4775 - Checkpoint cleanup PR #4771 - Fix compilation with HPX_WITH_NETWORKING=OFF PR #4767 - Remove all force linking leftovers PR #4765 - Fix 1d stencil index calculation PR #4764 - Force some tests to run serially PR #4762 - Update pointees in compatibility headers PR #4761 - Fix running and building of execution module tests on CircleCI PR #4760 - Storing hpx_options in global property to speed up summary report PR #4759 - Reduce memory requirements for our main shared state PR #4757 - Fix mimalloc linking on Windows PR #4756 - Fix compilation issues PR #4753 - Re-adding API functions that were lost during merges PR #4751 - Revert “Create coverage reports and upload them to codecov.io” PR #4750 - Fixing possible race condition during termination detection PR #4749 - Deprecate result_of and friends PR #4748 - Create coverage reports and upload them to codecov.io PR #4747 - Changing #include for MPI parcelport PR #4745 - Add is_sentinel_for trait implementation and test PR #4743 - Fix init_globally example after runtime mode changes PR #4742 - Update SUPPORT.md PR #4741 - Fixing a warning generated for unity builds with msvc PR #4740 - Rename local_lcos and basic_execution modules PR #4739 - Undeprecate a couple of hpx/modulename.hpp headers PR #4738 - Conditionally test schedulers in thread_stacksize_current test PR #4734 - Fixing a bunch of codacy warnings PR #4733 - Add experimental unity build option to CMake configuration PR #4730 - Fixing compilation problems with unordered map PR #4729 - Fix APEX build PR #4727 - Fix missing runtime includes for distributed runtime PR #4726 - Add more API headers PR #4725 - Add more compatibility headers for deprecated module headers - PR #4721 - Attempt to fixing migration tests PR #4717 - Make the compatilibility headers macro conditional PR #4716 - Add hpx/runtime.hpp and hpx/distributed/runtime.hpp API headers PR #4714 - Add hpx/future.hpp header PR #4713 - Remove hpx/runtime/threads_fwd.hpp and hpx/util_fwd.hpp PR #4711 - Make module deprecation warnings overridable PR #4710 - Add compatibility headers and other fixes after module header renaming PR #4708 - Add termination handler for parallel algorithms PR #4707 - Use hpx::function_nonser instead of std::function internally PR #4706 - Move header file to module PR #4705 - Fix incorrect behaviour of cmake-format check PR #4704 - Fix resource tests PR #4701 - Fix missing includes for future::then specializations PR #4700 - Removing obsolete memory component PR #4699 - Add short descriptions to modules missing documentation PR #4696 - Rename generated modules headers PR #4693 - Overhauling thread_mapper for public consumption PR #4688 - Fix thread stack size handling PR #4687 - Adding all_gather and fixing all_to_all PR #4684 - Miscellaneous compilation fixes PR #4683 - Fix HPX_WITH_DYNAMIC_HPX_MAIN=OFF PR #4682 - Fix compilation of pack_traversal_rebind_container.hpp PR #4681 - Add missing hpx/execution.hpp includes for future::then PR #4678 - Typeless communicator PR #4677 - Forcing registry option to be accepted without checks. PR #4676 - Adding scatter_to/scatter_from collective operations PR #4673 - Fix PAPI counters compilation PR #4671 - Deprecate hpx::promise alias to hpx::lcos::promise PR #4670 - Explicitly instantiate get_exception PR #4667 - Add stopValue in Sentinel struct instead of Iterator PR #4666 - Add release build on Windows to GitHub actions PR #4664 - Creating itt_notify module. - PR #4659 - Making sure declarations match definitions in register_locks implementation PR #4655 - Fixing task annotations for actions PR #4653 - Making sure APEX is linked into every application, if needed PR #4651 - Update get_function_annotation.hpp - PR #4645 - Add a few more API headers PR #4644 - Fixing support for mpirun (and similar) PR #4643 - Fixing the fix for get_idle_core_count() API PR #4638 - Remove HPX_API_EXPORT missed in previous cleanup PR #4636 - Adding C++20 barrier PR #4635 - Adding C++20 latch API PR #4634 - Adding C++20 counting semaphore API PR #4633 - Unify execution parameters customization points PR #4632 - Adding missing bulk_sync_execute wrapper to example executor PR #4631 - Updates to documentation; grammar edits. PR #4630 - Updates to documentation; moved hyperlink PR #4624 - Export set_self_ptr in thread_data.hpp instead of with forward declarations where used PR #4623 - Clean up export macros PR #4621 - Trigger an error for older boost versions on power architectures PR #4617 - Ignore user-set compatibility header options if the module does not have compatibility headers PR #4616 - Fix cmake-format warning PR #4615 - Add handler for serializing custom exceptions PR #4614 - Fix error message when HPX_IGNORE_CMAKE_BUILD_TYPE_COMPATIBILITY=OFF PR #4613 - Make partitioner constructor private PR #4611 - Making auto_chunk_size execute the given function using the given executor PR #4610 - Making sure the thread-local lock registration data is moving to the core the suspended HPX thread is resumed on PR #4609 - Adding an API function that exposes the number of idle cores PR #4608 - Fixing moodycamel namespace PR #4607 - Moving winsocket initialization to core library PR #4606 - Local runtime module etc. PR #4604 - Add config_registry module PR #4603 - Deal with distributed modules in their respective CMakeLists.txt PR #4602 - Small module fixes PR #4598 - Making sure current_executor and service_executor functions are linked into the core library PR #4597 - Adding broadcast_to/broadcast_from to collectives module PR #4596 - Fix performance regression in block_executor PR #4595 - Making sure main.cpp is built as a library if HPX_WITH_DYNAMIC_MAIN=OFF PR #4592 - Futures module PR #4591 - Adapting co_await support for C++20 PR #4590 - Adding missing exception test for for_loop() PR #4587 - Move traits headers to hpx/modulename/traits directory PR #4586 - Remove Travis CI config PR #4585 - Update macOS test blacklist PR #4584 - Attempting to fix missing symbols in stack trace PR #4583 - Fixing bad static_cast PR #4582 - Changing download url for Windows prerequisites to circumvent bandwidth limitations PR #4581 - Adding missing using placeholder::_X PR #4579 - Move get_stack_size_name and related functions PR #4575 - Excluding unconditional definition of class backtrace from global header PR #4574 - Changing return type of hardware_concurrency() to unsigned int PR #4570 - Move tests to modules PR #4564 - Reshuffle internal targets and add HPX::hpx_no_wrap_main target PR #4563 - fix CMake option typo PR #4562 - Unregister lock earlier to avoid holding it while suspending PR #4561 - Adding test macros supporting custom output stream PR #4560 - Making sure hash_any::operator()() is linked into core library PR #4559 - Fixing compilation if HPX_WITH_THREAD_BACKTRACE_ON_SUSPENSION=On PR #4557 - Improve spinlock implementation to perform better in high-contention situations PR #4553 - Fix a runtime_ptr problem at shutdown when apex is enabled PR #4552 - Add configuration option for making exceptions less noisy PR #4551 - Clean up thread creation parameters PR #4549 - Test FetchContent build on GitHub actions PR #4548 - Fix stack size PR #4545 - Fix header tests PR #4544 - Fix a typo in sanitizer build PR #4541 - Add API to check if a thread pool exists PR #4540 - Making sure MPI support is enabled if MPI futures are used but networking is disabled PR #4538 - Move channel documentation examples to examples directory PR #4536 - Add generic allocator for execution policies PR #4534 - Enable compatibility headers for thread_executors module PR #4532 - Fixing broken url in README.rst PR #4531 - Update scripts PR #4530 - Make sure module API docs show up in correct order PR #4529 - Adding missing template code to module creation script PR #4528 - Make sure version module uses HPX’s binary dir, not the parent’s PR #4527 - Creating actions_base and actions module PR #4526 - Shared state for cv PR #4525 - Changing sub-name sequencing for experimental namespace PR #4524 - Add API guarantee notes to API reference documentation PR #4522 - Enable and fix deprecation warnings in execution module PR #4521 - Moves more miscellaneous files to modules PR #4520 - Skip execution customization points when executor is known PR #4518 - Module distributed lcos PR #4516 - Fix various builds PR #4515 - Replace backslashes by slashes in windows paths PR #4514 - Adding polymorphic_executor PR #4512 - Adding C++20 jthread and stop_token PR #4510 - Attempt to fix APEX linking in external packages again PR #4508 - Only test pull requests (not all branches) with GitHub actions PR #4505 - Fix duplicate linking in tests (ODR violations) PR #4504 - Fix C++ standard handling PR #4503 - Add CMakelists file check PR #4500 - Fix .clang-format version requirement comment PR #4499 - Attempting to fix hpx_init linking on macOS PR #4498 - Fix compatibility of pool_executor PR #4496 - Removing superfluous SPDX tags PR #4494 - Module executors PR #4493 - Pack traversal module PR #4492 - Update copyright year in documentation PR #4491 - Add missing current_executor header PR #4490 - Update GitHub actions configs PR #4487 - Properly dispatch exceptions thrown from hpx_main to be rethrown from hpx::init/hpx::stop PR #4486 - Fixing an initialization order problem PR #4485 - Move miscellaneous files to their rightful modules PR #4483 - Clean up imported CMake target naming PR #4481 - Add vcpkg installation instructions PR #4479 - Add hints to allow to specify MIMALLOC_ROOT - PR #4475 - Fix rp init changes PR #4474 - Use #pragma once in headers PR #4472 - Add more descriptive error message when using x86 coroutines on non-x86 platforms PR #4467 - Add mimalloc find cmake script PR #4465 - Add thread_executors module PR #4464 - Include module PR #4462 - Merge hpx_init and hpx_wrap into one static library PR #4461 - Making thread_data test more realistic PR #4460 - Suppress MPI warnings in version.cpp PR #4459 - Make sure pkgconfig applications link with hpx_init PR #4458 - Added example demonstrating how to create and use a wrapping executor PR #4457 - Fixing execution of thread exit functions PR #4456 - Move backtrace files to debugging module PR #4455 - Move deadlock_detection and maintain_queue_wait_times source files into schedulers module PR #4450 - Fixing compilation with std::filesystem enabled PR #4449 - Fixing build system to actually build variant test PR #4447 - This fixes an obsolete #include PR #4446 - Resume tasks where they were suspended PR #4444 - Minor CUDA fixes PR #4443 - Add missing tests to CircleCI config PR #4442 - Adding a tag to all auto-generated files allowing for tools to visually distinguish those PR #4441 - Adding performance counter type information PR #4440 - Fixing MSVC build PR #4439 - Link HPX::plugin and component privately in hpx_setup_target PR #4437 - Adding a test that verifies the problem can be solved using a trait specialization PR #4434 - Clean up Boost dependencies and copy string algorithms to new module PR #4433 - Fixing compilation issues (!) if MPI parcelport is enabled PR #4431 - Ignore warnings about name mangling changing PR #4430 - Add performance_counters module PR #4428 - Don’t add compatibility headers to module API reference PR #4426 - Add currently failing tests on GitHub actions to blacklist PR #4425 - Clean up and correct minimum required versions PR #4424 - Making sure hpx.lock_detection=0 works as advertized PR #4421 - Making sure interval time stops underlying timer thread on termination PR #4417 - Adding serialization support for std::variant (if available) and std::tuple PR #4415 - Partially reverting changes applied by PR 4373 PR #4414 - Added documentation for the compiler-wrapper script hpxcxx.in in creating_hpx_projects.rst PR #4413 - Merging from V1.4.1 release PR #4412 - Making sure to issue a warning if a file specified using –hpx:options-file is not found PR #4411 - Make test specific to HPX_WITH_SHARED_PRIORITY_SCHEDULER PR #4407 - Adding minimal MPI executor PR #4405 - Fix cross pool injection test, use default scheduler as falback PR #4404 - Fix a race condition and clean-up usage of scheduler mode PR #4399 - Add more threading modules PR #4398 - Add CODEOWNERS file PR #4395 - Adding a parameter to auto_chunk_size allowing to control the amount of iterations to measure PR #4393 - Use appropriate cache-line size defaults for different platforms PR #4391 - Fixing use of allocator for C++20 PR #4390 - Making –hpx:help behavior consistent PR #4388 - Change the resource partitioner initialization PR #4387 - Fix roll_release.sh PR #4386 - Add warning messages for using thread binding options on macOS - PR #4384 - Make enabling dynamic hpx_main on non-Linux systems a configuration error PR #4383 - Use configure_file for HPXCacheVariables.cmake PR #4382 - Update spellchecking whitelist and fix more typos PR #4380 - Add a helper function to get a future from a cuda stream PR #4379 - Add Windows and macOS CI with GitHub actions PR #4378 - Change C++ standard handling PR #4377 - Remove Python scripts PR #4374 - Adding overload for hpx::init/hpx::start for use with resource partitioner PR #4373 - Adding test that verifies for 4369 to be fixed PR #4372 - Another attempt at fixing the integral mismatch and conversion warnings PR #4370 - Doc updates quick start PR #4368 - Add a whitelist of words for weird spelling suggestions PR #4366 - Suppress or fix clang-tidy-9 warnings PR #4365 - Removing more Boost dependencies PR #4363 - Update clang-format config file for version 9 PR #4362 - Fix indices typo - - PR #4358 - Doc updates; generating documentation. Will likely need heavy editing. PR #4356 - Remove some minor unused and unnecessary Boost includes PR #4355 - Fix spellcheck step in CircleCI config PR #4354 - Lightweight utility to hold a pack as members PR #4352 - Minor fixes to the C++ standard detection for MSVC PR #4351 - Move generated documentation to hpx-docs repo PR #4347 - Add cmake policy - CMP0074 PR #4346 - Remove file committed by mistake PR #4342 - Remove HCC and SYCL options from CMakeLists.txt PR #4341 - Fix launch process test with APEX enabled PR #4340 - Testing Cirrus CI PR #4339 - Post 1.4.0 updates PR #4338 - Spelling corrections and CircleCI spell check PR #4333 - Flatten bound callables PR #4332 - This is a collection of mostly minor (cleanup) fixes PR #4331 - This adds the missing tests for async_colocated and async_continue_colocated PR #4330 - Remove HPX.Compute host default_executor PR #4328 - Generate global header for basic_execution module PR #4327 - Use INTERNAL_FLAGS option for all examples and components PR #4326 - Usage of temporary allocator in assignment operator of compute::vector PR #4325 - Use hpx::threads::get_cache_line_size in prefetching.hpp PR #4324 - Enable compatibility headers option for execution module PR #4316 - Add clang format indentppdirectives PR #4313 - Introduce index_pack alias to pack of size_t PR #4312 - Fixing compatibility header for pack.hpp PR #4311 - Dataflow annotations for APEX PR #4309 - Update launching_and_configuring_hpx_applications.rst PR #4306 - Fix schedule hint not being taken from executor PR #4305 - Implementing hpx::functional::tag_invoke PR #4304 - Improve pack support utilities PR #4303 - Remove errors module dependency on datastructures PR #4301 - Clean up thread executors PR #4294 - Logging revamp PR #4292 - Remove SPDX tag from Boost License file to allow for github to recognize it PR #4291 - Add format support for std::tm PR #4290 - Simplify compatible tuples check PR #4288 - A lightweight take on boost::lexical_cast PR #4287 - Forking boost::lexical_cast as a new module - PR #4270 - Refactor future implementation PR #4265 - Threading module PR #4259 - Module naming base PR #4251 - Local workrequesting scheduler PR #4250 - Inline execution of scoped tasks, if possible PR #4247 - Add execution in module headers PR #4246 - Expose CMake targets officially PR #4239 - Doc updates miscellaneous (partially completed during Google Season of Docs) PR #4233 - Remove project() from modules + fix CMAKE_SOURCE_DIR issue PR #4231 - Module local lcos PR #4207 - Command line handling module PR #4206 - Runtime configuration module PR #4141 - Doc updates examples local to remote (partially completed during Google Season of Docs) PR #4091 - Split runtime into local and distributed parts -
https://hpx-docs.stellar-group.org/branches/master/html/releases/whats_new_1_5_0.html
CC-MAIN-2021-49
en
refinedweb
After folowing patches, the recipe doesn't work anymore. - build: build everything from the root dir, use obj=$subdir - build: introduce if_changed_deps First patch mean that $(names) already have $(path), and the second one, the .*.d files are replaced by .*.cmd files which are much simpler to parse here. Also replace the makefile programming by a much simpler shell command. This doesn't check anymore if the source file exist, but that can be fixed by running `make clean`, and probably doesn't impact the calculation. `cloc` just complain that some files don't exist. Signed-off-by: Anthony PERARD <anthony.perard@xxxxxxxxxx> --- xen/Makefile | 9 +-------- 1 file changed, 1 insertion(+), 8 deletions(-) diff --git a/xen/Makefile b/xen/Makefile index 36a64118007b..b09584e33f9c 100644 --- a/xen/Makefile +++ b/xen/Makefile @@ -490,14 +490,7 @@ _MAP: .PHONY: cloc cloc: - $(eval tmpfile := $(shell mktemp)) - $(foreach f, $(shell find $(BASEDIR) -name *.o.d), \ - $(eval path := $(dir $(f))) \ - $(eval names := $(shell grep -o "[a-zA-Z0-9_/-]*\.[cS]" $(f))) \ - $(foreach sf, $(names), \ - $(shell if test -f $(path)/$(sf) ; then echo $(path)/$(sf) >> $(tmpfile); fi;))) - cloc --list-file=$(tmpfile) - rm $(tmpfile) + find . -name '*.o.cmd' -exec awk '/^source_/{print $$3;}' {} + | cloc --list-file=- endif #config-build --.
https://lists.xenproject.org/archives/html/xen-devel/2021-08/msg01035.html
CC-MAIN-2021-49
en
refinedweb
Introduction: Shimmering Chameleon (smart)Skirt ~ I love to sew and I'm on the LED bandwagon, oh, and it's fashion show season. This would be a unique Prom Outfit, for! Step 1: The Code #include <Wire.h> #include <Adafruit_TCS34725.h> #include <Adafruit_LSM303) #define NUM_PIXELS 4 Adafruit_NeoPixel strip = Adafruit_NeoPixel(NUM_PIXELS, 6, NEO_GRB + NEO_KHZ800); Adafruit_TCS34725 color_sensor = Adafruit_TCS34725(TCS34725_INTEGRATIONTIME_50MS, TCS34725_GAIN_4X); Adafruit_LSM303 accel; #define STILL_LIGHT // define if light is to be on when no movement. // Otherwise dark // our RGB -> eye-recognized gamma color byte gammatable[256]; int g_red, g_green, g_blue; // global colors read by color sensor int j; // mess with this number to adjust TWINklitude :) // lower number = more sensitive #define MOVE_THRESHOLD 45 #define FADE_RATE 5 int led = 7; double newVector; void flash(int times) { for (int i = 0; i < times; i++) { digitalWrite(led, HIGH); // turn the LED on (HIGH is the voltage level) delay(150); // wait for a second digitalWrite(led, LOW); // turn the LED off by making the voltage LOW delay(150); } } float r, g, b; double storedVector; void setup() { pinMode(led, OUTPUT); // Try to initialise and warn if we couldn't detect the chip if (!accel.begin()) { Serial.println("Oops ... unable to initialize the LSM303. Check your wiring!"); while (1) { flash(4); delay(1000); }; } strip.begin(); strip.show(); // Initialize all pixels to 'off' if (!color_sensor.begin()) { Serial.println("No TCS34725 found ... check your connections"); while (1) { flash(3); delay(1000); }; } // thanks PhilB for this gamma table! // it helps convert RGB colors to what humans see for (int i = 0; i < 256; i++) { float x = i; x /= 255; x = pow(x, 2.5); x *= 255; gammatable[i] = x; } //this sequence flashes the first pixel three times // as a countdown to the color reading. for (int i = 0; i < 3; i++) { //white, but dimmer-- 255 for all three values makes it blinding! strip.setPixelColor(0, strip.Color(188, 188, 188)); strip.show(); delay(1000); strip.setPixelColor(0, strip.Color(0, 0, 0)); strip.show(); delay(500); } uint16_t clear, red, green, blue; color_sensor.setInterrupt(false); // turn on LED delay(60); // takes 50ms to read color_sensor.getRawData(&red, &green, &blue, &clear); color_sensor.setInterrupt(true); // turn off LED // Figure out some basic hex code for visualization uint32_t sum = red; sum += green; sum += blue; sum = clear; r = red; r /= sum; g = green; g /= sum; b = blue; b /= sum; r *= 256; g *= 256; b *= 256; g_red = gammatable[(int)r]; g_green = gammatable[(int)g]; g_blue = gammatable[(int)b]; // Get the magnitude (length) of the 3 axis vector // accel.read(); storedVector = accel.accelData.x*accel.accelData.x; storedVector += accel.accelData.y*accel.accelData.y; storedVector += accel.accelData.z*accel.accelData.z; storedVector = sqrt(storedVector); } void loop() { // get new data accel.read(); double newVector = accel.accelData.x*accel.accelData.x; newVector += accel.accelData.y*accel.accelData.y; newVector += accel.accelData.z*accel.accelData.z; newVector = sqrt(newVector); // are we moving if (abs(newVector - storedVector) > MOVE_THRESHOLD) { colorWipe(strip.Color(0, 0, 0), 0); flashRandom(10, 25); // first number is 'wait' delay, // shorter num == shorter twinkle // second number is how many neopixels to // simultaneously light up } #ifdef STILL_LIGHT else { colorWipe(strip.Color(gammatable[(int)r], gammatable[(int)g], gammatable[(int)b]), 0); storedVector = newVector; } #endif } void flashRandom(int wait, uint8_t howmany) { for (uint16_t i = 0; i < howmany; i++) { for (int simul_pixels = 0; simul_pixels < 8; simul_pixels++) { // get a random pixel from the list j = random(strip.numPixels()); strip.setPixelColor(j, strip.Color(g_red, g_green, g_blue)); } strip.show(); delay(wait); colorWipe(strip.Color(0, 0, 0), 0); // now we will 'fade' it in FADE_RATE steps for (int x = 0; x < FADE_RATE; x++) { int r = g_red * (x + 1); r /= FADE_RATE; int g = g_green * (x + 1); g /= FADE_RATE; int b = g_blue * (x + 1); b /= FADE_RATE; strip.setPixelColor(j, strip.Color(r, g, b)); strip.show(); delay(wait); } // & fade out for (int x = FADE_RATE; x >= 0; x--) { int r = g_red * x; r /= FADE_RATE; int g = g_green * x; g /= FADE_RATE; int b = g_blue * x; b /= FADE_RATE; strip.setPixelColor(j, strip.Color(r, g, b)); strip.show(); delay(wait); } } #ifdef STILL_LIGHT colorWipe(strip.Color(gammatable[(int)r], gammatable[(int)g], gammatable[(int)b]), 0); #endif } // Fill the dots one after the other with a color void colorWipe(uint32_t c, uint8_t wait) { for (uint16_t i = 0; i < strip.numPixels(); i++) { strip.setPixelColor(i, c); strip.show(); delay(wait); } } Step 2: The Test First I needed to conduct some tests for what I wanted to accomplish, using the materials I had. I had to test the resistance of the conductive thread for the distances I was going. I made a mock-up on the sewing machine, creating 3 rows of conductive thread, doubled up. Stainless steel is very strong, but it has a lot of resistance and can shift and become loose around pin-outs. I wanted to see the max distance the info could travel along this set-up. I set everything up with alligator clips to test the thread and make sure the code still worked as it was supposed to. Step 3: Supplies and Fashioning First I made a skirt out of a stretchy spandex fabric that was futuristic silver, and just used the wrong side. I had some leftover gold stretchy silk from when I hemmed a couture wedding dress and got to keep the cutoff. This allows me to have the skirt fit a range of sizes, BUT also means that I have to do a lot of hand-sewing. I chose to put the Party of the skirt at the bottom, which has a large enough circumference to be unaffected by the stretch limitations. Most of the electronics are from Adafruit. The color-tip glove I made from a pair of beautiful gloves from the 40's. I love mixing old with new! I marked with disappearing ink where I wanted all the neopixels to go. I then sewed them on with a small amount of regular and conductive thread. The real connections would come later. Step 4: The Brain This combo is the brain of the system. It is a Flora, attached to the Color Sensor attached to the Accelerometer/compass. I sewed everything with the 2-ply conductive thread, making solid connections with plenty of passes and spacing lines sufficiently apart. For the pin-out connections of data, power and ground that would be traveling down the length of the skirt, I used beefier, insulated wire. You could just use the 3-ply conductive thread, but you'd need to have at least 6 wires bundled together for each of the three lines. I then sewed the patch onto the skirt, feeding the battery wires though a small opening and into a pocket I created for the LiPo. The battery is 1200mAh, 3.7V. Step 5: My Work Is Cut Out for Me The first pic is the first neopixel of the 8 neopixel train. The rest would all be connected with 3-ply stainless steel conductive thread, strung with semi-precious stones and beads. These would all add a nice weight and rigidity, plus look more intentional than just dotted lines of wire connecting the neopixels. The trick was finding a needle eye big enough to accept the steel thread but small enough to fit though the stones/beads. There were a lot of beads that didn't make the cut. : D Step 6: The Underneath I wanted to cover all the exposed thread so as to minimize any chance of shorts. I used iron-on interfacing, clipping to allow curves, and ironed on with a single layer of cheesecloth between the skirt and iron. The initial wires I just wove in between the serged seam. Step 7: In Action! I have entered this into the Code Creations Contest and would love a vote if you think I deserve one! : D ~ Cynthia Runner Up in the Coded Creations Be the First to Share Recommendations 4 Comments 6 years ago AWESOME Project!! I made the chameleon scarf a few months ago, following Adafruit's tutorial. I love your adaptation, especially the sparkles. Cool idea! Reply 6 years ago Thank you so much! I want to improve my felting skills and inspirations and your instructables are so nice especially in that regard. Keep them coming! 6 years ago on Introduction Neat-o! Reply 6 years ago on Introduction Thanks! I love your Headboard Lighting 'ible.
https://www.instructables.com/Shimmering-Chameleon-smartSkirt-/
CC-MAIN-2021-49
en
refinedweb
It’s common in iOS apps to use a Tab View. The one with a few choices at the bottom, and you can completely switch what’s in the screen by tapping the icon / label. SwiftUI conveniently provides us a view called TabView, which makes it easy to implement such a UI pattern. Here’s the simplest possible example of a TabView: import SwiftUI struct ContentView: View { var body: some View { TabView { Text("First") .tabItem { Label("First", systemImage: "tray") } Text("Second") .tabItem { Label("Second", systemImage: "calendar") } } } } And here’s the result: See? We have a TabView view, and inside it, we have 2 views. Both are Text views to make it simple. Their tabItem modifier will add them to the TabView with a label provided as a Label view. Of course you will want to use a custom view instead of Text in most cases. Check out my Web Development Bootcamp. Next cohort is in April 2022, join the waiting list!
https://flaviocopes.com/swiftui-tabview/
CC-MAIN-2021-49
en
refinedweb
Handling of international domain names. More... #include "config.h" #include <stdint.h> #include <stdio.h> #include "mutt/lib.h" #include "config/lib.h" #include "core/lib.h" #include "idna2.h" Go to the source code of this file. Handling of international domain names. idna.c. Convert an email's domain from Punycode. If $idn_decode is set, then the domain will be converted from Punycode. For example, "xn--ls8h.la" becomes the emoji domain: ":poop:.la" Then the user and domain are changed from 'utf-8' to the encoding in $charset. If the flag MI_MAY_BE_IRREVERSIBLE is NOT given, then the results will be checked to make sure that the transformation is "undo-able". Definition at line 144 of file idna.c. Convert an email's domain to Punycode. The user and domain are assumed to be encoded according to $charset. They are converted to 'utf-8'. If $idn_encode is set, then the domain will be converted to Punycode. For example, the emoji domain: ":poop:.la" becomes "xn--ls8h.la" Definition at line 264 of file idna.c. Create an IDN version string. Definition at line 314 of file idna.c.
https://neomutt.org/code/idna_8c.html
CC-MAIN-2021-49
en
refinedweb
We are given a string str. The goal is to count all substrings of str that are special palindromes and have length greater than 1. The special palindromes are strings that have either all the same characters or only the middle character as different. For example if string is “baabaa” then special palindromes that are substring of original are “aa”, “aabaa”, “aba”, “aa” Let us understand with examples. Input − str=”abccdcdf” Output − Count of special palindromes in a String is − 3 Explanation − Substrings that are special palindromes are − “cc”, “cdc”, “dcd” Input − str=”baabaab” Output − Count of special palindromes in a String is − 4 Explanation − Substrings that are special palindromes are − “aa”, “aabaa”, “aba”, “aa” Create a string of alphabets and calculate its length. Pass the data to the function for further processing. Declare a temporary variables count and i and set them to 0 Create an array of size of a string and initialize it with 0. Start WHILE till i less than the length of a string Inside the while, set a variable as total to 1 and a variable j to i + 1 Start another WHILE till str[i] = str[j] AND j less than length of a string Inside the while, increment the total by 1 and j by 1 Set the count as count + ( total * (total + 1) / 2, arr[i] to total and i to j Start loop FOR from j to 1 till the length of a string Check if str[j] = str[j-1] then set arr[j] to arr[j-1] Set a variable temp to str[j-1] and check if j > 0 AND j < one less the length of a string AND temp = str[j+1] AND sr[j]!=temp then set count as count + min(arr[j-1], arr[j+1]) Set the count a count - length of a string Return the count Print the result. #include <bits/stdc++.h> using namespace std; int count_palindromes(string str, int len){ int count = 0, i = 0; int arr[len] = { 0 }; while (i < len){ int total = 1; int j = i + 1; while (str[i] == str[j] && j < len){ total++; j++; } count += (total * (total + 1) / 2); arr[i] = total; i = j; } for (int j = 1; j < len; j++){ if (str[j] == str[j - 1]){ arr[j] = arr[j - 1]; } int temp = str[j - 1]; if (j > 0 && j < (len - 1) && (temp == str[j + 1] && str[j] != temp)){ count += min(arr[j-1], arr[j+1]); } } count = count - len; return count; } int main(){ string str = "bcbaba"; int len = str.length(); cout<<"Count of special palindromes in a String are: "<< count_palindromes(str, len); return 0; } If we run the above code it will generate the following output − Count of special palindromes in a String are: 3
https://www.tutorialspoint.com/count-special-palindromes-in-a-string-in-cplusplus
CC-MAIN-2021-49
en
refinedweb
poseLib PoseLib is based on a donation system. It means that if poseLib is useful to you or your studio then you can make a donation to reflect your satisfaction. Thanks for using poseLib! 😀 Number of times downloaded: ~ 23,000 Number of donations: 13 It is strongly advised that you backup your existing poseLib directory before you use the new version! COMPATIBILITY: PoseLib supports Maya 2011, 2012, 2013, 2014, 2015 and 2016. DOWNLOAD: Last updated 8 November 2015 Click on the icon to download poseLib version 6.6.0: - Fixed support for Macs (thanks Ludo!) - Fixed problem with not being able to create a new project - Fixed bug with switching projects - Fixed choosing text editor for Macs and PCs (See the complete history at the bottom of this article) INSTALLATION: For Windows: 1) Copy poseLib.mel and poseLibModule.py to your …\Documents\maya\20xx\scripts folder. 2) Copy poseLib.png to your …\Documents\maya\20xx\prefs\icons folder . 3) Restart Maya if it was open. 4) Type: source poseLib.mel; poseLib; For OSX: 1) Copy poseLib.mel and poseLibModule.py to /Users/<yourname>/Library/Preferences/Autodesk/maya/20xx/scripts. 2) Copy poseLib.png to /Users/<yourname>/Library/Preferences/Autodesk/maya/20xx/prefs/icons. 3) Restart Maya if it was open and source poseLib.mel. Finally call the command poseLib. FEATURES:. DIRECTORY STRUCTURE: PoseLib stores poses in a “category” directory, which is itself stored in a “character” directory, which is itself stored in a “casting” directory, which itself resides in a “archetype” directory. Sounds complicated, so here’s a diagram of the way things are organized: The Archetype (or “Type”) directory: This is where the different types of assets are separated. Usually you find “chars” for characters, “sets” for sets, “props” for props, “cams” for cameras, etc… The Casting (or “Cast”) directory: This is where you separate the main actors (“main”, “primary”, “hero”, etc…) from the rest (“crowd, “secondary”, etc…). The Character directory: This is where you find the names of the characters, or the sets, or the props, depending on which branch you’re in at the archetype level. The Category directory: This is where you find the poses themselves. Categories could be “face”, “body”, etc… A valid question would be “Why do we need archetype or casting folders?” Because poseLib is a tool used in production on movies such as “Despicable Me” and “The Lorax”, and we have hundreds of characters, many sets, props, etc… And it would quickly become tedious for artists to have to scroll through huge messy lists of names. Separating things by type and importance allows us to keep things clean and readily accessible. The poses themselves are .xml files and the icons are .bmp files. So a pose displayed as “toto” in the UI is made up of two files: toto.xml (which stores all the controls and attributes settings), and toto.bmp (the icon captured when the pose was created). LIBRARY STATUS: The library status is either “Private” or “Public”. The “private” path should point to your private library, where you store your poses and organize things the way you like. The “public” path should lead to the poses that are available to other animators. Again, this is most useful if you’re in a studio structure and you need to share poses while keeping things separated between you own playing ground and the common library. If you don’t need that, then the private path is the only one you’ll ever care about. WORKFLOW: Creating a new pose: - Select the controls for which you want to record a pose. - Click on the “Create New Pose” button. - Move the camera in the little preview camera frame to define the way the icon will look like. - Click “Create Pose”. Once the pose is created, it will appear automatically in the list of poses available (they’re sorted in alphabetical order). Note: You can move poses around by middle-mouse clicking them and drag-and-dropping them where you want. Applying a pose: Just click on a pose icon. It works differently depending on what you’ve selected: - If you don’t have anything selected, poseLib will attempt to apply the entire pose. or - If you’ve selected some controls the pose will just be applied to those. (You. You define the amount of the pose being applied with the “ALT/CTRL Pose” slider. Note: Remember you can also apply a pose only to the selected channels in the channelBox! Editing a pose: Right-click on the pose icon; A menu will appear, letting you: Rename, Move, Replace, Delete, or Edit the pose. Replacing the pose simply means that you don’t have to go through the process of re-capturing a new icon. The edit sub-menu will let you: Select the pose’s controls (if you don’t remember what was part of the pose), Add/Replace the selected controls (they’ll be added if they weren’t part of the pose, or replaced if they are), or Remove the selected controls. The “Ouput Pose Info” will print out information (pose author, when the pose was created, modified, etc…) about the pose in the script editor. NAMESPACES: When using a referenced rig with a namespace, you have three choices: 1) Use Selection Namespace: This means that when you click on a pose with some controls selected, poseLib will apply the pose if those controls were parts of the pose, regardless of the namespace stored in the pose. This lets you apply a pose recorded with a certain namespace to the namespace of your selection. For example, if the pose only contains a control named “Tintin:head_joint” and your current selection is “Gandalf:head_joint”, the pose will be applied. Basically this lets you apply a pose from a character to another character. 2) Use Pose Namespace: This means that poseLib will only apply the pose if the pose’s controls and namespaces are present in your selection (or in the scene if you don’t have anything selected). Again, if the pose only contains a control named “Tintin:head_joint” and your current selection is “Gandalf:head_joint”, the pose will NOT be applied. This is so you can record a single pose containing multiple characters and still only apply the pose to the one selected character. 3) Use Custom Namespace: This means that the pose will only be applied to the controls whose namespace matches the one defined in the text field. Note: The afore-mentioned namespace options play no role when saving poses: the namespace options are only relevant when applying poses. OPTIONS: Archetypes/Casting: Now if you want to create a new entry for a character name or a category, just click on the “Edit Options” button. Display:. Paths:. Text Editor: This is where you choose the text editor to be launched when manually editing a pose. TROUBLESHOOTING: The icons for my poses come up as red squares: Check the Images path of your current project (in the Project Manager). It should just say “images” or something similar. I keep getting the “# Error: NameError: name ‘poseLibModule’ is not defined” error: That’s because you have to source poseLib before you launch it. Please follow carefully step 4 of the installation instructions. There are a bunch of similarly named directories in similar places; make sure you didn’t mistakenly copy the files to the wrong ones. I am sure I copied the files to the right folders, but I still get the “No module named” error: Then try to edit the “Maya.env” file in your “…\maya\20xx” directory and add the following line: PYTHONPATH = C:\[…]\maya\20xx\scripts; … Where you need of course to indicate the correct path (where you copied the files), as well as the correct Maya version. Note: Be aware that you could have several Maya.env files in different directories (eg: in “…/Documents/maya” or “…/Documents/maya/20xx”. But Maya will only look at ONE of them (the first one it finds). So make sure it’s the right one! seith[at]seithcg[dot]com History: v6.5.0: - Support for Maya 2014 and up! - Reorder icons! - Colored icons! - Too many changes to list here! v6.2.3: -. - Fixed right-click menu not displaying properly in Maya 2013. - Now only shows poses whose file actually exists (no more empty red icons). v6.1.7: - Fixed a bug with the Options window not opening the very first time it’s called. - Fixed a bug where old poses conversion would fail due to CRLF symbols. - Fixed a bug with old poses conversion ignoring the last character of a pose file. - The projects menu in the Path options tab now accurately reflects the current poseLib project. - Removed useless warnings when a character or category is not found. - Fixed a bug with the Public path not being properly updated. - PoseLib now handles cases when switching to a project without an existing proper directory structure. - Fixed a bug when switching between Private and Public library status. - Fixed a bug with setting a project to a networked path. - Fixed a bug wen selecting a pose’s controls while using a custom namespace. - Conversion of old poses does not truncate the first word before a “_” character in the pose name anymore. - Fixed a bug when creating or applying a pose with controls devoid of keyable attributes. v6.0.8: - Fixed a nasty bug that could crash Maya when deleting a pose. v6.0.7: - Fixed a bug with blendshapes when saving and applying poses. - Fixed erroneous user warning reporting success when the pose was not applied. v6.0.1: - Added support for Macs (OSX)._4<< I still use an older version of Maya and as a result the script doesn’t support Maya 2017 (yet), sorry. I agree a mirror tool is indispensable but it is not really the purpose of poseLib: rigs are very different between studios/productions and it would be impossible to try and guess all the varying rigging configurations. Usually in a studio the rigging department provides tools for things like mirroring as it is very much linked to choices made during the process of building the characters. Sorry,I cant download it from the link.It tells me that the file isn’t there anymore. Hi, I just fixed the link. Sorry about that!
https://seithcg.com/wordpress/?page_id=19
CC-MAIN-2022-40
en
refinedweb
="stylesheet" href="" /> <script src=""></script> <script src=""></script> 都道府県名を入力すると候補が表示されます(前方一致)<br> <input id="keyword"> $(function () { var words = [ { label: "北海道", kana: "ほっかいどう" }, { label: "青森県", kana: "あおもりけん" }, { label: "岩手県", kana: "いわてけん" }, { label: "宮城県", kana: "みやぎけん" }, { label: "秋田県", kana: "あきたけん" }, { label: "山形県", kana: "やまがたけん" }, { label: "福島県", kana: "ふくしまけん" }, { label: "茨城県", kana: "いばらきけん" }, { label: "栃木県", kana: "とちぎけん" }, { label: "群馬県", kana: "ぐんまけん" }, { label: "埼玉県", kana: "さいたまけん" }, { label: "千葉県", kana: "ちばけん" }, { label: "東京都", kana: "とうきょうと" }, { label: "神奈川県", kana: "かながわけん" }, { label: "新潟県", kana: "にいがたけん" }, { label: "富山県", kana: "とやまけん" }, { label: "石川県", kana: "いしかわけん" }, { label: "福井県", kana: "ふくいけん" }, { label: "山梨県", kana: "やまなしけん" }, { label: "長野県", kana: "ながのけん" }, { label: "岐阜県", kana: "ぎふけん" }, { label: "静岡県", kana: "しずおかけん" }, { label: "愛知県", kana: "あいちけん" }, { label: "三重県", kana: "みえけん" }, { label: "滋賀県", kana: "しがけん" }, { label: "京都府", kana: "きょうとふ" }, { label: "大阪府", kana: "おおさかふ" }, { label: "兵庫県", kana: "ひょうごけん" }, { label: "奈良県", kana: "ならけん" }, { label: "和歌山県", kana: "わかやまけん" }, { label: "鳥取県", kana: "とっとりけん" }, { label: "島根県", kana: "しまねけん" }, { label: "岡山県", kana: "おかやまけん" }, { label: "広島県", kana: "ひろしまけん" }, { label: "山口県", kana: "やまぐちけん" }, { label: "徳島県", kana: "とくしまけん" }, { label: "香川県", kana: "かがわけん" }, { label: "愛媛県", kana: "えひめけん" }, { label: "高知県", kana: "こうちけん" }, { label: "福岡県", kana: "ふくおかけん" }, { label: "佐賀県", kana: "さがけん" }, { label: "長崎県", kana: "ながさきけん" }, { label: "熊本県", kana: "くまもとけん" }, { label: "大分県", kana: "おおいたけん" }, { label: "宮崎県", kana: "みやざきけん" }, { label: "鹿児島県", kana: "かごしまけん" }, { label: "沖縄県", kana: "おきなわけん" } ]; $("#keyword").autocomplete({ source: function (request, response) { var list = []; list = words.filter(function (word) { return ( word.label.indexOf(request.term) === 0 || word.kana.indexOf(request.term) === 0 ); }); response(list); } }); }); Also see: Tab Triggers
https://codepen.io/masyu/pen/poEqBYJ
CC-MAIN-2022-40
en
refinedweb
QmlAttached# This decorator declares that the enclosing type attaches the type passed as an attached property to other types. This takes effect if the type is exposed to QML using a QmlElement() or @QmlNamedElement() decorator. QML_IMPORT_NAME = "com.library.name" QML_IMPORT_MAJOR_VERSION = 1 QML_IMPORT_MINOR_VERSION = 0 # Optional @QmlAnonymous class LayoutAttached(QObject): @Property(QMargins) def margins(self): ... @QmlElement() @QmlAttached(LayoutAttached) class Layout(QObject): ... Afterwards the class may be used in QML: import com.library.name 1.0 Layout { Widget { Layout.margins: [2, 2, 2, 2] } }
https://doc-snapshots.qt.io/qtforpython-dev/PySide6/QtQml/QmlAttached.html
CC-MAIN-2022-40
en
refinedweb
back the iotedged pod using persistent volumes. iotedged contains certificates and other security state which must be persisted on durable storage in order for the edge deployment to be remain functional should the iotedged pod be restarted and/or relocated to another node. This tutorial requires a Azure Kubernetes (AKS) cluster with Helm initialized and kubectl installed as noted in the prerequisites. A persistent volume backed by remote or replicated storage to provide resilience to node failure in a multi-node cluster setup. This example uses azurefilebut you can use any persistent volume provider. Local storage backed persistent volumes provide resilience to pod failure if the new pod happens to land on the same node but does not help in cases where the pod migrates nodes. See the prerequisites section for more details. Setup steps As needed, follow the steps to register an IoT Edge device. Take note of the device connection string. Create an AKS cluster and connect to it. Create a Kubernetes namespace for your IoT Edge deployment kubectl create ns pv-iotedged Create an Azure Files storage class. Create a persistent volume claim: apiVersion: v1 kind: PersistentVolumeClaim metadata: name: iotedged-data-azurefile namespace: pv-iotedged spec: accessModes: - ReadWriteMany storageClassName: azurefile resources: requests: storage: 100Mi Specify persistent volume claim name to use for storing iotedgeddata during install. # Install IoT Edge CRD, if not already installed helm install --repo edge-crd edge-kubernetes-crd # Store the device connection string in a variable (enclose in single quotes) export connStr='replace-with-device-connection-string-from-step-1' # Install helm install --repo pv-iotedged-example edge-kubernetes \ --namespace pv-iotedged \ --set "iotedged.data.persistentVolumeClaim.name=iotedged-data-azurefile" \ --set "provisioning.deviceConnectionString=$connStr" In addition to iotedged, the edgeHubmodule's message store should also be a backed by a persistent volume to prevent data loss when deployed in a Kubernetes environment. See this tutorial for the steps on how to do this. Cleanup # Cleanup helm del pv-iotedged-example -n pv-iotedged && \ kubectl delete ns pv-iotedged ...will remove all the Kubernetes resources deployed as part of the edge deployment in this tutorial (IoT Edge CRD will not be deleted).
https://microsoft.github.io/iotedge-k8s-doc/examples/ha.html
CC-MAIN-2022-40
en
refinedweb
On Mon, Sep 10, 2001 at 01:51:22PM -0400, Christopher Faylor wrote: >On Mon, Sep 10, 2001 at 02:48:16PM +1000, Danny Smith wrote: >>The last change to the anonymous struct in LARGE_INTEGER in winnt.h, >>doesn't make sense to me. > >I must be missing something. > >Reading windows.h, I'm having a hard time seeing how ANONYMOUS_STRUCT could >ever be undefined. (Sorry. Please ignore the above two sentences. I typed them, then decided that I'd better do some more research and forgot to delete them after I refreshed my memory on why I made the change.) >With my my cross gcc compiler, which is based on gcc 3.0.1, this code does >not work: > >#define _ANONYMOUS_STRUCT >#if _ANONYMOUS_STRUCT || defined(foo) >"foo"=1; >#endif > >% i686-pc-cygwin-gcc tst.c -c >tst.c:2:23: operator 'EOL' has no left operand > >With gcc 2.95.3, I get this (as expected): > >% /cygwin/bin/i686-pc-cygwin-gcc tst.c -c >tst.c:2: parse error > >If I change the file to this: > >define _ANONYMOUS_STRUCT __extension__ >#if _ANONYMOUS_STRUCT || defined(foo) >"foo"=1; >#endif > >Then I get this for both 3.0.1 and 2.95.3: > >% i686-pc-cygwin-gcc -c /tmp/tst.c >% > >In other words, the compiler ignores line three of the file, which is >not the expected behavior. > >>Sat Sep 1 10:40:37 2001 Christopher Faylor <[email protected]> >> >> * include/winnt.h: Use defined(_ANONYMOUS_STRUCT) to determine if >> anonymous structs are available rather than just testing preprocessor >> variable directly. >> >> >> >> _ANONYMOUS_STRUCT is always defined in windows.h, so the >>#if defined(_ANONYMOUS_STRUCT) conditional doesn't do anything. >>If you compile this >> >>#define NONMAMELESSUNION >>#include <windows.h> >> >>with current CVS winnt.h, the _[U]LARGE_INTEGER structs throw pedantic >>warnings. >> >>If you don't like the #if _ANONYMOUS_STRUCT syntax (which doesn't >>cause any problems for me with 3.0.1 or with 2.95.3, as long as I >>include windows.h first), here is a macro guard that actually does >>something. > >I don't like it for the above reasons. > >I'm not wild about using something called NONAMELESSUNION to control >whether a nameless *structure* is defined but I guess it's ok. > >cgf > >>I've also picked up another nameless union that wasn't protected. >> >>Now, if we are really serious about pedantic warnings,we need to >>protect against all the non-ANSI bit-fields in w32api structs. >> >>Danny >> >>ChangeLog >> >>2001-09-10 Danny Smith <[email protected]> >> * include/winnt.h (_[U]LARGE_INTEGER): Protect nameless struct with >> !defined(NONAMELESSUNION), rather than defined(_ANONYMOUS_STRUCT). >> (_REPARSE_DATA_BUFFER): Name union field DUMMYUNIONNAME. >> >>--- winnt.h.orig Mon Sep 10 15:55:31 2001 >>+++ winnt.h Mon Sep 10 16:06:55 2001 >>@@ -1705,7 +1705,7 @@ typedef union _LARGE_INTEGER { >> DWORD LowPart; >> LONG HighPart; >> } u; >>-#if defined(_ANONYMOUS_STRUCT) || defined(__cplusplus) >>+#if ! defined(NONAMELESSUNION) || defined(__cplusplus) >> struct { >> DWORD LowPart; >> LONG HighPart; >>@@ -1718,7 +1718,7 @@ typedef union _ULARGE_INTEGER { >> DWORD LowPart; >> DWORD HighPart; >> } u; >>-#if defined(_ANONYMOUS_STRUCT) || defined(__cplusplus) >>+#if ! defined(NONAMELESSUNION) || defined(__cplusplus) >> struct { >> DWORD LowPart; >> DWORD HighPart; >>@@ -2502,7 +2502,7 @@ typedef struct _REPARSE_DATA_BUFFER { >> struct { >> BYTE DataBuffer[1]; >> } GenericReparseBuffer; >>- }; >>+ } DUMMYUNIONNAME; >> } REPARSE_DATA_BUFFER, *PREPARSE_DATA_BUFFER; >> typedef struct _REPARSE_GUID_DATA_BUFFER { >> DWORD ReparseTag; >> >>_____________________________________________________________________________ >> - Yahoo! Messenger >>- Voice chat, mail alerts, stock quotes and favourite news and lots more! > >-- >[email protected] Red Hat, Inc. > -- [email protected] Red Hat, Inc.
https://cygwin.com/pipermail/cygwin-patches/2001q3/001119.html
CC-MAIN-2022-40
en
refinedweb
Create a function on Linux using a custom container In this tutorial, you create and deploy your code to Azure Functions as a custom Docker container using a Linux base image. You typically use a custom image when your functions require a specific language version or have a specific dependency or configuration that isn't provided by the built-in image. Azure Functions supports any language or runtime using custom handlers. For some languages, such as the R programming language used in this tutorial, you need to install the runtime or more libraries as dependencies that require the use of a custom container. Deploying your function code in a custom Linux container requires Premium plan or a Dedicated (App Service) plan hosting. Completing this tutorial incurs costs of a few US dollars in your Azure account, which you can minimize by cleaning-up resources when you're done. You can also use a default Azure App Service container as described in Create your first function hosted on Linux. Supported base images for Azure Functions are found in the Azure Functions base images repo. In this tutorial, you learn how to: -. - Add a Queue storage output binding. -. You can follow this tutorial on any computer running Windows, macOS, or Linux. Configure your local environment Before you begin, you must have the following requirements in place: - Azure Functions Core Tools version 4.x. One of the following tools for creating Azure resources: Azure CLI version 2.4 or later. The Azure Az PowerShell module version 5.9.0 or later. Azure Functions Core Tools. Azure CLI version 2.4 or later. - Python 3.8 (64-bit), Python 3.7 (64-bit), Python 3.6 (64-bit), which are supported by Azure Functions. The Java Developer Kit version 8 or 11. Apache Maven version 3.0 or above. - Development tools for the language you're using. This tutorial uses the R programming language as an example. If you don't have an Azure subscription, create an Azure free account before you begin. You also need to get a Docker and Docker ID: Create and activate a virtual environment In a suitable folder, run the following commands to create and activate a virtual environment named .venv. Ensure that you use Python 3.8, 3.7 or 3.6, which are supported by Azure Functions. python -m venv .venv source .venv/bin/activate If Python didn't install the venv package on your Linux distribution, run the following command: sudo apt-get install python3-venv You run all subsequent commands in this activated virtual environment. Create and test the local functions project In a terminal or command prompt, run the following command for your chosen language to create a function app project in the current folder: func init --worker-runtime dotnet --docker func init --worker-runtime node --language javascript --docker func init --worker-runtime powershell --docker func init --worker-runtime python --docker func init --worker-runtime node --language typescript --docker In an empty folder, run the following command to generate the Functions project from a Maven archetype: mvn archetype:generate -DarchetypeGroupId=com.microsoft.azure -DarchetypeArtifactId=azure-functions-archetype -DjavaVersion=8 -Ddocker The -DjavaVersion parameter tells the Functions runtime which version of Java to use. Use -DjavaVersion=11 if you want your functions to run on Java 11. When you don't specify -DjavaVersion, Maven defaults to Java 8. For more information, see Java versions. Important The JAVA_HOME environment variable must be set to the install location of the correct version of the JDK to complete this article. Maven asks you for values needed to finish generating the project on deployment. Follow the prompts and provide the following information: Type Y or press Enter to confirm. Maven creates the project files in a new folder named artifactId, which in this example is fabrikam-functions. func init --worker-runtime custom --docker The --docker option generates a Dockerfile for the project, which defines a suitable custom container for use with Azure Functions and the selected runtime. Navigate into the project folder: cd fabrikam-functions No changes are needed to the Dockerfile. Use the following command to add a function to your project, where the --name argument is the unique name of your function and the --template argument specifies the function's trigger. func new creates a C# code file in your project. func new --name HttpExample --template "HTTP trigger" --authlevel anonymous Use the following command to add a function to your project, where the --name argument is the unique name of your function and the --template argument specifies the function's trigger. func new creates a subfolder matching the function name that contains a configuration file named function.json. func new --name HttpExample --template "HTTP trigger" --authlevel anonymous In a text editor, create a file in the project folder named handler.R. Add the following code as its content: library(httpuv) PORTEnv <- Sys.getenv("FUNCTIONS_CUSTOMHANDLER_PORT") PORT <- strtoi(PORTEnv , base = 0L) http_not_found <- list( status=404, body='404 Not Found' ) http_method_not_allowed <- list( status=405, body='405 Method Not Allowed' ) hello_handler <- list( GET = function (request) { list(body=paste( "Hello,", if(substr(request$QUERY_STRING,1,6)=="?name=") substr(request$QUERY_STRING,7,40) else "World", sep=" ")) } ) routes <- list( '/api/HttpExample' = hello_handler ) router <- function (routes, request) { if (!request$PATH_INFO %in% names(routes)) { return(http_not_found) } path_handler <- routes[[request$PATH_INFO]] if (!request$REQUEST_METHOD %in% names(path_handler)) { return(http_method_not_allowed) } method_handler <- path_handler[[request$REQUEST_METHOD]] return(method_handler(request)) } app <- list( call = function (request) { response <- router(routes, request) if (!'status' %in% names(response)) { response$status <- 200 } if (!'headers' %in% names(response)) { response$headers <- list() } if (!'Content-Type' %in% names(response$headers)) { response$headers[['Content-Type']] <- 'text/plain' } return(response) } ) cat(paste0("Server listening on :", PORT, "...\n")) runServer("0.0.0.0", PORT, app) In host.json, modify the customHandler section to configure the custom handler's startup command. "customHandler": { "description": { "defaultExecutablePath": "Rscript", "arguments": [ "handler.R" ] }, "enableForwardingHttpRequest": true } To test the function locally, start the local Azure Functions runtime host in the root of the project folder. func start func start npm install npm start mvn clean package mvn azure-functions:run R -e "install.packages('httpuv', repos='')" func start After you see the HttpExample endpoint appear in the output, navigate to. The browser must display a "hello" message that echoes back Functions, the value supplied to the name query parameter. Press Ctrl+C to stop the host. Build the container image and test locally (Optional) Examine the Dockerfile in the root of the project folder. The Dockerfile describes the required environment to run the function app on Linux. The complete list of supported base images for Azure Functions can be found in the Azure Functions base image page. Examine the Dockerfile in the root of the project folder. The Dockerfile describes the required environment to run the function app on Linux. Custom handler applications use the mcr.microsoft.com/azure-functions/dotnet:3.0-appservice image as its base. Modify the Dockerfile to install R. Replace the contents of the Dockerfile with the following code: FROM mcr.microsoft.com/azure-functions/dotnet:3.0-appservice ENV AzureWebJobsScriptRoot=/home/site/wwwroot \ AzureFunctionsJobHost__Logging__Console__IsEnabled=true RUN apt update && \ apt install -y r-base && \ R -e "install.packages('httpuv', repos='')" COPY . /home/site/wwwroot In the root project folder, run the docker build command, provide a name as azurefunctionsimage, and tag as v1.0.0. Replace <DOCKER_ID> with your Docker Hub account ID. This command builds the Docker image for the container. docker build --tag <DOCKER_ID>/azurefunctionsimage:v1.0.0 . When the command completes, you can run the new container locally. To test the build, run the image in a local container using the docker run command, replace <docker_id> again with your Docker Hub account ID, and add the ports argument as -p 8080:80: docker run -p 8080:80 -it <docker_id>/azurefunctionsimage:v1.0.0 verifying the function app in the container, press Ctrl+C to stop the docker. Push the image to Docker Hub Docker Hub is a container registry that hosts images and provides image and container services. To share your image, which includes deploying to Azure, you must push it to a registry. If you haven't already signed in to Docker, do so with the docker login command, replacing <docker_id>with your Docker Hub account ID. This command prompts you for your username and password. A "Login Succeeded" message confirms that you're signed in. docker login After you've signed in, push the image to Docker Hub by using the docker push command, again replace the <docker_id>with your Docker Hub account ID. docker push <docker_id>/azurefunctionsimage:v1.0.0 Depending on your network speed, pushing the image for the first time might take a few minutes (pushing subsequent changes is much faster). While you're waiting, you can proceed to the next section and create Azure resources in another terminal. already, sign in to Azure. az login The az login command signs you into your Azure account. Create a resource group named AzureFunctionsContainers-rgin your chosen region. az group create --name AzureFunctionsContainersContainers-rg --sku Standard_LRS The az storage account create command creates the storage account. In the previous example, replace <STORAGE_NAME>with a name that is appropriate to you and unique in Azure Storage. Storage names must contain 3 to 24 characters numbers and lowercase letters only. Standard_LRSspecifies a general-purpose account supported by Functions. Use the command to create a Premium plan for Azure Functions named myPremiumPlanin the Elastic Premium 1 pricing tier ( --sku EP1), in your <REGION>, and in a Linux container ( --is-linux). az functionapp plan create --resource-group AzureFunctionsContainers-rg --name myPremiumPlan --location <REGION> --number-of-workers 1 --sku EP1 --is-linux We use the Premium plan here, which can scale as needed. For more information about hosting, see Azure Functions hosting plans comparison. For more information on how to calculate costs, see the Functions pricing page. The command also provisions an associated Azure Application Insights instance in the same resource group, with which you can monitor your function app and view logs. For more information, see Monitor Azure Functions. The instance incurs no costs until you activate it. Create and configure a function app on Azure with the image A function app on Azure manages the execution of your functions in your hosting plan. In this section, you use the Azure resources from the previous section to create a function app from an image on Docker Hub and configure it with a connection string to Azure Storage. Create a function app using the following command: az functionapp create --name <APP_NAME> --storage-account <STORAGE_NAME> --resource-group AzureFunctionsContainers-rg --plan myPremiumPlan --deployment-container-image-name <DOCKER_ID>/azurefunctionsimage:v1.0.0 In the az functionapp create command, the deployment-container-image-name parameter specifies the image to use for the function app. You can use the az functionapp config container show command to view information about the image used for deployment. You can also use the az functionapp config container set command to deploy from a different image. Note If you're using a custom container registry, then the deployment-container-image-name parameter will refer to the registry URL. In this example, replace <STORAGE_NAME>with the name you used in the previous section for the storage account. Also, replace <APP_NAME>with a globally unique name appropriate to you, and <DOCKER_ID>with your Docker Hub account ID. When you're deploying from a custom container registry, use the deployment-container-image-nameparameter to indicate the URL of the registry. Tip You can use the DisableColorsetting in the host.json file to prevent ANSI control characters from being written to the container logs. Use the following command to get the connection string for the storage account you created: az storage account show-connection-string --resource-group AzureFunctionsContainers-rg --name <STORAGE_NAME> --query connectionString --output tsv The connection string for the storage account is returned by using the az storage account show-connection-string command. Replace <STORAGE_NAME>with the name of the storage account you created earlier. Use the following command to add the setting to the function app: az functionapp config appsettings set --name <APP_NAME> --resource-group AzureFunctionsContainers-rg --settings AzureWebJobsStorage=<CONNECTION_STRING> The az functionapp config appsettings set command creates the setting. In this command, replace <APP_NAME>with the name of your function app and <CONNECTION_STRING>with the connection string from the previous step. The connection should be a long encoded string that begins with DefaultEndpointProtocol=. The function can now use this connection string to access the storage account. Note If you publish your custom image to a private container registry, you must use environment variables in the Dockerfile for the connection string instead. For more information, see the ENV instruction. You must also set the DOCKER_REGISTRY_SERVER_USERNAME and DOCKER_REGISTRY_SERVER_PASSWORD variables. To use the values, you must rebuild the image, push the image to the registry, and then restart the function app on Azure. Verify your functions on Azure With the image deployed to your function app in Azure, you can now invoke the function as before through HTTP requests. In your browser, navigate to the following URL: https://<APP_NAME>.azurewebsites.net/api/HttpExample?name=Functions https://<APP_NAME>.azurewebsites.net/api/HttpExample?name=Functions Replace <APP_NAME> with the name of your function app. When you navigate to this URL, the browser must display similar output as when you ran the function locally. Enable continuous deployment to Azure You can enable Azure Functions to automatically update your deployment of an image whenever you update the image in the registry. Use the following command to enable continuous deployment and to get the webhook URL: az functionapp deployment container config --enable-cd --query CI_CD_URL --output tsv --name <APP_NAME> --resource-group AzureFunctionsContainers-rg The az functionapp deployment container config command enables continuous deployment and returns the deployment webhook URL. You can retrieve this URL at any later time by using the az functionapp deployment container show-cd-url command. As before, replace <APP_NAME>with your function app name. Copy the deployment webhook URL to the clipboard. Open Docker Hub, sign in, and select Repositories on the navigation bar. Locate and select the image, select the Webhooks tab, specify a Webhook name, paste your URL in Webhook URL, and then select Create. With the webhook set, Azure Functions redeploys your image whenever you update it in Docker Hub. Enable SSH connections SSH enables secure communication between a container and a client. With SSH enabled, you can connect to your container using App Service Advanced Tools (Kudu). For easy connection to your container using SSH, Azure Functions provides a base image that has SSH already enabled. You only need to edit your Dockerfile, then rebuild, and redeploy the image. You can then connect to the container through the Advanced Tools (Kudu). In your Dockerfile, append the string -appserviceto the base image in your FROMinstruction. FROM mcr.microsoft.com/azure-functions/dotnet:3.0-appservice FROM mcr.microsoft.com/azure-functions/node:2.0-appservice FROM mcr.microsoft.com/azure-functions/powershell:2.0-appservice FROM mcr.microsoft.com/azure-functions/python:2.0-python3.7-appservice FROM mcr.microsoft.com/azure-functions/node:2.0-appservice Rebuild the image by using the docker buildcommand again, replace the <docker_id>with your Docker Hub account ID. docker build --tag <docker_id>/azurefunctionsimage:v1.0.0 . Push the updated image to Docker Hub, which should take considerably less time than the first push. Only the updated segments of the image need to be uploaded now. docker push <docker_id>/azurefunctionsimage:v1.0.0 Azure Functions automatically redeploys the image to your functions app; the process takes place in less than a minute. In a browser, open https://<app_name>.scm.azurewebsites.net/and replace <app_name>with your unique name. This URL is the Advanced Tools (Kudu) endpoint for your function app container. Sign in to your Azure account, and then select the SSH to establish a connection with the container. Connecting might take a few moments if Azure is still updating the container image. After a connection is established with your container, run the topcommand to view the currently running processes. Write to Azure Queue Storage Azure Functions lets you connect your functions to other Azure services and resources without having to write your own integration code. These bindings, which represent both input and output, are declared within the function definition. Data from bindings is provided to the function as parameters. A trigger is a special type of input binding. Although a function has only one trigger, it can have multiple input and output bindings. For more information, see Azure Functions triggers and bindings concepts. This section shows you how to integrate your function with an Azure Queue Storage. The output binding that you add to this function writes data from an HTTP request to a message in the queue. Retrieve the Azure Storage connection string Earlier, you created an Azure Storage account for function app's use. The connection string for this account is stored securely in app settings in Azure. By downloading the setting into the local.settings.json file, you can use the connection to write to a Storage queue in the same account when running the function locally. From the root of the project, run the following command, replace <APP_NAME>with the name of your function app from the previous step. This command overwrites any existing values in the file. func azure functionapp fetch-app-settings <APP_NAME> Open local.settings.json file and locate the value named AzureWebJobsStorage, which is the Storage account connection string. You use the name AzureWebJobsStorageand the connection string in other sections of this article. Important Because the local.settings.json file contains secrets downloaded from Azure, always exclude this file from source control. The .gitignore file created with a local functions project excludes the file by default. Register binding extensions Except for HTTP and timer triggers, bindings are implemented as extension packages. Run the following dotnet add package command in the Terminal window to add the Storage extension package to your project. dotnet add package Microsoft.Azure.WebJobs.Extensions.Storage Now, you can add the storage output binding to your project. Add an output binding definition to the function Although a function can have only one trigger, it can have multiple input and output bindings, which lets you connect to other Azure services and resources without writing custom integration code. You declare these bindings in the function.json file in your function folder. From the previous quickstart, your function.json file in the HttpExample folder contains two bindings in the bindings collection: "bindings": [ { "authLevel": "function", "type": "httpTrigger", "direction": "in", "name": "req", "methods": [ "get", "post" ] }, { "type": "http", "direction": "out", "name": "res" } ] "scriptFile": "__init__.py", "bindings": [ { "authLevel": "function", "type": "httpTrigger", "direction": "in", "name": "req", "methods": [ "get", "post" ] }, { "type": "http", "direction": "out", "name": "$return" } "bindings": [ { "authLevel": "function", "type": "httpTrigger", "direction": "in", "name": "Request", "methods": [ "get", "post" ] }, { "type": "http", "direction": "out", "name": "Response" } ] Each binding has at least a type, a direction, and a name. In the above example, the first binding is of type httpTrigger with the direction in. For the in direction, name specifies the name of an input parameter that's sent to the function when invoked by the trigger.": "req", "methods": [ "get", "post" ] }, { "type": "http", "direction": "out", "name": "res" }, { "type": "queue", "direction": "out", "name": "msg", "queueName": "outqueue", "connection": "AzureWebJobsStorage" } ] } The second binding in the collection is of type http with the direction out, in which case the special name of $return indicates that this binding uses the function's return value rather than providing an input parameter. To write to an Azure Storage queue from this function, add an out binding of type queue with the name msg, as shown in the code below: "bindings": [ { "authLevel": "anonymous", "type": "httpTrigger", "direction": "in", "name": "req", "methods": [ "get", "post" ] }, { "type": "http", "direction": "out", "name": "$return" }, { "type": "queue", "direction": "out", "name": "msg", "queueName": "outqueue", "connection": "AzureWebJobsStorage" } ]": "Request", "methods": [ "get", "post" ] }, { "type": "http", "direction": "out", "name": "Response" }, { "type": "queue", "direction": "out", "name": "msg", "queueName": "outqueue", "connection": "AzureWebJobsStorage" } ] } In this case, msg is given to the function as an output argument. For a queue type, you must also specify the name of the queue in queueName and provide the name of the Azure Storage connection (from local.settings.json file) in connection. In a C# project, the bindings are defined as binding attributes on the function method. Specific definitions depend on whether your app runs in-process (C# class library) or in an isolated process. Open the HttpExample.cs project file and add the following parameter to the Run method definition: [Queue("outqueue"),StorageAccount("AzureWebJobsStorage")] ICollector<string> msg, The msg parameter is an ICollector<T> type, representing a collection of messages written to an output binding when the function completes. In this case, the output is a storage queue named outqueue. The StorageAccountAttribute sets the connection string for the storage account. This attribute indicates the setting that contains the storage account connection string and can be applied at the class, method, or parameter level. In this case, you could omit StorageAccountAttribute because you're already using the default storage account. The Run method definition must now look like the following code: [FunctionName("HttpExample")] public static async Task<IActionResult> Run( [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)] HttpRequest req, [Queue("outqueue"),StorageAccount("AzureWebJobsStorage")] ICollector<string> msg, ILogger log) In a Java project, the bindings are defined as binding annotations on the function method. The function.json file is then autogenerated based on these annotations. Browse to the location of your function code under src/main/java, open the Function.java project file, and add the following parameter to the run method definition: @QueueOutput(name = "msg", queueName = "outqueue", connection = "AzureWebJobsStorage") OutputBinding<String> msg The msg parameter is an OutputBinding<T> type, which represents a collection of strings. These strings are written as messages to an output binding when the function completes. In this case, the output is a storage queue named outqueue. The connection string for the Storage account is set by the connection method. You pass the application setting that contains the Storage account connection string, rather than passing the connection string itself. The run method definition must now look like the following example: @FunctionName("HttpTrigger-Java") public HttpResponseMessage run( @HttpTrigger(name = "req", methods = {HttpMethod.GET, HttpMethod.POST}, authLevel = AuthorizationLevel.FUNCTION) HttpRequestMessage<Optional<String>> request, @QueueOutput(name = "msg", queueName = "outqueue", connection = "AzureWebJobsStorage") OutputBinding<String> msg, final ExecutionContext context) { ... } Add code to use the output binding With the queue binding defined, you can now update your function to write messages to the queue using the binding parameter. Update HttpExample\__init__.py to match the following code, add the msg parameter to the function definition and msg.set(name) under the if name: statement: import logging import azure.functions as func def main(req: func.HttpRequest, msg: func.Out[func.QueueMessage]) -> str: name = req.params.get('name') if not name: try: req_body = req.get_json() except ValueError: pass else: name = req_body.get('name') if name: msg.set(name) return func.HttpResponse(f"Hello {name}!") else: return func.HttpResponse( "Please pass a name on the query string or in the request body", status_code=400 ) The msg parameter is an instance of the azure.functions.Out class. The set method writes a string message to the queue. In this case, it's the name passed to the function in the URL query string. Add code that uses the msg output binding object on context.bindings to create a queue message. Add this code before the context.res statement. // Add a message to the Storage queue, // which is the name passed to the function. context.bindings.msg = (req.query.name || req.body.name); At this point, your function must look as follows: module.exports = async function (context, req) { context.log('JavaScript HTTP trigger function processed a request.'); if (req.query.name || (req.body && req.body.name)) { // Add a message to the Storage queue, // which is the name passed to the function. context.bindings.msg = (req.query.name || req.body.name); context.res = { // status: 200, /* Defaults to 200 */ body: "Hello " + (req.query.name || req.body.name) }; } else { context.res = { status: 400, body: "Please pass a name on the query string or in the request body" }; } }; Add code that uses the msg output binding object on context.bindings to create a queue message. Add this code before the context.res statement. context.bindings.msg = name; At this point, your function must look as follows: import { AzureFunction, Context, HttpRequest } from "@azure/functions" const httpTrigger: AzureFunction = async function (context: Context, req: HttpRequest): Promise<void> { context.log('HTTP trigger function processed a request.'); const name = (req.query.name || (req.body && req.body.name)); if (name) { // Add a message to the storage queue, // which is the name passed to the function. context.bindings.msg = name; // Send a "hello" response. context.res = { // status: 200, /* Defaults to 200 */ body: "Hello " + (req.query.name || req.body.name) }; } else { context.res = { status: 400, body: "Please pass a name on the query string or in the request body" }; } }; export default httpTrigger; Add code that uses the Push-OutputBinding cmdlet to write text to the queue using the msg output binding. Add this code before you set the OK status in the if statement. $outputMsg = $name Push-OutputBinding -name msg -Value $outputMsg At this point, your function must look as follows:) { # Write the $name value to the queue, # which is the name passed to the function. $outputMsg = $name Push-OutputBinding -name msg -Value $outputMsg $status = [HttpStatusCode]::OK $body = "Hello $name" } else { $status = [HttpStatusCode]::BadRequest $body = "Please pass a name on the query string or in the request body." } # Associate values to output bindings by calling 'Push-OutputBinding'. Push-OutputBinding -Name Response -Value ([HttpResponseContext]@{ StatusCode = $status Body = $body }) Add code that uses the msg output binding object to create a queue message. Add this code before the method returns. if (!string.IsNullOrEmpty(name)) { // Add a message to the output collection. msg.Add(name); } At this point, your function must look as follows: [FunctionName("HttpExample")] public static async Task<IActionResult> Run( [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)] HttpRequest req, [Queue("outqueue"),StorageAccount("AzureWebJobsStorage")] ICollector<string> msg, ILogger log) { log.LogInformation("C# HTTP trigger function processed a request."); string name = req.Query["name"]; string requestBody = await new StreamReader(req.Body).ReadToEndAsync(); dynamic data = JsonConvert.DeserializeObject(requestBody); name = name ?? data?.name; if (!string.IsNullOrEmpty(name)) { // Add a message to the output collection. msg.Add(name); } return name != null ? (ActionResult)new OkObjectResult($"Hello, {name}") : new BadRequestObjectResult("Please pass a name on the query string or in the request body"); } Now, you can use the new msg parameter to write to the output binding from your function code. Add the following line of code before the success response to add the value of name to the msg output binding. msg.setValue(name); When you use an output binding, you don't have to use the Azure Storage SDK code for authentication, getting a queue reference, or writing data. The Functions runtime and queue output binding do those tasks for you. Your run method must now look like the following example: public HttpResponseMessage run( @HttpTrigger(name = "req", methods = {HttpMethod.GET, HttpMethod.POST}, authLevel = AuthorizationLevel.ANONYMOUS) HttpRequestMessage<Optional<String>> request, @QueueOutput(name = "msg", queueName = "outqueue", connection = "AzureWebJobsStorage") OutputBinding<String> msg, { // Write the name to the message queue. msg.setValue(name); return request.createResponseBuilder(HttpStatus.OK).body("Hello, " + name).build(); } } Update the tests Because the archetype also creates a set of tests, you need to update these tests to handle the new msg parameter in the run method signature. Browse to the location of your test code under src/test/java, open the Function.java project file, and replace the line of code under //Invoke with the following code: @SuppressWarnings("unchecked") final OutputBinding<String> msg = (OutputBinding<String>)mock(OutputBinding.class); final HttpResponseMessage ret = new Function().run(req, msg, context); Update the image in the registry In the root folder, run docker buildagain, and this time update the version in the tag to v1.0.1. As before, replace <docker_id>with your Docker Hub account ID. docker build --tag <docker_id>/azurefunctionsimage:v1.0.1 . Push the updated image back to the repository with docker push. docker push <docker_id>/azurefunctionsimage:v1.0.1 Because you configured continuous delivery, updating the image in the registry again automatically updates your function app in Azure. View the message in the Azure Storage queue In a browser, use the same URL as before to invoke your function. The browser must display the same response as before, because you didn't modify that part of the function code. The added code, however, wrote a message using the name URL parameter to the outqueue storage queue. You can view the queue in the Azure portal or in the Microsoft Azure Storage Explorer. You can also view the queue in the Azure CLI, as described in the following steps: Open the function project's local.setting.json file and copy the connection string value. In a terminal or command window, run the following command to create an environment variable named AZURE_STORAGE_CONNECTION_STRING, and paste your specific connection string in place of <MY_CONNECTION_STRING>. (This environment variable means you don't need to supply the connection string to each subsequent command using the --connection-stringargument.) export AZURE_STORAGE_CONNECTION_STRING="<MY_CONNECTION_STRING>" (Optional) Use the az storage queue listcommand to view the Storage queues in your account. The output from this command must include a queue named outqueue, which was created when the function wrote its first message to that queue. az storage queue list --output tsv Use the az storage message getcommand to read the message from this queue, which should be the value you supplied when testing the function earlier. The command reads and removes the first message from the queue. echo `echo $(az storage message get --queue-name outqueue -o tsv --query '[].{Message:content}') | base64 --decode` Because the message body is stored base64 encoded, the message must be decoded before it's displayed. After you execute az storage message get, the message is removed from the queue. If there was only one message in outqueue, you won't retrieve a message when you run this command a second time and instead get an error. Clean up resources If you want to continue working with Azure Function using the resources you created in this tutorial, you can leave all those resources in place. Because you created a Premium Plan for Azure Functions, you'll incur one or two USD per day in ongoing costs. To avoid ongoing costs, delete the AzureFunctionsContainers-rg resource group to clean up all the resources in that group: az group delete --name AzureFunctionsContainers-rg Next steps Feedback Submit and view feedback for
https://learn.microsoft.com/en-us/azure/azure-functions/functions-create-function-linux-custom-image?tabs=bash%2Cportal&pivots=programming-language-csharp
CC-MAIN-2022-40
en
refinedweb
Learn more about these different git repos. Other Git URLs f61db18 @@ -115,6 +115,14 @@ def add_appdata(path, username, projectname, lock=None): out = "" + + # We need to have a possibility to disable an appstream builder for some projects + # because it doesn't properly scale up for a large ammount of packages + parent_dir = os.path.dirname(os.path.normpath(path)) + assert parent_dir.endswith(os.path.join(username, projectname)) + if os.path.exists(os.path.join(parent_dir, ".disable-appstream")): + return out kwargs = { "packages_dir": path, "username": username, See #738 Documented in SOP: +1 os.path.join(username, projectname) ? Metadata Update from @frostyx: - Pull-request tagged with: needs-work rebased onto df8f0cede2044be97e328aa4aee843f0fd3cec8a I've switched to os.path.join, os.path.exists and os.path.realpath as it was suggested on meeting. os.path.join os.path.exists os.path.realpath Metadata Update from @frostyx: - Pull-request untagged with: needs-work Sorry, I probably meant os.path.normpath. Even though we don't use symlinks, realpath() has some potential to cause problems in future... os.path.normpath rebased onto aed994cf31a6224497aabc80741222c245e1a043 Sorry, I probably meant os.path.normpath. Sorry, I probably meant os.path.normpath. Fixed :-) rebased onto f61db18 Thanks, please merge (for some reason I can not, again ..). Pull-Request has been merged by praiskup See #738
https://pagure.io/copr/copr/pull-request/742
CC-MAIN-2022-40
en
refinedweb
OAuth token value in AuthManager update form test (groovy) Hello all, I am using AuthManager in my project. But unfortunetally we implemented new dynamic part of the token. So I must refresh it after each new deployment. So for CI/CD is problem. In my AuthManager I have token value. Now I can send my HTTP to get new one.. But remains question, how to update the value in AuthManager? Any ideas? I saw some examples with update of the use name, etc.. but somehow this is not aplicable for me. I need just update the token value. I dont know the object model.. And form some reason the ReadyAPI groovy step does not help me with syntax: import com.eviware.soapui.config.AuthEntryTypeConfig; def project = testRunner.getTestCase().getTestSuite().getProject(); def authProfile = project.getAuthRepository().getEntry("Name of your profile"); So I have no idea what are all the available variables and methods in authProfile ... Any idea wellcomed. Solved! Go to Solution. def authContainer = testRunner.testCase.testSuite.project.OAuth2ProfileContainer def authProfile = authContainer.getProfileByName("admin_default") def oldToken = authProfile.getAccessToken(); log.info oldToken; // updated authProfile.setAccessToken("some"); log.info authProfile.getAccessToken(); Can anybody tell me, why the helper dont know the setAccessToken method? When I am typing the code in groovy step? I just found it by my sugesstion.. But IT MUST be somewhere descripted.. Like in object mode.. but I did not found it there.. It works.. but is stranght, that this was done by gessing... I cant answer your question, but just wanted to say thanks for answering your query so quickly....i'm gonna completely steal your code for my own use, so thanks a lot! 😉 Rich Problem is, that is nowhere descripted.. the object model of ReadyAPI does not know the class neighter the methods. I think, this is the problem.. I want to do something, but how??? :-DDD I just tryed many variants.. and this works.. BTW the include rows are not necessary..
https://community.smartbear.com/t5/ReadyAPI-Questions/OAuth-token-value-in-AuthManager-update-form-test-groovy/td-p/213505
CC-MAIN-2022-40
en
refinedweb
Introduction to #Define in C The #define is a function in the C programming language that helps define macros along with the source code present. Using macro definitions, we can define constant values, which can be used globally throughout the code we have. These macro definitions are different than variables. They cannot be changed like variables can be changed in a program. Macros can be used to create some expressions of strings or even numbers. Macros are efficient, reusable, and fast. It is a way of creating constants. Syntax #define CONSTNAME value Or #define CONSTNAME expression The directive #define helps in creating an identifier that will be a constant value. The CONSTNAME is replaced only when it forms a token. The argument after the CONSTNAME can be tokens, values for constants, and also complete statements. This directive can be used throughout the program as and when needed. How does C# directive work in C? #include <stdio.h> #define MATHPI 3.14 main() { printf("%f",MATHPI); } As stated earlier, the program helps us in creating constant values that can be used directly. In the above example, we try to understand the working of #define function. Like #include, we have used #define and declared its value as 3.14. It helps us in having a constant value for this MATHPI constant variable. Once this is defined, this function is stored in the memory and then can be directly used throughout the program. Its value can just be referred by the MATHPI variable name. It will get replaced with its value wherever this function is used in the program. This can be done for values whose value will not be changing. Examples of #Define in C Given below are the examples of #Define in C: Example #1 Replacing a numeric value using the #define function Code: #include <stdio.h> #define MATHPI 3.1415 int main() { float radius, area, circum; printf("Enter the radius for the circle: "); scanf("%f", &radius); area = MATHPI*radius*radius; printf("The area of circle is= %.2f",area); return 0; circum = 2*MATHPI*radius; printf("The circumference of circle is= %.2f",circum); } The above function helps us get the area of a circle by making use of a constant value derived by using the #define function. Here we have used the define and defined the value of MATHPI as 3.1415. This value will remain constant throughout the program and can be used multiple times. We have taken three float variables. These variables will store the local value of variables. We can calculate the area and circumference of the circle by using the #define variable and the local variables. We have used the MATHPI variable twice in the code. Once to calculate the area and the second time to calculate the circumference of the circle. Both the times we have used these variables. The value for this remains constant, and we get the area and circumference. Below will be the output of the above program. Output: Example #2 Replacing a string value using the #define Code: // C program to demonstrate #define to replace a string value #include <stdio.h> // We have defined a string PUN for Pune #define PUN "Pune" int main() { printf("The city I live in is %s ", PUN); return 0; } The above program is an example where we have defined a string variable PUN using the #define function. It helps us in using this string value anywhere in the code. In the main program, we have printed a string that displays the city you live in. The variable to be printed in PUN was defined using the #define. It cannot be changed further. The output of the above program will be as below. The value for variable PUN will be replaced with the constant string that we have declared. Output: Example #3 Defining an expression using #define Code: #include <stdio.h> #define MAX(x,y) ((x)>(y)?(x):(y)) void main() { printf("The maximum by using #define function is: %d\n", MAX(97,23)); } The above program demonstrates the use of the #define function as an expression. We have defined an expression that is helping us in finding the maximum between two numbers. The expression MAX is defined with the logic of finding the maximum between two numbers. This #define variable once has this value of finding the max of two numbers. In the main function, we are just using a print function that helps us find the max of any two numbers. In the print function, we have just called the MAX function. We have passed two numbers to this MAX variable, which is defined as the maximum of two numbers. The output of this code will be as below. Output: Example #4 Use of ‘#’ when define function is used Code: #include <stdio.h> #define msg_for(a) \ printf(#a " : Let us learn something new!\n") int main(void) { msg_for(EduCBAians); return 0; } In this example, we are making use of the ‘#’ operator. The # operator here acts as an operator that helps us accept user input for a particular variable. We can send parameters to the variable, which is passed and created through the #define a variable. We defined a variable name here, and this is printed using #. The value of id sent from main by calling the msg_for function. Here we send the parameter as EduCBAians. The function printing is message is defined using the #define function. This is a constant function. This function, whenever called, will print the variable, which is passed to the msg_for function. Below will be the output of the above code, which helps us in even defining constant functions. Output: Conclusion The #define function helps us in defining constant variables that can be used throughout the program. This value will remain constant. It can be the expression, variable, or any value which you would want to be constant. It helps in having uniformity for that variable. It is faster as the value is already defined before the code starts running. It is also efficient as you will just have to specify the variable name in the code. The code also looks neat when the #define function is used. It is quick and efficient using this function of C which is also easy. Recommended Articles This is a guide to #Define in C. Here we discuss How does C# directive work in C and Examples along with codes and outputs. You may also have a look at the following articles to learn more –
https://www.educba.com/sharp-define-in-c/?source=leftnav
CC-MAIN-2022-40
en
refinedweb
public static class ImportKeyDetails.Builder extends Object clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait public Builder() public ImportKeyDetails.Builder compartmentId(String compartmentId) The OCID of the compartment that contains this key. compartmentId- the value to set public ImportKeyDetails.Builder definedTags(Map<String,Map<String,Object>> definedTags) Usage of predefined tag keys. These predefined keys are scoped to namespaces. Example: {"foo-namespace": {"bar-key": "foo-value"}} definedTags- the value to set public ImportKeyDetails.Builder displayName(String displayName) A user-friendly name for the key. It does not have to be unique, and it is changeable. Avoid entering confidential information. displayName- the value to set public ImportKeyDetails.Builder freeformTags(Map<String,String> freeformTags) Simple key-value pair that is applied without any predefined name, type, or scope. Exists for cross-compatibility only. Example: {"bar-key": "value"} freeformTags- the value to set public ImportKeyDetails.Builder keyShape(KeyShape keyShape) public ImportKeyDetails.Builder wrappedImportKey(WrappedImportKey wrappedImportKey) public ImportKeyDetails.Builder protectionMode(ImportKeyDetails.ProtectionMode protection. protectionMode- the value to set public ImportKeyDetails build() public ImportKeyDetails.Builder copy(ImportKeyDetails model)
https://docs.oracle.com/en-us/iaas/tools/java/2.44.0/com/oracle/bmc/keymanagement/model/ImportKeyDetails.Builder.html
CC-MAIN-2022-40
en
refinedweb
The Intent class is used in Android programs to communicate between various types of processes. We’ll consider the case where an Intent is used to start one Activity from within another Activity, and another Intent is used to send data back from the second Activity to the first. Create a new project in Eclipse as described in our previous post. Add a main Activity called StartActivity which is displayed when the app starts. The layout of this Activity consists of: - a Button with ID getTextButton that has the caption “Get text” - a TextView with the text “You entered:” - another TextView, with ID enteredText, initially blank When the button is pressed, a second Activity, called GetTextActivity, will start that will have the following layout: - a horizontal LinearLayout (we’ll get to these in a later post) which contains: - a TextView with the text “Enter your text:” - an EditText, with ID userText, into which the user can type some text - a Button with ID sendTextButton and caption “Send text” All of this can be set up using Eclipse’s graphical UI editor, which will create the corresponding code in the various XML files automatically. The idea is that the user presses the “Get text” button to show the second Activity, then types some text into the EditText box and presses the “Send text” button. This returns the app to the first Activity, which displays the entered text in the enteredText TextView. In order to do this, we need to use one Intent to start GetTextActivity from within StartActivity and another Intent to send the entered text back from GetTextActivity to StartActivity. Here’s the code for StartActivity: package com.example.ex02explicitintent; import android.os.Bundle; import android.app.Activity; import android.content.Intent; import android.view.View; import android.widget.TextView; public class StartActivity extends Activity { static final int GET_TEXT_ACTIVITY = 0; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_start); } public void onClickGetText(View view) { Intent getTextIntent = new Intent(getBaseContext(), GetTextActivity.class); startActivityForResult(getTextIntent, GET_TEXT_ACTIVITY); } @Override protected void onActivityResult(int requestCode, int resultCode, Intent data) { if (resultCode != RESULT_OK) { return; } if (requestCode == GET_TEXT_ACTIVITY) { String userText = data.getStringExtra(getString(R.string.userTextTag)); TextView enteredText = (TextView) findViewById(R.id.enteredTextView); enteredText.setText(userText); } } } The onClickGetText() (lines 19-22) method is the event handler for getTextButton. It illustrates how to create an explicit Intent, which is an Intent that creates a specifically named Activity. (There are also implicit Intents). Here we want a GetTextActivity to start in response to the button press, so we name this class explicitly in the Intent constructor. We then start the Activity by calling startActivityForResult(). The ‘ForResult’ in the method name means that we expect the Activity to return a result after it finishes running. If we just wanted to start an Activity with no return data, we can call startActivity(). The startActivityForResult() method takes the Intent as its first argument, and an int label (defined on line 11) as its second argument. We need a label because if we start more than one Activity that will return data, they all call the same method onActivityResult() when they return, so we need to tag each Activity so we know which one is returning the data. (OK, since we’re creating only one Activity here, technically we don’t need a tag, but in the more general case we will so it’s a good idea to get into the habit of adding the tag.) Before we see how to handle the returned result, we need to look at the other Activity to see how it sends back its data. Here’s the code for GetTextActivity: package com.example.ex02explicitintent; import android.os.Bundle; import android.app.Activity; import android.content.Intent; import android.view.View; import android.widget.EditText; public class GetTextActivity extends Activity { @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_get_text); } public void onClickSendText(View view) { EditText userEditText = (EditText)findViewById(R.id.userText); String userText = userEditText.getText().toString(); Intent userIntent = new Intent(); userIntent.putExtra(getString(R.string.userTextTag), userText); setResult(RESULT_OK, userIntent); finish(); } } This Activity contains only the event handler for the sendText button. We retrieve the EditText control by using its ID and extract the text from it (lines 18-19). Then we create a new Intent. This time, we’re not starting a new Activity, so the Intent doesn’t have an Activity specified in its constructor. What we do want to do is send back userText to StartActivity. We can attach data to an Intent by adding Extras to it. Each Extra consists of a key (which is a String) and a corresponding value, which here is just userText. Note that we’ve defined the key as a string resource (by adding it to the strings.xml file) and retrieved it on line 21 using getString() which looks up a string using its ID. We do it this way since we’ll need to use the same key back in StartActivity to extract the Extra from the Intent. After adding userText to the Intent, we use setResult() on line 22 to attach the Intent to the GetTextActivity. The first argument to setResult() is a result code to indicate the status of GetTextActivity when it finishes. This result code is also sent back to StartActivity and can be checked to determine what to do with the returned Intent. Finally, we call finish() to shut down GetTextActivity and send the Intent back to StartActivity. Returning to the code above for StartActivity, the onActivityResult() method is called when GetTextActivity finishes. The arguments in this method are: - requestCode: the tag that was attached to the Activity that has just finished - resultCode: the result code that was set in the Activity that has just finished - data: the Intent returned by the other Activity On line 26, we check the resultCode to ensure that GetTextActivity finished properly. Then we check the requestCode to ensure that it really is GetTextActivity that is calling onActivityResult(). If so, then we extract the userText from the Intent, again using the string resource for the tag. Then we retrieve the enteredText TextView control and set its text to userText. Trackbacks […] seen how to use an explicit intent to start a second Activity from within an existing Activity by giving the name of the second […] […] notification to restart a closed app when clicked, we need to provide some more code. We can use an explicit Intent to restart the app in the way we did earlier, so we define mainIntent on line 6 to do this. Since […]
https://programming-pages.com/2014/02/19/android-explicit-intents/
CC-MAIN-2018-22
en
refinedweb
Thanks for pointing - I've republished Carousel under Ext.ux.layout namespace Documentation will be updated maximum in 2 hours. Has anyone used this extension with 2.2? I have a very simple layout and it presents a few problems: 1. I'm not able to resize the panels 2. When collapsing the panels, I'm not able to uncollapse them again. Thoughts?Thoughts?Code:Ext.onReady(function(){ var extra = '<br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br>...'; new Ext.Viewport({ layout: 'row-fit', id: 'container', items: [ { xtype: 'panel', title: "I'll take some too...", html: "...if you don't mind. I don't have a <tt>height</tt> in the config.<br>Btw, you can resize me!"+extra, id: 'panel3', autoScroll:true, collapsible: true }, { xtype: 'panel', id: 'slider', height: 5 }, { xtype: 'panel', title: "Let me settle too", html: "Since there are two of us without <tt>height</tt> given, each will take 1/2 of the unallocated space (which is 50%), that's why my height initially is 25%."+extra, autoScroll: true, collapsible: true } ] }); var split = new Ext.SplitBar("slider", "panel3", Ext.SplitBar.VERTICAL, Ext.SplitBar.TOP); split.setAdapter(new Ext.ux.layout.RowFitLayout.SplitAdapter(split)); }); bluefox, I am having the same issue as you with not being able to expand them panels once they have been collapsed. I can resize mine, but something is "weird" with the splitter. Hopefully, someone more familar with this code can reolve the issue with Ext 2.2. Mike V. I seem the same 2.2 resize issue with IE, but not firefox. Shame, it was really good with 2.1. I will have to remove it from my app if it is not resolved. I've made some fixes to this ux recently and code clean-up - you can try with the latest version from repository: If the bug still persists please upload the test case to - I'll try to fix it. Fixed bug with collapsing/expanding childs. Latest version in repository: Thank you! It works great now. Regards, Don McClean Hi, thanks for your extension... I'm using Ext.ux.layout.RowFitLayout.SplitAdapter but it doesn't looks like Ext.SplitBar (used in a border layout panel). Using firebug I discovered that: - Ext.SplitBar is a simple DIV that is sibling of its resizable DIVs and it styles itself using x-layout-split css class - Ext.ux.layout.RowFitLayout.SplitAdapter is composed by 3 DIVs nested each other and styles itself using x-panel css class Is there a reason for all?? thanks
https://www.sencha.com/forum/showthread.php?17116-Ext.ux.layout.RowFitLayout/page6
CC-MAIN-2018-22
en
refinedweb
; } }()); Hopefully this was't too long. I'm also trying to find a nice way to store the context of the canvas within the Game namespace. Maybe making a private var and creating a public function to get it? Thanks! an infinite loop... while(true){ // run this code 4eva } or do{ }while(true); Is that what you want? Also why are you using an anon function? Thanks for the quick response. Regarding the Anonymous function, it's a way I can give vars in several js files their own local scope. Global variables are still shared among files. At least that's what I've read. In production I'd put everything in one file. Minimized. A while loop would block the UI thread, and most modern browsers support requestAnimationFrame which will turn into the standard for animation on the web. The loop already works perfectly it also implements a fallback to setTimer. I wanted to know if this organization can use optimization or it's nice the way it is. As far as I know, it looks okay. But who knows, putting it out there for further opinions. \\.\;1341787 wrote:an infinite loop... while(true){ // run this code 4eva }or do{ }while(true);Is that what you want? Also why are you using an anon function? I wouldn't attach game() to window, you shouldn't need to do that given what you're doing with it. Little tip, if you are declaring a bunch of VAR in a row, you only need to say VAR once -- you can then comma delimit the list. The delta you are using of aiming for 60fps as your average is going to have problems in browsers that don't support window.performance; when it drops through to that Date.getTime() you're going to be having that function return the same time multiple times in a row, as the Date functions timer granularity is way down at 36.4ms on most browsers. (which of course is why we have Performance in the first place). You might want to run some sort of granularity check to make sure you aren't drawing frames that don't update because of the low timer granularity (which is below 30fps). I'd suggest dropping your desired 1:1 rate (step) to 24. It runs faster, great, and it will reduce the number of systems that will run at a slower playrate thanks to falling back on getTime to nil. That said, it's nice to see someone doing it properly, separating playrate from framerate. Oh, and because you're going to use document and window so much, this cute trick: (function(d,w) { })(document,window); Can not only save you some typing, it will actually run a hair faster since as an interpreted language, less code == faster, and shorter variable names == faster lookups. ... and yeah, @nogDog if you don't release scripting execution, nothing gets drawn -- EVER. That's why EVERY javascript animation has to use setTimeout, setInterval, requestAnimationFrame, or something similar to actually be usable... hence all white(true) will do is send the browser off to never-never land, effectively locking it up until (if you are in a good browser) it comes up with the message about the script taking too long to run and asking if you want to kill it. WAIT -- if using requestAnimationFrame, the only browsers that support it also by definition support Performance -- so there's no reason to have a fallback to Date.gettime --- or is there some sort of polyfill for that we're not seeing? I'd consider simply seeing if both requestAnimationFrame and window.performance exist, and if they don't simply bomb out with an error and lose the attempts to have fallbacks... Well, unless you REALLY care about Safari in which case you'd still need the webkitRequestAnimationFrame crap and have no window.performance; Even before Chrome forked off into Blink, the Safari builds of webkit have been such horrible code-rot lagging behind Chrome they've made IE 10+ look good on stuff like this.. Hey Shadow thanks for the great answer. You're right about the vars, I should have just used commas. Thanks for the tip. I actually made my previous version re-size based on the browser width/height so it takes up the browser viewing area. window.resize would fire and resize everything. Well, you already know this Were you suggesting the use of it because of performance or an aesthetic opinion? I checked out your demo and I don't think it's crappy. The water animation was pretty cool and the boats are floating nicely. Are they "floating" based on the water or independently? (function(d,w) { })(document,window); // THANKS FOR THIS TIP :) I'm going to remove window.performance. After your post, I went ahead and read up on it at MDN , and I have a feeling it's better to use it for debugging. In fact, according to the article, Safari completely doesn't support it. Our beloved IE only supports it in version 10.0. Ooops sorry for not including the Polyfill. I'm using the famous Paul Irish solution: window.requestAnimationFrame = function() { return window.requestAnimationFrame || window.webkitRequestAnimationFrame || window.mozRequestAnimationFrame || window.msRequestAnimationFrame || window.oRequestAnimationFrame || function(f) { window.setTimeout(f,1000/60); } }(); What if I do the same thing for window.performance? As in: window.performance = window.performance || {}; performance.now = (function() { return performance.now || performance.mozNow || performance.msNow || performance.oNow || performance.webkitNow || function () { return new Date().getTime();}; })(); By the way, I checked out the rest of your site. You write DOS and Commadore64 based games? That's pretty awesome man. deathshadow;1341803 wrote. Infinite Loop;1341809 wrote:Were you suggesting the use of it because of performance or an aesthetic opinion? Were you suggesting the use of it because of performance or an aesthetic opinion? More a usability issue -- I've seen a few too many canvas games that are annoying to try and play because they're stuck in a little tiny window (you'll see flash games on sites similarly afflicted) or worse, too big for the display I'm on. Infinite Loop;1341809 wrote:The water animation was pretty cool and the boats are floating nicely. Are they "floating" based on the water or independently? The water animation was pretty cool and the boats are floating nicely. Are they "floating" based on the water or independently? Independently, but on purpose. It tricks the brain into seeing more waves than are actually drawn. Trick I picked up back in the '80's working on true parallax depth of field scrolling -- A great example is the ground in the SMS version of Choplifter. If you moved the enemy tanks with the ground tile, it actually looks wrong so it gets it's own sideways scroll rate in-between the tile-row in front of it and the tile-row behind it. Infinite Loop;1341809 wrote:Ooops sorry for not including the Polyfill. I'm using the famous Paul Irish solution: Like most of his solutions I wonder if he even understands the languages he's writing for... See that garbage he did with the endless stupid malfing IE CC around the HTML tag crap that he came up with that damned near every framework pisses on websites with. REALLY not a fan of anything of his. I mean, setting self to self? Would that ONE if statement REALLY hurt that much? Much less there is no such thing as msRequestAnimationFrame -- never even existed -- and of course setTimeout just doesn't work at 16.6~ms (1000/60), as the minimum across browsers hovers between 30ms and 36.4ms -- so sending that low a timeout is never going to work right. Infinite Loop;1341809 wrote:What if I do the same thing for window.performance? What if I do the same thing for window.performance? I'd actually handle that a bit differently if I were to go that route. I'd probably set my own function name instead of recycling, but make it an alias of the ones that exist... THOUGH, Performance.now only has a -webkit prefix, none of the other ones have ever bothered having it. var now = window.performance ? ( Performance.webkitNow || Performance.now ) : function() { return new Date().getTime(); }; Infinite Loop;1341809 wrote:By the way, I checked out the rest of your site. You write DOS and Commadore64 based games? That's pretty awesome man. Thanks. I didn't get the memo. I find writing code for fun on the older narrower targets makes me a better programmer when I have the endless resources of today. It makes you think more, and when you're done it feels like way more of an accomplishment. Sad part being in many ways writing DOS games is simpler and more powerful than what JS can do today... admittedly much of that is the dreadful state of AUDIO in browsers, which is what made me shelve that CANVAS demo which was originally going to be "Missile Command on Steroids".
https://www.webdeveloper.com/forum/d/297101-your-opinions-on-my-game-loop-and-code-organization
CC-MAIN-2018-22
en
refinedweb
While browsing the log of one of my Bazaar branches, I noticed that the commit messages were being recorded as occurring in the +0800 time zone even though WA switched over to daylight savings. Bazaar stores commit dates as a standard UNIX seconds since epoch value and a time zone offset in seconds. So the problem was with the way that time zone offset was recorded. The code in bzrlib that calculates the offset looks like this: def local_time_offset(t=None): """Return offset of local zone from GMT, either at present or at time t.""" # python2.3 localtime() can't take None if t is None: t = time.time() if time.localtime(t).tm_isdst and time.daylight: return -time.altzone else: return -time.timezone Now the tm_isdst flag was definitely being set on the time value, so it must have something to do with one of the time module constants being used in the function. Looking at the values, I was surprised: >>> time.timezone -28800 >>> time.altzone -28800 >>> time.daylight 0 So the time module thinks that I don’t have daylight saving, and the alternative time zone has the same offset as the main time zone (+0800). This seems a bit weird since time.localtime() says that the time value is in daylight saving time. Looking at the Python source code, the way these variables are calculated on Linux systems goes something like this: - Get the current time as seconds since the epoch. - Round this to the nearest year (365 days plus 6 hours, to be exact). - Pass this value to localtime(), and record the tm_gmtoff value from the resulting struct tm. - Add half a year to the rounded seconds since epoch, and pass that to localtime(), recording the tm_gmtoff value. - The earlier of the two offsets is stored as time.timezone and the later as time.altzone. If these two offsets differ, then time.daylight is set to True. Unfortunately, the UTC offset used in Perth at the beginning of 2006 and the middle of 2006 was +0800, so +0800 gets recorded as the daylight saving time zone too. In the new year, the problem should correct itself, but this highlights the problem of relying on these constants. Unfortunately, the time.localtime() function from the Python standard library does not expose tm_gmtoff, so there isn’t an easy way to correctly calculate this value. With the patch I did for pytz to parse binary time zone files, it would be possible to use the /etc/localtime zone file with the Python datetime module without much trouble, so that’s one option. It would be nice if the Python standard library provided an easy way to get this information though.
https://blogs.gnome.org/jamesh/2006/12/
CC-MAIN-2018-22
en
refinedweb
In this guide, we'll briefly discuss the concept of immutability in software development and how to leverage its benefits when developing React-Redux applications. Specifically, we'll demonstrate how to use Facebook's Immutable.js library to enforce immutability in our applications. Table of Contents Redux and Immutability In the React ecosystem, Redux, a state management paradigm, is fast becoming the preferred implementation of Facebook's Flux architecture. One of Redux's core tenets is maintaining state immutability to ensure state determinism, unlock performace gains and enable time travel debugging capability. You probably have one question now though. What is immutability? An immutable object is one whose state cannot be modified once created. Enforcing immutability means ensuring that once objects are created, they cannot be modified. What is ImmutableJS?.Related Course: Getting Started with React Other than the benefit of not having to worry about accidentally mutating the state of our application directly, Immutable.js data structures are highly performant because of the library's implementation of structural sharing through hash maps and vector tries. If you'd like to read about how it does this, I've posted a few handy references in the reference section of this article. Before we dig deep into using Immutable.js with Redux, let's look at immutability in general in JavaScript. What we'll be Building In order to demonstrate implementing Immutable.js in a Redux application, we'll be building the Redux layer of a reviews application, where we can: - Add reviews for an item. - Delete reviews for an item. - Flag reviews made by other reviewers, we'll have this as a boolean. - Rate reviews made by other reviewers on a scale of 1-5. To show the differences between using Immutable and native Node.js methods, we'll write out the purely Node version of our reducer and then modify our reducer so that it uses Immutable.js. All in all, our goal is to make our state object immutable. Therefore, all modifications made to our application state should return a new modified object, leaving the previous state unchanged. As is the pattern with Redux, our reducer will return a new state object dependent on the action it receives. Object Immutability in JavaScript Natively, objects are mutable in JavaScript. However, if we're careful enough, we can implement some immutability. Amongst other methods, we can use the following: - The spread operator: The ...operator can be used to transform the properties of an object and returns a new object which is the result of the mutation. - Object.assign: Object.assign(target, ...sources). This method is used to copy the values of all enumerable own properties from one or more source objects to a target object. - Other non-mutating array methods like filter, concat and slice. Redux reducer model As a reminder, before we begin to write out our reducers, our reducers will emulate the standard redux reducer pattern below. Reviews reducer without Immutable.js First, let's use the methods we mentioned above to quickly bootstrap a reviews reducer for our application without Immutable. const reviews = (state=[], action) => { switch (action.type) { case 'ADD_REVIEW': return [ ...state, { id: action.id, reviewer: action.reviewer, text: action.text, rating: action.rating, flag: false } ] case 'DELETE_REVIEW': return state.filter(review => review.id !== action.id); case 'FLAG_REVIEW': return state.map(review => review.id === action.id ? Object.assign({}, review, { flag: action.flag}): review) case 'RATE_REVIEW': return state.map(review => review.id === action.id ? {...review, rating: action.rating }: review) default: return state; } } ADD REVIEW Here, we use the spread operator to copy the existing state and append to it a new review object. We could easily just have used the array concat or any other array manipulation method that returns a new array object. To demonstrate how easy it is to accidentally mutate state directly, instead of using the spread operator, let's use push to append the new review object to our state object. // code as before case 'ADD_REVIEW': return state.push( { id: action.id, reviewer: action.reviewer, text: action.text, rating: action.rating, flag: false } ) // as after Since push directly alters the original array, when a review is added into state, previous information about the shape of the state object before the addition will be lost to us. DELETE REVIEW Filter creates a new state array with the elements in the original array that pass the conditional in the callback function we pass it. Here, the condition is review.id !== action.id, If the review id in state matches the action id passed, then that review is omitted from the resultant array. FLAG REVIEW We map over all existing reviews and look for one with an id that matches the specified action.id. Once we've found it, we use Object.assign to create a copy of that review with its flag property changed to the one given in the action. RATE REVIEW Again, we map over all reviews and when we find one whose id is equivalent to the action.id, we create a copy of it using the object spread operator and change its rating property to the one specified in the action. The methods we've used make it quick and easy to implement state immutability in our application. So why would we look to anything else to help us enforce immutability? I wondered that too myself, so fortunately, you're not alone. Before we answer that question though, let's discuss some considerations we'll have to make before we use Immutable.js. Costs vs Benefits of using Immutable.js As always, before we attempt to integrate a library into an application, especially one as pervasive and far reaching as this one, let's weigh the cost versus the benefit of using Immutable.js. Benefits - As previously discussed, Immutable helps us enforce immutability from the start, eliminating the possibility of inadvertent state mutation. - Immutable improves state/object copy performance significantly through its implementation of structural sharing. Costs - Immutable.js is a library and thus requires installation. If we're using Node's inbuilt non mutating object methods, we don't need to perform any installation. - Since Immutable has its own syntax for performing read and write object operations, referencing Immutable.js data becomes slightly tedious. - The above reason makes it cumbersome to integrate with projects that expect plain JavaScript objects. Granted, Immutable eases conversion of its immutable data structures to plain JS objects with the method toJS(), but it's slow and leads to performance losses. - Converting an Immutable.js object to JavaScript using toJS()will return a new object every time. If the method is used in Redux's mapStateToProps()it will cause React to believe that the whole state object has changed every time even if only part of the state tree changes, causing a re-render of the whole component and negating any performance gains due to React's shallow equality checking. - Since all state is wrapped with Immutable.js and objects have to be accessed using Immutable.js syntax, this dependency may spread to your components. Such a high level of coupling would make removing Immutable.js from your codebase difficult in the future. - Immutable.js objects may prove difficult to debug since the actual data is nested within a hierarchy of properties. However, this can be resolved using the ImmutableJS object formatter in Chrome Dev Tools. Prerequisites Before we get started, there are a few libraries we need to install. First, let's create a package.json at the root of our project by running the command npm init Install our dependencies by executing, npm install --save immutable npm install --save-dev babel-cli babel-preset-es2015-node6 babel-preset-stage-3 babel-register mocha chai Let's modify the .babelrc to include the following: .babelrc { "presets": [ "es2015-node6", "stage-3" ] } Project Structure Now that you have a pretty comprehensive picture of what you might need to consider before using Immutable.js, let's build our application. We'll be using the following directory structure, so go ahead and create it. You can use the following command if you're using a Unix kernel. mkdir -p immutable-redux/src/reducers immutable-redux/test touch immutable-redux/.babelrc immutable-redux/src/{actionTypes.js,reducers/reviews.js} immutable-redux/test/reviews_test.js ├── immutable-redux ├── .babelrc ├── package.json ├── src │ ├── actionTypes.js │ └── reducers │ └── reviews.js └── test └── reviews_test.js Test Driven Development We'll be taking a tests first approach with our application, so before we begin writing out the reviews reducer, let's create tests for it. This approach will help us understand what our reducer should do. Gratefully, Redux reducers are pure functions so this makes them pretty straightforward to test. Our initial state will be an empty array and our actions of the form { type, id, item_id, reviewer, text, rating, flag }. As before, we'll use a switch-case statement to execute certain behaviour when our action types are triggered. For our testing assertion library we're using chai along with mocha. To start, fill out the following in the test/reviews_test.js file. test/reviews_test.js import { expect } from 'chai'; import { List, Map } from 'Immutable'; import reviews from '../src/reducers/reviews'; // This file will hold our reviews reducer describe('ImmutableJS Review reducer tests', () => { const state = List([ { id: 1, item_id: '200', reviewer: 'Bombadill', text: 'It needs a song really', rating: 4, flag: false }, { id: 2, item_id: '200', reviewer: 'Strider', text: `That's not what happened!`, rating: 3, flag: false }, { id: 3, item_id: '200', reviewer: 'Gollum', text: `Preciousss`, rating: 1, flag: true }, ]); describe('ADD_REVIEW TESTS', () => { const action = { type: 'ADD_REVIEW', id: 4, item_id: '200', reviewer: 'Gandalf', text: 'Not all those who wander are lost.', rating: 4, flag: false }; it('Should return a new state object when adding a review', () => { expect(state.size).to.equal(3); }); it('Should append the added review object to the new state object', () => { const newState = reviews(state, action); expect(reviews(state, action).size).to.equal(4); }); }); describe('DELETE_REVIEW TESTS', () => { const action = { type: 'DELETE_REVIEW', id: 3, item_id: '200' }; it('Should return a new state object when deleting a review', () => { expect(state.size).to.equal(3); }); it('Should return a state object without the deleted review', () => { const newState = reviews(state,action); expect(reviews(state, action).size).to.equal(2); expect(newState.indexOf({ id: 3, item_id: '200', reviewer: 'Gollum', text: `Preciousss`, rating: 1, flag: true })).to.equal(-1); }); }); describe('FLAG_REVIEW TESTS', () => { const action = { type: 'FLAG_REVIEW', id: 2, item_id: '200', flag: true }; const newState = reviews(state, action); it('Should return a new state object', () => { expect(newState).not.to.equal(state); }); it('Should return a state object with the specified review\'s flag property changed', () => { expect(newState.get(1).flag).to.equal(true); }); }); describe('RATE_REVIEW TESTS', () => { const action = { type: 'RATE_REVIEW', id: 1, item_id: '200', rating: 5 } const newState = reviews(state, action); it('Should return a new state object', () => { expect(newState).to.not.equal(state); // will assert that objects are not in the same slice of memeory }); it('Should return a state object with the specified review with the correct rating', () => { expect(newState.get(0).rating).to.equal(5); }); }); }); One of the things you'll notice is that we're not referring directly to values in our state object using dot or bracket notation. We now use get to access values from our state List object. Also, to compute the length of our immutable Lists, we use size and not length as we would have done. These two changes sum up the extent of any modifications we have to make to our test syntax. All in all, that was relatively painless. Running our tests Since we don't have any code to run them against, our tests should fail. Let's confirm that they do. Edit your package.json file to add this. package.json "scripts": { "test": "mocha --compilers js:babel-core/register" } This gets babel to transpile our code on the fly before mocha runs our tests. Now, we're ready to fail forward. Execute, npm test Expect the following npm ERR! Test failed. See above for more details. Take heart! Our failure is only temporary. We'll soon be in the green. Reviews reducer with Immutable.js Now that we have our reducer tests, let's finally write out our reducer. First, we define our action types. src/actionTypes.js const types = { reviews: { ADD_REVIEW: 'ADD_REVIEW', DELETE_REVIEW: 'DELETE_REVIEW', FLAG_REVIEW : 'FLAG_REVIEW', RATE_REVIEW: 'RATE_REVIEW' } } export default types; Finally, our reducer. src/reducers/reviews.js import { List, Map } from 'Immutable'; import types from '../../actionTypes'; const reviews = (state=List(), action) => { switch (action.type) { case types.reviews.ADD_REVIEW: const newReview = Map( { id: action.id, item_id: action.item_id, reviewer: action.reviewer, text: action.text, rating: action.rating, flag: false } ) return state.push(newReview); // Note that Immutable's push will return a new array case types.reviews.DELETE_REVIEW: return state.filter(review => review.id !== action.id); case types.reviews.FLAG_REVIEW: return state.map(review => review.id === action.id ? Object.assign({}, review, { flag: action.flag}): review) case types.reviews.RATE_REVIEW: return state.map(review => review.id === action.id ? {...review, rating: action.rating }: review) default: return state; } } export default reviews; Modifications made to the reducer How has our reviews reducer changed? Actually, not by much. - Instead of having an empty array as our initial state, we now have an empty Immutable.js List. - Newly created reviews aren't JavaScript objects anymore but Immutable.js Maps. - In the place of the spread operator, we push a new review into the state object. The Immutable Push unlike the native Node push will return a new state object. Let's make sure everything's running as it should by running npm test at the root of our project again. We expect our tests to pass. 8 passing (74ms) There you go. Conclusion If you've stuck with me till now, congratulations! Our state object is now immutable by default. We'll soon implement a view layer and the Redux store for our application, but we're off to a good start. You have a head start so go ahead. I'm excited to hear and see what you'll build with your newly acquired skills. All the code we've written can be found here. For comparison, I've written out the purely Node versions of our reducer tests and reducer without immutable. As always, I'd love your feedback on the article. Don't be shy, drop me a line in the comment box below. References If you're interested in learning more about Immutable.js, here are a few links to follow.
https://scotch.io/tutorials/using-immutablejs-in-react-redux-applications
CC-MAIN-2018-22
en
refinedweb
class Animal(models.Model): .... class Meta: abstract = True class Cat(models.Model, Animal): ... class Dog(models.Model, Animal): .... allData x = animal.allData()[0] # should return the first element in the array. django-model-utils This is not possible in one query. You have two options, one use to use django-model-utils or you can use django_polymorphic. Polymorphic is better suited to your task, however django-model-utils is made by a very prominent member of the django community and as such has a lot of good support. If I had to choose, I'd choose django-model-utilts since its made by a member of the django team, and thus will be supported. Polymorphic is supported by divio, which is a private company that heavily uses django based in Switzerland. As for how to select Sub-classes. You need to do two things using django-model-utils. Firstly, you need to change the objects variable in your model to InheritanceManager() like so (adapted from docs): from model_utils.managers import InheritanceManager class Place(models.Model): # ... objects = InheritanceManager() class Restaurant(Place): # ... class Bar(Place): # ... nearby_places = Place.objects.filter(location='here').select_subclasses() for place in nearby_places: # "place" will automatically be an instance of Place, Restaurant, or Bar The code above will return all Bars and Restaurants because it uses the select_subclasses.
https://codedump.io/share/vCwvlVKVhwXl/1/django-access-to-subclasses-items-from-abstract-class
CC-MAIN-2018-22
en
refinedweb
Magical numbers 7 and 2 with Python (7 minutes read) **** thats my first try for a Medium story**** 7 days a week, 7 dwarfs in Snowwhite 7 deadly sins 7 colors of a rainbow 7 fingers in my hand 7 ISO protocol levels 007 James Bond 7 Plus or Minus Two, by the cognitive psychologist George A. Miller I traveled the world and the 7 seas, by Eurythmics 7, the seven-continent model usually taught in China. 7, Seven Samurai, fim by Akira Kurosawa, 1954 What else ? Seven is more complex and more magical than you believe, so is 2. The magical 7 circuitries. Let’s have a look to this red countdown circuit (7,6,5,4,3,2,1,7or zero). Look at it during a long time, following the arrows ; now, are you zenfull, relax, stressless ? Did’t you notice some strange symetry ? 7 6 5 4 .. 3 2 1 zero ……4+ 3 = 7 …..5+ ….2 = 7 …6+ ………1 = 7 7+ …………..zero = 7 Now, let me swap these labels in my little triangle. 4 swaps with 3, 5 swaps with 2, 6 swaps with 1, 7 swaps with 3zero. Let’s paint the countdown circuit (zero,6,5,4,3,2,1,zero) with a green pencil. O miracle, the circuit is exactly the same, except an inverted orientation. Thank you for your attention (Clapping shows your admiration for the magic trick, hmm). Not clapping shows that you are aware of some properties of permtations in graph theory. Opening the complex unit circle. Here is a seventh degree equation : z power 7 = 1, inside the field on complex numbers. Ttey are seven solutions, let they be r0, r1, r2, r3, r4, r5and r6. Some people used to draw them on the unit trigonometric circle. (source: en.wikipedia, complex number) This image shows a visualisation of the square to sixth roots of a complex number z, in polar form reiφ where φ = arg z and r = |z | — if z is real, φ = 0 or π with Euler’s formula. My image is simpler beacause Z=1, so the visualisation of the seven of the 7 roots is the following : Opening the Python box. It’s time to illustrate some circle properties with a little help from Python langage. A) please add r1, r2and r4. You obtain p124, red-colored, I draw this addition with the parallelogram rule, firstly p12 with the green #, secondly p124 with the blue #. Then, you can verify at home on your kitchen table, the length of segment Op124 is square root of 2, who should have guessed ? And p124 is projected on the x-axis in the middle of OM, who should ask for more ? The python programm import math alpha_rad = math.pi*2/7 print(“alpha_rad==” , alpha_rad) # hello Code Like a Girl ! # r0 = complex(1) r1 =complex(math.cos( 1*alpha_rad ),math.sin( 1*alpha_rad )) r2 =complex(math.cos( 2*alpha_rad ),math.sin( 2*alpha_rad )) r3 =complex(math.cos( 3*alpha_rad ),math.sin( 3*alpha_rad )) r4 =complex(math.cos( 4*alpha_rad ),math.sin( 4*alpha_rad )) r5 =complex(math.cos( 5*alpha_rad ),math.sin( 5*alpha_rad )) r6 =complex(math.cos( 6*alpha_rad ),math.sin( 6*alpha_rad )) print(“r0 r1 r2 r3 r4 r5 r6==” , r0,r1,r2,r3,r4,r5,r6) p12 = r1+r2 p124 = p12+r4 print(“ p12,p124==” , p12,p124 ) x= p124.real y= p124.imag print(“x y==” ,x,y ) print(“ math.sqrt(7)/2==” , math.sqrt(7)/2 ) and the result is >>> alpha_rad== 0.8975979010256552 r0 r1 r2 r3 r4 r5 r6== (1+0j) (0.6234898018587336+0.7818314824680298j) (-0.22252093395631434+0.9749279121818236j) (-0.900968867902419+0.43388373911755823j) (-0.9009688679024191–0.433883739117558j) (-0.2225209339563146–0.9749279121818236j) (0.6234898018587334–0.7818314824680299j) p12,p124== (0.40096886790241926+1.7567593946498534j) (-0.4999999999999999+1.3228756555322954j) x y== -0.4999999999999999 1.3228756555322954 math.sqrt(7)/2== 1.3228756555322954 B) do you see the axis of symetry in half-plotted line ? This line is also the conjugacy axis of two complex numbers. So r6 is the conjugate of r1, r5 is the conjugate of r2, r3 is the conjugate of r4, p653 is the conjugate of p124. If you multiply p124 and p653 you obtain x²+y², that’s a well-known formula oof conjugate complex numbers. Let’s look with athis little python programm : p65 = r6+r5 p653 = p65+r3 print(“ p65,p653==” , p65,p653 ) result_multiply = p124*p653 print(“ result_multiply==” , result_multiply ) and the result is >>> p65,p653== (0.4009688679024188–1.7567593946498534j) (-0.5000000000000002–1.3228756555322951j) result_multiply== (1.9999999999999998–5.551115123125783e-16j) C) so, w have got two results : with Pythagore theorem of O-m-p124 triangle sqrt(2)² = ( 1/2 )²+( sqrt(7)/2)² that is a pretty magic relationship between 7 and 2. ( r1 + r2 + r4 )*( r6 + r5 + r3 ) = 2 you can notice that 1–2–4 are the labels at the base of the first red circuit and 6–5–3 those of the green circuit, hence I chose ( r1 + r2 + r4 ) and ( r6 + r5 + r3 ). Opening several substraction tables. What is the next question ? Why are 1–2–4 the labels at the base of the red circuit, and not 1–2–3 ? the answer is not that simple. Now let 0 1 2 3 4 5 6 be the integer numbers modulo 7 where zero=7. The full substraction table is now that substraction subtable the blue cells contain only 0 1 2 5 6, but 4 and 3 are missing. Another substraction subtable nobody is missing in blue cells, but that table is 4x4 sized, it’s heavy. Endly this smart substraction subtable smart because 0 1 2 3 4 5 6 all occur in the blue cells and its size is 3x3. We call it a minimal substraction subtable. That’s the reason why I chose 1–2–4 for labelling the base of the red circuit. You easily can verify that (6,5,3)x(6,5,3) is a suitable minimal substraction subtable too. What a strange relationship between 7 and 2 ! Abstract before a further story. We got tree magical numbers : 7, 3, 1, with four equations : 2=3–1 sqrt(3–1)² = ( 1/2 )²+( sqrt(7)/2 )² ( r1 + r2 + r4 )*( r6 + r5 + r3 ) = 3–1 (3–1)**0 + (3–1)**1 + (3–1)**2 =7 I call (7, 3, 1) a magical triptych. And what about the ( 3, 2 , 1 ) triptych ? Come on to the magical mystery tour. (dedicated to Pythagore, Euler, Fano, Beattles, Claude Berge)
https://medium.com/@m.bailly/magical-numbers-7-and-2-with-python-d5a2d8f360d3
CC-MAIN-2018-22
en
refinedweb
Retrieve lists of free HTTP proxies from online sites. Project description Package Description GetProx is a library for retrieving lists of free HTTP proxies from various online sites. Installation The package may be installed as follows: pip install getprox Usage Examples To retrieve proxies from all available sources, invoke the package as follows: import getprox proxy_uri_list = getprox.proxy_get() Proxies are returned in format. By default, the proxies will be tested using a simple timeout test to determine whether they are alive. A list of supported proxy sources can be obtained via proxy_src_list = getprox.sources() Proxies may also be obtained from a specific source or sources. For example: proxy_uri_list = getprox.proxy_get('letushide') Internally, proxy retrieval and testing is performed asynchronously; one can also access the asynchronous mechanism as follows: p = getprox.ProxyGet() # .. wait for a while .. proxy_src_list = p.get() Instantiation of the ProxyGet class will launch threads that perform retrieval and testing. If the threads finish running, the get() method will return the retrieved proxy URIs; if not, the method will return an empty list. To Do - Add support for more proxy sources. - Expose proxy selection options for specific sources. - Provide more robust proxy checking algorithm. License This software is licensed under the BSD License. Project details Release history Release notifications Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/getprox/
CC-MAIN-2018-22
en
refinedweb
05 Upgrade to Java SE 7 Programmer Demo Product - For More Information - Visit: Edition = DEMO ProductFull Version Features: 90 Days Free Updates 30 Days Money Back Guarantee Instant Download Once Purchased 24/7 Online Chat Support Page | 1 Preparation Material Question: 1 Which statement is true about the take method defined in the WatchService interface? A. Retrieves and removes the next watch key, or returns null of none are present. B. Retrieves and removes the next watch key. If a queued key is not immediately available, the program waits for the specified wait time. C. Retrieves and removes the next watch key: waits if no key is yet present. D. Retrieves and removes all pending events for the watch key, returning a list of the events that were retrieved. Answer: C Explanation: The WatchKey take() method retrieves and removes next watch key, waiting if none are yet present. Note: A watch service that watches registered objects for changes and events. For example a file manager may use a watch service to monitor a directory for changes so that it can update its display of the list of files when files are created or deleted.. Reference: Interface WatchService Question: 2 Given the code fragment: private static void copyContents (File source, File target) { try {inputStream fis = new FileInputStream(source); outputStream fos = new FileOutputStream (target); byte [] buf = new byte [8192]; int i; while ((i = fis.read(buf)) != -1) { fos.write (buf, 0, i); } //insert code fragment here. Line ** System.out.println ("Successfully copied"); } Which code fragments, when inserted independently at line **, enable the code to compile? A. } catch (IOException | NoSuchFileException e) { System.out.println(e); } B. } catch (IOException | IndexOutOfBoundException e) { System.out.println(e); } C. } catch (Exception | IOException | FileNotFoundException e ) { System.out.println(e); } D. } catch (NoSuchFileException e ) { System.out.println(e); } Page | 2 Preparation Material E. } catch (InvalidPathException | IOException e) { System.out.println(e); } Answer: B, D, E Explanation: B: Two mutually exclusive exceptions. Will work fine. D: A single exception. Will work fine. E: Two mutually exclusive exceptions. Will work fine. Note: In Java SE 7 and later, a single catch block can handle more than one type of exception. This feature can reduce code duplication and lessen the temptation to catch an overly broad exception. In the catch clause, specify the types of exceptions that block can handle, and separate each exception type with a vertical bar (|). Note 2: NoSuchFileException: Checked exception thrown when an attempt is made to access a file that does not exist. InvalidPathException: Unchecked exception thrown when path string cannot be converted into a Path because the path string contains invalid characters, or the path string is invalid for other file system specific reasons.. Incorrect answers: A: This first exception is of type IOException; therefore, it catches any IOexception, including NoSuchFileException. This code will not compile. C: This first exception is of type Exception; therefore, it catches any exception, including IOException and FileNotFoundException. This code will not compile. Question: 3 Which two statements are true about the walkFileTree method of the files class? A. The file tree traversal is breadth-first with the given FileVisitor invoked for each file encountered. B. If the file is a directory, and if that directory could not be opened, the postVisitFileFailed method is invoked with the I/O exception. C. The maxDepth parameter’s value is the maximum number of directories to visit. D. By default, symbolic links are not automatically followed by the method. Answer: C, D Explanation: C: The method walkFileTree(Path start, Set<FileVisitOption> options, int maxDepth, FileVisitor<? super Path> visitor) walks a file tree. The maxDepth parameter is the maximum number of levels of directories to visit. A value of 0 means that only the starting file is visited, unless denied by the security manager. A value of MAX_VALUE may be used to indicate that all levels should be visited. The visitFile method is invoked for all files, including directories, encountered at maxDepth, unless the basic file attributes cannot be read, in which case the visitFileFailed method is invoked. D: You need to decide whether you want symbolic links to be followed. If you are deleting files, for example, following symbolic links might not be advisable. If you are copying a file tree, you might want to allow it. By default, walkFileTree does not follow symbolic links. Incorrect answers: A: A file tree is walked depth first, but you cannot make any assumptions about the iteration order that subdirectories are visited. Page | 3 Preparation Material B: The method visitFileFailed(T file, IOException exc) is invoked for a file that could not be visited. This method is invoked if the file's attributes could not be read, the file is a directory that could not be opened, and other reasons. However, there is no method named postVisitFileFailed. Reference: The Java Tutorials, Walking the File Tree Reference: walkFileTree Question: 4 Which code fragments print 1? A. String arr [] = {"1", "2", "3"}; List <? extends String > arrList = new LinkedList <> (Arrays.asList (arr)); System.out.println (arrList.get (0)); B. String arr [] = {"1", "2", "3"}; List <Integer> arrList = new LinkedList <> (Arrays.asList (arr)); System.out.println (arrList.get (0)); C. String arr [] = {"1", "2", "3"}; List <?> arrList = new LinkedList <> (Arrays.asList (arr)); System.out.println (arrList.get (0)); D. String arr [] = {"1", "2", "3"}; List <?> arrList = new LinkedList <?> (Arrays.asList (arr)); System.out.println (arrList.get (0)); E. String arr [] = {"1", "2", "3"}; List <Integer> extends String > arrList = new LinkedList <Integer> (Arrays.asList (arr)); System.out.println (arrList.get (0)); Answer: A, C Explanation: Note: You can replace the type arguments required to invoke the constructor of a generic class with an empty set of type parameters (<>) as long as the compiler can infer the type arguments from the context. This pair of angle brackets is informally called the diamond. Incorrect answers: B: The Array is of type char. The List is of type Integer. Incompatible types. E: Type mismatch (Integer and char). Question: 5 Given the code fragment: public static void main(String[] args) { String source = "d:\\company\\info.txt"; String dest = "d:\\company\\emp\\info.txt"; //insert code fragment here Line ** } catch (IOException e) { System.err.println ("Caught IOException: " + e.getmessage(); } } Which two try statements, when inserted at line **, enable the code to successfully move the file info.txt to the destination directory, even if a file by the same name already exists in the destination directory? A. try {FileChannel in = new FileInputStream(source).getChannel(); FileChannel out = new FileOutputStream(dest).getChannel (); in.transferTo (0, in.size(), out); B. try {Files.copy(Paths.get(source), Paths.get(dest)); Files.delete(Paths.get(source)); Page | 4 Preparation Material C. try {Files.copy(Paths.get(source), Paths.get(dest)); Files.delete(Paths.get(source)); D. try {Files.move (Paths.get(source), Paths.get(dest)); E. try {BufferedReader br = Files.newBufferedReader(Paths.get(source), Charset.forName ("UTF-8")); BufferedWriter bw = Files.newBufferedWriter (Paths.get(dest), Charset.forName ("UTF-8")); String record = ""; while ((record = br.readLine()) != null){ bw.write (record); bw.newLine(); } Files.delete(Paths.get(source)); Answer: B, D Explanation: Incorrect answers: A: Copies (not moving) the file, but the original file is left. C: Moves a file, but only if the destination does not exist. E. The file is moved fine, but the content of the file is lost. Question: 6 What design pattern does the Drivermanager.getconnection () method characterize? A. DAO B. Factory C. Singleton D. composition Answer: B Explanation: DriverManager has a factory method getConnection() that returns a Connection object. Note 1: A factory method is a method that creates and returns new objects. The factory pattern (also known as the factory method pattern) is a creational design pattern. A factory is a Java class that is used to encapsulate object creation code. A factory class instantiates and returns a particular type of object based on data passed to the factory. The different types of objects that are returned from a factory typically are subclasses of a common parent class. Note 2: The method DriverManager.getConnection establishes a database connection. This method requires a database URL, which varies depending on your DBMS. The following are some examples of database URLs: MySQL, Java DB. Question: 7)); Page | 5 Preparation Material Answer: C Explanation: DateFormat is an abstract class that provides the ability to format and parse dates and times. The getDateInstance() method returns an instance of DateFormat that can format date information. It is available in these forms: static final DateFormat getDateInstance( ) static final DateFormat getDateInstance(int style) static final DateFormat getDateInstance(int style, Locale locale) The argument style is one of the following values: DEFAULT, SHORT, MEDIUM, LONG, or FULL. These are int constants defined by DateFormat. Incorrect answers: A, B, E: Incorrect syntax for the Locale. The correct syntax is: Locale.UK D: Incorrect syntax. Question: 8 Given three resource bundles with these values set for menu1: ( The default resource bundle is written in US English US resource Bundle Menu1 = small French resource Bundle Menu1 = petit Chinese Resource Bundle Menu = 1 And given the code fragment: Locale.setDefault (new Locale("es", "ES")); // Set default to Spanish and Spain loc1 = Locale.getDefault(); ResourceBundle messages = ResourceBundle.getBundle ("messageBundle", loc1); System.out.println (messages.getString("menu1")); What is the result? A. No message is printed B. petit C. : D. Small E. A runtime error is produced Answer: E Explanation: Compiles fine, but runtime error when trying to access the Spanish Resource bundle (which does not exist): Exception in thread "main" java.util.MissingResourceException: Can't find bundle for base name messageBundle, locale es_ES Question: 9 Given: import java.util.*; public class StringApp { public static void main (String [] args) { Set <String> set = new TreeSet <> (); set.add("X"); set.add("Y"); set.add("X"); set.add("Y"); Page | 6 Preparation Material set.add("X"); Iterator <String> it = set.iterator (); int count = 0; while (it.hasNext()) { switch (it.next()){ case "X": System.out.print("X "); break; case "Y": System.out.print("Y "); break; } count++; } System.out.println ("\ncount = " + count); } } What is the result? A. X X Y X Y count = 5 B. X Y X Y count = 4 C. X Y count = s D. X Y count = 2 Answer: D Explanation: A set is a collection that contains no duplicate elements. So set will include only two elements at the start of while loop. The while loop will execute once for each element. Each element will be printed. Note: * public interface Iterator An iterator over a collection. Iterator takes the place of Enumeration in the Java collections framework. Iterators differ from enumerations in two ways: Iterators allow the caller to remove elements from the underlying collection during the iteration with well-defined semantics. Method names have been improved. * hasNext public boolean hasNext() Returns true if the iteration has more elements. (In other words, returns true if next would return an element rather than throwing an exception.) public Object next() Returns the next element in the iteration. Question: 10 Given the code fragment: List<Person> pList = new CopyOnWriteArrayList<Person>(); Which statement is true? A. Read access to the List should be synchronized. Page | 7 Preparation Material B. Write access to the List should be synchronized. C. Person objects retrieved from the List are thread-safe. D. A Person object retrieved from the List is copied when written to. E. Multiple threads can safely delete Person objects from the List. Answer: C Explanation: CopyOnWriteArrayList produces a thread-safe variant of ArrayList in which all mutative operations (add, set, and so on) are implemented by making a fresh copy of the underlying array. Note: his.. Reference: java.util.concurrent.CopyOnWriteArrayList<E> Page | 8 Preparation Material Demo Product - For More Information - Visit: 20% Discount Coupon Code: 20off2016 Page | 9
http://www.slideserve.com/certschief3/1z0-807-exam-certification-test
CC-MAIN-2016-50
en
refinedweb
Your message dated Fri, 14 Oct 2016 11:51:37 +0000 with message-id <[email protected]> and subject line Bug#835374: Removed package(s) from unstable has caused the Debian Bug report #656425, regarding libapache2-mod-fastcgi: "filedescriptor (1069) larger than FD_SETSIZE (1024) found" in error log. FastCGI not.) -- 656425: Debian Bug Tracking System Contact [email protected] with problems --- Begin Message ---Package: libapache2-mod-fastcgi Version: 2.4.6-1 Severity: normal Tags: upstream patch The exact error message I had in error_log is as follow : [Thu Jan 12 07:36:56 2012] [error] [client 127.0.0.1] (2)No such file or directory: FastCGI: failed to connect to server "/var/alternc/cgi-bin/php52.fcgi": socket file descriptor (1345) is larger than FD_SETSIZE (1024), you probably need to rebuild Apache with a larger FD_SETSIZE This bug has already been reported for apache1.3 long ago : #280206 but was never fixed. It is still buggy in apache2, but only appears when you have more than 1024 file descriptors opened in Apache itself (in my case, more than 512 Vhosts with 2 filehandle per vhost...) The FD_SETSIZE check from fastcgi source code definitely looks like a buggy check: It checks that a file descriptor number is <1024, and pretends it may be above ulimit limits if that's the case... but that's useless: if you try to open a file above ulimit limits, you will have 0 as file descriptor returned from open() syscall... More than that, it compares that file descriptor number with FD_SETSIZE, which is definitely not the maximum number of file an apache2 can open, you may have made it far higher with ulimit -n in the system configuration ... So the patch I'm using since (successfully) is just removing that useless check ... Hope this can help solving this, don't hesitate to ask for help on this package, as you can see, I use it on many vhosts ;) -- System Information: Debian Release: 6.0.3 APT prefers stable APT policy: (500, 'stable') Architecture: i386 (i686) Kernel: Linux 2.6.32-5-vserver-686-bigmem (SMP w/4 CPU cores) Locale: LANG=fr_FR.UTF-8, LC_CTYPE=fr_FR.UTF-8 (charmap=UTF-8) Shell: /bin/sh linked to /bin/bash Versions of packages libapache2-mod-fastcgi depends on: ii apache2.2-common 2.2.16-6+squeeze4 Apache HTTP Server common files ii libc6 2.11.2-10 Embedded GNU C Library: Shared lib libapache2-mod-fastcgi recommends no packages. libapache2-mod-fastcgi suggests no packages. -- no debconf information--- libapache-mod-fastcgi-2.4.6.orig/mod_fastcgi.c 2007-11-13 00:00:10.000000000 +0100 +++ libapache-mod-fastcgi-2.4.6/mod_fastcgi.c 2012-01-11 18:08:56.000000000 +0100 @@ -1366,16 +1366,6 @@ return FCGI_FAILED; } -#ifndef WIN32 - if (fr->fd >= FD_SETSIZE) { - ap_log_rerror(FCGI_LOG_ERR, r, - "FastCGI: failed to connect to server \"%s\": " - "socket file descriptor (%u) is larger than " - "FD_SETSIZE (%u), you probably need to rebuild Apache with a " - "larger FD_SETSIZE", fr->fs_path, fr->fd, FD_SETSIZE); - return FCGI_FAILED; - } -#endif /* If appConnectTimeout is non-zero, setup do a non-blocking connect */ if ((fr->dynamic && dynamicAppConnectTimeout) || (!fr->dynamic && fr->fs->appConnectTimeout)) { --- End Message --- --- Begin Message ---Version: 2.4.7~0910052141-1.2+rm Dear submitter, as the package libapache-mod-fastcgi ---
https://www.mail-archive.com/[email protected]/msg532490.html
CC-MAIN-2016-50
en
refinedweb
This action might not be possible to undo. Are you sure you want to continue? Dr. Vandana Bansal STRUCTURE 11.0 Introduction 11.1 Objective 11.2 Meaning of important terms 11.3 No loss can be set off against winnings from lotteries, crossword puzzles 11. 4 Steps involved in set off and carry forwards 11.5 Inter source adjustment 11.5.1 Loss from a speculation business 11.5.2. Loss from the activity of owning and maintaining race horses 11.5.3. Long term capital loss 11.5.4. Loss from a source, which is exempt 11.6 Inter-head adjustment 11.6.1 Loss under the Head Capital Gains 11.6.2 Loss under the Head Business or Profession 11.7 Carry forward and set off of losses 11.7.1 Carry forward and set off of loss from house property 11.7.2 Carry forward and set off of business losses 11.7.3. Set off and carry forward of speculation loss 11.7.4 Capital Loss 11.7.5 Loss on Owning and Maintaining Race Horses 11.8 Let Us Sum Up 11.9 Glossary 11.10 Self Assessment Exercise 11.11Further Readings ________________________________________________________________________ 11.0 INTRODUCTION Income-tax is a composite tax on the total income of a person earned during a period of one previous year. There might be cases where an assessee has different sources of income under the same head of income. Similarly, he may have income under different heads of income. It might also happen that the net result from a particular source/head may be a loss. This loss can be set off against other sources/head in a particular manner. For example, where a person carries on two businesses and one business gives him a loss and the other a profit, then the income under the head ‘Profits and gains of business or profession’ will be the net income i.e. after the adjustment of the loss. Similarly, if there is a loss under one head of income, it should normally be adjusted against the income from another 152 head of income while computing the Gross Total Income, of course subject to certain restrictions. These provisions for set off or carry forward and set off of loss are contained in sections 70 to 80 of Income-tax Act. __________________________________________________________________ 11.1 OBJECTIVE After going through this lesson you should be able to understand: If there is a loss sustained by the assessee, Whether such a loss can be • Set off against income from any other source/head and the restrictions for the same. • If it cannot be set off, can that loss be carried forward and, • What are the provisions/restrictions for the above and for how many years it can be carried forward 11.2 MEANING OF IMPORTANT TERMS Head of Income An assessee may have income from various sources like employment, business, interest, rent, etc. For the purpose of income tax we divide these incomes into five heads namely, Income from Salaries Income from house property Income from Business & Profession Capital Gains Income from other sources The rules for computation of income under each head is different ( You have already studied these five heads) Source of Income An assessee may have 3-4 sources of income under one particular head . For example a person might have two businesses A and B which are two sources of income under the same head business and profession . Similarly a person might be having two part time employments. He will receive salary from both the employers; each salary received is a source of income. But both are taxable under the head ‘Income from Salary.’ __________________________________________________________________ 11.3 NO LOSS TO BE SET OFF AGAINST WINNINGS FROM LOTTERIES, CROSSWORD PUZZLES, ETC. As discussed in the introduction, losses sustained by an assessee can be set off against other incomes subject to certain restrictions. The detailed provisions for the same will be discussed later. 153 The first and the foremost restriction that I.T. Act puts are ; No loss can be set off against any income from lottery, crossword puzzles, horse races, gambling, etc. Government does not promote such kind of activities which depend totally on luck/ chance and likelihood of losing is very high. Hence any income from these sources is fully taxable irrespective of any loss sustained by the assessee even from the same source or a different source or a different head. As already discussed in the lesson Income from other sources no expenditure or allowance is even allowed to be deducted from winnings from lotteries or crossword puzzle, etc. Hence it is very clear and obvious that no question arises of any loss to be set off against such incomes. Hence we can conclude any income from lottery, crossword puzzles, horse races, gambling, etc. is fully taxable always. No expenses even if they relate to earning of such income are allowed to be deducted. Furthermore, no loss can be set off against such income even if it is from the same source. __________________________________________________________________ 11.4 STEPS IN SET OFF AND CARRY FORWARDS There are three steps involved in this Step1 Inter-source adjustment under the same head of income (Para 10.5) Step 2 Inter head adjustment in the same assessment year. (Para 10.6). Step 2 is only applied if it is not possible to set off a particular loss under Step1 Step3 Carry forward of a loss (Para 10.7) this step is only applicable if a loss is not set off under Step 1 and 2. __________________________________________________________________ 11.5 INTER SOURCE ADJUSTMENT (SEC.70) Where the net result for any assessment year in respect of any source is a loss, the assessee shall be entitled to have the amount of such loss set off against his income from any other source under the same head. This may also be referred to as inter source adjustment. For example, if the assessee has two houses and the net income from one house is Rs. 84,000 while from the other house there is a loss of Rs. 60,000 the loss shall be adjusted against the income (as both fall under the same head i.e. ‘Income from house property) and after set off, the income under the head ‘income from house property’ shall be Rs. 24,000.This is Inter source adjustment. On the other hand, if an assessee has two houses and there is a net Income of Rs.80,000 from one house and loss of Rs.1,20,000 from another, the net loss under this head will be (-) 40,000. Such a loss can be set off against any kind of 154 income under any other head which is termed as Inter-head adjustment. This has been explained in detail later in the Chapter. However, there are certain exceptions to this general rule of inter source adjustment. In the following cases loss from one source cannot be adjusted against income from another source of income although it falls under the same head: __________________________________________________________________ 11.5.1 LOSS FROM SPECULATION BUSINESS “Speculation business” means a business in which contracts for the purchase or sale of any commodity including stocks and shares is periodically or ultimately settled without the actual delivery or transfer of the commodity or scrips . As per section 73 any loss arising from a speculation business carried on by an assessee shall be set off only against income of any another speculation business run by the assessee. It cannot be set off from a non-speculative business income, although income from both kinds of businesses are taxable under the head ‘profits and gains of business or profession’. However, a loss from a non-speculative business can be set off against income from speculation business but vice versa is not possible. Illustration 11.1: R carries only two businesses A and B. Business A is a manufacturing business while business B is a speculative business. State whether the loss can be set off in the following two situations. Situation I Rs. Manufacturing business Speculation business Solution In situation I, set off is not possible as speculation loss can be set off only against income from speculation business. In situation II, set off is possible and the manufacturing business loss will have to be set off against income from speculation business. (+) 800,000 (-) 3, 40,000 Situation II Rs. (-) 5, 00,000 (+) 2, 00,000 CHECK YOUR PROGRESS 155 Activity A X carries two businesses P and Q. Business P is a retail business while business Q is a speculative business. State whether the loss can be set off in the following two situations. Situation I Rs. Retail business Speculation business (+) 200,000 (-) 1, 20,000 Situation II Rs. (-) 3, 00,000 (+) 5, 00,000 Situation I ------------------------------------------------------------------------------------Situation II -----------------------------------------------------------------------------------__________________________________________________________________ 11.5.2. LOSS FROM THE ACTIVITY OF OWNING AND MAINTAINING RACE HORSES As per section 74A, the loss incurred by an assessee, in the activity of owning and maintaining race horses, shall only be set off against the income from such an activity. It cannot be set off against the income from any other sources. __________________________________________________________________ 11.5.3. LONG TERM CAPITAL LOSS Long term capital loss can be set off only against long-term capital gain. However, short-term capital loss can be set off from any capital gain (long-term or short-term) Illustration 11.2: Situation I Short-term capital gain Long-term capital gain (-) 2, 00,000 (+) 5, 20,000 Situation II (+) 5, 00,000 (-) 1, 80,000 Solution: In situation I, short-term capital loss of Rs. 2, 00,000 will have to be set off from long-term capital gain. Hence, the net long-term capital gain in this case shall be RS. 3, 20,000. In situation II, it is not possible to set off long-term capital loss from short-term capital gain. Hence, short-term capital gain of Rs. 5, 00,000 shall be taxable and Rs. 1, 80,000 of long-term capital loss shall have to be carried forward 156 _________________________________________________ 11.5.4. LOSS FROM A SOURCE WHICH IS EXEMPT Loss incurred by an assessee from a source of income which is exempt, cannot be set off against income from a taxable source. CHECK YOUR PROGRESS Activity B Fill in the blanks a) Set off of loss from one source of income to another source of income under the same head is known as ----------------. b) Speculation business losses can be set-off only against profits of -----------business. c) Long term capital loss can be set off only against ---------------------. d) No loss ( of whatsoever nature) can be set off against any income from ------------, crossword puzzle , etc. e) Loss from activity of owning & maintaining race horses can be set off only against income from ----------------. 11.6 INTER HEAD ADJUSTMENT (SECTION 71) As explained above, any loss from one source of income is firstly set off against any gain from another source within the same head. Any remaining loss can then be set off against Income from any other Head. This is known as Inter-Head adjustment. However, there are exceptions to this rule also as discussed below. As already discussed above, in the following cases no inter source adjustment is permitted, hence, the question does not arise of any inter-head adjustment. In other words following are the exceptions to inter-head adjustment also. • No loss of whatsoever nature can be set off against winnings from lotteries, crossword puzzles, card games etc. • Loss from a speculation business; • Loss from the activity of owning and maintaining race horses; • Loss from a source which is exempt. • Long-term capital loss- only from LTCC Besides the above mentioned exceptions which are applicable both in inter source adjustment and inter head adjustment, there are two more exceptions to inter head adjustment. They are; • Loss under the head capital gain ( Para 10.6.1) • Loss under the head business and profession ( Para 10.6.2) _________________________________________ 11.6.1 LOSS UNDER THE HEAD ‘CAPITAL GAINS’ 157 In case of an Inter-head adjustment of losses, any capital loss, whether short-term or long-term, shall not be allowed to be set off against income under any other head. It shall however be allowed to be carried forward. __________________________________________________________________ 11.6.2 LOSS UNDER THE HEAD BUSINESS OR PROFESSION [SECTION 7 (2A)] From the Assessment Year 2005-06, any loss under the head ‘Business and Profession’ cannot be set off against income from ‘Salaries’. However, it can be set off against the Income from any other head. Illustration 11.3 From the following information submitted to you, compute the taxable income in the following situation. Situation I Rs. Long term capital gain/loss Short term capital gain/loss Business income/loss Solution Situation I Rs. Capital gain Long term capital gain/loss Short term capital gain/loss Capital gain/loss after set off Set off of business income/loss Total income (+) 2, 80,000 (-) 50,000 2, 30,000 (-) 1, 80,000 1, 50,000 Situation II Rs. 50,000 (-) 1, 20,000 (-) 70,000 1, 40,000 1, 40,000* (+) 2, 80,000 (-) 50,000 (-) 1, 80,000 Situation II Rs. 50,000 (-) 1, 20,000 1, 40,000 *In situation II, capital loss of Rs. 70,000 will be carried forward and the total income shall be Rs.1, 40,000. Hence , we observe business loss can be set off against capital loss but vice-versa is not allowed. Illustration 11.4 From the following information submitted to you by Mr. X, calculate the gross total income for the A.Y 2006-07. I II Income from salary 2, 00,000 2, 00,000 Income from Bus/Prof (-) 50 ,000 (-) 50 ,000 Income from House Property _ 80,000 158 Solution In situation I his GTI would be 2, 00,000 and his loss from business and profession will be carried forward ( Any loss under the head Business & Profession cannot be set off against any income from Salary). In situation II business loss can be set off against income from House Property and his GTI would be Rs. 2, 30,000. CHECK YOUR PROGRESS Activity C Match the following: Table B (a) only against profit from speculation business ii). Loss from speculation business can (b) against winning from lottery or be set off any other kind of income. iii)loss of lottery cannot be set off (c) can be set off against income under any head of income iv)Loss under the head house (d) only against long term capital property gain __________________________________________________________________ Table A i).Long term capital loss can be set off 11.7 CARRY FORWARD AND SET OFF OF LOSSES If the losses could not be set off under the same head or under different heads in the same assessment year, such losses are allowed to be carried forward to be claimed as set off from the income of the subsequent assessment years. All losses are not allowed to be carried forward. Another very important aspect is that in case of carry forward, losses can be only set off under the same head of income only. Inter head adjustment is not allowed. Only the following losses are allowed to be carried forward and set off in the subsequent years. a) House property loss; ( Para 10.7.1) b) Business loss; ( Para 10.7.2) c) Speculation loss; ( Para 10.7.3) d) Capital loss; ( Para 10.7.4) e) Loss on account of owning and maintaining race horses.( Para 10.7.5) Hence any loss under the head income from other sources is not allowed to be carried forward.( except race horses) 159 Compulsory filing of loss of return (Section 80): Although the above losses are allowed to be carried forward, it is allowed only when such loss has been determined in pursuance of a return of loss submitted by the assessee on or before the due date for filing of the returns prescribed under section 139(1) ( See Para --------) However loss under the head income from house property can be carried forward even if the return is not filed within the due date mentioned under section 139(1). __________________________________________________________________ 11.7.1 CARRY FORWARD AND SET OFF OF LOSS FROM HOUSE PROPERTY (SECTION 71B) A loss under the head house property will be allowed to be carried forward for 8 assessment years to claim it as a set off in the subsequent years under the head ‘Income from house property’. Therefore, if the loss of house property of the previous year 2003-2004 which could not be set off because of absence or inadequacy of the income of previous year 2003-2004, it may be carried forward for 8 assessment years succeeding assessment year 2004-2005 to be set off from income under the head house property. __________________________________________________________________ 11.7.2 CARRY FORWARD AND SET OFF OF BUSINESS LOSSES (SECTION 72) Where the loss under the head ‘profits and gains of business or profession’ other than loss from speculation, However it is subject to following conditions. I) Business losses can be adjusted only against business income: Business income may be from the same business in which the loss was incurred, or may be any other business. Business in respect of which a loss is incurred may or may not be continued. III) Losses can be set off only by the assessee who has incurred loss with a few exceptions like when a partnership firm is converted into a company, amalgamation of companies, etc. IV) Period of carry forward: Each year’s loss is a separate loss and no loss shall be carried forward for more than eight assessment years immediately succeeding the assessment year for which the loss was first computed. Therefore, a loss of previous year 2002-2003, i.e. assessment year 2003-2004 can be carried II) 160 forward till assessment year 2011-12. Besides the above, the following can also be carried forward indefinitely, as per income tax law: i) Unabsorbed depreciation ii) Unabsorbed capital expenditure on scientific research; iii) Unabsorbed expenditure on family planning. __________________________________________________________________ 11.7.3. SET OFF AND CARRY SPECULATION LOSS (SECTION 73). FORWARD OF As stated earlier, the loss of a speculation business of any assessment year is allowed to be set off only against the profits and gains of another speculation business in the same assessment year. If a speculation loss could not be set off from the income of another speculation business in the same assessment year, it is allowed to be carried forward for 8 assessment years immediately succeeding the assessment year for which the loss was first computed. Also, it can only be set off against the income of only a speculation business. It may be observed that it is not necessary that the same speculation business must continue in the assessment year in which the loss is set off. However, filing of return before the due date is necessary for carry forward of such a loss. __________________________________________________________________ 10.7.1 LOSS UNDER THE HEAD CAPITAL GAIN Loss on short term capital asset Any loss on short-term capital asset is allowed to be carried forward to be set-off in subsequent years against capital gains (short-term as well as long-term). The period of carry forward is 8 years. Loss on Long-term capital asset Any loss from long-term capital assets can also be carried forward to be set-off in subsequent years but against only long-term capital gains. The period of carry forward is 8 years. __________________________________________________________________ 10.7.5 LOSS ON OWNING AND MAINTAINING RACE HORSES SECTION 74 A (3) Any loss suffered by the assessee in respect of maintaining of race horses can be set-off against the income from the activity of owning and maintaining race horses in subsequent years .The period for carry forward of such a loss is only four years immediately succeeding the assessment year in which the loss was computed for the first time. 161 Illustration11.5 Following are the particulars of net incomes and losses of X for the year ending 31 March 2006. Find out his total income. Rs. 1. 2. Income from salary (net) Income from house property: i) Income from house A ii) Loss from house B Income from business: i) Profit from cloth business ii) Loss from hardware business iii) Profit from speculation business Income from capital gains: i) Short-term capital gain ii) Long-term capital gain iii) Loss from another long-term capital asset 1,50,000 5,000 8,000 20,000 45,000 15,000 10,000 5,000 18,000 3. 4. Solution: 1. Income from Salary 2. Income from house property Income from House A Less: Loss from B 3. Income from business: Profit from cloth business Profit from speculation business Less: Loss from hardware business 4. Income from capital gains i) Short-term capital gain ii) Long-term capital gain Less: Loss from long-term capital asset Long-term capital loss to be carried forward Total Income 1,50,000 5,000 -8,000 20,000 15,000 35,000 -45,000 -3,000 -10,000 10,000 5,000 -18,000 -13,000 _______ 1,47,000 Note: 1. Loss of hardware business is set-off against the profit of cloth business combined with profit from speculation. But a loss from speculation business, if there is any, is not allowed to be set-off against other business income. 2. Loss from long-term capital asset can be set-off against long-term capital gains only. Therefore capital loss of Rs 13,000 will be carried forward. 162 3. Loss from house property can be set off against income from house property or income under any other head in the same year.. 4. Loss from business has been set-off against STCG. __________________________________________________________________ 11.8 LET US SUM UP Set-off of losses means setting-off losses against the income of the same year. Where it is not possible to set-off the losses during the same assessment year in which they occurred, so much of the loss as has not been so set-off (only certain specified losses) can be carried forward for being set-off against his income in the succeeding years. Set-off can be inter source and Inter-head. Inter-source means when loss of one source is set-off against the income of some other source under the same head of income. Inter-head means when a loss remains unabsorbed from inter-source set-off, the balance of it can be set-off against income under other head of income. If both the adjustments are not possible then certain losses namely loss from house property, loss from business including speculation business, capital loss, and loss from activity of maintaining race horses can be carried forward. __________________________________________________________________ 11.9 GLOSSARY The various key words that arise in the chapter are: Inter-head adjustment: Where in respect of any assessment year the net result of the computation under any head of income is a loss. He shall be entitled to have the amount of such loss set-off against his income, if any, under any other head of income. Inter-source Adjustment: When there is more than one source of income under the same head, the loss from one or more sources is allowed to be set-off against income from the other source under the same head. Set-off losses: Setting off losses against the income of the same year. Carry forward of losses: The losses which cannot be set-off in the same year are carried to the next year to set-off against income of next year. __________________________________________ 11.10 SELF ASSESSMENT EXERCISE Q 1) State whether they are true or false: A) Speculation business losses can be set-off against profits of regular business. b) Non-speculative (regular) business losses can be set-off against speculation business profits. c) When inter-source Adjustment exhausts, inter-head adjustments begin. 163 d) Loss under the head house property can be carried forward for 8 years to be set-off in the following and subsequent seven years. E) If a speculation business is discontinued its brought forward losses cannot be carried forward any further to be set-off against profits of any other speculation business. F) Loss from business and profession can be set off against income from salary. Q 3) Discuss the provisions relating to set-off of losses in the following cases: A) Speculation loss B) Short term capital loss C) Long term capital loss D)Losses from horse race, gambling and crossword puzzles. Q 4) Discuss the conditions subject to which losses are allowed to be set –off in the current year and carried forward. 11.11 FURTHER READINGS AND SOURCES 1. Bhagwati Prasad, Law and practice of Income Tax, Navaman Prakashan, Aligarh 2. Mahesh Chandra & S.P. Goyal, Income Tax Law and practice, Himalaya Publishing House, Delhi 3. Vinod K. Singhania, Monica Singhania, Students Guide to Income Tax, Taxmann Publications Private Ltd. 4. Girish Ahuja & Ravi Gupta, Simplified Approach to Income Tax and Sales Tax, Sahitya Bhawan Publishers and Distributors Ltd., Agra 164 This action might not be possible to undo. Are you sure you want to continue? We've moved you to where you read on your other device. Get the full title to continue reading from where you left off, or restart the preview.
https://www.scribd.com/document/32753894/Set-Off-Nd-Carry
CC-MAIN-2016-50
en
refinedweb
Chatlog 2010-03-25 From RDFa Working Group Wiki See CommonScribe Control Panel, original RRSAgent log and preview nicely formatted version. 13:46:12 <RRSAgent> RRSAgent has joined #rdfa 13:46:12 <RRSAgent> logging to 13:46:37 <ivan> ivan has left #rdfa 13:47:12 <manu> manu has changed the topic to: RDFa WG Telecon. Agenda: (manu) 13:47:15 <manu> Agenda: 13:47:22 <manu> trackbot, start meeting 13:47:24 <trackbot> RRSAgent, make logs world 13:47:26 <trackbot> Zakim, this will be 7332 13:47:26 <Zakim> ok, trackbot; I see SW_RDFa()10:00AM scheduled to start in 13 minutes 13:47:27 <trackbot> Meeting: RDFa Working Group Teleconference 13:47:27 <trackbot> Date: 25 March 2010 13:47:43 <manu> Chair: Manu Sporny 13:52:29 <jeffs> jeffs has joined #rdfa 14:00:16 <Knud> Knud has joined #rdfa 14:01:09 <Zakim> the conference code is 7332 (tel:+1.617.761.6200 tel:+33.4.89.06.34.99 tel:+44.117.370.6152), manu 14:01:27 <Zakim> SW_RDFa()10:00AM has now started 14:01:28 <ShaneM> ShaneM has joined #rdfa 14:01:34 <Zakim> + +03539149aaaa 14:01:36 <Zakim> +??P6 14:01:50 <Zakim> + +1.415.524.aabb 14:01:52 <manu> zakim, I am ??P6 14:01:52 <Zakim> +manu; got it 14:02:21 <jeffs> zakim, I am aabb 14:02:21 <Zakim> +jeffs; got it 14:02:33 <Zakim> +McCarron 14:02:42 <ShaneM> zakim, McCarron is ShaneM 14:02:42 <Zakim> +ShaneM; got it 14:03:41 <manu> zakim, who is making noise? 14:03:53 <Zakim> manu, listening for 10 seconds I heard sound from the following: +03539149aaaa (40%), manu (100%), jeffs (3%) 14:04:54 <Zakim> +knud; got it 14:07:22 <manu> zakim, pick a victim 14:07:22 <Zakim> Not knowing who is chairing or who scribed recently, I propose manu 14:07:26 <manu> zakim, pick a victim 14:07:26 <Zakim> Not knowing who is chairing or who scribed recently, I propose knud (muted) 14:07:50 <Benjamin> Benjamin has joined #rdfa 14:08:33 <manu> Present: Jeff, Manu, Shane, Abhijit, Knud, Benjamin 14:14:32 <Benjamin> scribenick: Benjamin 14:14:33 <Benjamin> Topic: Action Items 14:08:19 <manu> ACTION: Shane to prepare RDFa Core 1.1 document for FPWD 14:08:19 <trackbot> Created ACTION-16 - Prepare RDFa Core 1.1 document for FPWD [on Shane McCarron - due 2010-04-01]. 14:08:57 <manu> ACTION: Jeff to prepare RDFa Cookbook document for FPWD for May 2010 14:08:57 <trackbot> Created ACTION-17 - Prepare RDFa Cookbook document for FPWD for May 2010 [on Jeffrey Sonstein - due 2010-04-01]. 14:10:00 <Zakim> + +49.631.205.7.aacc 14:10:28 <Knud> zakim, aacc is Benjamin 14:10:28 <Zakim> +Benjamin; got it 14:11:44 <manu> ACTION: Shane to prepare XHTML+RDFa 1.1 document for FPWD. 14:11:44 <trackbot> Created ACTION-18 - Prepare XHTML+RDFa 1.1 document for FPWD. [on Shane McCarron - due 2010-04-01]. 14:12:40 <Benjamin> scribenick: Benjamin 14:13:24 <manu> ACTION: Benjamin to prepare RDFa API document for FPWD 14:13:24 <trackbot> Created ACTION-19 - Prepare RDFa API document for FPWD [on Benjamin Adrian - due 2010-04-01]. 14:14:24 <manu> April 1st - Editors have a FPWD Editors Draft document published via 14:14:26 <manu> a W3C URL (though cvs.w3.org) 14:14:28 <manu> April 8th - Any Editors Draft corrections are made 14:14:30 <manu> April 15th - Notification to W3C Pubs to publish RDFa WG FPWDs 14:14:31 <manu> April 22nd - Slack time 14:15:12 <Benjamin> manu: Does this timeline for creating first drafts of FPWD drafts look good? Any objections to the timeline? 14:15:35 <Benjamin> ... three documents by April, 15 14:15:39 <jeffs> +1 on timeline & tasks 14:16:04 <Benjamin> ... any changes on the agenda? 14:16:00 <Benjamin> Topic: RDFa Vocabulary Proposal in FPWD 14:16:22 <manu> 14:17:43 <Benjamin> Manu: The straw poll items that have strong support are going to be part of the RDFa 1.1 FPWD specification 14:18:01 <jeffs> I can live with the external documents part, and I am very hesitant with pulling in external documents. 14:18:33 <Benjamin> Toby and Jeff do not agree with having an external document 14:19:54 <Benjamin> Jeff: it's more a hesitation than disagreement 14:21:20 <jeffs> expressed my concerns about having external document(s) and liking of xmlns: namespace approach 14:22:17 <Benjamin> Manu: We have discussed much on this concern on the mailing list - the downsides with having an external document. There is support to put it in the FPWD, let's do that and then see how the community responds to the FPWD. 14:23:25 <manu> 14:23:48 <Benjamin> ... Ivan is going to update the document above 14:24:16 <Benjamin> ... a similar version is in the rdfa.info wiki 14:25:15 <jeffs> I see no consensus on the 'combination of prefixes and keywords into a single "list of mappings" concept' proposal 14:25:22 <Benjamin> ... we should keep prefixes and keywords in separate documents. Any objections on not combining prefixes/keywords in the FPWD document? 14:25:25 <Benjamin> ... No objections, moving on. 14:25:28 <Knud> Topic: @token Proposal in FPWD 14:26:36 <Benjamin> ... token proposal is not going into the RDFa 1.1 document, no support for it as of right now. Any objections to @token proposal not going in RDFa document? 14:27:01 <Benjamin> ... Hearing none, let's move on. 14:27:16 <Benjamin> Topic: Default prefix proposal in FPWD 14:28:06 <manu> <div vocab=""><span property="foo">prop</span></div> 14:28:30 <manu> <> "prop" . 14:28:59 <Knud> vocab is good 14:29:01 <jeffs> vocab is simple and expressive 14:29:10 <Benjamin> Shane: I really like vocab 14:30:14 <Benjamin> manu: vocab is going to be in RDFa 1.1 FPWD spec 14:31:01 <jeffs> +1 to keeping sep until more discussion 14:31:46 <Benjamin> ... prefixes and keywords are kept separate for now 14:32:36 <Benjamin> ShaneM: No objections to keep them separate for now 14:32:40 <Benjamin> Topic: Alternative to xmlns: in FPWD 14:33:08 <manu> General agreement that RDFa 1.1 should have a mechanism to express mappings that does not depend on xmlns: (for example: @token, @vocab or @map) 14:33:24 <Benjamin> manu: We haven't really talked about the exact mechanism behind this 14:34:21 <manu> One option: prefix="foaf:" token="Person:" 14:35:03 <manu> Second option: prefix="foaf:" <--- prefixes can only be set, tokens are only set in RDFa Profile documents. 14:35:22 <Benjamin> ... explains options for defining prefixes and tokens 14:35:44 <manu> Third option: map="foaf:; Person:;" 14:36:35 <jeffs> I do not think we have talked about this enough to do much of anything "on paper" 14:37:36 <jeffs> the previous stuff has an effect upon this issue 14:39:15 <Zakim> +??P7 14:39:28 <Benjamin> ... Ok, then an alternative to xmlns: is not going into the FPWD RDFa 1.1 Core document yet, but there is strong support for such a mechanism. We'll work out the details over the next couple of weeks. 14:40:10 <Benjamin> Manu: Do Shane and Benjamin feel that they have enough information to proceed with editing the documents. Is there anything else they need to complete RDFa Core 1.1 FPWD, XHTML+RDFa 1.1 FPWD and RDFa DOM API FPWD? 14:41:13 <Benjamin> ... We have a tight schedule 14:40:10 <Benjamin> ... Ok, we're expecting draft FPWD documents to be available in W3C CVS by next Thursday. We'll review, make changes and get them to W3C Pubs by April 15th. Meeting adjourned. 14:41:36 <Zakim> -knud 14:41:40 <Zakim> -jeffs 14:43:25 <Benjamin> rrsagent, make minutes 14:43:25 <RRSAgent> I have made the request to generate Benjamin 14:44:00 <Zakim> -Benjamin 14:53:18 <mgylling> mgylling has joined #rdfa 15:11:24 <markbirbeck> markbirbeck has joined #rdfa 15:21:07 <Zakim> -ShaneM 15:21:09 <Zakim> -manu 15:21:12 <Zakim> -??P7 15:21:14 <Zakim> SW_RDFa()10:00AM has ended 15:21:15 <Zakim> Attendees were +03539149aaaa, +1.415.524.aabb, manu, jeffs, ShaneM, knud, +49.631.205.7.aacc, Benjamin # SPECIAL MARKER FOR CHATSYNC. DO NOT EDIT THIS LINE OR BELOW. SRCLINESUSED=00000123
https://www.w3.org/2010/02/rdfa/wiki/Chatlog_2010-03-25
CC-MAIN-2016-50
en
refinedweb
Command not fired with multipart form (JSF 2.2) Splash › Forums › PrettyFaces Users › Command not fired with multipart form (JSF 2.2) Tagged: h:inputFile, jsf 2.2, multipart, Prettyfaces This topic contains 35 replies, has 8 voices, and was last updated by biksg 1 year, 6 months ago. - AuthorPosts I’ve been trying to use the new h:inputFile from JSF 2.2, but I cannot get it to work. Ever since I have changed the “enctype” attribute of the “h:form” tag, the action is not being invoked anymore. After making some experiments, it looks like the problem is related to PrettyFaces. Here is what I am using: The pom.xml fragment: <dependency> <groupId>org.ocpsoft.rewrite</groupId> <artifactId>rewrite-servlet</artifactId> <version>2.0.5.Final</version> <scope>compile</scope> </dependency> <dependency> <groupId>org.ocpsoft.rewrite</groupId> <artifactId>rewrite-config-prettyfaces</artifactId> <version>2.0.5.Final</version> <scope>compile</scope> </dependency> A pretty-config.xml example: <url-mapping <pattern value="/como-funciona/" /> <view-id </url-mapping> A link example: <h:linkComo funciona</h:link> The form: <h:form Any suggestions? - This topic was modified 2 years, 11 months ago by rvcoutinho. - This topic was modified 2 years, 11 months ago by rvcoutinho. - This topic was modified 2 years, 11 months ago by rvcoutinho. Hmmm. That’s weird. I also use enctype="multipart/form-data"sometimes and everything works fine. Could you try to remove the mapping from the configuration and access the page directly via /how.jsfand check if it works in this case? Christian, Thanks for the response. I completely disabled PrettyFaces and Rewrite, and the problem still occurs. So, it is probably not related to PrettyFaces. I will keep digging. I appreciate your response anyway. Hi, Christian. There was a bug on the WildFly version at that time. But it has been fixed. The problem persists though. It looks like pretty faces might be messing with the multipart configuration of the new Servlet 3.1. Do you have any idea on how to solve it? Thanks in advance. - This reply was modified 2 years, 4 months ago by rvcoutinho. - This reply was modified 2 years, 4 months ago by rvcoutinho. I think I know what is causing this. The problem is that Rewrite calls HttpServletRequest.getParameter*()from a servlet filter which is not explicitly allowed for Servlet 3.1 multipart requests. For Tomcat there is a known workaround. But I’m not sure if there is a way in Wildfly to do something similar. Here are some references: (See “Why are file uploads not working correctly any more?”) Hi. I tried that before posting. I had no luck with the context.xml. Thank you anyway. Lincoln Baxter IIIKeymaster Hmm, @Christian, do you think this is worth asking about on the WildFly mailing list? I’m not sure I see a great solution for this. Lincoln Baxter IIIKeymaster Are we sure the problem is not due to the fact that we wrap the inbound request with our own, which might hide the fact that the request is actually a Multi-part request? I wonder if the servlet spec dictates how those requests need to be represented. Afaik it was non-standard, but I could be wrong. Hi, I have been debugging the rewrite lib for a while. But I have not been able to find the problem yet. Do you guys want a simple project in order to test it? Thank you. Here is my test app: After deploying to Wildfly, open this URL: Try uploading a file. This should work fine. The open UploadConfigProviderand add the Join to the configuration. Redeploy and open the following URL: The upload will not work any more. Hi, Thanks for the effort. Any suggestions? Unfortunately I didn’t find any time to have a deeper look at this issue. But this sample app may be a good starting point for debugging. Janario OliveiraParticipant Hi, I’m migrating a project to wildfly and I have the same problem. After some debugging I found in jsf-impl used in wildfly(2.2.5-jbossorg-3) a point that is evaluating different. com.sun.faces.renderkit.ResponseStateManagerImpl.isPostback():84 The postback method doesn’t find the request parameter “javax.faces.ViewState” so it jumps from restoreView to renderResponse phase. After that I did some debugging in RewriteFilter. So evaluating request.getParameter(“javax.faces.ViewState”) in org.ocpsoft.rewrite.servlet.RewriteFilter.doFilter():155 looks like: – not multipart request with rewrite – show the parameter – multipart request without rewrite – show the parameter – multipart request with rewrite – parameter comes null Do you have any idea why this is happening? - This reply was modified 2 years, 3 months ago by Janario Oliveira. Could you give Rewrite 2.0.12.Final a try which was just released? It fixes some issues that caused problems with multipart requests on Wildfly. See Janario OliveiraParticipant Hi, I tried in 2.0.12.Final and also in 2.0.13.Final-SNAPSHOT. Same happens. It works with this workaround: 1 – Add undertow-servlet as provided dependency: <dependency> <groupId>io.undertow</groupId> <artifactId>undertow-servlet</artifactId> <version>1.0.0.Final</version><!--same version o wildfly--> <scope>provided</scope> </dependency> 2 – extends DefaultServlet from undertow and add @MultipartConfig @WebServlet(name = io.undertow.servlet.handlers.ServletPathMatches.DEFAULT_SERVLET_NAME, urlPatterns = "/*") @MultipartConfig public class MultipartDefaultServlet extends io.undertow.servlet.handlers.DefaultServlet { } After that the parameters will appears in the filter and also be able to invoke jsf actions. The problem is that WildFly associate a parser according to the servlet mapped HttpServletRequestImpl.parseFormData io.undertow.server.handlers.form.FormEncodedDataDefinition always added io.undertow.server.handlers.form.MultiPartParserDefinition only added in multipart servlets, DefaultServlet doesn't handle multipart. If I map FacesServlet to same pattern of URLMapping FacesServlet will handle multipart. Trying to reproduce(without the workaround) I create a simple html(multipart.html) and try to see the parameters in a WebFilter <form enctype="multipart/form-data" method="post"> <input name="testeParam" value="testeValue"/> <input type="submit"/> </form> @WebFilter(urlPatterns = "/*") public class TestFilter implements Filter { doFilter request.getParameterMap() } The parameters are always empty. In JBoss same happens but when it is a request of a rewrite mapped(URLMapping) the parameters came in the filter. So, What do you think? Is it a bug in WildFly? but in basic sample(multipart.html) parameters are empty even in JBoss - This reply was modified 2 years, 2 months ago by Janario Oliveira. - AuthorPosts You must be logged in to reply to this topic.
http://www.ocpsoft.org/support/topic/command-not-fired-with-multipart-form-jsf-2-2/
CC-MAIN-2016-50
en
refinedweb
Servers/admin log/October 2008 From OpenStreetMap Wiki Contents October 30 - 08:45 Firefishy: "resumed" planet export steps after server power outage. October 29 - 14:10 TomH: Restore systems after power outage. October 24 - 16:45 Firefishy: Puff, restarted rails after unexpected system reboot - 20:18 Jburgess: Puff, not responding to web requests, kernel task ata_aux spinning cpu, forced rebooted - 21:50 Firefishy: Wiki, mod_evasive, wiki cache whitelisted. Wiki cache timeouts lowered drastically - abort quicker rather than queue old-requests. PHP APC updated. October 23 - 22:57 Jburgess: Dev, Internal NIC jammed earlier in the day, Firefishy rebooted machine. October 22 - 09:00 Firefishy: Wiki, blocked known bots. Additionally yacybot, tkwsearch October 20 - 21:50 Firefishy: Swapped Dev's disks into Dulcy. Dev's PSU had blown. October 16 - 19:15 Firefishy: Munin added temperatures from Puff, Fuchur. October 15 - 19:25 Firefishy: Puff, Fuchur added HP hw advisory fan leads. Less noise. - 09:40 Firefishy: Restart mysql on dev - 08:47 TomH: Database hung, server restarted. October 13 - 17:55 Firefishy: Wiki, patched Mediawiki to version 1.13.2. October 12 - 02:00 Firefishy: DB, disabled spammer's account 'telephonecalli' and removed diary spam. October 10 - 12:00 Firefishy: Wiki, disabled $wgGenerateThumbnailOnParse and moved to efficient Thumb.php + mod_rewrite. - 04:09 Firefishy: Wiki, enabled search for language namespaces. - 04:00 Firefishy: Puff installed ruby-mysql-2.7.7. TomH has installed on Draco and Sarel. October 9 - 10:25 Firefishy: Norbert - test ruby-mysql-2.7.7 and increase ulimits October 3 - 03:50 Firefishy: wiki implemented DE, FR, ES, IT and NL namespaces. More another time. Some pages require Template Linking Fixes. October 2 - 18:20 Firefishy: db purged mysql binary logs prior to 2008-09-01.
http://wiki.openstreetmap.org/wiki/Servers/admin_log/October_2008
CC-MAIN-2016-50
en
refinedweb
When you're slogging away on a project like a migrations framework, it can seem a little unforgiving. You spend so many commits just laying down foundations and groundwork that it's such a relief when things finally start to work together - something I've always known as a black triangle. You can imagine, then, how happy I am with the progress made this week. Not only did the autodetector get a lot better, but there are now commands. Not only that, but you can frigging migrate stuff with them. I Should Calm Down A Bit I know, that might seem like the entire purpose of a migrations framework, but it's nice to finally get all this code I've been planning for years (quite literally in some cases) working together and playing nicely. Enough talk. Let's look at some examples. Here's a models.py file I found lying around: from django.db import models class Author(models.Model): name = models.CharField(max_length=255) featured = models.BooleanField() Pretty simple, eh? Let's make a migration for it! Of course, the command to do that has changed; in South, you had to run manage.py schemamigration appname for each app you wanted to migrate. Now, you just run manage.py makemigrations: $ ./manage.py makemigrations Do you want to enable migrations for app 'books'? yes Migrations for 'books': 0001_initial.py: - Create model Author makemigrations will scan all apps and output all changes at once, possibly involving multiple migrations for each app. It will know to add dependencies between apps with ForeignKeys to each other, and to split up a migration into two if there's a circular ForeignKey reference. There's no more --auto, no more --initial, and it'll even remind you to create migrations for new apps (don't worry, it won't prompt more than once). Let's make some changes to our models.py file; in particular, I'm going to allow longer names, and add an integer rating column: from django.db import models class Author(models.Model): name = models.CharField(max_length=500) featured = models.BooleanField() rating = models.IntegerField(null=True) Let's run makemigrations: $ ./manage.py makemigrations Migrations for 'books': 0002_auto.py: - Add field rating to author - Alter field name on author Notice how it's telling you nicely what each migration contains. There's also some colour on these commands to make them easier to read on the console; making migrations should be a pleasant experience, after all. The Challenges Of course, the result is lovely, but I'd like to look closer at one particular challenge - that of autodetection. Autodetection is a very deep topic, and one I'll doubtless return to. However, the particular problem this week was having the autodetector intelligently discover new apps to add migrations to. In South, you have to manually enable migrations for an app using the --initial switch to schemamigration, but I wanted to eliminate that distinction here. It's trivial enough to detect apps without migrations which need them, but that's not quite enough. The problem is, you see, that there's plenty of apps that don't have migrations and don't need them. Third-party libraries, old internal packages with manual SQL migrations, and of course our own django.contrib (though I'm sure a few of those will inevitably grow migrations). Thus, without any sort of extra code, makemigrations will prompt every time you run it about each of these unmigrated apps. That's going to get very annoying, and so I devoted quite a bit of thought to how to address this. There's no way to write a marker into the apps themselves - third-party libraries may be shared and/or read-only - and so there are only two solutions: - A setting, called UNMIGRATED_APPS - An autogenerated file in the project directory While we're trying to have less settings as part of Django - there's currently way too many - I feel that UNMIGRATED_APPS is a good fit to INSTALLED_APPS and fits the Django culture better. It does mean having to update the settings file each time you add an app you don't want to migrate, rather than makemigrations updating an autogenerated file for you, but the command can remind you of this and even print you the new value ready to paste into your settings file. Plus, it means migration refuseniks can just set UNMIGRATED_APPS = INSTALLED_APPS and get on with coding. Opinions on this are of course always welcome, via Twitter or email. One More Thing There's one final feature I want to show off. If you ever renamed a field in South you'd know that it detected it as a removal and an addition and lost all your data. That's no more. Behold: from django.db import models class Author(models.Model): name = models.CharField(max_length=500) featured_top = models.BooleanField() rating = models.IntegerField(null=True) $ ./manage.py makemigrations Migrations for 'books': 0003_auto.py: - Rename field featured on author to featured_top Don't worry, there's more nice features like that in the works.
http://www.aeracode.org/2013/6/20/tunnel-lights/
CC-MAIN-2016-50
en
refinedweb
In today’s Programming Praxis exercise, our goal is to fix something that annoys us in our programming environment. Let’s get started, shall we? The first annoyance that came to mind for me is one I encountered in the previous exercise. While I love the Parsec library in general, it is not without its little niggles. Specifically, there were two points: - The default method of parsing numbers is to use the functions defined on GenTokenParser, which requires two additional import lines and requires an extra parameters after the integer and float parsers. - I normally use the operators from Control.Applicative for composing parsers, but they’re right-associative and have an impractical precedence, which results in lots of parentheses. Today’s exercise was a good excuse to finally sit down and write a small library to fix these issues once and for all. And since I want to be able to easily use it in future projects and exercises, I’ve put it up on Hackage. Note: If you’re reading this shortly after I post it: documentation always takes a while to generate on Hackage, so you might need to wait a bit. I won’t cover all the functions of the library here (that’s what the documentation is for), so instead we’ll see what the gas mileage program looks like when using this new library: Some imports: import Text.Parsec import Text.Parsec.Utils import Text.Printf The showLog function is unchanged from last time. showLog :: [(Float, Float)] -> [String] showLog es = "Miles Gals Avg" : "------ ---- ----" : zipWith (\(m2,g) (m1,_) -> printf "%.0f %.1f %.1f" m2 g ((m2 - m1) / g)) (tail es) es The parser is nice and simple now. logParser = many $ (,) .: float -: space +: float -: newline And the main function is also simpler than it would otherwise have been, since we don’t need to deal with parse errors anymore. main = mapM_ putStrLn . showLog =<< parseFile logParser "input.txt" Tags: bonsai, code, Haskell, kata, parsec, parsec-utils, praxis, programming, utils
https://bonsaicode.wordpress.com/2012/11/06/programming-praxis-fix-something-annoying/
CC-MAIN-2016-50
en
refinedweb
This article describes how to extract icons from an executable module (EXE or DLL), and also how to get the icons associated with a file. In this article, you will find how to get the icon image that best fits the size you want to display. You can also find how to split an icon file to get its image. Icons are a varied lot—they come in many sizes and color depths. A single icon resource—an ICO file, or an icon resource in an EXE or DLL file—can contain multiple icon images, each with a different size and/or color depth. Windows extracts the appropriate size/color depth image from the resource depending on the context of the icon's use. Windows also provides a collection of APIs for accessing and displaying icons and icon images. The code I introduce will help you extract icons from executable modules (EXE, DLL) without the need to know the Windows APIs that are used in this situation. The code will also help you in extracting specific icon images from an icon file, and will help you in extracting the icon image that best fits a supplied icon size. If you want to understand what is going on in this code, you should know how to call APIs from C# code. You also need to know the icon format, about which you will find here: MSDN. First, you need to add a reference to TAFactory.IconPack.dll, or add the project named IconPack to your project. Add the following statement to your code: using TAFactory.IconPack; Use the IconHelper class to obtain the icons as follows: IconHelper //Get the open folder icon from shell32.dll. Icon openFolderIcon = IconHelper.ExtractIcon(@"%SystemRoot%\system32\shell32.dll", 4); //Get all icons contained in shell32.dll. List<icon> shellIcons = IconHelper.ExtractAllIcons(@"%SystemRoot%\system32\shell32.dll"); //Split the openFolderIcon into its icon images. List<icon> openFolderSet = IconHelper.SplitGroupIcon(openFolderIcon); //Get the small open folder icon. Icon smallFolder = IconHelper.GetBestFitIcon(openFolderIcon, SystemInformation.SmallIconSize); //Get large icon of c drive. Icon largeCDriveIcon = IconHelper.GetAssociatedLargeIcon(@"C:\"); //Get small icon of c drive. Icon smallCDriveIcon = IconHelper.GetAssociatedSmallIcon(@"C:\"); //Merge icon images in a single icon. Icon cDriveIcon = IconHelper.Merge(smallCDriveIcon, largeCDriveIcon); //Save the icon to a file. FileStream fs = File.Create(@"c:\CDrive.ico"); cDriveIcon.Save(fs); fs.
https://www.codeproject.com/articles/32617/extracting-icons-from-exe-dll-and-icon-manipulatio?fid=1533821&df=90&mpp=10&sort=position&spc=none&select=4389947&tid=3715675
CC-MAIN-2016-50
en
refinedweb
TERMIOS(4) BSD Programmer's Manual TERMIOS(4) termios - general terminal line discipline #include <termios.h> re- lated ter- minal; all processes spawned from that login shell are in the same ses- sion, nev- er changes the process group of the terminal and doesn't wait for the job to complete (that is, it immediately attempts to read the next command). If the job is started in the foreground, the user may type a key (usually '^Z') which generates the terminal stop signal (SIGTSTP) and has the ef- fect. Con- cept pro- cess of a session that has a controlling terminal has the same control- ling terminal. A terminal may be the controlling terminal for at most one session. The controlling terminal for a session is allocated by the ses- sion leader by issuing the TIOCSCTTY ioctl. A controlling terminal is never acquired by merely opening a terminal device file. When a control- ling control- ling terminal simply by closing all of its file descriptors associated with the controlling terminal if other processes continue to have it open. When a controlling process terminates, the controlling terminal is disas- sociated termi- nal, pro- cess. wheth- er canon- ical: 1. If there is enough data available to satisfy the entire re- quest,. In canonical mode input processing, terminal input is processed in units of lines. A line is delimited by a newline ', neces- sary transmis- sions. If VMIN is greater than {MAX_INPUT}, the response to the request is undefined. The four possible values for VMIN and VTIME and their in- teractions are described below. Case A: VMIN > 0, VTIME > 0 start- ed. If VMIN bytes are received before the inter-byte timer expires (remember that the timer is reset upon receipt of each byte), the read is satisfied. If the timer expires before VMIN bytes are received, the char- acters ac- tivated by the receipt of the first byte, or a signal is received. If data is in the buffer at the time of the read(), the result is as if data had been received immediately after the read(). Case B: VMIN > 0, VTIME = 0. Case C: VMIN = 0, VTIME > 0 ti- mer sec- tion). Special character on input and is recognized if the ISIG flag (see the Local Modes section) is enabled. Generates a SIGINT sig- nal processes a NL, EOF, or EOL character. If ICANON is set, the ERASE character is discarded when processed. im- mediately pro- cessed. 1003 preced- ing whitespace is erased, and then the maximal sequence of alphabetic/underscores or non alphabetic/underscores. As a spe- cial case in this second algorithm, the first previous non- whitespace character is skipped in determining whether the preceding word is a sequence of alphabetic/underscores. This sounds confusing per- formed when that character is received is undefined. If a modem disconnect is detected by the terminal interface for a con- trolling ap- propriately /* ignore BREAK condition */ BRKINT /* map BREAK to SIGINTR */ IGNPAR /* ignore (discard) parity errors */ PARMRK /* mark parity and framing errors */ INPCK /* enable checking of parity errors */ ISTRIP /* strip 8th bit off chars */ INLCR /* map NL into CR */ IGNCR /* ignore CR */ ICRNL /* map CR to NL (ala CRMOD) */ IXON /* enable output flow control */ IXOFF /* enable input flow control */ IXANY /* any char will restart after stop */ IMAXBEL /* ring bell on input queue full */ IUCLC /* translate upper case to lower case */ In the context of asynchronous serial data transmission, a break condi- tion asynchro- nous serial data transmission the definition of a break condition is im- plementation defined. If IGNBRK is set, a break condition detected on input is ignored, that is, not put on the input queue and therefore not read by any process. If IGNBRK is not set and BRKINT is set, the break condition flushes the in- put', or if PARMRK is set, as '\377', '\0', ' ' ' con- nected charac- ter. con- trol de- fined. /* the CR function */'s /* */ ex- ample, at 110 baud, two stop bits are normally used. If CREAD is set, the receiver is enabled. Otherwise, no character is re- ceived. Not all hardware supports this bit. In fact, this flag is pretty silly and if it were not part of the termios specification it would be omitted. If PARENB is set, parity generation and detection are enabled and a pari- ty ter- min. The CCTS_OFLOW (CRTSCTS) flag is currently unused. termi- nal on another host, the baud rate may or may not be set on the connec- tion between that terminal and the machine it is directly connected to. */ XCASE /* canonical upper/lower case */ possi- ble. If there is no character to erase, an implementation may echo an in- dication that this was the case or do nothing. If ECHOK and ICANON are set, the KILL character causes the current line to be discarded and the system echoes the ' deter- mining what constitutes a word when processing WERASE characters (see WERASE). If ECHONL and ICANON are set, the ' re- ceived or the timeout value VTIMEat- ed correspond- ing input characters are not processed as described for ICANON, ISIG, IXON, and IXOFF. If NOFLSH is set, the normal flush of the input and output queues associ- ated: for: use: ` \' | \! ~ \^ { \( } \) \ \\ pro- duce, con- sult the header file ) MirOS BSD #10-current April 19, 1994.
https://www.mirbsd.org/htman/sparc/man4/termios.htm
CC-MAIN-2016-50
en
refinedweb
Odoo Help This community is for beginners and experts willing to share their Odoo knowledge. It's not a forum to discuss ideas, but a knowledge base of questions and their answers. How to Date deduction with Python ? here shows date of birth field in my openerp model 'date_of_birth': fields.date('Date of Birth'), need to change its default date to 25years earlier.because its easier to user to pick year. ( in openerp jquery default load current 20years in list and user have to get some time to select earlier year ). for ex : _defaults = { 'date_of_birth':fields.date.context_today - 25years please advice me to implement this issue (if its with python function seems good for my requirement) ---------------EDITED--------------- @ Dear Patently, I tried it with my console.its gives a error >>> import datetime >>> from datetime import timedelta >>> diff = datetime.datetime.now() - datetime.timedelta(years=42) Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: 'years' is an invalid keyword argument for this function I tried it with openerp in eclipse def _dob(self, cr, uid, context=None): diff = datetime.datetime.now() - datetime.timedelta(years=42) return diff also gives error File "/home/bellvantage/Documents/openerp-7.0/openerp-7/openerp/addons/bpl/bpl.py", line 47, in _dob diff = datetime.datetime.now() - datetime.timedelta(years=42) TypeError: 'years' is an invalid keyword argument for this function try this: import datetime from datetime import timedelta diff = datetime.datetime.now() - datetime.timedelta(years!
https://www.odoo.com/forum/help-1/question/how-to-date-deduction-with-python-14869
CC-MAIN-2016-50
en
refinedweb
What is Python (Programming)? - The Basics Before getting started, lets get familiarized with the language. Python is a general-purpose language. Meaning, it has wide range of applications from Web development (like: Django and Bottle) to scientific and mathematical computing (Orange, SymPy, NumPy) to desktop graphical user Interfaces (Pygame, Panda3D). The syntax is clean and the code length is short. It's fun to work in Python because it allows you to think about the problem rather than focusing on syntax. More information on Python Language: History of Python Features of Python Programming -. - Free and open-source You can freely use and distribute Python, even for commercial use. Not only can you use and distribute softwares written in it, you can even make changes to the Python's source code. Python has a large community constantly improving it in each iteration. - Portability You can move Python programs from one platform to another, and run it without any changes. It runs seamlessly on almost all platforms including Windows, Mac OS X and Linux. - Extensible and Embeddable. For example: Need to connect MySQL database on a Web server? You can use MySQLdb library using import MySQLdb.. Applications of Python Web Applications You can create scalable Web Apps using frameworks and CMS (Content Management System) that are built on Python. Some of the popular platforms for creating Web Apps are: Django, Flask, Pyramid, Plone, Django CMS. Sites like Mozilla, Reddit, Instagram and PBS are written in Python. Scientific and Numeric Computing There are numerous libraries available in Python for scientific and numeric computing. There are libraries like: SciPy and NumPy that are used in general purpose computing. And, there are specific libraries like: EarthPy for earth science, AstroPy for Astronomy and so on. Also, the language is heavily used in machine learning, data mining and deep learning. Creating software Prototypes Python is slow compared to compiled languages like C++ and Java. It might not a good choice if resources are limited and efficiency is a must. However, Python is a great language for creating prototypes. For example: You can use Pygame (library for creating games) to create your game's prototype first. If you like the prototype, you can use language like C++ to create the actual game. Good Language to Teach Programming Python is used by many companies to teach programming to kids and newbies. It is a good language with a lot of features and capabilities. Yet, it's one of the easiest language to learn because of its simple easy-to-use syntax.
https://www.programiz.com/python-programming
CC-MAIN-2016-50
en
refinedweb
Much of the work on migrations so far has been laying a good bit of groundwork, but now everything is starting to come together into a working whole. The most important thing to land this week is Operations - the things which migrations will be structured around. Here's an example of what a migration would look like: from django.db import migrations, models class Migration(migrations.Migration): dependencies = [("myapp", "0001_initial")] operations = [ migrations.AddField("Author", "rating", models.IntegerField(default=0)), migrations.CreateModel( "Book", [ ("id", models.AutoField(primary_key=True)), ("author", models.ForeignKey("migrations.Author", null=True)), ], ) ] As you can see, the Operations define what the migration does, and serve as the top-level element that allows more declarative migrations - as opposed to those in South, which were an opaque block of procedural code. What's in a name? But what exactly do Operations do? That's a relatively easy question to answer - they have an interface of exactly three methods. The first, state_forwards, takes a project state (that's an in-memory representation of all of your models and fields at a point in time) and modifies it to reflect the type of change it makes - AddField, for example, would add a field to the in-memory model state. The other two are database_forwards and database_backwards, which take a SchemaEditor (an object allowing database changes) and two project states - the before and after state for the operation, the after state having come from state_forwards above. When the code runs a migration, it runs through and calculates a project state for every interim step between operations, by applying successive state_forwards functions down the entire dependency tree, and it can then supply each database run with both what it's working from, and what it is working towards, which helps greatly with some more complex operations. A good example to look at is the field operations - they use both the "from" and "to" project states, but are still relatively simple. Tying it all together Of course, Operations live in migrations, so if we want to run them we need something that understands migrations. Fortunately, that's now in place - a class known as the Executor uses the existing loading and graph-resolving pieces to provide an end-to-end way of running migrations. All you need to do is call migrate() with a list of target migrations, and it handles the rest. There's also a migration_plan() method if you want to know what it's about to do - useful for some tests and some user commands. In fact, user commands are the next step. While the Executor certainly offers most of the functionality of running migrations, it's not exactly in an easy-to-use CLI format. User commands Traditionally, South has had the migrate command to allow users to interact with the migration system. The issue here is, of course, that Django has traditionally used syncdb to let users create their database and add any new models. South overrides syncdb so it doesn't touch the migrated apps, but that then leaves you having to go and run migrate yourself, and if you run migrate before syncdb it'll fail with an error. My plan to fix this involves deprecating syncdb and structuring everything around a much improved migrate command, which handles both unmigrated and migrated apps. I'll also be introducing coloured output, since it's such an easy win in terms of readability, and I hope to introduce a "smart" mode, which will give you estimates about how long each migration will take and what sort of locking it will do, so you can plan which migrations to run when. It is tempting to just make that mode print "DON'T DO IT" if you're on MySQL, though. I started a discussion on django-developers about these proposed command changes; you should read it and reply if you have opinions on how this should work in the future. Next time on Migrations Work will now shift to getting a reasonable command-line client up and running for applying and unapplying migrations, and when that's done, the final pillar needs tacking: autodetection. Although some people find it hard to believe, South shipped without autodetection for quite a while, and consisted just of a schema backend and a migration runner (essentially more primitive versions of what I've built so far). These days, however, autodetection is possibly the most important feature. Anyone can throw some schema code in a file and have it run; having your framework work out that code and write it for you is the key step in making something like this easy-to-use. The Field API work from last time provides a good basis to this, though I'm still unsure how exactly to structure the detection logic - especially because there's call for fuzzy matching for things like field renames. I imagine it's going to end up being a score-based system where the detector works out all possible approaches to get from schema A to schema B and picks the best one, but I'll have more thoughts on that next time.
http://www.aeracode.org/2013/5/30/what-operation/
CC-MAIN-2016-50
en
refinedweb
The idea that a bunch of geeks getting together to debug software would be easily confused with Broccoli Cauliflower Tetrazzini is a bit of a stretch. So either Pillsbury is easily confused or its lawyers are trying to pad their bills. Out of the blue, Pillsbury’s lawyers have sent cease-and-desist letters to a number of engineers and a few companies that have been holding meetings where groups of programmers get together to test their implementations of some network standard against each other. What are they to cease and desist from? From using the term "bake-off" to describe such get-togethers, that’s what. Sheesh -- no wonder lawyers have a bad reputation. Pillsbury wants to claim that any and all uses of the term bake-off -- other than those referring to the annual cooking contest that Pillsbury has run for 50 years -- are prohibited by Pillsbury’s trademark. That contest is certainly well-known. It even has its own Web site (), where, among other things, you can find a list of the 14 "Hall of Fame" recipes -- complete with pictures -- from previous bake-offs. These recipes include the above-mentioned Broccoli Cauliflower Tetrazzini. I expect the fame of the cooking contest contributed to the use of the term bake-off by the geeks, but this did not happen yesterday. There may be no way to figure out when the term first started to be used in conjunction with software testing, but RFC 1025 details its use as early as 1980. Putting my amateur lawyer hat firmly on, I wonder how Pillsbury can suddenly claim its trademark is being violated more than 20 years after the alleged infringement started. I suppose the company could claim that it had not heard of the Internet and the quite common use of the term "bake-off" for many Internet activities until a couple of months ago. But it might take some searching to find a judge and jury that would believe that Minneapolis, where Pillsbury is headquartered, is that far off the beaten path. The result of Pillsbury’s sudden aggressiveness just could be a legal determination that "bake-off" has become a generic term and Pillsbury could wind up with less, rather than more, authority to control its use. This topic would seem more suited for an April Fool’s Day column. But it’s sad to say we have not seen the last of this sort of silliness. The bake-off case does not even touch the far more difficult area of trademark use on the Internet. The flat namespace of the Internet makes trademarks a complex issue. The Internet has none of the geographic, product category or visual differentiation that makes trademarks in the real world a simpler issue (well, relative only to the Internet). With the introduction of new Internet top-level domain names, which create new venues for trademark conflicts, the ground is being prepared for milling hordes of lawyers ready to do battle while billing their clients on an hourly basis. Disclaimer: This confusion is in the university’s interest becausesome of those milling hordes come from Harvard Law School. But the university has not expressed an opinion. This story, "Paying lawyers by the hour " was originally published by Network World.
http://www.itworld.com/article/2798865/business/paying-lawyers-by-the-hour.html
CC-MAIN-2016-50
en
refinedweb
JNDI binding and NameNotFoundExceptionDavid Bailey Dec 12, 2011 5:38 PM I'm an EJB-newbie, so please bear with me. I'm running JBoss 5.1.0, and I'm trying to define/deploy/access a stateless session bean. It's my understanding that JBoss 5.1.0 conforms to EJB 3.0, and so I can do the whole thing with annotations, without the need for a ejb-jar.xml or jboss.xml file. After much Googling, I have tried a number of different ways to do things, most recently the following. In my bean class, I added a no-args constructor and the following annotation and interface: {code} @Stateless public class ScoreComputer implements IScoreComputer {code} I defined an IScoreComputer class which declares the methods I want to have available, and put the following annotation on it: {code} @Local public interface IScoreComputer {code} In my server code, I try to obtain a reference to my ScoreComputer as follows: {code} IScoreComputer computer; InitialContext ejbContext = new InitialContext(); computer = (IScoreComputer)ejbContext.lookup("ScoreComputer/local"); {code} When this code executes, a NameNotFoundException is thrown with the message 'IScoreComputer not bound'. I know I'm not doing the JNDI binding correctly, because I've looked in the JMX console for JBoss, and no binding is listed containing the string 'Score'. But try as I might, I am unable to find instructions telling me the correct way to do the binding. I am using the Spring DispatcherServlet as my servlet, and for a while I thought that might be what is causing the problem, and it may well be. But I replaced that in my web.xml with a simple class which extends HttpServlet, and got the same behavior. I have also tried injecting my bean using the @EJB annotation, e.g. {code} @EJB protected IScoreComputer computer; {code} but that fails when JBoss attempts to deploy the .ear file, with a message "Resolution should not happen via injection container". 1. Re: JNDI binding and NameNotFoundExceptionWolf-Dieter Fink Dec 13, 2011 2:23 AM (in response to David Bailey) Do you have checked whether your ScoreComputer is deployed and bound correct? You should see some lines in the logfile what the JNDI names are. If unsure just start the server without you application and copy your ear to deploy after the server is up. 2. Re: JNDI binding and NameNotFoundExceptionjaikiran pai Dec 13, 2011 4:54 AM (in response to David Bailey) Also, what does your application packaging look like? Where are those EJBs placed and how do you deploy the application? 3. Re: JNDI binding and NameNotFoundExceptionDavid Bailey Dec 13, 2011 10:43 AM (in response to David Bailey) @Wolf-Dieter: I know my .ear is deployed, because I can access the HTML pages I put in it. When I make changes and redeploy, I see the changes. User interaction is as expected until execution hits the point where I'm trying to access the ScoreComputer bean. I know the JNDI binding is not correct because I don't see my bean listed anywhere in the JMX console. @jaikiran: Here is the directory structure in my development directory: ${basedir} |--src (standard Java source directory structure) |--build |--dist |--lib |--webinf |--static (contains .css, .js) |--velocity (contains Velocity .vm templates) |--application.xml |--scpoc-servlet.xml |--web.xml |--build.xml And here is my build.xml file: {code:xml}<?xml version="1.0"?> <project name="scpoc" basedir="." default="ear"> <property name="build" value="${basedir}/build" /> <property name="content" value="${basedir}/content" /> <property name="deployDir" value="D:/Applications/jboss-5.1.0.GA/server/default/deploy" /> <property name="dist" value="${basedir}/dist" /> <property name="jboss_lib" value="D:/Applications/jboss-5.1.0.GA/common/lib" /> <property name="lib" value="${basedir}/lib" /> <property name="src" value="${basedir}/src" /> <property name="webinf" value="${basedir}/webinf" /> <property name="version" value="0.1" /> <path id="build.classpath"> <fileset dir="${jboss_lib}" includes="**/*.jar" /> <fileset dir="${lib}" includes="**/*.jar" /> </path> <target name="clean"> <delete dir="${build}" /> <delete dir="${dist}" /> </target> <target name="init"> <tstamp /> <mkdir dir="${build}" /> <mkdir dir="${dist}" /> </target> <target name="compile" depends="init"> <javac srcdir="${src}" destdir="${build}" optimize="on"> <classpath refid="build.classpath" /> </javac> </target> <target name="war" depends="compile"> <war destfile="${dist}/${ant.project.name}.war" webxml="${webinf}/web.xml"> <lib dir="${lib}" /> <classes dir="${build}" /> <webinf dir="${webinf}"> <exclude name="web.xml" /> <exclude name="application.xml" /> </webinf> </war> </target> <target name="ear" depends="war"> <ear destfile="${dist}/${ant.project.name}-${version}.ear" appxml="${webinf}/application.xml"> <fileset dir="${dist}" includes="*.war" /> </ear> </target> <target name="deploy" depends="ear"> <copy todir="${deployDir}"> <fileset dir="${dist}" includes="*.ear" /> </copy> </target> </project>{code} I build the ear and deploy it to the JBoss 'default/deploy' directory. Thanks. 4. Re: JNDI binding and NameNotFoundExceptionWolf-Dieter Fink Dec 13, 2011 2:38 PM (in response to David Bailey) I think Jakiran meant the structure of your EAR and application.xml. I'm not sure but I suppose that you pack the ejb classes into the web application and I'm unsure whether it is picked up here. I prefere to add a ejb.jar to the ear and add the element module.ear to the application.xml 5. Re: JNDI binding and NameNotFoundExceptionDavid Bailey Dec 13, 2011 2:44 PM (in response to Wolf-Dieter Fink) I would have thought that showing my build.xml would answer the question about how my EAR is structured. And I was never aware that I needed to include a file called ejb.jar anywhere. Perhaps the question I should really be asking is: where can I find a good tutorial about these topics? I have been jumping from one 'Getting Started with EJB' site to another, and clearly the information I'm getting is incomplete. Compounding the problem is that there are still a lot of old EJB 2.x sites out there, providing what was accurate, but now outdated information. 6. Re: JNDI binding and NameNotFoundExceptionWolf-Dieter Fink Dec 13, 2011 3:12 PM (in response to David Bailey)1 of 1 people found this helpful I use jboss.org examples or BTW the file must not named ejb.jar, the file that include your ejb's must be added to the META-INF/application.xml like this: <module><ejb>myejb.jar</ejb></module> <module><java|web .... 7. Re: JNDI binding and NameNotFoundExceptionDavid Bailey Dec 13, 2011 4:49 PM (in response to David Bailey) Okay, it looks like my error was in believing that JBoss 5.1.0 was EJB 3.0 compliant. I added the jar file and application.xml markup per Wolf-Dieter's suggestion, but then I got an IllegalStateException stating "Null beannMetaData". I tried doing more things with annotations, but the error never went away. So I eventually gave up and wrote an ejb-jar.xml file. That appears to have satisfied JBoss' need for metadata, but now I'm getting all kinds of warnings about EJB spec violations. because my bean doesn't define the require ejbCreate method, the methods defined in my interface don't throw java.rmi.RemoteException, etc. So either the inclusion of the ejb-jar.xml has caused JBoss to treat my bean as an EJB 2.0 bean, or there's some configuration tweak I don't know about to tell JBoss that I'd like to use the 3.0 spec and annotations, rather than implementing EJB interfaces on my business beans, etc. Any help there? 8. Re: JNDI binding and NameNotFoundExceptionWolf-Dieter Fink Dec 14, 2011 2:16 AM (in response to David Bailey)1 of 1 people found this helpful It must be still a package or coding problem. I use ejb3 without any descriptor except application.xml The structure is META-INF/MANIFEST.MF META-INF/application.xml ejb.jar # ejb3 stateless session bean and interface persistence.jar # JPA entity used by the SLSB application.xml: <application> <display-name/> <description/> <module><ejb>ejb.jar</ejb></module> <module><java>persistence.jar</java></module> </application> In my example the ejb.jar include Bean and Interface. If you have a seperate interface jar you can add it like the persistence.jar. 9. Re: JNDI binding and NameNotFoundExceptionDavid Bailey Dec 14, 2011 10:11 AM (in response to David Bailey) Okay, I finally got it figured out. There were two missing pieces when I originally posted this 2 days ago. One, I didn't know I needed to package my bean classes in a separate jar and declare it in my application.xml. So thanks to Wolf-Dieter for that bit of information. Two, when attempting to access the bean from my code, I was using InitialContext.lookup() and supplying a string of "<ear-name>/<bean-name>/local". I didn't realize that I had to supply the full ear file name, including the version. That seems a bit klunky to me, to hard-code a version ID in my Java code. But once I got past the first hurdle of declaring my EJB jar file in the application XML, I was able to see the name JBoss was using to bind my bean to JNDI, and I saw that indeed the version ID was there. So once I specified the JNDI name correctly, everything worked. Thanks. 10. Re: JNDI binding and NameNotFoundExceptionWolf-Dieter Fink Dec 14, 2011 1:37 PM (in response to David Bailey) You should use '@EJB MyBeanLocal xxx;' instead of code a lookup, you are using EJB3 with injection. In this case you don't have to know the JNDI name. 11. Re: JNDI binding and NameNotFoundExceptionDavid Bailey Dec 14, 2011 1:55 PM (in response to Wolf-Dieter Fink) That was one of the first things I tried. I kept getting an error at deployment: 'Resolution should not happen via injection container' I suppose this was happening as a consequence of other things I was doing wrong. Now I no longer see that message. However, I am using Spring MVC to handle requests, and NOW I get an error message from Spring because it knows nothing about the bean declaraion.
https://developer.jboss.org/thread/176060?tstart=0
CC-MAIN-2016-50
en
refinedweb
On Sunday 21 March 2004 02:17, Sergey Matveychuk wrote: > @@ -716,9 +773,6 @@ pupa_util_biosdisk_get_pupa_dev (const c > return 0; > } > > - if (! S_ISBLK (st.st_mode)) > - return make_device_name (drive, -1, -1); > - This part is not good. The problem here is that we want to support installing GRUB into a normal file as well as a device file. So what I said to you was wrong. The check about a block device is sometimes very important. In FreeBSD, what is the right way to distinguish a device file from a normal file? > +#ifdef HAVE_MEMALIGN > p = memalign (align, size); > +#else > + p = malloc(size); > +#endif I don't agree on this one. Please implement memalign correctly. It's not so difficult. Probably it can be like this (not tested): p = malloc((size + align - 1) & ~(align - 1)); if (! p) return 0; return (p + align - 1) & ~(align - 1); It might be possible to find a better implementation, if you see glibc. BTW, I confirmed that you haven't assigned your copyright on GRUB to the FSF yet. For a GNU project, it is a custom to assign your copyright to the FSF, so that the FSF can fight instead of you in a court when someone claims that our software is illegal. Also, this step of a copyright assignment makes sure that your contribution will be free in freedom forever. So, would you like to sign a copyright assignment for GRUB? If you need more information, don't hesitate to ask me. We can talk privately if you want. Okuji
http://lists.gnu.org/archive/html/grub-devel/2004-03/msg00048.html
CC-MAIN-2016-50
en
refinedweb
Communication to the web service over Internet is based on the text formatted protocol, where binary fields are using the Base64 encoded text embedded in the body of the message. The Return object from the web method can be a simple type value or a custom object with more complex types such as binary arrays, etc. The logical connectivity between the web service and its consumer is encapsulated into the proxy/stub infrastructure allowing strongly type access from the business layer. The communication proxy/stub channel provides all magic over wire using the HTTP transport layer, including encoding/decoding binary images in the Base64 text pattern. Introducing the DIME (Direct Internet Message Encapsulation) in the WSE 1.0 +, the binary images can be sent over Internet as binary attachments. Using the WSE Dime namespace classes, the DIME programming is very straightforward at both ends. Each attachment represents one record behind the first one - record 0 (reserved for a SoapMessage). Dime SoapMessage This article describes the solution allowing to transfer the web service response (return value) in the binary format using the WSE-DIME feature. This is a fully transparent loosely coupled solution injected into the message process pipeline at the properly stage. has the following features: The concept of the DIME Bridge is based on binary transferring a web service response (return object) via the Attachment. On the server side - before the DIME bridge, the Return value is serialized into the DIME attachment and its reference in the SoapMessage is setup to null, which will produce very lightweight XML text formatted SoapMessage. On the client side - after the DIME bridge, the situation is reversed. The DIME Attachment is de-serialized into the Return object and its reference is set-up in the SoapMessage (replacing the null value). From the client point of view, all processes via the DIME Bridge is fully transparent and loosely coupled in the SoapExtension message pipeline. Attachment Return null SoapExtension The following picture shows this scenario: The client invokes the WebMethod via the WSClientProxy - at the stage before serialize (1), the SoapMessage is extended for unknown soap header to indicate a server side that the DIME Bridge can be used for the response (4). The SoapServerMessage before the serialize stage is holding the reference of the Return object, so it can be simply serialized and its stream be stored into the DIME Attachment. After that, we can "kill" the return object in the SoapMessage in the same manner like web method, returning value null. WSClientProxy SoapServerMessage The following flowchart shows the stage (4): The SoapMessage is encapsulated into the DIME records. In our case, the result value (return object) is first an Attachment, so the DIME Bridge on the wire (5) will have the following pattern, which represents the core of the concept: As you can see, the above server side stages are straightforward supported by MS WSE-DIME SoapExtension, which has to be plugged into the SoapExtension pipeline close to the wire (priority="1", group="0"). Situation on the client side is a little bit different, because the DIME Bridge can be injected also in the "legacy" client proxy (no WSE features). Let's look at this scenario. The SoapClientMessage at the BeforeDeserialize stage has a responsibility to handle a network stream in the case of the "legacy" proxy. The first DIME record represents the SoapMessage and must be passed through the SoapExtension pipeline like in a normal text/XML way. The next (second) DIME record is our stream of the Return value. Its reference is temporarily stored for the next - final stage. SoapClientMessage BeforeDeserialize The following flowchart shows the message workflow at the BeforeDeserialize stage: The following flowchart is a final stage of the SoapClientMessage processing. Based on the existing WSE proxy, the first Attachment is retrieved and temporarily stored. The next workflow is a common part for any type of proxy - replacing the nullvalue of the Return object in the SoapMessage by the deserialized stream. That's the final step, and from now the SoapMessage can go to the business layer like in the case without the DIME Bridge. That's all there is in the magic of the DIME Bridge. Based on the above concept, the DIME Bridge can be extended very easily for full duplex included the output parameters or for a message body streaming. The following picture shows an idea for streaming data without using the DOM, where SoapHeaders are encapsulated from the content (body) based on the described concept. This design pattern allows to scan the SOAP headers and forward content to the proper endpoint. SoapHeaders Using the DIME Bridge requires installing the WSE 2.0 in prior. After that, the downloaded DIME Bridge MSI file can be run. The setup will install the DimeBridge assembly into the GAC including its source code in the specified location. Note that the DIME Bridge must be installed on both sides: the server and its consumer (client). The next step is to plug the bridge into the SoapExtension pipeline. The following snippet shows a configuration part on the server side: <webServices> <soapExtensionTypes> <add type="Microsoft.Web.Services2.WebServicesExtension, Microsoft.Web.Services2, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" priority="1" group="0" /> <add type="RKiss.SoapExtensionUtil.DimeBridge, DimeBridge, Version=1.0.0.0, Culture=neutral, PublicKeyToken=3445381e5e540793" priority="2" group="0" /> </soapExtensionTypes> </webServices> and the following snippet on the client side: <webServices> <soapExtensionTypes> <add type="RKiss.SoapExtensionUtil.DimeBridge, DimeBridge, Version=1.0.0.0, Culture=neutral, PublicKeyToken=3445381e5e540793" priority="1" group="0" /> </soapExtensionTypes> </webServices> Note that the priority of the DIME Bridge has to be properly set-up in the case of using multiple SoapExtensions. On the server side, the bridge's priority should be behind the WSE, but on the client side, must be close to the wire (NetworkStream). NetworkStream The following DIME Bridge properties can be configured in the appSettings section: appSettings <appSettings> <add key="DimeBridge.Name" value="__DimeBridgeRequest" /> <add key="DimeBridge.Enable" value="true" /> <add key="DimeBridge.SizeThreshold" value="0" /> </appSettings> DimeBridge.Name __DimeBridgeRequest DimeBridge.Enable true DimeBridge.SizeThreshold 5000 After this configuration step, the DIME Bridge is ready to transfer the web service result in binary manner. Of course, there is also capability to set-up the "private bridge" attributing the specific method. Note that this is a tightly coupled solution - incorporated (hard coded) into the client/service implementation. The trace of the checkpoints in the DIME Bridge can be displayed, for instance, on the DbgView program (). Here is its result, for attaching the result (338656 bytes) to the DIME record: DimeBridge ctor at C:/MyProject/WebApp/web.config: Enable = True, Name = __DimeBridgeRequest, Threshold = 5000bytes DimeBridge.SoapClientMessageBeforeSerialize - DimeBridgeRequest for ReturnType=DataContract.GetOrders DimeBridge ctor at C:/MyProject/WebServices/web.config: Enable = True, Name = __DimeBridgeRequest, Threshold = 5000bytes DimeBridge.SoapServerMessageBeforeSerialize - Attaching 'DataContract.GetOrders' [size=338656] DimeBridge.SoapClientMessageBeforeDeserialize- Detaching DimeRecord stream [length=338656] DimeBridge.SoapClientMessageAfterDeserialize - Object 'DataContract.GetOrders' has been returned and in the case of by-passing the bridge (the object size is under the threshold [908 < 5000]): DimeBridge ctor at C:/MyProject/WebApp/web.config: Enable = True, Name = __DimeBridgeRequest, Threshold = 5000bytes DimeBridge.SoapClientMessageBeforeSerialize - DimeBridgeRequest for ReturnType=DataContract.GetCatalog DimeBridge ctor at C:/MyProject/WebServices/web.config: Enable = True, Name = __DimeBridgeRequest, Threshold = 5000bytes DimeBridge.SoapServerMessageBeforeSerialize - No attaching, the object size is under the threshold [908 < 5000] Using the DIME Bridge for Return object with embedded binary images can significantly increase a performance over Internet. For instance, the streamed Return object with the length 5MB can be transferred ~300% faster than the XML formatted one. On the other hand, the small streamed return object like 2KB increase performance approximately about 12%. Implementation of the DIME Bridge is encapsulated into the base class - NopSoapExtension and its overrides. The base class represents the "empty" SoapExtension class with virtual public methods for each SoapMessage stage. If this class is plugged into the SoapExtension pipeline, the SoapMessage is processing though all stages without any action (nop operation). NopSoapExtension public class NopSoapExtension : WebServicesExtension { #region virtuals public virtual void SoapClientMessageBeforeSerialize(SoapClientMessage message){} public virtual void SoapClientMessageAfterSerialize(SoapClientMessage message) { Copy(_newStream, _oldStream);} public virtual void SoapServerMessageBeforeDeserialize(SoapServerMessage message) { Copy(_oldStream, _newStream); } public virtual void SoapServerMessageAfterDeserialize(SoapServerMessage message){} public virtual void SoapServerMessageBeforeSerialize(SoapServerMessage message){} public virtual void SoapServerMessageAfterSerialize(SoapServerMessage message) { Copy(_newStream, _oldStream); } public virtual void SoapClientMessageBeforeDeserialize(SoapClientMessage message) { Copy(_oldStream, _newStream); } public virtual void SoapClientMessageAfterDeserialize(SoapClientMessage message){} #endregion #region Properties public Stream OldStream { get { return _oldStream; } set {_oldStream = value; } } public Stream NewStream { get { return _newStream; } set {_newStream = value; } } public object Initializer { get { return _initializer; } set {_initializer = value; } } #endregion } Overriding a proper stage method by the derived class, the SoapMessage workflow is modified based on the application logic. The DimeBridge class inherited this base class to perform the described flowcharts overriding the following methods - stages: DimeBridge The SoapServerMessageBeforeSerialize method: SoapServerMessageBeforeSerialize public override void SoapServerMessageBeforeSerialize(SoapServerMessage message) { #region Dime Server - Attaching a return value // Check the primary conditions for DIME Response if(!bDimeBridgeEnable || !bIsDimeBridgeRequest || message.MethodInfo.IsVoid || ResponseSoapContext.Current == null || (ResponseSoapContext.Current != null && ResponseSoapContext.Current.Attachments.Count != 0)) return; // get the return value object[] parameterValues = message.GetType().BaseType.GetField("parameterValues", binding).GetValue(message) as object[]; object retval = (parameterValues != null && parameterValues.Length > 0) ? parameterValues[0] : null; // Check the secondary conditions for DIME Response if(retval == null || (retval != null && (retval.GetType().IsValueType || retval is string))) return; // serialize a return value BinaryFormatter bf = new BinaryFormatter(); MemoryStream stream = new MemoryStream(); bf.Serialize(stream, retval); // Check the configuration condition for DIME Response if(stream.Length > intDimeBridgeSizeThreshold) { // pass the return value via a Dime record DimeAttachment attachment = new DimeAttachment("image/object", TypeFormat.MediaType, stream); ResponseSoapContext.Current.Attachments.Add(attachment); // set the null for return value! parameterValues[0] = null; Trace.WriteLine(string.Format(@"DimeBridge." + @"SoapServerMessageBeforeSerialize - Attaching '{0}' [size={1}]", retval.GetType().FullName, stream.Length)); } else { Trace.WriteLine(string.Format("DimeBridge." + "SoapServerMessageBeforeSerialize - " + "No attaching, the object size is under the threshold [{0} < {1}]", stream.Length, intDimeBridgeSizeThreshold)); // clean-up, the return value is a part of the SoapEnvelope stream.Close(); stream = null; } #endregion } The SoapClientMessageBeforeDeserialize method: SoapClientMessageBeforeDeserialize public override void SoapClientMessageBeforeDeserialize(SoapClientMessage message) { #region Dime Client - Detaching Preprocessor #region validation // clean-up FirstAttachment = null; // check the message content type if(message.ContentType != "application/dime") { base.SoapClientMessageBeforeDeserialize(message); return; } // check the WSE client proxy WebServicesClientProtocol wsp = message.Client as WebServicesClientProtocol; if(wsp != null && wsp.ResponseSoapContext != null) { base.SoapClientMessageBeforeDeserialize(message); return; } // check the DimeBridge position in the message workflow if(OldStream.CanSeek == true) throw new Exception("The DimeBridge is not configured close to the wire."); #endregion #region Get the SoapEnvelope and FirstAttachment // get the network stream length object WireStreamLength = OldStream.GetType().GetField("m_ReadBytes", binding).GetValue(OldStream); // get the network stream using(BinaryReader binreader = new BinaryReader(OldStream, Encoding.UTF8)) { MemoryStream ms = new MemoryStream(binreader.ReadBytes(Convert.ToInt32(WireStreamLength))); DimeReader dimereader = new DimeReader(ms); if(dimereader.CanRead) { // SoapEnvelope (the return value is null!) base.Copy(dimereader.ReadRecord().BodyStream, NewStream); // our first Attachment - return value (body) DimeRecord dimerecord = dimereader.ReadRecord(); if(dimerecord.Type == "image/object") { FirstAttachment = dimerecord.BodyStream; message.ContentType = "text/xml; image/object"; Trace.WriteLine(string.Format(@"DimeBridge." + @"SoapClientMessageBeforeDeserialize- " + @"Detaching DimeRecord stream [length={0}]", dimerecord.ContentLength)); } else throw new Exception("DimeBridge " + "Internal Error - wrong attachment type."); } } #endregion #endregion } The SoapClientMessageAfterSerialize method: SoapClientMessageAfterSerialize public override void SoapClientMessageAfterDeserialize(SoapClientMessage message) { #region Dime Client - Detaching a return value // validation if(message.MethodInfo.IsVoid) return; // get the stream based on the WSE client proxy WebServicesClientProtocol wsp = message.Client as WebServicesClientProtocol; if(wsp != null && wsp.ResponseSoapContext != null && wsp.ResponseSoapContext.Attachments.Count == 1 && FirstAttachment == null) { DimeAttachment attachment = wsp.ResponseSoapContext.Attachments[0] as DimeAttachment; if(attachment.ContentType == "image/object") FirstAttachment = attachment.Stream; else throw new Exception("DimeBridge" + " Internal Error - wrong attachment type."); } if(FirstAttachment != null) { using(FirstAttachment) { // get parameterValues object[] parameterValues = message.GetType().BaseType.GetField("parameterValues", binding).GetValue(message) as object[]; // the return value has a mandatory index 0! if(parameterValues[0] == null) { BinaryFormatter bf = new BinaryFormatter(); parameterValues[0] = bf.Deserialize(FirstAttachment); Trace.WriteLine(string.Format(@"DimeBridge." + @"SoapClientMessageAfterDeserialize - " + @"Object '{0}' has been returned", parameterValues[0].ToString())); } else throw new Exception("DimeBridge Internal Error" + " - the parameterValues[0] is not null"); } FirstAttachment = null; } #endregion } As you can see, the implementation of the DIME Bridge at the message stages is straightforward using the classes from the Microsoft.Web.Services2.Dime namespace. As I described earlier, the "concept core" is based on overwriting the reference of the Return value. This reference is obtained using reflection. Microsoft.Web.Services2.Dime In this article, I showed you a SoapMessage encapsulation into the DIME records based on the message content, for instance, a return value from the WebMethod. The WSE-DIME feature allows to split the SoapHeaders from the message body and handle them separately (like Indigo messages). This concept can be used also for creating your own protocol for router, streaming, etc. in a loosely coupled manner. Included DIME Bridge is the example of this concept, how to significantly increase the response from the web service without touching either the client or server. I hope you will enjoy
http://www.codeproject.com/Articles/7195/WebService-DIME-Bridge?msg=841756
CC-MAIN-2014-10
en
refinedweb
ResourceBundle is used for globalization of messages.For example if we want to display mesages in the JSP page, it will not be hardcoded in the page itself. All the messages will be stored in the properties file, we will use ResourceBundle class to load the file. Every langiage have its own resource bundle so that based on the user's locale information, the appropriate resource bundle will be loaded. sample code: import java.util.Locale; import java.util.ResourceBundle; import java.util.MissingResourceException; public class HelloResourceBundleExample { public static void main(String [] argv) { try { Locale frenchLocale = new Locale("fr", "FR"); ResourceBundle rb = ResourceBundle.getBundle("HelloResourceBundle", frenchLocale); System.out.println(rb.getString("Hello")); System.out.println(rb.getString("Goodbye")); } catch (MissingResourceException mre) { mre.printStackTrace(); } } } posted by krishna Just to say a few words about ResourceBundle and its usages, it is something like a facility given to the developer to change the 'configurable' information outside your source code. for example, you may have to have a internationalized property-value settings w hich displays the message according to the current user's locale. What you need to have it you need to keep that many resourcebundle file (could be a plain properties file but with a specific naming convention like __.properties file. Generally, the language and country code are of in 2 characters say, en_US for US english, fr_FR for french etc., You can refer to the sun's tutorial for a good introduction and tutorial. It can be of any property settings like the information pertaining to the database connection (dburl, username, passwrod, driver details, version etc.,), logging information (log file name, locaiton, log file size, nature etc.,). But this is very well achieved through properties file but you can make use of the resource bundles. One advantage of having these information stored in properties file instead of java file: You don't need to compile the java source when you need to do any change in the information (that's why they are called as 'configurable' :)). Not just compiling, you noneed to update your entier archive file as well (.war, .ear, .jar etc). posted by Raghavan
http://www.javabeat.net/qna/37-what-is-the-use-of-resourcebundle-in-java/
CC-MAIN-2014-10
en
refinedweb
Flash CS3 and higher ActionScript 3.0 and higher This technique relates to: See User Agent Support for Flash for general information on user agent support.. In this example, the FLVPlayback component is used to create a video player. A custom class called "AudioDescriptions" is added to manage the playback of extended audio descriptions. This class provides event listeners to listen for cue points in the media that have been identified by the audio description provider. When these cuepoints are reached, an mp3 file containing the corresponding description will start playing. The recorded descriptions have been timed to fit with in the gaps in the movie's dialog. By default, audio descriptions will be enabled. A button (which must itself be accessible to meet other success criteria) is provided below the video player that allows the user to turn audio descriptions on or off. Example Code: package { import fl.video. *; import flash.events. *; import flash.media.Sound; import flash.media.SoundChannel; import flash.net.URLRequest; import flash.display.Sprite; public class AudioDescriptions extends Sprite { private var channel: SoundChannel = new SoundChannel; private var myPlayer: FLVPlayback; private var _enabled: Boolean = true; private var _toggleBtn: Button; private var snd: Sound = new Sound(); public function AudioDescriptions() { // point myPlayer to the FLVPlayback component instance on the stage, // which should be loaded with a valid video source. myPlayer = my_FLVPlybk; // add cue points. When any of these are reached, the // MetadataEvent.CUE_POINT event will fire myPlayer.addASCuePoint(8.35, "ASpt1"); myPlayer.addASCuePoint(23.23, "ASpt2"); enable(); enable_AD_btn.addEventListener(MouseEvent.CLICK, handleBtnClick); } private function handleBtnClick(e) { _enabled = ! _enabled; if (! _enabled) { disable(); enable_AD_btn.label = "Enable Audio Descriptions"; } else { enable(); enable_AD_btn.label = "Disable Audio Descriptions"; } } public function enable() { // set up an event handler which will be called each time a cue point is reached myPlayer.addEventListener(MetadataEvent.CUE_POINT, cp_listener); } public function disable() { // remove the event handler called each time a cue point is reached, so // that audio description is disabled. myPlayer.removeEventListener(MetadataEvent.CUE_POINT, cp_listener); } private function cp_listener(eventObject: MetadataEvent): void { snd = new Sound(); //recreate sound object as it can only load one mp3 file //check to see which cue point was reached switch (eventObject.info.name) { case "ASpt1": snd.load(new URLRequest("sphere.mp3")); //create a new Sound object, and load the appropriate mp3 channel = snd.play(); // play the audio description, and assign it to the SoundChannel object break; case "ASpt2": snd.load(new URLRequest("transfrm.mp3")); channel = snd.play(); break; } } } } The result can be viewed in the working version of Playing descriptions when cue points are reached. The source of Playing descriptions when cue points are reached is available. Audio description can also be provided via an additional audio track that is the same length and plays simultaneously as the primary media, but that only includes sound for the segments when audio description needs to be played and silence at other times. A Flash author can provide a toggle to turn this additional audio track on or off, based on the listener's preference. When the additional track is enabled, there are two parallel audio tracks, one being the primary audio, and the second being the one containing only audio description. It is still necessary to ensure that the audio description and primary audio do not overlap in ways that make comprehension difficult. This method will achieve the same result as the method used in Example 1, but may be chosen because of the type of audio description files that are provided to the Flash author. When Flash content contains video with an audio soundtrack, confirm that: Audio descriptions have been made available using separate sound files. A button is provided that allows users to enable or disable the audio descriptions .
http://www.w3.org/TR/WCAG-TECHS/FLASH26.html
CC-MAIN-2014-10
en
refinedweb
Malloc Madness malloc, does that ensure you actually allocated memory? I was looking at some C code a year or so ago and I noticed that the programmer never checked the return values from calls like malloc. Just to forestall any flames, I'm not suggesting this is a good idea. On the contrary, I always do a sanity check on anything I get back from any memory allocation call. However, it brought to mind an interesting question: If you get a good return from malloc, does that ensure you actually allocated memory? Can you run out of memory but still get a good return from malloc? The answer is: It depends. On some simple systems, the answer is probably yes. If you have a pointer to a chunk of memory, it's yours and will be until you release it. On Linux, however, the answer is often no. By default, the kernel makes a guess about how much memory you'll really need and if it thinks it might have enough, it returns a pointer to you. What it doesn't do, in the interest of performance, is actually allocate any of that memory in physical pages. That doesn't happen until you actually try to use the memory, and even then only for the pages (usually 4K) that you are actually using. By that time, however, there might not be any memory and you get an exception, even though malloc returned a good pointer. One issue with using Linux for embedded systems programming is that most people are used to writing Linux for servers or desktops, and it shows. For example, embedded systems often don't have swap files (either there isn't enough space for one, or the only mass storage is subject to write wearing). Some people assume that means there is no virtual memory. That isn't exactly true. The kernel is pretty smart. If it runs out of physical memory, it tries to decide if it has some pages sitting around that it can safely reclaim. Sometimes that means reducing things like the caches the system uses for files. Eventually, though, that's not enough and it starts stealing pages of memory from processes that haven't used the pages in a while. That's swap, right? Not exactly. Again, the clever kernel looks at the kind of page. If the page is something that has not changed since it was read from disk (say, part of your executable code), it just throws that page away. The kernel knows that if you need that page again, it can just reload it from the original. If the code was modified, then it will have to go to the swap file. Even then, the kernel tries to do things intelligently. If it swaps a page out, and later brings it back, it will mark the swap file page as unused, but it tries not to reuse the page. That way, if the page gets swapped again with no more changes, the kernel can just mark the existing swap page as in-use again, which is not only faster, but means that swapping to flash is probably not as bad as you thought it was (of course, modern wear-leveling algorithms also mitigate the same problem). So it isn't like the first page of your swap file or device gets written to more than any other part of the swap file. However, if you have no swap file at all, it means that any page that is "dirty" is locked in physical memory until you destroy it. That's a big hurdle. The corollary, though, is that having no swap file doesn't necessarily mean the system can't overcommit, since it can still discard clean pages. The overcommit_memory flag in /proc/sys/vm lets you turn off overcommit behavior, which might be a good idea. Usually, this is set to 2, which lets the kernel guess if it will have enough memory based on the amount of physical RAM and swap space along with a ratio ( overcommit_ratio). You can set it to 0, which is my preference for small, embedded systems. You can also set it to 1 so that malloc never fails! That might be useful if you are allocating very sparse arrays that you never really plan to use, although if you do that, I have to wonder about your design! Try this with the different flag settings: #include <stdio.h> #include <stdlib.h> #define TOUCH 1 // 0 to not touch memory, 1 to touch #define SIZE 1024 // bytes to allocate at once (assume <1 page) int main(int argc, char *argv) { unsigned i=0; char *p; do { p=(char *)malloc(SIZE); if (p) printf("%u\r", ++i); else { printf("Malloc failed after %u calls\n",i); break; } #if TOUCH *p=0xAA; #endif } while (p); return 0; } If you set TOUCH to 1, you will get less memory under the normal settings because accessing the memory forces the kernel to actually give you the memory. When TOUCH is 0, the return from malloc is valid, but doesn't really point to real memory, so the kernel is just guessing it can give you enough memory. If you prefer to code in Java, here's an interesting question to ask your JVM vendor: When you load (or JIT compile) some byte code, does it get stuck in physical memory if there is no swap file? Unless the JVM author took some very special pains, the answer will be yes. On the other hand, for embedded systems, a common virtual machine these days is actually the Android Dalvik system. While Android is based on Linux, it has some unusual wrinkles about what it does when it actually runs out of memory. But I'll save that for next time.
http://www.drdobbs.com/embedded-systems/malloc-madness/231600776
CC-MAIN-2014-10
en
refinedweb
WAIT(2) BSD Programmer's Manual WAIT(2) wait, waitpid, wait4, wait3 - wait for process termination #include <sys/types.h> #include , if non-zero, is filled in with termination information about the process that exited (see. The following symbolic constants are currently defined in <sys/wait.h>: #define WAIT_ANY (-1) /* any process */ #define WAIT_MYPGRP 0 /* any process in my process group */..
http://www.mirbsd.org/htman/i386/man2/WIFEXITED.htm
CC-MAIN-2014-10
en
refinedweb
Hi all, I'm glad to announce the release of IPython 0.6.6. IPython's homepage is at: and downloads are at: I've provided RPMs (Py2. Release notes ------------- This release was made to fix a few crashes recently found by users, and also to keep compatibility with matplotlib, whose internal namespace structure was recently changed. * Adapt to matplotlib's new name convention, where the matlab-compatible module is called pylab instead of matlab. The change should be transparent to all users, so ipython 0.6.6 will work both with existing matplotlib versions (which use the matlab name) and the new versions (which will use pylab instead). * Don't crash if pylab users have a non-threaded pygtk and they attempt to use the GTK backends. Instead, print a decent error message and suggest a few alternatives. * Improved printing of docstrings for classes and instances. Now, class, constructor and instance-specific docstrings are properly distinguished and all printed. This should provide better functionality for matplotlib.pylab users, since matplotlib relies heavily on class/instance docstrings for end-user information. * New timing functionality added to %run. '%run -t prog' will time the execution of prog.py. Not as fancy as python's timeit.py, but quick and easy to use. You can optionally ask for multiple runs. * Improved (and faster) verbose exeptions, with proper reporting of dotted variable names (this had been broken since ipython's beginnings). * The IPython.genutils.timing() interface changed, now the repetition number is not a parameter anymore, fixed to 1 (the most common case). timings() remains unchanged for multiple repetitions. * Added ipalias() similar to ipmagic(), and simplified their interface. They now take a single string argument, identical to what you'd type at the ipython command line. These provide access to aliases and magics through a python function call, for use in nested python code (the special alias/magic syntax only works on single lines of input). * Fix an obscure crash with recursively embedded ipythons at the command line. * Other minor fixes and cleanups, both to code and documentation. The NEWS file can be found at, and the full ChangeLog at. Enjoy, and as usual please report any problems. Regards, Fernando.
https://mail.python.org/pipermail/python-list/2004-December/249943.html
CC-MAIN-2014-10
en
refinedweb
When I first started playing around with HTML, I couldn't understand why there was no functionality to draw shapes and non-horizontal lines. It was only a couple of years later that I discovered VML (soon to be replaced by SVG). Don't get me wrong, HTML is an incredibly powerful GUI tool, but with the advent of vector graphics for the web, its power grows exponentially. VML (vector markup language) is the technology that allows developers to draw directly onto an HTML page as if it were a GDI canvas. The syntax is made up of 2 parts: markup and code. These are mutually exclusive. Unfortunately, possibly due to a very slow adoption rate, the VML object model is poorly documented and rarely used in samples (here too). In order to use VML, you need to ensure that the IE5.0 install included the VML plugin. Here's a really basic VML shape. Want to get a complete sample going on your machine? Firstly, you'll need to "import" the namespace into you HTML page: <html xmlns: Next, you'll need to add a new behaviour to the page: behaviour <style> v\:* { behavior: url(#default#VML); } </style> That's it. It won't do anything, but it makes your page VML-aware. This base code will be presumed for the rest of examples. Try adding this anywhere within the <body> tag: <body> <v:roundrect Note that valid XML syntax applies. If you don't adhere to this axiom, then the page could display unpredictably and make debugging very tedious. The actual VML markup is pretty self-explanatory and human-readable (a general goal of the XML standard). Note the v: tag-prefix, this specifies to the IE rendering engine that the roundrect tag in this case is to be handled differently to other tags. v: roundrect So, what's the point? One of the advantages of VML is its minute size when compared to images. Depending on the type of webplications you design, this could be reason enough. Also: So, hopefully, you can see that VML is more than just a distorted 1x1 pixel image. Here's the code for the famous diagonal line that I wanted to do in HTML for so long: <v:line The code is really neat and simple (albiet for simple shapes). For this type of shape, a from(x,y) to(x,y) co-ord syntax is used. The above samples are probably enough for most simple web graphics, but let's dive into some more. from(x,y) to(x,y) Try this: <v:oval Same concept, just a different shape. Note the all too familiar style tag attributes. style Here's a sample that uses a bunch of different shapes. <v:line <v:stroke </v:line> <v:line <v:stroke </v:line> <v:line <v:stroke </v:line> <v:line <v:stroke </v:line> <v:line <v:stroke </v:line> <v:line <v:stroke </v:line> <v:oval <v:curve</v:curve> <v:rect id=myrect </v:rect> As foreign as it seems (to most), using comments in HTML might be a really good plan here. One of the awesome things about VML as a graphics tool is that ALL paint events are handled for you. Try minimising the browser or "un-maximise" it and move a part of it off the edge of your screen and out again. It repaints on it's own - and with no noticable performance penalty! This is a massive bonus for those graphics guys out there. What about a real world use? Here's the output from a graphing engine I supposedly work on during bouts of insomnia. A huge bonus that this approach has over the standard "let-the-server-make-a-gif" idea, is that the client (browser) can alter the shapes at the client's will. I achieve this by giving each applicable shape an id and use inline event-handlers to setup how mousedown, mousemove and mouseup events are handled. After that it's just a matter of implementing a bit of drag-and-drop code. So, put another way, VML shapes are still objects as far as JScript/VBScript is concerned. mousedown mousemove mouseup The other technique that you can use to draw shapes is co-ordinate pairs. This can be a lot trickier to code by hand, but does gives you (virtually) unlimited power over your web presentation. Here's an example: <v:polyline Currently, the SVG specification is set to overtake VML and will eventually be supported natively by the main browsers. But in the meantime, VML is easy to use and can offer a new avenue of expression for lowly web
http://www.codeproject.com/Articles/1742/Introduction-to-VML?fid=3152&df=90&mpp=10&noise=1&prof=True&sort=Position&view=Expanded&spc=None&select=692670&fr=6
CC-MAIN-2014-10
en
refinedweb
Every time in my career when I learn something new, I always wish to share it. Every time when someone is new to a subject, at least he has to Google before he can post in the Forum. Sometimes the answers that are found in threads are not complete. I would like to take this chance to increase the pages and results found when you Google the subject “How to Store a Connection string in an App.Config file or Web.Config File. Let’s hear my story first. Every time I write an article, it means I grew to a certain level of understanding. When I came to the .NET world, I was so overwhelmed to create applications. My interest was on Windows applications, I was not that interested in developing Web applications, may be it’s because of the projects I was involved with. Sometimes I would just consume web services from a Windows application. I had fun and I loved ADO.NET as you have seen in my previous articles. I enjoyed database programming and I have learned a lot and am still learning to make my data layers the most beautiful thing in my world. Let’s cut to the chase. Recently I have been hard coding my connection string to a DLL (Data Access Layer). There was nothing wrong with that, because I knew for a fact that My Server name would never change for years. But that is where I made a big mistake, especially on a Windows application. After a while, I mean few weeks ago, the server that had held my database had resigned. Well the database had to be moved to another server, but hey this meant I have to recompile my DLLs. It is because I cannot change my connection string without recompiling my DLLs. It was never going to be a good thing to change the connection and recompile the DLL again. That means I had to look for a way to store the connection string. I have learned that I should never hardcode my connection string or anything that has to do with my application settings. In this article, we are going to look at how to place a connection string in a file that can be opened and edited anytime when the server changes or the password changes. We are going to use multiple comments in our article and we are going to use C# as our language. This is a very short story that is going to have a happy ending. To store a connection string to an App.Config file, you must do the following; Open your Visual Studio on an Existing Project to which you have once hard coded your connection string. Add a new Application.Configuration file: In your project, the file will be named App.config by default. Open the file. Now we are going to add our connection string, like this: <add key="MyConstring" Value = "User Id=sa; Password=password;Server=Myserver;Database=Mydatabase "></add> That means our file will look like this: <?xml version="1.0" encoding="utf-8" ?> < configuration> < add key="MyConstring" Value = "User Id=sa; Password=password;Server=Myserver;Database=Mydatabase "></add> < /configuration> Now remember that your connection string should be added between your configuration tags. Now that you have added your connection string, you have to access it from your C# or VB.NET code, but in this article I will be doing demonstrations in C#, but these languages do not differ that much. Now the first thing we need to do is to add a Reference to a System.Configuration namespace. After that, we are going to our usings and add it like this: configuration System.Configuration using using System.Configuration; After you are done, let's go to a string that you once hardcoded. It will be something like this: string String strcon = "User Id=sde; Password=topology; Server=bkiicoryw004;Database=Tshwane_Valuations " Now convert it to this: string strcon = ConfigurationManager.AppSettings.Get("MyConstring"); Then recompile your project. When you deploy your application, your App.Config will be deployed amongst other dependency, because your application will need the file to be told where the data source is sitting. So the next time the Server name changes or the password changes, you don't have to recompile your application, you just go to your Program Files and locate your Application directory and open the App.config with a text file and change the connection string. The above example is for Windows applications, but for Web applications, you have to add a Web.Config file as I did and add your connection string like this: <configuration> <appSettings> < add key =" MyConstring " value ="User id=sa; Password=password;Server=myserver;Database=mydb"></add> </appSettings> </configuration> When you access it from your code, it should look like this: string strcon = ConfigurationSettings.AppSettings["MyConstring"]; This is the END of the story. At least the story has a happy ending. The next time a server changes, you don't need to recompile. When the password changes, you don't need to recompile.Hope you loved my short story.Thank you.Ngiyabonga
http://www.codeproject.com/Articles/28699/How-to-Store-and-Retrieve-a-ConnectionString-from?fid=1525531&df=90&mpp=25&sort=Position&spc=Relaxed&tid=4231339
CC-MAIN-2014-10
en
refinedweb
This project is inspired by comments on my article about integrating FCKEditor with SharePoint. In that article, we created a custom Web Part (replacement for the Content Editor Web part) which uses FCKEditor as the web editor. That was OK, but what if you wanted more, if you wanted to use FCKEditor instead of a rich HTML editor that comes with SharePoint and WSS? It’s not enough. Instead of changing core.js, I offer you a cleaner solution, to create your own field type (so you can use it with any list you create in future) with FCKEditor as the default web editor. What do you need to do this? I haven’t tried with older versions, but as I have seen in articles about this subject, version 1.1 was an improvement. I didn’t need to add files and inclusions by myself (Visual Studio did it for me). For instructions on how to add FckEditor files to a SharePoint website, see my previous article: The basic idea is to have multiline columns in a SharePoint list which has a custom WYSIWYG editor. In this article, I implemented FCKEditor in this custom column. The benefits are many: CustomWeb is a deploy ready, VS2005 solution. Every sample from the article is available in that solution. When you prepare FCKEditor, you can just build and deploy this solution. SPFieldText SPField TextField BaseFieldControl using System; using System.Runtime.InteropServices; using System.Web.UI.WebControls; using Microsoft.SharePoint; using Microsoft.SharePoint.WebControls; using FredCK.FCKeditorV2; protected FCKeditor WebEditor; // protected Label WebEditorPrefix; If you want a Label in the new or edit forms, just uncomment this line. I didn’t need it. Label CreateChildControls protected override void CreateChildControls() { if (this.ControlMode == SPControlMode.Edit || this.ControlMode == SPControlMode.New) { // Make sure inherited child controls are completely rendered. base.CreateChildControls(); this.WebEditor = new FCKeditor(); ; //this.WebEditorPrefix = new Label(); //If you want your own label for a control /**** If You use ascx template use this two lines instead 2 above // (Label)TemplateContainer.FindControl("WebEditorPrefix"); //if (TemplateContainer.FindControl("WebEditor")!=null) //this.WebEditor = (FCKeditor)TemplateContainer.FindControl("WebEditor"); */ if (!this.Page.IsPostBack) { if (this.ControlMode == SPControlMode.New) { this.WebEditor.Value = ""; } // end assign default value in New mode this.WebEditor.ImageBrowserURL = "/fckeditor/fileUpload.aspx"; this.WebEditor.ToolbarSet = "MyToolbar"; this.WebEditor.Width = 680; this.WebEditor.Height = 500; }// end if this is not a postback What are we doing here? If the control is in Edit or New mode (when the user opens the New or Edit forms in the SharePoint list), we will render our FCKEditor control. In Display mode, we will render HTML (we specify this in the RenderPattern tag later in this article). If the item is new, we set the value to an empty string; otherwise, it should load the value from the SharePoint list. Properties we set later are FCKEditor specific, we set ImageBrowserURL (which is our custom file browser), toolbar (custom too), width, and height. Edit New Display RenderPattern ImageBrowserURL Value public override object Value { get { //EnsureChildControls(); return this.WebEditor.Value; } set { //EnsureChildControls(); this.WebEditor.Value = (String)value; } } Your field type definition is already created for you in the Templates/xml folder. It’s named fldtypes_CustomWeb.xml if you named your project CustomWeb. This is one of the good sides if you use WSS Extensions 1.1 for Visual Studio. Otherwise, you should create this file by yourself. Just build the project, and Visual Studio will add everything but RenderPattern. This part should never be entered manually. Because we want to display only HTML, it is very easy and you just add: <RenderPattern Name="DisplayPattern" DisplayName="DisplayPattern"> <Column AutoNewLine="TRUE" /> </RenderPattern> before the </FieldTypes> tag. And, one thing more. If you want to store more than 255 characters in your control, you should change the ParentType of your control to Note. </FieldTypes> ParentType Note At the end, your XML file should look like: <?xml version="1.0" encoding="utf-8"?> <FieldTypes> <FieldType> <Field Name="TypeName">CustomWebField</Field> <Field Name="TypeDisplayName">CustomWebField</Field> <Field Name="TypeShortDescription">CustomWebField</Field> <Field Name="ParentType">Note</Field> <Field Name="UserCreatable">TRUE</Field> <Field Name="FieldTypeClass">6dee03df-80d1-4a5b-abb2-3aa1ea2ad19e</Field> <RenderPattern Name="DisplayPattern" DisplayName="DisplayPattern"> <Column AutoNewLine="TRUE" /> </RenderPattern> </FieldType> </FieldTypes> Now, you are ready to deploy your solution. Just click on Deploy, and your field will be added to SharePoint. You can see your new field type when you go to the Create Column option of your list (see image below): Your new field type should be at the end of all known field types in SharePoint (text, option, hyperlink...). Give some nice name to it, and just select CustomWebField as your field type. From this point on, everything is pretty straightforward. You work with this type of column just like with other column types. So, when you open your list item in a new form, it looks like this: If you downloaded my version of FCKEditor, you should have the SharePoint file browser too (otherwise, you should have the default one): In Display mode, our field type just renders the HTML (remember the RenderPattern tag in the field definition XML): As you can see, I commented lines of code that reference a template container. What is all that about? You can create an ASCX page and use UI to create your control. Instead of using Controls.Add(this.WebEditor);, you can create an ASCX file and drag and drop controls you need for your project. You should put this ASCX file in the root of your project. Controls.Add(this.WebEditor); Then, in CustomWeb.FieldControl.cs, you should override the DefaultTemplateName property: DefaultTemplateName protected override string DefaultTemplateName { get { return "CustomFieldControl"; } } where CustomFieldControl is the name of your CustomFieldControl.ascx file. Just remember, after deployment, your ASCX file should be in the C:\Program Files\Common Files\Microsoft Shared\web server extensions\12\TEMPLATE\CONTROLTEMPLATES folder. This was a pretty simple control, so I chose not to use an ASCX
http://www.codeproject.com/Articles/32877/FCKEditor-SharePoint-Integration-Creating-a-Custom/?fid=1534214&df=90&mpp=10&sort=Position&tid=3197795
CC-MAIN-2014-10
en
refinedweb
08 November 2012 22:00 [Source: ICIS news] HOUSTON (ICIS)--?xml:namespace> Centre-south sugarcane production totalled 36.21m tonnes on 15-31 October, up by 56.5% from 23.13m tonnes in the same two weeks one year earlier. The increase stemmed from good weather and a higher number of sugarcane mills that were in operation in the second half of October compared with the same period last year. Unica said 19 sugarcane mills in the centre-south have already finished the 2012 harvest and halted production for the year. The figure compares with 97 mills that were off line at this time last year. The centre-south accounts for 90% of Centre-south sugarcane production is expected to reach 518.5m tonnes for the 2012 crop year, up by 5% from the 493.2m tonnes produced in 2011, according to the Brazilian group. The projection represents a 2% increase from Unica's original forecast of 509.0m tonne in Apr
http://www.icis.com/Articles/2012/11/08/9612448/brazil-group-reaffirms-forecast-for-5-jump-in-sugarcane.html
CC-MAIN-2014-10
en
refinedweb
Disclaimer: If you already know Python really well, this post might not be handy for you. However, I’d still love to see your comments and feedback if you have a moment to reply. Much of my recent work has centered on OpenStack and I’ve found myself overwhelmed by learning Python. Although I don’t have any formal education on anything related to computer science or programming, I’ve worked my way through PHP, Perl and Ruby. Ruby seems to be the most comfortable language for me to use due to the simplicity of the syntax and the handy features provided by the standard libraries and common gems. Python always caught me as strange due to the forced indenting (I indent my code properly anyway, but it still feels weird to be forced to do so), module namespaces and the overall syntax. Things like list and generator comprehension made my head spin and I avoided Python like the plague. All of that had to change over the past few months. I’m not an expert in Python by any means but I’ll be glad to share with you how I trekked from the depths of Ruby to the edge of Python. Zed Shaw’s guide to learning Python has been the primary recommendation from every Python developer I’ve polled at Rackspace. It is clear, concise and accurate; however, I never did finish the HTML guide. Something would end up distracting me or I’d become discouraged by something I couldn’t understand. That’s when I found the video course on Udemy. The video course costs $29 and comes with the PDF copy of the book. You can watch Zed work through the problems on screen via an easy-to-follow screencast. He even makes common errors on screen and runs the interpreter so you can get familiar with exceptions from common typos. If it’s in Python or the standard libraries bundled along with it, it’s in the Python documentation. There are plenty of code examples for almost all of the methods from the standard libraries on the site. It’s a good resource to bookmark while you’re learning what certain methods do and which parameters they expect. You can also ensure that your code isn’t importing modules that are deprecated. This could draw criticism from some, but Stack Overflow is a good resource to find better ways to do things in Python. I’ve written some pretty ugly Python code only to find that I could have called a couple of methods from modules found in Python’s standard libraries. You can find lots of examples of code simplification and recommendations for which modules to use for a particular project. Keep in mind that some suggestions on the site can be subpar. Some may contain deprecated or insecure code that could hurt your project’s success. Be sure to look through the comments after each answer to ensure that you’re reading a solid solution. Some of the best resources for learning Python are probably all around you in your office or online. I’m extremely fortunate to be surrounded by gifted and experienced developers at Rackspace who genuinely care about their work and want to share their strategies with others. I’ve always had a tough time understanding lambdas (I couldn’t understand them in Ruby, either), but one of my coworkers took me through some examples as I was leaving work. If you feel like you might be a bother to your coworkers, try to do some homework on the topic first or give them a specific example of what you’re trying to solve. It will show them that you’ve done your best to understand the topic but that you need some help getting over the hump. A hot cup of their favorite coffee or snack doesn’t hurt either. Find a problem, make a project and write some Python. Most of us have something we’d like to accomplish if we had the time. Take that idea or problem and write Python to solve it. You’ll pick up new knowledge as you work through the project and you’ll probably back yourself into a corner more than once. When it happens, go back to the documentation, do some Googling and lean on your peers. I’ve been working with Python for just over a month and these strategies have jump started my learning by leaps and bounds. If you’re struggling, drop me a line and I’ll see what I can do to help. I’m also eager to hear your strategies for learning Python so they can be shared with others.
http://www.rackspace.com/blog/how-i-started-learning-python/
CC-MAIN-2014-10
en
refinedweb
Difference between revisions of "How to add validation logic to HttpServletRequest" Latest revision as of 00:15, 8 March 2012 Status Released 14/1/2008 Overview In a Java EE application, all user input comes from the HttpServletRequest object. Using the methods in that class, such as getParameter, getCookie, or getHeader, your application can get "raw" information directly from the user's browser. Everything from the user in the HttpServletRequest should be considered "tainted" and in need of validation and encoding before use. So it would be nice if we could add validation to the HttpServletRequest object itself, rather than making separate calls to validation and encoding logic. This way, developers would get some validation by default. This article presents an approach to building validation into the HttpServletRequest object so that it is mandatory for developers to use the validated data. Most projects have an "allow everything" approach to validation. What we want is a Positive security model that denies everything that's not explicitly allowed. So using this approach, you can start your project with a very restrictive set of allowed characters, and expand that only as necessary. Approach We're going to use a Java EE filter to wrap all incoming requests with a new class that extends HttpServletRequestWrapper, a utility class designed for just this type of application. Then all we have to do is override the specific methods that get user data and replace them with calls that do validation before returning data. The first thing to do is to create the ValidatingHttpRequest. Note that this class calls a new custom method named "validate" that throws a ValidationException if anything goes wrong. You can do a lot in the validate method, including encoding the input before it is returned to the application. public class ValidatingHttpRequest extends HttpServletRequestWrapper { public ValidatingHttpRequest(HttpServletRequest request) { super(request); } public String getParameter(String name) { HttpServletRequest req = (HttpServletRequest) super.getRequest(); return validate( name, req.getParameter( name ) ); } // Danger - you can optionally allow getting the raw parameter public String getRawParameter( String name ) { HttpServletRequest req = (HttpServletRequest) super.getRequest(); return req.getParameter( name ); } ... follow this pattern for getHeader(), getCookie(), etc... ... specifically don´t forget getParameterValues() as this is used by frameworks like Struts to get the parameter values ... Struts2 uses getParameterMap() to get the parameter values. Below is the sample way how to validate that. public Map<String, String[]> getParameterMap() { Map<String, String[]> map = super.getParameterMap(); Iterator iterator = map.keySet().iterator(); Map<String, String[]> newMap = new LinkedHashMap<String, String[]>(); while (iterator.hasNext()) { String key = iterator.next().toString(); String []values = map.get(key); String []newValues = new String[values.length]; for(int i = 0; i < values.length; i++){ newValues[i] = validate(key, values[i]); // Apply validation logic to the value } newMap.put(key, newValues); } return newMap; } // This is a VERY restrictive pattern alphanumeric < 20 chars // It's easy to make this a parameter for the filter and configure in web.xml private Pattern pattern = Pattern.compile("^[a-zA-Z0-9]{0,20}$"); private String validate( String name, String input ) throws ValidationException { // important - always canonicalize before validating String canonical = canonicalize( input ); // check to see if input matches whitelist character set if ( !pattern.matcher( canonical ).matches() ) { throw new ValidationException( "Improper format in " + name + " field"; } // you could html entity encode input, but it's probably better to do this before output // canonical = HTMLEntityEncode( canonical ); return canonical; } // Simplifies input to its simplest form to make encoding tricks more difficult private String canonicalize( String input ) { String canonical = sun.text.Normalizer.normalize( input, Normalizer.DECOMP, 0 ); return canonical; } //--A.in.the.k 08:57, 19 March 2009 (UTC) for correct implementation see // Return HTML Entity code equivalents for any special characters public static String HTMLEntityEncode( String input ) { StringBuffer sb = new StringBuffer(); for ( int i = 0; i < input.length(); ++i ) { char ch = input.charAt( i ); if ( ch>='a' && ch<='z' || ch>='A' && ch<='Z' || ch>='0' && ch<='9' ) { sb.append( ch ); } else { sb.append( "&#" + (int)ch + ";" ); } } return sb.toString(); } } Then all we have to do is make sure that all the requests in our application get wrapped in our new wrapper. It's easy to implement with a Java EE filter. public class ValidationFilter implements Filter { public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain) { chain.doFilter(new ValidatingHttpRequest( (HttpServletRequest)request ), response); } } To add the filter to our application, all we have to do is put these classes on our application's classpath and then set up the filter in web.xml. <filter> <filter-name>ValidationFilter</filter-name> <filter-class>ValidationFilter</filter-class> </filter> <filter-mapping> <filter-name>ValidationFilter</filter-name> <url-pattern>/*</url-pattern> </filter-mapping> Now all requests will go through the filter, get validated, and throw an exception if there's a problem. You'll want to set up a handler for ValidationExceptions, so that they get handled properly. If ValidationException extends SecurityException, and you've set up a handler for SecurityException that logs out users who try to hack, you'll be well on your way to a secure application. Limitations A limitation of this particular example is that all HTTP requests are validated according to the same validation pattern. A more sophisticated approach would allow developers to specify different validation patters for different types of data.
https://www.owasp.org/index.php?title=How_to_add_validation_logic_to_HttpServletRequest&diff=prev&oldid=125742
CC-MAIN-2014-10
en
refinedweb
10 August 2011 23:59 [Source: ICIS news] LONDON (ICIS)--European styrene butadiene rubber (SBR) August contract prices rolled over from July because of slower demand during August and lower-than-expected feedstock cost increases. A number of southern European plants have shut for planned maintenance during August and demand has dropped as a result. One producer said it made small reductions of €30/tonne ($42/tonne) on large orders. Most southern European producers offered rollovers, but warned there may be increases in September. One European trader said that there is no competition from ?xml:namespace> “There is a lot of talk of Asian SBR prices coming down, but most players are on holiday so more reaction is expected from September,” said the trader. There are currently no supply issues. Demand is weak but expected to pick up from September when plants build their inventories. SBR is used in the manufacture of tyres for the automotive industry. (
http://www.icis.com/Articles/2011/08/10/9484183/europe-styrene-butadiene-rubber-contracts-roll-over-in-august.html
CC-MAIN-2014-10
en
refinedweb
Most Wanted Apache MyFaces Trinidad 1.2 Tags and Tag Attributes This article by David Thomas discusses the Trinidad tags and their attributes in a structured manner. The reader will gain an insight into the design of Trinidad allowing them to draw an efficient mental map of the library and an effective selection and application of tags. More concretely, the following topics are covered: - An overview of the XHTML-focused Trinidad namespace trh - An overview of the central Trinidad namespace tr - An orientation and classification on the attributes supported by Trinidad A Short Tour through NAV 2009: Part 2 Read Part One of A Short Tour through NAV 2009 here.Read A Short Tour through NAV 2009: Part 2 in full A Short Tour through NAV 2009: Part 3 Read Part One of A Short Tour through NAV 2009 here. Read Part Two of A Short Tour through NAV 2009 here.Read A Short Tour through NAV 2009: Part 3 in full in full Ubuntu 9.10: How To Upgrade In this article by Christer Edwards, you'll learn a number of different ways to upgrade an existing Ubuntu installation. Whether it is a Desktop, Laptop or Server, you'll find instructions below. These methods have been tested by volunteers around the world and should prove to be simple and problem free for you as well.Read Ubuntu 9.10: How To Upgrade in full Build an Advanced Contact Manager using JBoss RichFaces 3.3: Part 2 Read Build an Advanced Contact Manager using JBoss RichFaces 3.3: Part 1 here.Read Build an Advanced Contact Manager using JBoss RichFaces 3.3: Part 2 in full Plotting Geographical Data using Basemap This article by Sandro Tosi is dedicated to Basemap, a Matplotlib toolkit to draw geographical data. We can use Matplotlib to draw on geographical map projections using the Basemap external toolkit. Basemap provides an efficient way to draw Matplotlib plots over real world maps.Read Plotting Geographical Data using Basemap in full
https://www.packtpub.com/article-network/%252Fadmin/Article-Network?page=175
CC-MAIN-2014-10
en
refinedweb
iCelBehaviour Struct ReferenceThis is an entity in the CEL layer at the BL (behaviour layer) side. More... #include <behaviourlayer/behave.h> Inheritance diagram for iCelBehaviour: Detailed DescriptionThis is an entity in the CEL layer at the BL (behaviour layer) side. Definition at line 69 'ret' parameter can be used to return values. Send a message to this entity. Returns true if the message was understood and handled by the entity. The documentation for this struct was generated from the following file: Generated for CEL: Crystal Entity Layer 1.2 by doxygen 1.4.7
http://crystalspace3d.org/cel/docs/online/api-1.2/structiCelBehaviour.html
CC-MAIN-2014-10
en
refinedweb
I'm pleased to announce release 3.11: pam_setcred, pam_open_session, and pam_acct_mgmt now return PAM_IGNORE for ignored users or non-Kerberos logins rather than PAM_SUCCESS. This return code tells the PAM library to continue as if the module were not present in the configuration and allows sufficient to be meaningful for pam-krb5 in account and session groups. pam_authenticate continues to return failure for ignored users; PAM_IGNORE would arguably be more correct, but increases the risk of security holes through incorrect configuration. Support correct password expiration handling according to the PAM standard (returning success from pam_authenticate and an error from pam_acct_mgmt and completing the authentication after pam_chauthotk). This is not the default since it opens security holes with broken applications that don't call pam_acct_mgmt or ignore its exit status. To enable it, set the PAM option defer_pwchange for applications known to make the correct PAM calls and check return codes. Add a new option to attempt change of expired passwords during pam_authenticate if Kerberos authentication returns a password expired error. Normally, the Kerberos library will do this for you, but some Kerberos libraries (notably Solaris) disable that code. This option allows simulation of the normal Kerberos library behavior on those platforms. Work around an apparent Heimdal bug when krb5_free_cred_contents is called on an all-zero credential structure. It's not clear what's going on here and the Heimdal code looks correct, but avoiding the call fixes the problem. Warn if more than one of use_authtok, use_first_pass, and try_first_pass is set and use the strongest of the one set. Remove the workaround for versions of MIT Kerberos that didn't initialize a krb5_get_init_creds_opt structure on opt_alloc. This bug was only present in early versions of 1.6; the correct fix is to upgrade. Add an additional header check for AIX's bundled Kerberos.. For Kerberos libraries without krb5-config, also check for networking libraries (-lsocket and friends) before checking for Kerberos libraries in case shared library dependencies are broken. Fix Autoconf syntax error when probing for libkrb5support. Thanks, Mike Garrison. Set an explicit visibility of hidden for all internal functions at compile time if gcc is used to permit better optimization. Hide all functions except the official interfaces using a version script on Linux. This protects against leaking symbols into the application namespace and provides some mild optimization benefit. Fix the probing of PAM headers for const on Mac OS X. This will suppress some harmless compiler warnings there. Thanks, Markus Moeller. You can download it from: <> Debian packages have been uploaded to Debian unstable. Please let me know of any problems or feature requests not already listed in the TODO file. -- Russ Allbery ([email protected]) <>
http://fixunix.com/kerberos/506495-pam-krb5-3-11-released-print.html
CC-MAIN-2014-10
en
refinedweb
/* Support for complaint handling during symbol reading in GDB. Copyright 1990, 1991, 1992, 1993, 1995, 1998, 1999, 2000, 2002, "complaints.h" #include "gdb_assert.h" #include "command.h" #include "gdbcmd.h" extern void _initialize_complaints (void); /* Should each complaint message be self explanatory, or should we assume that a series of complaints is being produced? */ /* case 1: First message of a series that must start off with explanation. case 2: Subsequent message of a series that needs no explanation (the user already knows we have a problem so we can just state our piece). */ enum complaint_series { /* Isolated self explanatory message. */ ISOLATED_MESSAGE, /* First message of a series, includes an explanation. */ FIRST_MESSAGE, /* First message of a series, but does not need to include any sort of explanation. */ SHORT_FIRST_MESSAGE, /* Subsequent message of a series that needs no explanation (the user already knows we have a problem so we can just state our piece). */ SUBSEQUENT_MESSAGE }; /* Structure to manage complaints about symbol file contents. */ struct complain { const char *file; int line; const char *fmt; int counter; struct complain *next; }; /* The explanatory message that should accompany the complaint. The message is in two parts - pre and post - that are printed around the complaint text. */ struct explanation { const char *prefix; const char *postfix; }; struct complaints { struct complain *root; /* Should each complaint be self explanatory, or should we assume that a series of complaints is being produced? case 0: Isolated self explanatory message. case 1: First message of a series that must start off with explanation. case 2: Subsequent message of a series that needs no explanation (the user already knows we have a problem so we can just state our piece). */ int series; /* The explanatory messages that should accompany the complaint. NOTE: cagney/2002-08-14: In a desperate attempt at being vaguely i18n friendly, this is an array of two messages. When present, the PRE and POST EXPLANATION[SERIES] are used to wrap the message. */ const struct explanation *explanation; }; static struct complain complaint_sentinel; /* The symbol table complaint table. */ static struct explanation symfile_explanations[] = { { "During symbol reading, ", "." }, { "During symbol reading...", "..."}, { "", "..."}, { "", "..."}, { NULL, NULL } }; static struct complaints symfile_complaint_book = { &complaint_sentinel, 0, symfile_explanations }; struct complaints *symfile_complaints = &symfile_complaint_book; /* Wrapper function to, on-demand, fill in a complaints object. */ static struct complaints * get_complaints (struct complaints **c) { if ((*c) != NULL) return (*c); (*c) = XMALLOC (struct complaints); (*c)->root = &complaint_sentinel; (*c)->series = ISOLATED_MESSAGE; (*c)->explanation = NULL; return (*c); } static struct complain * find_complaint (struct complaints *complaints, const char *file, int line, const char *fmt) { struct complain *complaint; /* Find the complaint in the table. A more efficient search algorithm (based on hash table or something) could be used. But that can wait until someone shows evidence that this lookup is a real bottle neck. */ for (complaint = complaints->root; complaint != NULL; complaint = complaint->next) { if (complaint->fmt == fmt && complaint->file == file && complaint->line == line) return complaint; } /* Oops not seen before, fill in a new complaint. */ complaint = XMALLOC (struct complain); complaint->fmt = fmt; complaint->file = file; complaint->line = line; complaint->counter = 0; complaint->next = NULL; /* File it, return it. */ complaint->next = complaints->root; complaints->root = complaint; return complaint; } /* How many complaints about a particular thing should be printed before we stop whining about it? Default is no whining at all, since so many systems have ill-constructed symbol files. */ static unsigned int stop_whining = 0; /* Print a complaint, and link the complaint block into a chain for later handling. */ static void ATTR_FORMAT (printf, 4, 0) vcomplaint (struct complaints **c, const char *file, int line, const char *fmt, va_list args) { struct complaints *complaints = get_complaints (c); struct complain *complaint = find_complaint (complaints, file, line, fmt); enum complaint_series series; gdb_assert (complaints != NULL); complaint->counter++; if (complaint->counter > stop_whining) return; if (info_verbose) series = SUBSEQUENT_MESSAGE; else series = complaints->series; if (complaint->file != NULL) internal_vwarning (complaint->file, complaint->line, complaint->fmt, args); else if (deprecated_warning_hook) (*deprecated_warning_hook) (complaint->fmt, args); else { if (complaints->explanation == NULL) /* A [v]warning() call always appends a newline. */ vwarning (complaint->fmt, args); else { char *msg; struct cleanup *cleanups; msg = xstrvprintf (complaint->fmt, args); cleanups = make_cleanup (xfree, msg); wrap_here (""); if (series != SUBSEQUENT_MESSAGE) begin_line (); /* XXX: i18n */ fprintf_filtered (gdb_stderr, "%s%s%s", complaints->explanation[series].prefix, msg, complaints->explanation[series].postfix); /* Force a line-break after any isolated message. For the other cases, clear_complaints() takes care of any missing trailing newline, the wrap_here() is just a hint. */ if (series == ISOLATED_MESSAGE) /* It would be really nice to use begin_line() here. Unfortunately that function doesn't track GDB_STDERR and consequently will sometimes supress a line when it shouldn't. */ fputs_filtered ("\n", gdb_stderr); else wrap_here (""); do_cleanups (cleanups); } } switch (series) { case ISOLATED_MESSAGE: break; case FIRST_MESSAGE: complaints->series = SUBSEQUENT_MESSAGE; break; case SUBSEQUENT_MESSAGE: case SHORT_FIRST_MESSAGE: complaints->series = SUBSEQUENT_MESSAGE; break; } /* If GDB dumps core, we'd like to see the complaints first. Presumably GDB will not be sending so many complaints that this becomes a performance hog. */ gdb_flush (gdb_stderr); } void complaint (struct complaints **complaints, const char *fmt, ...) { va_list args; va_start (args, fmt); vcomplaint (complaints, NULL/*file*/, 0/*line*/, fmt, args); va_end (args); } void internal_complaint (struct complaints **complaints, const char *file, int line, const char *fmt, ...) { va_list args; va_start (args, fmt); vcomplaint (complaints, file, line, fmt, args); va_end (args); } /* Clear out / initialize all complaint counters that have ever been incremented. If LESS_VERBOSE is 1, be less verbose about successive complaints, since the messages are appearing all together during a command that is reporting a contiguous block of complaints (rather than being interleaved with other messages). If noisy is 1, we are in a noisy command, and our caller will print enough context for the user to figure it out. */ void clear_complaints (struct complaints **c, int less_verbose, int noisy) { struct complaints *complaints = get_complaints (c); struct complain *p; for (p = complaints->root; p != NULL; p = p->next) { p->counter = 0; } switch (complaints->series) { case FIRST_MESSAGE: /* Haven't yet printed anything. */ break; case SHORT_FIRST_MESSAGE: /* Haven't yet printed anything. */ break; case ISOLATED_MESSAGE: /* The code above, always forces a line-break. No need to do it here. */ break; case SUBSEQUENT_MESSAGE: /* It would be really nice to use begin_line() here. Unfortunately that function doesn't track GDB_STDERR and consequently will sometimes supress a line when it shouldn't. */ fputs_unfiltered ("\n", gdb_stderr); break; default: internal_error (__FILE__, __LINE__, _("bad switch")); } if (!less_verbose) complaints->series = ISOLATED_MESSAGE; else if (!noisy) complaints->series = FIRST_MESSAGE; else complaints->series = SHORT_FIRST_MESSAGE; } static void complaints_show_value (struct ui_file *file, int from_tty, struct cmd_list_element *cmd, const char *value) { fprintf_filtered (file, _("Max number of complaints about incorrect" " symbols is %s.\n"), value); } void _initialize_complaints (void) { add_setshow_uinteger_cmd ("complaints", class_support, &stop_whining, _("\ Set max number of complaints about incorrect symbols."), _("\ Show max number of complaints about incorrect symbols."), NULL, NULL, complaints_show_value, &setlist, &showlist); }
http://opensource.apple.com/source/gdb/gdb-1344/src/gdb/complaints.c
CC-MAIN-2014-10
en
refinedweb
Wireless Keyboard... Without The Keyboard 148 MindJob writes "Berkeley's Sensor & Actuator Center has developed a virtual keyboard that allows you to glue 10 tiny chips to your fingernails and type away anywhere. The chips are composed of tiny, battery powered MEMS, or Microelectromechanical Systems, that work by tracking the location of your fingers and transmitting via a low-powered radio to a nearby receiver that will work regardless of the computer platform." Re:Uhrm... Security Issues? (Score:1) A different twist. (Score:2) Uhrm... Security Issues? (Score:4) Re:Uhrm... Security Issues? (Score:1) T. Re:Bad Typists? (Score:1) -- Making iDirt 1.82 a safer place, one bug at a time. A better way (Score:1) I agree, emulating a keyboard with this would be unimaginative and wasteful. Fortunately there is a much better approach--thumbcode [stanford.edu]. I can chord, but it's just not as satisfying as having a keyboard, but a suspect signing would be even better with practice. Not to mention the looks you would get when you wire your office, home and virtual pets to respond to gestures. Now, a set of these and some display contacts with a resolution of at least 80 by 25 characters and my life would be complete. I could Angband [phial.com] right through meetings. Re:Kind of like these? (Score:1) That sounds pretty cool. Do you still have any of the old code/hardwear specs around? That would make a great addition over at the wearables newsgroup home page [blu.org]. If you don't have time to put it up there email me the specs, and if I can get it to work, I'll document it and get it up with the credits to you and your friend. Then I'll hack the hell out of it for my personal use :) Brain Fart (Score:1) Here (Score:3) The First Thing That Comes To Mind... (Score:1) "Are you an engineer too?" "No, I'm just an idiot." Anybody else remember this one? =) David E. Weekly (dew, Think) Re:Wasn't there a Dilbert cartoon about this? (Score:1) One Click (Score:1) Re:Bad Typists? (Score:1) Re:Darn, I know what this means (Score:1) Exactly except... (Score:1) jkl;jkl;jkl;jkl;jkl; Cheers, Ben Darn, I know what this means (Score:2) Cheers, Ben Like I Said... (Score:1) Re:Hmmm... (Score:1) We'll all end up like the technomage in B5's Crusade spinoff yet... Bluetooth - sorry, can't help myself. (Score:2) Imagine being able to use this device and a Bluetooth-enabled PalmOS device [widcomm.com] to enter data. Could be better than the Stowaway [thinkoutside.com]. Bluetooth would also solve some of the security problems mentioned by someone else. Just think of this as a cordless data entry device for a hidden PC - combine with a virtual display and you'd have an invisible computer system that could be used walking down the street... keyboard? (Score:1) so i suppose you could then model "virtual keyboard" movements on like a silent keyboard which is really just posterboard with bumps. that gives you your tactile cues to where keys are, and gives you something that you can move wherever. you could even cut the keyboard in half and put the halves wherever. but saying you can type in the air is a little naive--they're talking about sign-language. incidentally, it's kind of neat that it would be possible to learn sign language and be able to "dictate" that way to your computer. so you're movements are single-keystrokes, but rather entire words and phrases. you can abstract the communication process to your computer one entire level. it'd be nice to have more folks who can speak with sign-language anyways. i know i learned bits of sign language in order to communicate across loud rooms and during meetings. /will Nifty.. but.. (Score:1) Re:Kind of like these? (Score:2) Bad Typists? (Score:1) Perhaps they could add an evolving algorithm to the typing interpreter - so that as you begin to type differently using the wireless technology because there are no physical limitations, the program compensates - resulting a gradual shift to the most natural typing position. Who knows? Is the program based on the relative positions of the fingers, relative to the center of the hand? If so, all you would need to do is relax your hand by your side and type.... or perhaps the virtual keys would become smaller than physically pressable, so you would merely twitch the appropriate fingers.. or use chords... Again, it would be interesting to watch the general evolution of interpretive typing. Deja Vu (Score:1) Luckily, I was wrong on the Net - mine was based on a CCITT standard, not on TCP/IP... Re:Bad Typists? (Score:1) The other thing is, even in your example ambiguities arise as to what the speaker actually intends. You say assume anything that isn't a reserved 'vocal word' is a variable/function name, but you use what would be a reserved 'vocal word' (Int). Although I'd assume you could check to see if it was a valid location to place int, the voice interpreter could easily interpret public class hello world imp implements hello world int f as public class HelloWorldImp implements HelloWorld int f instead of public class HelloWorldImp implements HelloWorldIntF which is clearly not the desired code. To pull an example from my own (ugly, Windows) code: CopyMemory( &project.files[project.iFileCount - 1], file, sizeof( FILEENTRY ) ); I guess, vocally, I could dictate it as copy memory paren amp project dot files brac project dot i file count minus one comma file comma sizeof paren fileentry clopar clopar semi Or some variation thereof. However, that seems awfully complext to me, and typing it is fairly simple. Simply too much of the punctuation has to be dictated, and I don't see much way around it. And again, from wonderful windows code ghWnd = CreateWindowEx( 0, CLASS_MAINWINDOW, WINDOW_TITLE, WS_OVERLAPPEDWINDOW, x, y, width, height, 0, 0, ghInstance, 0 ); And I'm not even going to try to figure out what would be spoken for that one. The trouble is, syntactically, I think a language like C is just too complex for any dictation system to work. It might be easier for some, but I would much rather type my code. When I have to spell things out for my computer, I'd rather use a device designed for just that. Lee Press On Keyboard? (Score:1) Re:Bad Typists? (Score:1) What about... (Score:1) Even if there were some foot pedal or other device to turn they keyboard on/off, that's too weird. Just gimme a good old keyboard. Re:Exactly except... (Score:1) ;lkj;lkj;lkj;lkj;lkj The other way seems somehow... wrong. Forget the keyboard (Score:1) Forget the keyboard, use a chording style instead. With only one hand and chording one can do all possibly key combinations easily. No need for tactile feadback either. Keys are struck by relative positions of the fingers in relationship to each other. Each finger is capible of 3 easy to determin states (up, middle, down) and the thumb is capible of 5 states (up, down, left, right, middle) or more. That gives you 405 possible combinations. Not all are useable by all people as some people tend to move the pinky with the ring finger, but that can be worked around with alternate chords or finger training. To keep from typing while using the hand for other things, you asign a sequence of moves to turn on and off the keyboard. Like balling it into a fist or strumming the fingers in a wave pattern. To bad this was already patened a few years ago or I would have then. Fingernails (Score:1) Plus the fact that fingernails grow, and you'd have to refit the sensors ever couple of weeks, depending of your fingernail growth rate... And for them not to come off during other activities, they'd have to be glued on pretty hard, and that would make it a real PITA to refit them... so i guess that makes the fingernails pretty badly suited for this kind of stuff. A better idea if you're gona have them permanently fitted would be to implant somewhere inside the fingertip. --- Ilmari Anybody for some Quake? (Score:1) Man, that's one Clean Desktop of the Future. I wonder when they'll have a model that'll replace your Mouse or TrackPad. Kagenin Re:And another it-had-to-be-said... (Score:1) Re:it has to be said .... (Score:1) I've been living Dilbert all my life; this article just makes it better.. Especially since I've been in a "Wally Job" for the past three months and haven't been enjoying it one bit. -Chris Re:Yeah, but... (Score:1) Re:Bad Typists? (Score:1) I watch the TV (Got to love the Simpsons!), and imagine the monitor and the screen! Nyah. Re:Temporary solution. (Score:2) Re:dilbert knows about this (Score:1) why keyboards...make it a mouse too (Score:2) Yeah, but... (Score:1) The problem with this is that we need tactile feedback, otherwise we can't easily know we are "typing" a letter. Re:Bad Typists? (Score:2) For those of you who are thinking about speech as the interface of the future, doubtless you are correct for some cases. However, there will always be a place for precision work. Think about CAD programs. Can you imagine just speaking to them and getting the accuracy you need? Plus, until we have programming languages that are redesigned not to use punctuation that need be spoken, you'll be able to enter your code much easily with a keyboard. Re:Bad Typists? (Score:2) Thinking of what you can do with your eyes constantly switching between screen and keyboard, think now what you could do if you could leave your brain in the virtual world displayed on the screen and never have to look down. Re:it has to be said .... (Score:1) Come to think of it, I work with a Wally. Well, I do until Jan 4th, when I escape and go to work for a nicer company which is paying me, in part, to play with new technology. I'm in heaven. Why this is good (Score:1) The point of this technology is not that it will replace the keyboard sitting on your desk, but that for those with a desire to gargoyle will have an effective input method. I know everyone says speech recognition is the way to go, but I've been keeping up with the technology, and it's just not ready for prime time. Also, what if you don't want everyone to see/hear what you're inputting? Sounds like a virtual keyboard has plenty of application in wearable computing. Re:Exactly except... (Score:1) a;slkdfja;slkdfja;slkdfj hmmm.... Hunt and Pecker! (Score:3) What if you don't type normally? (Score:1) The sensors would have to aware of each other. The method of typing would have to be the standard way. With your fingers pressed on A S D F J K L ; And the keys would be registered accordingly. Like a joystick, it would most likely have to be calibrated for X:0 Y:0 position. One problem, what if you developed your own style of typing? Many applications (Score:1) Gives new meaning to the term 'air guitar'. how does this really work ..... (Score:4) Typing in the air has no frames of reference (unless you have some VR keyboard and goggles etc) and it's a 3-d sort of thing - no hard 2-d thing to stop your fingers at the end of very stroke. Instead I suspect it's probably getting close to the time when we can come up with a new typing metaphor - hopefully something a little easier on my wrists - maybe 'typing' with my arms relaxed in my lap or something. With something like this a form of virtual chord keyboard might work well too meaning we could get away from the positional locations of keys on a keyboard which might be more suited for virtual keyboards. Has anyone out there become proficient with a chord keyboard of some sort? can you type as fast or are you limited more by the time between chords? Of course with cool MEMS technology like this just think of the interesting musical instruments we can create! Kind of like these? (Score:2) We have a pair of these in our lab hooked up to one of our SGIs. Pretty nifty toys, actually did a bit of programming for them (nothing too fancy). The API is fairly easy to mess with. :) :) There's nothing like flipping someone off and watching a real-time rendered hand do it on your monitor.... You can even get them with little vibrators [virtex.com] on the tip of each finger and on the palm to give a sort of tactile feedback. You can program these to react any way you like. The most useful way is to increase the intensity of the vibration the harder you grip or press against a virtual object. Don't get any sick ideas... Can't sleep...Clown will eat me... Re:A few Thoughts (Score:1) Think of the applications (Score:2) Users of sign language could now have realtime translations... the chips would automatically detect the hand configuration and send it to a PC screen... maybe this would make sign language the new language of pc 's? Or a form of it. Consider all the different configurations anf combinations of hand movements and contortions... enough to equal a 101keyboard plus extras for shortcuts and such... but would this rate as ergonomic? And would it can a whole new for of RSI?? This will be different (Score:1) Then, of course, there is the whole issue of how well it can discriminate chords. I use Emacs, The One True Editor [gnu.org] (C-0 M-x all-hail-emacs), which is well known for some of the secondary meanings of its acronym [ucar.edu] including "Esc Meta Alt Ctrl Shift". We just express it more compactly as M-A-C-S-. Humor aside, will I be able to type M-C-v or C-@ or other three key chords with ease? Re:Bad Typists? (Score:1) All we'd need is a simple pre-processor that understood that 'public class HelloWorldImp implements HelloWorldIntf { public void greet() { System.out.println("Hello World"); } } is spoken public class hello world imp implements hello world int f brac public void greet dubparen brac system out println par quo cap hello cap world quo clopar semi clobrac clobrac' Assume anything that isn't a reserved 'vocal word' is a concatenated variable/function name, abbreviate the punctuation to monosyllable, and double check all vars/functs against a known list. Heck, strip out the punctuation and just guess at it. Say it, and try typing it.. For me, saying it is much faster, and I'm not shoddy typist. Re:Bad Typists? (Score:1) The downside I see is a learning curve. Shortly after posting, I hit up my old copy of Dragon Dictate for some real test results. I only expected human intelligible results, and read all punctuation fully. I started by reading off some of my own Java source, which went much faster than I could type.( I type 40-50 wpm ) Reading a coworker's C++ stuff took quite a bit of thought, but also was faster than I type. The snafu came when I tried to write original code. I made a mockery of myself, half stammering 'code' that would normally just spring from my fingertips. I couldn't do it. I don't see trying to edit code verbally as easy either. I suppose the only real test short of writing a parser would be to speak to a programmer incapable of typing due to RSI, a spinal injury, etc. Only someone who has actually done it can tell us how bad a curve it is, and if it is even worth the effort when we still have our IBM Model M's.. Why keyboards (Score:3) If the patern-recognition software is so good it can make out which key you think you are pressing, making out what sign your hand is making by the relative position of the fingertips should be just as easy. Wasn't there a Dilbert cartoon about this? (Score:1) But, for us geeks, something like that could save major wrist strain. I'm all for the idea. more key combinations available (Score:1) Maybe one day we can ctrl-x-left-toe. This is VR (you've missed the point) (Score:1) But the real appeal is for when you're not sitting in front of anything, or can't see it - like when you're wearing a head-mounted display....the "keyboard" could be something displayed to your eyes...but in the real world, maybe it's just a piece of foam rubber (or some other ergonomic surface). And, if the sensors are there, then who says they'd only be good for typing on a simulated keyboard? What about virtual sculpture, fingerpainting, graphical control, etc. This is like the Nintendo PowerGlove (fairly lame video game input device), but way higher resolution and all 10 fingers. Not a problem (Score:1) Re:Bad Typists? (Score:1) Temporary solution. (Score:1) Together with a chip attached to one of my optical sinews instead of a monitor and a wireless link to my home computer I could play quake during all boring periods of my life. Re:Temporary solution. (Score:1) The tv program was dutch (since I'm from Holland) but the research was done in Amerika somewhere at a university hospital or something. The guy with the implant had a....(sorry guys don't know the english word, spinal problem where you can't move the part of your body that's below a certain damaged point in your spine)....He couldn't move anything below his mouth I believe. Dokters implanted a sort of "electrode" (translation from dutch commentary) in his brain that was very sensitive to the electrical signals produced by the brain tissue directly surrounding it. After progressing through several stages of "translating" those signals they were now able to let the guy control the movement of a cursor over a picture of a keyboard on a monitor and he could also "think" a "click". He could actually type his name this way. Re:What about... (Score:1) 6of9: te;iughaoiugyhag'[qogvmpoieagjyesyes borger: six, I think you should turn the sensitivity on your keyboard down a little when you do that Re:more key combinations available (Score:1) Wired (Score:1) Re:how does this really work ..... (Score:1) Of course with cool MEMS technology like this just think of the interesting musical instruments we can create! That brings a whole new meaning to the slang: "Havein' a quick strum..." Now you really will be playing with yourself! ;) Sorry... WIRED (Score:1) If you ask me though, this looks like one of those things back in the 50's... "we'll be living on the moon by the year 2000...." CTS and other wrist issues. (Score:1) Anyways, my point is, wouldn't this compound an issue like that? I mean, now instead of typing a keyboard with minimal resistance, you type into the air.. with almost no resistance at all. Any ideas? Anyone know where I can find that reference?? -- Re:Forget the keyboard (Score:1) it has to be said .... (Score:4) "I'm not entirely sure that I want my computer knowing where my fingers are at all times" Yes yes yes, sorry, and all that. I resisted the temptation to say that for at least a minute. Hate me. jsm Re:it has to be said .... (Score:1) MODERATE THIS GUY UP PLS (Score:1) I'm not sure this is real (Score:1) Is there any reason ASL wouldn't work? (Score:1) Re:Temporary solution. (Score:1) Re:Temporary solution. (Score:1) I recently saw a tv program about a man that had had a sort of electrode implanted in his head that allowed him to control a pointer on a screen simply by thinking. Did anybody else read this and say "Ooh, where do I get one of these?" Anybody have a link to this info? Re:how does this really work ..... (Score:2) Re: Keyboard feedback, etc. (Score:1) Why not just have them implanted in your fingertips....Im sure you could arange for some kind in induction recharging for their internal power source, or better yet tap a blood vessal and use the flow of blood to power a microturbine which intrun would power the devices... - Resistance is futile... Why type? (Score:1) Re:Bad Typists? (Score:1) So then the point of a keyboardless keyboard would be.... Re:Uhrm... Security Issues? (Score:1) Re:Use a sheet of paper (Score:1) Bad Typists? (Score:2) Re:Why keyboards (Score:1) Kind of like Engelbarts' idea of a "chord" keyboard. Why not use "chords" to type the more common letters/words rather than having your fingers flying all over the place. Prob. slow you down more though? not sure. have to test :) anyway, I'll have mine as a dvorak please ;) second time in a few week (Score:1) Re: Keyboard feedback, etc. (Score:1) Btw, if someone offers those chips, I want them -- but I would put on thin, cut-out gloves (or rubber fingertips) before sticking the chips to my "fingernails" -- unless they're supposed to be discarded, like contact lenses. Even then, ecologically speaking, I'd prefer to use them as long as possible. - Here's what I want: - A lightweight, roughly rectangular "board" which hangs on an adjustable cord around my neck (smaller than 3"x4"x 1/2"). - It's two-sided, but can be used one-handed if a small "button" on the bottom of the slab is touched (in which case it becomes a chording keyboard). - The inner and upper sides of the slab are one-button thick, for special keys (normally accessed by thumb). - Normal key placement is optimized for two-handed operation, with the most common (in English) 2-letter combinations coded for alternate hand/strongest finger usage. - The "home key" positions are marked with dots, for feedback. - To further reduce RSI & carpel tunnel syndrome, it remains vertical for common use (as part of a wearable PC), but can be unfolded and placed on a desktop for positional variety. Re:Bad Typists? (Score:1) Re:Hmmm... (Score:1) The problem is in banishing them back to the 99th plane before they turn on you. Re:Temporary solution. (Score:1) But is that even possible? I'm taking it with a grain of salt unless someone can give me a link as proof... but the idea sounds quite interesting! -BK Re:Uhrm... Security Issues? (Score:1) -- jtjm Tactile Feedback- and the lack thereof (Score:2) Firstly, one of the most important things when buying a keyboard is the feel of the keys- people's preferences vary here- personally, I like a "clicky" keyboard (like the Cherry range) rather than the membrane types. Having no feedback at all would be very disconcerting. I don't quite understand how anyone but a perfect touch typist would know precisely where the keys were without any form of real keyboard, either. The bumps and ridges of the keys are essential to me in finding the right keys- typing on a desk would bound to be a little random. And how long would it take to apply the sensors and calibrate them each time? It would be best if they were permanently fitted in such a way that they didn't interfere with other things we might want to do with our hands- about the only sensible location is under the fingernails, but unless there is a significant change in fashion, this eliminates at least 50% of the market. I would have thought that sensors such as these might have a more useful application as part of a virtual reality "glove" or suchlike. -- jtjm Gesture Recognition (Score:1) Ciao, Peter Re:Here (Score:1) Anyway, I don't think so, thats what the article is about. This is a glove and it seems to be constructed out of standard components:" An Analog Devices 2 axis ADXL 202 accelerometer". They even give the name of the manufacturer. But it _could_ be done this way. In that case the MEMS would be really small versions of the above mentioned device that connect to controller via radio instead of wires. The controller could then be placed somwhere near the computer instead of beeing strapped to the wrist. When I read the article I thought they would use some kind of positionig system to determine the _absolute_ position of the Fingertips, not acceleration. Ciao, Peter A few Thoughts (Score:3) Anyway, these don't sound too practical. A Keyboard is just there laying in front of the computer. If I want to type something, I just do it. For those sensors I *always* have to put them on, that sounds way to cumbersome just to type a few words on the computer to answer an email or post a Hmmm, one camera focused to the Face, one (or two for some Kind of 3D) on the whole Body and these things on the Fingers and you put the action back into interaction Ciao, Peter Hmmm... (Score:2) There was another one as well... (Score:1) Had something to do with his PHB sending him to Elbonia without any preparation, warning, equipment. I remember a line like this: "....and if you had a keyboard, you would type Ctrl-Alt-A".." . Then the Elbonians said they didn't have vowels in their alphabet. Thats paraphrased, but it's the FIRST thing I thought of when I read the /. title of this product. Tried to find a link on dilbert.com but couldn't find it. Funny! Re:Why keyboards -- alt to voice recognition? (Score:1) They taught sign language at my high school. Could this be an alternative to voice recognition? --Jack Maybe useful... (Score:2) Re:I invented this already (Score:2)
http://tech.slashdot.org/story/99/12/22/224232/wireless-keyboard-without-the-keyboard?sdsrc=prevbtmprev
CC-MAIN-2014-10
en
refinedweb
Scala vs. F#: Comparing Functional Programming Features F# and Scala, two relatively recent programming languages, provide most .NET and Java software developers with new functional programming features that are worth understanding and evaluating. (Read a Short Briefing on Functional Programming for a primer on how functional programming works.) Scala is a Java-based, general-purpose language that is intended to appeal to both adherents of functional programming as well as devotees of more mainstream imperative and object-oriented programming. It compiles into Java bytecode and runs on top of the Java Virtual Machine (JVM). While Scala is fundamentally a functional language, it also embodies all the elements of imperative and object-oriented languages, which gives it the promise of introducing functional programming features to a broader programming community. F# is a general-purpose programming language developed by Microsoft to run as part of .NET's Common Language Runtime (CLR). It is based on another, orthodox functional language, Ocaml. Microsoft introduced F# into the .NET platform because of, among other reasons, the increased interest in functional programming and functional programming's suitability to high-performance computing and parallelism. Although its syntax is distinctly functional, F# actually is a hybrid functional/imperative/object-oriented language. Its object-oriented and imperative features are present mostly for compatibility with the .NET platform, but F#'s tripartite nature is also pragmatic -- it allows programmers who use any or all of the three programming paradigms to program exclusively in one or to combine all three. In this article, I will compare and contrast the functional features and related syntax of F# and Scala. F# vs. Scala: First Order Functions Functions in F# and Scala are treated as first order types. They can be passed in as arguments, returned from other functions, or assigned to a variable. In this F# code snippet, I first define a function ( increment) that adds 1 to a passed value, and then I define the function handler, which takes type myfunc and applies 2 to it as a parameter. Finally, I invoke the function handler with a parameter incremented to it. The function incrementis passed as a regular value, hence the function is being treated as a first order type: let increment x = x + 1 let handler myfunc = (myfunc 2) printfn "%A" (handler increment) Notice the type inference in the example above. F# will infer that x is an integer because I add 1 to it, and so x will be treated as an integer (Int) type. Here is the same example in Scala: def increment(x:Int) = x + 1 def handler( f:Int => Int) = f(2) println( handler( increment )) Currying is an essential feature of functional programming that allows for the partial application of functions and functional composition. F# supports currying. Here is an example of the curried function add in F#: Declaration: val add : int -> int -> int Implementation: let add = (fun x -> (fun y -> x + y) ) In Scala, the curried function add looks like this: def add(x:Int)(y:Int) = x + y F# vs. Scala: Lambda Expressions F# also supports Lambda expressions (anonymous functions). In F#, lambda expressions are declared with the keyword fun. In the example below (adopted from F# documentation), an anonymous function is applied to a list of numbers to increment each number in the list and return a new, incremented list: let list = List.map (fun i -> i + 1) [1;2;3] printfn "%A" list Lambda expressions in Scala are defined in a very succinct fashion. This is how you would increment a function with a Lambda expression ( x=>x+1) on a list of numbers (1,2,3) in Scala: val list = List(1,2,3).map( x => x + 1 ) println( list ) F# vs. Scala: Pattern Matching Pattern matching is a powerful feature of functional programming languages that allows blocks of code within the function to be 'activated' depending on the type of a value or an expression. (Think of pattern matching as a more powerful variation of the case statement.) In F# , the vertical line character ( |) is used to denote a case selector for the function match specification. Here is an F# version of a pattern-matched Fibonacci number function: let rec fib n = match n with | 0 -> 0 | 1 -> 1 | 2 -> 1 | n -> fib (n - 2) + fib (n - 1) Like F#, Scala supports pattern matching on functions. Here is the example of a Fibonacci number calculation in Scala. Notice that Scala uses the keyword case: def fib( n: Int): Int = n match { case 0 => 0 case 1 => 1 case _ => fib( n -1) + fib( n-2) } Page 1 of 2
http://www.developer.com/lang/other/article.php/3883051/Scala-vs-F-Comparing-Functional-Programming-Features.htm
CC-MAIN-2014-10
en
refinedweb
Dear Experts, How to convert list of entites to Json string? entity1: @Entity @NamedQueries({ @NamedQuery(name = "Departament.findAll", query = "select o from Departament o")) @Table(name = "DEPARTMENT") public class Departament implements Serializable { @Id @Column(nullable = false) private Long id; @Column(length = 119) private String name; @OneToMany(mappedBy="department", cascade = { CascadeType.PERSIST, CascadeType.MERGE }) private List<AccessRights> perms; // cut: setters and getters entity 2: @Entity @Table(name = "ACCESS_RIGHTS") public class AccessRights implements Serializable { @Id @Column(nullable = false) private Long id; @ManyToOne @JoinColumn(name = "ID_JEDNOSTKA") private Departament department; // cut: setters and getters In my main class I have executed: List<Departament> vl = ac.getDepartament(); System.out.println(new Gson().toJson(vl).toString()); //error: "An attempt was made to traverse a relationship using indirection that had a null Session." The has started to orccured when I've created ManyToOne relationship. If I remove it, everything works. I assume that toJson method cannot be applied. If there is any relation between entities. I'm I correct? How such an issue can be resolved? I've tried other parsers like Objectmapper, JsonLib - no success. Best regards.
https://community.oracle.com/thread/4038064
CC-MAIN-2020-10
en
refinedweb
Custom Date Field Initial Value Rule Introduction In Enterprise Forms, you can plug in custom initial date field value rules by writing Java code and configuring those in the repository. With custom initial date field value rules, you can let your forms adjust to any date related business rules easily. For example, in the Enterprise Forms demo application, you will see the "Next Saturday" custom date rule example for the "Date of Birth" field. If the initial value of the date field is set to "Next Saturday" like this example, then the form rendered in the frontend website will render the input field with the automatically calculated date value for the next Saturday from the current date. How to Write a Custom Date Rule Enterprise Forms defines the following interface for the custom date rule: package com.onehippo.cms7.eforms.hst.daterules; import java.util.Calendar; public interface DateRule { Calendar getDate(); } And, the example implementation above, "NextSaturday" rule is implemented as follows: package com.onehippo.cms7.eforms.hst.daterules; import java.util.Calendar; public class NextSaturdayDateRule extends AbstractNextDayOfWeekRule { @Override public Calendar getDate() { return getNext(Calendar.SATURDAY); } } You can simply implement #getDate() method to return an initial date value. If a form field is configured to use a specific initial date field value rule, then the form web page will invoke #getDate() method at runtime to render the initial value. For example, you might want to build "NextBusinessDay" rule based on your organization specific calendar and business rules. How to Configure a Custom Date Rule All the custom date initial value rules should be configured under /hippo:configuration/hippo:modules/eforms/hippo:moduleconfig/eforms:daterules node. The Enterprise Forms demo application contains the following by default from that node: /eforms:daterules: jcr:primaryType: nt:unstructured /eforms:daterule: jcr:primaryType: nt:unstructured eforms:dateruleclass: com.onehippo.cms7.eforms.hst.daterules.NextSaturdayDateRule eforms:dateruleid: next-saturday eforms:daterulelabel: Next Saturday As shown above, there is only one date rule ("Next Saturday") in the example. To add another custom date input date rule, add another uniquely named node under the "eforms:daterules" node, with node type "hipposys:moduleconfig" node type. Set the FQCN of your custom date input rule implementation class in "eforms:dateruleclass" property, and set a label string to be displayed in CMS form document editor in the "eforms:daterulelabel" property. If you configure it correctly, then you will see that custom date rule in the drop down of the date field property editor in CMS.
https://documentation.bloomreach.com/13/library/enterprise/enterprise-features/enterprise-forms/custom-date-field-initial-value-rule.html
CC-MAIN-2020-10
en
refinedweb
The Blinker Herald includes helpers to easily emit signals using Blinker. Decorate a function or method with @blinker_herald.emit() and pre and post signals will automatically be emitted to connected handlers. Project description Blinker Herald The Blinker Herald includes helpers to easily emit signals using the excelent blinker library. Decorate a function or method with @blinker_herald.emit() and pre and post signals will be automatically emitted to all connected handlers. - Free software: ISC license - Documentation:. Features Usage Let’s say you have a class and wants to emit a signal for a specific method: from blinker_herald import emit class SomeClass(object): @emit() def do_something(self, arg1): # here is were magically the 'pre' signal will be sent return 'something done' # here is were magically the 'post' signal will be sent using @emit decorator makes blinker_herald to emit a signal for that method and now you can connect handlers to capture that signals You can capture pre signal to manipulate the object: SomeClass.do_something.pre.connect def handle_pre(sender, signal_emitter, **kwargs): signal_emitter.foo = 'bar' signal_emitter.do_another_thing() And you can also capture the post signal to log the results: SomeClass.do_something.post.connect def handle_post(sender, signal_emitter, result, **kwargs): logger.info("The method {0} returned {1}".format(sender, result)) You can also use the namespace proxy blinker_herald.signals to connect handlers to signals, the signal name is the prefix pre or post followed by _ and the method name: from blinker_herald import signals @signals.pre_do_something.connect def handle_pre(sender, signal_emitter, **kwargs): ... If you have a lot of subclasses emitting signals with the same name and you need to capture only specific signals, you can specify that you want to listen to only one type of sender: from blinker_herald import emit, signals, SENDER_CLASS class BaseModel(object): ... @emit(sender=SENDER_CLASS) def create(self, **kwargs): new_instance = my_project.new(self, **kwargs) return new_instance class One(BaseModel): pass class Two(BaseModel): pass Note By default the sender is always the instance but you can use SENDER_CLASS to force the sender to be the class another options are SENDER_CLASS_NAME, SENDER_MODULE, SENDER_NAME and you can also pass a string, an object or a lambda receiving the sender instance e.g: @emit(sender=lambda self: self.get_sender()) Using SENDER_CLASS you can now connect to specific signal: from blinker_herald import signals @signals.post_create.connect_via(One) def handle_post_only_for_one(sender, signal_emitter, result, **kwargs): # sender is the class One (cls) # signal the instance of the class One (self) # result is the return of the method create The above will handle the create method signal for the class One but not for the class Two You can also be more specific about the signal you want to connect using the __ double underscore to provide method name: from blinker_herald import signals @signals.module_name__ClassName__post_method_name.connect def handle_post(sender, signal_emitter, result, **kwargs): ... The above will connect to the post signal emitted by module_name.ClassName.method_name Note You don’t have to use the pattern above if your project do not have a lot of method name collisions, using only the method name will be just fine for most cases. Credits This software was first created by SatelliteQE team to provide signals to Robottelo and Nailgun History 0.1.0 (2016-05-28) - First release on PyPI. Project details Release history Release notifications Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/blinker_herald/0.2.0/
CC-MAIN-2020-10
en
refinedweb
Data is invaluable, and this is especially true these days, where we store hundreds of terabytes of data on the Web. To handle this amount of data, we need more than the traditional set of SQL statements like select, insert, and update. Unfortunately, manually creating SQL statements can be a difficult job, not to mention that it's one that's very error prone. To address this, a new category of frameworks have been developed called ORMs or Object Relational Mappers which help developers to map their models used in the source code to data models stored in the databases. One of the most commonly used ORM in the Python world is SQLAlchemy. This framework is widely adopted because it implements a lot of best practices while delivering on the features that enterprise applications require. In this article, I will present how to map models from the code to database tables, as well as how to save, load and query data using SQLAlchemy and Python. All the code from this article is available on GitHub. Installation The installation of SQLAlchemy can be done using: pip install sqlalchemy command, but I recommend doing this in a virtual environment: greg@earth:~/$ mkdir how_to_sqlalchemy greg@earth:~/$ mkdir how_to_sqlalchemy/venv greg@earth:~/$ cd how_to_sqlalchemy greg@earth:~/how_to_sqlalchemy$ virtualenv venv greg@earth:~/how_to_sqlalchemy$ source venv/bin/activate (venv)greg@earth:~/how_to_sqlalchemy$ pip install sqlalchemy Here, I use MySQL as the database engine. To use SQLAlchemy along with MySQL, the mysql-python package has to be installed: (venv)greg@earth:~/how_to_sqlalchemy$ pip install mysql-python The Models SQLAlchemy supports a declarative style for mapping program models to database tables. To use this declarative approach, I need to import declarative_base from sqlalchemy. from sqlalchemy.ext.declarative import declarative_base BaseClass = declarative_base() I can then create the three models Users, Projects, and Bids. Projects have Bids and Bids have Users. A project can have multiple bids, and one user can have multiple bids. from sqlalchemy import Column, String, Integer, ForeignKey, Numeric from BaseClass import BaseClass class Bid(BaseClass): __tablename__ = 'bids' id = Column(Integer, primary_key=True, nullable=False, autoincrement=True) proposal = Column(String(800)) price = Column(Numeric) user_id = Column(String(150), ForeignKey('users.user_name')) project_id = Column(Integer, ForeignKey('projects.id')) First I need to import the Column, String, Integer, and all the other methods that I need to use. These methods offer an abstraction over the different SQL syntax rules that are specific to the database engines. As a developer, I don't have to know how to declare an 800 character string field in MySQL, PostgreSQL, or MSSQL because SQLAlchemy knows instead of me. All I need to do is to set the correct parameters for the data fields, and SQLAlchemy will resolve the SQL statement generation. The __tablename__ property holds the name of the table that the model will map to once the database is created. class User(BaseClass): __tablename__ = 'users' user_name = Column(String(150), primary_key=True) first_name = Column(String(150)) last_name = Column(String(150)) email = Column(String(250)) bids = relationship('Bid') The User class has a field called bids. This field is marked as a relationship that points to the Bid class. Please notice that the Bid class has a user_id field, which is Foreign Key for the user_name field. This field ensures that all the Bids can be associated to a user. class Project(BaseClass): __tablename__ = 'projects' id = Column(Integer, primary_key=True, nullable=False, autoincrement=True) name = Column(String(150)) description = Column(String(800)) budget = Column(Numeric) bids = relationship('Bid') The Project class is almost identical with User class, except it has an id field which auto increments and a budget field which is a numeric value. The Project class has a relationship with the Bid class too, because one Project can have more bids. Connecting to the database SQLAlchemy can create the database in case there is no database available. The code that does this is in the init_database.py file: from sqlalchemy import create_engine from BaseClass import BaseClass def init(engine): db_engine = create_engine(engine, echo=True) BaseClass.metadata.create_all(db_engine) The database engine is created, where the engine parameter is a string. Here's an example: engine = 'mysql+mysqldb://johndoe:secret@localhost/how_to_sqlalchemy?unix_socket=/opt/lampp/var/mysql/mysql.sock' First, there is the database engine specific key, which, in this case, is mysql+mysqldb. Then comes the user:password@host/database construct. After these, there is an option to define parameters for the connection. In this example, the path to the uinx_socket is specified with the path pointing to the mysql socket. Once the db_engine is initialized, the BaseClass.metadata.create_all method will take all the classes and model configuration available in the context, creating tables. The screenshot shows what SQL statements were generated and executed to create the tables and relationships. Manipulating Data To add or read data from the MySQL database, a new SQLAlchemy session has to be created. SQLAlchemy has the concept of session, the same way Hibernate (a well-known ORM for Java) has. class DAL: def __init__(self, engine): """ :param engine: The engine route and login details :return: a new instance of DAL class :type engine: string """ if not engine: raise ValueError('The values specified in engine parameter has to be supported by SQLAlchemy') self.engine = engine db_engine = create_engine(engine) db_session = sessionmaker(bind = db_engine) self.session = db_session() def add_user(self, first_name, last_name, user_name, email): """ :type first_name: string :type last_name: string :type user_name: string :type email: string """ new_user = User(user_name = user_name, first_name = first_name, last_name = last_name, email = email) self.session.add(new_user) self.session.commit() In the DAL class' constructor, a new database engine is created. A new session is then created for the engine using the sessionmaker method In the add_user method, a new User object is created, which is then added to the session. In the last line, the commit method is invoked. This forces the session to flush all the changes in the session and write everything to the database. Data can be loaded from the database using the query method, which can be extended with an order_by method. This generates the select SQL statement with and order by clause. def get_users(self): all_users = self.session.query(User).order_by(User.user_name) return all_users Removing an item from the database is done with the delete method. It has to be given an item loaded from the session. This method truncates the Users table from the database. def clear_users(self): all_users = self.session.query(User) for user in all_users: self.session.delete(user) self.session.commit() In case searching or filtering is needed, SQLAlchemy handles that very well with an easy-to-use Query API: def search_users(self, user_name): all_users = self.session.query(User).filter(User.user_name.like('%' + user_name + '%')) return all_users The session is queried for User objects, which is then filtered to return only users which have a similar username like the one passed as argument. I just went through the installation steps of SQLAlchemy, and covered how the engine parameter should be configured, and how Python classes can be mapped to database models. This also demonstrates how these models can be queried, saved and deleted using SQLAlchemy's query API. You can find all the available query API calls supported by the framework on the SQLAlchemy help page.
https://www.freelancer.com.au/community/articles/storing-data-with-python
CC-MAIN-2020-10
en
refinedweb
§JSON Reads/Writes/Format Combinators JSON basics introduced Reads and Writes converters which are used to convert between JsValue structures and other data types. This page covers in greater detail how to build these converters and how to use validation during conversion. The examples on this page will use this JsValue structure and corresponding model: import play.api.libs.json._ val json: JsValue = Json.parse(""" { "name" : "Watership Down", "location" : { "lat" : 51.235685, "long" : -1.309197 }, "residents" : [ { "name" : "Fiver", "age" : 4, "role" : null }, { "name" : "Bigwig", "age" : 6, "role" : "Owsla" } ] } """) case class Location(lat: Double, long: Double) case class Resident(name: String, age: Int, role: Option[String]) case class Place(name: String, location: Location, residents: Seq[Resident]) §JsPath JsPath is a core building block for creating Reads/ Writes. JsPath represents the location of data in a JsValue structure. You can use the JsPath object (root path) to define a JsPath child instance by using syntax similar to traversing JsValue: import play.api.libs.json._ val json = { ... } // Simple path val latPath = JsPath \ "location" \ "lat" // Recursive path val namesPath = JsPath \\ "name" // Indexed path val firstResidentPath = (JsPath \ "residents")(0) The play.api.libs.json package defines an alias for JsPath: __ (double underscore). You can use this if you prefer: val longPath = __ \ "location" \ "long" §Reads Reads converters are used to convert from a JsValue to another type. You can combine and nest Reads to create more complex Reads. You will require these imports to create Reads: import play.api.libs.json._ // JSON library import play.api.libs.json.Reads._ // Custom validation helpers import play.api.libs.functional.syntax._ // Combinator syntax §Path Reads JsPath has methods to create special Reads that apply another Reads to a JsValue at a specified path: JsPath.read[T](implicit r: Reads[T]): Reads[T]- Creates a Reads[T]that will apply the implicit argument rto the JsValueat this path. JsPath.readNullable[T](implicit r: Reads[T]): Reads[Option[T]]- Use for paths that may be missing or can contain a null value. Note: The JSON library provides implicit Readsfor basic types such as String, Int, Double, etc. Defining an individual path Reads looks like this: val nameReads: Reads[String] = (JsPath \ "name").read[String] §Complex Reads You can combine individual path Reads to form more complex Reads which can be used to convert to complex models. For easier understanding, we’ll break down the combine functionality into two statements. First combine Reads objects using the and combinator: val locationReadsBuilder = (JsPath \ "lat").read[Double] and (JsPath \ "long").read[Double] This will yield a type of FunctionalBuilder[Reads]#CanBuild2[Double, Double]. This is an intermediary object and you don’t need to worry too much about it, just know that it’s used to create a complex Reads. Second call the apply method of CanBuildX with a function to translate individual values to your model, this will return your complex Reads. If you have a case class with a matching constructor signature, you can just use its apply method: implicit val locationReads = locationReadsBuilder.apply(Location.apply _) Here’s the same code in a single statement: implicit val locationReads: Reads[Location] = ( (JsPath \ "lat").read[Double] and (JsPath \ "long").read[Double] )(Location.apply _) §Validation with Reads The JsValue.validate method was introduced in JSON basics as the preferred way to perform validation and conversion from a JsValue to another type. Here’s the basic pattern: val json = { ... } val nameReads: Reads[String] = (JsPath \ "name").read[String] val nameResult: JsResult[String] = json.validate[String](nameReads) nameResult match { case s: JsSuccess[String] => println("Name: " + s.get) case e: JsError => println("Errors: " + JsError.toJson(e).toString()) } Default validation for Reads is minimal, such as checking for type conversion errors. You can define custom validation rules by using Reads validation helpers. Here are some that are commonly used: Reads.email- Validates a String has email format. Reads.minLength(nb)- Validates the minimum length of a collection or String. Reads.min- Validates a minimum value. Reads.max- Validates a maximum value. Reads[A] keepAnd Reads[B] => Reads[A]- Operator that tries Reads[A]and Reads[B]but only keeps the result of Reads[A](For those who know Scala parser combinators keepAnd == <~). Reads[A] andKeep Reads[B] => Reads[B]- Operator that tries Reads[A]and Reads[B]but only keeps the result of Reads[B](For those who know Scala parser combinators andKeep == ~>). Reads[A] or Reads[B] => Reads- Operator that performs a logical OR and keeps the result of the last Readschecked. To add validation, apply helpers as arguments to the JsPath.read method: val improvedNameReads = (JsPath \ "name").read[String](minLength[String](2)) §Putting it all together By using complex Reads and custom validation we can define a set of effective Reads for our example model and apply them: } } Note that complex Reads can be nested. In this case, placeReads uses the previously defined implicit locationReads and residentReads at specific paths in the structure. §Writes Writes converters are used to convert from some type to a JsValue. You can build complex Writes using JsPath and combinators very similar to Reads. Here’s the Writes for our example model:) There are a few differences between complex Writes and Reads: - The individual path Writesare created using the JsPath.writemethod. - There is no validation on conversion to JsValuewhich makes the structure simpler and you won’t need any validation helpers. - The intermediary FunctionalBuilder#CanBuildX(created by andcombinators) takes a function that translates a complex type Tto a tuple matching the individual path Writes. Although this is symmetrical to the Readscase, the unapplymethod of a case class returns an Optionof a tuple of properties and must be used with unliftto extract the tuple. §Recursive Types One special case that our example model doesn’t demonstrate is how to handle Reads and Writes for recursive types. JsPath provides lazyRead and lazyWrite methods that take call-by-name parameters to handle this: case class User(name: String, friends: Seq[User]) implicit lazy val userReads: Reads[User] = ( (__ \ "name").read[String] and (__ \ "friends").lazyRead(Reads.seq[User](userReads)) )(User) implicit lazy val userWrites: Writes[User] = ( (__ \ "name").write[String] and (__ \ "friends").lazyWrite(Writes.seq[User](userWrites)) )(unlift(User.unapply)) §Format Format[T] is just a mix of the Reads and Writes traits and can be used for implicit conversion in place of its components. §Creating Format from Reads and Writes You can define a Format by constructing it from Reads and Writes of the same type: val locationReads: Reads[Location] = ( (JsPath \ "lat").read[Double](min(-90.0) keepAnd max(90.0)) and (JsPath \ "long").read[Double](min(-180.0) keepAnd max(180.0)) )(Location.apply _) val locationWrites: Writes[Location] = ( (JsPath \ "lat").write[Double] and (JsPath \ "long").write[Double] )(unlift(Location.unapply)) implicit val locationFormat: Format[Location] = Format(locationReads, locationWrites) §Creating Format using combinators In the case where your Reads and Writes are symmetrical (which may not be the case in real applications), you can define a Format directly from combinators: implicit val locationFormat: Format[Location] = ( (JsPath \ "lat").format[Double](min(-90.0) keepAnd max(90.0)) and (JsPath \ "long").format[Double](min(-180.0) keepAnd max(180.0)) )(Location.apply, unlift(Location.unapply)) Next: JSON automated mapping
https://www.playframework.com/documentation/2.6.16/ScalaJsonCombinators
CC-MAIN-2020-10
en
refinedweb
Bug Fixes Add :go-* prototype functions (,) Support go-pos-no-wait in simulation mode - [pr2-interface.l] :move-to-send , for simulation mode, do not try to call :lookup-transform - [pr2-interface.l] fix typo : if -> when, return-from :move-to -> return-from :move-to-send, - [test/pr2-ri-test-simple.l] add test for go-pos, go-pos-no-wait, go-wait - [pr2eus/pr2eus/pr2-interface.l] fix typo (short modify) @h-kamada - test/test-ri-test.l: :wait-interpolation retuns a list of :interpolationg - pr2-interface : support timer-based motion for :move-to - more realistic simulation mode use default pr2_description () Other New Features Misc Updates Contributors: Kamada Hitoshi, Kei Okada, Masaki Murooka, Yuki Furuta, Yuto Inagaki add metapackage change roseus-svnrevision -> roseus-repo-version, due to set time-limit 1800 bugfix: change link name disable pr2-ri-test since this requires gazebo fix find_package components for groovy, generae missing package via generete-all-msg-srv.sh add :controller-timeout keyword to robot-interface to specify the timeout to wait controller add warn and exit the program for jsk-ros-pkg/jsk_common#186 Merge pull request #8 from YoheiKakiuchi/fix_joint_trajectory fix send-trajectory #11: back to gazebo from gzserver when testing pr2-ri-test.launch #11: use gzserver instead of gazebo on test Merge remote-tracking branch 'origin/master' into youhei-tip fix send-trajectory fix send-trajectory add keyword :joint-states-topic for changing jonit_states name install euslisp files in the package root directory: last catkinize commit was also done by murooka catkinize pr2eus fixed method to get links for new pr2 model update pr2 model, fix kinect geometry use joint_trajectory_action -> follow_joint_trajectory delete commit r5583 add --no-link-suffix,--no-joint-suffix, concerning backword compatibility update pr2 model do not use 0.2 sec marge, now the mergin is only 0.1 sec, see for more detail fix window name and draw floor for robot-interface's simulation mode, see Isseue 42, this requries r979() of jskeus add comments for go-velocity arguments and use msec in animation codes remove unused local variables ignore not existing joint add move base range in args of ik use :additional-weight-list to set weight without using index of weight vector explicitly ;; test pr2's ik by euscollada/pr2.sh and ik-test.l update ros-wait fix minor bug add :ros-wait method to robot-interface fix for using :move-to with /base_footprint as frame_id, [#234] update parameter for avoiding warning message, [#233] remove :wait-interpolation finish check on pr2-tuckarm-pose move code of visuazlizing trajectory to robot-inreface.l from pr2eus_openrave modified loading dependant programs, no longer needed require basic roseus codes modified time-limit for low power PC add checking correctly finished :wait-interpolation on pr2-tuckarm-pose add check code for result of move command, nil will be returned if failed or canceled add optional force-stop to :go-stop method add check of length c = 2 for dual arm manipulation use angle-vector-sequence in angle-vector-with-constraint when ri simulation #216, support select-target-arm for dual ik setup :header :seq, see [#160] send with move_base_simplw if /move_base/goal failed, see [#160] use /map frame to send move_base/goal, see [#160] add description for voice text command enable to add arguments for xx-vector methods, which is reported kuroiwa r4702 requires fix to make-pr2-model-file.l #200 fix pr2-ri-test to pass the test fix :stop-grasp retunrs t add :namespace keyword to robot-interface, see [tickets:#203] remove / from /joint_states according to [tickets:#202] add -r option (headless) for fuerte until hydro, gazebo needs GPU to start, so use DISPLAY to :0.0 for test do not wrap around -180/180 degree [#91] support :angle-vector over 360 degree, [#91] fix time-limit 300->600 add test code for :angle-vector-with-constraint support :arms in :angle-vector-with-constraint, [#91] retry twice if :move-gripper is not converged, see [#159] remove pause mode flag add :angle-vector-with-constraiont method, may be we can move to robot-interface? expand pr2_empty_world.launch files to respawn gazebo add test code which show wait-interpolation get dead use package:// for loading speak.l groovy needs throttled true to launch head-less gazebo? add debug message for :start-grasp fix #159, use robot-update-state to double check the length between tips set time-limit to 300 shorten test code return gripper with when simulation mode [#159] fix start-grasp, resend move-gripper when reached_goal is nil add test-start-grasp fix commit error [r4499] fix: relax camera position differs add keyword :use-tf2 and :joint-state-topic to robot-interface relax camera position differs update pr1012 bag/yaml file for new pr2 robot with sensor robot add comment to get bag files update pr2.l eus model with sensor head update robot_description dump for pr1040 add PR2_NO argument to make-pr2-model-file-test.launch add urdf file which dumped robot_description in pr1040 add pr2-ri-test.launch fix for joint name mismatch between ros and eus :move-to retunls nil if not reached to the goal (not closer than 200mm) #160 relax test sequence do not use collada_urdf_jsk_patch, use collada_urdf (send ri :state :worldcoords) return worldcoords when ri simulation commit add :draw-objects methods, update robot-interface viewer while :move-to in simulation mode :move-to takes absolute coordinats as an arguments, currently it does not take into account frame-id, every coords must be relative to world add comment revert [#1445], since min/max limit of infinite rotational joint has changed from 180 to 270 in go-pos moves robot in relatively: fix code unless joint-action-enable, Fixed [#146] fix wreit-r of reset pose from 180->0 [#145] support :object key in :start-grasp [#144] support if link-list and move-target is not defined in dual-arm ik mode add pr2 ik test with both hands support when dual-arm-ik when link-list is not set use ros::service-call to change tilt_laser_mux/select [#94] use check-continuous-joint-move-over-180 for simulation-modep [#91] fixed tuckarm-pose angle-vector fix: using :{larm,rarm,head,torso}-controller and :{larm,rarm,head,torso}-angle-vector add use-tilt-laser-obstacle-cloud workaround for unintentional 360 joint rotation problem [#91] fix to work pr2-read-state with X-less environment [#59] change name cancel-all-goals -> go-stop and do not speak in the method, check joint-action-enable, [#66] add cancel-all-goals add test for start-grasp add :simulation-modep method to robot-interface do not launch viewer when robot-interface is already created [#71] add pr2-grasp-test support no display environment [#59] suport (send ri :init :objects (list (roomxxx))) style interface for simulation environment with objects [#49] fix: add keyword :timeout temporary remove :add-controller for pr2 fix: larm-angle-vector and rarm-angle-vector update robot-interface.l for using joint group method for adding additional controllers fix: tuckarm pose add :wait-torso method to pr2-interface update for using (send ri :potentio-vector) fix #50, velocity limit for both plug/minus added wait option for stop-grasp use PLATFORM_FLOAT64 for daeFloat, collada-fom for groovy uses -DCOLLADA_DOM_DAEFLOAT_IS64, update pr2.l to use double precision value update: method :state .. use :update-robot-state remove debug message fix bug for continuous turning add a missing variable fix: initialization function name should be {robotname}-init fix: check absolute rotation angle using method :cancel-all-goals instead of :cancel-goal add :cancel-angle-vector and :stop-motion method for stopping motion add updated urdf file and corresponding bag files update pr2 model for fuerte autogenerating camera frame for fuerte fix calling ros::init if ros is not running add :ros-joint-angle for using meter/radian unit change: enable to pass robot instance fix minor bugs fix minor bugs fix for liner-joint add :send-trajectory to robot interface for using directly JointTrajectory.msg move pr2-arm-navigation from pr2eus to pr2eus_armnavigation add arm-navigation wrapper for PR2 add pr2-arm-navigation.l for using arm_navigation stack fix go-pos-unsafe, cehck if reached to the original goal using odom and retly if needed, set minimum go-pos-unsafe time to 1000 add debug message move kinect_frame transform infrmatin to /opt/ros/electric/urdf/robot.xml remove description for static tf nodes find vector method from (send self :methods) if exists such as :reference-vector and :error-vector find vector method from (send self :methods) if exists such as :reference-vector and :error-vector add groupname to slots variables of robot-interface add ros node initialize check change variable name viewer -> create-viewer add pr2-interface setup function change for using private queue group in robot-interface in order to divide spin group use rosrun rosbag play instaed of rosrun rosbag rosbag use equal, not eq to check link name use string joint/link name rule, add pr2-senros-robot for camera model fix for r3056 (use string as link name too, see #748) support dual-arm ik which uses target-coords, move-target, and link-list as cons ;; fix move-arm, thre, and rthre definitions update tuckarm-pose for non-collision and min-max safe version support :joint-action-enable to change real/virtual robot environment. Ask users to really move robot? when :warningp is set, #758 support :stop keyword to :inverse-kinematics use lib/llib/unittest.l use string-equal to check joint-name key of controller action name (:controller -> :controller-action) fixed to use string type joint names fix for jskeus r773 :gripper method in irtrobot class add reference/error vector method in robot-interface fix for joint with string name, euscollada/src/collada2eus.cpp@2969 use string joint-name spin once before check robot state variables fix typo update for #719, add accessor to openni camera frames support loos checking of cmaera name, currently we are trying to move namer name from string style to keyword style use (pr2) to instantiate pr2 robot change parent of larm-end-coords from l/r_gripper_parm_link to l/r_gripper_tool_frame fix pr2.l compile rule use _roscore_failed for not run make-pr2-model-file without roscore and /robot_description environment eps=0.01 for camera projection check update pr2.l update pr2model to r2714 euscollada update pr2 model for r2693 or euscollada add a test for link weight, update pr2.l model file retake pr1012_sensors.bag update test bagfile for pr2 sensors and kinect/tf check link-coords, currently this is commented out fix openni camera link coordinates see jsk_pr2_startup/jsk_pr2_sensors/kinect_head.launch update test bagfile for pr2 sensors add debug message and add pr2-camera-coords-test add debug message update pr2eus-test to make robot model on the fly update l_finger_tip_link position fix syntax error on :publish-joint-state fix syntax error on :publish-joint-state update publish-joint-state for pr2, publish gripper joint_state remove dependency for pr2_* from roseus update pr2.l with safty controller limit add black color to kinect add test for link position rename j_robotsound -> robotsound_jp sleep 1 second after advertising add japanese speech topic for pr2-interface move robot-interface from roseus to pr2eus added sound_play function add kinect camera add strict check for camera number test fix make-pr2-model-file as urdf_to_collada supports dae file loading robot-interface :state with no argument is obsolated, and add warning messages :go-pos-unsafe updated, 1000 times msec removed initialize-costmap, this is obsolated I checked latest pr2.l works well by my program pr2-interface :state :odom :pose should return coordinates add test for sensor read methods of pr2-interface added :set-robot-state1 method to update robot-state variable, and store the time stamp of current joint_states changed global frame for (:move-to and :state :worldcoords), /map -> /world unchanged min-max angle is OK added prosilica and kinect camra to bag in test change count for wait slow camera info topic do not make error when expected difference between unstable and stable model fix assert message type add debug messages fix tpo in format string rename variable, use stable and unstable fix camera test code fix to work when camera_info is not found add make-pr2-model-file-test remove debug code fix make-pr2-model-file so that other package can use this default frame-id of pr2:move-to is /map pr2-robot does not calcurate joint-torque in torque-vector method changed to use robot-interface devide pr2-interface into robot common interface and pr2 specific methods check if velocity and efforts in /joint_states are same length as joint list added joint-action-enable check for :publish-joint-state instantiate transform-listener in ros-interface :init error handling when time list contains 0.0 in angle-vector-sequence miss understanding of pr2-robot origin coords, base_footprint add (if p) in pr2-interface :objects fix when frame_id is base_link fix compile warning -> velocities in :update-robot-state add :state :worldcoords, update :move-to, use :go-velocity after the robot reached gaol using move_base navigation controller dissoc before copy-object check viewer in :objects, because viewer only exists in simulation mode changed go-pos-unsafe to use 80% of max velocity remove x::draw-things fix :start-grasp, dissoc if already assoced, use x::draw-thing in :objects, etc fix segfault add :objects for simulation mode to display objects in pr2-interface viewer, also simulation mode is supported in :start-grasp and :stop-grasp add :gripper :links to return gripper links do not call dynamic reconfigure to static costmap, but it will repaired update navigation utility to electric add simulation mode to go-pos-unsafe and go-velocity add go-pos-unsafe update navigation parameter methods in pr2-interface change pr2-interface to update robot-model by joint_state msg which contains unknown joint names add joint-action-enable for :move-to add accessor to :robot and :viewer fix when x::display is 0 fix type anlge -> angle change :start-grasp :wait nil -> t, and returns the space length of the gripper update :move-gripper, move gripper in simulation mode update pr2-tuckarm-pose smarter fix gripper joint manually update tuckarm pose method, and send angle-vector by each controller dump euscollada-robot definition to euscollada robot files and update pr2eus/pr2.l update pr2.l for latest euscollada/pr2.l ;; use euscollada-robot class instead of robot-model class ;; please refer to jsk-ros-pkg -r1822 commit fix previous commit : do not invoke viewer when no x:display found do not invoke viewer when no x:display found add pr2-ik-test.l and pr2eus-test.launch fix l_gripper_r_finger_tip_link -> l_wrist_roll_link add pr2-ik-test.l use palm link as parent of endcoords update with kinect model update pr2 model with safety_limit use :state :potentio-vector instead of old :state method call update pr2-read-state.l to draw torque add max velocity and torque in :init-ending set the name of base_trajectory action to same other actions fix typo pr2_base_trajectory_action update topic name for pr2_base_trajectory_action revert accidentally commit update namespace of pr2_base_trajectory_action add publish-joint-state method, which publish joint_states when joint-action-enable is nil set joint-action-enable t before wait-fore pr2-action-server wait for joint-velocity to zero, in wait-interpolation for pr2 add defun make-camera-from-ros-camera-info-aux make-camera-from-ros-camera-info-aux is required for non-roseus users fix hrp4 -> robot split pr2-interface to pr2-interface and ros-interface remove defun make-camera-from-ros-camera-info-aux, which is now defined in roseus-utils.l support :state :torque-vector, by mikita add effort to state in pr2-interface class use :torso_lift_joint method add dummy massproperty pr2.l add message name to constant in msg definition update pr2.l model 2010523 add clear-costmap, initialize-costmap, change-inflation-range, call clear-costmap when the robot retry move-to function i n (send ri :move-to) fix contious rotational joint problems, pr2 controller use joint angle value directory, so we add offset before sending the trajectory add and fix sub-angle-vector method, fix simulation mode :angle-vector-sequence returns angle-vector-sequence send only one message in pr2-angle-vector-sequence method fix diff-angle-vector in :angle-vector-sequence add diff-angle-vector function in :anlge-vector-sequence for calculating velocity vector for interpolation cropping angle of infinite rotational joint supported in irtmodel.l set :min and :max for infinite rotational joint is inf and -inf add simulation mode code in :angle-vector-sequence draw interpolated postures unless joint-action-enable in :angle-vector remove typo remove spin-once in (:angle-vector-sequence remove spin-once in (:angle-vector fix :inverse-kinematics move-arm move-target link-list, #493 if no viewer is executed before pr2-interface viewer, set pr2-interface viewer as a defulat viewer, so that users are able to use them as a default view fix fingertip pressure zero-reset, update pr2-read-state sample add ** to msg constant type we can send JointTrajectoryActionGoal to torso and head in diamondback update grasp timing in tuckarm-pose, add pr2-reset-pose add pr2 tuckarm pose function remove useless number 1 in ros::ros-warn use ros::ros-warn instaed of warning-message support sending go-velocity countinously, and once support sending go-velocity countinously fix go-velocity function add go-velocity method using trajectoy and safe_teleop add go-velocity to pr2-interface.l torso and head did not accept time_from_start, it only accept duration update pr2.l with :camera and :cameras add to generate :cameras and :camera by chen and k-okada require pr2-utils, show viewer in NON-joint-action-enable mode if robot-joint-disabled, :state sends recieved angle-vector pr2-interface :init works unless it connected to pr2 update ros-infro comment update pr2.l using r769 update :*-cmaera method definitoin, support forward-message-to fix :inverse-kinematics with use-base update :inverse-kinematics with use-base update :inverse-kinematics support use-torso, use-base, move-arm In head point action, pointing_frame is not used, and change translate length add fingertip pressure subscriber, to use finger-pressure call reset-fingertip beforehand set time out for gripper action action start time should be future, i think use :wait-interpolation, remove sleep fix do not generate pr2.l if it already exists add move_base_msgs fix problem, when not add roseus to /home/k-okada/ros/cturtle/ros/bin:/usr/local/cuda/bin/:.:/home/k-okada/bin:/usr/local/bin:/usr/local/svs/bin:/usr/java/j2sdk1.4.1/bin/:/usr/bin:/bin/:/usr/sbin:/sbin:/usr/X11R6/bin:/usr/local/jsk/bin:/home/k-okada/ros/cturtle/jsk-ros-pkg/euslisp/jskeus/eus/Linux/bin:/bin:/usr/h8300-hitachi-hms/bin:/usr/local/ELDK4.1/usr/bin:/home/k-okada/prog/scripts:/usr/local/src/gxp rename cmaera->camera-model, viewing->vwing update pr2model with new make-camera-from-ros-info-aux update to new make-camera-from-ros-info-aux update pr2 model file add pr2 model file at 100929 delete load-pr2-file.l load-pr2-file is removed, now we use make-pr2-modle-file generate pr2model from camera_info and /robot_description front of high_def_frame is +x set pointing_frame to look-at-point action goal fix to move head-end-coords in sending current pose update :angle-vector-sequence to work with real-pr2 robot add :angle-vector-sequence based on interpolator::push in rats/src/interpolator.cpp update :send-pr2-controller interface (:send-pr2-controller nil (action joint-names all-positions all-velocities starttiem duration) support send pr2 :inverse-kinematics c add test code for load-pr2-file add load-pr2-file add dual arm jacobian, torque sample by s.nozawa fix pr2 gripper action sending add hrp2 compatible :go-pos [m] [m] [degree] method remove waiting for move-base action in pr2-interface :init change to startable pr2-interface when move_base not found add :move-to method and move-base-action slot variable add :gripper and :override :limb of irtrobot.l to suppoer send pr2 :larm :gripper :angle-vector change to use roseus, whcih automatically load roseus.l eustf.l actionlib.l change to use pr2.l in pr2eus directory rosmake pr2eus to generate pr2.l fix to use require for eustf and actionlib revert to r527 float mod is supported in eus result of (r2deg p) should be integer for using mod crop joint-angle to +- 360 in :state :potentio-vector add depend package add gripper action to pr2-interface wait at most 10 seconds fix return-from, in :state method fix syntax error (require :keyword path) <- (require path) add pr2_controllers_msgs fix to use package:// load style rename roseus-add-{msgs,srvs}->ros::roseus->add-{msgs,srvs} pr2model is obsoluted add pr2 ros controlelr and euslisp interface add utility functions for pr2 euslisp model add sample program and launch file for PR2 users remove piped-fork and use ros::rospack-find modify pr2model.l to head joint add reset manip pose to pr2 fix pr2model, support :fix and :relative mode in :inverse-kinematics, see hold-cup in 2010_05_pr2ws/sample-motion.l for example override :init, set reset-pose as initial pose fix many bags to move pr2 by joint angle actionlib interface change middle-body-joint-angle-list API: omit string-upcase for joitn name add pr2eus model, which depends on urdf2eus Contributors: Haseru Chen, Yuki Furuta, Kei Okada, Yuto Inagaki, Satoshi Iwaishi, Manabu Saito, Shunichi Nozawa, Kazuto Murase, Masaki Murooka, Ryohei Ueda, Yohei Kakiuchi, Yusuke Furuta, Hiroyuki Mikita, Otsubo Satoshi
http://docs.ros.org/melodic/changelogs/pr2eus/changelog.html
CC-MAIN-2020-10
en
refinedweb
This multi-part series introduces Asynchronous Programming and the Twisted networking framework. - In Which We Begin at the Beginning - Slow Poetry and the Apocalypse - Our Eye-beams Begin to Twist - Twisted Poetry - Twistier Poetry - And Then We Took It Higher - An Interlude, Deferred - Deferred Poetry - A Second Interlude, Deferred - Poetry Transformed - Your Poetry is Served - A Poetry Transformation Server - Deferred All The Way Down - When a Deferred Isn’t - Tested Poetry - Twisted Daemonologie - Just Another Way to Spell “Callback” - Deferreds En Masse - I Thought I Wanted It But I Changed My Mind - Wheels within Wheels: Twisted and Erlang - Lazy is as Lazy Doesn’t: Twisted and Haskell - The End This introduction has some translations in other languages: 125 thoughts on “Twisted Introduction” the index link on this page: links back to your main page, and not to the index. great tutorial btw 🙂 Ah, wordpress cut-n-paste gets me again! Thanks 🙂 Hi, Dave. Do you mind if i’l translate this twisted intro to russian and publish it? I don’t mind at all, be my guest! Be sure to send me the link 🙂 Thank you! I will! Hello Dave, I find your intro on Twisted to be the best I have found to date, clear and covers important details that support a real understanding of Twisted. I would like to ask if in the future you have any plans for other Twisted tutorials. Just for an example tutorial; as the intro above couldn’t cover some deep areas of Twisted, I was thinking a multiple part (over time) tutorial that each covers say different abstractions. I would love to support you and the work you do in making Twisted a learning joy for me, a first time. Please let me know if any money I can send that will help. Ed. Hey Ed, thanks for the kind words! I appreciate your offer, but I’m happy to make this tutorial a labor of love. I plan to have a few more parts in this series and then take a break for a while. It’s hard work 🙂 I’m not sure whether I’m going to write more Twisted tutorials after that, but if you make it through the end of mine, then I think you’ll be well prepared to dive into the main documentation of Twisted itself. Also, Jp Calderone has a lengthy series that goes into the details of the Twisted Web framework over here. Thanks Dave, just thinking such work a labor of love …wow. I did look over the link you gave, and like it. Dave you take care, and I look forward to the parts you plan to add here, as I’m learning Twisted to build a larger tool set in python and your tutorial has been a great help in that direction. Ed. Good luck with your tool set! Hello, I understand this is not an interactive tutorial, if I may ask a question please. I find in part 5 something that I wish to understand; the word “buildFactory” has had me scratch my head because I can’t seem to find it mentioned anywhere I have looked as portocol interfaces and others, along with the tutorial pages 4 and 6. Also, have not found a reference to it on web searches I have done too. Where do I find it or do I just lack some background in Twisted I need. Ed. Thank you. Hey Ed, you found a typo! That should have been buildProtocol. I’ve fixed the text of Part 5, thanks for the close reading! Dave, thank you for the help. Ed. Great intro! Would be nice, if you could release the whole serie as a pdf. Jo. Thanks! Once I finish the series, I’ll look into possibilities there. I added joliprint links to each article, which generates a pdf from my posts. Might be useful, not sure. Thanks Dave for your excellent work! I’m working on translating this series into Japanese. I’ve finished around part5. Is it okay to keep publishing them in my space? Absolutely! I put a link to your site on the index page above. Thanks for doing that. Also, could you tell me your full name, so I can mention you with the link? Thank you very much. My name is Shigeru Kitazaki. Nice to meet you, Shigeru 🙂 Hi! Thanks for hard work, i read all articles and my head-ache is gone 😀 Even twisted book,that i lent from library of uni, was too hard to understand at the beginning, but no more… Btw, if anyone want pdf format – you can download it from my google docs: And at the moment i’m working on estonian translation. Glad it was helpful for you! And thanks for the pdf, can you tell me how to go about doing that? Send me the link to your translation and I’ll put it on the main index page. Hi timgluz, can you tell how to download your pdf…it is asking for username/passwd when I click on the above link. thanx Dear Dave I’m sure your writeup is a great source of knowledge. But caused by the lame of my brain, I fail to understand how to adopt it to my problem. I need to write application that : 1. Have to services in it: 1.a. a TCP server (just like a basic echo server) 1.b. an XMPP-Client/bot 2. Every msg come from XMPP-server, will wrote to StdOut (it’ll be extended inthe future) 3. Every msg received by TCP-server part, will be forwarded by the xmpp-client to XMPP-server. I post my code at pastebin: 1. echobot.py : 2. echobot.tac : 3. the occured error : Kindly please give me any enlightment. Sincerely -bino- Hey bino, I might not have time to look into your example to closely. But the traceback just looks like you are using an undefined name. I’m not too familiar with wokkel, but perhaps you mean self.send()? I think your tutorials are really awesome, and I love how you’re consistently putting so much work into this while remaining responsive and approachable. This was *long* overdue, but the official documentation page now links back to you: I couldn’t figure out if marketing your tutorials as good-for-newbies or not was doing you a disservice or not; I’m just mainly using them as a pointer for newbies who are willing to learn. If you disagree with the wording, please feel free to e-mail me about it. Thank you, I appreciate the kind words! I am specifically aiming this tutorial for people who are new to Twisted and, more importantly, new to asynchronous programming. So your description is apt. Finally, a very good introduction + tutorials + how-to in Twisted! Keep them coming, maybe even publish it as a book later? 😉 Thanks much! 🙂 You’re welcome! This guide looks good (I haven’t actually read much of it yet :-)). Given the length, it would be better to have a PDF for an e-reader, or printing it out, but the PDF mentioned earlier in the comments only go up to part 10. Any plans to produce an updated PDF? Are the parts that are in the PDF up-to-date, or has there been edits, so that one would be better off just reading the whole guide online? There have been a few edits of the earlier parts, but no dramatic changes. I’m going to look into packaging options once I finish the parts I initially planned to write, which will take me to Part 23. BTW, I added ‘joliprint’ links for all my posts. That’s a website that claims they can produce a PDF of any webpage. It might be useful for you, I’m not sure. This is amazing – thanks! Now I just need to find the time to read it all. 🙂 Good luck 🙂 Dear Dave, I have converted the twisted tutorial into .mobi format so that I can read it on my kindle. Do you mind if I publish the ebook version of this tutorial? Regards, Sangeet Kumar Hi Sangeet, I don’t mind at all. Could you send me the link so I can add it to the main tutorial page? dave Hi Dave, I have converted your tutorial into .mobi format to read in my kindle, so do you mind if I publish the ebook version of this tutorial. Regards, Sangeet Hello, Am enjoying this tutorial very much and interested in where I can obtain the Kindle-compatible version to try out on the Kindle I just got my wife for HER birthday 🙂 TCB Sounds like you’ll need to get a second one 🙂 I just installed a Kindle plugin, and now there should be a Kindle It! section on the right hand toolbar where you can send a post to your Kindle. Let me know how that works. I’ve been skimming through this, but one thing I haven’t seen is an example of a two-way protocol eg. a poetry server that takes a poem name as input and returns the requested poem. Do you cover this at all? I do, Part 12 has a two-way protocol that involves a request and a reply. Thanks for that 🙂 Am I also right in thinking that the factories are “single use only”, eg. once you use a PoetryClientFactory to get one poem, you can’t use it again? Factories are actually multi-use. For example, the client factory in Part 5 is used for every poem you download, and actually keeps track of the number of poems we’ve gotten so far so it knows when to quit the program. That’s solved a little differently in later versions, but it illustrates that factories can be used over and over again, and often are. Protocols, on the other hand, are created and used for a single connection. But in, say, twisted-client-8/get-poetry.py, the “PoetryClientFactory” keeps a single Deferred which is called back upon completion. This can’t be done more than once… have I misunderstood, or is this a different usage? You’re absolutely right. For that case, the factory is a single-use object. So I guess I should have said that factories can be multi-use, but don’t necessarily have to be. Server factories are almost always multi-use, since most servers accept many connections. Client factories, on the other hand, are probably more of a mixed bag in that respect. Okay, I see what you mean now 🙂 Thanks for the excellent guide, by the way! Thank you! great articles! far better than in my opinion. you should write a book with the name , lol. great articles! far better than ‘twisted essentials’ in my opinion. you should write a book with the name ‘asyn-model and twisted’, lol. Thank you very much! You might want to highlight the code in chapters using this: from twisted.internet import reactor reactor.run() Wrong window? 🙂 No, i was trying to insert some code highlighted with HTML. As the post area does not have help, i didn’t know if it will work. It didn’t. And i didn’t want to add another comment stating this 🙂 Ah, no worries. I don’t get that many posts 🙂 I’ve been meaning to look into something like that, thank you for the link! Ok, I went through and highlighted the code using a wordpress plugin. Hopefully it looks a little spiffier. Thanks for your work on “Twisted Introduction”. Is there up to date PDF version available? (I was directed here from #python IRC channel.) There isn’t a PDF of the entire thing, but each post has a ‘joliprint’ link at the end which will create a PDF of that article. If you have a Kindle, I added a widget on the right to send a post your free Kindle account as well. Joliprint looses formatting of code completely making it unreadable. I tried readability addon but it makes code look bad too. Ah, too bad. Back to the drawing board 🙂 I raised this issue using Feedback option on joliprint.com In the meantime I could create PDFs by cutting out unneeded elements from DOM using Firebug and then print to PDF using PDF Creator. I could send you one article so you could see if it looks ok. Wow, thanks! Hopefully the joliprint people can fix that and it will just work. But in the meantime if you want to send me that article that would be cool, though I’d hate to have you do a lot of busywork if it’s a pain. I couldn’t find your email anywhere on krondo.com… Oh, right, it’s [email protected]. Thanks a lot for this tutorial. It is incredibly well done and includes wonderful nerdy jokes. Great stuff. You are welcome, glad you liked it! Really like your introduce! Thanks! Hello Dave, i am writing my Thesis right now about performance on the Internet and wonder which kind of licese is your tutorial using. Or in other words, am I allowed to use three images from your work. Mainly those about sync and async Modell. Thanks for your work! Hey Lukas, I guess I should give the particular license some thought. I shall do so! But the short answer is that you are perfectly welcome to use the figures as long as you give me credit for them. thanks, dave Hi, how would one handle a situation where the server closes the port? I don’t get any callbacks to the connectionlost function in a factory. Hi sma, on a factory the callbacks you are looking for are ‘clientConnectionLost’ and ‘clientConnectionFailed’. The first one is called when a connection is lost after it was connected. The second is called if the attempt to connect never succeeded in the first place. The ‘connectionLost’ callback is actually a callback on the Protocol, which is another place you can handle lost connections (but not failed connections, those can only be handled by the factory). Make sense? Hi Dave. First, thank you for the great job you have done. I believe, every reader of this excellent tutorial deeply appreciates the effort you put into it. *applause* If you don’t mind, I have a couple of questions regarding Twisted in particucular and programming in general. Hope you could answer them 🙂 Suppose you have a protocol that is almost completely symmetric, i.e. it doesn’t matter who the server is and who the client is. However, a few cases when this actually matters should be handled. For this purpose one could make two distinct but very similar implementations, subclassing both ClientFactory and ServerFactory, then doing connectTCP on a ClientFactory subclass instance and listenTCP on the ServerFactory subclass instance. But this sounds quite ridiculous, doesn’t it? So, alternatively, one could subclass the generic Factory class and use it for both serving and being a client. However, here arises a problem I’ve bumped into recently: how could one tell whether he is accepting a connection or connecting himself from within the connectionMade callback on Factory subclass? And there’s something more generic. Suppose you have a large project that has to handle both the network interaction and (ugly, huh?) a GUI. How would you structure such a project? I’ve come up with a solution where you have a base object which is in charge for everything and stores references to other objects which are responsible for the specific parts of the program. For instance, you could have a base class called Application and it could store references to classes like NetworkController, GUI, DiskController and so on. (Here I use the words “class” and “object” interchangeably, hope you’re not going to find me and make me write “Class is not an object” 9001 times for that.) These “child” objects, in turn, store references to the Application itself, so whenever a class from the network-responsible part has to reach the GUI to tell it that the bandwidth is exhausted and immediate action has to be taken, it simply calls something like self.app.gui.TellTheUserWeAreScrewed(). However, I believe there exists a better approach but I can’t just figure out what it is 🙁 Hope you can help me. Glad you liked the series! I will address the first question now. The second is much larger 🙂 I think your alternative solution (subclassing from Factory) will work, but you would need to use two instances, one for listening and one for connecting. The instances will need to know for which end of the protocol they are creating protocols for. You would provide that information when you create them. Make sense? Absolutely! Thank you for providing the answer so quickly. I’m so ashamed I didn’t come with this myself… Don’t be, it’s always a learning process. Hey there, so I’ve been thinking about your second question. It’s a big question 🙂 It basically turns into the question “how should you write software”, since most really useful programs end up getting fairly complicated. I don’t have a real answer to your question, even though I’ve written a lot of software, some of it ending up kind of big. To me there is a legitimate viewpoint that interprets most of the major innovations in software (functions, modules, classes, types, etc.) as different ways to answer this question, basically different strategies for dealing with complexity. But here are some general rules of thumb that I think most programmers would agree with: + Build your software out of small components and make the interfaces to those components as small as possible. + Try to keep the dependencies of any one component small in number. Following this strategy will make it easier to test your components and to change them over time. Although it is more controversial, I also think there is some value in the idea of Dependency Injection, a pattern where components declare their dependencies rather than explicitly create them. Then, during runtime, the dependencies are ‘injected’ by the context, usually some sort of configuration system. This also makes it very easy to, say, substitute a mock component during testing. The situation you are describing where you have on big object that all the others depend on is a common one that people end up with, I think. And whether it is good enough for your projects depends on your situation. For smallish programs, I think you can get away with that, as long as you keep that top-level object very simple (basically it’s just a container of other things without methods of its own and acts kind of like a dependency injection configuration). Anyway, good luck on your project, if you haven’t already finished 🙂 Hi Dave, great tutorial. I now have a running twisted daemon working fine and dandy which (listens) receives information over the internet from a number of gsm devices. The problem is that this device’s ip/port change every so often and when this hapens it leaves the open connection socket listening for more data(not in time wait or finished state) and creates another socket. eventually the machine runs out of sockets and everything starts to fail. So far I have to restart the daemon every now and then to workarround this issue. but that’s just the lamest solution. and it’s not even a solution because even if a get a cron job to do it, when I scale the amount of devices it will become unpractical. I tried to set a timeout but since there is not any specific response that the server is waiting for then timeout is never met. So, is there a way close sockets aotumatically when they have say 180 seconds idle, or a way to tell the sockets to close after each transmission, or any other solution that I can implement, or something that I should change in my implementation. or anything you can think of that will still work when I have at least 1000 devices transmitting every 90 seconds. I am running in a centos 5 server VPS. python 2.4 Hi Ricardo, when you say you tried to set a timeout, what do you mean? I’m not sure which timeout setting you were using. But in any case, you should be able to do this in a pretty straightforward way. In your Protocol implementation you will want to set a timeout each time a connection is made (you can do that in the Protocol __init__) and then refresh the timeout when any data is received (you can do that in the dataReceived method). The timeout itself can be a DelayedCall that you create with reactor.callLater. That object has an api that let’s you reschedule the delay so you can refresh the timeout. If the timeout is reached, you will want to call .transport.loseConnection() to break the connection. Make sense? it does make sense, let me see if I get this straight, Every time I open a new socket i statr a countdown timer (say 120 seconds) and every time I receive data I update the timer (back to 120 seconds) so that if 120 seconde shall pass with no activity whatsoever I close the socket using the loseConnection method. sound pretty nifty, what I don’t get is how do I do that. what should i do in the protocol init and in the data received , do you have an example? Thank you very much for your help. I tried to use a method called connection.timeout but it never worked for me. What kind of object is ‘connection’? I’m not sure what API you are referring to. ok, now if I have 128 sockets in my machine this solution will only allow me 128 clients to be connected in a two minute frame since the devices send data every 90 seconds (roughly). and when i do loose the connection the device has to login again prior to sending data. so it still makes me a bit unconfortable, isn’t thre a way that many clients can share a socket. or that I could take care of saying 1000 devices even if I only have 128 sockets. or that it can drop a socket whithout droping the connection or without leeting the device know his connectio has been dropped. so that it does’t feel the need to relogin all the time With TCP connections each connection will use one socket, there is no way for two different clients to share the same TCP socket. Is 128 just a hypothetical number or are you really limited to 128 sockets? That seems pretty small. BTW the part of the reactor.callLater that i don’t get is, how do I update the call later, I thought that i could only make call something later and that’s it. not that i could keep adjusting the timer, it a wicked idea that opens up many watchdog like possibilities. I will check the documentation on the call later api. forgive my typos, i need to get a new keyboard soon. Hey Ricardo, check out the documentation for callLater — the return value is an IDelayedCall object with an API you can use to adjust the timeout (or cancel the call). Hey Dave I am using the twisted.internet API I did check on the call later, and the only issue would be the one with the sockets, since i am on a virtual private server (shared server) thats all i get and I tried to change the ammount of socketsit by modifiying /proc/sys/net/core/somaxconn but even as root I am unable to change it. Likely it only chageable by the administrators. Now on the other hand the devices I am listening to areable to speack UDP also, Do you in you expert opinion think that it would be a good idea to switch to udp? would that allow me to have unlimited clients? what is the tradeoff? Is there a way for me to drop the socket without letting the device know and the making a new one 90 seconds later when it transmits again, only to drop it right after the package has been received. I get 100 Byte packages every 90 seconds from remote devices informating me the current state of a number of variables via gprs/internet. Hey Ricardo, if you close a TCP socket, the TCP protocol will take care of informing the other side that the connection has been dropped. At that point it is up to the client to do the right thing and reconnect if needed — you’ll have to test to see if these devices will do that. With a bash shell, the command: ‘ulimit -n’ tells you the number of open file descriptors (sockets) you can have. You can try to increase it with ‘ulimit -n XXX’ where ‘XXX’ is the number you want. It sounds like UDP is an option here. It would allow you to have essentially unlimited clients since UDP is a connectionless protocol. As I am sure you know, UDP does not guarantee delivery. Is it ok if sometimes packets are not delivered? If so, UDP could be a nice fit for you. Otherwise you would have to implement your own retry mechanism under UDP, essentially replicating parts of TCP anyway. they do indeed, but its apin that every time they are first reconnecting, then relogin which is not needed, and then sending a package, then i do loseConnection to free the socket, but i do get the same behavior all over again, and i was expecting to just get the data package. thats because the connection is been closes cleanly, yet if there were a way that I could drop it (not claenly close it) in such a manner that the device wouldn’t know, then when it transmits again the reactor.listen on the port would listen to the incoming package open a socket, process it, and close it again, so that the socket is used only in the time it takes the package to be delivered. I did this and i got recursive attempts to login from the device which were satisfactorily responded from the server. that is the device would succesfully login, then the connection lost close the connection and then the device attempts to login again instead of just transmitting the data. The devices did what i want them to when I had my first issue which was that the gprs would rotate the ip/port of the device and the device would open another socket when it sended the next data package(not trying to relogin), leaving the other socket in an established state, eventually making the server run out of sockets even when I had only one or two devices. I’m not entirely following you, but there’s no way to drop a socket without closing it — they mean the same thing. Hi, ricardo. If you still reading this, I’m doing the same thing with gps/gprs trackers which has mostly the same behaviour. And I found that the ‘timeout’ solution is quite appropriate in our situation (I use TCP connections). The only difference is that I don’t know exactly refresh time interval for each device, so I use some alogirithm to predict timeout value. BTW if you have the list of open connectoins where each connection mapped to the gps device (I do), then for every new login request from the device you could consider all old connections already in list is lost, so you could manually ‘lose’ them. Another thing for ricardo – if you’re limited in number of simultaneously opened TCP connections, then you could try to close connection after receiving first data packet (which is after login packet). In that case, each next portion of data will be consisting off two packets: login and data Hi Dave, I have just started your introduction and i am enjoying it very much. I was also looking for something to do to give back what the python community has been giving me for the past two years. I have noticed that there is no one working in a spanish translation of the tutorial. How about i do it? Please, feel free to contact me, i’ll be waiting for your permission! Marcial Permission granted! Thanks very much. I’m especially excited about this because I’ve been slowly trying to learn Spanish and having my own words translated will be like having my own Rosetta Stone 🙂 can I translate this tutorial into Korean and publish in my blog? How can I reach you via like email? Definitely! You can reach me at [email protected]. It would have been nice if your tutorial titles did a better job describing the topics they’re covering Going straight to the top of my “to read” list, thanks for all your effort! Thanks, glad you like it! Twisted musculature should not be used without consent. Slow moving poetry should learn to keep the pace with novelists who are more advanced in style. Dude, thank you for this tutorial! I spent like a week on the twisted documentation without getting my head around it. This really brought light into the cave. Cool, glad you liked it! Hey Dave! I just started to read the blog but the link to Part 2 of the article is broken, its redirecting to the Index page. Please make the change as Part 2 has many references elsewhere. I am anxious to read the full article. 🙂 Utkarsh Whoops, should be fixed. I’ll fix the others. I switched to more meaningful link names, but I need to update the links on the posts. The links to HTML and PDF versions alluded to on this page also seem to be broken. Yes, those appear to be gone. Cleaned up the links, thanks!
http://krondo.com/an-introduction-to-asynchronous-programming-and-twisted/comment-page-1/?replytocom=807
CC-MAIN-2020-10
en
refinedweb
How to Deploy Your Secure Vue.js App to AWS 'vue' import Router from 'vue-router' import Home from '@/components/home' import Secure from '@/components/secure' Vue.use(Router) let router = new Router({ routes: [ { path: '/', name: 'Home', component: Home }, { path: '/secure', name: 'Secure',: 'history', })for: - Windows: - Mac/linux run pip install awscli After you’ve installed aws-cli, you will need to generate keys within AWS so you can perform actions via the CLI. - Choose your account name in the navigation bar, and then choose My Security Credentials. (If you see a warning about accessing the security credentials for your AWS account, choose Continue to Security Credentials.) - Expand the Access keys (access key ID and secret access key) section. - Choose Create New Access Key. A warning explains that you have only this one opportunity to view or download the secret access key. It cannot be retrieved later. - If you choose Show Access Key, you can copy the access key ID and secret key from your browser window and paste it somewhere else. - If you choose Download Key File, you receive a file named rootkey.csvthat contains the access key ID and the secret key. Save the file somewhere safe. Note: If you had an existing AWS account or are not using root credentials. You can view and generate your keys in IAM. Now that you have your Access Key and Secret Access Key, you need to configure the cli. In your console run aws configure and paste in your keys. $ aws configure AWS Access Key ID [None]: YOUR KEY AWS Secret Access Key [None]: YOUR SECRET Default region name [None]: us-east-1 Default output format [None]: ENTER Now, you can use the aws-cli to sync your ./dist folder to your new bucket. Syncing will diff what’s in your ./dist folder with what’s in the bucket and only upload the required changes. aws s3 sync ./dist s3://your-bucket-name Tab back to your S3 bucket endpoint, and you should see your site hosted on S3! For convenience, add the following script entry to package.json so you can run npm run deploy when you want to sync your files. "scripts": { "deploy": "aws s3 sync ./dist s3://your-bucket-name" } Distribute your App with Amazon CloudFront CDN Amazon S3 static web hosting has ultra-low latency if you are geographically near the region your bucket is hosted in. But, you want to make sure all users can access your site quickly regardless of where they are located. To speed up delivery of your site, you can AWS CloudFront CDN. CloudFront is a global content delivery network (CDN) that securely delivers content (websites, files, videos, etc) to users around the globe. At the time of writing this article, CloudFront supports over 50 edge locations: Setting up a CloudFront Distribution takes just a few minutes now that your files are stored in S3. - Go to CloudFront Home - Click Create Distribution, and select Get Started under Web settings - In the “Origin Domain Name” you should see your bucket name in the drop-down. Select that bucket and make the following changes: - Viewer Protocol Policy: “Redirect HTTP to HTTPS”. (This is a secure app, right!?) - Object Caching: “Customize”. And set Minimum TTL and Default TTL both to “0”. You can adjust this later to maximize caching. But, having it at “0” allows us to deploy changes and quickly see them. - Default Root Object: “index.html” - Click Create Distribution The process can take anywhere from 5-15 minutes to fully provision your distribution. While you wait, you need to configure your distribution to handle vue-router’s history mode. Click on the ID of your new distribution and click the “Error Page” tab. Add the following error pages. These error page configurations will instruct CloudFront to respond to any 404/403 with ./index.html. Voila! Click on the “General” tab, and you should see an entry for “Domain Name”. The Domain Name is the publicly accessible URL for your distribution. After the status of your new distribution is Deployed, paste the URL into your browser. Test to make sure the history mode works by navigating to the secure page and refreshing your browser. Add Authentication with Okta To use Okta, you must first have an Okta developer account. If you don’t have one you can create a free account. After you are logged in, click “Applications” in the navbar and then “Add Application” button. Make sure to select “Single-Page App” as the platform and click Next. You will need to add your CloudFront URL to both Base URIs and also as a Login redirect URIs, otherwise Okta will not allow you to authenticate. Your application settings should look similar to this (except for your CloudFront URL). Note: Make sure to use HTTPS when entering your CloudFront URL. Take note of your “Client ID” at the bottom of the “General” tab as you will need it to configure your app. Add Secure Authentication to Your App Okta has a handy Vue component to handle all the heavy lifting of integrating with their services. To install the Okta Vue SDK, run the following command: npm i @okta/[email protected] Open src/router/index.js and modify it to look like the following code. Also, make sure to change {clientId} and {yourOktaDomain} to yours! import Vue from 'vue' import Router from 'vue-router' import Home from '@/components/home' import Secure from '@/components/secure' import Auth from '@okta/okta-vue' Vue.use(Auth, { issuer: 'https://{yourOktaDomain}/oauth2/default', client_id: '{clientId}', redirect_uri: window.location.origin + '/implicit/callback', scope: 'openid profile email' }) Vue.use(Router) let router = new Router({ mode: 'history', routes: [ { path: '/', name: 'Home', component: Home }, { path: '/implicit/callback', component: Auth.handleCallback() }, { path: '/secure', name: 'Secure', component: Secure, meta: { requiresAuth: true } } ] }) router.beforeEach(Vue.prototype.$auth.authRedirectGuard()) export default router Next is to lock down the /secure route to only authenticated users. Okta’s Vue SDK comes with the method auth.authRedirectGuard() that inspects your routes metadata for the key requiresAuth and redirects unauthenticated users to Okta’s authentication flow. Finally, make some style changes to App.vue <template> <div id="app"> <div> <a href="#" v-Login</a> <div v-else> Welcome {{ activeUser.email }} - <a href="#" @click.Logout</a> </div> </div> > <style> #app { font-family: 'Avenir', Helvetica, Arial, sans-serif; -webkit-font-smoothing: antialiased; -moz-osx-font-smoothing: grayscale; text-align: center; color: #2c3e50; margin-top: 60px; } </style> In your terminal, restart the dev server via npm run dev. Tab to your browser and open. If you click “Login” or “Go to secure page” (the protected /secure route), you should get Okta’s authentication flow. Clicking either of these should show you as logged in and you should be able to access the Secure Page. Build a Secure Express REST Server Finally, we are going to build an Express server to respond to /hello and /secure-data requests. The /secure-data will be protected and require an authentication token from the frontend. This token is available via $auth.getUser() thanks to Okta’s Vue SDK. To get started, create a new directory for your server. mkdir secure-app-server cd secure-app-server npm init -y Then install the required dependencies. npm install -s express cors body-parser @okta/jwt-verifier aws-serverless-express Next is to create a file that will define the application. Copy the following code into app.js and change {clientId} and {yourOktaDomain} to yours. const express = require('express') const cors = require('cors') const bodyParser = require('body-parser') const authRequired = () => { return (req, res, next) => { // require! } } // public route that anyone can access app.get('/hello', (req, res) => { return res.json({ message: 'Hello world!' }) }) // route uses authRequired middleware to secure it app.get('/secure-data', authRequired(), (req, res) => { return res.json({ secret: 'The answer is always "A"!' }) }) module.exports = app Create one last file that loads up the app and listens on port 8081. Create ./index.js and copy the following code. const app = require('./app') app.listen(8081, () => { console.log('listening on 8081') }) Start the server by running node ./ in your console. Tab to your browser and open. You should see our JSON payload. But, loading should result in an error. Call the Secure API Endpoint from Your Vue.js Frontend With your secure Express REST server still running, navigate back to your client and install axios so you can call the /secure-data endpoint. npm i axios Modify ./src/components/secure.vue so that it will get the access token from the Okta Vue SDK and send the request to the API. <template> <div> <h1>Secure Page</h1> <h5>Data from GET /secure-data:</h5> <div class="results"> <pre>{{ data }}</pre> </div> <div> <router-linkGo back</router-link> </div> </div> </template> <script> import axios from 'axios' export default { data () { return { data: null } }, async mounted () { let accessToken = await this.$auth.getAccessToken() const client = axios.create({ baseURL: '', headers: { Authorization: `Bearer ${accessToken}` } }) let { data } = await client.get('/secure-data') this.data = data } } </script> <style> .results { width: 300px; margin: 0 auto; text-align: left; background: #eee; padding: 10px; } </style> Tab back to your browser and reload your web app. Navigate to the, and you should see the results from the API call. Configure Serverless and Deploy the Express API Serverless is an open-source AWS Lambda and API Gateway automation framework that allows you to deploy your app into a serverless infrastructure on AWS. The term “serverless” (not to be confused with the software Serverless) is used to describe an app running in the cloud that doesn’t require the developer to provision dedicated servers to run the code. Serverless uses AWS Lambda and AWS API Gateway to run your express API 100% in the cloud using only managed services. AWS Lambda is a service that lets you run code in the cloud without provisioning or managing servers. And, AWS API Gateway is a service that makes it easy for developers to create, publish, update, monitor, and secure API’s at scale. Combining both of these services give you a robust platform to host a secure API. To get started with Serverless, install it globally. npm install -g serverless Next, you need to create a Serverless configuration in your server app. Use the following command from within your ./secure-app-server project. serverless create --template aws-nodejs --name secure-app-server Open up serverless.yml and modify it to look like the file below. When you create a Serverless configuration, it contains a lot of boilerplate code and comments. The following structure is all you need to get the app deployed. service: secure-app-server provider: name: aws runtime: nodejs8.10 stage: dev functions: api: handler: handler.handler events: - http: path: "{proxy+}" method: ANY cors: true The provider spec informs Serverless that your app runs NodeJS and targets deployment on AWS. The functions outlines a single handler that should handle ANY HTTP requests and forward them your app. To finish up Serverless configuration, modify handler.js to the following code. It uses aws-serverless-express which is a neat little package that proxies ALL API requests to a local express app. 'use strict'; const awsServerlessExpress = require('aws-serverless-express') const app = require('./app') const server = awsServerlessExpress.createServer(app) exports.handler = (event, context) => { awsServerlessExpress.proxy(server, event, context) } Finally, you should be ready to deploy your app via Serverless. Run the following command. serverless deploy This process will take a few minutes to provision the stack initially., Once completed, you should see an endpoints entry under “Service Information” (your URL will be slightly different than mine). endpoints: ANY -{proxy+} To test it out, navigate to and you should see our hello world message. Attempting to go to should result in an error. Change Frontend Vue to Use Production API Up until this point, your frontend app has been configured to call the API hosted locally on. For production, you need this to be your Serverless Endpoint. Open ./src/components/secure.vue and replace baseURL with your endpoint within mounted(). baseURL: '', Finally, build your app and deploy it to CloudFront. npm run build npm run deploy Navigate to your CloudFront URL, and you should have a working app! Congratulations on a job well done! If your CloudFront URL failed to pull the latest version of your web app, you might need to invalidate the CDN cache. Go to your distribution, click on the Invalidations tab. Click Create Invalidation and invalidate paths “/*”. It will take a few minutes, but once it’s complete, you should be able to pull in the latest version. Final Thoughts Amazon Web Services is a robust platform that can pretty much do anything. But, it has a relatively steep learning curve and might not be right for all cloud beginners. Nonetheless, I encourage you to dig more into what AWS provides and find the right balance for your development needs. You can find the full source code for this tutorial at: and. Here are a few other articles I’d recommend to learn more about user authentication with common SPA frameworks. - Build a Basic CRUD App with Vue.js and Node - Add Authentication to Your Vanilla JavaScript App in 20 Minutes - Build a React Application with User Authentication in 15 Minutes - Build an Angular App with Okta’s Sign-in Widget in 15 Minutes Please be sure to follow @oktadev on Twitter to get notified when more articles like this are published.
https://www.sitepoint.com/deploy-your-secure-vue-js-app-to-aws/
CC-MAIN-2020-10
en
refinedweb
How to get Django and ReactJS to work together? New to Django and even newer to ReactJS. I have been looking into AngularJS and ReactJS, but decided on ReactJS. It seemed like it was edging out AngularJS as far as popularity despite AngularJS having more of a market share, and ReactJS is said to be quicker to pickup. All that junk aside, I started taking a course on Udemy and after a few videos it seemed important to see how well it integrates with Django. That is when I inevitably hit a wall just getting it up and running, what kind of documentation is out there so that I am not spinning my wheels for several hours and nights. There really isn't any comprehensive tutorials, or pip packages, I came across. The few I came across didn't work or were dated, pyreact for example. One thought I had was just to treat ReactJS completely separate, but keeping into consideration the classes and IDs I want the ReactJS components to render in. After the separate ReactJS components are compiled into a single ES5 file, just import that single file into the Django template. I think that will quickly breakdown when I get to rendering from Django models although the Django Rest Framework sounds like it is involved. Not even far enough to see how Redux affects all of this. Anyway, anyone have a clear way they are using Django and ReactJS they care to share? At any rate, the documentation and tutorials are plentiful for AngularJS and Django, so it is tempting to just go that route to get started with any front-end framework... Not the best reason. Solutions/Answers: Answer 1: I don’t have experience with Django but the concepts from front-end to back-end and front-end framework to framework are the same. - React will consume your Django REST API. Front-ends and back-ends aren’t connected in any way. React will make HTTP requests to your REST API in order to fetch and set data. - React, with the help of Webpack (module bundler) & Babel (transpiler), will bundle and transpile your Javascript into single or multiple files that will be placed in the entry HTML page. Learn Webpack, Babel, Javascript and React and Redux (a state container). I believe you won’t use Django templating but instead allow React to render the front-end. - As this page is rendered, React will consume the API to fetch data so React can render it. Your understanding of HTTP requests, Javascript (ES6), Promises, Middleware and React is essential here. Here are a few things I’ve found on the web that should help (based on a quick Google search): - Django and React API Youtube tutorial - Setting up Django with React (04-19 update: broken link) - Search for other resources using the bolded terms above. Try “Django React Webpack” first. Hope this steers you in the right direction! Good luck! Hopefully others who specialize in Django can add to my response. Answer 2: I feel your pain as I, too, am starting out to get Django and React.js working together. Did a couple of Django projects, and I think, React.js is a great match for Django. However, it can be intimidating to get started. We are standing on the shoulders of giants here 😉 Here’s how I think, it all works together (big picture, please someone correct me if I’m wrong). - Django and its database (I prefer Postgres) on one side (backend) - Django Rest-framework providing the interface to the outside world (i.e. Mobile Apps and React and such) - Reactjs, Nodejs, Webpack, Redux (or maybe MobX?) on the other side (frontend) Communication between Django and ‘the frontend’ is done via the Rest framework. Make sure you get your authorization and permissions for the Rest framework in place. I found a good boiler template for exactly this scenario and it works out of the box. Just follow the readme and once you are done, you have a pretty nice Django Reactjs project running. By no means this is meant for production, but rather as a way for you to dig in and see how things are connected and working! One tiny change I’d like to suggest is this: Follow the setup instructions BUT before you get to the 2nd step to setup the backend (Django here), change the requirements file for the setup. You’ll find the file in your project at /backend/requirements/common.pip Replace its content with this appdirs==1.4.0 Django==1.10.5 django-autofixture==0.12.0 django-extensions==1.6.1 django-filter==1.0.1 djangorestframework==3.5.3 psycopg2==2.6.1 this gets you the latest stable version for Django and its Rest framework. I hope that helps. Answer 3: As others answered you, if you are creating a new project, you can separate frontend and backend and use any django rest plugin to create rest api for your frontend application. This is in the ideal world. If you have a project with the django templating already in place, then you must load your react dom render in the page you want to load the application. In my case I had already django-pipeline and I just added the browserify extension. () As in the example, I loaded the app using django-pipeline: PIPELINE = { # ... 'javascript':{ 'browserify': { 'source_filenames' : ( 'js/entry-point.browserify.js', ), 'output_filename': 'js/entry-point.js', }, } } Your “entry-point.browserify.js” can be an ES6 file that loads your react app in the template: import React from 'react'; import ReactDOM from 'react-dom'; import App from './components/app.js'; import "babel-polyfill"; import { Provider } from 'react-redux'; import { createStore, applyMiddleware } from 'redux'; import promise from 'redux-promise'; import reducers from './reducers/index.js'; const createStoreWithMiddleware = applyMiddleware( promise )(createStore); ReactDOM.render( <Provider store={createStoreWithMiddleware(reducers)}> <App/> </Provider> , document.getElementById('my-react-app') ); In your django template, you can now load your app easily: {% load pipeline %} {% comment %} `browserify` is a PIPELINE key setup in the settings for django pipeline. See the example above {% endcomment %} {% javascript 'browserify' %} {% comment %} the app will be loaded here thanks to the entry point you created in PIPELINE settings. The key is the `entry-point.browserify.js` responsable to inject with ReactDOM.render() you react app in the div below {% endcomment %} <div id="my-react-app"></div> The advantage of using django-pipeline is that statics get processed during the collectstatic. Answer 4: The first approach is building separate Django and React apps. Django will be responsible for serving the API built using Django REST framework and React will consume these APIs using the Axios client or the browser’s fetch API. You’ll need to have two servers, both in development and production, one for Django(REST API) and the other for React (to serve static files). The second approach is different the frontend and backend apps will be coupled. Basically you’ll use Django to both serve the React frontend and to expose the REST API. So you’ll need to integrate React and Webpack with Django, these are the steps that you can follow to do that First generate your Django project then inside this project directory generate your React application using the React CLI For Django project install django-webpack-loader with pip: pip install django-webpack-loader Next add the app to installed apps and configure it in settings.py by adding the following object WEBPACK_LOADER = { 'DEFAULT': { 'BUNDLE_DIR_NAME': '', 'STATS_FILE': os.path.join(BASE_DIR, 'webpack-stats.json'), } } Then add a Django template that will be used to mount the React application and will be served by Django { % load render_bundle from webpack_loader % } <!DOCTYPE html> <html> <head> <meta charset="UTF-8" /> <meta name="viewport" content="width=device-width" /> <title>Django + React </title> </head> <body> <div id="root"> This is where React will be mounted </div> { % render_bundle 'main' % } </body> </html> Then add an URL in urls.py to serve this template from django.conf.urls import url from django.contrib import admin from django.views.generic import TemplateView urlpatterns = [ url(r'^', TemplateView.as_view(template_name="main.html")), ] If you start both the Django and React servers at this point you’ll get a Django error saying the webpack-stats.json doesn’t exist. So next you need to make your React application able to generate the stats file. Go ahead and navigate inside your React app then install webpack-bundle-tracker npm install webpack-bundle-tracker --save Then eject your Webpack configuration and go to config/webpack.config.dev.js then add var BundleTracker = require('webpack-bundle-tracker'); //... module.exports = { plugins: [ new BundleTracker({path: "../", filename: 'webpack-stats.json'}), ] } This add BundleTracker plugin to Webpack and instruct it to generate webpack-stats.json in the parent folder. Make sure also to do the same in config/webpack.config.prod.js for production. Now if you re-run your React server the webpack-stats.json will be generated and Django will be able to consume it to find information about the Webpack bundles generated by React dev server. There are some other things to. You can find more information from this tutorial. Answer 5: A note for anyone who is coming from a backend or Django based role and trying to work with ReactJS: No one manages to setup ReactJS enviroment successfully in the first try 🙂 There is a blog from Owais Lone which is available from ; however syntax on Webpack configuration is way out of date. I suggest you follow the steps mentioned in the blog and replace the webpack configuration file with the content below. However if you’re new to both Django and React, chew one at a time because of the learning curve you will probably get frustrated. var path = require('path'); var webpack = require('webpack'); var BundleTracker = require('webpack-bundle-tracker'); module.exports = { context: __dirname, entry: './static/assets/js/index', output: { path: path.resolve('./static/assets/bundles/'), filename: '[name]-[hash].js' }, plugins: [ new BundleTracker({filename: './webpack-stats.json'}) ], module: { loaders: [ { test: /\.jsx?$/, loader: 'babel-loader', exclude: /node_modules/, query: { presets: ['es2015', 'react'] } } ] }, resolve: { modules: ['node_modules', 'bower_components'], extensions: ['.js', '.jsx'] } }; Answer 6: The accepted answer lead me to believe that decoupling Django backend and React Frontend is the right way to go no matter what. In fact there are approaches in which React and Django are coupled, which may be better suited in particular situations. This tutorial well explains this. In particular: I see the following patterns (which are common to almost every web framework): -React in its own “frontend” Django app: load a single HTML template and let React manage the frontend (difficulty: medium) -Django REST as a standalone API + React as a standalone SPA (difficulty: hard, it involves JWT for authentication) -Mix and match: mini React apps inside Django templates (difficulty: simple) Answer 7: You can try the following tutorial, it may help you to move forward: Serving React and Django together Answer 8: I know this is a couple of years late, but I’m putting it out there for the next person on this journey. GraphQL has been helpful and way easier compared to DjangoRESTFramework. It is also more flexible in terms of the responses you get. You get what you ask for and don’t have to filter through the response to get what you want. You can use Graphene Django on the server side and React+Apollo/Relay… You can look into it as that is not your question. References - Database Administration Tutorials - Programming Tutorials & IT News - Linux & DevOps World - Entertainment & General News - Games & eSport
https://loitools.com/blog/how-to-get-django-and-reactjs-to-work-together/
CC-MAIN-2020-10
en
refinedweb
What is JSON? JSON is used to store information in an organized, and easy-to-access manner. Its full form is JavaScript Object Notation. It offers a human-readable collection of data which can be accessed logically. In this XML vs. JSON tutorial, you will learn: - What is JSON? - What is XML? - History of JSON - History of XML - Features of JSON - Features of XML - Difference between JSON and XML - JSON Code vs XML Code - Advantages of using JSON - Advantages of using XML - Disadvantages of using JSON - Disadvantages of using XML What History of JSON Here are important landmarks that form the history of JSON: - Douglas Crockford specified the JSON format in the early 2000s. - The official website was launched in 2002. - In December 2005, Yahoo! starts offering some of its web services in JSON. - JSON became an ECMA international standard in 2013. - The most updated JSON format standard was published in 2017. History of XML Here, are the important landmark from the history of XML: - XML was also derived from SGML. - Version 1.0 of XML was released in February 1998. - Jan 2001:IETF Proposed Standard: XML Media Types - XML is the Extensible Markup Language. - 1970: Charles Goldfarb, Ed Mosher, and Ray Lorie invented GML - The development of XML started in the year 1996 at Sun Microsystem Features. Features of XML - XML tags are not predefined. You need to define your customized tags. - XML was designed to carry data, not allows you to display that data. - Mark-up code of XML is easy to understand for a human. - Well, the structured format is easy to read and write from programs. - XML is an extensible markup language like HTML. Difference between JSON and XML Here is the prime difference between JSON vs. XML JSON Code vs XML Code Let's see a sample JSON Code { "student": [ { "id":"01", "name": "Tom", "lastname": "Price" }, { "id":"02", "name": "Nick", "lastname": "Thameson" } ] } Let's study the same code in XML <?xml version="1.0" encoding="UTF-8" ?> <root> <student> <id>01</id> <name>Tom</name> <lastname>Price</lastname> </student> <student> <id>02</id> <name>Nick</name> <lastname>Thameson</lastname> </student> </root> Advantages of using JSON Here are the important benefits/ pros of using JSON: - Provide support for all browsers - Easy to read and write - Straightforward syntax - You can natively parse in JavaScript using eval() function - Easy to create and manipulate - Supported by all major JavaScript frameworks - Supported by most backend technologies - JSON is recognized natively by JavaScript - It allows you to transmit and serialize structured data using a network connection. - You can use it with modern programming languages. - JSON is text which can be converted to any object of JavaScript into JSON and send this JSON to the server. Advantages of using XML Here are significant benefits/cons of using XML: - Makes documents transportable across systems and applications. With the help of XML, you can exchange data quickly between different platforms. - XML separates the data from HTML - XML simplifies platform change process Disadvantages of using JSON Here are cons/ drawback of using JSON: - No namespace support, hence poor extensibility - Limited development tools support - It offers support for formal grammar definition Disadvantages of using XML Here, are cons/ drawbacks of using XML: - XML requires a processing application - The XML syntax is very similar to other alternatives 'text-based' data transmission formats which is sometimes confusing - No intrinsic data type support - The XML syntax is redundant - Does n't allow the user to create his tags. KEY DIFFERENCE - JSON object has a type whereas XML data is typeless. - JSON does not provide namespace support while XML provides namespaces support. - JSON has no display capabilities whereas XML offers the capability to display data. - JSON is less secured whereas XML is more secure compared to JSON. - JSON supports only UTF-8 encoding whereas XML supports various encoding formats.
http://www.test3.guru99.com/json-vs-xml-difference.html
CC-MAIN-2020-10
en
refinedweb
I can do most things with my laptop, why can’t I make it cook? I’ve attached sensors, motors, NMR, cameras, etc. why not a heater and a thermometer? Sous Vide, or immersion cooking produces great tasting food. It is also an interesting representative problem of closed loop temperature control. With numerous people trying to provide an open source or DIY solution for Sous Vide cooking, I figured I’d add to that body of work in my own way. I’d write a software simulation of an immersion cooker to help people develop better hardware. I don’t know what the time constants in the simulation might be, but you can set them, and learn PID control. I also might hook up a relay, a heater, and a thermo couple and actually do this, but mostly I figured I’d help others study basic PID control. PID control loops (Proportional, Integral, Derivative) are a basic form of closed loop control, meaning the error is fed back into the control loop and used to change how control is performed as opposed to open loop. In a PID loop, an error is calculated, and then three different terms are computed and summed to form a correction. The incoming error term may have known properties based on how the measurement was made or changes to the goal value, so sometimes pre-filtering is performed (FIR/IIR). The integral sometimes isn’t truly just adding up the errors but sometimes decays so it has a limited memory of past errors. The derivative term may be subtracting two noisy errors that are similar producing a large value that is mostly noise so sometimes the derivative value is filtered (FIR /IIR, outlier rejection, boxcar, etc.). Finally each of these terms may have a dead band, and the sum may too. A dead band means don’t adjust unless over +X threshold amount, or under –Y threshold amount. Further the control applied may also be filtered to remove control changes that would cause a harmonic pattern, such as oscillation. So with this in mind, a generic PID may have a large set of features (pre/post filters), dead bands, and pre /post conditioning filters, and a variety of integration like effects. In general, a PID filter can be thought of as a corrective loop, where you move closer to the goal proportional to the error perceived. In many cases a proportional gain is what’s needed. For example for servo motion where there is a large motor and a tiny weight, driving to position requires only a P term. If there is consistent pressure (weight) or loss (heat loss) proportional control may not be enough. The integral term adds up tiny errors, and corrects better than proportional when the error is small. The integral term may accelerate too much resulting in over correction and even oscillation, so the derivative term is then used like a break. There are several tuning techniques, the author prefers starting with Ziegler-Nichols. I set the proportional gain till it over controls resulting in oscillations I can measure. When the system is mostly linear, the oscillations will have fixed period. This proportional gain is the “ultimate” gain or Ku, and the period is Pu. From there: P only: P = 0.5*Ku PI only: P = 0.45*Ku, I = 0.54*Ku / Pu. PID: P = 0.6*Ku, I= 1.2*Ku/Pu, D = 0.075*Ku*Pu In most cases, this is close to good enough. In the case of a Sous Vide, overshoot (going too hot while attaining a temperature) is bad, and consistent undershoot is bad. It can take a known time to reach stability before we put the food in is ok. For this reason, backing off on I and D terms, and using a decaying integral are likely good ideas. There are many systems to compute “ideal” coefficients but they don’t arrive at the same answer, go figure. Ideal is based on your particular requirements, and simulation helps a lot. Integrating with the real system then tells you how much more work to do, it usually isn’t a check the box test unless you’re lucky or the control system was simulated perfectly, or it is relatively linear and or simple. Most problems I’ve used PID on started simple, then weight was cut, materials changed, etc. and at some point simulation became just a starting point. At some point PID stops working and a lookup table based on measurements for coefficients to interpolate from is required. That is the essence of an autopilot because that kind of algorithm can model non-linear processes that appear linear or quadratic over short numerical regions (perhaps a different article). My point is, PID usually can solve the problem if the problem is reasonable but there are other solutions that sometimes work better. If your problem seems too hard to solve because small changes in gains cause the system to behave strangely then likely there is a non-linear response to the correction and PID might not be the choice to use. For Sous Vide cooking, temperature isn’t linear in time (exponential) but it is a smooth effect, and over any short region of change if we treat it as linear, the error term isn’t huge or suddenly change sign. For those reasons PID will work, perhaps not ideally converging with zero overshoot but it works very well and is probably the most common way to handle heating control. The key thing is to bring food up to a target temperature ideal for that food, and hold it there for a period of time. For example to cook an egg so only certain proteins coagulate (perfect poached egg), or to make the perfect steak, or carrots, the food is cooked longer than needed at no more than the set temperature, and as close to the set temperature as possible. Above certain temperatures the heat only breaks down flavor and makes the food tougher for certain foods. Temperature: Min, Max. 1000F to 2200F (roughly 500C to 1000C) Accuracy 0.250F (threshold) 0.10F (goal) Stability Gaussian 1-sigma of accuracy to time constant of heat transfer. Overshoot Initial overshoot of water ok if food isn’t present. Need to know when food started to be at temperature to know how long to cook it. Physical Control: Either proportionally or pulse an AC heater. Pulsing with a solid state relay is probably cheapest, but the frequency of the pulses need to be realistic because they affect how well the heating element will accurately respond and the life of the heating element. Model Assumptions Slow -> Container/Air -> Container/Water -> Medium ->Bag/Food -> Others -> Fast Mathematics of Physics and Modern Engineering, McGraw hill publishing, 1966, by I.S. Sokolnikoff & R.M. Redheffer, p. 432. Indroduction to Applied Mathematics, Wellesly-Cambridge Press, 1986, by Gilbert Strang, p. 461, 536. The model can be describes as a set of heat transfers. Yes, the specific heat and energy could be modeled with a more classical thermodynamic model, but in the end the math reduces more or less to time constants and transfer from one singular thermal body to another. The heat escape path is: Water-> Container -> Air && Water->Air. The add path is: Heater-> Water The measurement path is: Water-> Sensor We can turn what could be a wacky looking differential equation into a set of difference equations . By making the simulation time arbitrarily small, the error is arbitrarily small, and computers are good at rapid repeated calculations. For this reason, FEA modeling or differential equations are just not needed (although a lot of fun, but not in this article). In short we can simulate multiple simultaneous heat transfers as heat adding or being lost across a given boundary, rewriting as a difference: Because the volume of water is varying, assuming a pot or rice cooker we can adjust the time constant based on the surface area to volume ratio and simulate those affects. We can also run the adjustment times at a different rate to the simulation time delta, and set weather control is proportional or pulsed and compare the results. Water volume effect: private void simulateTime(double dt, double percentHeat, ref double water, ref double container, <br /> ref double sensor, ref double food, Parameters p, PID pid, bool usePID) { // Limit to the max / min. percentHeat = percentHeat < 0 ? 0 : percentHeat > 100 ? 100 : percentHeat; // If binary, heater is on/off in which case threshold. if (p.Binary) { percentHeat = percentHeat > 25 ? 100 : 0; } double heaterAdded = percentHeat * p.HeaterWatts * heaterSpecificHeatFactor; // surface area is constant but thermal mass of the water isn't. double effectiveWaterContainer = p.TWC * p.WaterRadius * p.WaterDepth / (p.WaterRadius + 2 * p.WaterDepth); double effectiveWaterAir = p.TWA * p.WaterDepth; // Heat added from heater to water. double heatAddedToWater = (heaterAdded - water) * (1 – (1/ p.TWH) * System.Math.Exp(-dt / p.TWH)); heatAddedToWater = heatAddedToWater < 0 ? 0 : heatAddedToWater; // Heat lost to the sensor. double heatLostToSensor = (water - sensor) * (1-System.Math.Exp(-dt / p.TWS))/p.TWS; // Heat lost to the food. double heatToFood = (water - food) * (1 - System.Math.Exp(-dt / p.TWF)) / p.TWF; // Heat from water to container. double heatLostWaterContainer = (water - container) * (1 - System.Math.Exp(-dt / effectiveWaterContainer))/effectiveWaterContainer; // Heat water to air. double heatLostWaterAir = (water - p.Air) * (1 - System.Math.Exp(-dt / effectiveWaterAir))/effectiveWaterAir; // Heat Container to air. double heatLostContainerAir = (container - p.Air) * (1 - System.Math.Exp(-dt / p.TCA))/p.TCA; // Update. food = food + heatToFood; water = water + heatAddedToWater - heatLostWaterContainer – heatLostWaterAir - heatLostToSensor - heatToFood; sensor = sensor + heatLostToSensor; container = container + heatLostWaterContainer - heatLostContainerAir; } public class PID { double m_p = 0, m_i = 0, m_d = 0, m_g = 0, integral = 0, last = 0; bool first = true; public PID(double setPoint, double proportional, double integral, double derivative) { m_g = setPoint; m_p = proportional; m_i = integral; m_d = derivative; } public double Update(double sensor, double dt) { double error = m_g - sensor; integral = (0.9 * integral) + error; double derivative = first ? 0 : (sensor - last) / dt; last = sensor; first = false; return (m_p * error) + (m_i * integral) + (m_d * derivative); } } The following time constants in seconds were guessed at to produce are somewhat realistic simulation: Water-heater 600 Water-sensor 2 Water-container 1000 Container-air 1000 Water-food 20 Water-air 1000 Sure the time constants are made up. After gathering actual data, make educated guesses. For anything I’ve worked on (missiles, GPS motion, thermal control) it was always an estimate, a guess, the reality is even if you measure the exact value, when you get to production there is enough variation for the exact value not to matter. Simulation tied to real data ensures that if the production values vary, the gains provide for robust control anyways. One of the best ways to do this for real would be to turn the heater on at various fixed values and measure the steady state water and container temperatures, and look up the time constant for the thermocouple used, then make the model look like the curves seen for real, double checking with heat loss vs. heat added. Pure math often won’t do all the work unless your system is simple and well known (density of the plastic container, exact CAD model for shape, calibrated heater wattage, losses in switching on/off, etc.). It took a proportional gain of over 100 to start to see an initial overshoot, the inherent decay meant there is no real gain that will cause an oscillation. Using Ziegler-Nichols, and looking at the overshoot, the time to decay from an overshoot is in tens of seconds so IF Pu did exist it would be on the order of 5-20 seconds, and our gain ultimate (Ku) can be large (10 – 50). This provides ball park initial values to setup and study (Ku = 30, Pu = 10). P = 0.6*Ku, I= 1.2*Ku/Pu, D = 0.075*Ku*Pu P = 18, I = 3.6, D = 22.5 This resulted in the following graph: The problem is it was 0.2 degrees too cool for the second half of cook time, clearly we need more integral. Doubling the gains brought the system closer (62.69). Note what happened when I used non-proportional pulsed control: Because the food provide a sort of temperature buffer, it takes time to transfer water to food through the bag the food is in, a little water overshoot is desirable if accuracy is improved. For this reason we can play with the numbers and improve performance. If this was more than an intellectual exercise, the next step would be to randomly generate start water temperatures (which also has the affect of spontaneously adding food a different temperature to the bath after reaching temperature) as well as varying water-food time constant (simulate different volumes of foods, marinades in the food bag). Then show that for a given set of gains there is little overshoot, and very stable performance. For the most part, most gain choices will be stable enough, but some will get the water to the desired temperature faster than others, and in a commercial kitchen producing product on a time schedule that would mater. For teaching your lap top how to make tuna confit, or perfect harcot verts, a little simulation and some trial and error should be sufficient. The key result from the simulation suggests using a PID with a simple relay would work very well, and thus a rice cooker plugged into a relay controlled by a PID should (in theory) be.
https://www.codeproject.com/Articles/329012/SousVidePID?fid=1685414&df=90&mpp=10&sort=Position&spc=None&tid=4155277&noise=1&prof=True&view=Expanded
CC-MAIN-2017-39
en
refinedweb
[ ] ASF GitHub Bot commented on CXF-7462: ------------------------------------- Github user reta commented on a diff in the pull request: --- Diff: rt/rs/sse/src/main/java/org/apache/cxf/jaxrs/sse/OutboundSseEventImpl.java --- @@ -24,24 +24,24 @@ import javax.ws.rs.core.MediaType; import javax.ws.rs.sse.OutboundSseEvent; -public class OutboundSseEventImpl implements OutboundSseEvent { - private String id; - private String name; - private String comment; - private long reconnectDelay = -1; - private Class<?> type; - private Type genericType; - private MediaType mediaType; - private Object data; +public final class OutboundSseEventImpl implements OutboundSseEvent { + private final String id; + private final String name; + private final String comment; + private final long reconnectDelay; + private final Class<?> type; + private final Type genericType; + private final MediaType mediaType; + private final Object data; public static class BuilderImpl implements Builder { private String id; private String name; private String comment; private long reconnectDelay = -1; - private Class<?> type; + private Class<?> type = String.class; --- End diff -- I would not make any assumptions about default type here, better to be explicit. > OutboundSseEventImpl could use some minor tweaks > ------------------------------------------------ > > Key: CXF-7462 > URL: > Project: CXF > Issue Type: Improvement > Components: JAX-RS > Affects Versions: 3.2.0 > Reporter: Andy McCright > Priority: Minor > Fix For: 3.2.0 > > > The OutboundSseEventImpl class could use some minor tweaks, including: > 1) Make the fields final to reflect that the event is immutable. > 2) Use defaults for the data type (String.class) and media type (SERVER_SENT_EVENT_TYPE). > 3) Restrict the constructor's visibility. > I also plan to add some tests for these changes. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
http://mail-archives.apache.org/mod_mbox/cxf-issues/201708.mbox/%[email protected]%3E
CC-MAIN-2017-39
en
refinedweb
Clean Code author Bob Martin helped create the software development collaboration framework called FitNesse, which "enables customers, testers, and programmers to learn what their software should do, and to automatically compare that to what it actually does do." The open source framework acts as an acceptance testing tool, a wiki, and a web server with no configuration setup. Let's start by installing the plugin: $ grails install-plugin fitnesseOnce you start up the Grails app, a FitNesse server will also start internally and connect the Wiki with the app. FitNesse Fixtures, which should not be confused with Grails Fixtures, provide a bridge between the FitNesse Wiki and the System Under Test (SUT). FitNesse Fixtures more closely resemble Grails artifacts with similar features like dependency injection and hot reloading. A FitNesse directory in the grails-app directory should be used for creating fixtures. Here's a Wiki test example. It consists of a fixture name (loan calculator), headers (income, debt), and the body of the table: |loan calculator | |income|debts|category?| |10000 |500 |A | |5000 |10000|C | and this is the Fixture bridge: class LoanCalculator { // Input parameters int income int debts // Output parameter String category // Dependency Injected def loanCalculationService void execute() { this.category = loanCalculationService.calculateLoanCategory(income, debts) } } The Wiki contains the tests (in table form). The plugin currently supports the following tests: Decision Table - Supplies the inputs and outputs for decisions. This is similar to the Fit Column Fixture. Query Table - Supplies the expected results of a query. This is similar to the Fit Row Fixture. Script Table - A series of actions and checks. Similar to Do Fixture. Import - Add a path to the fixture search path. Six other tables may work, but haven't been tested yet. Other features: - Grrovy and FitNesse support strings as methods - Both support default arguments - Automatic reloading of Fixtures in Grails Download the plugin here. {{ parent.title || parent.header.title}} {{ parent.tldr }} {{ parent.linkDescription }}{{ parent.urlSource.name }}
https://dzone.com/articles/grails-fitnesse-plugin
CC-MAIN-2017-39
en
refinedweb
Veeam® is happy to provide the Microsoft community with a study guide prepared by MVP and MCT, Orin Thomas. This guide will take you through each of the exam objectives, helping you to prepare for and pass the examination. public class TestDLL { private string myString = "EMPTY"; public Tenging() { } public string MyProperty { get { return myString; } set { myString = MyProperty; } } } Add your voice to the tech community where 5M+ people just like you are talking about what matters. using System.Runtime.InteropServices; [InterfaceType(ComInterfaceType.InterfaceIsIDispatch)] public interface ITestDLL { [DispId(1)] string MyProperty {get; set; } } [ClassInterface(ClassInterfaceType.None)] [ProgId("<Namespace>.TestDLL")] public class TestDLL : ITestDLL { etc. If you are experiencing a similar issue, please ask a related question Join the community of 500,000 technology professionals and ask your questions.
https://www.experts-exchange.com/questions/22987734/How-to-create-a-DLL-using-Visual-Studio-C-that-registers-with-regsvr32-NOT-regasm.html
CC-MAIN-2017-39
en
refinedweb
The native Bing Maps Windows Store control has two types of shapes: polygons and polylines. These shapes are great for representing areas and paths on the map. Often it is useful to be able to associate some information or metadata with these shapes. In past versions of Bing Maps we could easily store this information in the Tag property of the shape. This makes it easy to retrieve this data when a shape is clicked or tapped. Unfortunately,the MapPolygon and MapPolyline shapes in the native Bing Maps Windows Store control do not have a Tag property. Recently on the Bing Maps forums, one of our Bing Maps engineers pointed out that the MapPolygon and MapPolyline classes are both DependancyObjects. This means that we could create a DependencyProperty which adds a “Tag” property to these shapes. In this blog post we are going to see just how easy this is to do. To get started, open up Visual Studio and create a new project in either C# or Visual Basic. Select the Blank App template, call the application ClickableShapes, and press OK. Add a reference to the Bing Maps SDK. To do this, right click on the References folder and press Add Reference. Select Windows -> Extensions, and then select Bing Maps for C#, C++ and Visual Basic and Microsoft Visual C++ Runtime. If you do not see this option ensure that you have installed the Bing Maps SDK for Windows Store apps. If you notice that there is a little yellow indicator on the references that you just added. The reason for this is that the C++ runtime package requires you to set the Active solution platform in Visual Studio to one of the following options; ARM, x86 or x64. To do this, right click on the Solution folder and select Properties. Then go to Configuration Properties -> Configuration. Find your project and under the Platform column set the target platform to x86. Press OK and the yellow indicator should disappear from our references. Next open the MainPage.xaml file and update it with the following XAML. This will add a map to the app. Make sure to set the Credentials attribute of the Map element to a valid Bing Maps key. <Page x:Class="ClickableShapes.MainPage" xmlns="" xmlns:x="" xmlns:local="using:ClickableShapes" xmlns:d="" xmlns:mc="" xmlns:m="using:Bing.Maps" mc: <Grid Background="{ThemeResource ApplicationPageBackgroundThemeBrush}"> <m:Map </Grid> </Page> Both the MapPolygon and MapPolyline classes derive from a common class called MapShape. Rather than creating two DependencyProperty's, we can instead create a single one on the MapShape class. Open the MainPage.xamls.cs or MainPage.xaml.vb file and update it with the following code. This will create a DependencyProperty on the MapShape class called “Tag”. C# using Bing.Maps; using System; using Windows.UI; using Windows.UI.Popups; using Windows.UI.Xaml; using Windows.UI.Xaml.Controls; using Windows.UI.Xaml.Input; namespace ClickableShapes { public sealed partial class MainPage : Page { private MapShapeLayer shapeLayer; public static readonly DependencyProperty TagProp = DependencyProperty.Register("Tag", typeof(object), typeof(MapShape),new PropertyMetadata(null)); public MainPage() { this.InitializeComponent(); } } } Visual Basic Imports Bing.Maps Imports Windows.UI Imports Windows.UI.Popups Public NotInheritable Class MainPage Inherits Page Private shapeLayer As MapShapeLayer Public Shared ReadOnly TagProp As DependencyProperty = DependencyProperty.Register("Tag", GetType(Object), GetType(MapShape), New PropertyMetadata(Nothing)) Public Sub New() Me.InitializeComponent() End Sub End Class Next we will add an event handler for when the map is loaded in the constructor of the app. In this event handler we will add a MapShapeLayer to the map for loading our shapes to. We will then generate a test polygon and polyline to the map. We will add some string as metadata to the Tag property by using the SetValue method on the shape. Update the constructor and add the MyMapLoaded event handler to the MainPage.xamls.cs or MainPage.xaml.vb file using the following code. C# public MainPage() { this.InitializeComponent(); MyMap.Loaded += MyMapLoaded; } private void MyMapLoaded(object sender, RoutedEventArgs e) { //Add a shape layer to the map shapeLayer = new MapShapeLayer(); MyMap.ShapeLayers.Add(shapeLayer); //Create mock data points var locs = new LocationCollection(); locs.Add(new Location(0, 0)); locs.Add(new Location(10, 10)); locs.Add(new Location(10, 0)); //Create test polygon var polygon = new MapPolygon(); polygon.Locations = locs; polygon.FillColor = Colors.Red; //Set the tag property value polygon.SetValue(TagProp, "I'm a polygon"); //Add a tapped event polygon.Tapped += ShapeTapped; //Add the shape to the map shapeLayer.Shapes.Add(polygon); var locs2 = new LocationCollection(); locs2.Add(new Location(20, 20)); locs2.Add(new Location(40, 40)); locs2.Add(new Location(50, 20)); //Create test polyline var polyline = new MapPolyline(); polyline.Locations = locs2; polyline.Width = 5; polyline.Color = Colors.Blue; //Set the tag property value polyline.SetValue(TagProp, "I'm a polyline"); //Add a tapped event polyline.Tapped += ShapeTapped; //Add the shape to the map shapeLayer.Shapes.Add(polyline); } Visual Basic Private Sub MyMapLoaded(sender As Object, e As RoutedEventArgs) 'Add a shape layer to the map shapeLayer = New MapShapeLayer() MyMap.ShapeLayers.Add(shapeLayer) 'Create mock data points Dim locs = New LocationCollection() locs.Add(New Location(0, 0)) locs.Add(New Location(10, 10)) locs.Add(New Location(10, 0)) 'Create test polygon Dim polygon = New MapPolygon() polygon.Locations = locs polygon.FillColor = Colors.Red 'Set the tag property value polygon.SetValue(TagProp, "I'm a polygon") 'Add a tapped event AddHandler polygon.Tapped, AddressOf ShapeTapped 'Add the shape to the map shapeLayer.Shapes.Add(polygon) Dim locs2 = New LocationCollection() locs2.Add(New Location(20, 20)) locs2.Add(New Location(40, 40)) locs2.Add(New Location(50, 20)) 'Create test polyline Dim polyline = New MapPolyline() polyline.Locations = locs2 polyline.Width = 5 polyline.Color = Colors.Blue 'Set the tag property value polyline.SetValue(TagProp, "I'm a polyline") 'Add a tapped event AddHandler polyline.Tapped, AddressOf ShapeTapped 'Add the shape to the map shapeLayer.Shapes.Add(polyline) End Sub Finally we will need to create the event handler for when the shapes are tapped. When a shape is tapped we will be able to use the GetView method on the shape to retrieve the Tag property. We will then take this value and display it to the user using a MessageDialog. Add the following event handler to the MainPage.xamls.cs or MainPage.xaml.vb file. C# private async void ShapeTapped(object sender, TappedRoutedEventArgs e) { if (sender is MapShape) { var poly = sender as MapShape; var tag = poly.GetValue(TagProp); if (tag != null && tag is string) { var msg = new MessageDialog(tag as string); await msg.ShowAsync(); } } } Visual Basic Private Async Sub ShapeTapped(sender As Object, e As TappedRoutedEventArgs) If TypeOf sender Is MapShape Then Dim poly = TryCast(sender, MapShape) Dim tag = poly.GetValue(TagProp) If tag IsNot Nothing AndAlso TypeOf tag Is String Then Dim msg = New MessageDialog(TryCast(tag, String)) Await msg.ShowAsync() End If End If End Sub The application is now complete. Deploy the app by pressing F5 or clicking on the Debug button. When the app is running tap or click on the shapes on the map. When the event is fired a message will be displayed that contains the metadata stored in the Tag property of the shape. Note in this example I simply stored a string in the Tag property, but you can store any object you want in it. You can download the full source code for this code sample from the Visual Studio code gallery here. If you are looking for some other great resources on Bing Maps for Windows Store apps, look through the Bing Developer Center blog or check out all the Bing Maps MSDN code samples. - Ricky Brundritt, EMEA Bing Maps TSP FinalApp-ClickableShapes-Thumbnail.png
https://blogs.msdn.microsoft.com/bingdevcenter/2014/01/23/make-clickable-shapes-in-the-native-bing-maps-control/
CC-MAIN-2017-39
en
refinedweb
Hi, im creating some program, which reads number from stdin, and save them into variable X, but how can i make condition that program will print error on the screen if X is not number #include <stdio.h> #include <stdlib.h> #include <ctype.h> int main() { float x; while ( (scanf("%e",&x)) != EOF) if ( x is not number) { printf("you didnt entered number"); break; } else printf("%.10e\n", x); return 0; } can i make it somehow ? + i would like to work with data type float or doble, not with string or char.
https://www.daniweb.com/programming/software-development/threads/233198/standard-input
CC-MAIN-2017-39
en
refinedweb
PI Cloud Connect Overview - Magnus Simpson - 1 years ago - Views: Transcription 1 PI Cloud Connect Overview Version 1.0.8 2 Content Product Overview... 3 Sharing data with other corporations... 3 Sharing data within your company... 4 Architecture Overview... 5 PI Cloud Connect and PI Cloud Services... 5 How does PI Cloud Connect work?... 5 Using the Customer Portal... 6 PI Cloud Services... 6 PI Cloud Connect... 7 Supported AF Objects Performance and Throughput Best Practices Security Overview PI Cloud Connect Windows Azure Components On-Prem Components Deployment Overall data flow Troubleshooting Signing in the Customer Portal Node deployment Accessing PI AF Accessing local log files... 31 3 Product Overview PI Cloud Connect is the first of a set services delivered by OSIsoft that fall under the PI Cloud Services umbrella. PI Cloud Connect is a Cloud based Software as a Service (SaaS) offering managed by OSIsoft that allows you to share data between PI Systems. Cloud based because the solution leverages components running in Windows Azure, the public Cloud offering from Microsoft Managed by OSIsoft because we support, maintain and upgrade this service and all its components. PI Cloud Connect offers many advantages: Solution maintained and managed by OSIsoft with minimal On-Prem 1 footprint Scalable and reliable solution based on Windows Azure Configuration and monitoring accessible through a Web-based Customer Portal that only requires a modern Web browser Secure data sharing without requiring Virtual Private Networks (VPNs) Seamless and simultaneous transfer of real-time and meta-data from your PI AF structures, this allows asset models in the PI System to be transferred Publish/subscribe architecture that supports one-to-many, many-to-one and many-to-many data exchanges, which advantageously replaces point-to-point connections Sharing data with other corporations In many situations, all partners in a business collaboration such as joint ventures, contract manufacturers, expert service providers, and operations and maintenance companies need access to production data. When all partners have access to the real-time data, each of them can plan ahead for equipment maintenance or for scheduling the delivering of critical components. PI Cloud Connect provides all parties a secure way of sharing data between their respective PI Systems without having to deploy point-to-point VPNs in multiple scenarios: In a joint venture even though only one company usually operates the assets all partners need access to the production data To deliver the best service possible, partners and vendors who supply raw materials, equipment or expertise need access to the real-time data collected at the operations sites Contract manufacturers, who manufacture products on behalf of other companies need to expose the operation and quality data to those companies 1 On-Prem refers to components or deployments in situ (on site) as opposed to remote components or deployment such as in the Cloud. 4 Operations and Maintenance companies (O&M), Service Providers (SP), and Performance Analytic Vendors (PAV) also need access to the real-time data on site to provide expert knowledge about the efficiency and health of equipment such as pumps, compressors, generators or other components or additives that are critical to a certain process Sharing data within your company If you have a central PI System installed at your head office and other PI System instances deployed at operations' sites, you probably want to have a centralized view of your operations and make site-to-site comparisons. With PI Cloud Connect, sites that monitor assets and collect real-time data can publish their data so that your head office can subscribe to it. 5 Architecture Overview PI Cloud Connect and PI Cloud Services PI Cloud Services is the overall umbrella under which all OSIsoft Cloud based services are made available to customers. To simplify manageability, all the services are managed in one account. Besides PI Cloud Connect, the screenshot below shows others service that may be available in the future. How does PI Cloud Connect work? PI Cloud Connect is a Windows Azure hosted application that relies on a publish/subscribe mechanism to manage the data flow within and between accounts. Once they have signed-up for the PI Cloud Connect service, users can sign-in to the PI Cloud Connect Customer Portal to install the components required to securely and reliably connect their PI Systems and share data. Customers use the PI Cloud Connect Customer Portal to manage publications, subscriptions, users and nodes. 6 Data Sharing Workflow On one hand, a publisher selects a set of data to include in a publication. A publication is configured by selecting a PI AF Element from any PI AF server that is accessible from a registered PI Connect node. A PI Connect node is a computer where the PI Connect components have been installed. The deployment of the PI Connect components is performed via the PI Cloud Connect Customer Portal. Once a publication is configured, the publisher grants access to one (or more) PI Cloud Connect users to that publication; and that user can then subscribe to it. To grant access to a publication, the publisher notifies via one (or several) user(s) that they have access to the publication. The publisher needs to have an a priori knowledge of the subscriber(s) contact information 2. Prior to using PI Cloud Connect for trans-enterprise data exchange, it is highly recommended that publishers and subscribers establish a business relationship to define the scope of the data exchange and share the contact information. On the other hand, when users receive a notification (via or directly in the Customer Portal) they can create a subscription associated with that publication. The association between a publication and a subscription is a contract between the publisher and subscriber that specifies what data is being shared. When the configuration of the publication and the associated subscriptions is complete on both sides and the publication and the subscription are started, the exchange of data commences and continues until one of the parties decides to stop it. Using the Customer Portal PI Cloud Services After signing in your account, you access the landing page of the Customer Portal that presents all the services available. After selecting the PI Cloud Connect tile, you enter the PI Cloud Connect Portal. 2 For obvious privacy and accounts data isolation reasons, the Customer Portal does not expose information from one account to another account unless specified. 7 PI Cloud Connect The user interface provides easy-to-use Web pages for managing your publications, subscriptions and nodes. In the following sections, we explain some of the tasks you can perform from these pages. Activities Summary The main page is the Activities Summary page that presents an overview of the publications, subscriptions and systems. You access each of these sections by clicking a tile or the corresponding option on the left-hand menu. Publications The Publications page lists all your publications: the one created in your account by any of the users of that account as well as those you (or others users in your account) have been granted access to from others accounts. Note that 8 publications from other accounts can only be seen by the user(s) who have been given that access to these publications and not all users for that account. Granting access to a publication is a user based concept and not account based concept. Therefore, different users from the same accounts might see different publications listed in the Publications page. From this page, you can: create new publications take specific actions when a publication is selected o manage a publication (stop/start/delete) o view details/subscribers o subscribe to a publication When you create a new publication, a wizard guides you through the steps required to configure that publication. Note that prior to creating a publication, a PI Connect node must be configured from the System page so that a data source (one or more PI AF Servers) is available. In the first step, the Wizard shows a list of the available AFserver.AFDatabase namespaces for each PI Connect node. 9 The list of namespaces can be sorted by Namespace, Node or Node User Account: The Node User Account is the Windows account provided during the deployment of the PI Connect node for the Windows service which runs on the node under that account. More details are provided in the On-Prem deployemnet section of this document The Node colum list the name of each deployed PI Cloud Connect node The Namespace column lists all AFServer.AFDatabase namespaces accessible from any of the Node and Node User Account After selecting one of the available data source, you can move to the next step. 10 In the Publication Scope step, two options are available in the dropdown menu: Select AF Elements Select AF Templates The first option allows you to select an AF Element that is the target for the publications. 11 In that case, the selected AF Element along with all its children AF Elements and associated real time data (AF Attributes mapped to PI Points) will constitute the publication scope. You should ensure the AF Elements in your publication contain only supported AF Objects, as described in the following section. Note: If some AF Elements targeted by the publications are derived from AF Elements Templates, these AF templates will also be part of the publication. The next step in the wizard allows you to retrieve historical data that is available prior to the time the publication is started. The value provided has to be an integer between 0 and 30 (both included).this setting only applies to the real time data associated with AF PI Points 3 Data References. The second option allows you to select AF Element Templates only. In that case, the scope of the publication is restricted to all the AF Element Templates of the AFserver.AFDatabase namespace selected during the previous step. 3 Only the most recent version of the AF Elements at the time of the publication start are included in the publication scope. The history recovery doesn t apply to AF Objects versions. 12 The Publishing Options step when choosing the AF Templates options has no configuration since there is no real time data associated with AF Templates. For either options (AF Elements or AF Elements Templates), the next step is to define the publication name and its description (optional). From the main Publication page, you can also select an existing publication and look at more detailed information about its status and the users who have been notified about and granted access to the publication. 13 14 From that page you can also grant access others users from others accounts to your publication. Note that for the publications that you ve been granted access to by others accounts, the only possible option is to subscribe. Subscriptions The Subscription page lists your existing subscriptions and allow you to take specific actions when a subscription is selected. This page is similar to the Publication page but cannot be used to create new subscriptions. Subscriptions are created from the Publication page by subscribing to a publication. 15 When you create a new subscription, the same wizard used to create a publication guides you through the steps required to configure that subscription, except that there are no subscribing options step in the wizard. Note also that before creating a subscription, a PI Connect node must be configured from the System page so that a Destination System (one or more PI AF Servers) is available. It is recommended that each subscription targets a dedicated PI AF Database to avoid potential conflicts with multiple subscriptions targeting the same AF Database. Also, keep in mind that at least 1 element needs to exist in the PI AF Database before configuring a subscription into that PI AF Database. 16 User Accounts Users account are managed at the PI Cloud Services level and are shared across all services. At the moment, all users have the same role (administrator) in PI Cloud Connect. Therefore, no specific configuration is accessible at the service level. A redirection to the PI Cloud Services Launchpad is provided. From the User Accounts page in the PI Cloud Services Launchpad, you can view a list of existing users and activate new users. 17 New users are added to an account by providing their First Name, Last Name and address. Note that the address provided does not have to be a Window Live account. That address is first used to send an activation to the new user and for future communication. However, during the activation process the user will have to use a valid Window Live account or authenticate with Active Directory Federated Services (ADFS) to be authenticated and granted access to the Customer Portal. When new users are added to an account, they have 48 hours to activate their account. Until the account is activated, the user s status is in a pending state. It is possible to resend an activation to a pending user who has missed the 48 hours window for activation by selecting the Edit User menu. 18 System The System page has two sections: Nodes and Download. The Node section lists the different On-Prem nodes where PI Connect components have been deployed. The status icon indicates whether the PI Connect node as an active connection with the Cloud components (heartbeat). 19 The Download section is used for deploying new nodes. That section lists software pre-requisites and provides access to download the setup kit for deploying a new PI Connect node. More details about installing a new PI Connect node are provided in the Security Overview section of this document. 20 Supported AF Objects PI Cloud Connect currently supports the following AF Objects: AF Elements AF Element Templates AF Enumeration Sets AF Attributes configured with the following Data References: o None (static values) o PI Point o Formula data references which reference attributes that have been published AF Categories The only fully supported AF Objects are those listed above, here is a short list of the commonly used unsupported objects: Table data references will only transmit the configuration string, meaning the tables would have to be transferred manually via another method, i.e. XML import / export. AF Units of Measure Attributes which reference other attributes (using the Attribute format) PI Analyses or PI Analyses Templates PI Event Frames Custom Data References Custom AF Reference types PI Point Arrays PI Notifications AF File data types AF Transfers and Cases Support for other AF Data types and objects may be added in the future. Performance and Throughput PI Cloud Connect can sustain a data transfer rate of approximately 2,000 events/sec per node 4. When publishing or subscribing to data at a rate of 2,000 events/sec, 108 Kbytes/sec of network throughput will be utilized on a constant basis. As a comparison, average OSIsoft customers have data rates for their PI Interfaces of about 50 events/sec per 1,000 PI Points. Given an average customer, the bandwidth required per thousand (1,000) PI Points is approximately 2.7 Kbytes/sec. A subscriber will be able to create approximately 1,000 points in 1 hour on the initial startup. 4 If you going to be close to 2,000 events/sec on your publishing PI System, the MaxUpdateQueue tuning parameter on the PI Data Archive should be set to 240,000. 21 Best Practices This section is a quick overview of the best practices for using PI Cloud Connect. Each subscription should target its own PI AF Database. The hierarchy used for PI Cloud Connect should only contain supported AF Objects (see the above section on Supported AF Objects) Limit the total events per second transmitted through a PI Connect node to approximately 2,000 events / sec. Avoid potential circular publications and subscriptions. For example, in the scenario below you need at least 3 databases in order to publish AF Templates (1) from the Template AF Database, subscribing to the AF Templates into an AF Database (2) at the site, and then publishing the AF Elements from the site to the Corporate Asset Model AF Database (3). The AF Templates from Corporate located in the site AF Database (2) should not be modified. The AF Templates and AF Elements at the Corporate Asset Model AF Database (3) should not be modified either. Corporate AF Collective P S Template AF Database (1) Corporate Asset Model (3) PI Cloud Connect Palo Alto Mountain View (2) San Jose Cupertino Waverly Park San Francisco 22 Security Overview PI Cloud Connect deploys several levels of security to keep your information secure and still allow users access to the data they need: At the infrastructure level: PI Cloud Connect is managed by OSIsoft and our administrators takes care of provisioning the infrastructure required for onboarding new accounts, updating information for existing accounts as well as upgrading the different components when new features or updates are available. At the account level: an account represents a company, partner, or affiliate that has signed up for the PI Cloud Connect service. Each account has a unique access to the Customer Portal with a URL of the form:. Each account is fully isolated from other accounts. Users within an account do not know anything about other accounts or about other users belonging to other accounts. At the sign-in level: to access PI Cloud Connect features, all users must sign in to their secure Customer Portal and are authenticated by an Identity Provider of their choosing 5. At the user level: When publishing data, the publisher decides which user has access to subscribe to the publication. This is done on a per user basis, not a per account basis. Additionally, PI Cloud Connect is a reliable product designed to protect your information. The Web services used in Windows Azure as well as those exchanging information with On-Prem components are secured by the use of certificates or access tokens and the Customer Portal uses HTTPS to securely encrypt communication. HTTP Web sites send all communication in plain text, which anyone can read. But HTTPS works in conjunction with Secure Sockets Layer (SSL) to encrypt all communication. PI Cloud Connect Windows Azure Components PI Cloud Connect leverages several components in Windows Azure such as Web roles for the Customer Portal, worker roles for queuing and transferring data, Windows Azure Service Bus for establishing secure connection between the Cloud and your premise and security components such as Microsoft Azure Access Control Service (ACS) which is a federation provider in the Cloud. Internally, PI Cloud Connect uses Secure Sockets Layer (SSL) to secure all in-transit data. PI Cloud Connect authenticates calls between, for example, the Customer Portal (Web role) and Microsoft Azure ACS or from the Customer Portal to the worker roles. PI Cloud Connect also makes secure calls to your PI servers and PI AF servers by using claims-aware tokens. This allows the Windows Service that runs on your premise to map the claims-aware Security Token that it 5 In this initial release, PI Cloud Connect supports Windows Live ID (Microsoft Account) and integration with Active Directory Federated Services (ADFS) as valid Identity Providers (IP). 23 receives from PI Cloud Connect to a Windows Security Token on your premise. Then the call from PI Cloud Connect running in Windows Azure is forwarded to your PI AF server using that Windows Security Token to identify the user. Sign-in process When you first sign in to the Customer Portal, it establishes a trust with Microsoft Azure ACS. ACS acts as a federation provider in the Cloud and facilitates authentication between an application and one or more identity providers. Here ACS facilitates authentication between the Customer Portal and one or more identity providers. When a user signs in using the identity provider(s) that has been configured for her Account, the ACS issues a Security Token for that user. This Security Token is used to make secure web service calls to the PI Cloud Connect server. On-Prem Components Deployment Deploying a new PI Connect node is managed via the Customer Portal. After downloading the installation kit, you can either proceed with the installation from the computer used to access the Customer Portal or deploy the setup kit on a different computer. Either way, the computer targeted as a PI Connect node must have an outbound connection to the Internet. The setup kit needs elevated Administer privileges to install PI Connect. 24 These credentials are used to access the PI AF server(s) your data is read/write from/to. This account is also the account used to create and populate the PI Points associated with a subscription when that subscription is associated with a publication scoping real time data. Because the account is used for accessing the PI AF Server and PI Data Archive, this account will need read access when publishing, and write access when subscribing with PI Cloud Connect. This is the account under which the PI Connect Windows service is running. If you need to modify anything related to this account or the PI Connect service after installation, please contact our support team at Note: Changing the Windows service credentials via the Services management console will not work properly and make the PI Connect node dysfunctional. An uninstall and reinstall of PI Cloud Connect is required in order to change the service credentials. 25 Here is a summary of the attributes required for the Windows Service Account running the PI Connect service: Log on as a Service privileges Must have been used once to log on the computer Must have access to both PI AF and the default PI Server Data Archive associated with PI AF o To read the data targeted for a publication o To write the data targeted for a subscription When using a proxy server, that account should be able to communicate with the Internet via the proxy server. The next step requires you to specify a name (pre-populated with the machine name) and description (optional) for the PI Connect you are deploying. That name/description will be used in the Customer Portal. 26 The setup kit will further proceed with the installation PI Cloud Connect Overview 27 Before the installation process is completed, you are asked for another set of credentials that are used to establish a one-time connection between the local Windows Service and Windows Azure via the Azure Service Bus. The picture below shows the login screen provided by Microsoft (Windows Live ID) when it is used as the Identity Provider to authenticate with PI Cloud Connect. After installation is completed, the Windows Service that runs on your premise starts automatically and initiates an outbound connection from your premise to PI Cloud Connect running in the Cloud using the Service Bus Relay (which is 28 part of Windows Azure services). The newly configured node should appear in the System/Nodes page of the PI Cloud Connect Customer Portal. The use of certificates enables the Windows Service to be granted only least-privilege listen access to the Service Bus Relay. Similarly, PI Cloud Connect is granted permission to send to the Service Bus Relay, only. This means that PI Cloud Connect can connect between Windows Azure and your premise without you needing to open additional ports in your firewall. Also, data and other information cannot flow in an unintended direction. Overall data flow The diagram below shows the data flow between the Windows Services running On-Prem and the Windows Azure Components of PI Cloud Connect leveraging the Azure Service Bus Relay. Each account has its own dedicated Service Bus Endpoints for each of the PI Connect node deployed. Each account can deploy multiple nodes, each node being a publisher, a subscriber or both. 29 Troubleshooting 101 This section presents the most common issues customers are faced with when starting to use PI Cloud Connect. For more help and support, please contact us at Signing in the Customer Portal This error message is provided when an authentication against a Windows Live account fails. This might happen in different circumstances: You have not verified your address, please check your inbox for a signup verification . The Live ID account/password combination provided is invalid Live ID credentials were cached in your Browser and you didn t get explicitly presented with the Windows Live sign in page 30 Your Live ID credentials are valid but they are not associated with a user in PI Cloud Services o You are not yet a user in PI Cloud Services for the account you are trying to access o You used a different Live ID account when you activated your PI Cloud Services user account If you are still having issues, please contact support at Node deployment When deploying a new node, the setup kit might not be able to complete successfully. Please send us the error log at The Copy Errors button will copy the content of the error log to your clipboard to make it easy to paste it in your . 31 Accessing PI AF When creating or subscribing to a publication, the first step is to select a data source/destination from/to PI AF. This error message appears when it is not possible to reach out to your PI AF servers from the Customer Portal running in Azure. This may happen for several reasons: No PI connect nodes have been configured for your account The PI Connect nodes are not reachable (validate the node s status icon) o Communication between the Azure components and the On-Prem components is failing o The PI Connect Windows Service is down Connection between the PI Connect node and the PI AF server is failing The service account for the PI Connect service does not have access to your PI AF Server and PI AF Database. Accessing local log files The PI Connect Windows service logs information about its operation. These logs are located in the %AppData%/OSIsoft/logs folder for the user account under which the PI Connect Windows service is running. When suspecting a problem with PI Cloud Connect on a specific PI Connect node, please send us these logs files at PI Cloud Connect. Frequently Asked Questions PI Cloud Connect Frequently Asked Questions Version 1.0.5 Content FAQ...3 General questions... 3 Signing up... 4 Deployment... 5 Publishing... 5 Subscribing... 6 User accounts... 7 Security... 8 Pricing... PI Cloud Services --- PI Cloud Connect. Customer Onboarding Checklist PI Cloud Services/Connect Onboarding Checklist PI Cloud Services --- PI Cloud Connect Customer Onboarding Checklist Version 1.0.9 Content Introduction... 3 Business requirements... 3 Framing the context PI Cloud Connect. Customer Onboarding Checklist PI Cloud Connect Onboarding Checklist PI Cloud Connect Customer Onboarding Checklist Version 1.0.4 Content Introduction...3 Business requirements...3 Framing the context of the data exchange... 3 Onboarding HarePoint Workflow Extensions for Office 365. Quick Start Guide HarePoint Workflow Extensions for Office 365 Quick Start Guide Product version 0.91 November 09, 2015 ( This Page Intentionally Left Blank ) HarePoint.Com Table of Contents 2 Table of Contents Table of Owner of the content within this article is Written by Marc Grote Owner of the content within this article is Written by Marc Grote Microsoft Forefront TMG How to use SQL Server 2008 Express Reporting Services Abstract In this Configuration Guide. BES12 Cloud Configuration Guide BES12 Cloud Published: 2016-04-08 SWD-20160408113328879 Contents About this guide... 6 Getting started... 7 Configuring BES12 for the first time...7 Administrator permissions you need NETWRIX EVENT LOG MANAGER NETWRIX EVENT LOG MANAGER ADMINISTRATOR S GUIDE Product Version: 4.0 July/2012. Legal Notice The information in this publication is furnished for information use only, and does not constitute a commitment Configuring Single Sign-On from the VMware Identity Manager Service to Office 365 Configuring Single Sign-On from the VMware Identity Manager Service to Office 365 VMware Identity Manager JULY 2015 V1 Table of Contents Overview... 2 Passive and Active Authentication Profiles... 2 Adding System Administration Training Guide. S100 Installation and Site Management System Administration Training Guide S100 Installation and Site Management Table of contents System Requirements for Acumatica ERP 4.2... 5 Learning Objects:... 5 Web Browser... 5 Server Software... 5 Introduction to the EIS Guide Introduction to the EIS Guide The AirWatch Enterprise Integration Service (EIS) provides organizations the ability to securely integrate with back-end enterprise systems from either the AirWatch SaaS environment ADFS Integration Guidelines ADFS Integration Guidelines Version 1.6 updated March 13 th 2014 Table of contents About This Guide 3 Requirements 3 Part 1 Configure Marcombox in the ADFS Environment 4 Part 2 Add Relying Party in AD Mobile Device Management Version 8. Last updated: 17-10-14 Mobile Device Management Version 8 Last updated: 17-10-14 Copyright 2013, 2X Ltd. E mail: [email protected] Information in this document is subject to change without notice. Companies names WHITE PAPER Citrix Secure Gateway Startup Guide WHITE PAPER Citrix Secure Gateway Startup Guide Contents Introduction... 2 What you will need... 2 Preparing the environment for Secure Gateway... 2 Installing a CA using Windows Server Installation Guide Version 3.0 SIMS Teacher app Installation Guide Version 3.0 Step-by-step guide needed to install and configure the SIMS Teacher app service for a school Version 3.0 Information use and disclaimer The information contained Installing and Configuring vcenter Support Assistant Installing and Configuring vcenter Support Assistant vcenter Support Assistant 5 Docufide Client Installation Guide for Windows Docufide Client Installation Guide for Windows This document describes the installation and operation of the Docufide Client application at the sending school installation site. The intended audience is SMART Vantage. Installation guide SMART Vantage Installation guide Product registration If you register your SMART product, we ll notify you of new features and software upgrades. Register online at smarttech.com/registration. Keep the Central Administration User Guide User Guide Contents 1. Introduction... 2 Licensing... 2 Overview... 2 2. Configuring... 3 3. Using... 4 Computers screen all computers view... 4 Computers screen single computer view... 5 All Jobs screen... Using RD Gateway with Azure Multifactor Authentication Using RD Gateway with Azure Multifactor Authentication We have a client that uses RD Gateway to allow users to access their RDS deployment from outside their corporate network. They have about 1000+, Virtual Data Centre. User Guide Virtual Data Centre User Guide 2 P age Table of Contents Getting Started with vcloud Director... 8 1. Understanding vcloud Director... 8 2. Log In to the Web Console... 9 3. Using vcloud Director... 10 CA Nimsoft Service Desk CA Nimsoft Service Desk Single Sign-On Configuration Guide 6.2.6 This Documentation, which includes embedded help systems and electronically distributed materials, (hereinafter referred to as the Documentation Installing and Configuring vcenter Support Assistant Installing and Configuring vcenter Support Assistant vcenter Support Assistant 6.0 This document supports the version of each product listed and supports all subsequent versions until the document is replaced FileMaker Server 15. Getting Started Guide FileMaker Server 15 Getting Started Guide 2007 2016 FileMaker, Inc. All Rights Reserved. FileMaker, Inc. 5201 Patrick Henry Drive Santa Clara, California 95054 FileMaker and FileMaker Go are trademarks Reconfiguring VMware vsphere Update Manager Reconfiguring VMware vsphere Update Manager vsphere Update Manager 6.0 This document supports the version of each product listed and supports all subsequent versions until the document is replaced by Developing Microsoft Azure Solutions 20532A; 5 days Lincoln Land Community College Capital City Training Center 130 West Mason Springfield, IL 62702 217-782-7436 Developing Microsoft Azure Solutions 20532A; 5 days Course Description NETWRIX EVENT LOG MANAGER NETWRIX EVENT LOG MANAGER QUICK-START GUIDE FOR THE ENTERPRISE EDITION Product Version: 4.0 July/2012. Legal Notice The information in this publication is furnished for information use only, and does Introduction to Mobile Access Gateway Installation Introduction to Mobile Access Gateway Installation This document describes the installation process for the Mobile Access Gateway (MAG), which is an enterprise integration component that provides a secure Configuration Guide BES12. Version 12.3 Configuration Guide BES12 Version 12.3 Published: 2016-01-19 SWD-20160119132230232 Contents About this guide... 7 Getting started... 8 Configuring BES12 for the first time...8 Configuration tasks for managing Remote Access Platform. Architecture and Security Overview Remote Access Platform Architecture and Security Overview NOTICE This document contains information about one or more ABB products and may include a description of or a reference to one or more standards Installation Guide Novell Storage Manager 4.1 for Active Directory September 10, 2015 Installation Guide Novell Storage Manager 4.1 for Active Directory September 10, 2015 Legal Notices Condrey Corporation makes no representations or warranties with respect VMware vcloud Air Networking Guide vcloud Air This document supports the version of each product listed and supports all subsequent versions until the document is replaced by a new edition. To check for more recent editions of this document, Bentley CONNECT Dynamic Rights Management Service v1.0 Implementation Guide Last Updated: March 20, 2013 Table of Contents Notices...5 Chapter 1: Introduction to Management Service...7 Chapter 2: Configuring Bentley Dynamic Rights...9 Adding Role Services Ekran System Help File Ekran System Help File Table of Contents About... 9 What s New... 10 System Requirements... 11 Updating Ekran to version 4.1... 13 Program Structure... 14 Getting Started... 15 Deployment Process... 15 BlackShield ID Agent for Remote Web Workplace Agent for Remote Web Workplace 2010 CRYPTOCard Corp. All rights reserved. http:// Copyright Copyright 2010, CRYPTOCard All Rights Reserved. No part of this publication may be reproduced, GFI MailArchiver for Exchange 4. Manual. By GFI Software GFI MailArchiver for Exchange 4 Manual By GFI Software Email: [email protected] Information in this document is subject to change without notice. Companies, names, and data used in examples MadCap Software. Upgrading Guide. Pulse MadCap Software Upgrading Guide Pulse Copyright 2014 MadCap Software. All rights reserved. Information in this document is subject to change without notice. The software described in this document is furnished, Secret Server Qualys Integration Guide Secret Server Qualys Integration Guide Table of Contents Secret Server and Qualys Cloud Platform... 2 Authenticated vs. Unauthenticated Scanning... 2 What are the Advantages?... 2 Integrating Secret Server Installation Guide for Pulse on Windows Server 2012 MadCap Software Installation Guide for Pulse on Windows Server 2012 Pulse Copyright 2014 MadCap Software. All rights reserved. Information in this document is subject to change without notice. The software Developing Microsoft Azure Solutions Course 20532A: Developing Microsoft Azure Solutions Page 1 of 7 Developing Microsoft Azure Solutions Course 20532A: 4 days; Instructor-Led Introduction This course is intended for students who have experience Sync Security and Privacy Brief Introduction Security and privacy are two of the leading issues for users when transferring important files. Keeping data on-premises makes business and IT leaders feel more secure, but comes with technical Defender 5.7 - Token Deployment System Quick Start Guide Defender 5.7 - Token Deployment System Quick Start Guide This guide describes how to install, configure and use the Defender Token Deployment System, based on default settings and how to self register WhatsUp Gold v16.2 Installation and Configuration Guide WhatsUp Gold v16.2 Installation and Configuration Guide Contents Installing and Configuring Ipswitch WhatsUp Gold v16.2 using WhatsUp Setup Installing WhatsUp Gold using WhatsUp Setup... 1 Security guidelines Secure Messaging Server Console... 2 Secure Messaging Server Console... 2 Upgrading your PEN Server Console:... 2 Server Console Installation Guide... 2 Prerequisites:... 2 General preparation:... 2 Installing the Server Console... 2 Activating MultiSite Manager. User Guide MultiSite Manager User Guide Contents 1. Getting Started... 2 Opening the MultiSite Manager... 2 Navigating MultiSite Manager... 2 2. The All Sites tabs... 3 All Sites... 3 Reports... 4 Licenses... 5 3. Microsoft Dynamics CRM Server 2011 software requirements Microsoft Dynamics CRM Server 2011 software requirements This section lists the software and application requirements for Microsoft Dynamics CRM Server 2011. Windows Server operating system: Microsoft MultiSite Manager. Setup Guide MultiSite Manager Setup Guide Contents 1. Introduction... 2 How MultiSite Manager works... 2 How MultiSite Manager is implemented... 2 2. MultiSite Manager requirements... 3 Operating System NETASQ SSO Agent Installation and deployment NETASQ SSO Agent Installation and deployment Document version: 1.3 Reference: naentno_sso_agent Page 1 / 20 Copyright NETASQ 2013 General information 3 Principle 3 Requirements 3 Active Directory user Reconfiguring VMware vsphere Update Manager Reconfiguring VMware vsphere Update Manager vsphere Update Manager 5.5 This document supports the version of each product listed and supports all subsequent versions until the document is replaced by a Getting Started with Sitecore Azure Sitecore Azure 3.1 Getting Started with Sitecore Azure Rev: 2015-09-09 Sitecore Azure 3.1 Getting Started with Sitecore Azure An Overview for Sitecore Administrators Table of Contents Chapter 1 Getting SAFETICA INSIGHT INSTALLATION MANUAL SAFETICA INSIGHT INSTALLATION MANUAL SAFETICA INSIGHT INSTALLATION MANUAL for Safetica Insight version 6.1.2 Author: Safetica Technologies s.r.o. Safetica Insight was developed by Safetica Technologies Introduction to Directory Services Introduction to Directory Services Overview This document explains how AirWatch integrates with your organization's existing directory service such as Active Directory, Lotus Domino and Novell e-directory Deploy Remote Desktop Gateway on the AWS Cloud Deploy Remote Desktop Gateway on the AWS Cloud Mike Pfeiffer April 2014 Last updated: May 2015 (revisions) Table of Contents Abstract... 3 Before You Get Started... 3 Three Ways to Use this Guide... 4 SafeGuard Enterprise Web Helpdesk SafeGuard Enterprise Web Helpdesk Product version: 5.60 Document date: April 2011 Contents 1 SafeGuard web-based Challenge/Response...3 2 Installation...5 3 Authentication...8 4 Select the Web Help Desk Google Apps Deployment Guide CENTRIFY DEPLOYMENT GUIDE Google Apps Deployment Guide Abstract Centrify provides mobile device management and single sign-on services that you can trust and count on as a critical component of your corporate MaaS360 Cloud Extender MaaS360 Cloud Extender Installation Guide Copyright 2013 Fiberlink Communications Corporation. All rights reserved. Information in this document is subject to change without notice. The software described NEFSIS DEDICATED SERVER NEFSIS TRAINING SERIES Nefsis Dedicated Server version 5.2.0.XXX (DRAFT Document) Requirements and Implementation Guide (Rev5-113009) REQUIREMENTS AND INSTALLATION OF THE NEFSIS DEDICATED SERVER Nefsis GRAVITYZONE HERE. Deployment Guide VLE Environment GRAVITYZONE HERE Deployment Guide VLE Environment LEGAL NOTICE All rights reserved. No part of this document may be reproduced or transmitted in any form or by any means, electronic or mechanical, including NETWRIX USER ACTIVITY VIDEO REPORTER NETWRIX USER ACTIVITY VIDEO REPORTER ADMINISTRATOR S GUIDE Product Version: 1.0 January 2013. Legal Notice The information in this publication is furnished for information use only, and does not constitute Request Manager Installation and Configuration Guide Request Manager Installation and Configuration Guide vcloud Request Manager 1.0.0 This document supports the version of each product listed and supports all subsequent versions until the document is replaced RoomWizard Synchronization Software Manual Installation Instructions 2 RoomWizard Synchronization Software Manual Installation Instructions Table of Contents Exchange Server Configuration... 4 RoomWizard Synchronization Software Installation and Configuration... 5 System Desktop Surveillance Help Desktop Surveillance Help Table of Contents About... 9 What s New... 10 System Requirements... 11 Updating from Desktop Surveillance 2.6 to Desktop Surveillance 3.2... 13 Program Structure... 14 Getting MaaS360 On-Premises Cloud Extender MaaS360 On-Premises Cloud Extender Installation Guide Copyright 2014 Fiberlink Communications Corporation. All rights reserved. Information in this document is subject to change without notice. The software Metalogix SharePoint Backup. Advanced Installation Guide. Publication Date: August 24, 2015 Metalogix SharePoint Backup Publication Date: August 24, 2015 All Rights Reserved. This software is protected by copyright law and international treaties. Unauthorized reproduction or distribution of this Enterprise Solution for Remote Desktop Services... 2. System Administration... 3. Server Management... 4. Server Management (Continued)... CONTENTS Enterprise Solution for Remote Desktop Services... 2 System Administration... 3 Server Management... 4 Server Management (Continued)... 5 Application Management... 6 Application Management (Continued)... Administering Jive Mobile Apps Administering Jive Mobile Apps Contents 2 Contents Administering Jive Mobile Apps...3 Configuring Jive for Android and ios... 3 Native Apps and Push Notifications...4 Custom App Wrapping for ios... 5 Native Netwrix Auditor for SQL Server Netwrix Auditor for SQL Server Quick-Start Guide Version: 8.0 4/22/2016 Legal Notice The information in this publication is furnished for information use only, and does not constitute a commitment from Installation Guide. SafeNet Authentication Service SafeNet Authentication Service Installation Guide Technical Manual Template Release 1.0, PN: 000-000000-000, Rev. A, March 2013, Copyright 2013 SafeNet, Inc. All rights reserved. 1 Document Information Netwrix Auditor for Exchange Netwrix Auditor for Exchange Quick-Start Guide Version: 8.0 4/22/2016 Legal Notice The information in this publication is furnished for information use only, and does not constitute a commitment from Netwrix Installing and Configuring vcloud Connector Installing and Configuring vcloud Connector vcloud Connector 2.7.0 This document supports the version of each product listed and supports all subsequent versions until the document is replaced by a new EMR Link Server Interface Installation EMR Link Server Interface Installation Version 1.0 ** INTRODUCTION ** If you would like assistance with installation, please contact our preferred support provider at [email protected], or call
http://docplayer.net/10110031-Pi-cloud-connect-overview.html
CC-MAIN-2017-39
en
refinedweb
Self-contained UI tests for iOS applications We’re all familiar with TDD, or at least write unit tests for our software, but unit tests won’t check application state after complex UI interactions. If you want to make sure that an application behaves correctly when users interact with it, then you need to write UI tests. Automated UI tests of your mobile application can help you detect problems with your code during everyday Continuous Integration (CI) process. It may however be hard to achieve a stable test environment if your application presents data obtained from remote servers. This article explains how to set up a self-contained test environment for connected iOS applications, that can be used both in Continuous Integration and manual testing. We’ll be using WireMock and Xcode UI Testing with Page Object Pattern to achieve our goal. UI Tests We’ll use Xcode UI Testing for UI tests. It’s the official UI testing framework from Apple that reduces the need for explicit waits in test code. Less explicit waiting means faster and more readable test code which is very important for test suite maintenance. Also, as it’s the official Apple framework, we’ll hopefully avoid situations when the test framework breaks with new Xcode releases. Unfortunately there isn’t much official documentation for the framework, but Joe Masilotti did a tremendous job of documenting and explaining all of the quirks. Test target setup First of all we need a test target. We’ll add a new UI Testing bundle to our project: We want to run UI tests as part of CI and also allow developers to immediately see if their code changes are passing UI tests when they run tests manually. Thus we will not create a separate scheme for UI tests, but we’ll build and execute them as a part of Test action for our default scheme. To do this we have to set up our project as on the screens below: Disabling animations UI tests usually take a lot of time compared to simple unit tests. We want to make our tests as fast as possible as we’ll be running them as part of CI. We’ll disable UI animations in the application when UI tests are running to speed things up. This can be done by setting an environment property, which we will later check at application start. The best way to do this is to extend XCUIApplication class as it has to be done before every test: extension XCUIApplication { func setUpAndLaunch() { launchEnvironment = ["DISABLE_ANIMATIONS": "1"] launch() } } Now we can call the new method in the test class setUp() instead of the regular XCUIApplication().launch(): class applicationUITests: XCTestCase { override func setUp() { super.setUp() continueAfterFailure = false XCUIApplication().setUpAndLaunch() } } The last thing we need is to check the property at application start and disable animations if needed. This can be done in AppDelegate: @UIApplicationMain class AppDelegate: UIResponder, UIApplicationDelegate { func application(_ application: UIApplication, didFinishLaunchingWithOptions launchOptions: [UIApplicationLaunchOptionsKey: Any]?) -> Bool { if ProcessInfo.processInfo.environment["DISABLE_ANIMATIONS"] == "1" { UIView.setAnimationsEnabled(false) } return true } } Page objects It’s time to start writing our test code! Let’s assume we’re testing an e-commerce application such as Allegro. It displays a listing of tappable products and we want to check if correct product is opened after a tap. We could write something like this: func testTapOnListingItemShouldOpenCorrectProduct() { // tap on first item XCUIApplication().collectionViews.cells["listingItem0"].tap() // check item title XCTAssertEqual(XCUIApplication().staticTexts["productTitle"], "Product 1") // check item price XCTAssertEqual(XCUIApplication().staticTexts["productPrice"], "€ 1.00") } But even with comments it’s not really readable, is it? Moreover we can’t reuse this code in other test methods and copying those XCUIApplication() calls over and over again is not feasible at all. This is the place where Page Objects (a concept known well to anyone who ever worked with Selenium) are really helpful. We have two screens within our application: - listing - product so we’ll create two screen objects that we’ll use in tests. Let’s start with a simple listing object: class ListingScreen { static func item(_ itemNumber: UInt) -> XCUIElement { return XCUIApplication().collectionViews.cells["listingItem" + String(itemNumber)] } } It not only wraps listing functionality nicely, but it’s also reusable for any listing item. Product screen will look like this: class ProductScreen { static let title = XCUIApplication().staticTexts["productTitle"] static let price = XCUIApplication().staticTexts["productPrice"] } Now we can use both screen objects in test code: func testTapOnListingItemShouldOpenCorrectProduct() { // tap on first item ListingScreen.item(0).tap() // check item title XCTAssertEqual(ProductScreen.title.label, "Product 1") // check item price XCTAssertEqual(ProductScreen.price.label, "€ 1.00") } Code looks definitely better, but we can improve it even further… Assert helpers Those assertions in the test code aren’t really reusable, but we can expect to use them a lot in test methods. Let’s try to move them to separate helper methods then: // MARK: Helper functions func checkProductTitle(_ title: String) { XCTAssertEqual(ProductScreen.title.label, title) } func checkProductPrice(_ price: String) { XCTAssertEqual(ProductScreen.price.label, price) } Our test method will now look like this: func testTapOnListingItemShouldOpenCorrectProduct() { ListingScreen.item(0).tap() checkProductTitle("Product 1") checkProductPrice("€ 1.00") } It doesn’t need comments anymore, does it? But if we run the test, we’ll discover a nasty side-effect of our helper methods: Error marker is placed within the helper method when the test fails. This is not a big problem when the helper is used only once, but we’ll be using it multiple times in test methods. Thankfully this is easy to fix. Every assertion method takes two additional parameters which tell Xcode from where in the source file the assert comes. We’ll use those parameters to place the error marker in the test method. Let’s improve our helpers: func checkProductTitle(_ title: String, file: StaticString = #file, line: UInt = #line) { XCTAssertEqual(ProductScreen.title.label, title, file: file, line: line) } func checkProductPrice(_ price: String, file: StaticString = #file, line: UInt = #line) { XCTAssertEqual(ProductScreen.price.label, price, file: file, line: line) } We can see that the marker is correctly placed when we run the test again. We didn’t even have to change anything in our test method! Network data stubbing So far we were using hardcoded test data in the test method, but the truth is that our application presents data received from backend servers. Thus it’s possible that this data will often change. If suddenly a different product is the first one on the product listing then the test will fail as the name or price won’t match. Moreover a server outage will cause our tests to fail as well because we won’t receive any data at all. How can we ensure that our tests will be server data independent and that they won’t fail if the server is experiencing downtime? Network data stubbing can help us with that. There’s a great, well documented, opensource project called WireMock that we can use to serve network stubs. It not only serves but also records stubs, which is really handy if you want to mock network communication quickly. WireMock script We want to run WireMock before every test run and also have the possibility to use it for manual tests and network communication recording. It would be hard to remember the whole syntax of WireMock commands so let’s write a simple script that would do all those things for us: #!/bin/sh WIREMOCK_DIR=`dirname $0` MAPPINGS_DIR="$WIREMOCK_DIR" PORT=8080 START=true STOP=false RECORD=false API_URL="" function usage { echo "Usage:" echo "\twiremock.sh -k|r [-h] [-m <mappings_dir>]" echo echo "\t-k --kill - stop server" echo "\t-m --mappings <mappings_dir> - start server with mocks from <mappings_dir>" echo "\t-r --record - start wiremock in recording mode" echo "\t-h --help - this screen" } while [ -n "$1" ] do case $1 in -m | --mappings ) shift MAPPINGS_DIR="$1" ;; -k | --kill ) START=false STOP=true RECORD=false ;; -r | --record ) START=false STOP=true RECORD=false ;; -h | --help ) usage exit ;; * ) usage exit 1 esac shift done if [ "$START" == true ] then echo "Starting Wiremock in play mode on port $PORT with mappings from $MAPPINGS_DIR" java -jar $WIREMOCK_DIR/wiremock.jar --verbose --port $PORT --root-dir $MAPPINGS_DIR & elif [ "$STOP" == true ] then echo "Stopping Wiremock on localhost:$PORT & $AUTH_PORT" curl -X POST --data '' "" elif [ "$RECORD" == true ] then echo "Starting Wiremock in record mode on port $PORT" echo "Storing mappings to $MAPPINGS_DIR" java -jar $WIREMOCK_DIR/wiremock.jar --proxy-all "$API_URL" --record-mappings --verbose --port $PORT --root-dir $MAPPINGS_DIR & fi Now we’ll be able to start WireMock by simply running ./wiremock.sh and stop it by running ./wiremock.sh -k. We can even run WireMock in record mode to record new mappings with ./wiremock.sh -r. The script expects to find mappings and __files directories with mock files in the script directory — this can be changed by providing -m path_to_mappings option. Build configuration Now that we have our script, it would be good to start WireMock before every test session and stop it afterwards. We can achieve this by adding pre- and post-actions for Test action that will run the script with correct parameters. Assuming that wiremock.sh is made executable and placed in WireMock directory under applicationUITests our actions would look like this: So now we start WireMock before every test session, but… our application is not using it. We have to configure the project and make a small change in application code so that it connects to localhost when needed. Let’s start with project configuration. We want our Test action to use localhost and we also want to use localhost for manual testing when needed. The easiest solution that fullfils both requirements is to create a new build configuration that will set a special build flag at compilation time. To achieve this we have to clone the Debug configuration (as this is the configuration used by Test) on project Info screen and give the new configuration a meaningful name (e.g. Localhost). Build configurations should look like this afterwards: Now we have to add a new custom flag (-DLOCALHOST) for Swift compiler on Build Settings screen like this: It’s time to make sure localhost is used instead of real API URL in Localhost configuration. We have to add conditional code in the place where API URL is defined: var baseURL: String { #if LOCALHOST return "" #else return "" #endif } We’ll be sending requests to WireMock over an unencrypted connection so we need to allow arbitrary loads in Info.plist. We only want to do this for Localhost configuration and no other so we’ll add two additional build phases to the application target. First one will run before actual compilation takes place and will enable arbitrary loads for Localhost: if [ "$CONFIGURATION" == "Localhost" ] then `/usr/libexec/PlistBuddy -c "Set :NSAppTransportSecurity:NSAllowsArbitraryLoads true" "$SRCROOT/$INFOPLIST_FILE"` fi Second one will run after compilation and will disable arbitrary loads: if [ "$CONFIGURATION" == "Localhost" ] then `/usr/libexec/PlistBuddy -c "Set :NSAppTransportSecurity:NSAllowsArbitraryLoads false" "$SRCROOT/$INFOPLIST_FILE"` fi Build phases should be ordered like this: Using data from mocks in tests So far our test methods included hardcoded data like “Product 1” for item name. This isn’t really flexible because we would need to change those hardcoded values every time we make a change in mocks. It would be way better if we used the data loaded from mocks. We can do this by creating a simple mock data parser for tests. But first things first — let’s bundle mocks with the test bundle so we have files to read from. The easiest way to do it is to reference __files directory in the UI test target like this: Afterwards we’ll have a reference to the __files directory in our project structure: This way we can easily access mock files in Xcode and they will be automatically bundled with test bundle. We are ready to write our parser code now. Let’s start with a simple base class for test data parsers that will load a specified mock file and deserialize it into a dictionary at initialisation time. We’ll also create a simple type alias called JSONDict for readability purposes as it’s easier to use in code than [String: AnyObject] typealias JSONDict = [String: AnyObject] enum TestFile: String { case firstProduct = "body-firstproduct" } class TestDataParser { var json: JSONDict! init(testFile: TestFile) { guard let path = Bundle(for: type(of: self)).path(forResource: testFile.rawValue, ofType: "json", inDirectory: "__files"), let jsonData = try? Data(contentsOf: URL(fileURLWithPath: path)) else { return } do { json = try JSONSerialization.jsonObject(with: jsonData, options: JSONSerialization.ReadingOptions.mutableContainers) as? JSONDict } catch let jsonError { print(jsonError) } } } Now, when the base class is ready, we can create a proper parser for product mocks. Let’s assume we have a mock for a product request that looks like this: { "product": { "title": "Product 1", "price": "1.00", } } Data parser for product file can be implemented like this: class ProductTestData: TestDataParser { var title: String { return product["title"] as! String } var price: String { return "€ " + (product["price"] as! String) } private var product: JSONDict { return json["product"] as! JSONDict } } Now, if testData is an instance of ProductTestData, we can call testData.title to get product title from loaded mock. Let’s modify our tests to use the new test data parser: class applicationUITests: XCTestCase { let testData = ProductTestData(testFile: .firstProduct) override func setUp() { super.setUp() continueAfterFailure = false XCUIApplication().setUpAndLaunch() } override func tearDown() { super.tearDown() } func testTapOnListingItemShouldOpenCorrectProduct() { ListingScreen.item(0).tap() checkProductTitle(testData.title) checkProductPrice(testData.price) } // MARK: Helper functions func checkProductTitle(_ title: String, file: StaticString = #file, line: UInt = #line) { XCTAssertEqual(ProductScreen.title.label, title) } func checkProductPrice(_ price: String, file: StaticString = #file, line: UInt = #line) { XCTAssertEqual(ProductScreen.price.label, price, file: file, line: line) } } That’s it! We now have a readable test that uses data received from WireMock. The test environment is ready and we can write more tests in similar manner. Summary At first glance it might seem that it’s not easy to set up a UI test environment for iOS applications, but as you’ve seen above it just takes a few simple steps. It’s most definitely worth the effort as it can save you lots of manual testing and it will show you issues with your code as soon as that code is committed. Share this post
https://allegro.tech/2016/10/self-contained-UI-tests-for-ios-applications.html
CC-MAIN-2017-39
en
refinedweb
Content-type: text/html #include <unistd.h> int fsync(int fildes); The fsync() function moves all modified data and attributes of the file descriptor fildes to a storage device. When fsync() returns, all in-memory modified copies of buffers associated with fildes have been written to the physical medium. The fsync() function is different from sync(), which schedules disk I/O for all files but returns before the I/O completes. The fsync() function forces all outstanding data operations to synchronized file integrity completion (see fcntl.h(3HEAD) definition of O_SYNC.) The fsync() function forces all currently queued I/O operations associated with the file indicated by the file descriptor fildes to the synchronized I/O completion state. All I/O operations are completed as defined for synchronized I/O file integrity completion. Upon successful completion, 0 is returned. Otherwise, -1 is returned and errno is set to indicate the error. If the fsync() function fails, outstanding I/O operations are not guaranteed to have been completed. The fsync() function will fail if: EBADF The fildes argument is not a valid file descriptor. EINTR A signal was caught during execution of the fsync() function. EIO An I/O error occurred while reading from or writing to the file system. ENOSPC There was no free space remaining on the device containing the file. ETIMEDOUT Remote connection timed out. This occurs when the file is on an NFS file system mounted with the soft option. See mount_nfs(1M). In the event that any of the queued I/O operations fail, fsync() returns the error conditions defined for read(2) and write(2). The fsync() function should be used by applications that require that a file be in a known state. For example, an application that contains a simple transaction facility might use fsync() to ensure that all changes to a file or files caused by a given transaction were recorded on a storage medium. The manner in which the data reach the physical medium depends on both implementation and hardware. The fsync() function returns when notified by the device driver that the write has taken place. See attributes(5) for descriptions of the following attributes: mount_nfs(1M), read(2), sync(2), write(2), fcntl.h(3HEAD), fdatasync(3RT), attributes(5), standards(5)
https://backdrift.org/man/SunOS-5.10/man3c/fsync.3c.html
CC-MAIN-2017-39
en
refinedweb
The PlotDevice application is a convenient environment for writing scripts and being able to quickly see how your edits affect the graphical output. But once you’re happy with the state of a script, you can start to treat it as a program in its own right (rather than a mere ‘document’ that only exists inside the app). The graphics core used by the application is also available in a ‘console’ setting, allowing you to run your scripts from the command line. It can even be used as a traditional Python module that you import into your own programs. plotdeviceCommand The scripts you write in PlotDevice can be run from the terminal using an external command line tool called plotdevice. The command allows you to run scripts, coordinate with an external editor, and export images, movies, or animated GIFs. In the application, open the Preferences window and click the Install button: Select a destination folder in your shell’s PATH and the application will create a symlink from there to the script contained in the app bundle. If admin privileges are required to access the selected folder, you will be prompted for your password. Note that since this connection involves a symlink, the plotdevice command will break if you move the application after installing the tool. If this happens, you will see an error message in the Preferences window and can reinstall the link using the same procedure as before. Once you have installed the plotdevice tool you can run any of the ’.pv’ scripts you’ve written with the app. Running one of the example scripts is as simple as opening a Terminal window and typing: You’ll see a new document icon appear in the Dock and the script’s output will be displayed in a window: As in the PlotDevice application, you can useand to re-run and interrupt the script. The menu items are also available to you. Running the command with just a filename argument will display its output in a window, but with the right combination of command line switches you can export graphics to file using any of the supported image/video formats. There are quite a few optional switches: plotdevice [-f] [-b] [--live] [--cmyk] [--virtualenv PATH] [--args [a [b ...]]] [--export FILE] [--frames N or M-N] [--fps N] [--rate N] [--loop [N]] file But fear not; we’ll walk through each of the options below. -f run full-screen --virtualenv PATH path to a virtualenv whose libraries you want to use (this should point to the top-level virtualenv directory) --args [a [b ...]] arguments to be passed to the script as sys.argv -b run PlotDevice in the background (i.e., don’t switch apps when the script is run) --live re-render graphics each time the script file is saved --export FILE a destination filename ending in eps, png, tiff, jpg, gif, or mov --cmyk convert colors to c/m/y/k before generating images (otherwise colors will be r/g/b) --frames N or M-N number of frames to render or a range specifying the first and last frames (default 1-150) --fps N frames per second in exported video (default 30) --rate N video bitrate in megabits per second (default 1) --loop [N] number of times to loop an exported animated gif (omit N to loop forever) # Run a script plotdevice script.pv # Run fullscreen plotdevice -f script.pv # Save script's output to pdf plotdevice script.pv --export output.pdf # Create an animated gif that loops every 2 seconds plotdevice script.pv --export output.gif --frames 60 --fps 30 --loop # Create a sequence of numbered png files – one for each frame in the animation plotdevice script.pv --export output.png --frames 10 # Create a 5 second long H.264 video at 2 megabits/sec plotdevice script.pv --export output.mov --frames 150 --rate 2.0 Though the plotdevice command provides a convenient way to launch scripts with the PlotDevice interpreter, you may prefer to use the graphics context and export functions from within your own module (and running whichever python binary your system or virtualenv provides). Detailed installation instructions can be found in the project’s README file. To simplify the use of PlotDevice with other external libraries, we recommend installing the module into a virtualenv alongside your script: $ virtualenv env $ source ./env/bin/activate (env)$ pip install plotdevice But you don’t need to stop there. The Python Package Index has thousands of other modules your script can use. Install them into the virtualenv and your script will know just where to find them: (env)$ pip install requests envoy bs4 # some other useful packages A nice side-effect of installing into a virtualenv is that it automatically creates a copy of the plotdevice command that’s specific to that particular folder. When you ‘source’ the activate file, your path is adjusted to let you run the plotdevice command without specifying its path: $ source ./env/bin/activate (env)$ plotdevice myscript.pv # uses the tool found at ./env/bin/plotdevice The plotdevice module contains all the global commands and constants you’re used to from the application. For instance, the following will draw a few boxes: #!/usr/bin/env python from plotdevice import * for x, y in grid(10,10,12,12): rect(x,y, 10,10) Though using ‘ import *’ is generally frowned upon in the Python community, we feel like it’s pretty easily justified in this case since PlotDevice’s raison d’être is to make your drawing code short-and-sweet. You can then generate output files using the export() command. It takes a file path as an argument and the format will be determined by the file extension ( eps, png, jpg, gif, or tiff): export('~/Pictures/output.pdf') If you’re generating multiple images, be sure to reset the graphics state in between frames with: clear(all) But if you plan to do more than generate a one-off, you’ll likely find the with export() usage more convenient. The context-manager provides some handy methods for writing images, multi-page PDFs, and even animations. As you can see from the toy example above, Python scripts that use the plotdeivce module look a little different from scripts that run in the PlotDevice application. In particular, the lines at the very beginning of the Python script aren’t necessary in the application since it provides all the graphics commands implicitly as part of the script’s runtime environment. In addition, scripts that run in the app or with the plotdevice command expect special handling relating to animations. Just by defining a draw() method in your script, the viewer will repeatedly clear the canvas and call your method – even though the script itself doesn’t explicity call it. It’s for this reason that the scripts you save from the application end with a .pv extension rather than a .py. The file extension is a small reminder that there are some missing pieces required to turn the file into full-fledged Python script. Luckily, converting a .pv script to run without the plotdevice tool is as simple as changing its file extension to py and adding three lines to the top of the code: #!/usr/bin/env python # encoding: utf-8 from plotdevice import * Importing the module won’t give you the default animation behavior (though you can easily create movies with the export() command), but it will add all the familiar PlotDevice commands to the script’s namespace. In addition, it will load all the necessary C-extensions and other system dependencies. If you’d prefer to keep your namespace tidy, you can also import the module as-is. Just remember to prefix all your commands with the module name: #!/usr/bin/env python # encoding: utf-8 import plotdevice as pd pd.size(256, 256) pd.background('red') pd.rect(64,64, 128,128, fill='white') pd.export('white-box.png')
https://plotdevice.io/tut/Console
CC-MAIN-2017-39
en
refinedweb
std::tgamma arg. [edit] Parameters [edit] Return value If no errors occur, the value of the gamma function of arg, that is ∫∞ 0targ-1 e-t dt, is returned If a domain error occurs, an implementation-defined value (NaN where supported) is returned. If a pole error occurs, ±HUGE_VAL, ±HUGE_VALF, or ±HUGE_VALL is returned. arg is zero or is an integer less than zero, a pole error or a domain error may occur. If the implementation supports IEEE floating-point arithmetic (IEC 60559), - If the argument is ±0, ±∞ is returned and FE_DIVBYZERO is raised - If the argument is a negative integer, NaN is returned and FE_INVALID is raised - If the argument is -∞, NaN is returned and FE_INVALID is raised - If the argument is +∞, +∞ is returned. - If the argument is NaN, NaN is returned [edit] Notes If arg is a natural number, std::tgamma(arg) is the factorial of arg-1. Many implementations calculate the exact integer-domain factorial if the argument is a sufficiently small integer. For IEEE-compatible type double, overflow happens if 0 < x < 1/DBL_MAX or if x > 171.7 POSIX requires that a pole error occurs if the argument is zero, but a domain error occurs when the argument is a negative integer. It also specifies that in future, domain errors may be replaced by pole errors for negative integer arguments (in which case the return value in those cases would change from NaN to ±∞). There is a non-standard function named gamma in various implementations, but its definition is inconsistent. For example, glibc and 4.2BSD version of gamma executes lgamma, but 4.4BSD version of gamma executes tgamma. [edit] Example #include <iostream> #include <cmath> #include <cerrno> #include <cstring> #include <cfenv> #pragma STDC FENV_ACCESS ON int main() { std::cout << "tgamma(10) = " << std::tgamma(10) << ", 9! = " << 2*3*4*5*6*7*8*9 << '\n' << "tgamma(0.5) = " << std::tgamma(0.5) << ", sqrt(pi) = " << std::sqrt(std::acos(-1)) << '\n'; // special values std::cout << "tgamma(1) = " << std::tgamma(1) << '\n' << "tgamma(+Inf) = " << std::tgamma(INFINITY) << '\n'; // error handling errno=0; std::feclearexcept(FE_ALL_EXCEPT); std::cout << "tgamma(-1) = " << std::tgamma(-1) << '\n'; if(errno == EDOM) std::cout << " errno == EDOM: " << std::strerror(errno) << '\n'; if(std::fetestexcept(FE_INVALID)) std::cout << " FE_INVALID raised\n"; } Possible output: tgamma(10) = 362880, 9! = 362880 tgamma(0.5) = 1.77245, sqrt(pi) = 1.77245 tgamma(1) = 1 tgamma(+Inf) = inf tgamma(-1) = nan errno == EDOM: Numerical argument out of domain FE_INVALID raised [edit] See also [edit] External links Weisstein, Eric W. "Gamma Function." From MathWorld--A Wolfram Web Resource.
http://en.cppreference.com/w/cpp/numeric/math/tgamma
CC-MAIN-2015-14
en
refinedweb
byte b = 0x7f; unsigned long u = b << 8; Serial.print("0x7F << 8: "); Serial.println(u, HEX); b = 0x80; u = b << 8; Serial.print("0x80 << 8: "); Serial.println(u, HEX); 0x7F << 8: 7F00 0x80 << 8: FFFF8000 constants are ints. So b << 8 is a uint8_t and a int16_t together. The bigger one is int16_t, so the compiler promotes the other to that. u = b << (byte)8; u = b << (unsigned int)8; I know that blaming unexpected results on compiler bugs is one of the first signs that one is losing his grip on reality ... Integer types smaller than int are promoted when an operation is performed on them. If all values of the original type can be represented as an int, the value of the smaller type is converted to an int; otherwise, it is converted to an unsigned int. unsigned int b = 0x7f; #include <stdio.h>#include <inttypes.h>int main () { unsigned char b = 0x80; unsigned long u = b << 8; printf("0x80 << 8: %lx\n", u); return 0;} unsigned long getValue() { unsigned long val = 0; val = getByte() << 16; val |= getByte() << 8; val |= getByte(); ... return val;}byte getByte() { ...//read byte from device...} ... and run on my Mac desktop displays the expected 0x80 << 8: 8000. #include <stdio.h>#include <inttypes.h>int main () { unsigned char b = 0x80; unsigned long u = b << 24; printf("0x80 << 24: %lx\n", u); printf("sizeof (int) = %i\n", (int) sizeof (int)); return 0;} 0x80 << 24: ffffffff80000000sizeof (int) = 4 byte Category, Name, Target;unsigned int ID;ID = (uint16_t) ((Category << 6 | Name ) << 3 | Target) << 5; compiled (with gcc) and run on my Mac desktop displays the expected 0x80 << 8: 8000. The actual behavior then seems to be implementation dependent. byte b = 0x80;unsigned long ul = 8;unsigned long r = b << ul;Serial.println(r, HEX); I expected b to be promoted to a 16 bit type prior to the left shift, but I didn't expect the promotion to be from an unsigned type (byte) to a signed type (int). It seems to me that it should have been promoted to an unsigned int. byte b = 0x80; unsigned long ul = 256; unsigned long r = b * ul; Serial.println(r, HEX); The behavior is undefined if the right operand is negative, or greater than or equal to the length in bits of the promoted left operand. Or is it? Would both operands be promoted to unsigned long? But they can't be or the results would be 0x8000. Please enter a valid email to subscribe We need to confirm your email address. To complete the subscription, please click the link in the Thank you for subscribing! Arduino via Egeo 16 Torino, 10131 Italy
http://forum.arduino.cc/index.php?topic=139424.msg1047510
CC-MAIN-2015-14
en
refinedweb
java.lang.Object org.apache.myfaces.component.html.util.StreamingResourceLoaderorg.apache.myfaces.component.html.util.StreamingResourceLoader public class StreamingResourceLoader Serve component-specific resources that MUST be embedded in the HEAD of an html page. Currently, there is only one case where resources must be in the document head: inline CSS or links to CSS stylesheets. When using the StreamingAddResource class, a single link is output in the document HEAD for each page which embeds the name of this class in the url. This causes the browser to make a GET request to that link url when rendering the page; the tomahawk extensions filter sees the embedded ResourceLoader class name and creates an instance of this class to handle the request. Note that for other resources the StreamingAddResources class generates urls that embed the standard MyFacesResourceLoader url, ie this class does not handle serving of resources other than the ones that MUST be in the head section. The url also embeds a "request id" which is unique for each page served. This id is then used as a key into a global-scoped cache. The data there was inserted during the previous request, and is deleted as soon as it is served up by this class. public StreamingResourceLoader() public void serveResource(javax.servlet.ServletContext context, javax.servlet.http.HttpServletRequest request, javax.servlet.http.HttpServletResponse response, String resourceUri) throws IOException ResourceLoader serveResourcein interface ResourceLoader context- TODO request- the request response- the response to write the resource content to resourceUri- contains the uri part after the uri which is used to identify the resource loader IOException
http://myfaces.apache.org/tomahawk-project/tomahawk/apidocs/org/apache/myfaces/component/html/util/StreamingResourceLoader.html
CC-MAIN-2015-14
en
refinedweb
06 October 2010 12:55 [Source: ICIS news] LONDON (ICIS)--EU and ?xml:namespace> EU Trade Commissioner Karel De Gucht, the Belgian Minister of Foreign Affairs Steven Vanackere, representing the Presidency of the Council of the European Union (EU), and South Korean Minister for Trade Kim Jong-hoon signed the agreement on Wednesday at the EU-Korea Summit in “The agreement between the EU and “It will provide a real boost to jobs and growth in The FTA was initialled between the Commission and The date of provisional application would be 1 July 2011, provided that the European Parliament gives its consent and that the Regulation of the European Parliament and of the Council implementing the bilateral safeguard clause of the FTA is in place. EU member states would also have to ratify the agreement according to their own laws and procedures. Last month, Rene van Sloten, executive director for industrial policy at industry association Cefic, said the European chemicals industry welcomed the EU’s free trade agreement with The trade in EU-South Korea goods was worth around €54bn ($75bn) in 2009, the Commission said. In terms of tariffs, According to the EU, exporters of chemicals would be relieved from over €150m in duties each year through the trade
http://www.icis.com/Articles/2010/10/06/9398964/EU-South-Korea-officials-sign-free-trade-agreement.html
CC-MAIN-2015-14
en
refinedweb
01 November 2011 17:51 [Source: ICIS news] HOUSTON (ICIS)--Surging gasoline and distillate margins and discounted crude throughput for the third quarter pushed Valero earnings about 40% higher, the company said on Tuesday. ($1 = €0.72) Valero net income rose to $1.2bn (€864m) from $292 in the same quarter last year. Profits for gasoline refining from Louisiana Light Sweet crude were up 89% at $8.20/bbl compared with the third quarter of 2010, while ultra-low-sulphur diesel profits jumped 66% to $14.19/bbl. In addition, Maya crude oil was at a $13.38/bbl discount to Louisiana Light Sweet crude, an increase of 20% on a year ago. Finally, mid-continent crude and Eagle Ford basin crude were at a discount of $22.47/bbl to West Texas Intermediate (WTI) crude, widening by about $15/bbl from mid-continent crude's discount in 2010. During the quarter, Valero processed more than 460,000 bbl/day of the discounted Eagle Ford crude at its 93,000 bbl/day Three Rivers and 142,000 bbl/day Corpus Christi refineries in Texas. That was more than 40,000 bbl/day higher than in 2010, and the Eagle Ford crude saved the company about $15/bbl compared with processing imported sweet crude. The higher profits from production of gasoline and distillate provided an incentive to increase throughput at the company's refineries, Valero executive vice president Mike Ciskowski said. As a result, refinery throughput jumped by 389,000 bbl/day for the quarter. The increase in volumes was also a result of added capacity from the acquisition of the 220,000 bbl/day Pembroke refinery in Wales on 1 August and operations at the 235,000 bbl/day Aruba refinery, which was down in 2010. “We were able to capitalise on favourable refining margins and attain our highest refinery utilisation since the third quarter of 2007,” Valero CEO Bill Kleese said. ?xml:namespace> (
http://www.icis.com/Articles/2011/11/01/9504639/us-valero-earnings-soar-on-high-profit-margins-cheaper-feedstock.html
CC-MAIN-2015-14
en
refinedweb
04 September 2012 19:08 [Source: ICIS news] TORONTO (ICIS)--?xml:namespace> Toronto-based Royal Bank said that its monthly purchasing managers’ index (PMI) for Growth in Canadian manufacturing output was unchanged from July but new orders increased, partly reflecting an uptick in new export work, the bank said. "In contrast to declining manufacturing conditions around the world, particularly in the US, the euro area and China, the Canadian manufacturing sector is continuing to grow, albeit at a moderately slower pace," said Craig Wright, the bank’s chief economist. "It is encouraging to see that new export orders rebounded and manufacturing firms reported that they continued to hire employees in August,” Wright said. Royal Bank’s PMI is based on a survey of 400 Canada-based industrial firms. The bank conducts the survey in cooperation with the Purchasing Management Association of Canada (PMAC)
http://www.icis.com/Articles/2012/09/04/9592652/canada-manufacturing-continues-to-grow-in-august-but-pace-slows.html
CC-MAIN-2015-14
en
refinedweb
Name | Synopsis | Interface Level | Parameters | Description | Context | See Also #include <sys/scsi/scsi.h> #include <sys/cmn_err.h> void scsi_log(dev_info_t *dip, char *drv_name, uint_t level, const char *fmt, ...); Solaris DDI specific (Solaris DDI). Pointer to the dev_info structure. String naming the device. Error level. Display format. The scsi_log() function is a utility function that displays a message via the cmn_err(9F) routine. The error levels that can be passed in to this function are CE_PANIC, CE_WARN, CE_NOTE, CE_CONT, and SCSI_DEBUG. The last level is used to assist in displaying debug messages to the console only. drv_name is the short name by which this device is known; example disk driver names are sd and cmdk. If the dev_info_t pointer is NULL, then the drv_name will be used with no unit or long name. If the first character in format is: An exclamation mark (!), the message goes only to the system buffer. A caret (^), the message goes only to the console. A question mark (?) and level is CE_CONT, the message is always sent to the system buffer, but is written to the console only when the system has been booted in verbose mode. See kernel(1M). If neither condition is met, the ? character has no effect and is simply ignored. All formatting conversions in use by cmn_err() also work with scsi_log(). The scsi_log() function may be called from user, interrupt, or kernel context. kernel(1M), sd(7D), cmn_err(9F), scsi_errmsg(9F) Name | Synopsis | Interface Level | Parameters | Description | Context | See Also
http://docs.oracle.com/cd/E19253-01/816-5180/6mbbf02o4/index.html
CC-MAIN-2015-14
en
refinedweb