Reading a "flipped" table in to a data.frame correctly I have a tab-delimited file that looks like this: ``` AG-AG AG-CA AT-AA AT-AC AT-AG ... 0.0142180094786 0.009478672985781 0.0142180094786 0.4218009478672 ... ``` When I read this into R using read.table i get: ``` nc.tab <- read.table("./percent_splice_pair.tab", sep="\t", header=TRUE) AG.AG AG.CA AT.AA AT.AC AT.AG ... 1 0.01421801 0.009478673 0.01421801 0.4218009 0.03317536 ... ``` This feels somewhat awkward for me, because I am much more used to working with data if its like this: ``` splice.pair counts AG.AG 0.01421801 AG.CA 0.009478673 AT.AA 0.01421801 AT.AG 0.03317536 ... ... ``` so far, my attempts at trying to coerce the table into a data frame like this (using `data.frame()`) have caused very odd results. I can't work out how to get each row of the table I have as a simple list, which I can then use as columns for the data frame. `colnames(nc.tab)` works for the headers but things like `nc.tab[1,]` just give me the table + headers again. Am I missing something obvious? --edit-- Whilst @Andrie's answer gave me the data.frame I needed, I had to do a bit of extra work to coerse the counts values into numeric values so that they would work correctly in ggplot: ``` nc.tab <- read.table("./percent_splice_pair.tab", header=FALSE, sep="\t") nc.mat <- t(as.matrix(nc.tab)) sp <- as.character(nc.tab[,2]) c <- as.numeric(as.character(nc.tab[,2])) nc.dat <- data.frame(Splice.Pair=sp, count=c) Splice.Pair count 1 AG-AG 0.014218009 2 AG-CA 0.009478673 3 AT-AA 0.014218009 4 AT-AC 0.421800948 5 AT-AG 0.033175355 ```
You need the following to read and reshape your data in the way you want: - use `read.table` with the parameter `header=FALSE` - then transpose the data with the function `t()` - rename the columns Here is the code: ``` x <- read.table(..., header=FALSE) df <- as.data.frame(t(x)) names(df) <- c("splice.pair", "counts") df splice.pair counts V1 AG-AG 0.0142180094786 V2 AG-CA 0.009478672985781 V3 AT-AA 0.0142180094786 V4 AT-AC 0.4218009478672 ```
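One caveat (this mirrors the edit in the question): after the transpose, the `counts` column typically comes back as a factor or character vector rather than numeric, because the untransposed table mixed names and values in its columns. A small follow-up sketch, assuming the `df` built above:

```
## counts arrives as factor/character after the transpose;
## go via character first to avoid factor-level surprises
df$counts <- as.numeric(as.character(df$counts))
str(df)
```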
Can someone explain how to use FastTags

There are two ways to create custom tags with the play framework.

1. By defining a groovy template in app/view/tags
2. Directly in pure java by having a class extend FastTags

The latter is NOT documented.
So, similar to how JavaExtensions work by extending the JavaExtensions class, to create a FastTag you need to create a class that extends FastTags. Each method that you want to execute as a tag needs to conform to the following method structure. ``` public static void _tagName(Map<?, ?> args, Closure body, PrintWriter out, ExecutableTemplate template, int fromLine) ``` **Note the underscore before the name of the tag.** To understand how to build an actual tag, the easiest way is to look at the source code for a FastTag, and see one in action. Here is the source straight from git hub. <https://github.com/playframework/play/blob/master/framework/src/play/templates/FastTags.java> Below are a few I have copied, so that I can explain how this works. ``` public static void _verbatim(Map<?, ?> args, Closure body, PrintWriter out, ExecutableTemplate template, int fromLine) { out.println(JavaExtensions.toString(body)); } ``` So, this first method is the **verbatim** tag, and simply calls the toString method on the JavaExtensions, and passes in the body of the tag. The body of the tag would be anything between the open and close tag. So ``` <verbatim>My verbatim</verbatim> ``` The body value would be ``` My verbatim ``` The second example, is slightly more complex. It is a tag that relies on a parent tag to function. ``` public static void _option(Map<?, ?> args, Closure body, PrintWriter out, ExecutableTemplate template, int fromLine) { Object value = args.get("arg"); Object selectedValue = TagContext.parent("select").data.get("selected"); boolean selected = selectedValue != null && value != null && selectedValue.equals(value); out.print("<option value=\"" + (value == null ? "" : value) + "\" " + (selected ? "selected=\"selected\"" : "") + "" + serialize(args, "selected", "value") + ">"); out.println(JavaExtensions.toString(body)); out.print("</option>"); } ``` This code works by outputting an HTML option tag, and sets the selected value, by checking which value is selected from the parent tag. The first 3 lines just get data, and set up the data ready to output. Then, the final 3 lines outputs the result of the tag. There are many more examples in the source code I have linked to, with varying degrees of complexity, but hopefully this will be a good starting point for you. To ensure that your tags do not conflict between projects, or with the core Play tags, you can set up namespaces, using the class level annotation @FastTags.Namespace. So, for a hello tag, in a namespace of my.tags, you would do the following ``` @FastTags.Namespace("my.tags") public class MyFastTag extends FastTags { public static void _hello (Map<?, ?> args, Closure body, PrintWriter out, ExecutableTemplate template, int fromLine) { ... } } ``` and then in your templates, you would reference the hello tag as ``` #{my.tags.hello/} ```
Please help me understand type attribute of web.config custom settings? I am trying to define custom settings in my web.config file and I'm pretty sure I have most of it correct and it all makes sense, except the one crucial part where I don't understand what I'm supposed to use. The tutorial I used to create my SectionHandler, didn't go into an explanation of it and MSDN isn't really helping me fully understand it either. This comes from the tutorial I used: ``` <section name="BlogSettings" type="Fully.Qualified.TypeName.BlogSettings, AssemblyName" /> ``` [Link to Tutorial](http://haacked.com/archive/2007/03/11/custom-configuration-sections-in-3-easy-steps.aspx) This is from MSDN: ``` type="System.Configuration.SingleTagSectionHandler" ``` Yes, I am very new to ASP.NET and I'm trying to learn. I would be happy with any good references that explain what's going on here.
### Description

The `type` attribute of the section element in web.config is the fully qualified type name (followed by the assembly name) of the corresponding `ConfigurationSection` class you have implemented, i.e. the class that defines the section and the configuration content it may contain. `ConfigurationSection` is the base class of all configuration sections. This mechanism belongs to .NET configuration in general, not only to web.config.

Check out [Unraveling the Mysteries of .NET 2.0 Configuration](http://www.codeproject.com/Articles/16466/Unraveling-the-Mysteries-of-NET-2-0-Configuration)

### More Information

- [MSDN - ConfigurationSection Class](http://msdn.microsoft.com/en-us/library/system.configuration.configurationsection.aspx)
- [Unraveling the Mysteries of .NET 2.0 Configuration](http://www.codeproject.com/Articles/16466/Unraveling-the-Mysteries-of-NET-2-0-Configuration)
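To make the connection concrete, here is a minimal sketch of the two halves. The class, property, namespace and assembly names (`BlogSettings`, `frontPagePostCount`, `My.Namespace`, `MyAssembly`) are illustrative; the property follows the linked tutorial's example.

```csharp
using System.Configuration;

// The class the "type" attribute points at: it defines what the
// <BlogSettings> element is allowed to contain.
public class BlogSettings : ConfigurationSection
{
    [ConfigurationProperty("frontPagePostCount", DefaultValue = 20, IsRequired = false)]
    public int FrontPagePostCount
    {
        get { return (int)this["frontPagePostCount"]; }
        set { this["frontPagePostCount"] = value; }
    }
}
```

```xml
<!-- web.config: type = "Namespace.ClassName, AssemblyTheClassLivesIn" -->
<configSections>
  <section name="BlogSettings" type="My.Namespace.BlogSettings, MyAssembly" />
</configSections>

<BlogSettings frontPagePostCount="10" />
```

At runtime you read the section back with `ConfigurationManager.GetSection("BlogSettings")` and cast the result to `BlogSettings`.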
Weird set index error

I am stuck on this chunk of code:

```
hdiag = zeros(Float64,2)
hdiag = [0,0]
println(hdiag)
hdiag[1] = randn()
```

In the last line I get an `InexactError`. It is strange because `randn()` is a `Float64`, but for some reason I have to do `hdiag = randn(2)` instead, and then there is no problem.
The line: ``` hdiag = [0,0] ``` *changes* `hdiag` to refer to a completely new and different array than what it was before. In this case, that new array is an integer array, and so any subsequent assignments into it need to be convertible to integers. Indexed assignment is different; it changes the contents of the existing array. So you can use `hdiag[:] = [0,0]` and it will change the contents, converting the integers to floats as it does so. This gets even easier in version 0.5, where you can use the new `.=` dot assignment syntax to assign into an existing array: ``` hdiag .= [0,0] ``` will do what you want. For more details on arrays, bindings, and assignment, I recommend reading this blog post: [Values vs. Bindings: The Map is Not the Territory.](http://www.johnmyleswhite.com/notebook/2014/09/06/values-vs-bindings-the-map-is-not-the-territory/)
run non web java application on tomcat

I have a simple Java application that I need to be running at all times (and to start automatically on server restart). I have thought about a service wrapper, but the Windows version is paid. Is there a way that I can configure Tomcat to run a specific class from a project automatically, or any other solution that could give the same result?
I think what you need is an application (whether web or non-web) that starts together with Tomcat. Well, you need a simple web application that registers a listener (one that listens to the application start event, i.e. the Tomcat start event) and launches your class. It's very simple: in your web.xml you declare a listener like this:

```
<listener>
    <description>application startup and shutdown events</description>
    <display-name>ApplicationListener</display-name>
    <listener-class>com.myapp.server.config.ApplicationListener</listener-class>
</listener>
```

And in your ApplicationListener class you implement the ServletContextListener interface. Here is an example:

```
import javax.servlet.ServletContext;
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;

import org.apache.log4j.Logger;

/**
 * Class to listen for application startup and shutdown
 *
 * @author HBR
 *
 */
public class ApplicationListener implements ServletContextListener {

    private static Logger logger = Logger.getLogger(ApplicationListener.class);

    @Override
    public void contextDestroyed(ServletContextEvent servletContextEvent) {
        logger.info("class : context destroyed");
    }

    @Override
    public void contextInitialized(ServletContextEvent servletContextEvent) {
        ServletContext context = servletContextEvent.getServletContext();
        ///// HERE You launch your class
        logger.info("myapp : context Initialized");
    }
}
```
R Split string and keep substrings right-hand of match?

How to do this with strsplit() in R? Stop splitting when no first names separated by slashes remain. Keep the right-hand side substring as given in the results.

```
a <- c("tim/tom meyer XY900 123kncjd", "sepp/max/peter moser VK123 456xyz")

# result:
c("tim meyer XY900 123kncjd", "tom meyer XY900 123kncjd",
  "sepp moser VK123 456xyz", "max moser VK123 456xyz",
  "peter moser VK123 456xyz")
```
Here is one possibility using a few of the different base string functions. ``` ## get the lengths of the output for each first name len <- lengths(gregexpr("/", sub(" .*", "", a), fixed = TRUE)) + 1L ## extract all the first names ## using the fact that they all end at the first space character fn <- scan(text = a, sep = "/", what = "", comment.char = " ") ## paste them together paste0(fn, rep(regmatches(a, regexpr(" .*", a)), len)) # [1] "tim meyer XY900 123kncjd" "tom meyer XY900 123kncjd" # [3] "sepp moser VK123 456xyz" "max moser VK123 456xyz" # [5] "peter moser VK123 456xyz" ``` **Addition:** Here is a second possibility, using a little less code. Might be a little faster too. ``` s <- strsplit(a, "\\/|( .*)") paste0(unlist(s), rep(regmatches(a, regexpr(" .*", a)), lengths(s))) # [1] "tim meyer XY900 123kncjd" "tom meyer XY900 123kncjd" # [3] "sepp moser VK123 456xyz" "max moser VK123 456xyz" # [5] "peter moser VK123 456xyz" ```
Second function is not running in jquery I have two functions and I want to call one function after the first is completed. I wrote this: ``` $(document).ready(function () { FetchProducts('@Model.ProductId', function () { SimilarProducts('@Model.Class.Group.SectionId', '@Model.ProductId', '@TempData["Min"]', '@TempData["Max"]'); }); }); ``` `FetchProducts` function runs an ajax call that will fill `TempData["Min"]` and `TempDate["Max"]` and returns a list of products. `SimilarProducts` want to make another ajax request by min and max to get some similar products. `FetchProducts` is running properly but `SimilarProducts` is not running. Whats the problem? **Update** This is `FetchProducts` function: ``` function FetchProducts(productId) { $.getJSON("/product/jsonproducts", { productId: productId }, function (data) { var i = 0; $(".section-items-details").empty(); for (var i = 0; i < data.length; i++) { ... } }); } ``` And this is `SimilarProducts` function: ``` function SimilarProducts(sectionId,productId, minimum, maximum) { $.getJSON("/product/getsimilarproducts", { sectionId: sectionId, productId: productId, min: minimum, max: maximum }, function (data) { var i = 0; for (var i = 0; i < data.length; i++) { ... } }); } ```
Oh well, since your update I can tell you where the error is :) So, let's talk about your 'FetchProducts' function:

```
function FetchProducts(productId) {
    $.getJSON("/product/jsonproducts", { productId: productId }, function (data) {
        var i = 0;
        $(".section-items-details").empty();
        for (var i = 0; i < data.length; i++) {
            ...
        }
    });
}
```

As we can see, your 'FetchProducts' function only accepts one argument, but in its call you added an anonymous function as a second argument. In order to make it work properly, you should edit your 'FetchProducts' function like this:

```
function FetchProducts(productId, callback) {
    $.getJSON("/product/jsonproducts", { productId: productId }, function (data) {
        var i = 0;
        $(".section-items-details").empty();
        for (var i = 0; i < data.length; i++) {
            ...
        }
        ...
        // Everything is ok, let's call our callback function!
        if ($.isFunction(callback))
            callback();
    });
}
```
Converting dates with PHP for DATETIME in SQL

I have a forum in PHP which takes a date in the form `dd/mm/yyyy hh:mm:ss`. However, I need to insert it for SQL as a DATETIME in the format `yyyy-mm-dd hh:mm:ss`. How can I convert this data?
Your date time format is wrong: `dd/mm/yyyy hh:mm:ss`. You probably mean `d/m/Y H:i:s`.

If you have PHP 5.3+, there is a safe way to convert the date time into another format. Here's an example:

```
$timestamp = '31/05/2001 12:22:56';
$timestamp = DateTime::createFromFormat('d/m/Y H:i:s', $timestamp);
echo $timestamp->format('Y-m-d H:i:s');
```

or, if you like a more procedural way:

```
$timestamp = '31/05/2001 12:22:56';
$timestamp = date_create_from_format('d/m/Y H:i:s', $timestamp);
echo date_format($timestamp, 'Y-m-d H:i:s');
```

Be careful with the previous suggestions. Some are completely wrong and others could lead to errors.
Rust 'for loop' (converting from C++)

Trying to convert this for loop from C++ to Rust, and I'm having a hard time figuring it out as I'm very new to Rust syntax.

```
double sinError = 0;
for (float x = -10 * M_PI; x < 10 * M_PI; x += M_PI / 300) {
    double approxResult = sin_approx(x);
    double libmResult = sinf(x);
    sinError = MAX(sinError, fabs(approxResult - libmResult));
}
```
# Iterate over integers As @trentcl already pointed out, it's usually better to iterate over integers instead of floats, to prevent numerical errors from adding up: ``` use std::f32::consts::PI; let mut sin_error = 0.0; for x in (-3000..3000).map(|i| (i as f32) * PI / 300.0) { sin_error = todo!(); } ``` Just replace `todo!()` with the code that computes the next `sin_error`. ### A more functional way ``` use std::f32::consts::PI; let sin_error = (-3000..3000) .map(|i| (i as f32) * PI / 300.0) .fold(0.0, |sin_error, x| todo!()); ``` In case you don't care about numerical errors, or want to iterate over something else, here are some other options: # Use a `while` loop It's not as nice, but does the job! ``` use std::f32::consts::PI; let mut sin_error = 0.0; let mut x = -10.0 * PI; while (x < 10.0 * PI) { sin_error = todo!(); x += PI / 300.0; } ``` # Create your iterator with `successors()` The `successors()` function creates a new iterator where each successive item is computed based on the preceding one: ``` use std::f32::consts::PI; use std::iter::successors; let mut sin_error = 0.0; let iter = successors(Some(-10.0 * PI), |x| Some(x + PI / 300.0)); for x in iter.take_while(|&x| x < 10.0 * PI) { sin_error = todo!(); } ``` ### A more functional way ``` use std::f32::consts::PI; use std::iter::successors; let sin_error = successors(Some(-10.0 * PI), |x| Some(x + PI / 300.0)) .take_while(|&x| x < 10.0 * PI) .fold(0.0, |sin_error, x| todo!()); ```
react.js setState call with key but without value? Just started to learn react.js and javascript. I'm going through all the documentation on facebook's github, but got stuck with this. In the *handleCelsiusChange* method of *Calculator* class in [Lifting state up](https://facebook.github.io/react/docs/react-component.html) chapter there is this line: ``` this.setState({scale: 'c', value}); ``` So *scale* will get the value 'c'. Okay. But what is this *value* being simply there? Shouldn't it be a key-value pair? I've checked the [explanation of setState()](https://facebook.github.io/react/docs/react-component.html): > > The first argument can be an object (containing zero or more keys to > update) or a function (of state and props) that returns an object > containing keys to update. > > > But it says nothing relevant about this usage. *Thanks! :)*
That's actually a feature of ES6. If the key matches an existing variable name you can use this shorthand syntax. So instead of writing `value: value` you can simply write `value` as key and variable name are the same. **Example with ES6** ``` function getCar(make, model, value) { return { // with property value shorthand // syntax, you can omit the property // value if key matches variable // name make, model, value }; } ``` **The equivalent of the above in ES3/ES5** ``` function getCar(make, model, value) { return { make: make, model: model, value: value }; } ``` *Example taken from [http://www.benmvp.com/learning-es6-enhanced-object-literals/](http://www.benmvp.com/learning-es6-enhanced-object-literals/_)*
How to preserve POST data via ajax request after a .htaccess redirect? **.htacesss** ``` RewriteCond %{REQUEST_URI} ^/api/(.+)$ RewriteRule ^api/(.+)$ /index.php?api=%1 [QSA,L] ``` example ajax url request: 'http://hostname.com/api/ext/list.php?query=de' I want to be able to redirect urls in this format to the following index.php?api={requested\_filename}&param1=value1&param2=value2 ... because the whole site is processed through a bootstrap process in index.php which has a routing part loading configs, templates etc... When I try a jquery code for example, the POST data is lost after redirect. ``` $.ajax({ url: '/api/contact.php', type: 'POST', data: { email: $("#contactEmail").val(), name: $("#contactName").val(), message: $("#contactMessage").val() // etc ... } }); ``` I've read that you cannot preserve data on a http redirect. But how do all the frameworks avoid that? I've coded in many, and every one is bootstraped through the index.php and there are rewrite rules in the .htaccess file for enabling pretty urls. So in Yii for example, I would call an url "api/uploads/latests.json" with some POST data and the controllers on the backend would receive that data. What am i missing here? note: I've tested the **[P]** mod\_rewrite parameter, and i think that this server doesn't have mod\_proxy enabled.
There is a difference between a rewrite and a redirect. A rewrite is handled by an Apache module (mod_rewrite; other servers have equivalents) that follows a set of conditions/rules to map a requested URL to files on the server (for example, a bootstrap rewrites all URLs to a single file, usually index.php; an MVC framework might map a /model/controller/view URI to an index.php that calls the appropriate MVC files). A redirect actually changes the page you are on: someone requests page A.php and that page says "what you are looking for is on B.php", so your browser goes to B.php.

A rewrite will preserve POST parameters because the URL doesn't change; it only changes the script being requested, and to the browser it looks like the page still exists at the requested URL. A redirect will not preserve POST parameters because the server sends you to another page completely.

What it appears you are trying to do is a rewrite, not a redirect, so you should have no problems getting the POST parameters. To fix this: how are you checking in index.php that there are no POST parameters? Are you sure the controller you are expecting is getting called?
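To illustrate the distinction with the rule from the question (a sketch; the only change between the two lines is the added `R` flag):

```apache
# Internal rewrite: the browser still sees /api/..., the request is mapped
# to index.php server-side, and the POST body and method are preserved.
RewriteRule ^api/(.+)$ /index.php?api=$1 [QSA,L]

# External redirect: the R flag sends the browser a 302 to the new URL;
# the browser then issues a fresh GET request, so the POST data is lost.
RewriteRule ^api/(.+)$ /index.php?api=$1 [QSA,R=302,L]
```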
What does this command do? "exec bash -l"

What does this command do?

```
exec bash -l
```

I found this command as part of a reminder text file where I wrote some instructions regarding how to create an SSH key and clone a git repo, but I wrote it a long time ago and I can't remember what it does.
`exec` executes a specified command, replacing the current process rather than starting a new subprocess. If you type ``` bash -l ``` at a shell prompt, it will invoke a new shell process (the `-l` makes it a login shell). If you exit that shell process, you'll be back to your original shell process. Typing ``` exec bash -l ``` means that the new shell process *replaces* your current shell process. It's probably slightly less resource intensive. The reason for doing it is probably so that the new shell sets up its environment (by reading your `.bashrc`, `.bash_profile`, etc.). See the bash documentation for more information: - [Bash Startup Files](http://www.gnu.org/software/bash/manual/html_node/Bash-Startup-Files.html) for how a login shell differs from a non-login shell - [Bourne Shell Builtins](http://www.gnu.org/software/bash/manual/html_node/Bourne-Shell-Builtins.html) for documentation on the `exec` command. (You should be able to read the manual on your own system by typing `info bash`.)
Serving a multitude of static sites from a wildcard domain in AWS I've got a pretty specific problem here, we've got a system that we already have and maintain, the system involves using subdomains to route people to specific apps. on a traditional server that goes like follows; we have a wildcard subdomain, \*.domain.com that routes to nginx and serves up a folder so myapp.domain.com > nginx > serves up myapp app folder > myapp folder contains a static site I'm trying to migrate this in some way to AWS, I basically need to do a similar thing in AWS, I toyed with the idea of putting each static app into an s3 bucket and then the wildcard domain in route 53 but i'm unsure how s3 would know which folder to serve up as that functionality isn't part of route 53 Anyone have any suggestions? Thanks for all your help
CloudFront + Lambda@Edge + S3 can do this "serverless."

Lambda@Edge is a CloudFront enhancement that allows attributes of requests and responses to be represented and manipulated as simple JavaScript objects. Triggers can be provisioned to fire during request processing, either before the cache is checked ("viewer request" trigger) or before the request proceeds to the back-end ("origin server", an S3 web site hosting endpoint, in this case) following a cache miss ("origin request" trigger)... or during response processing, after the response is received from the origin but before it is considered for storing in the CloudFront cache ("origin response" trigger), or when finalizing the response to the browser ("viewer response" trigger). Response triggers can also examine the original request object.

The following snippet is something I originally [posted](https://forums.aws.amazon.com/thread.jspa?messageID=812462&#812462) at the AWS Forums. It is an Origin Request trigger which compares the original hostname to your pattern (e.g. the domain must match `*.example.com`) and, if it does, strips that suffix and serves the request from a folder named for the subdomain prefix of `subdomain-here.example.com`.

```
lol.example.com/cat.jpg -> my-bucket/lol/cat.jpg
funny-pics.example.com/cat.jpg -> my-bucket/funny-pics/cat.jpg
```

In this way, static content from as many subdomains as you like can all be served from a single bucket.

In order to access the original incoming `Host` header, CloudFront needs to be configured to [whitelist the Host header for forwarding to the origin](https://aws.amazon.com/premiumsupport/knowledge-center/configure-cloudfront-to-forward-headers/) even though the net result of the Lambda function's execution will be to modify that value before the origin actually sees it.

The code is actually very simple -- most of the following is explanatory comments.

```
'use strict';

// if the end of incoming Host header matches this string,
// strip this part and prepend the remaining characters onto the request path,
// along with a new leading slash (otherwise, the request will be handled
// with an unmodified path, at the root of the bucket)
const remove_suffix = '.example.com';

// provide the correct origin hostname here so that we send the correct
// Host header to the S3 website endpoint
const origin_hostname = 'example-bucket.s3-website.us-east-2.amazonaws.com';

// see comments, below
exports.handler = (event, context, callback) => {
    const request = event.Records[0].cf.request;
    const headers = request.headers;
    const host_header = headers.host[0].value;
    if(host_header.endsWith(remove_suffix))
    {
        // prepend '/' + the subdomain onto the existing request path ("uri")
        request.uri = '/' + host_header.substring(0,host_header.length - remove_suffix.length) + request.uri;
    }

    // fix the host header so that S3 understands the request
    headers.host[0].value = origin_hostname;

    // return control to CloudFront with the modified request
    return callback(null,request);
};
```

Note that index documents and redirects from S3 may also require an Origin Response trigger to normalize the `Location` header against the original request. This will depend on exactly which S3 website features you use. But the above is a working example that illustrates the general idea.

Note that `const origin_hostname` needs to be set to the bucket's endpoint hostname as configured in the CloudFront origin settings. In this example, the bucket is in us-east-2 with the web site hosting feature active.
When you type dir in terminal, file names with spaces are shown with the backslash symbol \

### Terminal Text

```
$ cd LALALA
~/LALALA $ dir
la\ la\ 1  la\ la\ 2
~/LALALA $
```

### Directory Image

![Directory Image](https://i.stack.imgur.com/zv1zf.png)

When I use the `dir` command in the terminal, the list of files shows `\` (backslash) before the spaces in their names (see: Terminal Text). Whereas the original file names contain no '\' (backslash) (see: Directory Image). How do I change this back to normal?

NB. I think the reason is the `pwd` command on the terminal. After I used the `pwd` command, this started happening.
What you experience is *quoting*, `dir` has the `-N` or `--literal` option to disable it: ``` -N, --literal print entry names without quoting ``` The exact same applies to `ls`, just that it quotes the whole filenames with single quotes instead of escaping special characters (which can be triggered with `-b` or `--escape` for `ls` as well). Calling `ls` with `-N` disables this behaviour as well. ### Example run ``` $ touch 'la la '{1,2} $ dir la\ la\ 1 la\ la\ 2 $ dir -N la la 1 la la 2 $ ls 'la la 1' 'la la 2' $ ls -b la\ la\ 1 la\ la\ 2 $ ls -N la la 1 la la 2 ``` ### Further reading - [`man dir`](http://manpages.ubuntu.com/manpages/bionic/en/man1/dir.1.html) and [`info dir`](https://www.gnu.org/software/coreutils/manual/html_node/dir-invocation.html#dir-invocation) - [Difference between 'dir' and 'ls' terminal commands?](https://askubuntu.com/q/103913/507051) - Regarding the similar behaviour of `ls`: [Filenames with space showing as 'file name' after upgrade to 18.04](https://askubuntu.com/q/1105867/507051)
What do negative values in the output matrices of FeatureHasher mean? I expected all values in the output sparse matrices of `FeatureHasher` to be non-negative because I thought it should just count the term frequencies and index the terms with the help of a hashing function. When collision happens, I expect it to just add up the frequencies of the 'collided terms'. However, it seems I was wrong since it in fact outputted a bunch of negative values, given a list of dictionary with terms as keys and term frequencies as values. What exactly do negative values mean here?
If the input values were all positive (term frequencies), then the negative values don't actually mean anything. Citing the [scikit-learn docs](http://scikit-learn.org/stable/modules/feature_extraction.html#feature-hashing): > > Since the hash function might cause collisions between (unrelated) features, a signed hash function is used and the sign of the hash value determines the sign of the value stored in the output matrix for a feature. This way, collisions are likely to cancel out rather than accumulate error, and the expected mean of any output feature’s value is zero. > > > The Wikipedia has a [little table](https://en.wikipedia.org/wiki/Feature_hashing#Feature_vectorization_using_the_hashing_trick) showing the effect of this scheme, which was invented by [Weinberger et al.](http://alex.smola.org/papers/2009/Weinbergeretal09.pdf). Not only does it alleviate the effect of collisions, but it makes your features behave more like Gaussians, which helps some learning algorithms. If you want to get rid of the negative values, then pass `non_negative=True` to the `FeatureHasher` constructor. The implementation of that option is a bit of a hack, but it makes tf-idf and naive Bayes work again.
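For a quick way to see both behaviours, here is a small sketch (note the constructor flag is version-dependent: older scikit-learn releases accept `non_negative=True`, newer ones replaced it with `alternate_sign=False`):

```python
from sklearn.feature_extraction import FeatureHasher

docs = [{"cat": 2, "dog": 1}, {"cat": 1, "mouse": 3}]

# Default behaviour: a signed hash is used, so some stored values
# can come out negative even though the inputs are plain counts.
hasher = FeatureHasher(n_features=8)
print(hasher.transform(docs).toarray())

# To keep raw non-negative counts instead, depending on your version:
#   FeatureHasher(n_features=8, non_negative=True)      # older releases
#   FeatureHasher(n_features=8, alternate_sign=False)   # newer releases
```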
Cordova 7 - config.xml or package.json? I've successfully created and released an application on Cordova - so I'm not new to the platform, but it wasn't without its hiccups and frustrations. With Cordova 7, we now have config.xml and package.json, largely containing the same information. If I want to change, say the version number, or the title, **which one do I edit** and how do I then update the other file accordingly? Nothing I've tried seems to work, and it seems rather redundant to have to make the changes twice?
**EDIT**: This is no longer accurate, as of Cordova 9. There is an [issue](https://github.com/apache/cordova-docs/issues/1004#issuecomment-510598007) to document this in the docs, which has not been addressed yet. It seems as currently the plan is to migrate away from `config.xml` entirely, however that does not seem to be complete. See also [this answer](https://stackoverflow.com/a/55752850/3067153). --- From what i can read at the [Cordova 7 Release Notes](http://cordova.apache.org/news/2017/05/04/cordova-7.html), the `package.json` will always be created to mirror the `config.xml` whenever you run `cordova prepare`, if it does not exist. If a `package.json` does exist, it will take preference (but only for the things defined in it, like plugins and platforms, for other config options `config.xml` will still be used). So, it might be a solution for you to save your settings in `config.xml`, and always delete the `package.json` before running `cordova prepare`, maybe with a custom npm script.
Heatmap color key with five different colors I have the following code and not sure how to use it to display a heatmap with a color key displaying five different colors representing defined values: ``` hm <- heatmap.2(data_matrix, scale="none",Rowv=NA,Colv=NA,col = rev(brewer.pal(11,"RdBu")),margins=c(5,5),cexRow=0.5, cexCol=1.0,key=TRUE,keysize=1.5, trace="none") ``` Color key required: ``` <0.3 (blue) 0.3-1 (green) 1-1.3 (yellow) 1.3-3.0 (orange) >3.0 (red) ``` I would be happy if someone can help. Thanks! James
``` require(gplots) require(RColorBrewer) ## Some fake data for you data_matrix <- matrix(runif(100, 0, 3.5), 10, 10) ## The colors you specified. myCol <- c("blue", "green", "yellow", "orange", "red") ## Defining breaks for the color scale myBreaks <- c(0, .3, 1, 1.3, 3, 3.5) hm <- heatmap.2(data_matrix, scale="none", Rowv=NA, Colv=NA, col = myCol, ## using your colors breaks = myBreaks, ## using your breaks dendrogram = "none", ## to suppress warnings margins=c(5,5), cexRow=0.5, cexCol=1.0, key=TRUE, keysize=1.5, trace="none") ``` This should work, and give you some ideas of how to edit it further if you'd like. To get the legend with your exact values, I wouldn't bother with the built-in histogram and would instead just use `legend`: ``` hm <- heatmap.2(data_matrix, scale="none", Rowv=NA, Colv=NA, col = myCol, ## using your colors breaks = myBreaks, ## using your breaks dendrogram = "none", ## to suppress warnings margins=c(5,5), cexRow=0.5, cexCol=1.0, key=FALSE, trace="none") legend("left", fill = myCol, legend = c("0 to .3", "0.3 to 1", "1 to 1.3", "1.3 to 3", ">3")) ```
Porting AWT graphics code to Android We would like to use some of our existing Java AWT graphics code on the Android platform. As far as I can tell, Android does not include any of the AWT classes -- no `Graphics2D`, `Path2D`, `FontMetrics`, etc. What is the best approach to port our drawing code to Android? Ideally, we would like to modify our code base to target *both* Android and generic Java.
The Android platform supports a small subset of AWT. By small, I mean it supports AWT fonts. Going from Java Swing (are you really just using AWT as a standalone UI?) to Android is going to be a shock to the system. One defines Android's UI in XML resource files, and those resources are loaded into Activity classes, each of which represents a logical unit of the application. Canvases replace Graphics2D objects, and they have somewhat different functionality. The Android UI system seeks to avoid absolute positioning, which is common in Java UIs. And there is so much more to consider for Android, like varying screen sizes and differences in resolution, neither of which was much of an issue for Java SE. To answer your question: you have your work cut out for you and you will have to start much of your coding from scratch. I'm not sure what 'drawing code' means, but in my case, I have a large amount of animation code that is displayed via AWT shapes. This is reproducible on Android, as there are graphics.drawable.shapes objects, and one can display primitive shapes by doing something like canvas.drawCircle(x,y,z,h) (remind you of something?). But when I ported it, it was difficult and felt like I was contorting Android to do something it really didn't want to do, and wasn't intended to do. It's hard to answer your question exactly given its vagueness.
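For a feel of what the Canvas-based equivalent of an AWT `paint(Graphics g)` override looks like, here is a minimal sketch (the class name and the hard-coded radius are just illustrative):

```java
import android.content.Context;
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;
import android.view.View;

// Rough Android analogue of overriding paint() on an AWT component:
// a custom View that draws onto a Canvas instead of a Graphics2D.
public class CircleView extends View {

    private final Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);

    public CircleView(Context context) {
        super(context);
        paint.setColor(Color.BLUE);
        paint.setStyle(Paint.Style.FILL);
    }

    @Override
    protected void onDraw(Canvas canvas) {
        super.onDraw(canvas);
        // The Paint object carries the color/stroke state that would live
        // on a Graphics2D in AWT.
        canvas.drawCircle(getWidth() / 2f, getHeight() / 2f, 50f, paint);
    }
}
```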
C# 8 switch expression: Handle multiple cases at once? C# 8 introduced pattern matching, and I already found good places to use it, like this one: ``` private static GameType UpdateGameType(GameType gameType) { switch (gameType) { case GameType.RoyalBattleLegacy: case GameType.RoyalBattleNew: return GameType.RoyalBattle; case GameType.FfaLegacy: case GameType.FfaNew: return GameType.Ffa; default: return gameType; } } ``` which then becomes ``` private static GameType UpdateGameType(GameType gameType) => gameType switch { GameType.RoyalBattleLegacy => GameType.RoyalBattle, GameType.RoyalBattleNew => GameType.RoyalBattle, GameType.FfaLegacy => GameType.Ffa, GameType.FfaNew => GameType.Ffa, _ => gameType; }; ``` However, you can see I now have to mention `GameType.RoyalBattle` and `GameType.Ffa` twice. Is there a way to handle multiple cases at once in pattern matching? I'm thinking of anything like this, but it is not valid syntax: ``` private static GameType UpdateGameType(GameType gameType) => gameType switch { GameType.RoyalBattleLegacy, GameType.RoyalBattleNew => GameType.RoyalBattle, GameType.FfaLegacy, GameType.FfaNew => GameType.Ffa, _ => gameType; }; ``` I also tried things like ``` [GameType.RoyalBattleLegacy, GameType.RoyalBattleNew] => GameType.RoyalBattle ``` or ``` GameType.FfaLegacy || GameType.FfaNew => GameType.Ffa ``` but none are valid. Also did not find any example on this. Is it even supported?
As of C#9, you can do exactly what you wanted via "disjunctive `or`" patterns: ``` private static GameType UpdateGameType(GameType gameType) => gameType switch { GameType.RoyalBattleLegacy or GameType.RoyalBattleNew => GameType.RoyalBattle, GameType.FfaLegacy or GameType.FfaNew => GameType.Ffa, _ => gameType; }; ``` Further reading: - [What's new in C# 9.0: pattern matching enhancements](https://learn.microsoft.com/en-us/dotnet/csharp/whats-new/csharp-9#pattern-matching-enhancements) - [Patterns C# reference](https://learn.microsoft.com/en-us/dotnet/csharp/language-reference/operators/patterns#logical-patterns)
Sort Bullets in Database I have a column `[datatype:varchar(50)]` in database (SQL Server 2008) having Values as shown below: ``` 1 2 1.1.11 4.1 5 2.1 1.1 4 1.2.1 4.2.2 4.3 4.2 4.3.1 4.2.1 11.2 1.2.4 4.4 ``` these are numbered bullets for my records I need to sort them as grouping all the records in sequence `1,1.1,1.1.1,2,3.1,4,10.1,11.1....` Please help me in this regard.
``` WITH T(YourColumn) AS ( SELECT '1' UNION ALL SELECT '2' UNION ALL SELECT '1.1.11' UNION ALL SELECT '4.1' UNION ALL SELECT '5' UNION ALL SELECT '2.1' UNION ALL SELECT '1.1' UNION ALL SELECT '4' UNION ALL SELECT '1.2.1' UNION ALL SELECT '4.2.2' UNION ALL SELECT '4.3' UNION ALL SELECT '4.2' UNION ALL SELECT '4.3.1' UNION ALL SELECT '4.2.1' UNION ALL SELECT '11.2' UNION ALL SELECT '1.2.4' UNION ALL SELECT '4.4' ) SELECT * FROM T ORDER BY CAST('/' + YourColumn + '/' AS HIERARCHYID) ``` Returns ``` YourColumn ---------- 1 1.1 1.1.11 1.2.1 1.2.4 2 2.1 4 4.1 4.2 4.2.1 4.2.2 4.3 4.3.1 4.4 5 11.2 ``` Is that what you need?
Return a value or modify reference? I've seen both before, and as far as I know, it's pretty much subjective, but if given the option, which would you do and why? If the data were large, would there be any speed/memory benefit to one of them? ``` function processData(&$data_to_process) { // Pass by reference. // do something to the data } // ... somewhere else $this->processData($some_data); ``` or ``` function processData($data_to_process) { // Pass by value. // do something to the data return $data_to_process; } // ... somewhere else $some_data = $this->processData($some_data); ```
PHP copies on write, so if the data doesn't change in the function, using a reference only makes things run slower. In your case, you are changing the data, so a copy will occur. Test with the following: ``` <?php define('N', 100000); $data = range(1, N); srand(1); function ref(&$data) { $data[rand(1, N)] = 1; } function ret($data) { $data[rand(1, N)] = 1; return $data; } echo memory_get_usage()."\n"; echo memory_get_peak_usage()."\n"; ref($data); // $data = ret($data); echo memory_get_usage()."\n"; echo memory_get_peak_usage()."\n"; ?> ``` Run it once with `ref()` and once with `ret()`. My results: ### ref() - 8043280 (before / current) - 8044188 (before / peak) - 8043300 (after / current) - 8044216 (after / peak) ### ret() - 8043352 (before / current) - 8044260 (before / peak) - 8043328 (after / current) - 12968632 (after / peak) So, as you can see, PHP uses more memory when modifying the data in the function and returning it. So the optimal case is to pass by reference. However, passing by reference can be dangerous if it's not obvious that it is occurring. Often you can avoid this question altogether by encapsulating your data in classes that modify their own data. Note that if you use objects, PHP5 always passes them by reference.
URL Pattern with Spring MVC and Spring Security

I have seen some url-pattern entries in my XML, e.g. in filter-mapping, intercept-url, mvc:resources, etc. Are these patterns always the same? What's the difference among these URL patterns `/`, `/*`, `/**`?
It depends in which context you ask this question: 1. Servlet/Filter mappings 2. Spring Security mappings --- ## Servlet/Filter mappings > > In the Web application deployment descriptor, the following syntax is used to define mappings: > > > - A string beginning with a ‘/’ character and ending with a ‘/\*’ suffix is used for > path mapping. > - A string beginning with a ‘\*.’ prefix is used as an extension mapping. > - The empty string ("") is a special URL pattern that exactly maps to the > application's context root, i.e., requests of the form `http://host:port/<contextroot>/`. In this case the path info is ’/’ and the servlet path and context path is > empty string (““). > - A string containing only the ’/’ character indicates the "default" servlet of the > application. In this case the servlet path is the request URI minus the context path > and the path info is null. > - All other strings are used for exact matches only > > > This comes from the [Servlet Specification (JSR 315)](https://jcp.org/en/jsr/detail?id=315) (section 12.2). In a Servlet/Filter mapping scenario `/` means the "*default*" servlet, normally this is where the `DefaultServlet` (in Tomcat that is) is mapped to. Basically it handles all incoming requests and doesn't pass them on further down the chain for processing (basically this is the last-catch-all mapping). `/*` in the servlet mapping scenario means all incoming URLs (when it cannot be processed it will be handed of the the last-catch-all-mapping). --- ## Spring Security mappings Now when talking about **Spring**, `/`, `/*` and `/**` have a different meaning. They refer to so called Ant-style [path expressions](http://ant.apache.org/manual/dirtasks.html#patterns). Where `/` means only / (the root of your application), where `/*` means the root including one level deep and where `/**` means everything. So `/foo/*` will match a URL with `/foo/bar` but will not match `/foo/bar/baz`. Whereas `/**` or `/foo/**` would match all of them.
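As a rough illustration of how the Ant-style forms are typically used in a Spring Security namespace configuration (the paths and roles are made up, the surrounding namespace declarations are omitted, and expression-based access control is assumed to be enabled):

```xml
<http use-expressions="true">
    <!-- "/" : only the context root itself -->
    <intercept-url pattern="/" access="permitAll" />
    <!-- "/foo/*" : one level below /foo, e.g. /foo/bar but not /foo/bar/baz -->
    <intercept-url pattern="/foo/*" access="hasRole('USER')" />
    <!-- "/admin/**" : /admin and everything below it, any depth -->
    <intercept-url pattern="/admin/**" access="hasRole('ADMIN')" />
</http>
```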
Scala Playframework send file I have a string of data, which I get from data in my database. I want to send it to the user, but without creating a local copy of the file, something like ``` Ok(MyString).as("file/csv") ``` But it is not working. How can I do it?
You can do this by using `chunked` with an `Enumerator`. I've also used `withHeaders` to specify the content type and disposition of the `Result` to "attachment", so that the client will interpret it as a file to download (rather than opening in the browser itself). ``` import play.api.libs.iteratee.Enumerator val myString: String = ??? // the String you want to send as a file Ok.chunked(Enumerator(myString.getBytes("UTF-8")).andThen(Enumerator.eof)) .withHeaders( "Content-Type" -> "text/csv", "Content-Disposition" -> "attachment; filename=mystring.csv" ) ``` This might not compile right away, depending on the types you're getting from the database. Come to think of it, this should also work (without the `Enumerator`): ``` Ok(myString).withHeaders( /* headers from above */ ) ```
What is the difference between React Native and React? I have started to learn *React* out of curiosity and wanted to know the difference between React and React Native - though could not find a satisfactory answer using Google. React and React Native seems to have the same format. Do they have completely different syntax?
[ReactJS](https://reactjs.org/) is a JavaScript library for building user interfaces and web applications; it runs both in the browser and on a server, and it follows the concept of reusable components. [React Native](https://reactnative.dev/) is a mobile framework that makes use of the JavaScript engine available on the host and lets you build mobile applications for different platforms (iOS, Android, and Windows Mobile) in JavaScript; it allows you to use ReactJS to build reusable components and to communicate with native components ([further explanation](https://stackoverflow.com/questions/41124338/does-react-native-compile-javascript-into-java-for-android)). Both follow the JSX syntax extension of JavaScript, which compiles to `React.createElement` calls under the hood ([JSX in-depth](https://reactjs.org/docs/jsx-in-depth.html)). Both are open-sourced by Facebook.
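As a tiny illustration of the practical difference (the component names are made up): the component model and JSX are the same, but the primitives you render are not.

```jsx
// In a web app (ReactJS + react-dom), components render DOM elements:
const Hello = () => <div>Hello web</div>;

// In a React Native app, the same component style renders native views:
// import { Text } from 'react-native';
const HelloNative = () => <Text>Hello mobile</Text>;
```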
Is it possible to pass input to a running service or daemon? I want to create a Java console application that runs as a daemon on Linux, I have created the application and the script to run the application as a background daemon. The application runs and waits for command line input. My question: Is it possible to pass command line input to a running daemon?
On Linux, all running processes have a special directory under [`/proc`](http://tldp.org/LDP/Linux-Filesystem-Hierarchy/html/proc.html) containing information and hooks into the process. Each subdirectory of `/proc` is the PID of a running process. So if you know the PID of a particular process you can get information about it. E.g.: ``` $ sleep 100 & ls /proc/$! ... cmdline ... cwd environ exe fd fdinfo ... status ... ``` Of note is the `fd` directory, which contains all the [file descriptors](https://stackoverflow.com/q/5256599/113632) associated with the process. `0`, `1`, and `2` exist for (almost?) all processes, and `0` is the default stdin. So writing to `/proc/$PID/fd/0` will write to that process' stdin. A more robust alternative is to set up a [named pipe](https://en.wikipedia.org/wiki/Named_pipe) connected to your process' stdin; then you can write to that pipe and the process will read it without needing to rely on the `/proc` file system. See also [Writing to stdin of background process](https://serverfault.com/q/188936/134800) on ServerFault.
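If writing into `/proc/$PID/fd/0` feels too fragile, the named-pipe route mentioned above looks roughly like this (the pipe path and jar name are placeholders):

```bash
# Create a named pipe and start the daemon with its stdin connected to it
mkfifo /tmp/mydaemon.in
java -jar mydaemon.jar < /tmp/mydaemon.in &

# Keep one writer attached so the daemon doesn't see EOF
# as soon as a single echo finishes
exec 3> /tmp/mydaemon.in

# Later, from any shell on the machine, send the running process a command
echo "status" > /tmp/mydaemon.in
```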
SQL Server get path with recursive CTE I want to get the path for each department with this format 1.1, 1.2 and so on. This is my department table : ``` id name parentId -------------------- 1 Dep 1 0 2 Dep 2 1 3 Dep 3 0 4 Dep 4 1 5 Dep 5 4 6 Dep 6 2 ``` This is my recursive CTE that give me the parents and children in a flat table starting from a root department. ``` WITH recursiveCte (parentId, id, name, Level) AS ( -- Anchor member definition SELECT d.parentId, d.id, d.name, 0 AS Level FROM Department AS d WHERE parentId = 0 UNION ALL -- Recursive member definition SELECT d.parentId, d.id, d.name, Level + 1 FROM Department AS d INNER JOIN recursiveCte AS r ON d.parentId = r.id ) -- Statement that executes the CTE SELECT parentId,id, name, Level FROM recursiveCte ORDER BY id ``` Current results: ``` parentId id name Level ------------------------------- 0 1 Dep 1 0 1 2 Dep 2 1 0 3 Dep 3 0 1 4 Dep 4 1 4 5 Dep 5 2 2 6 Dep 6 2 ``` Desired results: ``` parentId id name Level Path -------------------------------------- 0 1 Dep 1 0 1 1 2 Dep 2 1 1.1 2 6 Dep 6 2 1.1.1 1 4 Dep 4 1 1.2 4 5 Dep 5 2 1.2.1 0 3 Dep 3 0 2 ``` Thanks.
Here is a working solution. It is difficult to describe in words why this works, so I recommend taking apart the query to see how it works yourself. Basically, we recursively build the path string you want to see, using `ROW_NUMBER` to keep track of which particular parent each new path addition belongs to.

```
WITH recursiveCte (parentId, id, name, Level, Path, FullPath) AS
(
    SELECT d.parentId, d.id, d.name, 0 AS Level,
        CAST(ROW_NUMBER() OVER (ORDER BY d.id) AS nvarchar(max)),
        RIGHT('000' + CAST(ROW_NUMBER() OVER (ORDER BY d.id) AS nvarchar(max)), 3)
    FROM Department AS d
    WHERE parentId = 0
    UNION ALL
    SELECT d.parentId, d.id, d.name, r.Level + 1,
        r.Path + '.' + CAST(ROW_NUMBER() OVER (PARTITION BY r.Level ORDER BY d.id) AS nvarchar(max)),
        r.FullPath + '.' + RIGHT('000' + CAST(ROW_NUMBER() OVER (PARTITION BY r.Level ORDER BY d.id) AS nvarchar(max)), 3)
    FROM Department AS d
    INNER JOIN recursiveCte AS r ON d.parentId = r.id
)
SELECT parentId, id, name, Level, Path, FullPath
FROM recursiveCte
ORDER BY FullPath;
```

[![enter image description here](https://i.stack.imgur.com/J8ASw.png)](https://i.stack.imgur.com/J8ASw.png)

## [Demo](http://rextester.com/DODYQ46889)

**Edit:** I slightly edited my original answer so that it now sorts the path string using a fixed-width version, i.e. every number has a fixed width of 3 digits. This means that `001` will always sort before `010`, which is the behavior we want.
Sscanf delimiters for parsing I want to parse /etc/passwd file using sscanf . Currently trying below snippet ``` sscanf(buf,"%s:%*s:%*u:%*u:%*s:%*s", szName, &ncUser_gid); ``` its wrong. i need to parse only username and respective group Id of that user
The basic answer is to use (negated) 'scan sets' — read the manual for [`sscanf()`](http://pubs.opengroup.org/onlinepubs/9699919799/functions/sscanf.html). ``` if (sscanf(buf, "%[^:]:%*[^:]:%*[^:]:%u", szName, &ncUser_gid) != 2) ``` This reads a sequence of non-colons into `szName`, and then skips a colon, the password field, a colon, the UID field, a colon, and reads the number in the next field into `ncUser_gid`. It also checks that you got both values, while ignoring the other trailing fields (comment, home, shell). Note that because you're using `sscanf()`, there really isn't a need to process any of the trailing fields. Also, there are 7 fields, not 6, in a password file entry. With `sscanf()`, this isn't a problem. If you were reading from a file, it would be. Also, if you were reading from a file, you'd have to worry about scan sets not skipping leading white space, which would be the newline left over from the previous line of input. For file-stream parsing, you'd need to use: ``` int rc; if ((rc = fscanf(fp, " %[^:]:%*[^:]:%u:%u:%[^:]:%[^:]:%[^:]", username, &uid, &gid, comment, homedir, shell)) != 5 && rc != 6) …handle format error… if (rc == 5) shell[0] = '\0'; ``` Note that there does not have to be any data for the shell field. This would run foul of an empty comment field, too, but that is normally populated. Note that it skipped the password; it is seldom interesting in modern versions of Unix. ### `sscanf()` example ``` #include <stdio.h> int main(void) { char buf[] = "root:*:0:1:System Administrator:/var/root:/bin/sh"; char szName[10] = "Pygmalion"; // Make sure it isn't empty!; unsigned int ncUser_gid = 23456; // Make sure it isn't zero! if (sscanf(buf, "%[^:]:%*[^:]:%*[^:]:%u", szName, &ncUser_gid) != 2) printf("Ooops!\n"); else printf("User: [%s]; GID = %u\n", szName, ncUser_gid); return 0; } ``` Output: ``` User: [root]; GID = 1 ``` (I hacked the entry so the UID and GID are different.) ### `fscanf()` example ``` #include <stdio.h> int main(void) { const char passwd[] = "/etc/passwd"; FILE *fp = fopen(passwd, "r"); if (fp == 0) { fprintf(stderr, "failed to open '%s' for reading\n", passwd); return 1; } char username[64]; unsigned uid; unsigned gid; char comment[64]; char homedir[64]; char shell[64]; int rc; while (!feof(fp)) { if ((rc = fscanf(fp, " %63[^:\n]:%*[^:\n]:%u:%u:%63[^:\n]:%63[^:\n]:%63[^:\n]", username, &uid, &gid, comment, homedir, shell)) != 5 && rc != 6) { int c; while ((c = getc(fp)) != EOF && c != '\n') ; } else { if (rc == 5) shell[0] = '\0'; printf("[%s] %u %u [%s] [%s] [%s]\n", username, uid, gid, comment, homedir, shell); } } return 0; } ``` Note that on a Mac, the password file starts with a number of lines of `#` comments. The `%[^:\n]` notation, or something similar, is required to avoid problems parsing that section of the file. On sane systems without such comment lines in the file, you can probably get away without them. Note too that the code protects itself from overflow in the string fields. Also, I continued with `unsigned` integers for UID and GID, but `nobody` has a negative value `-2` for both UID and GID, so a signed type might be better. Example output: ``` [nobody] 4294967294 4294967294 [Unprivileged User] [/var/empty] [/usr/bin/false] [root] 0 0 [System Administrator] [/var/root] [/bin/sh] [daemon] 1 1 [System Services] [/var/root] [/usr/bin/false] … ``` All users have a specific shell specified on my Mac so the 'rc == 5' code hasn't really been tested. Sample output: ``` [# Open Directory. 
## nobody] 4294967294 4294967294 [Unprivileged User] [/var/empty] [/usr/bin/false ``` JFTR: Tested on a Mac running macOS 10.12.5 using GCC 7.1.0. Compilation command line like: ``` $ gcc -O3 -g -std=c11 -Wall -Wextra -Werror -Wmissing-prototypes \ > -Wstrict-prototypes pw89.c -o pw89 $ ```
ASP.Net Menu Bar is not being displayed I built a Menu Bar to my site master in ASP.net: ``` <div class="MenuBar"> <asp:ContentPlaceHolder ID="MainContent" runat="server"> <asp:Menu ID="menuBar" runat="server" Orientation="Vertical" Width="100%"> <DynamicHoverStyle CssClass="DynamicHover" /> <DynamicMenuItemStyle CssClass="DynamicMenuItem" /> <DynamicSelectedStyle CssClass="DynamicHover" /> <StaticHoverStyle CssClass="staticHover" /> <StaticMenuItemStyle CssClass="StaticMenuItem" ItemSpacing="1px" /> <StaticSelectedStyle CssClass="staticHover" /> </asp:Menu> </asp:ContentPlaceHolder> </div> ``` Code behind: ``` public partial class SiteMaster : System.Web.UI.MasterPage { protected void Page_Load(object sender, EventArgs e) { if (!IsPostBack) { getMenu(); } } private void getMenu() { Menu menuBar = new Menu(); SqlConnection con = new SqlConnection(); con.ConnectionString = "server=(local);database=PhilipsMaterials;Integrated Security=SSPI;"; con.Open(); DataSet ds = new DataSet(); DataTable dt = new DataTable(); string sql = "Select [Material Name] from Materials"; SqlDataAdapter da = new SqlDataAdapter(sql, con); da.Fill(ds); dt = ds.Tables[0]; DataRow[] drowpar = dt.Select(); String s = "sss"; foreach (DataRow dr in drowpar) { menuBar.Items.Add(new MenuItem(dr["Material Name"].ToString())); } con.Close(); } ``` } For some reason the menu is not being displayed when I view the site on the browser. Someone knows why? Thanks.
You instantiate a new Menu object but never assign it to the existing menu. You don't need to instantiate a new one at all; just replace your function with the one below:

```
private void getMenu()
{
    // Menu menuBar = new Menu();
    SqlConnection con = new SqlConnection();
    con.ConnectionString = "server=(local);database=PhilipsMaterials;Integrated Security=SSPI;";
    con.Open();
    DataSet ds = new DataSet();
    DataTable dt = new DataTable();
    string sql = "Select [Material Name] from Materials";
    SqlDataAdapter da = new SqlDataAdapter(sql, con);
    da.Fill(ds);
    dt = ds.Tables[0];
    DataRow[] drowpar = dt.Select();
    String s = "sss";
    foreach (DataRow dr in drowpar)
    {
        menuBar.Items.Add(new MenuItem(dr["Material Name"].ToString()));
    }
    con.Close();
}
```

And call it from **Page_PreRender** and not from Page_Load.
Dynamic Linq OR with int and string I have a very simple query that would look like this ``` select * from job where jobId like '%23%' or title like '%23%' ``` I need to be able to replicate this using dynamic Linq The closest i've come to is this, but it doesn't work ``` .Where("@0.Contains(jobId) or title.Contains(@0)", "23"); ``` Has anyone got a solution to this, ideally I would like it to do a like on both int's and strings **addendum based on comments** The error is: > > An exception of type 'System.Linq.Dynamic.ParseException' occurred in System.Linq.Dynamic.dll but was not handled in user code Additional information: No applicable method 'Contains' exists in type 'String' > > > The `jobId` field is an `int`, while `title` is a `varchar`.
Your query is nearly right: ``` .Where("@0.Contains(jobId.ToString()) or title.Contains(@0)", "23") ``` Entity Framework (I hope you are using it) correctly changes `jobId.ToString()` to `CAST( [Extent1].[Id] AS nvarchar(max))`... It then uses a `CHARINDEX` instead of a `LIKE`, but this isn't a problem. The query I get, with Entity Framework 6.1.3 on SQL Server is: ``` SELECT [Extent1].[jobId] AS [jobId], [Extent1].[title] AS [title] FROM [dbo].[job] AS [Extent1] WHERE (( CAST(CHARINDEX( CAST( [Extent1].[jobId] AS nvarchar(max)), N'23') AS int)) > 0) OR ([Extent1].[title] LIKE N'%23%') ```
Fabric.js text width being ignored I am trying to add text to a canvas from a form input. Is there a way to automatically wrap the fabricjs text, to fit inside the canvas? There are 3 issues I am trying to overcome: 1. the text doesn't respect the 'text.left' position if the string is longer than the given space. 2. I cannot escape newlines, so the \n is written inline with the text. 3. Center-align is completely ignored until text is updated. Here is my fabric text: ``` var text = new fabric.Text($('#myInput').text(), { left: 10, top: 12, width: 230, textAlign: 'center', fontSize: 28, fontFamily: 'Helvetica Nue, Helvetica, Sans-Serif, Arial, Trebuchet MS' }); ``` A [fiddle](http://jsfiddle.net/dval/Kz7VL/) showing the issue. How do I insert a newline? How do I center-align the text in the given text-block? How do position the text-block on the canvas without having to make edits? Edit: I have found that part of my problem was from using a text input. I changed that to a textarea element, and now newlines can be inserted easily. Weirdly, centering also works when I do this. I guess the only issue is that the text doesn't auto-wrap when it's wider than the given text-block width. So, instead of wrapping and adding lines, it just extends beyond the canvas. is there a way to do wordwrap type styling or do I have to count characters and insert linebreaks?
I know this one's old, but I ran into this same issue. In my case, the text is in a group, so I want it to wrap to a specific width, but allow the group to resize (so I couldn't use the `TextBox`). I ended up pre-processing the string to put line breaks when the set width is met. ``` function addTextBreaks(text, width, fontSize) { text = text.trim(); var words = text.toString().split(' '), canvas = document.createElement('canvas'), context = canvas.getContext('2d'), idx = 1, newString = ''; context.font = fontSize + 'px Lato'; while (words.length > 0 && idx <= words.length) { var str = words.slice(0, idx).join(' '), w = context.measureText(str).width; if (w > width) { if (idx == 1) { idx = 2; } newString += words.slice(0, idx - 1).join(' '); newString += '\n'; words = words.splice(idx - 1); idx = 1; } else { idx += 1;} } if (idx > 0) { var txt = words.join(' '); newString += txt; } return newString; } ```
@IfProfileValue not working with JUnit 5 SpringExtension I use JUnit 5 with spring-starter-test; in order to run Spring tests I need to use `@ExtendWith` instead of `@RunWith`. However, `@IfProfileValue` works with `@RunWith(SpringRunner.class)` but not with `@ExtendWith(SpringExtension.class)`. Below is my code:

```
@SpringBootTest
@ExtendWith({SpringExtension.class})
class MyApplicationTests{

    @Test
    @DisplayName("Application Context should be loaded")
    @IfProfileValue(name = "test-groups" , value="unit-test")
    void contextLoads() {
    }
}
```

So `contextLoads` should be ignored, since the environment doesn't specify `test-groups`, but the test just runs and ignores the `@IfProfileValue`.
I found out that `@IfProfileValue` is only supported for JUnit 4; in JUnit 5 we use `@EnabledIf` and `@DisabledIf` instead. Reference: <https://docs.spring.io/spring/docs/current/spring-framework-reference/testing.html#integration-testing-annotations-meta>

Update: Thanks to @SamBrannen's comment, I now use JUnit 5's built-in support with regex matching and wrap it in an annotation.

```
@Target({ ElementType.TYPE, ElementType.METHOD })
@Retention(RetentionPolicy.RUNTIME)
@EnabledIfSystemProperty(named = "test-groups", matches = "(.*)unit-test(.*)")
public @interface EnableOnIntegrationTest {}
```

So any test method or class with this annotation will run only when the `test-groups` system property contains `unit-test`.

```
@Test
@DisplayName("Application Context should be loaded")
@EnableOnIntegrationTest
void contextLoads() {
}
```
Meaning of an immutable name in a Julia package I just downloaded the package RandomMatrices.jl for Julia because I need to compute the Tracy-Widom densities. However, the docs do not help me much in understanding what this means:

```
immutable TracyWidom <: ContinuousUnivariateDistribution end
```

What kind of type is this `TracyWidom` name? What should I supply in order to do some tests with the function

```
pdf(d::TracyWidom, t::Real)
```

contained in TracyWidom.jl?
The package defines a `TracyWidom` type, which has no fields. You can create a new instance of the `TracyWidom` type by typing `TracyWidom()`. To calculate the PDF of the `TracyWidom` distribution, use `pdf(TracyWidom(), 0.5)`. You can figure this out from the `pdf` type signature: the first argument should be an object of type `TracyWidom`, and the second of type `Real`. Julia has a [rich type system](http://docs.julialang.org/en/latest/manual/types/), but relevant here are the following: `abstract`, which can't be created, `immutable`, which can't be modified after being created, and `type`, which is the "normal" kind. In [Distributions.jl](https://github.com/JuliaStats/Distributions.jl/blob/master/src/univariate/normal.jl), which this is based on, there are types defined for each distribution, e.g. the normal distribution is

```
immutable Normal <: ContinuousUnivariateDistribution
    μ::Float64
    σ::Float64
    # Some other stuff...
end
```

which makes more sense because it's defined by those two parameters; `TracyWidom` doesn't need any parameters, but we still follow the same style. The `<: ContinuousUnivariateDistribution` means that `Normal` and `TracyWidom` are both `ContinuousUnivariateDistribution`s, which is an abstract type.
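If it helps, here is a minimal REPL sketch (assuming `pdf` for `TracyWidom` is exported by RandomMatrices.jl; otherwise qualify the call as `RandomMatrices.pdf`):

```
using RandomMatrices

d = TracyWidom()   # no fields, so the constructor takes no arguments
pdf(d, 0.5)        # density of the Tracy-Widom distribution at t = 0.5
```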
xubuntu 16.04: how do I make it lock screen automatically after timeout With Xubuntu 16.04, how do you get it to lock the screen automatically after a number of minutes of inactivity? Hard to believe I'm asking this, it seems like a basic thing. With other Ubuntu flavors and versions the default has been, screen locks after a certain number of minutes, and the setting is settable, so to speak -- I don't remember exactly where but there's a GUI control for it someplace sensible. I have searched a lot for the answer but what I've found seems to be dated.
You have to adjust two things for this.

## 1. Configure the time needed for the display to go into **Blank Mode**

1. Open **Settings Manager**
2. Go to the **Hardware** section
3. Click on **Power Manager**
4. Click on the **Display** tab
5. There is an option *Handle display power management*. Enable that option by checking it. Then change the timing parameter using the slider labeled **Blank After**, setting whatever value you want. Remember, less than 1 minute means Never.

**Note:** If you can't find the **Power Manager** settings under the Hardware section, you're missing the `xfce4-power-manager` package. Install it with this command

```
sudo apt install xfce4-power-manager
```

Here is a screenshot of the Settings Window

[![Settings Window](https://i.stack.imgur.com/j2Ruv.png)](https://i.stack.imgur.com/j2Ruv.png)

## 2. Configure the time that must pass to activate the lock

This is controlled by Xscreensaver in Xubuntu.

1. Open **Settings Manager**
2. Go to the **Personal** section
3. Click **Screensaver**
4. While in the **Display Modes** tab, at the bottom of it, there is a setting labeled **Lock Screen After** [N] minutes. This controls the time required for the lock to activate *after the screen goes* blank.

So, the calculation is: the time needed for the display to go blank + the time needed for the lock to be activated. (I set this to 0.)

Adjust these settings. Then wait for the display to go blank, and then wait the lock delay. The screen will now be locked.

Here is a picture of that settings window.

[![Xscreensaver preference](https://i.stack.imgur.com/YK1LH.png)](https://i.stack.imgur.com/YK1LH.png)

Hope this answers the question.
Create table with column names from another table's column data I have a table with a single column like this:

```
---------
|  col  |
---------
|   A   |
|   B   |
|   C   |
---------
```

I want to create a new table with the following column names, like this:

```
-------------------
|  A  |  B  |  C  |
-------------------
```

Any suggestions? Thanks a lot.
One way is to use dynamic sql. Assuming data type int for all columns, you can do something like this: Create and populate sample table (**Please** save us this step in your future questions): ``` DECLARE @T table ( col char(1) ) INSERT INTO @T VALUES ('a'), ('b'), ('c') ``` Build the dynamic sql: ``` DECLARE @Sql nvarchar(max) = 'CREATE TABLE YourNewTableName (' SELECT @Sql = @Sql + col +' int,' FROM @T SET @Sql = LEFT(@Sql, LEN(@Sql) - 1) +');' --Print @sql ``` Execute it (You will probably want to print it before to make sure it's correct): ``` EXEC(@Sql) ```
How can I tell if floating point arithmetic is performed in hardware or software? *How can I tell if floating point arithmetic is performed in hardware or software?* I could find the processor's name and Google it, but is there a way to do it in a BASH script? For instance, is there something saved in a system file that I could read? UPDATE: output of /proc/cpuinfo on Intel: ``` processor : 0 vendor_id : GenuineIntel cpu family : 6 model : 69 model name : Intel(R) Core(TM) i3-4010U CPU @ 1.70GHz stepping : 1 microcode : 0x17 cpu MHz : 782.000 cache size : 3072 KB physical id : 0 siblings : 4 core id : 0 cpu cores : 2 apicid : 0 initial apicid : 0 fpu : yes <-- !!! fpu_exception : yes cpuid level : 13 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm ida arat epb xsaveopt pln pts dtherm tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid bogomips : 3392.25 clflush size : 64 cache_alignment : 64 address sizes : 39 bits physical, 48 bits virtual power management: ``` output of /proc/cpuinfo on RPi (using Raspian v7): ``` processor : 0 model name : ARMv6-compatible processor rev 7 (v6l) BogoMIPS : 2.00 Features : swp half thumb fastmult vfp edsp java tls CPU implementer : 0x41 CPU architecture: 7 CPU variant : 0x0 CPU part : 0xb76 CPU revision : 7 Hardware : BCM2708 Revision : 000e Serial : 000000007b455c14 ```
Well, you can tell if your CPU has FPU capabilities with the data stored in `/proc/cpuinfo` and filter it with `grep fpu` `$ grep "fpu" /proc/cpuinfo` ``` fpu : yes fpu_exception : yes flags : fpu vme de pse ... ``` And for info, what type of CPU are you *playing* with? :) **EDIT** for ARM proc, look for vector floating point unit (`vfp`), some info [here](https://wiki.debian.org/ArmHardFloatPort#VFP). Ex: ``` # cat /proc/cpuinfo Processor : ARMv6-compatible processor rev 7 (v6l) BogoMIPS : 697.95 Features : ... vfp ... ```
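If you want that wrapped up for a script, here is a rough sketch. It is only a heuristic: it just looks for the x86 `fpu` flag or an ARM `vfp*` feature in `/proc/cpuinfo`:

```
#!/bin/bash
# Report "hardware" if the CPU advertises an FPU (x86 "fpu" flag) or a VFP
# unit (ARM "vfp*" feature); otherwise assume floating point is emulated.
if grep -qiE '^(flags|Features)[[:space:]]*:.*(fpu|vfp)' /proc/cpuinfo; then
    echo "hardware floating point"
else
    echo "software (emulated) floating point"
fi
```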
I am missing a migration file Can I add the migration file, but tell Rails not to run it locally? I need the file so that others can set up the application locally.
If you have a file:

```
db/migrate/20121010100909_modify_table_x.rb
```

You can go into your database and run the following SQL.

MySQL:

```
INSERT INTO `schema_migrations` VALUES ('20121010100909');
```

PostgreSQL:

```
INSERT INTO schema_migrations VALUES ('20121010100909');
```

And it will then ignore that migration.

**Edit - How to "go into your database"**

Using the parameters from `config/database.yml` in Rails, connect to the database you are using. You will need to use the command-line tool of whatever database software you're using. E.g. for PostgreSQL:

```
psql -d <database_name> -U <username>
```

For MySQL:

```
mysql -u <username> -p <databasename>
```

Type in your password if required. Then type in and execute the SQL above.
Why does single quote in Lisp always return upper case? I'd like to be able to set case from a single quote, but that does not seem possible. ``` (format nil "The value is: ~a" 'foo) "The value is: FOO" (format nil "The value is: ~a" 'FOO) "The value is: FOO" (format nil "The value is: ~a" "Foo") "The value is: Foo" ```
**Quoting** The quote has nothing to do with case. A `quote` prevents evaluation. quoting a symbol: ``` CL-USER 1 > 'foo FOO ``` quoting a list: ``` CL-USER 2 > '(1 2 3 foo) (1 2 3 FOO) ``` You can put a quote in front of many things. For example in front of a string: ``` CL-USER 3 > '"a b c" "a b c" ``` Since strings evaluate to themselves, quoting them or not makes no difference: ``` CL-USER 4 > "a b c" "a b c" ``` Symbols are by default read as uppercase: ``` CL-USER 5 > 'FooBar FOOBAR CL-USER 6 > (symbol-name 'FooBar) "FOOBAR" ``` But that has nothing to do with quoting and is a *feature* of the *reader*. ``` CL-USER 7 > (read-from-string "foo") FOO 3 ``` **Downcase** If you want the string in lowercase, you need to convert the string to lowercase: ``` CL-USER 8 > (string-downcase (symbol-name 'FooBar)) "foobar" ``` **Symbols with mixed case** But you can create symbols with lowercase names or mixed case. You need to escape them: ``` CL-USER 9 > '|This is a symbol With spaces and mixed case| |This is a symbol With spaces and mixed case| CL-USER 10 > 'F\o\oB\a\r |FooBar| ``` **Downcasing output using `FORMAT`** You can also tell `FORMAT` to print in lowercase: ``` CL-USER 11 > (format nil "The value is: ~(~a~)" 'foo) "The value is: foo" ```
How to localize WPF .net-core application with RESX I want to localize a .Net Core application with .resx files. Can anyone give me a complete step-by-step solution? I'm new at WPF and .Net-Core. When I change the current culture in code nothing happens. Here is my code: ``` <ToolBar> <Button Content="{x:Static strings:Resource.NewCustomer}" Command="{Binding NewCustomerDelegateCommand}"/> </ToolBar> ``` ``` public partial class App : Application { protected override void OnStartup(StartupEventArgs e) { Thread.CurrentThread.CurrentCulture = new System.Globalization.CultureInfo("de-DE"); Thread.CurrentThread.CurrentUICulture = new System.Globalization.CultureInfo("de-DE"); base.OnStartup(e); } } ```
When localizing a WPF application you have several choices.

1. Using WPF's native approach (using UIDs), see: [MSDN](https://learn.microsoft.com/en-us/dotnet/desktop/wpf/advanced/wpf-globalization-and-localization-overview). This approach has drawbacks explained in more detail in the article, but generally speaking:
   - it does not use resx (which is very natural for most developers)
   - it does not support changing the language at runtime
2. Using the strongly-typed resource class generated from the resx by PublicResXFileCodeGenerator / ResXFileCodeGenerator (see the Custom Tool in the VS properties window for the resx file; PS: the second generator creates an internal class, so keep in mind that it will be usable only in the same assembly as the xaml being localized, and only with WPF Binding, not with the Static markup extension). This approach is similar to the one you’ve tried to apply, meaning that in the xaml you use Binding (or x:Static) against the properties of the generated strongly-typed resource class. The drawback of this approach is that the generated strongly-typed resource class does not provide any notification whenever the culture/language is changed. In other words, you can provide strings, labels etc., but they will be resolved at the time the xaml is parsed/loaded. After that it becomes fixed content:
   - does not support changing the language at runtime
3. You can find ready-made solutions on the market for localizing WPF applications using resx files, e.g. have a look: <https://www.qube7.com/guides/localization.html>
   - it supports changing the language at runtime (the UI automatically refreshes when you set [Culture.Current](https://www.qube7.com/docs/qube7.composite.culture.html#Qube7_Composite_Culture_Current))
   - the solution provides a way to localize xaml that is supported in the VS designer
   - the approach to localizing xaml is similar to localizing asp.net: you may have local resources (a resx per xaml view) + global resources (a resx that holds shared resources)
   - the solution gives you a way to localize the view model as well (which is also not rare to see)

   The full source code of the Resource markup extension can be found here: [Resource.cs](https://github.com/qube7/qcomposite/blob/master/Qube7.Composite/Data/Resource.cs)
Rotate logfiles each time the application is started (Python) I'm using the logging module in Python and I would like it to create a new logfile each time my application is started. The older logfiles should be rotated (e.g. logfile.txt -> logfile1.txt, etc.). I already found this: <http://docs.python.org/library/logging.html>

> 
> BaseRotatingHandler is the base class
>  for handlers that rotate log files at
>  a certain point. It is not meant to be
>  instantiated directly. Instead, use
>  RotatingFileHandler or
>  TimedRotatingFileHandler.
> 
> 

The RotatingFileHandler does a rollover at a predetermined size and the TimedRotatingFileHandler does a rollover based on the product of when and interval. Neither is what I want; I want the rotation to happen immediately when my application starts.
It might be enough to use `RotatingFileHandler` without `maxBytes`, then call `doRollover()` on application start. Yup, seems to work fine. The code below will create a new log file on each application run, with added timestamps for log start and close times. Running it will print the list of available log files. You can inspect them to check correct behavior. Adapted from the Python docs example:

```
import os
import glob
import logging
import logging.handlers
import time

LOG_FILENAME = 'logging_rotatingfile_example.out'

# Set up a specific logger with our desired output level
my_logger = logging.getLogger('MyLogger')
my_logger.setLevel(logging.DEBUG)

# Check if log exists and should therefore be rolled
needRoll = os.path.isfile(LOG_FILENAME)

# Add the log message handler to the logger
handler = logging.handlers.RotatingFileHandler(LOG_FILENAME, backupCount=50)

my_logger.addHandler(handler)

# This is a stale log, so roll it
if needRoll:
    # Add timestamp
    my_logger.debug('\n---------\nLog closed on %s.\n---------\n' % time.asctime())

    # Roll over on application start
    my_logger.handlers[0].doRollover()

# Add timestamp
my_logger.debug('\n---------\nLog started on %s.\n---------\n' % time.asctime())

# Log some messages
for i in xrange(20):
    my_logger.debug('i = %d' % i)

# See what files are created
logfiles = glob.glob('%s*' % LOG_FILENAME)

print '\n'.join(logfiles)
```
Mixed model vs. Pooling Standard Errors for Multi-site Studies - Why is a Mixed Model So Much More Efficient? I've got a data set consisting of a series of "broken stick" monthly case counts from a handful of sites. I'm trying to get a single summary estimate from two different techniques:

Technique 1: Fit a "broken stick" with a Poisson GLM with a 0/1 indicator variable, using time and time^2 variables to control for trends in time. That 0/1 indicator variable's estimate and SE are pooled using a pretty straight up and down method of moments technique, or using the tlnise package in R to get a "Bayesian" estimate. This is similar to what Peng and Dominici do with air pollution data, but with fewer sites (~a dozen).

Technique 2: Abandon some of the site-specific control for trends in time and use a linear mixed model. Particularly:

```
lmer(cases ~ indicator + (1+month+I(month^2) + offset(log(p)), family="poisson", data=data)
```

My question involves the standard errors that come out of these estimates. Technique 1, which actually uses a weekly rather than monthly time set and thus should have *more* precision, gives a standard error on the estimate of ~0.206 for the Method of Moments approach and ~0.306 for tlnise. The lmer method gives a standard error of ~0.09. The effect estimates are reasonably close, so it doesn't seem to be that they're just zeroing in on different summary estimates as much as the mixed model is vastly more efficient.

Is that something that's reasonable to expect? If so, why are mixed models so much more efficient? Is this a general phenomenon, or a specific result of this model?
I know this is an old question, but it's relatively popular and has a simple answer, so hopefully it'll be helpful to others in the future. For a more in-depth take, take a look at Christoph Lippert's course on Linear Mixed Models which examines them in the context of genome-wide association studies [here](http://www.mixed-models.org/lectures-ucla/winter-2014-current-topics-in-computational-biology/). In particular see [Lecture 5](http://www.mixed-models.org/wp-content/uploads/2014/01/5_LinearMixedModels.pdf). The reason that the mixed model works so much better is that it's designed to take into account exactly what you're trying to control for: population structure. The "populations" in your study are the different sites using, for example, slightly different but consistent implementations of the same protocol. Also, if the subjects of your study are people, people pooled from different sites are less likely to be related than people from the same site, so blood-relatedness may play a role as well. As opposed to the standard maximum-likelihood linear model where we have $\mathcal{N}(Y|X\beta,\sigma^2)$, linear mixed models add in an additional matrix called the kernel matrix $K$, which estimates the similarity between individuals, and fits the "random effects" so that similar individuals will have similar random effects. This gives rise to the model $\mathcal{N}(Y|X\beta + Zu,\sigma^2I + \sigma_g^2K)$. Because you are trying to control for population structure explicitly, it's therefore no surprise that the linear mixed model outperformed other regression techniques.
Why can't a default argument be specified for an explicit template specialization? The code below doesn't compile. What is the rationale behind this compiler error?

```
template<class T> void f(T t) {};

template<> void f<char>(char c = 'a') {}
```

Error message:

Default arguments are not allowed on an explicit specialization of a function template
I think that the rationale behind this error is due to the fact that the default arguments in the function template apply to its specialization as well and you are not allowed to define the default argument more than once in C++. Consider the following: ``` #include <iostream> template<class T> void f(T t = 'a') {} template<> void f<char>(char c) { std::cout << c << std::endl; } int main(int argc, char **argv) { f<char>(); } ``` This will print `a` meaning that specialization is called with the default argument defined in the main template. If you need a different default argument for each specialization you can use the approach illustrated below: ``` #include <iostream> template<class T> struct default_arg { static T get() { return T(); } }; template<class T> void f(T t = default_arg<T>::get()) {} template<> struct default_arg<char> { static char get() { return 'a'; } }; template<> void f<char>(char c) { std::cout << c << std::endl; } int main(int argc, char **argv) { f<char>(); } ```
Expedited WorkRequests require a ListenableWorker to provide an implementation for getForegroundInfoAsync() Doing a little [Jeopardy style Q&A](https://stackoverflow.blog/2011/07/01/its-ok-to-ask-and-answer-your-own-questions/) here. I have some work I sometimes need to run as *expedited* as described in the [version 2.7.0 of `WorkManager`](https://developer.android.com/jetpack/androidx/releases/work#2.7.0): ``` val constraints = Constraints.Builder() .setRequiredNetworkType(NetworkType.CONNECTED).build() val oneTimeWorkRequest = OneTimeWorkRequest.Builder(MyWorker::class.java) .setInitialDelay(2, TimeUnit.SECONDS) .setExpedited(OutOfQuotaPolicy.RUN_AS_NON_EXPEDITED_WORK_REQUEST) .setConstraints(constraints).build() WorkManager.getInstance(context).enqueueUniqueWork("my-identifier", ExistingWorkPolicy.REPLACE, oneTimeWorkRequest) ``` I believe the code ran just fine on Android 12/S, but when the job is run on Android 11 I get the following error: ``` E/WM-WorkerWrapper: Work [ id=<UUID>, tags={ [WorkerTag] } ] failed because it threw an exception/error java.util.concurrent.ExecutionException: java.util.concurrent.ExecutionException: java.util.concurrent.ExecutionException: java.lang.IllegalStateException: Expedited WorkRequests require a ListenableWorker to provide an implementation for `getForegroundInfoAsync()` at androidx.work.impl.utils.futures.AbstractFuture.getDoneValue(AbstractFuture.java:516) at androidx.work.impl.utils.futures.AbstractFuture.get(AbstractFuture.java:475) at androidx.work.impl.WorkerWrapper$2.run(WorkerWrapper.java:311) at androidx.work.impl.utils.SerialExecutor$Task.run(SerialExecutor.java:91) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641) at java.lang.Thread.run(Thread.java:923) ``` What do I need to do?
[The documentation for `ListenableWorker.getForegroundInfoAsync()`](https://developer.android.com/reference/androidx/work/ListenableWorker#getForegroundInfoAsync()) states this: > > Prior to Android S, WorkManager manages and runs a foreground service on your behalf to execute the WorkRequest, showing the notification provided in the ForegroundInfo. To update this notification subsequently, the application can use NotificationManager. > > > Starting in Android S and above, WorkManager manages this WorkRequest using an immediate job. > > > So in the class extending `ListenableWorker` it's necessary to override `getForegroundInfoAsync()`. An alternative to directly overriding that method yourself is to use for example [`CoroutineWorker`](https://developer.android.com/reference/kotlin/androidx/work/CoroutineWorker): ``` class MyWorker(val context: Context, workerParams: WorkerParameters) : CoroutineWorker(context, workerParams) { companion object { private const val NOTIFICATION_CHANNEL_ID = "11" private const val NOTIFICATION_CHANNEL_NAME = "Work Service" } override suspend fun doWork(): Result { // TODO: Do work here return Result.success() } override suspend fun getForegroundInfo(): ForegroundInfo { val notificationManager = context.getSystemService(Context.NOTIFICATION_SERVICE) as NotificationManager if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O) { val channel = NotificationChannel( NOTIFICATION_CHANNEL_ID, NOTIFICATION_CHANNEL_NAME, NotificationManager.IMPORTANCE_HIGH ) notificationManager.createNotificationChannel(channel) } val notification = NotificationCompat.Builder(context, NOTIFICATION_CHANNEL_ID) .setContentIntent(PendingIntent.getActivity(context, 0, Intent(context, MainActivity::class.java), Constants.PENDING_INTENT_FLAG_IMMUTABLE)) .setSmallIcon(R.drawable.ic_refresh_24dp) .setOngoing(true) .setAutoCancel(true) .setOnlyAlertOnce(true) .setPriority(NotificationCompat.PRIORITY_MIN) .setContentTitle(context.getString(R.string.app_name)) .setLocalOnly(true) .setVisibility(NotificationCompat.VISIBILITY_SECRET) .setContentText("Updating widget") .build() return ForegroundInfo(1337, notification) } } ``` (That constant for the pending intent flag is really just `val PENDING_INTENT_FLAG_MUTABLE = if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.S) PendingIntent.FLAG_MUTABLE else 0` to make stuff work with both Android 12/S and earlier.)
Is it possible to rollback create_all in sqlalchemy? According to [THIS](https://stackoverflow.com/questions/4692690/is-it-possible-to-roll-back-create-table-and-alter-table-statements-in-major-sql) question, for some DBMSs it is possible to roll back a CREATE TABLE statement. In particular, it is possible for SQLite (although it is undocumented). So my question is: is it possible to roll back create_all in SQLAlchemy? I was trying to write some test code, but it doesn't seem to work:

```
>>> engine = create_engine('sqlite:///:memory:')
>>> engine
Engine(sqlite:///:memory:)
>>> Session = sessionmaker(bind=engine)
>>> connection = engine.connect()
>>> session = Session(bind=connection)
>>> engine.table_names()
[]
>>> transaction = connection.begin()
>>> Base = declarative_base()
>>> class Test(Base):
...     __tablename__ = 'TEST'
...     id = Column(Integer, primary_key=True)
...
>>> Base.metadata.bind = engine
>>> Base.metadata.create_all()
>>> engine.table_names()
[u'TEST']
>>> transaction.rollback()
>>> session.close()
>>> connection.close()
>>> engine.table_names()
[u'TEST']
```
1. Use Postgresql or SQL Server. MySQL, Oracle do *not* support transactional DDL. Recent versions of SQLite do appear to support transactional DDL. The Python [sqlite3 driver](http://docs.python.org/2/library/sqlite3.html) however [does not](http://bugs.python.org/issue10740). 2. simple recipe: ``` with engine.begin() as conn: metadata.create_all(conn) ``` if you raise an exception inside the "with:", the transaction will be rolled back. 3. want to see it yourself, OK: ``` from sqlalchemy import inspect # need to be running 0.8 for this with engine.connect() as conn: trans = conn.begin() metadata.create_all(conn) inspector = inspect(conn) table_names = inspector.get_table_names() trans.rollback() inspector = inspect(conn) rolled_back_table_names = inspector.get_table_names() ``` to gain an understanding of transactional scope, I recommend you read <http://docs.sqlalchemy.org/en/latest/core/connections.html>
Is it possible to code this way: while(lambda){} The code below compiled without error:

```
std::string lastName, chldName;
while([&]()
{
    return true;
})
{
    //codes...
}
```

But when I tried this:

```
std::string lastName, chldName;
while([&]()
{
    std::cin >>lastName;
    return true;
})
{
    //codes...
}
```

the compiler complained:

> 
> error: could not convert 'main()::{(\*
>  & lastName)}' from 'main()::' to 'bool'
> 
> 

How should I understand this error? Is it possible to use a lambda this way?
Your first example does not work the way you want: it's equivalent to `while (true)`, since the lambda is converted to a function pointer, which is then converted to bool (true). It should be

```
while([&]()
{
    return true;
}())
```

Note the trailing `()` that actually calls the lambda. Your second example will not compile without calling the lambda, since you are accessing captured variables, which forbids the conversion from lambda to function pointer (the thing that could then be converted to bool). But it will not compile with `()` either,

> 
> If
>  a lambda-expression does not include a trailing-return-type, it is as if the trailing-return-type denotes the
>  following type:
> 
> 
> — if the compound-statement is of the form
>  { attribute-specifier-seqopt return expression ; }
>  the type of the returned expression after lvalue-to-rvalue conversion (4.1), array-to-pointer conver-
>  sion (4.2), and function-to-pointer conversion (4.3);
> 
> 
> **— otherwise, void.**
> 

In your case the return type will be deduced as `void`, but since you return a bool, you should use a `trailing-return-type`.
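Putting both fixes together, here is a minimal sketch of how the second loop could look. Returning the stream state instead of a constant `true` is an assumption about what you want; with a constant `true` the loop would never terminate:

```
#include <iostream>
#include <string>

int main()
{
    std::string lastName, chldName;

    // Trailing return type + trailing () to actually invoke the lambda.
    while ([&]() -> bool
    {
        return static_cast<bool>(std::cin >> lastName);
    }())
    {
        // use lastName ...
    }
}
```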
Python: Plot residuals on a fitted model I want to plot the lines (residuals; cyan lines) between data points and the estimated model. Currently I'm doing so by iterating over all data points in my income `pandas.DataFrame` and adding vertical lines. `x`, `y` are the points' coordinates and `predicted` are the predictions (here the blue line). ``` plt.scatter(income["Education"], income["Income"], c='red') plt.ylim(0,100) for indx, (x, y, _, _, predicted) in income.iterrows(): plt.axvline(x, y/100, predicted/100) # /100 because it needs floats [0,1] ``` Is there a more efficient way? This doesn't seem like a good approach for more than a few rows. [![enter image description here](https://i.stack.imgur.com/d8Tlw.png)](https://i.stack.imgur.com/d8Tlw.png)
First of all note that [`axvline`](https://matplotlib.org/api/_as_gen/matplotlib.pyplot.axvline.html) here only works by coincidence. In general the `y` values taken by `axvline` are in coordinates relative to the axes, not in data coordinates. In contrast, [`vlines`](https://matplotlib.org/api/_as_gen/matplotlib.pyplot.vlines.html) uses data coordinates and also has the advantage to accept arrays of values. It will then create a `LineCollection`, which is more efficient than individual lines. ``` import matplotlib.pyplot as plt import numpy as np x = np.linspace(-1.2,1.2,20) y = np.sin(x) dy = (np.random.rand(20)-0.5)*0.5 fig, ax = plt.subplots() ax.plot(x,y) ax.scatter(x,y+dy) ax.vlines(x,y,y+dy) plt.show() ``` [![enter image description here](https://i.stack.imgur.com/8kymg.png)](https://i.stack.imgur.com/8kymg.png)
Woocommerce - Clear cart on home page (doesn't work while logged in) I'm trying to clear the cart on home page. I've added this part to home page head section: ``` <script type='text/javascript'> function clearCart() { jQuery.post( "https://abcdefgh.com/wp-admin/admin-ajax.php", //ajaxurl, { "action": "clearcart" } ); console.log('its homepage bro!'); } jQuery(document).ready(function(){ clearCart(); }); </script> ``` And, added this part to functions.php: ``` add_action('wp_ajax_nopriv_clearcart',function(){ global $woocommerce; $woocommerce->cart->empty_cart(true); }); ``` If the user is not logged in, the cart is cleared on the home page. But, if the user is logged in, it does not work although the javascript part is executed (I checked via console log.). Why do you think it happens and how can I resolve it?
You don't need to use Ajax for that. I just optionally use a bit of jQuery to refresh/update the related cart data (like in the mini-cart):

```
add_action( 'wp_footer', 'clear_the_cart_in_home_refresh' );
function clear_the_cart_in_home_refresh(){
    // Just in home page when cart is not empty
    if( WC()->cart->is_empty() ) return;
    if( ! is_front_page() ) return;

    // Empty cart
    WC()->cart->empty_cart(true);
    WC()->session->set('cart', array());

    // Reset minicart count and update page content (if needed)
    ?>
    <script type='text/javascript'>
    jQuery(document).ready(function($){
        setTimeout(function() {
            $('body').trigger('wc_fragment_refresh');
            $('body').trigger('updated_wc_div');
        }, 100);
    });
    </script>
    <?php
}
```

*Code goes in the functions.php file of your active child theme (or theme), or in any plugin file.*

Tested and works.
How to get Laravel Mail response after send function called? Is it possible to get the Mail response in Laravel after the `Mail::to ... send` method is called? Where is the best place to do that?

```
Mail::to($order->email)
     ->send(new ThankYouMail($order->fresh()));
```

I am using Sendgrid as a Laravel Mail driver and want to get the messageID in order to use it for subsequent Sendgrid hooks (getting email delivery status etc.).

```
public function build()
 {
    $from = '[email protected]';
    $subject = 'Thank You for Ordering';
    $name = 'Name XYZ';

    $order_id = (string)$this->order->id;
    $headerData = [
        'category' => 'Order',
        'unique_args' => [
            'OrderID' => $order_id
        ]
    ];

    $header = $this->asString($headerData);
    $this->withSwiftMessage(function ($message) use ($header) {
        $message->getHeaders()
                ->addTextHeader('X-SMTPAPI', $header);
    });

    return $this->view('mails.thank-you')
        ->from($from, $name)
        ->replyTo($from, $name)
        ->subject($subject);
}
```

**UPDATE: 2019/06/25**

If anyone wants to get and store Mail statuses, the best solution is to use the Sendgrid Event Webhook.

- Probably you will attach (to the email) a unique arg like OrderID during the email send process (check the Sendgrid API or my example above).
- After that you have to create an API POST route in order to receive/store the webhook email data, where you can filter and connect the status data.

**UPDATE #2: 2021/01/21**

In order to receive data from Sendgrid, some logic is required:

Route:

```
Route::post('sendgrid/events','SendgridOrderEventController@store');
```

Controller:

```
/**
 * Store a sendgrid event in database.
 * @param App\Http\Requests\API\CreateSendgridOrderEventRequest $request
 * @return Response
 */
public function store(CreateSendgridOrderEventRequest $request)
{
    $sendgrid_request = $request->all()[0];

    if($sendgrid_request['OrderID']) {

        $sendgrid_request['order_id'] = $sendgrid_request['OrderID'];
        $sendgrid_request['sendgrid_timestamp'] = $sendgrid_request['timestamp'];
        $sendgrid_request['sendgrid_message_id'] = $sendgrid_request['sg_message_id'];

        $sendgridOrderEvent = SendgridOrderEvent::create($sendgrid_request);

    }
    return $this->sendResponse($sendgridOrderEvent, 'SendGrid Order Event created successfully');
}
```

Within the Sendgrid dashboard, find the webhook settings and enter your API POST endpoint: `api.yourdomain.com/sendgrid/events`. After that, Sendgrid will be able to generate events and send a POST request to your API endpoint, where your logic can receive and handle the incoming data. More details: <https://sendgrid.com/docs/for-developers/tracking-events/event/>
An [`Illuminate\Mail\Events\MessageSent`](https://github.com/laravel/framework/blob/v5.8.10/src/Illuminate/Mail/Events/MessageSent.php) is dispatched after the mail is sent. [[1]](https://github.com/laravel/framework/blob/v5.8.10/src/Illuminate/Mail/Mailer.php#L515) The event carries the Swift message instance. [[2]](https://github.com/laravel/framework/blob/v5.8.10/src/Illuminate/Mail/Message.php#L313) You can listen for this event and get the message id. This event can be subscribed to by registering an event listener in the `EventServiceProvider`. [[3]](https://laravel.com/docs/5.0/events#basic-usage)

```
protected $listen = [
    'Illuminate\Mail\Events\MessageSent' => [
        'App\Handlers\Events\MessageIdTranscript',
    ],
];
```

The `MessageIdTranscript` is a plain PHP class with a `handle` method that is called with the event. Instances of `Swift_Message` expose the message Id via a public getter method named [`getId`](https://github.com/swiftmailer/swiftmailer/blob/v6.2.0/lib/classes/Swift/Mime/SimpleMimeEntity.php#L200).

```
namespace App\Handlers\Events;

use Illuminate\Mail\Events\MessageSent;

class MessageIdTranscript
{
    /**
     * Handle the event.
     *
     * @param MessageSent $event
     * @return void
     */
    public function handle(MessageSent $event)
    {
        $messageId = $event->message->getId();

        // continue handling...
    }
}
```
How can I make a FileProvider available to other applications? Is it possible to have a FileProvider available to other applications?

manifest.xml

```
...
 <provider
     android:name="android.support.v4.content.FileProvider"
     android:authorities="com.example.myapp.fileprovider"
     android:exported="true"
     android:grantUriPermissions="false"
     android:permission="com.example.filesvisible.permission.READ" >
     <meta-data
         android:name="android.support.FILE_PROVIDER_PATHS"
         android:resource="@xml/filespath" />
 </provider>
...
```

From the [doc](http://developer.android.com/guide/topics/manifest/provider-element.html):

> 
> false: The provider is not available to other applications. Set
>  android:exported="false" to limit access to the provider to your
>  applications. Only applications that have the same user ID (UID) as
>  the provider will have access to it.
> 
> 

I've tried to set exported to true, but I got this exception: `Unable to get provider android.support.v4.content.FileProvider: java.lang.SecurityException: Provider must not be exported`

Why can't I export a FileProvider?
> > Is it possible to have a FileProvider available to other applications ? > > > Yes. That is usually the point. > > Why I can't export a FileProvider ? > > > Because that is not how you use a `FileProvider`. The point behind `FileProvider` is to [give *select* access to files to third-party apps](http://developer.android.com/reference/android/support/v4/content/FileProvider.html#Permissions). You can do this by `FLAG_GRANT_READ_URI_PERMISSION` and/or `FLAG_GRANT_WRITE_URI_PERMISSION`, in the `Intent` that you use to pass one of your provider's `Uri` values to the third-party app (e.g., via an `ACTION_VIEW` `Intent` used with `startActivity()`). Also see [the training guide on sharing files](http://developer.android.com/training/secure-file-sharing/index.html).
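For completeness, a minimal sketch of such a grant (the file name and MIME type here are made up, the authority is the one from the question, and the manifest needs `android:exported="false"` together with `android:grantUriPermissions="true"` for this to work):

```
File file = new File(getFilesDir(), "docs/report.pdf");
Uri uri = FileProvider.getUriForFile(this, "com.example.myapp.fileprovider", file);

Intent intent = new Intent(Intent.ACTION_VIEW);
intent.setDataAndType(uri, "application/pdf");
// Grant the receiving app temporary read access to just this Uri
intent.addFlags(Intent.FLAG_GRANT_READ_URI_PERMISSION);
startActivity(intent);
```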
Does an incremented column make the b-tree index on the column unbalanced? I have been thinking about two questions. I couldn't find any resources on the internet about this. How do DBMSs handle it? Or do they? Especially Oracle. Before the questions, here is an example: Say I have a master table "MASTER" and a slave table "SLAVE". The master table has an "ID" column which is the primary key, and an index is created by Oracle. The slave table has the foreign key "MASTER_ID" which refers to the master table, and "SLAVE_NO". These two together are the primary key of the slave table, which is again indexed.

```
**MASTER**   |   **SLAVE**
(P) ID  <------> (P)(F) MASTER_ID
                 (P)    SLAVE_NO
```

Now the questions:

1- If MASTER_ID is an autoincremented column, and no record is ever deleted, doesn't this make the table's index unbalanced? Does Oracle rebuild indexes periodically? As far as I know, Oracle only balances index branches at build time. Does Oracle ever rebuild indexes automatically? Say, if the level goes up to some high value?

2- Assuming Oracle does not rebuild automatically, apart from scheduling a job that rebuilds the index periodically, would it be wiser to order the SLAVE table's primary key columns in reverse? I mean, instead of "MASTER_ID", "SLAVE_NO", ordering it as "SLAVE_NO", "MASTER_ID"; would it help the slave table's b-tree index be more balanced? (Well, each master might not have the exact same number of slave records, but still, it seems better than the reverse order.) Anyone know anything about that? Or opinions?
> > If `MASTER_ID` is an autoincremented column, and no record is ever deleted, doesn't this get the table's index unbalanced ? > > > `Oracle`'s indexes are never "unbalanced": every leaf in the index is at the same depth as any other leaf. No page split introduces a new level by itself: a leaf page does not become a parent for new pages like it would be on a non-self-balancing tree. Instead, a sibling for the split page is made and the new record (plus possibly some of the records from the old page) go to the new page. A pointer to the new page is added to the parent. If the parent page is out of space too (can't accept the pointer to the newly created leaf page), it gets split as well, and so on. These splits can propagate up to the root page, whose split is the only thing which increases the index depth (and does it for all pages at once). Index pages are additionally organized into double-linked lists, each list on its own level. This would be impossible if the tree were unbalanced. If `master_id` is auto-incremented this means that all splits occur at the end (such called `90/10` splits) which makes the most dense index possible. > > Would it be wiser to order `SLAVE` table's primary key columns reverse? > > > No, it would not, for the reasons above. If you join `slave` to `master` often, you may consider creating a `CLUSTER` of the two tables, indexed by `master_id`. This means that the records from both tables, sharing the same `master_id`, go to the same or nearby data pages which makes a join between them very fast. When the engine found a record from `master`, with an index or whatever, this also means it already has found the records from `slave` to be joined with that `master`. And vice versa, locating a `slave` also means locating its `master`.
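If you want to try the cluster idea, here is a rough sketch of the DDL for an index cluster (the names and the SIZE value are illustrative only):

```
CREATE CLUSTER master_slave_cluster (master_id NUMBER)
  SIZE 512;

CREATE INDEX master_slave_cluster_idx ON CLUSTER master_slave_cluster;

CREATE TABLE master (
  id NUMBER PRIMARY KEY
) CLUSTER master_slave_cluster (id);

CREATE TABLE slave (
  master_id NUMBER REFERENCES master (id),
  slave_no  NUMBER,
  PRIMARY KEY (master_id, slave_no)
) CLUSTER master_slave_cluster (master_id);
```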
CreateWindow Fails as Unable to Find Window Class - C++ In my application the function `CreateWindow` is failing for some reason. `GetLastError` indicates error 1407, which, according to the MSDN documentation is "Cannot find window class." The following code shows how `CreateWindow` is being called and the respective variables names at time of call: ``` m_hInstance = ::GetModuleHandle( NULL ); if ( m_hInstance == NULL ) { TRACE(_T("CNotifyWindow::CNotifyWindow : Failed to retrieve the module handle.\r\n\tError: %d\r\n\tFile: %s\r\n\tLine: %d\r\n"), ::GetLastError(), __WFILE__, __LINE__); THROW(::GetLastError()); } m_hWnd = ::CreateWindow( _pwcWindowClass, // L"USBEventNotificationWindowClass" _pwcWindowName, // L"USBEventNotificationWindow" WS_ICONIC, 0, 0, CW_USEDEFAULT, 0, NULL, NULL, m_hInstance, // 0x00400000 NULL ); if ( m_hWnd == NULL ) // m_hWnd is returned as NULL and exception is thrown. { TRACE(_T("CNotifyWindow::CNotifyWindow : Failed to create window.\r\n\tError: %d\r\n\tFile: %s\r\n\tLine: %d\r\n"), ::GetLastError(), __WFILE__, __LINE__); THROW(::GetLastError()); } ::ShowWindow( m_hWnd, SW_HIDE ); ``` What am I doing wrong?
You have to call [RegisterClassEx](http://msdn.microsoft.com/en-us/library/ms633587%28v=VS.85%29.aspx) before you can use the window class on CreateWindow. Example code [here](http://msdn.microsoft.com/en-us/library/ms633575%28v=VS.85%29.aspx). > > Each process must register its own > window classes. To register an > application local class, use the > RegisterClassEx function. You must > define the window procedure, fill the > members of the WNDCLASSEX structure, > and then pass a pointer to the > structure to the RegisterClassEx > function. > > >
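A minimal sketch of what that registration could look like for the code in the question (`WndProc` stands in for a window procedure you are assumed to have defined elsewhere; only the required `WNDCLASSEX` fields are filled in):

```
WNDCLASSEX wc = { 0 };
wc.cbSize        = sizeof( WNDCLASSEX );
wc.lpfnWndProc   = WndProc;           // your window procedure
wc.hInstance     = m_hInstance;
wc.lpszClassName = _pwcWindowClass;   // L"USBEventNotificationWindowClass"

if ( ::RegisterClassEx( &wc ) == 0 )
{
    TRACE(_T("Failed to register the window class.\r\n\tError: %d\r\n"), ::GetLastError());
    THROW(::GetLastError());
}

// Now the CreateWindow call above can find the class by name.
```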
POSIX sh EBNF grammar Is there an existing POSIX sh grammar available or do I have to figure it out from the specification directly? Note I'm not so much interested in a *pure* sh; an extended but conformant sh is also more than fine for my purposes.
I have done some more digging and found these resources: 1. An `sh` tutorial located [here](https://web.archive.org/web/20170207130846/http://porkmail.org/era/unix/shell.html) 2. A Bash book containing Bash 2.0's BNF grammar (gone from [here](http://www.teiser.gr/icd/staff/nikolaid/learning_the_bash_shell.pdf)) with the relevant appendix [still here](http://my.safaribooksonline.com/book/operating-systems-and-server-administration/unix/1565923472/syntax/lbs.appd.div.3) I have looked through the sources of `bash`, `pdksh`, and `posh` but haven't found anything remotely at the level of abstraction I need.
Printing values for hash of arrays in perl I've got a hash of arrays that's declared as follows: ``` my %hash; push @{ $hash{ $value1[$_] } }, [ $value1[$_], $value2[$_], $value3[$_], $value4[$_], $value5[$_] ] for 0 .. $#value1; ``` I want to be able to inspect the values for each key using: ``` open KEYS, '>keys.txt' or die "Can't write to 'keys.txt'\n"; for my $key ( sort keys %hash ) { print KEYS "Key: $key contains the values: "; for my $value ( @{$hash{$value1}} ) { print KEYS "$value "; } print KEYS "\n"; } close(KEYS); ``` While I can visualise the keys and associated values using Data::Dumper, the output from the above code gives memory locations, rather than values, for each key. E.g: ``` Key: 'Value1' contains the values: ARRAY(0x7fcd8645ba68) ``` Even though I'm pushing the same number of values onto each array, each key contains different numbers of values Is there something wrong with the way I'm going about this?
First, in your inner loop, you have ``` for my $value ( @{$hash{$value1}} ) { print KEYS "$value "; } ``` What on earth is `$value1`? I think you wanted to use `$key`. Always `use strict; use warnings` to get warned about undefined values and undeclared variables. Next, let's take a look what happens when we do ``` my %hash; push @{ $hash{ $value1[$_] } }, "(value$_)" for 0 .. $#value1; ``` instead, i.e. we just push a string onto the arrayref in the hash. Then, the output looks somewhat like ``` Key: Value1 contains the values: (value0) Key: Value2 contains the values: (value1) Key: Value3 contains the values: (value2) ``` Aha! Whatever we push onto that arrayref is printed out as is. If you push an anonymous arrayref like `[...]`, you get the stringification of that reference: `ARRAY(0x1234567)`. You probably want the contents of that arrayref. Easy: just dereference it. ``` ...; print KEYS "[@$value] "; ``` or something like that. The `"[...]"` are used here just to visually group the output. --- Style notes: Please consider 3-arg `open` with lexical filehandles: ``` my $filename = "keys.txt"; open my $keys, "<", $filename or die "Can't open $filename: $!"; ``` or use automatic error handling: ``` use autodie; open my $keys, "<", "keys.txt"; ``` Either way, it is usually important that you include the *reason* for the failure `$!` in the error message, or it is almost useless. Instead of using loop, your code may be more elegant with `map` and `join`, depending on your taste. I would have probably written the loop as ``` use feature 'say'; for my $key ( sort keys %hash ) { say {$keys} "Key: $key contains the values: " . join " ", map { "[@$_]" } @{ $hash{$key} }; } ```
Elasticsearch completion suggest search with multiple-word inputs Using the Elasticsearch completion suggester I have problems returning multi-word input suggestions matching a one-word query. Example structure: ``` PUT /test_index/ { "mappings": { "item": { "properties": { "test_suggest": { "type": "completion", "index_analyzer": "whitespace", "search_analyzer": "whitespace", "payloads": false } } } } } PUT /test_index/item/1 { "test_suggest": { "input": [ "cat dog", "elephant" ] } } ``` Working query: ``` POST /test_index/_suggest { "test_suggest":{ "text":"cat", "completion": { "field" : "test_suggest" } } } ``` with result ``` { "_shards": { "total": 5, "successful": 5, "failed": 0 }, "test_suggest": [ { "text": "cat", "offset": 0, "length": 3, "options": [ { "text": "cat dog", "score": 1 } ] } ] } ``` Failing query: ``` POST /test_index/_suggest { "test_suggest":{ "text":"dog", "completion": { "field" : "test_suggest" } } } ``` with result ``` { "_shards": { "total": 5, "successful": 5, "failed": 0 }, "test_suggest": [ { "text": "dog", "offset": 0, "length": 3, "options": [] } ] } ``` I would expect the same result as the working query, matching 'cat dog'. Any suggestions what the problem is and how to make the failing query working? I get the same results when using the standard analyzer instead of the whitespace analyzer. I would like to use multiple words per input string as showed in the example above.
The completion suggester is a [prefix suggester](http://www.elastic.co/guide/en/elasticsearch/reference/current/search-suggesters-completion.html#_why_another_suggester_why_not_prefix_queries), meaning it tries to match your query to the first few characters of the inputs that it's been given. If you want the document you posted to match the text "dog", then you'll need to specify "dog" as an input. ``` PUT /test_index/item/1 { "test_suggest": { "input": [ "cat dog", "elephant", "dog" ] } } ``` In my experience, the limitation of having to specify inputs to match makes completion suggesters less useful that other ways to implement prefix matching. I like [edge ngrams](http://www.elastic.co/guide/en/elasticsearch/reference/current/analysis-edgengram-tokenizer.html) for this purpose. I recently wrote a blog post about using ngrams that you might find helpful: <http://blog.qbox.io/an-introduction-to-ngrams-in-elasticsearch> As a quick example, here is a mapping you could use ``` PUT /test_index { "settings": { "analysis": { "filter": { "edge_ngram_filter": { "type": "edge_ngram", "min_gram": 2, "max_gram": 20 } }, "analyzer": { "edge_ngram_analyzer": { "type": "custom", "tokenizer": "standard", "filter": [ "lowercase", "edge_ngram_filter" ] } } } }, "mappings": { "item": { "properties": { "text_field": { "type": "string", "index_analyzer": "edge_ngram_analyzer", "search_analyzer": "standard" } } } } } ``` then index the doc like this: ``` PUT /test_index/item/1 { "text_field": [ "cat dog", "elephant" ] } ``` and any of these queries will return it: ``` POST /test_index/_search { "query": { "match": { "text_field": "dog" } } } POST /test_index/_search { "query": { "match": { "text_field": "ele" } } } POST /test_index/_search { "query": { "match": { "text_field": "ca" } } } ``` Here's the code all together: <http://sense.qbox.io/gist/4a08fbb6e42c34ff8904badfaaeecc01139f96cf>
Mongodb aggregation - sum a grouped, addToSet field I've got the following query: ``` db.listener.aggregate( [ { "$match" : { "location.countryName" : "Italy" } }, { "$project" : { "location" : "$location"} }, { "$group" : { "_id" : { "country": "$location.countryName", "city": "$location.cityName" }, "count" : { "$sum" : 1 }, "co-ords" : { "$addToSet" : { "lat" : "$location.latitude", "long" : "$location.longitude" } } } }, { "$group" : { "_id" : "$_id.country", "cities" : { "$push" : { "city" : "$_id.city", "count" : "$count", "co-ords" : "$co-ords" } } } }, { "$sort" : { "_id" : 1 } }, ] ``` ) which give the following results (truncated): ``` { "result" : [ { "_id" : "Italy", "cities" : [ { "city" : "Seriate", "count" : 1, "co-ords" : [ { "lat" : "45.6833", "long" : "9.7167" } ] }, { "city" : "Milan", "count" : 3, "co-ords" : [ { "lat" : "45.4612", "long" : "9.1878" }, { "lat" : "45.4667", "long" : "9.2" } ] }, ``` As you can see in the example for the city of Milan, the total city count is 3 but the number of longitude/latitude sets is two. This means that one of those more precise locations has two instances and the other has one. I now need to amend my query to reflect the number of instances per latitude/longitude as well as the overall count. It might look something like this: ``` { "city" : "Milan", "count" : 3, "co-ords" : [ { "lat" : "45.4612", "long" : "9.1878", "total" : 2 }, { "lat" : "45.4667", "long" : "9.2", "total" : 1 } ] }, ``` I've tried a few things to achieve this but it never come out as I'd like or Mongo throws an error. Anyone know the best way to do this? Many thanks, Nick.
``` db.listener.aggregate( [ { "$match" : { "location.countryName" : "Italy" } }, { "$group" : { "_id" : { "country": "$location.countryName", "city": "$location.cityName", "coords": { "lat" : "$location.latitude", "long" : "$location.longitude" } }, "count" : { "$sum" : 1 } } }, { "$group" : { "_id" : { "country": "$_id.country", "city": "$_id.city" }, "coords": { "$addToSet" : { "long" : "$_id.coords.long", "lat" : "$_id.coords.lat", "total" : "$count" } }, "count" : { "$sum" : "$count" } } }, { "$group" : { "_id" : "$_id.country", "cities" : { "$push" : { "city" : "$_id.city", "count" : "$count", "coords" : "$coords" } } } }, { "$sort" : { "_id" : 1 } }, ]); ``` Sample output on your data from this: ``` { "_id" : "Italy", "cities" : [ { "city" : "Seriate", "count" : 1, "coords" : [ { "long" : "9.7167", "lat" : "45.6833", "total" : 1 } ] }, { "city" : "Milan", "count" : 3, "coords" : [ { "long" : "9.1878", "lat" : "45.4612", "total" : 1 }, { "long" : "9.2", "lat" : "45.4667", "total" : 2 } ] } ] } ```
Performance with global variables vs local I am still new to Python, and I have been trying to improve the performance of my Python script, so I tested it with and without global variables. I timed it, and to my surprise, it ran faster with global variables declared rather than passing local vars to functions. What's going on? I thought execution speed was faster with local variables? (I know globals are not safe, I am still curious.)
## Locals should be faster According to [this page on locals and globals](http://www.diveintopython.net/html_processing/locals_and_globals.html): > > When a line of code asks for the value of a variable x, Python will search for that variable in all the available namespaces, in order: > > > - **local namespace** - specific to the current function or class method. If the function defines a local variable x, or has an argument x, Python will use this and stop searching. > - **global namespace** - specific to the current module. If the module has defined a variable, function, or class called x, Python will use that and stop searching. > - **built-in namespace** - global to all modules. As a last resort, Python will assume that x is the name of built-in function or variable. > > > Based on that, I'd assume that local variables are generally faster. My guess is what you're seeing is something particular about your script. ## Locals are faster Here's a trivial example using a local variable, which takes about 0.5 seconds on my machine (0.3 in Python 3): ``` def func(): for i in range(10000000): x = 5 func() ``` And the global version, which takes about 0.7 (0.5 in Python 3): ``` def func(): global x for i in range(1000000): x = 5 func() ``` ## `global` does something weird to variables that are already global Interestingly, this version runs in 0.8 seconds: ``` global x x = 5 for i in range(10000000): x = 5 ``` While this runs in 0.9: ``` x = 5 for i in range(10000000): x = 5 ``` You'll notice that in both cases, `x` is a global variable (since there's no functions), and they're both slower than using locals. I have no clue why declaring `global x` helped in this case. This weirdness doesn't occur in Python 3 (both versions take about 0.6 seconds). ## Better optimization methods If you want to optimize your program, the best thing you can do is [profile it](http://wiki.python.org/moin/PythonSpeed/Profiling). This will tell you what's taking the most time, so you can focus on that. Your process should be something like: 1. Run your program with profiling on. 2. Look at the profile in KCacheGrind or a similar program to determine Which functions are taking the most time. 3. In those functions: - Look for places where you can cache results of functions (so you don't have to do as much work). - Look for algorithmic improvements like replacing recursive functions with closed-form functions, or replacing list searches with dictionaries. - Re-profile to make sure the function is still a problem. - Consider using [multiprocessing](http://docs.python.org/library/multiprocessing.html#using-a-pool-of-workers).
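As a concrete starting point, here is a minimal profiling sketch using only the standard library (`main()` stands in for whatever your script's entry point actually is):

```
import cProfile
import pstats

def main():
    pass  # your program's entry point goes here

# Run the program under the profiler and dump the stats to a file
cProfile.run("main()", "profile.out")

# Print the 20 entries with the largest cumulative time
stats = pstats.Stats("profile.out")
stats.sort_stats("cumulative").print_stats(20)
```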
CSS arrow on list hover I have a list menu with a 1px border that runs the full 100% width. See image 1:

[![enter image description here](https://i.stack.imgur.com/y3EDP.png)](https://i.stack.imgur.com/y3EDP.png)

Now comes the hard part. When the mouse hovers over a menu item I need this to happen under the hovered item:

See image 2:

[![enter image description here](https://i.stack.imgur.com/w4xjN.png)](https://i.stack.imgur.com/w4xjN.png)

How can I do this? I've been googling and found nothing on it.
Using relative positioned list-items combined with a rotated, absolute positioned after gives you a possible solution: ``` * { margin: 0; padding: 0; list-style: none; } ul li { display: inline-block; vertical-align: top; padding: 1em 3em; margin-right: -4px; border-bottom: 1px solid black; position: relative; } ul li:hover:after { content: ' '; display: block; height: 16px; width: 16px; background: white; position: absolute; left: 50%; margin-left: -8px; transform: rotate(45deg); bottom:-9px; border-left: 1px solid black; border-top: 1px solid black; } ``` ``` <ul> <li>Item 1</li> <li>Item 2</li> <li>Item 3</li> <li>Item 4</li> </ul> ```
Why is a Common-Lisp Lambda expression a valid function name? So let's say I want to call some function. If I've defined the function with a defun, I just use the name of the function at the beginning of a list followed by it's arguments like so (I will be using "=>" in examples to show the output of entering the code into the CLisp REPL): ``` (defun f (a) (lambda (b) (+ a b))) => F (f 12) => #<FUNCTION :LAMBDA (B) (+ A B)> ``` Simple enough. The first element of the list must be the name of a function, special operator, or macro. Except, the following is also valid: ``` ((lambda (a) (+ a 12)) 1) => 13 ``` A quick google-search reveals that LAMBDA is both a symbol and a macro. Trying to expand the macro yeilds: ``` (macroexpand '(lambda (a) (+ a 12))) => #'(LAMBDA (A) (+ A 12)) ``` This is not helpful. I have no way to differentiate between the macro LAMBDA and the symbol LAMBDA, and I'm totally unclear as to why I can use a lambda expression as a function-name but not, say, #'f, which, to my knowledge, should evaluate to a valid function-designator for the function F in the same way that #'(LAMBDA (A) (+ A 12)) does and yet: ``` (#'f 12) => *** - EVAL: #'F is not a function name; try using a symbol instead ``` Is LAMBDA a special exception to the otherwise hard-set rule that the first element of an evaluated expression *must* be the name of some operation, or is there some more consistent ruleset that I'm misunderstanding?
**Lambda expressions and function names** A *lambda expression* is not a *function name*. *function names* in Common Lisp are defined to be either *symbols* or *(setf symbol)*. A *lambda expression* is basically built-in syntax to describe an anonymous function. Note that lambda expressions on their own are not meaningful in Common Lisp. They are only appearing in a *lambda form* (see below) and inside a form with the special operator `function`. **Lists as Forms** > > Is LAMBDA a special exception to the otherwise hard-set rule that the first element of an evaluated expression must be the name of some operation, or is there some more consistent ruleset that I'm misunderstanding? > > > The Common Lisp specification defines that there are only four list-based *forms*. A *form* is a valid Lisp piece of code. - special forms (the form begins with a special operator) - macro forms (the form begins with a macro operator) - function forms (the form begins with a function operator) - lambda forms (the form begins with a lambda expression) See the Common Lisp HyperSpec: [Conses as Forms](http://www.lispworks.com/documentation/HyperSpec/Body/03_abab.htm). Note that there is no mechanism in Common Lisp to extend this. There are only these four types of list-based forms. One could think of extensions: arrays as functions, CLOS objects as functions, different types of functions like fexprs, variables, ... None of those are supported by Common Lisp syntax for list-based forms and there is no portable mechanism to add those. **LAMBDA** `LAMBDA` has two different purposes in Common Lisp: - it's the head of a *lambda expression*. - as a macro `LAMBDA`. This expands `(lambda ....)` into `(function (lambda ....))` The macro `LAMBDA` was added to Common Lisp after the first language definition CLtL1 as a convenience to be able to write `(lambda (x) x)` instead of `(function (lambda (x) x))` or `#'(lambda (x) x)`. Thus it's an abbreviation for the function special operator form and makes code look a bit simpler and more *Scheme-like*.
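Not part of the original answer, but for the last example in the question: a function object such as `#'f` can still be called at run time with `funcall` (or `apply`); it just cannot appear in the operator position of a form:

```
(defun f (a) (+ a 12))

(f 1)                       ; => 13, a function form (name in operator position)
((lambda (a) (+ a 12)) 1)   ; => 13, a lambda form
(funcall #'f 1)             ; => 13, calling a function object explicitly
;; (#'f 1)                  ; error: #'F is not a function name
```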
Multi-line regular expressions in Visual Studio Is there any way to get Visual Studio to perform a regex replace across multiple lines (let the match cross line boundaries)? I know there are many editors I can use for this, but it seems strange that this feature has been left out of Visual Studio. Am I missing something?
Use `(.*\n)*?` to skip zero or more lines between your expressions. `start(.*\n)*?end` finds ``` start two three end ``` `?` is the non-greedy operator, used to skip as few lines as possible. If `end` is not the first word in the line, add `.*` to match the extra characters. I.e.: `start(.*\n)*?.*end` finds ``` start two three four end end ``` If you only want to replace until the first `end`, add another non-greedy operator: `start(.*\n)*?.*?end`. **Historic:** In Visual Studio 2017 (and early versions of 2019) you can also use the single line option in the Find and Replace dialog `Ctrl`-`Shift`-`F` like this: `(?s)start.*end` For more see the version history of this answer.
tcp two sides trying to connect simultaneously Consider the three-way handshake of TCP. It is explained [here](http://www.faqs.org/rfcs/rfc793.html). Now the article above mentions that two sides may try to connect simultaneously and the three-way handshake works fine in this case. Can we simulate this situation using the sockets API? What we usually code using sockets is a passive open (server) and an active open (client).
It is possible to cause a simultaneous TCP open using the sockets API. As Nikolai mentions, it is a matter of executing the following sequence with a timing such that the initial SYNs cross each other. ``` bind addr1, port1 connect addr2, port2 bind addr2, port2 connect addr1, port1 ``` Here's how I achieved a simultaneous open using a single Linux host. 1. Slow down the loopback interface using [netem](http://www.linuxfoundation.org/collaborate/workgroups/networking/netem) ``` tc qdisc add dev lo root handle 1:0 netem delay 5sec ``` 2. Run `netcat` twice ``` netcat -p 3000 127.0.0.1 2000 netcat -p 2000 127.0.0.1 3000 ``` The two netcat processes connect to each other resulting in a single TCP connection ``` $ lsof -nP -c netcat -a -i # some columns removed COMMAND PID NAME netcat 27911 127.0.0.1:2000->127.0.0.1:3000 (ESTABLISHED) netcat 27912 127.0.0.1:3000->127.0.0.1:2000 (ESTABLISHED) ``` Here's what tcpdump showed me (output edited for clarity) ``` 127.0.0.1.2000 > 127.0.0.1.3000: Flags [S], seq 1139279069 127.0.0.1.3000 > 127.0.0.1.2000: Flags [S], seq 1170088782 127.0.0.1.3000 > 127.0.0.1.2000: Flags [S.], seq 1170088782, ack 1139279070 127.0.0.1.2000 > 127.0.0.1.3000: Flags [S.], seq 1139279069, ack 1170088783 ```
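If you want to stay inside the sockets API instead of using netcat, here is a rough Python sketch of the same experiment (my own illustration); it still relies on the netem delay above so that the two SYNs actually cross in flight, otherwise one `connect` will simply be refused:

```
import socket
import threading

def open_side(local_port, remote_port):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(("127.0.0.1", local_port))      # no listen(): both sides do an active open
    s.connect(("127.0.0.1", remote_port))  # the SYNs must cross for this to succeed
    print(s.getsockname(), "<->", s.getpeername())

t1 = threading.Thread(target=open_side, args=(2000, 3000))
t2 = threading.Thread(target=open_side, args=(3000, 2000))
t1.start(); t2.start()
t1.join(); t2.join()
```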
how producers find the kafka leader The producers send messages by setting up a list of Kafka brokers as follows.

```
props.put("bootstrap.servers", "127.0.0.1:9092,127.0.0.1:9092,127.0.0.1:9092");
```

I wonder how the producers know which of the three brokers has the *partition leader*. For a typical distributed server you either have a load-balancing server or a virtual IP, but how is the load distributed for Kafka? Does the producer program try to connect to one broker at random and look for the broker with the partition leader?
A Kafka cluster contains multiple broker instances. For each partition, exactly one broker acts as the leader at any given time, while the remaining replicas are the in-sync replicas (ISR) which hold the replicated data. When the leader broker is taken down unexpectedly, one of the ISR becomes the new leader. Kafka elects the leader among a partition's replicas using ZooKeeper. When a producer publishes a message to a partition in a topic, it is forwarded to that partition's leader. According to [Kafka documentation](https://kafka.apache.org/intro#intro_distribution):

> 
> The partitions of the log are distributed over the servers in the
> Kafka cluster with each server handling data and requests for a share
> of the partitions. Each partition is replicated across a configurable
> number of servers for fault tolerance.
> 
> 
> Each partition has one server which acts as the "leader" and zero or
> more servers which act as "followers". The leader handles all read and
> write requests for the partition while the followers passively
> replicate the leader. If the leader fails, one of the followers will
> automatically become the new leader. Each server acts as a leader for
> some of its partitions and a follower for others so load is well
> balanced within the cluster.
> 
> 

You can find topic and partition leader using [this piece of code.](https://cwiki.apache.org/confluence/display/KAFKA/Finding+Topic+and+Partition+Leader)

**EDIT:**

The producer sends a metadata request with a list of topics to one of the brokers you supplied when configuring the producer. The response from the broker contains a list of partitions in those topics and the leader for each partition. The producer caches this information and therefore knows where to send the messages.
Spark DataFrame change datatype based on column condition I have one Spark DataFrame df1 of around 1000 columns all of String type columns. Now I want to convert df1's columns' type from string to other types like double, int etc based on conditions of column names. For e.g. let's assume df1 has only three columns of string type ``` df1.printSchema col1_term1: String col2_term2: String col3_term3: String ``` Condition to change column type is if col name contains term1 then change it to int and if col name contains term2 then change it to double and so on. I am new to Spark.
You can simply map over columns, and cast the column to proper data type based on the column names: ``` import org.apache.spark.sql.types._ val df = Seq(("1", "2", "3"), ("2", "3", "4")).toDF("col1_term1", "col2_term2", "col3_term3") val cols = df.columns.map(x => { if (x.contains("term1")) col(x).cast(IntegerType) else if (x.contains("term2")) col(x).cast(DoubleType) else col(x) }) df.select(cols: _*).printSchema root |-- col1_term1: integer (nullable = true) |-- col2_term2: double (nullable = true) |-- col3_term3: string (nullable = true) ```
How to remove Kernel/LTS Enablement Stack? This question regarding the [LTS Enablement Stack](https://wiki.ubuntu.com/Kernel/LTSEnablementStack) actually has two parts:

1. I installed Linux 3.5.0-x on Precise 64bit via `sudo apt-get install linux-generic-lts-quantal xserver-xorg-lts-quantal`. As it turns out, VirtualBox does [not work yet on the new kernel](http://launchpad.net/bugs/1081307) (EDIT: This is fixed now.). Therefore I want to remove it from the system. As expected, removing the meta-packages via `sudo apt-get remove linux-generic-lts-quantal xserver-xorg-lts-quantal` is not enough. What packages do I need to remove manually? For the kernel I would do (as of now): `sudo apt-get remove linux-image-3.5.0-25-generic`. However, regarding xserver-xorg I am lost. How can I switch back to the original xserver?
2. I installed a system with the 12.04.2 installation media. There I also have the 3.5.0-x kernel, because that's the [default for this point release](https://wiki.ubuntu.com/PrecisePangolin/ReleaseNotes/UbuntuDesktop#PrecisePangolin.2BAC8-ReleaseNotes.2BAC8-CommonInfrastructure.Ubuntu_Kernel_3.5.0-23.35). How can I downgrade the kernel (and xserver?) to the 3.2.0-x series?

Thank you!
Important: The below instructions only work if you have one of the pre-enablement stack kernels installed still, e.g. 3.2.0-38. With the Enablement Stack installed run 'uname -r' to check the current kernel version. Substitute it into the command below: ``` sudo apt-get purge linux-generic-lts-quantal xserver-xorg-lts-quantal linux-headers-generic-lts-quantal linux-image-generic-lts-quantal linux-image-3.5.0-25-generic linux-headers-3.5.0-25 ``` Re-install important components (including X itself!) that are removed due to dependencies by the above commands: ``` sudo apt-get install xserver-xorg xserver-xorg-input-synaptics ``` Fix the steam client: ``` sudo apt-get install libgl1-mesa-glx:i386 ``` Finally, reboot your computer. IMHO the enablement stack isn't ready for release. In my experience Virtualbox, Samba, and the Steam client all break with the Enablement Stack installed. There's a workaround for Virtualbox but none for Samba or Steam so far (March 2013).
gtk+-3.0 not found issue I'm writing program using Vala language. When I try to build a `.deb` package on Launchpad, I get this error[![error](https://i.stack.imgur.com/kDKCJ.png)](https://i.stack.imgur.com/kDKCJ.png) CMake can't find `valac` package `gtk+-3.0`. It uses `valac` version `0.30`. Though it does find it, when I do this on my computer (use cmake to build). This is first time I try to build `.deb` package, so I'm a bit confused with what to do... Can anyone tell me how to fix this? Thanks in advance.
The error is quite clear: CMake can't find the [`pkg-config` package](https://en.wikipedia.org/wiki/Pkg-config) named 'gtk+-3.0'. You have to install the [Ubuntu package containing the `gtk+-3.0.pc` file](http://packages.ubuntu.com/search?searchon=contents&keywords=gtk%2B-3.0.pc&mode=exactfilename&suite=xenial&arch=any) (which is `libgtk-3-dev`) in order to compile and link against the libgtk+-3.0 library. In fact `pkg-config` is available in many Linux distributions and it always looks for the `gtk+-3.0.pc` file in its search path. The package containing this file (and the development headers and libraries) may be named differently on different Linux distros. While we are talking about Vala: The valac compiler also has a command line switch named `--pkg` (e.g. `--pkg gee-0.8 --pkg gtk+-3.0`) which uses `pkg-config` to determine the necessary headers and libs to build against libraries that come with a `.pc` file.
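On Ubuntu, the fix plus a quick manual check could look roughly like this (the file name `main.vala` is just a placeholder):

```
sudo apt-get install libgtk-3-dev valac
pkg-config --modversion gtk+-3.0        # should now print the installed GTK+ 3 version
valac --pkg gtk+-3.0 -o myapp main.vala # manual test build outside of CMake
```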
SQL query to compare current and previous value in a column in a history table I have a SQL Server 2008 database with a history table for recording changes in a main table. I need to report the current (latest) value of the 'rate' column, and the most recent previous value that's different from current. So, given something like: ``` id | rate | uninteresting | updated_on | version -----+--------+---------------+--------------+---------- 123 | 1.20 | foo | 2010-10-18 | 1500 456 | 2.10 | bar | 2010-10-12 | 2123 123 | 1.20 | baz | 2010-10-10 | 1499 123 | 1.10 | baz | 2010-10-08 | 1498 456 | 2.00 | bar | 2010-10-11 | 2122 123 | 1.00 | baz | 2010-08-01 | 1497 456 | 2.00 | quux | 2010-10-05 | 2121 456 | 1.95 | quux | 2010-09-07 | 2120 ``` I want to produce: ``` id | cur_rate | cur_ver | updated_on | prev_rate | prev_ver | prev_updated -----+----------+---------+------------+-----------+----------+------------- 123 | 1.20 | 1500 | 2010-10-18 | 1.10 | 1498 | 2010-10-08 456 | 2.10 | 2123 | 2010-10-12 | 2.00 | 2122 | 2010-10-11 ``` Note that I'm looking for the latest entry where the rate is different from the most recent entry. I've tried various approaches, but either get way too many results, or none at all. Any suggestions?
There are a couple of ways to accomplish this. Here's one way ``` Declare @table as table( id int, rate decimal(10,5) , uninteresting varchar(10) , updated_on date, version int ) INSERT INTO @table VALUES (123 , 1.20 , 'foo ' , '2010-10-18' , 1500), (456, 2.1, ' bar ', ' 2010-10-12 ', 2123), (123, 1.2, ' baz ', ' 2010-10-10 ', 1499), (123, 1.1, ' baz ', ' 2010-10-08 ', 1498), (456, 2, ' bar ', ' 2010-10-11 ', 2122), (123, 1, ' baz ', ' 2010-08-01 ', 1497), (456, 2, ' quux ', ' 2010-10-05 ', 2121), (456, 1.95, ' quux ', ' 2010-09-07 ', 2120) ;WITH rates AS (SELECT Row_number() OVER ( PARTITION BY curr.id, curr.rate ORDER BY curr.updated_on DESC) AS rn, curr.id, curr.rate cur_rate, curr.version cur_ver, curr.updated_on, previous.rate prev_rate, previous.version prev_ver, previous.updated_on prev_updated FROM @table curr LEFT JOIN @table previous ON curr.id = previous.id AND curr.rate <> previous.rate AND curr.updated_on > previous.updated_on ) SELECT id, cur_rate, cur_ver, updated_on, prev_rate, prev_ver, prev_updated FROM rates WHERE rn = 1 ``` produces this result ``` id cur_rate cur_ver updated_on prev_rate prev_ver prev_updated ----------- -------- ----------- ---------- --------- ----------- ------------ 123 1.00000 1497 2010-08-01 NULL NULL NULL 123 1.10000 1498 2010-10-08 1.00000 1497 2010-08-01 123 1.20000 1500 2010-10-18 1.10000 1498 2010-10-08 456 1.95000 2120 2010-09-07 NULL NULL NULL 456 2.00000 2122 2010-10-11 1.95000 2120 2010-09-07 456 2.10000 2123 2010-10-12 2.00000 2122 2010-10-11 ``` IF you change the rn to drop the rate in the partition by e.g. `( PARTITION BY curr.id ORDER BY curr.updated_on DESC) AS rn,` you get ``` id cur_rate cur_ver updated_on prev_rate prev_ver prev_updated ----------- -------- ----------- ---------- --------- ----------- ------------ 123 1.20000 1500 2010-10-18 1.10000 1498 2010-10-08 456 2.10000 2123 2010-10-12 2.00000 2122 2010-10-11 ```
How is a reference counter implemented at compile time? Here is a made up set of function calls (I tried to make it complicated but perhaps it is easy). ``` function main(arg1, arg2) { do_foo(arg1, arg2) } function do_foo(a, b) { let x = a + b let y = x * a let z = x * b let p = y + z let q = x + z let r = do_bar(&p) let s = do_bar(&q) } function do_bar(&p, &q) { *p += 1 *q += 3 let r = &p * &q let s = &p + &q let v = do_baz(&r, &s) return &v } function do_baz(&a, &b) { return *a + *b } ``` How do you generally go about figuring out the liveness of variables and where you can insert instructions for reference counting? Here is my attempt... Start at the top function `main`. It starts with 2 arguments. Assume there is **no copying** that occurs. It passes the actual mutable values to `do_foo`. Then we have `x`. X owns a and b. Then we see `y`. `y` is set to `x`, so link the previous x to this `x`. By `r`, we don't see `x` anymore, so perhaps it can be freed.... Looking at `do_bar` by itself, we know basically that `p` and `q` can't be garbage collected within this scope. Basically, I have no idea how to start implementing an algorithm to implement ARC (ideally compile time reference counting, but runtime would be okay for now too to get started). ``` function main(arg1, arg2) { let x = do_foo(arg1, arg2) free(arg1) free(arg2) free(x) } function do_foo(a, b) { let x = a + b let y = x * a let z = x * b let p = y + z free(y) let q = x + z free(x) free(z) let r = do_bar(&p) let s = do_bar(&q) return r + s } function do_bar(&p, &q) { *p += 1 *q += 3 let r = &p * &q let s = &p + &q let v = do_baz(&r, &s) free(r) free(s) return &v } function do_baz(&a, &b) { return *a + *b } ``` How do I start with implementing such an algorithm. I have searched for every paper on the topic but found no algorithms.
The following rules should do the job for your language.

1. When a variable is declared, increment its refcount
2. When a variable goes out of scope, decrement its refcount
3. When a reference-to-variable is assigned to a variable, adjust the reference counts for the variable(s):
	- increment the refcount for the variable whose reference is being assigned
	- decrement the refcount for the variable whose reference was previously in the variable being assigned to (if it was not null)
4. When a variable containing a non-null reference-to-variable goes out of scope, decrement the refcount for the variable it referred to.

Note:

- If your language allows reference-to-variable types to be used in data structures, "static" variables, etcetera, the rules above need to be extended ... in the obvious fashion.
- An optimizing compiler may be able to eliminate some refcount increments and decrements.

---

Compile time reference counting:

1. There isn't really any such thing. Reference counting is done at runtime. It doesn't make sense to do it at compile time.
2. You are probably talking about analyzing the code to determine if runtime reference counting can be optimized or entirely eliminated.
	- I alluded to the former above. It is really a kind of peephole optimization.
	- The latter entails checking whether a reference-to-variable can ever escape; i.e. whether it *could be* used after the variable goes out of scope. (Try Googling for "escape analysis". This is kind of analogous to the "escape analysis" that a compiler could do to decide if an object could be allocated on the stack rather than in the heap.)
Sending Multiple emails at once using PHP I am trying to send multiple emails using php. But everytime I try to send the email I'm getting “errorerrorerror”—one “error” for each email—that's in the table. Here's the code ``` $emailsql = "SELECT Username FROM Companyuserinfo WHERE Company_ID = '$cid'"; $emailquery = mysqli_query($connection, $emailsql); while($emailrow = mysqli_fetch_array ($emailquery)){ $Usernamesend = $emailrow['Username']; $sendsql = "SELECT * FROM users WHERE username = '$Usernamesend'"; $sendquery = mysqli_query($connection, $sendsql); $sendrow = mysqli_fetch_array ($sendquery); $emailtosend = $sendrow['email']; $to="$emailtosend"; $from = "[email protected]"; $subject="TEST!"; $message="HEY MY BROTHER!! I AM TESTING TdfdHIS BABY! WOOHOO!"; $headers = "From: $from\n"; $headers .= "MIME-Version: 1.0\n"; $headers .= "Content-type: text/html; charset=iso-8859-1\n"; mail($to, $subject, $message, $headers); if (!mail($to, $subject, $message, $headers)){ echo "error"; } else{ echo "Form submitted successfully! Press back $emailtosend"; } } ```
I got it to work....replaced my previous script with this; ``` $emailsql = "SELECT * FROM Companyuserinfo WHERE Company_ID = '$cid'"; $email_query = mysqli_query($connection, $emailsql); while($emailrow = mysqli_fetch_array ($email_query)){ $Usernamesend = $emailrow['Username']; $company_name = $emailrow['Company_Name']; $sendsql = "SELECT * FROM users WHERE username = '$Usernamesend'"; $send_query = mysqli_query($connection, $sendsql); $mail_body = ''; $sendrow = ''; $sendrow = mysqli_fetch_array($send_query); $email = $sendrow["email"]; $name = $sendrow["first_name"]; $to = "$email"; $from = "[email protected]"; $subject = "TESTINGG"; $message = 'TEST!!'; $headers = "From: $from\n"; $headers .= "MIME-Version: 1.0\n"; $headers .= "Content-type: text/html; charset=iso-8859-1\n"; $headers .= "X-Priority: 1 (Highest)\n"; $headers .= "X-MSMail-Priority: High\n"; $headers .= "Importance: High\n"; $mail_result = mail($to, $subject, $message, $headers); } if ($mail_result) { echo "Submitted Successfully! Press close"; } else { echo "There was an error submitting... Press close"; } ```
Is calling glFinish necessary when synchronizing resources between OpenGL contexts? I am using two OpenGL contexts in my application. The first one is used to render data, the second one to background load and generate VBOs and textures. When my loading context generates a VBO and sends it to my rendering thread, I get invalid data (all zeroes) in my VBO unless I call `glFlush` or `glFinish` after creating the VBO on the loading context. I think that this is due to my loading context not having any buffer swap or anything to tell the GPU to start working on its command queue and doing nothing (which leads to an empty VBO on the rendering context side). From what I've seen, this flush is not necessary on Windows (tested with an Nvidia GPU, it works even without the flushes) but is necessary on linux/macOS. This page on Apple's documentation says that calling `glFlush` is necessary (<https://developer.apple.com/library/archive/documentation/3DDrawing/Conceptual/OpenGLES_ProgrammingGuide/OpenGLESApplicationDesign/OpenGLESApplicationDesign.html>) > > If your app shares OpenGL ES objects (such as vertex buffers or textures) between multiple contexts, you should call the glFlush function to synchronize access to these resources. For example, you should call the glFlush function after loading vertex data in one context to ensure that its contents are ready to be retrieved by another context. > > > But is calling `glFinish` or `glFlush` necessary or is there simpler/lighter commands available to achieve the same result ? (and which is necessary, `glFlush` or `glFinish` ?) Also, is there a documentation or reference somewhere that talks about this ? I couldn't find any mentions and it seems to work differently between implementations.
If you manipulate the contents of any object in thread A, those contents are not visible to some other thread B [until two things have happened](https://www.khronos.org/opengl/wiki/Memory_Model#Object_content_visibility): 1. The commands modifying the object have [completed](https://www.khronos.org/opengl/wiki/Synchronization#Command_state). `glFlush` does not complete commands; you must use `glFinish` or a [sync object](https://www.khronos.org/opengl/wiki/Sync_Object) to ensure command completion. Note that the completion needs to be communicated to thread B, but the synchronization command has to be issued on thread A. So if thread A uses `glFinish`, it now must use some CPU synchronization to communicate that the thread is finished to thread B. If you use fence sync objects instead, you need to create the fence on thread A, then hand it over to thread B who can test/wait on that fence. 2. The object must be re-bound to the context of thread B. That is, you have to bind it to that context after the commands have completed (either directly with a `glBind*` command or indirectly by binding a container object that has this object attached to it). This is detailed in Chapter 5 of the OpenGL specification.
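As a rough C/C++ sketch of the fence-object route (my own illustration, not from the answer above; `vbo`, the data pointer and the hand-off between threads are assumed to exist, and each thread has its own current context that shares the buffer):

```
// Thread A (loading context): upload the data, then create a fence and
// flush so the fence is actually submitted to the GPU.
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, size, data, GL_STATIC_DRAW);
GLsync fence = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
glFlush();                              // without this the fence may never be processed
handOverToRenderThread(vbo, fence);     // hypothetical hand-off (mutex, queue, ...)

// Thread B (rendering context): wait until the upload has completed,
// then (re)bind the buffer so the new contents become visible here.
glClientWaitSync(fence, GL_SYNC_FLUSH_COMMANDS_BIT, 5000000000ull); // up to 5 s, in ns
glDeleteSync(fence);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
```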
c++ when will while(cin>>s) stop I'm new to C++. I'm sorry if this question is duplicated but I just can't find similar question. very basic code: ``` string s; while (cin >> s) cout << s << endl; ``` I expect the loop stop when I press the return in my keyboard. But it never stop.... I know that cin will return false when it encounters invalid input, for example if we use ``` int i; while (cin >> i) cout << i << endl; ``` then the loop ends when we enter a non-integer. But in case of string, how can we stop that loop?
`while (cin >> s) { ... }` will loop as long as the input is valid. It will exit the loop when the attempted input fails. There are two possible reasons for failure: 1. Invalid input 2. End of file Assuming that the input itself is valid, in order to terminate the loop the input stream has to reach the end. When the input is actually a file, recognizing the end is easy: when it runs out of characters it's at the end of the file. When it's the console, it's not so easy: you have to do something artificial to indicate the end of the input. Do that, you have to tell the terminal application (which controls the console) that there is no more input, and the terminal application, in turn, will tell your program that it's at the end of the input. The way you do that depends on the terminal application, which is typically part of the operating system. - On *Windows*, `ctrl-Z` tells the terminal application that you're at the end of your input. - On *Unix systems*, it's `ctrl-D`.
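As a small addition (not part of the original answer), you can check the stream state after the loop to see which of the two reasons ended it:

```
#include <iostream>
#include <string>

int main() {
    std::string s;
    while (std::cin >> s)
        std::cout << s << '\n';

    if (std::cin.eof())
        std::cout << "stopped: end of input\n";   // ctrl-D / ctrl-Z
    else if (std::cin.fail())
        std::cout << "stopped: invalid input\n";  // effectively never happens for std::string,
                                                  // but this is the case that ends the int loop
}
```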
Property Editor not registered with the PropertyEditorManager: error on custom tag invokation I'm getting following error when invoking my `testtag.jsp`: > > org.apache.jasper.JasperException: Unable to convert string "<c:out var='myvar'/>" to class "javax.servlet.jsp.tagext.JspFragment" for attribute "att1": Property Editor not registered with the PropertyEditorManager > > > (The detailed stack trace is irrelevant) I'm using a J2EE 1.4 server (that is, JSP 2.0) My `WEB-INF/testtag.tag` ``` <%@ tag body-content="scriptless" %> <%@ tag description="Renders some test html" %> <%@ attribute name="att1" fragment="true" required="true" %> <h1><jsp:invoke fragment="att1"/></h1> ``` The jsp `testtag.jsp` using this tag: ``` <%@page contentType ="text/html" pageEncoding="UTF-8" buffer="none" session="false" %> <%@ taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core" %> <%@ taglib prefix="t" tagdir="/WEB-INF/tags" %> <c:set var="myvar" value="hello world"/> <t:testtag att1="<c:out var='myvar'/>" /> ```
According to JSP.7.1.6 of the [JSP 2.0 specs](http://download.oracle.com/otndocs/jcp/jsp-2.0-fr-oth-JSpec/ "JSP 2.0 Spec") (bold emphasis mine), > > During the translation phase, various pieces of the page are translated into > implementations of the `javax.servlet.jsp.tagext.JspFragment` abstract class, before > being passed to a tag handler. This is done automatically **for any JSP code in the > body** of a *named attribute* (one that is defined by `<jsp:attribute>`) that is declared to > be a fragment, or of type `JspFragment`, in the TLD. > > > That is, in `testtag.jsp` the fragment should be passed this other way: ``` <%@page contentType ="text/html" pageEncoding="UTF-8" buffer="none" session="false" %> <%@ taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core" %> <%@ taglib prefix="t" tagdir="/WEB-INF/tags" %> <c:set var="myvar" value="hello world"/> <t:testtag> <jsp:attribute name="att1"><c:out value='${myvar}'/></jsp:attribute> </t:testtag> ```
c++ - deterministic alternative to std::uniform\_XXX\_distribution with mt19937? I need a way to get deterministic sequences of ints and doubles. ``` template <class U> constexpr auto get_random_value (std::mt19937 &gen, U min_value, U max_value)->U { if constexpr ( std::is_same_v <U, double> or std::is_same_v <U, float> ){ std::uniform_real_distribution <U> distrib( min_value, max_value ); return distrib( gen ); } else if constexpr ( std::is_same_v <U, u32> or std::is_same_v <U, i32> ){ std::uniform_int_distribution distrib( min_value, max_value ); return distrib( gen ); } else { throw std::runtime_error( "error value type" ); } } ``` My issue is that one day to another, the same seeded value will lead to different results. The distribution is to blame because it goes a long way to avoid the pitfall of the modulo. But I need a precise way to always be certain that a sequence will always be the same starting from a given seed. And I need an unbiased partition (so % and rand() are out). What kind of implementation will guarantee this?
The distributions in the C++ standard are not portable ("seed-stable"), in the sense that the result can change between [different implementations](https://stackoverflow.com/q/42475773/3740047) (e.g. Microsoft STL vs gcc libstdc++ vs clang libc++) or even different versions (e.g. [Microsoft changed their implementation before](https://github.com/microsoft/STL/issues/178)). The standard simply does not prescribe a specific algorithm, with the intention to allow implementations to [select the one with the best performance for each platform](https://stackoverflow.com/a/24554535/3740047). So far, there is only a [proposal (D2059R0)](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2020/p2059r0.pdf) to do something about this. Note, however, that the generators are actually portable. I have yet to see a library that **guarantees** portability. However, in practice [boost.random](https://www.boost.org/doc/libs/1_80_0/doc/html/boost_random.html) is known to produce reproducible result across platforms (see e.g. [here](https://www.reddit.com/r/cpp/comments/e9mft4/when_did_you_want_to_use_random_but_couldnt/) or [here](https://www.reddit.com/r/cpp/comments/7i21sn/til_uniform_int_distribution_is_not_portable/) or [here](https://blog.askesis.pl/post/2021/03/portable-random-number.html)). Also, Google's [abseil](https://abseil.io/docs/cpp/guides/random) library explicitly states that they do not provide a stability guarantee, but seems to produce the same result on different platforms nevertheless. Here is a [live example on godbolt](https://godbolt.org/z/fEf4hbzGM) where you can see that to some extent (well, at least Linux vs Windows for a tiny selection of parameters). The major point is not to update the libraries without checking for a breaking change. Also compare e.g. [this](https://blog.askesis.pl/post/2021/03/portable-random-number.html) blog post. Or you could also implement a specific algorithm yourself (see e.g. [here](https://www.pcg-random.org/posts/bounded-rands.html)) or simply copy the code from one the libraries to your code base and thereby effectively freezing its version. For distributions involving floating point arithmetic, you also have the problem that the arithmetic itself is, generally speaking, far from stable across platforms since stuff like automatic vectorization (SSE2 vs AVX etc) or FPU settings might change behavior. See [this blog post](https://randomascii.wordpress.com/2013/07/16/floating-point-determinism/) for more information. How far distributions, including the above mentioned libraries, are affected by this, I unfortunately do not know. The small example on godbolt mentioned above does at least not show any problem with `-ffast-math`, which is a good sign. Whatever you do, I highly recommend to back up your choice by appropriate automatic tests (unit tests etc.) that catch any potential deviations in behavior.
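To illustrate the "implement a specific algorithm yourself" route for the integer case, here is a small sketch (my own addition, not taken from any of the libraries mentioned above). It builds an unbiased bounded integer directly on the raw `mt19937` output, which *is* specified bit-for-bit by the standard, so the result depends only on the seed:

```
#include <cstdint>
#include <random>

// Unbiased integer in [lo, hi] via rejection sampling on gen().
std::uint32_t bounded(std::mt19937 &gen, std::uint32_t lo, std::uint32_t hi) {
    const std::uint64_t range = std::uint64_t(hi) - lo + 1;
    // Largest multiple of `range` that fits in 2^32; raw values at or
    // above it are rejected so every residue is equally likely.
    const std::uint64_t limit = (std::uint64_t(1) << 32) / range * range;
    std::uint64_t x;
    do {
        x = gen();   // raw 32-bit output, identical on every platform
    } while (x >= limit);
    return lo + static_cast<std::uint32_t>(x % range);
}
```

The floating-point case remains trickier for the reasons given above, since the arithmetic itself may differ between platforms.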
Change Rectangle Fill Based on ColumnWidth of a grid Essentially i want to do as the title says, if the columnwidth is .50 then the rectangle should be red, if it's .75 then it should be amber, and if it's 1 then it should be green. I thought I could achieve this with DataTriggers but for some reason I am getting "Object Reference Not Set To An Instance Of An Object" error, here is my code, am I missing something here? FYI the width property will be changed in the backend through a timer\_tick event. ``` <Grid x:Name="Grid1" Width="300" Height="30"> <Grid.ColumnDefinitions> <ColumnDefinition x:Name="MyColumn1" Width=".50*"></ColumnDefinition> <ColumnDefinition x:Name="MyColumn2" Width=".50*"></ColumnDefinition> </Grid.ColumnDefinitions> <Grid.Triggers> <DataTrigger Binding="{Binding ElementName=MyColumn1,Path=Width}" Value=".50*"> <Setter TargetName="rect" Property="Fill" Value="Red"></Setter> </DataTrigger> <DataTrigger Binding="{Binding ElementName=MyColumn1,Path=Width}" Value=".75*"> <Setter TargetName="rect" Property="Fill" Value="Yellow"></Setter> </DataTrigger> <DataTrigger Binding="{Binding ElementName=MyColumn1,Path=Width}" Value="1"> <Setter TargetName="rect" Property="Fill" Value="Green"></Setter> </DataTrigger> </Grid.Triggers> <Rectangle x:Name="rect" Grid.Column="0" HorizontalAlignment="Stretch"></Rectangle> <Rectangle Grid.Column="1" Fill="Blue"></Rectangle> </Grid> ```
This worked for me. Please note I used a `Style` to set triggers on a `Rectangle`. ``` <Grid x:Name="Grid1" Width="300" Height="30"> <Grid.ColumnDefinitions> <ColumnDefinition x:Name="MyColumn1" Width=".50*"></ColumnDefinition> <ColumnDefinition x:Name="MyColumn2" Width=".50*"></ColumnDefinition> </Grid.ColumnDefinitions> <Rectangle x:Name="rect" Grid.Column="0" HorizontalAlignment="Stretch"> <Rectangle.Style> <Style TargetType="{x:Type Rectangle}"> <Style.Triggers> <DataTrigger Binding="{Binding ElementName=MyColumn1,Path=Width}" Value=".50*"> <Setter Property="Fill" Value="Red"></Setter> </DataTrigger> <DataTrigger Binding="{Binding ElementName=MyColumn1,Path=Width}" Value=".75*"> <Setter Property="Fill" Value="Yellow"></Setter> </DataTrigger> <DataTrigger Binding="{Binding ElementName=MyColumn1,Path=Width}" Value="1"> <Setter Property="Fill" Value="Green"></Setter> </DataTrigger> </Style.Triggers> </Style> </Rectangle.Style> </Rectangle> <Rectangle Grid.Column="1" Fill="Blue"></Rectangle> </Grid> ```
Why returning a reference owned by the current function is allowed in Rust? I am learning Rust's lifetime/ownership concepts, and would like to explain the following behavior in Rust (rustc 1.37.0). For a program like this: ``` #[derive(Debug)] struct Book { price: i32, } fn main() { let book1 = Book {price: 12}; let cheaper_book = choose_cheaper(&book1); println!("{:?}", cheaper_book); } fn choose_cheaper(b1: &Book) -> &Book { if b1.price < 15 { b1 } else { let cheapest_book = Book {price: 0}; &cheapest_book } } ``` Rust reports: ``` 17 | &cheapest_book | ^^^^^^^^^^^^^^ returns a reference to data owned by the current function ``` And I can understand this error and it is because variable `cheapest_book` is the owner of the Book with price 0, and it will be dropped at the end of this function, so the returned reference will become invalid after that. But it is hard for me to explain why the following is allowed if I change the `choose_cheaper` function to be: ``` fn choose_cheaper(b1: &Book) -> &Book { if b1.price < 15 { b1 } else { let cheapest_book = &Book {price: 0}; cheapest_book } } ``` Could some one shed me some light on it? Thanks.
In the line `let cheapest_book = &Book {price: 0};`, the `Book` is *not* a "new" instance of the `Book` type. Every time this function is called it will return a reference to the *same* instance of the `Book` type, which will be stored in the read-only data section of the executable (or, technically, the data section if it contains a `Cell` or `AtomicUsize` or the like). We can in this instance "expand" the code into something a little more explicit: ``` static GLOBAL_BOOK: Book = Book { price: 0 }; fn choose_cheaper<'a>(b1: &'a Book) -> &'a Book { if b1.price < 15 { b1 } else { let cheapest_book = &GLOBAL_BOOK; cheapest_book } } ``` Note that the reference to `GLOBAL_BOOK` could actually be a `&'static Book`, but `&'a Book` is a supertype of that so it's okay to return the static reference as if it were an `'a` reference. If this seems weird, consider that this is exactly what happens with string literals; they just don't have the explicit `&` character: After `let foo = "string!";`, `foo` is a `&'static str` referencing some data in the read-only section of the executable, not a local object. So you can also write `return "string!";` in functions returning `&'a str` for any `'a`. The rule for whether rust will make this transformation is whenever you "construct" an object (using tuple syntax, or struct or enum or union initialization syntax, or numeric or string literals, or any combinations thereof - **not** function calls to `new()` or any other function) behind a `&`, they'll become an anonymous static. So in fact `&&1_u32` is a `'static` reference to a static `'static` reference to a static `u32`.
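A tiny self-contained illustration of the same promotion rule (my own addition, not from the original answer):

```
fn static_str() -> &'static str {
    "string!"   // the literal lives in the read-only data section
}

fn static_int() -> &'static i32 {
    &5          // constant promotion: &5 becomes a reference to an anonymous static
}

fn main() {
    println!("{} {}", static_str(), static_int());
}
```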
How to redirect a URL in Nginx I need to redirect every <http://test.com> request to <http://www.test.com>. How can this be done. In the server block I tried adding ``` rewrite ^/(.*) http://www.test.com/$1 permanent; ``` but in browser it says > > The page isn't redirecting properly > > > Firefox has detected that the server is redirecting the request for this address in a way that will never complete. > > > My server block looks like ``` server { listen 80; server_name test.com; client_max_body_size 10M; client_body_buffer_size 128k; root /home/test/test/public; passenger_enabled on; rails_env production; #rewrite ^/(.*) http://www.test.com/$1 permanent; #rewrite ^(.*)$ $scheme://www.test.com$1; error_page 500 502 503 504 /50x.html; location = /50x.html { root html; } } ```
Best way to do what you want is to add another server block: ``` server { #implemented by default, change if you need different ip or port #listen *:80 | *:8000; server_name test.com; return 301 $scheme://www.test.com$request_uri; } ``` And edit your main server block server\_name variable as following: ``` server_name www.test.com; ``` *Important*: New `server` block is the right way to do this, [`if` is evil](https://www.nginx.com/resources/wiki/start/topics/depth/ifisevil/). You must use locations and servers instead of `if` if it's possible. `Rewrite` is sometimes [evil too](https://www.nginx.com/resources/wiki/start/topics/tutorials/config_pitfalls/#taxing-rewrites), so replaced it with `return`.
How can I dilate a binary image without closing holes with OpenCV How can I apply dilation to the binary image on the left without closing the hole in the loop? I'm also interested in doing it efficiently. [![enter image description here](https://i.stack.imgur.com/6JUmq.png)](https://i.stack.imgur.com/6JUmq.png) **Context**: I need to train a CNN to read handwritten digits. Believe it or not, the image on the left is supposed to be a 9. Since the dataset has a lot of 9s written that way I may have a chance at training a model to recognise it. I do need to apply some dilation though, in order to get the digit thickness to be similar to that of the digits fed into the pre-trained model. I think if I lose the hole in the loop, I'll have no chance.
You just need to fill the hole inside the contour, invert it, then multiply it with the dilated image:

[![enter image description here](https://i.stack.imgur.com/oeIhP.png)](https://i.stack.imgur.com/oeIhP.png)

Here is the opencv code (c++):

```
Mat img__ = imread("E:/1.jpg", 0);
Mat img1;
threshold(img__, img1, 0, 255, THRESH_OTSU); // you don't need this line, I used it because I read the image from my hard disk. You can comment this line out.

vector<vector<Point>> contours;
vector< Vec4i > hierarchy;

findContours(img1, contours, hierarchy, RETR_CCOMP, CHAIN_APPROX_NONE);

Mat tmp = Mat::zeros(img1.size(), CV_8U);

for (size_t i = 0; i < contours.size(); i++)
{
    if (hierarchy[i][2] < 0) // this stands for the inner contour which surrounds the hole
        drawContours(tmp, contours, i, Scalar(255, 255, 255), -1);
}

Mat img2;
dilate(img1, img2, Mat::ones(9, 9, CV_8U));

imshow("original", img1);
imshow("1", tmp);
imshow("3", img2);

tmp = 255 - tmp;
imshow("2", tmp);

tmp = tmp / 255;
multiply(tmp, img2, img2);
imshow("4", img2);

waitKey(0);
```
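Since the question is about preprocessing digits for a CNN (typically done in Python), here is an equivalent sketch with OpenCV's Python bindings (my own translation; the file name and the Otsu threshold step are assumptions, and the hole is picked by its hierarchy *parent* rather than by the absence of a child, which is equivalent for this two-level case):

```
import cv2
import numpy as np

img = cv2.imread("digit.png", cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_OTSU)

# Build a mask of the hole: with RETR_CCOMP, hole contours have a parent.
contours, hierarchy = cv2.findContours(binary, cv2.RETR_CCOMP,
                                       cv2.CHAIN_APPROX_NONE)[-2:]
hole_mask = np.zeros_like(binary)
for i in range(len(contours)):
    if hierarchy[0][i][3] >= 0:                 # has a parent -> it is a hole
        cv2.drawContours(hole_mask, contours, i, 255, -1)

dilated = cv2.dilate(binary, np.ones((9, 9), np.uint8))
result = cv2.bitwise_and(dilated, cv2.bitwise_not(hole_mask))  # keep the hole open
```

The `[-2:]` slice keeps the snippet working on both OpenCV 3.x (three return values from `findContours`) and 4.x (two).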
If you're explicitly initializing an Object array in Java, is including "new Object[]" different than not including it? Preface: this question *has* been asked [here](https://stackoverflow.com/questions/4706814/array-initialization-differences-java "repeat question"), but I'm wondering specifically about the author's specific meaning. I'm reading through *Thinking in Java, 3rd ed. Revision 4.0*, and Eckel shows this snippet in Chapter 4, *Initialization and Cleanup*: ``` public class ArrayInit { public static void main(String[] args) { Integer[] a = { new Integer(1), new Integer(2), new Integer(3), }; Integer[] b = new Integer[] { new Integer(1), new Integer(2), new Integer(3), }; } } ``` And states the following: > > The first form is useful at times, but it’s more limited since the size of the array is determined at compile time. > > The second form provides a convenient syntax to create and call methods that can produce the same effect as C’s variable argument lists (known as “varargs” in C). These can include unknown quantities of arguments as well as unknown types. > > > I've never known these to be different as Eckel describes. To my understanding, they are both arrays of static size. I don't understand how the first is any more "limited" than the second. What's he talking about?
I think this *might* be what the author is referring to. Since Java 5, we can declare functions with *variable argument lists*. ``` public static int newStyleSum(final int... numbers) { int sum = 0; for (final int number : numbers) { sum += number; } return sum; } ``` They can be used as in: ``` int s = newStyleSum(1, 2, 3, 4); ``` This feature is merely syntactic sugar. Internally, an anonymous array is passed to the function. Before we had this syntax, the above example would have to be written as: ``` public static int oldStyleSum(final int[] numbers) { int sum = 0; for (int i = 0; i < numbers.length; ++i) { sum += numbers[i]; } return sum; } ``` and called as ``` int s = oldStyleSum(new int[]{1, 2, 3, 4}); // "second" form ``` but not as ``` int s = oldStyleSum({1, 2, 3, 4}); // "first" form (syntax error) ``` which, even today, is still a syntax error. And that might indeed be what he is talking about. Java 5 came out in 2004 so for a 2002 book, it makes sense. The new syntax is more flexible and – importantly – backwards compatible, so we can still do ``` int s = newStyleSum(new int[]{1, 2, 3, 4}); ``` or, more importantly, ``` int[] numbers = {1, 2, 3, 4}; int s = newStyleSum(numbers); ``` if we want to.
What is the difference between socket.send() and socket.sendall()? I'm confused about `socket.send()` and `socket.sendall()` functions in Python. As I understand from [the documentation](https://docs.python.org/2/library/socket.html#socket.socket.sendall) `send()` function uses TCP protocol and `sendall()` function uses UDP protocol for sending data. I know that TCP is more reliable for most of the Web Applications because we can check which packets are sent and which packets are not. That's why, I think use of `send()` function can be more reliable rather than `sendall()` function. At this point, I want to ask what is the exact difference between these two functions and which one is more reliable for web applications? Thank you.
**socket.send** is a low-level method and basically just the C/syscall method [send(3)](http://linux.die.net/man/3/send) / [send(2)](http://linux.die.net/man/2/send). It can send less bytes than you requested, but returns the number of bytes sent. **socket.sendall** is a high-level Python-only method that sends the entire buffer you pass or throws an exception. It does that by calling `socket.send` until everything has been sent or an error occurs. If you're using TCP with blocking sockets and don't want to be bothered by internals (this is the case for most simple network applications), use sendall. And python docs: > > Unlike send(), this method continues to send data from string until > either all data has been sent or an error occurs. None is returned on > success. On error, an exception is raised, and **there is no way to > determine how much data**, if any, was successfully sent > > > Credits to Philipp Hagemeister for brief description I got in the past. **edit** `sendall` use under the hood `send` - take a look on [cpython](https://github.com/python/cpython/blob/65e6c1eff3dde469256df909847b0e9126d0c255/Modules/socketmodule.c#L3562) implementation. Here is sample function acting (more or less) like `sendall` : ``` def sendall(sock, data, flags=0): ret = sock.send(data, flags) if ret > 0: return sendall(sock, data[ret:], flags) else: return None ``` or from [rpython (pypy source)](https://bitbucket.org/pypy/pypy/src/fd211d1194e8502f4eeffd919a0a4b31d688d324/rpython/rlib/rsocket.py?at=default&fileviewer=file-view-default#rsocket.py-914): ``` def sendall(self, data, flags=0, signal_checker=None): """Send a data string to the socket. For the optional flags argument, see the Unix manual. This calls send() repeatedly until all data is sent. If an error occurs, it's impossible to tell how much data has been sent.""" with rffi.scoped_nonmovingbuffer(data) as dataptr: remaining = len(data) p = dataptr while remaining > 0: try: res = self.send_raw(p, remaining, flags) p = rffi.ptradd(p, res) remaining -= res except CSocketError, e: if e.errno != _c.EINTR: raise if signal_checker is not None: signal_checker() ```
How to access html controls in code behind I'm trying to follow this [example](http://www.asp.net/web-forms/tutorials/security/membership/validating-user-credentials-against-the-membership-user-store-cs) on how to validate credentials. However, it uses asp: controls for the login form. If I were to use html controls instead so CSS styles can be applied, eg ``` <div id="login"> <a href="#" id="lclose"></a> <form action="#" runat="server"> <fieldset> <div class="frame"> <h4>Login</h4> <small>Sign in to your account.</small> <div class="clear"></div> <input type="text" value="Username" class="input-text autoclear" /> <input type="password" value="Password" class="input-text autoclear"/> </div> <div class="separator"></div> <div> <input type="submit" value="Sign in" class="input-submit float-right" runat="server" onserverclick="LoginButton_Click"/> <a href="#" class="float-left">Forgot your password?</a> </div> </fieldset> </form> </div> ``` How do I access the Username & Password in code behind similar to? ``` protected void LoginButton_Click(object sender, EventArgs e) { // Validate the user against the Membership framework user store if (Membership.ValidateUser(UserName.Text, Password.Text)) { // Log the user into the site FormsAuthentication.RedirectFromLoginPage(UserName.Text, RememberMe.Checked); } // If we reach here, the user's credentials were invalid InvalidCredentialsMessage.Visible = true; } ``` What is the correct syntax instead of `UserName.Text, Password.Text`?
Add `id` and `runat` server attributes to the input tag (see below) ``` <input type="text" value="Username" class="input-text autoclear" id="Username" runat="server"/> <input type="password" value="Password" class="input-text autoclear" id="Password" runat="server"/> ``` You also need to change `Text` to `Value` in your code: ``` protected void LoginButton_Click(object sender, EventArgs e) { // Validate the user against the Membership framework user store if (Membership.ValidateUser(Username.Value, Password.Value)) { // Log the user into the site FormsAuthentication.RedirectFromLoginPage(UserName.Value, RememberMe.Checked); } // If we reach here, the user's credentials were invalid InvalidCredentialsMessage.Visible = true; } ``` You can also add a html `checkbox` for `RememberMe` ``` <input id="RememberMe" type="checkbox" runat="server" value ="RememberMe"/> ``` Now you can check the checked states by calling `RememberMe.Checked`
Estimate the population variance from a set of means I have a set of measurements which is partitioned into M partitions. However, I only have the partition sizes $N\_i$ and the means $\bar{x}\_i$ from each partition. Because all measurements are assumed to be from the same distribution, I believe I can estimate the mean of the population, $\bar{y}$, and standard deviation of the mean, $\sigma\_{mean}$: $$ N=\sum\_{i=1}^M N\_i $$ $$ \bar{y} = \frac{1}{N}\sum\_{i=1}^MN\_i\bar{x}\_i $$ $$ \sigma\_{mean}=\sqrt{\frac{1}{N}\sum\_i N\_i(\bar{x}\_i-\bar{y})^2} $$ My questions: 1. Am I right in my assumptions, that the mean $\bar{y}$ can be computed as above? 2. How can I find the standard deviation for the population, given only the means? I read that the standard deviation of the population and standard deviation of the mean is related with $$ \sigma\_{mean}=\frac{\sigma}{\sqrt{n}} \mbox{[1]} $$ where $n$ is the number of samples used in the computation of $\bar{x}\_i$. So is it actually as simple as just multiplying $\sigma\_{mean}$ with $\sqrt{n}$ if $n$ for all means are the same? 3. If it's that simple, what do I do if each $\bar{x}\_i$ is computed using a different number of samples? [1] [Wikipedia:Standard Deviation](http://en.wikipedia.org/wiki/Standard_deviation#Relationship_between_standard_deviation_and_mean)
Let $X\_i$ be the mean of $N\_i$ independent draws from some unknown distribution $F$ having mean $\mu$ and standard deviation $\sigma$. Altogether these values represent $N=N\_1+N\_2+\cdots+N\_k$ draws. It follows from these assumptions that each $X\_i$ has expectation $\mu$ and variance $\sigma^2/N\_i$. Part of the question proposes estimating $\mu$ from these data as $$\hat{\mu} = \frac{1}{N}\sum\_{i=1}^k N\_i X\_i.$$ **We can verify that this is a good estimate.** First, it is unbiased: $$E[\hat{\mu}] = E\left[\frac{1}{N}\sum\_{i=1}^k N\_i X\_i\right] = \frac{1}{N}\sum\_{i=1}^k N\_i \mu = \mu.$$ Second, its estimation variance is low. To compute this we find the second moment: $$\begin{align} E\left[\hat{\mu}^2\right] &= E\left[\frac{1}{N^2}\sum\_{i,j}N\_i N\_j X\_i X\_j\right]\\ &= \mu^2 + \sigma^2/N. \end{align}$$ Subtracting the square of the first moment shows that the sampling variance of $\hat{\mu}$ equals $\sigma^2/N$. This is as low as an unbiased linear estimator can possibly get, because it equals the sampling variance of the mean of the $N$ (unknown) values from which the $X\_i$ were formed; that sampling variance is known to be minimum among all unbiased linear estimators; and any linear combination of the $X\_i$ is *a fortiori* a linear combination of the $N$ underlying values. **To address the other parts of the question**, let us seek an unbiased estimator of the variance $\sigma^2$ in the form of a weighted sample variance. Write the weights as $\omega\_i$. Computing in a similar vein we obtain $$\begin{align} E\left[\widehat{\sigma^2}\right] &= E\left[\sum\_i \omega\_i(X\_i-\hat{\mu})^2\right] \\ &= \sum\_i \omega\_i E\left[X\_i^2 - \frac{2}{N}\sum\_j N\_j X\_i X\_j + (\hat{\mu})^2\right]\\ &= \sum\_i \omega\_i \left((\mu^2 + \sigma^2 / N\_i)\left(1 - 2\frac{N\_i}{N}\right) - \frac{2}{N}\sum\_{j\ne i} N\_j \mu^2 + (\mu^2 + \sigma^2/N)\right)\\ &= \sigma^2 \sum\_i \omega\_i\left(\frac{1}{N\_i} - \frac{1}{N}\right). \end{align}$$ A natural choice (inspired by ANOVA calculations) is $$\omega\_i = \frac{N\_i}{k-1}.\tag{\*}$$ For indeed, $$E\left[\widehat{\sigma^2}\right] = \sigma^2 \sum\_i^k \frac{N\_i}{k-1}\left(\frac{1}{N\_i} - \frac{1}{N}\right) = \sigma^2 \frac{1}{k-1}\sum\_i^k \left(1 - \frac{N\_i}{N}\right) = \sigma^2\frac{k-\frac{N}{N}}{k-1} = \sigma^2.$$ This at least makes $\widehat{\sigma^2}$ unbiased. With more than $k=2$ groups, there are many other choices of weights that give unbiased estimators. When the group sizes are equal, it's easy to show that this choice gives a minimum-variance unbiased estimator. In general, though, it appears that the MVUE depends on the first four moments of $F$. (I may have done the algebra wrong, but I'm getting some complicated results for the general case.) Regardless, it appears that **the weights provided here will not be far from optimal**. As a concrete example, suppose that each of $X\_1$, $X\_2$, and $X\_3$ is the average of $N\_i=4$ draws. Then $N=12$, $k=3$, and the weights as given in formula $(\*)$ are all given by $\omega\_i = \frac{4}{3-1}=2$. Consequently we should estimate $$\widehat{\sigma^2} = 2((X\_1-\hat{\mu})^2 + (X\_2-\hat{\mu})^2 + (X\_3-\hat{\mu})^2)$$ and, of course, $$\hat{\mu} = \frac{1}{12}(4X\_1 + 4X\_2 + 4X\_3) = (X\_1+X\_2+X\_3)/3.$$
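For the concrete example at the end, a quick numerical sanity check (my own addition; numpy and a normal distribution with $\sigma^2 = 4$ are arbitrary choices for the unknown $F$):

```
import numpy as np

rng = np.random.default_rng(1)
sigma2, k, n = 4.0, 3, 4            # true variance, number of groups, draws per group

estimates = []
for _ in range(200_000):
    xbar = rng.normal(10.0, np.sqrt(sigma2), size=(k, n)).mean(axis=1)  # only the means are observed
    mu_hat = xbar.mean()                                                # equal group sizes
    estimates.append(n / (k - 1) * np.sum((xbar - mu_hat) ** 2))

print(np.mean(estimates))           # close to 4.0, i.e. the estimator is unbiased for sigma^2
```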
multidimensional array in angular I have a multidimensional array from an API. Is it possible to programatically loop through the array? ``` { success: true, categories: [{ cat_id: "2", name: "This is category One", description: null, photo_url: "/img/test.png", order: "1", items: [{ item_id: "1", title: "Desk", item_url: "/690928460", photo_url: "/desk.png", }, { item_id: "2", title: "Chair", item_url: "/18882823", photo_url: "/chair.png", }, }] }] } ``` **My controller looks like this:** ``` myApp.controller('items', function($scope, $http, $location, Data) { var apiUrl = '/api/items'; $http.get(apiUrl). success(function(data) { $scope.data = Data; $scope.allData = data; }); $scope.changeView = function(view) { $location.path(view); } }); ``` **Angular index file just has: `<div ng-view=""></div>`** **View file** ``` <div class="scrollable categories-container animated-fast slideInUp"> <div class="container categories"> <div class="row" ng-repeat="categories in allData"> <div class="col-xs-6" ng-repeat="category in categories"> <div class="items"> <div class="title"> {{ category.name }} </div> </div> </div> </div> </div> </div> ``` I can loop through the category names fine, but when trying to return items **for EACH** category I don't understand the logic behind it...
I would suggest some simple nested for loops, as for each gives rise to more complexity. As I'm not sure what you want to do with the data let's just create an array of all item names and one of all category names: Within your success function: ``` var items = [], categories = [] for(var i = 0; i < data.categories.length;i++){ categories.push(data.categories[i].name); for(var j = 0; j < data.categories[i].items.length;j++){ items.push(data.categories[i].items[j].name); } } console.log(categories); console.log(items); ``` **EDIT:** Completely missed your html code somehow, here is my solution: ``` <div class="scrollable categories-container animated-fast slideInUp"> <div class="container categories"> <div class="col-xs-6" ng-repeat="category in allData.categories"> <div class="items"> <div class="title"> {{ category.name }} </div> </div> </div> </div> </div> ``` **EDIT 2:** As to your comment: If you want to select the secondary view's contents(ie the items) based on the selection of a category I would suggest a ng-click attribute. A directive could be used but isn't necessary: ``` <div class="scrollable categories-container animated-fast slideInUp"> <div class="container categories"> <div class="col-xs-6" ng-repeat="category in allData.categories"> <div class="title" ng_click="selected_category = category"> {{ category.name }} </div> </div> <div class="col-xs-6" ng-repeat="item in selected_category.items"> <div class="title"> {{ item.name }} </div> </div> </div> </div> ``` So when your categories data is loaded the first ng-repeat is populated with the categories. Each div with class title will have a function called on click which will make the selected\_category object equal the selected category. This will then cause the second view to be populated with all the items in the selected category by Angular's two way bind.
Expose Columns property of a DataGridView in UserControl and make it editable via Designer > > Short description: > > > I have a UserControl with a DataGridView on it. I > want to expose the DataGridView Columns collection to the designer, so > I can change the columns on my User Control at design time. > > > **Question: Which designer attributes do I need for this?** For those interested in the longer version: I have a UserControl with the following features: - a DataGridView that shows "pages" of items from a collection. - a NumericUpdown control to select which page to show. - page up / page down buttons that will disable when the first / last page is shown - Changes to the displayed items are visually marked - Buttons to save / discard the changes. This user control can work autonomic. It has one function to be used by the parent control: - Show page (collection of items to show) The UserControl raises two events: - Event Page changed (with a page number). Should result in loading a new page - Event Save items (with the collection of changed items) I have to show this user control on several forms. The only difference is that the collection of DataGridViewColumn differs per form. I could add the columns programmatically, but it would be easier to create them using the designer.
Usually it's enough to register a suitable [`UITypeEditor`](https://learn.microsoft.com/en-us/dotnet/api/system.drawing.design.uitypeeditor?WT.mc_id=DT-MVP-5003235&view=netframework-4.8) using the [`[Editor]`](https://learn.microsoft.com/en-us/dotnet/api/system.componentmodel.editorattribute?WT.mc_id=DT-MVP-5003235&view=netframework-4.8) attribute. The editor which is used by the `DataGridView` is `DataGridViewColumnCollectionEditor`. But in this case, if we use this editor directly, the editor expects the property to belong to a `DataGridView` and tries to convert the value of [`ITypeDescriptorContext.Instance`](https://learn.microsoft.com/en-us/dotnet/api/system.componentmodel.itypedescriptorcontext.instance?WT.mc_id=DT-MVP-5003235&view=netframework-4.8) to `DataGridView`, and since the `Columns` property we are editing belongs to our user control, we will receive an exception:

> 
> Unable to cast object of type '`Type of Control`' to type
> '`System.Windows.Forms.DataGridView`'.
> 
> 

To solve the problem, we need to create a custom `UITypeEditor`, override [`EditValue`](https://learn.microsoft.com/en-us/dotnet/api/system.drawing.design.uitypeeditor.editvalue?WT.mc_id=DT-MVP-5003235&view=netframework-4.8) and edit the `Columns` property of the private `DataGridView` field of your user control. To do so, we create an instance of [`ITypeDescriptorContext`](https://learn.microsoft.com/en-us/dotnet/api/system.componentmodel.itypedescriptorcontext?WT.mc_id=DT-MVP-5003235&view=netframework-4.8) containing the `DataGridView` and its `Columns` property and pass it to the `EditValue` method of the editor. This way the editor will edit our `Columns` property. We also decorate our property with the [`[DesignerSerializationVisibility]`](https://learn.microsoft.com/en-us/dotnet/api/system.componentmodel.designerserializationvisibility?WT.mc_id=DT-MVP-5003235&view=netframework-4.8) attribute to serialize the collection contents. Here is the implementation.

**MyUserControl**

I suppose you add a `DataGridView` at design-time to the user control and its name would be `dataGridView1`.
``` public partial class MyUserControl : UserControl { public MyUserControl() { InitializeComponent(); } [Editor(typeof(MyColumnEditor), typeof(UITypeEditor))] [DesignerSerializationVisibility(DesignerSerializationVisibility.Content)] public DataGridViewColumnCollection Columns { get { return this.dataGridView1.Columns; } } } ``` **Editor** ``` public class MyColumnEditor : UITypeEditor { public override UITypeEditorEditStyle GetEditStyle(ITypeDescriptorContext context) { return UITypeEditorEditStyle.Modal; } public override object EditValue(ITypeDescriptorContext context, IServiceProvider provider, object value) { var field = context.Instance.GetType().GetField("dataGridView1", System.Reflection.BindingFlags.NonPublic | System.Reflection.BindingFlags.Instance); var dataGridView1 = (DataGridView)field.GetValue(context.Instance); dataGridView1.Site = ((Control)context.Instance).Site; var columnsProperty = TypeDescriptor.GetProperties(dataGridView1)["Columns"]; var tdc = new TypeDescriptionContext(dataGridView1, columnsProperty); var editor = (UITypeEditor)columnsProperty.GetEditor(typeof(UITypeEditor)); var result = editor.EditValue(tdc, provider, value); dataGridView1.Site = null; return result; } } ``` **ITypeDescriptionContext Implementation** ``` public class TypeDescriptionContext : ITypeDescriptorContext { private Control editingObject; private PropertyDescriptor editingProperty; public TypeDescriptionContext(Control obj, PropertyDescriptor property) { editingObject = obj; editingProperty = property; } public IContainer Container { get { return editingObject.Container; } } public object Instance { get { return editingObject; } } public void OnComponentChanged() { } public bool OnComponentChanging() { return true; } public PropertyDescriptor PropertyDescriptor { get { return editingProperty; } } public object GetService(Type serviceType) { return editingObject.Site.GetService(serviceType); } } ```
Cannot convert lambda expression to type 'IValueResolver' because it is not a delegate type

I am trying to define this simple mapping in my profile

```
CreateMap<CompanyClient, MyDto>()
    .ForMember(
        dto => dto.PaymentTerms,
        opt => opt.MapFrom(companyClient => companyClient.Company.PaymentTerms != null
            ? companyClient.Company.PaymentTerms.Value
            : null))
```

But I'm getting this stupid error message:

> Cannot convert lambda expression to type 'IValueResolver&lt;CompanyClient, MyDto, object&gt;' because it is not a delegate type

What exactly is the problem here? I have used plenty of ternary operators elsewhere in my code, but for some reason this nullable `SmartEnum` case has some kind of problem.

entity:

```
public sealed class CompanyClient
{
    ...
    public PaymentTerm? PaymentTerms { get; private set; }
    ...
}
```

dto:

```
public sealed record MyDto
{
    ...
    public int? PaymentTerms { get; private init; }
    ...
}
```

PaymentTerm.cs is just a simple [SmartEnum](https://github.com/ardalis/SmartEnum)

Note that it does not give any compiler errors when I write it like this:

```
.ForMember(
    dto => dto.PaymentTerms,
    opt => opt.MapFrom(companyClient => companyClient.Company.PaymentTerms ?? null))
```

or

```
.ForMember(
    dto => dto.PaymentTerms,
    opt => opt.MapFrom(companyClient => companyClient.Company.PaymentTerms))
```

What is the problem?
I found that we can get rid of this stupid message by just casting the non-null result to the output type. So where ``` .ForMember( dst => dst.VisitReportFileId, opt => opt.MapFrom(src => src.VisitReportFile == null ? null : src.VisitReportFile.Id.Key)); ``` Gives me a > > Cannot convert lambda expression to type 'IValueResolver<Activity, ActivityDTO, object>' because it is not a delegate type > > > I can fix it by casting the "else" result to `Guid?`: ``` .ForMember( dst => dst.VisitReportFileId, opt => opt.MapFrom(src => src.VisitReportFile == null ? null : (Guid?)src.VisitReportFile.Id.Key)); ```
@selector() in Swift? I'm trying to create an `NSTimer` in `Swift` but I'm having some trouble. ``` NSTimer(timeInterval: 1, target: self, selector: test(), userInfo: nil, repeats: true) ``` *`test()` is a function in the same class.* --- I get an error in the editor: > > Could not find an overload for 'init' that accepts the supplied > arguments > > > When I change `selector: test()` to `selector: nil` the error disappears. I've tried: - `selector: test()` - `selector: test` - `selector: Selector(test())` But nothing works and I can't find a solution in the references.
Swift *itself* doesn't use selectors — several design patterns that in Objective-C make use of selectors work differently in Swift. (For example, use optional chaining on protocol types or `is`/`as` tests instead of `respondsToSelector:`, and use closures wherever you can instead of `performSelector:` for better type/memory safety.) But there are still a number of important ObjC-based APIs that use selectors, including timers and the target/action pattern. Swift provides the `Selector` type for working with these. (Swift automatically uses this in place of ObjC's `SEL` type.) ### In Swift 2.2 (Xcode 7.3) and later (including Swift 3 / Xcode 8 and Swift 4 / Xcode 9): You can construct a `Selector` from a Swift function type using the `#selector` expression. ``` let timer = Timer(timeInterval: 1, target: object, selector: #selector(MyClass.test), userInfo: nil, repeats: false) button.addTarget(object, action: #selector(MyClass.buttonTapped), for: .touchUpInside) view.perform(#selector(UIView.insertSubview(_:aboveSubview:)), with: button, with: otherButton) ``` The great thing about this approach? A function reference is checked by the Swift compiler, so you can use the `#selector` expression only with class/method pairs that actually exist and are eligible for use as selectors (see "Selector availability" below). You're also free to make your function reference only as specific as you need, as per [the Swift 2.2+ rules for function-type naming](https://developer.apple.com/library/prerelease/mac/documentation/Swift/Conceptual/Swift_Programming_Language/Expressions.html#//apple_ref/doc/uid/TP40014097-CH32-ID400). (This is actually an improvement over ObjC's `@selector()` directive, because the compiler's `-Wundeclared-selector` check verifies only that the named selector exists. The Swift function reference you pass to `#selector` checks existence, membership in a class, and type signature.) There are a couple of extra caveats for the function references you pass to the `#selector` expression: - Multiple functions with the same base name can be differentiated by their parameter labels using the aforementioned [syntax for function references](https://developer.apple.com/library/prerelease/mac/documentation/Swift/Conceptual/Swift_Programming_Language/Expressions.html#//apple_ref/doc/uid/TP40014097-CH32-ID400) (e.g. `insertSubview(_:at:)` vs `insertSubview(_:aboveSubview:)`). But if a function has no parameters, the only way to disambiguate it is to use an `as` cast with the function's type signature (e.g. `foo as () -> ()` vs `foo(_:)`). - There's a special syntax for property getter/setter pairs in Swift 3.0+. For example, given a `var foo: Int`, you can use `#selector(getter: MyClass.foo)` or `#selector(setter: MyClass.foo)`. ### General notes: **Cases where `#selector` doesn't work, and naming:** Sometimes you don't have a function reference to make a selector with (for example, with methods dynamically registered in the ObjC runtime). In that case, you can construct a `Selector` from a string: e.g. `Selector("dynamicMethod:")` — though you lose the compiler's validity checking. When you do that, you need to follow ObjC naming rules, including colons (`:`) for each parameter. **Selector availability:** The method referenced by the selector must be exposed to the ObjC runtime. In Swift 4, every method exposed to ObjC must have its declaration prefaced with the `@objc` attribute. (In previous versions you got that attribute for free in some cases, but now you have to explicitly declare it.) 
Remember that `private` symbols aren't exposed to the runtime, either — your method needs to have at least `internal` visibility.

**Key paths:** These are related to but not quite the same as selectors. There's a special syntax for these in Swift 3, too: e.g. `chris.valueForKeyPath(#keyPath(Person.friends.firstName))`. See [SE-0062](https://github.com/apple/swift-evolution/blob/master/proposals/0062-objc-keypaths.md) for details. There's even more [`KeyPath` stuff in Swift 4](https://github.com/apple/swift-evolution/blob/master/proposals/0161-key-paths.md), so make sure you're using the right KeyPath-based API instead of selectors if appropriate.

You can read more about selectors under [Interacting with Objective-C APIs](https://developer.apple.com/library/prerelease/ios/documentation/Swift/Conceptual/BuildingCocoaApps/InteractingWithObjective-CAPIs.html#//apple_ref/doc/uid/TP40014216-CH4-XID_26) in *Using Swift with Cocoa and Objective-C*.

**Note:** Before Swift 2.2, `Selector` conformed to `StringLiteralConvertible`, so you might find old code where bare strings are passed to APIs that take selectors. You'll want to run "Convert to Current Swift Syntax" in Xcode to get those using `#selector`.
Exclude attribute from a specific xml element using xslt

I am new to XSLT. I have the following problem: within an XML document, I need to remove a specific attribute (`theAtribute` in the example) from a specific element (e.g. `div`), i.e.

```
<html>
<head>...</head>
<body>
    <div id="qaz" theAtribute="44">
    </div>
    <div id ="ddd" theAtribute="4">
        <div id= "ggg" theAtribute="9">
        </div>
    </div>
    <font theAttribute="foo" />
</body>
</html>
```

to become

```
<html>
<head>...</head>
<body>
    <div id="qaz">
    </div>
    <div id ="ddd">
        <div id= "ggg">
        </div>
    </div>
    <font theAttribute="foo" />
</body>
</html>
```

where the attribute `theAtribute` has been removed.

I found this, <http://www.biglist.com/lists/xsl-list/archives/200404/msg00668.html>, based on which I made attempts to find the proper solution, i.e. `<xsl:template match="@theAtribute" />`, which removed it from the whole document... and others like match, if/choose, etc. Nothing worked... :-(

Can you please help me with this? It sounds trivial to me, but with XSLT I cannot cope at all... Thank you all in advance
What is not working? Do you want the same content, just without the `@theAtribute`? If so, make sure your stylesheet has the empty template for `@theAtribute`, but also has an identity template that copies everything else into the output:

```
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">

    <!--empty template suppresses this attribute-->
    <xsl:template match="@theAtribute" />

    <!--identity template copies everything forward by default-->
    <xsl:template match="@*|node()">
        <xsl:copy>
            <xsl:apply-templates select="@*|node()"/>
        </xsl:copy>
    </xsl:template>

</xsl:stylesheet>
```

If you only want to suppress certain `@theAtribute` attributes, then you can make the match criteria more specific. For instance, if you only wanted to remove that attribute from the `div` whose `@id="qaz"`, then you could use this template:

```
<xsl:template match="@theAtribute[../@id='qaz']" />
```

or this template:

```
<xsl:template match="*[@id='qaz']/@theAtribute" />
```

---

If you want to remove `@theAtribute` from all `div` elements, then change the match expression to:

```
<xsl:template match="div/@theAtribute" />
```
What do I need to do to have my Raspberry Pi host a webpage with dials indicating CPU temperature, CPU load, etc., in real time? I am good with Unix commands and scripting, but I have nearly no web experience. I have a script that grabs metrics I’m interested in, like CPU load or system temp and updates a file every 10 seconds. I would like to point my iPad to a local website hosted by my Raspberry Pi, that has a real-time updating graphical representation of this data. I’ve worked before setting up a simple Apache web server, and I can write HTML and JavaScript. Besides that, I am lost and need someone to point me in the right direction.
I use [Grafana](https://grafana.com) with [InfluxDB](https://www.influxdata.com) for this on my Raspberry Pi 3. They are both relatively easy to setup and connect to each other. They even work well in [Docker](https://www.docker.com) containers on the Raspberry Pi. I stream all my updates into InfluxDB as they are generated. Then Grafana does all the graphical work of displaying them in a nice visual format. I designed a simple dashboard just for my old iPad with its smaller screen. It does sound like a lot of installing and overhead, but it sure does look pretty. [![Enter image description here](https://i.stack.imgur.com/757rT.png)](https://i.stack.imgur.com/757rT.png)
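In case it helps to see the data-ingestion side: below is a minimal, hypothetical sketch of pushing one reading into InfluxDB over its HTTP line-protocol endpoint, assuming an InfluxDB 1.x instance on `localhost:8086` and an already-created database named `metrics` (both are assumptions; adjust them to your setup). Your existing script that collects CPU temperature every 10 seconds could post each value this way instead of writing it to a file, and Grafana then reads it straight from InfluxDB.

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class InfluxWrite {
    public static void main(String[] args) throws Exception {
        double cpuTemp = 48.2; // the value your metrics script already collects

        // InfluxDB 1.x line protocol: "<measurement> <field>=<value>"
        byte[] body = ("cpu_temperature value=" + cpuTemp)
                .getBytes(StandardCharsets.UTF_8);

        URL url = new URL("http://localhost:8086/write?db=metrics");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(body);
        }

        // InfluxDB answers 204 No Content when the point is accepted
        System.out.println("InfluxDB responded: " + conn.getResponseCode());
        conn.disconnect();
    }
}
```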
As compared to android, does it cost money to develop for windows 8? I have been reading a number of blogs on the subject. But now I am left with a number of sometimes contradictory information. I hope this community can present me the truth on the matter. WHAT I HAVE: I own a mac osx mountain lion. I also own a pc, which I bought in November 2008, that's running both Windows Vista and Ubuntu. COMPARISON: If **I** decide to develop for android right now, **I** can get everything I need for free: - I can get the IDE (eclipse) for free - I can get the android sdk for free - I can get the emulators for free - I only have to pay a $25 lifetime fee if I decide to publish on the Google Play Store - No computer to buy; no OS to buy; no IDE to buy; nothing to buy. Can someone please tell me: Is it the same for windows 8 development? With my existing computers, would it be cost free to develop for windows 8? If not, what do I need to buy. Please no iffy info, I am looking for decisive information. One blog, for example, talks about a 30 day free-trial of the Windows 8 OS: This is clearly not free now is it? Also is developing for windows phone the same as developing for windows 8? Some blogs treat the two as equivalent. What's the reality? - A: All windows phone apps work on windows 8? - B: All windows 8 apps work on windows phone? - C: Both A and B How about windows 7? Is there a relationship there with windows phones? Note: I understand the details may make this seem like a number of questions in one; but I had to include the details. The question, however, is one: does it cost money to develop for windows based on the criteria I present?
To develop Windows Store/Modern UI applications for Windows 8 you need a computer running the Windows 8 OS. For Windows Phone 8 development you specifically need Windows 8 Professional 64-bit. Windows 8 is not free, but there is a [90-day evaluation for developers](http://msdn.microsoft.com/en-US/evalcenter/jj554510.aspx) that you can download free. Since you have a Vista license you should also be able to upgrade that to Windows 8, but again that is not free (just as OSX upgrades are not, IIRC). [Visual Studio Express versions](http://www.microsoft.com/visualstudio/eng/downloads#d-2012-express) are FREE and available for both Windows 8 and Windows Phone development (they *are* different platforms, but with some overlap in terms of code reuse). To publish to the Windows Store (Windows 8 apps) there is a $49 annual account fee. For the Windows Phone Store, it's a separate account that has a $99 annual fee. - If you are a student and join [DreamSpark](http://dreamspark.com), both of these fees are waived. - If you are a [MSDN subscriber](http://msdn.microsoft.com), likewise these fees are waived. - An MSDN subscription is part of the [BizSpark program for startups](http://bizspark.com) as well, so if you qualify there, you would get free store accounts as well as access to Windows 8. With Windows 7 you would only be able to develop applications for the Windows Phone 7 platform, so given the setup you've described there's no advantage in going to 7 versus 8. When you build Windows Phone applications with Windows 8 development tools you will still be able to target Windows Phone 7 devices in addition to the newer Windows Phone 8 devices. Do keep in mind that Microsoft is always running promotions and incentives in various markets (in the US, for instance, there is the [Keep the Cash](http://aka.ms/keepthecash) program where you can get $100 for up to 10 apps you publish in the Windows Store, and another $1000 for Windows Phone applications, - more than offsetting costs to get up and running)! Likewise, there are Microsoft audience evangelists across the world whose role it is to help you get started and be a successful app publisher on the platform. Do reach out to your local rep as he or she may can provide additional guidance and make you aware of offers/incentives that may help you to get started. If you don't know how to reach him or her, please send me a note via my profile and I will connect you.
(need advice) Talk to MySQL server database from my Android App

I am starting my thesis, which is an app for Android. This app is based on a web platform I had created. The part where I need advice is: *which is the most efficient way to pull data from a MySQL server into the application? Please give me some advice and your experience on the matter.*

(I've read about encoding the queries in JSON, but it seems like an awful lot of needless work.)
I suggest you use a [RESTful Web Service in Java using Jersey](http://www.vogella.com/articles/REST/article.html) as an intermediate layer between your Android app and the MySQL server. You can transfer data as JSON (my suggestion for a mobile app), XML or plain text to your Android app. You can find the benefits of using a Web Service in your system in @Elad's answer: [Best way to access a remote database: via webservice or direct DB-access?](https://stackoverflow.com/questions/5689539/best-way-to-access-a-remote-database-via-webservice-or-direct-db-access)

Also, if you later decide to develop for other smartphone platforms, you just need to reuse the same Web Service. As a result, this Web Service can be considered a generic protocol for the mobile users of your system.

I used Hibernate to map the data to the MySQL database: [RESTful Service Using Jersey with Hibernate Persistence](http://sberka.blogspot.se/2010/02/json-restfull-service-using-jersey-with.html)

If you decide to follow this approach, note that it is highly recommended to separate your Hibernate stuff from your Jersey services. You need to wire your DAO to your service tier. See what @Rick Mangi wrote to me: [REST with Java (JAX-RS) using Jersey and hibernate](https://stackoverflow.com/questions/11190934/rest-with-java-jax-rs-using-jersey)

It is also a good approach to use an HTTP client in your Android app, since it supports @GET, @POST, @DELETE and @PUT commands and you can easily talk to your database, like in this [HTTP GET Request](https://stackoverflow.com/questions/11526437/android-illegalstateexception-in-httpget/11526693#11526693) example.
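To make the intermediate layer concrete, here is a minimal, hypothetical sketch of a Jersey (JAX-RS) resource that returns JSON to the Android app. The `Product` class, the resource path and the hard-coded data are illustrative placeholders only, and a JSON provider (for example Jackson) has to be registered with Jersey for the automatic serialization to work; in a real service the list would come from MySQL via Hibernate/DAO as described above.

```java
import java.util.Arrays;
import java.util.List;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

// Simple data holder that the JSON provider serializes automatically.
class Product {
    public int id;
    public String name;

    public Product() { }

    public Product(int id, String name) {
        this.id = id;
        this.name = name;
    }
}

// Exposed at e.g. http://<host>/<app>/products and consumed by the Android app.
@Path("products")
public class ProductResource {

    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public List<Product> getProducts() {
        // Placeholder data; replace with a DAO call backed by MySQL/Hibernate.
        return Arrays.asList(new Product(1, "First"), new Product(2, "Second"));
    }
}
```

The Android app then issues a plain HTTP GET against that URL and parses the returned JSON array.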
Updating Route 53 automatically when Auto Scaling brings up new instance

I am using Amazon EC2 Auto Scaling in my environment. Whenever Auto Scaling launches a new instance, I need to change the IP manually in Route 53, and I want to automate this process. I tried using [Lifecycle Hooks](https://aws.amazon.com/blogs/compute/using-aws-lambda-with-auto-scaling-lifecycle-hooks/) but didn't see any update for Route 53.
```
#!/bin/bash

# Instance metadata: this instance's ID and private IP
INSTANCE_ID=$(curl http://169.254.169.254/latest/meta-data/instance-id)
PRIVATE_IP=$(curl http://169.254.169.254/latest/meta-data/local-ipv4)

# Look up the hosted zone's domain name (sed strips the trailing dot)
DOMAIN_NAME=$(aws route53 get-hosted-zone --id "<Hosted Zone ID>" --query 'HostedZone.Name' --output text | sed 's/.$//')
hostnamectl set-hostname hostname."${DOMAIN_NAME}"

# Third octet of the private IP decides which record set to update
CN=$(echo "$PRIVATE_IP" | cut -d . -f 3)
echo "$CN"
a=5
if [ "$CN" == "$a" ]
then
    # UPSERT an A record pointing at this instance's private IP
    aws route53 change-resource-record-sets --hosted-zone-id "<Hosted Zone ID>" --change-batch '{"Changes": [{"Action": "UPSERT","ResourceRecordSet": {"Name": "'"<DNS Name>"'","Type": "A","TTL": 60,"ResourceRecords": [{"Value": "'"${PRIVATE_IP}"'"}]}}]}'
else
    aws route53 change-resource-record-sets --hosted-zone-id "<Hosted Zone ID>" --change-batch '{"Changes": [{"Action": "UPSERT","ResourceRecordSet": {"Name": "'"<DNS Name>"'","Type": "A","TTL": 60,"ResourceRecords": [{"Value": "'"${PRIVATE_IP}"'"}]}}]}'
fi
```
Create function in dynamic sql PostgreSQL Is it possible to create a function or execute anonymous block inside dynamic SQL in PostgreSQL? I'm looking for something like this: ``` Create or replace FUNCTION fff(p1 int) LANGUAGE plpgsql AS $$ DECLARE v_Qry VARCHAR(4000); BEGIN v_Qry := ' Create or replace FUNCTION fff_DYNAMIC_SQL() LANGUAGE plpgsql AS $$ DECLARE v1 INTEGER; begin v1 := ' || p1 || '; RETURN; END; $$;'; EXECUTE v_Qry; RETURN; END; $$; ```
You have three levels of nested string in your code. The best way to deal with that, is to use [dollar quoting](https://www.postgresql.org/docs/current/static/sql-syntax-lexical.html#SQL-SYNTAX-DOLLAR-QUOTING) for **all** of them. When creating dynamic SQL it's also better to use `format()` instead of string concatenation. Then you only need a single string with placeholders which makes the code a lot easier to read. To nest multiple dollar quoted strings use a different delimiter each time: ``` Create or replace FUNCTION fff(p1 int) returns void LANGUAGE plpgsql AS $$ --<< outer level quote DECLARE v_Qry VARCHAR(4000); BEGIN v_Qry := format( $string$ --<< quote for the string constant passed to the format function Create or replace FUNCTION fff_DYNAMIC_SQL() returns void LANGUAGE plpgsql AS $f1$ --<< quoting inside the actual function body DECLARE v1 INTEGER; begin v1 := %s; RETURN; END; $f1$ $string$, p1); EXECUTE v_Qry; RETURN; END; $$; ``` You also forgot to declare the returned data type. If the function does not return anything, you need to use `returns void`.
Checking the type of an inner exception In my code I'm coming across a situation in which a `System.Reflection.TargetInvocationException` is thrown. In one specific case I know how I want to handle the root exception, but I want to throw all other exceptions. I can think of two ways of doing this, but I'm not sure which is better. 1. ``` try { //code } catch (System.Reflection.TargetInvocationException ex) { if (typeof(ex.InnerException) == typeof(SpecificException)) { //fix } else { throw ex.Innerexception; } } ``` 2. ``` try { //code } catch (System.Reflection.TargetInvocationException ex) { try { throw ex.InnerException; } catch (SpecificException exSpecific) { //fix } } ``` I'm aware that throwing exceptions in general is slow, so I feel the first method would possibly be faster. Alternatively, is there a better way of doing this that I haven't thought of?
Each of your proposed solutions has its own issue. The first method checks that the type of the inner exception is *exactly* the type you're expected. That means that a derived type won't match, which might not be what you intended. The second method overwrites the inner exception's stack trace with the current stack location, as Dan Puzey mentioned. Destroying the stack trace may be destroying the one lead you require in order to fix a bug. The solution is basically what DarkGray posted, with Nick's suggestion and with an added suggestion of my own (in the `else`): ``` try { // Do something } catch (TargetInvocationException ex) { if (ex.InnerException is SpecificException) { // Handle SpecificException } else if (ex.InnerException is SomeOtherSpecificException) { // Handle SomeOtherSpecificException } else { throw; // Always rethrow exceptions you don't know how to handle. } } ``` If you want to re-throw an exception that turns out you can't handle, don't `throw ex;` since that will overwrite the stack trace. Instead use `throw;` which preserves the stack trace. It basically means "I actually didn't want to enter this `catch` clause, pretend I never caught the exception". **Update:** C# 6.0 offers a much better syntax via *Exception Filters*: ``` try { // Do something } catch (TargetInvocationException ex) when (ex.InnerException is SpecificException) { // Handle SpecificException } catch (TargetInvocationException ex) when (ex.InnerException is SomeOtherSpecificException) { // Handle SomeOtherSpecificException } ```
Why is glDrawPixels not working in here? Here's the code. The output is a greyish square - all the time, no matter what the input is and its clearly wrong. My objective is to be able to have all the pixels stored somewhere and display them so I can move on to a simple ray tracer and I can't seem to figure out this glDrawPixels thing. ``` #include <stdlib.h> #include <GL/glut.h > using namespace std; struct RGBType { float r; float g; float b; //float a; }; void display(void) { glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); RGBType *pixels = new RGBType[250*250]; for (int x = 0; x < 250; x++) { for (int y = 0; y < 250; y++) { pixels->r = 0; pixels->g = 1; pixels->b = 1; //pixels->a = 200; } } glTexSubImage2D(GL_TEXTURE_2D,0,0,0,250,250,GL_RGB,GL_UNSIGNED_BYTE,pixels); //glColor3f(1.0,1.0,1.0); glBegin(GL_POLYGON); glVertex3f(0.0, 0.0, 0.0); glVertex3f(1.0, 0.0, 0.0); glVertex3f(1.0, 1.0, 0.0); glVertex3f(0.0, 1.0, 0.0); glEnd(); glutSwapBuffers(); } void init(void) { //select clearing (background) color glClearColor(0.0, 0.0, 0.0, 0.0); //initialize viewing values glMatrixMode(GL_PROJECTION); glLoadIdentity(); glOrtho(0.0, 1.0, 0.0, 1.0, -1.0, 1.0); } int main(int argc, char** argv) { //Initialise GLUT with command-line parameters. glutInit(&argc, argv); //Set Display Mode glutInitDisplayMode(GLUT_RGB | GLUT_DOUBLE | GLUT_DEPTH); //Set the window size glutInitWindowSize(250,250); //Set the window position glutInitWindowPosition(100,100); //Create the window glutCreateWindow("Ray Tracer"); //Call init (initialise GLUT init(); //Call "display" function glutDisplayFunc(display); //Enter the GLUT event loop glutMainLoop(); return 0; } ```
``` struct RGBType { float r; float g; float b; ^^^^^^^^ all floats here... //float a; }; ... RGBType *pixels = new RGBType[250*250]; ... glTexSubImage2D(GL_TEXTURE_2D,0,0,0,250,250,GL_RGB,GL_UNSIGNED_BYTE,pixels); ^^^^^^^^^^^^^^^^ wat ``` You're telling OpenGL to interpret `pixels` as an array of `unsigned char`s, read them in groups of [four bytes](http://www.opengl.org/wiki/Common_Mistakes#Texture_upload_and_pixel_reads), and use the first three bytes of each four as RGB channels. Don't lie to OpenGL. Rarely works out for you in the end. Try `GL_FLOAT` instead. And create an actual texture object first. And bind that texture object before you try to upload data to it. And specify some texture coordinates for your polygon. And enable texturing before your draw your polygon. Something like this: ``` #include <GL/glut.h> struct RGBType { float r; float g; float b; }; GLuint tex = 0; void init() { RGBType pixels[ 250*250 ]; RGBType* temp = pixels; for (int x = 0; x < 250; x++) { for (int y = 0; y < 250; y++) { temp->r = 0; temp->g = 1; temp->b = 1; temp++; } } glGenTextures( 1, &tex ); glBindTexture( GL_TEXTURE_2D, tex ); glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR ); glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR ); glTexImage2D( GL_TEXTURE_2D, 0, GL_RGB, 250, 250, 0, GL_RGB, GL_FLOAT, NULL ); glTexSubImage2D(GL_TEXTURE_2D,0,0,0,250,250,GL_RGB,GL_FLOAT,pixels); } void display(void) { glClearColor(0, 0, 0, 1); glClear(GL_COLOR_BUFFER_BIT); glMatrixMode(GL_PROJECTION); glLoadIdentity(); glOrtho(-2, 2, -2, 2, -1, 1); glMatrixMode(GL_MODELVIEW); glLoadIdentity(); glColor3ub( 255, 255, 255 ); glEnable( GL_TEXTURE_2D ); glBindTexture( GL_TEXTURE_2D, tex ); glBegin(GL_QUADS); glTexCoord2i( 0, 0 ); glVertex2i( 0, 0 ); glTexCoord2i( 1, 0 ); glVertex2i( 1, 0 ); glTexCoord2i( 1, 1 ); glVertex2i( 1, 1 ); glTexCoord2i( 0, 1 ); glVertex2i( 0, 1 ); glEnd(); glutSwapBuffers(); } int main(int argc, char** argv) { glutInit(&argc, argv); glutInitDisplayMode(GLUT_RGBA | GLUT_DOUBLE); glutInitWindowSize(250,250); glutCreateWindow("Ray Tracer"); init(); glutDisplayFunc(display); glutMainLoop(); return 0; } ```
C++ determine if a container has ::find() I have a functor which operates on a container of type `U` of elements of type `T` like so ``` template<typename T, template<typename...> class U> class asserter { public: asserter(U<T> &c) : container(c) { }; void operator()(T lhs) { CU_ASSERT(container.find(lhs) != container.end()); }; private: U<T> &container; }; ``` which I might use as ``` std::set<std::string> a, c; ... asserter<std::string, std::set> ass(c); for_each(a.begin(), a.end(), ass); ``` Where we are ignoring `std::includes()` for the moment. This works great if the container is one where `U::find()` is defined. If it's not I'd like to fall back to `std::find()`. On the other hand I'd rather use `U::find()` over `std::find()` if it's available. In C++11 (or 17 if necessary) can I determine if `U::find()` is available (possibly restricting to the STL) for U and if so use it, otherwise use `std::find()`?
SFINAE on whether the expression `c.find(value)` is well-formed. Trailing return type is C++11, and not essential here anyway; it just makes the return type easier to write - `decltype(c.find(value))` instead of `decltype(std::declval<Container&>().find(std::declval<const T&>()))`. If the expression would be ill-formed, the first overload of `find_impl` is removed from the overload set, leaving the second overload as the only viable one. The usual `int/long/0` trick for the third parameter makes the first overload preferred when both are viable. ``` template<class Container, class T> auto find_impl(Container& c, const T& value, int) -> decltype(c.find(value)){ return c.find(value); } template<class Container, class T> auto find_impl(Container& c, const T& value, long) -> decltype(std::begin(c)){ return std::find(std::begin(c), std::end(c), value); } template<class Container, class T> auto find(Container& c, const T& value) -> decltype(find_impl(c, value, 0)) { return find_impl(c, value, 0); } ``` The usual disclaimer applies: this relies on expression SFINAE, which is not currently supported by MSVC; Microsoft does plan to add support in an update to MSVC 2015.
waiting for all pids to exit in php

My issue is this. I am forking a process so that I can speed up access time to files on disk. I store any data from these files in a tmp file on local disk. Ideally, after all processes have finished, I need to access that tmp file and get that data into an array. I then unlink the tmp file as it is no longer needed. My problem is that it would seem that pcntl_wait() does not actually wait until all child processes are done before moving on to the final set of operations. So I end up unlinking that file before some random process can finish up.

I can't seem to find a solid way to wait for all processes to exit cleanly and then access my data.

```
$numChild = 0;
$maxChild = 20; // max number of forked processes.

// get a list of "availableCabs"
foreach ($availableCabs as $cab) {
    // fork the process
    $pids[$numChild] = pcntl_fork();

    if (!$pids[$numChild]) {
        // do some work
        exit(0);
    } else {
        $numChild++;
        if ($numChild == $maxChild) {
            pcntl_wait($status);
            $numChild--;
        }
    } // end fork
}

// Below is where things fall apart. I need to be able to print the complete serialized data,
// but several child processes don't actually exit before I unlink the file.
$dataFile = fopen($pid, 'r');
while(($values = fgetcsv($dataFile,',')) !== FALSE) {
    $fvalues[] = $values;
}
print serialize($fvalues);

fclose($dataFile);
unlink($file);
```

Please note that I'm leaving a lot of code out regarding what I'm actually doing; if we need that posted, that's not an issue.
Try restructuring you code so that you have two loops - one that spawns processes and one that waits for them to finish. You should also use [`pcntl_waitpid()`](http://uk.php.net/manual/en/function.pcntl-waitpid.php) to check for specific process IDs, rather than the simple child counting approach you are currently using. Something like this: ``` <?php $maxChildren = 20; // Max number of forked processes $pids = array(); // Child process tracking array // Get a list of "availableCabs" foreach ($availableCabs as $cab) { // Limit the number of child processes // If $maxChildren or more processes exist, wait until one exits if (count($pids) >= $maxChildren) { $pid = pcntl_waitpid(-1, $status); unset($pids[$pid]); // Remove PID that exited from the list } // Fork the process $pid = pcntl_fork(); if ($pid) { // Parent if ($pid < 0) { // Unable to fork process, handle error here continue; } else { // Add child PID to tracker array // Use PID as key for easy use of unset() $pids[$pid] = $pid; } } else { // Child // If you aren't doing this already, consider using include() here - it // will keep the code in the parent script more readable and separate // the logic for the parent and children exit(0); } } // Now wait for the child processes to exit. This approach may seem overly // simple, but because of the way it works it will have the effect of // waiting until the last process exits and pretty much no longer foreach ($pids as $pid) { pcntl_waitpid($pid, $status); unset($pids[$pid]); } // Now the parent process can do it's cleanup of the results ```
How to get exit status with Ruby's Net::SSH library?

I have a snippet of code that simply tries to execute a script on a remote server; in the event that it fails, I'd like to make a follow-up call. Imagine this:

```
require 'rubygems'
require 'net/ssh'
require 'etc'

server = 'localhost'
Net::SSH.start(server, Etc.getlogin) do |ssh|
  puts (ssh.exec("true") ? 'Exit Success' : "Exit Failure")
  puts (ssh.exec("false") ? 'Exit Success' : "Exit Failure")
end
```

I would expect (ignoring that stdout and stderr are printed in my contrived example) that the first line should exit with `0`, which I would expect Ruby to interpret as `false` and display "Exit Failure" (sure, the logic is wrong, the ternary needs to be flipped), and that the second line should exit with the opposite status, but it doesn't. I can't even find anything in the documentation about how to do this, and I'm a little worried that I might be doing it wrong?!
I find the following way of running processes with Net::SSH much more useful. It provides you with distinct `stdout` and `stderr`, `exit code` and `exit signal`. ``` require 'rubygems' require 'net/ssh' require 'etc' server = 'localhost' def ssh_exec!(ssh, command) stdout_data = "" stderr_data = "" exit_code = nil exit_signal = nil ssh.open_channel do |channel| channel.exec(command) do |ch, success| unless success abort "FAILED: couldn't execute command (ssh.channel.exec)" end channel.on_data do |ch,data| stdout_data+=data end channel.on_extended_data do |ch,type,data| stderr_data+=data end channel.on_request("exit-status") do |ch,data| exit_code = data.read_long end channel.on_request("exit-signal") do |ch, data| exit_signal = data.read_long end end end ssh.loop [stdout_data, stderr_data, exit_code, exit_signal] end Net::SSH.start(server, Etc.getlogin) do |ssh| puts ssh_exec!(ssh, "true").inspect # => ["", "", 0, nil] puts ssh_exec!(ssh, "false").inspect # => ["", "", 1, nil] end ``` Hope this helps.
Whitespace and truncation with ellipsis on Select-Object I'm trying to figure out why `Select-Object` 1. adds a lot of whitespace at the start of its output; and 2. truncates long properties with ellipsis. Here's a repro of what I mean. Suppose you run these commands on `C:\`: ``` New-Item "MyTest" -Type Directory ``` ``` cd MyTest ``` ``` "Some very long lorem ipsum like text going into a certain file, bla bla bla and some more bla." | Out-File test.txt ``` ``` Get-ChildItem | Select-String "text" | Select-Object LineNumber,Line ``` This will show output like this: ![Select-Object output](https://i.stack.imgur.com/rEAIH.png) The ellipsis I can understand, that would be just the way the command ends up getting formatted when the result is written to the console host. However, the whitespace at the start still confuses me in this case. Things get weirder for me though when I pipe the result to either `clip` or `Out-File output.txt`. I get similarly formatted output, with a lot of whitespace at the start and truncated `Line` properties. Which command is causing this behavior, and how can I *properly* solve this? Most importantly: how can I get the *full* results into a file or onto my clipboard?
I'd say the best way to get the full output into a file would be to export the result as a CSV: ``` Get-ChildItem | Select-String "text" | Select-Object LineNumber,Line | Export-Csv 'out.csv' ``` You could also build a string from the selected properties, which might be better for copying the data to the clipboard: ``` Get-ChildItem | Select-String "text" | ForEach-Object { '{0}:{1}' -f $_.LineNumber, $_.Line } | Tee-Object 'out.txt' | clip ``` The behavior you observed is caused by the way PowerShell displays output. Basically, it looks at the first object and counts the properties. Objects with less than 5 properties are sent to `Format-Table`, otherwise to `Format-List`. The columns of tabular output are spread evenly across the available space. As [@Vesper](https://stackoverflow.com/a/31315168/1630171) already mentioned you can enforce proportional column width by using the `-AutoSize` parameter, and wrapping of long lines by using the `-Wrap` parameter. `Format-List` wraps long strings by default. See [this blog post](http://blogs.msdn.com/b/powershell/archive/2006/04/30/how-powershell-formatting-and-outputting-really-works.aspx) from Jeffrey Snover for more information.
Are there algebraic data types outside of sum and product?

By most definitions, the common or basic algebraic data types in Haskell or Scala are sum and product. Examples: [1](https://github.com/hemanth/functional-programming-jargon#algebraic-data-type), [2](https://nrinaudo.github.io/scala-best-practices/definitions/adt.html). Sometimes a definition just says [algebraic data types are sum and product](https://wiki.haskell.org/Algebraic_data_type), perhaps for simplicity.

However, the definitions leave the impression that other algebraic data types are possible, and that sum and product are just the most useful ones for describing selection or combination of elements. Given that basic algebra also has subtraction, division and raising to an integer power, is it correct that implementations of other, alternative algebraic types are possible in programming, but that they are just not useful? Do any programming languages have algebraic data types implemented that are not sum and product types?
"Algebraic" comes from category theory. Every algebraic data type is an [initial algebra](https://en.wikipedia.org/wiki/Initial_algebra) of a functor. So you could in principle call anything that comes from a functor in this way algebraic, and I think it's quite a large class. Interpreting "algebraic" to mean "high-school algebra" (I don't mean to be condescending, that's just [how we refer to it](http://blog.sigfpe.com/2009/11/memoizing-polymorphic-functions-with.html)) as you have, there are some nice analogies. - Arbitrary powers, not just integer powers, are closely analogous to function types, that is, `A -> B` is analogous to `BA`. In category theory, when you consider a function ("morphism") as an object of a category, it's called an *exponential object*, and the latter notation is used. For fun, see if you can prove the law `CA+B = CA × CB` by writing a bijection between the corresponding types. - Division is analogous to *quotient types*, which is a fascinating area of research that reaches into things as hott and trendy as homotopy type theory. The analogy of quotients to division is not as strong as product types with multiplication, as you have to divide by an equivalence relation. - At this rate, you would expect subtraction to have some beautiful analogy to go with it, but alas I know of none. Dan Piponi has explored it a little through [the antidiagonal](http://blog.sigfpe.com/2007/09/type-of-distinct-pairs.html), but it is far from a general analogy.
How to add a 'total' row in a grouped query (in Postgresql)?

How do I add a row to the end of this SELECT so I can see the total of the grouped rows? (I need the total for 'money' and 'requests'.)

```
SELECT organizations.name || ' - ' || section.name as Section,
       SUM(requests.money) as money,
       COUNT(*) as requests
FROM schema.organizations
-- INNER JOINs omitted --
WHERE -- omitted --
GROUP BY -- omitted --
ORDER BY -- omitted --
```

Running the above produces:

```
|*Section*  | *Money* | *Requests*|
|-----------|---------|-----------|
|BMO - HR   |564      |10         |
|BMO - ITB  |14707    |407        |
|BMO - test |15       |7          |
```

Now what I want is to add a total to the end of that which would display:

```
|BMO - Total|15286    |424        |
```

---

I have tried a few things, and ended up by trying to wrap the select in a WITH statement and failing:

```
WITH w as (
  --SELECT statement from above--
)
SELECT * FROM w
UNION ALL
SELECT 'Total', money, requests from w
```

This produces weird results (I'm getting four total rows, when there should be just one).
You can achieve this by using a UNION query. In the query below, I add an artificial sortorder column and wrap the union query in an outer query so that the sum line appears at the bottom. [I'm assuming you'll be adding your joins and group by clauses...] ``` SELECT section, money, requests FROM -- outer select, to get the sorting right. ( SELECT organizations.name || ' - ' || section.name as Section, SUM(requests.money) as money, COUNT(*) as requests, 0 AS sortorder -- added a sortorder column FROM schema.organizations INNER JOINs omitted -- WHERE -- omitted -- GROUP BY -- omitted -- -- ORDER BY is not used here UNION SELECT 'BMO - Total' as section, SUM(requests.money) as money, COUNT(*) as requests, 1 AS sortorder FROM schema.organizations -- add inner joins and where clauses as before ) AS unionquery ORDER BY sortorder -- could also add other columns to sort here ```
Change spark \_temporary directory path

Is it possible to change the `_temporary` directory where Spark saves its temporary files before writing?

In particular, since I am writing single partitions of a table, I would like the temporary folder to be within the partition folder.

Is it possible?
There is no way to do this with the default FileOutputCommitter because of its implementation: the FileOutputCommitter creates a **${mapred.output.dir}/\_temporary** subdirectory where the files are written and later on, after being committed, moved to **${mapred.output.dir}**. In the end, the entire temporary folder is deleted. When two or more Spark jobs have the same output directory, mutual deletion of files is inevitable.

Eventually, I downloaded ***org.apache.hadoop.mapred.FileOutputCommitter*** and ***org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter*** (you can name it **YourFileOutputCommitter**) and made some changes that allow the temporary path to be renamed. In your driver, you'll have to add the following code:

```
val conf: JobConf = new JobConf(sc.hadoopConfiguration)
conf.setOutputCommitter(classOf[YourFileOutputCommitter])
// update temporary path for committer
YourFileOutputCommitter.tempPath = "_tempJob1"
```

Note: it's better to use **MultipleTextOutputFormat** to rename files, because two jobs that write to the same location can override each other.

> *Update*

I've created a short post on our tech blog with more details: <https://www.outbrain.com/techblog/2020/03/how-you-can-set-many-spark-jobs-write-to-the-same-path/>
Java Stream API storing lambda expression as variable

This title sounds stupid even to me, but there must be at least a somewhat clever way to achieve this effect and I don't know how else to explain it. I need to sort an array using `sorted` in the Stream API. Here is my stream so far:

```
Arrays.stream(sequence.split(" "))
      .mapToInt(Integer::parseInt)
      .boxed()
      .sorted((a, b) -> a.compareTo(b))
      .forEach(a -> System.out.print(a + " "));
```

Now I have two different sort orders of course, ascending and descending, and the sort I need to use is specified in the user input. So what I want to do is have something like a switch with 2 cases, "ascending" and "descending", and a variable to store the lambda expression respectively:

```
switch(command) {
    case "ascending": var = a.compareTo(b);
    case "descending": var = b.compareTo(a);
}
```

Then my sorted call looks like:

```
.sorted((a, b) -> var)
```

I got the idea in a Python course I attended. There it was possible to store an object in a variable, thus making the variable "executable". I realize that this lambda is not an object, but an expression, but I'm asking whether there is any clever way to achieve such a result, or whether I should just have

```
if(var)
```

and two different streams for each sort order.
The question is not stupid at all. Answering it in a broader sense: Unfortunately, there is no generic solution for that. This is due to the type inference, which determines *one particular type* for the lambda expression, based on the target type. (The section about [type inference](https://docs.oracle.com/javase/tutorial/java/generics/genTypeInference.html#target_types) may be helpful here, but does not cover all details regarding lambdas). Particularly, a lambda like `x -> y` does not have any type. So there is no way of writing *`GenericLambdaType`*`function = x -> y;` and later use `function` as a drop-in replacement for the actual lambda `x -> y`. For example, when you have two functions like ``` static void useF(Function<Integer, Boolean> f) { ... } static void useP(Predicate<Integer> p) { ... } ``` you can call them both with the same lambda ``` useF(x -> true); useP(x -> true); ``` but there is no way of "storing" the `x -> true` lambda in a way so that it later may be passed to **both** functions - you can only store it in a reference with *the type that it will be needed in* later: ``` Function<Integer, Boolean> f = x -> true; Predicate<Integer> p = x -> true; useF(f); useP(p); ``` --- For your particular case, the [answer by Konstantin Yovkov](https://stackoverflow.com/a/36333323/3182664) already showed the solution: You have to store it as a `Comparator<Integer>` (ignoring the fact that you wouldn't have needed a lambda here in the first place...)
Spring Boot Configuration skip registration on multiple @Profile I've a Spring Boot application with different Profile setup : `dev`, `prod`, `qc`, `console` etc. The two configuration classes are setup as follows. `MyConfigurationA` should be registered for all profiles except `console`. `MyConfigurationB` should be registered except for `console` and `dev`. When I run the application with profile `console`, the `MyConfigurationA` doesn't get registered - which is fine. But `MyConfigurationB` gets registered - which I do not want. I've setup the `@Profile` annotation as follows to not register the `MyConfigurationB` for profile `console` and `dev` . But the `MyConfigurationB` is getting registered when I run the application with profile `console`. ``` @Profile({ "!" + Constants.PROFILE_CONSOLE , "!" + Constants.PROFILE_DEVELOPMENT }) ``` --- The documentation ( <http://docs.spring.io/spring/docs/current/javadoc-api/org/springframework/context/annotation/Profile.html>) has an example of including one profile and excluding the other. In my example I'm excluding both as `@Profile({"!p1", "!p2"}):` > > @Profile({"p1", "!p2"}), registration will occur if > profile 'p1' is active **OR** if profile 'p2' is not active. > > > **My question is :** How can we skip registration of the configurations of both profiles? `@Profile({"!p1", "!p2"})` is doing OR operation. We need AND operation here. --- The code : ``` @Configuration @Profile({ "!" + Constants.PROFILE_CONSOLE }) public class MyConfigurationA { static{ System.out.println("MyConfigurationA registering..."); } } @Configuration @Profile({ "!" + Constants.PROFILE_CONSOLE , "!" + Constants.PROFILE_DEVELOPMENT }) // doesn't exclude both, its OR condition public class MyConfigurationB { static{ System.out.println("MyConfigurationB registering..."); } } public final class Constants { public static final String PROFILE_DEVELOPMENT = "dev"; public static final String PROFILE_CONSOLE = "console"; ... } ```
`@Profile({"!console", "!dev"})` means (NOT console) **OR** (NOT dev) which is true if you run your app with the profile 'console'. To solve this you can create a custom [Condition](https://docs.spring.io/spring/docs/current/javadoc-api/org/springframework/context/annotation/Condition.html): ``` public class NotConsoleAndDevCondition implements Condition { @Override public boolean matches(ConditionContext context, AnnotatedTypeMetadata metadata) { Environment environment = context.getEnvironment(); return !environment.acceptsProfiles("console", "dev"); } } ``` And apply the condition via the [@Conditional](https://docs.spring.io/spring/docs/current/javadoc-api/org/springframework/context/annotation/Conditional.html) annotation to the Configuration: ``` @Conditional(NotConsoleAndDevCondition.class) public class MyConfigurationB { ```
AsyncTask v/s ThreadPoolExecutor for network request

I am working on a project where I need to hit a web service and download data, which is JSON and will be represented in a list. All the list items have their thumbnail URLs, which would be downloaded and displayed in the list items.

I have done the entire calling part with both a ThreadPoolExecutor and an AsyncTask. But from a design perspective, which is the better option out of:

1. ThreadPoolExecutor
2. AsyncTask

A few advantages of ThreadPoolExecutor:

1. The user can define the number of concurrent threads that can be executed.
2. Cancelling all the requests at once.

A few advantages of AsyncTask:

1. Internally it uses a ThreadPoolExecutor; however, we can't define the number of threads that run simultaneously.
2. Cancelling a single request is easy.
3. Ability to attach and detach a task.
4. Updating the UI from doInBackground is simple.

I know more advantages of AsyncTask; however, for a simple application like fetching data from a web service and later on fetching images, which would be more appropriate, an AsyncTask or a ThreadPoolExecutor? If you can provide a few reasons for your choice it would be helpful. I have read a few articles here on SO but none that compares the two. If there are any that I missed, sorry for the trouble; could you please post the links? Thanks in advance.
I consider AsyncTask useful if you want to load the thumbnails with a series of "cascading" calls: in onPostExecute, you can start the next AsyncTask to download the next thumbnail. But if you want to improve efficiency, I suggest using ThreadPoolExecutor.

This is a sentence from [developer.android.com](http://developer.android.com/reference/java/util/concurrent/ThreadPoolExecutor.html):

> Thread pools address two different problems: they **usually provide
> improved performance** when executing large numbers of asynchronous
> tasks, **due to reduced per-task invocation overhead**, and they provide a
> means of bounding and managing the resources, including threads,
> consumed when executing a collection of tasks. Each ThreadPoolExecutor
> also maintains some basic statistics, such as the number of completed
> tasks.

In conclusion, ThreadPoolExecutor was probably designed for cases such as yours; for this reason, I suggest using this class.
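For illustration, a minimal, plain-Java sketch of the pool-based approach for the thumbnail downloads (the class and method names are made up for this example, and the actual HTTP fetch plus the hand-off back to the UI thread, e.g. via a Handler, are left as stubs since they depend on your app):

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ThumbnailLoader {
    // A fixed pool: you decide how many downloads run concurrently,
    // which is the main advantage over AsyncTask's default executor.
    private final ExecutorService pool = Executors.newFixedThreadPool(4);

    public void loadAll(List<String> thumbnailUrls) {
        for (final String url : thumbnailUrls) {
            pool.submit(new Runnable() {
                @Override
                public void run() {
                    byte[] imageBytes = download(url);
                    // Post imageBytes back to the main thread here
                    // (for example via a Handler) before touching any views.
                }
            });
        }
    }

    public void cancelAll() {
        // Cancels queued tasks and interrupts the running ones in one call.
        pool.shutdownNow();
    }

    private byte[] download(String url) {
        // Stub: the real HTTP fetch is omitted here.
        return new byte[0];
    }
}
```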