Activity indicator not showing up I have two issues with an activity indicator: 1. Activity indicator not showing up on a UIViewController. The activity indicator is added in the .xib file. On a button click it should start animating, and when the response from the server is received it should stop animating before going to the next page. I am doing it as follows: ``` activityIndicator.hidden = NO; [activityIndicator performSelector:@selector(startAnimating) withObject:nil afterDelay:0.1]; [self.view bringSubviewToFront:activityIndicator]; ....rest of code here.... activityIndicator.hidden = YES; [activityIndicator stopAnimating]; ``` 2. Activity indicator not showing up on a UITableView. For the table view I am doing it the same way, but in `didSelectRowAtIndexPath:`. I also tried adding an activity view as the cell accessory, but it still doesn't show up. In both cases the activity indicator is not showing up. Please help. Thanks
If all this code is in one method or in response to one event, then none of the changes to the views are going to be visible until you return to the event loop. You set `activityIndicator.hidden` to NO and then set it back to YES before the UI has an opportunity to even refresh. You also apparently stop the animation before you start it. What you need to do is make the activity indicator visible here and start its animation. Then schedule the work to be done (start an asynchronous network connection, or put some work into a queue, or whatever it is you need to get done) and return from this method so that the UI can refresh, the indicator can be drawn, and the animation can actually start. Then later, at some point **after** the work is complete, you can hide the indicator and stop the animation. But you can't do all of that on the main thread within one single turn of the event loop. None of your changes will be visible, because no drawing at all will happen while this method is executing (assuming this is on the main thread). I hope that makes sense.
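A minimal sketch of that flow using GCD (not from the original answer; `doExpensiveWork` is a hypothetical stand-in for your server call):

```objc
- (IBAction)buttonTapped:(id)sender {
    // Show the spinner now; it will actually be drawn once this method returns.
    self.activityIndicator.hidden = NO;
    [self.activityIndicator startAnimating];

    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        [self doExpensiveWork]; // hypothetical: the server request, run off the main thread

        // UI updates must happen back on the main thread.
        dispatch_async(dispatch_get_main_queue(), ^{
            [self.activityIndicator stopAnimating];
            self.activityIndicator.hidden = YES;
            // ...push the next view controller here...
        });
    });
}
```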
Select top X results per group I have a bunch of RDF Data Cube observations that have an attached attribute, in my case the date on which that value was recorded. The pattern is simple, for example (leaving out other dimensions/measures/attributes): ``` <obs1> a qb:Observation ; my:lastupdate '2017-12-31'^^xsd:date ; qb:dataSet <dataSet1> . <obs2> a qb:Observation ; my:lastupdate '2016-12-31'^^xsd:date ; qb:dataSet <dataSet1> . <obs2_1> a qb:Observation ; my:lastupdate '2017-12-31'^^xsd:date ; qb:dataSet <dataSet2> . <obs2_2> a qb:Observation ; my:lastupdate '2015-12-31'^^xsd:date ; qb:dataSet <dataSet2> . ``` So I have multiple `qb:DataSet`s in my store. Now I would like to figure out the last X `my:lastupdate` values per dataset. Let's say I want the last 5 values for each particular dataset. I can do that very easily for one particular dataset: ``` SELECT * WHERE { ?observation my:lastupdate ?datenstand ; qb:dataSet <dataSet1> } ORDER BY DESC(?datenstand) LIMIT 5 ``` But I'm a bit lost as to whether this is at all possible within a single SPARQL query, per dataset. I tried various combinations of sub-selects, LIMIT and GROUP BY, but nothing led to the result I am looking for.
This query pattern was discussed at length on the now-defunct SemanticOverflow Q+A site as *'get the 3 largest cities for each country'*, and the general consensus was that queries of the form *'get the top n related items for each master item'* are not manageable with a single SPARQL query in an efficient way. The core issue is that nested queries are evaluated bottom-up, and GROUP/LIMIT clauses apply to the whole result set rather than to each group. The only useful exceptions to the bottom-up rule are `(not) exists` filters, which have visibility into the current bindings. You can take advantage of this fact to write queries like: ``` select ?country ?city ?population where { ?country a :Country; :city ?city. ?city :population ?population. filter not exists { select * where { ?country :city ?_city. ?_city :population ?_population. filter ( ?_population > ?population ) } offset 3 } } order by ?country desc(?population) ``` Unfortunately this approach is not usually viable on large real-world datasets, as it involves scanning and filtering the cartesian product of each country/city group.
JavaPoet superclass generic Does anyone know how I can do the following using JavaPoet? ``` public class MyClassGenerated extends MyMapper<OtherClass> { } ``` My generation code: ``` TypeSpec generateClass() { return classBuilder("MyClassGenerated") .addModifiers(PUBLIC) .superclass(???????????????) .build(); } ```
The `ParameterizedTypeName` class allows you to specify generic type arguments when declaring the super class. For instance, if your `MyClassGenerated` class is a subclass of the `MyMapper` class, you can set a generic type parameter of `MyMapper` like so: ``` TypeSpec classSpec = classBuilder("MyClassGenerated") .addModifiers(PUBLIC) .superclass(ParameterizedTypeName.get(ClassName.get(MyMapper.class), ClassName.get(OtherClass.class))) .build(); ``` This will generate a `TypeSpec` object that is equivalent to the following class: ``` public class MyClassGenerated extends MyMapper<OtherClass> { } ``` While not specified in the question, note that you can set any number of generic type arguments by simply adding them in the **correct order** to the `ParameterizedTypeName.get` call: ``` ParameterizedTypeName.get( ClassName.get(SuperClass.class), ClassName.get(TypeArgumentA.class), ClassName.get(TypeArgumentB.class), ClassName.get(TypeArgumentC.class) ); // equivalent to SuperClass<TypeArgumentA, TypeArgumentB, TypeArgumentC> ``` For more information about the `ParameterizedTypeName.get()` method, see the documentation [here](https://square.github.io/javapoet/1.x/javapoet/com/squareup/javapoet/ParameterizedTypeName.html#get-com.squareup.javapoet.ClassName-com.squareup.javapoet.TypeName...-) or the **"$T for Types"** section of the JavaPoet [GitHub page](https://github.com/square/javapoet).
Guava string sorting I've got the following collection of values: ["LA", "INTL", "ca", null]. I need to sort them in alphabetical, case-insensitive order without throwing any possible NPE. How can I do this properly using the Guava library? Thanks in advance.
You can use the [`Ordering#nullsFirst()`](http://docs.guava-libraries.googlecode.com/git-history/release/javadoc/com/google/common/collect/Ordering.html#nullsFirst()) (or [`Ordering#nullsLast()`](http://docs.guava-libraries.googlecode.com/git-history/release/javadoc/com/google/common/collect/Ordering.html#nullsLast())) to take care of the `null` entries in your list. [`Ordering`](http://docs.guava-libraries.googlecode.com/git-history/release/javadoc/com/google/common/collect/Ordering.html) implements the [`java.util.Comparator`](http://docs.oracle.com/javase/7/docs/api/java/util/Comparator.html?is-external=true) interface, so you can use it like: ``` List<String> myList = Arrays.asList("LA", "INTL", "ca", null); Ordering<String> ordering = Ordering.from(String.CASE_INSENSITIVE_ORDER).nullsFirst(); Collections.sort(myList, ordering); ```
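As an aside (the question asks for Guava specifically, so this is just supplementary): since Java 8 the JDK itself offers the same null-safe, case-insensitive ordering, which may be worth knowing if Guava is not otherwise a requirement:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

public class NullSafeSort {
    public static void main(String[] args) {
        List<String> myList = new ArrayList<>(Arrays.asList("LA", "INTL", "ca", null));
        // Comparator.nullsFirst wraps a comparator so nulls sort before non-nulls.
        myList.sort(Comparator.nullsFirst(String.CASE_INSENSITIVE_ORDER));
        System.out.println(myList); // [null, ca, INTL, LA]
    }
}
```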
Auto complete text view in compose I want to create an auto complete text view in compose, and I created a composable that contains a `TextField` and a `DropDown` menu. The issue I'm seeing with this solution is that when the drop down menu is expanded the text field is no longer actionable, I can't type any text in it. Any suggestions on how to address this? The code is below ``` @Composable fun AutoCompleteText( value: String, onValueChange: (String) -> Unit, onOptionSelected: (String) -> Unit, modifier: Modifier = Modifier, label: @Composable (() -> Unit)? = null, suggestions: List<String> = emptyList() ) { Column(modifier = modifier) { OutlinedTextField( value = value, onValueChange = { text -> if (text !== value) onValueChange(text) }, modifier = Modifier.fillMaxWidth(), label = label, ) DropdownMenu( expanded = suggestions.isNotEmpty(), onDismissRequest = { }, modifier = Modifier.fillMaxWidth() ) { suggestions.forEach { label -> DropdownMenuItem(onClick = { onOptionSelected(label) }) { Text(text = label) } } } } } ```
`DropdownMenu` takes a [`PopupProperties`](https://developer.android.com/reference/kotlin/androidx/compose/ui/window/PopupProperties) parameter that you can use to disable focusability. This should allow you to continue typing into the `OutlinedTextField`: ``` OutlinedTextField( value = value, onValueChange = { text -> if (text !== value) onValueChange(text) }, modifier = Modifier.fillMaxWidth(), label = label, ) DropdownMenu( expanded = suggestions.isNotEmpty(), onDismissRequest = { }, modifier = Modifier.fillMaxWidth(), // This line here will accomplish what you want properties = PopupProperties(focusable = false) ) { suggestions.forEach { label -> DropdownMenuItem(onClick = { onOptionSelected(label) }) { Text(text = label) } } } ```
JNDI injection of app name doesn't work, lookup does This doesn't seem to work (=null): ``` @Resource(name = "java:app/AppName") private String appName; ``` But a lookup of the same name does: ``` appName = (String) new javax.naming.InitialContext().lookup("java:app/AppName"); ``` I found many examples having a *lookup* property in `@Resource` instead of *name*. However, I can't find anything about that, I'm quite confident it should be *name*, the *former* isn't part of the spec. FYI, I'm using Glassfish 3.1 and I'm accessing appName from a `@PostConstruct` method in a singleton bean.
The use of "`lookup`" instead of "`name`" in this case is correct, as the entry is already defined in the `java:app` namespace and is merely being looked up, rather than defined. This is part of the [Java EE 6 specification](http://jcp.org/en/jsr/detail?id=316) (which [Glassfish 3.1](http://glassfish.java.net/downloads/3.1-final.html) implements): [`@Resource#lookup()`](http://docs.oracle.com/javaee/6/api/javax/annotation/Resource.html#lookup()). Alternative to looking up the app name using `InitialContext`, you can also use the `@Resource` annotation using the `lookup` attribute: ``` @Resource(lookup = "java:app/AppName") private String appName; ```
injecting FacesContext into spring bean I have a bean that I recently converted over from being a managed bean to being a Spring bean. Everything was OK until at some point the following method is called: ``` Exception e = (Exception) FacesContext.getCurrentInstance().getExternalContext().getSessionMap().get( AbstractProcessingFilter.SPRING_SECURITY_LAST_EXCEPTION_KEY); ``` At this point things blow up because `FacesContext.getCurrentInstance()` returns null. Is it possible to inject the FacesContext into my bean?
> > is it possible to inject the faces context into my bean? > > > Not sure, but in this particular case it's not needed. The [`ExternalContext#getSessionMap()`](http://download.oracle.com/javaee/6/api/javax/faces/context/ExternalContext.html#getSessionMap%28%29) is basically a facade to the attributes of [`HttpSession`](http://download.oracle.com/javaee/6/api/javax/servlet/http/HttpSession.html). To the point, you just need to grab the [`HttpServletRequest`](http://download.oracle.com/javaee/6/api/javax/servlet/http/HttpServletRequest.html) in your Spring bean somehow and then get the `HttpSession` from it by [`HttpServletRequest#getSession()`](http://download.oracle.com/javaee/6/api/javax/servlet/http/HttpServletRequest.html#getSession%28%29). Then you can access the session attributes by [`HttpSession#getAttribute()`](http://download.oracle.com/javaee/6/api/javax/servlet/http/HttpSession.html#getAttribute%28java.lang.String%29). I don't do Spring, but [Google](http://www.google.com/search?q=get+httpservletrequest+in+spring+bean) tells me that you could obtain it as follows: ``` HttpServletRequest request = ((ServletRequestAttributes) RequestContextHolder.getRequestAttributes()).getRequest(); ``` Once that's done, you can just do: ``` Exception e = (Exception) request.getSession().getAttribute(AbstractProcessingFilter.SPRING_SECURITY_LAST_EXCEPTION_KEY); ```
How to make the links in a text block clickable in WP7 I have a text block containing links; the contents of the text are generated at run time. My problem is that the links inside the text are not clickable. How can I make all links inside the text block clickable, so that when I tap a link it opens the web browser? In Android we can set this using autolink. Is such an option available in WP7 or in WP7.1 (Mango)?
Use a [HyperLink](http://msdn.microsoft.com/en-us/library/system.windows.documents.hyperlink%28v=vs.95%29.aspx). ``` <TextBlock> <Run>Pure Text</Run> <Hyperlink Command="{Binding HyperLinkTapped}">http://google.com</Hyperlink> <Run>Pure Text Again</Run> </TextBlock> ``` This is supported from Windows Phone 7.1 (Mango). You can create your own FlowDocument from your data at runtime, if necessary. An example of how to generate a FlowDocument from a string: ``` private void OnMessageReceived(string message) { var textBlock = new RichTextBox() { TextWrapping = TextWrapping.Wrap, IsReadOnly = true, }; var paragraph = new Paragraph(); var runs = new List<Inline>(); foreach (var word in message.Split(' ')) { Uri uri; if (Uri.TryCreate(word, UriKind.Absolute, out uri) || (word.StartsWith("www.") && Uri.TryCreate("http://" + word, UriKind.Absolute, out uri))) { var link = new Hyperlink(); link.Inlines.Add(new Run() { Text = word }); link.Click += (sender, e) => { var hyperLink = (sender as Hyperlink); new WebBrowserTask() { Uri = uri }.Show(); }; runs.Add(link); } else { runs.Add(new Run() { Text = word }); } runs.Add(new Run() { Text = " "}); } foreach (var run in runs) paragraph.Inlines.Add(run); textBlock.Blocks.Add(paragraph); MessagesListBox.Children.Add(textBlock); MessagesListBox.UpdateLayout(); } ```
Using a checkbox to change table row styling I have a dynamically generated table of n rows with a checkbox in each row. How can I make that table row have a different background-color when its checkbox is checked? I have no problem using jQuery if need be.
If you're using jQuery, dead simple: ``` $("tr :checkbox").live("click", function() { $(this).closest("tr").css("background-color", this.checked ? "#eee" : ""); }); ``` [Live example](http://jsbin.com/ubowa6) Basically what that does is identify checkboxes that are contained in a row, watch for clicks on them, and then use their state within the event handler to set the `background-color` CSS style on the row containing the checkbox. Things to consider augmenting/modifying: - I used [`live`](http://api.jquery.com/live/) in case the table is dynamic on the client (e.g., if you add or remove rows), although you may prefer to use [`delegate`](http://api.jquery.com/delegate/) rooted in the table instead. If the table will be completely static, you might just use [`click`](http://api.jquery.com/click/). - The above hooks *all* checkboxes inside rows, which is probably more than you want. jQuery supports nearly all of [CSS3's selectors](http://www.w3.org/TR/css3-selectors/) and [a fair bit more](http://api.jquery.com/category/selectors/), so you can craft the selector for the checkboxes to make it more narrow. A basic change would be to filter by class (`$("tr :checkbox.foo")` where "foo" is the class) or checkbox name (`$("tr :checkbox[name=foo]")`). - As [Morten suggests below](https://stackoverflow.com/questions/5516488/using-a-checkbox-to-change-table-row-styling/5516545#5516545), you might consider adding/removing a class rather than giving the CSS values directly in the code, as that helps decouple your script from your styling.
I can't enable MFA for an Oracle Identity Cloud Service user I just signed up for an account on [Oracle Cloud](https://cloud.oracle.com/). After I logged in, it seems the system automatically created a tenancy for me and added me to an identity provider named `oracleidentitycloudservice`. It also created one more user starting with `oracleidentitycloudservice/username`. This is the identity user page; both of them are me. One of them is federated with `oracleidentitycloudservice`, which was created automatically. [![enter image description here](https://i.stack.imgur.com/fPYYc.png)](https://i.stack.imgur.com/fPYYc.png) I can enable MFA for the second account, but I cannot enable MFA for `oracleidentitycloudservice/username`: [![enter image description here](https://i.stack.imgur.com/FXNEq.png)](https://i.stack.imgur.com/FXNEq.png) When I want to log in to the Identity Console page, I need to use this SSO method: [![enter image description here](https://i.stack.imgur.com/Fys0l.png)](https://i.stack.imgur.com/Fys0l.png) It seems risky if the Identity Console page doesn't provide an MFA feature. That's what I worry about. My questions are: 1. Is it safe to delete `oracleidentitycloudservice/username`? 2. How can I enable MFA for `oracleidentitycloudservice/username` from the Oracle Infrastructure page? 3. If Oracle is providing a complicated way to enable MFA for `oracleidentitycloudservice/username`, could it be a security issue?
For those who are in the same situation, here are the steps to enable MFA for an Oracle Identity Cloud Service user: 1. Sign in via the SSO method at this screen: [![enter image description here](https://i.stack.imgur.com/DWpbw.png)](https://i.stack.imgur.com/DWpbw.png) 2. Go to `Service User Console` at the top right of the screen [![List item](https://i.stack.imgur.com/7G7il.png)](https://i.stack.imgur.com/7G7il.png) 3. Go to the Admin Console of Identity [![enter image description here](https://i.stack.imgur.com/pimtj.png)](https://i.stack.imgur.com/pimtj.png) 4. Go to My Profile at the top right of the screen [![enter image description here](https://i.stack.imgur.com/olT44.png)](https://i.stack.imgur.com/olT44.png) 5. Go to the Security tab; our goal is to give the account permissions so that the `MFA` item shows on this screen. At this moment MFA is not yet enabled; move to the next step [![enter image description here](https://i.stack.imgur.com/se0vV.png)](https://i.stack.imgur.com/se0vV.png) 6. Go to the Admin Console at the top right of the screen [![enter image description here](https://i.stack.imgur.com/mmUhD.png)](https://i.stack.imgur.com/mmUhD.png) 7. Go to Security -> MFA in the left side panel, and check the box `Mobile App Passcode` [![enter image description here](https://i.stack.imgur.com/F3OHN.png)](https://i.stack.imgur.com/F3OHN.png) 8. Go to Security -> Sign-On Policies and edit the Default Policy [![enter image description here](https://i.stack.imgur.com/WcTo7.png)](https://i.stack.imgur.com/WcTo7.png) 9. Edit the Default Sign-On Rule [![enter image description here](https://i.stack.imgur.com/QDSyt.png)](https://i.stack.imgur.com/QDSyt.png) 10. Select the option that you prefer; it's up to you. [![enter image description here](https://i.stack.imgur.com/v5MSy.png)](https://i.stack.imgur.com/v5MSy.png) 11. Go to the Security tab, and here you can enable `MFA` for your IDCS account [![enter image description here](https://i.stack.imgur.com/se0vV.png)](https://i.stack.imgur.com/se0vV.png) 12. Sign out and sign in again.
Now you can use MFA to log in. [![enter image description here](https://i.stack.imgur.com/hY7kf.png)](https://i.stack.imgur.com/hY7kf.png)
Applying a function to a backreference within gsub in R I'm new to R and am stuck with backreferencing that doesn't seem to work. In: ``` gsub("\\((\\d+)\\)", f("\\1"), string) ``` It correctly grabs the number in between parentheses but doesn't apply the (correctly defined, working otherwise) function f to replace the number --> it's actually the string "\1" that passes through to f. Am I missing something or is it just that R does not handle this? If so, any idea how I could do something similar, i.e. applying a function "on the fly" to the (actually many) numbers that occur in between parentheses in the text I'm parsing? Thanks a lot for your help.
R does not have the option of applying a function directly to a match via `gsub`. You'll actually have to extract the match, transform the value, then replace the value. This is relatively easy with the `regmatches` function. For example ``` x<-"(990283)M (31)O (29)M (6360)M" f<-function(x) { v<-as.numeric(substr(x,2,nchar(x)-1)) paste0(v+5,".1") } m <- gregexpr("\\(\\d+\\)", x) regmatches(x, m) <- lapply(regmatches(x, m), f) x # [1] "990288.1M 36.1O 34.1M 6365.1M" ``` Of course you can make `f` do whatever you like; just make sure it's vector-friendly. And of course, you could wrap this in your own function ``` gsubf <- function(pattern, x, f) { m <- gregexpr(pattern, x) regmatches(x, m) <- lapply(regmatches(x, m), f) x } gsubf("\\(\\d+\\)", x, f) ``` Note that in these examples we're not using a capture group, we're just grabbing the entire match. There are ways to extract the capture groups but they are a bit messier. If you wanted to provide an example where such an extraction is required, I might be able to come up with something fancier.
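For contrast (and only if switching languages were ever an option): Python's `re.sub` accepts a function as the replacement, which is exactly the feature the question asks for in R. The same transformation as the R example above:

```python
import re

x = "(990283)M (31)O (29)M (6360)M"

# The replacement function receives each match object; group(1) is the
# number between the parentheses. Add 5 and append ".1", as in the R f().
result = re.sub(r"\((\d+)\)", lambda m: str(int(m.group(1)) + 5) + ".1", x)
print(result)  # 990288.1M 36.1O 34.1M 6365.1M
```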
how to confirm email source I send an email from my SMTP server, setting `From` to, for example, *[email protected]*. The recipient will think that the email comes from `anydomain`. How can the email's source be confirmed?
There are several approaches to dealing with email forgery: - Use [PGP](http://en.wikipedia.org/wiki/Pretty_Good_Privacy) or [SSL signed certificates](http://en.wikipedia.org/wiki/Smime) - Use [SPF](http://en.wikipedia.org/wiki/Sender_Policy_Framework) - check the `Received` headers (although this isn't reliable) - reply back to the sender and ask if they actually sent it. If you know the sender, maybe ask them in person or over the phone. The main thing to realise is that the From: address isn't any form of guarantee about the originator of a message. Edit: okay I now understand that you're just trying to tag the mail message somehow so that you can recognise which server generated it (in a non-secure way). Here's how using .NET's MailMessage: ``` System.Net.Mail.MailMessage msg = new System.Net.Mail.MailMessage(); msg.Headers.Add("X-Is-Development", "true"); ```
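The same tagging idea using Python's standard library, for comparison (the `X-Is-Development` header name is just the illustrative one from the .NET snippet; the addresses are placeholders):

```python
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "app@example.com"
msg["To"] = "ops@example.com"
msg["Subject"] = "Nightly report"
# Tag the message so you can tell which server/environment generated it.
# Note this is informational only -- headers are trivially forgeable.
msg["X-Is-Development"] = "true"

print(msg["X-Is-Development"])  # true
```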
How do I reliably find the full path of a program on the PATH? I need to find the path of a given program on the `PATH` using a shell script. The path must be the actual full path of the program, which can be passed later to one of the `exec*` functions that do not search the `PATH` themselves, e.g. `execv`. There are programs like `kill` which are available as an actual program and a shell built-in at the same time. In that case, I need the full path to the actual program. There are several utilities that can find a program on the `PATH` as specified in [Section 2.9.1.1, Command Search and Execution of the POSIX standard](http://pubs.opengroup.org/onlinepubs/9699919799/utilities/V3_chap02.html#tag_18_09_01_01). There is `which`, which is not part of any standard. It can be a regular program on some systems, whereas some shells provide it as a builtin. It seems to be available on most systems and shells, but shells with a builtin version just return the name of the built-in instead of the path to the executable. It is also not standardized in any way and may return any output and take different options. ``` bash# which kill /usr/bin/kill dash# which kill /usr/bin/kill fish# which kill /usr/bin/kill mksh# which kill /usr/bin/kill tcsh# which kill kill: shell built-in command. zsh# which kill kill: shell built-in command ``` There is `whence`, which is a built-in of a few shells but not available in many others. It too returns the name of the built-in instead of the path to the program. A `-p` may be passed to `whence` to change this behavior. ``` bash# whence kill bash: whence: command not found dash# whence kill dash: 1: whence: not found fish# whence kill fish: Unknown command 'whence' mksh# whence kill kill mksh# whence -p kill /usr/bin/kill tcsh# whence kill whence: Command not found. 
zsh# whence kill kill zsh# whence -p kill /usr/bin/kill ``` There is the [`command` builtin specified by POSIX:2008](http://pubs.opengroup.org/onlinepubs/9699919799/utilities/command.html). Unfortunately it searches built-ins as well as regular commands, and will return the name of the built-in instead of the path to the program shadowed by a built-in of the same name. Some old shells haven't implemented it yet. ``` bash# command -v kill kill dash# command -v kill kill fish# command -v kill /usr/bin/kill mksh# command -v kill kill tcsh# command -v kill command: Command not found. zsh# command -v kill kill ```
Just search for it yourself. ``` export IFS=":" [ -z "${1}" ] && exit 1 for dir in $PATH do if [ -x "${dir}/${1}" ] then echo "${dir}/${1}" exit 0 fi done echo "${1} not found" exit 1 ``` Tested in `bash`, `dash`, `ksh`, `mksh`, `zsh` # Update The above is fine for a standalone script; however, if you're planning on embedding this into a larger script, you may want something more like the following. (Note: the `function` keyword is not POSIX; the plain `name()` form below works in all of the shells listed above.) ``` find_path() { IFS_SAVE="${IFS}" export IFS=":" [ -z "${1}" ] && return 1 for dir in $PATH do if [ -x "${dir}/${1}" ] then echo "${dir}/${1}" export IFS="${IFS_SAVE}" return 0 fi done export IFS="${IFS_SAVE}" echo "${1} not found" return 1 } ``` This restores `IFS` after finding the match, and swaps the `exit`s for `return`s.
"Connect failed: Access denied for user 'root'@'localhost' (using password: YES)" from php function I wrote some function used by a php webpage, in order to interact with a mysql database. When I test them on my server I get this error: ``` "Connect failed: Access denied for user 'root'@'localhost' (using password: YES)" ``` I am able to use them on my pc (using XAMPP) and I can navigate through the tables of the database using the command line in the server. However, the webpage fails to connect. I've checked the password but with no results. It's correct (otherwise I could not log in to mysql from the command line). The call of the function is the following: ``` $conn = new mysqli("localhost", "root", "password", "shop"); ``` Do I have to set something in my server? Thanks Edit: PHP version 5.3.3-7+squeeze1 mysql version: 5.1.49-3 both on debian
I solved it this way: I logged in with the root username ``` mysql -u root -p -h localhost ``` I created a new user with ``` CREATE USER 'francesco'@'localhost' IDENTIFIED BY 'some_pass'; ``` then I created the database ``` CREATE DATABASE shop; ``` I granted privileges on this database to the new user ``` GRANT ALL PRIVILEGES ON shop.* TO 'francesco'@'localhost'; ``` Then I logged out as root and logged in as the new user ``` quit; mysql -u francesco -p -h localhost ``` I rebuilt my database using a script ``` source shop.sql; ``` And that's it. Now from PHP it works without problems with the call ``` $conn = new mysqli("localhost", "francesco", "some_pass", "shop"); ``` Thanks to all for your time :)
How to cache QGIS Server WMS? It seems like raster tiles have started to go out of fashion, but still, I need a solution to cache my QGIS Server's WMS somehow. Up to this point I have tried TileCache, but I couldn't make it work with OL3, and it also seems a little dated. So what would be my best bet if I later want to use the cached layer in my OL3 application: TileStache, MapProxy, or MapCache? I have my QGIS Server running under CentOS 7.
QGIS Server works well with [MapProxy](https://mapproxy.org/docs/nightly/index.html). With QGIS Server + MapProxy you get the best of QGIS styling plus the speed of a tile cache. MapProxy is written in Python, and you probably already have Python installed on the server. You can (and should) run MapProxy in a virtual environment. The MapProxy instructions are quite clear, and it is really a question of minutes to have it up and running, fetching data from QGIS Server. 1. It is much lighter than GeoWebCache 2. It caches and serves tiles (just use `tiled: true` in your WMS request) 3. It works pretty well with OpenLayers. As soon as you install it, you get a demo page with OpenLayers examples. 4. You can make GetFeatureInfo requests against the cached source 5. You can make GetLegendGraphic requests against the cached source 6. It can handle custom-defined grids (as long as you use the same in OpenLayers) 7. You can ask for more than one tile in parallel and take advantage of QGIS Server's parallel render support (if enabled). 8. Since QGIS Server can store projects in PostGIS, you can easily update the project without any uploads. MapProxy will use the updated styles from QGIS Server. ### Example There are very nice small examples in the MapProxy documentation. This one is among the more complicated, because it uses a custom grid and a CRS other than EPSG:3857. If you use the usual `GLOBAL_MERCATOR` grid, it is much simpler (on both the MapProxy side and the OpenLayers side). This is a small example of a `mapproxy.yaml` configuration file, with a custom grid. The source is QGIS Server. I've added a `GetFeatureInfo` request on mouse click to show how these requests can be forwarded to QGIS Server. I've also added the layer's legend (using `service=WMS&REQUEST=GetLegendGraphic&VERSION=1.3.0`). 
``` layers: - name: caop title: CAOP by QGIS Server sources: [caop_cache_continente] caches: caop_cache_continente: meta_size: [4, 4] meta_buffer: 20 # 20+4x256+20 # width=1064&height=1064 use_direct_from_level: 14 concurrent_tile_creators: 2 link_single_color_images: true grids: [continente] sources: [continente_wms] sources: continente_wms: type: wms wms_opts: featureinfo: true legendgraphic: true req: url: http://continente.qgis.demo/cgi-bin/qgis_mapserv.fcgi layers: freguesia transparent: true grids: continente: srs: 'EPSG:3763' bbox_srs: 'EPSG:3763' bbox: [-127104, -301712, 173088, 278544] origin: nw res: [ 1172.625, 586.3125, 293.15625, 146.578125, 73.2890625, 36.64453125, 18.322265625, 9.1611328125, 4.58056640625, 2.290283203125, 1.1451416015625, 0.57257080078125, 0.286285400390625, 0.1431427001953125, 0.07157135009765625 ] ``` The following OpenLayers file is able to fetch the tiles from MapProxy. ``` <!DOCTYPE html> <html> <head> <meta content="text/html;charset=utf-8" http-equiv="Content-Type"> <link rel="stylesheet" href="https://cdn.rawgit.com/openlayers/openlayers.github.io/master/en/v5.3.0/css/ol.css" type="text/css"> <style> .map { height: 600px; width: 100%; } </style> <script src="https://cdn.rawgit.com/openlayers/openlayers.github.io/master/en/v5.3.0/build/ol.js"></script> <script src="resources/js/proj4js/proj4.js"></script> <title>OpenLayers example using QGIS Server and MapProxy</title> </head> <body> <div id="map" class="map"></div> <p><img src="http://mapproxy.qgis.demo/mapproxy/service?service=WMS&REQUEST=GetLegendGraphic&VERSION=1.3.0&style=default&FORMAT=image/png&LAYER=caop&transparent=true"></p> <div id="nodelist"><em>Click on the map to get feature info</em></div> <script> proj4.defs("EPSG:3763", "+proj=tmerc +lat_0=39.66825833333333 +lon_0=-8.133108333333334 +k=1 +x_0=0 +y_0=0 +ellps=GRS80 +towgs84=0,0,0,0,0,0,0 +units=m +no_defs"); ol.proj.proj4.register(proj4); var projection = new ol.proj.Projection({ code: 
'EPSG:3763', extent: [-127104, -301712, 173088, 278544] }); var projectionExtent = projection.getExtent(); var size = ol.extent.getWidth(projectionExtent) / 256; var newresolutions = new Array(15); var newmatrixIds = new Array(15); for (var z = 0; z < 15; ++z) { newresolutions[z] = size / Math.pow(2, z); newmatrixIds[z] = z; } var tileGrid = new ol.tilegrid.WMTS({ origin: ol.extent.getTopLeft(projectionExtent), // [ 270000, 3650000 ] resolutions: newresolutions, matrixIds: newmatrixIds, tileSize: [256, 256] }); var caop = new ol.layer.Tile({ source: new ol.source.TileWMS({ url: 'http://mapproxy.qgis.demo/mapproxy/service?', params: { layers: 'caop', tiled: true, srs: "EPSG:3763" }, format: 'image/png', projection: projection, tileGrid: tileGrid }) }); var map = new ol.Map({ layers: [caop], target: 'map', view: new ol.View({ projection: projection, center: [0, 0], zoom: 1 }) }); map.on('singleclick', function (evt) { document.getElementById('nodelist').innerHTML = "Loading... please wait..."; var view = map.getView(); var viewResolution = view.getResolution(); var url = caop.getSource().getGetFeatureInfoUrl( evt.coordinate, viewResolution, view.getProjection(), { 'INFO_FORMAT': 'text/html', 'FEATURE_COUNT': 50 }); if (url) { document.getElementById('nodelist').innerHTML = '<iframe seamless src="' + url + '" style="width:100%"></iframe>'; } }); </script> </body> </html> ```
How to save Qimage with 'jpeg' format in PyQt? My platform is `Windows7`, `Python3.4`, `PyQt5`. I found that I cannot save a `QImage` object in `jpeg` format.

```
>>> from PyQt5 import QtCore, QtGui
>>> i = QtGui.QImage()
>>> i.load(r"C:\Users\paleneutron\Pictures\Capture4.PNG")
True
>>> i.save(r"C:\Users\paleneutron\Pictures\hehe.jpg")
False
>>> i.save(r"C:\Users\paleneutron\Pictures\hehe.jpg",format = 'jpeg')
False
>>> i.save('hehe.png')
True
>>> i.save('hehe.bmp')
True
>>> i.save('hehe.jpg')
False
>>> i.save('hehe.jpeg')
False
```

In [this page](http://doc.qt.io/qt-5/qimage.html#save), `jpeg` is supported for both reading and writing. Why do I get `False` when doing that?

## EDIT

I checked the supported formats as suggested in the comments:

```
>>> QtGui.QImageWriter.supportedImageFormats()
[PyQt5.QtCore.QByteArray(b'bmp'), PyQt5.QtCore.QByteArray(b'pbm'), PyQt5.QtCore.QByteArray(b'pgm'), PyQt5.QtCore.QByteArray(b'png'), PyQt5.QtCore.QByteArray(b'ppm'), PyQt5.QtCore.QByteArray(b'xbm'), PyQt5.QtCore.QByteArray(b'xpm')]
```

The problem is here: `jpeg` is missing! But I have `qjpeg.dll` in `C:\Python34\Lib\site-packages\PyQt5\plugins\imageformats`. What should I do to enable `jpeg` in my program?
Thanks Chernobyl! I got the solution by accident: I tried the test code in my full program and it worked. The reason is that many `PyQt` features require a `QGuiApplication` (or `QApplication`) to be constructed before they can be used, and the image format plugins are among them.

```
from PyQt5 import QtCore, QtGui, QtWidgets
import sys

app = QtWidgets.QApplication(sys.argv)
print(QtGui.QImageWriter.supportedImageFormats())
```

Now we get the full list of supported formats:

```
[PyQt5.QtCore.QByteArray(b'bmp'), PyQt5.QtCore.QByteArray(b'ico'), PyQt5.QtCore.QByteArray(b'jpeg'), PyQt5.QtCore.QByteArray(b'jpg'), PyQt5.QtCore.QByteArray(b'pbm'), PyQt5.QtCore.QByteArray(b'pgm'), PyQt5.QtCore.QByteArray(b'png'), PyQt5.QtCore.QByteArray(b'ppm'), PyQt5.QtCore.QByteArray(b'tif'), PyQt5.QtCore.QByteArray(b'tiff'), PyQt5.QtCore.QByteArray(b'wbmp'), PyQt5.QtCore.QByteArray(b'xbm'), PyQt5.QtCore.QByteArray(b'xpm')]
```
Is it possible to find the distance between ticks in D3.js? Is there a way to find out the distance between the tick marks on the x axis? I'm using the ordinal scale with rangeRoundBands, which tells me it doesn't have a tick function.

```
var x = d3.scale.ordinal().rangePoints([_margin.left, cWidth]);
x.domain(['Dec','Jan']);
var testTicks = x.ticks(2);
```

It generates the axis fine (can't post an image) but I can't figure out how to get the distance (edit: added x.domain)
```
var data = [45, 31, 23], // whatever your data is
    graphHeight = 400,
    // however many ticks you want to set
    numberTicksY = 4,
    // set y scale
    // (hardcoded domain in this example to min and max of data vals > you should use d3.max real life)
    y = d3.scale.linear().range([graphHeight, 0]).domain([23, 45]),
    yAxis = d3.svg.axis().scale(y).orient("left").ticks(numberTicksY),
    // eg returns -> [20, 30, 40, 50]
    tickArr = y.ticks(numberTicksY),
    // use last 2 ticks (cld have used first 2 if wanted) with y scale fn to determine positions
    tickDistance = y(tickArr[tickArr.length - 1]) - y(tickArr[tickArr.length - 2]);
```
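For a plain `d3.scale.linear()`, the pixel arithmetic behind that snippet can be checked without D3 at all. This is only a sketch with made-up tick values (a real scale's values come from `y.ticks(...)`), but it shows why adjacent ticks of a linear scale are always a constant number of pixels apart:

```javascript
// Minimal linear scale: maps domain [d0, d1] onto range [r0, r1]
function linearScale(d0, d1, r0, r1) {
  return v => r0 + ((v - d0) / (d1 - d0)) * (r1 - r0);
}

const graphHeight = 400;
const y = linearScale(23, 45, graphHeight, 0); // inverted range, as in the answer

// With evenly spaced tick values (step 5 here), the pixel distance
// between any two adjacent ticks is constant:
const ticks = [25, 30, 35, 40, 45]; // stand-in for y.ticks(numberTicksY)
const tickDistance = Math.abs(y(ticks[1]) - y(ticks[0]));
console.log(tickDistance); // 5 / 22 of 400px, roughly 90.9px
```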
Gulp build with browserify environment variable

I'm looking to include either an environment variable or file that my modules can access for conditional flow.

```
// contains env build specific data
// or value 'develop' || 'production'
var env = require('config');
```

I know I can access the CL arguments with `yargs` which is great, but I can't seem to find a way to get arguments into my browserify build.

```
var bundleStream = {
    cache: {},
    packageCache: {},
    fullPaths: false,
    entries: [filename],
    extensions: config.extensions,
    debug: config.debug,
    paths: ['./node_modules', './app/js/'],
    require: ['jquery', 'lodash']
};

var bundle = function() {
    bundleLogger.start(filename);

    return bundleStream
        .bundle()
        .on('error', handleErrors)
        .pipe(source(filename.replace('-app', '-bundle')))
        .pipe(gulp.dest(process.cwd()))
        .on('end', reportFinished)
        .pipe(browserSync.reload({ stream: true }));
};
```
You could create a `config.json` file dynamically, and then require it in your modules:

```
var fs = require('fs');
var gulp = require('gulp');
var gutil = require('gulp-util'); // note: the package is "gulp-util", not "gulp-utils"

gulp.task('create-config', function(cb) {
  fs.writeFile('config.json', JSON.stringify({
    env: gutil.env.env,
    tacos: 'delicious'
  }), cb);
});

gulp.task('browserify', ['create-config'], function() {
  //...
});
```

In your modules:

```
var config = require('./config.json');

if (config.env === 'production') {
  //...
}
```

And on the command line:

```
gulp --env=production
```
list of dictionary Sorting in python

Maybe this is a duplicate; I didn't find a question of the same type. I have a list of dictionaries, for example:

```
mylist=[
    {'month':'MAR2011','amount':90},
    {'month':'MAR2013','amount':190},
    {'month':'JUN2011','amount':290},
    {'month':'AUG2011','amount':930},
    {'month':'DEC2011','amount':30},
    {'month':'NOV2010','amount':40},
    {'month':'FEB 2013','amount':760},
    {'month':'SEP 2012','amount':330},
    {'month':'APR2013','amount':50},
    {'month':'NOV 2011','amount':450},
    {'month':'OCT 2012','amount':450},
]
```

I want to get output like this:

```
[
    {'month':'FEB 2013','amount':760},
    {'month':'MAR2011','amount':90},
    {'month':'MAR2013','amount':190},
    {'month':'APR 2013','amount':50},
    {'month':'JUN2011','amount':290},
    {'month':'AUG2011','amount':930},
    {'month':'SEP 2012','amount':330},
    {'month':'OCT 2012','amount':450},
    {'month':'NOV2010','amount':40},
    {'month':'NOV 2011','amount':450},
    {'month':'DEC2011','amount':30},
]
```

I don't want to consider the year: sort according to the month only. Thanks in advance
Create a dictionary mapping month to an ordinal:

```
from calendar import month_abbr

month_to_index = {month.upper(): i for i, month in enumerate(month_abbr[1:])}
```

and use that to sort:

```
sorted(mylist, key=lambda d: month_to_index[d['month'][:3]])
```

Demo:

```
>>> from calendar import month_abbr
>>> month_to_index = {month.upper(): i for i, month in enumerate(month_abbr[1:])}
>>> import pprint
>>> pprint.pprint(sorted(mylist, key=lambda d: month_to_index[d['month'][:3]]))
[{'amount': 760, 'month': 'FEB 2013'},
 {'amount': 90, 'month': 'MAR2011'},
 {'amount': 190, 'month': 'MAR2013'},
 {'amount': 50, 'month': 'APR2013'},
 {'amount': 290, 'month': 'JUN2011'},
 {'amount': 930, 'month': 'AUG2011'},
 {'amount': 330, 'month': 'SEP 2012'},
 {'amount': 450, 'month': 'OCT 2012'},
 {'amount': 40, 'month': 'NOV2010'},
 {'amount': 450, 'month': 'NOV 2011'},
 {'amount': 30, 'month': 'DEC2011'}]
```

To sort on month first, year second, return a tuple from the key function:

```
sorted(mylist, key=lambda d: (month_to_index[d['month'][:3]], d['month'][-4:]))
```

Demo:

```
>>> pprint.pprint(sorted(mylist, key=lambda d: (month_to_index[d['month'][:3]], d['month'][-4:])))
[{'amount': 760, 'month': 'FEB 2013'},
 {'amount': 90, 'month': 'MAR2011'},
 {'amount': 190, 'month': 'MAR2013'},
 {'amount': 50, 'month': 'APR2013'},
 {'amount': 290, 'month': 'JUN2011'},
 {'amount': 930, 'month': 'AUG2011'},
 {'amount': 330, 'month': 'SEP 2012'},
 {'amount': 450, 'month': 'OCT 2012'},
 {'amount': 40, 'month': 'NOV2010'},
 {'amount': 450, 'month': 'NOV 2011'},
 {'amount': 30, 'month': 'DEC2011'}]
```

In this specific case there is no difference because the per-month data is already in sorted order.
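An alternative key function, not part of the answer above but a sketch of the same idea, lets `datetime.strptime` map the three-letter abbreviation to a month number instead of building the lookup table by hand (note that `%b` is locale-dependent, so this assumes an English locale):

```python
from datetime import datetime

# a small subset of the question's data, for illustration
mylist = [
    {'month': 'MAR2011', 'amount': 90},
    {'month': 'FEB 2013', 'amount': 760},
    {'month': 'NOV2010', 'amount': 40},
]

def month_number(d):
    # '%b' parses month abbreviations such as 'Mar'; title-case the first
    # three characters so both 'MAR2011' and 'SEP 2012' style values work.
    return datetime.strptime(d['month'][:3].title(), '%b').month

print(sorted(mylist, key=month_number))
```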
Instead of error, why don't both operands get promoted to float or double?

1) If one operand is of type `ulong`, while the other operand is of type `sbyte/short/int/long`, then a compile-time error occurs. I fail to see the logic in this. Thus, why would it be a bad idea for both operands to instead be promoted to type `double` or `float`?

```
long L = 100;
ulong UL = 1000;
double d = L + UL; // error saying + operator can't be applied to operands of type ulong and long
```

b) The compiler implicitly converts the `int` literal into `byte` type and assigns the resulting value to `b`:

```
byte b = 1;
```

But if we try to assign a literal of type `ulong` to type `long` (or to types `int`, `byte` etc.), then the compiler reports an error:

```
long L = 1000UL;
```

I would think the compiler would be able to figure out whether the result of a constant expression could fit into a variable of type `long`?!

thank you
To answer the question marked (1) -- adding signed and unsigned longs is probably a *mistake*. If the intention of the developer is to overflow into *inexact* arithmetic in this scenario then that's something they should do explicitly, by casting both arguments to double. Doing so implicitly is *hiding mistakes* more often than it is *doing the right thing*. To answer the question marked (b) -- of course the compiler could figure that out. Obviously it can because it does so for integer literals. But again, this is almost certainly an error. If your intention was to make that a signed long then *why did you mark it as unsigned*? This looks like a *mistake*. C# has been carefully designed so that it looks for weird patterns like this and calls your attention to them, rather than making a guess that you meant to say this weird thing and blazing on ahead as if everything were normal. The compiler is trying to encourage you to write sensible code; **sensible code does not mix signed and unsigned types**.
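To see concretely why silently promoting to `double` would *hide mistakes*: a 64-bit integer magnitude cannot always be represented exactly in a 64-bit float, so the "promoted" arithmetic would be silently inexact. The sketch below uses Python only because its integers are arbitrary-precision and make the rounding easy to observe; the chosen value fits a C# `ulong` but not a `long`:

```python
big = 2**63 + 1           # representable as a C# ulong, too big for a long

# A double has a 53-bit mantissa, so converting this 64-bit value to
# floating point silently rounds it down to exactly 2**63:
as_double = float(big)

print(as_double == big)   # False: one unit of precision already lost
print(int(as_double))     # 9223372036854775808, i.e. 2**63
```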
justify content flex-end not working for IE

`flex-end` works in Chrome and Firefox, but not in IE. Go through the following code:

```
.flex-container {
  display: flex;
  flex-wrap: nowrap;
  background-color: DodgerBlue;
  flex-direction: column;
  flex-flow: column;
  justify-content: flex-end;
  height: 100px
}
```

```
<h1>Flexible Boxes</h1>

<div class="flex-container">
  <div>1</div>
  <div>2</div>
  <div>3</div>
  <div>4</div>
  <div>5</div>
  <div>6</div>
  <div>7</div>
  <div>8</div>
</div>

<p>Try to resize the browser window.</p>
<p>A container with "flex-wrap: nowrap;" will never wrap its items.</p>
<p><strong>Note:</strong> Flexbox is not supported in Internet Explorer 10 or earlier versions.</p>
```
IE seems to align items differently using `justify-content` **when there is an overflow**. It doesn't only happen with `flex-end`; you will face the same issue using `center`. Any value that creates a top overflow will not work as expected. It also happens in a row direction: any value that creates a left overflow will not work.

Examples where the alignment is doing nothing:

```
.container {
  display: inline-flex;
  height: 50px;
  width: 50px;
  margin: 50px;
  border: 2px solid green;
}

.container span {
  flex-shrink: 0;
  width: 200%;
  background: red;
}

.alt {
  flex-direction: column;
}

.alt span {
  height: 200%;
  width: auto;
}
```

```
<div class="container" style="justify-content:flex-end;">
  <span></span>
</div>
<div class="container" style="justify-content:center;">
  <span></span>
</div>
<div class="container alt" style="justify-content:flex-end;">
  <span></span>
</div>
<div class="container alt" style="justify-content:center;">
  <span></span>
</div>
```

I am surprised to say this, but it seems that IE is doing a good thing in those cases, because it's preventing the unwanted overflow which may create issues like the ones described in this question [Centered flex-container grows beyond top](https://stackoverflow.com/q/49278725/8620333) and also this one [Can't scroll to top of flex item that is overflowing container](https://stackoverflow.com/q/33454533/8620333)

Considering this, it's difficult to say whether it's a bug. It's probably by design, and the IE team made the decision to avoid the *bad* overflow.
---

This said, here is a *hack* using some negative margin that will allow you to have the needed behavior on IE:

```
.flex-container {
  display: flex;
  flex-wrap: nowrap;
  background-color: DodgerBlue;
  flex-direction: column;
  flex-flow: column;
  justify-content: flex-end;
  height: 100px
}

.flex-container > div:first-child {
  margin-top: -100vh; /*put a very big margin here*/
}
```

```
<h1>Flexible Boxes</h1>

<div class="flex-container">
  <div>1</div>
  <div>2</div>
  <div>3</div>
  <div>4</div>
  <div>5</div>
  <div>6</div>
  <div>7</div>
  <div>8</div>
</div>
```

Same hack applied to the previous examples:

```
.container {
  display: inline-flex;
  height: 50px;
  width: 50px;
  margin: 50px;
  border: 2px solid green;
}

.container span {
  flex-shrink: 0;
  width: 200%;
  background: red;
}

.alt {
  flex-direction: column;
}

.alt span {
  height: 200%;
  width: auto;
}
```

```
<div class="container" style="justify-content:flex-end;">
  <span style="margin-left:-100%;"></span>
</div>
<div class="container" style="justify-content:center;">
  <span style="margin: 0 -100%;"></span>
</div>
<div class="container alt" style="justify-content:flex-end;">
  <span style="margin-top:-100%;"></span>
</div>
<div class="container alt" style="justify-content:center;">
  <span style="margin:-100% 0;"></span>
</div>
```

1: I don't have any official proof at the time being.
Saving a Wix checkbox value that defaults to checked to the registry

I have an installer authored with Wix. In the UI wizard, there's a checkbox that defaults to checked. I want to save the value of this checkbox to the registry for changes, repairs and upgrades using the (simpler version of the) ["Remember Property" pattern](http://robmensching.com/blog/posts/2010/5/2/The-WiX-toolsets-Remember-Property-pattern) described by Rob Mensching.

The checkbox implementation:

```
<Control Id="httpsCheckBox" Type="CheckBox" CheckBoxValue="true"
         X="30" Y="119" Width="139" Height="17"
         Text="Enable HTTPS services" Property="ENABLEHTTPS" />
```

The property definition:

```
<Property Id="ENABLEHTTPS" value="true">
  <RegistrySearch Id="EnableHttpsRegistrySearch" Type="raw"
                  Root="HKLM" Key="SOFTWARE\CompanyName\ProductName"
                  Name="EnableHttps" />
</Property>
```

And the property is written to the registry here:

```
<Component Id="RegistryEntries">
  <RegistryKey Root="HKLM" Key="SOFTWARE\CompanyName\ProductName">
    <RegistryValue Name="EnableHttps" Value="[ENABLEHTTPS]" Type="string" />
  </RegistryKey>
</Component>
```

The initial install works fine. The value in the registry is "true" if the checkbox is left checked or empty if it's unchecked.

The next time the installer is run, to install a new feature for example, the checkbox is always checked regardless of the value in the registry setting. If I remove the default value from the property definition so that the checkbox is unchecked the first time the installer is run, everything works fine. The next time the installer is run the checkbox (and property) have the correct value from the registry.

It's like the RegistrySearch does not set the property if the registry value is empty. Am I doing something wrong? Or is there a better way of doing this?
Basically, the element will use the default value if the registry entry is not found or null, and that is what you are experiencing. See the documentation here: <http://wix.sourceforge.net/manual-wix3/wix_xsd_registrysearch.htm>

Here is a solution to the problem: <http://www.mail-archive.com/[email protected]/msg32524.html>

```
<Property Id="ENABLEHTTPS" >
  <RegistrySearch Id="EnableHttpsRegistrySearch" Type="raw"
                  Root="HKLM" Key="SOFTWARE\CompanyName\ProductName"
                  Name="EnableHttps" />
</Property>

<CustomAction Id="SetENABLEHTTPS" Property="ENABLEHTTPS" Value="1"
              Execute="firstSequence" />

<Control Id="httpsCheckBox" Type="CheckBox" CheckBoxValue="1"
         X="30" Y="119" Width="139" Height="17"
         Text="Enable HTTPS services" Property="ENABLEHTTPS" />

<InstallUISequence>
  <Custom Action="SetENABLEHTTPS" Before="AppSearch">NOT Installed AND NOT OLDERVERSIONDETECTED</Custom>
</InstallUISequence>

<InstallExecuteSequence>
  <Custom Action="SetENABLEHTTPS" Before="AppSearch">NOT Installed AND NOT OLDERVERSIONDETECTED</Custom>
</InstallExecuteSequence>
```
Designing a SQL table with hierarchy/sub-categories

I have a table that looks something like this:

```
ID | Keyword | Category | Sub-Category | Sub-Sub-Category | Sub-Sub-Sub-Category
```

Do I need to split it up into two tables (a keyword table and a categories table with a parent id) if one keyword can only belong to one category, sub-category, etc.? Meaning there are no repetitions, is there still a need to split it up?
You only need one table to represent a 1-1 mapping. To represent 1-many or many-many mappings, you should use multiple tables. If a keyword can only correspond to one category/sub-category/sub-sub-category, your current layout should be fine. One caveat: if you want to search based on the keyword, there could be performance gains for separating the tables. It's much faster to perform an integer search. The discussion of storing the keyword values in another table coarsely corresponds to [this discussion](https://stackoverflow.com/questions/2032175/should-i-store-country-names-in-mysql-for-each-user) of storing country names (which are mostly static) in another table. Some key advantages of using another table might be things like (spoken) language independence, fast searching, and ease of updating later on.
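A sketch of the split layout (table and column names are made up for illustration) using SQLite, showing the self-referencing hierarchy and the integer-keyed join the performance caveat refers to:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- categories form the hierarchy via a self-referencing parent_id
    CREATE TABLE categories (
        id        INTEGER PRIMARY KEY,
        name      TEXT NOT NULL,
        parent_id INTEGER REFERENCES categories(id)  -- NULL for top level
    );
    -- each keyword points at exactly one node in the hierarchy (1-1 here)
    CREATE TABLE keywords (
        id          INTEGER PRIMARY KEY,
        keyword     TEXT NOT NULL,
        category_id INTEGER NOT NULL REFERENCES categories(id)
    );
""")
conn.executemany("INSERT INTO categories VALUES (?, ?, ?)",
                 [(1, "Electronics", None), (2, "Phones", 1), (3, "Android", 2)])
conn.execute("INSERT INTO keywords VALUES (1, 'pixel', 3)")

# Searching joins on an integer key instead of repeating text columns.
row = conn.execute("""
    SELECT k.keyword, c.name
    FROM keywords k JOIN categories c ON c.id = k.category_id
    WHERE k.keyword = 'pixel'
""").fetchone()
print(row)  # ('pixel', 'Android')
```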
Is there any ability to use regex to select the classes?

I know that there is the ability to use the wildcard to select elements in the page, like this: <https://developer.mozilla.org/en-US/docs/Web/CSS/Attribute_selectors>

But I want to use a regex in the class names like this:

```
div.col-*-2-* {
  color: green;
}
```

```
<div class="col-1-1-1">Don't select</div>
<div class="col-1-2-1">Select</div>
<div class="col-3-2-3">Select</div>
<div class="col-4-2-1">Select</div>
<div class="col-5-3-2">Don't select</div>
```
Your question is great, but the answer is NO. There's no regular-expression support in CSS selectors like what you stated.

For your case, you can use something like this:

```
[class*="col-1-2-"],
[class*="col-2-2-"],
[class*="col-3-2-"],
[class*="col-4-2-"],
[class*="col-5-2-"] {
  color: green;
}
```

```
<div class="col-1-1-1">Don't select</div>
<div class="col-1-2-1">Select</div>
<div class="col-3-2-3">Select</div>
<div class="col-4-2-1">Select</div>
<div class="col-5-3-2">Don't select</div>
```

As per your comment, you can use it like this:

```
[class="col-1-2-1"],
[class="col-2-2-1"],
[class="col-3-2-1"],
[class="col-4-2-1"],
[class="col-5-2-1"] {
  color: green;
}
```

As per the @RickHitchcock answer, you can implement it like this to meet your requirement:

```
[class^="col-"][class*="-2-"][class$="-1"]:not([class^="col-2"]) {
  color: green;
}
```

**Footnote:** As @torazaburo said, do not use weirdly constructed class names and then try to do regexp-like stuff to address them. Instead, use multiple classes that allow you to address elements directly.
Question about generics in C# comparing to Java

In Java I can specify a generic with the wildcard "?". It is possible to create a map like this one: `Map<String, ?>`. I'm working with C# and I need a `Dictionary<String, SomeInterface<?>>` (where ? can be int, double, any type). Is this possible in C#?

**EDIT:** Example:

```
interface ISomeInterface<out T>{
    T Method();
    void methodII();
}

class ObjectI : ISomeInterface<int>{ ... }
class ObjectII : ISomeInterface<double>{ ... }
class ObjectIII : ISomeInterface<string>{ .... }
```

I was trying to map these objects into a Dictionary like:

```
Dictionary<String, ISomeInterface<?>> _objs = new Dictionary<String, ISomeInterface<?>();
_objs.Add("Object1", new ObjectI());
_objs.Add("Object2", new ObjectII());
_objs.Add("Object3", new ObjectII());

foreach(var keyVal in _objs){
    Console.WriteLine(keyVal.Method());
}
```

Objects that implement ISomeInterface are loaded at runtime using Assembly and Activator.createInstance. At the moment of creation I don't know if objects implement `ISomeInterface<int>` or `ISomeInterface<double>`. Any help is very much appreciated.
No. However, if you're using C# 4, you can make `ISomeInterface` [covariant](http://msdn.microsoft.com/en-us/library/ee207183.aspx) so that `ISomeInterface<Anything>` will be convertible to `ISomeInterface<object>`. If `ISomeInterface` has methods that take parameters of its type parameter (as opposed to return values), this will be completely impossible, since it would then allow you to pass arbitrary objects as the parameters. **EDIT**: In your specific case, the best solution is to make `IMyInterface<T>` inherit a separate non-generic `IMyInterface` interface and move all members that don't involve `T` to the base interface. You can then use a `Dictionary<string, IMyInterface>` and you won't have any trouble.
Set/Get Java List<> from C code

### Java Code

In the Java code I have a class called `IdentificationResult` which has 3 members:

1. `enrollmentID`
2. `enrollmentSettings`
3. `identParams`

Here is the class:

```
package com.vito.android.framework.service;

class IdentificationResult {
    class IdentParams {
        byte[] otp;
        String seedId;
    }

    String enrollmentID;
    String enrollmentSettings;
    List<IdentParams> identParams;
}
```

In the main class I have the function `IdentificationResult GetAuthenticationStatus( )`. Here is the main Class:

```
public class TokenManager {
    /* Some code goes here ... */

    public IdentificationResult GetAuthenticationStatus( ) {
        /* Function do some actions here ... */
        return new IdentificationResult;
    }
}
```

---

### C++ Code

I call the Java method from my C++ code in this way:

```
void GetAuthenticationStatus( )
{
    // Attach current thread.
    JNIEnv *env = NULL;
    m_javaVM->AttachCurrentThread( env, NULL );
    if( env == NULL )
    {
        return -1;
    }

    jclass clazz = NULL;
    clazz = env->GetObjectClass( m_classObject );
    if( clazz == NULL )
    {
        return -1;
    }

    // Get class method.
    jmethodID clazzMethod = NULL;
    env->GetMethodID( clazz, "GetAuthenticationStatus", "(V;)Lcom/vito/android/framework/service/IdentificationResult;" );
    if( clazzMethod == NULL )
    {
        return VCS_RESULT_ERROR;
    }

    // Call Java 'GetAuthenticationStatus' function.
    jobject methodReturnObj = env->CallObjectMethod( m_classObject, clazzMethod );

    // Get IdentificationResult Class from Object.
    jclass identifyResultClass = env->GetObjectClass( methodReturnObj );
    if( identifyResultClass == NULL )
    {
        return -1;
    }

    // Get identParams.
    jfieldID fieldID = env->GetFieldID( identifyResultClass , "identParams", "***1. Question***");
    if( fieldID == NULL )
    {
        return -1;
    }
    else
    {
        *** 2. Question ***
    }
}
```

---

### Questions

1. What must I write to get the `List<IdentParams>` field ID?
2. How can I Get or Set the field value?
Okay, I have solved the problem and want to share the result with you. Here is the solution:

```
fieldID = env->GetFieldID( identifyResultClass, "identParams", "Ljava/util/List;" );
if( fieldID != NULL )
{
    // Find "java/util/List" Class (Standard JAVA Class).
    jclass listClass = env->FindClass( "java/util/List" );
    if( listClass == NULL )
    {
        DBG_WARNING(DBG_CTX, ("Can't Find Class \"java/util/List\".\n"));
        return -1;
    }

    // Get List object field.
    jobject listObject = env->GetObjectField( methodReturnObj, fieldID );
    if( listObject == NULL )
    {
        DBG_WARNING(DBG_CTX, ("Can't get ObjectField for \"List\".\n"));
        return -1;
    }

    // Get "java.util.List.get(int location)" MethodID
    jmethodID getMethodID = env->GetMethodID( listClass, "get", "(I)Ljava/lang/Object;" );
    if( getMethodID == NULL )
    {
        DBG_WARNING(DBG_CTX, ("Can't get MethodID for \"java.util.List.get(int location)\".\n"));
        return -1;
    }

    // Get "int java.util.List.size()" MethodID
    jmethodID sizeMethodID = env->GetMethodID( listClass, "size", "()I" );
    if( sizeMethodID == NULL )
    {
        DBG_WARNING(DBG_CTX, ("Can't get MethodID for \"int java.util.List.size()\".\n"));
        return -1;
    }

    // Call "int java.util.List.size()" method and get count of items in the list.
    int listItemsCount = (int)env->CallIntMethod( listObject, sizeMethodID );
    DBG_DISPLAY(DBG_CTX, ("List has %i items\n", listItemsCount));

    for( int i = 0; i < listItemsCount; ++i )
    {
        // Call "java.util.List.get" method and get IdentParams object by index.
        // (Pass i itself here: the original i - 1 was an off-by-one bug that
        // asked for index -1 on the first iteration.)
        jobject identParamsObject = env->CallObjectMethod( listObject, getMethodID, i );
        if( identParamsObject == NULL )
        {
            DBG_WARNING(DBG_CTX, ("Can't get Object from \"identParamsObject\" at index %i.\n", i));
        }
    }
}
```

Thanks to @Joop Eggen, he gave me a great idea!
Why can't the name of a function expression be reassigned?

Why will the following snippet throw an error?

```
"use strict";

(function a() {
  console.log(typeof a); // function
  console.log(a = 0); // error
})();
```

Why doesn't this snippet throw an error?

```
"use strict";

(function() {
  function a() {
    console.log(a = 0); // 0
  }
  return a;
})()();
```

Why does immediately returning the function throw an error?

```
"use strict";

(function() {
  return function a() {
    console.log(a = 0); // error
  };
})()();
```

Are function expressions the only case where this happens? Why can't they be reassigned?
Because that's how named *FunctionExpressions* behave, which is different than how *FunctionDeclarations* do.

The rules to [create a named *FunctionExpression* are](https://tc39.es/ecma262/#_ref_13324):

> *FunctionExpression* : `function` *BindingIdentifier* `(` *FormalParameters* `)` `{` *FunctionBody* `}`
>
> 1. Let scope be the running execution context's LexicalEnvironment.
> 2. Let funcEnv be NewDeclarativeEnvironment(scope).
> 3. Let envRec be funcEnv's EnvironmentRecord.
> 4. Let name be StringValue of BindingIdentifier.
> 5. Perform envRec.CreateImmutableBinding(name, false).
> 6. Let closure be FunctionCreate(Normal, FormalParameters, FunctionBody, funcEnv).
> 7. Perform MakeConstructor(closure).
> 8. Perform SetFunctionName(closure, name).
> 9. Set closure.[[SourceText]] to the source text matched by FunctionExpression.
> 10. Perform envRec.InitializeBinding(name, closure).
> 11. Return closure.

The important point is step **5**, which makes the name of the function an [immutable binding](https://tc39.es/ecma262/#table-15).

---

- In the first case, you try to reassign this *Immutable Binding*. **It throws.**
- In the second case however, your named function is not a *FunctionExpression*, but a *FunctionDeclaration*, which has [different behavior](https://tc39.es/ecma262/#sec-function-definitions-runtime-semantics-instantiatefunctionobject).
- In the last case, it's a *FunctionExpression*, and in this regard it does the same as the first one.
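A small runnable check of the difference described above (the function name `a` mirrors the question's snippets):

```javascript
"use strict";

// Named function expression: its own name is an immutable binding inside
// the function, so assigning to it throws a TypeError in strict mode.
let nfeError = null;
try {
  (function a() { a = 0; })();
} catch (e) {
  nfeError = e.name;
}

// Function declaration: the name is an ordinary mutable binding in the
// enclosing scope, so the same assignment is allowed.
let declResult = (function () {
  function a() { a = 0; return a; }
  return a();
})();

console.log(nfeError);   // "TypeError"
console.log(declResult); // 0
```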
Remove spaces between single character in string

I was trying to remove duplicate words from a string in Scala. I wrote a udf (code below) to remove duplicate words from a string:

```
val de_duplicate: UserDefinedFunction = udf ((value: String) => {
  if(value == "" | value == null) {""}
  else {value.split("\\s+").distinct.mkString(" ")}
})
```

The problem I'm facing with this is that it also removes single-character tokens from the string. For example, if the string was:

```
"test abc abc 123 foo bar f f f"
```

the output I'm getting is:

```
"test abc 123 foo bar f"
```

What I want is to remove only repeating words and not single characters. One workaround I could think of is to replace the spaces between any single-character tokens in the string, so that the example input string would become:

```
"test abc abc 123 foo bar fff"
```

which would solve my problem. I can't figure out the proper regex pattern, but I believe this could be done using a capture group or look-ahead. I looked at similar questions for other languages but couldn't figure out the regex pattern in Scala. Any help on this would be appreciated!
If you want to remove spaces between single characters in your input string, you can just use the following regex:

```
println("test abc abc 123 foo bar f f f".replaceAll("(?<= \\w|^\\w|^) (?=\\w |\\w$|$)", ""));
```

**Output:**

```
test abc abc 123 foo bar fff
```

**Demo**: <https://regex101.com/r/tEKkeP/1>

**Explanations:**

The regex `(?<= \w|^\w|^) (?=\w |\w$|$)` will match spaces that are surrounded by one word character (with possibly a space before/after it, or the beginning/end-of-line anchors) via positive lookahead/lookbehind constructs.

**More inputs:**

```
test abc abc 123 foo bar f f f
f boo f boo boo f boo
f f too f
```

**Associated outputs:**

```
test abc abc 123 foo bar fff
f boo f boo boo f boo
ff too f
```
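For what it's worth, the same pattern runs in any regex engine with lookbehind support, and chaining it with the distinct-words step from the question gives the whole pipeline. This sketch uses JavaScript purely for illustration; in the Scala UDF the equivalent is a `replaceAll` on `value` before the existing `split`/`distinct`:

```javascript
const input = "test abc abc 123 foo bar f f f";

// Step 1: glue runs of single-character tokens together ("f f f" -> "fff").
const glued = input.replace(/(?<= \w|^\w|^) (?=\w |\w$|$)/g, "");
console.log(glued); // "test abc abc 123 foo bar fff"

// Step 2: remove duplicate words, keeping first occurrences (as in the UDF).
const deduped = [...new Set(glued.split(/\s+/))].join(" ");
console.log(deduped); // "test abc 123 foo bar fff"
```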
Conditionally create either text or striked through text using ternary

`html` ..

```
<tr ng-repeat="player in players">
  <td ng-cloak>{{ player.standing ? player.name : '<strike>' + player.name + '</strike>' }}</td>
  <td ng-cloak>{{ player.associatedNumber }}</td>
  <td ng-cloak>
    <span ng-class="player.standing === true ? 'label label-success': 'label label-danger'">{{ player.standing }}</span>
  </td>
</tr>
```

Dataset ..

```
[{
  "name": "Robert C",
  "associatedNumber": 21,
  "standing": true
}, {
  "name": "Joey C",
  "associatedNumber": 55,
  "standing": true
}, {
  "name": "Bobby A",
  "associatedNumber": 15,
  "standing": true
}]
```

This is the first row rendered (others are similar) ..

[![screenshot](https://i.stack.imgur.com/3iuwd.png)](https://i.stack.imgur.com/3iuwd.png)

Instead I want to render the player's name either as plain text, or striked through if they're not `standing`.
You need to use `ng-class` for this. There are two ways of writing a ternary in Angular.

Prior to version 1.1.5:

```
<td ng-cloak data-ng-class="player.standing ? 'null' : 'strikethrough'">{{ player.name }}</td>
```

Version 1.1.5 and later:

```
<td ng-cloak data-ng-class="player.standing && 'null' || 'strikethrough'">{{ player.name }}</td>
```

Add the CSS style for `.strikethrough` and everything is good to go.

```
.strikethrough {
  text-decoration: line-through;
}
```

You can see it working at this plunker: <http://plnkr.co/edit/MYnXLwCC7XI1MrvcI5ti?p=preview>
Can't start Laravel, I get "Base table or view not found" error

First I rolled back 2 migrations by mistake, then I ran the `php artisan migrate` command and I got the following error:

```
[Illuminate\Database\QueryException]
SQLSTATE[42S02]: Base table or view not found: 1146 Table 'exercise1.categories' doesn't exist
(SQL: select * from `categories` where `parent_id` = 0)

[PDOException]
SQLSTATE[42S02]: Base table or view not found: 1146 Table 'exercise1.categories' doesn't exist
```

Then I stopped Laravel. After that, when I run the `php artisan serve` command for starting Laravel, I get the same error.

Here are the 2 migrations which I've rolled back:

**1.**

```
class CreateCategoriesTable extends Migration
{
    public function up()
    {
        Schema::create('categories', function (Blueprint $table) {
            $table->increments('id');
            $table->string('name');
            $table->text('parent_id');
            $table->timestamps();
        });
    }

    public function down()
    {
        Schema::dropIfExists('categories');
    }
}
```

**2.**

```
class CreateArticlesTable extends Migration
{
    public function up()
    {
        Schema::create('articles', function (Blueprint $table) {
            $table->increments('id');
            $table->string('title')->nullable(false);
            $table->longText('article')->nullable(false);
            $table->integer('user_id')->unsigned();
            $table->foreign('user_id')->references('id')->on('users')->onDelete('cascade');
            $table->timestamps();
        });
    }

    public function down()
    {
        Schema::dropIfExists('articles');
    }
}
```

Please help me to solve this frustrating problem. All answers are highly appreciated, thanks in advance.
If you encounter this problem and it's not caused by migration files, then it most probably happens for one of 2 possible reasons:

1. Check **ServiceProviders**' boot functions for queries that reference tables that don't exist.
2. Check if you've created a **custom helper function** and autoloaded it in the composer.json file. If the custom helper function contains queries that reference tables that don't exist, it will cause this error.

Since ServiceProviders' boot functions and autoloaded custom helper functions are loaded first when Laravel starts, all `php artisan` commands will generate the "Base table or view not found" error. At this point, what you should do is comment out those queries that reference nonexistent tables and run `php artisan serve`, then run `php artisan migrate`. Then uncomment those lines, save, and everything should work fine.

As @devk suggested, it's better to check the Laravel log files, which point exactly to where the problem happens. That's what led me to find a solution. For this, don't forget to [turn on debug mode.](https://stackoverflow.com/questions/37535315/where-are-logs-located)
Jetpack Compose: How to create a rating bar?

I'm trying to implement a rating bar. I referred to <https://gist.github.com/vitorprado/0ae4ad60c296aefafba4a157bb165e60>, but I don't understand anything from this code. It works, but when I use it the stars don't have rounded corners. I want to implement something like the following:

![](https://i.stack.imgur.com/GgUes.png)
I made a very basic sample for this; it should give the basic idea for creating a rating bar from sample border and filled png files. ``` @Composable private fun RatingBar( modifier: Modifier = Modifier, rating: Float, spaceBetween: Dp = 0.dp ) { val image = ImageBitmap.imageResource(id = R.drawable.star) val imageFull = ImageBitmap.imageResource(id = R.drawable.star_full) val totalCount = 5 val height = LocalDensity.current.run { image.height.toDp() } val width = LocalDensity.current.run { image.width.toDp() } val space = LocalDensity.current.run { spaceBetween.toPx() } val totalWidth = width * totalCount + spaceBetween * (totalCount - 1) Box( modifier .width(totalWidth) .height(height) .drawBehind { drawRating(rating, image, imageFull, space) }) } private fun DrawScope.drawRating( rating: Float, image: ImageBitmap, imageFull: ImageBitmap, space: Float ) { val totalCount = 5 val imageWidth = image.width.toFloat() val imageHeight = size.height val remainder = rating - rating.toInt() val ratingInt = (rating - remainder).toInt() for (i in 0 until totalCount) { val start = imageWidth * i + space * i drawImage( image = image, topLeft = Offset(start, 0f) ) } drawWithLayer { for (i in 0 until totalCount) { val start = imageWidth * i + space * i // Destination drawImage( image = imageFull, topLeft = Offset(start, 0f) ) } val end = imageWidth * totalCount + space * (totalCount - 1) val start = rating * imageWidth + ratingInt * space val size = end - start // Source drawRect( Color.Transparent, topLeft = Offset(start, 0f), size = Size(size, height = imageHeight), blendMode = BlendMode.SrcIn ) } } private fun DrawScope.drawWithLayer(block: DrawScope.() -> Unit) { with(drawContext.canvas.nativeCanvas) { val checkPoint = saveLayer(null, null) block() restoreToCount(checkPoint) } } ``` Usage ``` Column { RatingBar(rating = 3.7f, spaceBetween = 3.dp) RatingBar(rating = 2.5f, spaceBetween = 2.dp) RatingBar(rating = 4.5f, spaceBetween = 2.dp) RatingBar(rating = 1.3f, spaceBetween = 4.dp)
} ``` Result [![enter image description here](https://i.stack.imgur.com/u0gC7.png)](https://i.stack.imgur.com/u0gC7.png) I also created a library, available [here](https://github.com/SmartToolFactory/Compose-RatingBar), that supports gestures and can use other png files and vectors as rating items. ``` @Composable fun RatingBar( modifier: Modifier = Modifier, rating: Float, painterEmpty: Painter, painterFilled: Painter, tintEmpty: Color? = DefaultColor, tintFilled: Color? = null, itemSize: Dp = Dp.Unspecified, rateChangeMode: RateChangeMode = RateChangeMode.AnimatedChange(), gestureMode: GestureMode = GestureMode.DragAndTouch, shimmer: Shimmer? = null, itemCount: Int = 5, space: Dp = 0.dp, ratingInterval: RatingInterval = RatingInterval.Unconstrained, allowZeroRating: Boolean = true, onRatingChangeFinished: ((Float) -> Unit)? = null, onRatingChange: (Float) -> Unit ) ``` [![enter image description here](https://i.stack.imgur.com/YLE3s.gif)](https://i.stack.imgur.com/YLE3s.gif)
Is there a way to identify which perl specific options are passed to a script? I'm aware a script can retrieve all the command line arguments passed to it through ARGV, i.e.: ``` # test.pl print "$ARGV[0]\n"; print "$ARGV[1]\n"; print "$ARGV[2]\n"; ## perl ./test.pl one two three one two three ``` In the above example, the command line arguments passed to the `test.pl` script are "one", "two" and "three". Now, suppose I run the following command: ``` ## perl -d:DumpTrace test.pl one two three or ## perl -c test.pl one two three ``` How can I tell from within the operations of the `test.pl` script that the options `-c` or `-d:DumpTrace` were passed to the perl interpreter? I'm looking for a method that will identify when options are passed to the perl interpreter during the execution of a script: ``` if "-c" was used in the execution of `test.pl` script { print "perl -c option was used in the execution of this script"; } ```
You can use [Devel::PL\_origargv](https://metacpan.org/pod/release/TOBYINK/Devel-PL_origargv-0.004/lib/Devel/PL_origargv.pm) to get access to the command-line parameters that were passed to the `perl` interpreter. Example script `p.pl`: ``` use strict; use warnings; use Devel::PL_origargv; my @PL_origargv = Devel::PL_origargv->get; print Dumper({args => \@PL_origargv}); ``` Then run the script, for example, like this: ``` $ perl -MData::Dumper -I. p.pl $VAR1 = { 'args' => [ 'perl', '-MData::Dumper', '-I.', 'p.pl' ] }; ```
How to get HTTP headers How do you retrieve all HTTP headers from a `NSURLRequest` in Objective-C?
This falls under the easy, but not obvious class of iPhone programming problems. Worthy of a quick post: The headers for an HTTP connection are included in the `NSHTTPURLResponse` class. If you have an `NSHTTPURLResponse` variable you can easily get the headers out as a `NSDictionary` by sending the allHeaderFields message. For synchronous requests (not recommended, because they block) it's easy to populate an `NSHTTPURLResponse`: ``` NSURL *url = [NSURL URLWithString:@"http://www.mobileorchard.com"]; NSURLRequest *request = [NSURLRequest requestWithURL: url]; NSHTTPURLResponse *response; [NSURLConnection sendSynchronousRequest: request returningResponse: &response error: nil]; if ([response respondsToSelector:@selector(allHeaderFields)]) { NSDictionary *dictionary = [response allHeaderFields]; NSLog(@"%@", dictionary); } ``` With an asynchronous request you have to do a little more work. When the callback `connection:didReceiveResponse:` is called, it is passed an `NSURLResponse` as the second parameter. You can cast it to an `NSHTTPURLResponse` like so: ``` - (void)connection:(NSURLConnection *)connection didReceiveResponse:(NSURLResponse *)response { NSHTTPURLResponse *httpResponse = (NSHTTPURLResponse*)response; if ([response respondsToSelector:@selector(allHeaderFields)]) { NSDictionary *dictionary = [httpResponse allHeaderFields]; NSLog(@"%@", dictionary); } } ```
Scala PartialFunctions from concrete ones Is there any quick way to use as a concrete function (of type, say, `(A) => B`) as a `PartialFunction[A, B]`? The most concise syntax I know of is: ``` (a: A) => a match { case obj => func(obj) } ``` Is there an implicit conversion anywhere, something like: ``` implicit def funcAsPartial[A, B](func: A => B) = new PartialFunction[A, B] { def isDefinedAt(a: A) = true def apply(a: A) = func(a) } ``` I guess I just wrote what I was looking for, but does this already exist in the Scala libraries?
Doing this with an implicit conversion is dangerous, for the same reason that `(A) => B` should not inherit from `PartialFunction[A, B]`. That is, the contract of PartialFunction guarantees that you can safely\* call `apply` wherever `isDefinedAt` returns `true`. Function1's contract provides no such guarantee. Your implicit conversion will result in a PartialFunction that violates its contract if you apply it to a function that is not defined everywhere. Instead, use a pimp to make the conversion explicit: ``` implicit def funcAsPartial[A, B](f: A => B) = new { /** only use if `f` is defined everywhere */ def asPartial(): PartialFunction[A, B] = { case a => f(a) } def asPartial(isDefinedAt: A => Boolean): PartialFunction[A, B] = { case a if isDefinedAt(a) => f(a) } } // now you can write val f = (i: Int) => i * i val p = f.asPartial // defined on all integers val p2 = f.asPartial(_ > 0) // defined only on positive integers ``` \* As discussed in the comments, it may not be entirely clear what "safety" means here. The way I think about it is that a PartialFunction explicitly declares its domain in the following precise sense: if `isDefinedAt` returns true for a value `x`, then `apply(x)` can be evaluated in a way that is consistent with the intent of the function's author. That *does not* imply that `apply(x)` will not throw an exception, but merely that the exception was part of the design of the function (and should be documented).
drawing line graph with primary and secondary y axis c# I have been researching ways of drawing charts in c#. I have a specific requirement of drawing a chart with a y axis, an x axis and a secondary y axis. I have tried using excel Interop but have not found a solution. I have started working on the MSChart component but not reached anything yet. The data I am working with is ``` index lines branches 1 20 5 2 30 8 3 34 6 ``` I want to plot the indices on the x-axis, with a scale for lines on the left y axis and a scale for branches on the right y axis. I am using .net versions 2.0 and 3.5 if that helps
When creating a series, set the `YAxisType` property to `AxisType.Primary` or `AxisType.Secondary` ``` var lines = new Series("lines"); lines.ChartType = SeriesChartType.Line; lines.Points.Add(new DataPoint(1, 20)); lines.Points.Add(new DataPoint(2, 30)); lines.Points.Add(new DataPoint(3, 34)); lines.YAxisType = AxisType.Primary; chart1.Series.Add(lines); var branches = new Series("branches"); branches.ChartType = SeriesChartType.Line; branches.Points.Add(new DataPoint(1, 5)); branches.Points.Add(new DataPoint(2, 6)); branches.Points.Add(new DataPoint(3, 8)); branches.YAxisType = AxisType.Secondary; chart1.Series.Add(branches); ``` This results in a chart like this, which sounds like what you are after. The example below is a bit ugly; it has lines for primary and secondary y-values, etc., but you can clean that up the way you want by setting the properties of the chart control. ![enter image description here](https://i.stack.imgur.com/BxXYQ.png)
Display back button in ion-nav-bar iOS only Though I'm sure this is something that isn't uncommon in ionic development, I can't seem to find anything on the web that explains this. If I have the following: ``` <body ng-app="myApp"> <ion-nav-bar class="bar-positive"> </ion-nav-bar> <ion-nav-view></ion-nav-view> </body> ``` And one of the views that I use in the `ion-nav-view` looks like this for exmaple: ``` <ion-view view-title="Profile" ng-controller="profileController" class="profile-view"> <ion-content class="padding has-header"> .... ``` How do I display a back button (really just the `ion-chevron-left` icon) for iOS only, and have it hidden on other devices that have a dedicated hardware button?
There are a couple of ways to achieve platform specific behaviour. You can read up on the back button specifically, [here](http://ionicframework.com/docs/api/directive/ionNavBackButton/). 1. In your controller, you can use the [ionic.Platform](http://ionicframework.com/docs/api/utility/ionic.Platform/) utility to determine the current platform and the Ionic [NavBarDelegate](http://ionicframework.com/docs/api/service/$ionicNavBarDelegate/) to show/hide the button. HTML: ``` <body ng-app="app" ng-controller="appCtrl"> <ion-nav-bar class="bar-positive"> <ion-nav-back-button></ion-nav-back-button> </ion-nav-bar> </body> ``` Controller: ``` .controller('appCtrl', function($scope, $ionicNavBarDelegate) { var isIOS = ionic.Platform.isIOS(); $ionicNavBarDelegate.showBackButton(isIOS); }); ``` 2. In your css, define separate classes to show/hide the element depending on the platform. From [here](http://forum.ionicframework.com/t/how-to-use-platform-ios-and-platform-cordova-css-classes/5898). HTML: ``` <body ng-app="app" ng-controller="appCtrl"> <ion-nav-bar class="bar-positive"> <ion-nav-back-button class="platform-nav"></ion-nav-back-button> </ion-nav-bar> </body> ``` CSS: ``` .platform-ios .platform-nav { display: block; } .platform-android .platform-nav { display: none; } ```
Javascript object reference linked to object in array? If I have an object: ``` var array = []; var theobject = null; array.push({song:"The Song", artist:"The Artist"}, {song:"Another Song", artist:"Another Artist"}); ``` and I do: ``` for(var i = 0; i < array.length; i++) if(array[i].song == "The Song") { theobject = array[i]; break; } ``` If I then change theobject by doing: ``` theobject.song = "Changed Name"; ``` I am having problems where despite myself trying to set ONLY "theobject.song" to be equal to "Changed Name", array[0].song becomes set to "Changed Name" also. What I want is "theobject.song" to become "Changed Name" while array[0].song remains "The Song". What is the best way to accomplish this?
The loop gives you a reference to the object stored in the array: ``` for(var i = 0; i < array.length; i++) if(array[i].song === "The Song") { theobject = array[i]; break; } ``` Because `theobject` and `array[0]` point at the same object, changing the object's `song` property through one is visible through the other, which is exactly why `array[0].song` changes too. If you want an independent copy of the object, then you'll have to do a manual copy. E.g. ``` function clone(obj) { var copy = {}; for (var attr in obj) { if (obj.hasOwnProperty(attr)) { copy[attr] = obj[attr]; } } return copy; } ``` And your loop becomes: ``` for(var i = 0; i < array.length; i++) if(array[i].song === "The Song") { theobject = clone(array[i]); break; } ```
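As a side note (this goes beyond the original answer, which predates ES2015): for flat objects like these, the manual `clone` helper can be replaced by `Object.assign` or object spread, which copy the enumerable own properties into a new object. A quick sketch:

```javascript
var array = [
  { song: "The Song", artist: "The Artist" },
  { song: "Another Song", artist: "Another Artist" },
];

// Object.assign copies the enumerable own properties into a brand-new
// object, so mutating the copy leaves the array element untouched.
var theobject = Object.assign({}, array[0]);
theobject.song = "Changed Name";

console.log(array[0].song); // "The Song"
console.log(theobject.song); // "Changed Name"
```

Both approaches are shallow copies; if the objects ever contain nested objects, a deep copy (for example `structuredClone` in current runtimes) is needed instead.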
Prolog - multiplying by addition I need a simple function in SWI-Prolog which multiplies by repeated addition. Something like m(X,Y,Z) where for example X=5, Z=3 <==> 5\*3. Y is the result: Y=5, Y=10, Y=15 [stop]. I was thinking about something like this: ``` m(X,Y,Z):- Z>0, /*when Z reaches 0 you stop */ I=X+X, W=Z-1, m(I,Y,W). ``` But it always returns "false" and I don't know why.
Let's start by thinking about what the predicate should describe: it's a relation between three numbers, where the third is the product of the first two. Since you want to describe multiplication by reducing the second argument to zero while adding up the first accordingly many times we are talking about natural numbers. So a nicely descriptive name would be nat\_nat\_prod/3. Next consider the possible cases: 1. The second argument can be zero. Then the product has to be zero as well since X\*0=0. So this is the base case. 2. Otherwise the second argument is greater than zero. Then you want to decrement it by one and calculate the product of the first argument and this new number. Since the predicate can use itself to describe that, this is a recursive goal. Subsequently you add the first argument to the intermediary product described by the recursion. This can be written in Prolog like so: ``` nat_nat_prod(_X,0,0). % case 1) nat_nat_prod(X,Y1,P1) :- % case 2) Y1 > 0, Y0 is Y1-1, nat_nat_prod(X,Y0,P0), P1 is P0+X. ``` Now let's try some queries: ``` ?- nat_nat_prod(5,3,P). P = 15 ; false. ?- nat_nat_prod(5,4,P). P = 20 ; false. ?- nat_nat_prod(5,0,P). P = 0 ; false. ?- nat_nat_prod(1,0,P). P = 0 ; false. ?- nat_nat_prod(1,1,P). P = 1 ; false. ``` However, when playing around with the predicate, you'll notice that the first two arguments have to be instantiated otherwise you'll get an error: ``` ?- nat_nat_prod(1,Y,3). ERROR: >/2: Arguments are not sufficiently instantiated ?- nat_nat_prod(X,1,3). ERROR: is/2: Arguments are not sufficiently instantiated ``` This happens due to the use of >/2 and is/2. You could get around this problem by using CLP(FD) but I think that's beside the point. This way of defining multiplication is obviously very inefficient compared to using the standard arithmetic function \*/2, e.g.: ``` ?- time(nat_nat_prod(2,1000000,P)). 
% 3,000,000 inferences, 33.695 CPU in 33.708 seconds (100% CPU, 89035 Lips) P = 2000000 ; % 3 inferences, 0.031 CPU in 0.031 seconds (100% CPU, 97 Lips) false. ?- time(P is 2*1000000). % 1 inferences, 0.000 CPU in 0.000 seconds (86% CPU, 82325 Lips) P = 2000000. ``` As already hinted by @false in the comments it is more common to introduce people to [successor arithmetics](https://stackoverflow.com/questions/tagged/successor-arithmetics) first and then to define addition/multiplication of two numbers in s(X) notation this way. Since you can't use the standard arithmetic functions with s(X) numbers, you also don't run into the associated instantiation errors.
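As a sketch of the successor-arithmetic idea mentioned above (assumed notation: 0, s(0), s(s(0)), ... for the naturals), multiplication can be defined purely through addition, with no is/2 and therefore no instantiation errors:

```prolog
% add(X, Y, Z): Z is X + Y in s(X) notation
add(0, Y, Y).
add(s(X), Y, s(Z)) :- add(X, Y, Z).

% mul(X, Y, Z): Z is X * Y, i.e. Y added up X times
mul(0, _, 0).
mul(s(X), Y, Z) :- mul(X, Y, Z1), add(Y, Z1, Z).
```

For example, `?- mul(s(s(0)), s(s(s(0))), Z).` yields `Z = s(s(s(s(s(s(0))))))`, i.e. 2\*3=6, and unlike the is/2 version such definitions can often be queried in other modes as well.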
Golang proper use of interfaces I am new to Go and am running into a situation that I am unsure how to solve. I am working on some code that takes a DNS packet in raw bytes and returns a struct called DNSPacket. The struct looks like the following ``` type DNSPacket struct { ...some fields Questions []Question Answers []Answer ...some more fields } ``` The issue I am having is with the Answers type which looks like this. ``` type Answer struct { Name string Type int Class int TTL uint32 RdLength int Data []byte } ``` Depending on the type of Answer the `Data` field must be decoded differently. For example if the Answer is an `A` record (Type 1) the data is simply an ipv4 address. However if the Answer is an `SRV` record (Type 33) then the data is contains `port`, `priority`, `weight` and `target` encoded in the byte slice. I thought it would be great if I could have a method on Answer called `DecodeData()` that returns the correct data depending on the type, however since there is no overriding or inheritance in Go I am unsure how to solve this. I tried using an interface to solve this, but it would not compile. I tried something like ``` type DNSRecordType interface { Decode(data []byte) } type RecordTypeSRV struct { target string ...more fields } //to 'implement' the DNSRecordType interface func (record *RecordTypeSRV) Decode(data []byte) { //do the work to decode appropriately and set //the fields on the record } ``` Then in the Answer method ``` func (a *Answer) DecodeData() DNSRecordType { if a.Type === SRVType { record := RecordTypeSRV{} record.Decode(a.Data) return record } //do something similar for other record types } ``` What would be the correct Go way of having a single Answer type, but be able to return different types of Answer Data depending on their type? Sorry, if this is a completely beginner question as I am still very new to Go. Thanks!
Let me summarize your question. You have a DNS Packet with a list of Answers. Based on the type of answer you have to process the data in the answer. ``` type DNSPacket struct { ...some fields Questions []Question Answers []Answer ...some more fields } type Answer struct { Name string Type int Class int TTL uint32 RdLength int Data []byte } ``` **Answer** Let's create an interface that should be implemented to process data. ``` type PacketProcessor interface { Process(Answer) } ``` Let SRV implement PacketProcessor: ``` type SRV struct { ... } func (s *SRV) Process(a Answer) { ... } ``` Your processing logic should be as follows ``` func (a *Answer) Process() { var p PacketProcessor switch a.Type { case SRVType: p = &SRV{} ... //other cases } //finally p.Process(*a) } ``` Hope it helps :)
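To make the pattern concrete, here is a self-contained, runnable sketch. The record-type constants, the byte layout of `Data`, and having `Process` return a string are all assumptions for illustration only; real DNS wire formats are more involved:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// Hypothetical record-type constants (values chosen for illustration).
const (
	AType   = 1
	SRVType = 33
)

type Answer struct {
	Type int
	Data []byte
}

// PacketProcessor mirrors the interface from the answer above,
// except each processor returns a decoded description.
type PacketProcessor interface {
	Process(a Answer) string
}

type A struct{}

func (A) Process(a Answer) string {
	// An A record's data is simply an IPv4 address.
	return fmt.Sprintf("A %d.%d.%d.%d", a.Data[0], a.Data[1], a.Data[2], a.Data[3])
}

type SRV struct{}

func (SRV) Process(a Answer) string {
	// Assume (for the demo) the first two bytes are a big-endian port.
	port := binary.BigEndian.Uint16(a.Data[:2])
	return fmt.Sprintf("SRV port=%d", port)
}

// Dispatch picks a processor based on the record type.
func Dispatch(a Answer) string {
	var p PacketProcessor
	switch a.Type {
	case SRVType:
		p = SRV{}
	default:
		p = A{}
	}
	return p.Process(a)
}

func main() {
	fmt.Println(Dispatch(Answer{Type: AType, Data: []byte{127, 0, 0, 1}}))
	fmt.Println(Dispatch(Answer{Type: SRVType, Data: []byte{0x00, 0x50}}))
}
```

The key design point is the same as in the answer: the `switch` on `Answer.Type` is confined to one dispatch function, and everything else works against the interface.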
How should I use DurationField in my model? In my model I want to be able to input a duration, like *2 years*, *5 months*, etc. In version 1.8 DurationField was introduced so I tried using that: In my model I have ``` user_validPeriod = models.DurationField() ``` Trying to add a new *User* from my admin panel, if I type something like *2d* or *2 days* in the text field that appears, I get `Enter a valid duration`. ![enter image description here](https://i.stack.imgur.com/VszE6.png) Can someone provide me with an example of how this field is supposed to be used?
To use a DurationField in Django 1.8 you have to use a Python `datetime.timedelta` instance, like this: Consider this model: ``` from django.db import models class MyModel(models.Model): duration = models.DurationField() ``` You can set a duration this way: ``` import datetime my_model = MyModel() my_model.duration = datetime.timedelta(days=20, hours=10) ``` And query it this way: ``` # Equal durations = MyModel.objects.filter(duration=datetime.timedelta(*args, **kwargs)) # Greater than or equal durations = MyModel.objects.filter(duration__gte=datetime.timedelta(*args, **kwargs)) # Less than or equal durations = MyModel.objects.filter(duration__lte=datetime.timedelta(*args, **kwargs)) ``` More info on datetime.timedelta [here](https://docs.python.org/3/library/datetime.html#datetime.timedelta) and on DurationField [here](https://docs.djangoproject.com/en/stable/ref/models/fields/#django.db.models.DurationField). In your admin panel, you can enter a duration using a string with the following format: `[DD] [[hh:]mm:]ss`
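The `datetime.timedelta` values themselves behave like this in plain Python (standard library only, no Django required), which is what the `__gte`/`__lte` lookups above rely on:

```python
import datetime

d = datetime.timedelta(days=20, hours=10)

# timedelta normalizes everything to days/seconds/microseconds internally
print(d.days)             # 20
print(d.seconds)          # 36000  (the 10 hours)
print(d.total_seconds())  # 1764000.0

# arithmetic and comparison work as expected
print(d > datetime.timedelta(days=20))   # True
print(d + datetime.timedelta(hours=14))  # 21 days, 0:00:00
```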
Qt - QFileSystemModel How to Get Files in Folder (Noob) I have the following code to list the files in the listView: ``` fileModel = new QFileSystemModel(this); ui->listView->setModel(fileModel); ui->listView->setRootIndex(fileModel->setRootPath(filePath)); ``` I would like to get a list/map of the files in a path. How can this be done?
The following snippet will do what you want: ``` QList<QString> path_list; QModelIndex parentIndex = fileModel->index(filePath); int numRows = fileModel->rowCount(parentIndex); for (int row = 0; row < numRows; ++row) { QModelIndex childIndex = fileModel->index(row, 0, parentIndex); QString path = fileModel->data(childIndex).toString(); path_list.append(path); } ``` There is one thing that you should not forget. From the [documentation](http://qt-project.org/doc/qt-5/QFileSystemModel.html#caching-and-performance) : > > Unlike QDirModel(obsolete), QFileSystemModel uses a separate thread to > populate itself so it will not cause the main thread to hang as the > file system is being queried. Calls to rowCount() will return 0 until > the model populates a directory. > > > Hence, you have to wait until you receive the `directoryLoaded(const QString & path)` signal from QFileSystemModel after you initiate the model, and only then fill your list.
Django Rest Framework serializer losing data In my unittests, and in reality, the ModelSerializer class I've created just seems to discard a whole trove of data it is provided with: ``` class KeyboardSerializer(serializers.ModelSerializer): user = serializers.Field(source='user.username') mappings = KeyMapSerializer(many=True, source='*') class Meta: model = Keyboard fields = ('user', 'label', 'is_primary', 'created', 'last_modified', 'mappings') def pre_save(self, obj): obj.user = self.request.user TEST_KEYBOARD_MAP = { 'user': None, 'label': 'New', 'is_primary': True, 'created': '2013-10-22T12:15:05.118Z', 'last_modified': '2013-10-22T12:15:05.118Z', 'mappings': [ {'cellid': 1, 'key': 'q'}, {'cellid': 2, 'key': 'w'}, ] } class SerializerTests(TestCase): def setUp(self): self.data = TEST_KEYBOARD_MAP def test_create(self): serializer = KeyboardSerializer(data=self.data) print serializer.data ``` Output: ``` {'user': u'', 'label': u'', 'is_primary': False, 'created': None, 'last_modified': None, 'mappings': {'id': u'', 'cellid': None, 'key': u''}} ``` What is happening to all the information passed in to the serializer in data?
The `data` key is not populated until you call `is_valid()`. (This is a data-sanitation safety feature that stops you accessing input until you're sure it's safe.) Add the call to `is_valid()` and you should see your data. Since you're deserializing, though, you want to access the `object` attribute to get your `Keyboard` instance back. If you review [the DRF docs on deserializing objects](http://django-rest-framework.org/api-guide/serializers.html#deserializing-objects) they show exactly the example you need: > > > ``` > serializer = CommentSerializer(data=data) > serializer.is_valid() > # True > serializer.object > # <Comment object at 0x10633b2d0> > > ``` > > I hope that helps.
How to make a Javafx Image Crop App I am building an application for my colleagues to crop images. I use FXML with Scene Builder to build the GUI. The user clicks a button to choose an image from his computer. Then, the image is shown in the GUI. The user can zoom and move in the pane. Finally, he clicks a button to save the edited image to his computer. However, I don't really know which library I should use to build the app. This is my first time dealing with graphics. I have no idea how to read an image, crop the image and write the image. [Javafx Canvas](http://docs.oracle.com/javafx/2/api/javafx/scene/canvas/Canvas.html) for the Pane? Any good resources other than the java doc to read to learn how I can do this?
Your question is too much to be answered on StackOverflow. I suggest you start with reading the [official Oracle documentation about JavaFX](http://docs.oracle.com/javase/8/javase-clienttechnologies.htm). However, since it's an interesting topic, here's the answer in code. There are several things you need to consider: - use an ImageView as container - use a ScrollPane in case the image is larger - provide a selection mechanism - crop the image itself - save the image to a file, providing a file chooser dialog This is pretty much it. In the example below use the left mouse button for selection, the right mouse button for the crop context menu which then takes a snapshot of the ImageView node at the seleciton bounds an then saves the image to a file. ``` import java.awt.Graphics2D; import java.awt.image.BufferedImage; import java.io.File; import java.io.IOException; import javafx.application.Application; import javafx.embed.swing.SwingFXUtils; import javafx.event.ActionEvent; import javafx.event.EventHandler; import javafx.geometry.Bounds; import javafx.geometry.Rectangle2D; import javafx.scene.Group; import javafx.scene.Scene; import javafx.scene.SnapshotParameters; import javafx.scene.control.ContextMenu; import javafx.scene.control.MenuItem; import javafx.scene.control.ScrollPane; import javafx.scene.image.Image; import javafx.scene.image.ImageView; import javafx.scene.image.WritableImage; import javafx.scene.input.MouseEvent; import javafx.scene.layout.BorderPane; import javafx.scene.paint.Color; import javafx.scene.shape.Rectangle; import javafx.scene.shape.StrokeLineCap; import javafx.stage.FileChooser; import javafx.stage.Stage; import javax.imageio.ImageIO; /** * Load image, provide rectangle for rubberband selection. Press right mouse button for "crop" context menu which then crops the image at the selection rectangle and saves it as jpg. 
*/ public class ImageCropWithRubberBand extends Application { RubberBandSelection rubberBandSelection; ImageView imageView; Stage primaryStage; public static void main(String[] args) { launch(args); } @Override public void start(Stage primaryStage) { this.primaryStage = primaryStage; primaryStage.setTitle("Image Crop"); BorderPane root = new BorderPane(); // container for image layers ScrollPane scrollPane = new ScrollPane(); // image layer: a group of images Group imageLayer = new Group(); // load the image // Image image = new Image( getClass().getResource( "cat.jpg").toExternalForm()); Image image = new Image("https://upload.wikimedia.org/wikipedia/commons/thumb/1/14/Gatto_europeo4.jpg/1024px-Gatto_europeo4.jpg"); // the container for the image as a javafx node imageView = new ImageView( image); // add image to layer imageLayer.getChildren().add( imageView); // use scrollpane for image view in case the image is large scrollPane.setContent(imageLayer); // put scrollpane in scene root.setCenter(scrollPane); // rubberband selection rubberBandSelection = new RubberBandSelection(imageLayer); // create context menu and menu items ContextMenu contextMenu = new ContextMenu(); MenuItem cropMenuItem = new MenuItem("Crop"); cropMenuItem.setOnAction(new EventHandler<ActionEvent>() { public void handle(ActionEvent e) { // get bounds for image crop Bounds selectionBounds = rubberBandSelection.getBounds(); // show bounds info System.out.println( "Selected area: " + selectionBounds); // crop the image crop( selectionBounds); } }); contextMenu.getItems().add( cropMenuItem); // set context menu on image layer imageLayer.setOnMousePressed(new EventHandler<MouseEvent>() { @Override public void handle(MouseEvent event) { if (event.isSecondaryButtonDown()) { contextMenu.show(imageLayer, event.getScreenX(), event.getScreenY()); } } }); primaryStage.setScene(new Scene(root, 1024, 768)); primaryStage.show(); } private void crop( Bounds bounds) { FileChooser fileChooser = new 
FileChooser(); fileChooser.setTitle("Save Image"); File file = fileChooser.showSaveDialog( primaryStage); if (file == null) return; int width = (int) bounds.getWidth(); int height = (int) bounds.getHeight(); SnapshotParameters parameters = new SnapshotParameters(); parameters.setFill(Color.TRANSPARENT); parameters.setViewport(new Rectangle2D( bounds.getMinX(), bounds.getMinY(), width, height)); WritableImage wi = new WritableImage( width, height); imageView.snapshot(parameters, wi); // save image // !!! has bug because of transparency (use approach below) !!! // -------------------------------- // try { // ImageIO.write(SwingFXUtils.fromFXImage( wi, null), "jpg", file); // } catch (IOException e) { // e.printStackTrace(); // } // save image (without alpha) // -------------------------------- BufferedImage bufImageARGB = SwingFXUtils.fromFXImage(wi, null); BufferedImage bufImageRGB = new BufferedImage(bufImageARGB.getWidth(), bufImageARGB.getHeight(), BufferedImage.OPAQUE); Graphics2D graphics = bufImageRGB.createGraphics(); graphics.drawImage(bufImageARGB, 0, 0, null); try { ImageIO.write(bufImageRGB, "jpg", file); System.out.println( "Image saved to " + file.getAbsolutePath()); } catch (IOException e) { e.printStackTrace(); } graphics.dispose(); } /** * Drag rectangle with mouse cursor in order to get selection bounds */ public static class RubberBandSelection { final DragContext dragContext = new DragContext(); Rectangle rect = new Rectangle(); Group group; public Bounds getBounds() { return rect.getBoundsInParent(); } public RubberBandSelection( Group group) { this.group = group; rect = new Rectangle( 0,0,0,0); rect.setStroke(Color.BLUE); rect.setStrokeWidth(1); rect.setStrokeLineCap(StrokeLineCap.ROUND); rect.setFill(Color.LIGHTBLUE.deriveColor(0, 1.2, 1, 0.6)); group.addEventHandler(MouseEvent.MOUSE_PRESSED, onMousePressedEventHandler); group.addEventHandler(MouseEvent.MOUSE_DRAGGED, onMouseDraggedEventHandler); group.addEventHandler(MouseEvent.MOUSE_RELEASED, 
onMouseReleasedEventHandler); } EventHandler<MouseEvent> onMousePressedEventHandler = new EventHandler<MouseEvent>() { @Override public void handle(MouseEvent event) { if( event.isSecondaryButtonDown()) return; // remove old rect rect.setX(0); rect.setY(0); rect.setWidth(0); rect.setHeight(0); group.getChildren().remove( rect); // prepare new drag operation dragContext.mouseAnchorX = event.getX(); dragContext.mouseAnchorY = event.getY(); rect.setX(dragContext.mouseAnchorX); rect.setY(dragContext.mouseAnchorY); rect.setWidth(0); rect.setHeight(0); group.getChildren().add( rect); } }; EventHandler<MouseEvent> onMouseDraggedEventHandler = new EventHandler<MouseEvent>() { @Override public void handle(MouseEvent event) { if( event.isSecondaryButtonDown()) return; double offsetX = event.getX() - dragContext.mouseAnchorX; double offsetY = event.getY() - dragContext.mouseAnchorY; if( offsetX > 0) rect.setWidth( offsetX); else { rect.setX(event.getX()); rect.setWidth(dragContext.mouseAnchorX - rect.getX()); } if( offsetY > 0) { rect.setHeight( offsetY); } else { rect.setY(event.getY()); rect.setHeight(dragContext.mouseAnchorY - rect.getY()); } } }; EventHandler<MouseEvent> onMouseReleasedEventHandler = new EventHandler<MouseEvent>() { @Override public void handle(MouseEvent event) { if( event.isSecondaryButtonDown()) return; // remove rectangle // note: we want to keep the rubberband selection for the cropping => code is just commented out /* rect.setX(0); rect.setY(0); rect.setWidth(0); rect.setHeight(0); group.getChildren().remove( rect); */ } }; private static final class DragContext { public double mouseAnchorX; public double mouseAnchorY; } } } ``` Screenshot: ![enter image description here](https://i.stack.imgur.com/rhI9a.jpg) The cropped image: ![enter image description here](https://i.stack.imgur.com/VifTg.jpg)
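If the goal is only the cropping step itself (no rubber band, no JavaFX snapshot), plain AWT that ships with the JDK can do it via `BufferedImage.getSubimage`. A minimal sketch, with image sizes and the marked pixel chosen arbitrarily for the demo:

```java
import java.awt.image.BufferedImage;

public class CropDemo {

    // Crop a rectangular region out of an image (pure AWT, no JavaFX needed).
    public static BufferedImage crop(BufferedImage src, int x, int y, int w, int h) {
        return src.getSubimage(x, y, w, h);
    }

    public static void main(String[] args) {
        BufferedImage img = new BufferedImage(100, 80, BufferedImage.TYPE_INT_RGB);
        img.setRGB(30, 20, 0xFF0000); // mark one pixel inside the crop region

        BufferedImage cropped = crop(img, 25, 15, 40, 30);

        // source pixel (30, 20) maps to (5, 5) in the cropped image
        System.out.println(cropped.getWidth() + "x" + cropped.getHeight()); // 40x30
        System.out.println(Integer.toHexString(cropped.getRGB(5, 5) & 0xFFFFFF)); // ff0000
    }
}
```

One caveat: `getSubimage` returns a view that shares the underlying data array with the original, so mutations propagate between the two. If you need an independent copy, draw the subimage into a fresh `BufferedImage`, which is effectively what the snapshot-based approach above produces.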
MongoDB Count total number of true and false values for documents matching a query Using the following data, how would I count the total number of yes and no votes for a collection of records with pollId "hr4946-113" using MongoDBs support for aggregate queries. ``` { "_id" : ObjectId("54abcdbeba070410146d6073"), "userId" : "1234", "pollId" : "hr4946-113", "vote" : true, "__v" : 0 } { "_id" : ObjectId("54afe32fec4444481b985711"), "userId" : "12345", "pollId" : "hr2840-113", "vote" : true, "__v" : 0 } { "_id" : ObjectId("54b66de68dde7a0c19be987b"), "userId" : "123456", "pollId" : "hr4946-113", "vote" : false } ``` This would be the expected Result. ``` { "yesCount": 1, "noCount":1 } ```
The [aggregation framework](http://docs.mongodb.org/manual/reference/method/db.collection.aggregate/) is your answer:

```
db.collection.aggregate([
    { "$match": { "pollId": "hr4946-113" } },
    { "$group": {
        "_id": "$vote",
        "count": { "$sum": 1 }
    }}
])
```

Basically the [**`$group`**](http://docs.mongodb.org/manual/reference/operator/aggregation/group/) operator gathers all the data by "key", and "grouping operators" like [**`$sum`**](http://docs.mongodb.org/manual/reference/operator/aggregation/sum/) work on the values. In this case, adding `1` for each grouped document produces a count. Gives you:

```
{ "_id": true, "count": 1 },
{ "_id": false, "count": 1 }
```

You can be silly and expand that into a single document response using the [**$cond**](http://docs.mongodb.org/manual/reference/operator/aggregation/cond/) operator to conditionally evaluate the field values:

```
db.collection.aggregate([
    { "$match": { "pollId": "hr4946-113" } },
    { "$group": {
        "_id": "$vote",
        "count": { "$sum": 1 }
    }},
    { "$group": {
        "_id": null,
        "yesCount": { "$sum": { "$cond": [ "$_id", 1, 0 ] } },
        "noCount": { "$sum": { "$cond": [ "$_id", 0, 1 ] } }
    }},
    { "$project": { "_id": 0 } }
])
```

And the result:

```
{ "yesCount": 1, "noCount": 1 }
```
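The pipeline's logic is easy to sanity-check without a MongoDB instance. This is a plain-Python sketch of the same three steps (match, group by vote, conditional sums) run against the sample documents from the question:

```python
# Plain-Python sketch of the aggregation pipeline above, run against the
# sample documents from the question (no MongoDB required).
docs = [
    {"pollId": "hr4946-113", "vote": True},
    {"pollId": "hr2840-113", "vote": True},
    {"pollId": "hr4946-113", "vote": False},
]

# $match stage
matched = [d for d in docs if d["pollId"] == "hr4946-113"]

# $group by vote with { "$sum": 1 }
counts = {}
for d in matched:
    counts[d["vote"]] = counts.get(d["vote"], 0) + 1

# second $group collapsing into a single document, as $cond does
result = {
    "yesCount": counts.get(True, 0),
    "noCount": counts.get(False, 0),
}
print(result)  # {'yesCount': 1, 'noCount': 1}
```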
Arc progress bar I need to make a **circular progress bar** (image below); loading starts from the bottom-left side and goes up to the bottom-right side. Light-blue `(#E8F6FD)` is the empty state and strong-blue `(#1CADEB)` is the progress.

[![Arc progress bar](https://i.stack.imgur.com/VZ9Lg.png)](https://i.stack.imgur.com/VZ9Lg.png)

I have tried some approaches, but cannot find the best one for this implementation:

1. First of all I tried using a `div` element with `border-radius: 50%;` and `border-bottom-color: transparent;`, [jsfiddle](https://jsfiddle.net/pv7tyt9b/). In this approach I got a shape exactly like in the image, but the problem is: how can I fill the border with progress?
2. The second try was using canvas, and this approach is nice except that the loader appears on the screen only after all the `JS` has loaded; I would like to prevent this behavior and show the loader immediately when the page is loaded, [jsfiddle](https://jsfiddle.net/rh4pch06/)

So my question is: are there any other approaches that can achieve **an arc loader**, or any suggestions for the problems listed?
You can use an [inline SVG](https://developer.mozilla.org/en-US/docs/SVG_In_HTML_Introduction) with [arc commands](https://developer.mozilla.org/en-US/docs/Web/SVG/Tutorial/Paths#Arcs) to make the arc shape. The animation can be handled with CSS by transitioning the [stroke-dasharray](https://developer.mozilla.org/en-US/docs/Web/SVG/Attribute/stroke-dasharray) property.

Here is an example: hover the arc to launch the loading animation:

```
svg {
  display: block;
  width: 40%;
  margin: 0 auto;
}
.loader {
  stroke-dasharray: .5 18 19;
  transition: stroke-dasharray 2s linear;
}
svg:hover .loader {
  stroke-dasharray: 19 0 19;
}
```

```
<svg viewbox="0 0.5 10 8">
  <path d="M2 8 A 4 4 0 1 1 8 8" fill="none" stroke-width="0.78" stroke="#E8F6FD" />
  <path class="loader" d="M2 8 A 4 4 0 1 1 8 8" fill="none" stroke-width="0.8" stroke="#00ACEE" />
</svg>
```

*Note that you will need to add vendor prefixes to the transition property for browser support (more info on [canIuse](http://caniuse.com/#feat=css-transitions)).*
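As an aside on where the dasharray numbers come from: the values around 18-19 are simply the arc's length in user units (slightly padded so the dash fully covers the path). A quick geometry sketch of the path `M2 8 A 4 4 0 1 1 8 8` (radius 4, endpoints (2,8) and (8,8), large-arc flag set) confirms it:

```python
import math

# Geometry of the SVG path "M2 8 A 4 4 0 1 1 8 8":
# a radius-4 arc between (2, 8) and (8, 8) taking the large sweep.
r = 4.0
chord = 8.0 - 2.0                        # distance between the endpoints
half_angle = math.asin((chord / 2) / r)  # half the small arc's central angle
small_arc_angle = 2 * half_angle
large_arc_angle = 2 * math.pi - small_arc_angle  # large-arc flag is 1

arc_length = r * large_arc_angle
print(round(arc_length, 2))  # 18.35 - why the dash values hover around 18-19
```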
how to find percentage of total in groupby in pandas I have the following dataframe in pandas

```
Date         tank  hose   quantity   count  set   flow
01-01-2018    1     1      20         100   211   12.32
01-01-2018    1     2      20         200   111   22.32
01-01-2018    1     3      20         200   123   42.32
02-01-2018    1     1      10         100   211   12.32
02-01-2018    1     2      10         200   111   22.32
02-01-2018    1     3      10         200   123   42.32
```

I want to calculate the percentage of `quantity` and `count`, grouping by `Date` and `tank`. My desired dataframe:

```
Date         tank  hose   quantity  count  set   flow    perc_quant   perc_count
01-01-2018    1     1      20        100   211   12.32    33.33        20
01-01-2018    1     2      20        200   111   22.32    33.33        40
01-01-2018    1     3      20        200   123   42.32    33.33        40
02-01-2018    1     1      10        100   211   12.32    25           20
02-01-2018    1     2      20        200   111   22.32    50           40
02-01-2018    1     3      10        200   123   42.32    25           40
```

I am doing the following to achieve this

```
test = df.groupby(['Date','tank']).apply(lambda x: 100 * x / float(x.sum()))
```
Use [`GroupBy.transform`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.transform.html) with a lambda function, [`add_prefix`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.add_prefix.html) and [`join`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.join.html) to the original:

```
f = lambda x: 100 * x / float(x.sum())
df = df.join(df.groupby(['Date','tank'])['quantity','count'].transform(f).add_prefix('perc_'))
```

Or specify new column names:

```
df[['perc_quantity','perc_count']] = (df.groupby(['Date','tank'])['quantity','count']
                                        .transform(f))
```

---

```
print (df)
         Date  tank  hose  quantity  count  set   flow  perc_quantity  \
0  01-01-2018     1     1        20    100  211  12.32      33.333333   
1  01-01-2018     1     2        20    200  111  22.32      33.333333   
2  01-01-2018     1     3        20    200  123  42.32      33.333333   
3  02-01-2018     1     1        10    100  211  12.32      33.333333   
4  02-01-2018     1     2        10    200  111  22.32      33.333333   
5  02-01-2018     1     3        10    200  123  42.32      33.333333   

   perc_count  
0        20.0  
1        40.0  
2        40.0  
3        20.0  
4        40.0  
5        40.0  
```
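To sanity-check the expected percentages without pandas at hand, the same percentage-of-group arithmetic can be sketched in plain Python against the question's input data:

```python
# Plain-Python sketch of the groupby-transform: each row's quantity/count
# as a percentage of its (Date, tank) group total, using the input data.
rows = [
    ("01-01-2018", 1, 20, 100),
    ("01-01-2018", 1, 20, 200),
    ("01-01-2018", 1, 20, 200),
    ("02-01-2018", 1, 10, 100),
    ("02-01-2018", 1, 10, 200),
    ("02-01-2018", 1, 10, 200),
]

# first pass: per-group totals (what x.sum() computes per group)
totals = {}
for date, tank, qty, cnt in rows:
    q, c = totals.get((date, tank), (0, 0))
    totals[(date, tank)] = (q + qty, c + cnt)

# second pass: the transform - 100 * value / group total, row-aligned
perc = [
    (100 * qty / totals[(date, tank)][0], 100 * cnt / totals[(date, tank)][1])
    for date, tank, qty, cnt in rows
]
print(perc[0])  # (33.33..., 20.0)
```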
Marionette CollectionView: how to check whether item is added because of "reset" or "add"? I am using Marionette's `CollectionView` to render a list of items with `ItemViews`. Whenever a new item is added, I want to run a short fade-in animation. But not when the collection is rendered initially (or the collection is reset). Before using Marionette, I handled the `reset` and `add` events slightly differently, but I can not figure out how to do this here. I looked at the source code and it seems that `addItemView` is responsible for adding the child view and both `addChildView` (called when `add` is triggered on the collection) and `render` (for `reset` events) call this method. Maybe I am missing something obvious.
This is one way of doing it: Include these functions in your `CompositeView` declaration:

```
onBeforeRender: function(){
    this.onBeforeItemAdded = function(){};
},
onRender: function(){
    this.onBeforeItemAdded = myAnimation;
}
```

This is similar to the solution I present in my book on Marionette (<https://leanpub.com/marionette-gentle-introduction/>)

How it works: Marionette triggers the "before:render" event before it renders the entire collection, so you can set the `onBeforeItemAdded` function to do nothing. Once the collection has been rendered, set that function to animate the new item view.

Since each time the collection view adds an item view it also triggers the "before:item:added" event, you can define an `onBeforeItemAdded` function that will automatically be called when that event is triggered. This matching happens thanks to triggerMethod.

This solution should solve your problem, without your needing to add flags on the model.
How to handle socket connection's events when app is in background? I want to use the following function even when the app is in the background:

```
- (void)stream:(NSStream *)theStream handleEvent:(NSStreamEvent)streamEvent
{
    switch (streamEvent) {
        case NSStreamEventHasBytesAvailable: {
            NSLog(@"Event:NSStreamEventHasBytesAvailable");
            if (theStream == _inputStream) {
                NSLog(@"NSStreamEventHasBytesAvailable: on Input Stream");
                uint8_t buffer[1024];
                int len;
                while ([_inputStream hasBytesAvailable]) {
                    len = [_inputStream read:buffer maxLength:sizeof(buffer)];
                    if (len > 0) {
                        NSString *output = [[NSString alloc] initWithBytes:buffer length:len encoding:NSASCIIStringEncoding];
                        if (nil != output) {
                            NSLog(@"server said: %@", output);
                            // to get a local notification I am calling the method below.
                            [self scheduleNotification];
                        }
                    }
                }
            }
            break;
        }
        default:
            break;
    }
}
```

The above code works fine in the foreground. I have made all the changes given in the Apple documentation to run the app in the background mode voip. What should I write in the AppDelegate method?

```
- (void)applicationDidEnterBackground:(UIApplication *)application
{

}
```

How do I get stream:handleEvent: called in the background?
I was dealing with a similar problem a while ago. A few important things to keep in mind:

- background "voip" functionality only works on a device - don't use the simulator to test it
- you will probably get rejected (tested) if your app registers as a voip app but isn't really a voip app

So if this is not a voip app you might actually want to use remote notifications to alert the user directly rather than showing a local notification. I guess this is the only way for your app to pass App Store validation.

Anyway, two links here on SO that you might find helpful:

[How can an iOS app keep a TCP connection alive indefinitely while in the background?](https://stackoverflow.com/questions/5840365/how-can-an-ios-app-keep-a-tcp-connection-alive-indefinitely-while-in-the-backgro)

I ended up using voip (as you do) and playing a silent audio loop as suggested there - it worked. Not sure if the silent audio loop is still necessary.

[What happens to TCP and UDP (with multicast) connection when an iOS Application did enter background](https://stackoverflow.com/questions/9047147/what-happens-to-tcp-and-udp-with-multicast-connection-when-an-ios-application)

Make sure you read [Tips for Developing a VoIP App](http://developer.apple.com/library/ios/#documentation/iphone/conceptual/iphoneosprogrammingguide/AdvancedAppTricks/AdvancedAppTricks.html#//apple_ref/doc/uid/TP40007072-CH7-SW18) and [Technical Note TN2277: Networking and Multitasking](http://developer.apple.com/library/ios/#technotes/tn2277/_index.html)
Cross-domain will not work with a SignalR PersistentConnection **NOTE**: Someone else originally asked this question but deleted it before I could post my answer. Since this question covers many issues that developers face when trying to make SignalR work cross-domain, I decided to replicate it. Plus, I had already finished writing the answer! I'm running a SignalR 1.0.1 server in an ASP.NET MVC .NET Framework 4 project. I have another ASP.NET application on a different domain (different localhost port) trying to connect via the JavaScript client. I get this when my application tries to connect: ``` XMLHttpRequest cannot load http://localhost:31865/api/negotiate?_=1363105027533. Origin http://localhost:64296 is not allowed by Access-Control-Allow-Origin. ``` I've followed all steps to enable cross-domain support with SignalR -- what am I missing? - `jQuery.support.cors = true;` - `$.connection('http://localhost:31865/api', '', false, { jsonp: true, xdomain: true });` - `RouteTable.Routes.MapHubs(new HubConfiguration { EnableCrossDomain = true });` - `RouteTable.Routes.MapConnection<ApiConnection>("/api", "api");` I also added the following to Web.config in the API project: ``` <system.webServer> <httpProtocol> <customHeaders> <add name="Access-Control-Allow-Origin" value="*" /> </customHeaders> </httpProtocol> </system.webServer> ``` I'm using a PersistentConnection for my SignalR server, not hubs. Any ideas?
`MapHubs` will configure an endpoint at /signalr for communicating with all your `Hub` classes. Since you are not using Hubs, the call to `MapHubs` is unnecessary. The call to `MapHubs` also does not have any effect on the configuration of your /api connection. Your call to MapConnection should be changed to read as follows: ``` RouteTable.Routes.MapConnection<ApiConnection>("api", "api", new ConnectionConfiguration { EnableCrossDomain = true }); ``` **NOTE:** The second argument to `MapConnection` is the URL. The first argument is the route name. The `/` is unnecessary, but doesn't really hurt in either case. - Setting `jQuery.support.cors = true;` should **ONLY** be done "To enable cross-domain requests in environments that do not support cors yet but do allow cross-domain XHR requests (windows gadget, etc)" [`[1]`](http://api.jquery.com/jQuery.support/). This does not pertain to any versions of IE or any other browser that I know of. If the browser does not support CORS, SignalR will already automatically fall back to JSONP **unless** you set `jQuery.support.cors` to true. If you just set this to true blindly, SignalR will assume that the environment does support cross-domain XHR requests and **not** automatically fall back to JSONP rendering SignalR unable to establish cross-domain connections while running in browsers that truly don't support CORS. - `$.connection('http://localhost:31865/api', '', false, { jsonp: true, xdomain: true });` is incorrect. You should only need ``` var connection = $.connection('http://localhost:31865/api'); ``` `xdomain` is no longer an option for the SignalR JS client, and if you *really* want to specify `jsonp`, you should do it when you `start` the connection like so: ``` connection.start({ jsonp: true}).done(function () { /* ... */ }); ``` I should reiterate that SignalR will *automatically* fall back to JSONP if the environment does not support CORS, so you **should not** specify this option yourself. 
JSONP does not require an `Access-Control-Allow-Origin` header, but it does force SignalR to use its most inefficient transport: long polling.

- You do not need to set up `customHeaders` in your Web.config. SignalR will set the `Access-Control-Allow-Origin` header in SignalR responses automatically when you set `EnableCrossDomain` to true in your `ConnectionConfiguration`.

Reference <https://github.com/SignalR/SignalR/wiki/QuickStart-Persistent-Connections> for more advice about using `PersistentConnections`.
What is the difference between JDBC and JDBI? I want to know about the differences between [JDBC](http://www.oracle.com/technetwork/java/javase/jdbc/index.html) and [JDBI](http://www.jdbi.org/) in Java. In particular, which one is generally better and why?
(I am the primary author of jDBI)

[jDBI](http://jdbi.org) is a convenience library built on top of [JDBC](https://en.wikipedia.org/wiki/Java_Database_Connectivity). JDBC works very well but generally seems to optimize for the database vendors (driver writers) over the users. jDBI attempts to expose the same functionality, but in an API optimized for users.

It is much lower level than things like [Hibernate](https://en.wikipedia.org/wiki/Hibernate_(framework)) or [JPA](https://en.wikipedia.org/wiki/Java_Persistence_API). The closest similar library is probably [MyBatis](https://en.wikipedia.org/wiki/MyBatis) (forked successor to [iBATIS](https://en.wikipedia.org/wiki/IBATIS)).

jDBI supports two API styles. The older fluent style looks like:

```
List<Something> r = h.createQuery("select * from something where name = :name and id = :id")
                    .bind(0, "eric")
                    .bind("id", 1)
                    .map(Something.class)
                    .list();
```

A newer SQL Object API does much more reflective type stuff and really does start to abstract a bunch of JDBC stuff:

```
interface TheBasics
{
    @SqlUpdate("insert into something (id, name) values (:id, :name)")
    int insert(@BindBean Something something);

    @SqlQuery("select id, name from something where id = :id")
    Something findById(@Bind("id") long id);
}

@Test
public void useTheBasics() throws Exception
{
    TheBasics dao = dbi.onDemand(TheBasics.class);

    dao.insert(new Something(7, "Martin"));

    Something martin = dao.findById(7);
}
```

The library has good reference docs (javadoc) and some reasonable tutorial style documentation at [http://jdbi.org/](http://jdbi.org). It has been around since 2004, and is used by a relatively small number of folks (a few dozen people I know of personally, and maybe a dozen companies) but it works very well for them. Most of the folks who work on it are A+ folks, and are primarily concerned with building a tool that works well for them -- that it is open source is largely a side effect.
PHPCS Limit to Filetype I'm trying to create a custom ruleset for a particular framework's "guidelines". However, I want to be able to limit sniffs to only be relevant to a .php or .phtml filetype. Is it possible to have sniffs apply only to (or ignore) a defined filetype, or would I need to do this check within the sniff's process() method?
You can specify exclude patterns using regular expressions inside a ruleset. By using a negative lookahead (or behind if you prefer) you can limit a specific sniff or error message to files that match the pattern. This example runs the DoubleQuoteUsage sniff on .phtml files only:

```
<rule ref="Squiz.Strings.DoubleQuoteUsage">
    <exclude-pattern>*\.(?!phtml$)</exclude-pattern>
</rule>
```

But the current PHPCS releases use `|` as the delimiter for regular expressions, and escaping that character doesn't seem to work in PHP. I've just committed a change for this in the phpcs-fixer branch (the 2.x line of releases), allowing you to do this:

```
<rule ref="Squiz.Strings.DoubleQuoteUsage">
    <exclude-pattern>*\.(?!(php$|phtml$))</exclude-pattern>
</rule>
```

If you want to give that a go, you can clone the git repo, check out the phpcs-fixer branch and run the code directly. Or you can require `2.0.*@dev` via composer. If not, you will need to do the filename check yourself in your sniff's process() method.
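The regex core of that exclude pattern can be tried out in isolation. This sketch uses Python's `re` purely to illustrate the negative lookahead (PHPCS's leading `*` is its own wildcard shorthand and is dropped here); a name is "excluded" when the pattern matches it:

```python
import re

# Core of the PHPCS exclude-pattern above: match (i.e. exclude) any file
# name containing a dot that is NOT the start of a .php or .phtml extension.
exclude = re.compile(r"\.(?!(php$|phtml$))")

for name in ["page.phtml", "index.php", "app.js", "style.css"]:
    print(name, "excluded" if exclude.search(name) else "sniffed")
# page.phtml sniffed
# index.php sniffed
# app.js excluded
# style.css excluded
```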
Which kind of technology is used in Google Docs sharing? I need to understand how I can do something. In my project I need to create something similar to the technology in Google Docs.

**When I share a document with other people, they and I can edit it at the same time, and I see their changes as they make them.**

I don't know if this is called reverse ajax or some other type of technology. Can someone explain to me which kind of development this is?

**--EDIT--**

On trello.com I found the same idea. When someone changes an issue or tab on my dashboard, I receive this information.
It's called Operational Transformation (OT)

This article could be useful: <http://en.wikipedia.org/wiki/Operational_transformation>

Taken from the same article:

> 
> The basic idea of OT can be illustrated by using a simple text editing
> scenario as follows. Given a text document with a string "abc"
> replicated at two collaborating sites; and two concurrent operations:
> 
> 
> 
> ```
> O1 = Insert[0, "x"] (to insert character "x" at position "0")
> O2 = Delete[2, "c"] (to delete the character "c" at position "2")
> 
> ```
> 
> generated by two users at collaborating sites 1 and 2, respectively.
> Suppose the two operations are executed in the order of O1 and O2 (at
> site 1). After executing O1, the document becomes "xabc". To execute
> O2 after O1, O2 must be transformed against O1 to become: O2' =
> Delete[3, "c"], whose positional parameter is incremented by one due
> to the insertion of one character "x" by O1. Executing O2' on "xabc"
> deletes the correct character "c" and the document becomes "xab".
> However, if O2 is executed without transformation, it incorrectly
> deletes character "b" rather than "c". **The basic idea of OT is to
> transform (or adjust) the parameters of an editing operation according
> to the effects of previously executed concurrent operations so that
> the transformed operation can achieve the correct effect and maintain
> document consistency**.
> 
> 

And a very cool link implementing this: <http://sharejs.org/>
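The single transformation from the quoted example - shifting a Delete's position to account for a prior Insert - can be sketched in a few lines. Only this one operation pairing is handled here; a real OT implementation covers every pairing (insert/insert, delete/delete, and so on):

```python
# Minimal sketch of the quoted example: transform a Delete against a
# previously applied Insert by shifting its position.
def transform_delete_against_insert(delete_pos, insert_pos, insert_len=1):
    # The delete's target shifts right if the insert landed at or before it.
    if insert_pos <= delete_pos:
        return delete_pos + insert_len
    return delete_pos

doc = "abc"
doc = doc[:0] + "x" + doc[0:]                # O1 = Insert[0, "x"] -> "xabc"
pos = transform_delete_against_insert(2, 0)  # O2' = Delete[3, "c"]
doc = doc[:pos] + doc[pos + 1:]              # -> "xab"
print(doc)  # xab
```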
merge commit does not have change id? I am hard resetting one branch to another. After merging it with the `ours` strategy, I try to push through gerrit. I am getting an error like "missing change id".

By default, the merge commit does not have a change id. I have tried downloading the commit-msg hook for automatic change ids, but it does not help on Windows. Any idea how to add a change id to a merge commit?
Situation: you have a commit that needs to be pushed to gerrit, but the commit has no change-id and gerrit won't accept it.

Solution:

1. Download the commit-msg hook from gerrit

```
$ scp -p -P 29418 [email protected]:hooks/commit-msg tims-project/.git/hooks/
```

Do replace the port number, server address and project name with your own. Also, change the path of `tims-project/.git/hooks/` depending on your current location in the terminal. The hook has to go into `<project_dir>/.git/hooks/`.

2. Go to your repo folder and amend the merge commit

```
$ cd tims-project/

$ git commit --amend
```

In the editor that pops up when you amend, the current commit and its commit message are shown. **Do nothing here**. Simply exit & save the editor.

Because of the commit-msg hook, any commit that is made (created or amended) automatically gets assigned a change-id.

3. push to gerrit

```
$ git push origin HEAD:refs/for/master
```

Replace branch and remote with your own if needed.
How to return ActionResult (file) from class that does not derive from Controller? I have two download-file methods, so I wanted to extract the part which actually hits the disk into a helper/service class, but I'm struggling to return that file to the controller and then to the user.

How can I return a file, from a class that does not derive from `Controller`, with that easy-to-use method `Mvc.ControllerBase.File`?

```
public (bool Success, string ErrorMessage, IActionResult File) TryDownloadFile(string FilePath, string FriendlyName)
{
    try
    {
        var bytes = File.ReadAllBytes(FilePath);
        if (FilePath.EndsWith(".pdf"))
        {
            return (true, "", new FileContentResult(bytes, "application/pdf"));
        }
        else
        {
            return (true, "", ControllerBase.File(bytes, "application/octet-stream", FriendlyName));
        }
    }
    catch (Exception ex)
    {
        return (false, ex.Message, null);
    }
}
```

The error is

> 
> An object reference is required for the non-static field, method, or property 'ControllerBase.File(Stream, string, string)'
> 
> 

for this line:

```
return (true, "", ControllerBase.File(bytes, "application/octet-stream", FriendlyName));
```

Is there any possibility to achieve that?
[`ControllerBase.File`](https://learn.microsoft.com/en-us/dotnet/api/microsoft.aspnetcore.mvc.controllerbase.file?view=aspnetcore-2.2#Microsoft_AspNetCore_Mvc_ControllerBase_File_System_Byte___System_String_System_String_) is just a convenience method that creates an instance of `FileContentResult` for you. Here's the [actual code](https://github.com/aspnet/AspNetCore/blob/release/2.2/src/Mvc/src/Microsoft.AspNetCore.Mvc.Core/ControllerBase.cs#L1122) that gets used: ``` new FileContentResult(fileContents, contentType) { FileDownloadName = fileDownloadName }; ``` You can simply take that code and use it in your class, like this: ``` return ( true, "", new FileContentResult(bytes, "application/octet-stream") { FileDownloadName = FriendlyName }); ```
How can multiple clients of an openvpn server find each other? I am bringing up an openvpn server that will support multiple clients on a private subnet. So on the private subnet the connecting clients will get IP addresses such as 10.8.0.10, 10.8.0.11, etc.

One of the facilities I need is for the clients to be able to find each other. Is there any easy and generally accepted way for a client to see the list of IP addresses the server has assigned to all clients? I don't need DNS names or anything like that.
In the OpenVPN server configuration file, a prerequisite is the following directive: ``` # Uncomment this directive to allow different # clients to be able to "see" each other. # By default, clients will only see the server. # To force clients to only see the server, you # will also need to appropriately firewall the # server's TUN/TAP interface. client-to-client ``` To facilitate the clients finding each other easily I would suggest dynamic DNS as the (just about) always present enterprise solution. To present a list of active clients you could perhaps either: - find a way of distributing or making available the openvpn-status.log to the clients? - distribute ping scripts or similar to clients, perhaps doing a reverse dns lookup for every live host? - have the clients register/deregister themselves in a custom db or file upon connection and have a scavenging mechanism of some sort. This alternative seems like totally reinventing the wheel, but no doubt it would be a fun way of spending an hour which adds nothing to the world of IT at large.
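For the first bullet (making `openvpn-status.log` available to clients), extracting the connected-client list from it is a short script. This sketch assumes the version-1 status-file layout (a `Common Name,Real Address,...` header followed by one line per client, terminated by `ROUTING TABLE`); check your server's `status-version` setting before relying on it:

```python
# Sketch of pulling the connected-client list out of openvpn-status.log.
# Assumes the version-1 status file layout; adjust for status-version 2/3.
sample = """OpenVPN CLIENT LIST
Updated,Thu Jun 18 08:12:15 2015
Common Name,Real Address,Bytes Received,Bytes Sent,Connected Since
client1,203.0.113.10:51023,3871,3924,Thu Jun 18 08:07:56 2015
client2,198.51.100.7:44012,2710,2491,Thu Jun 18 08:09:33 2015
ROUTING TABLE
"""

def connected_clients(status_text):
    clients, in_list = [], False
    for line in status_text.splitlines():
        if line.startswith("Common Name,"):
            in_list = True            # client entries start after this header
            continue
        if line.startswith("ROUTING TABLE"):
            break                     # end of the client list section
        if in_list and line:
            name, real_addr = line.split(",")[:2]
            clients.append((name, real_addr))
    return clients

print(connected_clients(sample))
# [('client1', '203.0.113.10:51023'), ('client2', '198.51.100.7:44012')]
```

The client names and addresses above are made-up illustration data; the virtual IPs live in the `ROUTING TABLE` section, which the same loop structure can be extended to parse.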
Absolute value of a std::chrono::duration object Given that `std::chrono::duration` can represent the signed difference between two times, it would seem a very common case to need the absolute value of such a duration. For example, the following code outputs `diff: -5` as expected:

```
using namespace std;
using namespace std::chrono;

auto now = system_clock::now();
auto then = now - seconds(5);
auto diff = then - now;

cout << "diff: " << duration_cast<seconds>(diff).count() << endl;
```

It would be nice to be able to do something like:

```
auto diff = abs(then - now);
```

However, I can't see any specialization of `std::abs` for the `chrono` templates in the standard, nor can I see any appropriate member function in `std::chrono::duration`.

How should I be transforming a `std::chrono::duration` into its absolute value?
I would do it like this:

```
template <class Rep, class Period>
std::chrono::duration<Rep, Period>
abs(std::chrono::duration<Rep, Period> d)
{
    Rep x = d.count();
    return std::chrono::duration<Rep, Period>(x >= 0 ? x : -x);
}
```

See my list of [`<chrono>` Utilities](http://home.roadrunner.com/~hinnant/duration_io/chrono_util.html) for other `<chrono>` utilities I wish were standard. Please feel free to use them, recommend them to your standards body representatives, or propose them yourself.

**Update**

I was off my game when I wrote the above. I'm not deleting it, as it serves as a good lesson for both myself and others on how *not* to write `chrono` utilities.

Things I don't like about it:

1. It needlessly drops type safety by directly manipulating the `Rep`.
2. It assumes that the literal 0 is implicitly convertible to, or at least comparable with `Rep`.
3. There is no reason for this not to be `constexpr`.
4. I'm not happy about the behavior for unsigned `Rep`. If one says:

   ```
   auto d = abs(t1 - t0);
   ```

   and `t1` and `t0` are based on unsigned durations, then this is likely a logic bug in the code. If `t1 < t0`, then you are likely to get an incorrect, very large duration. If that is what you really want, then you shouldn't be using `abs`, and instead just code the simpler:

```
auto d = t1 - t0;
```

To address these concerns I've rewritten `abs` for durations as:

```
template <class Rep, class Period,
          class = typename std::enable_if
          <
              std::chrono::duration<Rep, Period>::min() <
              std::chrono::duration<Rep, Period>::zero()
          >::type
         >
constexpr
inline
std::chrono::duration<Rep, Period>
abs(std::chrono::duration<Rep, Period> d)
{
    return d >= d.zero() ? d : -d;
}
```

1. `duration` has unary `-`, just use it.
2. `duration` has a customizable `zero` trait just so that one doesn't have to assume an intimate cooperation with `0` from `Rep`. Just use it.
3. All of the operations used are `constexpr`, so `abs` is marked `constexpr`.
4. 
The static functions `min` and `zero` are `constexpr`. Using these to determine if `Rep` is signed is more general than using a trait such as `!std::is_unsigned`. I.e. `Rep` might be a `BigNum`, or a C11 `timespec` (augmented with overloaded arithmetic operators). So the question "is signed" is answered with `min() < zero()`. And now this version of `abs` will not accept a `duration<unsigned, milli>` (for example).
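The `d >= zero ? d : -d` shape is easy to cross-check against a language whose duration type already ships an absolute value. Here is a Python sketch over `datetime.timedelta`, using the built-in `abs()` as the oracle:

```python
from datetime import timedelta

# Same shape as the C++ version: compare against the duration's zero value
# and rely on unary minus, instead of reaching into the representation.
def duration_abs(d):
    zero = timedelta(0)
    return d if d >= zero else -d

diff = timedelta(seconds=-5)
print(duration_abs(diff))  # 0:00:05
assert duration_abs(diff) == abs(diff)
```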
Syntactic differences in cp -r and how to overcome them Let's say we are in a blank directory. Then, the following commands: ``` mkdir dir1 cp -r dir1 dir2 ``` Yield two (blank) directories, `dir1` and `dir2`, where `dir2` has been created as a copy of `dir1`. However, if we do this: ``` mkdir dir1 mkdir dir2 cp -r dir1 dir2 ``` Then we instead find that `dir1` has now been put inside `dir2`. This means that **the exact same `cp` command behaves differently depending on whether the destination directory exists**. If it does, then the `cp` command is doing the same as this: ``` mkdir dir1 mkdir dir2 cp -r dir1 dir2/. ``` This seems extremely counter-intuitive to me. I would have expected that `cp -r dir1 dir2` (when `dir2` already exists) would remove the existing `dir2` (and any contents) and replace it with `dir1`, since this is the behavior when `cp` is used for two files. I understand that recursive copies are themselves a bit different because of how directories exist in Linux (and more broadly in Unix-like systems), but I'm looking for some more explanation on *why* this behavior was chosen. Bonus points if you can point me to a way to ensure `cp` behaves as I had expected (without having to, say, test for and remove the destination directory beforehand). I tried a few `cp` options without any luck. And I suppose I'll accept `rsync` solutions for the sake of others that happen upon this question who don't know that command. In case this behavior is not universal, I'm on CentOS, using bash.
The behaviour you're looking for is a [special case](http://pubs.opengroup.org/onlinepubs/9699919799/utilities/cp.html): > > > ``` > cp -R [-H|-L|-P] [-fip] source_file... target > > ``` > > [This] form is denoted by two or more operands where the -R option is specified. The cp utility shall copy each file in the file hierarchy rooted in each source\_file to a destination path named as follows: > > > - If target exists and names an existing directory, the name of the corresponding destination path for each file in the file hierarchy shall be the concatenation of target, a single `<slash>` character if target did not end in a `<slash>`, and the pathname of the file relative to the directory containing source\_file. > - If target does not exist and two operands are specified, the name of the corresponding destination path for source\_file shall be target; the name of the corresponding destination path for all other files in the file hierarchy shall be the concatenation of target, a `<slash>` character, and the pathname of the file relative to source\_file. > > > It shall be an error if target does not exist and more than two operands are specified ... > > > Therefore I'd say it's not possible to make `cp` do what you want. --- Since your expected behaviour is "`cp -r dir1 dir2` (when `dir2` already exists) would remove the existing `dir2` (and any contents) and replace it with `dir1`": ``` rm -rf dir2 && cp -r dir1 dir2 ``` You don't even need to check if `dir2` exists. --- The `rsync` solution would be adding a trailing `/` to the source so that it doesn't copy `dir1` itself into `dir2` but copies the content of `dir1` to `dir2` (it will still keep existing files in `dir2`): ``` $ tree dir* dir1 └── test.txt dir2 └── test2.txt 0 directories, 2 file $ rsync -a dir1/ dir2 $ tree dir* dir1 └── test.txt dir2 └── test.txt └── test2.txt 0 directories, 3 files $ rm -r dir2 $ rsync -a dir1/ dir2 $ tree dir* dir1 └── test.txt dir2 └── test.txt 0 directories, 2 files ```
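The `rm -rf dir2 && cp -r dir1 dir2` "replace, don't merge" semantics carry over to other tools in the same shape. Here is a sketch with Python's `shutil`, removing the destination first and then copying:

```python
import os
import shutil
import tempfile

# Sketch of the "replace, don't merge" semantics from the answer:
# the Python analogue of  rm -rf dir2 && cp -r dir1 dir2
work = tempfile.mkdtemp()
dir1 = os.path.join(work, "dir1")
dir2 = os.path.join(work, "dir2")
os.makedirs(dir1)
os.makedirs(dir2)
open(os.path.join(dir1, "test.txt"), "w").close()
open(os.path.join(dir2, "stale.txt"), "w").close()

shutil.rmtree(dir2, ignore_errors=True)  # no need to check it exists first
shutil.copytree(dir1, dir2)              # dir2 is now an exact copy of dir1

print(sorted(os.listdir(dir2)))  # ['test.txt'] - stale.txt is gone
```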
Django Error - Reverse for 'password_reset_confirm' with arguments '()' and keyword arguments ' I am trying to create reset-password functionality in my application and added the following lines in my `urls.py`.

`urls.py`

```
url(r'^resetpassword/passwordsent/$', 'django.contrib.auth.views.password_reset_done', name='password_reset_done'),
url(r'^resetpassword/$', 'django.contrib.auth.views.password_reset'),
url(r'^reset/(?P<uidb64>[0-9A-Za-z]+)-(?P<token>.+)/$', 'django.contrib.auth.views.password_reset_confirm'),
url(r'^reset/done/$', 'django.contrib.auth.views.password_reset_complete'),
```

But when I enter my email id on the reset-password page, it shows an error which I am not able to understand:

`Reverse for 'password_reset_confirm' with arguments '()' and keyword arguments '`

I have gone through some of the suggestions but none are working. Could anyone help me with this error? See the image below:

[![enter image description here](https://i.stack.imgur.com/e029n.png)](https://i.stack.imgur.com/e029n.png)
Django needs to know how to resolve the URL from the name used in the `url` template tag. You should add the name to this line: ``` url(r'^reset/(?P<uidb64>[0-9A-Za-z]+)-(?P<token>.+)/$', 'django.contrib.auth.views.password_reset_confirm'), ``` So it becomes: ``` url(r'^reset/(?P<uidb64>[0-9A-Za-z]+)-(?P<token>.+)/$', 'django.contrib.auth.views.password_reset_confirm', name='password_reset_confirm'), ``` See more about reverse resolution here: <https://docs.djangoproject.com/en/1.8/topics/http/urls/#reverse-resolution-of-urls>
How to specify Composer install path? I have this definition: ``` { "repositories": [ { "type": "package", "package": { "name": "symfony/sfGuardPlugin", "version": "4.0.2", "dist": { "url": "http://plugins.symfony-project.org/get/sfGuardPlugin/sfGuardPlugin-4.0.2.tgz", "type": "tar" } } } ], "require": { "symfony/sfGuardPlugin": "4.0.*" } } ``` I am using Symfony 1, and I'd like to install them on `plugins/sfGuardPlugin/`. How do I specify this?
It seems that you can define the `vendor` dir to be [something else](https://getcomposer.org/doc/06-config.md#vendor-dir) (`plugins` in your case): ``` { "config": { "vendor-dir": "plugins" } } ``` Then, you might rename the package name to not have a level dir inside, like: ``` "package": { "name": "sfGuardPlugin", ``` So, your `composer.json` should look like this: ``` { "config": { "vendor-dir": "plugins" }, "repositories": [ { "type": "package", "package": { "name": "sfGuardPlugin", "version": "4.0.2", "dist": { "url": "http://plugins.symfony-project.org/get/sfGuardPlugin/sfGuardPlugin-4.0.2.tgz", "type": "tar" } } } ], "require": { "sfGuardPlugin": "4.0.*" } } ``` **Edit** Using this configuration, you will get the path (which is *of course* not good for symfony): > > plugins/sfGuardPlugin/sfGuardPlugin-4.0.2/ > > > I found a workaround with this `composer.json`: ``` { "config": { "vendor-dir": "plugins" }, "repositories": [ { "type": "package", "package": { "name": "sfGuardPlugin", "version": "4.0.2", "source": { "url": "http://svn.symfony-project.com/plugins/sfGuardPlugin/", "type": "svn", "reference": "branches/1.3/" } } } ], "require": { "sfGuardPlugin": "4.0.*" } } ```
Are there any standards for mobile device web browsers in terms of thread sleeping? Given the following jsFiddle, which is a simple incrementing counter <http://jsfiddle.net/C93ms/6/> ....if I visit the url above using a mobile device (smartphone or tablet for the sake of argument), the counter starts incrementing as you'd expect provided there is JavaScript support, then it appears that if I press the "Home" button, or click the power button once to turn off the screen (but keep the phone powered on) then the script will stop running and the counter stops incrementing. This I expect to happen and I appreciate the reasons why as reserving battery life is hugely important on a mobile device, so it makes sense that the UI thread sleeps, or similar. Once you revisit the browser, the counter continues incrementing. In the real world, websites that determine timeout period using JavaScript would not timeout despite the inactivity period, I am assuming. I am also assuming that this will vary by device, by firmware, by software even - what I'm *trying* to ascertain here is whether there's a **standard approach** or **default behaviour** built into mobile development frameworks for this and any form of consistency in how the devices behave. I'm not totally sure I've asked a *good question* here, but I've struggled to find 100% relevant information from SO, or I don't quite know what the question is I need to ask when searching.
No JavaScript framework can stop the execution or change the behaviour of the underlying JS engine, and none of them can influence `setTimeout`. Yet, the behaviour is standardised in the current HTML5 draft on the [`WindowTimers` interface](http://www.w3.org/TR/html5/webappapis.html#timers) (which does not mean it is implemented exactly like that). There you will find the note:

> 
> This API does not guarantee that timers will run exactly on schedule. Delays due to CPU load, other tasks, etc, are to be expected.
> 
> 

and, even more explicitly:

> 
> 9) Optionally, wait a further user-agent defined length of time.
> 
> 
> > 
> > Note: This is intended to allow user agents to pad timeouts as needed to optimise the power usage of the device. For example, some processors have a low-power mode where the granularity of timers is reduced; on such platforms, user agents can slow timers down to fit this schedule instead of requiring the processor to use the more accurate mode with its associated higher power usage.
> > 
> > 
> 

You can see such behaviour on desktop browsers as well, which implement a minimum timeout of 4ms (read [explanation on MDN](https://developer.mozilla.org/en/DOM/window.setTimeout#Minimum_delay_and_timeout_nesting)). So, it is legitimate for every device/software/firmware to stop such execution whenever it deems that necessary. You might also want to have a look at the [`WindowAnimationTiming` draft](http://www.w3.org/TR/animation-timing/).

And if you do use `setInterval`/`setTimeout` in animations/clocks/etc, always measure the actually elapsed time with [`Date` objects](http://ecma-international.org/ecma-262/5.1/#sec-15.9) (e.g. via `Date.now()`).
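That last sentence is worth illustrating. A small sketch (the 100 ms rate is hypothetical) of a counter that derives its value from `Date.now()` instead of counting ticks, so padded or suspended timers cannot make it drift:

```javascript
// Counter that derives its value from the wall clock instead of from the
// number of timer ticks. Even if the browser pads or suspends the timer,
// the next callback shows the correct value, because no error accumulates.
var start = Date.now();

function currentCount() {
  // one increment per 100 ms of real elapsed time
  return Math.floor((Date.now() - start) / 100);
}

// The interval only controls how often we *display* the value; the value
// itself does not depend on the interval actually firing every 100 ms.
var timer = setInterval(function () {
  console.log('count:', currentCount());
}, 100);

// stop the demo after roughly half a second
setTimeout(function () { clearInterval(timer); }, 500);
```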
Launch a program when another program opens I would like a method for starting a program when another program starts. Specifically, I’d like a to open a MS Word document and have my citation manager open simultaneously (EndNote supports this function within MS Word preferences, but I recently switched citation managers). The batch scripts I’ve seen so far, [including this one](https://superuser.com/questions/228352/trigger-the-opening-of-a-second-program-when-opening-another), don’t fit the bill. I’d like to be able to open any existing Word document on my drive and trigger the second program.
One possible solution: 1) Create a batch file such as **C:\ViewDoc.bat**: ``` @echo off start "Word" "C:\Program Files (x86)\Microsoft Office\Office14\WINWORD.EXE" %1 start "CiteMan" "Drive:\Path\to\Citation Manager.exe" ``` Edit the paths as required. 2) Right-click any .DOC file, click on **Open with**, browse and select the batch file: ![1](https://i.stack.imgur.com/Ga1gQ.png) Make sure the **Always use the selected program to open this kind of file** option is checked. 3) Do the same with any .DOCX file. Now when you double-click to open any document, the batch file will run instead of Word directly. It in turn will launch Word and open the selected document, as well as launch the other program you want. --- An alternate way to do this with any .DOCM macro-enabled document is to add the following code via the `Developer tab / Visual Basic editor`: ``` Private Sub Document_Open() Shell "Drive:\Path\to\Citation Manager.exe" End Sub ``` This will auto-launch the specified program whenever the .DOCM is opened.
Renew iOS Provisioning Profile on in-house app My iOS Provisioning Profile will expire soon and I need to know the smoothest way to renew that profile. My certificate doesn't expire for another couple of years, so the certificate itself should be fine.It is an in-house (non-App Store) app and is installed on a number of devices. Which is why I'm wondering if the app will stop working if I do the following: 1. Let the provisioning profile expire. 2. Click generate inside the existing profile. If so, is there any way to update/renew the profile without taking down the app or releasing a new version? If I have to release a new version, is the best option to create a new profile to reduce downtime?
So generating a new provisioning profile will not invalidate any of the apps out there on devices. Basically, you should choose option 2. Generate the new provisioning profile, build a new version of the app with the new provisioning profile, and just make sure all your users / testers update to the new version of the app. Alternatively, you could generate the provisioning profile and then distribute the profile to all the devices through MDM (if you're using an MDM solution) or by email (not a great experience). Basically the app will continue to run as long as the new provisioning profile gets on the device before the old one expires, whether that's through MDM, manually, or by installing a new version of the app with the provisioning profile in the .app payload. Or if your users download any app with the new provisioning profile, assuming that provisioning profile is set up with a wildcard app ID, that will also correct it (see information about that here: <https://stackoverflow.com/a/29121777/3708242>). But option 1 will certainly result in your app refusing to launch once the expiration date arrives.
How can I select data from a dask dataframe by a list of indices? I want to select rows from a dask dataframe based on a list of indices. How can I do that? **Example:** Let's say, I have the following dask dataframe. ``` dict_ = {'A':[1,2,3,4,5,6,7], 'B':[2,3,4,5,6,7,8], 'index':['x1', 'a2', 'x3', 'c4', 'x5', 'y6', 'x7']} pdf = pd.DataFrame(dict_) pdf = pdf.set_index('index') ddf = dask.dataframe.from_pandas(pdf, npartitions = 2) ``` Furthermore, I have a list of indices, that I am interested in, e.g. ``` indices_i_want_to_select = ['x1','x3', 'y6'] ``` From this, I would like to generate a dask dataframe containing only the rows specified in `indices_i_want_to_select`
Edit: dask now supports loc on lists: ``` ddf_selected = ddf.loc[indices_i_want_to_select] ``` The following should still work, but is not necessary anymore: ``` import pandas as pd import dask.dataframe as dd #generate example dataframe pdf = pd.DataFrame(dict(A = [1,2,3,4,5], B = [6,7,8,9,0]), index=['i1', 'i2', 'i3', 4, 5]) ddf = dd.from_pandas(pdf, npartitions = 2) #list of indices I want to select l = ['i1', 4, 5] #generate new dask dataframe containing only the specified indices ddf_selected = ddf.map_partitions(lambda x: x[x.index.isin(l)], meta = ddf.dtypes) ```
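Under the hood, both the `loc` call and the `map_partitions` fallback reduce to ordinary pandas index filtering on each partition. A minimal pandas-only sketch of that inner step, using the same example data (no dask required):

```python
import pandas as pd

# Each dask partition is an ordinary pandas DataFrame, so the function
# passed to map_partitions boils down to this: keep the rows whose index
# appears in the wanted list (indices absent from a partition are ignored).
pdf = pd.DataFrame(
    {"A": [1, 2, 3, 4, 5, 6, 7], "B": [2, 3, 4, 5, 6, 7, 8]},
    index=["x1", "a2", "x3", "c4", "x5", "y6", "x7"],
)

wanted = ["x1", "x3", "y6"]
selected = pdf[pdf.index.isin(wanted)]
print(selected)
```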
API Understanding made with codeigniter

I have bought an Android application whose web service is made in CodeIgniter. The API in this web service looks like this:

```
<?php

if (!defined("BASEPATH"))
    exit("No direct script access allowed");

class Site extends back {

    public function __construct() {
        parent::__construct();
        $this->lang->load("site", "english");
        $this->load->model("site_model", "site");
    }

    //=====================================================================
    public function get_updates($last_author, $last_quote) {
        $this->db->where("_auid > ", $last_author);
        $this->db->where("au_status", 1);
        $result["authors"] = $this->db->get("authors")->result_array();

        $this->db->select("quotes.*, authors._auid, authors.au_status");
        $this->db->from("quotes");
        $this->db->join("authors", "authors._auid = quotes.qu_author AND authors.au_status = 1");
        $this->db->where("_quid > ", $last_quote);
        $this->db->where("qu_status", 1);
        $result["quotes"] = $this->db->get()->result_array();

        echo json_encode($result);
    }
}
```

I am still learning PHP. I have made another fresh core-PHP web service to use instead. I don't understand the above API, so I am not able to make a similar one for my new web service. Both web services use the same database. Can anyone suggest how I can use the above API in my new core-PHP web service? Sorry for my limited knowledge.

Thanks
It is quite simple. The function `get_updates` takes 2 parameters as input:

1) `$last_author` — a value of the `_auid` (id) field belonging to the table **authors**

2) `$last_quote` — a value of the `_quid` (id) field belonging to the table **quotes**

```
$this->db->where("_auid > ", $last_author);
$this->db->where("au_status", 1);
$result["authors"] = $this->db->get("authors")->result_array();
```

These lines fetch the rows from the **authors** table whose `_auid` is greater than the value of the `$last_author` parameter (and whose status is 1).

The second query fetches data from the **quotes** table joined with **authors**, keeping the quotes whose `_quid` is greater than the value of the `$last_quote` parameter. A join is used to combine two or more tables into a single result set.

`$last_author` and `$last_quote` are request parameters sent from the application. The desired result is stored in `$result` and returned to the application as a JSON object in the response.
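For porting this to core PHP, it may help to see roughly what SQL the two query-builder calls produce (a sketch; CodeIgniter handles identifier quoting and escaping itself, and `?` marks the bound parameter you would pass from the request):

```sql
-- first query: authors newer than $last_author
SELECT * FROM authors
WHERE _auid > ? AND au_status = 1;

-- second query: quotes newer than $last_quote, joined to active authors
SELECT quotes.*, authors._auid, authors.au_status
FROM quotes
JOIN authors ON authors._auid = quotes.qu_author AND authors.au_status = 1
WHERE _quid > ? AND qu_status = 1;
```

In a plain-PHP service you could run these with PDO prepared statements and `json_encode` the two result arrays, which reproduces the response format the app expects.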
Loading/Creating an image into SharpDX in a .NET program I'm trying to use SharpDX to create a simple maze-like 2D program using DirectX. To that end I want to create bitmaps that I can render on-screen for the walls, hallways, outside of the maze, etc. However, I can't seem to figure out how to either load an existing image file into the Bitmap class in the [SharpDX](http://code.google.com/p/sharpdx/) library, or how to create a new such Bitmap class from scratch. Since all the classes are said to be mapped directly to DirectX types, I guess this just means I need to learn more DirectX, but I was hoping there was a simple example somewhere that could show me what I need to do. If I have to construct a new Bitmap from scratch and draw to it, I can do that, it's not difficult getting the pixels I need right, however I can't even seem to figure out that part. Does anyone have any experience with the [SharpDX](http://code.google.com/p/sharpdx/) library and can give me some pointers?
You should ask directly this question to the "Issue" tab on Sharpdx project, as I would be able to give you a quick response from there (you are lucky that I'm sometimes checking its usage from the net ;) ). If you were asking this officially, I could make an improvement to the library and keep a record of your request in the issue database. Concerning your particular issue, it's not clear what kind of DirectX API you are using. I assume that you are using Direct2D? If yes, there is no API to load directly a SharpDX.Direct2D1.Bitmap from a file (I will add this method to the API). I have just uploaded a [bitmap sample](http://code.google.com/p/sharpdx/source/browse/trunk/Samples/Direct2D1/BitmapApp/Program.cs) that is performing this. ``` /// <summary> /// Loads a Direct2D Bitmap from a file using System.Drawing.Image.FromFile(...) /// </summary> /// <param name="renderTarget">The render target.</param> /// <param name="file">The file.</param> /// <returns>A D2D1 Bitmap</returns> public static Bitmap LoadFromFile(RenderTarget renderTarget, string file) { // Loads from file using System.Drawing.Image using (var bitmap = (System.Drawing.Bitmap)System.Drawing.Image.FromFile(file)) { var sourceArea = new System.Drawing.Rectangle(0, 0, bitmap.Width, bitmap.Height); var bitmapProperties = new BitmapProperties(new PixelFormat(Format.R8G8B8A8_UNorm, AlphaMode.Premultiplied)); var size = new System.Drawing.Size(bitmap.Width, bitmap.Height); // Transform pixels from BGRA to RGBA int stride = bitmap.Width * sizeof(int); using (var tempStream = new DataStream(bitmap.Height * stride, true, true)) { // Lock System.Drawing.Bitmap var bitmapData = bitmap.LockBits(sourceArea, ImageLockMode.ReadOnly, System.Drawing.Imaging.PixelFormat.Format32bppPArgb); // Convert all pixels for (int y = 0; y < bitmap.Height; y++) { int offset = bitmapData.Stride*y; for (int x = 0; x < bitmap.Width; x++) { // Not optimized byte B = Marshal.ReadByte(bitmapData.Scan0, offset++); byte G = 
Marshal.ReadByte(bitmapData.Scan0, offset++); byte R = Marshal.ReadByte(bitmapData.Scan0, offset++); byte A = Marshal.ReadByte(bitmapData.Scan0, offset++); int rgba = R | (G << 8) | (B << 16) | (A << 24); tempStream.Write(rgba); } } bitmap.UnlockBits(bitmapData); tempStream.Position = 0; return new Bitmap(renderTarget, size, tempStream, stride, bitmapProperties); } } } ``` As you said, SharpDX is giving access to the raw-low-level DirectX API (unlike for example XNA that is giving a higher level API), so you definitely need to understand how program raw DirectX in order to use SharpDX.
Why is "partial RVO" not performed? Please take a look at this silly function, which should only illustrate the problem and a simplification of the real code: ``` struct A; A create(bool first){ A f(21), s(42); if(first) return f; else return s; } ``` I understand, that because it is not clear which object will be returned during the compilation, we cannot expect return value optimization (RVO) to be always performed. However, one maybe could expect RVO to be performed in 50% of the cases (assuming uniform distribution for `true`/`false` due to lack of further information): just decide for which case RVO (`first==true` or `first==false`) should be performed and apply it for this parameter-value, accepting that in the other case the copy constructor must be called. Yet this "partial RVO" is not the case for all compilers I can get my hands on (see live with [gcc](https://godbolt.org/z/WvOLbG), [clang](https://godbolt.org/z/mDsVX5) and [MSVC](https://rextester.com/WRQHP60128)) - in both cases (i.e. `first==true` or `first==false`) the copy-constructor is used and not omitted. Is there something, that renders the "partial RVO" in the above case invalid or is this an unlikely case of missed optimization by all compilers? --- Complete program: ``` #include <iostream> struct A{ int val; A(int val_):val(val_){} A(const A&o):val(o.val){ std::cout<<"copying: "<<val<<"\n"; } }; A create(bool first){ A f(21), s(42); if(first) return f; else return s; } int main(){ std::cout<<"With true: "; create(true); std::cout<<"With false: "; create(false); } ```
Let's consider what happens if RVO is done for `f`, meaning it is constructed directly in the return value. If `first==true` and `f` gets returned, great, no copy is needed. But if `first==false` then `s` gets returned instead, so the program will copy construct `s` over the top of `f` *before* the destructor for `f` has run. Then after that, the destructor for `f` will run, and now the return value is an invalid object that has already been destroyed! If RVO is done for `s` instead the same argument applies, except that now the problem happens when `first==true`. Whichever one you choose, you avoid a copy in 50% of cases and get undefined behaviour in the other 50% of cases! That's not a desirable optimization! In order to make this work the order of destruction of the local variables would have to be altered so that `f` is destroyed *before* copying `s` into that memory location (or vice versa), and that's a **very** risky thing to mess with. The order of destruction is a fundamental property of C++ that should not be fiddled with, or you'll break RAII and who knows how many other assumptions.
Select tab by name in jquery UI 1.10.0 ***Before*** *jquery UI 1.10.0* I used to indirectly select a tab like this: ``` $("#tabs").tabs( "select", 5 ); ``` or ``` $("#tabs").tabs( "select", "tab6" ); ``` Now, with the same code, using *jquery UI 1.10.0* , you get an error saying that there is "***no such method 'select' for tabs widget instance***". I changed the code to use the "option" "active" like this: ``` $("#tabs").tabs( "option","active", 5 ); ``` However, it looks like I can use only the index. Selecting by ID is not working anymore. So, instead of using the ID like this (which is not working) : ``` $("#tabs").tabs( "option","active", "tab6" ); ``` you have to do it like this: ``` var idx = $('#tabs a[href="#tab6"]').parent().index(); $("#tabs").tabs( "option", "active", idx ); ``` or, in a shorter form ``` $("#tabs").tabs( "option", "active", $("#tab6").parent().index() ); ``` I read the "changelog" (<http://jqueryui.com/changelog/1.10.0/>) and I don't see anything about this change. Is there another way of selecting a tab by name in *jquery UI 1.10.0* ? I created a demo here for whoever wants to try... <http://jsbin.com/ojufej/1>
I ended up using this (see example): <http://jsfiddle.net/AzSUS/>

Basically, I added these functions

```
$.fn.tabIndex = function () {
    return $(this).parent().find(this).index() - 1;
};

$.fn.selectTabByID = function (tabID) {
    $(this).tabs("option", "active", $('#' + tabID).tabIndex());
};

$.fn.selectTabByIndex = function (tabIndex) {
    $(this).tabs("option", "active", tabIndex);
};
```

And use them like this:

```
$("#tabs").selectTabByIndex(0);
$("#tabs").selectTabByID('tab2');
```

As you'll see in the HTML section on my example ...

```
<div id="tabs">
    <ul>
        <li><a href="#tab1">[0] #tab1</a></li>
        <li><a href="#tab2">[1] #tab2</a></li>
        <li><a href="#tab3">[2] #tab3</a></li>
        <li><a href="#tab4">[3] #tab4</a></li>
    </ul>
    <div id="tab1">Tab 1 Content</div>
    <div id="tab2">Tab 2 Content</div>
    <div id="tab3">Tab 3 Content</div>
    <div id="tab4">Tab 4 Content</div>
</div>
```

... I have a very simple, well-defined structure for the tabs. The "real" application contains 3 levels of tabs. See this example with 2 levels: <http://jsfiddle.net/vgaTP/>

Another thing that I wasn't clear about is this: I do not want to trigger the "click" on the tab, I just want to "switch" to that tab, without a click. For me, the "click" event loads the content of a tab and I do not want to load the content every time I "select" a tab.
Liquibase change column type from Date to DateTime without deleting contained values

I'm using Liquibase for data migration. I have a table named Document that already contains values. My table Document contains columns (id, name, dueDate). The dueDate column is of type DATE and I want to change its type from DATE to DATETIME. I have adopted the following strategy:

1- create a new column duedatenew of type DATETIME

2- copy values from column duedate to duedatenew

3- delete column duedate

4- rename column duedatenew to duedate

as described in the following changeset:

```
<changeSet id="task-99" author="blaise">
    <addColumn tableName="document">
        <column name="duedatenew" type="DATETIME" />
    </addColumn>
    <update tableName="document">
        <column name="duedatenew" valueComputed="(SELECT duedate FROM document)" />
    </update>
    <dropColumn tableName="document" columnName="duedate" />
    <renameColumn tableName="document" oldColumnName="duedatenew" newColumnName="duedate" />
</changeSet>
```

But the execution of the changeset always fails during the second step: the copy of the data always fails. How can I solve this, please?
Looking at the [column](http://www.liquibase.org/documentation/column.html "column documentation") docs, it seems that `valueComputed` should point to a SQL function or expression evaluated per row, so a standalone SELECT query will not work there.

According to [this forum thread](http://forum.liquibase.org/topic/how-to-upate-a-field-from-another-fields-value), your best option is to use the `sql` tag to execute the update you want. Example:

```
<changeSet id="task-99" author="blaise">
    <addColumn tableName="document">
        <column name="duedatenew" type="DATETIME" />
    </addColumn>
    <sql>update document set duedatenew = duedate</sql>
    <dropColumn tableName="document" columnName="duedate" />
    <renameColumn tableName="document" oldColumnName="duedatenew" newColumnName="duedate" />
</changeSet>
```
Mongoose Query to filter an array and Populate related content I'm trying to query the property that is an array of both reference to another schema and some additional data. For better clarification, here's the schema: ``` var orderSchema = new Schema({ orderDate: Date, articles: [{ article: { type: Schema.Types.ObjectId, ref: 'Article' }, quantity: 'Number' }] }), Order = mongoose.model('Order', orderSchema); ``` While I managed to successfully query the reference, i.e.: ``` Order.find({}).populate('articles.article', null, { price: { $lte: 500 } }).exec(function(err, data) { for (var order of data) { for (var article of order.articles) { console.log(article); } } }); ``` I have some issues querying the `quantity` attribute, i.e. this doesn't work: ``` Order.find({}).where({ 'articles.quantity': { $gte: 5 } }).populate('articles.article', null, { /*price: { $lte: 500 }*/ }).exec(function(err, data) { for (var order of data) { for (var article of order.articles) { console.log(article); } } }); ``` Is it even possible to base the query on `quantity`? And if so, what would be the best approach? Thank you! **UPDATE:** The problem is, the result is either a complete array, or nothing (see updated question). I want to get only those records that have quantity more or the same as 5. With your (and mine) approach I get either no records at all (if I set $gte: 5001) or both records (if I set $gte:5000) ``` { "_id": ObjectId('56fe76c12f7174ac5018054f'), "orderDate": ISODate('2016-04-01T13:25:21.055Z'), "articles": [ { "article": ObjectId('56fe76c12f7174ac5018054b'), "quantity": 5000, "_id": ObjectId('56fe76c12f7174ac50180551') }, { "article": ObjectId('56fe76c12f7174ac5018054c'), "quantity": 1, "_id": ObjectId('56fe76c12f7174ac50180552') } ], "__v": 1 } ```
You need to "project" the match here since all the MongoDB query does is look for a *"document"* that has *"at least one element"* that is *"greater than"* the condition you asked for.

So filtering an "array" is not the same as the "query" condition you have. A simple "projection" will just return the "first" item matching that condition. So it's probably not what you want, but as an example:

```
Order.find({ "articles.quantity": { "$gte": 5 } })
    .select({ "articles.$": 1 })
    .populate({
        "path": "articles.article",
        "match": { "price": { "$lte": 500 } }
    }).exec(function(err,orders) {
       // populated and filtered twice
    }
)
```

That "sort of" does what you want, but the problem is really going to be that it will only ever return at most **one** element within the `"articles"` array.

To do this properly you need `.aggregate()` to filter the array content. Ideally this is done with MongoDB 3.2 and [`$filter`](https://docs.mongodb.org/manual/reference/operator/aggregation/filter/). But there is also a special way to `.populate()` here:

```
Order.aggregate(
    [
        { "$match": { "articles.quantity": { "$gte": 5 } } },
        { "$project": {
            "orderDate": 1,
            "articles": {
                "$filter": {
                    "input": "$articles",
                    "as": "article",
                    "cond": { "$gte": [ "$$article.quantity", 5 ] }
                }
            },
            "__v": 1
        }}
    ],
    function(err,orders) {
        Order.populate(
            orders.map(function(order) { return new Order(order) }),
            { "path": "articles.article", "match": { "price": { "$lte": 500 } } },
            function(err,orders) {
                // now it's all populated and mongoose documents
            }
        )
    }
)
```

So what happens here is the actual "filtering" of the array happens within the `.aggregate()` statement, but of course the result from this is no longer a "mongoose document", because one aspect of `.aggregate()` is that it can "alter" the document structure, and for this reason mongoose "presumes" that is the case and just returns a "plain object".
That's not really a problem, since when you see the `$project` stage, we are actually asking for all of the same fields present in the document according to the defined schema. So even though it's just a "plain object" there is no problem "casting" it back into a mongoose document.

This is where the `.map()` comes in, as it returns an array of converted "documents", which is then important for the next stage.

Now you call [`Model.populate()`](http://mongoosejs.com/docs/api.html#model_Model.populate) which can then run the further "population" on the "array of mongoose documents".

The result then is finally what you want.

---

## MongoDB versions older than 3.2.x

The only thing that really changes here is the aggregation pipeline, so that is all that needs to be included for brevity.

**MongoDB 2.6** - Can filter arrays with a combination of [`$map`](https://docs.mongodb.org/manual/reference/operator/aggregation/map/) and [`$setDifference`](https://docs.mongodb.org/manual/reference/operator/aggregation/setDifference/). The result is a "set", but that is not a problem since mongoose creates an `_id` field on all sub-document arrays by default:

```
[
    { "$match": { "articles.quantity": { "$gte": 5 } } },
    { "$project": {
        "orderDate": 1,
        "articles": {
            "$setDifference": [
                { "$map": {
                    "input": "$articles",
                    "as": "article",
                    "in": {
                        "$cond": [
                            { "$gte": [ "$$article.quantity", 5 ] },
                            "$$article",
                            false
                        ]
                    }
                }},
                [false]
            ]
        },
        "__v": 1
    }}
],
```

Older revisions than that must use [`$unwind`](https://docs.mongodb.org/manual/reference/operator/aggregation/unwind/):

```
[
    { "$match": { "articles.quantity": { "$gte": 5 } }},
    { "$unwind": "$articles" },
    { "$match": { "articles.quantity": { "$gte": 5 } }},
    { "$group": {
        "_id": "$_id",
        "orderDate": { "$first": "$orderDate" },
        "articles": { "$push": "$articles" },
        "__v": { "$first": "$__v" }
    }}
],
```

## The $lookup Alternative

Another alternative is to just do everything on the "server" instead.
This is an option with [`$lookup`](https://docs.mongodb.org/manual/reference/operator/aggregation/lookup/) of MongoDB 3.2 and greater:

```
Order.aggregate(
    [
        { "$match": { "articles.quantity": { "$gte": 5 } }},
        { "$project": {
            "orderDate": 1,
            "articles": {
                "$filter": {
                    "input": "$articles",
                    "as": "article",
                    "cond": { "$gte": [ "$$article.quantity", 5 ] }
                }
            },
            "__v": 1
        }},
        { "$unwind": "$articles" },
        { "$lookup": {
            "from": "articles",
            "localField": "articles.article",
            "foreignField": "_id",
            "as": "articles.article"
        }},
        { "$unwind": "$articles.article" },
        { "$group": {
            "_id": "$_id",
            "orderDate": { "$first": "$orderDate" },
            "articles": { "$push": "$articles" },
            "__v": { "$first": "$__v" }
        }},
        { "$project": {
            "orderDate": 1,
            "articles": {
                "$filter": {
                    "input": "$articles",
                    "as": "article",
                    "cond": { "$lte": [ "$$article.article.price", 500 ] }
                }
            },
            "__v": 1
        }}
    ],
    function(err,orders) {

    }
)
```

And though those are just plain documents, it's just the same results as what you would have got from the `.populate()` approach. And of course you can always go and "cast" to mongoose documents in all cases again if you really must.

## The "shortest" Path

This really goes back to the original statement where you basically just "accept" that the "query" is not meant to "filter" the array content. The `.populate()` can happily do so because it's just another "query" and is stuffing in "documents" by convenience.
So if you really are not saving "bucketloads" of bandwidth by the removal of additional array members in the original document array, then just `.filter()` them out in post-processing code:

```
Order.find({ "articles.quantity": { "$gte": 5 } })
    .populate({
        "path": "articles.article",
        "match": { "price": { "$lte": 500 } }
    }).exec(function(err,orders) {
        orders = orders.filter(function(order) {
            order.articles = order.articles.filter(function(article) {
                return (
                    ( article.quantity >= 5 ) &&
                    ( article.article != null )
                )
            });
            return order.articles.length > 0;
        })
        // orders has non matching entries removed
    }
)
```
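Since that last step runs on plain arrays, the filtering logic can be sketched and checked in isolation (hypothetical sample data below, shaped like populated `Order` documents):

```javascript
// Stand-alone version of the post-processing step: drop article entries
// whose quantity is too low or whose populate() "match" left the article
// as null, then drop orders that end up with no articles at all.
function filterOrders(orders, minQty) {
  return orders.filter(function (order) {
    order.articles = order.articles.filter(function (entry) {
      return entry.quantity >= minQty && entry.article != null;
    });
    return order.articles.length > 0;
  });
}

// hypothetical sample data
var sample = [
  { _id: 1, articles: [
      { article: { price: 100 }, quantity: 5000 },
      { article: { price: 700 }, quantity: 1 }   // too few: removed
  ]},
  { _id: 2, articles: [
      { article: null, quantity: 10 }            // match excluded it: removed
  ]}
];

var kept = filterOrders(sample, 5);
console.log(kept.length); // 1
```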
How to get rid of stepping into stl\_vector.h using gdb "s" command? Subject. Compiled: g++ -g -Wall -o program program.cpp Running: gdb -tui program Thank you, Javier.
If you have GDB 7.4 or greater, there's a `skip` command, which lets you skip functions or entire files. After you mark a file or function for skipping, it won't step into it again. From the GDB Manual -- [Skipping Over Functions and Files](https://sourceware.org/gdb/current/onlinedocs/gdb/Skipping-Over-Functions-and-Files.html#Skipping-Over-Functions-and-Files) ``` skip [linespec] skip function [linespec] After running this command, the function named by linespec or the function containing the line named by linespec will be skipped over when stepping. skip file [filename] After running this command, any function whose source lives in filename will be skipped over when stepping. ```
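For the case in the question, a hypothetical session could look like this (the header's exact name or path can vary by toolchain; check what `step` prints to see which file to skip):

```
# Never step into code from this header (put the line in ~/.gdbinit to persist):
skip file stl_vector.h

# List active skips, or remove one by its number:
info skip
skip delete 1
```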
Laravel dynamic relationships - access model attributes on eager load I have an Eloquent relationship on my Laravel model which is dynamic - that is, the value of a particular database field determines which model will get loaded. I am able to load this relationship fine when I first instantiate the model instance and then reference the relation, but it does not work when I eager load that relationship. Specifically, I have a `Product` model. That product might or might not be a parent to another product. If the `parent_id` of the Product is set to `0` then that product is considered a parent part (whether or not it has children). If the `parent_id` is set to a different product's id, then that product is a child. I need to be able to access `Product::with('parent')` and know that the `parent` relation will return with *either* itself (yes, duplicated data) or a different product if it is a child. Here is my relationship so far: ``` public function parent() { if ($this->parent_id > 0) { return $this->belongsTo('App\Product', 'parent_id', 'id'); } else { return $this->belongsTo('App\Product', 'id', 'id'); } } ``` When I am eager loading, `$this->parent_id` is always undefined and therefore this relation will only ever return itself even if it actually is a parent product. Is there any way to access a model's attributes *before* a relation is eager loaded? I thought about working in a separate query before I return the relation, but I realized that I don't have access to the product's id to even run that query. If that is not possible, what are some other ways to address this type of problem? It does not appear that this could be solved through a traditional polymorphic relationship. I only have two possible ideas: - Add some sort of constraint to the `belongsTo` relation where I dynamically determine a foreign key. - Create my own custom relationship that uses a foreign key based on a different database field. 
I honestly have no idea how I would implement either of those. Am I going about this the right way? Is there something I am overlooking? --- After thinking this over more, I think the simplest way to put the question is: **is there any way to dynamically select a foreign key for a relationship inside the relation itself at runtime**? My use cases don't allow for me to use eager loading constraints when I call the relation - the constraints need to apply to the relation itself.
Because of the way eager loading works, there isn't anything you can really do to the SQL being run to get done what you're looking for. When you do `Product::with('parent')->get()`, it runs two queries. First, it runs the query to get all the products: ``` select * from `products` ``` Next, it runs a query to get the eager loaded parents: ``` select * from `products` where `products`.`id` in (?, ?, ?) ``` The number of parameters (`?`) corresponds to the number of results from the first query. Once the second set of models has been retrieved, the `match()` function is used to relate the objects to each other. In order to do what you want, you're going to have to create a new relationship and override the `match()` method. This will handle the eager loading aspect. Additionally, you'll need to override the `addConstraints` method to handle the lazy loading aspect. First, create a custom relationship class: ``` class CustomBelongsTo extends BelongsTo { // Override the addConstraints method for the lazy loaded relationship. // If the foreign key of the model is 0, change the foreign key to the // model's own key, so it will load itself as the related model. /** * Set the base constraints on the relation query. * * @return void */ public function addConstraints() { if (static::$constraints) { // For belongs to relationships, which are essentially the inverse of has one // or has many relationships, we need to actually query on the primary key // of the related models matching on the foreign key that's on a parent. $table = $this->related->getTable(); $key = $this->parent->{$this->foreignKey} == 0 ? $this->otherKey : $this->foreignKey; $this->query->where($table.'.'.$this->otherKey, '=', $this->parent->{$key}); } } // Override the match method for the eager loaded relationship. // Most of this is copied from the original method. The custom // logic is in the elseif. /** * Match the eagerly loaded results to their parents. 
* * @param array $models * @param \Illuminate\Database\Eloquent\Collection $results * @param string $relation * @return array */ public function match(array $models, Collection $results, $relation) { $foreign = $this->foreignKey; $other = $this->otherKey; // First we will get to build a dictionary of the child models by their primary // key of the relationship, then we can easily match the children back onto // the parents using that dictionary and the primary key of the children. $dictionary = []; foreach ($results as $result) { $dictionary[$result->getAttribute($other)] = $result; } // Once we have the dictionary constructed, we can loop through all the parents // and match back onto their children using these keys of the dictionary and // the primary key of the children to map them onto the correct instances. foreach ($models as $model) { if (isset($dictionary[$model->$foreign])) { $model->setRelation($relation, $dictionary[$model->$foreign]); } // If the foreign key is 0, set the relation to a copy of the model elseif($model->$foreign == 0) { // Make a copy of the model. // You don't want recursion in your relationships. $copy = clone $model; // Empty out any existing relationships on the copy to avoid // any accidental recursion there. $copy->setRelations([]); // Set the relation on the model to the copy of itself. $model->setRelation($relation, $copy); } } return $models; } } ``` Once you've created your custom relationship class, you need to update your model to use this custom relationship. Create a new method on your model that will use your new `CustomBelongsTo` relationship, and update your `parent()` relationship method to use this new method, instead of the base `belongsTo()` method. ``` class Product extends Model { // Update the parent() relationship to use the custom belongsto relationship public function parent() { return $this->customBelongsTo('App\Product', 'parent_id', 'id'); } // Add the method to create the CustomBelongsTo relationship. 
// This is basically a copy of the base belongsTo method, but it returns
    // a new CustomBelongsTo relationship instead of the original BelongsTo relationship
    public function customBelongsTo($related, $foreignKey = null, $otherKey = null, $relation = null)
    {
        // If no relation name was given, we will use this debug backtrace to extract
        // the calling method's name and use that as the relationship name as most
        // of the time this will be what we desire to use for the relationships.
        if (is_null($relation)) {
            list($current, $caller) = debug_backtrace(DEBUG_BACKTRACE_IGNORE_ARGS, 2);

            $relation = $caller['function'];
        }

        // If no foreign key was supplied, we can use a backtrace to guess the proper
        // foreign key name by using the name of the relationship function, which
        // when combined with an "_id" should conventionally match the columns.
        if (is_null($foreignKey)) {
            $foreignKey = Str::snake($relation).'_id';
        }

        $instance = new $related;

        // Once we have the foreign key names, we'll just create a new Eloquent query
        // for the related models and return the relationship instance which will
        // actually be responsible for retrieving and hydrating every relation.
        $query = $instance->newQuery();

        $otherKey = $otherKey ?: $instance->getKeyName();

        return new CustomBelongsTo($query, $this, $foreignKey, $otherKey, $relation);
    }
}
```

Fair warning: none of this has been tested.
Type clash in Standard ML datatype with same constructor name I need to describe in Standard-ML a language made of properties and values. My property system is made of properties which can have values, like for example: ``` color: red | yellow | blue | transparent align: left | center | right bgcolor: red | yellow | blue | transparent ``` I created this sml file which tries to describe these properties: ``` datatype colorvalue = Transparent | Yellow | Blue | Red datatype bgcolorvalue = Transparent | Yellow | Blue | Red datatype alignvalue = Left | Center | Right (* Generic property: it can be any of the above *) datatype property = Color of colorvalue | BgColor of bgcolorvalue | Align of alignvalue (* Some values *) val prop1: property = Color Transparent val prop2: property = BgColor Transparent ``` When I compile this in [MoscowML](http://mosml.org/) I get: ``` ,File "c:\Users\myuser\documents\myproj\property.sml", line 21, characters 31-42: ! val prop1: property = Color Transparent ! ^^^^^^^^^^^ ! Type clash: expression of type ! bgcolorvalue ! cannot have type ! colorvalue ``` ### My guess So I think that the problem is that `color` and `bgcolor` share a common property value: `transparent` which reflects in datatypes `colorvalue` and `bgcolorvalue` to share constructor `Transparent`. Actually they share all values, thus all constructors. - Is it the reason for this failure? - In any case, what should I do to describe my system?
It is easy to see that trying to use the same constructor in different types in the same scope would create problems with type inference. For example, what should the type of

```
fun heat Transparent = Yellow
  | heat Yellow = Red
  | heat Red = Blue
  | heat Blue = Blue;
```

be? `colorvalue -> colorvalue` or `bgcolorvalue -> bgcolorvalue` or `colorvalue -> bgcolorvalue` or `bgcolorvalue -> colorvalue`?

The easiest workaround would be to adopt different naming conventions for the constructors. You could also use structures (which is how the SML basis library keeps e.g. different uses of the name `map` from clashing). Something like:

```
structure Color = struct
  datatype value = Transparent | Yellow | Blue | Red
end

structure BGColor = struct
  datatype value = Transparent | Yellow | Blue | Red
end;
```

Then you can do things like:

```
- val a = Color.Transparent;
val a = Transparent : Color.value
- val b = BGColor.Transparent;
val b = Transparent : BGColor.value
```

This last was run in the SML/NJ REPL and illustrates how there are now no clashes.
split php string into chunks of varying length I am looking for ways to split a string into an array, sort of `str_split()`, where the chunks are all of different sizes. I could do that by looping through the string with a bunch of `substr()`, but that looks neither elegant nor efficient. Is there a function that accept a string and an array, like (`1, 18, 32, 41, 108, 125, 137, 152, 161`), and yields an array of appropriately chopped string pieces? Explode is inappropriate because the chunks are delimited by varying numbers of white spaces.
There is nothing built into PHP that will do this for you (it's a bit too specific), so as radashk just said, you have to write a small function:

```
function getParts($string, $positions){
    $parts = array();

    foreach ($positions as $position){
        // Each entry is treated as the length of the next chunk,
        // not as an absolute offset into the string.
        $parts[] = substr($string, 0, $position);
        $string = substr($string, $position);
    }

    return $parts;
}
```

Something like that. You can then use it wherever you like, so it's clean:

```
$parts = getParts('some string', array(1, ... 161));
```

If you really wanted to, you could implode the lengths into a single regular expression with capture groups:

```
^(.{1})(.{18}) <lots more> (.{161})$
```

and read the chunks out of the match array that `preg_match()` fills in.
How to render a View at specific coordinates of the page? When the screen is touched and hold at a certain position, I would like to render a `View` exactly where the screen is touched( like context menu in android). I have page coordinates `pageX` and `pageY`, And I would like to render a `View` according to these coordinates. Is there any way to give these coordinates in Style object as shown below, ``` <View style={{X: 120, Y: 75}} /> ```
Unlike regular CSS, react-native does not support `position: fixed` to manually position a view on a certain position relative to the viewport. But you could use a workaround with `position: 'absolute'` and the `onLayout` method of the parent component. I will try to clarify: **Step 1:** Let's assume you have a parent view which is exactly the width and height of your screen, then you could easily position the view like so: ``` <View> // this parent view has the same dimensions as the screen // Below is the view you want to position according your x, y coordinates <View style={{ position: 'absolute', left: this.state.x, top: this.state.y }}> </View> </View> ``` **Step 2 (optional):** The touch coordinates you have are screen coordinates, so step 1 isn't sufficient if your view isn't exactly the same size as your screen. But react-native provides a `measureInWindow()` (available through a component's ref) callback to determine a component 's screen coordinates. So basically you call `measureInWindow()` on the parent view whenever a touch takes place to determine the x and y offset of the parent window and you take this offset into account to calculate the correct x and y values.
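The bookkeeping in step 2 is plain arithmetic. Here is a sketch in plain JavaScript (the function and field names are illustrative; in a real component the parent offset arrives asynchronously from `measureInWindow()`):

```javascript
// Convert page (screen) touch coordinates into coordinates relative to the
// parent view, given the parent's window offset reported by measureInWindow().
function toLocalCoords(touch, parentOffset) {
  return {
    x: touch.pageX - parentOffset.x,
    y: touch.pageY - parentOffset.y,
  };
}

// A touch at page (120, 75) inside a parent whose top-left corner
// measureInWindow() reported at (0, 40) maps to local (120, 35).
const local = toLocalCoords({ pageX: 120, pageY: 75 }, { x: 0, y: 40 });
console.log(local.x, local.y); // 120 35
```

The resulting `local.x`/`local.y` are what you would store in state and feed to the absolutely positioned view's `left`/`top`.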
Why is the output of "openssl passwd" different each time? > > The openssl passwd command computes the hash of a password typed at > run-time or the hash of each password in a list. The password list is > taken from the named file for option -in file, from stdin for option > -stdin, and from the command line otherwise. The UNIX standard algorithm crypt and the MD5-based BSD password algorithm 1 and its > Apache variant apr1 are available. > > > I understand the term "hash" to mean "*turn an input into an output from which is it difficult/impossible to derive the original input*." More specifically, the input:output relationship after hashing is N:M, where M<=N (i.e. hash collision is possible). Why is the output of "`openssl passwd`" different run successively with the same input? ``` > openssl passwd Password: Verifying - Password: ZTGgaZkFnC6Pg > openssl passwd Password: Verifying - Password: wCfi4i2Bnj3FU > openssl passwd -1 "a" $1$OKgLCmVl$d02jECa4DXn/oXX0R.MoQ/ > openssl passwd -1 "a" $1$JhSBpnWc$oiu2qHyr5p.ir0NrseQes1 ``` I must not understand the purpose of this function, because it looks like running *the same hash algorithm on the same input produces multiple unique outputs*. I guess I'm confused by this seeming N:M input:output relationship where M>N.
```
> openssl passwd -1 "a"
$1$OKgLCmVl$d02jECa4DXn/oXX0R.MoQ/
```

This is the extended Unix-style `crypt(3)` password hash syntax, specifically the MD5 version of it.

The first `$1$` identifies the hash type, the next part `OKgLCmVl` is the salt used in encrypting the password, then after the separator `$` character to the end of the line is the actual password hash.

So, if you take the salt part from the first encryption and use it with the subsequent ones, you should always get the same result:

```
> openssl passwd -1 -salt "OKgLCmVl" "a"
$1$OKgLCmVl$d02jECa4DXn/oXX0R.MoQ/
> openssl passwd -1 -salt "OKgLCmVl" "a"
$1$OKgLCmVl$d02jECa4DXn/oXX0R.MoQ/
```

When you're *changing a password*, you should always switch to a new salt. This prevents anyone from finding out after the fact whether the new password was actually the same as the old one. (If you want to prevent the re-use of old passwords, you can of course hash the new password candidate twice: once with the old salt and then, if the result is different from the old password and thus acceptable, again with a new salt.)

If you use `openssl passwd` with no options, you get the original `crypt(3)`-compatible hash, as described by dave_thompson_085. With it, the salt is the first two letters of the hash:

```
> openssl passwd "a"
imM.Fa8z1RS.k
> openssl passwd -salt "im" "a"
imM.Fa8z1RS.k
```

You should not use this old hash style in any new implementation, as it restricts the effective password length to 8 characters and has too little salt to adequately protect against modern methods.

(I once calculated the amount of data required to store a full set of rainbow tables for every classic `crypt(3)` hash. I don't remember the exact result, but assuming my calculations were correct, it was on the order of "a modest stack of multi-terabyte disks". In my opinion, that places it within the "organized criminals could do it" range.)
How to delete a certificate from Mac Keychain through code? I've looked into [Apple's Certificate reference](http://developer.apple.com/library/mac/#documentation/Security/Reference/certifkeytrustservices/Reference/reference.html#//apple_ref/doc/uid/TP30000157), and I don't see anything about removing certificates from the Keychain. Is it allowed? If so, how? If not, why not?
Certificates are a subtype of keychain items, so you can use `SecKeychainItemDelete` to remove them. To prevent compiler warnings, you'll need to explicitly cast the `SecCertificateRef` to a `SecKeychainItemRef` — plain C doesn't have language support for subclasses. ``` SecCertificateRef certificate = ...; OSStatus status = SecKeychainItemDelete((SecKeychainItemRef)certificate); if (status) { // Handle error } ``` If you target Mac OS 10.6 or later, you can also use the newer `SecItemDelete` API. It doesn't provide any advantages in the simplest case, but you can change the query argument to delete multiple certificates at once, or delete certificates without having direct references to them. ``` SecCertificateRef certificate = ...; NSDictionary *query = [NSDictionary dictionaryWithObjectsAndKeys: kSecClassCertificate, kSecClass, [NSArray arrayWithObject:(id)certificate], kSecMatchItemList, kSecMatchLimitOne, kSecMatchLimit, nil]; OSStatus status = SecItemDelete((CFDictionaryRef)query); if (status) { // Handle error } ```
Is there a "no-reply" email header? I often see automated emails postfixed with a message like ### Amazon: > > \*Please note: this e-mail was sent from an address that cannot accept incoming e-mail. Please use the link above if you need to contact us again about this same issue. > > > ### Twitter: > > Please do not reply to this message; it was sent from an unmonitored email address. This message is a service email related to your use of Twitter. > > > ### Google Checkout: > > Need help? Visit the Google Checkout help center. Please do not reply to this message. > > > Directly underneath this warning, Gmail shows me a reply input field. It seems to me that there should be some sort of header that could be attached to such automated emails that would tell the recipient's email client to not allow replies. Is there such a header? If not, has it ever been discussed by the groups that control email formats?
[RFC 6854](https://www.rfc-editor.org/rfc/rfc6854) updates [RFC 5322](https://www.rfc-editor.org/rfc/rfc5322) to allow the group construct to be used in the `From` field as well (among other things). A group can be empty, which is likely the only way you've ever seen the group syntax being used: `undisclosed-recipients:;`. [Section 1](https://www.rfc-editor.org/rfc/rfc6854#section-1) of the RFC explicitly lists "no-reply" among the motivations for allowing the group construct in the `From` field: > > The use cases for the "From:" field have evolved. There are numerous instances of automated systems that wish to send email but cannot handle replies, and a "From:" field with no usable addresses would be extremely useful for that purpose. > > > It provides the following example: `From: Automated System:;` However, at the end of the same section, the RFC also says: > > This document recommends against the general use of group syntax in these fields at this time > > > In [section 3](https://www.rfc-editor.org/rfc/rfc6854#section-3), the RFC clarifies that the group syntax in the `From` field is only for [Limited Use](https://www.rfc-editor.org/rfc/rfc2026#section-3.3). Personally, I think this method should not be used – unless we're certain that all relevant clients display the originating domain in some other way (reconstructed from the `Return-Path` or a new header). Otherwise, this defeats all the efforts towards domain authentication (SPF, DKIM, and DMARC). Introducing an additional header field which causes clients to simply hide the reply button seems the much better approach to me. The RFC comments on this aspect in [section 5](https://www.rfc-editor.org/rfc/rfc6854#section-5): > > Some protocols attempt to validate the originator address by matching the "From:" address to a particular verified domain (for one such protocol, see the Author Domain Signing Practices (ADSP) document [RFC5617]). 
Such protocols will not be applicable to messages that lack an actual email address (whether real or fake) in the "From:" field. Local policy will determine how such messages are handled, and senders, therefore, need to be aware that using groups in the "From:" might adversely affect deliverability of the message.
>
>

What a missed opportunity…
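For concreteness, a minimal message using the group syntax in the `From` field might look like this (all header values other than the group-syntax `From`, which is taken from the RFC's example, are illustrative):

```
From: Automated System:;
To: user@example.org
Subject: Weekly report
Date: Mon, 13 Feb 2017 10:00:00 +0000

This mailbox is not monitored.
```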
Difference between Entity and POCO This [article](http://msdn.microsoft.com/en-us/library/vstudio/dd456853%28v=vs.100%29.aspx) says:

> The Entity Framework enables you to use custom data classes together
> with your data model without making any modifications to the data
> classes themselves. This means that you can use "plain-old" CLR
> objects (POCO), such as existing domain objects, with your data model.
> These POCO data classes (also known as persistence-ignorant objects),
> which are mapped to entities that are defined in a data model, support
> most of the same query, insert, update, and delete behaviors as entity
> types that are generated by the Entity Data Model tools.

A POCO is a DTO with behavior. So is a POCO not the same as an entity? What is the difference?
Plain Old CLR Object ([POCO](https://stackoverflow.com/tags/poco/info)) has the same meaning as [Plain Old Java Object (POJO)](http://www.martinfowler.com/bliki/POJO.html). > > The term was coined while Rebecca Parsons, Josh MacKenzie and I were > preparing for a talk at a conference in September 2000. In the talk we > were pointing out the many benefits of encoding business logic into > regular java objects rather than using Entity Beans. We wondered why > people were so against using regular objects in their systems and > concluded that it was because simple objects lacked a fancy name. So > we gave them one, and it's caught on very nicely. > > > by Martin Fowler > > > **POCO is simply a regular object that has no references to any specific framework and does not follow their interfaces or restrictions.** POCO classes are persistence ignorant objects that can be used with any ORM. **[Entity](https://stackoverflow.com/tags/entity/info) is an object which has an identity and can be uniquely determined**. Entities represent domain model and domain logic. Usually they are designed as persistence ignorant POCO objects. But not every POCO object is an Entity. [Value Objects](http://martinfowler.com/bliki/ValueObject.html) are also designed as POCO objects and [they are not Entities.](http://devlicio.us/blogs/casey/archive/2009/02/13/ddd-entities-and-value-objects.aspx)
NGXS State not changing There might be someone who knows why NGXS state cannot change inside of HttpClient POST request. ``` ctx.patchState() ``` Only works outside HTTP POST request. ``` @Action(SignIn) signin(ctx: StateContext<AppStateModel>, { payload }: SignIn) { // ctx.patchState({isLoggedIn:true}) this works! return this.api$.signin(payload) .pipe( tap((user: User) => { console.log(user); ctx.patchState({ isLoggedIn: true }) }) ) .subscribe( (user: User) => { ctx.patchState({ isLoggedIn: true }) } ) } ```
Actually, the state is changing, but the action isn't seen as finished because you return a subscription to an observable that never completes. In other words, you'll see the dispatched action complete only once the returned observable completes.

As mentioned in the comments, the observable returned from an action handler is subscribed to behind the scenes, so there's no need to subscribe to it again.

That said, you can add `take(1)` to the pipe. It completes the subscription of the observable after the first emission:

```
@Action(SignIn)
signin(ctx: StateContext<AppStateModel>, { payload }: SignIn) {
    return this.api$.signin(payload)
        .pipe(
            take(1), // <-- Add that
            tap((user: User) => ctx.patchState({ isLoggedIn: true }))
        );
}
```
How to add a progress bar to the login activity while logging in? I'm creating an app to log in to parse.com and then browse through projects and other functions, but I'm not able to add a progress bar or anything similar. While the app is logging in, nothing happens on screen; I'm just waiting for it to log in and move to the next activity. This is my code for the login. Any help please?

```
import android.app.Activity;
import android.content.Intent;
import android.os.Bundle;
import android.view.View;
import android.view.View.OnClickListener;
import android.widget.Button;
import android.widget.EditText;
import android.widget.Toast;

import com.androidbegin.parselogintutorial.R;
import com.parse.LogInCallback;
import com.parse.ParseException;
import com.parse.ParseUser;

public class LoginActivity extends Activity {
    // Declare Variables
    Button loginbutton;
    String usernametxt;
    String passwordtxt;
    EditText password;
    EditText username;

    /** Called when the activity is first created. */
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // Get the view from login.xml
        setContentView(R.layout.login);
        // Locate EditTexts in login.xml
        username = (EditText) findViewById(R.id.username);
        password = (EditText) findViewById(R.id.password);

        // Locate Buttons in main.xml
        loginbutton = (Button) findViewById(R.id.login);

        // Login Button Click Listener
        loginbutton.setOnClickListener(new OnClickListener() {

            public void onClick(View arg0) {
                // Retrieve the text entered from the EditText
                usernametxt = username.getText().toString();
                passwordtxt = password.getText().toString();

                // Send data to Parse.com for verification
                ParseUser.logInInBackground(usernametxt, passwordtxt,
                        new LogInCallback() {
                            public void done(ParseUser user, ParseException e) {
                                // If user exists and is authenticated, send user to AddUserPage.class
                                if (user != null) {
                                    Intent intent = new Intent(
                                            LoginActivity.this,
                                            AddUserPage.class);
                                    startActivity(intent);
                                    Toast.makeText(getApplicationContext(),
                                            "Successfully Logged in",
                                            Toast.LENGTH_LONG).show();
                                    finish();
                                } else {
                                    Toast.makeText(getApplicationContext(),
                                            "No such user",
                                            Toast.LENGTH_LONG).show();
                                    username.setText("");
                                    password.setText("");
                                }
                            }
                        });
            }
        });
    }
}
```
Declare a progress dialog field: `private ProgressDialog mProgress;`

In `onCreate()`, initialize it (note that `ProgressDialog` takes a `Context`; inside an Activity you can pass `this`):

```
mProgress = new ProgressDialog(this); // 'this' is the Activity context
mProgress.setTitle("Processing...");
mProgress.setMessage("Please wait...");
mProgress.setCancelable(false);
mProgress.setIndeterminate(true);
```

Now show it when the button is clicked and dismiss it in the callback:

```
// Login Button Click Listener
loginbutton.setOnClickListener(new OnClickListener() {

    public void onClick(View arg0) {
        mProgress.show();
        // Retrieve the text entered from the EditText
        usernametxt = username.getText().toString();
        passwordtxt = password.getText().toString();

        // Send data to Parse.com for verification
        ParseUser.logInInBackground(usernametxt, passwordtxt,
                new LogInCallback() {
                    public void done(ParseUser user, ParseException e) {
                        // If user exists and is authenticated, send user to AddUserPage.class
                        if (user != null) {
                            mProgress.dismiss();
                            Intent intent = new Intent(
                                    LoginActivity.this, AddUserPage.class);
                            startActivity(intent);
                            Toast.makeText(getApplicationContext(),
                                    "Successfully Logged in",
                                    Toast.LENGTH_LONG).show();
                            finish();
                        } else {
                            mProgress.dismiss();
                            Toast.makeText(getApplicationContext(),
                                    "No such user",
                                    Toast.LENGTH_LONG).show();
                            username.setText("");
                            password.setText("");
                        }
                    }
                });
    }
});
```
Unity3D get animator controller current Animation time Hi all, I'm trying to build an app that reacts to scrolling. I have 3 different layers in the Unity Animator Controller, all with weight 1. The first layer contains the on-load animation, and layer 2 contains the on-scroll animation, which needs to play on top of the layer 1 animation along with it.

To drive the animation from scrolling, I have written a program that plays the layer 2 animation "take001" depending on how much scrolling has happened. Now I want to get the current time of the layer 2 animation. Find my code below and a screenshot of the layers I have created in Unity:

[Refer Img] : <https://i.stack.imgur.com/1huCM.jpg>

```
using UnityEngine;
using System.Collections;

public class MouseMovementScript : MonoBehaviour {

    Animator anim;
    AnimatorStateInfo stateInfo;
    AnimatorClipInfo[] myAnimatorClip;
    double speedBase = 1;

    void Start () {
        anim = GetComponent<Animator>();
        stateInfo = anim.GetCurrentAnimatorStateInfo(1);
        // Output the name of the starting clip
    }

    // Update is called once per frame
    void Update () {
        var d = Input.GetAxis("Mouse ScrollWheel");
        if (d > 0f) {
            Time.timeScale = 1;
            anim.SetFloat("Direction", 1.0f);
            anim.Play("take001");
            StartCoroutine(TestCoroutine(d));
            anim.Play("BoxAnimation001");
        } else if (d < 0f) {
            Time.timeScale = 1;
            anim.SetFloat("Direction", -1.0f);
        }

        // Cursor
        Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
        RaycastHit hit;
        if (Physics.Raycast(ray, out hit)) {
            if (Input.GetMouseButtonDown(0)) {
                if (hit.transform.tag == "Popup_1") {
                    Application.ExternalCall("OpenPopup", 0);
                } else if (hit.transform.tag == "Popup_2") {
                    Application.ExternalCall("OpenPopup", 1);
                } else if (hit.transform.tag == "Popup_3") {
                    Application.ExternalCall("OpenPopup", 2);
                } else if (hit.transform.tag == "Popup_4") {
                    Application.ExternalCall("OpenPopup", 3);
                }
            }
        }
    }

    IEnumerator TestCoroutine(float d) {
        yield return new WaitForSeconds(d);
        Time.timeScale = 0;
    }
}
```
The simplest way is to take the current animation state's normalized time modulo `1` and return the remainder:

```
public float GetCurrentAnimatorTime(Animator targetAnim, int layer = 0)
{
    AnimatorStateInfo animState = targetAnim.GetCurrentAnimatorStateInfo(layer);
    float currentTime = animState.normalizedTime % 1;
    return currentTime;
}
```

This works most of the time, but I've seen places where it didn't work as expected.

---

The proper way to do this is a bit more involved, because Unity doesn't give you direct access to the `AnimationClip` used by the `Animator`, and you need the `AnimationClip` to retrieve the current time by multiplying `AnimationClip.length` with `AnimatorStateInfo.normalizedTime`.

To do this, you have to keep a reference to each `AnimationClip` you use in a public variable. Create a dictionary that uses `Animator.StringToHash` as the key and the corresponding `AnimationClip` as the value. To obtain the current `AnimationClip`, pass `Animator.GetCurrentAnimatorStateInfo().fullPathHash` to the dictionary and it will give you the proper `AnimationClip`. You can then use this clip to obtain the current time by multiplying its length by `AnimatorStateInfo.normalizedTime`.
--- Your `AnimationClip` references: ``` public AnimationClip jumpClip; public AnimationClip moveClip; public AnimationClip lookClip; ``` Get the animation state hash for each Animation State: ``` const string animBaseLayer = "Base Layer"; int jumpAnimHash = Animator.StringToHash(animBaseLayer + ".Jump"); int moveAnimHash = Animator.StringToHash(animBaseLayer + ".Move"); int lookAnimHash = Animator.StringToHash(animBaseLayer + ".Look"); ``` Dictionary to link the each animation state hash with their AnimationClip: ``` Dictionary<int, AnimationClip> hashToClip = new Dictionary<int, AnimationClip>(); ``` Initialize the `Dictionary` in the `Awake` function: ``` void Awake() { hashToClip.Add(jumpAnimHash, jumpClip); hashToClip.Add(moveAnimHash, moveClip); hashToClip.Add(lookAnimHash, lookClip); } ``` Function to obtain `AnimationClip` from the animation state hash: ``` AnimationClip GetClipFromHash(int hash) { AnimationClip clip; if (hashToClip.TryGetValue(hash, out clip)) return clip; else return null; } ``` Finally, a function to get the current Animator time: ``` public float GetCurrentAnimatorTime(Animator targetAnim, int layer = 0) { AnimatorStateInfo animState = targetAnim.GetCurrentAnimatorStateInfo(layer); //Get the current animation hash int currentAnimHash = animState.fullPathHash; //Convert the animation hash to animation clip AnimationClip clip = GetClipFromHash(currentAnimHash); //Get the current time float currentTime = clip.length * animState.normalizedTime; return currentTime; } ``` **Usage**: ``` public Animator anim; void Update() { float time = GetCurrentAnimatorTime(anim, 0); Debug.Log(time); } ```
Split text string in a data.table columns I have a script that reads in data from a CSV file into a `data.table` and then splits the text in one column into several new columns. I am currently using the `lapply` and `strsplit` functions to do this. Here's an example: ``` library("data.table") df = data.table(PREFIX = c("A_B","A_C","A_D","B_A","B_C","B_D"), VALUE = 1:6) dt = as.data.table(df) # split PREFIX into new columns dt$PX = as.character(lapply(strsplit(as.character(dt$PREFIX), split="_"), "[", 1)) dt$PY = as.character(lapply(strsplit(as.character(dt$PREFIX), split="_"), "[", 2)) dt # PREFIX VALUE PX PY # 1: A_B 1 A B # 2: A_C 2 A C # 3: A_D 3 A D # 4: B_A 4 B A # 5: B_C 5 B C # 6: B_D 6 B D ``` In the example above the column `PREFIX` is split into two new columns `PX` and `PY` on the "\_" character. Even though this works just fine, I was wondering if there is a better (more efficient) way to do this using `data.table`. My real datasets have >=10M+ rows, so time/memory efficiency becomes really important. --- ### UPDATE: Following @Frank's suggestion I created a larger test case and used the suggested commands, but the `stringr::str_split_fixed` takes a lot longer than the original method. 
``` library("data.table") library("stringr") system.time ({ df = data.table(PREFIX = rep(c("A_B","A_C","A_D","B_A","B_C","B_D"), 1000000), VALUE = rep(1:6, 1000000)) dt = data.table(df) }) # user system elapsed # 0.682 0.075 0.758 system.time({ dt[, c("PX","PY") := data.table(str_split_fixed(PREFIX,"_",2))] }) # user system elapsed # 738.283 3.103 741.674 rm(dt) system.time ( { df = data.table(PREFIX = rep(c("A_B","A_C","A_D","B_A","B_C","B_D"), 1000000), VALUE = rep(1:6, 1000000) ) dt = as.data.table(df) }) # user system elapsed # 0.123 0.000 0.123 # split PREFIX into new columns system.time ({ dt$PX = as.character(lapply(strsplit(as.character(dt$PREFIX), split="_"), "[", 1)) dt$PY = as.character(lapply(strsplit(as.character(dt$PREFIX), split="_"), "[", 2)) }) # user system elapsed # 33.185 0.000 33.191 ``` So the `str_split_fixed` method takes about 20X times longer.
**Update:** From version 1.9.6 (on CRAN as of Sep 2015), we can use the function `tstrsplit()` to get the results directly (and in a much more efficient manner):

```
require(data.table) ## v1.9.6+
dt[, c("PX", "PY") := tstrsplit(PREFIX, "_", fixed=TRUE)]
#    PREFIX VALUE PX PY
# 1:    A_B     1  A  B
# 2:    A_C     2  A  C
# 3:    A_D     3  A  D
# 4:    B_A     4  B  A
# 5:    B_C     5  B  C
# 6:    B_D     6  B  D
```

`tstrsplit()` is basically a wrapper for `transpose(strsplit())`, where the `transpose()` function, also recently implemented, transposes a list. Please see `?tstrsplit()` and `?transpose()` for examples.

See history for old answers.
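For readers less familiar with R, the split-then-transpose idea behind `tstrsplit()` is language-agnostic. Here is the same operation sketched in Python, purely for illustration (this is not part of the `data.table` API):

```python
# The PREFIX column from the question, as a plain Python list
prefixes = ["A_B", "A_C", "A_D", "B_A", "B_C", "B_D"]

# strsplit(): split every string into its parts
parts = [p.split("_") for p in prefixes]   # [['A', 'B'], ['A', 'C'], ...]

# transpose(): turn the list of pairs into one list per new column
px, py = (list(col) for col in zip(*parts))

print(px)  # ['A', 'A', 'A', 'B', 'B', 'B']
print(py)  # ['B', 'C', 'D', 'A', 'C', 'D']
```

The transpose step is what turns "one split result per row" into "one vector per new column", which is exactly the shape `:=` needs on the right-hand side.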
List YouTube videos using C# and Google.Apis.YouTube.v3 I'm trying to perform some YouTube video interaction using the latest version of Google.Apis.YouTube.v3 (as of Jan 15, 2014). I have done a NuGet on the following: - Google.Apis.YouTube.v3 - Google.Apis.Authentication - Google.Apis.Drive.v2 (not necessary, but got it anyways) I then attempted to run the code found on: <https://developers.google.com/youtube/v3/docs/playlistItems/list> However, the code has the following references which I can't seem to find in any of the latest NuGet downloads... - `using Google.Apis.Auth.OAuth2.DotNetOpenAuth;` - `using Google.Apis.Samples.Helper;` Then there's the following comment at the top of the code, but the links lead me to nothing useful. `/* External dependencies, OAuth 2.0 support, and core client libraries are at: */` `/* https://code.google.com/p/google-api-dotnet-client/wiki/APIs#YouTube_Data_API */` `/* Also see the Samples.zip file for the Google.Apis.Samples.Helper classes at: */` `/* https://code.google.com/p/google-api-dotnet-client/wiki/Downloads */` I'm beginning to believe the best way to play with YouTube using C# is to use older versions of the YouTube.v3 codebase that coincide with examples folks have seemed to get working. Any help (esp from peleyal) would be much appreciated. Perhaps I'm missing something obvious and need to be beat over the head... BTW, I have downloaded my client secret json file and successfully run a few of the examples contained within the `google-api-dotnet-client-1.7.0-beta.samples.zip` file. However, strangely missing from that samples zip file are any YouTube samples. Also missing from that zip file is the Google.Apis.Samples.Helper classes. Does anyone have some useful example code for interacting with YouTube using the latest NuGet code as of Jan 14, 2014?
So after much research, digging and a little less hair, I figured out a few things.

First, log into the "Google Cloud Console". If you're using GAE (Google App Engine) and you click on your GAE project and enable the "YouTube Data API v3", you are guaranteed to get NOWHERE! Instead, back out of your GAE project, and create a new project called "API Project", for example. Then within **that** project, enable your desired APIs and you'll begin to get better results. Much better results.

Start first by trying a YouTube search. This lets you just insert your API key, so you don't have to mess with OAuth2, and it requires fewer DLLs, so it's a good place to start. Try something like the following:

```
YouTubeService youtube = new YouTubeService(new BaseClientService.Initializer()
{
    ApplicationName = "{yourAppName}",
    ApiKey = "{yourApiKey}",
});

SearchResource.ListRequest listRequest = youtube.Search.List("snippet");
listRequest.Q = "Loeb Pikes Peak";
listRequest.MaxResults = 5;
listRequest.Type = "video";

SearchListResponse resp = listRequest.Execute();
foreach (SearchResult result in resp.Items)
{
    CommandLine.WriteLine(result.Snippet.Title);
}
```

Feel free to replace `CommandLine` with regular `Console` print statements.

Next, move on to OAuth 2.0 and try to get your credentials to go through without erroring. You'll need to download your OAuth JSON file from the "Google Cloud Console" under the "Credentials" section. Once you have this file, replace any files named "client\_secrets.json" with the contents of the downloaded JSON file.

In order to get the authorization to work, I found that I was missing Microsoft.Threading.Tasks.Extensions.Desktop.dll, which is the DLL that allows the browser to open a window to grant access for the native application to muck with your YouTube acct. So if you have some errors during the authorization part, check the inner exception, and there's a chance that might be your issue as well.
Disclaimer: The bottom half of the code shown below was snarfed from: github.com/youtube/api-samples/blob/master/dotnet ``` UserCredential credential; using (FileStream stream = new FileStream("client_secrets.json", FileMode.Open, FileAccess.Read)) { credential = GoogleWebAuthorizationBroker.AuthorizeAsync( GoogleClientSecrets.Load(stream).Secrets, new[] { YouTubeService.Scope.Youtube, YouTubeService.Scope.YoutubeUpload }, "user", CancellationToken.None, new FileDataStore("YouTube.Auth.Store")).Result; } var youtubeService = new YouTubeService(new BaseClientService.Initializer() { HttpClientInitializer = credential, ApplicationName = Assembly.GetExecutingAssembly().GetName().Name }); var video = new Video(); video.Snippet = new VideoSnippet(); video.Snippet.Title = "Default Video Title"; video.Snippet.Description = "Default Video Description"; video.Snippet.Tags = new string[] { "tag1", "tag2" }; video.Snippet.CategoryId = "22"; // See https://developers.google.com/youtube/v3/docs/videoCategories/list video.Status = new VideoStatus(); video.Status.PrivacyStatus = "unlisted"; // or "private" or "public" var filePath = @"REPLACE_ME.mp4"; // Replace with path to actual movie file. using (var fileStream = new FileStream(filePath, FileMode.Open)) { var videosInsertRequest = youtubeService.Videos.Insert(video, "snippet,status", fileStream, "video/*"); videosInsertRequest.ProgressChanged += videosInsertRequest_ProgressChanged; videosInsertRequest.ResponseReceived += videosInsertRequest_ResponseReceived; videosInsertRequest.UploadAsync(); } ``` So there's my 2 cents worth. Also, you'll need to do a NuGet on DotNetOpenAuth and within your code, replace any "using" calls to Google.Apis.Auth.OAuth2.DotNetOpenAuth to just "using DotNetOpenAuth". Hopefully this helps others. The big thing was figuring out the GAE versus a new project. Once I figured that out, normal amounts of research started yielding results rather than pure frustration!!
Loop through CURL results I have the following script:

```
#!/bin/bash
for line in $('curl -s https://scans.io/data/rapid7/sonar.http/20141209-http.gz | zcat | head -n 1000 | dap json + select vhost + lines'); do
        echo "$line\n"
done
```

With it I am trying to achieve the following:

1. Loop through a dynamically growing list of results from the scans.io curl
2. Output each line, which I then propose to pass to PHP to store and process

However I'm getting `` syntax error near unexpected token `$'\r'' `` but I am not enough of a BASH expert to know what I need to tweak.
Use [Process-Substitution](http://mywiki.wooledge.org/ProcessSubstitution?highlight=%28Process%29%7C%28substitution%29) with a *while-loop*; see [why using a for-loop for command output parsing is bad](http://mywiki.wooledge.org/BashPitfalls#for_i_in_.24.28ls_.2A.mp3.29). Setting `IFS=` (empty) preserves leading and trailing whitespace in each line, and the `-r` flag stops `read` from letting backslashes escape characters, so strings are treated "as-is".

```
#!/bin/bash

while IFS= read -r line
do
    printf '%s\n' "$line"
done < <(curl -s https://scans.io/data/rapid7/sonar.http/20141209-http.gz | zcat | head -n 1000 | dap json + select vhost + lines)
```
GoogleService failed to initialize I am using Google Maps in my Android application. I have created the key and added the necessary permissions in the manifest file. But as soon as I start the application I get this message in the debugger:

> GoogleService failed to initialize, status: 10, Missing an expected resource: 'R.string.google\_app\_id' for initializing Google services. Possible causes are missing google-services.json or com.google.gms.google-services gradle plugin.

I am not sure what's wrong. The Map is working fine and I can use it without any issues. My Gradle file has this entry:

> compile 'com.google.android.gms:play-services:8.4.0'

What is it complaining about and how do I alleviate it?
You need to place the configuration file (`google-services.json`) generated by [developer.google.com](https://developers.google.com/mobile/add?platform=android), as mentioned in the 2nd step of the official docs [here](https://developers.google.com/identity/sign-in/android/start?hl=en)

The process is simple:

1. You can select your project or create a new one.
2. Then after selecting the desired services (in this case the Maps service), you can generate the configuration file.

> For people who have migrated to Firebase projects: you can get the same file by going to the [Firebase Console](https://console.firebase.google.com/), selecting your project, and looking under settings for the configuration file.

3. Then as quoted in step 3 of the official docs [here](https://developers.google.com/identity/sign-in/android/start?hl=en)

> Copy the `google-services.json` file you just downloaded into the app/ or mobile/ directory of your Android

**P.S.: For people downvoting this answer, please do leave a comment as to why you are downvoting it.**
MongoDB sort map elements when searching a particular document I have a query that returns a single document, but it has a map field. I would like to sort the map elements; is this possible? The data is something like this:

**Calendars:**

```
{
    "_id" : "1f5c0468-3249-4eee-ae65-79fc134f37c7",
    "countryId" : "60a562ba-43d1-48b3-bd07-7aebbb2c0123",
    "name" : "Bangladesh Holiday 2019",
    "year" : 2019,
    "holidays" : {
        "2019-02-21" : "Language Martyrs' Day",
        "2019-03-17" : "Sheikh Mujibur Rahman's birthday",
        "2019-03-26" : "Independence Day",
        "2019-04-14" : "Bengali New Year",
        "2019-04-21" : "Shab e-Barat",
        "2019-05-01" : "May Day",
        "2019-05-19" : "Buddha Purnima",
        "2019-05-31" : "Jumatul Bidah",
        "2019-06-01" : "Night of Destiny",
        "2019-06-02" : "Night of Destiny",
        "2019-06-03" : "Compensate leave for Night of Destiny",
        "2019-06-04" : "Eid al-Fitr",
        "2019-06-05" : "Eid ul-Fitr",
        "2019-06-06" : "Eid ul-Ftr Holiday",
        "2019-08-09" : "Compensate leave for Eid ul-Adha",
        "2019-08-11" : "Eid ul-Adha Day 1",
        "2019-08-12" : "Eid ul-Adha Day 2",
        "2019-08-13" : "Eid ul-Adha Day 3",
        "2019-08-15" : "National Mourning Day",
        "2019-08-23" : "Janmashtami",
        "2019-09-10" : "Ashura",
        "2019-10-08" : "Durga Puja",
        "2019-11-10" : "Eid e-Milad-un Nabi",
        "2019-12-16" : "Victory Day",
        "2019-12-25" : "Christmas Day"
    }
}
```

The simple query to get a particular document is like this:

```
db.getCollection('calendars').find({ "_id" : "1f5c0468-3249-4eee-ae65-79fc134f37c7" })
```

Now, is it possible to sort the map of holidays by date?
Based on your sample JSON (as far as I understood):

- The date is the key (we need some mechanism to convert the key into a value)
- The date is a string (we have to convert the string date to a Date object)
- Then sort on the date
- Wrap the whole structure up again, same shape as the sample JSON, but sorted

Below is an aggregation pipeline that does this:

```
db.Collection.aggregate([
    { $match: { "_id" : "1f5c0468-3249-4eee-ae65-79fc134f37c7" }}, // Let's fetch particular document
    { $project: {
        countryId: 1,
        name: 1,
        year: 1,
        holidays: { $objectToArray: "$holidays" }, // Let's convert Object to Array first
    }},
    { $unwind: "$holidays" }, // Let's unwind so we can easily convert String date to Date object
    { $addFields: {
        convertedDate: { $toDate: "$holidays.k" } // Let's convert string date to date object
    }},
    { $sort: { "convertedDate": -1 }}, // Let's sort on the converted date (-1 or 1)
    { $group: { // Let's wrap the unwinded documents back up into a single one
        _id: "$_id",
        countryId: {$first: "$countryId"},
        name: {$first: "$name"},
        year: {$first: "$year"},
        holidays: {$push: "$holidays"}
    }},
    { $project: { // Let's convert holidays back to an object, but this time it is sorted
        countryId: 1,
        name: 1,
        year: 1,
        holidays: { $arrayToObject: "$holidays" }
    }}
])
```

**Output:**

```
{
    "_id" : ObjectId("5d4be8738ac090b7119314a3"),
    "countryId" : "60a562ba-43d1-48b3-bd07-7aebbb2c0123",
    "name" : "Bangladesh Holiday 2019",
    "year" : 2019,
    "holidays" : {
        "2019-12-25" : "Christmas Day",
        "2019-12-16" : "Victory Day",
        "2019-11-10" : "Eid e-Milad-un Nabi",
        "2019-10-08" : "Durga Puja",
        "2019-09-10" : "Ashura",
        "2019-08-23" : "Janmashtami",
        "2019-08-15" : "National Mourning Day",
        "2019-08-13" : "Eid ul-Adha Day 3",
        "2019-08-12" : "Eid ul-Adha Day 2",
        "2019-08-11" : "Eid ul-Adha Day 1",
        "2019-08-09" : "Compensate leave for Eid ul-Adha",
        "2019-06-06" : "Eid ul-Ftr Holiday",
        "2019-06-05" : "Eid ul-Fitr",
        "2019-06-04" : "Eid al-Fitr",
        "2019-06-03" : "Compensate leave for Night of Destiny",
        "2019-06-02" : "Night of Destiny",
        "2019-06-01" : "Night of Destiny",
        "2019-05-31" : "Jumatul Bidah",
        "2019-05-19" : "Buddha Purnima",
        "2019-05-01" : "May Day",
        "2019-04-21" : "Shab e-Barat",
        "2019-04-14" : "Bengali New Year",
        "2019-03-26" : "Independence Day",
        "2019-03-17" : "Sheikh Mujibur Rahman's birthday",
        "2019-02-21" : "Language Martyrs' Day"
    }
}
```
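If the document is already loaded on the client, the same reshaping can also be done application-side instead of in the database. A Python sketch of the pipeline's logic on a trimmed-down `holidays` map (parsing the key mirrors the `$toDate` stage, though ISO `YYYY-MM-DD` keys would also sort correctly as plain strings):

```python
from datetime import date

# A trimmed-down version of the document's holidays map
holidays = {
    "2019-02-21": "Language Martyrs' Day",
    "2019-12-25": "Christmas Day",
    "2019-05-01": "May Day",
}

# $objectToArray -> sort on the parsed date, descending -> $arrayToObject
ordered = dict(
    sorted(holidays.items(), key=lambda kv: date.fromisoformat(kv[0]), reverse=True)
)

print(list(ordered))  # ['2019-12-25', '2019-05-01', '2019-02-21']
```

This relies on Python dicts preserving insertion order (Python 3.7+), which plays the role of the final `$arrayToObject` projection.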
How can I increase the size of a node proportional to the number of indegrees in Gephi? Below is an example of what I am trying to achieve. It should be quite simple, and some Gephi experts could help me out. I want the node size to increase in proportion to the number of in-degrees of that node. This is obviously a directed graph, as you might have figured.

![enter image description here](https://i.stack.imgur.com/j97jb.png)
As you expect, this is a common Gephi ranking feature.

- Go to the ranking dock (normally at the top-left of the UI, coupled with the partition dock); you can find screenshots and details at <http://wiki.gephi.org/index.php/GUI_Description>
- Choose the node tab in order to rank by node, then choose the "red diamond" icon in order to rank by size.
- In the dropdown menu choose InDegree as the rank parameter (if you don't see the InDegree parameter, see further on in this answer).
- Set min size and max size as you wish,
- select the range of the InDegree values to consider (usually the complete range).
- If you want to increase the size proportionally to indegree in a linear way, leave the spline as default; otherwise you can curve the correlation.

That's it, push apply and you'll have it.

If you don't see the InDegree parameter you need to calculate it first. To do this:

- focus on the statistics dock (bottom-right of the GUI).
- run the Average Degree statistic (which, if your network is directed, calculates degree, indegree and outdegree).
- to be sure your network is directed, focus on the context dock (upper-right of the UI). It says number of nodes, number of edges, and type of graph (directed or undirected).
- after calculating the degree values, Gephi associates them to the nodes and you can use them as ranking parameters.
Zig-zag scan an N x N array I have a simple array. The array length is always a perfect square: 16, 25, 36, etc.

```
$array = array('1', '2', '3', '4' ... '25');
```

What I do is arrange the array with HTML so that it looks like a block with even sides.

![](https://i.stack.imgur.com/6d1lx.jpg)

What I want to do is sort the elements, so that when I pass the JSON-encoded array to jQuery, it will iterate the array, fade in the current block, and so I'd get a sort of wave animation. So I'd like to sort the array kind of like this

![](https://i.stack.imgur.com/CHIiE.jpg)

So my sorted array would look like

```
$sorted = array('1', '6', '2', '3', '7', '11', '16', '12' .. '25');
```

Is there a way to do so? Thanks
Here's mine. ``` function waveSort(array $array) { $dimension = pow(count($array),0.5); if((int)$dimension != $dimension) { throw new InvalidArgumentException(); } $tempArray = array(); for($i = 0; $i < $dimension; $i++) { $tempArray[] = array_slice($array,$i*$dimension,$dimension); } $returnArray = array(); for($i = 0; $i < $dimension * 2 -1; $i++) { $diagonal = array(); foreach($tempArray as $x => $innerArray) { if($i - $x >= 0 && $i - $x < $dimension) { $diagonal[] = $innerArray[$i - $x]; } } if($i % 2 == 1) { krsort($diagonal); } $returnArray = array_merge($returnArray,$diagonal); } return $returnArray; } ``` Usage: ``` <?php $a = range(1,25); var_dump(waveSort($a)); ``` Output ``` array(25) { [0]=> int(1) [1]=> int(6) [2]=> int(2) [3]=> int(3) [4]=> int(7) [5]=> int(11) [6]=> int(16) [7]=> int(12) [8]=> int(8) [9]=> int(4) [10]=> int(5) [11]=> int(9) [12]=> int(13) [13]=> int(17) [14]=> int(21) [15]=> int(22) [16]=> int(18) [17]=> int(14) [18]=> int(10) [19]=> int(15) [20]=> int(19) [21]=> int(23) [22]=> int(24) [23]=> int(20) [24]=> int(25) } ```
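For readers outside PHP, here is the same anti-diagonal walk sketched in Python. The index trick is that every cell on one anti-diagonal shares the same `row + col` sum, and every other diagonal is reversed:

```python
import math

def wave_sort(flat):
    """Return the elements of a flat list of length n*n in zig-zag
    (anti-diagonal) order, reversing every other diagonal."""
    n = math.isqrt(len(flat))
    if n * n != len(flat):
        raise ValueError("length must be a perfect square")
    # reshape the flat list into an n x n grid, row by row
    grid = [flat[i * n:(i + 1) * n] for i in range(n)]
    out = []
    for d in range(2 * n - 1):            # anti-diagonal index: d = row + col
        diag = [grid[r][d - r] for r in range(n) if 0 <= d - r < n]
        if d % 2 == 1:                    # flip every other diagonal
            diag.reverse()
        out.extend(diag)
    return out

print(wave_sort(list(range(1, 26))))
# [1, 6, 2, 3, 7, 11, 16, 12, 8, 4, 5, 9, 13, 17, 21, 22, 18, 14, 10, 15, 19, 23, 24, 20, 25]
```

This produces the same ordering as the PHP answer above for the 5x5 case from the question.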
Scala Tuple type inference in Java This is probably a very noobish question, but I was playing a bit with Scala/Java interaction, and was wondering how well Tuples play along.

Now, I know that the `(Type1, Type2)` syntax is merely syntactic sugar for `Tuple2<Type1, Type2>`, and so, when calling a Scala method that returns a Tuple2 in a plain Java class, I was expecting to get a return type of `Tuple2<Type1, Type2>`

For clarity, my Scala code:

```
def testTuple:(Int,Int) = (0,1)
```

Java code:

```
Tuple2<Object,Object> objectObjectTuple2 = Test.testTuple();
```

It seems the compiler expects this to be of parameterized types `<Object,Object>`, instead of, in my case, `<Integer,Integer>` (this is what I was expecting, at least).

Is my thinking deeply flawed and is there a perfectly reasonable explanation for this?

OR

Is there a problem in my Scala code, and is there a way of being more... explicit, in the cases where I know I will provide an API for Java code?

OR

Is this simply a limitation?
`Int` is Scala's integer type, which [is a value class](http://docs.scala-lang.org/tutorials/tour/unified_types.html), so it gets special treatment. It is different from `java.lang.Integer`. You can specify `java.lang.Integer` specifically if that's what you need. ``` [dlee@dlee-mac scala]$ cat SomeClass.scala class SomeClass { def testIntTuple: (Int, Int) = (0, 1) def testIntegerTuple: (java.lang.Integer, java.lang.Integer) = (0, 1) } [dlee@dlee-mac scala]$ javap SomeClass Compiled from "SomeClass.scala" public class SomeClass implements scala.ScalaObject { public scala.Tuple2<java.lang.Object, java.lang.Object> testIntTuple(); public scala.Tuple2<java.lang.Integer, java.lang.Integer> testIntegerTuple(); public SomeClass(); } ```
Testing two environments with jest I'd like to set two different environments and be able to run both in watch mode. ``` |-- /server | |-- index.js <- Node |-- /client | |-- index.js <- jsdom |-- package.json ``` Actually I run jest twice for each environment, providing a different config file for each: ``` $ yarn test -- --config=server.config.json $ yarn test -- --config=client.config.json ``` But this doesn't let me run both at the same time.
**EDIT (Jan 2018):** It is now possible to do so (since Jest v20), and the option is called `projects`. Read more about it in [the docs](https://jestjs.io/docs/en/configuration.html#projects-arraystring--projectconfig).

Basically you can define an array of the projects you want Jest to run within:

```
{
  "projects": ["<rootDir>/client", "<rootDir>/server", "<rootDir>/some-glob/*"]
}
```

Just remember every project needs to have its own config. If you want the config to be picked up automatically, put it inside a `jest.config.js` file or, as usual, in `package.json`. If you prefer placing your config somewhere else (e.g. in `configs/jest.js`), you'll need to point to the path of the config file (with the `rootDir` option set properly):

```
{
  "projects": ["<rootDir>/client/configs/jest.js", "<rootDir>/server/configs/jest.js"]
}
```

**ORIGINAL ANSWER:**

Currently this is not possible, but there's an issue for that case: <https://github.com/facebook/jest/issues/1206>. Feel free to jump in and leave a comment!
Meteor findOne query returns undefined in one template helper. In other template helpers, same query works well Suppose I have a Meteor collection called `GameStatus`. I have users with different roles, but I publish the GameStatus collection for all users. I simply use the following in server/publications.coffee ``` Meteor.publish 'gamestatus', -> GameStatus.find() ``` For two of the roles ('S' and 'B') I have no problem when I use the following Template helper (defined in files **client/views/seller.coffee** and **client/views/buyer.coffee**) ``` currentRound: -> return GameStatus.findOne().currentRound ``` For these I never get the following error. ``` Uncaught TypeError: Cannot read property 'currentRound' of undefined ``` But for another role ('admin'), using the same template helper (defined in file **client/views/admin.coffee**) gives the above show Uncaught TypeError. It works if I instead write: ``` currentRound: -> return GameStatus.findOne()?.currentRound ``` I sort of understand why this works. I think, the collection is first not available when the page is being loaded, then it becomes available. But why does this not happen for other templates shown above? Would really appreciate if someone can help clarify this.
I believe exactly when a collection is ready won't always be consistent, so if you want to cover all your bases, always code for the case where a collection is not ready. There's a quick-and-dirty way of dealing with collections that aren't ready, and a more sophisticated solution you can find in the todos example.

The quick and dirty solution would look like this:

```
currentRound: ->
  gameStatusRecord = GameStatus.findOne()
  if gameStatusRecord
    gameStatusRecord.currentRound
```

This will work. Until the collection is ready, currentRound will return null, and your template will briefly render and probably just show a blank for the current round. So not an ideal user experience, but not a huge deal.

For a more sophisticated solution, you can check whether a collection that you have subscribed to is ready to be queried using the "ready" function. If a collection is not ready, you can render some other template, such as "loading", which guarantees that the currentRound helper won't ever be called until the collection is ready.

For instance, in the todos example, the client subscribes to the "lists" collection on line 24 of todos.js:

```
var listsHandle = Meteor.subscribe('lists', function () {
```

Then defines a helper function for the lists template on line 80 of todos.js

```
Template.lists.loading = function () {
    return !listsHandle.ready();
};
```

Then in the lists template in todos.html line 20, it doesn't try to render any templates unless the listsHandle is ready.

```
<h3>Todo Lists</h3>
{{#if loading}}
    <div id="lists">Loading...</div>
{{else}}
    <div id="lists">
        {{#each lists}}
        <!--etc.-->
```
Getting nil in parsing Firebase values swift using Codable and CodableFirebase I am using Firebase Realtime Database, using codable approach in swift and external library CodableFirebase. I have created model structure but when I am trying to parse values (as i am getting all values) with model structure it gives me nil. My database has keys which might I am not properly handling in nested values. Please help. Thanks database structure snapshot attached. [![database](https://i.stack.imgur.com/NN7nI.png)](https://i.stack.imgur.com/NN7nI.png) Code: ``` Database.database().reference().child("users").observeSingleEvent(of: .value, with: { (snapshot) in guard let value = snapshot.value as? [String: Any] else { return } do { let friendList = try FirebaseDecoder().decode(Response.self, from: value) guard let conversationUid = value["conversationUid"] as? String, let friendStatus = value["friendStatus"] as? String, let notify = value["notify"] as? Bool, let phNumber = value["phoneNumber"] as? String, let uid = value["uid"] as? String else { return } } catch let error { print(error) } }) ``` JSON: ``` { "FTgzbZ9uWBTkiZK9kqLZaAIhEDv1" : { "friends" : { "zzV6DQSXUyUkPHgENDbEjXVBj2" : { "conversationUid" : "-L_w2yi8gh49GppDP3r5", "friendStatus" : "STATUS_ACCEPTED", "notify" : true, "phoneNumber" : "+9053", "uid" : "zzV6DQSXUyUkPHgEZ9EjXVBj2" } }, "lastLocation" : { "batteryStatus" : 22, "latitude" : 48.90537, "longitude" : 28.042, "timeStamp" : 1556568633477, "uid" : "FTgzbZ9uWkiZK9kqLZaAIhEDv1" }, "profile" : { "fcmToken" : "fp09-Y9ZAkQ:APA91bFgGBsyFx0rtrz7roxzpE_MmuSaMc4is-XIu7j718qjRVCSHY4PvbNjL1LZ-iytaeDKviIRMH", "name" : "Mt Bt", "phoneNumber" : "+90503", "uid" : "FTgzbZ9uWBTkiZLZaAIhEDv1" } } ``` Model: ``` struct Response : Codable { let friends : Friend? let lastLocation : LastLocation? let profile : Profile? } struct Friend: Codable { let converstionUid: String? let friendStatus: String? let notify: Bool? let phoneNumber: String? let uid: String? 
} struct Profile : Codable { let fcmToken : String? let name : String? let phoneNumber : String? let uid : String? } struct LastLocation : Codable { let batteryStatus : Int? let latitude : Float? let longitude : Float? let timeStamp : Int? let uid : String? } ```
Your code reads the entire `users` node, and then tries to read the `conversationUid` and other properties from that node. Since these properties don't exist **directly under the `users` node**, you get null.

To properly parse this JSON, you'll need to navigate the three levels of child nodes before you try to read named properties like `conversationUid`.

```
Database.database().reference().child("users").observeSingleEvent(of: .value, with: { (snapshot) in
    for userSnapshot in snapshot.children.allObjects as! [DataSnapshot] {
        let friendsSnapshot = userSnapshot.childSnapshot(forPath: "friends")
        for friendSnapshot in friendsSnapshot.children.allObjects as! [DataSnapshot] {
            guard let value = friendSnapshot.value as? [String: Any] else { return }
            do {
                guard let conversationUid = value["conversationUid"] as? String,
                ...
```

The above code first loops over the first-level child nodes under `/users`, and then loops over the children of the `friends` node for each user.
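The shape of the traversal is easier to see outside Swift. Here is the same walk sketched in Python over the question's JSON (keys shortened, purely illustrative):

```python
# The question's JSON as a Python dict (keys shortened for readability)
users = {
    "FTgzbZ9uW...": {
        "friends": {
            "zzV6DQSX...": {
                "conversationUid": "-L_w2yi8gh49GppDP3r5",
                "friendStatus": "STATUS_ACCEPTED",
            }
        },
    }
}

conversation_uids = []
for user in users.values():                          # level 1: each user under /users
    for friend in user.get("friends", {}).values():  # level 2: each friend of that user
        conversation_uids.append(friend["conversationUid"])  # level 3: the named fields

print(conversation_uids)  # ['-L_w2yi8gh49GppDP3r5']
```

The point is that `conversationUid` only exists two dictionary levels below a user entry, which is why reading it straight off the `users` node returns nothing.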
How do I save user input as the name of a class instance? I'm trying to use a user-inputted string as the name for a class instance. In this example, I'm trying to use the user input to name the class instance `player1`. However, it is not letting me because `player1` is already defined when I set it as an instance of the `players` class. ``` System.out.println("Enter your name, player1: "); Scanner input = new Scanner(System.in); //the user enters their name String player1 = input.next(); players player1 = new players(); ```
Without pointing out the obvious about the variable names, I'll take a different approach to answering. Maybe you want to take an input and actually set it as the player's name, in an OOP way. You obviously have a class `player`, so why not take in a `name` argument in the constructor:

```
public class Player {
    private String name;

    public Player(String name){
        this.name = name;
    }

    public String getName(){
        return name;
    }
}
```

Then when you get the input you do this

```
String playerName = input.nextLine();
Player player1 = new Player(playerName);
```

Now when you create multiple `Player`s they will each have a distinct `name`

---

Also you should follow Java naming convention. Class names start with capital letters

---

**UPDATE**

You need to create a new Player for every instance

```
String playerName = input.nextLine();
Player player1 = new Player(playerName);

playerName = input.nextLine();
Player player2 = new Player(playerName);

playerName = input.nextLine();
Player player3 = new Player(playerName);

playerName = input.nextLine();
Player player4 = new Player(playerName);
```
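If the underlying goal is to look a player up later by the name the user typed, the usual pattern in any language is a map keyed by that name rather than a dynamically named variable. A quick Python sketch of the idea (the names here are made-up stand-ins for user input):

```python
class Player:
    def __init__(self, name):
        self.name = name

# A dictionary keyed by the entered name replaces player1, player2, ...
players = {}
for name in ("Alice", "Bob"):   # stand-ins for values read from input
    players[name] = Player(name)

print(players["Alice"].name)  # Alice
```

In Java the equivalent would be a `Map<String, Player>`; either way, the user's input becomes a key, not a variable name.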
passing array of objects to a component in vue.js I am having a problem passing an array of objects to a component in Vue.js 2.2. Here is my component:

```
<vue-grid :fields = "[ {name: 'Person Name', isSortable: true}, {name: 'Country', isSortable: true}]" ></vue-grid>
```

It doesn't work, as it renders the curly braces in the browser. I've tried without the quotation `"` around the object and without the colon `:` in front of the `fields` property. Neither of these works either. However, if I just pass a simple string, that works. I don't know why an object is not working. I have found a similar [question](https://stackoverflow.com/questions/41577211/passing-an-array-to-a-component-in-vue-js-2-0) but the answer was given for PHP. I need the solution just for JavaScript. I want to hard-code the object array in the component.
You are passing it correctly. You must have something else happening behind the scenes. Ensure your template has a wrapping element. [See this fiddle](https://jsfiddle.net/yqo5cjpc/4/) ``` <div id="vue-app"> <h2> Vue App </h2> <vue-grid :fields = "[ {name: 'Person Name', isSortable: true}, {name: 'Country', isSortable: true}]" ></vue-grid> </div> <script id="vue-grid-template" type="text/x-template"> <div> <h3>Grid</h3> <div class="grid"> Fields are: <ul> <li v-for="field in fields"> {{field.name}} - {{field.isSortable}} </li> </ul> </div> </div> </script> <script> Vue.component('vue-grid', { props: ['fields'], template: '#vue-grid-template' }); new Vue({ el: '#vue-app' }); </script> ```
Correctly implementing (concurrent) asyncio for multiple functions I have a couple of functions whose execution does not depend on each other. What I am trying to do is execute them concurrently instead of sequentially (synchronously). I have added an event loop as well, but I am not able to figure out if it is working correctly or not.

This is the implementation:

**File 1:**

```
import file2

def funcA():
    a, b = 1, 2
    file2.main(a, b)
```

**File2:**

```
def main(a, b):
    asyncio.get_event_loop().run_until_complete(_main(a, b))

async def _main(a, b):
    out1 = await funcA(a, b)
    out2 = await funcB(a, b)
    out3 = await funcC(a, b)

async def funcA(a, b):
    result = 1  # some processing done here
    return result

async def funcB(a, b):
    result = 1  # some processing done here
    return result

async def funcC(a, b):
    result = 1  # some processing done here
    return result
```

I am not able to figure out if these are running concurrently or not. If I add `time.sleep(10)` in any function, execution stops there. I don't want them to run in the background, as I **need the output from those functions**.
One way to do what you want would be to use `asyncio.run()` in `main` and then `gather` in the async version of main. To simulate long processing, use `asyncio.sleep()` See the following code: ``` import asyncio def main(a, b): res = asyncio.run(async_main(a, b)) print(f"in main, result is {res}") async def funcA(a, b): print('funcA - start') await asyncio.sleep(3) result = (a+b) # some processing done here print('funcA - end') return result async def funcB(a, b): print('funcB - start') await asyncio.sleep(3) result = (a+b)*2 # some processing done here print('funcB - end') return result async def funcC(a, b): print('funcC - start') await asyncio.sleep(3) result = (a+b)*3 # some processing done here print('funcC - end') return result async def async_main(a, b): print("in async_main") res = await asyncio.gather(funcA(a, b), funcB(a, b), funcC(a, b)) print(f"in async_main, result is {res}") return res if __name__ == "__main__": main(1, 2) ``` The result is: ``` in async_main funcA - start funcB - start funcC - start funcA - end funcB - end funcC - end in async_main, result is [3, 6, 9] in main, result is [3, 6, 9] ```
Literal vs Constructor notation for primitives, which is more proper for starters? So I am a TA for a class at my University, and I have a bit of a disagreement about how to present datatypes to absolute beginner programmers (most of whom have never programmed before). My teacher tells students they must strictly use constructors to create primitive datatypes such as Numbers and Strings; her reasoning is to treat JavaScript as if it's strongly typed, so students will be used to the strongly typed languages they'll encounter in the future. I understand why, but I think it has bad trade-offs.

```
var num = new Number(10); // This is encouraged.
var num = 10; // This is discouraged (students will lose points for doing this).
```

My instructor does not make a distinction between these, and students are told to treat them as if they are primitive Numbers, Strings, etc. But students at this stage don't know to call `datatype.valueOf()` where necessary, and don't know much at all about what objects are yet.

Literal notation would be (and I consider it to be) more proper and standard; the other way causes confusion, since there are consistency issues with constructor notation because the results are objects (and I don't want students to worry about that). For example, these don't make sense to beginners:

```
var num1 = new Number(1);
var num2 = new Number(1);

if(num1 === num2) ... ; // Does not run.
if(num1 == num2) ... ; // Does not run.
if(num1 == 1) ... ; // But this does.

var num2 = new Number(2);
if(num1 < num2) ... ; // So does this.

switch(num1){
    case 1:
        ... // Does not run.
        break;
    default:
        ... // This runs
        break;
}
```

As you can see, this would be confusing for someone just learning what an `if statement` is. I feel as if she is encouraging bad practice and discouraging good practice. So what do you think: between literal and constructor notation for primitive values, which is considered more standard/proper, and which is better for beginners to use?
As someone who spent extra time creating a `main` function in every Python program so that we'd be more prepared for Java's `public static void main` when the time came, I know there is a place for slightly-less-than-best practices when it comes to learning how to program. Now a teacher myself, I can say that using constructor functions in JavaScript is not that place.

First, it results in misbehavior with control flow, an essential part of the beginning steps of programming. Secondly, it does the students a disservice by mis-teaching an essential language in the web developer's toolkit. Finally, *it does not prepare one for constructors!*

As a side note, JavaScript does not easily fall into any typical programming paradigm, and as such is unsuitable as a first language in a four-year college curriculum (in this author's opinion).

# Constructor functions block understanding of control flow

Firstly, let's look at control flow. Without control flow, a beginner may never be able to construct anything more complex than `Hello world`. It is **absolutely essential** for a beginner to have a solid understanding of each item in the control flow basket. A good instructor will provide several examples for each type, as well as a full explanation of the difference between `true` and `truthy`.

Constructor functions completely ruin a beginner's understanding of control flow, as in the following example:

```
var myBool = new Boolean(false);

if (myBool) {
    console.log('yes');
}
// 'yes' is logged to the console.
```

Yes, this is one of JavaScript's ['wat'](https://www.destroyallsoftware.com/talks/wat) moments. Should the language act like this? Probably not. Does it? Absolutely. Beginners need to see simple examples that *make sense* to the uninitiated mind. There is no brain space for edge cases when first starting out. Examples like this only do a disservice to those the professor is supposed to be teaching.
# Constructor functions have no place in JavaScript

This one's a bit more opinion-based. A caveat: JavaScripters may find some use for constructor functions in converting between types (i.e. `Number('10')`), but there are usually better ways.

Virtually every time a constructor function is used in JavaScript, it's a misuse. I am of the mind that if one wants a literal number, just write it out. No need for all the extra typing, not to mention the various type conflicts and [hidden gotchas that come from using constructors](https://stackoverflow.com/a/369450/1216976).

It's worth noting that both JSLint and JSHint (written by people much smarter than I) discourage the use of constructors in this fashion.
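To make those gotchas concrete, here is a quick sketch (runnable in any modern JS engine) of how the wrapper objects diverge from primitives, and how the same functions behave as plain converters when called without `new`:

```javascript
// Wrapper objects have typeof 'object', not 'number':
console.log(typeof new Number(1)); // 'object'
console.log(typeof 1);             // 'number'

// Two wrappers are never equal to each other, even loosely,
// because == compares object identity here:
console.log(new Number(1) == new Number(1)); // false

// Called WITHOUT `new`, Number acts as a conversion function
// and returns a primitive - usually what is actually wanted:
console.log(typeof Number('10')); // 'number'
console.log(Number('10') === 10); // true
```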
Get the image and SHA image ID of images in a pod on a Kubernetes deployment

How can I get the image ID (the docker sha256 hash) of an image / container within a Kubernetes deployment?
Something like this will do the trick (you must have `jq` installed):

```
$ kubectl get pod --namespace=xx yyyy -o json | jq '.status.containerStatuses[] | { "image": .image, "imageID": .imageID }'
{
  "image": "nginx:latest",
  "imageID": "docker://sha256:b8efb18f159bd948486f18bd8940b56fd2298b438229f5bd2bcf4cedcf037448"
}
{
  "image": "eu.gcr.io/zzzzzzz/php-fpm-5:latest",
  "imageID": "docker://sha256:6ba3fe274b6110d7310f164eaaaaaaaaaa707a69df7324a1a0817fe3b475566a"
}
```
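If `jq` isn't an option, the same fields can be pulled out of the `kubectl ... -o json` output with a few lines of Python. A minimal sketch, assuming the pod JSON has the shape shown above (a stub string stands in for the real `kubectl` call):

```python
import json

# Stub standing in for: kubectl get pod --namespace=xx yyyy -o json
pod_json = """
{"status": {"containerStatuses": [
    {"image": "nginx:latest",
     "imageID": "docker://sha256:b8efb18f159bd948486f18bd8940b56fd2298b438229f5bd2bcf4cedcf037448"}
]}}
"""

pod = json.loads(pod_json)
for cs in pod["status"]["containerStatuses"]:
    print(cs["image"], cs["imageID"])
```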
How to get the URL fragment identifier from HttpServletRequest

How do I get the URL fragment identifier from `HttpServletRequest`? The [javadocs](http://docs.oracle.com/javaee/6/api/javax/servlet/http/HttpServletRequest.html) don't seem to mention it.
You can't get the URL fragment in the way you'd like. Typically, the browser doesn't send the fragment to the server. This can be verified by using a network protocol analyser like tcpdump, [Ethereal](http://www.ethereal.com/), [Wireshark](http://www.wireshark.org/), or [Charles](http://www.charlesproxy.com/).

However, you can send the fragment string as a GET/POST parameter from JavaScript. To get the value using JavaScript, use `window.location.hash`. You can then pass this value as a GET or POST parameter in an AJAX request, and use the `getParameter` methods on the `HttpServletRequest` for that AJAX request.

Here's what [RFC 3986: Uniform Resource Identifier (URI): Generic Syntax](https://www.rfc-editor.org/rfc/rfc3986#section-3.5) has to say:

> The fragment identifier is separated from the rest of the URI prior to a dereference, and thus the identifying information within the fragment itself is dereferenced solely by the user agent, regardless of the URI scheme. Although this separate handling is often perceived to be a loss of information, particularly for accurate redirection of references as resources move over time, it also serves to prevent information providers from denying reference authors the right to refer to information within a resource selectively. Indirect referencing also provides additional flexibility and extensibility to systems that use URIs, as new media types are easier to define and deploy than new schemes of identification.
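A minimal client-side sketch of that approach; the `/my-servlet` path and the `fragment` parameter name are placeholders, and the servlet would read the value with `request.getParameter("fragment")`:

```javascript
// Build a query-string parameter carrying the fragment.
// `hash` is what window.location.hash returns, e.g. '#section-2'.
function fragmentParam(hash) {
  return 'fragment=' + encodeURIComponent(hash.replace(/^#/, ''));
}

// In the browser this would accompany an AJAX request, e.g.:
//   fetch('/my-servlet?' + fragmentParam(window.location.hash));
console.log(fragmentParam('#section 2')); // fragment=section%202
```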
How to sort array of objects where keys are dates

I have searched for this question and no existing answer seems to apply. Consider the following:

```
[
  { 'August 17th 2016': [75] }, // 75 is the length of the array which contains up to 75 objects
  ...
  { 'August 1st 2016': [5] },
  { 'August 28th 2016': [5] },
  ...
]
```

What is the best way to sort the objects in this array by their date and still keep the "english" representation of their key?

**Note**: The key is used as a chart label.

Everywhere I look `array.sort` is used, but that's on an object key of, say, `created_at`. The result should be:

```
[
  { 'August 1st 2016': [5] },
  { 'August 17th 2016': [75] },
  { 'August 28th 2016': [5] },
  ...
]
```

I am not sure how to proceed, so I don't have anything to *show*.
This can be accomplished by using `Date.parse` on the object key. I took the first object key, as it appears there is only one in each entry of the array.

The tricky part is that `Date.parse` does not work on "17th" or "1st", so we have to temporarily replace the "th" or "st" with a `,`. This way, `Date.parse` works on the string.

```
var dates = [{ 'August 17th 2016': [75] }, { 'August 1st 2016': [5] }, { 'August 28th 2016': [5] }]

const replaceOrdinals = o => {
  return Object.keys(o)[0].replace(/\w{2}( \d+$)/, ',$1');
}

dates = dates.sort((a, b) => {
  return Date.parse(replaceOrdinals(a)) - Date.parse(replaceOrdinals(b))
});

console.log(dates);
```

**Keep in mind** (from @adeneo in the comments): [`Date.parse`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Date/parse) is implementation-dependent. You will probably want to read through its documentation to determine whether things like time zones will mess things up. As a more reliable method, you can use something like moment.js for date parsing.
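If relying on `Date.parse` feels too implementation-dependent, the keys can also be parsed by hand, since their format is fixed. A sketch assuming every key looks like `'<MonthName> <day><ordinal> <year>'`:

```javascript
const MONTHS = ['January', 'February', 'March', 'April', 'May', 'June',
                'July', 'August', 'September', 'October', 'November', 'December'];

// Turn a key like 'August 17th 2016' into a sortable timestamp
// without depending on Date.parse's implementation-defined quirks.
// parseInt stops at the ordinal suffix, so '17th' -> 17.
function keyToTime(obj) {
  const [month, day, year] = Object.keys(obj)[0].split(' ');
  return new Date(+year, MONTHS.indexOf(month), parseInt(day, 10)).getTime();
}

const dates = [
  { 'August 17th 2016': [75] },
  { 'August 1st 2016': [5] },
  { 'August 28th 2016': [5] },
].sort((a, b) => keyToTime(a) - keyToTime(b));

console.log(dates.map(d => Object.keys(d)[0]));
// [ 'August 1st 2016', 'August 17th 2016', 'August 28th 2016' ]
```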
Why are these global variables when they have namespace scope?

In the following code, [this tutorial](http://www.learncpp.com/cpp-tutorial/42-global-variables/) calls the variables in the globals.cpp namespace ***actual global variables***.

**globals.h**

```
#ifndef GLOBALS_H_
#define GLOBALS_H_

namespace Constants
{
    // forward declarations only
    extern const double pi;
    extern const double avogadro;
    extern const double my_gravity;
}

#endif
```

**globals.cpp**

```
namespace Constants
{
    // actual global variables
    extern const double pi(3.14159);
    extern const double avogadro(6.0221413e23);
    extern const double my_gravity(9.2); // m/s^2 -- gravity is light on this planet
}
```

**source.cpp**

```
#include <iostream>
#include <limits>
#include "globals.h"

int main()
{
    double value_of_pi = Constants::pi;
    std::cout << value_of_pi;

    std::cin.clear();
    std::cin.ignore(std::numeric_limits<std::streamsize>::max(), '\n');
    std::cin.get();
    return 0;
}
```

I know that `extern` is used to access global variables in other translation units. I'm probably reading too much into this, but why are the variables in the globals.cpp namespace considered to be global when they have namespace scope? Also, am I right to assume that `Constants::pi` retrieves the identifier `pi` from the forward declaration in the globals.h namespace?

*This question is a continuation of a previous question that I asked* [here](https://stackoverflow.com/questions/30517546/how-does-extern-work-in-namespaces).
Global roughly means accessible from every translation unit, no matter if the variable is in a namespace or not. The best example is [`std::cout`](http://en.cppreference.com/w/cpp/io/cout), which is a global variable defined in `namespace std`, and which represents an instantiation of `std::basic_ostream<>`. Regarding your second question, `Constants::pi` is accessible because you include the header `globals.h`, which declares `extern const double Constants::pi;`. This declaration instructs the compiler that the variable has external linkage, and it is your responsibility to define it in some `.cpp` file (which you do in `globals.cpp`)1). So the linker is able to find the symbol in the `globals.cpp`, and that's it. --- 1) Note that you can even provide the definition `extern const double pi = 3.14;` directly in the header file, but it is not recommended, since including the header in multiple translation units will lead to a duplicate symbol.