Q: Is it right to be retagging everyone's questions? Because the website is in beta, I think, anyone can retag anyone's question. There are thus some people retagging every single question that is being asked, even when the retagging is not uncontroversial. Is this always going to be allowed or is it just during the beta phase? Should there be a guideline as to when to do those things? A: I would say that the retagging is normal and to be expected during the very early beta. The way to fix it is to have some well defined tagging conventions on meta, and then retag based on agreed policies. A: Indeed one thing we're supposed to be doing on the meta page is deciding what tagging system is best. This is one of the things where the whole "community owned" setup is a bit annoying, in the early stages of MO Anton made a lot of these sorts of decisions himself. I think Harry jumped the gun a little by making changes without discussing them here first, but on the other hand one of the points of the private beta is to try out some different tagging schemes to see what works right.
{ "pile_set_name": "StackExchange" }
Q: Uncaught ReferenceError: require is not defined in Electron I am trying and failing to connect to a MySQL database in Electron. When I run the program with npm start, I get this error in the console: "Uncaught ReferenceError: require is not defined". I was told to make changes to my code and I made them, but nothing changed. I also downloaded a module called bundlee, but it didn't work either and I don't know whether I used it correctly. A: It is likely that nodeIntegration is set to false (which is the default) in your BrowserWindow webPreferences. Try setting up your browser window with something like this: win = new BrowserWindow({ width: 800, height: 600, webPreferences: { nodeIntegration: true } })
{ "pile_set_name": "StackExchange" }
Q: Parsing JSON API in C# So I'm fairly new to programming but am looking to go much deeper with it. I recently started to get involved in a project to create a WinForm program for a website that uses an API system in JSON. I've never used an API before so I'm not quite sure how it works, but after looking at it for a few minutes I seem to have the gist of it. What I don't get is how exactly parsing JSON in C# works. I found this link after a little Google searching. And I got it working (somewhat) with this code. static void Main(string[] args) { WebClient c = new WebClient(); var vLogin = c.DownloadString("https://www.openraid.us/index.php/api/login/username/password"); //Returns string //{"status":1,"error":null,"token":"250-1336515541-c48d354d96e06d488d1a2530071ef07c9532da26"} //Token = random, no decisive length*/ JObject o = JObject.Parse(vLogin); Console.WriteLine("Login Status: " + o["status"]); String sToken = "" + o["token"]; Console.WriteLine(sToken); Console.WriteLine(""); //Breaks after this var myRaids = c.DownloadString("https://www.openraid.us/index.php/api/myraids/"+sToken); JObject r = JObject.Parse(myRaids); //error occurs here String sEventId = "" + r["event_id"]; Console.WriteLine("Event ID: " + sEventId); Console.ReadLine(); } So to me it looks like I have parsing one page done and handled, but when I move onto the second I get this error. Error reading JObject from JsonReader. Current JsonReader item is not an object: StartArray. Path '', line 1, position 1. So I guess my question is: how do I parse more than one page or call of JSON, and what would be the easiest way to break up each section of the JSON object (such as status, error, and token) into C# strings? A: Did you try JArray instead? It depends on what kind of object you are trying to return. WebClient client = new WebClient(); var data = client.DownloadString(""); var jArray = JArray.Parse(data);
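Building on that answer, here is a minimal sketch of how the second call could be handled with Json.NET, assuming the myraids endpoint returns a JSON array of objects that each carry an event_id field (the field name is taken from the question's own code, not from any API documentation):

var myRaids = c.DownloadString("https://www.openraid.us/index.php/api/myraids/" + sToken);
JArray raids = JArray.Parse(myRaids);          // the response starts with '[', so parse it as an array
foreach (JObject raid in raids)                // each element is assumed to be an object
{
    String sEventId = "" + raid["event_id"];   // pull individual fields out of each element
    Console.WriteLine("Event ID: " + sEventId);
}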
{ "pile_set_name": "StackExchange" }
Q: Spark broadcasted variable returns NullPointerException when run in Amazon EMR cluster The variables I share via broadcast are null in the cluster. My application is quite complex, but I have written this small example that works flawlessly when I run it locally, but it fails in the cluster: package com.gonzalopezzi.bigdata.bicing import org.apache.spark.broadcast.Broadcast import org.apache.spark.rdd.RDD import org.apache.spark.{SparkContext, SparkConf} object PruebaBroadcast2 extends App { val conf = new SparkConf().setAppName("PruebaBroadcast2") val sc = new SparkContext(conf) val arr : Array[Int] = (6 to 9).toArray val broadcasted = sc.broadcast(arr) val rdd : RDD[Int] = sc.parallelize((1 to 4).toSeq, 2) // a small integer array [1, 2, 3, 4] is paralellized in two machines rdd.flatMap((a : Int) => List((a, broadcasted.value(0)))).reduceByKey(_+_).collect().foreach(println) // NullPointerException in the flatmap. broadcasted is null } I don't know if the problem is a coding error or a configuration issue. This is the stacktrace I get: 15/07/07 20:55:13 INFO scheduler.DAGScheduler: Job 0 failed: collect at PruebaBroadcast2.scala:24, took 0.992297 s Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 6, ip-172-31-36-49.ec2.internal): java.lang.NullPointerException at com.gonzalopezzi.bigdata.bicing.PruebaBroadcast2$$anonfun$2.apply(PruebaBroadcast2.scala:24) at com.gonzalopezzi.bigdata.bicing.PruebaBroadcast2$$anonfun$2.apply(PruebaBroadcast2.scala:24) at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:202) at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:56) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:68) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) at org.apache.spark.scheduler.Task.run(Task.scala:64) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:203) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) Driver stacktrace: at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1204) at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1193) at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1192) at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59) at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47) at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1192) at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:693) at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:693) at scala.Option.foreach(Option.scala:236) at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:693) at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1393) at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1354) at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48) Command exiting with ret '1' Can anyone 
help me fix this? At least, can you tell me if you see something strange in the code? If you think the code is ok, please tell me, as it would mean that the problem is in the configuration of the cluster. Thanks in advance. A: Finally I got it working. It doesn't work declaring the object like this: object MyObject extends App { But it works, if you declare an object with a main function: object MyObject { def main (args : Array[String]) { /* ... */ } } So, the short example in the question works if I rewrite it this way: object PruebaBroadcast2 { def main (args: Array[String]) { val conf = new SparkConf().setAppName("PruebaBroadcast2") val sc = new SparkContext(conf) val arr : Array[Int] = (6 to 9).toArray val broadcasted = sc.broadcast(arr) val rdd : RDD[Int] = sc.parallelize((1 to 4).toSeq, 2) rdd.flatMap((a : Int) => List((a, broadcasted.value(0)))).reduceByKey(_+_).collect().foreach(println) } } This problem seems related to this bug: https://issues.apache.org/jira/browse/SPARK-4170
{ "pile_set_name": "StackExchange" }
Q: Clarification on comparative constructions is needed Here's a sentence: シベリアより東の地域はシベリアの中部ほど寒くないです。 I understand what it says but I don't understand why 「より」 is used right after 「シベリア」, and why the word order is not like this: シベリアの東の地域はシベリアの中部ほど寒くないです。 Can the reason be in those 2 different comparative constructions in one sentence? I assume that one cannot say something like 「AよりBほどC」, thus involving 「AはBほどC」to make it work properly. Even if it's right, I'd like to know the reason 「シベリアより」 is even taking place and, what's more, I'd like to know how to translate it properly. Every time I read it I want to say "Comparing to Siberia, eastern region is not that cold, as central one", but then again, "comparing to Siberia" just makes no sense for me. Also, I cannot understand why author used this, but not the 「シベリアの東の・・・」. A: What you actually stumble over is this expression: シベリアより東の地域 region(s) to the east of Siberia Here シベリアより東(だ) is a noun predicate that modifies 地域, and this より has nothing to do with the rest of the sentence. ~より東 (lit. "more east than"?) may be a strange wording to European languages speakers, but it's a sound phrase in Japanese to describe what's at removes from a location in eastern direction (as opposed to being the east end of the location). Similarly: ~より上 "above; higher than" ~より下 "below; lower than" ~より左 "to the left of" ~より右 "to the right of" If you reword them using ~の左 etc. it'll usually be understood as "next to it to the left". So, シベリアより東の地域はシベリアの中部ほど寒くないです。 Regions to the east of Siberia are not as cold as central Siberia.
{ "pile_set_name": "StackExchange" }
Q: What is the difference between NaN and Inf, and NULL and NA in R? What is the difference between NaN and Inf, and NULL and NA in R? Why do ?NA and ?NULL tell me that "NA" has a length of "1" whereas NULL has a length of "0"? A: In short NaN : means 0/0 -- stands for Not a Number NA : is generally interpreted as a missing value; it does not exist NULL : is for an empty object. For an exact definition, you can read the documentation, which is very well written. A: In the R language, there are two closely related null-like values: NA and NULL. Both are used to represent missing or undefined values. NULL represents the null object; it is a reserved word. NULL is often returned by expressions and functions whose value is undefined. NA is a logical constant of length 1, which contains a missing value indicator. NA can be freely coerced to any other vector type except raw. There are also constants NA_integer_, NA_real_, NA_complex_ and NA_character_ of the other atomic vector types which support missing values: all of these are reserved words in the R language.
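A short R session makes the distinctions concrete:

0/0            # NaN -- an undefined numeric result ("Not a Number")
1/0            # Inf -- division of a non-zero number by zero
length(NA)     # 1   -- NA is a value (a missing-value marker) of length one
length(NULL)   # 0   -- NULL is the empty object; it has no length
c(1, NA, 3)    # 1 NA 3  -- NA is kept as a placeholder in the vector
c(1, NULL, 3)  # 1 3     -- NULL simply disappears when combined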
{ "pile_set_name": "StackExchange" }
Q: Create partition on new raid1 hardware In the future, I'll need to add two more disks in RAID1 on my Ubuntu server. I already have a RAID1, but I have two more slots in my server and I'll put two more disks there to create another RAID1. I have never done it; the only thing I have done is simply add normal disks (not in RAID). My concern is: will I need to do something different from the usual procedure? Something like first activating the RAID (as I did when I was installing Ubuntu on the current RAID1)? Or will this be automatic, so that I only need to create the partitions and format them? A: The computer is not a mind reader so it can't automagically know you want to create a new RAID array when you plug in two new disks. You will have to create the new array the same as you had to for the first one.
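For reference, a minimal sketch of what creating the second array by hand could look like on Ubuntu with mdadm, assuming the two new disks appear as /dev/sdc and /dev/sdd (the device names, mount point and filesystem choice are assumptions, not taken from the question):

# create a new RAID1 array from the two new disks
sudo mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc /dev/sdd
# put a filesystem on the new array and mount it
sudo mkfs.ext4 /dev/md1
sudo mkdir -p /mnt/raid2
sudo mount /dev/md1 /mnt/raid2
# record the array so it is assembled automatically at boot
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
sudo update-initramfs -u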
{ "pile_set_name": "StackExchange" }
Q: C# compiler flags I understand C# compiles down to byte code similar to Java but are there any compiler flags like the safeseh or gs flags for C/C++ applicable to C#? I'm not sure if they would be necessary as presumably all these things are implemented in the CLR? A: There are no security flags in the C# compiler that I remember, if we exclude the /unsafe that disables the possibility of writing C# code with raw C pointers. Even without that flag, you can often write equivalent "unsafe" code in other ways that will compile, so that flag is a red herring. Protection against buffer overruns and similar attacks is included for free in .NET thanks to how strings and arrays are handled. You can't write after the end of an array (and nearly all the other collections of .NET are base internally on arrays), and you can't write to strings (they are immutable, and when you allocate them you allocate them with the "right" size for their content, that is decided and fixed upon creation of the string). Note that you can still easily make a C# program crash (by passing illegal data for example... a text when a number is expected), but this crash (in truth normally one Exception) won't permit you to take control of the machine, or to overwrite pieces of memory.
{ "pile_set_name": "StackExchange" }
Q: What group should public_html belong to? I have an Apache server running on Linux - CentOS. In order to be able to edit my php files on Windows, I linked the server to my Dropbox account and created a symlink from the Dropbox folder, which is located under /root/Dropbox, to my public_html folder. Then when I tried to edit a file in public_html through Windows, its ownership changed to root and thus I got the famous 500 error. I guessed it had to do with the symlink's permissions, so I changed the symlink's owner to my user account, but that didn't change anything. But what happened next overwhelmed me: suddenly when I try to access any page on my site I get: Forbidden You don't have permission to access /My/site/name/page.php on this server. Digging around I found out that the public_html owner and group is root, and ps aux | grep apache showed: root 4533 0.0 0.0 10892 1604 ? S Jul31 0:00 /usr/local/apache/bin/httpd -k start -DSSL nobody 4534 0.0 0.1 10892 2956 ? S Jul31 0:00 /usr/local/apache/bin/httpd -k start -DSSL nobody 4535 0.0 0.1 10892 2952 ? S Jul31 0:00 /usr/local/apache/bin/httpd -k start -DSSL nobody 4536 0.0 0.1 10892 2956 ? S Jul31 0:00 /usr/local/apache/bin/httpd -k start -DSSL nobody 4537 0.0 0.1 10892 2956 ? S Jul31 0:00 /usr/local/apache/bin/httpd -k start -DSSL nobody 4538 0.0 0.1 10892 2956 ? S Jul31 0:00 /usr/local/apache/bin/httpd -k start -DSSL nobody 4551 0.0 0.1 10892 2208 ? S Jul31 0:00 /usr/local/apache/bin/httpd -k start -DSSL nobody 4556 0.0 0.1 10892 2200 ? S Jul31 0:00 /usr/local/apache/bin/httpd -k start -DSSL nobody 4565 0.0 0.1 10892 2200 ? S Jul31 0:00 /usr/local/apache/bin/httpd -k start -DSSL nobody 4572 0.0 0.1 10892 2200 ? S Jul31 0:00 /usr/local/apache/bin/httpd -k start -DSSL Changing the group of public_html to nobody did the trick and made the error go away. But I don't know if it should be like this; I mean, I don't know what group it had before. So I have two questions: 1. Given the Apache user shown above, which user and group should public_html belong to? 2. If the answer to 1 is root, can you think of any reason that caused this error to suddenly happen, and what should be done in order to solve it? It's worth mentioning that I started by posting the question here but I didn't get any answer, so I'm trying here. Hope it's legal. A: You could run Dropbox as a non-root user, have public_html owned by that user and the apache group, and permissioned rwxrwx--- (i.e. 770) so that both your user and Apache can read and write. Also, as a general principle of Linux/Unix administration, you should never run applications as root unless you absolutely have to. To explain why Apache appears to use root, applications are only allowed to listen on privileged ports (those below 1024) if they are started with root privileges. As HTTP/HTTPS is served on ports 80/443 (respectively), Apache is started as root, and then forks processes under its own user (by default, called 'apache' on Red Hat based distributions - of which CentOS is one - or 'www-data' on Debian-based distributions - e.g. Ubuntu). The unprivileged user can be configured in your Apache configuration, though for 95% of applications the default is fine.
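Translating that answer into commands, and assuming Dropbox runs as a hypothetical non-root user called dropboxuser while Apache's worker processes run as nobody (as in the ps output above; both names and the path are assumptions), it would look something like this:

# give the directory to the Dropbox user and the group Apache runs as
chown -R dropboxuser:nobody /path/to/public_html
# rwxrwx--- : owner and group may read/write, everyone else is locked out
chmod -R 770 /path/to/public_html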
{ "pile_set_name": "StackExchange" }
Q: angular2 injecting service to a pipe Does anyone know how to inject a service into a Pipe in angular2? I've got an example in Plunkr. Currently the service is doing nothing. I've put the service in the constructor of the pipe, and put the Http in the Provider of app. Instead I got an error saying ChangeDetectionError {_wrapperMessage: "No provider for ConnectionBackend! (MyPipe -> MySe…Backend) in [Hello {{name | MyPipe}} in App@2:10]", _originalException: NoProviderError, _originalStack: "Error: DI Exception↵ at NoProviderError.BaseExc…ularjs.org/2.0.0-beta.8/angular2.dev.js:11284:19)", _context: _Context, _wrapperStack: "Error: No provider for ConnectionBackend! (MyPipe …gularjs.org/2.0.0-beta.8/angular2.dev.js:12682:27"…} I'm not really sure what else I need to do. I'm using angular2 beta 8. Help will be much appreciated. Thanks A: Updated plunk You needed to provide HTTP_PROVIDERS in order to inject Http into MyService. MyService also needed to be decorated with Injectable().
{ "pile_set_name": "StackExchange" }
Q: How to Save File Path in the MySQL Database Using string in C# I have a picturebox control and also an imagepath textbox control. When I upload an image into the picturebox control, the full path of the image on my computer is also pasted into the imagepath textbox control as E:\Engineer\picture.jpg, but when I save this image and also its path in my database, then the path is saved without backslashes, like this: E:Engineerpicture.jpg I am using the string type for the path. This is the main problem that has stalled the development of my application. If any other explanation is needed then kindly let me know. Thank You A: I did a bit of research... but could not find a way to escape a string after it is initialized, as suggested in the other answer. So it leaves us with this option: string newFileName = fileName.Replace("\\","\\\\"); Doing this before the DB insert should help you, but I am not sure if it is the best way to do this.
{ "pile_set_name": "StackExchange" }
Q: Is it safe to use Rijndael.Create() instead of new RijndaelManaged() I've read some on this topic, but I'm still not 100% comfortable with the answers I see. When you create a cryptographic algorithm using Rijndael.Create(), you get an object of type RijndaelManaged - there doesn't seem to be a difference between this and calling new RijndaelManaged() (or New RijndaelManaged() for you VB folks). :) From what I've read, the Rijndael.Create() method exists so that you don't need to worry about the specific implementation, in case it changes in a future version. But my question is: suppose that does happen, and .NET 5.0 returns a different implementation. Is there a guarantee that items encrypted using RijndaelManaged can be decrypted without issue using SomeFutureRijndaelManaged? I can't imagine that they would be incompatible, but I just want to confirm that. Thanks A: Rijndael.Create adds a layer of abstraction and additional redirects so that supposedly it can provide a system specific version of the algorithm. In practice it's extremely slow, requiring trips through the Crypto API to resolve OID string mappings to eventually arrive at the RijndaelManaged class. We've found that instead of providing stability across platforms it instead causes issues across Windows 2000/XP/Vista/Windows. Plus it's several hundred times slower to create the instance of the object through the Create methods than to just instantiate the RijndaelManaged class directly. We found this to be a major bottle neck when encrypting/decrypting data in memory. As far as "future proofing" - there's no such thing with security algorithms. When .NET 5.0 comes out. You'll have to update to accomodate any changes they make regardless of the method you create the algorithm. You can use <supportedRuntime /> in your application's .config file to lock in a .NET version so that you only allow your app to switch once you've tested and updated.
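For that last point, a minimal app.config sketch that pins the runtime version looks roughly like this (the version string is a placeholder; use the moniker for the runtime you have actually tested against):

<configuration>
  <startup>
    <!-- lock the application to a specific CLR version -->
    <supportedRuntime version="v2.0.50727" />
  </startup>
</configuration>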
{ "pile_set_name": "StackExchange" }
Q: Can I Create Multiple Web Applications for Host-Named Site Collections I would like to create 2 or 3 Separate Web Applications that will have Host-Named site Collections. Is this possible without binding additional IP addresses? Can this be scripted via PowerShell? A: OP: Is this possible without binding additional IP addresses? Yes OP: Can this be scripted via PowerShell? Yes Refer to Mark Arend's excellent blog post Host Named Site Collections (HNSC) for SharePoint 2010 Architects (MSDN Blogs) under Option 3 – Identify by Host Header. While the config is more fussy than the other options, if your requirements or platform do not allow multiple IPs then it does the trick. Mark's blog post includes full PowerShell examples. Here is quick preview of his scripts: #Once per server node in farm New-WebBinding -Name "Standard" -HostHeader "sitename.customer.com" #Once for the farm New-SPSite http://sitename.customer.com -HostHeaderWebApplication http://Standard -Name "Our Work Site" -Template "STS#1" -OwnerAlias CUSTOMER\Administrator I have successfully implemented several farms this way.
{ "pile_set_name": "StackExchange" }
Q: nested placeholders in Ramda I am trying to pass a function into a filter which itself is nested in deeper functions Conceptually, something like this (broken) example: const testData = [ {foo: "foo", bar: ""} ]; const myFilter = a => !R.isEmpty(a); const clean = R.when( R.either(R.is(Array), R.is(Object)), R.pipe( R.map(a => clean(R.__)(a)), R.filter(R.__) ) ) const cleanEmpties = clean(myFilter); cleanEmpties(testData); //fail: should not include `bar`, but it does What's the right way to do this? Just to illustrate the point, this hardcoded alternative does work as intended: const cleanEmpties = R.when( R.either(R.is(Array), R.is(Object)), R.pipe( R.map(a => cleanEmpties(a)), R.filter(myFilter) ) ) cleanEmpties(testData); //working, does not include `bar` A: The problem is because of how R.__ is interpreted when referenced multiple times in the same function. If you haven't passed in enough arguments, a curried function will return that expects more arguments to fill up the gaps. R.gt(4,3) // true R.gt(R.__, R.__)(4, 3) //true R.gt(R.__, R.__)(4)(3) //true R.gt(R.__, R.__)(4) // function n(r){return 0===arguments.length||b(r)?n:t.apply(this,arguments)} If you change your function syntax to accept an explicit arg, the code works as expected: const clean = f => R.when( R.either(R.is(Array), R.is(Object)), R.pipe( R.map(a => clean(f)(a)), R.filter(f) ) );
{ "pile_set_name": "StackExchange" }
Q: Split javascript string into array I have this javscript string: response "[[[#],[b,#],[b,w,b,b,b,#],[b,#],[b,w,w,b,#]],[[b,#],[b,b,#],[b,b,w,#],[b,b,b,#],[b,b,#]],[[b,#],[b,b,b,b,#],[b,w,#],[b,b,b,#],[b,#]],[[b,#],[b,#],[w,w,w,#],[b,b,w,w,#],[b,w,#]],[[b,#],[b,b,b,b,#],[b,w,b,#],[b,w,#],[b,b,#]]]" This corresponds to a board game and which field (e.g [b,w,b,b,b,#]) is a cell with black and white pieces. The # is the top of the stack. I need to parse this in order to create an array of tiles. I have this: XMLscene.prototype.readBoard = function(data){ var response = data.target.response; console.log("REPONSE NO PARS" + response); response = response.split("],"); console.log("REPONSE " + response); response[0] = response[0].substring(1); response[5] = response[5].substring(0, response[5].length - 2); for(var i = 0; i < response.length; i++) { response[i] = response[i].substring(1); response[i] = response[i].split("),"); for(var j = 0; j < response[i].length; j++) response[i][j] = response[i][j].substring(5); } this.scene.board.loadTiles(response); //this.scene.client.getPrologRequest('quit', 0, 1); }; to be parsed in this function: gameBoard.prototype.loadTiles = function(board){ console.log("BOARD : " + board); this.tiles = []; for (var i = 0; i < board.length; i++){ for (var j = 0; j < board[i].length; j++){ var player = board[i][j].split(",")[0]; console.log("PLAYER : " + player); var type = board[i][j].split(",")[1]; console.log("Type : " + type); if (type != "e") { var tile = this.createTile(type, this.scene ,i*6 + j+100, player); tile.line = i; tile.col = j; this.tiles.push(tile); } } } } The board structure I want is something like this: for the first stack: [#] It's an empty cell [b,#] - A cell with one piece - black [b,w,w,b,#] - A cell with a black piece in the bottom, then two white pieces and a black on the top, therefore the black player is the owner of the stack! The stack owner is the player that have his piece on the top of the stack (closest to #) Is there any way to get an array with each stack being the element of it? Regards A: You could transform the data to JSON like this, ignoring the hashes as they seem to give information that is already known (the stack ends at the right): var response = JSON.parse(response.replace(/,?#/g, '').replace(/[bw]/g, '"$&"')); Then you can for instance identify the current player for a stack at (i, j), like this: var player = board[i][j].slice(-1)[0]; // last element on the stack Snippet: // Sample data var response = "[[[#],[b,#],[b,w,b,b,b,#],[b,#],[b,w,w,b,#]],[[b,#],[b,b,#],[b,b,w,#],[b,b,b,#],[b,b,#]],[[b,#],[b,b,b,b,#],[b,w,#],[b,b,b,#],[b,#]],[[b,#],[b,#],[w,w,w,#],[b,b,w,w,#],[b,w,#]],[[b,#],[b,b,b,b,#],[b,w,b,#],[b,w,#],[b,b,#]]]"; // Convert to nested array var board = JSON.parse(response.replace(/,?#/g, '').replace(/[bw]/g, '"$&"')); // Print the stack at 3, 3 console.log(board[3][3].join(',')); // Print player for that stack: console.log(board[3][3].slice(-1)[0]);
{ "pile_set_name": "StackExchange" }
Q: PHP ZF2: Class not found I've got a problem with finding a class. I am using ZF2, and I have ElephantIO under \public\lib. Netbeans finds the class I am trying to use, and it points me to it when I type "use ElephantIO\Engine\SocketIO\Version0X;"... However when I run the code in my localhost, it returns Class 'ElephantIO\Engine\SocketIO\Version0X' not found in /Applications/MA.... I am calling it in new Version0X('http://localhost:5555'); I can also access it using NetBeans' Open declaration. What seems to be the problem? More: I tried adding a require_once for the Version0X php file and it did work, but then it told me that another class which extends Version0X isn't found, so I guessed this will be too much trouble to add all files. A: It seems it wasn't accessible from public/lib, so I used composer to install it in vendor. That is all it has needed.
{ "pile_set_name": "StackExchange" }
Q: need sequence as A1 A2 in postgres Dear all, Using a sequence we can insert a sequence of integers in Postgres. I just want to insert A1 and A2 instead. Is it possible to do this in a straightforward manner, or do I need to write a function? Please let me know the easiest way of doing this. Thanks in advance! A: create table test ( a text ); CREATE TABLE insert into test(a) select 'A'||generate_series(1,4)::text; INSERT 0 4 select * from test; a ---- A1 A2 A3 A4 (4 rows)
{ "pile_set_name": "StackExchange" }
Q: Sony VAIO mute button I installed Ubuntu on my Sony VAIO laptop. The laptop has two extra buttons near the power button - AV mode and mute (see the picture). The AV mode button works great (opens the Rhythmbox music player) but the mute button does not work (it did work in the previously installed Windows). How can I make the mute button work? A: Move your mouse pointer to the Launcher and select the cog-wheel icon System Settings. When the system settings panel appears, select the icon Keyboard. When the keyboard applet opens there are two tabs: Typing and Shortcuts. Select the Shortcuts tab. When the shortcuts tab opens, the left pane lists the various categories you can choose from. Select the category Sound and Media. Within the Sound and Media category click on the option Volume mute and you are prompted for a new accelerator key combination. For this tutorial, hold the Ctrl and Alt keys down at the same time and tap M. Sample screen: From now on Ctrl+Alt+M will mute and un-mute your sound like a toggle switch. Your screen will also pop up a cool notification.
{ "pile_set_name": "StackExchange" }
Q: How to search for an element in a vector? I have code like this: std::vector<string>::iterator p; p = find(v.begin(),v.end(),"asdasda"); cout << *p << endl; If "asdasda" is not part of the vector, p points to some garbage and cout gives a segfault. What should the if statement be so that the cout executes only if "asdasda" was found? I also want the position of "asdasda" in v: for example, if we had earlier set v[3] to "asdasda", how can I know from the find that v[3] is "asdasda"? A: p doesn't point to "garbage", it simply becomes v.end() when the content is not found. if (p != v.end()) { std::cout << "v[" << (p - v.begin()) << "] = "; std::cout << *p << std::endl; } A: If std::find doesn't find anything, the iterator is set to v.end() in this case. if ( p != v.end() ) { // iterator is good } Also note the general case of std::find. Here's a typical definition of it: namespace std { template <class InputIterator, class T> InputIterator find(InputIterator start, InputIterator finish, const T& value); } In case std::find fails, it will return the finish iterator because the search is [start, finish) and includes start but excludes finish.
{ "pile_set_name": "StackExchange" }
Q: Read /proc/pid/maps using read() I want to extract the information from /proc/pid/maps, such as the start address, the end address, and the permissions. However, for learning purposes, I want to use the low-level system calls, such as open, read, and lseek. I am confused about how to grep the information I need. Here are some challenges I found: How do I read the whole file using read()? Right now I am using while((n = read(fd, buf, BUFSIZ)) > 0) to read the file, but apparently it reads in multiple batches; doubling BUFSIZ does not solve the problem. I am trying to grep the start address by reading the characters one by one until I find the - separator, and then lseek back to the first character in the line and use read to get the start address. It works fine for the first iteration, but it messes up on the second iteration of while((n = read(fd, buf, BUFSIZ)) > 0). What is the best way to extract the information? A: Yes, read(fd, buf, BUFSIZ) will read BUFSIZ bytes from fd (unless you've reached the last block, in which case it will read all the remaining bytes, the number of which will be returned in n and will be ≤BUFSIZ). If you want to read all the data in one shot, you can do wc /proc/pid/maps to see how long it is (and hence how big to make BUFSIZ). But this would be counterproductive. Reading an entire file into memory is rarely necessary or even useful, and you can't go around changing the parameters in your programs every time you run them on a different input file. If you want to learn how to write programs that read files, you'll be much better off learning how to handle them a block at a time. The second part of your question doesn't make much sense. grep? Are you writing a C program or a shell script (or both)? And reading data that you want, throwing it away, and going back and reading it again is a really bad habit to get into. It's ridiculous to read the start address up to the -, and then lseek backwards and read it again, especially if you're talking about reading the entire file into memory. If you've got the entire file in memory, why in the world would you want to read part of it again? I would recommend using fgets. But, if you really prefer to use read, go ahead and do it. You'll need to look for newline (\n) characters and so identify the lines (fgets would do this for you) and then extract the desired information from the lines. You can certainly analyze the lines character-by-character if you want. But I recommend sscanf. Be sure to check the return value.
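To make that advice concrete, here is a minimal C sketch that reads /proc/<pid>/maps line by line with fgets and pulls out the start address, end address and permissions with sscanf (error handling kept to a minimum; replace "self" with the pid you are interested in):

#include <stdio.h>

int main(void)
{
    FILE *fp = fopen("/proc/self/maps", "r");
    if (fp == NULL) {
        perror("fopen");
        return 1;
    }
    char line[512];
    while (fgets(line, sizeof line, fp) != NULL) {
        unsigned long start, end;
        char perms[5];
        /* each line begins with "start-end perms", e.g. "00400000-0040c000 r-xp ..." */
        if (sscanf(line, "%lx-%lx %4s", &start, &end, perms) == 3)
            printf("start=%lx end=%lx perms=%s\n", start, end, perms);
    }
    fclose(fp);
    return 0;
}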
{ "pile_set_name": "StackExchange" }
Q: Maximum determinant of Latin squares I strongly conjecture that the maximum absolute determinant of a Latin square can be attained by a circulant matrix. For example, $$\pmatrix {5&4&2&3&1 \\ 1&5&4&2&3 \\ 3&1&5&4&2 \\ 2&3&1&5&4 \\ 4&2&3&1&5}$$ has determinant $2325$, which is indeed the maximum absolute value of the determinant of a Latin square of size $5 \times 5$. The sign of the determinant is not important because changing two rows always gives a Latin square with positive determinant, if the given has a negative determinant. Is this conjecture true ? If the conjecture is true, there would be a suitable way to find the maximum absolute value of the determinant of a Latin square of size $n \times n$. I tried to find out how an arbitrary Latin square can be transformed in the circulant form without changing its determinant, but without success. If the conjecture is true, the maximal values are (on the left side, the top row of the matrix is given, the sign is not considered): $$ 1\ 2\ \ \ \ \ \ \ 3$$ $$ 3\ 1\ 2\ \ \ \ \ \ 18$$ $$ 4\ 1\ 2\ 3\ \ \ \ \ 160$$ $$ 5\ 4\ 2\ 3\ 1\ \ \ \ \ \ 2 \ 325$$ $$ 6\ 5\ 3\ 2\ 4\ 1\ \ \ \ \ 41 \ 895$$ The next $4$ values are $961\ 772,\ 26\ 978\ 400,\ 929\ 587\ 995\ and \ 36\ 843\ 728\ 625$. So, an upper bound for the determinant of a $9 \times 9$-Sudoku-matrix would be $929\ 587\ 995$. A: Let $A=[C_1,\cdots,C_n]\in M_n$ be a latin square, $M=A^TA$, $a_n=1\times n+2\times (n-1)+\cdots+n\times 1$, $b_n=\sum_{i=1}^ni^2=n(n+1)(2n+1)/6$ and $u=[1,\cdots,1]^T$. Note that $m_{i,j}=<C_i,C_j>\in[a_n,b_n]$ and $m_{i,i}=b_n$. Moreover, for every $i$, $\sum_{j}m_{i,j}=n^2(n+1)^2/4$ and $c_n=\sum_{j|j\not= i}m_{i,j}=(n-1)n^2(n+1)(3n+2)/12$, which leads to the mean of the off-diagonal entries being $\mu_n=n(n+1)(3n+2)/12$. Put $M=b_n.I+S$ where the diagonal of the symmetric matrix $S$ is $0$, $s_{i,j}\in [a_n,b_n]$ and $g_i(S)=\sum_{j|j\not= i}s_{i,j}-c_n=0$; we seek the max of $f:S\rightarrow \det(M)$ under the constraints $g_i(S)=0$. According to Lagrange, there is $(\lambda_i)_i\in\mathbb{R}^n$ s.t., for every $i\not= j$, $tr(adj(M)E_{i,j})+\lambda_i=0$, that is $z_{j,i}+\lambda_i=0=z_{i,j}+\lambda_i$ where $z_{i,j}$ is the $i,j$ entry of $adj(M)$; then, for every $i$, the $(z_{i,j})_{j|j\not= i}$ are equal and, since $adj(M)$ is symmetric, the $(z_{i,j})_{i\not= j}$ are equal. Finally, $M^{-1}$ is in the form $D+\alpha U$ where $D=diag(d_i)$ is diagonal, $\alpha\in\mathbb{R}$ and $U=[u_{i,j}]$ is defined by $u_{i,i}=0$ and , if $i\not= j$, then $u_{i,j}=1$. Therefore $MM^{-1}=(b_nI+S)(D+\alpha U)=b_nD+SD+\alpha b_n U+\alpha SU=I$. We consider the diagonals of RHS and LHS: for every $i$, $b_nd_i+\alpha c_n=1$; consequently, the $(d_i)_i$ are equal and $M^{-1}$ is in the form $M^{-1}=\beta I+\alpha U$. Thus $S$ is a polynomial in $U$ and all the non-zero entries of $S$ are equal to $\mu_n$. Conclusion. The (ideal) maximum of $|\det(A)|$ is reached when the $(m_{i,j})_{i\not= j}$ are $\mu_n$. Note that, in general, $\mu_n$ is not an integer and the considered ideal maximum is not reached; yet, the idea is to find a matrix $A$ s.t. $A^TA$ is close to $b_nI+\mu_n U$. The case $n=5$. Then $\mu_5=42.5$ and $\sqrt{\det(b_5I+\mu_5 U)}\approx 2343.75$. The real maximum $2325$ is reached (according to Peter) with a circulant matrix s.t. the $(m_{i,j})_{i\not= j}$ are $42$ or $43$. The case $n=6$. Then $\mu_6=70$ but $\sqrt{\det(b_6I+\mu_6 U)}\approx 42439.23$ is not an integer. A candidate for the real maximum is $41895$; it is reached with a circulant matrix s.t. 
the $(m_{i,j})_{i\not= j}$ are $69,70,71$. The case $n=9$. Then $\mu_9=217.5$ and $\sqrt{\det(b_9I+\mu_9 U)}\approx 934173632.8$. A candidate for the real maximum is $929587995$; it is reached with a circulant matrix and a sudoku; the last one is s.t. the $(m_{i,j})_{i\not= j}$ are $216,217,218,219$. Remark. The above reasoning works, in the same way and with the same conclusion, when we replace $M$ with $M'=AA^T$. Thus, for such an ideal solution, $A^TA=AA^T$, that is, $A$ is a normal matrix. Thus, good candidates for the Graal are the circulant matrices -that are normal-. EDIT. We consider the Cholesky decomposition $b_nI+\mu_n.U=C^TC$; then $A^TA=b_n+\mu_n.U$ iff $A=PC$ where $P\in O(n)$. Finally, the idea is to rotate the known triangular matrix $C$ to make it close to an integer matrix (of course, it is not easy). For example, when $n=9$, a candidate solution $B$ has $[1,2,4,3,8,9,5,6,7]$ as first row; clearly, $B=RC$, where $R$ is close to an orthogonal matrix $T$ (consider the polar decomposition of $R$); if we replace $R$ with $T$, then we obtain $[1.02,2.07,4.08,2.89,7.99,9.02,4.95,5.92,7.07]$ as new first row; of course, this is this last form that must be discovered. We may also require that $A$ is normal, that is equivalent to the linear equation in the unknown $P$: $MP-PCC^T=0$. Unfortunately, the associated kernel is big; for example,when $n=9$, its dimension is $65$.
{ "pile_set_name": "StackExchange" }
Q: Azure apimanagement with subscriptions on a subset of operations, is it possible? Setting up an API with Azure API management. We've created 2 products, one that requires a subscription and one that doesn't. We did this because we have a single API where we want some of the operations to require a subscription and others not to. Is this possible in a single API or do we need to create two APIs? The issue with 2 APIs is that any prefix such as "/api" needs to be different, and we want it to look like a single API. A: This is not possible, unfortunately. As stated in the documentation, subscriptions only apply to Products and individual APIs. See this UserVoice suggestion where "Operation Visibility" is suggested. Greetings from Denmark ;-) /rasmus
{ "pile_set_name": "StackExchange" }
Q: Protocol conversion / normalization: Biztalk, alternatives? We have a need to take dozens of different protocols from systems such as security systems, fire alarms, camera systems etc.. and integrate them into a single common protocol. I would like this to be a messaging server that many systems could subscribe to and or communicate through. polling and non-polling "drivers" (protocol converters) handle RS232 / RS485 / tcp programmable "drivers" in a managed language like Java or C# rules engine capability Does biztalk fit this? Are there open source alternatives? Is there a Java / Java EE way to do this? At one end the system would be a SCADA system at the other is is kind of a middleware / messaging server. Any thoughts on the best way to proceed would be appreciated. I know that there will be a considerable amount of programming involved on the driver side, however as tempted as I am, building the whole system from scratch would not be appropriate. A: I would avoid BizTalk for SCADA and RS232/RS485 because these typically require realtime (or at least low latency) solutions. BizTalk is optimized for high throughtput, but has the drawback of having high latency by default. You can tweak BizTalk for low latency, but at this point you'll find you bypass almost everything BizTalk has built-in and it would probably get in the way instead of helping you. A: If you don't mind working on the Java platform there's a lightweight protocol switcher and implementation of the Enterprise Integration Patterns in an open source project called Apache Camel. Camel can already speak most of the common protocols and technologies like files, email, JMS, XMPP and so forth so there'd be no actual coding required for those things. To add new custom protocols the simplest route is to build on top of the MINA component which takes care of all the networking, socket handling, threading and so forth (e.g. NIO versus BIO et al). Then you just extend it to add your own protocol codec (how to marshal/unmarshal messages on the socket with possibly using framing etc). The HL7 component is an example of doing this. More detail on writing MINA codecs here. Then once you've got your camel component (lets call it foo) you could then bridge from any protocol to any other protocol using simple URIs to implement any of the Enterprise Integration Patterns such as Content Based Router, Recipient List, Routing Slip etc e.g. in Java code // route all messages from foo // to a single queue on JMS from("foo://somehost:1234"). to("jms:MyQueue"); // route all messages from foo component // to a queue using a header from("foo://somehost:1234"). recipientList(). simple("activemq:MyPrefix.${headers.cheese}"); A: www.livedata.com It's a bit pricey but it's a python based engine that can take one protocol and spit out another, it's already setup for multiple scada protocols such as ICCP, modbus, OPC, and DNP out of the box. Then you can talk whatever you want downstream. John
{ "pile_set_name": "StackExchange" }
Q: Errors while compiling irrlicht - android port (Mac OSX) I'v downloaded irrlicht android from here: https://gitorious.org/irrlichtandroid/irrlichtandroid/source/f12c3b9743d64dc5cd61931f40e42e8ca64b40ef: I'v tried to compile irllicht android using ndk-build, I get the following errors: In static member function 'static void irr::os::Printer::log(const c8*, irr::ELOG_LEVEL)': error: format not a string literal and no format arguments [-Werror=format-security] In static member function 'static void irr::os::Printer::log(wchar_t const*, irr::ELOG_LEVEL)': error: format not a string literal and no format arguments [-Werror=format-security] In static member function 'static void irr::os::Printer::log(const c8*, const c8*, irr::ELOG_LEVEL)': error: format not a string literal and no format arguments [-Werror=format-security] In static member function 'static void irr::os::Printer::log(const c8*, const path&, irr::ELOG_LEVEL)': error: format not a string literal and no format arguments [-Werror=format-security] make: *** [obj/local/armeabi/objs/irrlicht/os.o] Error 1 Does anyone know how to solve this issue?? any help will be appreciated. A: Found the solution for the issue, In project/default.propeties , I have changed: target=android-4 to target=android-18 And in include/IrrCompileConfig.h I have commented out: //#define _IRR_COMPILE_WITH_OGLES1_ since I need only OpenGL ES 2. This solved the issue , irrlichtandroid compiled successfully using ndk-build , libirrlicht.so file generated in my project libs folder.
{ "pile_set_name": "StackExchange" }
Q: How can I retrieve an image from SQL Server using ASP.NET MVC with the Entity Framework? I have seen other questions about this, but I have not seen a complete solution for this. I am using ASP.NET MVC with the Entity Framework, and I have a SQL Server database using an image datatype. View: <% foreach (var v in (IEnumerable<MyNamespace.Models.MyObject>)ViewData.Model) { %> <span> <%= v.Name %> </span> <br /> <span> <%= v.Description %> </span> <br /> <!-- Display image here --> <% } %> Controller: public ActionResult Index() { ViewData.Model = _db.MyObject.ToList(); return View(); } What do I need to do in my view and in my controller to insert my image while doing my best to stay with the principles of ASP.NET MVC? What I have so far: View: <img src="<%= Url.Action( "ShowImage", "Controller", new { id = v.ID } ) %>" /> Controller: public ActionResult ShowImage(int id) { } This is what I have so far, but I could be way off. It seems like there should be a much easier method to do this. A: The View looks fine, I just use src="/Controller/GetImage?id=xxxxx" which is effectively the same. The Controller action is a little different in that it's returning a FileContentResult thus. My image in the DB also stores the MIME type, the images are uploaded to the server so I just grab that at upload time. public FileContentResult GetImage(Guid ImageID) { Image image = (from i in myRepository.Images where i.ImageID == ImageID select i).SingleOrDefault(); if (image == null) { return File(System.IO.File.ReadAllBytes(Server.MapPath("/Content/Images/nophoto.png")), "image/png"); } else { return File(image.ImageBlob, image.ImageMimeType); } } Code to the Image class [Table(Name="Images")] public class Image { [Column(IsPrimaryKey = true, IsDbGenerated = true, AutoSync = AutoSync.OnInsert)] public Guid ImageID { get; set; } [Column] public bool OnDisk { get; set; } [Column] public string ImagePath { get; set; } [Column] public byte[] ImageBlob { get; set; } [Column] public string ImageMimeType { get; set; } [Column(AutoSync = AutoSync.Always, DbType = "rowversion NOT NULL", CanBeNull = false, IsDbGenerated = true, IsVersion = true)] public Binary ConcurrencyStamp { get; set; } } A: Using Lazurus' answer, I have come up with the following solution. I like his approach, but for simplicity's sake, I am using this to get started and I think it is the best (for now) to help other's understand it. Any better answers are welcome, as well as criticisms about this code since I know it is not very elegant at all. public FileContentResult ShowImage(long ID) { return File( _db.Vehicles.First(m => m.ID == ID).Picture, MediaTypeNames.Image.Jpeg); }
{ "pile_set_name": "StackExchange" }
Q: WPF editable master-detail with DataGrid updating on save I'm very new to WPF, so I thought I'd start with something simple: A window that allows users to manage users. The Window contains a DataGrid along with several input controls to add or edit users. When a user selects a record in the grid, the data is bound to the input controls. The user can then make the required changes & click the "Save" button to persist the changes. What's happening however, is that as soon as a user makes a change in one of the input controls, the corresponding data in the DataGrid gets updated as well, before the "Save" button was clicked. I would like the DataGrid to only be updated once the user clicks "Save". Here is the XAML for the view: <Window x:Class="LearnWPF.Views.AdminUser" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:vms="clr-namespace:LearnWPF.ViewModels" Title="User Administration" Height="400" Width="450" ResizeMode="NoResize"> <Window.DataContext> <vms:UserViewModel /> </Window.DataContext> <StackPanel> <GroupBox x:Name="grpDetails" Header="User Details" DataContext="{Binding CurrentUser, Mode=OneWay}"> <Grid> <Grid.RowDefinitions> <RowDefinition Height="Auto" /> <RowDefinition Height="Auto" /> <RowDefinition Height="Auto" /> <RowDefinition Height="Auto" /> <RowDefinition Height="Auto" /> <RowDefinition Height="*" /> </Grid.RowDefinitions> <Grid.ColumnDefinitions> <ColumnDefinition /> <ColumnDefinition /> </Grid.ColumnDefinitions> <Label Grid.Column="0" Grid.Row="0">First Name:</Label> <TextBox Grid.Column="1" Grid.Row="0" Style="{StaticResource TextBox}" Text="{Binding FirstName}"></TextBox> <Label Grid.Column="0" Grid.Row="1">Surname:</Label> <TextBox Grid.Column="1" Grid.Row="1" Style="{StaticResource TextBox}" Text="{Binding LastName}"></TextBox> <Label Grid.Column="0" Grid.Row="2">Username:</Label> <TextBox Grid.Column="1" Grid.Row="2" Style="{StaticResource TextBox}" Text="{Binding Username}"></TextBox> <Label Grid.Column="0" Grid.Row="3">Password:</Label> <PasswordBox Grid.Column="1" Grid.Row="3" Style="{StaticResource PasswordBox}"></PasswordBox> <Label Grid.Column="0" Grid.Row="4">Confirm Password:</Label> <PasswordBox Grid.Column="1" Grid.Row="4" Style="{StaticResource PasswordBox}"></PasswordBox> </Grid> </GroupBox> <StackPanel Orientation="Horizontal"> <Button Style="{StaticResource Button}" Command="{Binding SaveCommand}" CommandParameter="{Binding CurrentUser}">Save</Button> <Button Style="{StaticResource Button}">Cancel</Button> </StackPanel> <DataGrid x:Name="grdUsers" AutoGenerateColumns="False" CanUserAddRows="False" CanUserResizeRows="False" Style="{StaticResource DataGrid}" ItemsSource="{Binding Users}" SelectedItem="{Binding CurrentUser, Mode=OneWayToSource}"> <DataGrid.Columns> <DataGridTextColumn Header="Full Name" IsReadOnly="True" Binding="{Binding FullName}" Width="2*"></DataGridTextColumn> <DataGridTextColumn Header="Username" IsReadOnly="True" Binding="{Binding Username}" Width="*"></DataGridTextColumn> </DataGrid.Columns> </DataGrid> </StackPanel> </Window> The Model has nothing special in it (the base class merely implements the INotifyPropertyChanged interface & firing the associated event): public class UserModel : PropertyChangedBase { private int _id; public int Id { get { return _id; } set { _id = value; RaisePropertyChanged("Id"); } } private string _firstName; public string FirstName { get { return _firstName; } set { _firstName = value; RaisePropertyChanged("FirstName"); 
RaisePropertyChanged("FullName"); } } private string _lastName; public string LastName { get { return _lastName; } set { _lastName = value; RaisePropertyChanged("LastName"); RaisePropertyChanged("FullName"); } } private string _username; public string Username { get { return _username; } set { _username = value; RaisePropertyChanged("Username"); } } public string FullName { get { return String.Format("{0} {1}", FirstName, LastName); } } } The ViewModel (IRemoteStore provides access to the underlying record store): public class UserViewModel : PropertyChangedBase { private IRemoteStore _remoteStore = Bootstrapper.RemoteDataStore; private ICommand _saveCmd; public UserViewModel() { Users = new ObservableCollection<UserModel>(); foreach (var user in _remoteStore.GetUsers()) { Users.Add(user); } _saveCmd = new SaveCommand<UserModel>((model) => { Users[Users.IndexOf(Users.First(e => e.Id == model.Id))] = model; }); } public ICommand SaveCommand { get { return _saveCmd; } } public ObservableCollection<UserModel> Users { get; set; } private UserModel _currentUser; public UserModel CurrentUser { get { return _currentUser; } set { _currentUser = value; RaisePropertyChanged("CurrentUser"); } } } And for the sake of completeness, here's the implementation of my Save ICommand (this doesn't actually persist anything yet, as I wanted to get the databinding working correctly first): public class SaveCommand<T> : ICommand { private readonly Action<T> _saved; public SaveCommand(Action<T> saved) { _saved = saved; } public bool CanExecute(object parameter) { return true; } public event EventHandler CanExecuteChanged; public void Execute(object parameter) { _saved((T)parameter); } } As is apparent, I'm looking to implement this using a pure MVVM pattern. I've tried setting the bindings in the DataGrid to OneWay, but this causes changes to not be reflected in the grid (although new entries do get added). I've also had a look at this SO question, which suggested using a "selected" property on the ViewModel. My original implementation, as posted above, already had such a property (called "CurrentUser"), but with the current binding configuration, the grid is still updated as users make changes. Any assistance would be greatly appreciated, as I've been bumping my head against this issue for several hours now. If I've left anything out, please comment & I will update the post. Thank you. A: Thank you for providing that much of code, it was much easier for me to understand your question. First, I'll explain your current "User input -> Data grid" flow bellow: When you are typing, for example, text inside Username: TextBox, the text that you are typing is eventually, at some point, backed inside the TextBox.Text property value, in our case, is the current UserModel.Username property, because they are binded and he is the property value: Text="{Binding UserName}"></TextBox> The fact that they are binded means that no matter when you set UserModel.Username property, PropertyChanged is raised and notifies for a change: private string _username; public string Username { get { return _username; } set { _username = value; RaisePropertyChanged("Username"); // notification } } When PropertyChanged is raises, it notifies the changes for all UserModel.Username subscribers, in our case, one of the DataGrid.Columns, is a subscriber. 
<DataGridTextColumn Header="Username" IsReadOnly="True" Binding="{Binding Username}" Width="*"></DataGridTextColumn> The problem with the flow above starts at the place where you back the user input text. What you need is a place to back your user input text without setting it directly to the current UserModel.Username property, because if it will, it will start the flow I described above. I would like the DataGrid to only be updated once the user clicks "Save" My solution for your question is instead of directly backing the TextBoxes texts inside the current UserModel, backing the texts inside a temporary place, so when you click on "Save", it will copy the text from there to the current UserModel, and the properties set accessor inside CopyTo method will automatically update the DataGrid. I made the following changes for your code, the rest left the same: View <GroupBox x:Name="GroupBoxDetails" Header="User Details" DataContext="{Binding Path=TemporarySelectedUser, Mode=TwoWay, UpdateSourceTrigger=LostFocus}"> ... <Button Content="Save" Command="{Binding Path=SaveCommand}" CommandParameter="{Binding Path=TemporarySelectedUser}"/> // CommandParameter is optional if you'll use SaveCommand with no input members. ViewModel ... public UserModel TemporarySelectedUser { get; private set; } ... TemporarySelectedUser = new UserModel(); // once in the constructor. ... private UserModel _currentUser; public UserModel CurrentUser { get { return _currentUser; } set { _currentUser = value; if (value != null) value.CopyTo(TemporarySelectedUser); RaisePropertyChanged("CurrentUser"); } } ... private ICommand _saveCommand; public ICommand SaveCommand { get { return _saveCommand ?? (_saveCommand = new Command<UserModel>(SaveExecute)); } } public void SaveExecute(UserModel updatedUser) { UserModel oldUser = Users[ Users.IndexOf( Users.First(value => value.Id == updatedUser.Id)) ]; // updatedUser is always TemporarySelectedUser. updatedUser.CopyTo(oldUser); } ... Model public void CopyTo(UserModel target) { if (target == null) throw new ArgumentNullException(); target.FirstName = this.FirstName; target.LastName = this.LastName; target.Username = this.Username; target.Id = this.Id; } User input --text input--> Temporary user --click Save--> Updates user and UI. Its seems that your MVVM approach is View-First, one of many "View-First" approach guidelines is for each View you should create a corresponding ViewModel. So, it would be more "accurate" to rename your ViewModel after the View its abstracting, e.g. rename UserViewModel to AdminUserViewModel. Also, You can rename your SaveCommand to Command because it is answers the whole command pattern solution, rather then the special "saving" case. I would suggest you to use one of the MVVM frameworks (MVVMLight is my recommendation) as best practice for MVVM study, there are plenty out there. Hope it helped.
{ "pile_set_name": "StackExchange" }
Q: Why spring beans are not created in jar files, but in class files I am using Spring for some dependency injection via the annotations. The problem is that whenever I start the application with the .jar created by Gradle in the classpath, I'm getting the following exception: "org.springframework.beans.factory.NoSuchBeanDefinitionException" But if /build/classes/main/ is in the classpath, the beans are created and no exception is thrown. So the beans are created in build/classes/main/ but not in build/libs/*.jar A: Set @ComponentScan("classpath*:org.mypackage") to let Spring scan jars as well.
{ "pile_set_name": "StackExchange" }
Q: Can't access register variable in a loop I've been following this example playbook to create Rackspace servers using Ansible http://nicholaskuechler.com/2015/01/09/build-rackspace-cloud-servers-ansible-virtualenv/ It works great, but only works on one server at a time, so I am trying to make it more dynamic, using with_items to loop through the servers I want to build: tasks: - name: Rackspace cloud server build request local_action: module: rax credentials: "{{ credentials }}" name: "{{ item }}" flavor: "{{ flavor }}" image: "{{ image }}" region: "{{ region }}" files: "{{ files }}" wait: yes state: present networks: - private - public with_items: - server-app-01 - server-app-02 register: rax This creates the servers fine, but when I try to add this to the deploy group using the method in the link, I get an error, as expected, since there is now a 'results' key. I've tried all kinds of ways to target this in the way that I perceive the documentation to allude to: - name: Add new cloud server to host group local_action: module: add_host hostname: "{{ item.success.name }}" ansible_ssh_host: "{{ item.success.rax_accessipv4 }}" ansible_ssh_user: root groupname: deploy with_items: rax.results (I've also tried many other kinds of ways to target this) But I get "One or more undefined variables: 'list object' has no attribute 'rax_accessipv4'". This is a stripped down version of the object I get back from rax, through debug. These servers don't exist anymore. http://pastebin.com/NRvM7anS Can anyone tell me where I'm going wrong? I'm starting to go a bit mad. A: If you notice, the type of rax.results.success is a list. So this: hostname: "{{ item.success.name }}" should be hostname: "{{ item.success[0].name }}" or hostname: "{{ item['success'][0]['name'] }}". { "changed": true, "msg": "All items completed", "results": [ { "instances": [ { "name": "server-app-01", "rax_accessipv4": "134.213.51.171", "rax_accessipv6": "2a00:1a48:7808:101:be76:4eff:fe08:5251", } ], "item": "server-app-01", "success": [ { "name": "server-app-01", "rax_accessipv4": "134.213.51.171", "rax_accessipv6": "2a00:1a48:7808:101:be76:4eff:fe08:5251", } ], "timeout": [] }, ...... }
{ "pile_set_name": "StackExchange" }
Q: Gridview Scroll with OnTouchListener Basically, I want to fast select item as well as scroll Gridview. In OnItemClickListener scroll working fine but its not fast select multiple item like OnTouchListener. My code: gridView.setOnTouchListener(new View.OnTouchListener() { public boolean onTouch(View v, MotionEvent me) { int action = me.getActionMasked(); float currentXPosition = me.getX(); float currentYPosition = me.getY(); int position = gridView.pointToPosition((int) currentXPosition, (int) currentYPosition); switch (action) { case (MotionEvent.ACTION_MOVE): break; case (MotionEvent.ACTION_UP): if (position != -1) { //Here my logic to add item in basket list return true; } } return false; } }); Right now its scroll and add item as well. How I stop ACTION_UP(selection Item) event after ACTION_MOVE(when scroll)? A: I was applying this code for scroll gridview on OntouchListener and prevent scroll on select item. And work fine in my side. If anyone have better solution bring it on. My Code: private int moveCount=0; private boolean ignore = false; gridView.setOnTouchListener(new View.OnTouchListener() { public boolean onTouch(View v, MotionEvent me) { v.getParent().requestDisallowInterceptTouchEvent(true); int action = me.getActionMasked(); // MotionEvent types such as ACTION_UP, ACTION_DOWN float currentXPosition = me.getX(); float currentYPosition = me.getY(); int position = gridView.pointToPosition((int) currentXPosition, (int) currentYPosition); if (ignore && action == MotionEvent.ACTION_UP) return false; switch (action) { case (MotionEvent.ACTION_MOVE): moveCount++; Log.d(DEBUG_TAG, "Action was MOVE " + position); if(moveCount>3) { ignore = true; } break; case (MotionEvent.ACTION_UP): addItem(position); Log.d(DEBUG_TAG, "Action was UP " + position); return true; case (MotionEvent.ACTION_DOWN): Log.d(DEBUG_TAG, "Action was DOWN " + position); moveCount=0; ignore = false; return true; case (MotionEvent.ACTION_CANCEL): addItem(position); moveCount=0; ignore = false; gridView.setFocusable(true); Log.d(DEBUG_TAG, "Action was CANCEL " + position); return true; case (MotionEvent.ACTION_OUTSIDE): Log.d(DEBUG_TAG, "Movement occurred outside bounds " + "of current screen element " + position); return true; } Log.d("clickTouch=", "" + position); return false; } });
{ "pile_set_name": "StackExchange" }
Q: how to search negative number in solr? In solr I want to search one field with negative number like nodeId:-1. in schema.xml, I defined it like this: <field name="nodeId" type="int" indexed="true" stored="true" /> solr throws error when use "nodeId:-1" to search like this: org.apache.lucene.queryParser.ParseException: Cannot parse 'storeId:-1': Encountered " "-" "- "" at line 1, column 8. Was expecting one of: "(" ... "*" ... ... ... ... ... "[" ... "{" ... ... I must search with storeId:\-1 or storeId:"-1" to get answer. now the question is : Can I modify any solr configration files to search without any escape characters? Or another way to resolve this problem without modify java code. Thanks. A: I personally think escaping properly inside your Java code is the better way. ClientUtils.escapeQueryChars would be the method of choice. A: "-" is a special character for the query parser, which is used to mark some clauses as prohibited. If you don't want to escape this character, you need to change your query parser. You may want to give a try to the raw query parser: q={!raw f=nodeId}-1 but it has none of the features of the default query parser. Actually, the raw query parser only allows you to run pure term queries.
{ "pile_set_name": "StackExchange" }
Q: Java Spring JDBC template problem public List<Weather> getWeather(int cityId, int days) { logger.info("days: " + days); return getSimpleJdbcTemplate().query("SELECT weather.id, cities.name, weather.date, weather.degree " + "FROM weather JOIN cities ON weather.city_id = cities.id " + "WHERE weather.city_id = ? AND weather.date BETWEEN now()::date AND (now() + '? days')::date", this.w_mapper, cityId, days); } error : org.springframework.web.util.NestedServletException: Request processing failed; nested exception is org.springframework.dao.DataIntegrityViolationException: PreparedStatementCallback; SQL [SELECT weather.id, cities.name, weather.date, weather.degree FROM weather JOIN cities ON weather.city_id = cities.id WHERE weather.city_id = ? AND weather.date BETWEEN now()::date AND (now() + '? days')::date]; The column index is out of range: 2, number of columns: 1.; nested exception is org.postgresql.util.PSQLException: The column index is out of range: 2, number of columns: 1. it works with : public List<Weather> getWeather(int cityId, int days) { logger.info("days: " + days); return getSimpleJdbcTemplate().query("SELECT weather.id, cities.name, weather.date, weather.degree " + "FROM weather JOIN cities ON weather.city_id = cities.id " + "WHERE weather.city_id = ? AND weather.date = now()::date", this.w_mapper, cityId); } so the problem is when im using two ? marks in my query. how can i make it work to with 2 ? marks??? A: The problem is probably in this part: '? days' The question mark is inside a literal string and so it is not recognized by the sql parser. You could try to rewrite it using the string concatenation operator, although I'm not 100% sure that is valid syntax in this case. According to this page on the postgres wiki you should be able to simply omit the string 'days', since adding a date and an integer is interpreted as adding the specified number of days. BETWEEN now()::date AND now()::date + ? A: Rewrite the SQL part AND weather.date BETWEEN now()::date AND (now() + '? days')::date as AND weather.date BETWEEN now()::date AND ? and set it with a fullworthy java.sql.Date value instead of days. Calendar calendar = Calendar.getInstance(); calendar.add(Calendar.DATE, days); Date endDate = new Date(calendar.getTimeInMillis()); // ... (once again, it's java.sql.Date, not java.util.Date!)
{ "pile_set_name": "StackExchange" }
Q: Passing parameters in windows service I want to develop a windows service which will be accepting a DataTable from an aspx page. Both the windows service and the website are hosted on the same machine. Also, I need to set a date and time at which this service is to work. This date and time can change according to customer needs; once again, the date and time are to be fed from the aspx page.

A: The question is a bit broad, meaning it's hard to give a good example with a code sample unless you're a bit more specific about what you have already figured out, and what you need help with. Based on that, I'm guessing you're not sure where to start. Two possibilities:

Simple: Have the ASPX page and the service both point to the same database. In order for the ASPX page to send data to the service, the ASPX page should update the database; the service should read from it. Just set the service up to poll periodically. You can also store your dates in the DB and have the ASPX page update the dates.

More advanced: Use WCF (Windows Communication Foundation) to allow the service to listen for requests from the ASPX page. (Google WCF for example code.) You can pass DataSets as parameters into functions, even using WCF. A rough sketch of such a contract is shown below.
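To make the WCF option concrete, here is a minimal, hedged sketch of a service contract the Windows service could host and the ASPX page could call. The interface name, operation names and hosting/binding details are assumptions, not something prescribed above:

using System;
using System.Data;
using System.ServiceModel;

// Contract exposed by the Windows service; all names are placeholders.
[ServiceContract]
public interface ISchedulerService
{
    // Receives the data to process.
    [OperationContract]
    void SubmitData(DataTable data);

    // Lets the ASPX page change when the service should run.
    [OperationContract]
    void SetRunTime(DateTime runAt);
}

// Implementation skeleton hosted inside the Windows service.
public class SchedulerService : ISchedulerService
{
    public void SubmitData(DataTable data)
    {
        // store the table for later processing
    }

    public void SetRunTime(DateTime runAt)
    {
        // store the schedule
    }
}

Hosting is then a matter of opening a ServiceHost for SchedulerService in the service's OnStart and closing it in OnStop. Note that, as far as I recall, a DataTable needs its TableName set before it can be serialized across WCF.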
{ "pile_set_name": "StackExchange" }
Q: How do I explicitly dismiss a toast notification without having to wait for it to disappear on its own? On Windows Phone when events of interest happen on the phone such as finding a nearby Wi-Fi network when your disconnected, a new text message arrives or an app wants to let you know something happened when you are not in the app a toast notification is displayed in the system tray area in the user's accent color with a message. How do I dismiss the toast notification without having to tap it and hit back or waiting for it to disappear on it's own? A: Simply swipe the toast notification to the right and it will leave your screen. If you have multiple pending toast notifications (for instance, if you receive multiple texts concurrently), you will have to swipe each of them in turn. However, this will dismiss them. You don't even have to unlock the phone.
{ "pile_set_name": "StackExchange" }
Q: ng-options predefined option Object Have a look at my example http://plnkr.co/edit/21ewxVIaRm4IHyF3TgoD?p=preview I need the option object to be stored as ng-model so i use for ng-options ng-options="m as m.name for m in list" However, if i set ng-model manually as the object it does not select the correct option in the dropdown list by default. It stores the data correctly, but it only pre-selects an option if use this expression ng-options="m.name as m.name for m in list" but this stores a string on the model instead of the options object. Am I doing something incorrectly? Goal: set ng-model to the default option object and have it be automatically selected in the dropdown. A: Updated plunker. For the version you are using (1.0.8), you would have to resolve the object using a loop: $scope.selected = { name:"Something" } $scope.setSelected = function() { angular.forEach($scope.list, function(item) { if (item.name == $scope.selected.name) { $scope.selected = item; } }) } $scope.setSelected(); More recent versions, have track by syntax supported in the ng-options. So you could write: ng-options="m.name for m in list track by m.name" And this would set the object that matches the predicate.
{ "pile_set_name": "StackExchange" }
Q: SQL Trigger Store Procedure Compile Error I have a question on what is giving me this error:

Msg 547, Level 16, State 0, Procedure AddIntoClass, Line 12
The INSERT statement conflicted with the FOREIGN KEY constraint "FK_CourseEnrolled_StudentDemographic". The conflict occurred in database "PK_Final", table "dbo.StudentDemographic", column 'StudentID'.

This is my code so far:

Create Procedure AddIntoClass(@StudentID int, @CourseName nvarchar(30), @SectionNumber nvarchar(30),
                              @TimeOfDay nvarchar(30), @Term int)
As
Begin
    Insert into CourseEnrolled
    Values(@StudentID, @CourseName, @SectionNumber, @TimeOfDay, @Term)
End

EXEC AddIntoClass 2, 'Biology', '1003', '2:00pm', 2

Any help would be appreciated, thanks!

A: Before inserting into the CourseEnrolled table, the StudentID value of "2" must already exist in the StudentDemographic table. You are inserting a StudentID value of "2" into CourseEnrolled, which has a foreign key on "StudentID" referencing the StudentDemographic table.
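In other words, insert (or verify) the parent row first. A sketch, with a hypothetical column list since the real StudentDemographic schema is not shown in the question:

-- The columns other than StudentID are made up; replace them with the
-- actual columns of dbo.StudentDemographic.
INSERT INTO dbo.StudentDemographic (StudentID, FirstName, LastName)
VALUES (2, 'Jane', 'Doe');

-- Now the foreign key FK_CourseEnrolled_StudentDemographic can be satisfied:
EXEC AddIntoClass 2, 'Biology', '1003', '2:00pm', 2;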
{ "pile_set_name": "StackExchange" }
Q: What are the ObjectManager Factories differences? I see that Magento 2 has 3 factory classes (4 if you count the abstract one). \Magento\Framework\ObjectManager\Factory\Dynamic\Developer \Magento\Framework\ObjectManager\Factory\Dynamic\Production \Magento\Framework\ObjectManager\Factory\Compiled All of the above extend \Magento\Framework\ObjectManager\Factory\AbstractFactory. I assumed that when on developer mode, the Developer factory would be used, but sometimes the Compiled one is used and I cannot pinpoint the conditions for this. Can someone please explain when each factory is used and what are the differences between them? A: And condition is here: \Magento\Framework\App\EnvironmentFactory::createEnvironment It does check if file with compiled content is exist for current area: "/var/di/global.ser", "/var/di/frontend.ser" and "/var/di/adminhtml.ser" correspondingly. The difference is in performance of Object Manager. Constructor dependencies and plugins information is serialized in those files, so Object Manager instantiating objects faster, without using Reflection and calculating chains of dependencies. And doesn't matter if your instance is running in developer or other mode. Object Manager factory is resolved based on availability of those files only. If file for corresponding area is available then "Compiled" factory is used, if file is not available then "Developer" factory is used instead. And "Production" factory is left out and will be removed from the code base.
{ "pile_set_name": "StackExchange" }
Q: Add FileUpload dynamically with a different name field I need the ability to add multiple file uploads based on the user's needs, BUT the user has to be able to assign a name to each upload for later usage. As you can see, I'm only dynamically adding more file uploads, but assigning a name to these uploads seems to be my problem. Is there any way I can achieve this?

The code in my View:

@using Microsoft.Web.Helpers
@model MusicNews.Web.Models.ViewModel.ArticleCreateModel
@{
    ViewBag.Title = "Create";
}
@section content {
    @using (Html.BeginForm("Create", "Article", FormMethod.Post, new { enctype = "multipart/form-data" }))
    {
        ...
        @FileUpload.GetHtml("Files", 2, true, false, "Add more")
        <p>
            <input type="submit" value="Create" />
        </p>
    }
}

The code in my controller looks like:

[Authorize]
public ActionResult Create()
{
    ViewBag.ArticleTypes = new SelectList(ArticleTypes, "Type");
    return View();
}

[HttpPost]
[Authorize]
public ActionResult Create(ArticleCreateModel article)
{
    var files = Request.Files;
    if (ModelState.IsValid)
    {
        ...
    }
    return View(article);
}

A: You probably have to create the additional uploads yourself. This can be done using jQuery, for example.

Here's the HTML:

<div id="uploads">
    <div id="uploadtemplate">
        <input type="file" name="upload" />
        <input type="text" name="FileName" />
    </div>
    <div>
        <a href="#" id="addFile">Add file</a>
    </div>
</div>

On load, we'll clone the "template" into a variable for later use. On click, we'll clone the template and add it to the document.

$(function() {
    $('#addFile').data('uploadtemplate', $('#uploadtemplate').attr('id', '').clone());
    $('#addFile').click(function(e) {
        $(this).data('uploadtemplate').clone().insertBefore($(this));
        e.preventDefault();
    });
});

Your model will be:

public class Foo
{
    public string[] FileName { get; set; } // will contain all file names given by the user
}

Then parse Request.Files and do the magic you know :-) The Foo.FileName field will contain a file name given by the user for every uploaded file. You can use that, as the first file in Request.Files will map to Foo.FileName[0] and so on.
{ "pile_set_name": "StackExchange" }
Q: How can I get mongo documents with multiple non-exclusive selectors using Mongoose? I need to build a single query to return : Documents from their IDs (optional) Documents matching a text search string (optional) The other documents sorted by score and with a count limited by an integer argument The total limit is the provided one + the length of the documents array IDs of the first condition. I used to work with meteor where you can return an array of queries cursors. In this case, I am working with a mongoose backend and I am not sure of how to proceed. I assume I need to use Model.aggregate and provide my conditions as an array. However, the request fails with the error Arguments must be aggregate pipeline operators. Each of my conditions works fine individually with a regular find() query. Here is my graphQL query resolver, where I can't find what is going wrong: async (root, { search, selected = 0, limit = 10 }, { models: { tag } }) => { try { let selector = [{}] // {} should return the documents by default if no other condition is set if (selected.length) selector.push({ _id: { $in: selected } }) if (search && search.length) selector.push({ $text: { $search: search, $caseSensitive: false, $diacriticSensitive: false } }) const tags = await tag.aggregate(selector).sort('-score').limit(limit + selected.length) return { ok: true, message: "Tags fetched", data: tags } } catch (err) { return { ok: false, message: err.message }; } } ), When I log the selector with all the arguments set, it returns an array of the following form: [ {}, { _id: { '$in': [Array] } }, { '$text': { '$search': 'test', '$caseSensitive': false, '$diacriticSensitive': false } } ] UPDATE Based on @Ashh answer, with an additional $or operator, the full agregator variable look like this: { '$match': { '$or': { _id: { '$in': [ '5e39745e0ac14b1731a779a3', '5e39745d0ac14b1731a76984' ] }, '$text': { '$search': 'test', '$caseSensitive': false, '$diacriticSensitive': false } } } }, { '$sort': { score: -1 } }, { limit: 12 } I still get the "Arguments must be aggregate pipeline operators" error, and I don't see where, if the $text argument is not present, I get the default documents by score. @Ashh, I'll wait for your updated answer to validate it. Thanks again for your help. A: Mongoose aggregate() function uses $match stage which is equivalent to the find() but accepts some stages as array of elements to filter the documents. You can check the example here Mongoose Aggregate. And rest is your code fault. It should be async (root, { search, selected = 0, limit = 10 }, { models: { tag } }) => { try { const aggregate = [] let selector = { $match: { }}; aggregate.push(selector) if (selected.length) { aggregate[0].$match['$or'] = []; aggregate[0].$match.$or.push({ _id: { $in: selected }}); } if (search && search.length) { aggregate[0].$match['$or'] = aggregate[0].$match['$or'] ? aggregate[0].$match['$or'] : [] aggregate[0].$match.$or.push({ $text: { $search: search, $caseSensitive: false, $diacriticSensitive: false }}) } aggregate.push({ $sort: { score: - 1 }}) aggregate.push({ $limit: limit }) const tags = await tag.aggregate(aggregate) return { ok: true, message: "Tags fetched", data: tags }; } catch (err) { return { ok: false, message: err.message }; } };
{ "pile_set_name": "StackExchange" }
Q: Implement or replace commons logging I use Spring which somehow relies on org.apache.commons.logging. I have got my own logger that already implements SLF4J and some proprietary protocol. So I'm really keen to use my logger. In SLF4J you implement org.slf4j.impl.StaticLoggerBinder and use some factory and logger interface. Is it the same in commons logging? Implement org.apache.commons.logging.LogFactory and use a few interfaces? Is there some reference implementation of LogFactory? Or am I thinking completely wrong here? All these different logging "standards" are driving me crazy. Thanks A: It should be quite easy to shut off commonslogging in Spring and integrate it with slf4j instead. At that point you can simply pass your own logger to Spring and use it through slf4j functions. You can find the information you need at paragraph 1.3.2 here. Also, I'm not a fan of reinventing the wheel and I would strongly suggest you to look at existing or brand new logging frameworks like LogBack for instance.
{ "pile_set_name": "StackExchange" }
Q: Going back to previous if statement when fail if statement I only started programming in Java 1 week ago so sorry for all the bad code. I have been recreating a "refinement system" from a game I used to play just because I thought it would be a good idea for a beginner project. This system improves gear based on a percentage chance of success. As you get a higher refinement, the percentage chance goes down. One method of refining in my project is where every failed refinement doesn't reset your refinement level to 0, but instead decreases the refinement level by 1 every time you fail. I have already successfully created a method that resets to 0 upon fail, but can't seem to figure out how to decrease the level by 1 upon fail. So my question is, how can I get my "refinement level" to decrease by 1 upon fail, rather than reset to 0. Also, the console needs to return to the previous IF statement upon fail otherwise it will not work. This was my attempt: import java.util.Random; public class Tisha { public static void main(String[] args) throws InterruptedException { if (new Random().nextDouble() <= 0.535) { System.out.println("Refine successful - Refine level 1"); } else { System.out.println("Refine failed - Refine reset"); } Thread.sleep(250); if (new Random().nextDouble() <= 0.335) { System.out.println("Refine successful - Refine level 2"); } else { System.out.println("Refine failed - Refine level 1"); } Thread.sleep(250); if (new Random().nextDouble() <= 0.335) { System.out.println("Refine successful - Refine level 3"); } else { System.out.println("Refine failed - Refine level 2"); } Thread.sleep(250); if (new Random().nextDouble() <= 0.335) { System.out.println("Refine successful - Refine level 4"); } else { System.out.println("Refine failed - Refine level 3"); } Thread.sleep(250); if (new Random().nextDouble() <= 0.335) { System.out.println("Refine successful - Refine level 5"); } else { System.out.println("Refine failed - Refine level 4"); } Thread.sleep(250); if (new Random().nextDouble() <= 0.335) { System.out.println("Refine successful - Refine level 6"); } else { System.out.println("Refine failed - Refine level 5"); } Thread.sleep(250); if (new Random().nextDouble() <= 0.335) { System.out.println("Refine successful - Refine level 7"); } else { System.out.println("Refine failed - Refine level 6"); } Thread.sleep(250); if (new Random().nextDouble() <= 0.335) { System.out.println("Refine successful - Refine level 8"); } else { System.out.println("Refine failed - Refine level 7"); } Thread.sleep(250); if (new Random().nextDouble() <= 0.285) { System.out.println("Refine successful - Refine level 9"); } else { System.out.println("Refine failed - Refine level 8"); } Thread.sleep(250); if (new Random().nextDouble() <= 0.235) { System.out.println("Refine successful - Refine level 10"); } else { System.out.println("Refine failed - Refine level 9"); } Thread.sleep(250); if (new Random().nextDouble() <= 0.155) { System.out.println("Refine successful - Refine level 11"); } else { System.out.println("Refine failed - Refine level 10"); } Thread.sleep(250); if (new Random().nextDouble() <= 0.085) { System.out.println("Refine successful - Refine level 12"); } else { System.out.println("Refine failed - Refine level 11"); } } } Output: Refine successful - Refine level 1 Refine failed - Refine level 1 Refine failed - Refine level 2 Refine failed - Refine level 3 Refine failed - Refine level 4 Refine successful - Refine level 6 Refine failed - Refine level 6 Refine failed - Refine level 7 Refine 
successful - Refine level 9 Refine failed - Refine level 9 Refine successful - Refine level 11 Refine successful - Refine level 12 Process finished with exit code 0 Sorry for the messy code, I warned you. A: Set your refinement level as a variable, and increase or decrease accordingly. int refinementLevel = 0; if (new Random().nextDouble() <= 0.155) { refinementLevel++; System.out.println("Refine successful - Refine level "+refinementLevel); } else { refinementLevel--; System.out.println("Refine failed - Refine level "+refinementLevel); } A: My try on this problem, feel free to ask if anything is not clear. I used a do-while loop to solve your "go-back" problem and some variables to make the program dynamic class Tisha { // An integer field to store the current level of refinement. int level = 0; // An array of probabilities for each level of refinement. double[] probs = new double[]{ 0.535, 0.335, 0.335, 0.335, 0.335, 0.335, 0.335, 0.335, 0.285, 0.235, 0.155, 0.085 }; // The random number generator Random random = new Random(); // This function tries to perform a refinement public void tryRefinement() { if (random.nextDouble() <= probs[level]) { level++; System.out.println("Refine successful - Refine level " + level); } else { level--; if (level >= 0) System.out.println("Refine failed - Refine level " + level); else System.out.println("Refine failed - Refine reset"); } } // The main method public static void main(String[] args) throws InterruptedException { // Create a 'Tisha' object (are you familiar with OOP?) Tisha tisha = new Tisha(); // Keep refininig until the level reaches 0 do { tisha.tryRefinement(); Thread.sleep(250); } while (tisha.level > 0); } }
{ "pile_set_name": "StackExchange" }
Q: Adding extra values to JS Object Ok I have an HTML list containing the data I need. It looks like this ul li[data-trainer]>trainee li[data-trainer]>trainee li[data-trainer]>trainee I iterate it and want to make an object that looks like this { "Trainer":{0:Trainee1}, "Trainer2":{1:Trainee2,2:Trainee3,3:Trainee4} } I tried this and a bunch more var Data = new Object(); var i = 0; $(".blah-blah-ul li.active").each(function(){ i++; var trainer = $(this).attr("data-trainer"); var trainee = $(this).text(); Data[trainer] = {}; Data[trainer][i] = trainee; }) console.log(Data); Data[trainer][i] = trainee leaves me with only the last trainee in the list. Data from console.log: Object Trainer1: Object 2: Trainee2 Trainer3: Object 4: Trainee6 I tried making an array and using push or making a string but that didn't work. If someone could please recommend a proper solution, it would be greatly appreciated. Here's the HTML <ul class="blah-blah-ul"> <li class="active" data-trainer="Trainer1">Trainee1</li> <li class="active" data-trainer="Trainer1">Trainee2</li> <li data-trainer="Trainer2">Trainee3</li> <li data-trainer="Trainer2">Trainee4</li> <li class="active" data-trainer="Trainer3">Trainee5</li> <li class="active" data-trainer="Trainer3">Trainee6</li> </ul> A: Your problem was in order to have more than one value assigned to the element in the array, the sub element must be an array. This allows for adding multiple Trainee to the Trainer item in the Data object. var Data = new Object(); var i = 0; $(".blah-blah-ul li.active").each(function(){ i++; var trainer = $(this).attr("data-trainer"); var trainee = $(this).text(); if(typeof Data[trainer] === 'undefined') { Data[trainer] = {}; } Data[trainer][i] = {trainee}; }) console.log(Data); <script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script> <ul class="blah-blah-ul"> <li class="active" data-trainer="Trainer1">Trainee1</li> <li class="active" data-trainer="Trainer1">Trainee2</li> <li data-trainer="Trainer2">Trainee3</li> <li data-trainer="Trainer2">Trainee4</li> <li class="active" data-trainer="Trainer3">Trainee5</li> <li class="active" data-trainer="Trainer3">Trainee6</li> </ul> Documentation on Javascript Arrays using Push, can be found here w3schools
{ "pile_set_name": "StackExchange" }
Q: what is the argument we get in callback of UIcontrol push button in Matlab I am showing lot of buttons on image using pushbutton with UIcontrol.(around 20) How to handle callback with single function(which has similar behaviour and i just have to change id for each button,i dont want to write 20 callback for each.) S = uicontrol('style','push',... 'units','pix',... 'position',Pos,... 'string',Button_name,... 'fontsize',10,... 'fontweight','bold'); set(S,'callback',{@pb1_call}) % Set the callbacks. end function [] = pb1_call(varargin) disp(varargin) end A: The Matlab documentation describes this reasonably well, look at uicontrol properties. function pushbutton1_Callback(hObject,eventdata) display Goodbye close(gcbf) The callback fires with the firing object and event data. If you set a Tag onto the uicontrol you could do: function pushbutton1_Callback(hObject,eventdata) buttonID = get(hObject, 'Tag'); switch buttonID case 'button1' ... end Also worth noting, if the callback is a method of a handle class then there are three arguments: function pushbutton1_Callback(self, hObject, eventdata)
{ "pile_set_name": "StackExchange" }
Q: maven-jar-plugin does not include .gitignore file I try to package an application into a jar file with maven. Somehow all files except .gitignore files are added to the jar. Why is this file skipped and how can I disable this? Even if I try to include it like below the include is ignored and the jar file remains empty. <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-jar-plugin</artifactId> <configuration> <includes> <include>**/.gitignore</include> </includes> </configuration> </plugin> maven-jar-plugin version: 3.1.0 maven version: 3.5.2 A: I tried this with a src/main/resources/.gitignore and it worked with the default maven-jar-plugin:2.4, i.e. .gitignore was packaged into the JAR. Then I used the maven-jar-plugin:3.1.0 you mention and it did not work, as you describe. It turned out that it doesn't work from v2.5 onwards.
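Given that observation, one pragmatic workaround, if pinning an older plugin version is acceptable for your build, is to declare the 2.4 version that the test above showed to include .gitignore. This is only a workaround based on that test, not an official fix:

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-jar-plugin</artifactId>
  <version>2.4</version>
</plugin>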
{ "pile_set_name": "StackExchange" }
Q: Docker for Windows and docker-maven-plugin - "SSLException: Unrecognized SSL message, plaintext connection" error I'm using Docker Desktop for Windows v1.13.0 and docker-maven-plugin v0.4.13 on my local Windows 10 Pro machine. I'm using mvn clean package docker:build to build my project and generate the docker image. The build fails: [INFO] ------------------------------------------------------------------------ [INFO] BUILD FAILURE [INFO] ------------------------------------------------------------------------ [INFO] Total time: 25.006 s [INFO] Finished at: 2017-01-19T14:48:45-02:00 [INFO] Final Memory: 68M/619M [INFO] ------------------------------------------------------------------------ [ERROR] Failed to execute goal com.spotify:docker-maven-plugin:0.4.13:build (default-cli) on project monitoramentoRS: Exception caught: java.util.concurrent.ExecutionException: com.spotify.docker.client.shaded.javax.ws.rs.ProcessingException: javax.net.ssl.SSLException: Unrecognized SSL message, plaintext connection? -> [Help 1] Creating the docker image on the command line directly (docker build -t ...) works fine. The maven plugin was working fine with Docker toolbox and Oracle Virtual Box on Windows 7. Therefore, I believe there's a TLS-related configuration issue between docker-maven-plugin and the Docker for Windows daemon. I've tried different configuration combinations using DOCKER_HOST (no port indication, 2375, 2376), DOCKER_TLS_VERIFY, and DOCKER_TLS to no avail. Also tried the "tls" and "tlsverify" attributes of the "advanced" Docker for Windows daemon configuration. Has anyone been able to make docker-maven-plugin create a docker image on Docker for Windows? My %HOME%\.docker\config.json file only contains an auths collection: { "auths": { "our-corporate-private-docker-registry-address": { "auth": "an-authorization-token" }, "https://index.docker.io/v1/": { "auth": "an-authorization-token" } } } Below is the docker-maven-plugin config. <plugin> <groupId>com.spotify</groupId> <artifactId>docker-maven-plugin</artifactId> <version>0.4.13</version> <configuration> <useConfigFile>false</useConfigFile> <!-- true yields the same error --> <registryUrl>${docker.private.registry}</registryUrl> <imageName>${docker.private.registry}/myrepo/myimage</imageName> <imageTags> <imageTag>latest</imageTag> </imageTags> <dockerDirectory>${basedir}/docker</dockerDirectory> <!-- Dockerfile location --> <resources> <resource> <targetPath>/</targetPath> <directory>${project.build.directory}</directory> <include>${project.build.finalName}.${project.packaging}</include> </resource> </resources> </configuration> </plugin> A: There might be some configuration element under %HOME%.docker affecting the communication with docker-maven-plugin. Try to remove your %HOME%.docker folder and restart docker. After that, run the oc login -u user https://url-to-openshift:port --insecure-skip-tls-verify and docker login -u user -p token url-to-private-registry and then open the %HOME%.docker and if your file is like this: { "auths": { "url-to-private-registry": {} }, "credsStore": "wincred" } then remove the credsStore part because the spotify docker-maven-plugin doesn't support it. Example: { "auths": { "url-to-private-registry": {} } } When you run the docker login again, it will generate the token again and you shouldn't have any authentication problem. 
After the login, your %HOME%\.docker\config.json will look like this:

{
    "auths": {
        "url-to-private-registry": {
            "auth": "token-that-docker-maven-plugin-needs-when-property-useConfigFile-is-true"
        }
    }
}

At least, it worked for me.
{ "pile_set_name": "StackExchange" }
Q: Meaning of randomness in space I am a non-math person and have a question about randomness: When generating elements from a finite set using some algorithm, it is clear what it means when saying that elements should be randomly generated. How about for infinite sets? For example if I want to generate real numbers randomly, what does it mean to be random? In general, how is random defined in Euclidean n-space? How about for subsets of n-space, eg. generating random points on the (n-1) unit sphere? Thanks. A: There are two aspects: First, you need to specify a probability distribution. This is a function that says for a subset (ignoring questions of measureability for the moment) how probable it is for a random element to lie in this set. Example: You have a finite set $$ A = \{1, 2, 3, 4 ,5 , 6 \} $$ A probability distribution $p$ would be: $$ p(x) = \frac{1}{6} $$ for $x = 1, ..., 6$. Note that the sum over all probabilities needs to be $1$. For an infinite set like the interval [0, 1], a probability distribution would be a function that has an integral of 1: $$ \int_{0}^1 p(x) d x = 1 $$ The simplest possible example would be the uniform distribution, the constant function $$ p(x) = 1 $$ (This time I implicitly assumed that we are talking about probability distributions that have a Radon-Nikodym density with respect to the Lesbegue measure.) On "infinite" sets like the whole real line (or $\mathbb{R}^n$), there is no uniform distribution, because its integral could not be 1 anymore, but there are a lot of interesting other distributions. The second important aspect is independence: You need to specify if you know anything about the next random element if I tell you about what happended before. If you say: What happens next, i.e. the next random element, is independent of what happened before, one says that the random elements are independent. But there are a lot of interesting situations where this is not so. When you play in a casino and have a fixed budget, you cannot play if you go bankrupt, for example.- In this case "what happens next" does depend on the events that happended before. To pick up one of your examples: The unit sphere in $\mathbb{R}^n$ has finite Lesbegue measure, so that it is possible to define the uniform probability distribution on it. And lets also say that we would like to generate elements that are independent. In the one dimensional case, i.e. the unit circle, we could generate independent uniformly distributed elements of the unit interval $ x \in [0, 1]$ and calculate $$ e^{i x} = \cos(x) + i \sin(x) $$ which will result in numbers that are uniformly distributed on the circle. (In case you don't know about complex numbers, you can write the latter as $(\cos(x), \sin(x))$, in cartesian coordinates in $\mathbb{R}^2$).
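To make the remark about the whole real line concrete: although no uniform distribution exists on all of $\mathbb{R}$, the standard normal density is one example of a perfectly good probability distribution there, since
$$ p(x) = \frac{1}{\sqrt{2\pi}} e^{-x^2/2}, \qquad \int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}} e^{-x^2/2} \, dx = 1. $$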
{ "pile_set_name": "StackExchange" }
Q: Serial Port AT Commands for GSM Modem Device not working on loop I've been currently working on a .net project that sends SMS messages through a GSM Modem Device. I can manage to send a single SMS message just fine, but when I start to loop the AT Commands to send SMS to multiple recipients, the behaviour of the application gets very clunky. Sometimes it could only send an SMS to the first recipient it could find, most of the times none at all. Below is the source code for my method on sending SMS to multiple recipients: String MessageBody = String.Format("Everyone,\n\nEquipment Current Reading: {0} tph\nCurrent Status: {1}\n\n".Replace("\n",Environment.NewLine), CurrentValue, EqValue); SerialPort SP = new SerialPort(); SP.PortName = "COM3"; using (TestEntities TE = new TestEntities()) { List<vwRecipient_MasterList> RecipientList = TE.vwRecipient_MasterList.Where(r => r.Group_id == R_Group).Select(r => r).ToList(); foreach (vwRecipient_MasterList Recipient in RecipientList) { SP.Open(); String formattedRecipientNo = Char.ConvertFromUtf32(34) + Recipient.MobileNo + Char.ConvertFromUtf32(34); SP.Write("AT+CMGF=1" + Char.ConvertFromUtf32(13)); SP.Write("AT+CMGS=" + formattedRecipientNo + Char.ConvertFromUtf32(13)); SP.Write(MessageBody + Char.ConvertFromUtf32(26) + Char.ConvertFromUtf32(13)); SP.Close(); } } A: So I did a bit more research on how to utilize the serial port class based on the MSDN reference (for serial ports) and a few articles around here at SO, and I've come up with this unorthodox solution that involves using the SerialDataReceivedEventHandler provisioned by the SerialPort class and an infinite while loop. First of all, I created two properties in the class scope that is visible for both methods of the SendSMS(WebMethod) and the DataRecieved(Event): // Holds the text output from the SerialPort public string spReadMsg { get; set; } // Used as a fail-safe terminator of the infinite loop used in the Web Method. public DateTime executionTime { get; set; } The following is the DataRecievedHandler event. Basically, all this event does is store the text response from the SerialPort in the spReadMsg property for the SendSMS method private void DataRecievedHandler(object sender, SerialDataReceivedEventArgs e) { try { SerialPort sp = (SerialPort)sender; string indata = sp.ReadExisting(); Debug.Print("Data Received:"); Debug.Print(indata); // Store to class scope property spReadMsg for the sendSMS method to read. spReadMsg = indata; } catch (Exception ex) { Debug.Print(ex.Message); } } Finally, I added a few lines on my Web Method to listen to the DataRecieved event whenever the desired response for SMS Message has been sent successfully. According to this article on using AT commands for Modem Devices: http://www.smssolutions.net/tutorials/gsm/sendsmsat. The SerialPort should return a +CMGS: # response to identify that message sending has been completed. So All I have to do is wait for a +CMGS: response which will let the program know that the message has been sent and is ready to send the next message to the next recipient. I made a makeshift listener for the web method using an infinite while loop that terminates once the response +CMGS: has been read from the serial port or when the it takes longer than 30 seconds to get the desired response. 
[WebMethod] public void sendSMSMessage_inHouse(String Recipients, String MessageBody) { String sanitizedRecipient = Recipients.Replace(" ", ""); var RecipientList = Recipients.Split(',').ToList(); String sanitizedMessage = MessageBody.Replace(@"\n", Environment.NewLine); SerialPort SP = new SerialPort(); SP.PortName = "COM3"; SP.DataReceived += new SerialDataReceivedEventHandler(DataRecievedHandler); SP.Open(); // Initally set property to the "condtion" value to allow first message to be // run without the datarecieved response from the serial port spReadMsg = "+CMGS:"; // Set executionTime inital value for comparison executionTime = DateTime.Now; foreach (String Recipient in RecipientList) { // Infinite Loop listens to the response from the serial port while (true) { // If the +CMGS: response was recieved continue with the next message // Use Contains comparison for substring check since some of the +CMGS: responses // contain carriage return texts along with the repsonse // Then empty the spReadMsg property to avoid the loop from prematurely //sending the next message when the latest serial port response has not yet been //updated from the '+CMGS:' response if (!string.IsNullOrEmpty(spReadMsg) && spReadMsg.Contains("+CMGS:")) { spReadMsg = string.Empty; break; } // If takes longer than 30 seconds to get the response since the last sent //message - break. if (DateTime.Now > executionTime.AddSeconds(30)) { break; } } // Once the while loop breaks proceed with sending the next message. String formattedRecipientNo = Char.ConvertFromUtf32(34) + Recipient + Char.ConvertFromUtf32(34); SP.Write("AT+CMGF=1" + Char.ConvertFromUtf32(13)); //Thread.Sleep(800); SP.Write("AT+CMGS=" + formattedRecipientNo + Char.ConvertFromUtf32(13)); //Thread.Sleep(800); SP.Write(sanitizedMessage + Char.ConvertFromUtf32(26) + Char.ConvertFromUtf32(13)); //Thread.Sleep(1000); //Thread.Sleep(2000); // Get the Datetime when the current message was sent for comparison in // the next while loop executionTime = DateTime.Now; } SP.Close(); }
{ "pile_set_name": "StackExchange" }
Q: Wordpress, Dropdown Selection Menu for Author Profile field Wordpress noob here, After searching for two days without any success!, i'm trying to display a difference color in the front end for each selected option from menu. I used this code for creating menu inside users profile <table class="form-table"> <tr> <th><label for="dropdown">Job Stats</label></th> <td> <?php //get dropdown saved value $selected = get_the_author_meta( 'user_job_stats', $user->ID ); //there was an extra ) here that was not needed ?> <select name="user_job_stats" id="user_job_stats"> <option class="available" value="available" <?php echo ($selected == "available")? 'selected="selected"' : '' ?>>Available</option> <option class="busy" value="busy" <?php echo ($selected == "busy")? 'selected="selected"' : '' ?>>Busy</option> </select> <span class="description">Select Stats.</span> </td> </tr> </table> And i used this code for displaying them in the front end <div class="job-stats"> <?php if (!empty(get_the_author_meta('user_job_stats', $curauth->ID))) { ?> <dt><?php echo $curauth->user_job_stats; ?></dt> <?php } ?> </div> What i'm trying to do is when users selected an option ex:Busy I want to make the background color of the option " Busy" to be red IN FRONT END And with option "Available" background color to be green IN FRONT END. Any help please? A: use CSS's pseudo class to style the relevant tags. <style> .available:checked { background: green; } .busy:checked { background: red; } </style> <table class="form-table"> <tr> <th><label for="dropdown">Job Stats</label></th> <td> <?php //get dropdown saved value $selected = get_the_author_meta( 'user_job_stats', $user->ID ); //there was an extra ) here that was not needed ?> <select name="user_job_stats" id="user_job_stats"> <option class="available" value="available" <?php echo ($selected == "available")? 'selected="selected"' : '' ?>>Available</option> <option class="busy" value="busy" <?php echo ($selected == "busy")? 'selected="selected"' : '' ?>>Busy</option> </select> <span class="description">Select Stats.</span> </td> </tr> </table> and for the front-end: <style> dt.available { background: green; } dt.busy { background: red; } </style> <div class="job-stats"> <?php if (!empty(get_the_author_meta('user_job_stats', $curauth->ID))) { ?> <dt class="<?php echo $curauth->user_job_stats ?>"><?php echo $curauth->user_job_stats; ?></dt> <?php } ?> </div>
{ "pile_set_name": "StackExchange" }
Q: Linear Transformation Proof (Show $L_1(v_i)=L_2(v_i)$) I understand that the properties of scalar multiplication and addition allow for the expansion of $L_1(v)$ and $L_2(v)$ but I dont see how they are equal. They would only be equal if $L_1 = L_2$, but I don't see how. A: When you expand $L_1(v)$ and $L_2(v)$ you still do not know if they are equal. But $$L_1(v) = \alpha_1L_1(v_1)+\dots+\alpha_nL_1(v_n) = \alpha_1L_2(v_1)+\dots+ \alpha_nL_2(v_n) = L_2(v)$$ for any $v$. Then $L_1 = L_2$.
{ "pile_set_name": "StackExchange" }
Q: C# Equivalent for stringByAddingPercentEscapesUsingEncoding? I'm trying to find a way to mimic the behavior of stringByAddingPercentEscapesUsingEncoding:NSUTF8StringEncoding from the iPhone, using C# (for Windows Phone). I don't know anything about iPhone programming, and a web search for that doesn't seem to provide any document pages that might explain what it actually does to help me figure out how to mimic it. I mean, I can see that it "percent escapes" a string using an encoding, but I can't find any examples of what it does to confirm that the output I would be getting is correct. Is it just a simple URL Encode?

A: I'm not quite sure I understood your question, but HttpUtility.UrlEncode might be what you're looking for. At MSDN you'll find its definition and examples.

Update: this is the official doc from Apple, regarding the iOS method you mentioned.
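As a hedged sketch: another option that should be available on Windows Phone is Uri.EscapeDataString, which percent-escapes using UTF-8. Unlike the iOS method (which only escapes characters that are not legal in a URL), it also escapes reserved characters such as = and &, so check which behaviour you actually need:

using System;

string raw = "name=Jürgen & sons";

// Percent-escapes the string using UTF-8.
string escaped = Uri.EscapeDataString(raw);
// escaped is expected to be "name%3DJ%C3%BCrgen%20%26%20sons"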
{ "pile_set_name": "StackExchange" }
Q: Calling a C# web service from with PHP with a long parameter We have a customer that is trying to call our web service written in C# from PHP code. The web service call takes a long as parameter. This call works fine for other customers calling from C# or Java but this customer is getting an error back from the call. I haven't debugged their specific call but I am guessing that the 64bit integer is getting truncated somehow from PHP. The customer says they are just making the web service call with a string but is there a wrapper in PHP that does type conversion. Could this be losing the number information? Thanks for any info. A: Most PHP installations won't support 64 bit integers - 32 is the max. You can check this by reading the PHP_INT_SIZE constant (4 = 32bit, 8 = 64bit) or read the PHP_INT_MAX value. <?php echo PHP_INT_SIZE, "\n", PHP_INT_MAX; ?> If the web service class he is using is trying to type-convert a string representation of a 64 bit integer, then yes, it's mostly likely being truncated or converted into a float. You can sort of see this behavior with this simple test <?php echo intval( "12345678901234567890" ); // prints 2147483647, the max value for a 32 bit signed int. Without knowing the details of his implementation, it's difficult to postulate on what a good solution/workaround might be.
{ "pile_set_name": "StackExchange" }
Q: How do I set a global configuration variable everywhere in rails I have an application that will eventually be open-sourced. Currently it runs on my own domain, but I want to set the domain to be a variable that can be changed. Basically after declaring: foo = ENV['DOMAIN_NAME'] || 'example.com' I want to be able to do reference it in: Views Configuration Controllers Helpers Optionally: Can i set up a configuration file somewhere that holds all my globally declared variables, and then reference them everywhere? This would allow me to make a config.rb.sample file that I can ask users to fill their settings in. A: You can use Figaro. It's really really good and easy to use. You don't need any configuration for it whatsoever, you add the gem in your Gemfile, bundle and that's it! To store your variables you need to create a new file in config/application.yml and then store them like: # Global # aws_access_key: ~ aws_secret_key: ~ aws_s3_host: 's3-eu-west-1.amazonaws.com' rails_secret: 'ce223735d819fb993466ac5e615fff07cc71c19db40e211b83a3ac579203fcf4db78251f4143025e99aabffb1ea46bd252b7b16e50c4c88e5407b42fe5d4e6c4' devise_secret: '9c1fdc65b9f385c54c99e1a81ea398269749f12eee6790c12921dcf1ba7579864ef0fe40f8bcf33d2d78fcbbb506573f5a0c864090de9f3fd991f8367c2aee7c' # Per Environment # development: domain: 'lvh.me:3000' production: domain: ~ # Puma # max_threads: 5 web_concurency: 2 And to access them you simply call Figaro.env.rails_secret or ENV['rails_secret'] :) More info: https://github.com/laserlemon/figaro
{ "pile_set_name": "StackExchange" }
Q: How to make an element retrieved by an API selected by default Its a simple issue, i guess, but i couldn't find a suitable answer in my search, here. I have this list of names coming from an API and a feature for the selected ones: name turns red (CSS), an image comes up and details about the person get into these input fields (also coming from API). But when I load the page, none is selected (of course) and the page gets user "unfrieldly". How can I have the first one selected by default? Here's how it looks like when selected (and how i want it to be by default): And this is like I have it now: The HTML: <div class="container"> <ngx-loading [show]="loading" [config]="{ backdropBorderRadius: '14px' }"></ngx-loading> <div class="row"> <div class="col-md-3" *ngIf="people"> <ul *ngFor="let star of people.results"> <li (click)="onSelect(star)" [class.selected]="star === selectedPeople">{{star.name}}</li> </ul> </div> <hr> <div class="col-md-7" *ngIf="picture"> <img class="img-fluid" [src]="getImageUrl(selectedPeople)"> </div> <div class="col-md-2"> <div class="row"> <label>Height</label> </div> <div *ngIf="selectedPeople"> <div class="row"> <input class="form-control" [(ngModel)]="selectedPeople.height"> </div> </div> <div class="row"> <label>Hair</label> </div> <div *ngIf="selectedPeople"> <div class="row"> <input class="form-control" [(ngModel)]="selectedPeople.hair_color"> </div> </div> <div class="row"> <label>Mass</label> </div> <div *ngIf="selectedPeople"> <div class="row"> <input class="form-control" [(ngModel)]="selectedPeople.mass"> </div> </div> <div class="row"> <label>Eyes</label> </div> <div *ngIf="selectedPeople"> <div class="row"> <input class="form-control" [(ngModel)]="selectedPeople.eye_color"> </div> </div> </div> </div> </div> TS: people: People; selectedPeople: People; picture=false; loading = false; constructor(private starService: StarService) { this.getChars(); } ngOnInit() { } getImageUrl(person) { return "../../assets/" + person.name + ".jpg"; } onSelect(persona: People): void { this.selectedPeople = persona; this.picture=true; } getChars() { this.loading = true; this.starService.getChars().subscribe(data => { this.loading = false; this.people = data; console.log(this.people) }); } } This is the People Interface: export interface People { birth_year: string; films:Films created: string; edited: string; eye_color: string; gender: string; hair_color: string; height: string; mass: string; name: string; skin_color: string; starships: Starships; url: string; } A: It seems like there are some data typing issues here. How about something like this: people: People[]; // This should be an array of people selectedPeople: People; // This is just one person picture=false; loading = false; constructor(private starService: StarService) { // this.getChars(); // This should be in the onInit } ngOnInit() { this.getChars(); } getImageUrl(person) { return "../../assets/" + person.name + ".jpg"; } onSelect(persona: People): void { this.selectedPeople = persona; this.picture=true; } getChars() { this.loading = true; this.starService.getChars().subscribe(data => { this.loading = false; this.people = data.results; // Seems that this should be the results this.onSelect(this.people[0]); // Then you can treat people as a simple array. 
console.log(this.people) }); } } The HTML also needs to change: <ul *ngFor="let star of people"> This is now just people and not people.results because we already pulled the results off in the getChars method and set the people property to the actual list of people.
{ "pile_set_name": "StackExchange" }
Q: How to make an assert for required field messages? How do I make an assert for this required field message? Required field message I don't think css selectors will work on this, because this message comes from the browser itself, isn't it? Is there a way to do it? A: It does come from the browser, so you probably don't want to test it that way. If you want to be able to run your automation on a variety of platforms you might instead assert the presence of the attribute 'required' on that field. .useCss().assert.elementPresent("input[required]")
{ "pile_set_name": "StackExchange" }
Q: c# application updater run from memory? I know that I can have Updater.exe & MainProgram.exe but I want to keep my release in one EXE file. Can I run updater.exe from memory (updater.exe will be included in mainprogram.exe) and then shut down mainprogram.exe and keep the updater working (it will update mainprogram.exe) in new thread? Is it possible? Or is there any other solution where I can just keep one single released EXE file? A: Include the updater.exe as a resource in your app and then save it to a temporary file and run that.
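A minimal sketch of that approach. The manifest resource name "MainProgram.Updater.exe" is an assumption (it depends on your default namespace and the file's folder, so check Assembly.GetManifestResourceNames() for the real one), and Stream.CopyTo requires .NET 4.0 or later:

using System;
using System.Diagnostics;
using System.IO;
using System.Reflection;

static class UpdaterLauncher
{
    public static void LaunchUpdaterAndExit()
    {
        string tempPath = Path.Combine(Path.GetTempPath(), "Updater.exe");

        // Extract the embedded updater to a temporary file.
        using (Stream resource = Assembly.GetExecutingAssembly()
                   .GetManifestResourceStream("MainProgram.Updater.exe"))
        using (FileStream file = File.Create(tempPath))
        {
            resource.CopyTo(file);
        }

        // Start the extracted updater, then let the main program shut down
        // so its own files can be overwritten.
        Process.Start(tempPath);
        Environment.Exit(0);
    }
}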
{ "pile_set_name": "StackExchange" }
Q: How to publish binaries TFS 2017 Whenever I publish, the artifact contains the .cs and solution files as well. I have tried many things, but all in vain.

These are my MSBuild arguments:

/p:DeployOnBuild=true /p:WebPublishMethod=Package /p:PackageAsSingleFile=true /p:SkipInvalidConfigurations=true /p:PackageLocation="$(build.artifactstagingdirectory)\CC

A: You're specifying a package location in your MSBuild arguments: /p:PackageLocation="$(build.artifactstagingdirectory)\CC. That's the path you want to publish as an artifact if all you want is the packaged binaries. The path you're currently specifying, $(agent.builddirectory)\s\Main\State, is the location used by the build agent to synchronize source code, so of course you're capturing source code when you publish that as an artifact.
{ "pile_set_name": "StackExchange" }
Q: TCP traffic on port 1433 blocked by NAT rules We have a SQL server that is hosted on AWS, the SQL server it not directly accessible on the internet, it relies on a NAT box to route traffic to it. We are trying to set up a Linked SQL server from this server to another one outside of AWS, this requires the two SQL servers to talk to each other on port 1433 TCP. The relevant sections from the iptable look like this: target prot source destination DNAT udp anywhere anywhere udp dpt:ms-sql-m to:172.10.10.10:1434 DNAT tcp anywhere anywhere tcp dpt:ms-sql-s to:172.10.10.10:1433 From our own testing we know that we can link any server to the one on AWS but not the other way around. Does anything look wrong? The problem started occurring when our intfra engineer 'removed and added them same rules' Are there any clues in that? Is order relavent? Using tracetcp we found the following: Doing this command on the aws sql server 'tracetcp.exe 183.23.53.22 1433' where the ip is that of the other externally hosted server, it would get to the destination in 1 hop, but it would also do the same reguardless of any random ip address we tried. Where as if we did the same command but on another other port other than 1433, it would hit the NAT box first and then do many hops A: Check your iptables rules with iptables-save and re-post them. Verify that your DNAT rules have some method of excluding traffic originating from inside the network, for example -i <extif>, ! -i <intif>, or ! -s 172.10.10.10. I strongly suspect it is resending your packets back to the internal origin server.
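One way to act on that suggestion, sketched for the TCP rule only. Here "eth0" is a guess at the external interface name, and you would remove the old unrestricted rule first (list it with iptables -t nat -L PREROUTING --line-numbers, then iptables -t nat -D PREROUTING <num>):

# Rewrite inbound 1433/tcp only when it arrives on the external interface,
# so packets originating inside the network are left alone.
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 1433 -j DNAT --to-destination 172.10.10.10:1433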
{ "pile_set_name": "StackExchange" }
Q: How to let cron job pull the docker image once after deploying? I have a dozen cron jobs on GKE. My docker registry is down, and the status of these cron jobs becomes: ImagePullBackOff

My thinking is that the cron jobs should pull the docker image once after deploying and then use the cached/local image; they shouldn't pull from the remote registry every time a new pod is created. It's a waste, because the docker image doesn't change (I mean the application code of the cron job). So, is there a way to do this?

Purpose: if this is possible, my cron jobs will keep running from the local docker image until the next deploy, even if the docker registry is down.

A: You can use one of the "Container Images" properties mentioned here. Set imagePullPolicy: IfNotPresent in your spec. Note: if imagePullPolicy is omitted and either the image tag is :latest or it is omitted, Always is applied. Please verify your deployment settings and also verify that the docker images are present on the machine.
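A sketch of where that property sits in a CronJob manifest. The image name, schedule and apiVersion (batch/v1beta1 on older clusters, batch/v1 on newer ones) are placeholders/assumptions; the important part is that the tag is pinned (not :latest) and imagePullPolicy is IfNotPresent:

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: my-cronjob
spec:
  schedule: "*/5 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: my-job
              image: my-registry.example.com/my-image:1.0.0   # pinned tag, not :latest
              imagePullPolicy: IfNotPresent

Bear in mind that IfNotPresent only helps when the image is already cached on the node the pod lands on; a brand-new or autoscaled node still has to pull it once.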
{ "pile_set_name": "StackExchange" }
Q: Understanding a proof of Diaconescu's theorem I am trying to walk through the proof of Diaconescu's theorem that the axiom of choice implies the law of excluded middle at http://plato.stanford.edu/entries/intuitionism/#ChoAxi. To paraphrase: Let $A$ be a statement (which we will think of as one which might not have a constructive proof or disproof). Let $$X = \{x\in \{0, 1\}|x = 0 \vee (x = 1 \wedge A)\},$$ $$Y= \{x\in \{0, 1\}| x = 1 \vee (x = 0 \wedge A)\}.$$ Let $f:\{X, Y\}\to \{0, 1\}$ be a choice function. Then if $f(X) \neq f(Y)$, then $X \neq Y$, giving $\neg A$, whereas if $f(X) = f(Y)$, then $A$ holds. Thus $A\vee \neg A$. I also looked through the proof on Wikipedia. The things I don't get: Why can't we just let $f(X) = 0$ and $f(Y) = 1$. Can we not say $T \vee P = T$ in intuitionistic logic? The Wikipedia version doesn't have this problem because they just have the second part of the logical statement in the definition of the sets. More to the point, doesn't this proof implicitly use the law of excluded middle? If $P = (f(X) = f(Y))$, the proof is using $P \vee \neg P \iff T$, isn't it? I'm assuming that my logic is flawed in that last point somehow. Paradoxically, my intuition about intuitionistic logic is nonexistent, because I never know if I am secretly using the law of the excluded middle (especially since I am trained to use it automatically all the time). A: Why can't we just let $f(X)=0$ and $f(Y)=1$. Because the function has to be extensional. If $X = Y$ then the quoted definition would not give a function. The proof is using $P∨¬P⟺T$, isn't it? Yes, because equality for natural numbers is decidable: given two natural numbers $n,m$, the constructive systems to which Diaconescu's theorems applies will prove "$n = m \lor m \not = n$".
{ "pile_set_name": "StackExchange" }
Q: django admin site models not showing I have a Django site that is showing a basic view and the admin site, but when I log into the admin site I cannot see the models. This is my models.py:
from django.db import models

# Create your models here.
class Poll(models.Model):
    question = models.CharField(max_length=200)
    pub_date = models.DateTimeField('date published')

    class Admin:
        pass

class Choice(models.Model):
    poll = models.ForeignKey(Poll)
    choice = models.CharField(max_length=200)
    votes = models.IntegerField()

    class Admin:
        pass

So I'm referencing the admin from my db schema, but I cannot see these tables in my admin. What is missing? Thanks!
A: You need to create a file called admin.py and register any models you want to be accessible from the Django admin:
from django.contrib import admin
from myproject.myapp.models import MyModel

class MyModelAdmin(admin.ModelAdmin):
    pass

admin.site.register(MyModel, MyModelAdmin)
{ "pile_set_name": "StackExchange" }
Q: Calculate the number of tiling combinations Source - Zonal Informatics Olympiad 2006 Question Paper. Have tried deriving the answer through solutions to smaller problems, but to no avail.
A: Now incorporating EuYu's comment: If $f(n)$ is the number of ways of tiling a $2 \times n$ rectangle then $$f(n)=f(n-1)+f(n-2)+2\sum_{j\lt n-2}f(j)$$ starting at $f(0)=1$, since the right-hand end can be a vertical double tile, two horizontal double tiles, or in two orientations an L on the end with repeating horizontal doubles eventually capped by another L. You can now answer the particular questions by hand, or by noting this is also $$f(n)=2f(n-1)+f(n-3)$$ starting at $f(0)=f(1)=1$ (and $f(2)=2$ from the first recurrence), with the generating function $1/(1-2x-x^3)$, or just look up OEIS A052980.
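If you want to check a particular $n$ quickly, here is a small Python sketch of both recurrences, using the base values stated above; the printed sequence should match OEIS A052980.
def f_sum(n):
    # f(n) = f(n-1) + f(n-2) + 2 * sum_{j < n-2} f(j), with f(0) = 1
    vals = [1]
    for m in range(1, n + 1):
        total = vals[m - 1]
        if m >= 2:
            total += vals[m - 2] + 2 * sum(vals[:m - 2])
        vals.append(total)
    return vals[n]

def f_short(n):
    # f(n) = 2*f(n-1) + f(n-3), with f(0) = f(1) = 1 and f(2) = 2
    vals = [1, 1, 2]
    for m in range(3, n + 1):
        vals.append(2 * vals[m - 1] + vals[m - 3])
    return vals[n]

print([f_sum(n) for n in range(8)])    # [1, 1, 2, 5, 11, 24, 53, 117]
print([f_short(n) for n in range(8)])  # same values, matching OEIS A052980
Both functions agree, which confirms that the short recurrence is just a telescoped form of the first one.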
{ "pile_set_name": "StackExchange" }
Q: repeating the rows of a data frame I'm trying to repeat the rows of a dataframe. Here's my original data:
pd.DataFrame([
    {'col1': 1, 'col2': 11, 'col3': [1, 2]},
    {'col1': 2, 'col2': 22, 'col3': [1, 2, 3]},
    {'col1': 3, 'col2': 33, 'col3': [1]},
    {'col1': 4, 'col2': 44, 'col3': [1, 2, 3, 4]},
])
which gives me
   col1  col2          col3
0     1    11        [1, 2]
1     2    22     [1, 2, 3]
2     3    33           [1]
3     4    44  [1, 2, 3, 4]
I'd like to repeat the rows depending on the length of the array in col3, i.e. I'd like to get a dataframe like this one.
   col1  col2
0     1    11
1     1    11
2     2    22
3     2    22
4     2    22
5     3    33
6     4    44
7     4    44
8     4    44
9     4    44
What's a good way of accomplishing this?
A: You can also use reindex and index.repeat:
df = df.reindex(df.index.repeat(df.col3.apply(len)))
df = df.reset_index(drop=True).drop("col3", axis=1)  # To reset index and drop col3
# Output:
   col1  col2
0     1    11
1     1    11
2     2    22
3     2    22
4     2    22
5     3    33
6     4    44
7     4    44
8     4    44
9     4    44
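If you are on pandas 0.25 or newer, DataFrame.explode gives another way to get the same result; the sketch below assumes that version requirement (explode was only added in 0.25).
import pandas as pd

df = pd.DataFrame([
    {'col1': 1, 'col2': 11, 'col3': [1, 2]},
    {'col1': 2, 'col2': 22, 'col3': [1, 2, 3]},
    {'col1': 3, 'col2': 33, 'col3': [1]},
    {'col1': 4, 'col2': 44, 'col3': [1, 2, 3, 4]},
])

out = (df.explode('col3')        # one row per element of each col3 list
         .drop(columns='col3')   # keep only col1/col2, as in the question
         .reset_index(drop=True))
print(out)
The explode call handles the row repetition itself, so there is no need to compute the list lengths explicitly.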
{ "pile_set_name": "StackExchange" }
Q: c# linq update all records in database im trying to update all records in a sql table using the following code but no data is updated. does anyone know why? using (DataContext db = new DataContext()) { foreach (Item record in db.Items) { record.Description += "bla"; db.SubmitChanges(); } } code for setter: [Column(Storage="_Description", DbType="NVarChar(400) NOT NULL", CanBeNull=false)] public string Description { get { return this._Description; } set { if ((this._Description != value)) { this._Description = value; } } } A: Out of curiosity, see if moving the SubmitChanges() outside of the loop makes a difference: using (DataContext db = new DataContext()) { foreach (Item record in db.Items) { record.Description += "bla"; } db.SubmitChanges(); } A: From the details of the setter you have posted in the comments, your Description property has not been correctly created for notification of property changes. Did you write the property yourself or was it generated by the VS2008 tooling? Your Item class (all Linq to Sql entities for that matter should implement INotifyPropertyChanging and INotifyPropertyChanged) which will give you both PropertyChanging event and PropertyChanged events, if you used the VS2008 tools you should get a couple of methods like the following in your entity classes: protected virtual void SendPropertyChanging(string propertyName) { if (this.PropertyChanging != null) { this.PropertyChanging(this, new PropertyChangingEventArgs(propertyName)); } } protected virtual void SendPropertyChanged(string propertyName) { if (this.PropertyChanged != null) { this.PropertyChanged(this, new PropertyChangedEventArgs(propertyName)); } } Now within your setter of your property you should use these methods to raise the required events: [Column(Name="Description", Storage="_Description", DbType="NVarChar(400) NOT NULL", CanBeNull=false)] public string Description { get { return this._Description; } set { if ((this._Description != value)) { this.SendPropertyChanging("Description"); this._Description = value; this.SendPropertyChanged("Description"); } } } I also noticed you don't have the Name property set in your column attribute so add it just in case (I have it included in my example assuming your column name is "Description").
{ "pile_set_name": "StackExchange" }
Q: How do you wrap synchronous network I/O trivially with Tokio? There is an evident lapse in my understanding on concurrent development in Rust unfortunately. This question stems from weeks repeated struggles to solve a seemingly "trivial" problem. Problem Domain Developing a Rust library, named twistrs that is a domain name permutation and enumeration library. The aim and objective of the library, is to be provide a root domain (e.g. google.com) and generate permutations of that domain (e.g. guugle.com) and enrichment that permutation (e.g. it resolves to 123.123.123.123). One of its objectives, is to perform substantially faster than its Python counterpart. Most notably, network calls such as, but not limited to, DNS lookups. Currently Design Proposal The idea behind the library (apart from being a learning ground) is to develop a very trivial security library that can be implemented to meet various requirements. You (as a client) can choose to interact directly to the permutation or enrichment modules internally, or use the library provided async/concurrent implementation. Note that there is no shared state internally. This is probably very inefficient, but somewhat meaningless for the time being as it prevents a lot of issues. Current Problem Internally the DNS lookup is done synchronously and blocks by nature. I'm having trouble turning this into concurrent code. The closest I could get was to use tokio mpsc channels, and perform spawn a single tokio task: use twistrs::enrich::{Result, DomainMetadata}; use twistrs::permutate::Domain; use tokio::sync::mpsc; #[tokio::main] async fn main() { let domain = Domain::new("google.com").unwrap(); let _permutations = domain.all().unwrap().collect::<Vec<String>>(); let (mut tx, mut rx) = mpsc::channel(1000); tokio::spawn(async move { for (i, v) in _permutations.into_iter().enumerate() { let domain_metadata = DomainMetadata::new(v.clone()); let dns_resolution = domain_metadata.dns_resolvable(); if let Err(_) = tx.send((i, dns_resolution)).await { println!("receiver dropped"); return; } } }); while let Some(i) = rx.recv().await { println!("got: {:?}", i); } } That said, an astute reader will immediately notice that this blocks, and effectively runs the DNS lookups synchronously either way. Trying to spawn a Tokio task within the for-loop is not possible, due to move being done on the tx (and tx not impl Copy): for (i, v) in _permutations.into_iter().enumerate() { tokio::spawn(async move { let domain_metadata = DomainMetadata::new(v.clone()); let dns_resolution = domain_metadata.dns_resolvable(); if let Err(_) = tx.send((i, dns_resolution)).await { println!("receiver dropped"); return; } }); } Removing the await ofcourse will result in nothing happening, as the spawned task needs to be polled. How would I effectively wrap all those synchronous calls into async tasks, that can run independently and eventually converge into a collection? A similar Rust project I came across was batch_resolve, which does a tremendous job at this (!). However, I found the implementation to be exceptionally complicated for what I'm looking to achieve (maybe I'm wrong). Any help or insight to achieve this is greatly appreciated. If you want a quick way to reproduce this, you can simply clone the project and update the examples/twistrs-cli/main.rs using the first code snippet in this post. A: Edit: I misinterpreted your question and didn't realize that the DNS resolution itself wasn't asynchronous. 
The following approach won't actually work with synchronous code and will just result in the executor stalling because of the blocking code, but I'll leave it up in case you switch to an asynchronous resolution method. I'd recommend using tokio's asynchronous lookup_host() if that fits your needs. Async executors are designed to handle large numbers of parallel tasks, so you could try spawning a new task for every request, using a Semaphore to create an upper bound on the number of running tasks at once. The code for that might look like this: let (mut tx, mut rx) = mpsc::channel(1000); let semaphore = Arc::new(Semaphore::new(1000)); // allow up to 1000 tasks to run at once for (i, v) in _permutations.into_iter().enumerate() { let domain_metadata = DomainMetadata::new(v.clone()); let mut tx = tx.clone(); // every task will have its own copy of the sender let permit = semaphore.acquire_owned().await; // wait until we have a permit let dns_resolution = domain_metadata.dns_resolvable(); tokio::spawn(async move { if let Err(_) = tx.send((i, dns_resolution)).await { println!("receiver dropped"); return; } drop(permit); // explicitly release the permit, to make sure it was moved into this task }); // note: task spawn results and handle dropped here } while let Some(i) = rx.recv().await { println!("got: {:?}", i); } If the overhead of the extra tasks proves too significant, you can try instead combining the tasks into a single future, using facilities like FuturesUnordered from the futures crate. This allows you to take an arbitrarily large list of futures and poll them all repeatedly within a single task.
{ "pile_set_name": "StackExchange" }
Q: How do I compare two text files with RSpec? I have a method which compares if two text files have the same content. How do I compare if two text files have the same content using RSpec? A: On a trivial level: IO.read(file1).should == IO.read(file2) If you want to do something nicer, you're likely going to need to write a new matcher, something like have_same_content_as defined to check for the above condition. "Up and Running with Custom RSpec Matchers" is a nice tutorial on writing custom matchers. A: For others who stumble across this, check the FileUtils#cmp method: require 'fileutils' expect(FileUtils.compare_file(file1, file2)).to be_truthy
{ "pile_set_name": "StackExchange" }
Q: "Double Load" Issue with PHP isset() and $_POST data I know I am doing something wrong by using a combination of isset(), $_POST and $_GET but I am wondering what would be the easiest and painless way to tackle my issue. The issue arises when I submit a HTML form... It reloads the page with the post data. I capture the submit with a php isset() function, process the $_POST data and then run a window.location to refresh the page with some $_GET data. For example... //example.php ... <form action="" method="post"> <input name="stage1name" type="text" class="textfield" size="22" value="<?php echo $ClaimRow['ClaimantName']; ?>"> <input name="VehicleEngine" type="text" class="textfield" size="20" value="<?php echo $vehiclerow['VehicleEngine']; ?>"> <input name="VehicleFuel" type="text" class="textfield" size="20" value="<?php echo $vehiclerow['VehicleFuel']; ?>"> <input name="submitInfo" type="submit" value="<?php echo $LANG_Claims_Change_Info; ?>" /> </form> ... <?php if (isset($_POST['submitInfo'])) { $stage1name= mysqli_real_escape_string($db, $_POST['stage1name']); $VehicleEngine= mysqli_real_escape_string($db, $_POST['VehicleEngine']); $VehicleFuel= mysqli_real_escape_string($db, $_POST['VehicleFuel']); mysqli_query($db, "DO SOME COOL SQL HERE"); //I've done what I need to do so lets reload the page with the updated data echo "<script>window.location='example.php?vehicle=" . $vehicleID . "&claimTab=Personal'</script>"; } ?> Like I said before, this method works fine however the user gets the effect of a "double load" and is very epiletic fit inducing. Any ideas how best to combat this? Thanks EDIT - Additional Example I realised that this one example might not make complete sense so I put together another example which hopefully will. ... <form action="" method="post"> <select name="addresstype[]" id="multiselectfrom" onchange='this.form.submit();' size="9" style="width:110px; text-align:center;"> <option style="<?php if ($AddType == 'Claimant') { echo'background-color:#9AD3F1 !important;'; } ?>" value="Claimant"><?php echo $LANG_Claims_Claimant; ?></option> <option style="<?php if ($AddType == 'Vehicle') { echo'background-color:#9AD3F1 !important;'; } ?>" value="Vehicle"><?php echo $LANG_Claims_Vehicle; ?></option> <option style="<?php if ($AddType == 'Repairer') { echo'background-color:#9AD3F1 !important;'; } ?>" value="Repairer"><?php echo $LANG_Claims_Repairer; ?></option> <option style="<?php if ($AddType == 'Insurer') { echo'background-color:#9AD3F1 !important;'; } ?>" value="Insurer"><?php echo $LANG_Claims_Insurer; ?></option> <option style="<?php if ($AddType == 'Fleet') { echo'background-color:#9AD3F1 !important;'; } ?>" value="Fleet"><?php echo $LANG_Claims_Fleet; ?></option> </select> </form> ... <?php if (isset($_POST['addresstype'])) { foreach ($_POST['addresstype'] as $addresstype) { $addresstype2 = $addresstype; } echo "<script>window.location='claims.php?claimID=" . $claim . "&claimTab=Addresses&AddressType=" . $addresstype2 . "'</script>"; } ?> The above example is supposed to take the result of the form and change the window.location depending on the form result. It does work however it loads the page twice in doing so. A: After much testing. It turns out CMorriesy was correct. However for future reference you will need to turn output_buffering = on in php.ini and also add <?php ob_start(); ?> to the very first line of your code. header('Location: example.php?vehicle=' . $vehicleID . '&claimTab=Personal'); die();
{ "pile_set_name": "StackExchange" }
Q: Best practice on git push after rebasing locally As we know, git denies git push origin BRANCH after BRANCH is rebased with master. If I am the only one who is working on BRANCH, I can use --force while pushing, or simply delete the remote BRANCH and psuh BRANCH again. So, I know the solutions to the problem but what I don't know is if my solutions have downsides. What is the best practice used by developers? Also what is the safe/best way to handle the issue if more than one people are working on BRANCH? A: Few points I can call from my experience: You choose to do rebase only in feature branches and never in master/dev so less people have to deal with that. Perhaps, drop a message in HipChat/Slack that rebase has happened. Remind them git stash is available to protect their current work. Your team members should use git pull --rebase in case change is easy to be perceived. It is still a common practice to rebase for hotfixes on master/dev branches, that is also a good opportunity to let your team members get used to rebasing.
{ "pile_set_name": "StackExchange" }
Q: Checkbox keeps state across components in React JS I'm new in React JS, but I read about <input> that you have to save actual state in onChange like described here: React DOC - Forms I have a list with a checkbox and I applied same behavior here in CampaignsRow var campaignsData = [{Name: "First"}, {Name: "Second"}, {Name: "Third"}]; var CampaignsRow = React.createClass({ getInitialState: function() { return {checked: false}; }, checkedChange: function (e) { this.setState({checked: e.target.checked}); }, render: function() { console.log(this.props.data.Name, this.state.checked); return ( <div className="row"> <div className="cell checkbox"> <input type="checkbox" checked={this.state.checked} onChange={this.checkedChange}/> </div> <div className="cell campaignName">{this.props.data.Name}</div> </div> ); } }); var CampaignsTable = React.createClass({ render: function() { var rows = this.props.campaigns.map(function(campaign) { return ( <CampaignsRow data={campaign}/> ); }); return <div className="table"> <div className="row header"> <div className="cell checkbox"><input type="checkbox"/></div> <div className="cell campaignName">Name</div> </div> {rows} </div> } }); ReactDOM.render(<CampaignsTable campaigns={campaignsData} /> ,document.getElementById('reactContainer')); My problem is, if I check the checkbox at the campaign with name First and then I remove first item by campaignsData.shift() (to simulate downloading new data from Server) and then render again, checkbox at Second campaign is checked. What is the purpose of this.state when it is not attached to the instance. Render works fine, because in the console is printed Second true, so this.state.checked was moved from First to Second campaign. A: You should add unique key property to multiple components, so that React can keep track of identities: var rows = this.props.campaigns.map(function(campaign) { return ( <CampaignsRow key={campaign.name} data={campaign}/> ); });
{ "pile_set_name": "StackExchange" }
Q: Compile C in Visual Studio 2012 without MSVCRT runtime Visual Studio 2012 (and earlier versions) are capable of compiling C code. Plain C, not C++. It would be a good feature if you wanted to avoid the runtime hazzle. I thought of compiling plain C binaries and was hoping to do so without the MSVCRT runtime. After adding the /TC (compile as C) option I was hoping to get a binary with only basic dependencies such as kernel32 and ntdll. But instead, this was linked: We want to use VS 2012 and not the runtime. The GCC compiler doesn't need it, so there must be a way to compile a "simple" binary in VS, too. We don't necessarily need complex string functions or date/time libraries, just simple code. Question: Is it possible to compile C code in Visual Studio 2012 without the MSVCRT runtime (or even C++ code) ? Edit: without static linking (/MT) A: The correct answer to the question "Is it possible to compile C code in Visual Studio 20xx without the MSVCRT runtime (or even C++ code)?" is to use the /MT option (Configuration Properties > C/C++ > Code Generation > Runtime Library=Multi-threaded (/MT)). This creates an executable with no dependencies on any MSVCRTxx exactly as you wanted. As far as I know, that's all it does. It places no restrictions on anything you want to do - all the standard C library functions like memcpy still work. The only other difference is that the .EXE file is slightly larger. I've been making and distributing EXE files created like this from pure ANSI C code for years without any problems whatsoever using MSVC6, MSVC2005, MSVC2008 and MSVC2013. As to the answer to the question with the qualifier "without static linking (/MT)", well, you can't.
{ "pile_set_name": "StackExchange" }
Q: Comprehensive List of Essential Software for General Developers on Mac and PC This may seem like an odd request, but as a computer science student, I'm always running into apps that make doing a development task easier than the way I was doing it before. Unfortunately, I tend to discover these apps long after doing things the hard way for far too long. I'm only on mac, but I figured I'd include both Mac and PC for future reference (if I ever have both systems). For me, a student of C++ programming, I'm currently religiously using just a few pieces of software on Mac: XCODE - IDE Atom - Text Editing, HTML, and a few other things Cyberduck - SFTP into my school's Linux system. Terminal - (Haven't tried iTerm2 yet or any other Terminal alternative) Go2Shell - quick folder navigation for Terminal What other utilitarian apps do you guys find particularly helpful for you as developers? Feel free to mention any software you may use to help your workflow. I hope this question isn't too broad of a topic for S.O. If so, please feel free to remove it. Also I didn't know what tag to use for this topic, so if the mods need to move this thread to a more appropriate area, that would be great. A: Well, your list does not look bad at all ;) Most developers will have a basic set of tools such as: An IDE (Integrated development environment,e.g. phpStorm, Aptana,etc..) - where you write your code. Various Compilers (e.g. C\CPP compiler for a C\CPP developer, or a LESS compiler for a web developer, whatever you use in your daily work) - to compile your raw code\markup into an executable\usable format. A Debugger - to debug your code. A Local development stack (e.g. LAMP, used mainly by web developers) - to execute your code and see how it works, debug, etc.. A Dependency management tool - optional: if you have a big project with many dependencies. A Version control system (such as Git, SVN, etc..) - to maintain your project as a proper code repository. An FTP client (if you upload files to a server) That is generally what you need to write software\applications, anything in addition to that is considered helpful but you don't really need it. There are some fancy tools for lazy people, those tools can save you some time but the huge disadvantage is when you start to rely on those tools and then you stop understanding how things actually are constructed and work - which will make the maintaining of your software a nightmare. The best thing is to know when to use "helper" tools, but not many of them, use them only if you have to, and do not get to the situation where you rely on them - because then if they have a bug or a mysterious flaw, you will be dead in the water until the next hotfix or patch comes out. Good luck !
{ "pile_set_name": "StackExchange" }
Q: Library/API Runtime Between Versions I was having a conversation with a friend about the C# StringBuilder class, and what its behavior was. I'll paraphrase, but my side of the conversation was something like this (I oversimplified because exactly how StringBuilder works isn't important for my question):
StringBuilder is more efficient to use for extensive string concatenation than to simply use +. The reason is that StringBuilder doesn't dynamically create a new string for each concatenation operation. It waits until all of your desired concatenations are "built" and only then does it dynamically allocate the space and give you back your string.
My friend said something along the lines of:
That may be true today, but if you are reliant on such optimizations, then you should make your own StringBuilder. In future versions, there's no reason that StringBuilder couldn't use simple string concatenation (str1 = str2 + str3). Libraries only guarantee functional equivalence, not runtime equivalence.
Is this true for libraries? If I use a library's Sort() function that has runtime O(n*log(n)), is it possible that a future version would change the runtime to O(n^2)? Is the same true for executable tools? Could (for example) grep's runtime fundamentally change in the future? That aside, wouldn't it be good practice for a library/API/tool developer to keep runtimes for the same calls similar over time?
A: Short answer: it is not true for libraries or tools "in general". Each vendor can guarantee for his library or tool whatever he wants. There are libraries and tools where the vendor guarantees:
functional equivalence and the equivalence of certain aspects of non-functional requirements
just functional equivalence, not more
only syntactical equivalence
none of the above
And even if a vendor guarantees you something for a specific version or version line of a product, no one can legally hinder him from creating a new version or product line with a different API, or different non-functional behaviour. So the question "can it be changed" is not really the correct one; instead you should ask "how likely is that for a specific tool or library?"
For your example of the StringBuilder class, IMHO it is absurdly unlikely that MS will change its run time behaviour within the .NET framework in a manner that would ever make the effort of writing such a class yourself worth it. Your friend's remark sounds more like superstitious nonsense, or at best an overcautious misconception, to me, at least for this case. Microsoft added such a class explicitly to the .NET framework to provide a mutable alternative to the immutable String class with certain performance aspects in mind, and the documentation of that class is very detailed about that. MS in the past tried to keep newer versions of the .NET framework mostly backwards compatible with older versions, even if that means not fixing certain bugs or living with some imperfectness. And changing the run time behaviour of StringBuilder in a significant manner would not break only your program, but most probably tens of thousands of other programs - StringBuilder is one of the central core classes of the framework, and widely used across the .NET ecosystem. That is nothing any sensible library vendor would change lightheartedly. When they annoy their customers too much, customers start looking for a different vendor, and that will cost them money.
The same is true for lots of other tool or library vendors, and those which do not care for this risk to annoy their customers until they look for a different vendor. To give you another example: the C++ standard library gives explicit specifications for the run time behaviour of std::sort, it guaranteed to be O(n * log(n)) for the average case, see Wikipedia. And for question of "grep": I am pretty sure there already exist different implementations of grep from different older unix or unix-like systems, and I would be astonished if they all have the same run time behaviour. The Posix standard makes them have same command line switches, at least for any Posix-conform OS. However, today the fact Linux including GNU grep is so popular, you can probably rely even on its non-functional behaviour at least on any decent Linux system. A: ...if you are reliant on such optimizations, then you should make your own ... It's a bit of a tautology to say so, but code guarantees only what it guarantees. One of the purposes of having libraries is to form a boundary that's opaque to callers, and the only time there should be a complaint is when one of the guarantees isn't being met. (This is concept is at the core of design by contract.) Your colleague is saying that if you become reliant on a particular implementation, you've established requirements that go beyond the original guarantees. That doesn't mean you have to write your own, but it does you can't switch to another implementation until you've verified that it meets all of the requirements. If I use a library's Sort() function that has runtime O(n*log(n)), is it possible that a future version would change the runtime to O(n^2)? Sure. I doubt anyone would do that intentionally unless someone said "I need you to make this code run slower." A future version might perform better, which is something most folks like. But, again, if you've dependent on the library to operate in a certain amount of time, a faster implementation won't meet your requirements and will break your software. ... wouldn't it be good practice for a library/API/tool developer to keep runtimes for the same calls similar over time? Not really. The reductio ad absurdum implication would be that implementations should be immutable because someone may have become dependent on the incorrect behavior brought about by a bug. The only changes you could make would be the additions of functions that do completely new things or are modified versions of those that already exist (e.g. foo(), foo_version_2(), etc.). There are a few very rare cases where that's desirable, but most times if you think you need this, you should probably think again. Good practice for library developers is to document changes in a way that library users can understand what's changed. Good practice for users of that library is to understand the changes and the effects on their own code before deploying them. Bad practice is to drop in a new version of a library just because one has become available. All of this applies to whole programs the same way it applies to libraries. Most people expect that the -x switch in grep(1) will print only lines exactly matching the pattern. A change in that behavior between versions will break things that depend on it and require the dependencies to change or that the old version remain in place.
{ "pile_set_name": "StackExchange" }
Q: regex optional repetitive group Suppose the following string: some text here [baz|foo] and here [foo|bar|baz] and even here [option]. I've managed to get matched only by this ugly regex (Regex101.com demo): /(?: \[ (?: \|? ([^\|\[\]]+) )? (?: \|? ([^\|\[\]]+) )? (?: \|? ([^\|\[\]]+) )? \] )/ugx The point is that I need matches to be grouped by square brackets. So currently I do have result I need: [ { "match": 1, "children": [ { "group": 1, "start": 16, "end": 19, "value": "baz" }, { "group": 2, "start": 20, "end": 23, "value": "foo" } ] }, { "match": 2, "children": [ { "group": 1, "start": 35, "end": 38, "value": "foo" }, { "group": 2, "start": 39, "end": 42, "value": "bar" }, { "group": 3, "start": 43, "end": 46, "value": "baz" } ] }, { "match": 3, "children": [ { "group": 1, "start": 63, "end": 69, "value": "option" } ] } ] The result is correct but that regex is limited to the number of repeating blocks in the pattern. Is there some workaround to make it match all options inside sqare brackets? A: You won't be able to produce capturing groups recursively within a pattern since engine doesn't provide you with such an ability. Saying that, you have two options: Building a Regular Expression based on number of occurrences of pipe | in your input string. This way you can build a single regex with most possible repetitive patterns of ([^][|]+) that will do a group match as you desire: $pattern = (function () use ($string) { $array = []; for ($i = 0; $i <= substr_count($string, "|"); $i++) { $array[] = $i == 0 ? '([^][|]+)' : '([^][|]+)?'; } return implode("\|?", $array); })(); By giving an input string like: some text here [baz] and here [you|him|her|foo|bar|baz|foo|option|test] and even here [another]. Cooked regex would be: ~\[([^][|]+)\|?([^][|]+)?\|?([^][|]+)?\|?([^][|]+)?\|?([^][|]+)?\|?([^][|]+)?\|?([^][|]+)?\|?([^][|]+)?\|?([^][|]+)?]~ Live demo And then you can simply use it: preg_match_all("~\[$pattern]~", $string, $matches, PREG_SET_ORDER); Live demo That's a workaround to show that you can save time and avoid headache in building your Regular Expression only and Regular Expressions are not a simple - handy solution always. Benefit from other language functionalities. Above workaround doesn't bring a solid solution. It is doing much work that is not needed. Below code does fit the job: // Capture strings between brackets preg_match_all('~\[([^]]+)]~', $string, $matches); $groups = []; foreach ($matches[1] as $values) { // Explode them on pipe $groups[] = explode('|', $values); } Output would be: Array ( [0] => Array ( [0] => baz ) [1] => Array ( [0] => you [1] => him [2] => her [3] => foo [4] => bar [5] => baz [6] => foo [7] => option [8] => test ) [2] => Array ( [0] => another ) ) Live demo
{ "pile_set_name": "StackExchange" }
Q: How can I create a barcode without text on barcodesinc.com? I'm trying to create a barcode on barcodesinc.com's Free Online Barcode Generator. Under advanced options, I see that there is a checkbox for "Draw Value Text", when I uncheck it and regenerate the barcode, though, it is still checked. When you generate a barcode, e.g.: It uses a URL such as the following: http://www.barcodesinc.com/generator/image.php?code=123456&style=197&type=C39&width=200&height=50&xres=1&font=3 When I change the other comboboxes, I notice that the parameter that is changing is style. It stands to reason that I can send a style value which will not include the text underneath the barcode. The question is which style value do I need to use? style border text stretch negative ----- ------ ---- ------- -------- ??? no no no no 197 yes yes no no 709 yes yes no yes 453 yes yes yes no 965 yes yes yes yes A: The style with everything set to no is 68. I found this out by noticing that it says "Powered by Barcode", and then I tried the sample generator on that site, and examined the style parameter.
{ "pile_set_name": "StackExchange" }
Q: Retrieve specific chunk file from MongoDB gridfs I have stored 1.5GB video file to MongoDB GridFS using php script with Chunk Size 15MB. I can retrieve the full video file using filename. But I wanted to skip chunk files and retrieve only specific part of the video. Is it possible to skip and retrieve specific chunk files? Thanks, Hari. A: Although your requirement sounds simple at a glance, I believe its implementation is more involved than it looks, since if I understand correctly, what you want to build is essentially a video streaming system. GridFS does not have a knowledge of the content of what you are storing. It is mainly a convention to enable you to store documents larger than 16 MB in MongoDB (larger than the maximum BSON size). GridFS achieves this by splitting the input file into "chunks", but it is up to the application to make sense of the data in the chunks. If you require specific part of a video by retrieving a specific chunk range, you would need to consider some things: Videos are frequently compressed using variable rate encoding (VBR), which means that one 15 MB chunk could contain varying minutes of video (e.g. could be 15 minutes in one chunk, could be 12 minutes in another chunk, etc.). Video files frequently contain a metadata part (e.g. title, synopsis, etc.). You would need to recognize this metadata part and separate them from the actual video packets. Most of the time, this metadata size is different from the video packet size. How granular is the "seeking" you want. E.g., if you require a 1-second seek granularity, you would need to calculate how much data is contained in a 1-second video. This could be calculated somewhat easier if you are using Constant bitrate encoding (CBR). Hence, to achieve what you require, you would need to: Encode your video using constant bitrate (CBR). Be able to dissect the video format to be able to store one video packet in one chunk exactly. Store the video metadata somewhere else. Basically, this means that you would also need to construct the player that can understand this storage scheme to be able to "seek" into a certain point in the video reliably.
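If you do go the CBR route, the arithmetic for mapping a seek time to a chunk is simple. Here is a rough Python sketch: the bitrate and header size are made-up example numbers you would replace with your own encoder settings, and only CHUNK_SIZE matches the 15 MB chunks from the question.
# Map a playback time to a GridFS chunk index and an offset inside it,
# assuming constant-bitrate (CBR) video and a fixed-size metadata header.
CHUNK_SIZE = 15 * 1024 * 1024      # the 15 MB chunk size from the question
BITRATE_BPS = 4_000_000            # example value: 4 Mbit/s CBR (an assumption)
HEADER_BYTES = 2 * 1024 * 1024     # example value: metadata stored up front (an assumption)

def locate(seconds):
    byte_pos = HEADER_BYTES + int(seconds * BITRATE_BPS / 8)
    chunk_n = byte_pos // CHUNK_SIZE   # corresponds to the "n" field in fs.chunks
    offset = byte_pos % CHUNK_SIZE     # byte offset inside that chunk's data
    return chunk_n, offset

print(locate(10 * 60))   # roughly where the 10-minute mark would start
The chunk index corresponds to the n field of the fs.chunks documents, so you could fetch just that chunk (and its neighbours) instead of the whole file; keep in mind this glosses over packet boundaries, which is exactly the part the answer above warns about.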
{ "pile_set_name": "StackExchange" }
Q: PIVOT basics: Why do my aggregate return NULL? I've got a basic PIVOT-question here that probably won't cause you gurus any trouble: Ive got this SQL that is working fine: SELECT order_year, SUM(amount) AS Amount FROM dbo.mytable GROUP BY order_year; This returns sonething like: 2010 7000000 2007 8051222 2008 7099057 2009 13088790 Now I want to pivot the table using the same principles as described in this MSDN-article: http://msdn.microsoft.com/en-us/library/ms177410.aspx I tried this: SELECT 'Amount' AS Total_Amount_Sorted_By_Order_Year, [0], [1], [2], [3], [4] FROM (SELECT order_year, amount FROM dbo.mytable ) AS SourceTable PIVOT ( SUM(amount) FOR order_year IN ([0], [1], [2], [3], [4]) ) AS PivotTable; But this returns a bunch of NULLs! :( Amount NULL NULL NULL NULL NULL What am I doing wrong? Any help appreciated! Thanks! A: Thank you Barry. I misread the documentation on MSDN, and thought the [0], [1] etc labels where enumerations of the pivot-columns... (!) Replacing them with the actual Years made the aggregations work! SELECT 'Amount' AS Total_Amount_Sorted_By_Order_Year, [2007], [2008], [2009], [2010] FROM (SELECT order_year, amount FROM dbo.mytable ) AS SourceTable PIVOT ( SUM(amount) FOR order_year IN ([2007], [2008], [2009], [2010]) ) AS PivotTable;
{ "pile_set_name": "StackExchange" }
Q: Table columns in Android's TableLayout Does anyone know whether it is possible to add columns to a TableLayout in Android? I need to display a relatively huge amount of data within a table. Obviously, this shouldn't be displayed all in one column. You can add rows by creating a TableRow()-Object. Furthermore there's some kinda xml-parameter, but I don't know how to access it via the Java-Code. TextView val = new TextView(this); val.setText(args.get(i)); table.addView(val); table.addView(tr,new TableLayout.LayoutParams( LayoutParams.FILL_PARENT, LayoutParams.WRAP_CONTENT)); I was unable to find a matching value in the TableRow.LayoutParams that worked for me. But probably I tried to use it in the wrong way. table.addView(val,new TableRow.LayoutParams( LayoutParams. ?? )); A: As you add new views to the tablet row each view becomes a new "column". So, if you do something like this (assuming your row data is in List rowData and your column data is in a List in rowData. for (RowData rd : rowData) { TableRow row = new TableRow(this); for (DataPoint dp : rd.getDataPoints()) { TextView tv = new TextView(this); tv.setText(dp.getText()); /* any other styling stuff here */ row.addView(tv); } table.addView(row); } That will result in a table, with multiple columns, all within a single table. For layout params, you can use: new TableRow.LayoutParams(0, LayoutParams.WRAP_CONTENT, (float) 5); What that means is 0 width, wrap content on height, then a relative "weight" for the width. Android will layout your columns nicely with this.
{ "pile_set_name": "StackExchange" }
Q: How do I sync with DropBox? I want to sync DropBox with Android, so I can use the files on Android. So I installed DropBox on Android, but it won't download any files! (or at least I don't see them under sdcard/Android/com.dropbox.android. So I can't use the the files on Android. Did I get it wrong somehow? I tried Folder downloader, but I lose the sync feature of DropBox this way. The files modified on my Android won't be synced with my laptop. I was expecting to see Dropbox folder option on Android, same as on the laptop. But there's no such option. A: DropSpace has some limitations according to it's description on the Play store: *** current limitation: Only files existing on the sdcard are synced. That is, it's not a full 2-way sync. I've been using DropSync. It does sync in both directions and I've had no cause for complaint. A: Oh -- I found it DropSpace allows one to select a folder on Android -- and keep it fully synced with any (but not the root) Dropbox folder. A: To download, you need to open the DropBox app, and select the file you want to download. To upload, you need to open the DropBox app, and select Upload here from the DropBox menu. The files you see on the DropBox app are not the actual files. They are only links that are in the DropBox server, which you must download. The DropBox app for Android does not work the same way as the DropBox app for the PC. The PC version automatically downloads / syncs the file on your PC.
{ "pile_set_name": "StackExchange" }
Q: Tkinter.pack() method doesn't show GUI I try to get familiar with python GUI, so in order to do so I'm building a simple calculator. Most of it works, but I'm trying to add a scrollbar to my text display widget, when I use the following piece of code: self.scroll_bar = ttk.Scrollbar(self.master) self.scroll_bar.pack(side=tk.RIGHT, fill=tk.Y) # self.scroll_bar.grid(row=0, column=4) self.screen = tk.Text(self.master, height=5, width=30) # self.screen.grid(row=0, column=0, columnspan=4) self.screen.pack(side=tk.LEFT, fill=tk.BOTH) self.screen.configure(font=("Calibri", 20, "bold", "italic"), foreground="black", background="whitesmoke") self.scroll_bar.config(command=self.screen.yview, orient=tk.VERTICAL) self.screen.config(yscrollcommand=self.scroll_bar.set) The GUI doesn't even open up, while when I remove the pack() lines and uses grid it works but looks extremely unbalanced. My full code for reproducibility is: import tkinter as tk from tkinter import ttk import re # the calculator gui text for the buttons calculator_button_text = { 'clear': 'C', 'zero': '0', 'one': '1', 'two': '2', 'three': '3', 'four': '4', 'five': '5', 'six': '6', 'seven': '7', 'eight': '8', 'nine': '9', 'decimal': '.', 'plus': '+', 'minus': '-', 'multiply': 'x', 'divide': u"\u00F7", 'equal': '=', 'delete': u"\u232B" } class CalculatorApp(tk.Frame): ''' Main Calculator Class ''' def __init__(self, master): tk.Frame.__init__(self, master) self.master = master self.user_in_list = [] self.buttons_text = calculator_button_text self.equation = "" self.configure_gui() self.create_calc_widgets() # in order to change the button colors (mac) style = ttk.Style(self.master) style.theme_use('clam') style.configure('style.TButton', foreground="firebrick", background="darkgrey", font=("Arial", 20, "bold", "italic")) style.map('TButton', foreground=[('pressed', 'blue'), ('active', 'darkgreen')], background=[('pressed', 'dimgrey'), ('active', 'grey')]) def configure_gui(self): self.master.title("Calculator") self.master.configure(bg='black') def create_calc_widgets(self): self.create_input_field() self.create_buttons() def create_input_field(self): ## this work without scrollbar # self.screen = tk.Text(self.master, height=5, width=30) # self.screen.grid(row=0, column=0, columnspan=4, sticky=tk.W + tk.E) # self.screen.configure(font=("Calibri", 20, "bold", "italic"), # foreground="black", background="whitesmoke") self.scroll_bar = ttk.Scrollbar(self.master) self.scroll_bar.pack(side=tk.RIGHT, fill=tk.Y) # self.scroll_bar.grid(row=0, column=4) self.screen = tk.Text(self.master, height=5, width=30) # self.screen.grid(row=0, column=0, columnspan=4) self.screen.pack(side=tk.LEFT, fill=tk.BOTH) self.screen.configure(font=("Calibri", 20, "bold", "italic"), foreground="black", background="whitesmoke") self.scroll_bar.config(command=self.screen.yview, orient=tk.VERTICAL) self.screen.config(yscrollcommand=self.scroll_bar.set) def create_buttons(self): button_col = 0 button_row = 1 text_in_row = ('C', '123d', '456-', '789x', '0.+/', '=') for row in text_in_row: for text in row: self.configure_button(text, button_row, button_col) button_col += 1 button_col = 0 button_row += 1 def configure_button(self, txt, r, c): to_write = (txt == '=') # if txt in ('0123456789'): # value = int(txt) if (txt == 'd'): txt = u"\u232B" elif (txt == '/'): txt = u"\u00F7" key = list(calculator_button_text.keys())[ list(calculator_button_text.values()).index(txt) ] calculator_button_text[key] = ttk.Button(self.master, text=txt, style='style.TButton', command=lambda: 
self.click(txt, to_write)) calculator_button_text[key].grid(row=r, column=c, ipady=10, ipadx=10) if (key=='clear'): self.configure_clear_button(key) elif (key=='equal'): self.configure_equal_button(key) def configure_clear_button(self, k): calculator_button_text[k].grid(columnspan=4, sticky=tk.W + tk.E) def configure_equal_button(self, k): calculator_button_text[k].grid(columnspan=4, sticky=tk.W + tk.E) def click(self, cmd, new_line): if (cmd != '='): self.display_on_screen(cmd, new_line) else: self.equation = re.sub(u"\u00F7", '/', self.equation) self.equation = re.sub("x", '*', self.equation) ans = str(eval(self.equation)) self.display_on_screen(ans, new_line) self.equation = "" def display_on_screen(self, val, new_line=False): if (not new_line): self.screen.insert(tk.END, val) self.equation += str(val) else: self.screen.insert(tk.END, '\n') self.screen.insert(tk.END, val) self.screen.insert(tk.END, '\n') self.screen.yview('end') def clear_screen(self): # set equation to empty before deleting screen self.equation = "" self.screen.delete('1.0', tk.END) if __name__ == '__main__': root = tk.Tk() calc_gui = CalculatorApp(root) # Make window fixed (cannot be resized) root.resizable(width=False, height=False) root.mainloop() I will mention that I'm using python 3.X on MacOS Mojave, and I've had a problems with the Button color so eventually I moved from tk.Button() to ttk.Button() Would appreciate some help A: Indeed, the geometry manager cannot mix pack and grid in the same section. So you have to choose one mode to organize the layout of the window, or split the window into 2 frames to be able to pack widgets in one and grid in the other. You can try with the following : self.screen = tk.Text(self.master, height=5, width=30) self.screen.grid(row=0, column=0, columnspan=4, sticky='nswe') self.scroll_bar = ttk.Scrollbar(self.master, command=self.screen.yview) self.scroll_bar.grid(row=0, column=3, sticky='nse')
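For completeness, the other option mentioned above, splitting the window into two containers, can look like the minimal standalone sketch below; the widget names are illustrative, and in the calculator you would create the frame inside create_input_field and grid it where the Text used to go.
import tkinter as tk
from tkinter import ttk

root = tk.Tk()

# A frame that is grid-ded into the window like any other widget...
screen_frame = tk.Frame(root)
screen_frame.grid(row=0, column=0, columnspan=4, sticky='nsew')

# ...while pack is used only for the frame's own children, so the two
# geometry managers never share the same container.
screen = tk.Text(screen_frame, height=5, width=30)
scroll_bar = ttk.Scrollbar(screen_frame, orient=tk.VERTICAL, command=screen.yview)
scroll_bar.pack(side=tk.RIGHT, fill=tk.Y)
screen.pack(side=tk.LEFT, fill=tk.BOTH, expand=True)
screen.configure(yscrollcommand=scroll_bar.set)

root.mainloop()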
{ "pile_set_name": "StackExchange" }
Q: add custom properties into inherited system.windows.forms.button How to add custom properties into visual studio properties windows for inherited class from controls of system.windows.forms ? example custombutton: public class CustomBtn:Button { public int MaxImgNumber{get;set;} public int MinImgNumber{get;set;} } I would like to show "MaxImgNumber" into the properties windows when i click on the custom buttons, like picture below thx in advance A: You are looking for BrowsableAttribute Specifies whether a property or event should be displayed in a Properties window.
{ "pile_set_name": "StackExchange" }
Q: What's the difference between "家" (ya), "屋" (ya), and "や" (ya) as used in the names of shops/stores/restaurants? As a gyudon addict I have noticed that the names of the three major national restaurant chains all end in "ya" but they used two different characters: "吉野家" (Yoshinoya) "松屋" (Matsuya) "すき家" (Sukiya) Other shops and restaurants I've noticed just use the hiragana instead: "や" (ya) So is there a subtle difference where one is more like restaurant and the other is more like shop/store? And is the hiragana a handy way to be ambiguous or would people reading such a sign immediately know whether "や" stood for "屋" or "家" based on their language intuition? While I'm at it, is this yet another character for "ya" used in the same contexts? "店" A: 屋 and 家 both roughly mean "house", with 屋 tending more towards the meaning of building and 家 more towards home. The choice of which to use is entirely the owner's. や is the ambiguous way to write either and is pretty much a stylistic choice. Do keep in mind that in the olden days Japanese stores tended to be part home, part store, with the owners living in the back while serving guests out front. You can still find such stores today, but they're disappearing in favor of purely business stores. The naming stuck though, possibly due to it's "homeliness".* 店 ten has the pure meaning of "store". * Note that I'm pretty sure that even in in the olden days there were purely business stores called -ya. I can't say whether 屋 was used for such whereas 家 was used for "home stores" or whether the choice was always arbitrary. A: deceze's answer may be correct (I do not know), but in present Japanese, 屋 means that it is a store whereas 家 puts more emphasis on the fact that it has been inherited for generations. For 屋, besides your example, it is often combined with the merchandise: 靴屋, 自転車屋, 魚屋, etc. 家 usually combines with the family name that is inherited. A: -屋{や} is also used in some words describing character traits, e.g. 恥{は}ずかしがり屋{や} (bashful person) 寂{さび}しがり屋{や} (lonely person) 寒{さむ}がり屋{や} (someone who gets cold easily, cold-blooded) くすぐったがり屋{や} (ticklish person) 目{め}立{だ}ちたがり屋{や} (attention seeker) のんびり屋{や} (lazy, laid-back person) they usually end in -(が)り屋 but also professions (often used to refer to the the shop's owner): 八{や}百{お}屋{や} (greengrocer) 床{とこ}屋{や} (barber) 大{おお}屋{や} (landlord/landlady) 酒屋{さかや} (sake dealer/brewer) 質屋{しちや} (pawnbroker) 殺{ころ}し屋{や} (professional killer/hitman) While -家{か} (NB: -ka, not -ya) is also used for some professions, usually(but not always) related to creativity: 漫画家【まんがか】 (mangaka, manga/comic writer) 画家 【がか】 (painter) 作家【さっか】 (writer/author) 所説家【しょせつか】 (novelist, fiction writer) 芸術家【げいじゅつか】 (artist (in entertainment industry)) 評論家【ひょうろんか】 (critic) 農家 【のうか】 (farmer/plant grower) 実業家【じつぎょうか】 (businessman)
{ "pile_set_name": "StackExchange" }
Q: How to minimize sum of matrix-convolutions? Given $A$, what should be B so that $\lVert I \circledast A - I \circledast B \rVert _2$ is minimal for any $I$? $I \in \mathbb{R}^{20x20}, A \in \mathbb{R}^{5x5}, B \in \mathbb{R}^{3x3}. $ Note that $B$ is smaller than $A$. I $\circledast$ K is a convolution on $I$ with kernel $K$. The result is padded with zeros as to match the shape of input $I$, this means that $(I \circledast K ) \in \mathbb{R}^{20x20}$. Does a closed form exist? Do I need to use a Fourier transform? As an extension, how does this work when $A,B$ are not square, and can be of arbitrary size (instead of the special case here where $A$ is larger than $B$? A: I will start with the case of one dimension first. Convolution is linear $$z=I*B-I*A=I*(B-A)$$ Expand to definition of convolution $$z_n = \sum_{n'}I_{n-n'}(B_{n'}-A_{n'})$$ Square of norm is $$\|I*B-I*A\|^2=\sum_nz_n^2$$ To find the minimum (proof that this yields minimum is shown later), do partial differentiation on each element of $B$ $${\partial\over\partial B_p}\sum_nz_n^2=2\sum_nz_n{\partial z_n\over\partial B_p} = 0 \tag{1}\label{eq1}$$ where $m$ is valid index of $B$. Partial differentiation of convolution is $${\partial z_n\over\partial B_p}=I_{n-p} \tag{2}\label{eq2}$$ therefore $\eqref{eq1}$ is $$\sum_nz_n{\partial z_n\over\partial B_p} = \sum_nI_{n-p}z_n = \sum_n\sum_{n'}I_{n-p}I_{n-n'}(B_{n'}-A_{n'})=0$$ giving us the following relation $$\sum_n\sum_{n'\in B}I_{n-p}I_{n-n'}B_{n'} = \sum_n\sum_{n'\in A}I_{n-p}I_{n-n'}A_{n'}\tag{3}\label{eq3}$$ This is a linear system of equations that can be solved using matrix methods. Note that the notation $n'\in A$ is just a notation to mean the indices where $A$ is defined. With change of variable $$n=k+p$$ we make things more readable, and find that the coefficient is auto-correlation of $I$. $$\begin{aligned}\sum_n\sum_{n'}I_{n-p}I_{n-n'}A_{n'} &=\sum_p\sum_{n'}I_kI_{k+p-n'}A_{n'} \\ &= \sum_{n'}(I\star I)_{p-n'}A_{n'}\end{aligned}$$ using this on both sides of $\eqref{eq1}$, we get $$\sum_{n'\in B}(I\star I)_{p-n'}B_{n'}=\sum_{n'\in A}(I\star I)_{p-n'}A_{n'}$$ RHS consists of only constants, and autocorrelation of real values are symmetric about index 0. Therefore $$\boxed{\sum_{n'\in B}(I\star I)_{p-n'}B_{n'}=[(I\star I)*A]_p}$$ There are as many $p$ as there are $n'$ on the LHS, and also due to the symmetry of $I\star I$, the coeffiecients form a symmetric circulant matrix. Solve this system to get the optimal $B$. Unfortunately, the above also says that optimal $B$ depends on $I$ as well. Also, as I've mentioned in the comment, if you want the centers of $A$ and $B$ to match, you have to pad $B$ accordingly on both sides. Proof that this minimizes $\|z\|$ From $\eqref{eq2}$ we see that second order derivative does not exist. Hessian of $\|z\|$ is then $$\begin{aligned}\mathrm{H}_{pq}&={\partial^2\over\partial B_p\partial B_q}\sum_nz_n^2=2{\partial\over\partial B_q}\sum_nz_n{\partial z_n\over\partial B_p}\\ &= 2{\partial\over\partial B_q}\sum_nz_nI_{n-p} \\ &= 2\sum_nI_{n-p}I_{n-q} \end{aligned}$$ $I_{n-p}$ is an element of a circulant matrix $C$. The Hessian can be rewritten as $$\mathrm{H} = 2C^TC$$ so the second derivative test is $$\det(\mathrm{H}) = 2\det(C^TC)=2\det(C^T)\det(C)=2[\det(C)]^2\geq 0$$ The test is inconclusive when $\det(C)=0$, which happens when you have a flat image. 2D version The line of reasoning is the same as the 1D version. 
For convolution $$z_{mn} = \sum_{m'n'}I_{(m-m')(n-n')}(B_{m'n'}-A_{m'n'})$$ minimizing the norm of the above is equivalent to solving the linear equation $$\boxed{\sum_{m'n'\in B}(I\star I)_{(p-m')(q-n')}B_{m'n'}=[(I\star I)*A]_{p'q'}}$$ The matrix on the LHS is no longer a symmetric circulant matrix. But if you index $B$ row by row (or maybe column by column, I've not tried that) you instead get a symmetric block Toeplitz matrix.
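To get a concrete feel for how the optimal $B$ depends on $I$, here is a short 1-D numpy sketch of my own (random $I$, arbitrary kernel sizes, "same" padding so the output has the length of $I$ and both kernels are treated as centered). It sets up the least-squares problem whose normal equations are the boxed system above.
import numpy as np

rng = np.random.default_rng(0)
I = rng.standard_normal(20)     # a 1-D "image"
A = rng.standard_normal(5)      # the larger kernel
nB = 3                          # size of the smaller kernel to fit

# Target: I convolved with A, zero-padded to the length of I.
target = np.convolve(I, A, mode='same')

# One column per basis kernel e_j, i.e. the linear map B -> I*B.
cols = [np.convolve(I, np.eye(nB)[:, j], mode='same') for j in range(nB)]
X = np.column_stack(cols)

# Least squares X @ B ~ target; its normal equations are the boxed system.
B, *_ = np.linalg.lstsq(X, target, rcond=None)
print(B)
print(np.linalg.norm(X @ B - target))   # the minimised ||I*B - I*A||
Re-running with a different random $I$ changes both the optimal $B$ and the attainable residual, which is exactly the dependence on $I$ noted in the derivation.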
{ "pile_set_name": "StackExchange" }
Q: How to fix PHP Warning: PHP Startup: Unable to load dynamic library 'ext\\php_curl.dll'? I currently have PHP 5.5.12 and Apache 2.4 installed on Windows Server 2008 R2. Everything is running perfectly with no issue/warnings. What I have done is copied the same Apache file/configuration to another server. I copied the C:\PHP directory and then the C:\Apache24 directory and pasted them into the new server. Then I installed the Apache with one change (ie httpd -k install.) I changed the port number from 80 to 8877. The Apache is working with no issue and it is running on the 8877 port. I can also open the default page by going to SERVER_IP_ADDRESS:8877 and it works. But, PHP is not working as it should. In the error.log file from the Apache server I get the warning listed below PHP Warning: PHP Startup: Unable to load dynamic library 'ext\\php_curl.dll' - The specified module could not be found.\r\n in Unknown on line 0 PHP Warning: PHP Startup: Unable to load dynamic library 'ext\\php_ldap.dll' - The specified module could not be found.\r\n in Unknown on line 0 PHP Warning: PHP Startup: Unable to load dynamic library 'ext\\php_mysql.dll' - The specified module could not be found.\r\n in Unknown on line 0 PHP Warning: PHP Startup: Unable to load dynamic library 'ext\\php_mysqli.dll' - The specified module could not be found.\r\n in Unknown on line 0 PHP Warning: PHP Startup: Unable to load dynamic library 'ext\\php_openssl.dll' - The specified module could not be found.\r\n in Unknown on line 0 PHP Warning: PHP Startup: Unable to load dynamic library 'ext\\php_pdo_mysql.dll' - The specified module could not be found.\r\n in Unknown on line 0 PHP Warning: PHP Startup: Unable to load dynamic library 'ext\\php_sqlsrv_55_ts.dll' - The specified module could not be found.\r\n in Unknown on line 0 PHP Warning: PHP Startup: Unable to load dynamic library 'ext\\php_pdo_sqlsrv_55_ts.dll' - The specified module could not be found.\r\n in Unknown on line 0 I can't seem to figure out why I get this error? All the .dll files that the warning is stating "The specified module could not be found" do exists in the C:\PHP\ext directoryphp The PHP extensions are located in C:\php\ext Inside the file php.ini I have this variable extension_dir = "ext" here is a directory listing of the ext folder Volume in drive C is OS Volume Serial Number is C63C-1D75 Directory of C:\php\ext 07/29/2014 06:42 PM <DIR> . 07/29/2014 06:42 PM <DIR> .. 
04/30/2014 02:46 PM 66,560 php_bz2.dll 04/30/2014 02:46 PM 72,704 php_com_dotnet.dll 04/30/2014 02:46 PM 507,392 php_curl.dll 04/30/2014 02:46 PM 18,944 php_enchant.dll 04/30/2014 02:46 PM 43,008 php_exif.dll 04/30/2014 02:46 PM 2,679,808 php_fileinfo.dll 04/30/2014 02:46 PM 1,358,848 php_gd2.dll 04/30/2014 02:46 PM 40,960 php_gettext.dll 04/30/2014 02:46 PM 240,128 php_gmp.dll 04/30/2014 02:46 PM 831,488 php_imap.dll 04/30/2014 02:46 PM 65,024 php_interbase.dll 04/30/2014 02:46 PM 261,632 php_intl.dll 04/30/2014 02:46 PM 179,200 php_ldap.dll 04/30/2014 02:46 PM 1,239,552 php_mbstring.dll 04/30/2014 02:46 PM 36,864 php_mysql.dll 04/30/2014 02:46 PM 88,576 php_mysqli.dll 04/30/2014 02:46 PM 141,824 php_oci8.dll 04/30/2014 02:46 PM 142,336 php_oci8_11g.dll 04/30/2014 02:46 PM 120,320 php_opcache.dll 04/30/2014 02:46 PM 72,704 php_openssl.dll 04/30/2014 02:46 PM 21,504 php_pdo_firebird.dll 04/30/2014 02:46 PM 24,576 php_pdo_mysql.dll 04/30/2014 02:46 PM 23,040 php_pdo_oci.dll 04/30/2014 02:46 PM 20,480 php_pdo_odbc.dll 04/30/2014 02:46 PM 27,648 php_pdo_pgsql.dll 04/30/2014 02:46 PM 465,408 php_pdo_sqlite.dll 08/28/2012 04:15 PM 186,520 php_pdo_sqlsrv_54_ts.dll 06/26/2013 03:22 PM 166,400 php_pdo_sqlsrv_55_ts.dll 04/30/2014 02:46 PM 90,112 php_pgsql.dll 04/30/2014 02:46 PM 12,288 php_shmop.dll 04/30/2014 02:46 PM 385,536 php_snmp.dll 04/30/2014 02:46 PM 236,544 php_soap.dll 04/30/2014 02:46 PM 54,784 php_sockets.dll 04/30/2014 02:46 PM 617,472 php_sqlite3.dll 08/28/2012 04:15 PM 204,952 php_sqlsrv_54_ts.dll 06/26/2013 03:22 PM 183,296 php_sqlsrv_55_ts.dll 04/30/2014 02:46 PM 31,744 php_sybase_ct.dll 04/30/2014 02:46 PM 236,544 php_tidy.dll 04/30/2014 02:46 PM 51,712 php_xmlrpc.dll 04/30/2014 02:46 PM 231,936 php_xsl.dll 40 File(s) 11,480,368 bytes 2 Dir(s) 83,103,895,552 bytes free When I try to access the website I get this error Fatal error: Undefined class constant 'MYSQL_ATTR_INIT_COMMAND' which is because the extensions are not loaded. I am assuming that the configuration are correct since the same configuration are working on a different server. How can I fix this PHP Startup issue? A: As Darren commented, Apache don't understand php.ini relative paths in Windows. In PHP manual we have an How-to install Apache 2.x on Microsoft Windows. One of the comments suggests using absolute paths. So, try changing the relative paths in your php.ini to absolute paths. extension_dir="C:\full\path\to\php\ext" A: Use absolute path: extension_dir="C:\full\path\here"
{ "pile_set_name": "StackExchange" }
Q: Express a complex number in modulus amplitude form Express the complex number $\displaystyle 1+\sin \alpha +i\cos \alpha $ in modulus-amplitude form.
My Attempt:
$\displaystyle r\cos \theta= 1+\sin \alpha $
$\displaystyle r\sin \theta= \cos \alpha $
Squaring and adding,
$\displaystyle r^2= (1+\sin \alpha)^2+ \cos^2 \alpha$
$\displaystyle r^2= 2(1+\sin \alpha) $
$\displaystyle \tan \theta = \frac{\cos \alpha}{1+\sin \alpha} $
How do I break up $\displaystyle 1+\sin \alpha $?
A: It is better to solve it this way: $$\begin{aligned} 1+\sin\alpha+i\cos \alpha &=1+\cos\left(\frac{\pi}{2}-\alpha\right)+i\sin\left(\frac{\pi}{2}-\alpha\right)\\ & \stackrel{*}{=}2\cos^2\left(\frac{\pi}{4}-\frac{\alpha}{2}\right)+i2\sin\left(\frac{\pi}{4}-\frac{\alpha}{2}\right)\cos\left(\frac{\pi}{4}-\frac{\alpha}{2}\right)\\ &=2\cos\left(\frac{\pi}{4}-\frac{\alpha}{2}\right)\left(\cos\left(\frac{\pi}{4}-\frac{\alpha}{2}\right)+i\sin\left(\frac{\pi}{4}-\frac{\alpha}{2}\right)\right)\\ &=2\cos\left(\frac{\pi}{4}-\frac{\alpha}{2}\right)e^{i\left(\pi/4-\alpha/2\right)}\\ \end{aligned}$$ $(*)$: In this step I used the formulas $\cos(2x)=2\cos^2x-1$ and $\sin(2x)=2\sin x\cos x$. In your method, your expression $\tan\theta=\dfrac{\cos\alpha}{1+\sin\alpha}$ is correct. Hence, $$\tan\theta=\frac{\sin\left(\frac{\pi}{2}-\alpha\right)}{1+\cos\left(\frac{\pi}{2}-\alpha\right)}=\frac{2\sin\left(\frac{\pi}{4}-\frac{\alpha}{2}\right)\cos\left(\frac{\pi}{4}-\frac{\alpha}{2}\right)}{2\cos^2\left(\frac{\pi}{4}-\frac{\alpha}{2}\right)}$$ $$\Rightarrow \tan\theta=\tan\left(\frac{\pi}{4}-\frac{\alpha}{2}\right)$$
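For completeness, the modulus in this form agrees with the $r^2=2(1+\sin\alpha)$ found in the attempt above; a one-line check using the same double-angle identity:
$$r^2=\left[2\cos\left(\frac{\pi}{4}-\frac{\alpha}{2}\right)\right]^2=2\left[1+\cos\left(\frac{\pi}{2}-\alpha\right)\right]=2(1+\sin\alpha).$$
So the modulus is $r=2\cos\left(\frac{\pi}{4}-\frac{\alpha}{2}\right)$ and the amplitude (argument) is $\frac{\pi}{4}-\frac{\alpha}{2}$, provided this cosine is non-negative.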
{ "pile_set_name": "StackExchange" }
Q: How to choose my image according to the windows phone theme in xaml In my xaml code I have this: <Button Grid.ColumnSpan="2" Grid.Row="3" Height="72" Name="btnSend" Click="btnSend_Click"> <Button.Background> <ImageBrush x:Name="imButton" ImageSource="/icons/appbar.feature.email.rest.png" Stretch="None"/> </Button.Background> </Button> For the imageSource I use a default icon from sdk, my problem is when I change de theme in light, the icon doesn't change and stay white. How to change this image when changing theme? A: You can using transparent for solving this problem. First, create a style for this button: <phone:PhoneApplicationPage.Resources> <Style x:Key="IconButton" TargetType="Button"> <Setter Property="Background" Value="Transparent"/> <Setter Property="BorderBrush" Value="{StaticResource PhoneForegroundBrush}"/> <Setter Property="Foreground" Value="{StaticResource PhoneForegroundBrush}"/> <Setter Property="BorderThickness" Value="{StaticResource PhoneBorderThickness}"/> <Setter Property="FontFamily" Value="{StaticResource PhoneFontFamilySemiBold}"/> <Setter Property="FontSize" Value="{StaticResource PhoneFontSizeMediumLarge}"/> <Setter Property="Padding" Value="10,3,10,5"/> <Setter Property="Template"> <Setter.Value> <ControlTemplate TargetType="Button"> <Grid Background="Transparent"> <VisualStateManager.VisualStateGroups> </VisualStateManager.VisualStateGroups> <Border x:Name="ButtonBackground" BorderBrush="{TemplateBinding BorderBrush}" BorderThickness="{TemplateBinding BorderThickness}" Background="{TemplateBinding Background}" CornerRadius="0" Margin="{StaticResource PhoneTouchTargetOverhang}"> <Grid x:Name="ContentContainer" OpacityMask="{TemplateBinding Content}" Background="{TemplateBinding Foreground}"/> </Border> </Grid> </ControlTemplate> </Setter.Value> </Setter> </Style> after that use it as following: <Button Style="{StaticResource IconButton}" > <ImageBrush ImageSource="/icons/home.png"> </Button> more info try find here
{ "pile_set_name": "StackExchange" }
Q: TBB::parallel_for creates too many class/body copies? I followed the basic parallel_for example of TBB. The documentation states: Template function parallel_for requires that the body object have a copy constructor, which is invoked to create a separate copy (or copies) for each worker thread. My algorithm needs some memory per concurrent worker to operate, so I currently allocate that memory in the copy constructor. It works, but these are the numbers on my 8-thread machine: on a range of 0-10000 I get about 2000 work chunks (calls of operator()) and the copy constructor is called about 300 times! That's the problem: 300 memory allocations where only 8 are needed. I checked that only 8 threads are running, and definitely no more than 8 class copies are used concurrently. Am I completely wrong in assuming that the number of copies correlates with the number of threads? Is there a better way to allocate the memory?
#include "tbb/tbb.h"
using namespace tbb;
class ApplyFoo {
    float *const my_a;
public:
    void operator()( const blocked_range<size_t>& r ) const {
        float *a = my_a;
        for( size_t i=r.begin(); i!=r.end(); ++i )
            Foo(a[i]); // Foo uses the allocated memory
    }
    ApplyFoo( float a[] ) : my_a(a) {}
    // the copy constructor is called for every body copy TBB makes
    ApplyFoo( const ApplyFoo& other ) : my_a(other.my_a) {
        // Allocate some memory here...
    }
    ~ApplyFoo() {
        // Free the memory here...
    }
};
void ParallelApplyFoo( float a[], size_t n ) {
    parallel_for(blocked_range<size_t>(0,n), ApplyFoo(a));
}
A: Am I completely wrong in assuming that the number of copies correlates with the number of threads? You are right that there is a correlation for the default partitioner (auto_partitioner), but the multiplier is big and depends on run-time conditions, so the number of copies can be as large as the number of subranges. So there is no surprise. However, the number of subranges can be controlled by specifying the grain-size:
size_t p = task_scheduler_init::default_num_threads();
size_t grainsize = 2*n/p-1;
parallel_for(blocked_range<size_t>(0,n,grainsize), ApplyFoo(a));
The computation 2*n/p-1 is used because in TBB the grainsize is not the minimal size of a possible sub-range but the threshold used to decide whether to split. Also, for completely predictable partitioner behavior with respect to the number of parallel_for body copies (independent of run-time conditions), use the simple_partitioner instead:
parallel_for(blocked_range<size_t>(0,n), ApplyFoo(a), simple_partitioner());
Though, this can lead to additional overhead for big ranges with small grain-sizes, since it does not aggregate the ranges. Is there a better way to allocate the memory? Yes, and the grain-size is not a good tool for this since it prevents the TBB scheduler from load-balancing well. I recommend using thread-local storage containers instead. Unlike compiler-based TLS, they let you traverse the values in order to clean up the memory in one place, even if the originating thread is gone.
{ "pile_set_name": "StackExchange" }
Q: Converting Cartesian image to polar, appearance differences I'm trying to do a polar transform on the first image below and end up with the second. However my result is the third image. I have a feeling it has to do with what location I choose as my "origin" but am unsure. radius = sqrt(width**2 + height**2) nheight = int(ceil(radius)/2) nwidth = int(ceil(radius/2)) for y in range(0, height): for x in range(0, width): t = int(atan(y/x)) r = int(sqrt(x**2+y**2)/2) color = getColor(getPixel(pic, x, y)) setColor( getPixel(radial,r,t), color) A: There are a few differences / errors: They use the centre of the image as the origin They scale the axis appropriately. In your example, you're plotting your angle (between 0 and in your case, pi), instead of utilising the full height of the image. You're using the wrong atan function (atan2 works a lot better in this situation :)) Not amazingly important, but you're rounding unnecessarily quite a lot, which throws off accuracy a little and can slow things down. This is the code combining my suggested improvements. It's not massively efficient, but it should hopefully work :) maxradius = sqrt(width**2 + height**2)/2 rscale = width / maxradius tscale = height / (2*math.pi) for y in range(0, height): dy = y - height/2 for x in range(0, width): dx = x - width/2 t = atan2(dy,dx)%(2*math.pi) r = sqrt(dx**2+dy**2) color = getColor(getPixel(pic, x, y)) setColor( getPixel(radial,int(r*rscale),int(t*tscale)), color) In particular, it fixes the above problems in the following ways: We use dx = x - width / 2 as a measure of distance from the centre, and similarly with dy. We then use these in replace of x, y throughout the computation. We will have our r satisfying 0 <= r <= sqrt( (width/2)^2 +(height/2)^2 ), and our t eventually satisfying 0 < t <= 2 pi so, I create the appropriate scale factors to put r and t along the x and y axes respectively. Normal atan can only distinguish based on gradients, and is computationally unstable near vertical lines... Instead, atan2 (see http://en.wikipedia.org/wiki/Atan2) solves both problems, and accepts (y,x) pairs to give an angle. atan2 returns an angle -pi < t <= pi, so we can find the remainder modulo 2 * math.pi to it to get it in the range 0 < t <= 2pi ready for scaling. I've only rounded at the end, when the new pixels get set. Any questions, just ask!
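If the image is available as a numpy array rather than through getPixel/setColor, the same forward mapping can be done without the explicit double loop. This is only a sketch of that idea; the array-based interface and the output size are assumptions, not part of the original code:
import numpy as np

def cartesian_to_polar(img):
    # forward-map each (x, y) pixel to (r, theta), like the loop version above
    h, w = img.shape[:2]
    y, x = np.indices((h, w))
    dy, dx = y - h / 2.0, x - w / 2.0
    r = np.sqrt(dx ** 2 + dy ** 2)
    t = np.arctan2(dy, dx) % (2 * np.pi)
    max_r = np.sqrt((w / 2.0) ** 2 + (h / 2.0) ** 2)
    r_idx = np.minimum((r * (w - 1) / max_r).astype(int), w - 1)        # radius along the x axis
    t_idx = np.minimum((t * (h - 1) / (2 * np.pi)).astype(int), h - 1)  # angle along the y axis
    polar = np.zeros_like(img)
    polar[t_idx, r_idx] = img[y, x]
    return polar
Being a forward mapping, it leaves the same small gaps as the loop version; an inverse mapping (iterating over the polar image and sampling the source) avoids them.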
{ "pile_set_name": "StackExchange" }
Q: Add a vote to convert a question to community wiki As far as I know, the only ways for a question to get converted to a community wiki are: The original poster converts it. A mod converts it. Could we get a vote for this, just like there are to close or delete questions? A: As long as Community Wiki is a one-way street, I'm not sure this is a good idea. If five people vote to close, five people can vote to reopen. If five people vote to make CW, there is no going back. Unless the site allows a question to go back and forth between CW and question (which I don't think it should), I think this will be a nightmare. A: Don't forget that posts convert to CW after a number of edits - I think it's still 5. So you can help force a question to CW by making an edit to it, but that would rely on 4 others making edits as well or the OP making several outside the 5 minute window, so it's neither effective nor particularly fair and I wouldn't recommend it. The other method to community wiki is if there are enough answers to the question, 30 for SO and SF, 15 for SU.
{ "pile_set_name": "StackExchange" }
Q: How does the event chaining works for multiple objects? I'm performing a deleting action on a page when the user click on the confirmation button on a twitter bootstrap modal window button. I have two buttons: one allow the user to cancel the action, and another one to confirm. When the user clicks on the confirm delete button, when the modal is hidden, I perform my actions, so for example I can show an animation and actually delete the item. If the user click on few items but his/her choice is the cancel button, when he/she clicks on the item he/she want to delete, the deletion is performed also on the elements where the choice has been to cancel. Should not the 'hidden' event be detached from the element once it is performed? I know I can detach the event chaining changing $('#confirmDeleteModal').on('hidden', function() { to $('#confirmDeleteModal').off('hidden').on('hidden', function() { but I really would understand why this happen. Am I missing something? The code is: $(document).ready(function(){ $('.delete').on('click', function() { var itemID = $(this).data('product-id') $('#confirmDeleteModal').modal('show'); $('#confirmDelete').on('click', function() { $('#confirmDeleteModal').on('hidden', function() { // Here I do my stuff to perform deletion $('#result').append('This method has been called for ' + itemID + ' <br />' ) }); }); }); }); I hope I have exposed clearly my question. I prepared a JS Bin as well: http://jsbin.com/inulaw/5/edit A: The problem here is that you are attaching additional listeners to the click and hidden events each time. To fix this, chain the jQuery .off('eventName') method before calling the .on('eventname') again. Here's your code updated and working great in the JS Bin: $(document).ready(function(){ $('.delete').on('click', function() { var itemID = $(this).data('product-id') $('#confirmDeleteModal').modal('show'); $('#confirmDeleteModal').off('hidden'); // must reset from previous $('#confirmDelete').off('click').on('click', function() { $('#confirmDeleteModal').on('hidden', function() { // Here I do my stuff to perform deletion $('#result').append('This method has been called for ' + itemID + ' <br />' ) }); }); }); }); EDIT: I moved the $('#confirmDeleteModal').off('hidden'); to above the click event so it resets whether or not the confirm is clicked.
{ "pile_set_name": "StackExchange" }
Q: What is the replacement for .deb packages to automatically install dependencies in Ubuntu? In Ubuntu I install some software via deb packages (dpkg -i), because "apt-get install" only offers old versions of the software. But then I have to install all dependencies of the deb package manually. How can I avoid this? For example, to install Bareos version 16 in Ubuntu, I install it from a deb package from the official site. But there are other files on the page (Packages, Release, Sources ...). What should I use, and how can I replace the deb package and install its dependencies automatically? For example, in CentOS, as far as I know, there is a .repo file for this.
A: To add the Bareos community repository, you first need to download and import their key:
curl http://download.bareos.org/bareos/release/latest/xUbuntu_16.04/Release.key | sudo apt-key add -
then add the repository:
echo deb http://download.bareos.org/bareos/release/latest/xUbuntu_16.04/ / | sudo tee /etc/apt/sources.list.d/bareos.list
before updating:
sudo apt update
You will then be able to install Bareos packages and their dependencies using apt install bareos, and they will be upgraded by apt upgrade whenever new releases are made available.
{ "pile_set_name": "StackExchange" }
Q: What does "ch - 'a'" mean? I was going through a piece of code which I was not able to understand and thought of checking with the community. In the code below, I am not able to understand what the line count[ch-'a']++ does, or how we could write the same thing in Java 7. The explanation says that we have a String s and an int array count. We iterate through the string s, count the number of occurrences of each character in s, and put the frequencies into the array. Please help!!
String s = "test";
int[] count = new int[26];
for (int i = 0; i < s.length(); i++) {
    char ch = s.charAt(i);
    count[ch-'a']++;
}
A: You are indexing into the int array count and incrementing the value at index ch-'a'. Subtracting two char values yields an int, e.g. count['a'-'a'] == count[0], so each letter maps to a slot that records how many times it occurs in the string. You subtract 'a' in ch - 'a' because the integer (character-code) values of the alphabetic characters do not start at 0.
A: The code is trying to count the number of occurrences of each character, and it allocates them such that 'a' occupies position 0, 'b' occupies position 1, and so on. To get position 0 you compute 'a' - 'a'; to get position 1 you compute 'b' - 'a'. So what happens in "count[ch-'a']++;" is equivalent to
int position = ch - 'a'; // get the position for this character
count[position] = count[position] + 1; // increment the count at that position
{ "pile_set_name": "StackExchange" }
Q: C# - A faster alternative to Convert.ToSingle() I'm working on a program which reads millions of floating point numbers from a text file. This program runs inside of a game that I'm designing, so I need it to be fast (I'm loading an obj file). So far, loading a relatively small file takes about a minute (without precompilation) because of the slow speed of Convert.ToSingle(). Is there a faster way to do this? EDIT: Here's the code I use to parse the Obj file http://pastebin.com/TfgEge9J using System; using System.IO; using System.Collections.Generic; using OpenTK.Math; using System.Drawing; using PlatformLib; public class ObjMeshLoader { public static StreamReader[] LoadMeshes(string fileName) { StreamReader mreader = new StreamReader(PlatformLib.Platform.openFile(fileName)); MemoryStream current = null; List<MemoryStream> mstreams = new List<MemoryStream>(); StreamWriter mwriter = null; if (!mreader.ReadLine().Contains("#")) { mreader.BaseStream.Close(); throw new Exception("Invalid header"); } while (!mreader.EndOfStream) { string cmd = mreader.ReadLine(); string line = cmd; line = line.Trim(splitCharacters); line = line.Replace(" ", " "); string[] parameters = line.Split(splitCharacters); if (parameters[0] == "mtllib") { loadMaterials(parameters[1]); } if (parameters[0] == "o") { if (mwriter != null) { mwriter.Flush(); current.Position = 0; } current = new MemoryStream(); mwriter = new StreamWriter(current); mwriter.WriteLine(parameters[1]); mstreams.Add(current); } else { if (mwriter != null) { mwriter.WriteLine(cmd); mwriter.Flush(); } } } mwriter.Flush(); current.Position = 0; List<StreamReader> readers = new List<StreamReader>(); foreach (MemoryStream e in mstreams) { e.Position = 0; StreamReader sreader = new StreamReader(e); readers.Add(sreader); } return readers.ToArray(); } public static bool Load(ObjMesh mesh, string fileName) { try { using (StreamReader streamReader = new StreamReader(Platform.openFile(fileName))) { Load(mesh, streamReader); streamReader.Close(); return true; } } catch { return false; } } public static bool Load2(ObjMesh mesh, StreamReader streamReader, ObjMesh prevmesh) { if (prevmesh != null) { //mesh.Vertices = prevmesh.Vertices; } try { //streamReader.BaseStream.Position = 0; Load(mesh, streamReader); streamReader.Close(); #if DEBUG Console.WriteLine("Loaded "+mesh.Triangles.Length.ToString()+" triangles and"+mesh.Quads.Length.ToString()+" quadrilaterals parsed, with a grand total of "+mesh.Vertices.Length.ToString()+" vertices."); #endif return true; } catch (Exception er) { Console.WriteLine(er); return false; } } static char[] splitCharacters = new char[] { ' ' }; static List<Vector3> vertices; static List<Vector3> normals; static List<Vector2> texCoords; static Dictionary<ObjMesh.ObjVertex, int> objVerticesIndexDictionary; static List<ObjMesh.ObjVertex> objVertices; static List<ObjMesh.ObjTriangle> objTriangles; static List<ObjMesh.ObjQuad> objQuads; static Dictionary<string, Bitmap> materials = new Dictionary<string, Bitmap>(); static void loadMaterials(string path) { StreamReader mreader = new StreamReader(Platform.openFile(path)); string current = ""; bool isfound = false; while (!mreader.EndOfStream) { string line = mreader.ReadLine(); line = line.Trim(splitCharacters); line = line.Replace(" ", " "); string[] parameters = line.Split(splitCharacters); if (parameters[0] == "newmtl") { if (materials.ContainsKey(parameters[1])) { isfound = true; } else { current = parameters[1]; } } if (parameters[0] == "map_Kd") { if (!isfound) { string filename = 
""; for (int i = 1; i < parameters.Length; i++) { filename += parameters[i]; } string searcher = "\\" + "\\"; filename.Replace(searcher, "\\"); Bitmap mymap = new Bitmap(filename); materials.Add(current, mymap); isfound = false; } } } } static float parsefloat(string val) { return Convert.ToSingle(val); } int remaining = 0; static string GetLine(string text, ref int pos) { string retval = text.Substring(pos, text.IndexOf(Environment.NewLine, pos)); pos = text.IndexOf(Environment.NewLine, pos); return retval; } static void Load(ObjMesh mesh, StreamReader textReader) { //try { //vertices = null; //objVertices = null; if (vertices == null) { vertices = new List<Vector3>(); } if (normals == null) { normals = new List<Vector3>(); } if (texCoords == null) { texCoords = new List<Vector2>(); } if (objVerticesIndexDictionary == null) { objVerticesIndexDictionary = new Dictionary<ObjMesh.ObjVertex, int>(); } if (objVertices == null) { objVertices = new List<ObjMesh.ObjVertex>(); } objTriangles = new List<ObjMesh.ObjTriangle>(); objQuads = new List<ObjMesh.ObjQuad>(); mesh.vertexPositionOffset = vertices.Count; string line; string alltext = textReader.ReadToEnd(); int pos = 0; while ((line = GetLine(alltext, pos)) != null) { if (line.Length < 2) { break; } //line = line.Trim(splitCharacters); //line = line.Replace(" ", " "); string[] parameters = line.Split(splitCharacters); switch (parameters[0]) { case "usemtl": //Material specification try { mesh.Material = materials[parameters[1]]; } catch (KeyNotFoundException) { Console.WriteLine("WARNING: Texture parse failure: " + parameters[1]); } break; case "p": // Point break; case "v": // Vertex float x = parsefloat(parameters[1]); float y = parsefloat(parameters[2]); float z = parsefloat(parameters[3]); vertices.Add(new Vector3(x, y, z)); break; case "vt": // TexCoord float u = parsefloat(parameters[1]); float v = parsefloat(parameters[2]); texCoords.Add(new Vector2(u, v)); break; case "vn": // Normal float nx = parsefloat(parameters[1]); float ny = parsefloat(parameters[2]); float nz = parsefloat(parameters[3]); normals.Add(new Vector3(nx, ny, nz)); break; case "f": switch (parameters.Length) { case 4: ObjMesh.ObjTriangle objTriangle = new ObjMesh.ObjTriangle(); objTriangle.Index0 = ParseFaceParameter(parameters[1]); objTriangle.Index1 = ParseFaceParameter(parameters[2]); objTriangle.Index2 = ParseFaceParameter(parameters[3]); objTriangles.Add(objTriangle); break; case 5: ObjMesh.ObjQuad objQuad = new ObjMesh.ObjQuad(); objQuad.Index0 = ParseFaceParameter(parameters[1]); objQuad.Index1 = ParseFaceParameter(parameters[2]); objQuad.Index2 = ParseFaceParameter(parameters[3]); objQuad.Index3 = ParseFaceParameter(parameters[4]); objQuads.Add(objQuad); break; } break; } } //}catch(Exception er) { // Console.WriteLine(er); // Console.WriteLine("Successfully recovered. 
Bounds/Collision checking may fail though"); //} mesh.Vertices = objVertices.ToArray(); mesh.Triangles = objTriangles.ToArray(); mesh.Quads = objQuads.ToArray(); textReader.BaseStream.Close(); } public static void Clear() { objVerticesIndexDictionary = null; vertices = null; normals = null; texCoords = null; objVertices = null; objTriangles = null; objQuads = null; } static char[] faceParamaterSplitter = new char[] { '/' }; static int ParseFaceParameter(string faceParameter) { Vector3 vertex = new Vector3(); Vector2 texCoord = new Vector2(); Vector3 normal = new Vector3(); string[] parameters = faceParameter.Split(faceParamaterSplitter); int vertexIndex = Convert.ToInt32(parameters[0]); if (vertexIndex < 0) vertexIndex = vertices.Count + vertexIndex; else vertexIndex = vertexIndex - 1; //Hmm. This seems to be broken. try { vertex = vertices[vertexIndex]; } catch (Exception) { throw new Exception("Vertex recognition failure at " + vertexIndex.ToString()); } if (parameters.Length > 1) { int texCoordIndex = Convert.ToInt32(parameters[1]); if (texCoordIndex < 0) texCoordIndex = texCoords.Count + texCoordIndex; else texCoordIndex = texCoordIndex - 1; try { texCoord = texCoords[texCoordIndex]; } catch (Exception) { Console.WriteLine("ERR: Vertex " + vertexIndex + " not found. "); throw new DllNotFoundException(vertexIndex.ToString()); } } if (parameters.Length > 2) { int normalIndex = Convert.ToInt32(parameters[2]); if (normalIndex < 0) normalIndex = normals.Count + normalIndex; else normalIndex = normalIndex - 1; normal = normals[normalIndex]; } return FindOrAddObjVertex(ref vertex, ref texCoord, ref normal); } static int FindOrAddObjVertex(ref Vector3 vertex, ref Vector2 texCoord, ref Vector3 normal) { ObjMesh.ObjVertex newObjVertex = new ObjMesh.ObjVertex(); newObjVertex.Vertex = vertex; newObjVertex.TexCoord = texCoord; newObjVertex.Normal = normal; int index; if (objVerticesIndexDictionary.TryGetValue(newObjVertex, out index)) { return index; } else { objVertices.Add(newObjVertex); objVerticesIndexDictionary[newObjVertex] = objVertices.Count - 1; return objVertices.Count - 1; } } } A: Based on your description and the code you've posted, I'm going to bet that your problem isn't with the reading, the parsing, or the way you're adding things to your collections. The most likely problem is that your ObjMesh.Objvertex structure doesn't override GetHashCode. (I'm assuming that you're using code similar to http://www.opentk.com/files/ObjMesh.cs. If you're not overriding GetHashCode, then your objVerticesIndexDictionary is going to perform very much like a linear list. That would account for the performance problem that you're experiencing. I suggest that you look into providing a good GetHashCode method for your ObjMesh.Objvertex class. See Why is ValueType.GetHashCode() implemented like it is? for information about the default GetHashCode implementation for value types and why it's not suitable for use in a hash table or dictionary. A: Edit 3: The problem is NOT with the parsing. It's with how you read the file. If you read it properly, it would be faster; however, it seems like your reading is unusually slow. My original suspicion was that it was because of excess allocations, but it seems like there might be other problems with your code too, since that doesn't explain the entire slowdown. 
Nevertheless, here's a piece of code I made that completely avoids all object allocations: static void Main(string[] args) { long counter = 0; var sw = Stopwatch.StartNew(); var sb = new StringBuilder(); var text = File.ReadAllText("spacestation.obj"); for (int i = 0; i < text.Length; i++) { int start = i; while (i < text.Length && (char.IsDigit(text[i]) || text[i] == '-' || text[i] == '.')) { i++; } if (i > start) { sb.Append(text, start, i - start); //Copy data to the buffer float value = Parse(sb); //Parse the data sb.Remove(0, sb.Length); //Clear the buffer counter++; } } sw.Stop(); Console.WriteLine("{0:N0}", sw.Elapsed.TotalSeconds); //Only a few ms } with this parser: const int MIN_POW_10 = -16, int MAX_POW_10 = 16, NUM_POWS_10 = MAX_POW_10 - MIN_POW_10 + 1; static readonly float[] pow10 = GenerateLookupTable(); static float[] GenerateLookupTable() { var result = new float[(-MIN_POW_10 + MAX_POW_10) * 10]; for (int i = 0; i < result.Length; i++) result[i] = (float)((i / NUM_POWS_10) * Math.Pow(10, i % NUM_POWS_10 + MIN_POW_10)); return result; } static float Parse(StringBuilder str) { float result = 0; bool negate = false; int len = str.Length; int decimalIndex = str.Length; for (int i = len - 1; i >= 0; i--) if (str[i] == '.') { decimalIndex = i; break; } int offset = -MIN_POW_10 + decimalIndex; for (int i = 0; i < decimalIndex; i++) if (i != decimalIndex && str[i] != '-') result += pow10[(str[i] - '0') * NUM_POWS_10 + offset - i - 1]; else if (str[i] == '-') negate = true; for (int i = decimalIndex + 1; i < len; i++) if (i != decimalIndex) result += pow10[(str[i] - '0') * NUM_POWS_10 + offset - i]; if (negate) result = -result; return result; } it happens in a small fraction of a second. Of course, this parser is poorly tested and has these current restrictions (and more): Don't try parsing more digits (decimal and whole) than provided for in the array. No error handling whatsoever. Only parses decimals, not exponents! i.e. it can parse 1234.56 but not 1.23456E3. Doesn't care about globalization/localization. Your file is only in a single format, so there's no point caring about that kind of stuff because you're probably using English to store it anyway. It seems like you won't necessarily need this much overkill, but take a look at your code and try to figure out the bottleneck. It seems to be neither the reading nor the parsing.
{ "pile_set_name": "StackExchange" }
Q: Does Nashorn Javascript compile "eval" statements? I understand that Nashorn compiles to JVM byte code on the fly. But, what does Nashorn do when it encounters the eval function with a String? Does it compile the string contents or interpret it? For example: function sayHi() { console.log("hi world"); } for (var i=0;i<10;i++) { eval("sayHi()"); // what happens here? } A couple options could be: 1) it does not compile the string within an eval 2) it compiles it once, caches it, and then reuses the same byte code if it encounters the same string (as in the loop above) 3) it re-compiles the contents of an eval String a-fresh each time Of course this is a small example in which the contents of an eval string is just a method call, but imagine it is more complex JS code being passed as a string into eval. A: Nashorn always compiles javascript to bytecode for execution. There is no interpreter for JS. Yes, compiled/loaded Classes are unloaded if not reachable from live objects.
{ "pile_set_name": "StackExchange" }
Q: Which development software should be used for VLC on Mac? I want to do some modification and development work on VLC. I downloaded its source code, vlc-1.1.5, and it is written in C. So which development environment should I use: Xcode or something else? Thanks.
A: I thought VLC was developed in Qt. Do you see class names starting with Q? In case it is developed in Qt, Qt Creator or KDevelop would be a good choice of development environment.
{ "pile_set_name": "StackExchange" }
Q: Facebook JS affecting CSS/@font-face in IE? I seem to notice that Facebook's JS <div id="fb-root"></div> <script src="http://connect.facebook.net/en_US/all.js#appId=APP_ID&amp;xfbml=1"></script> seems to affect my site's CSS in IE. eg. say headers use font1 and body use font2. sometimes, in IE all fonts use font1 or even swap, headers used font2 and body use font1 ... It also seem to affect some PIE CSS stuff. Anyone having the same problem? A: I had the exact same problem. I use a downloaded font for my headers and on IE8, the Facebook Javascript screwed up the fonts. This occurred when I structured my code in what I assumed was the proper architecture - the Facebook Javascript include was up in my header with the rest of my Javascript includes. When I moved the javascript include down to the actual div that added the like button, the problem went away. <div id="facebooklike" style="position: absolute; left: 645px; top: -37px;"> <div id="fb-root"></div><script src="http://connect.facebook.net/en_US/all.js#xfbml=1"></script><fb:like href="http://www.tripinsurance.com" send="false" width="350" show_faces="false" font="arial"></fb:like> </div> I think the issue may be happening if the Facebook code is loaded before the the div loads on the page.
{ "pile_set_name": "StackExchange" }
Q: JavaScript - Filter Object by key I am looking for a short and efficient way to filter an object by key. I have this kind of data structure: {"Key1":[obj1,obj2,obj3], "Key2":[obj4,obj5,obj6]} Now I want to filter by keys, for example by "Key1": {"Key1":[obj1,obj2,obj3]}
A: You can use the .filter function on the object's keys and then rebuild the object from the keys that match. Array.prototype.filter only exists on arrays, so the keys are first extracted with Object.keys:
var data = {"Key1":[obj1,obj2,obj3], "Key2":[obj4,obj5,obj6]};
var wantedKey = "Key1";
var filtered = Object.keys(data)
    .filter(function(key) { return key === wantedKey; })
    .reduce(function(result, key) { result[key] = data[key]; return result; }, {});
// filtered is {"Key1":[obj1,obj2,obj3]}
The callback passed to filter is where you put your own matching condition. I hope it can help you.
{ "pile_set_name": "StackExchange" }
Q: How can we derive the asymptotic expansion for the second derivative of the gamma-function? We can express the first derivative of the gamma function as: $$\Gamma'(s) \sim -\frac{1}{s^2}+\frac{6\gamma^2+\pi^2}{12}+O(s)$$ but what about the second derivative? I do not know how to approach the problem. Thank you.
A: You could obtain the expansion of any derivative of $\Gamma(s)$ using its own expansion and differentiating term by term $$\Gamma(s)=\frac{1}{s}-\gamma +\frac{1}{12} \left(6 \gamma ^2+\pi ^2\right) s+\frac{1}{6} \left(-\gamma ^3-\frac{\gamma \pi ^2}{2}+\psi ^{(2)}(1)\right)s^2+\frac{1}{24} \left(8 \gamma \zeta (3)+\gamma ^4+\gamma ^2 \pi ^2+\frac{3 \pi ^4}{20}\right)s^3+\frac{ \left(-40 \left(6 \gamma ^2+\pi ^2\right) \zeta (3)-288 \zeta (5)-12 \gamma ^5-20 \gamma ^3 \pi ^2-9 \gamma \pi ^4\right)}{1440}s^4+O\left(s^5\right)$$ and truncate the result to $O(s)$.
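Carrying out that term-by-term differentiation twice on the series above (the constant and linear terms drop out after two derivatives) gives, to the same order as the $\Gamma'(s)$ expansion quoted in the question,
$$\Gamma''(s)\sim \frac{2}{s^3}+\frac{1}{3}\left(-\gamma^3-\frac{\gamma\pi^2}{2}+\psi^{(2)}(1)\right)+O(s),$$
and since $\psi^{(2)}(1)=-2\zeta(3)$, the constant term can also be written as $-\frac{\gamma^3}{3}-\frac{\gamma\pi^2}{6}-\frac{2\zeta(3)}{3}$.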
{ "pile_set_name": "StackExchange" }
Q: numpy : How to convert an array type quickly I find the astype() method of numpy arrays not very efficient. I have an array containing 3 million of Uint8 point. Multiplying it by a 3x3 matrix takes 2 second, but converting the result from uint16 to uint8 takes another second. More precisely : print time.clock() imgarray = np.dot(imgarray, M)/255 print time.clock() imgarray = imgarray.clip(0, 255) print time.clock() imgarray = imgarray.astype('B') print time.clock() dot product and scaling takes 2 sec clipping takes 200 msec type conversion takes 1 sec Given the time taken by the other operations, I would expect astype to be faster. Is there a faster way to do type conversion, or am I wrong when guesstimating that type conversion should not be that hard ? Edit : the goal is to save the final 8 bit array to a file A: When you use imgarray = imgarray.astype('B'), you get a copy of the array, cast to the specified type. This requires extra memory allocation, even though you immediately flip imgarray to point to the newly allocated array. If you use imgarray.view('uint8'), then you get a view of the array. This uses the same data except that it is interpreted as uint8 instead of imgarray.dtype. (np.dot returns a uint32 array, so after the np.dot, imgarray is of type uint32.) The problem with using view, however, is that a 32-bit integer becomes viewed as 4 8-bit integers, and we only care about the value in the last 8-bits. So we need to skip to every 4th 8-bit integer. We can do that with slicing: imgarray.view('uint8')[:,::4] IPython's %timeit command shows there is a significant speed up doing things this way: In [37]: %timeit imgarray2 = imgarray.astype('B') 10000 loops, best of 3: 107 us per loop In [39]: %timeit imgarray3 = imgarray.view('B')[:,::4] 100000 loops, best of 3: 3.64 us per loop
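As a quick sanity check of the view-based trick, the snippet below compares it against astype on a small array. One assumption the answer leaves implicit: taking every 4th byte starting at offset 0 picks out the low byte of each 32-bit integer only on a little-endian platform, and it only matches astype when the values already fit in 8 bits (as they do here after the clip to 0-255):
import numpy as np

imgarray = np.random.randint(0, 256, size=(1000, 3)).astype(np.uint32)

converted = imgarray.astype('B')          # copying conversion
viewed = imgarray.view('uint8')[:, ::4]   # zero-copy view of the low bytes (little-endian)

assert np.array_equal(converted, viewed)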
{ "pile_set_name": "StackExchange" }
Q: routing: understanding the default route vs. prefix length, administrative distance and metrics I'm trying to understand how routers make their routing decisions. This is what I've come up with and I'd like to know whether I'm right or not: First, check whether there is an entry in the routing table for the destination address. If there is none, send the packet to the default route. If there is one, do a longest-route match, then check administrative distance and then metrics. Is that correct?
A: Is that correct? No. You need to understand the difference between routing (control plane) and forwarding (data plane). Routing builds the routing table by performing route selection from protocols like OSPF, BGP, static routes, and so on. Forwarding looks up a packet's destination by querying the routing table. Back to your question: "Metric" is a parameter for selecting a route within the routing process for a SPECIFIC protocol. "Administrative distance" is a parameter for selecting a route among DISTINCT protocols. Both "metric" and "administrative distance" live in the control plane (routing); they are used to choose the best routes when BUILDING the routing table. "Prefix length": longest match first is a route lookup strategy of the data plane. For every received packet, the forwarding engine queries the routing table using the longest-match algorithm in order to pick the best route FROM the routing table. The "default route" can be seen simply as the 'shortest prefix ever' (encompassing all other prefixes), useful as a last-resort option for the longest-match lookup.
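To make the longest-match idea concrete, here is a small, self-contained sketch of a forwarding lookup (the prefixes and next hops are made up for illustration). The default route 0.0.0.0/0 matches every destination, but with prefix length 0 it only wins when nothing more specific does:
import ipaddress

routing_table = {
    "0.0.0.0/0":   "default route via ISP",
    "10.0.0.0/8":  "via core router",
    "10.1.0.0/16": "via branch router",
}

def lookup(destination):
    dst = ipaddress.ip_address(destination)
    matches = [(ipaddress.ip_network(prefix), next_hop)
               for prefix, next_hop in routing_table.items()
               if dst in ipaddress.ip_network(prefix)]
    # longest prefix length wins; administrative distance and metrics played
    # their role earlier, when these routes were installed in the table
    best = max(matches, key=lambda m: m[0].prefixlen)
    return best[1]

print(lookup("10.1.2.3"))  # via branch router (the /16 beats the /8 and the /0)
print(lookup("8.8.8.8"))   # default route via ISP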
{ "pile_set_name": "StackExchange" }
Q: DAG Level Access Control (Airflow 1.10.4) I use Airflow 1.10.4, created a role test_role and a user test_user with that role. I also created a DAG with access_control:
with DAG(DAG_NAME,
         schedule_interval='@daily',
         default_args=default_args,
         access_control={
             'test_role': {'can_dag_read'},
         },
         ) as dag:
    DummyOperator(task_id='run_this_1') >> DummyOperator(
        task_id='run_this_2') >> DummyOperator(task_id='run_this_3')
But when I log in as that user, I don't see this DAG. Anything wrong?
A: I guess the access_control param is not released yet. Kindly refer to the Airflow JIRA and the changelog. As a workaround, we can go with the web UI access-control options. If you find any promising solution, please let me know as well. Thanks in advance!
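One detail worth checking, based on common reports for Airflow 1.10.x rather than anything stated above: per-DAG permissions defined via access_control often only show up after the permissions are re-synced, for example by running airflow sync_perm or restarting the webserver. A hypothetical sketch of the DAG definition with that step called out; the DAG id and the sync step are assumptions to verify against your own installation:
from airflow import DAG
from airflow.operators.dummy_operator import DummyOperator

# Hypothetical sketch: give test_role read-only access to this DAG.
# On Airflow 1.10.x the new per-DAG permissions are typically picked up
# only after syncing, e.g. by running `airflow sync_perm` on the
# scheduler/webserver host (an assumption to verify on your install).
with DAG(
    dag_id="access_control_example",
    schedule_interval="@daily",
    default_args=default_args,  # assumed to be defined elsewhere, as in the question
    access_control={"test_role": {"can_dag_read"}},
) as dag:
    DummyOperator(task_id="run_this_1")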
{ "pile_set_name": "StackExchange" }
Q: Handling null values in linear regression, which are supposed to be higher than the non-null values I am currently doing a linear regression where I try to predict housing prices based on different variables that describe each house's spatial features (such as the distance to the closest city, the closest road, etc.). My problem is that one of the original datasets only calculated the distance to the closest road if the road is within a 2 km radius of the house, so any house that does not have a road closer than 2 km got a NULL value instead of the distance. I was therefore wondering: is it possible to replace these null values, for example with some value above 2 km?
A: My suggestion would be to include a dummy variable indicating that the value is missing. If it is significant, you can conclude that living more than 2 km from a road changes (probably decreases) the value of the house.
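A minimal pandas sketch of that suggestion (the column names and the 2 km fill value are illustrative assumptions): add a missing-indicator dummy, and fill the censored distances with a constant at the 2 km cutoff so that the indicator, not the filled number, carries the "far from any road" effect:
import pandas as pd

df = pd.DataFrame({
    "price":     [250_000, 310_000, 180_000, 220_000],
    "road_dist": [0.4,      1.2,     None,    None],   # NULL when no road within 2 km
})

df["road_far"] = df["road_dist"].isna().astype(int)    # dummy: 1 if beyond the 2 km radius
df["road_dist"] = df["road_dist"].fillna(2.0)          # constant fill at the censoring cutoff

# df now has no NULLs; in the regression, the coefficient on road_far captures
# the effect of being more than 2 km from a road
print(df)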
{ "pile_set_name": "StackExchange" }
Q: Tricky conceptual question: ball sliding and rolling down incline We all are familiar with the classic ball rolling down the incline exercise in rotational dynamics. Here is quite a tricky conceptual problem: You have an incline of fixed height, but the angle of inclination may vary. Consider the total kinetic energy $K$ of the ball at the bottom of the incline. Describe the graph of $K$ vs. the angle of inclination $\theta$. We can assume for simplicity that the static and kinetic friction coefficient are the same. Here are some conceptual observations. Now, for $\theta < \theta_s$ where $\theta_s$ is the minimum angle at which the ball slips, friction does not do any work on the ball (rolling friction), so $K=mgh$ (the graph is a straight line) But for $\theta > \theta_s$, the ball both slips and rolls, so some of the kinetic energy is lost to slipping. Thus we should the graph to decrease. As $\theta$ increases, the friction force decreases (it is proportional to $\cos(\theta)$, so we should expect the graph to increase after some point. For $\theta=90$, we are back to $K=mgh$. I also suspect that we have some quadratic-like behavior for $\theta_s<\theta<90$, but I don't know exactly how to quantify the behavior of the ball in this region as it is both slipping and rolling, which makes things somewhat complicated. One might naively say that the energy lost due to slipping is $fd$ where $f$ is the friction force and $d$ is the distance along the incline which the ball travels. However I believe this is not the case, as the effective distance over which friction acts, call it $d_{eff}$ is less than $d$, and depends on the relationship between the angular velocity and the translational velocity of the ball. Note really that this problem can be solved if one has a clear understanding of the mechanics for rolling and slipping scanrios, so it may be helpful to say a few things about this. A: The dynamics of a ball rolling down an incline is interesting. Let's start by figuring out the forces that come into play for the non-slipping case (mass m, radius R, angle of ramp $\theta$): If we consider the motion of the ball as a rotation about point $P$, then the torque is given by $$\Gamma = mgR\sin\theta$$ and the moment of inertia about $P$ is the moment of inertia about $C$ plus $mR^2$ (from the parallel axes theorem). Since $I=\frac25 mR^2$ for a sphere, that means that the moment of inertia about P is $$I_P = \frac75 mR^2$$ The angular acceleration, $\dot{\omega}$ is $$\dot{\omega} = \frac{\Gamma}{I_P} \\ = \frac{mgR\sin\theta}{\frac75 mR^2}\\ = \frac57 \frac{g\sin\theta}{R}$$ We can now compute the response force $f_f$ along the surface, since the torque that appears about the center $C$ should give the same acceleration: $$f_f\ R=I_C\ \dot\omega = \left(\frac25 mR^2\right)\left( \frac57 \frac{g\sin\theta}{R}\right)\\ f_f = \frac27 m g \sin \theta$$ Checking for consistency, the linear acceleration of the center of mass is given by the net force, so $$\begin{align} m a &= f_a - f_f \\ &= mg \sin \theta - \frac27 m g \sin \theta \\ &= \frac57 mg \sin\theta\\ a &= \frac57 g \sin \theta \end{align}$$ Of course without slipping, we know that $\dot\omega R = a$, and indeed this expression for $a$ agrees with the earlier one for $\dot\omega$. Now we add sliding motion. 
Clearly, the sphere will slide when $f_f > \mu f_n$, which means $$\frac27 mg \sin \theta > \mu m g \cos \theta\\ \mu < \frac27 \tan \theta$$ Note that this is much lower than the usual condition for sliding when there is no rolling. If the force of friction is less than the $f_f$ needed to maintain rolling contact, we know it is constant at $$f_f = \mu m g \cos \theta$$ We can now compute the acceleration of the ball down the slope: $$\begin{align} a &= \frac{f_a - f_f}{m}\\ &= g \left(\sin \theta - \mu \cos \theta\right) \end{align}$$ The distance $d$ from top to bottom, given a constant height $h$, is $$d = \frac{h}{\sin \theta}$$ so the time taken is $$\begin{align} t &= \sqrt{\frac{2d}{a}}\\ &=\sqrt{\frac{2h}{g \sin\theta (\sin\theta - \mu\cos\theta)}} \end{align}$$ and at that point the velocity is $$\begin{align} v &= at\\ &=\sqrt{2ad}\\ &=\sqrt{\frac{2g \left(\sin \theta - \mu \cos \theta\right)h}{\sin\theta}} \end{align}$$ And the kinetic energy is $$\begin{align}E &= \frac12 m v^2 \\ &= m g h \frac{\left(\sin \theta - \mu \cos \theta\right)}{\sin\theta}\\ &= mgh(1-\mu\cot\theta) \end{align}$$ The rolling kinetic energy is given by the rotational velocity of the ball. With a constant torque $\Gamma$ and time $t$, the energy is $$\begin{align} E &= \frac12 I\omega^2\\ &= \frac12 I \left(\frac{\Gamma t}{I}\right)^2\\ &= \frac{\Gamma^2 t^2}{2I}\\ &= \frac{f_f^2 R^2}{\frac45 m R^2} \frac{2h}{g \sin\theta (\sin\theta - \mu\cos\theta)}\\ &= \frac{\mu^2 m^2 g^2 \cos^2\theta R^2}{\frac45 m R^2} \frac{2h}{g \sin\theta (\sin\theta - \mu\cos\theta)}\\ &= \frac{5 \mu^2 m g h\cos^2\theta}{2 \sin\theta (\sin\theta - \mu\cos\theta)} \end{align}$$ Plotting these for a couple of values of $\mu$, you get the following (note - this is updated - there was a factor 2 missing in my expression for $t$): When the sphere starts slipping, you lose energy. As the ramp angle increases, the degree of slip becomes greater and so more energy is lost in heat. As the ramp becomes steeper still, the energy dissipated will become less, until there is none when the ramp is vertical.
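The curves described above are easy to reproduce numerically from the final expressions; a small sketch follows (mass, height and g are arbitrary placeholders, and the no-slip branch uses the slipping condition mu < (2/7) tan(theta) derived earlier):
import numpy as np

m, g, h = 1.0, 9.81, 1.0   # placeholder values; only the shape of K(theta) matters

def total_kinetic_energy(theta, mu):
    if mu >= (2.0 / 7.0) * np.tan(theta):   # rolling without slipping: no energy lost
        return m * g * h
    s, c = np.sin(theta), np.cos(theta)
    e_trans = m * g * h * (1.0 - mu * c / s)                           # translational KE
    e_rot = 5.0 * mu**2 * m * g * h * c**2 / (2.0 * s * (s - mu * c))  # rotational KE
    return e_trans + e_rot

for deg in (10, 20, 30, 45, 60, 75, 89.9):
    print(deg, round(total_kinetic_energy(np.radians(deg), mu=0.2), 3))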
{ "pile_set_name": "StackExchange" }