[ "bicycles.stackexchange", "0000051085.txt" ]
Q: Must replacement crankset have the same number of teeth? I have a Giant Boulder 520 (1996) and I am upgrading it bit by bit. I bought a new Shimano 7-speed 11-28T cassette. Now I want to replace my crankset, but I do not know which size to buy. Does it have to have the same number of teeth as my current one, or does it not matter? I currently have a 42/32/24 crankset; do I have to replace it with a 42/32/24 crankset to avoid having to replace the derailleur or anything else? I am currently thinking of getting a Shimano FCTY501 42/34/24 crankset, compatible with 6/7/8 speed. Will it work with my new cassette, and will I need to adjust my derailleur? A: You are not changing the largest and smallest chainring sizes, so if nothing else has changed, a 42/34/24 crank would be compatible with the rest of your drivetrain. However, I looked up the specification of your bike on Bicycle Blue Book. The original cassette was a 7-speed 13-28 tooth. You may have problems with the 11-tooth small sprocket. If you want to check, you can look up the specs for whatever your rear derailleur is. You can find technical documentation for Shimano products on the Shimano Bicycle Components site. Rear derailleurs typically have the following specifications: minimum rear sprocket size, maximum rear sprocket size, maximum front chainring tooth difference, and total capacity. You'll need to ensure that your derailleur can handle an 11-tooth smallest rear sprocket and a slightly increased total capacity. The total capacity is the difference between the numbers of teeth on the largest front ring and rear sprocket and the smallest front ring and rear sprocket, i.e., a measure of the extremes of chain slack the derailleur has to accommodate. For a 13-28 cassette and a 42/32/24 crank: (42+28) - (24+13) = 33; for an 11-28 cassette: (42+28) - (24+11) = 35.
[ "stackoverflow", "0027547319.txt" ]
Q: Sidekiq threads accessing global variable I have a controller that spins off 6 Sidekiq threads for faster parallel processing of a large file. Before that, however, I want to provide these threads with a few variables that should be available across all threads, because the variables themselves are fairly memory intensive. (The workers only read from them, never write, so the usual concurrency issues don't exist.) In other words my controller looks like this def foo $bar1 = .... $bar2 = ... worker.perform_async()... worker2.perform_async()... end I don't want to put those global vars into the perform methods because serializing them to Redis chokes the entire thing. My issue is that the workers cannot see these variables and die because of a NoMethodError (i.e. trying to call .first on one of them gives that error because the var is nil for the workers). How come? Is there any other way to do this that won't kill my memory? (i.e. I don't want to take up most of the memory with 6x the same large array) A: Sidekiq runs in a separate process, so it doesn't share the same memory as the initiator of the worker. If the data is static, you might want to load it at the start of the Sidekiq process (maybe when you configure the Sidekiq server). If it changes per task, you should model it in a way where you can create a global repository to hold it (if Redis is not good for this, maybe you can try memcached)...
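A minimal sketch of the "load it when you configure the Sidekiq server" approach (the initializer path and the SHARED_DATA constant are hypothetical names, not from the original answer):
# config/initializers/sidekiq.rb
Sidekiq.configure_server do |config|
  # Runs once per Sidekiq process at boot; every worker thread in that
  # process then reads the same in-memory object instead of a serialized copy.
  SHARED_DATA = File.readlines('data/large_file.txt').freeze
end
Workers can then reference SHARED_DATA directly instead of receiving it through perform_async arguments.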
[ "math.stackexchange", "0001400732.txt" ]
Q: How to "floor" in an imaginary quadratic integer ring? In his answer to this question What is a concrete example to demonstrate that $\mathcal{O}_{\mathbb{Q}(\sqrt{-19})}$ is NOT a norm-Euclidean domain? Robert Soupe essentially looks up in a map to try to find remainders for the Euclidean algorithm. But how do you actually do the equivalent of the floor function in this domain? A: It is quite clear that the concern here is using the Euclidean algorithm, or determining if the algorithm can be used. But since this is not explicitly stated in the question, I will not make any promises as to the suitability of my answer to the algorithm. The integers of an imaginary quadratic field form either a rectangular or lozenge lattice. Therefore if a number in the field is not an integer, it falls within a rectangular or lozenge-shaped region that has integers for corners. For example: $$\frac{10}{\frac{3}{2} + \frac{\sqrt{-19}}{2}} = \frac{15}{7} - \frac{5 \sqrt{-19}}{7}.$$ This is not an algebraic integers, but it falls within a lozenge-shaped region that has the integers $$\frac{3}{2} - \frac{5 \sqrt{-19}}{2}, 2 - 3 \sqrt{-19}, \frac{5}{2} - \frac{5 \sqrt{-19}}{2}, 2 - 2 \sqrt{-19}$$ for corners. (Someone doublecheck my math or geometry as the case may be, please). It's hard to say that any of these is closer to some negative infinity than the others, but you can say that exactly one of them is closer to $0$ than the others. I suggest trying that one for your "floor."
[ "stackoverflow", "0019595067.txt" ]
Q: git add, commit and push commands in one? Is there any way to use these three commands in one? git add . git commit -a -m "commit" (do not need commit message either) git push Sometimes I'm changing only one letter, CSS padding or something. Still, I have to write all three commands to push the changes. There are many projects where I'm only one pusher, so this command would be awesome! A: Building off of @Gavin's answer: Making lazygit a function instead of an alias allows you to pass it an argument. I have added the following to my .bashrc (or .bash_profile if Mac): function lazygit() { git add . git commit -a -m "$1" git push } This allows you to provide a commit message, such as lazygit "My commit msg" You could of course beef this up even more by accepting even more arguments, such as which remote place to push to, or which branch. A: I ended up adding an alias to my .gitconfig file: [alias] cmp = "!f() { git add -A && git commit -m \"$@\" && git push; }; f" Usage: git cmp "Long commit message goes here" Adds all files, then uses the comment for the commit message and pushes it up to origin. I think it's a better solution because you have control over what the commit message is. The alias can be also defined from command line, this adds it to your .gitconfig: git config --global alias.cmp '!f() { git add -A && git commit -m "$@" && git push; }; f' A: While I agree with Wayne Werner on his doubts, this is technically an option: git config alias.acp '! git commit -a -m "commit" && git push' Which defines an alias that runs commit and push. Use it as git acp. Please be aware that such "shell" aliases are always run from the root of your git repository. Another option might be to write a post-commit hook that does the push. Oh, by the way, you indeed can pass arguments to shell aliases. If you want to pass a custom commit message, instead use: git config alias.acp '! acp() { git commit -a -m "$1" && git push ; } ; acp' (Of course, now, you will need to give a commit message: git acp "My message goes here!")
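The post-commit hook mentioned above is also only a couple of lines; a sketch (the hook file must be executable, and note that this pushes after every single commit):
#!/bin/sh
# .git/hooks/post-commit -- runs after each successful commit
git push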
[ "stackoverflow", "0038843436.txt" ]
Q: What is wrong? Different results in the server response message I wonder what the mistake in this code is. I want to get a correct result, but right now the server side sends strange results. The link below is the reference: https://firebase.google.com/docs/cloud-messaging/server#choose and the results are below. connect ready host: fcm-xmpp.googleapis.com, and port: 5236 connect ok connect! msg: <stream:stream to="gcm.googleapis.com"version="1.0"xmlns="jabber:client"xmlns:stream="http://etherx.jabber.org/streams"> channelConnected e.getMessage(): BigEndianHeapChannelBuffer(ridx=0, widx=7, cap=7) MessageDumpByte> - length:<7> [0000] 15 03 01 00 02 02 46 ......F messageReceived: In addition, I have posted the simple code that I made. public class client2 { final String host = "fcm-xmpp.googleapis.com"; // final String host = "127.0.0.1"; final int port = 5236; Channel channel = null; public static void main(String[] args) throws Exception { client2 client = new client2(); client.init(); } public void init() { ClientBootstrap bootstrap = new ClientBootstrap( new NioClientSocketChannelFactory( Executors.newCachedThreadPool(), Executors.newCachedThreadPool())); bootstrap.setPipelineFactory(new ChannelPipelineFactory() { public ChannelPipeline getPipeline() throws Exception { return Channels.pipeline(new ClientHandler()); } }); System.out.println("connect ready"); System.out.println("host: " + host + ", and port: " + port); ChannelFuture future = bootstrap.connect(new InetSocketAddress(host, port)); channel = future.getChannel(); System.out.println("connect ok"); } } class ClientHandler extends SimpleChannelUpstreamHandler { private ChannelBuffer firstMessage; private final AtomicLong transferredBytes = new AtomicLong(); public ClientHandler() { } public long getTransferredBytes() { return transferredBytes.get(); } @Override public void channelConnected(ChannelHandlerContext ctx, ChannelStateEvent e) { System.out.println("connect!"); StringBuilder msg = new StringBuilder(); msg.append("<stream:stream to=").append("\"") .append("gcm.googleapis.com").append("\"").append("version=") .append("\"").append("1.0").append("\"").append("xmlns=") .append("\"").append("jabber:client").append("\"") .append("xmlns:stream=").append("\"") .append("http://etherx.jabber.org/streams").append("\"") .append(">"); System.out.println("msg: " + msg.toString()); firstMessage = ChannelBuffers.copiedBuffer(msg.toString(), CharsetUtil.UTF_8); e.getChannel().write(firstMessage); } @Override public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) { System.out.println("e.getMessage(): " + e.getMessage()); transferredBytes.addAndGet(((ChannelBuffer) e.getMessage()) .readableBytes()); ChannelBuffer cb = (ChannelBuffer) e.getMessage(); byte[] message = cb.array(); try { String dump = Utility.MessageDumpByte(message); System.out.println(dump); System.out.println("messageReceived: " + new String(message, "UTF-8")); } catch (Exception e1) { e1.printStackTrace(); } } } I used JDK 1.7 and Netty version 3.x, and it was made with reference to these documents. https://github.com/xose/netty-xmpp http://www.programcreek.com/java-api-examples/index.php?source_dir=gcm_server-master/src/com/grokkingandroid/sampleapp/samples/gcm/ccs/server/CcsClient.java# A: What you are missing is that you must first establish a TLS connection (which is quite clearly explained in the documentation).
The response you're getting is a TLS alert record: 0x15 = alert record type, 0x03 0x01 = TLS version 1.0, 0x00 0x02 = message length (2 bytes), 0x02 = fatal severity, 0x46 = alert description 70, protocol_version (the server expected a TLS handshake but received plaintext). So, go study how to use Netty's SSL handler.
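A minimal sketch of what that looks like in Netty 3, using the stock JDK SSLContext (an outline only; certificate handling and handshake details are left out):
// In getPipeline(): put an SslHandler in front of your own handler.
SSLContext sslContext = SSLContext.getDefault();
SSLEngine engine = sslContext.createSSLEngine("fcm-xmpp.googleapis.com", 5236);
engine.setUseClientMode(true); // we are the connecting side
ChannelPipeline pipeline = Channels.pipeline();
pipeline.addLast("ssl", new SslHandler(engine));
pipeline.addLast("handler", new ClientHandler());
return pipeline;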
[ "stackoverflow", "0050889241.txt" ]
Q: app not deploying to heroku I am having a problem with deploying my app on Heroku. I am stuck and cannot find the answer. I have gone through the Heroku troubleshooting and deployment procedures. I have been trying for 3 days. I have rewritten my env file numerous times, and every time I do, it doesn't work on localhost. I have MONGOD and NODEMON running. I reverted back to the original and it is working. But it still will not deploy on Heroku. Thanks in advance. Here is what my env.js looks like module.exports = { db: process.env.MONGODB_URI || 'mongodb://localhost/sailcast', port: process.env.PORT || 3000 } package.json "scripts": { "start": "node index.js", "postinstall": "gulp default" }, "dependencies": { "bcrypt": "^2.0.1", "bluebird": "^3.5.1", "body-parser": "^1.18.3", "ejs": "^2.6.1", "express": "^4.16.3", "express-ejs-layouts": "^2.4.0", "express-flash": "0.0.2", "express-session": "^1.15.6", "express-sessions": "^1.0.6", "express-static": "^1.2.5", "forever": "^0.15.3", "method-override": "^2.3.10", "mongoose": "^5.1.5", "morgan": "^1.9.0", "nodemon": "^1.17.5", "passport": "^0.4.0", "request": "^2.87.0", "request-promise": "^4.2.2", "yarn": "^1.7.0", "yarn.lock": "0.0.1-security" }, "devDependencies": { "babel-preset-es2015": "^6.24.1", "gulp": "^3.9.1", "gulp-babel": "^7.0.1", "gulp-clean-css": "^3.9.4", "gulp-plumber": "^1.2.0", "gulp-sass": "^4.0.1", "gulp-uglify": "^3.0.0" } Here are the Heroku logs. I can't figure out the error; I just keep getting an application error. 2018-06-16T02:00:26.831441+00:00 app[web.1]: npm ERR! [email protected] start: `node index.js` 2018-06-16T02:00:26.831595+00:00 app[web.1]: npm ERR! Exit status 1 2018-06-16T02:00:26.831982+00:00 app[web.1]: npm ERR! Failed at the [email protected] start script. 2018-06-16T02:00:26.831821+00:00 app[web.1]: npm ERR! 2018-06-16T02:00:26.832134+00:00 app[web.1]: npm ERR! This is probably not a problem with npm. There is likely additional logging output above. 2018-06-16T02:00:26.839530+00:00 app[web.1]: 2018-06-16T02:00:26.839805+00:00 app[web.1]: npm ERR! A complete log of this run can be found in: 2018-06-16T02:00:26.839898+00:00 app[web.1]: npm ERR!
/app/.npm/_logs/2018-06-16T02_00_26_833Z-debug.log 2018-06-16T02:00:26.898270+00:00 heroku[web.1]: Process exited with status 1 2018-06-16T02:00:27.233835+00:00 heroku[web.1]: State changed from starting to crashed 2018-06-16T05:23:18.892587+00:00 heroku[web.1]: State changed from crashed to starting 2018-06-16T05:23:22.696724+00:00 heroku[web.1]: Starting process with command `npm start` 2018-06-16T05:23:24.791728+00:00 app[web.1]: 2018-06-16T05:23:24.791759+00:00 app[web.1]: > [email protected] start /app 2018-06-16T05:23:24.791761+00:00 app[web.1]: > node index.js 2018-06-16T05:23:24.791762+00:00 app[web.1]: 2018-06-16T05:23:25.449215+00:00 app[web.1]: module.js:681 2018-06-16T05:23:25.449238+00:00 app[web.1]: return process.dlopen(module, path._makeLong(filename)); 2018-06-16T05:23:25.449240+00:00 app[web.1]: ^ 2018-06-16T05:23:25.449242+00:00 app[web.1]: 2018-06-16T05:23:25.449244+00:00 app[web.1]: Error: /app/node_modules/bcrypt/lib/binding/bcrypt_lib.node: invalid ELF header 2018-06-16T05:23:25.449247+00:00 app[web.1]: at Module.load (module.js:565:32) 2018-06-16T05:23:25.449245+00:00 app[web.1]: at Object.Module._extensions..node (module.js:681:18) 2018-06-16T05:23:25.449249+00:00 app[web.1]: at tryModuleLoad (module.js:505:12) 2018-06-16T05:23:25.449250+00:00 app[web.1]: at Function.Module._load (module.js:497:3) 2018-06-16T05:23:25.449252+00:00 app[web.1]: at Module.require (module.js:596:17) 2018-06-16T05:23:25.449253+00:00 app[web.1]: at require (internal/module.js:11:18) 2018-06-16T05:23:25.449255+00:00 app[web.1]: at Object.<anonymous> (/app/node_modules/bcrypt/bcrypt.js:6:16) 2018-06-16T05:23:25.449257+00:00 app[web.1]: at Module._compile (module.js:652:30) 2018-06-16T05:23:25.449258+00:00 app[web.1]: at Object.Module._extensions..js (module.js:663:10) 2018-06-16T05:23:25.449260+00:00 app[web.1]: at Module.load (module.js:565:32) 2018-06-16T05:23:25.449261+00:00 app[web.1]: at tryModuleLoad (module.js:505:12) 2018-06-16T05:23:25.449263+00:00 app[web.1]: at Function.Module._load (module.js:497:3) 2018-06-16T05:23:25.449264+00:00 app[web.1]: at Module.require (module.js:596:17) 2018-06-16T05:23:25.449266+00:00 app[web.1]: at require (internal/module.js:11:18) 2018-06-16T05:23:25.449267+00:00 app[web.1]: at Object.<anonymous> (/app/models/user.js:2:16) 2018-06-16T05:23:25.449269+00:00 app[web.1]: at Module._compile (module.js:652:30) 2018-06-16T05:23:25.449270+00:00 app[web.1]: at Object.Module._extensions..js (module.js:663:10) 2018-06-16T05:23:25.449271+00:00 app[web.1]: at Module.load (module.js:565:32) 2018-06-16T05:23:25.449273+00:00 app[web.1]: at tryModuleLoad (module.js:505:12) 2018-06-16T05:23:25.449274+00:00 app[web.1]: at Function.Module._load (module.js:497:3) 2018-06-16T05:23:25.449276+00:00 app[web.1]: at Module.require (module.js:596:17) 2018-06-16T05:23:25.449277+00:00 app[web.1]: at require (internal/module.js:11:18) 2018-06-16T05:23:25.457003+00:00 app[web.1]: npm ERR! code ELIFECYCLE 2018-06-16T05:23:25.457360+00:00 app[web.1]: npm ERR! errno 1 2018-06-16T05:23:25.458445+00:00 app[web.1]: npm ERR! [email protected] start: `node index.js` 2018-06-16T05:23:25.458586+00:00 app[web.1]: npm ERR! Exit status 1 2018-06-16T05:23:25.458827+00:00 app[web.1]: npm ERR! 2018-06-16T05:23:25.458989+00:00 app[web.1]: npm ERR! Failed at the [email protected] start script. 2018-06-16T05:23:25.459138+00:00 app[web.1]: npm ERR! This is probably not a problem with npm. There is likely additional logging output above. 
2018-06-16T05:23:25.493154+00:00 app[web.1]: 2018-06-16T05:23:25.493346+00:00 app[web.1]: npm ERR! A complete log of this run can be found in: 2018-06-16T05:23:25.493475+00:00 app[web.1]: npm ERR! /app/.npm/_logs/2018-06-16T05_23_25_460Z-debug.log 2018-06-16T05:23:25.564550+00:00 heroku[web.1]: Process exited with status 1 2018-06-16T05:23:25.653739+00:00 heroku[web.1]: State changed from starting to crashed 2018-06-16T10:54:29.552763+00:00 heroku[web.1]: State changed from crashed to starting 2018-06-16T10:54:35.778827+00:00 heroku[web.1]: Starting process with command `npm start` 2018-06-16T10:54:39.194359+00:00 app[web.1]: 2018-06-16T10:54:39.194379+00:00 app[web.1]: > [email protected] start /app 2018-06-16T10:54:39.194381+00:00 app[web.1]: > node index.js 2018-06-16T10:54:39.194383+00:00 app[web.1]: 2018-06-16T10:54:40.337060+00:00 app[web.1]: return process.dlopen(module, path._makeLong(filename)); 2018-06-16T10:54:40.337027+00:00 app[web.1]: module.js:681 2018-06-16T10:54:40.337063+00:00 app[web.1]: ^ 2018-06-16T10:54:40.337065+00:00 app[web.1]: 2018-06-16T10:54:40.337067+00:00 app[web.1]: Error: /app/node_modules/bcrypt/lib/binding/bcrypt_lib.node: invalid ELF header 2018-06-16T10:54:40.337069+00:00 app[web.1]: at Object.Module._extensions..node (module.js:681:18) 2018-06-16T10:54:40.337071+00:00 app[web.1]: at Module.load (module.js:565:32) 2018-06-16T10:54:40.337073+00:00 app[web.1]: at tryModuleLoad (module.js:505:12) 2018-06-16T10:54:40.337074+00:00 app[web.1]: at Function.Module._load (module.js:497:3) 2018-06-16T10:54:40.337077+00:00 app[web.1]: at require (internal/module.js:11:18) 2018-06-16T10:54:40.337076+00:00 app[web.1]: at Module.require (module.js:596:17) 2018-06-16T10:54:40.337079+00:00 app[web.1]: at Object.<anonymous> (/app/node_modules/bcrypt/bcrypt.js:6:16) 2018-06-16T10:54:40.337080+00:00 app[web.1]: at Module._compile (module.js:652:30) 2018-06-16T10:54:40.337082+00:00 app[web.1]: at Object.Module._extensions..js (module.js:663:10) 2018-06-16T10:54:40.337084+00:00 app[web.1]: at Module.load (module.js:565:32) 2018-06-16T10:54:40.337087+00:00 app[web.1]: at Function.Module._load (module.js:497:3) 2018-06-16T10:54:40.337085+00:00 app[web.1]: at tryModuleLoad (module.js:505:12) 2018-06-16T10:54:40.337088+00:00 app[web.1]: at Module.require (module.js:596:17) 2018-06-16T10:54:40.337090+00:00 app[web.1]: at require (internal/module.js:11:18) 2018-06-16T10:54:40.337091+00:00 app[web.1]: at Object.<anonymous> (/app/models/user.js:2:16) 2018-06-16T10:54:40.337093+00:00 app[web.1]: at Module._compile (module.js:652:30) 2018-06-16T10:54:40.337094+00:00 app[web.1]: at Object.Module._extensions..js (module.js:663:10) 2018-06-16T10:54:40.337096+00:00 app[web.1]: at Module.load (module.js:565:32) 2018-06-16T10:54:40.337097+00:00 app[web.1]: at tryModuleLoad (module.js:505:12) 2018-06-16T10:54:40.337099+00:00 app[web.1]: at Function.Module._load (module.js:497:3) 2018-06-16T10:54:40.337100+00:00 app[web.1]: at Module.require (module.js:596:17) 2018-06-16T10:54:40.337102+00:00 app[web.1]: at require (internal/module.js:11:18) 2018-06-16T10:54:40.348452+00:00 app[web.1]: npm ERR! code ELIFECYCLE 2018-06-16T10:54:40.350021+00:00 app[web.1]: npm ERR! errno 1 2018-06-16T10:54:40.355451+00:00 app[web.1]: npm ERR! [email protected] start: `node index.js` 2018-06-16T10:54:40.356150+00:00 app[web.1]: npm ERR! Exit status 1 2018-06-16T10:54:40.357519+00:00 app[web.1]: npm ERR! 2018-06-16T10:54:40.358261+00:00 app[web.1]: npm ERR! Failed at the [email protected] start script. 
2018-06-16T10:54:40.359222+00:00 app[web.1]: npm ERR! This is probably not a problem with npm. There is likely additional logging output above. 2018-06-16T10:54:40.384205+00:00 app[web.1]: 2018-06-16T10:54:40.384792+00:00 app[web.1]: npm ERR! A complete log of this run can be found in: 2018-06-16T10:54:40.385145+00:00 app[web.1]: npm ERR! /app/.npm/_logs/2018-06-16T10_54_40_366Z-debug.log 2018-06-16T10:54:40.480586+00:00 heroku[web.1]: State changed from starting to crashed 2018-06-16T10:54:40.461008+00:00 heroku[web.1]: Process exited with status 1 A: That looks fine. One thing that came to my mind: is your file called app.js or index.js? Since it is an error with npm start, maybe you made a typo. Another thing that came to my mind is whether you set up the database environment variable on Heroku. Since I cannot see your whole Heroku logs, that is another possibility.
[ "stackoverflow", "0023478434.txt" ]
Q: ClassNotFoundException while implementing a web client I'm implementing a web client to consume a RESTful web service using a POST call. I'm using the Jersey API for it. import com.sun.jersey.api.client.Client; import com.sun.jersey.api.client.ClientResponse; import com.sun.jersey.api.client.WebResource; public class MyJerseyClient { public void updateGame(String url) { Client client = Client.create(); WebResource webResource = client.resource(url); ClientResponse response = webResource.type("application/json").post(ClientResponse.class); if (response.getStatus() != 200) { System.out.println("o/p >> ERROR!!"); } else { System.out.println(response.getEntity(String.class)); } } } I've imported external jars using Project > Properties > Java Build Path > Add External Jars. But I am still getting an error - can someone please point out what I might be missing here? A: While the required jersey-client JAR file is in your build path, it doesn't appear to be on your client's runtime classpath.
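One way to verify this, assuming you run the client from a shell (the JAR paths are illustrative): put the same JARs on the runtime classpath, e.g. java -cp .:lib/jersey-client.jar:lib/jersey-core.jar MyJerseyClient. In Eclipse, the equivalent is checking that the JARs appear on the launch configuration's Classpath tab.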
[ "stackoverflow", "0062160191.txt" ]
Q: Firebase: Retrieve Daily Active Users via API I am wondering whether there is a way to retrieve the data that is displayed in the Firebase console, such as daily active users, via an API. The use case is that we want to create one unified dashboard and don't want to jump between applications and give everyone access to everything. Did I miss something in the docs? Best, Philipp A: No, there is no public API for the Analytics data that you see in the console. Your only supported option to get a hold of that data is through an export to BigQuery. After that, you will have access to all the raw data to build your own dashboard.
[ "stackoverflow", "0030807420.txt" ]
Q: Looping through array to create mysqli query I am trying to loop through a PHP array to build my query: $args = array("this", "that", "other"); $arg_string = ''; foreach ($args as $key => $value) { $arg_string .= "&& StockMastID LIKE '%{".$value."}%' "; } $arg_string = substr($arg_string, 3); // Remove && from first $args $query = $db->query("SELECT * FROM tblStockDet WHERE `$arg_string`"); So, theoretically, the query should now be: $query = $db->query("SELECT * FROM tblStockDet WHERE StockMastID LIKE '%{this}%' && StockMastID LIKE '%{that}%' && StockMastID LIKE '%{other}%'"); This is not working, as I am getting a Fatal error: Call to a member function fetch_assoc() on a non-object error when executing the query. The query executes fine if I manually type it out instead of using the loop, so there must either be an error in the foreach syntax, or something with the quotes or ticks around $arg_string in the query, but I can't figure out what it is. NOTE: Or, should I abandon this and go for REGEXP instead? If so, how would that look? A: You should really use a prepared statement, but you have a problem here: $query = $db->query("SELECT * FROM tblStockDet WHERE `$arg_string`"); ^ ^ These characters will be part of your query string. The backticks will cause your query to fail completely, the problem that you are having now. Backticks are for identifiers like table and column names. I assume that the { ... } curly braces are intentional, as you seem to have them in your working query as well. If not, you should remove them.
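A sketch of the prepared-statement version with mysqli (it assumes PHP 5.6+ for argument unpacking and the mysqlnd driver for get_result()):
$args = ["this", "that", "other"];
// One "StockMastID LIKE ?" clause per argument, joined with AND.
$where = implode(' AND ', array_fill(0, count($args), "StockMastID LIKE ?"));
$stmt = $db->prepare("SELECT * FROM tblStockDet WHERE $where");
// Wrap each value the same way the original query did: %{value}%
$params = array_map(function ($v) { return '%{' . $v . '}%'; }, $args);
$stmt->bind_param(str_repeat('s', count($params)), ...$params);
$stmt->execute();
$result = $stmt->get_result();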
[ "codereview.stackexchange", "0000015079.txt" ]
Q: Poker Hand Kata Similar to, but distinct from Poker hand identifier. I'm working towards solving this kata. The below code doesn't print the result yet, and it reads hand strings rather than game strings. Take 4: import Data.String import Data.List import Data.Ord data Rank = Two | Three | Four | Five | Six | Seven | Eight | Nine | Ten | Jack | Queen | King | Ace deriving (Eq, Ord, Show, Bounded, Enum) instance Read Rank where readsPrec _ value = let tbl = zip "23456789TJQKA" [Two .. Ace] in case lookup (head value) tbl of Just r -> [(r, tail value)] Nothing -> error $ "Invalid rank: " ++ value data Suit = H | C | D | S deriving (Eq, Ord, Show, Read) data Card = Card { rank :: Rank, suit :: Suit } deriving (Eq, Ord, Show) instance Read Card where readsPrec _ [r, s] = [(Card (read [r]) (read [s]), "")] readsPrec _ value = error $ "Invalid card: " ++ value data Hand = Hand { handRank :: HandRank, cards :: [Card] } deriving (Eq, Show, Ord) instance Read Hand where readsPrec _ value = [(Hand (getHandRank res) res, "")] where res = reverse . sort . map read $ words value data HandRank = HighCard [Rank] | Pair [Rank] | TwoPair [Rank] | ThreeOfAKind [Rank] | Straight [Rank] | Flush [Rank] | FullHouse [Rank] | FourOfAKind [Rank] | StraightFlush [Rank] deriving (Eq, Ord, Show) data GameOutcome = Winner String Hand | Tie deriving (Eq, Ord) instance Show GameOutcome where show o = case o of Winner player hand -> player ++ " wins with " ++ show (handRank hand) Tie -> "Tie" isFlush :: [Card] -> Bool isFlush = (1==) . length . group . map suit isStraight :: [Card] -> Bool isStraight cards = let rs = sort $ map rank cards run = [(head rs) .. (last rs)] in rs == run getHandRank :: [Card] -> HandRank getHandRank cards = let ranks = map rank cards rankGroups = sortByLen $ group ranks relevantRanks = map (!!0) rankGroups handRank = case cards of _ | isFlush cards && isStraight cards -> StraightFlush | has4 rankGroups -> FourOfAKind | has3 rankGroups && has2 rankGroups -> FullHouse | isFlush cards -> Flush | isStraight cards -> Straight | has3 rankGroups -> ThreeOfAKind | countGroupsOf 2 rankGroups == 2 -> TwoPair | has2 rankGroups -> Pair | otherwise -> HighCard in handRank relevantRanks winner :: Hand -> Hand -> GameOutcome winner h1 h2 = case compare h1 h2 of GT -> Winner "Player 1" h1 LT -> Winner "Player 2" h2 EQ -> Tie ------------------------------- -- General Utility Functions -- ------------------------------- hasGroupOf :: Int -> [[a]] -> Bool hasGroupOf n groups = n `elem` map length groups has4 = hasGroupOf 4 has3 = hasGroupOf 3 has2 = hasGroupOf 2 countGroupsOf :: Int -> [[a]] -> Int countGroupsOf n groups = length $ filter (\g -> length g == n) groups sortByLen :: [[a]] -> [[a]] sortByLen = sortBy (flip $ comparing length) Added comparison function Fixed a bug relating to improper sorting in some situations (replaced nub with relevantRanks in getHandRank Ran it through hlint My only experience with Haskell so far is some playing around with Parsec and a few half-read-throughs of WYAS48, so please be obnoxious about style issues. All feedback welcome, but I would particularly like to ask Are there built-ins/better implementations of the "General Utility" functions defined at the bottom? Is there a clearer or more succinct way of writing Read Rank? Is there a clearer or more flexible way of writing getHandRank, with particular emphasis on closely connecting those predicates with the data entry? A: It's generally a good idea to give explicit type signatures for your top level definitions. 
It helps your code's readers (e.g. us) understand your code. hasGroupOf looks perfectly clear (save for the missing type signatures on has4, has3 and has2), but I'd write the other two like this: import Data.Ord (comparing) countGroupsOf :: Int -> [[a]] -> Int countGroupsOf n groups = length $ filter (\g -> length g == n) groups sortByLen :: [[a]] -> [[a]] sortByLen = sortBy (flip $ comparing length) Actually, for that last one, I'd probably want lists of equal length to sort in a predictable order: import Data.Monoid sortByLen :: Ord a => [[a]] -> [[a]] sortByLen = sortBy (mconcat [flip $ comparing length, compare]) (You are expected to know that Ordering is a monoid, and that a -> b is a monoid when b is a monoid. The monoid I'm using is [a] -> [a] -> Ordering.)
[ "stackoverflow", "0063310210.txt" ]
Q: How to run CMake with predetermined settings? In Visual Studio, a project using CMake will have a CMakeSettings.json file that specifies the command line parameters to be used. In IDEs other than Visual Studio, how do I control which parameters CMake is run with? Assuming CMake is run from the project root, I want CMake initialization to be run like this: cmake -B build -DCMAKE_BUILD_TYPE=Debug And I want CMake building to run like this: cmake --build build My project structure looks like this: project/ |- bin/ | |- program (executable) |- build/ | |- ... CMake files ... |- src/ | |- program.c | |- CMakeLists.txt |- CMakeLists.txt project/CMakeLists.txt: cmake_minimum_required(VERSION 3.10) project(Project VERSION 1.0) file(MAKE_DIRECTORY bin) add_subdirectory(src) project/src/CMakeLists.txt: set(TARGET_NAME program) file(GLOB_RECURSE FILES *.c) add_executable(${TARGET_NAME} ${FILES}) file(MAKE_DIRECTORY ${PROJECT_SOURCE_DIR}/bin/${TARGET_NAME}) set_target_properties(${TARGET_NAME} PROPERTIES RUNTIME_OUTPUT_DIRECTORY ${PROJECT_SOURCE_DIR}/bin/${TARGET_NAME}) I understand that developers might want to use a different IDEs other than Visual Studio. Would it be a good idea to have a command line script to run CMake commands properly? My goal is to be able to have a developer download my project and have it build and run with their IDE with minimal setup. A: In IDEs other than Visual Studio, how do I control which parameters CMake is run with? Depends on what IDE you are using. There is no standard way. Typically, there is a settings form. A good solution is to document the command line commands that should be used. And ideally, those commands should be as conventional as possible, such as those that you quoted in the question, so that anyone who can already use CMake is able to compile the program without the help of an IDE and even without reading the documentation. You could write a trivial POSIX shell script to run those commands, which can be convenient and can work as the documentation (as long as you describe its purpose in the readme file).
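For example, a trivial POSIX shell script in that spirit, using exactly the commands from the question:
#!/bin/sh
# build.sh -- configure and build with the project's conventional settings
set -e
cmake -B build -DCMAKE_BUILD_TYPE=Debug
cmake --build build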
[ "stackoverflow", "0022991909.txt" ]
Q: Rails 4 validates uniqueness: {scope: :parent_id} for nested resource I was wondering whether anybody felt kind enough to help me figure out why this isn't working. I have a model, let's call it Task, which belongs to a Project model. I basically want each Task to have a unique name per project (Project1 could have a task called task1 and so could Project2, but each could only have one called task1). This seems to be what the :scope option is for, but it doesn't seem to be working for me. The Task model is a nested resource within Project, and as such I call the create action via project_tasks_path(@project). It works fine creating tasks and assigning them to projects, but the scope of the uniqueness validation is not taking hold. If I create a task task1 in Project1, I can't create one with the same name in Project2. This is my setup: Task.rb class Task < ActiveRecord::Base belongs_to :project validates :name, presence: true, uniqueness: {scope: :project_id} tasks_controller.rb def create @project = Project.find_by(id: params[:project_id]) @task = Task.new(model_params) #print task to stdout puts "@task" ap @task respond_to do |format| if @task.save flash[:notice] = "Successfully created task" format.js else # no flash as form handles errors format.js { render action: 'new' } format.html { render action: 'new' } end end end for some reason, when I output the contents of the newly created task, I get the following #<Task:0x007ff7c7c3b178> { :id => nil, :name => "test", :project_id => nil, :created_at => nil, :updated_at => nil } It seems that because project_id hasn't been set at this point, it's using 'nil' as the value. What's the best way to get around this? Would it just be a custom validator? Edit 1 def model_params params.require(:model).permit(:name, :project_id) end A: Right, having been playing around with this, it seems that the way to make this type of validation work is pretty straightforward. All it requires is that the nested resource be built in relation to its project; this forces the :parent_id to be passed through to the validation as expected. In the case of this toy example, that means that the create action has to look something like: @project = Project.find_by(id: params[:project_id]) @task = @project.tasks.build(model_params) It should be noted that because Rails does not support generating nested resources from the command line, the scaffold-generated controllers handle creation via Model.new(model_params) followed by a save; this doesn't pick up the :parent_id in time for the validation and so will need changing as above (in terms of the parent).
[ "stackoverflow", "0028455963.txt" ]
Q: Bundling and Minification in MVC 6 It looks like Bundling and Minification are no longer built into MVC 6, since there is no more App_Start and Bundle.Config. Is this going to be the case after the final release? I'm guessing Grunt should be used, since that seems to be baked into Visual Studio 2015. UPDATE: It looks like Microsoft has switched to Gulp instead of Grunt in RC1. A: Bundler & Minifier Extension The default ASP.NET Core MVC 6 project template uses a Bundler & Minifier extension. The default template used to use Gulp, which was far more powerful, but it was deemed too complex for newbie developers who wanted something simple. You can read more about the switch away from Gulp and the reasoning here, or read the documentation for the Bundler & Minifier extension here. WebPack, Gulp, Grunt, Broccoli, etc. A much nicer and far more powerful method is to use Gulp or any other task runner (there are others named Grunt, Broccoli, etc. Gulp is apparently nicer to work with and newer than Grunt, but also more popular than Broccoli). You can use the ASP.NET MVC Boilerplate project template to get a project with Gulp built in. The new kid on the block is called WebPack, which according to Google is about as popular as Gulp at the moment. ASP.NET MVC 5 Bundling and Minification and Smidge The old bundling and minification in ASP.NET MVC 5 has been dropped, but there is a project on GitHub to build it for MVC 6 called Smidge. A: Grunt is the recommended approach in ASP.NET 5 applications. There are no plans to build a system like the previous ASP.NET Bundling and Minification (Optimization) system.
[ "ru.stackoverflow", "0000912220.txt" ]
Q: How do I implement a function in C that sets a given number of bits to 0? Can you suggest how to implement, in C, a function that takes a void pointer and a size_t length void zerobyte(void *p, size_t n) The program should work according to the following principle void main() { int p = 3; //00000011 zerobyte(&p, 1);//p should become 2 p = 3; zerobyte(&p, 2);//and here p should become 0 } The problem is that the most I could find was how to zero out just the first bit void zerobyte(void *p, size_t n) { p &= ~0x01; } And I can't figure out how this can be done. How can I implement this function? A: void zerobyte(void *p, size_t n){ n = (1 << n) - 1; // turn n into a mask whose n low bits are set *(int *)p &= ~n; // cast before dereferencing (a void pointer cannot be dereferenced); ~n clears the n low bits }
[ "stackoverflow", "0056021945.txt" ]
Q: How to deduce `std::function` parameters from actual function? Given a class class Foo { public: std::shared_ptr<const Bar> quux(const std::string&, std::uint32_t); } I can declare an std::function that has the same interface: std::function<std::shared_ptr<const Bar>(const std::string&, std::uint32_t)> baz = ... Is there a way of compressing that declaration such that the template arguments to std::function are derived from the declaration of that method, something like: std::function<functype(X::quux)> baz = ... where functype is an imaginary C++ operator similar to decltype. Is there a way to do this / does c++ have such a capability? I do see that the method has a slightly different signature actually as it would also take a reference/pointer to the this object; it would be fine for me to derive such a signature too. A: Yes, you can. Adapting How do I get the argument types of a function pointer in a variadic template class? to your request, we get: template<typename T> struct function_traits; template<typename R, typename C, typename ...Args> struct function_traits<R(C::*)(Args...)> { using type = std::function<R(Args...)>; }; class Bar; class Foo { public: std::shared_ptr<const Bar> quux(const std::string&, std::uint32_t); }; int main() { std::cout << std::is_same< std::function<std::shared_ptr<const Bar>(const std::string&, std::uint32_t)>, function_traits<decltype(&Foo::quux)>::type>::value << std::endl; } To make it work with constant methods you will need another specialization: template<typename R, typename C, typename ...Args> struct function_traits<R(C::*)(Args...) const> { using type = std::function<R(Args...)>; }; But you will get problems with overloaded methods, because in order to resolve overloading you will need to specify the arguments anyway.
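With the trait in place, the declaration the question asks about can be written without spelling out the signature (a sketch; the lambda is one way to bind the member function to an instance):
Foo foo;
function_traits<decltype(&Foo::quux)>::type baz =
    [&foo](const std::string& s, std::uint32_t n) { return foo.quux(s, n); };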
[ "stackoverflow", "0016973533.txt" ]
Q: how to select data based on the joint results of subquery I want to select some data from a table A based on the subquery result from another table B Structure of A Dates NAME VALUE 02/01/2012 CC1 CC_value 02/01/2012 CC2 CC_value 02/02/2012 CC1 CC_value 02/02/2012 CC2 CC_value ...... 03/01/2012 CC8 CC_value ...... Structure of B Dates CC 02/01/2012 CC1 02/02/2012 CC2 ...... 03/01/2012 CC7 Given a date range, I want to first find the corresponding pairs of (Dates, CC) in table B and, based on those pairs, find CC_value in table A. I am trying to write a pair-based condition in MS Access, but it is not allowed. How can I write the SQL? Any idea? Thanks so much. A: This is a basic join query with filtering: select a.value from a join b on a.dates = b.dates and a.name = b.cc where b.dates between DATE1 and DATE2 I don't think you need a subquery at all.
[ "apple.stackexchange", "0000212878.txt" ]
Q: Terminal command that gives the type of computer you're on I am running MATLAB code on two different computers, both with the same username. I would like to distinguish between the two Macs by having the code identify that one system is a desktop and the other a laptop. Is there a Terminal command that will print the type of computer I am using (so I can implement the relevant function)? A: This command should work for you: sysctl hw.model This will return the Model Identifier for your machine. The below is an example for a mid-2012 Retina MacBook Pro: Machine123:~ username$ sysctl hw.model hw.model: MacBookPro10,1 Below are some references on Apple's site that describe Model Identifiers - there doesn't seem to be a single reference: MacBook Pro Model Identifiers MacBook Model Identifiers MacBook Air Model Identifiers iMac Model Identifiers Mac Pro Model Identifiers Mac Mini Model Identifiers
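Since the goal is to branch inside MATLAB, one possible sketch (it assumes laptops report a model starting with "MacBook"; contains requires MATLAB R2016b or later):
[~, model] = system('sysctl -n hw.model'); % e.g. 'MacBookPro10,1'
if contains(model, 'MacBook')
    % laptop-specific code path
else
    % desktop-specific code path
end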
[ "stackoverflow", "0017103941.txt" ]
Q: OpenLayers-2.12 map width variable with Bootstrap I'm using an OpenLayers-2.12 map inside an HTML5 page styled with the CSS framework Bootstrap. Is it possible to have a variable map width, as demonstrated here with OpenLayers version 2.12? If so, could you please give me an example? Thanks in advance. A: A CSS width of 100% (width: 100%;) on the div that OpenLayers is attached to works with OpenLayers 2.12.
[ "stackoverflow", "0015743932.txt" ]
Q: String.prototype.myFunction not returning a string? Why does the below code not return this properly? It should just return 'image', rather than all of the letters in an object, shouldn't it? String.prototype.removeExtension = function(){ return (this.lastIndexOf('.') !== -1) ? this.substr(0, this.lastIndexOf('.')) : this; } 'image.jpg'.removeExtension(); // returns 'image' 'image.png.jpg'.removeExtension(); // returns 'image.jpg' 'image'.removeExtension(); // returns String {0: "i", 1: "m", 2: "a", 3: "g", 4: "e", removeExtension: function} A: this always references an object within the context of a scope (*). You need to invoke .toString(), for instance, to get the pseudo-primitive value out of it. return this.toString(); If you return this just like that, it'll reference the current instance of the String object currently being invoked. (*) The only exception is ES5 strict mode, where this might also reference the undefined value.
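Applying that fix, the whole method becomes (same logic as the question's version, just unwrapped first):
String.prototype.removeExtension = function () {
  var s = this.toString(); // unwrap the String object into a primitive
  var i = s.lastIndexOf('.');
  return i !== -1 ? s.substr(0, i) : s;
};
'image'.removeExtension(); // now returns the primitive 'image'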
[ "stackoverflow", "0002419841.txt" ]
Q: C#: why can I not create a system namespace? I remember a few weeks ago when I reorganized our code and created some namespaces in our project, I got an error and the system did not allow me to create a companyName.projectName.System namespace; I had to change it to companyName.projectName.Systeminfo. I don't know why. I know there is a System namespace, but it is not companyName.projectName.System. I think an A.B.C namespace should be different from an A.A.C namespace. Right? EDIT The error I got is like this: Error 7 The type or namespace name 'Windows' does not exist in the namespace 'MyCompany.SystemSoftware.System' (are you missing an assembly reference?) C:\workspace\SystemSoftware\SystemSoftware\obj\Release\src\startup\App.g.cs 39 39 SystemSoftware A: The code below compiles and runs, so I think you'll need to give us a bit more detail, as there's no reason you can't create a namespace such as companyName.projectName.System as far as I'm aware. namespace ConsoleApplication1 { class Program { static void Main(string[] args) { var x = new ConsoleApplication1.Project.System.Something(); } } } namespace ConsoleApplication1.Project.System { public class Something { } } A: You're experiencing a namespace clash. If you name the last part of your namespace System, then the compiler will have a hard time determining if you're referring to the (Microsoft) System namespace or an inner System namespace at your current level (or even a System class or property or ...). You'll experience the same problem with class names and namespace parts. You can't create a class called System for the same reasons. Unless you feel like specifying full namespaces for all of your instances.
[ "stackoverflow", "0034493353.txt" ]
Q: How can I get the number of records for today in five-minute intervals? Assuming I have a column named creation_timestamp on a table named bank_payments, I would like to break today into five-minute intervals and then query the database for the count in each of those intervals. I'm going to read this manually (i.e. this is not for consumption by an application), so the output format does not matter as long as I can use it to get the five-minute time period and the count of records in that period. Is it possible to do this entirely on the side of the database? A: If you want to group the records in your table into 5-minute intervals, you can try this: SELECT col1, count(col1), creation_timestamp FROM bank_payments WHERE DATE(`creation_timestamp`) = CURDATE() GROUP BY UNIX_TIMESTAMP(creation_timestamp) DIV 300, col1
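A variant that also surfaces each bucket's start time may be easier to read manually (a sketch; col1 above is a placeholder grouping column from the original answer and is dropped here, since the question only asks for counts per interval):
SELECT
  FROM_UNIXTIME(UNIX_TIMESTAMP(creation_timestamp) DIV 300 * 300) AS interval_start,
  COUNT(*) AS records
FROM bank_payments
WHERE DATE(creation_timestamp) = CURDATE()
GROUP BY interval_start;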
[ "stackoverflow", "0058632741.txt" ]
Q: Find the latest entry amongst same number with a Dash? We have a table that stores a list of all the quotes we have sent out. Anytime a customer revises the quotes, the system automatically appends a -1 or -2 based on last used number. As an example Original Quote Number : 24545 Customer asked for a revision, the quote number is now 24545-1, after sending the quote, we now have a revision again and the Quote is 24545-2 and so on. I want to run a SQL query that will show them their Top 20 Quotes and incase of revisions, it should show the latest revisions. Can you please help me? I have already written a Query that would bring me top 20 quotes for the last 10 days. SELECT Top 20 EstimateNumber,CustName,JobDescription,TotalSellPrice,EstimateStatus,EstimateDate,CommissionTableA FROM [Enterprise32].[dbo].[tablename1] where EstimateDate BETWEEN DATEADD(Day, -10, getdate()) AND GETDATE() AND SalesRepCode = $id And TotalSellPrice > '5000' AND EstimateStatus = 'P' Order By TotalSellPrice DESC A: This makes some assumptions, but I think this might work. If not, sample data and expected result will be invaluable: USE Enterprise32; GO WITH CTE AS( SELECT V.EstimateNumber, V.RevisionNumber, TN1.CustName, TN1.JobDescription, TN1.TotalSellPrice, TN1.EstimateStatus, TN1.EstimateDate, TN1.CommissionTableA, ROW_NUMBER() OVER (PARTITION BY V.EstimateNumber ORDER BY V.RevisionNumber DESC) AS RN FROM dbo.TableName1 TN1 CROSS APPLY (VALUES(NULLIF(CHARINDEX('-',TN1.EstimateNumber),0)))CI(I) CROSS APPLY (VALUES(TRY_CONVERT(int,LEFT(TN1.EstimateNumber,ISNULL(CI.I,LEN(TN1.EstimateNumber))-1)),ISNULL(TRY_CONVERT(int,STUFF(TN1.EstimateNumber,1,CI.I,'')),0)))V(EstimateNumber,RevisionNumber) WHERE TN1.EstimateDate BETWEEN DATEADD(Day, -10, getdate()) AND GETDATE() AND TN1.SalesRepCode = $id And TN1.TotalSellPrice > '5000' AND TN1.EstimateStatus = 'P') SELECT TOP (20) EstimateNumber, RevisionNumber, CustName, JobDescription, TotalSellPrice, EstimateStatus, EstimateDate, CommissionTableA FROM CTE WHERE RN = 1;
[ "stackoverflow", "0000392022.txt" ]
Q: What's the best way to send a signal to all members of a process group? I want to kill a whole process tree. What is the best way to do this using any common scripting languages? I am looking for a simple solution. A: You don't say if the tree you want to kill is a single process group. (This is often the case if the tree is the result of forking from a server start or a shell command line.) You can discover process groups using GNU ps as follows: ps x -o "%p %r %y %x %c " If it is a process group you want to kill, just use the kill(1) command, but instead of giving it a process number, give it the negation of the group number. For example, to kill every process in group 5112, use kill -TERM -- -5112. A: Kill all the processes belonging to the same process tree using the Process Group ID (PGID) kill -- -$PGID     Use default signal (TERM = 15) kill -9 -$PGID     Use the signal KILL (9) You can retrieve the PGID from any Process-ID (PID) of the same process tree kill -- -$(ps -o pgid= $PID | grep -o '[0-9]*')   (signal TERM) kill -9 -$(ps -o pgid= $PID | grep -o '[0-9]*')   (signal KILL) Special thanks to tanager and Speakus for contributions on $PID remaining spaces and OSX compatibility. Explanation kill -9 -"$PGID" => Send signal 9 (KILL) to all children and grandchildren... PGID=$(ps opgid= "$PID") => Retrieve the Process-Group-ID from any Process-ID of the tree, not only the Process-Parent-ID. A variation of ps opgid= $PID is ps -o pgid --no-headers $PID where pgid can be replaced by pgrp. But: ps inserts leading spaces when PID is less than five digits and right-aligned, as noticed by tanager. You can use: PGID=$(ps opgid= "$PID" | tr -d ' ') ps from OSX always prints the header, therefore Speakus proposes: PGID="$( ps -o pgid "$PID" | grep [0-9] | tr -d ' ' )" grep -o '[0-9]*' prints successive digits only (does not print spaces or alphabetical headers). Further command lines PGID=$(ps -o pgid= $PID | grep -o '[0-9]*') kill -TERM -"$PGID" # kill -15 kill -INT -"$PGID" # corresponds to [CTRL+C] from the keyboard kill -QUIT -"$PGID" # corresponds to [CTRL+\] from the keyboard kill -CONT -"$PGID" # restarts a stopped process (the above signals do not kill it) sleep 2 # wait for processes to terminate (more time if required) kill -KILL -"$PGID" # kill -9 if it does not intercept signals (or is buggy) Limitation As noticed by davide and Hubert Kario, when kill is invoked by a process belonging to the same tree, kill risks killing itself before the whole tree is terminated. Therefore, be sure to run the command using a process having a different Process-Group-ID. Long story > cat run-many-processes.sh #!/bin/sh echo "ProcessID=$$ begins ($0)" ./child.sh background & ./child.sh foreground echo "ProcessID=$$ ends ($0)" > cat child.sh #!/bin/sh echo "ProcessID=$$ begins ($0)" ./grandchild.sh background & ./grandchild.sh foreground echo "ProcessID=$$ ends ($0)" > cat grandchild.sh #!/bin/sh echo "ProcessID=$$ begins ($0)" sleep 9999 echo "ProcessID=$$ ends ($0)" Run the process tree in background using '&' > ./run-many-processes.sh & ProcessID=28957 begins (./run-many-processes.sh) ProcessID=28959 begins (./child.sh) ProcessID=28958 begins (./child.sh) ProcessID=28960 begins (./grandchild.sh) ProcessID=28961 begins (./grandchild.sh) ProcessID=28962 begins (./grandchild.sh) ProcessID=28963 begins (./grandchild.sh) > PID=$!
# get the Parent Process ID > PGID=$(ps opgid= "$PID") # get the Process Group ID > ps fj PPID PID PGID SID TTY TPGID STAT UID TIME COMMAND 28348 28349 28349 28349 pts/3 28969 Ss 33021 0:00 -bash 28349 28957 28957 28349 pts/3 28969 S 33021 0:00 \_ /bin/sh ./run-many-processes.sh 28957 28958 28957 28349 pts/3 28969 S 33021 0:00 | \_ /bin/sh ./child.sh background 28958 28961 28957 28349 pts/3 28969 S 33021 0:00 | | \_ /bin/sh ./grandchild.sh background 28961 28965 28957 28349 pts/3 28969 S 33021 0:00 | | | \_ sleep 9999 28958 28963 28957 28349 pts/3 28969 S 33021 0:00 | | \_ /bin/sh ./grandchild.sh foreground 28963 28967 28957 28349 pts/3 28969 S 33021 0:00 | | \_ sleep 9999 28957 28959 28957 28349 pts/3 28969 S 33021 0:00 | \_ /bin/sh ./child.sh foreground 28959 28960 28957 28349 pts/3 28969 S 33021 0:00 | \_ /bin/sh ./grandchild.sh background 28960 28964 28957 28349 pts/3 28969 S 33021 0:00 | | \_ sleep 9999 28959 28962 28957 28349 pts/3 28969 S 33021 0:00 | \_ /bin/sh ./grandchild.sh foreground 28962 28966 28957 28349 pts/3 28969 S 33021 0:00 | \_ sleep 9999 28349 28969 28969 28349 pts/3 28969 R+ 33021 0:00 \_ ps fj The command pkill -P $PID does not kill the grandchild: > pkill -P "$PID" ./run-many-processes.sh: line 4: 28958 Terminated ./child.sh background ./run-many-processes.sh: line 4: 28959 Terminated ./child.sh foreground ProcessID=28957 ends (./run-many-processes.sh) [1]+ Done ./run-many-processes.sh > ps fj PPID PID PGID SID TTY TPGID STAT UID TIME COMMAND 28348 28349 28349 28349 pts/3 28987 Ss 33021 0:00 -bash 28349 28987 28987 28349 pts/3 28987 R+ 33021 0:00 \_ ps fj 1 28963 28957 28349 pts/3 28987 S 33021 0:00 /bin/sh ./grandchild.sh foreground 28963 28967 28957 28349 pts/3 28987 S 33021 0:00 \_ sleep 9999 1 28962 28957 28349 pts/3 28987 S 33021 0:00 /bin/sh ./grandchild.sh foreground 28962 28966 28957 28349 pts/3 28987 S 33021 0:00 \_ sleep 9999 1 28961 28957 28349 pts/3 28987 S 33021 0:00 /bin/sh ./grandchild.sh background 28961 28965 28957 28349 pts/3 28987 S 33021 0:00 \_ sleep 9999 1 28960 28957 28349 pts/3 28987 S 33021 0:00 /bin/sh ./grandchild.sh background 28960 28964 28957 28349 pts/3 28987 S 33021 0:00 \_ sleep 9999 The command kill -- -$PGID kills all processes including the grandchild. > kill -- -"$PGID" # default signal is TERM (kill -15) > kill -CONT -"$PGID" # awake stopped processes > kill -KILL -"$PGID" # kill -9 to be sure > ps fj PPID PID PGID SID TTY TPGID STAT UID TIME COMMAND 28348 28349 28349 28349 pts/3 29039 Ss 33021 0:00 -bash 28349 29039 29039 28349 pts/3 29039 R+ 33021 0:00 \_ ps fj Conclusion I notice in this example PID and PGID are equal (28957). This is why I originally thought kill -- -$PID was enough. But in the case the process is spawn within a Makefile the Process ID is different from the Group ID. I think kill -- -$(ps -o pgid= $PID | grep -o [0-9]*) is the best simple trick to kill a whole process tree when called from a different Group ID (another process tree). A: pkill -TERM -P 27888 This will kill all processes that have the parent process ID 27888. Or more robust: CPIDS=$(pgrep -P 27888); (sleep 33 && kill -KILL $CPIDS &); kill -TERM $CPIDS which schedule killing 33 second later and politely ask processes to terminate. See this answer for terminating all descendants.
[ "stackoverflow", "0028303120.txt" ]
Q: Change activity in android program automatically at a given time I want to move from one activity to another activity at a given time. How can I set this waiting time on that page? A: It depends on what you are trying to do. There are several ways you can do this. For example, you can set up a Timer object and, based on whatever criteria, set up a task to call startActivity() to do the switch. Or you can use a Handler and use the postDelayed() method to accomplish the same thing. Simply set up the handler and the time delay, and then in the runnable call startActivity(). However, unless you give a bit more information, it is sort of hard to give you the right solution. Can you give a bit more context on what you are trying to solve? Maybe the references below can help: Timer: http://developer.android.com/reference/java/util/Timer.html Handler: http://developer.android.com/reference/android/os/Handler.html Building on what Lennon just suggested below, you can use an event mechanism, like Otto or EventBus, to trigger the startActivity() based on a particular event being fired. Otto: http://square.github.io/otto/ EventBus: https://github.com/greenrobot/EventBus
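A minimal sketch of the Handler.postDelayed() route (the activity names are placeholders):
// Inside the current Activity: switch to NextActivity after 5 seconds.
new Handler().postDelayed(new Runnable() {
    @Override
    public void run() {
        startActivity(new Intent(CurrentActivity.this, NextActivity.class));
        finish(); // optional: drop the current activity from the back stack
    }
}, 5000);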
[ "stackoverflow", "0036997163.txt" ]
Q: Display Instagram pictures with http://instafeedjs.com/ I try to get my latest image from my Instagram feed using http://instafeedjs.com/. I got the plugin working, as I can see in the console that my images get fetched, BUT they don't get displayed somehow. My code is the following <script type="text/javascript"> var userFeed = new Instafeed({ get: 'user', userId: '12345', accessToken: '123456' }); userFeed.run(); </script> Any clues what I overlooked to make the script display the latest image from my feed? Or any image at all? A: Do you have the <div id="instafeed"></div> placed in your html somewhere? If not, add it!
[ "askubuntu", "0001161146.txt" ]
Q: Unable to remove the packages in ubuntu When I try to remove the VirtualBox installed on Ubuntu as follows, sachin-verma@sachin-verma:~$ sudo apt-get remove virtualbox-5.1 it shows the following errors: Reading package lists... Error! N: Ignoring file '50unattended-upgrades.ucf-dist' in directory '/etc/apt/apt.conf.d/' as it has an invalid filename extension N: Ignoring file 'getdeb.list.bck' in directory '/etc/apt/sources.list.d/' as it has an invalid filename extension E: Encountered a section with no Package: header E: Problem with MergeList /var/lib/apt/lists/in.archive.ubuntu.com_ubuntu_dists_xenial_main_binary-amd64_Packages E: The package lists or status file could not be parsed or opened. Please help me out. PS: This is also happening when I am trying to uninstall other packages. A: Move the problematic files out of the way with sudo mv /etc/apt/apt.conf.d/50unattended-upgrades.ucf-dist ~/ sudo mv /etc/apt/sources.list.d/getdeb.list.bck ~/ Clear the APT lists with sudo rm /var/lib/apt/lists/* /var/lib/apt/lists/partial/* and try again to fetch new package lists with sudo apt-get update Then finally remove the unneeded package(s) with sudo apt-get remove virtualbox-5.1
[ "ru.stackoverflow", "0000221789.txt" ]
Q: How do I include JS and CSS depending on the controller? Is there some way to add another pipeline manifest? A: Use this wherever you want the include: <%= stylesheet_link_tag 'targetstylesheet' if params[:controller] == 'targetcontroller' %> And do the same for JS
[ "math.stackexchange", "0001272165.txt" ]
Q: Why is every finite language decidable? I don't understand why every finite language is decidable. For example, if I have a finite set of strings L over an alphabet E, why is there a Turing machine M that decides L? I understand the definition of decidable; however, why is every finite language decidable? Does there ever exist a finite language that is not decidable? EDIT: Any proof of this that I can go through is also appreciated A: In a finite language there will be a maximal length of any string in the language -- call it $n$. There are finitely many possible strings of at most $n$ symbols. Construct a Turing machine with a state for each of those strings. As long as the state corresponds to a string of less than $n$ symbols, it will move right and switch to a state that encodes the prefix of the input it has seen up until now. When it is in a state that corresponds to a full length-$n$ string, the machine will halt and accept if the string it saw is in the language and it's currently reading a blank square; otherwise it will reject.
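To make "finitely many possible strings" concrete: over an alphabet of size $k>1$, the number of strings of length at most $n$ is $$\sum_{i=0}^{n} k^i = \frac{k^{n+1}-1}{k-1},$$ which is finite, so the construction needs only finitely many states.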
[ "stackoverflow", "0003353582.txt" ]
Q: LaTeX - Description list - Split the item across multiple lines I have the following LaTeX file. Notice how the item label in the description is very long ...foo.... \documentclass{article} \begin{document} \begin{description} \item[foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo foo] bar \item[baz] bang \end{description} \end{document} It produces output where the foo label sits on a single line and runs off the edge of the page. Is there any way to split the description label across multiple lines so it doesn't run off the edge of the page? I want to be able to do this only in the preamble, since I can't edit the actual body of the document. A: Changing the description environment in the preamble, using mdwlist: \usepackage{mdwlist} \renewenvironment{description}% { \begin{basedescript}{ \desclabelstyle{\nextlinelabel} \renewcommand{\makelabel}[1]{% \parbox[b]{\textwidth}{\bfseries##1}% }% \desclabelwidth{2em}}} { \end{basedescript} }
[ "mathoverflow", "0000336326.txt" ]
Q: Irreducible skew polynomials over an algebraically closed field Let $\mathbb{F}$ be a field, and denote with $\mathbb{F}[t,\sigma]$ the skew-polynomial ring, where $\sigma$ is an automorphism of $\mathbb{F}$. Recall that the multiplication of this ring is defined by the rule $t\cdot \alpha =\sigma(\alpha) t$ for every $\alpha\in\mathbb{F}$. If $\mathbb{F}$ is algebraically closed, is it true that the irreducible elements are exactly those of degree one? What about the special case when $\mathbb{F}$ has characteristic $p>0$ and $\sigma$ is the Frobenius automorphism?
A: There is no compelling reason for this property to be true in general, but it holds for quadratic polynomials in characteristic $p$ and the Frobenius automorphism. Let us consider the special case of a monic reciprocal quadratic polynomial $p(t)=t^2-ct+1$, to be factored as $(t-a)(t-b)$. Equating the coefficients, $a+\sigma(b)=c$ and $ab=1$, so $a=b^{-1}$ and $b^{-1}+\sigma(b)=c$. The resulting equation for $b$ is not algebraic in general, and need not have solutions: if ${\Bbb F}={\Bbb C}$ and $\sigma$ is the complex conjugation, $c=0$ would mean that $|b|^2=-1$, which is impossible; thus $p(t)=t^2+1$ is irreducible. On the other hand, if $\sigma(b)=b^p$ is the Frobenius automorphism in characteristic $p$ then the equation is algebraic, has a root by the algebraic closedness assumption, and such a factorization exists. (This argument easily extends to general quadratics.) There is extensive literature on skew-polynomial rings and their generalizations, and I recommend consulting it for further information on this and other basic questions about their properties. For example the monograph of McConnell and Robson "Noncommutative Noetherian Rings" has a chapter devoted to these classes of rings.
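As a quick check of the coefficient comparison above, expanding the skew product with the commutation rule $t\cdot\alpha=\sigma(\alpha)t$ gives
$$(t-a)(t-b) = t^2 - t\,b - a\,t + ab = t^2 - \sigma(b)\,t - a\,t + ab = t^2 - \bigl(a+\sigma(b)\bigr)\,t + ab,$$
so matching against $t^2-ct+1$ yields precisely $a+\sigma(b)=c$ and $ab=1$.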
[ "stackoverflow", "0047913572.txt" ]
Q: PDO insert into one column I really don't know what's wrong here; every time I try to insert into a single column it fails. The database connects fine, the table name is stad, and it contains two columns: id (AI, PRIMARY) and name (unique).
<?php
@$submit = $_POST['submit'] ;
@$stadName = $_POST['stadName'];
if(isset($submit)){
    if(!$stadName){
        echo "Error" ;
    }else{
        $sql = 'INSERT INTO stad name VALUES :name';
        $stmt = $pdo->prepare($sql);
        $stmt->execute(['name' => $stadName]);
        echo "DONE" ;
    }
}
?>
connection file
<?php
$serv = '127.0.0.1';
$user = 'root';
$pass = 'root';
$dbname = 'akfoot';
// db base connection
$pdo = new PDO('mysql:dbname='.$dbname.';host='.$serv.'; charset=utf8',$user,$pass);
?>
Database:
A: You're missing brackets around your column list and your value list (even if it's just one column, those brackets are required):
$sql = 'INSERT INTO stad (name) VALUES (:name)';
$stmt = $pdo->prepare($sql);
$stmt->execute(['name' => $stadName]);
[ "stackoverflow", "0028121153.txt" ]
Q: Handling Multiple AJAX calls I have the following code using AJAX to pull JSON data from a few different URLs, and I want to store the results in separate arrays. Here is a sample:
var var1 = $.ajax({
    url: FEED1,
    dataType: 'jsonp',
    crossDomain: true,
    success: function (data) {
        if (data) {
            ItemAry1 = data.posts;
        }
    }
});
var var2 = $.ajax({
    url: FEED2,
    dataType: 'jsonp',
    crossDomain: true,
    success: function (data) {
        if (data) {
            ItemAry2 = data.posts;
        }
    }
});
In my code I have several of these. The issue is that each array ends up with exactly the same data, even though FEED1 and FEED2 are URLs to different data.
A: make a function!
var serviceURL = "http://www.example.com";
var itemArrays = {};
function getFeed(category_id){
    return $.ajax({
        url: serviceURL,
        data: { "json":"get_category_post", "category_id":category_id, "count": 25 },
        dataType: 'jsonp',
        crossDomain: true,
        success: function (data) {
            if (data) {
                // key by the requested category id, not the shared loop variable,
                // otherwise every late-arriving callback writes to the same slot
                itemArrays[category_id] = data.posts;
            }
        }
    });
}
var feedPromises = [];
for (var i = 0, count = 9; i < count; i++){
    // start the process to get all the feeds and save their ajax promises into an array
    feedPromises.push(getFeed(i));
}
// wait until all the feeds return data to continue
$.when.apply(this, feedPromises)
    .done(function(){
        // when all the data calls are successful you can access the data via
        // itemArrays[0], itemArrays[1], ...
        console.log(itemArrays);
    });
[ "stackoverflow", "0016942800.txt" ]
Q: ActionResultFilter page not found error I am developing a web site using .NET MVC3. I have a controller action where I send a file download to the client.
[DeleteFileAfterDownloadFilter()]
public FileResult DownloadVersion(int VersionID)
{
    // stuff to get the tempZipFile
    return File(tempZipFile, "zip", "file.zip");
}
What I'd like to do is delete the file after it has been downloaded. I figured that I could use an ActionFilterAttribute, so I wrote the class below:
public class DeleteFileAfterDownloadFilter : ActionFilterAttribute
{
    public override void OnResultExecuted(ResultExecutedContext filterContext)
    {
        string fileName = ((FileStreamResult)filterContext.Result).FileDownloadName;
        File.Delete(fileName);
        base.OnResultExecuted(filterContext);
    }
}
I guess I have two problems here. The first is that when I run this, it gives me the error that the page ".../Company/DownloadVersion?versionID=2057" could not be found. What is the way to make it work? The second, larger problem is that, as you might have realized, "((FileStreamResult)filterContext.Result).FileDownloadName" is probably not the file path that I want to delete. It should be the "tempZipFile" local variable in the controller, but I don't know how to pass that value to this event.
A: I gave your filter a spin and (after corrections) it produces a nasty COM error. This is because of the async nature of the operation: OnResultExecuted is your last chance to do something, but it happens when the response (with the filename but not the file itself) has been sent back to the client. When the client (browser) then starts the download, a Not Found error or worse is produced. In other words, your approach looks nice but it won't work. Some rough ideas for a solution: make sure your server side files have unique names and a reliable timestamp, and use a background process to periodically clean them up, or clean up old files every time you prepare a new one. I changed your filter like this:
public override void OnResultExecuted(ResultExecutedContext filterContext)
{
    base.OnResultExecuted(filterContext);
    var r = filterContext.Result as FilePathResult; // not FileContent
    string fileName = filterContext.RequestContext.HttpContext.Server.MapPath(r.FileName);
    System.IO.File.Delete(fileName);
}
Update : Thanks to this SO answer, the following should work:
public override void OnResultExecuted(ResultExecutedContext filterContext)
{
    base.OnResultExecuted(filterContext);
    var r = filterContext.Result as FilePathResult; // not FileContent
    string fileName = filterContext.RequestContext.HttpContext.Server.MapPath(r.FileName);
    filterContext.HttpContext.Response.Flush();
    filterContext.HttpContext.Response.End();
    System.IO.File.Delete(fileName);
}
[ "es.stackoverflow", "0000116385.txt" ]
Q: Why does it show the data as null if I have records for more than one user? Once again I need the community's help. I have the following table in my database, with the schedule records of a user (the RUTs are fictitious). I have three schedules registered with the same date and different times for one user; if I query that user's availability for a given day it works correctly, which I do with the following query.
select * from (
    SELECT MIN(horario.hrs_ini) AS hrs_ini, MAX(horario.hrs_ter) AS hrs_ter, id_hr, fecha_registro
    FROM horario
    INNER JOIN usuarios ON horario.rut_usu = usuarios.rut_usu
    WHERE usuarios.rut_usu= '17.811.942-4'
    AND horario.lunes='ATTE. ESTUDIANTES'
    AND fecha_registro = (SELECT MAX(fecha_registro) FROM horario)
    ORDER BY id_hr DESC
    LIMIT 14
) tmp
order by tmp.id_hr asc
The result of this is the following. Now, if I register a schedule for another user, everything is fine. But if I run the same SQL query above for the user I registered first, that is, the one with rut_usu = '17.811.942-4', the data comes back null; yet if I use the new user's RUT, e.g. rut_usu = '11.111.111-1', it displays correctly (for that user). Why does this happen?
A: Try it like this (the subquery was taking the latest fecha_registro across all users, so once the second user registered, the first user no longer had rows on that newest date; restricting the subquery to the same user fixes that):
select * from (
    SELECT MIN(horario.hrs_ini) AS hrs_ini, MAX(horario.hrs_ter) AS hrs_ter, id_hr, fecha_registro
    FROM horario
    INNER JOIN usuarios ON horario.rut_usu = usuarios.rut_usu
    WHERE usuarios.rut_usu= '17.811.942-4'
    AND horario.lunes='ATTE. ESTUDIANTES'
    AND fecha_registro = (SELECT MAX(fecha_registro) FROM horario where rut_usu = '17.811.942-4' )
    ORDER BY id_hr DESC
    LIMIT 14
) tmp
order by tmp.id_hr asc
[ "magento.stackexchange", "0000201656.txt" ]
Q: Magento 2 JS merge undefined variable issue I have defined an admin form field dependency in my UI component form
<field name="redirect_in">
    <argument name="data" xsi:type="array">
        <item name="options" xsi:type="object">Namespace\Modulename\Model\Config\Source\Options</item>
        <item name="config" xsi:type="array">
            <item name="dataType" xsi:type="string">int</item>
            <item name="label" xsi:type="string" translate="true">Store Info</item>
            <item name="component" xsi:type="string">Namespace_Modulename/js/form/element/options</item>
            <item name="formElement" xsi:type="string">select</item>
            <item name="source" xsi:type="string">modulename</item>
            <item name="dataScope" xsi:type="string">redirect_in</item>
            <item name="default" xsi:type="string">0</item>
            <item name="validation" xsi:type="array">
                <item name="required-entry" xsi:type="boolean">true</item>
            </item>
        </item>
    </argument>
</field>
<field name="storeviews">
    <argument name="data" xsi:type="array">
        <item name="options" xsi:type="object">Magento\Store\Ui\Component\Listing\Column\Store\Options</item>
        <item name="config" xsi:type="array">
            <item name="dataType" xsi:type="string">int</item>
            <item name="label" xsi:type="string" translate="true">Store View</item>
            <item name="formElement" xsi:type="string">select</item>
            <item name="source" xsi:type="string">modulename</item>
            <item name="dataScope" xsi:type="string">redirect_store_id</item>
            <item name="visibleValue" xsi:type="string">0</item>
            <item name="visible" xsi:type="boolean">true</item>
            <item name="validation" xsi:type="array">
                <item name="required-entry" xsi:type="boolean">true</item>
            </item>
        </item>
    </argument>
</field>
<field name="external_url">
    <argument name="data" xsi:type="array">
        <item name="config" xsi:type="array">
            <item name="dataType" xsi:type="string">text</item>
            <item name="label" xsi:type="string" translate="true">External Link</item>
            <item name="formElement" xsi:type="string">input</item>
            <item name="source" xsi:type="string">modulename</item>
            <item name="dataScope" xsi:type="string">redirect_external_url</item>
            <item name="visibleValue" xsi:type="string">1</item>
            <item name="visible" xsi:type="boolean">false</item>
            <item name="validation" xsi:type="array">
                <item name="required-entry" xsi:type="boolean">true</item>
                <item name="validate-url" xsi:type="boolean">true</item>
            </item>
        </item>
    </argument>
</field>
The dependent JS to get and retrieve the value from Namespace\Modulename\Model\Config\Source\Options:
define([
    'underscore',
    'uiRegistry',
    'Magento_Ui/js/form/element/select',
    'Magento_Ui/js/modal/modal'
], function (_, uiRegistry, select, modal) {
    'use strict';
    return select.extend({
        initialize: function () {
            this._super();
            var storeviewField = uiRegistry.get('index = storeviews');
            var externalUrl = uiRegistry.get('index = external_url');
            if(this.value() == 1){
                externalUrl.show();
                storeviewField.hide();
            } else {
                storeviewField.show();
                externalUrl.hide();
            }
            return this;
        },
    });
});
This works great until I enable JS merging in the Magento settings; then I get the following error: storeviewField undefined. Why does it cause an issue when I merge the JS files?
A: I had the same issue; I fixed it by moving my column above.
Try moving your storeviews column above redirect_in: <field name="storeviews"> <argument name="data" xsi:type="array"> <item name="options" xsi:type="object">Magento\Store\Ui\Component\Listing\Column\Store\Options</item> <item name="config" xsi:type="array"> <item name="dataType" xsi:type="string">int</item> <item name="label" xsi:type="string" translate="true">Store View</item> <item name="formElement" xsi:type="string">select</item> <item name="source" xsi:type="string">modulename</item> <item name="dataScope" xsi:type="string">redirect_store_id</item> <item name="visibleValue" xsi:type="string">0</item> <item name="visible" xsi:type="boolean">true</item> <item name="validation" xsi:type="array"> <item name="required-entry" xsi:type="boolean">true</item> </item> </item> </argument> </field> <field name="external_url"> <argument name="data" xsi:type="array"> <item name="config" xsi:type="array"> <item name="dataType" xsi:type="string">text</item> <item name="label" xsi:type="string" translate="true">External Link</item> <item name="formElement" xsi:type="string">input</item> <item name="source" xsi:type="string">modulename</item> <item name="dataScope" xsi:type="string">redirect_external_url</item> <item name="visibleValue" xsi:type="string">1</item> <item name="visible" xsi:type="boolean">false</item> <item name="validation" xsi:type="array"> <item name="required-entry" xsi:type="boolean">true</item> <item name="validate-url" xsi:type="boolean">true</item> </item> </item> </argument> </field> <field name="redirect_in"> <argument name="data" xsi:type="array"> <item name="options" xsi:type="object">Namespace\Modulename\Model\Config\Source\Options</item> <item name="config" xsi:type="array"> <item name="dataType" xsi:type="string">int</item> <item name="label" xsi:type="string" translate="true">Store Info</item> <item name="component" xsi:type="string">Namespace_Modulename/js/form/element/options</item> <item name="formElement" xsi:type="string">select</item> <item name="source" xsi:type="string">modulename</item> <item name="dataScope" xsi:type="string">redirect_in</item> <item name="default" xsi:type="string">0</item> <item name="validation" xsi:type="array"> <item name="required-entry" xsi:type="boolean">true</item> </item> </item> </argument> </field>
[ "stackoverflow", "0028239656.txt" ]
Q: Optimizing my program by accessing data locally instead of a remote database in C# I have a database with 4 tables filled with millions of rows. I have my program run on several computers computing data and then returning it to the database. The huge bottleneck in my program design is that for each calculation, it has to download the data and then perform calculations on it and then save the results to the database. When I had the data on the local network it performed with crazy speed so I realize that the resources to download the data from a remote server is the problem. What are some ways I can save data from the remote database either before or after my code runs so I can make my program more efficient. These calculations are done once and aren't needed again and I have 24 computers running this same program. static void Main(string[] args) { try { List<StockData> stockData = new List<StockData>(); List<StockMarketCompare> stockCompareData = new List<StockMarketCompare>(); List<StockData> sandpInfo = new List<StockData>(); List<StockData> sandpDateInfo = new List<StockData>(); List<StockData> globalList = new List<StockData>(); List<StockData> amexList = new List<StockData>(); List<StockData> nasdaqList = new List<StockData>(); List<StockData> nyseList = new List<StockData>(); List<DateTime> completedDates = new List<DateTime>(); SymbolInfo symbolClass = new SymbolInfo(); bool isGoodToGo = false; string symbol, market; int activeSymbolsCount = 0; int rowCount = 0, completedRowCount = 0; DateTime date = new DateTime(); DateTime searchDate = new DateTime(); // get the data here using (StockRatingsTableAdapter stockRatingsAdapter = new StockRatingsTableAdapter()) using (OoplesDataSet.StockRatingsDataTable stockRatingsTable = new OoplesDataSet.StockRatingsDataTable()) using (SymbolsTableAdapter symbolAdapter = new SymbolsTableAdapter()) using (OoplesDataSet.SymbolsDataTable symbolTable = new OoplesDataSet.SymbolsDataTable()) using (DailyAmexDataTableAdapter dailyAmexAdapter = new DailyAmexDataTableAdapter()) using (OoplesDataSet.DailyAmexDataDataTable dailyAmexTable = new OoplesDataSet.DailyAmexDataDataTable()) using (OoplesDataSet.OldStockRatingsDataTable historicalRatingsTable = new OoplesDataSet.OldStockRatingsDataTable()) using (OldStockRatingsTableAdapter historicalRatingsAdapter = new OldStockRatingsTableAdapter()) using (OoplesDataSet.OldStockRatingsDataTable historicalRatingSymbolTable = new OoplesDataSet.OldStockRatingsDataTable()) using (OldStockRatingsTableAdapter historicalRatingSymbolAdapter = new OldStockRatingsTableAdapter()) using (OoplesDataSet.DailyGlobalDataDataTable sandp500Table = new OoplesDataSet.DailyGlobalDataDataTable()) using (OoplesDataSet.CurrentSymbolsDataTable currentSymbolTable = new OoplesDataSet.CurrentSymbolsDataTable()) using (CurrentSymbolsTableAdapter currentSymbolAdapter = new CurrentSymbolsTableAdapter()) { // fill the s&p500 info first dailyGlobalAdapter.ClearBeforeFill = true; dailyGlobalAdapter.FillBySymbol(sandp500Table, Calculations.sp500); var sandpQuery = from c in sandp500Table select new StockData { Close = c.Close, Date = c.Date, High = c.High, Low = c.Low, Volume = c.Volume }; sandpInfo = sandpQuery.AsParallel().ToList(); // set the settings for the historical ratings adapter historicalRatingsAdapter.ClearBeforeFill = true; // fill the stock ratings info stockRatingsAdapter.Fill(stockRatingsTable); // get all symbols in the stock ratings table var symbolsAmountQuery = from c in stockRatingsTable select new SymbolMarket { Symbol = 
c.Symbol, Market = c.Market }; List<SymbolMarket> ratingSymbols = symbolsAmountQuery.AsParallel().ToList(); if (ratingSymbols != null) { activeSymbolsCount = ratingSymbols.AsParallel().Count(); } for (int i = 0; i < activeSymbolsCount; i++) { symbol = ratingSymbols.AsParallel().ElementAtOrDefault(i).Symbol; market = ratingSymbols.AsParallel().ElementAtOrDefault(i).Market; dailyAmexAdapter.FillBySymbol(dailyAmexTable, symbol); historicalRatingSymbolAdapter.FillBySymbolMarket(historicalRatingSymbolTable, market, symbol); if (dailyAmexTable != null) { var amexFillQuery = from c in dailyAmexTable select new StockData { Close = c.Close, Date = c.Date, High = c.High, Low = c.Low, Volume = c.Volume }; amexList = amexFillQuery.AsParallel().ToList(); rowCount = amexList.AsParallel().Count(); } if (historicalRatingSymbolTable != null) { completedRowCount = historicalRatingSymbolTable.AsParallel().Count(); completedDates = historicalRatingSymbolTable.AsParallel().Select(d => d.Date).ToList(); } currentSymbolAdapter.Fill(currentSymbolTable); var currentSymbolQuery = from c in currentSymbolTable where c.Symbol == symbol && c.Market == market select c; List<OoplesDataSet.CurrentSymbolsRow> currentSymbolRow = currentSymbolQuery.AsParallel().ToList(); // if the rows don't match up and if no other computer is working on the same symbol if (rowCount - 30 != completedRowCount && currentSymbolRow.Count == 0) { // update the table to let the other computers know that we are working on this symbol var computerQuery = from c in currentSymbolTable where c.ComputerName == Environment.MachineName select c; List<OoplesDataSet.CurrentSymbolsRow> currentComputerRow = computerQuery.AsParallel().ToList(); if (currentComputerRow.Count > 0) { // update currentComputerRow.AsParallel().ElementAtOrDefault(0).Symbol = symbol; currentComputerRow.AsParallel().ElementAtOrDefault(0).Market = market; OoplesDataSet.CurrentSymbolsDataTable tempCurrentTable = new OoplesDataSet.CurrentSymbolsDataTable(); tempCurrentTable = (OoplesDataSet.CurrentSymbolsDataTable)currentSymbolTable.GetChanges(); if (tempCurrentTable != null) { currentSymbolAdapter.Adapter.UpdateCommand.UpdatedRowSource = System.Data.UpdateRowSource.None; currentSymbolAdapter.Update(tempCurrentTable); tempCurrentTable.AcceptChanges(); tempCurrentTable.Dispose(); Console.WriteLine(Environment.MachineName + " has claimed dominion over " + symbol + " in the " + market + " market!"); } } else { // insert currentSymbolAdapter.Insert(symbol, market, Environment.MachineName); Console.WriteLine(Environment.MachineName + " has claimed dominion over " + symbol + " in the " + market + " market!"); } Parallel.For(0, rowCount - 30, new ParallelOptions { MaxDegreeOfParallelism = Environment.ProcessorCount }, j => { if (amexList.AsParallel().Count() > 0) { date = amexList.AsParallel().ElementAtOrDefault(j).Date; searchDate = date.Subtract(TimeSpan.FromDays(60)); if (completedDates.Contains(date) == false) { var amexQuery = from c in sandpInfo where c.Date >= searchDate && c.Date <= date join d in amexList on c.Date equals d.Date select new StockMarketCompare { stockClose = d.Close, marketClose = c.Close }; var amexStockDataQuery = from c in amexList where c.Date >= searchDate && c.Date <= date select new StockData { Close = c.Close, High = c.High, Low = c.Low, Volume = c.Volume, Date = c.Date }; stockCompareData = amexQuery.AsParallel().ToList(); stockData = amexStockDataQuery.AsParallel().ToList(); isGoodToGo = true; } else { isGoodToGo = false; } } if (completedDates.Contains(date) 
== false) {
                        var sandpDateQuery = from c in sandpInfo
                                             where c.Date >= searchDate && c.Date <= date
                                             select c;
                        sandpDateInfo = sandpDateQuery.AsParallel().ToList();
                        symbolClass = new SymbolInfo(symbol, market);
                        isGoodToGo = true;
                    } else {
                        isGoodToGo = false;
                    }
                    if (isGoodToGo)
                    {
                        sendMessage(sandpInfo, date, symbolClass, stockData, stockCompareData);
                    }
                });
            }
        }
    }
}
catch (Exception ex)
{
    Console.WriteLine(ex.Message);
    Console.WriteLine(ex.StackTrace);
}
}
A: What you seem to be doing is overkill and will not get you far. At first glance I spot a couple of lines that I suspect of the N+1 syndrome. The massive use of AsParallel to create a list or even count elements won't bring any benefits either. What worries me most, however: you're talking about 4 tables, but I count 13 adapters? It is almost obvious that you are doing all the work client side. Instead of blindly filling datasets with full table contents and then filtering the result, craft queries for the data you need without omitting a WHERE clause. Right now all your [24 computers] are plowing through the same mess. And as mentioned in the comments above, you'll be amazed how much you can do with T-SQL. Preprocess that data server-side; merge, join & filter and perhaps aggregate the results into a 5th (temp) table. Let your other computers query against those results with partitioning in mind so they each take about 1/24th of the work. That is not using AsParallel but being parallel. Way more efficient, less troublesome and way clearer. Bottom line: redesign :-)
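To make the "WHERE clause plus partitioning" advice concrete, here is a hedged C# sketch. The table and column names echo the question, but the checksum-based partitioning scheme, connectionString and workerId are illustrative assumptions, not part of the original answer:
using System;
using System.Data.SqlClient;

// Each of the 24 machines pulls only its own pre-filtered slice of rows,
// instead of downloading whole tables and filtering with LINQ client-side.
void LoadMySlice(string connectionString, int workerId, DateTime searchDate)
{
    const string sql =
        @"SELECT Symbol, Market, [Date], [Close], High, Low, Volume
          FROM   DailyAmexData
          WHERE  [Date] >= @searchDate
            AND  ABS(CHECKSUM(Symbol)) % 24 = @workerId"; // crude 1/24th partition
    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand(sql, conn))
    {
        cmd.Parameters.AddWithValue("@searchDate", searchDate);
        cmd.Parameters.AddWithValue("@workerId", workerId); // 0..23, unique per machine
        conn.Open();
        using (var reader = cmd.ExecuteReader())
        {
            while (reader.Read())
            {
                // materialize StockData items here and run the calculations
            }
        }
    }
}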
[ "stackoverflow", "0012101301.txt" ]
Q: simple integration
constant <- function(x){1}
integrate(constant,lower=1,upper=10)
This is probably more than ridiculously simple, but is there any reason why this isn't working? Is there a different way I should be writing the function for constants?
A: You can use the Vectorize function to convert a non-vectorized function into the sort of function that integrate requires:
> constant <- function(x){1}
> Vconst <- Vectorize(constant)
> integrate(Vconst,lower=1,upper=10)
9 with absolute error < 1e-13
[ "stackoverflow", "0059334046.txt" ]
Q: Is App Bundle enough or are APK splits necessary to reduce APK size? I'm using the OpenCV library v4.1.2 to implement the GrabCut algorithm, and I use only the following:
import org.opencv.android.Utils;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.core.Rect;
import org.opencv.core.Size;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;
After generating the APK, it comes out with a huge size of 70 MB. Gradle:
defaultConfig {
    applicationId "abc.app"
    minSdkVersion 21
    targetSdkVersion 29
    versionCode 1
    versionName "1.0"
    multiDexEnabled true
    ndk.abiFilters 'armeabi-v7a', 'arm64-v8a', 'x86', 'x86_64'
    testInstrumentationRunner "androidx.test.runner.AndroidJUnitRunner"
}
My question: is an Android App Bundle enough to reduce the APK size, or should I also split it by architecture?
A: When publishing an Android App Bundle, Play will do the split by ABI (and DPI and language) for you. This is one of the major benefits of the Android App Bundle: you only have to publish a single artifact with a single versionCode, and Play can manage all the best practices around reducing the size of your app on your behalf.
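For reference, a sketch of the relevant build.gradle section; these bundle split options are part of the stock Android Gradle plugin and are enabled by default, so with an .aab you typically don't need explicit APK splits at all:
android {
    // Play uses these dimensions to generate slimmed-down APKs per device
    // when you upload an Android App Bundle (.aab).
    bundle {
        abi      { enableSplit = true }  // one APK per CPU architecture
        density  { enableSplit = true }  // one APK per screen density
        language { enableSplit = true }  // language resources on demand
    }
}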
[ "magento.stackexchange", "0000042399.txt" ]
Q: Attribute options added programmatically are not available for use immediately I'm creating Magento attribute options via a script, but I then need to be able to get the new ID and use it straight away in the same script. At the moment it's not pulling the ID through: if I kill the script and restart it, it picks up the created option and returns the ID, but not as part of the same script. Here is the code I am using:
$attr = Mage::getModel('catalog/product')->getResource()->getAttribute($key);
if ($attr->usesSource()) {
    $vattr_id = $attr->getSource()->getOptionId($value);
}else{
    echo "No Source";
    $vattr_id = false;
}
if($vattr_id){
    return $vattr_id;
}else{
    $attr_model = Mage::getModel('catalog/resource_eav_attribute');
    $attr = $attr_model->loadByCode('catalog_product', $key);
    $attr_id = $attr->getAttributeId();
    $option['attribute_id'] = $attr_id;
    $option['value'][$value][0] = $value;
    $option['value'][$value][1] = $value;
    $setup = new Mage_Eav_Model_Entity_Setup('core_setup');
    $setup->addAttributeOption($option);
    $attr = Mage::getModel('catalog/product')->getResource()->getAttribute($key);
    if ($attr->usesSource()) {
        $vattr_id = $attr->getSource()->getOptionId($value);
        echo "AttrID: $vattr_id";
    }
}
Running this (with the required Mage::app() etc.) creates the option; you can see it in the Magento backend, but $vattr_id is NULL. If I reload the script, it finds the attribute option in that first block as it should. I guess it's something to do with how Magento is caching the models, but I'm not sure where I need to look to clear these? Thanks!
A: I am using a slightly different approach to save the attribute option value:
$arg_value = 'your option label';
$attribute = Mage::getModel('eav/config')->getAttribute('catalog_product', 'your attribute code');
$flag=0;
foreach ( $attribute->getSource()->getAllOptions(true, true) as $option ) {
    if($arg_value == $option['label']) {
        unset($attribute);
        $flag=1;
        return $option['value'] ;
    }
}
if($flag==0){
    $attribute_model = Mage::getModel('eav/entity_attribute');
    $attribute_options_model= Mage::getModel('eav/entity_attribute_source_table') ;
    $attribute_code = $attribute_model->getIdByCode('catalog_product', $arg_attribute);
    $attribute = $attribute_model->load($attribute_code);
    $attribute_table = $attribute_options_model->setAttribute($attribute);
    $options = $attribute_options_model->getAllOptions(false);
    $value['option'] = array($arg_value,$arg_value);
    $result = array('value' => $value);
    $attribute->setData('option',$result);
    $attribute->save();
    $attribute = Mage::getModel('eav/config')->getAttribute('catalog_product', $arg_attribute);
    foreach ( $attribute->getSource()->getAllOptions(true, true) as $option ) {
        if($arg_value == $option['label']) {
            unset($attribute);
            return $option['value'] ;
        }
    }
}
This will create a new option value if it does not exist and return the id of the option.
[ "stackoverflow", "0004156872.txt" ]
Q: What is the fastest way in python to build a c array from a list of tuples of floats? The context: my Python code passes arrays of 2D vertices to OpenGL. I tested 2 approaches, one with ctypes, the other with struct, the latter being more than twice as fast.
from random import random
points = [(random(), random()) for _ in xrange(1000)]

from ctypes import c_float
def array_ctypes(points):
    n = len(points)
    return n, (c_float*(2*n))(*[u for point in points for u in point])

from struct import pack
def array_struct(points):
    n = len(points)
    return n, pack("f"*2*n, *[u for point in points for u in point])
Any other alternative? Any hint on how to accelerate such code (and yes, this is one bottleneck of my code)?
A: You can pass numpy arrays to PyOpenGL without incurring any overhead. (The data attribute of the numpy array is a buffer that points to the underlying C data structure that contains the same information as the array you're building.)
import numpy as np
def array_numpy(points):
    n = len(points)
    return n, np.array(points, dtype=np.float32)
On my computer, this is about 40% faster than the struct-based approach.
A: You could try Cython. For me, this gives:
function        usec per loop:
                Python  Cython
array_ctypes    1370    1220
array_struct     384     249
array_numpy      336     339
So Numpy only gives a 15% benefit on my hardware (an old laptop running Windows XP), whereas Cython gives about 35% (without any extra dependency in your distributed code). If you can loosen your requirement that each point is a tuple of floats, and simply make 'points' a flattened list of floats:
def array_struct_flat(points):
    n = len(points)
    return pack(
        "f"*n,
        *[ coord
           for coord in points ]
    )
points = [random() for _ in xrange(1000 * 2)]
then the resulting output is the same, but the timing goes down further:
function            usec per loop:
                    Python  Cython
array_struct_flat           157
Cython might be capable of substantially better than this too, if someone smarter than me wanted to add static type declarations to the code. (Running 'cython -a test.pyx' is invaluable for this: it produces an html file showing you where the slowest (yellow) plain Python is in your code, versus python that has been converted to pure C (white). That's why I spread the code above out onto so many lines: the coloring is done per-line, so it helps to spread it out like that.) Full Cython instructions are here: http://docs.cython.org/src/quickstart/build.html Cython might produce similar performance benefits across your whole codebase, and in ideal conditions, with proper static typing applied, can improve speed by factors of ten or a hundred.
[ "stackoverflow", "0010964177.txt" ]
Q: HTTPWebRequest and CookieContainer I create a new HttpWebRequest, but I am unable to assign a CookieContainer to it. How would this be possible?
CookieContainer = new CookieContainer();
//Create a WebRequest to get the file
HttpWebRequest fileReq = (HttpWebRequest)HttpWebRequest.Create(@"http://www.example.com/file.zip");
A: Here is a simple example of using CookieContainer
public static void Main(string[] args)
{
    if (args == null || args.Length != 1)
    {
        Console.WriteLine("Specify the URL to receive the request.");
        Environment.Exit(1);
    }
    HttpWebRequest request = (HttpWebRequest)WebRequest.Create(args[0]);
    request.CookieContainer = new CookieContainer();
    HttpWebResponse response = (HttpWebResponse) request.GetResponse();
    // Print the properties of each cookie.
    foreach (Cookie cook in response.Cookies)
    {
        // Show the string representation of the cookie.
        Console.WriteLine ("String: {0}", cook.ToString());
    }
}
[ "stackoverflow", "0025751249.txt" ]
Q: How can I implement an estimated reading time feature in WordPress? I am trying to integrate estimated reading time into a WordPress theme and I can't seem to get it to work. I took the code from here: http://wptavern.com/estimated-time-to-read-this-post-eternity . I pasted this into functions.php
function bm_estimated_reading_time() {
    $post = get_post();
    $words = str_word_count( strip_tags( $post->post_content ) );
    $minutes = floor( $words / 120 );
    $seconds = floor( $words % 120 / ( 120 / 60 ) );
    if ( 1 <= $minutes ) {
        $estimated_time = $minutes . ' minute' . ($minutes == 1 ? '' : 's') . ', ' . $seconds . ' second' . ($seconds == 1 ? '' : 's');
    } else {
        $estimated_time = $seconds . ' second' . ($seconds == 1 ? '' : 's');
    }
    return $estimated_time;
}
and then called it
<p class="ert"><?php bm_estimated_reading_time() ?></p>
in content-single.php, right after the author link, and nothing gets displayed. If I inspect the post in Chrome I can see the paragraph, but it is empty. What am I doing wrong, or what else should I be doing instead?
A: The function returns a value. You're not echoing the returned value.
<?php echo bm_estimated_reading_time() ?>
[ "webmasters.stackexchange", "0000096309.txt" ]
Q: What could cause a page to stop appearing on StumbleUpon? I've got a page that got some attention from StumbleUpon (~100 likes and ~2700 views), and then suddenly 'disappeared': stumbles stopped completely. The page content wasn't changed in any way. Looking at StumbleUpon's content guidelines, no relevant info was found. What could cause this? Is there any action a webmaster can take to retain stumbles? update: The response from the StumbleUpon support forum: Your account is fully functional. Your page is not blocked. If you are seeking guaranteed traffic, then [...] StumbleUpon Ads is designed to do that for you. Is it possible that SU is purposely blocking pages in order to push users into buying ads?
A: Well, this is a bit too broad because there could be numerous reasons for it, but I'll try to narrow it down. You may be banned. From this article here you can see many reasons for an account being removed, banned, etc. If your stumbles are not showing up in your SU profile, your account might be banned. Here are some possible reasons for banning in SU:
Too many different SU usernames voting from the same IP address.
Reciprocal voting activity, based on tracked patterns or published confirmation (i.e., a blog post or social media campaign suggesting potential reciprocal voting activities).
Too many users voting on the same story and coming from the same referring URL – e.g., from a forum listing.
Misuse of the 'send' button. The SU browser toolbar has a Send button that lets you message your SU friends about some content you'd like them to look at. If you're only sending them your own stories, votes for your site could be discounted.
Complaints. This is a pretty broad area, and any sort of complaint from other users might cause you problems on SU.
StumbleUpon is wonderful for traffic. Try to contact them directly to see what's up with your account.
EDIT: From my experience, I know that in the first week or two I get many views, and eventually the views get lower and lower. You need to put out fresh, new articles over and over again, and that's it. This is not unusual. I thought that you were banned, but this is (from my experience) normal behaviour. It's StumbleUpon; I don't use it as often as I used to, and I don't know what else to say about this matter. It's a site for generating traffic.
[ "stackoverflow", "0003676548.txt" ]
Q: How to define environment variable in input command of su This command has an empty output. su user -c "ABC=abc;echo $ABC" Any idea, how can I define a variable in the input command? A: Change your quotes to single quotes. The double quotes allow the variable to be substituted in the current environment where it's not set yet. To see the difference, try your version with $USER and compare it to this one: su user -c 'ABC=abc; echo $ABC; echo $USER'
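A quick way to see the difference for yourself (the user name is illustrative):
# Double quotes: $ABC is expanded by *your* shell before su runs, so the
# inner command becomes `ABC=abc; echo ` and prints an empty line.
su user -c "ABC=abc; echo $ABC"

# Single quotes: $ABC reaches the shell started by su untouched and is
# expanded there, after the assignment has already happened.
su user -c 'ABC=abc; echo $ABC'    # prints: abc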
[ "raspberrypi.stackexchange", "0000044515.txt" ]
Q: Installation of Safari on a RPI 3 model B I am a diehard mac user and have recently bought an RPI 3 model B. Because I am so used to Safari I was curious if it is possible to install Safari on a pi, and if so how. A: No, you cannot. Safari requires an x86 processor to run, and the Raspberry Pi has an ARM processor. For more information on why this is problematic, we have a helpful blog post.
[ "homebrew.stackexchange", "0000005037.txt" ]
Q: Oatmeal stout: Steeping or mash? Ladies and Gentlemen of Homebrewing, I was cruising my Google+ account, and I am stalking (I mean following/stalking) Wil Wheaton, and I noticed he is a homebrewer! Tonight he posted a question. I directed him here, however I doubt that he would read my post, so perhaps you all could head over to Wil Wheaton's Google+ Profile and help him out. He asks: I'm making an oatmeal stout for Jaime Paglia, and I'm trying to figure out the best time to put in the oats. This will be a partial mash, using extract for the base and some steeped specialty grains. I can't find consensus on the usual forums. Some brewers say to put the oats (about 1 pound for a 5 gallon batch) in with the steeping grains. Others say you have to mash the grains with some 2 row, but don't say how much 2 row you should use, and how to scale back the extract if you go that way. I'm not afraid to do that kind of mashing (yay! experience!), but it's easier to use extract, so that's my preference at this point in my evolution as a brewer. So I was thinking that I'd use the basic stout recipe I have, and I'd toast a pound of quick oats in the oven first, then steep them with the specialty grains when they were all toasty and golden and good. I'm very interested to hear opinions on this, and I thank you all in advance for sharing whatever experience you've had. You may also want to check out some of his other homebrew related posts. A: If you don't mash the oats, you're simply adding starch to your wort. That starch can serve as food for bacteria and encourage an infection in your beer. Bottom line...don't steep oats. Mash them with a diastatic malt. A: You may notice your malt extract say something like "non-diastatic, unhopped, pure malt extract", or something similar. Diastatic power is the ability of a malt to convert starch to sugar. In an extract, you don't need it because it's already been converted for you. However, to get starch to turn into fermentable sugar, a diastatic malt is required. The typical malt with the high diastatic power is 2-row. Adjuncts (such as oat), on the other hand, do not have this power on their own. That said, they can piggy-back off of the enzymes in a diastatic malt such as two-row to perform the conversion and get fermentables. This is why you may have heard that you need to use 2-row along with the oats. A 1:1 ratio should suffice. A: I made an oatmeal porter which came out really, really nicely. The mouthfeel was smooth and silky but not overdone. The only downside was that it had next to zero head retention. To get the "smooth" palate, you need to mash the oatmeal with your grains, but you really do not want to overdo the oats. I used 100g oats in a 15l batch (about 1/3 pound of oats in 5 gallons) which was easily enough. I plan on repeating my recipe but need to work out how to retain that head. Good luck on the stout.
[ "stackoverflow", "0026604738.txt" ]
Q: How to invoke pointer to member function from static member function? I need to get a member function called by a standard function pointer, so I tried to abstract things like this: class Sample { public: virtual void doSomething(void) = 0; }; class A : public Sample { void doSomething(void); // details omitted }; class B : public Sample { void doSomething(void); // details omitted }; class Executor { public: Executor(Sample *sample) : func(&sample->doSomething) { } static void *execute(void *data) { Executor *pX = data; (pX->*func)(); // error invalid access of func from static function (pX->*pX->func)(); // error pointer to member type 'void (Sample::)()' // incompatible with object type 'Executor' } private: void (Sample::*func)(void); }; int main(void) { A myA; B myB; Executor x0(&myA); Executor x1(&myB); externallyInvoke(&Executor::execute, &x0); externallyInvoke(&Executor::execute, &x1); } externallyInvoke is a Linux system call, which takes a function pointer and a data pointer. I'd like to use a static member function together with a this-pointer as data. ... and I don't want classes like A or B to have static members. So my idea was to create an interface like class Sample, that gets extended by A and B. My problem is that I don't know how to invoke the pointer to member function from inside the Executor::execute function. A: The problem is that you need two objects inside execute - one is the instance of Executor which will supply func, and the other is an instance of (a class derived from) Sample on which func will be invoked. So you have to store the object inside Executor, not the function: class Executor { public: Executor(Sample *sample) : obj(sample) { } static void *execute(void *data) { Executor *pX = static_cast<Executor*>(data); pX->obj->doSomething(); } private: Sample *obj; }; int main() { // note that `void main()` is not legal C++ A myA; B myB; Executor x0(&myA); Executor x1(&myB); externallyInvoke(&Executor::execute, &x0); externallyInvoke(&Executor::execute, &x1); } A pointer to member function (such as your original void (Sample::*func)()) identifies a function within a class, but does not store the object. You'd still need to provide one to call the function.
[ "stackoverflow", "0019966240.txt" ]
Q: Protractor clear() not working I have two tests:
it('should filter the phone list as user types into the search box', function() {
    var results = ptor.findElements(protractor.By.repeater('phone in phones').column('phone.name'));
    results.then(function(arr) {
        expect(arr.length).toEqual(3);
    });
    var queryInput = ptor.findElement(protractor.By.input('query'));
    queryInput.sendKeys('nexus');
    results = ptor.findElements(protractor.By.repeater('phone in phones').column('phone.name'));
    results.then(function(arr) {
        expect(arr.length).toEqual(1);
    });
    queryInput.clear();
    queryInput.sendKeys('motorola');
    results = ptor.findElements(protractor.By.repeater('phone in phones').column('phone.name'));
    results.then(function(arr) {
        expect(arr.length).toEqual(2);
    });
});
it('should display the current filter value within an element with id "status"', function() {
    //expect(element('#status').text()).toMatch(/Current filter: \s*$/);
    var queryInput = ptor.findElement(protractor.By.input('query'));
    queryInput.clear();
    expect(ptor.findElement(protractor.By.id('status')).getText()).toMatch('Current Filter:');
    //input('query').enter('nexus');
    //queryInput.clear();
    //queryInput.sendKeys('nexus');
    //expect(element('#status').text()).toMatch(/Current filter: nexus\s*$/);
    //expect(ptor.findElement(protractor.By.id('status')).getText()).toMatch('^\Current Filter:.');
    //alternative version of the last assertion that tests just the value of the binding
    //using('#status').expect(binding('query')).toBe('nexus');
});
The first test, search box, works great. The second test, status, does not pass because the last value entered in queryInput is carried over to the second test, and queryInput.clear() does not work. However, in the second test, if I make a call queryInput.sendKeys("something"), "something" will display. If I take out the clear() in the second test, I'll see "motorolasomething". So while it seems the clear() is working, my test does not pass if I just have clear() in the second test: when I run the second test, I will see "motorola", even when clear() is called prior to the second test. I'm wondering why the clear() is not clearing in the second test when I do not have a sendKeys() after it.
A: The documentation of clear() says the following:
[ !webdriver.promise.Promise ] clear( )
Schedules a command to clear the {@code value} of this element. This command has no effect if the underlying DOM element is neither a text INPUT element nor a TEXTAREA element.
Returns: A promise that will be resolved when the element has been cleared.
So in order to get clear to do what you want, you have to work with the promise that it returns, using then(). Here is how it works:
queryInput.clear().then(function() {
    queryInput.sendKeys('motorola');
})
A: await element.sendKeys(Key.chord(Key.CONTROL, 'a'));
await element.sendKeys(Key.DELETE);
A: clear().then(..) doesn't work for me, so this is my workaround:
queryInput.sendKeys(protractor.Key.chord(protractor.Key.CONTROL, 'a'));
queryInput.sendKeys('nexus')
[ "stackoverflow", "0046857554.txt" ]
Q: Res.locals.apsingle not working I am trying to use apsingle in my template but it is not working. I get the correct data when I console.log(apsingle); but inside the template it just isn't working at all. Partial route:
(req, res, next) => {
    AP.findById(req.params.id).exec(function(err, foundAP){
        if(err){
            console.log(err);
        } else {
            res.locals.apsingle = foundAP;
        }
    });
    next();
}
Loop and if statement inside template:
{% if apsingle %}
{% for ap in apsingle %}
<tr>
    <td>{{ap.type}}</td>
    <td>{{ap.model}}</td>
    <td>{{ap.notes}}</td>
</tr>
{% endfor %}
{% endif %}
If I do a test of:
{% if apsingle == null %}
<h1>I'm NULL</h1>
{% endif %}
then it outputs it, so apsingle is coming through to the template as null.
Output asked for by Andy:
{ _id: objectID, type: 'ap', model: ';lkj;l', notes: '', __v: 0, author: someID }
Error mentioned to Andy:
TypeError: Cannot read property 'name' of undefined
    at Object.eval [as tpl] (eval at <anonymous> (/home/ubuntu/workspace/asset-management/node_modules/swig/lib/swig.js:498:13), <anonymous>:10:1706)
    at compiled (/home/ubuntu/workspace/asset-management/node_modules/swig/lib/swig.js:619:18)
    at Object.eval [as tpl] (eval at <anonymous> (/home/ubuntu/workspace/asset-management/node_modules/swig/lib/swig.js:498:13), <anonymous>:7:154)
    at compiled (/home/ubuntu/workspace/asset-management/node_modules/swig/lib/swig.js:619:18)
    at /home/ubuntu/workspace/asset-management/node_modules/swig/lib/swig.js:559:20
    at /home/ubuntu/workspace/asset-management/node_modules/swig/lib/swig.js:690:9
    at tryToString (fs.js:456:3)
    at FSReqWrap.readFileAfterClose [as oncomplete] (fs.js:443:12)
A: Be careful of scope: having a statement inside a bracket you didn't mean it to be in (or outside one it should be in) causes a lot of issues. Here, next() runs as soon as the query is kicked off, before the exec callback has set res.locals.apsingle, so the template renders while the value is still null. Moving next() inside the callback fixes it.
Started:
(req, res, next) => {
    AP.findById(req.params.id).exec(function(err, foundAP){
        if(err){
            console.log(err);
        } else {
            res.locals.apsingle = foundAP;
        }
    });
    next();
}
Solved:
(req, res, next) => {
    AP.findById(req.params.id).exec(function(err, foundAP){
        if(err){
            console.log(err);
        } else {
            res.locals.apsingle = foundAP;
        }
        next();
    });
}
[ "stackoverflow", "0037216673.txt" ]
Q: How can I temporarily bypass the Python virtual environment from inside a bash script? I have a bash script that needs to install some Python packages on the system instead of a virtual environment, which may or may not be activated when the script is executed. This script is called by people who may already have a Python virtual environment activated, and I want to be sure that for a few commands I do not use it. I tried to use the deactivate command, but it seems it is not available, even though bash detects the virtual environment (presence of the VIRTUAL_ENV variable). As a side note, I don't want to permanently disable the virtual environment. I just want to run a few commands outside it. How can I do this?
A: If activating before starting the script
If you did the activate step in a parent shell, not in the shell instance running the script itself, then non-exported variables and functions are unavailable during its runtime. To be entirely clear about definitions:
source my-virtualenv/bin/activate # this runs in the parent shell

./my-shell-script         # the shell script itself is run in a child process created by
                          # fork()+execve(); does not inherit shell variables / functions, so
                          # deactivate WILL NOT WORK here.

(source my-shell-script)  # creates a subshell with fork(), then directly invokes
                          # my-shell-script inside that subshell; this DOES inherit shell
                          # variables / functions, and deactivate WILL WORK here.
You have three options:
Export the deactivate function and its dependencies from the parent shell, before starting the script. This is as given below, and looks something like:
source my-virtualenv/bin/activate
export VIRTUAL_ENV ${!_OLD_VIRTUAL_@}
export -f deactivate
./my-script-that-needs-to-be-able-to-deactivate
You could optionally define an activation function that does this for you, like so:
# put this in your .bashrc
activate() {
  source "$1"/bin/activate && {
    export VIRTUAL_ENV ${!_OLD_VIRTUAL_@}
    export -f deactivate
  }
}
# ...and then activate virtualenvs like so:
activate my-virtualenv
Make some guesses, within the script, about what the prior Python environment looked like.
This is less reliable, for obvious reasons; however, as virtualenv does not export the shell variables containing the original PYTHON_HOME, that information is simply unavailable to child-process shells; a guess is thus the best option available: best_guess_deactivate() { if [[ $VIRTUAL_ENV && $PATH =~ (^|:)"$VIRTUAL_ENV/bin"($|:) ]]; then PATH=${PATH%":$VIRTUAL_ENV/bin"} PATH=${PATH#"$VIRTUAL_ENV/bin:"} PATH=${PATH//":$VIRTUAL_ENV/bin:"/} unset PYTHONHOME VIRTUAL_ENV fi } ...used within a limited scope as: run_python_code_in_virtualenv_here (best_guess_deactivate; run_python_code_outside_virtualenv_here) run_python_code_in_virtualenv_here Run the script in a forked child of the shell that first sourced activate with no intervening exec() call That is, instead of invoking your script as a regular subprocess, with: # New shell instance, does not inherit non-exported (aka regular shell) variables ./my-shell-script ...source it into a forked copy of the current shell, as # Forked copy of existing shell instance, does inherit variables (source ./my-shell-script) ...or, if you trust it to hand back control to your interactive shell after execution without messing up state too much (which I don't advise), simply: # Probably a bad idea source ./my-shell-script All of these approaches have some risk: Because they don't use an execve call, they don't honor any shebang line on the script, so if it's written specifically for ksh93, zsh, or another shell that differs from the one you're using interactively, they're likely to misbehave. If activating within the script The most likely scenario is that the shell where you're running deactivate isn't a direct fork()ed child (with no intervening exec-family call) of the one where activate was sourced, and thus has inherited neither functions or (non-exported) shell variables created by that script. One means to avoid this is to export the deactivate function in the shell that sources the activate script, like so: printf 'Pre-existing interpreter: '; type python . venv-dir/bin/activate printf 'Virtualenv interpreter: '; type python # deactivate can be run in a subshell without issue, scoped to same printf 'Deactivated-in-subshell interpreter: ' ( deactivate && type python ) # this succeeds # however, it CANNOT be run in a child shell not forked from the parent... printf 'Deactivated-in-child-shell (w/o export): ' bash -c 'deactivate && type python' # this fails # ...unless the function is exported with the variables it depends on! export -f deactivate export _OLD_VIRTUAL_PATH _OLD_VIRTUAL_PYTHONHOME _OLD_VIRTUAL_PS1 VIRTUAL_ENV # ...after which it then succeeds in the child. printf 'Deactivated-in-child-shell (w/ export): ' bash -c 'deactivate && type python' My output from the above follows: Pre-existing interpreter: python is /usr/bin/python Virtualenv interpreter: python is /Users/chaduffy/test.venv/bin/python Deactivated-in-subshell interpreter: python is /usr/bin/python Deactivated-in-child-shell (w/o export): bash: deactivate: command not found Deactivated-in-child-shell (w/ export): python is /usr/bin/python Assuming you've fixed that, let's run once more through using a subshell to scope deactivation to make it temporary: . venv-dir/activate this-runs-in-venv # minor performance optimization: exec the last item in the subshell to balance out # ...the performance cost of creating that subshell in the first place. (deactivate; exec this-runs-without-venv) this-runs-in-venv
[ "stackoverflow", "0004288444.txt" ]
Q: Buffer writing/sending messages problem in Java NIO My problem concerns Java NIO client-server message passing. I'm unsure how to define the problem technically, but it seems that the buffer is caching the data and, once it is done, sending it all together, which disturbs the logic:
private void sendCreate(String line,SocketChannel from)
/* A new client wishes to join the world. This requires the client to find out about the
   existing clients, and to add itself to the other clients' worlds.
   Message format: create name xPosn zPosn
   Store the user's name, extracted from the "create" message */
{
    StringTokenizer st = new StringTokenizer(line);
    st.nextToken(); // skip 'create' word
    userName = st.nextToken();
    String xPosn = st.nextToken(); // don't parse
    String zPosn = st.nextToken(); // don't parse
    // request details from other clients
    sendBroadcastMessage( "wantDetails " + achannel.socket().getInetAddress() + " " + port,from);
    // tell other clients about the new one
    sendBroadcastMessage( "create " + userName + " "+xPosn+" "+zPosn,from);
} // end of sendCreate()
The method responsible for broadcasting messages from the server:
private void sendBroadcastMessage(String mesg, SocketChannel from) {
    prepWriteBuffer(mesg);
    Iterator i = clients.iterator();
    while (i.hasNext()) {
        SocketChannel channel = (SocketChannel) i.next();
        if (channel != from)
            channelWrite(channel, writeBuffer);
    }
}
I am assuming this should send the first message, i.e. sendBroadcastMessage( "wantDetails " + achannel.socket().getInetAddress() + " " + port,from); but it does not. It seems to wait for the other method call, i.e. sendBroadcastMessage( "create " + userName + " "+xPosn+" "+zPosn,from); and then sends both messages as one message, which affects the application logic. Ideally it should send the first message after the first call to sendBroadcastMessage, and the second call should only be processed once the client has received the first. These are the methods used in sendBroadcastMessage():
private void prepWriteBuffer(String mesg) {
    // fills the buffer from the given string
    // and prepares it for a channel write
    writeBuffer.clear();
    writeBuffer.put(mesg.getBytes());
    writeBuffer.putChar('\n');
    writeBuffer.flip();
}
private void channelWrite(SocketChannel channel, ByteBuffer writeBuffer) {
    long nbytes = 0;
    long toWrite = writeBuffer.remaining();
    // loop on the channel.write() call since it will not necessarily
    // write all bytes in one shot
    try {
        nbytes += channel.write(writeBuffer);
    } catch (ClosedChannelException cce) {
        cce.printStackTrace();
    } catch (Exception e) {
        e.printStackTrace();
    }
    // get ready for another write if needed
    writeBuffer.rewind();
}
Please suggest a solution. Thanks, jibby lala
Edit: what about this? I got this patch from some chat app:
private void prepWriteBuffer(String mesg) {
    // fills the buffer from the given string
    // and prepares it for a channel write
    writeBuffer.clear();
    writeBuffer.put(mesg.getBytes());
    writeBuffer.putChar('\n');
    writeBuffer.flip();
}
// caller needs to remove the channel if it fails, otherwise it will fail forever.
private void channelWrite(SocketChannel channel, ByteBuffer writeBuffer) {
    long nbytes = 0;
    long toWrite = writeBuffer.remaining();
    // loop on the channel.write() call since it will not necessarily
    // write all bytes in one shot
    try {
        while (nbytes != toWrite) {
            nbytes += channel.write(writeBuffer);
            try {
                Thread.sleep(CHANNEL_WRITE_SLEEP);
            } catch (InterruptedException e) {
            }
        }
    } catch (ClosedChannelException cce) {
    } catch (Exception e) {
    }
    // get ready for another write if needed
    writeBuffer.rewind();
}
A: Perhaps you intended
while(writeBuffer.remaining()>0)
    channel.write(writeBuffer);
However, your problem appears to be that you assume there is some type of magic marker between messages, and no such divider exists. A stream is just a stream of bytes. When you read in blocking mode you will get at least one byte, and you may get more; a read may span what was multiple writes, so unless you include in the stream where you expect a message to start and end, you will have no way of knowing where the boundaries are. A simple approach is to write the length of the message at the start of the message and read at most a single message until you get all of it. Something like:
private void prepWriteBuffer(String mesg) {
    // fills the buffer from the given string
    // and prepares it for a channel write
    writeBuffer.clear();
    byte[] bytes = mesg.getBytes();
    writeBuffer.putInt(bytes.length);
    writeBuffer.put(bytes);
    writeBuffer.flip();
}
// caller needs to remove the channel if it fails, otherwise it will fail forever.
private void channelWrite(SocketChannel channel, ByteBuffer writeBuffer) throws IOException {
    while(writeBuffer.remaining()>0)
        channel.write(writeBuffer);
    writeBuffer.rewind();
}
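For completeness, a hedged sketch of the matching read side (not part of the original answer; it assumes a blocking-mode channel and the default platform charset, mirroring the getBytes() call above):
import java.io.EOFException;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;

class MessageReader {
    // Read the 4-byte length prefix first, then keep reading until
    // exactly that many payload bytes have arrived.
    String readMessage(SocketChannel channel) throws IOException {
        ByteBuffer header = ByteBuffer.allocate(4);
        while (header.hasRemaining()) {
            if (channel.read(header) < 0) throw new EOFException("peer closed");
        }
        header.flip();
        int length = header.getInt();

        ByteBuffer payload = ByteBuffer.allocate(length);
        while (payload.hasRemaining()) {
            if (channel.read(payload) < 0) throw new EOFException("peer closed");
        }
        payload.flip();
        return new String(payload.array(), 0, length);
    }
}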
[ "ru.stackoverflow", "0000570423.txt" ]
Q: Should I mark a class with the final modifier? The modifier itself says that the class cannot be extended. But in terms of code optimization, if I know for sure that I am not going to inherit from class A, why not make it final... Maybe it serves as a hint for the compiler at compile time or at run time?
A: The final modifier forbids inheriting from the given class. From the point of view of optimization and performance, it is a good hint for the JVM. When a method is called on some class, a lookup is performed over the subclasses to determine whether the method has been overridden; this is how polymorphism is provided. That said, there are a great many optimizations, and it is not guaranteed that the JVM performs these manipulations on every call. If we mark the class final, the runtime does not need to search for anything, since it knows for certain that the method has not been overridden.
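A minimal Java illustration of the idea (the class name is made up):
// `final` both documents intent and lets the JIT devirtualize calls:
// hello() can never be overridden, so a call site can be bound
// (and potentially inlined) without checking for subclasses.
final class Greeter {
    String hello() { return "hello"; }
}

// class LoudGreeter extends Greeter {} // compile error: cannot inherit from final Greeter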
[ "stackoverflow", "0003867383.txt" ]
Q: Setup project custom action reading checkboxes I have checkboxes and a dialog added. I need to be able to read the state of the boxes from a custom action. I also need the path, which I have, but I can't find how to read the state of the checkboxes. How can this be done? public override void Commit(IDictionary savedState) { base.Commit(savedState); String TargetDirectory = Path.GetDirectoryName(Context.Parameters["AssemblyPath"]); MessageBox.Show(TargetDirectory); // Code needed to read the checkboxes! } A: Found it! In the custom action's CustomActionData, add /tool="[XYZ] " /MyInfo="[ABC] " where XYZ and ABC are the CheckboxNProperty names. Then read them in the custom action above by the left-hand parameter name, e.g. MessageBox.Show(Context.Parameters["tool"]);
[ "stackoverflow", "0012327346.txt" ]
Q: Receive iPhone User Data I am creating an app with various tests and I need a way to receive the test results from the users. Is there a way for me to save the test results when the user is done taking the test and be able to access the results from my computer or something? Thanks in advance, Joshua A: In an app I am making I used JSON, JavaScript Object Notation, to communicate between my app and a web server, where I store my info. It is fairly simple to use. Check out this tutorial http://www.raywenderlich.com/2965/how-to-write-an-ios-app-that-uses-a-web-service. This is what I used, and I am fairly new to programming in general; I just had to make a few adjustments to make it work. This is pretty broad, but it should get you going in the right direction. A: If you are familiar with SQL commands, why don't you try SQLite for creating a database where you can store the results, update them, and retrieve them? You can refer to this site http://www.techotopia.com/index.php/IPhone_Database_Implementation_using_SQLite or this one http://www.icodeblog.com/2008/08/19/iphone-programming-tutorial-creating-a-todo-list-using-sqlite-part-1/
[ "stackoverflow", "0011565820.txt" ]
Q: Starting R and calling a script from a batch file I have an R-based GUI that allows some non-technical users access to a stats model. As it stands, the users have to first load R and then type loadGui() at the command line. While this isn't overly challenging, I don't like having to make non-technical people type anything at a command line. I had the idea of writing a .bat file (users are all running Windows, though multi-platform solutions are also appreciated) that starts R GUI, then autoruns that command. My first problem is opening RGui from the command line. While I can provide an explicit path, such as "%ProgramW6432%\R\R-2.15.1\bin\i386\Rgui.exe" it will need updating each time R is upgraded. It would be better to retrieve the location of RGui from the %path% environment variable, but I don't know an easy way to parse that. The second, larger problem is how to call commands for R on startup from the command line. My first thought is that I could take a copy of ~/.Rprofile, append the extra command, and then replace the original copy of the file once R is loaded. This is awfully messy though, so I'd like an alternative. Running R in batch mode isn't an option, firstly because I can't persuade GUIs to display themselves, and secondly because I would like the R console available, even if the users shouldn't need to use it. If you want a toy GUI to test your ideas, try this: loadGui <- function() { library(gWidgetstcltk) win <- gwindow("test") rad <- gradio(letters[1:3], cont = win) } A: Problem 1: I simply never install in the suggested default directory on Windows, but rather group R and a few related things in, say, c:/opt/ where I install R itself in, say, c:/opt/R-current so that the path c:/opt/R-current/bin will remain constant. On upgrade, I first rename the old version to R-previous and then install into a new R-current. Problem 2: I think I solved that many moons ago with scripts. You can now use Rscript.exe to launch these, and there are tcltk examples for waiting for a prompt. A: I have done something similar a couple of times. In my cases the client was using Windows, so I just installed R on their computer and created a shortcut on their desktop to run R. Then I right-click on the shortcut and choose properties to get the properties dialog. I then changed the "Start in" folder to the one where I wanted it to run from (which had the .Rdata file with the correct data and either a .First function in the .Rdata file or .Rprofile in the folder). There is also a "Run:" option that has a "Minimized" choice to run the main R window minimized. I had created the functions that I wanted to run (usually a specialized GUI using tcltk) and any needed data and saved them in the .Rdata file, and also either created .First or .Rprofile to run the command that showed the GUI. The user double-clicks on the icon on the desktop and up pops my GUI that they can work with while ignoring the other parts.
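Picking up the .Rprofile/.First idea from the second answer, a minimal sketch (the file name gui.R and the place where loadGui() is defined are assumptions about your project layout):

# .Rprofile placed in the shortcut's "Start in" folder
.First <- function() {
  source("gui.R")   # assumed to define loadGui()
  loadGui()
}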
[ "stackoverflow", "0036593592.txt" ]
Q: Drop-down list in GridView header and RowDataBound column change I have a GridView that contains databound columns. Two columns contain information that needs to be adjusted in RowDataBound, e.g. if the cell in the column contains a 2 then change it to display 'code'. I have done this before; however, now I have a drop-down list to add to the GridView header, and the list items of the drop-down will be 'all', 'uncode', 'code'. Now that I have put the drop-down list in the header, when the GridView populates it no longer changes the value '2' to say 'code'; it keeps it as '2'. Can anyone shed some light on how I can get it to change back to 'code'? Here is the code for the GridView item header/databind <asp:TemplateField SortExpression ="true"> <HeaderTemplate> Coding Stage <asp:DropDownList ID ="ddlCoded" runat="server" OnSelectedIndexChanged="CodedChanged" AutoPostBack="true" AppendDataBoundItems ="true"> <asp:ListItem Text="ALL" Value="ALL"></asp:ListItem> <asp:ListItem Text="uncode" Value="uncode"></asp:ListItem> <asp:ListItem Text="code" Value="code"></asp:ListItem> </asp:DropDownList> </HeaderTemplate> <ItemTemplate> <%#Eval("codingStage")%> </ItemTemplate> </asp:TemplateField> If you need any more information then please contact me and I will be happy to provide it. A: markup: <ItemTemplate> <asp:Label ID="lblStage" runat="server" Text='<%#ShowStage(Eval("codingStage"))%>'></asp:Label> </ItemTemplate> in code behind: protected string ShowStage(object codingStage) { // Eval returns object, so convert before switching switch (Convert.ToInt32(codingStage)) { case 0: return "All"; case 1: return "Uncode"; case 2: return "Code"; default: return "All"; } }
[ "es.stackoverflow", "0000010666.txt" ]
Q: Can this code be improved? I am wondering whether this code can somehow be improved, or at least shortened: func addCourse(cursoRecibido: Course) throws { let curso = Course() if let author = cursoRecibido.author { curso.author = author } else { throw addCourseError.emptyAuthor } if let title = cursoRecibido.title { curso.title = title } else { throw addCourseError.emptyTitle } if let duration = cursoRecibido.duration { curso.duration = duration } else { throw addCourseError.emptyDuration } if let date = cursoRecibido.uploadDate { curso.uploadDate = date } else { throw addCourseError.emptyUploadDate } if let views = cursoRecibido.views { curso.views = views } dao.addCourse(curso) } This method's class handles the business logic for saving the course in SQLite. A: You should start by asking yourself why you create a new course to add when one already arrives in the function as a parameter. Perhaps you should verify that all the fields are filled in and then save it. Even so, there are many possible solutions. Here are two: one that reduces your code, and one showing how I would do it: Solution 1 func addCourse(cursoRecibido: Course) throws { // Verify that all the data is present guard let author = cursoRecibido.author, title = cursoRecibido.title, duration = cursoRecibido.duration, uploadDate = cursoRecibido.uploadDate, views = cursoRecibido.views else { // Throw a general exception saying something like "All fields are required" throw addCourseError.AllDataRequired } // Create the course let curso = Course() curso.author = author curso.title = title curso.duration = duration curso.uploadDate = uploadDate curso.views = views // Add the course dao.addCourse(curso) } Solution 2 func addCourse(cursoRecibido: Course) throws { // Verify that all the data is present guard let author = cursoRecibido.author, title = cursoRecibido.title, duration = cursoRecibido.duration, uploadDate = cursoRecibido.uploadDate, views = cursoRecibido.views else { // Throw a general exception saying something like "All fields are required" throw addCourseError.AllDataRequired } // Add the course dao.addCourse(cursoRecibido) }
[ "stackoverflow", "0028112024.txt" ]
Q: Bootstrap - input box within a col-xs-6 container <div class="container-fluid"> <div class="row"> <div class="col-xs-6 text-left"> Search <span> <input style="width:100%" type="text" /> </span> </div> <div class="col-xs-6 text-right"> <span>...</span> </div> </div> </div> How do I ensure the above input box is 100% the size of the container minus the text "Search"? JSFIDDLE: http://jsfiddle.net/c9kprdqh/ UPDATE: I'd like "Search [...]" to be 100% of the col-xs-6 container. So in other words, I need the "[...]" to be the remainder of the container (minus the Search text) but I'm not sure how to do this. UPDATE: I'd like the above to be on one line. So "Search [...]" should be one line. A: Fiddle There are a couple of ways to solve this problem. Here is how I would do it. <span>Search</span> <div id="rest"> <input> </div> span { float:left; } /* #rest: a div containing the input element */ #rest { overflow: hidden; width: auto; } /* finally the input */ input { width: 100%; }
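For completeness, a more modern sketch of the same layout using flexbox (my own addition, not from the answer; the class name is made up):

<div class="search-row">Search <input type="text"></div>

.search-row { display: flex; align-items: center; }
.search-row input { flex: 1; margin-left: 0.5em; }  /* the input takes all remaining width */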
[ "stackoverflow", "0002807654.txt" ]
Q: Multi-threaded file processing with .NET There is a folder that contains 1000s of small text files. I aim to parse and process all of them while more files are being populated into the folder. My intention is to multithread this operation, as the single-threaded prototype took six minutes to process 1000 files. I'd like to have reader and writer thread(s), as follows. While the reader thread(s) are reading the files, I'd like to have writer thread(s) process them. Once a reader has started reading a file, I'd like to mark it as being processed, such as by renaming it. Once it's read, rename it to completed. How do I approach such a multithreaded application? Is it better to use a distributed hash table or a queue? Which data structure do I use that would avoid locks? Is there a better approach to this scheme? A: Since there's curiosity on how .NET 4 works with this in comments, here's that approach. Sorry, it's likely not an option for the OP. Disclaimer: This is not a highly scientific analysis, just showing that there's a clear performance benefit. Based on hardware, your mileage may vary widely. Here's a quick test (if you see a big mistake in this simple test, it's just an example. Please comment, and we can fix it to be more useful/accurate). For this, I just dropped 12,000 ~60 KB files into a directory as a sample (fire up LINQPad; you can play with it yourself, for free! - be sure to get LINQPad 4 though): var files = Directory.GetFiles("C:\\temp", "*.*", SearchOption.AllDirectories).ToList(); var sw = Stopwatch.StartNew(); //start timer files.ForEach(f => File.ReadAllBytes(f).GetHashCode()); //do work - serial sw.Stop(); //stop sw.ElapsedMilliseconds.Dump("Run MS - Serial"); //display the duration sw.Restart(); files.AsParallel().ForAll(f => File.ReadAllBytes(f).GetHashCode()); //parallel sw.Stop(); sw.ElapsedMilliseconds.Dump("Run MS - Parallel"); Slightly changing your loop to parallelize the query is all that's needed in most simple situations. By "simple" I mostly mean that the result of one action doesn't affect the next. Something to keep in mind most often is that some collections, for example our handy List<T>, are not thread safe, so using them in a parallel scenario isn't a good idea :) Luckily there were concurrent collections added in .NET 4 that are thread safe. Also keep in mind that if you're using a locking collection, this may be a bottleneck as well, depending on the situation. This uses the .AsParallel<T>(IEnumerable<T>) and .ForAll<T>(ParallelQuery<T>) extensions available in .NET 4.0. The .AsParallel() call wraps the IEnumerable<T> in a ParallelEnumerableWrapper<T> (internal class) which implements ParallelQuery<T>. This now allows you to use the parallel extension methods, in this case we're using .ForAll(). .ForAll() internally creates a ForAllOperator<T>(query, action) and runs it synchronously. This handles the threading and merging of the threads after it's running... There's quite a bit going on in there, I'd suggest starting here if you want to learn more, including additional options.
The results (Computer 1 - Physical Hard Disk): Serial: 1288 - 1333ms Parallel: 461 - 503ms Computer specs - for comparison: Quad Core i7 920 @ 2.66 GHz 12 GB RAM (DDR 1333) 300 GB 10k rpm WD VelociRaptor The results (Computer 2 - Solid State Drive): Serial: 545 - 601 ms Parallel: 248 - 278 ms Computer specifications - for comparison: Quad Core 2 Quad Q9100 @ 2.26 GHz 8 GB RAM (DDR 1333) 120 GB OCZ Vertex SSD (Standard Version - 1.4 Firmware) I don't have links for the CPU/RAM this time, these came installed. This is a Dell M6400 Laptop (here's a link to the M6500... Dell's own links to the 6400 are broken). These numbers are from 10 runs, taking the min/max of the inner 8 results (removing the original min/max for each as possible outliers). We hit an I/O bottleneck here, especially on the physical drive, but think about what the serial method does. It reads, processes, reads, processes, rinse repeat. With the parallel approach, you are (even with a I/O bottleneck) reading and processing simultaneously. In the worst bottleneck situation, you're processing one file while reading the next. That alone (on any current computer!) should result in some performance gain. You can see that we can get a bit more than one going at a time in the results above, giving us a healthy boost. Another disclaimer: Quad core + .NET 4 parallel isn't going to give you four times the performance, it doesn't scale linearly... There are other considerations and bottlenecks in play. I hope this was on interest in showing the approach and possible benefits. Feel free to criticize or improve... This answer exists solely for those curious as indicated in the comments :) A: Design The Producer/Consumer pattern will probably be the most useful for this situation. You should create enough threads to maximize the throughput. Here are some questions about the Producer/Consumer pattern to give you an idea of how it works: C# Producer/Consumer pattern C# producer/consumer You should use a blocking queue and the producer should add files to the queue while the consumers process the files from the queue. The blocking queue requires no locking, so it's about the most efficient way to solve your problem. If you're using .NET 4.0 there are several concurrent collections that you can use out of the box: ConcurrentQueue: http://msdn.microsoft.com/en-us/library/dd267265%28v=VS.100%29.aspx BlockingCollection: http://msdn.microsoft.com/en-us/library/dd267312%28VS.100%29.aspx Threading A single producer thread will probably be the most efficient way to load the files from disk and push them onto the queue; subsequently multiple consumers will be popping items off the queue and they'll process them. I would suggest that you try 2-4 consumer threads per core and take some performance measurements to determine which is most optimal (i.e. the number of threads that provide you with the maximum throughput). I would not recommend the use a ThreadPool for this specific example. P.S. I don't understand what's the concern with a single point of failure and the use of distributed hash tables? I know DHTs sound like a really cool thing to use, but I would try the conventional methods first unless you have a specific problem in mind that you're trying to solve. A: I recommend that you queue a thread for each file and keep track of the running threads in a dictionary, launching a new thread when a thread completes, up to a maximum limit. 
I prefer to create my own threads when they can be long-running, and use callbacks to signal when they're done or encountered an exception. In the sample below I use a dictionary to keep track of the running worker instances. This way I can call into an instance if I want to stop work early. Callbacks can also be used to update a UI with progress and throughput. You can also dynamically throttle the running thread limit for added points. The example code is an abbreviated demonstrator, but it does run. class Program { static void Main(string[] args) { Supervisor super = new Supervisor(); super.LaunchWaitingThreads(); while (!super.Done) { Thread.Sleep(200); } Console.WriteLine("\nDone"); Console.ReadKey(); } } public delegate void StartCallbackDelegate(int idArg, Worker workerArg); public delegate void DoneCallbackDelegate(int idArg); public class Supervisor { Queue<Thread> waitingThreads = new Queue<Thread>(); Dictionary<int, Worker> runningThreads = new Dictionary<int, Worker>(); int maxThreads = 20; object locker = new object(); public bool Done { get { lock (locker) { return ((waitingThreads.Count == 0) && (runningThreads.Count == 0)); } } } public Supervisor() { // queue up a thread for each file Directory.GetFiles("C:\\folder").ToList().ForEach(n => waitingThreads.Enqueue(CreateThread(n))); } Thread CreateThread(string fileNameArg) { Thread thread = new Thread(new Worker(fileNameArg, WorkerStart, WorkerDone).ProcessFile); thread.IsBackground = true; return thread; } // called when a worker starts public void WorkerStart(int threadIdArg, Worker workerArg) { lock (locker) { // update with worker instance runningThreads[threadIdArg] = workerArg; } } // called when a worker finishes public void WorkerDone(int threadIdArg) { lock (locker) { runningThreads.Remove(threadIdArg); } Console.WriteLine(string.Format(" Thread {0} done", threadIdArg.ToString())); LaunchWaitingThreads(); } // launches workers until max is reached public void LaunchWaitingThreads() { lock (locker) { while ((runningThreads.Count < maxThreads) && (waitingThreads.Count > 0)) { Thread thread = waitingThreads.Dequeue(); runningThreads.Add(thread.ManagedThreadId, null); // place holder so count is accurate thread.Start(); } } } } public class Worker { string fileName; StartCallbackDelegate startCallback; DoneCallbackDelegate doneCallback; public Worker(string fileNameArg, StartCallbackDelegate startCallbackArg, DoneCallbackDelegate doneCallbackArg) { fileName = fileNameArg; startCallback = startCallbackArg; doneCallback = doneCallbackArg; } public void ProcessFile() { startCallback(Thread.CurrentThread.ManagedThreadId, this); Console.WriteLine(string.Format("Reading file {0} on thread {1}", fileName, Thread.CurrentThread.ManagedThreadId.ToString())); File.ReadAllBytes(fileName); doneCallback(Thread.CurrentThread.ManagedThreadId); } }
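As a companion sketch to the producer/consumer answer above, here is a minimal .NET 4 version built on BlockingCollection (my own illustration; the folder path, bounded capacity, and consumer count are assumptions to tune):

using System;
using System.Collections.Concurrent;
using System.IO;
using System.Threading.Tasks;

class FileQueueDemo
{
    static void Main()
    {
        // bounded so a fast producer cannot run far ahead of the consumers
        var queue = new BlockingCollection<string>(boundedCapacity: 100);

        // single producer: enumerate files and enqueue them
        var producer = Task.Factory.StartNew(() =>
        {
            foreach (var file in Directory.EnumerateFiles(@"C:\folder"))
                queue.Add(file);
            queue.CompleteAdding();   // tell consumers no more items are coming
        });

        // several consumers; 2 per core is just a starting point to measure
        int consumerCount = Environment.ProcessorCount * 2;
        var consumers = new Task[consumerCount];
        for (int i = 0; i < consumerCount; i++)
        {
            consumers[i] = Task.Factory.StartNew(() =>
            {
                // GetConsumingEnumerable blocks until items arrive and
                // completes once CompleteAdding is called and the queue drains
                foreach (var file in queue.GetConsumingEnumerable())
                    File.ReadAllBytes(file);   // placeholder for real parsing work
            });
        }

        Task.WaitAll(consumers);
        producer.Wait();
    }
}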
[ "ethereum.stackexchange", "0000079542.txt" ]
Q: Smart contract DB Is it good practice to create a smart contract to serve as a DB table? For example, let's say that I have users that need to create Operations. My operation objects need to contain id, description, date to start and date to finish. Ideally, I know I could store this in a centralized DB. Although for my business requirements, it would be convenient to have this data completely open. I know my use case does not require Eth or any other token transfer; is this still good practice? A: It is possible to do so, although your contract would incur costs for write and update operations. I think AusIV answers your question in some way with this answer
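To make the trade-off concrete, a minimal Solidity sketch of such an on-chain table (my own illustration, not from the thread; the field names mirror the question and the compiler version is an assumption):

pragma solidity ^0.5.0;

contract Operations {
    struct Operation {
        uint id;
        string description;
        uint startDate;   // unix timestamps
        uint endDate;
    }

    Operation[] public operations;   // reads are free via a node; every write costs gas

    function addOperation(uint id, string memory description,
                          uint startDate, uint endDate) public {
        operations.push(Operation(id, description, startDate, endDate));
    }
}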
[ "stackoverflow", "0021664685.txt" ]
Q: jQuery selector alternative for parent.next.next I have the following HTML structure that gets repeated as rows in a table <div class='class1'> <input type="checkbox" id="some_id" checked> </div> <div class='class2'> <label> Some Label </label> </div> <div class='class3'> Some Text </div> When a checkbox is selected, I get the text associated with it using the jQuery selector this.$('#some_id').parent().next().next(); Is there a better way of achieving this? A: $(this).parent().siblings('.class3').text(); or $('#some_id').parent().siblings('.class3').text(); or try this: $('#some_id').parent().nextAll().eq(1).text();
[ "stackoverflow", "0000006134.txt" ]
Q: How do you kill all Linux processes that are older than a certain age? I have a problem with some zombie-like processes on a certain server that need to be killed every now and then. How can I best identify the ones that have run for longer than an hour or so? A: Found an answer that works for me: warning: this will find and kill long-running processes ps -eo uid,pid,etime | egrep '^ *user-id' | egrep ' ([0-9]+-)?([0-9]{2}:?){3}' | awk '{print $2}' | xargs -I{} kill {} (Where user-id is a specific user's ID with long-running processes.) The second regular expression matches a time that has an optional days figure, followed by an hour, minute, and second component, and so is at least one hour in length. A: If they just need to be killed: if [[ "$(uname)" = "Linux" ]];then killall --older-than 1h someprocessname;fi If you want to see what it's matching if [[ "$(uname)" = "Linux" ]];then killall -i --older-than 1h someprocessname;fi The -i flag will prompt you with yes/no for each process match. A: For anything older than one day, ps aux will give you the answer, but it drops down to day-precision which might not be as useful. USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND root 1 0.0 0.0 7200 308 ? Ss Jun22 0:02 init [5] root 2 0.0 0.0 0 0 ? S Jun22 0:02 [migration/0] root 3 0.0 0.0 0 0 ? SN Jun22 0:18 [ksoftirqd/0] root 4 0.0 0.0 0 0 ? S Jun22 0:00 [watchdog/0] In this example, you can only see that process 1 has been running since June 22, with no indication of the time it was started. If you're on Linux or another system with the /proc filesystem, stat /proc/<pid> will give you a more precise answer. For example, here's an exact timestamp for process 1, which ps shows only as Jun22: ohm ~$ stat /proc/1 File: `/proc/1' Size: 0 Blocks: 0 IO Block: 4096 directory Device: 3h/3d Inode: 65538 Links: 5 Access: (0555/dr-xr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root) Access: 2008-06-22 15:37:44.347627750 -0700 Modify: 2008-06-22 15:37:44.347627750 -0700 Change: 2008-06-22 15:37:44.347627750 -0700
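On systems with procps-ng ps, a shorter sketch uses etimes (elapsed time in seconds); the user name and the 3600-second threshold are placeholders, so dry-run it with echo before trusting it:

# print the PIDs of some-user's processes older than one hour
ps -u some-user -o pid=,etimes= | awk '$2 > 3600 {print $1}' | xargs -r echo
# same selection, actually killing them
ps -u some-user -o pid=,etimes= | awk '$2 > 3600 {print $1}' | xargs -r kill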
[ "stackoverflow", "0054691214.txt" ]
Q: case when filter is empty - select all I need to add a state filter to the procedure. When the filter is not empty, like 'AL', 'AK', etc., all works fine, but if the filter is empty I should return all rows. where ... and state = case when @stateFilter != '' then @stateFilter else ??? end A: You may rephrase your logic as follows: WHERE ... AND (state = @stateFilter OR @stateFilter = ''); The last condition returns true if the state equals the variable passed in, or if the variable passed in happens to be an empty string. A: A quick fix might be something like this: and state LIKE case when @stateFilter != '' then @stateFilter else '%' end
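If the parameter can also arrive as NULL rather than an empty string (an assumption about the callers; T-SQL syntax), the same idea needs one small tweak:

AND (state = @stateFilter OR ISNULL(@stateFilter, '') = '')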
[ "superuser", "0000439054.txt" ]
Q: Apache reverse proxy: no protocol handler I am trying to configure a reverse proxy with Apache, but I am getting a No protocol handler was valid for the URL error, which I do not understand. This is the relevant configuration of Apache: ProxyRequests Off ProxyPreserveHost On <Proxy *> Order deny,allow Allow from all </Proxy> ProxyPass /gonvaled/examples/jsonrpc/output/services/ http://localhost:8000/services/ ProxyPassReverse /gonvaled/examples/jsonrpc/output/services/ http://localhost:8000/services/ The request is reaching Apache as: POST /gonvaled/examples/jsonrpc/output/services/EchoService.py HTTP/1.1 and it should be forwarded to my internal service, located at: 0.0.0.0:8000/services/EchoService.py These are the logs: ==> /var/log/apache2/error.log <== [Wed Jun 20 02:05:20 2012] [debug] proxy_util.c(1506): [client 127.0.0.1] proxy: http: found worker http://localhost:8000/services/ for http://localhost:8000/services/EchoService.py, referer: http://localhost/gonvaled/examples/jsonrpc/output/JSONRPCExample.safari.cache.html [Wed Jun 20 02:05:20 2012] [debug] mod_proxy.c(998): Running scheme http handler (attempt 0) [Wed Jun 20 02:05:20 2012] [warn] proxy: No protocol handler was valid for the URL /gonvaled/examples/jsonrpc/output/services/EchoService.py. If you are using a DSO version of mod_proxy, make sure the proxy submodules are included in the configuration using LoadModule. [Wed Jun 20 02:05:20 2012] [debug] mod_deflate.c(615): [client 127.0.0.1] Zlib: Compressed 614 to 373 : URL /gonvaled/examples/jsonrpc/output/services/EchoService.py, referer: http://localhost/gonvaled/examples/jsonrpc/output/JSONRPCExample.safari.cache.html ==> /var/log/apache2/access.log <== 127.0.0.1 - - [20/Jun/2012:02:05:20 +0200] "POST /gonvaled/examples/jsonrpc/output/services/EchoService.py HTTP/1.1" 500 598 "http://localhost/gonvaled/examples/jsonrpc/output/JSONRPCExample.safari.cache.html" "Mozilla/5.0 (X11; Linux i686) AppleWebKit/535.19 (KHTML, like Gecko) Chrome/18.0.1025.162 Safari/535.19" A: I found the problem. The proxy_http module needs to be activated too in Apache (I had only proxy_html and proxy) A: For me, on Apache httpd 2.4, this happened because I was missing the trailing slash: Did not work: <Proxy balancer://mycluster> BalancerMember http://192.168.111.7 BalancerMember http://192.168.111.80 </Proxy> ProxyPass / balancer://mycluster ProxyPassReverse / balancer://mycluster Worked! <Proxy balancer://mycluster> BalancerMember http://192.168.111.7 BalancerMember http://192.168.111.80 </Proxy> ProxyPass / balancer://mycluster/ ProxyPassReverse / balancer://mycluster/ (added / at the end)
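On Debian/Ubuntu-style layouts, enabling the missing module from the first answer looks like this (a sketch; on other distributions load mod_proxy_http in httpd.conf instead):

sudo a2enmod proxy proxy_http
sudo service apache2 restart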
[ "stackoverflow", "0028999997.txt" ]
Q: My view gives a 404 error even though a post with that slug exists This did not happen before, but after installing the django-contrib-comments app, when I click on a post detail link to get the post I get a 404 error. In the shell, however, there is no problem. my urls.py: urlpatterns = patterns('', url(r'^$', views.index, name='index'), url(r'^(?P<type>\w+)/(?P<slug>[\w|\W]+)/$', views.included_posts, name="included_posts"), url(r'^post/(?P<slug>[\w|\W]+)/$', views.detail, name="detail"), url(r'^paginated-tags/$', views.listing, name="listing"), ) my views.py: def detail(request, slug): posts = Post.published_posts.all() post = get_object_or_404(posts, slug=slug) return render(request, 'blog/index.html', {'post': post}) published_posts is my custom manager. A: Your included_posts URL catches the post/someslug URL before the detail URL. Move included_posts to the end of urls.py: urlpatterns = patterns('', url(r'^$', views.index, name='index'), url(r'^post/(?P<slug>[\w|\W]+)/$', views.detail, name="detail"), url(r'^paginated-tags/$', views.listing, name="listing"), url(r'^(?P<type>\w+)/(?P<slug>[\w|\W]+)/$', views.included_posts, name="included_posts"), ) As a side note: ([\w-]+) is the common regex for slugs. Your ([\w|\W]+) will match any string (for example: "some () nonslug [] chars")
[ "math.stackexchange", "0000610730.txt" ]
Q: Russell's definition of finite cardinals Whether the thought had been previously adumbrated, perhaps confusedly, I know not, but the name of Bertrand Russell has become associated with the assertion that: the number $2$ is the set of all sets which contain exactly two elements. I think there is even today some adherence to this rather neat definition. But my elementary knowledge of these matters suffices to give rise to two questions: a) should not Russell have said: the proper class of sets having exactly two elements? b) does this imply that to understand the meaning of the symbol $2$ it is necessary to have a model of the entire class of transfinite cardinals, and to accept something like the Axiom of Choice? And is the theory of these infinities in fact research into the Kolmogorov complexity of the concept of a finite cardinal? A: Let me answer your first question, since I'm not quite sure I understand the second one in its current form. Russell's definition was given much before von Neumann introduced the term "proper class". In addition to that, von Neumann suggested we choose a representative from each equivalence class, and using the axiom of choice and the definition of ordinals he suggested, we have the modern definition of cardinals instead. One of the first serious mathematical definitions of cardinal was the one devised by Gottlob Frege and Bertrand Russell, who defined a cardinal number |A| as the set of all sets equipollent to A. (Moore 1982, p. 153; Suppes 1972, p. 109). Unfortunately, the objects produced by this definition are not sets in the sense of Zermelo-Fraenkel set theory, but rather "proper classes" in the terminology of von Neumann. (Weisstein, Eric W. "Cardinal Number." From MathWorld--A Wolfram Web Resource. http://mathworld.wolfram.com/CardinalNumber.html)
[ "salesforce.stackexchange", "0000144471.txt" ]
Q: Active Currency Not Showing Up (Multi-currency enabled, Reports) Apologies if I am asking a silly question here, but I cannot figure out why my list of currencies (When editing a report -> Preview -> Show -> Currencies Using) does not show all my active currencies in the org. If I go to Manage Currencies, I see the currencies I need to report on are active, I can also pull them with SOQL queries, however I can't seem to find a way to "update" the list of available currencies for a particular report from the report edit page. Any ideas? A: Solved - It was very silly actually, for some reason when I use my mouse I could not scroll through the complete list of currencies on the report edit page, but when using the arrow keys I was able to see the full list of active currencies!
[ "stackoverflow", "0003666206.txt" ]
Q: How to embed the jQuery library in an ASP.NET custom server control? I embedded the jQuery library in a custom server control, but it's not working; it throws an "object expected" error. The complete code listing is given below. jquery-1.4.1.js is renamed to jquery.js using System; using System.Collections.Generic; using System.ComponentModel; using System.Linq; using System.Text; using System.Web; using System.Web.UI; using System.Web.UI.WebControls; namespace ServerControl1 { [DefaultProperty("Text")] [ToolboxData("<{0}:ServerControl1 runat=server></{0}:ServerControl1>")] public class ServerControl1 : WebControl { [Bindable(true)] [Category("Appearance")] [DefaultValue("")] [Localizable(true)] public string Text { get { String s = (String)ViewState["Text"]; return ((s == null) ? "[" + this.ID + "]" : s); } set { ViewState["Text"] = value; } } protected override void RenderContents(HtmlTextWriter output) { output.Write("<p>Hello World!!</p>"); } public static void RegisterJQuery(ClientScriptManager cs) { cs.RegisterClientScriptResource(typeof(ServerControl1), "ServerControl1.Resources.jquery.js"); } protected override void OnPreRender(EventArgs e) { if (!this.DesignMode) { // Register the JavaScript libraries ClientScriptManager cs = this.Page.ClientScript; ServerControl1.RegisterJQuery(cs); } } protected override void OnInit(EventArgs e) { string javascript = "<script type='text/javascript'> " + "$(document).ready(function () { " + "alert($('p').text()); " + "});</script>"; if (!(Page.ClientScript.IsClientScriptBlockRegistered("bhelp"))) Page.ClientScript.RegisterStartupScript(this.GetType(), "bhelp", javascript); base.OnInit(e); } } } [assembly: System.Web.UI.WebResource("ServerControl1.Resources.jquery.js", "text/javascript")] A: This article seems to describe what you are trying to do: http://weblogs.asp.net/dwahlin/archive/2007/04/29/creating-custom-asp-net-server-controls-with-embedded-javascript.aspx Is it of any help?
[ "stackoverflow", "0052901057.txt" ]
Q: Getting 'unencrypted_secret' when trying to log in to Bitbucket I'm trying to log in to Bitbucket with two-step verification, but I'm getting this error: field unencrypted_secret is deprecated. Am I doing something wrong? A: Atlassian Bitbucket said: This is a known issue and should be fixed now. ^Lewis So... this happens a lot.
[ "salesforce.stackexchange", "0000111843.txt" ]
Q: Escape backslash I'm trying to define a string with the following value string s = 'Something\Something'; but I got Illegal character sequence '\S' in string literal. I tried escaping the backslash with another backslash but I ended up with a string that has two backslashes ('Something\\Something'). What am I doing wrong? A: You need to use \\ instead of \ string s = 'Something\\Something'; system.debug(s); Output 19:13:22:003 USER_DEBUG [3]|DEBUG|Something\Something
[ "spanish.stackexchange", "0000004700.txt" ]
Q: Difference between "empezar" and "comenzar" What is the difference between empezar and comenzar? Is one more formal than the other? A: Absolutely interchangeable, as Eric states in his answer. I agree. However, I'd add that empezar is a little more common than comenzar. Actually, Ngrams says that in written Spanish, over the last 60 years, the frequency of use has been reversed. And comenzar is slightly more formal. But I have no sources for that other than a native speaker's gut. A: An existing answer, sadly the accepted one, has stated that empezar is more widely used than comenzar for things like "a project". That just doesn't make sense. My conclusion is not that one is used more than the other. The difference is not significant enough to be considered a definitive answer. That's my point. A: I use them interchangeably, but if I had to distinguish, I'd say that comenzar is normally for a group or event, and empezar is normally third person. That being said, ¿puedo comenzar? is perfectly acceptable, so they really are pretty interchangeable. I speak Mexican Spanish, as a caveat.
[ "rpg.stackexchange", "0000035056.txt" ]
Q: Melee touch attack with two weapons? I have a Rogue/Swordsage that fights with 2 daggers and has Two Weapon Fighting (TWF). I'm considering taking Fire Riposte (Tome of Battle, 53) which grants the ability to do an immediate melee touch attack when dealt damage. If I'm fighting with 2 daggers can I even use this? A: Yes, You Can Use It It doesn't say that you need a free hand to make this attack, nor does it say that you need to be holding a particular item in hand to make it. So you can use the maneuver even if you used Two Weapon Fighting in the last round. The fact that it's a touch attack is just to tell you what AC number to roll against. It would be very hard to use if that wasn't the case, as things like shields, two handed weapons, and even a torch would interfere otherwise. Although not a rule, the flavor text gives you a hint about how this works: You focus the pain from a wound you have just suffered into a fiery manifestation of revenge. The maneuver in this case is actually fire shooting off of you when someone else hits you and hitting them back, so you can picture it working without you having to have an empty hand. No Penalties For It Either Using Two Weapon Fighting imposes a penalty on your attacks (-2 in your case), but those penalties don't apply to this attack. The reason why is this: Once you take a two-weapon fighting penalty, the penalty applies to all the attacks you make with that hand during your current action. It does not apply to attacks you make during some other character's turn. The reason why is that Two Weapon Fighting doesn't specify that its penalties last outside of your action. Compare to something like Power Attack, where the effects last until your next turn. So, go ahead and use your twin daggers. It doesn't affect your use of this ability at all.
[ "stackoverflow", "0051689192.txt" ]
Q: Recognize internal files as JSON What's the way to tell Sublime to automatically open its own files, i.e. .sublime-settings .sublime-keymap .sublime-theme etc. as JSON files? (I mean, with proper syntax highlighting). A: In this specific case I'd recommend installing PackageDev. Generally, you can configure Sublime to always assign a specific syntax to a file type.
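Without a plugin, one way to bind those extensions to the JSON syntax is a syntax-specific settings file; a sketch, assuming a Packages/User/JSON.sublime-settings file (the file name must match the syntax name):

{
    "extensions": ["sublime-settings", "sublime-keymap", "sublime-theme"]
}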
[ "dba.stackexchange", "0000252240.txt" ]
Q: PostgreSQL: joining on jsonb array Using this contrived example structure: Base table data { "uid": "b12345", "nested": ["n12345", "n34567"] } Nested table data [ { "uid": "n12345", "message": "Hello world" }, { "uid": "n34567", "message": "Hello world" } ] I'm trying to join the tables on the nested array such that each uid is replaced with its corresponding record: { "uid": "b12345", "nested": [ { "uid": "n12345", "message": "Hello world" }, { "uid": "n34567", "message": "Hello world" } ] } The solution in this post looks very close to what I need, but it seems like the main difference/blocker is that the nested array here is initially flat. Here is the SQL I've been using to test it, including a query closely modeled on the above post. I'll appreciate any help! A: This seems to do what you want: select jsonb_build_object('uid', b.uid, 'nested', jsonb_agg(to_jsonb(m))) from base b join lateral ( select n.uid, n.message from nested n join jsonb_array_elements_text(b.nested) as x(bid) on x.bid = n.uid ) m on true group by b.uid order by b.uid; Online example: https://rextester.com/ASA37564
[ "stackoverflow", "0025732544.txt" ]
Q: Can an XLL return arbitrary-sized arrays from a single-cell formula? I've looked over many threads about XLLs and arrays here on SO, but I'm more confused now than ever. Sorry if the following is completely noob, but that's why I'm here... I've been told that it is not possible to write an XLL function that can return arbitrary arrays of data. You can return arrays from your code, but they must be called from an array formula (CSE). This implies that the size of the return space has to be pre-defined. In our case the function will return N rows (and/or cols) of data, and we have no idea in advance how many there will be. Ideally we would just place a single formula in one cell and have it use as many cells to the left and down as needed. The code examples I've found here don't strongly suggest one or the other, but it seems like XLOPER12 (and maybe XLOPER) have no limits on the size that is returned. Does Excel impose a limit after it is returned? A: An XLL UDF (or also a VBA UDF) can return a variable-size array, but Excel will only assign the result(s) to the cell(s) that the UDF is entered into. To handle an unknown number of results you can: enter the UDF into the maximum number of cells that would be needed or return a handle to an internal array and build other UDFs that can interpret the handles or build a resizer platform: see http://colinlegg.wordpress.com/2014/08/25/self-extending-udfs-part-1/ https://groups.google.com/forum/#!topic/exceldna/oBKpr0BCgmU
[ "stackoverflow", "0012785119.txt" ]
Q: Executing command line commands on a remote server I have a server running HAProxy, and would like to change iptables on that server in order to perform some maintenance on my app servers. What would be the easiest way to do this? Is there a way I can perform system commands on a remote machine from C++? Or would I need to have a program running on the HAProxy machine in order to change the iptables for that machine? Any guidance on this would be really helpful. Thank you A: ssh <hostname> "iptables <parameters>" from Linux plink <hostname> "iptables <parameters>" from Windows
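If you want to trigger that from C++ rather than a shell, a minimal sketch using POSIX popen to shell out to ssh (the host, user, and key-based authentication with no password prompt are all assumptions):

#include <cstdio>
#include <iostream>
#include <string>

int main() {
    // placeholder host and command; mind the quoting if parameters contain spaces
    std::string cmd = "ssh user@haproxy-host \"iptables -L -n\"";
    FILE* pipe = popen(cmd.c_str(), "r");   // popen/pclose are POSIX, not ISO C++
    if (!pipe) { std::perror("popen"); return 1; }
    char buf[256];
    while (std::fgets(buf, sizeof buf, pipe))   // stream the remote command's stdout
        std::cout << buf;
    return pclose(pipe) == 0 ? 0 : 1;
}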
[ "stackoverflow", "0004431697.txt" ]
Q: Attempt to initialize the CRT more than once I am using VS2008 to port code from VC6. When I ran the newly built app, I got this error: "R6031 Attempt to initialize the CRT more than once. This indicates a bug in your application". There are a total of 21 DLLs involved in building this one app. Some DLLs have .c files in them and explicitly call _CRT_INIT() in DllMain. Code below: BOOL APIENTRY DllMain (HANDLE hModule, DWORD dwReason, LPVOID lpReserved) { switch( dwReason) { case DLL_PROCESS_ATTACH: case DLL_THREAD_ATTACH: case DLL_THREAD_DETACH: case DLL_PROCESS_DETACH: if(!_CRT_INIT( hModule, dwReason, lpReserved)) return FALSE; break; } return TRUE; } I am not sure how to fix this problem. Do I need to comment out the call to _CRT_INIT()? Thanks in advance. A: Yes, you should not need to call _CRT_INIT() explicitly. It's probably being called by one or another DllMain. See MSDN for details. Edit I think you have misread MSDN: When building a DLL which uses any of the C Run-time libraries, in order to ensure that the CRT is properly initialized, either the initialization function must be named DllMain() and the entry point must be specified with the linker option -entry:_DllMainCRTStartup@12 - or - You have named the init function DllMain(), so _CRT_INIT() is being called automatically. I think. Why not simply comment out that line and see what happens?
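For reference, a sketch of the same DllMain with the explicit _CRT_INIT() call removed; with the default _DllMainCRTStartup entry point, the CRT initializes itself before your DllMain runs:

BOOL APIENTRY DllMain(HANDLE hModule, DWORD dwReason, LPVOID lpReserved)
{
    switch (dwReason)
    {
    case DLL_PROCESS_ATTACH:
    case DLL_THREAD_ATTACH:
    case DLL_THREAD_DETACH:
    case DLL_PROCESS_DETACH:
        break;   // the CRT start-up code has already run _CRT_INIT for us
    }
    return TRUE;
}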
[ "stackoverflow", "0040966021.txt" ]
Q: Multiple calls using Alamofire to different APIs - how to save results to an array? I'm developing an app that fetches data from a few different APIs using Alamofire (each call is done using a function). Then I have to collect all of the results (Double type in my case) into one array to calculate the average. Because Alamofire uses asynchronous calls, it is impossible to simply append a new value to an array from inside the call. Here is a function that calls each of the functions responsible for fetching data via Alamofire: func collectData() { fetchFromFirstAPI() fetchFromSecondAPI() //etc. } And here is an example of one of these functions: func fetchFromFirstAPI() { let APIKey = "XXXXXXXXX" let APIURL = "http://urlapi.com/api" as URLConvertible let parameters: Parameters = ["APPKEY": APIKey] Alamofire.request(APIURL, method: .get, parameters: parameters, encoding: URLEncoding.default).validate().responseJSON { response in switch response.result { case .success(let data): let json = JSON(data) if let result = json["main"]["value"].double { myArray.append(result) } else { print("error") } case .failure: print("error") } } } And the array: var myArray: [Double] = [] How do I deal with this? A: You said: Because Alamofire uses asynchronous calls, it is impossible to simply append a new value to an array from inside the call. You can actually go ahead and append items to the array inside the completion handlers. And because Alamofire calls its completion handlers on the main queue by default, no further synchronization is needed. And you can use dispatch groups to know when all the requests are done. First, I'd give my methods completion handlers so I know when they're done, e.g.: func fetchFromFirstAPI(completionHandler: @escaping (Double?, Error?) -> Void) { let APIKey = "XXXXXXXXX" let APIURL = "http://urlapi.com/api" let parameters: Parameters = ["APPKEY": APIKey] Alamofire.request(APIURL, parameters: parameters).validate().responseJSON { response in switch response.result { case .success(let data): let json = JSON(data) if let result = json["main"]["value"].double { completionHandler(result, nil) } else { completionHandler(nil, FetchError.valueNotFound) } case .failure(let error): completionHandler(nil, error) } } } Where enum FetchError: Error { case valueNotFound } And you can then do something like: func performRequestsAndAverageResults(completionHandler: @escaping (Double?) -> ()) { var values = [Double]() let group = DispatchGroup() group.enter() fetchFromFirstAPI { value, error in defer { group.leave() } if let value = value { values.append(value) } } group.enter() fetchFromSecondAPI { value, error in defer { group.leave() } if let value = value { values.append(value) } } group.notify(queue: .main) { completionHandler(self.average(values)) } } func average<T: FloatingPoint>(_ values: [T]) -> T? { guard values.count > 0 else { return nil } let sum = values.reduce(0) { $0 + $1 } return sum / T(values.count) } Now, you may want to handle errors differently, but this should illustrate the basic idea: Just append the results in the completion handlers and then use dispatch groups to know when all the requests are done, and in the notify closure, perform whatever calculations you want. Clearly, the above code doesn't care what order these asynchronous tasks finish. If you did care, you'd need to tweak this a little (e.g. save the results to a dictionary and then build a sorted array). But if you're just averaging the results, order doesn't matter and the above is fairly simple.
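A usage sketch calling the combined helper from the answer (my own addition):

performRequestsAndAverageResults { average in
    if let average = average {
        print("average: \(average)")    // all requests finished; values averaged
    } else {
        print("no values were retrieved")
    }
}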
[ "stackoverflow", "0040034139.txt" ]
Q: Hive Protocol Buffer - NullPointerException while creating table in Hive Thanks in advance. Currently we are trying to create a Hive table using Protocol Buffers byte data. We have followed all possible steps for creating a Hive table using Protocol Buffers; however, we get a NullPointerException while creating the table. Below are all the required details. Versions - 1. protoc 3.0.0 2. elephant-bird - 4.14 3. Hortonworks Sandbox Hive version - 1.2.1 4. protobuf-java 3.0.0 The proto file used is package tutorial; option java_package = "com.mycom.hive.protobuf.serialized"; option java_outer_classname = "BankProtoTest"; message BankClass{ required string bankAmount= 1; required string bankLocation= 2; optional string bankName= 3; } message BankInfo { repeated BankClass bankClass = 1; } We are creating the Java class by using the command below protoc.exe -I=input-proto --java_out=java-output input-proto\BankProto.proto The above command generates the Java class for the input protocol buffer file. After this we copied this protocol buffer Java file to a Maven Java project and then created a JAR file. We copied the JAR file into the Hive lib path, i.e. '/usr/hdp/current/hive-client/lib'. Below is the create table command create external table bankproto row format serde "com.twitter.elephantbird.hive.serde.ProtobufDeserializer" with serdeproperties ("serialization.class"="com.mycom.hive.protobuf.serialized.BankProtoTest$BankInfo") stored as inputformat "org.apache.hadoop.mapred.SequenceFileInputFormat" outputformat "org.apache.hadoop.mapred.SequenceFileOutputFormat" location '/user/root/protobuf-input/'; The input file present in the location is stored as a sequence file in HDFS. After executing this command we get the exception below. FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. java.lang.RuntimeException: MetaException(message:org.apache.hadoop.hive.serde2.SerDeException java.lang.NullPointerException) Any help related to this is appreciated. Thanks again. Avinash Deshmukh A: We were able to resolve this issue. The issue was about using a compatible protobuf version. We found that the current version of elephant-bird (4.14) depends on protobuf version 2.6.0
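In practice that means regenerating the Java classes with a 2.6-series protoc and pinning the runtime to match; a sketch of the Maven side (the version comes from the resolution above, but verify it against your own stack):

<dependency>
  <groupId>com.google.protobuf</groupId>
  <artifactId>protobuf-java</artifactId>
  <version>2.6.0</version>
</dependency>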
[ "stackoverflow", "0002495306.txt" ]
Q: Finding longest non-repeating path through connected nodes I've been working on this for a couple of days now without success. Basically, I have a bunch of nodes arranged in a 2D matrix. Every node has four neighbors, except for the nodes on the sides and corners of the matrix, which have 3 and 2 neighbors, respectively. Imagine a bunch of square cards laid out side by side in a rectangular area--the project is actually simulating a sort of card/board game. Each node may or may not be connected to the nodes around it. Each node has a function (get_connections()), that returns the nodes immediately around it that it is connected to (so anywhere from 0 to 4 nodes are returned). Each node also has an "index" property, that contains its position on the board matrix (e.g. '1, 4' -> row 1, col 4). What I am trying to do is find the longest non-repeating path of connected nodes given a particular "start" node. I've uploaded a couple of images that should give a good idea of what I'm trying to do: (source: necessarygames.com) (source: necessarygames.com) In both images, the highlighted red cards are supposedly the longest path of connected cards containing the most upper-left card. However, you can see in both images that a couple of cards that should be in the path have been left out (Romania and Moldova in the first image, Greece and Turkey in the second) Here's the recursive function that I am currently using to find the longest path, given a starting node/card: def get_longest_trip(self, board, processed_connections = list(), processed_countries = list()): #Append this country to the processed countries list, #so we don't re-double over it processed_countries.append(self) possible_trips = dict() if self.get_connections(board): for i, card in enumerate(self.get_connections(board)): if card not in processed_countries: processed_connections.append((self, card)) possible_trips[i] = card.get_longest_trip(board, processed_connections, processed_countries) if possible_trips: longest_trip = [] for i, trip in possible_trips.iteritems(): trip_length = len(trip) if trip_length > len(longest_trip): longest_trip = trip longest_trip.append(self) return longest_trip else: print card_list = [] card_list.append(self) return card_list else: #If no connections from start_card, just return the start card #as the longest trip card_list = [] card_list.append(board.start_card) return card_list The problem here has to do with the processed_countries list: if you look at my first screenshot, you can see that what has happened is that when Ukraine came around, it looked at its two possible choices for longest path (Moldova-Romania, or Turkey-Bulgaria), saw that they were both equal, and chose one indiscriminately. Now when Hungary comes around, it can't attempt to make a path through Romania (where the longest path would actually be), because Romania has been added to the processed_countries list by Ukraine. Any help on this is EXTREMELY appreciated. If you can find me a solution to this, recursive or not, I'd be happy to donate some $$ to you. I've uploaded my full source code (Python 2.6, Pygame 1.9 required) to: http://www.necessarygames.com/junk/planes_trains.zip The relevant code is in src/main.py, which is all set to run. A: You do know the longest path problem in a graph with cycles is NP-hard? A: ...Romania has been added to the processed_countries list by Ukraine. Use separate processed_countries lists for each graph path.
They say one code example is worth a thousand words, so I've changed your code a little (untested): def get_longest_trip(self, board, processed_countries = list()): # see https://stackoverflow.com/questions/576988/python-specific-antipatterns-and-bad-practices/577198#577198 processed_countries = list(processed_countries) processed_countries.append(self) longest_trip = list() if self.get_connections(board): possible_trips = list() for card in self.get_connections(board): if card not in processed_countries: possible_trips.append(card.get_longest_trip(board, processed_countries)) if possible_trips: longest_trip = max(possible_trips, key=len) longest_trip.append(self) if not longest_trip: longest_trip.append(self) return longest_trip Unrelated matters: Traceback (most recent call last): File "main.py", line 1171, in <module> main() File "main.py", line 1162, in main interface = Interface(continent, screen, ev_manager) File "main.py", line 72, in __init__ self.deck = Deck(ev_manager, continent) File "main.py", line 125, in __init__ self.rebuild(continent) File "main.py", line 148, in rebuild self.stack.append(CountryCard(country, self.ev_manager)) File "main.py", line 1093, in __init__ Card.__init__(self, COUNTRY, country.name, country.image, country.color, ev_manager) File "main.py", line 693, in __init__ self.set_text(text) File "main.py", line 721, in set_text self.rendered_text = self.render_text_rec(text) File "main.py", line 817, in render_text_rec return render_textrect(text, self.font, text_rect, self.text_color, self.text_bgcolor, 1) File "/home/vasi/Desktop/Planes and Trains/src/textrect.py", line 47, in render_textrect raise TextRectException, "The word " + word + " is too long to fit in the rect passed." textrect.TextRectException: The word Montenegro is too long to fit in the rect passed. There are 16 different bak files in your source package. Sixteen. Sixteeeen. Think about it and start to use version control.
[ "stackoverflow", "0035908245.txt" ]
Q: Trying to simulate a soccer game shootout using arrays I have to create a game of 5 rounds simulating a soccer shootout using a 2x3 array that represents the goal. The computer randomly picks 3 places to block and the user chooses one place to shoot. If the user chooses a coordinate that is not blocked then it's a goal. Two functions are needed: one where the computer picks 3 random places to block, and the other prints out the goal every round. If the user scores 3 times then they win, otherwise they lose. The output should look like this (B=Blocked, G=Goal, "-" = empty space): B - B B - G I've been stuck on my code and have gotten an error that I just can't seem to fix in both functions #include <iostream> #include <stdlib.h> #include <cmath> #include <ctime> using namespace std; void computerPick(char soccer[]); void shot(char shooter[]); int main() { int userInputX; int userInputY; srand(time(NULL)); char soccer[2][3]; for(int i=0; i<2; i++) { for(int j=0; j<3; j++) { soccer[i][j]='-'; } } cout<<"Pick a X coordinate to shoot at: "<<endl; cin>>userInputX; cout<<"Pick a Y coordinate to shoot at: "<<endl; cin>>userInputY; computerPick(soccer); shot(soccer,userInputY,userInputX); } void computerPick(char soccer[]) { int x = rand()%3; int y = rand()%2; soccer[x][y]='B'; } void shot(char shooter[], int userInputY, int userInputX) { int score=0; if(shooter[userInputX][userInputY]!='B') cout<<"shot is good"<<endl; else cout<<"shot is blocked"<<endl; } A: You have to use the correct types for arguments, and the prototype declarations have to match the function definitions. This code compiles: #include <iostream> #include <stdlib.h> #include <cmath> #include <ctime> using namespace std; void computerPick(char soccer[][3]); void shot(char shooter[][3], int userInputY, int userInputX); int main() { int userInputX; int userInputY; srand(time(NULL)); char soccer[2][3]; for(int i=0; i<2; i++) { for(int j=0; j<3; j++) { soccer[i][j]='-'; } } cout<<"Pick a X coordinate to shoot at: "<<endl; cin>>userInputX; cout<<"Pick a Y coordinate to shoot at: "<<endl; cin>>userInputY; computerPick(soccer); shot(soccer,userInputY,userInputX); } void computerPick(char soccer[][3]) { int x = rand()%3; int y = rand()%2; soccer[y][x]='B'; /* y indexes the 2 rows, x the 3 columns */ } void shot(char shooter[][3], int userInputY, int userInputX) { int score=0; if(shooter[userInputY][userInputX]!='B') /* row first (Y), then column (X) */ cout<<"shot is good"<<endl; else cout<<"shot is blocked"<<endl; }
[ "stackoverflow", "0030756966.txt" ]
Q: How to use own Material Design icons with MaterializeCSS? I need to know how to add my own MDI icons in custom classes; the provided mdi classes don't meet my expectations. How can I add those and use them like the default ones? <i class="mdi-image-facebook"></i> <i class="mdi-image-linkedin"></i> Thanks in advance A: Well, there is an alternative solution provided by this website. It includes a pretty good number of icons, such as social buttons. http://materialdesignicons.com/getting-started You just have to add this CSS link to get the icons and add them as classes <link href="css/materialdesignicons.min.css" media="all" rel="stylesheet" type="text/css" /> and <i class="mdi mdi-facebook-box"></i> <!-- facebook box -->
[ "stackoverflow", "0013959073.txt" ]
Q: How to style filepicker.io iframe I am using: filepicker.pick({ container: "filepicker_iframe" ...) to open a filepicker dialog in given iframe. Is it possible to somehow style elements in iframe to match my site styles? A: Yes, you can provide a css file to include inside the iframe on your developer portal
[ "travel.stackexchange", "0000051965.txt" ]
Q: Can I get a temporary/pay-as-you-go EC card for use in Germany? I am temporarily working in Germany, and am at a company where the onsite canteen only supports payment using an EC card. I am from the UK, where this concept doesn't exist (in fact, I think EC cards don't exist outside Germany). I have access to conventional MasterCard/Visa credit/debit/ATM cards, and of course Euros in cash. Is it possible to get some form of temporary EC card, ideally with a pay-as-you-go style model, where I can top the card up with money and run it down? I know such a concept exists for other card types (for example Mastercard). Ideally I would apply for/get this online, or with minimum paperwork/hassle (e.g. I would like to avoid opening a full current account with a German bank). I don't speak German either. A: The EC cards where rebranded as Girocard a few years back and is only available with a bank account. There are some banks that let (almost) anyone open an account that can only carry a positive balance (as long as you aren't a US citizen or resident). That would take a week or two until you have the card. Usually banks with walk-in branches don't offer free accounts in your situation and online-based banks usually don't have an English-language website. Any of the free bank accounts at online banks do the trick if you have a German address (e.g. DKB, DAB bank, Consorsbank, comdirect, ING Diba). Opening an account is not too complicated, you fill in a simple online form with your details, print the application, go to a post office with your passport and registration confirmation, and wait for your card. Money to such accounts can usually only be transferred for free with a bank transfer from another european Euro account. All other methods incur a hefty fee. However, comdirect customers can use Commerzbank ATMs to deposit cash for free and 1822direkt customers can use ATMs of the Frankfurter Sparkasse as well. Maybe a V Pay or Maestro prepaid card would work at that machine but I know no German bank that offers one of those. There are some banks in other countries that have such cards.
[ "stackoverflow", "0053316710.txt" ]
Q: Spark unpersisting dataframe after it has been persisted again due to shuffle Let's say we have the following scenario: val df: DataFrame = dataframe val df2 = df.repartition(col("id")).cache ... some transformations/joins by "id" resulting in `val dfTransformed` ... val df3 = dfTransformed.groupBy("name").agg(<someaggregations>) val df4 = df3.repartition(col("id")).cache ... some transformations/joins by "id"... ... My question is: do I need to unpersist df2 at some point? Or when I persist df4, does it automatically overwrite df2? If I need to unpersist df2, then when should I unpersist it? If I do: val df3 = df2.groupBy("name").agg(<someaggregations>) df2.unpersist val df4 = df3.repartition(col("id")).cache Won't I lose some performance due to removing data from memory? If I do: val df3 = df2.groupBy("name").agg(<someaggregations>) val df4 = df3.repartition(col("id")).cache df2.unpersist Won't I actually unpersist df4? A: Or when I persist df4, does it automatically overwrite df2? Spark doesn't overwrite df2, because you may have a different branch of the computation DAG started from df2, something like val df4 = df3.repartition(col("id")).cache val df2_1 = df2.groupBy("another_name") Won't I lose some performance due to removing data from memory? In general yes. The main thing you need to know is that Spark uses lazy execution. So at the stage val df3 = df2.groupBy("name").agg(<someaggregations>) nothing actually happens. Look at the official Spark guide on transformations and actions. Won't I actually unpersist df4? No.
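A minimal PySpark sketch of the ordering discussed in the answer (the file path and column names are made up, and repartition stands in for the pseudocode partition call). The point is to force the downstream cache with an action before unpersisting the upstream one, so the computation of df4 can still read df2 from memory:

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()
df2 = spark.read.parquet("/data/events").repartition("id").cache()  # upstream cache
df3 = df2.groupBy("name").agg(F.count("*").alias("cnt"))
df4 = df3.repartition("id").cache()  # downstream cache, still lazy at this point
df4.count()      # action: materializes df4, reading df2 from memory while doing so
df2.unpersist()  # safe now; unpersisting before the action would force recomputation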
[ "stackoverflow", "0040784703.txt" ]
Q: Error While publishing Azure Cloud service with .net 4.6 Error: "The feature named NetFx46 that is required by the uploaded package is not available in the OS * chosen for the deployment." A: In the Service Configuration files (*.cscfg), change osFamily="4" to osFamily="5". osFamily 4 is Windows Server 2012 R2, which does not ship with .NET 4.6, while osFamily 5 is Windows Server 2016, which does.
[ "stackoverflow", "0061265951.txt" ]
Q: Count occurrence of multiple columns by group in R I have a df that looks like this: Room Item Red Square Basement Ball TRUE FALSE Basement Basket TRUE TRUE Basement Table FALSE TRUE Basement Desk TRUE TRUE I want to count the number of Square, Red, and both square + red items, so the final DF looks like this: Room Square Red Both Basement 1 1 2 I tried df %>% group_by(Room, Square, Red) %>% count() to give me a count of the categories, but I'm not sure how to format it as I want it. A: In this pipeline it is necessary to give the newly-created variables different names so that, when you use summarise, the second and third variables don't use the newly-created variable Square. I later rename them within the same pipeline. df %>% group_by(Room) %>% summarise( Square_new = sum(Square & !Red), Red_new = sum(Red & !Square), Both_new = sum(Square & Red) ) %>% rename(Square = Square_new, Red = Red_new, Both = Both_new) Output # A tibble: 1 x 4 # Room Square Red Both # <chr> <int> <int> <int> # 1 Basement 1 1 2
[ "math.meta.stackexchange", "0000030326.txt" ]
Q: 1/2 of question answered by different people In the question I posed here: A characterization of Weak Convergence in $L^p$ spaces I have a two-way implication; one direction was answered by one person, the other by someone else. I'd like to reward both of them. I know there's a way to do this with a bounty, but I can't find the link. How do I reward multiple answers? A: In terms of reputation, one way to minimise the difference between the two answers is to upvote and accept the answer (by ticking it) you think is the most clear and of higher quality. This would gain them +25 rep. For the other answer, you can upvote too and provide the bounty of +50. This would gain them +60 rep; the difference is only 35 rep. As @Mars pointed out below, the difference in reputation can slowly be reduced as the accepted answer floats to the top and, naturally, more people would upvote that one. As has been said in the comments, you need to wait two days to start a bounty. After that time, a link for 'start a bounty' will appear under 'add a comment', in which you can select the amount (+50) and the reason, for which you can choose Reward existing answer - One or more of the answers is exemplary and is worthy of an additional bounty. Note that at any one time, at most three bounties can be started by one person. From your profile I see you've already started one, so as long as you don't start two more before this question you'll be fine. N.B. In some cases, when you don't award the bounty of +50 within seven days and the grace period, only a bounty of +25 is given to the other answerer. This way may work as well. For further details, see here, taken from the Help Centre: If you do not award your bounty within 7 days (plus the grace period), the highest voted answer created after the bounty started with a minimum score of 2 will be awarded half the bounty amount (or the full amount, if the answer is also accepted). If two or more eligible answers have the same score (their scores are tied), the oldest answer is chosen. If there's no answer meeting those criteria, no bounty is awarded to anyone. If the bounty was started by the question owner, and the question owner accepts an answer posted during the bounty period, and the bounty expires without an explicit award then we assume the bounty owner liked the answer they accepted and award it the full bounty amount at the time of bounty expiration.
[ "pt.stackoverflow", "0000448783.txt" ]
Q: Removing the less frequent rows from a pandas DataFrame I have a DataFrame with more than 13000 rows and I would like to remove some of them based on how frequently they appear, considering the column named variedade. df.variedade.value_counts() RB867515 5084 SP813250 2500 RB855453 981 others 849 RB855156 750 RB855536 633 SP832847 561 RB835054 541 SP801842 423 SP835073 326 RB835486 253 RB845210 199 SP803280 187 RB72454 164 RB966928 146 Name: variedade, dtype: int64 I would like to keep only the 3 varieties that appear most often and delete the rest, thus bringing the number of rows down to a little over 8000. I tried the command: v = df[['variedade']] df[v.replace(v.apply(pd.Series.value_counts)).gt(900).all(1)] However, after requesting a value_counts of the variedade column, it shows that I still have more than 13000 rows. Does anyone have an idea of where I am going wrong? A: Combine value_counts with head(3).index to create a mask with the elements that appear most often in the DataFrame. Then select them with isin. mask = df['variedade'].value_counts().head(3).index df = df.loc[df['variedade'].isin(mask)]
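A quick self-contained check of the accepted approach with made-up data (value_counts sorts in descending order, so head(3) is exactly the top three):

import pandas as pd

df = pd.DataFrame({"variedade": ["RB867515"] * 5 + ["SP813250"] * 4
                                + ["RB855453"] * 3 + ["others"] * 2 + ["RB966928"]})
mask = df["variedade"].value_counts().head(3).index
df = df.loc[df["variedade"].isin(mask)]
print(df["variedade"].value_counts())
# RB867515    5
# SP813250    4
# RB855453    3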
[ "stackoverflow", "0050244743.txt" ]
Q: Firestore: enablePersistence() and then using redux with offline database? So, essentially, I'm using Create-React-App and I want to allow users to add data to redux either offline or online. I also want to sync redux with Firestore. In my main attempt, I initialize my firebase settings: // ./firebase/firebase.js var firestoreDatabase; firebase.initializeApp(config.firebase.config); firebase.firestore().enablePersistence().then(() => { firestoreDatabase = firebase.firestore(); }); export { firebase, firestoreDatabase }; Then, to make sure this has fired properly (this is definitely wrong, but I can't figure out the best place to catch the enablePersistence() return... ): // src/index.js import { firebase, firestoreDatabase } from "./firebase/firebase"; firebase.auth().onAuthStateChanged(user => { store.dispatch(setReduxData()).then(() => { if (firestoreDatabase) { ReactDOM.render(application, document.getElementById("root")); } }); }); ACTIONS FILE import { firestoreDatabase } from "../firebase/firebase"; export const setReduxData = () => { return (dispatch, getState) => { const uid = getState().auth.uid; const data = { newData: '123' }; return firestoreDatabase .collection("Users") .doc(uid) .collection("data") .add(data) .then(ref => { // so, this never gets fired dispatch( addData({ id: ref.id, ...data }) ); }) So the dispatch never gets fired, however, when I refresh the application, the data I entered { newData: '123' } is added to the store. I think my entire way of handling this is wrong. I don't like exporting firestoreDatabase as undefined and then updating it when enablePersistence() returns... I would like to just enablePersistence() once and then use the cache or the server depending on if the user is online or not... Redux should operate the same regardless... Any thoughts and feedback are welcome! A: So, I figured out how to load Firestore properly in my application: In my firebase.js file: import * as firebase from "firebase"; import config from "../config"; // https://firebase.google.com/docs/reference/js/ firebase.initializeApp(config.firebase.config); const database = firebase.database(); const auth = firebase.auth(); const googleAuthProvider = new firebase.auth.GoogleAuthProvider(); export { firebase, googleAuthProvider, auth, database }; Then, I added a firestore.js file: import { firebase } from "./firebase"; import "firebase/firestore"; import { notification } from "antd"; firebase.firestore().settings({ timestampsInSnapshots: true }); const handleError = error => { if (error === "failed-precondition") { notification.open({ message: "Error", description: "Multiple tabs open, offline data only works in one tab at a a time." }); } else if (error === "unimplemented") { notification.open({ message: "Error", description: "Cannot save offline on this browser." }); } }; export default firebase .firestore() .enablePersistence() .then(() => firebase.firestore()) .catch(err => { handleError(err.code); return firebase.firestore(); }); And then I call firestore in my actions file: import firestore from "../firebase/firestore"; return firestore .then(db => { var newData = db .collection("Users") .doc(uid) .collection("userData") .doc(); newData.set(data); var id = newData.id; dispatch(addData({ id, ...data })); }) .catch(err => { // notification }); Essentially, I separated out my redux and Firestore, but ultimately they are connected through the Firestore id.
[ "stackoverflow", "0005532196.txt" ]
Q: Apache Commons Math 2.2 Percentile bug? I am not 100% sure if this is a bug or I am not doing something right, but if you give Percentile a large amount of data that consists entirely of the same value (see code below), the evaluate method takes a very long time. If you give Percentile random values, evaluate takes a considerably shorter time. As noted below, Median is a subclass of Percentile. Percentile java doc private void testOne(){ int size = 200000; int sameValue = 100; List<Double> list = new ArrayList<Double>(); for (int i = 0; i < size; i++) { list.add((double)sameValue); } Median m = new Median(); m.setData(ArrayUtils.toPrimitive(list.toArray(new Double[0]))); long start = System.currentTimeMillis(); System.out.println("Start:"+ start); double result = m.evaluate(); System.out.println("Result:" + result); System.out.println("Time:"+ (System.currentTimeMillis()- start)); } private void testTwo(){ int size = 200000; List<Double> list = new ArrayList<Double>(); Random r = new Random(); for (int i = 0; i < size; i++) { list.add(r.nextDouble() * 100.0); } Median m = new Median(); m.setData(ArrayUtils.toPrimitive(list.toArray(new Double[0]))); long start = System.currentTimeMillis(); System.out.println("Start:"+ start); double result = m.evaluate(); System.out.println("Result:" + result); System.out.println("Time:"+ (System.currentTimeMillis()- start)); } A: This is a known issue introduced between versions 2.0 and 2.1 and has been fixed for version 3.1. Version 2.0 did indeed involve sorting the data, but in 2.1 they seem to have switched to a selection algorithm. However, a bug in their implementation of that led to some bad behavior for data with lots of identical values. Basically they used >= and <= instead of > and <. A: It's well known that some algorithms can exhibit slower performance for certain data sets. Performance can actually be improved by randomizing the data set before performing the operation. Since percentile probably involves sorting the data, I'm guessing that your "bug" is not really a defect in the code, but rather the manifestation of one of the slower performing data sets.
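To see why the non-strict comparisons hurt, here is a small Python model of a Hoare-style partition step (a sketch of the bug class described in the first answer, not the actual Commons Math source). With strict comparisons the two pointers stop on equal elements and split an all-equal array evenly; with <= and >= one pointer sweeps the whole range and the selection makes essentially no progress, which degrades the overall selection to quadratic time:

def partition_sizes(a, pivot, strict):
    # Returns the sizes of the two sides after one partition pass.
    # Only meant for this demo; the strict inner loops assume the pivot
    # value actually occurs in the array.
    i, j = 0, len(a) - 1
    while True:
        if strict:
            while a[i] < pivot:
                i += 1
            while a[j] > pivot:
                j -= 1
        else:  # the buggy non-strict variant, guarded so it still terminates
            while i < j and a[i] <= pivot:
                i += 1
            while i < j and a[j] >= pivot:
                j -= 1
        if i >= j:
            return j + 1, len(a) - (j + 1)
        a[i], a[j] = a[j], a[i]
        i += 1
        j -= 1

same = [100.0] * 16
print(partition_sizes(same[:], 100.0, strict=True))   # (8, 8): balanced split
print(partition_sizes(same[:], 100.0, strict=False))  # (16, 0): degenerate split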
[ "stackoverflow", "0019748653.txt" ]
Q: Loop to change content of href for many anchors The content of my posts in Wordpress is one big chunk of markup. It is coming from MS Word, so it is text wrapped in nested HTML tags and inline styles. I have a segment of code that is repeated many times in the content (it represents footnotes). This segment, for the first footnote for example, is: <sup><a title="" href="file:///C:/Users/hp/Desktop/file.docx#_ftn1" name="_ftnref1"> <span class="MsoFootnoteReference"> <span dir="LTR"> <span class="MsoFootnoteReference"> <span lang="EN-US" style="font-size: 16pt; line-height: 115%;"> [1] </span> </span> </span> </span> </a></sup> ..... <a title="" href="file:///C:/Users/hp/Desktop/file.docx#_ftnref1" name="_ftn1"> <span class="MsoFootnoteReference"> <span dir="LTR" lang="EN-US" style="font-size: 12.0pt; font-family: 'Simplified Arabic','serif';"> <span class="MsoFootnoteReference"> <span lang="EN-US" style="font-size: 12pt; line-height: 115%;"> [1] </span> </span> </span> </span> </a> My goal is to change the 2 hrefs from: href="file:///C:/Users/hp/Desktop/file.docx#_ftn1" href="file:///C:/Users/hp/Desktop/file.docx#_ftnref1" to: href="#_ftn1" href="#_ftnref1" so that the user can jump from one anchor to the other. Questions: 1- Is it better to use a server side language instead of jquery? 2- How to loop over the repetitive segments and change the href contents of each couple of anchors? Thank you very much in advance for your invaluable assistance. Solution: With the use of the Regular expression provided by Broxzier + PHP, the code below is working and can be applied to any data before persisting it to the database. if(preg_match_all('/href\s*=\s*"[^"]+(#[^"]+)"/',get_the_content(),$match)) { echo preg_replace('/href\s*=\s*"[^"]+(#[^"]+)"/','href="$1"', get_the_content()); } A: 1- Is it better to use a server side language instead of jquery? Neither. The best and fastest option would be to totally remove the website and page name from the link if they're the same as the current page. One way would be using Regular Expressions; this could be done via JavaScript, but I strongly suggest doing this by using a text editor and replacing the old data (Wordpress saves revisions anyway). The following regex will grab the href attribute href\s*=\s*"[^"]+(#[^"]+)" Replace this with: href="\1" And you're done. 2- How to loop over the repetitive segments and change the href contents of each couple of anchors? Use a global flag to do this. Since it's content I advise you to do it manually or change the regex so that it will only match the current URL. Please note that this will also replace occurrences in the content, if there is any text like href="website#flag" in there. I assumed this was not the case.
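If you want to check the pattern programmatically rather than in a text editor, the same regex and backreference behave identically in, for example, Python's re module (shown here on one of the anchors from the question):

import re

html = '<a title="" href="file:///C:/Users/hp/Desktop/file.docx#_ftn1" name="_ftnref1">'
print(re.sub(r'href\s*=\s*"[^"]+(#[^"]+)"', r'href="\1"', html))
# <a title="" href="#_ftn1" name="_ftnref1">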
[ "stackoverflow", "0052458947.txt" ]
Q: Get bad result for random walk I want to implement a random walk and compute the steady state. Suppose my graph is given in the following image: The graph above is defined in a file as follows: 1 2 0.9 1 3 0.1 2 1 0.8 2 2 0.1 2 4 0.1 etc To read and build this graph, I use the following method: def _build_og(self, original_ppi): """ Build the original graph, without any nodes removed. """ try: graph_fp = open(original_ppi, 'r') except IOError: sys.exit("Could not open file: {}".format(original_ppi)) G = nx.DiGraph() edge_list = [] # parse network input for line in graph_fp.readlines(): split_line = line.rstrip().split('\t') # assume input graph is a simple edgelist with weights edge_list.append((split_line[0], split_line[1], float(split_line[2]))) G.add_weighted_edges_from(edge_list) graph_fp.close() print edge_list return G In the function above, do I need to define the graph as a DiGraph or simply a Graph? We build the transition matrix as follows: def _build_matrices(self, original_ppi, low_list, remove_nodes): """ Build column-normalized adjacency matrix for each graph. NOTE: these are column-normalized adjacency matrices (not nx graphs), used to compute each p-vector """ original_graph = self._build_og(original_ppi) self.OG = original_graph og_not_normalized = nx.to_numpy_matrix(original_graph) self.og_matrix = self._normalize_cols(og_not_normalized) And I normalize the matrix using: def _normalize_cols(self, matrix): """ Normalize the columns of the adjacency matrix """ return normalize(matrix, norm='l1', axis=0) Now, to simulate the random walk, we define: def run_exp(self, source): CONV_THRESHOLD = 0.000001 # set up the starting probability vector p_0 = self._set_up_p0(source) diff_norm = 1 # this needs to be a deep copy, since we're reusing p_0 later p_t = np.copy(p_0) while (diff_norm > CONV_THRESHOLD): # first, calculate p^(t + 1) from p^(t) p_t_1 = self._calculate_next_p(p_t, p_0) # calculate L1 norm of difference between p^(t + 1) and p^(t), # for checking the convergence condition diff_norm = np.linalg.norm(np.subtract(p_t_1, p_t), 1) # then, set p^(t) = p^(t + 1), and loop again if necessary # no deep copy necessary here, we're just renaming p p_t = p_t_1 We define the initial state (p_0) by using the following method: def _set_up_p0(self, source): """ Set up and return the 0th probability vector. """ p_0 = [0] * self.OG.number_of_nodes() # convert self.OG.number_of_nodes() to list l = list(self.OG.nodes()) #nx.draw(self.OG, with_labels=True) #plt.show() for source_id in source: try: # matrix columns are in the same order as nodes in original nx # graph, so we can get the index of the source node from the OG source_index = l.index(source_id) p_0[source_index] = 1 / float(len(source)) except ValueError: sys.exit("Source node {} is not in original graph. Source: {}. Exiting.".format(source_id, source)) return np.array(p_0) To generate the next state, we use the following function and the power iteration strategy: def _calculate_next_p(self, p_t, p_0): """ Calculate the next probability vector. """ print 'p_0\t{}'.format(p_0) print 'p_t\t{}'.format(p_t) epsilon = np.squeeze(np.asarray(np.dot(self.og_matrix, p_t))) print 'epsilon\t{}'.format(epsilon) print 10*"*" return np.array(epsilon) Suppose the random walk can start from any node (1, 2, 3 or 4). When running the code I get the following result: 2 0.32 3 0.31 1 0.25 4 0.11 The result must be: (0.28, 0.30, 0.04, 0.38). So can someone help me to detect where my mistake is? I don't know if the problem is in my transition matrix.
A: Here is what the matrix should be (given that your transition matrix multiplies the state vector from the left, it is a left stochastic matrix, where the columns add up to 1, and the (i, j) entry is the probability of going from j to i). import numpy as np transition = np.array([[0, 0.8, 0, 0.1], [0.9, 0.1, 0.5, 0], [0.1, 0, 0.3, 0], [0, 0.1, 0.2, 0.9]]) state = np.array([1, 0, 0, 0]) # could be any other initial position diff = tol = 0.001 while diff >= tol: next_state = transition.dot(state) diff = np.linalg.norm(next_state - state, ord=np.inf) state = next_state print(np.around(state, 3)) This prints [0.279 0.302 0.04 0.378]. I can't tell if you are loading the data incorrectly, or something else. The step with "column normalization" is a warning sign: if the given transition probabilities don't add up to 1, you should report bad data, not normalize the columns. And I don't know why you use NetworkX at all when the data is already presented as a matrix: the table you are given can be read as (column, row, entry) triples, and this matrix is what is needed for calculations.
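As an extra sanity check (not part of the original answer): the steady state is the eigenvector of the column-stochastic transition matrix for eigenvalue 1, normalized to sum to 1, so it can also be computed directly instead of by iteration:

import numpy as np

transition = np.array([[0, 0.8, 0, 0.1],
                       [0.9, 0.1, 0.5, 0],
                       [0.1, 0, 0.3, 0],
                       [0, 0.1, 0.2, 0.9]])
vals, vecs = np.linalg.eig(transition)
v = np.real(vecs[:, np.argmax(np.real(vals))])  # eigenvalue 1 has the largest real part
print(np.around(v / v.sum(), 3))  # ~ [0.279 0.302 0.04 0.378]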
[ "math.stackexchange", "0000149155.txt" ]
Q: Convincing Proof of Measure Theory Approximation There is a standard means of approximating a bounded nonnegative function from below in a measure theoretic setting, which is $$f_n=2^{-n}\lfloor{2^nf}\rfloor\wedge n=2^{-n}\sum_{j=0}^{n2^n}j\mathbf{1}_{A_{n_j}}$$ where $A_{n_j}=f^{-1}[\frac{j}{2^n},\frac{j+1}{2^n})$ for $j\neq n2^n$ and $A_{n_j}=f^{-1}[n,\infty)$ for $j=n2^n$. I see intuitively (by drawing pictures) why this is a uniform approximation, and where the second equality comes from. However I can't see how to prove these rigorously in a clean way. Does anyone have a clean and insightful proof of (a) the second equality and (b) that the $f_n$ uniformly approximate $f$? Many thanks! A: Since $f$ is bounded, for every $n$ large enough $f\leqslant n$ everywhere hence the part $\wedge n$ of the formula defining $f_n$ does not modify the function. Then $f_n(x)=j/2^n$ if and only if $j/2^n\leqslant f(x)\lt (j+1)/2^n$ hence $f_n\leqslant f\lt f_n+2^{-n}$, that is, $0\leqslant f-f_n\lt 2^{-n}$ uniformly.
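For part (a), the step left implicit above is a pointwise computation: on each $A_{n_j}$ with $j\lt n2^n$ one has $j\leqslant 2^nf\lt j+1$, hence $2^{-n}\lfloor 2^nf\rfloor=j/2^n$, and taking the minimum with $n$ changes nothing because $j/2^n\lt n$; on $A_{n_{n2^n}}=f^{-1}[n,\infty)$ one has $2^{-n}\lfloor 2^nf\rfloor\geqslant n$, so the minimum equals $n=(n2^n)/2^n$. Since the sets $A_{n_j}$, $0\leqslant j\leqslant n2^n$, partition the space (here $f\geqslant 0$), summing over them gives $$2^{-n}\lfloor 2^nf\rfloor\wedge n=\sum_{j=0}^{n2^n}\frac{j}{2^n}\mathbf{1}_{A_{n_j}}=2^{-n}\sum_{j=0}^{n2^n}j\mathbf{1}_{A_{n_j}}.$$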
[ "stackoverflow", "0042961371.txt" ]
Q: Regex matching with end dollar sign on URL pattern in python Here's the scenario: I'd like to extract the secondary path in a URL, so the following URLs should all return 'a-c-d' /opportunity/a-c-d /opportunity/a-c-d/ /opportunity/a-c-d/123/456/ /opportunity/a-c-d/?x=1 /opportunity/a-c-d?x=1 My code snippet is as follows: m = re.match("^/opportunity/([^/]+)[\?|/|$]", "/opportunity/a-c-d") if m: print m.group(1) It works for all possible URLs above EXCEPT the first one /opportunity/a-c-d. Could anyone help explain the reason and rectify my regex please? Thanks a lot! A: Don't do this. Use the urlparse module instead. Here is some test code: from urlparse import urlparse urls = [ '/opportunity/a-c-d', '/opportunity/a-c-d/', '/opportunity/a-c-d/123/456/', '/opportunity/a-c-d/?x=1', '/opportunity/a-c-d?x=1', ] def secondary(url): try: return urlparse(url).path.split('/')[2] except IndexError: return None for url in urls: print '{0:30s} => {1}'.format(url, secondary(url)) and here is the output /opportunity/a-c-d => a-c-d /opportunity/a-c-d/ => a-c-d /opportunity/a-c-d/123/456/ => a-c-d /opportunity/a-c-d/?x=1 => a-c-d /opportunity/a-c-d?x=1 => a-c-d A: Inside a character class, the $ in your regex matches the literal '$' character, not the end of the line (the | characters there are literal pipes too). Instead, you probably want this: m = re.match(r"^/opportunity/([^/?]+)\/?\??", "/opportunity/a-c-d") if m: print m.group(1)
[ "stackoverflow", "0030310030.txt" ]
Q: How to check for null Json property in Azure Stream Analytics query? We have an input stream of Json events from Event Hub in the following form: ... { "DeviceId": null, "ReportDateUtc": "2015-05-12T20:57:13.0000000Z", ... }, { "DeviceId": "device123", "ReportDateUtc": "2015-05-12T20:57:13.0000000Z", ... } ... When I test-run the following query, the output record count is 0: SELECT * FROM [events-input] WHERE DeviceId IS NULL Looks like Json nulls are not exactly SQL NULLs, so what would be the proper way to check for null values in a query? A: There is a bug in the in-portal debugging experience where NULL values are not handled correctly. This will be fixed soon. If you start an actual job it will work properly.
[ "stackoverflow", "0022270100.txt" ]
Q: Converting Dictionary into multiple 1D int[] arrays How can I convert a dictionary<string, string> into a number of one-dimensional integer arrays at runtime? For example, Dictionary<string, string> dic = new Dictionary<string, string>(); dic.Add("one", "1,3,6,8"); dic.Add("two", "2,3,6,9"); dic.Add("n", "3,4,1,8"); The desired resulting arrays would be: int[] arr1 = new int[] {1,3,6,8}; int[] arr2 = new int[] {2,3,6,9}; int[] arrn = new int[] {3,4,1,8}; Could anyone help me to achieve this conversion? A: Well, for there to be separate variables, you'll need to know all the numbers involved at compile-time. (You can't just generate more variables at execution time... and you really don't want to get into the business of working out the word representations of numbers.) At that point it's fairly easy though: arr1 = ParseValues(dic["one"]); arr2 = ParseValues(dic["two"]); arr3 = ParseValues(dic["three"]); // etc static int[] ParseValues(string values) { return values.Split(',').Select(int.Parse).ToArray(); } Note that this will fail with empty strings - if you need it to handle that (returning an empty array) you want: return values.Split(new[] {','}, StringSplitOptions.RemoveEmptyEntries) .Select(int.Parse) .ToArray(); EDIT: If you don't know how many values there will be, but you know they'll always be present with no gaps, you could have: // This has to go as far as you need to cope with string[] words = { "one", "two", "three", "four", "five", "six", ... }; List<int[]> arrays = words.TakeWhile(word => dic.ContainsKey(word)) .Select(word => dic[word].Split(',') .Select(int.Parse) .ToArray()) .ToList(); (Note that in this list, element 0 will correspond to "one" etc... it'll all be offset by one.)
[ "stackoverflow", "0005689721.txt" ]
Q: multithreading in android I have an app on Android, a kind of client-server, in which the client has stored some GPS data in an SQLite database; once connected with the server, it delivers that data, which the server in turn stores in its own database for future manipulation! Now the problem is that I have to create a server that accepts multiple clients and I cannot find a decent example of this. I'm not using any services in combination with it! Both my client and server are on Android!!!! I have already made the connection between client and server, but only with one thread (I mean my server can accept only one client at this moment!) Is this suitable? How to implement simple threading with a fixed number of worker threads Thank you in advance! A: A simple POST request from the client to the server should be good enough. Encode the data in a suitable format (JSON/XML) and send it as a POST HTTP request. I don't understand why you want to write your own server. I would just use a PHP/Python script running with Apache to receive the POST request and store the data in a database (MySQL, PostGre). On your Android device, you should put all your code in an AsyncTask. Android uses the standard Apache libraries to make the HTTP request.
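To make the server half of this concrete, here is a minimal sketch of the kind of receiving script the answer suggests, written with only the Python standard library rather than PHP/Apache; the port, the JSON field names and the SQLite schema are all made up for illustration:

import json
import sqlite3
from http.server import BaseHTTPRequestHandler, HTTPServer

class GpsHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        points = json.loads(self.rfile.read(length))  # e.g. [{"lat": .., "lon": .., "ts": ..}]
        con = sqlite3.connect("gps.db")
        con.execute("CREATE TABLE IF NOT EXISTS gps (lat REAL, lon REAL, ts TEXT)")
        con.executemany("INSERT INTO gps VALUES (?, ?, ?)",
                        [(p["lat"], p["lon"], p["ts"]) for p in points])
        con.commit()
        con.close()
        self.send_response(200)
        self.end_headers()

HTTPServer(("", 8000), GpsHandler).serve_forever()

On the Android side, the matching request is then a plain HTTP POST of the serialized rows, issued from an AsyncTask as described above.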
[ "stackoverflow", "0018552440.txt" ]
Q: Merge two arrays in Go... container assign error I have been trying all morning to figure out what is wrong with my code, but I couldn't. It says it cannot assign containers. Please check this Go playground http://play.golang.org/p/RQmmi7nJAK And the problematic code is below. func My_Merge(container []int, first_index int, mid_index int, last_index int) { left_array := make([]int, mid_index-first_index+1) right_array := make([]int, last_index-mid_index) temp_i := 0 temp_j := 0 for i := first_index; i < mid_index; i++ { left_array[temp_i] = container[i] temp_i++ } for j := mid_index; j < last_index+1; j++ { right_array[temp_j] = container[j] temp_j++ } i := 0 j := 0 for elem := first_index; elem < len(container); elem++ { if left_array[i] <= right_array[j] { container[elem] = left_array[i] i++ if i == len(left_array) { container[elem+1:last_index] = right_array[j:] break } } else { container[elem] = right_array[j] j++ if j == len(right_array) { container[elem+1:last_index] = left_array[i:] break } } } } I am getting the error on the line container[elem+1:last_index] = right_array[j:]. Even if I delete the whole block, I am getting errors. Could anybody help me on this? I would greatly appreciate it. A: You can't assign to a slice expression in Go. You need to use copy: copy(container[elem+1:last_index], right_array[j:]) But apparently there are other problems too, since when I change that in the playground I get an index out of range error.
[ "scifi.stackexchange", "0000186277.txt" ]
Q: What does it mean that a Wizard is more powerful than another? When wizards are dueling, if they both know how to cast the Killing Curse, then what is the meaning of one being more powerful than the other? Specifically, we are repeatedly told how powerful Voldemort is. What exactly does this mean? Except for his Horcruxes, he should be no more advantaged in a duel. PS: This question is not a duplicate. Its essence is that being able to cast a killing curse is the epitome of magical power in duels, and so Voldemort should not have become anybody special. A: There are several things that determine a wizard's ability. Since you specified duels, here are a few factors that would make one wizard superior to another in a duel: Repertoire. When Harry fought Draco Malfoy in The Half-Blood Prince, he used a spell he'd just learned: Sectumsempra. Draco didn't know what it did (though to be fair, neither did Harry), which made it harder to defend against. Reflexes. During the same Harry/Draco fight, one of Draco's spells "missed Harry by inches." This shows that it's possible to dodge some spells simply by not being in their path. Willpower. Some spells, like the Killing Curse, require a certain force of will to make them work-- you have to want to kill your opponent. Voldemort clearly has this, most others likely don't. Training. This is a bit of a catch-all, but we have plenty of examples that show what being good at magic means. Spells can go insufficiently right (Harry's first attempts at a Patronus in The Prisoner of Azkaban are little more than puffs of white smoke) or flat-out wrong (Neville transplants his own ears onto a cactus in The Goblet of Fire). One wizard's spells could simply overpower another's. Bonus: Reputation. There are also instances of someone gaining a reputation as a skilled witch or wizard without much evidence to support it. Everyone "knows" Gilderoy Lockhart is a great wizard, despite being completely incompetent at everything but a Memory Charm. Everyone "knows" Sirius Black is extremely dangerous, despite being nobody special in terms of magical skill. While Voldemort is shown to be a powerful wizard by any objective measure, it's also possible his reputation exceeds his actual skill. A: POS has the best answer to this question, though its universe is not the same as canon. In POS, people can’t just keep casting spells. Most people will get completely exhausted if they cast two killing spells in a row, or keep holding up a Protego (especially in the face of attacks). I think canon is plain dumb in regards to magical power.