Why does the UPDATE command in SQL insist on using slanted single quotes? I have been having trouble updating some fields in a MySQL database, and I am frustrated that at times MySQL refuses to run my query unless I wrap fields in slanted quotes rather than single ones. There seems to be no discernible pattern; could someone explain what is going on here? This query works with no quotes of any kind (x is the primary key):

```
UPDATE table SET x = 1821 WHERE x = 1820
```

These two queries fail with syntax problems near the o:

```
UPDATE table SET o = 200 WHERE x = 1820
UPDATE table SET 'o' = 200 WHERE x = 1820
```

This query works:

```
UPDATE table SET `o` = 200 WHERE x = 1820
```

Is there any legitimate reason MySQL insists on the slanted quotes?
In almost all SQL dialects, strings (like input values) are single-quoted and identifiers (like table or column names) go unquoted:

```
SELECT column_name, function_name() AS column_alias
FROM table_name
WHERE another_column = 'The value to match'
```

This is valid for MySQL, Oracle, SQL-Server and a long etcetera. This is the general rule.

Then, there's a particular situation: when you want to use a certain name for an identifier that would normally be invalid. In such cases, MySQL allows you to use backtick quotes around the name (other DBMSes use square brackets, double quotes, or simply don't allow you to use such a name):

```
SELECT `Full Name` -- Name with spaces
FROM `from`        -- Reserved word
```

In general, it's better to simply avoid this kind of name, although I understand there are cases where a really good name is [already *taken*](http://dev.mysql.com/doc/refman/5.5/en/reserved-words.html):

```
SELECT `key`, value
FROM settings
```

As for your examples, it's obvious that they aren't real queries, since using `table` as a table name would trigger a syntax error in them all ;-)
How to make TextGeometry in THREE JS follow mouse? This is my source code. I am trying to make the text rotate according to mouse position.

```
// Initialization
const scene = new THREE.Scene();
let camera = new THREE.PerspectiveCamera( 75, window.innerWidth / window.innerHeight, 0.1, 1000 );
let renderer = new THREE.WebGLRenderer({ antialias: true, alpha: true });
let body = document.getElementsByTagName("body");
let pageX = 0.5;
let pageY = 0.5;
renderer.setSize( window.innerWidth, window.innerHeight );
document.getElementById("board").appendChild(renderer.domElement);

// Handle resize event
window.addEventListener('resize', () => {
  renderer.setSize( window.innerWidth, window.innerHeight );
  camera.aspect = window.innerWidth / window.innerHeight;
  camera.updateProjectionMatrix();
});

camera.position.z = 20;

// Create light
let directLight = new THREE.DirectionalLight('#fff', 4);
directLight.position.set(0, 7, 5);
scene.add(directLight);
var light = new THREE.AmbientLight( 0x404040 ); // soft white light
scene.add( light );

function animate (){
  requestAnimationFrame( animate );
  var loader = new THREE.FontLoader();
  loader.load( 'https://threejs.org/examples/fonts/helvetiker_regular.typeface.json', function ( font ) {
    var geometry = new THREE.TextGeometry( 'Hello three.js!', {
      font: font,
      size: 3,
      height: 0.5,
      curveSegments: 4,
      bevelEnabled: true,
      bevelThickness: 0.02,
      bevelSize: 0.05,
      bevelSegments: 3
    } );
    geometry.center();
    var material = new THREE.MeshPhongMaterial( { color: '#dbe4eb', specular: '#dbe4eb' } );
    var mesh = new THREE.Mesh( geometry, material );
    mesh.rotation.x = (pageY - 0.5) * 2;
    mesh.rotation.y = (pageX - 0.5) * 2;
    scene.add( mesh );
  } );
  renderer.render(scene, camera);
}
animate();

// Get mouse coordinates inside the browser
document.body.addEventListener('mousemove', (event) => {
  pageX = event.pageX / window.innerWidth;
  pageY = event.pageY / window.innerHeight;
});
renderer.render(scene, camera);
```

This is the best I could get.
The problem is that each time I move the mouse, it instantiates a new mesh and rotates it accordingly, and I only need one mesh to follow the mouse. Can anyone help? Thanks in advance!
As you already figured out, each frame you're reloading the font and ultimately recreating the mesh. To get around this you need to move the font loading and object creation into some initialization function, so it happens just once. The only part of the code you want to keep inside the render loop is the updating of the text's rotation according to the mouse movement:

```
mesh.rotation.x = (pageY - 0.5) * 2;
mesh.rotation.y = (pageX - 0.5) * 2;
```

This will bring up another problem though. Since `mesh` is a local object defined inside the callback function of the font loader, it won't be accessible outside. Luckily three.js offers a property called **.name** which you can use to give your object a name. e.g.

```
var mesh = new THREE.Mesh(geometry, material);
mesh.name = "myText";
scene.add(mesh);
```

Later on you can get a reference to this object using:

```
scene.getObjectByName("myText")
```

Here's an example:

```
var container, scene, camera, renderer, pageX, pageY;

function init() {
  scene = new THREE.Scene();
  camera = new THREE.PerspectiveCamera(75, window.innerWidth / window.innerHeight, 0.1, 1000);
  renderer = new THREE.WebGLRenderer({ antialias: true, alpha: true });
  pageX = 0.5;
  pageY = 0.5;
  renderer.setSize(window.innerWidth, window.innerHeight);
  document.getElementById("container").appendChild(renderer.domElement);

  window.addEventListener('resize', () => {
    renderer.setSize(window.innerWidth, window.innerHeight);
    camera.aspect = window.innerWidth / window.innerHeight;
    camera.updateProjectionMatrix();
  });

  camera.position.z = 20;

  let directLight = new THREE.DirectionalLight('#fff', 4);
  directLight.position.set(0, 7, 5);
  scene.add(directLight);
  var light = new THREE.AmbientLight(0x404040); // soft white light
  scene.add(light);

  var loader = new THREE.FontLoader();
  loader.load('https://threejs.org/examples/fonts/helvetiker_regular.typeface.json', function(font) {
    var geometry = new THREE.TextGeometry('Hello three.js!', {
      font: font,
      size: 3,
      height: 0.5,
      curveSegments: 4,
      bevelEnabled: true,
      bevelThickness: 0.02,
      bevelSize: 0.05,
      bevelSegments: 3
    });
    geometry.center();
    var material = new THREE.MeshPhongMaterial({ color: '#dbe4eb', specular: '#dbe4eb' });
    var mesh = new THREE.Mesh(geometry, material);
    mesh.name = "myText";
    scene.add(mesh);
    animate();
  });

  document.body.addEventListener('mousemove', (event) => {
    pageX = event.pageX / window.innerWidth;
    pageY = event.pageY / window.innerHeight;
  });
}

function animate() {
  requestAnimationFrame(animate);
  render();
}

function render() {
  scene.getObjectByName("myText").rotation.x = (pageY - 0.5) * 2;
  scene.getObjectByName("myText").rotation.y = (pageX - 0.5) * 2;
  renderer.render(scene, camera);
}

init();
```

```
<script src="https://cdnjs.cloudflare.com/ajax/libs/three.js/r120/three.min.js"></script>
<div id="container"></div>
```
Why does shifting 24 bits result in a negative value? When I perform a 24-bit shift on a number less than 256, I get a negative result. Why is that?

```
console.log( (200<<23)>>23 );
console.log( (200<<24)>>24 ); // ???
```
Since bit shifts work in binary, let's look at the binary representation. These operators work on 32-bit values.

```
   Decimal | Binary
       200 | 0000 0000 0000 0000 0000 0000 1100 1000
```

Now shift left 24 places...

```
       200 | 0000 0000 0000 0000 0000 0000 1100 1000
-939524096 | 1100 1000 0000 0000 0000 0000 0000 0000
```

Most importantly, notice how the first bit is now a `1`, which indicates a negative number in signed 32-bit numbers.

The next thing to note is that `>>` is a *sign-propagating* right shift. This means that when you shift right again, you are shifting in copies of the first bit.

```
-939524096 | 1100 1000 0000 0000 0000 0000 0000 0000
       -56 | 1111 1111 1111 1111 1111 1111 1100 1000
```

However if you use `>>>` instead, you get a zero-fill right shift. As the name implies, it always shifts in `0`s.

```
-939524096 | 1100 1000 0000 0000 0000 0000 0000 0000
       200 | 0000 0000 0000 0000 0000 0000 1100 1000
```
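The difference between the two right-shift operators can be checked directly in a JavaScript console; this is a minimal sketch of the values walked through above:

```javascript
// 200 shifted left 24 places pushes the top bit of 1100 1000 into the
// sign bit of the 32-bit integer, so the result is negative.
const shifted = 200 << 24;
console.log(shifted);        // -939524096

// >> is sign-propagating: copies of the 1 sign bit are shifted in,
// so the value stays negative.
console.log(shifted >> 24);  // -56

// >>> is zero-fill: zeros are shifted in, recovering the original value.
console.log(shifted >>> 24); // 200
```

So `(200<<24)>>>24` round-trips back to `200`, while `(200<<24)>>24` does not.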
Creating boost.geometry.model.polygon from 2D C List Suppose I have the following dataset:

```
double * data = (double *) malloc(sizeof(double) * 100 * 2);
for (ii = 0; ii < 100; ii++) {
    data[2*ii] = ii;
    data[2*ii + 1] = ii;
}
```

How can I create a Boost polygon from this data? Thanks
A complete example:

```
#include <iostream>
#include <vector>

#include <boost/polygon/polygon.hpp>

// Some typedefs
namespace bpl = boost::polygon;
typedef bpl::polygon_data<double> Polygon;
typedef bpl::polygon_traits<Polygon>::point_type Point;

int main()
{
  // Your C-style data (assumed (x,y) pairs)
  double * data = (double *) malloc(sizeof(double) * 100 * 2);
  for (int ii = 0; ii < 100; ii++) {
    data[2*ii] = ii;
    data[2*ii + 1] = ii;
  }

  // Convert to points
  std::vector<Point> points;
  for (int i = 0; i < 100; ++i)
    points.push_back(Point(data[2*i], data[2*i+1]));

  // Create a polygon
  Polygon polygon;
  polygon.set(points.begin(), points.end());

  // Do something with the polygon
  std::cout << "Perimeter : " << bpl::perimeter(polygon) << std::endl;
  std::cout << "Area      : " << bpl::area(polygon) << std::endl;

  return 0;
}
```

Just to illustrate the flexibility you actually have: with a bit of extra typedef work, it's possible to define your own pair-of-doubles point type which can be aliased onto your data, which avoids the intermediate copy...

```
#include <iostream>
#include <vector>

#include <boost/polygon/polygon.hpp>

// Define a point type which can be aliased to your 'C' points
struct Pt
{
  double x;
  double y;
};

// Some typedefs
namespace bpl = boost::polygon;
typedef bpl::polygon_data<double> Polygon;

// Add the necessary to use Pt
namespace boost { namespace polygon {

  template <> struct geometry_concept<Pt> { typedef point_concept type; };

  template <> struct point_traits<Pt>
  {
    typedef double coordinate_type;
    static inline coordinate_type get(const Pt& pt, orientation_2d orient)
    {
      return (orient == HORIZONTAL ? pt.x : pt.y);
    }
  };

  template <> struct point_mutable_traits<Pt>
  {
    static inline void set(Pt& pt, orientation_2d orient, double value)
    {
      if (orient == HORIZONTAL)
        pt.x = value;
      else
        pt.y = value;
    }
    static inline Pt construct(double x, double y)
    {
      Pt r;
      r.x = x;
      r.y = y;
      return r;
    }
  };

} }

int main()
{
  // Your C-style data (assumed (x,y) pairs)
  double * data = (double *) malloc(sizeof(double) * 100 * 2);
  for (int ii = 0; ii < 100; ii++) {
    data[2*ii] = ii;
    data[2*ii + 1] = ii;
  }

  // Reinterpret your data as an array of Pt
  const Pt* const pts = reinterpret_cast<const Pt*>(data);

  // Create a polygon
  Polygon polygon;
  polygon.set(pts, pts + 100);

  // Do something with the polygon
  std::cout << "Perimeter : " << bpl::perimeter(polygon) << std::endl;
  std::cout << "Area      : " << bpl::area(polygon) << std::endl;

  return 0;
}
```

And this trend could be continued to a [custom polygon class](http://www.boost.org/doc/libs/1_51_0/libs/polygon/doc/gtl_custom_polygon.htm).
Looping Results with an External API Call and findOneAndUpdate I am trying to write a program that gets documents from a Mongo database with Mongoose, processes them using an API, and then edits each document in the database with the results of the processing. My problem is that I am struggling because I don't completely understand Node.js and its asynchronous behavior. This is my code:

```
Model.find(function (err, tweets) {
  if (err) return err;
  for (var i = 0; i < tweets.length; i++) {
    console.log(tweets[i].tweet);
    api.petition(tweets[i].tweet)
      .then(function(res) {
        TweetModel.findOneAndUpdate({_id: tweets[i]._id}, {result: res}, function (err, tweetFound) {
          if (err) throw err;
          console.log(tweetFound);
        });
      })
      .catch(function(err) {
        console.log(err);
      })
  }
})
```

The problem is that in the findOneAndUpdate, tweets is undefined, so it can't find that id. Any solution? Thanks
The core thing you are really missing is that the Mongoose API methods also use ["Promises"](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise), but you seem to just be copying from documentation or old examples using callbacks. The solution to this is to convert to using Promises only.

## Working with Promises

```
Model.find({}, { _id: 1, tweet: 1 }).then(tweets =>
  Promise.all(
    tweets.map(({ _id, tweet }) =>
      api.petition(tweet).then(result =>
        TweetModel.findOneAndUpdate({ _id }, { result }, { new: true })
          .then(updated => { console.log(updated); return updated; })
      )
    )
  )
)
.then(updatedDocs => {
  // do something with array of updated documents
})
.catch(e => console.error(e))
```

Aside from the general conversion from callbacks, the main change is using [`Promise.all()`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise/all) to resolve the output from the [`Array.map()`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/map) being processed on the results from [`.find()`](http://mongoosejs.com/docs/api.html#find_find), instead of the `for` loop. That is actually one of the biggest problems in your attempt, since the `for` cannot actually control when the async functions resolve. The other issue is "mixing callbacks", but that is what we are generally addressing here by only using Promises.

Within the [`Array.map()`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/map) we return the `Promise` from the API call, chained to the [`findOneAndUpdate()`](http://mongoosejs.com/docs/api.html#findoneandupdate_findOneAndUpdate) which is actually updating the document. We also use `new: true` to actually return the modified document.

[`Promise.all()`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise/all) allows an "array of Promise" to resolve and return an array of results.
These you see as `updatedDocs`. Another advantage here is that the inner methods fire in "parallel" and not in series. This usually means a faster resolution, though it takes a few more resources.

Note also that we use the "projection" of `{ _id: 1, tweet: 1 }` to only return those two fields from the [`Model.find()`](http://mongoosejs.com/docs/api.html#find_find) result, because those are the only ones used in the remaining calls. This saves returning the whole document for each result when you don't use the other values.

You could simply just return the [`Promise`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise) from the [`findOneAndUpdate()`](http://mongoosejs.com/docs/api.html#findoneandupdate_findOneAndUpdate), but I'm just adding in the `console.log()` so you can see the output is firing at that point. Normal production use should do without it:

```
Model.find({}, { _id: 1, tweet: 1 }).then(tweets =>
  Promise.all(
    tweets.map(({ _id, tweet }) =>
      api.petition(tweet).then(result =>
        TweetModel.findOneAndUpdate({ _id }, { result }, { new: true })
      )
    )
  )
)
.then(updatedDocs => {
  // do something with array of updated documents
})
.catch(e => console.error(e))
```

Another "tweak" could be to use the "bluebird" implementation of [`Promise.map()`](http://bluebirdjs.com/docs/api/promise.map.html), which combines the common [`Array.map()`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/map) to [`Promise`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise)(s) implementation with the ability to control the "concurrency" of running parallel calls:

```
const Promise = require("bluebird");

Model.find({}, { _id: 1, tweet: 1 }).then(tweets =>
  Promise.map(tweets, ({ _id, tweet }) =>
    api.petition(tweet).then(result =>
      TweetModel.findOneAndUpdate({ _id }, { result }, { new: true })
    ),
    { concurrency: 5 }
  )
)
.then(updatedDocs => {
  // do something with array of updated documents
})
.catch(e => console.error(e))
```

An alternative to "parallel" would be executing in sequence. This might be considered if too many results cause too many API calls and calls to write back to the database:

```
Model.find({}, { _id: 1, tweet: 1 }).then(tweets => {
  let updatedDocs = [];
  return tweets.reduce((o, { _id, tweet }) =>
    o.then(() => api.petition(tweet))
      .then(result =>
        TweetModel.findByIdAndUpdate(_id, { result }, { new: true })
          .then(updated => updatedDocs.push(updated))
      ),
    Promise.resolve()
  ).then(() => updatedDocs);
})
.then(updatedDocs => {
  // do something with array of updated documents
})
.catch(e => console.error(e))
```

There we can use [`Array.reduce()`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/reduce) to "chain" the promises together, allowing them to resolve sequentially. Note the array of results is kept in scope and swapped out with the final `.then()` appended to the end of the joined chain, since you need such a technique to "collect" results from Promises resolving at different points in that "chain".

---

## Async/Await

In modern environments from Node.js v8.x, which is actually the current LTS release and has been for a while now, you have support for `async/await`.
This allows you to write your flow more naturally:

```
try {
  let tweets = await Model.find({}, { _id: 1, tweet: 1 });

  let updatedDocs = await Promise.all(
    tweets.map(({ _id, tweet }) =>
      api.petition(tweet).then(result =>
        TweetModel.findByIdAndUpdate(_id, { result }, { new: true })
      )
    )
  );

  // Do something with results
} catch (e) {
  console.error(e);
}
```

Or even possibly process sequentially, if resources are an issue:

```
try {
  let cursor = Model.collection.find().project({ _id: 1, tweet: 1 });

  while (await cursor.hasNext()) {
    let { _id, tweet } = await cursor.next();
    let result = await api.petition(tweet);
    let updated = await TweetModel.findByIdAndUpdate(_id, { result }, { new: true });
    // do something with updated document
  }
} catch (e) {
  console.error(e);
}
```

Noting also that [`findByIdAndUpdate()`](http://mongoosejs.com/docs/api.html#findbyidandupdate_findByIdAndUpdate) can be used here, as matching the `_id` is already implied, so you don't need a whole query document as a first argument.
---

## BulkWrite

As a final note, if you don't actually need the updated documents in response at all, then [`bulkWrite()`](http://mongoosejs.com/docs/api.html#bulkwrite_bulkWrite) is the better option and allows the writes to generally process on the server in a single request:

```
Model.find({}, { _id: 1, tweet: 1 }).then(tweets =>
  Promise.all(
    tweets.map(({ _id, tweet }) =>
      api.petition(tweet).then(result => ({ _id, result }))
    )
  ).then(results =>
    Tweetmodel.bulkWrite(
      results.map(({ _id, result }) => (
        { updateOne: { filter: { _id }, update: { $set: { result } } } }
      ))
    )
  )
)
.catch(e => console.error(e))
```

Or via `async/await` syntax:

```
try {
  let tweets = await Model.find({}, { _id: 1, tweet: 1 });

  let writeResult = await Tweetmodel.bulkWrite(
    (await Promise.all(
      tweets.map(({ _id, tweet }) =>
        api.petition(tweet).then(result => ({ _id, result }))
      )
    )).map(({ _id, result }) => (
      { updateOne: { filter: { _id }, update: { $set: { result } } } }
    ))
  );
} catch (e) {
  console.error(e);
}
```

Pretty much all of the combinations shown above can be varied into this, as the [`bulkWrite()`](http://mongoosejs.com/docs/api.html#bulkwrite_bulkWrite) method takes an "array" of instructions, so you can construct that array from the processed API calls out of every method above.
Using jsonPath looking for a string I'm trying to use jsonPath and the pick function to determine if a rule needs to run or not based on the current domain. A simplified version of what I'm doing is here:

```
global {
  dataset shopscotchMerchants <- "https://s3.amazonaws.com/app-files/dev/merchantJson.json"
    cachable for 2 seconds
}
rule checkdataset is active {
  select when pageview ".*" setting ()
  pre {
    merchantData = shopscotchMerchants.pick("$.merchants[?(@.merchant=='Telefora')]");
  }
  emit <|
    console.log(merchantData);
  |>
}
```

The console output I expect is the Telefora object; instead I get all three objects from the JSON file. If instead of merchant=='Telefora' I use merchantID==16, then it works great. I thought jsonPath could do matches to strings as well. Although the example above isn't searching against the merchantDomain part of the JSON, I'm experiencing the same problem with that.
Your problem comes from the fact that, as stated in [the documentation](http://docs.kynetx.com/docs/Simple_Predicates), the string equality operators are `eq`, `neq`, and `like`. `==` is only for numbers. In your case, you want to test whether one string is equal to another string, which is the job of the `eq` string equality operator. Simply swap `==` for `eq` in your JSONPath filter expression and you will be good to go:

```
global {
  dataset shopscotchMerchants <- "https://s3.amazonaws.com/app-files/dev/merchantJson.json"
    cachable for 2 seconds
}
rule checkdataset is active {
  select when pageview ".*" setting ()
  pre {
    merchantData = shopscotchMerchants.pick("$.merchants[?(@.merchant eq 'Telefora')]"); // replace == with eq
  }
  emit <|
    console.log(merchantData);
  |>
}
```

I put this to the test in my own test ruleset, the source for which is below:

```
ruleset a369x175 {
  meta {
    name "test-json-filtering"
    description <<
    >>
    author "AKO"
    logging on
  }
  dispatch {
    domain "exampley.com"
  }
  global {
    dataset merchant_dataset <- "https://s3.amazonaws.com/app-files/dev/merchantJson.json"
      cachable for 2 seconds
  }
  rule filter_some_delicous_json {
    select when pageview "exampley.com"
    pre {
      merchant_data = merchant_dataset.pick("$.merchants[?(@.merchant eq 'Telefora')]");
    }
    {
      emit <|
        try {
          console.log(merchant_data);
        } catch(e) { }
      |>;
    }
  }
}
```
Undefined attribute in jade template engine I'm simply trying to display a value in an input field with Jade (0.20.3) and Express (2.5.8):

```
input(name='username', type='text', id="username", value=username)
```

It is quite simple, but this throws an error when the value is undefined:

```
username is not defined
```

However, the documentation indicates:

```
When a value is undefined or null the attribute is not added,
so this is fine, it will not compile 'something="null"'.
```

Is there something that I have done wrong?
Short answer: use `locals.someVar` if you're not sure that `someVar` exists.

Longer answer: I think that Brandon's initial answer and last comment are correct (though the `#{...}` syntax isn't needed), but to elaborate a bit: there's a difference between passing in a variable (technically, an object property) with a value of `undefined`, and not passing that variable at all. Because Jade transforms your template into JS source and `eval`s it (in the context of a `with` block), you have to make sure that you're not referring to any variables that haven't been passed in, or they'll trigger a `ReferenceError`. [This blog post](https://javascriptweblog.wordpress.com/2010/08/16/understanding-undefined-and-preventing-referenceerrors/) has some background on undefined vs undeclared variables and `ReferenceError`.

Your Jade template should work correctly if you do one of these things:

```
// ok, even if req.session.username is not defined
res.render('index', { username: req.session.username })

// ditto
res.local('username', req.session.username);
res.render('index')
```

But these won't work:

```
res.locals(req.session) // if no username property exists
res.render('index', { /* no username */ })
```

If it's not practical to manually pass in each parameter you might want to refer to, you can refer to the variables as properties on the `locals` object (e.g. `locals.username`).
Securing Grails REST service for use with mobile applications I am busy doing some research into using REST services with mobile applications and would appreciate some insight. The scenario is as follows. Consider a web application that provides a service to users. The web application will also be the main interaction point for the users. This will be done in Grails, and secured with Spring Security. Now, we want to provide a REST service so that users can use the service via mobile applications. Since Grails has such nice support for making the existing web application RESTful, we will use the built-in Grails support for that. My question now is, what would be the "best" way to secure the REST service interface so that it can be use from mobile applications (native- iOS, Andriod, WM7, BB). The information exchanged are highly sensitive, so the more secure, the better. Thanks
We decided to split our Grails project into three parts:

- model-domain-project (this is the "admin" section with all the views/controllers scaffolded, and all the services and domain classes)
- web-app (this is the main application: controllers, views)
- api-rest-app (this holds the REST controllers)

The model-domain-project is a plugin that is plugged into the web-app and the api-app; it contains the domain model, services, and all the database security, transactions, etc.

The web-app is all the HTML templates, views and controllers; here we are using the attributes of Spring Security.

In the api-rest-app we are using Grails filters, with Basic-Authorization over HTTPS carrying a token that has an expiration date... if the expiration date of the token is reached, you have to ask for another token using a "request-token" we sent you along with the first token... (it's more or less like OAuth2).

To get the first two tokens, you have to confirm the device via a login with user/phone/password; then you receive a key via SMS that you have to enter in the app.

I don't know if this is the best way, but it's the way we do it... Sometimes we also use the web-app as a client and call the api-rest-app...
Packing sets of non power of 2 integers I have a set of integers, each with a specific range:

```
foo = [1, 5]
bar = [1, 10]
baz = [1, 200]
```

I can calculate how many bits are required to store each number separately based on the number of different states that they can have:

```
foo = 5 possible states ~ 3 bits
bar = 10 possible states ~ 4 bits
baz = 200 possible states ~ 8 bits
```

Which gives me a total of 15 bits. But every number has a range that is unused, resulting in wasted space. I can instead calculate the required bits for the whole set by calculating all the possible states of all the numbers combined:

```
5 * 10 * 200 = 10000 possible states ~ 14 bits
```

This could save me a whole bit! And this is where my question comes in: what is the best way to load and store numbers using this type of layout?
A list of variables with different ranges like this:

```
foo = [1, 5]
bar = [1, 10]
baz = [1, 200]
```

Can (almost?) be interpreted as a mixed-radix number representation. If they started at zero the correspondence would be immediate, but since these start at one (or in general: if they are *any* finite set of possibilities) they must be remapped a little first, here just by subtracting one for conversion to the "packed" state and adding one back when decoding it again.

The encoding is nice and easy, involving only cheap operations:

```
packed = (foo - 1) + 5 * (bar - 1) + (5 * 10) * (baz - 1)
```

The scale factors come from the number of possible states, of course. Every element needs to be remapped into a contiguous range starting at zero, and then scaled by the product of the #states of the preceding elements, with the first being scaled by 1 (the empty product). By the way, note that [1 .. 5] has 5 states, not 4.

Decoding involves remainders and divisions; the simplest (but not in general the fastest) way is extracting digit-by-digit:

```
// extract foo
foo = packed % 5 + 1
// drop foo from packed representation
packed /= 5

// extract bar (which is now the lowest digit in 'packed')
bar = packed % 10 + 1
// drop bar
packed /= 10

// top digit is left over
baz = packed + 1
```

For larger examples it would be more efficient to first "chop" the packed number into a few separate parts, and then decode those independently. This prevents having a long chain of dependent operations, which the digit-by-digit method naturally results in.

Working directly with the packed representation is generally tricky, except to add and subtract from the elements *if* you know that would not overflow.
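As a concrete sketch of the encode/decode round-trip (JavaScript here, using the ranges and variable names from above):

```javascript
// Pack foo in [1,5], bar in [1,10], baz in [1,200] into one integer.
// Each value is shifted to start at 0, then scaled by the product of the
// state counts of the elements before it (mixed-radix positional encoding).
function pack(foo, bar, baz) {
  return (foo - 1) + 5 * (bar - 1) + (5 * 10) * (baz - 1);
}

// Decode digit-by-digit: the remainder extracts the lowest "digit",
// integer division drops it.
function unpack(packed) {
  const foo = packed % 5 + 1;
  packed = Math.floor(packed / 5);
  const bar = packed % 10 + 1;
  packed = Math.floor(packed / 10);
  const baz = packed + 1;
  return { foo, bar, baz };
}

console.log(pack(3, 7, 123));  // 6132 (fits in 14 bits)
console.log(unpack(6132));     // { foo: 3, bar: 7, baz: 123 }
```

The largest packed value is `pack(5, 10, 200) = 9999`, confirming that all 10000 states fit in 14 bits.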
find and remove files bigger than a specific size and type I want to clean up my server from large log files and backups. I came up with this:

```
find ./ -size +1M | xargs rm
```

But I do not want to include mp3 and mp4 files. I just want to do this for log and archive files (zip, tar, etc.). What would the command look like?
```
find -type f \( -name "*zip" -o -name "*tar" -o -name "*gz" \) -size +1M -delete
```

- the `\( \)` construct allows grouping different filename patterns
- by using the `-delete` option, we can avoid piping and trouble with `xargs`. See [this](https://unix.stackexchange.com/questions/131766/why-does-my-shell-script-choke-on-whitespace-or-other-special-characters), [this](https://unix.stackexchange.com/questions/24954/when-is-xargs-needed) and [this](https://unix.stackexchange.com/questions/90886/how-can-i-find-files-and-then-use-xargs-to-move-them)
- `./` or `.` is optional when using the `find` command on the current directory

**Edit:** As Eric Renouf notes, if your version of `find` doesn't support the `-delete` option, use the `-exec` option instead:

```
find -type f \( -name "*zip" -o -name "*tar" -o -name "*gz" \) -size +1M -exec rm {} +
```

where all the files filtered by the `find` command are passed to the `rm` command.
ImportError: No module named '\_version' when importing mechanize I installed mechanize via pip and get an error when I import the module:

```
$ python
Python 3.5.2 (default, Jun 28 2016, 08:46:01)
[GCC 6.1.1 20160602] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import mechanize
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python3.5/site-packages/mechanize/__init__.py", line 119, in <module>
    from _version import __version__
ImportError: No module named '_version'
```

The file `_version.py` is present in the site-packages directory:

```
$ ls /usr/lib/python3.5/site-packages/mechanize
_auth.py               __init__.py           _response.py
_beautifulsoup.py      _lwpcookiejar.py      _rfc3986.py
_clientcookie.py       _markupbase.py        _sgmllib_copy.py
_debug.py              _mechanize.py         _sockettimeout.py
_firefox3cookiejar.py  _mozillacookiejar.py  _testcase.py
_form.py               _msiecookiejar.py     _urllib2_fork.py
_gzip.py               _opener.py            _urllib2.py
_headersutil.py        _pullparser.py        _useragent.py
_html.py               __pycache__           _util.py
_http.py               _request.py           _version.py
```

What am I missing?
If you look at [`setup.py`](https://github.com/jjlee/mechanize/blob/master/setup.py#L38) you'll see `mechanize` is a Python 2.x package:

```
Programming Language :: Python
Programming Language :: Python :: 2
Programming Language :: Python :: 2.4
Programming Language :: Python :: 2.5
Programming Language :: Python :: 2.6
Programming Language :: Python :: 2.7
```

Apart from that, you can see in `mechanize/__init__.py` that all imports are relative:

```
from _version import __version__
```

instead of explicit:

```
from ._version import __version__
```

*[In Python 3, this results in import errors.](https://stackoverflow.com/questions/12172791/changes-in-import-statement-python3)*

There's an [issue](https://github.com/jjlee/mechanize/issues/96) opened for Python 3 support and it lists some alternatives you could try. That, or port it :-).
How to syntax highlight JSON inside a HTML page on the client side? I am using a HTML documentation page by an **external service** which renders JSON snippets **within an HTML page**. The HTML source code looks like this:

```
<pre>{
  "product-link": "https://example.com/product-link",
  "teaser_image": "https://example.com/teaser-image",
  "product_image_first": "https://example.com/product-image-first",
  "headline": "Example headline",
}</pre>
```

The JSON block renders **without** syntax highlighting. Since I am **not** in control of the external service, I would like to apply syntax highlighting (color) to the JSON snippet via user script. I found [Greasemonkey](http://www.greasespot.net/) but am still missing the point on how to inject a syntax highlighter library.
Thanks to *xander*, here is the first working version of my user script, based on [code-prettify](https://github.com/google/code-prettify): ``` (function (d) { var stylizePreElements = function () { var preElements = d.getElementsByTagName("pre"); for (var i = 0; i < preElements.length; ++i) { var preElement = preElements[i]; preElement.className += " prettyprint"; } }; var injectPrettifyScript = function () { var scriptElement = d.createElement('script'); scriptElement.setAttribute("src", "https://cdn.rawgit.com/google/code-prettify/master/loader/run_prettify.js"); d.head.appendChild(scriptElement); }; stylizePreElements(); injectPrettifyScript(); })(document) ``` Thank you for making my day nicer!
Git Push: What is the difference between HEAD:refs/heads/ and ? What does command 1 do that command 2 doesn't? ``` 1. git push <projectpath> HEAD:refs/heads/<branch> 2. git push <projectpath> <branch> ``` What is the meaning of "HEAD:refs/heads/"?
[VonC's answer](https://stackoverflow.com/a/38496360/1256452) is correct (and upvoted), but I think another way of looking at this might make more sense. Note that all of this is assuming that you're using the four-word form of `git push`, i.e., `git push *remote* *refspec*`. The *`remote`* part here is usually just the name `origin`. We'll define *`refspec`* better in a moment. # What `git push` does What `git push` needs to do (and therefore does) is to call up another Git instance on another machine,1 then give that other Git a set of *references* (usually branch names, sometimes tag names) to update. A *reference* is simply a name, like `master` or `v1.2`, that ideally *should* be fully-qualified (`refs/heads/master` or `refs/tags/v1.2`) so that we can be sure what *kind* of reference it is—branch, tag, or whatever. In order for the other Git to update the references your Git hands over, your Git must *also* hand over some of those big ugly SHA-1 hashes: one per reference. In other words, your Git is going to ask their Git to set *their* `refs/heads/master` to, say, `ed4f38babf3d81693a68d06cd0f5872093c009f6`. (At this point—actually, just a bit before this point, really—your Git and their Git have a conversation about which objects yours want to send them, and which objects they already have, all done by these big ugly hash IDs. Once the two Gits agree about what's going to be sent over, yours does the `counting objects` and `compressing objects` and then sends them the objects. The "now, please set some names" part happens nearly last.) ### Getting the name and hash parts Note that there are two parts to your Git's request: (1) a fully-qualified reference, and (2) the big-ugly-hash. (In fact, there's also a third part, the `--force` flag, but that part is easy and we can just ignore it.) But where does *your* Git get these? 
If you write: ``` git push origin somename ``` you've given *your* Git two pieces of information: the name `origin`, which your Git uses to look up the URL, and the name `somename`. Your Git uses this to figure out the full name. Is `somename` a tag? If so, the *full* name is `refs/tags/somename`. Is `somename` a branch? If so, the *full* name is `refs/heads/somename`. Either way works. Of course, you can also write out the full name yourself—and if the name is both a branch *and* a tag, you may *want* to do that, rather than letting Git pick one for you.2 So, where does your Git get the big ugly hash? The answer is: from that same name. The name `somename`, whether it's a branch or a tag, just names some particular Git object. If you want to see the hash yourself, you can do that any time: ``` git rev-parse somename ``` will show it to you. This is, in fact, how I got `ed4f38babf3d81693a68d06cd0f5872093c009f6`: I went to a Git repository for Git and did `git rev-parse v2.1.1` and it printed out that hash, because `v2.1.1` is a valid tag in any complete copy of the Git repository since version 2.1.1 came out. Note that when you *do* use this form—this `git push *remote* *name*` form—Git looks up the *`name`* argument in *your* repository for both purposes: to find out its full name, and to get its hash. It doesn't matter where your `HEAD` is, only what that full name points to. ### But Git does not have to use your branch's (or tag's) ID The fourth argument to `git push` is called a *refspec*, and its syntax actually allows two parts separated by a colon: ``` git push origin src:dst ``` In this case, the `dst` part supplies the *name*, but the `src` part supplies the hash. Git runs the `src` part through `git rev-parse` and that produces the hash. So you can: ``` git push origin mybranch:refs/tags/v42 ``` to create tag `v42` in the other Git repository, using whatever commit hash your branch `mybranch` identifies. 
### Normally `HEAD` contains a branch name In Git, `HEAD` *always* names the current commit. *Usually* it does so by naming a branch, and letting the branch name the commit. So, usually `HEAD` contains a branch name like `master`, and a branch name always gets you the *tip commit* of that branch (that's how Git *defines* "tip commit"; see the definition of branch in [the Git glossary](https://www.kernel.org/pub/software/scm/git/docs/gitglossary.html)). But *always*,3 `HEAD` can be turned into a commit: ``` $ git rev-parse HEAD 2b9288cc90175557766ef33e350e0514470b6ad4 ``` because `HEAD` is either a branch name (which is then the tip commit), or else you have a "detached HEAD", in which case Git stores the current commit ID directly in `HEAD`. ### Pushing when HEAD is detached Remember that in order to push, Git needs to get those two pieces of information: the hash, and a (full) name. When `HEAD` *isn't* "detached", Git can get both from it: `HEAD` has a branch name—in the full name form, in fact—and the branch name has the hash. But when you are in "detached HEAD" mode, `HEAD` only has a hash. Git *can't* find a branch name in `HEAD`. There might not *be* one: you might have checked out a commit by ID, or maybe you checked out by tag name, as in: ``` $ git checkout v2.1.1 ``` which put you in this "detached HEAD" mode. In this case, Git demands that you supply both the source hash `src`—you can still use the name `HEAD` to get it—*and* the `dst` destination name. And, if you use `HEAD` as the source, Git really needs *you* to spell out the full destination, because Git can't tell, at this point, if it should be a branch (`refs/heads/dst`) or a tag (`refs/tags/dst`).4 # Other forms of `git push` You can run `git push` with fewer arguments, e.g.: ``` git push origin ``` or even just: ``` git push ``` What happens here is that without a *`refspec`*, Git consults your `push.default` setting first. Usually this is `simple` (the default since Git version 2.0). 
In this case, Git simply uses `HEAD` to figure out what to push—which, of course, works only when `HEAD` is not detached. That's just what we described above. (Three of the other settings also use `HEAD`. One of them—the one that was the default before Git version 2.0—does not, but that particular setting proved too error-prone, which is why the default changed. You probably should not use it, at least not unless you are a Git master.) (And, if you leave out the *`remote`*, Git again uses `HEAD` to figure out where to push to, defaulting, if needed, to `origin`.) You can also push multiple refspecs: ``` git push origin branch1 branch2 tag1 HEAD:refs/tags/tag2 ``` In this case, each refspec is handled in the usual way: get its fully qualified name if needed, so that your Git can give their Git a fully qualified name each time; and look up its hash ID if you didn't use the `src:dst` form (or if you *did* use the `src:dst` form, look up `src`'s ID instead). You can use wildcards in refspecs: ``` git push origin 'refs/heads/*:refs/heads/*' ``` (some shells will eat, mangle, [fold, spindle, or mutilate](http://onlinelibrary.wiley.com/doi/10.1111/j.1542-734X.1992.1504_43.x/abstract) the `*`s so you may need to use quotes, as in this example; other shells won't—or at least usually won't—but it doesn't hurt to quote). This will push *all* your branches, or at least try to. This tends to be overly enthusiastic, pushing all your temporary work and experimentation branches, and is probably not what you want, but it's what Git did by default prior to version 2.0. And, you can use an empty `src`: ``` git push origin :refs/heads/deleteme ``` which is a special-case syntax that means "have my Git ask their Git to *delete* that reference" (to delete a tag, spell out the tag). As with a detached HEAD, the lack of a fully-qualified name on *your* side means you should fully-qualify the name for *their* side. (See footnote 4 again.) 
### The force flag If you add `--force` to your `git push` command, your Git passes this flag on to their Git. Instead of a polite request—"please, sir, would you like to set your `refs/heads/master` to `ed4f38babf3d81693a68d06cd0f5872093c009f6`?"—your Git will send it as a rather insistent demand. Their Git can still refuse either way, but their Git will, by default, do it even if it's not sensible. Refspecs allow you to control this flag more tightly. The force flag in an individual refspec is a leading plus sign `+`. For instance, suppose you have new commits for both `master` and `develop` branches, and also a new set of *rebased* commits for `experiment`, which everyone else has agreed that you are allowed to force-push. You could do this: ``` git push origin develop master; git push -f origin experiment ``` but you can combine it all into one big push: ``` git push origin develop +experiment master ``` The leading `+` on `experiment` makes *that* one a command ("update `experiment`!") while leaving the others as polite requests ("please, sir, if you like, update `develop` and `master`"). (This is all a bit esoteric for `push`, but is actually something you use regularly every day with `git fetch`, which uses refspecs with `+` flags to create and update your remote-tracking branches.) --- 1If the "other repo" is on your same machine and you're using a `file://` or local path based URL, this isn't quite true, but the principle is the same and the operations go the same way. 2Better yet, don't get yourself in this situation in the first place. It's very confusing to have one name that is both a branch name *and* a tag name. (There are similar confusing situations to avoid due to Git's habit of abbreviating: don't name branches with names that resemble remote names, for instance. Git will handle them just fine, but *you* might not. :-) ) 3Actually, there's one exception to this rule, which most people will never notice: when `HEAD` names an "unborn branch". 
Mostly this occurs in a new repository, which has no commits at all. Obviously, if there are no commits, there is no commit ID that `HEAD` could name. It also occurs when you use `git checkout --orphan` to create a new orphan branch. 4If you use an unqualified name, their Git will look up the name to qualify it. This means you may not know what kind of name you are trying to update or delete. It's generally not a good idea, anyway.
How to run validations of sub-class in Single Table Inheritance? In my application, I have a class called Budget. The budget can be of many types. For instance, let's say that there are two budgets: FlatRateBudget and HourlyRateBudget. Both inherit from the class Budget. This is what I get so far: ``` class Budget < ActiveRecord::Base validates_presence_of :price end class FlatRateBudget < Budget end class HourlyRateBudget < Budget validates_presence_of :quantity end ``` In the console, if I do: ``` b = HourlyRateBudget.new(:price => 10) b.valid? => false b.errors.full_messages => ["Quantity can't be blank"] ``` As expected. The problem is that the "type" field, on STI, comes from params. So I need to do something like: ``` b = Budget.new(:type => "HourlyRateBudget", :price => 10) b.valid? => true ``` Which means that Rails is running validations in the super-class instead of instantiating the sub-class after I set up the type. I know that is the expected behaviour, since I'm instantiating a class that doesn't need the quantity field, but I wonder if there is any way to tell Rails to run the validations for the subclass instead of the super.
You could probably solve this with a custom validator, similar to the answer on this question: [Two models, one STI and a Validation](https://stackoverflow.com/questions/2561316/two-models-one-sti-and-a-validation) However, if you can simply instantiate the intended sub-type to begin with, you would avoid the need for a custom validator altogether in this case. As you've noticed, setting the type field alone doesn't magically change an instance from one type to another. While ActiveRecord will use the `type` field to instantiate the proper class upon **reading** the object from the database, doing it the other way around (instantiating the superclass, then changing the type field manually) doesn't have the effect of changing the object's type while your app is running - it just doesn't work that way. The custom validation method, on the other hand, could check the `type` field independently, instantiate a copy of the appropriate type (based on the value of the `type` field), and then run `.valid?` on that object, resulting in the validations on the sub-class being run in a way that appears to be dynamic, even though it's actually creating an instance of the appropriate sub-class in the process.
Prevent creating new attributes outside \_\_init\_\_ I want to be able to create a class (in Python) that, once initialized with `__init__`, does not accept new attributes, but accepts modifications of existing attributes. There are several hack-ish ways I can see to do this, for example having a `__setattr__` method such as ``` def __setattr__(self, attribute, value): if not attribute in self.__dict__: print "Cannot set %s" % attribute else: self.__dict__[attribute] = value ``` and then editing `__dict__` directly inside `__init__`, but I was wondering if there is a 'proper' way to do this?
I wouldn't use `__dict__` directly, but you can add a function to explicitly "freeze" an instance: ``` class FrozenClass(object): __isfrozen = False def __setattr__(self, key, value): if self.__isfrozen and not hasattr(self, key): raise TypeError("%r is a frozen class" % self) object.__setattr__(self, key, value) def _freeze(self): self.__isfrozen = True class Test(FrozenClass): def __init__(self): self.x = 42 self.y = 2**3 self._freeze() # no new attributes after this point. a, b = Test(), Test() a.x = 10 b.z = 10 # fails ```
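If the attribute set is known up front, `__slots__` is another option worth mentioning (an alternative sketch, not part of the answer above — note it also removes the per-instance `__dict__`, and the restriction applies even inside `__init__`):

```python
class Point(object):
    __slots__ = ("x", "y")  # only these attributes can ever exist

    def __init__(self):
        self.x = 42
        self.y = 2 ** 3

p = Point()
p.x = 10      # modifying an existing attribute still works
try:
    p.z = 10  # creating a new attribute fails
except AttributeError as exc:
    print("blocked:", exc)
```

The trade-off versus the freeze approach is that there is no "anything goes until `_freeze()`" phase: every attribute must be declared in `__slots__` ahead of time.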
Good practices for multiple language data in Core Data I need a multilingual Core Data DB in my iPhone app. I could create a different database for each language, but I hope the iPhone SDK has an automatic way to manage data in different languages in Core Data, as it does for resources and strings. Does anyone have some hints?
I've done something similar to Shortseller, but without the use of categories. [![alt text](https://i.stack.imgur.com/qVVRW.png)](https://i.stack.imgur.com/qVVRW.png) `InternationalBook` and `LocalizedBook` are both custom managed objects with a one-to-many relationship (one international book to many localised books). In the implementation of `InternationalBook`, I've added a custom accessor for `title`: ``` - (NSString *)title { [self willAccessValueForKey:@"title"]; NSString *locTitle = nil; NSPredicate *predicate = [NSPredicate predicateWithFormat:@"locale==%@", [DataManager localeString]]; NSSet *localizedSet = [self.localizedBook filteredSetUsingPredicate:predicate]; if ([localizedSet count] > 0) { locTitle = [[localizedSet valueForKey:@"localizedTitle"] anyObject]; } [self didAccessValueForKey:@"title"]; return locTitle; } ``` `[DataManager localeString]` is a class method which returns the user's language and country code: `en_US`, `fr_FR`, etc. See documentation on `NSLocale` for details. See the "Custom Attribute and To-One Relationship Accessor Methods" section of the Core Data Programming Guide for an explanation of `willAccessValueForKey:` and `didAccessValueForKey:`. When populating the data, I grab a string representing the user's current locale (`[DataManager localeString]`), and store that along the with localised book title in a new `LocalizedBook` object. Each `LocalizedBook` instance is added to an `NSMutableSet`, which represents the one-to-many relationship. 
``` NSMutableSet *bookLocalizations = [internationalBook mutableSetValueForKey:@"localizedBook"]; // internationalBook is an instance of InternationalBook // set the values for locale and localizedTitle LocalizedBook *localizedBook = (LocalizedBook *)[NSEntityDescription insertNewObjectForEntityForName:@"LocalizedBook" inManagedObjectContext:self.bookMOC]; localizedBook.locale = [DataManager localeString]; localizedBook.localizedTitle = theLocalizedTitle; // assume theLocalizedTitle has been defined. [bookLocalizations addObject:localizedBook]; ``` Since the localised titles are being stored in the `LocalizedBook` managed object, you can make the `title` attribute a transient, but if you do that you can't use `title` in a predicate. The nice thing about this approach is that the implementation of the to-many relationship is transparent to any consumers. You simply request `internationalBook.title` and the custom accessor returns the appropriate value based on the user's locale behind the scenes.
Terraform settings - remote state s3 - InvalidParameter validation error ### Environment Terraform v0.12.24 + provider.aws v2.61.0 Running in an alpine container. ### Background I have a basic terraform script running ok, but now I'm extending it and am trying to configure a remote (S3) state. **terraform.tf:** ``` terraform { backend "s3" { bucket = "labs" key = "com/company/labs" region = "eu-west-2" dynamodb_table = "labs-tf-locks" encrypt = true } } ``` The bucket exists, and so does the table. I have created them both with terraform and have confirmed through the console. ### Problem When I run `terraform init` I get: ``` Error refreshing state: InvalidParameter: 2 validation error(s) found. - minimum field size of 1, GetObjectInput.Bucket. - minimum field size of 1, GetObjectInput.Key. ``` ### What I've tried `terraform fmt` reports no errors and happily reformats my `terraform.tf` file. I tried moving the stanza into my `main.tf` too, just in case the `terraform.tf` file was being ignored for some reason. I got exactly the same results. I've also tried running this without the alpine container, from an ubuntu ec2 instance in aws, but I get the same results. I originally had the name of the terraform file in the key. I've removed that (thanks) but it hasn't helped resolve the problem. Also, I've just tried running this in an older image: `hashicorp/terraform:0.12.17` but I get a similar error: ``` Error: Failed to get existing workspaces: InvalidParameter: 1 validation error(s) found. - minimum field size of 1, ListObjectsInput.Bucket. ``` I'm guessing that I've done something trivially stupid here, but I can't see what it is.
# Solved!!! I don't understand the problem, but I have a working solution now. I deleted the `.terraform` directory and reran `terraform init`. This is ok for me because I don't have an existing state. The insight came from reading the error from the 0.12.17 version of terraform, which complained about not being able to read the workspace. ``` Error: Failed to get existing workspaces: InvalidParameter: 1 validation error(s) found. - minimum field size of 1, ListObjectsInput.Bucket. ``` Which initially led me to believe there was a problem with an earlier version of tf reading a newer version's configuration. So, I blew away the `.terraform` and it worked with the older tf, so I did it again and it worked with the newer tf too. Obviously, something had gotten itself screwed up in terraform's storage. I don't know how or why. But, it works for me, so...
Android Flavors : Multiple java and src directories for each flavor Can anyone explain to me how I can use for each of my flavors more than just the main and the flavor-specific java/src directories ? My goal is to have directories which are used by more than one flavor but not all of them. For example I have 3 flavors : A1, A2 and B. - All flavors use main/src (default main src directory) - A1 uses A1/src (default flavor src directory) - A2 uses A2/src (default flavor src directory) - B uses B/src (default flavor src directory) - **A1 and A2 use A/src ("special" shared directory)** Is this possible ? If so, what should I put in my build.gradle file ? And as a bonus question, can I choose in which order gradle goes looking for files in my different directories ? For example if I have a.png declared in both A/src and A1/src, can I tell gradle to first look for this file in A/src, and only if nothing is found look for it in A1/src ?
As described [here](http://tools.android.com/tech-docs/new-build-system/resource-merging) > > As mentioned above, each sourceSet can define multiple resource folders. > > > You can define multiple resource folders. For example something like this: ``` android { ... sourceSets { main { //.... res.srcDirs = ['src/main/res'] } flavorA1 { res.srcDirs = ['src/flavor1/res', 'src/commonA/res'] } flavorA2 { res.srcDirs = ['src/flavor2/res', 'src/commonA/res'] } //.....other flavors } } ```
Assigning an array to an ArrayList in Java Is it possible to assign an `array` to an `ArrayList` in Java?
You can use [`Arrays.asList()`](http://download.oracle.com/javase/1.4.2/docs/api/java/util/Arrays.html#asList(java.lang.Object[])): ``` Type[] anArray = ... ArrayList<Type> aList = new ArrayList<Type>(Arrays.asList(anArray)); ``` or alternatively, [`Collections.addAll()`](http://download.oracle.com/javase/1.5.0/docs/api/java/util/Collections.html#addAll%28java.util.Collection,%20T...%29): ``` ArrayList<Type> aList = new ArrayList<Type>(); Collections.addAll(aList, anArray); ``` Note that you aren't technically assigning an array to a List (well, you can't do that), but I think this is the end result you are looking for.
Using SQLite Trigger to update "LastModified" field This might be more of a design question, but here goes. I'm writing an Android app that uses a local SQLite database (with multiple tables) that syncs with a MySQL database every now-and-then. I only want to update modified rows in my database. To do this, I'm adding a column "`last_modified`" to each row that indicates the time when that row was added/updated/replaced/etc. I'm new to database operations, but I've seen that a Trigger might be the best way to do this. I have a couple questions relating to Triggers, SQLite, and Android. I've read this link: [on update current\_timestamp with SQLite](https://stackoverflow.com/questions/6578439/on-update-current-timestamp-with-sqlite) It basically says that I'm using the right approach. My questions are: 1. Where should I put the `db.execSQL("CREATE TRIGGER...")` statement? Before or after I create the tables? 2. Can I use the same Trigger for every table in my database? i.e, can the Trigger automatically detect which table and row is being updated/inserted/replaced/etc. and notify to set that row's "`last_modified`" field, or do I have to create a separate Trigger for each table? 3. Since I'm quite new to database operations, could you provide an example Android Trigger statement that performs the above behavior, or provide a resource to an example? Or if Triggers are a bad idea, are there any better alternatives? Thank you.
A short and sweet answer for you: 1. After, so the trigger has a valid table to reference. 2. You need to execute a CREATE TRIGGER for every table / column combination you want affected. The database won't *assume* because another table has a `last_modified` column that you want this one to behave the same... 3. The trigger in your link is executable (I used it myself), just change the table / column names. Lastly, using a trigger like this is the *easiest* way I know to maintain a `last_modified` or `last_accessed` timestamp. My trigger (in Java form): ``` private static final String UPDATE_TIME_TRIGGER = "CREATE TRIGGER update_time_trigger" + " AFTER UPDATE ON " + TABLE_NAME + " FOR EACH ROW" + " BEGIN " + "UPDATE " + TABLE_NAME + " SET " + TIME + " = current_timestamp" + " WHERE " + ID + " = old." + ID + ";" + " END"; ``` **Addition** According to the [SQLite website](http://www.sqlite.org/lang_createtrigger.html) you need to create a trigger for each type of action. In other words, you **cannot** use: ``` CREATE TRIGGER trigger_name AFTER UPDATE, INSERT ... ``` From your last comment you may have figured out the best way to handle an INSERT statement for our purpose: ``` CREATE TABLE foo ( _id INTEGER PRIMARY KEY, last_modified TIMESTAMP NOT NULL DEFAULT current_timestamp); ``` In this table, you do not need to create a timestamp trigger for an INSERT statement, since it is done already. (Fun fact: in SQLite, `INTEGER PRIMARY KEY` implicitly makes the column an alias for the auto-assigned `rowid`, so our `_id` column receives an increasing value without needing `AUTOINCREMENT`.)
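The trigger is easy to sanity-check outside Android with Python's built-in `sqlite3` module (the table and column names here are made up for the demo):

```python
import sqlite3
import time

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE note (
        _id INTEGER PRIMARY KEY,
        body TEXT,
        last_modified TIMESTAMP NOT NULL DEFAULT current_timestamp);

    CREATE TRIGGER update_time_trigger
    AFTER UPDATE ON note
    FOR EACH ROW
    BEGIN
        UPDATE note SET last_modified = current_timestamp
        WHERE _id = old._id;
    END;
""")

db.execute("INSERT INTO note (body) VALUES ('hello')")
created = db.execute("SELECT last_modified FROM note").fetchone()[0]

time.sleep(1.1)  # current_timestamp only has one-second resolution
db.execute("UPDATE note SET body = 'hello world' WHERE _id = 1")
modified = db.execute("SELECT last_modified FROM note").fetchone()[0]

print(created, "->", modified)  # the UPDATE bumped last_modified
```

The INSERT gets its timestamp from the column default; only the UPDATE goes through the trigger. (The trigger's own UPDATE does not re-fire it, because SQLite's `recursive_triggers` pragma is off by default.)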
Example code for CMFCMenuButton? Sorry for the newbie question, but can anyone point me at sample code that illustrates the use of the CMFCMenuButton? The Microsoft help refers to "New Controls samples", but these samples seem to be in the Visual Studio 2008 "Feature Pack", and this refuses to install on my system since I'm running VS 2013 and don't have VS 2008. I haven't been able to find the samples as stand-alone code. To be specific, I have a dialog bar in which I want a button labelled Save with drop-down options of Save All and Save Visible (with Save All the default). But any working code would at least get me started.
Declare data members: ``` CMFCMenuButton m_button_menu; CMenu m_menu; ``` Also add the button's id to the message map and data exchange: ``` BEGIN_MESSAGE_MAP(CMyDialog, CDialogEx) ON_BN_CLICKED(IDC_MFCMENUBUTTON1, OnButtonMenu) ... END_MESSAGE_MAP() void CMyDialog::DoDataExchange(CDataExchange* pDX) { CDialogEx::DoDataExchange(pDX); DDX_Control(pDX, IDC_MFCMENUBUTTON1, m_button_menu); } ``` Define: ``` BOOL CMyDialog::OnInitDialog() { CDialogEx::OnInitDialog(); //... m_menu.LoadMenu(IDR_MENU1); m_button_menu.m_hMenu = m_menu.GetSubMenu(0)->GetSafeHmenu(); return TRUE; } ``` Where `IDR_MENU1` is a regular menu bar and we get its first submenu. For example: ``` IDR_MENU1 MENU BEGIN POPUP "Dummy" BEGIN MENUITEM "&Item1", ID_FILE_ITEM1 MENUITEM "&Item2", ID_FILE_ITEM2 END END ``` If the button's drop-down arrow is clicked, a popup menu appears and the menu result is passed to `OnButtonMenu`. If the left side of the button is clicked, then `OnButtonMenu` is called directly, without showing a popup menu. ``` void CMyDialog::OnButtonMenu() { CString str; switch (m_button_menu.m_nMenuResult) { case ID_FILE_ITEM1: str = L"first menu item clicked"; break; case ID_FILE_ITEM2: str = L"second menu item clicked"; break; default: str = L"Button click (popup menu did not appear, or menu ID is not handled)"; break; } MessageBox(str); } ``` \*\* When working with docking controls, dialog bars, etc., MFC may run its own subclassing; I don't think `DoDataExchange` gets called. `m_button_menu` could be invalid. `GetDlgItem` can be used to find the correct pointer: ``` CMFCMenuButton* CMyDlgBar::GetButtonMenu() { CMFCMenuButton* pButton = &m_button_menu; if (!IsWindow(pButton->m_hWnd)) pButton = (CMFCMenuButton*)GetDlgItem(IDC_MFCMENUBUTTON1); return pButton; } ``` Everywhere else we use `GetButtonMenu()` instead of `m_button_menu`. For example: ``` int CMainFrame::OnCreate(LPCREATESTRUCT lpCreateStruct) { if (CFrameWnd::OnCreate(lpCreateStruct) == -1) return -1; //... 
m_dlgbar.Create(...); m_dlgbar.m_menu.LoadMenu(IDR_MENU1); m_dlgbar.GetButtonMenu()->m_hMenu = m_dlgbar.m_menu.GetSubMenu(0)->GetSafeHmenu(); return 0; } void CMainFrame::OnButtonMenu() { CString str; switch (GetButtonMenu()->m_nMenuResult) ... } ``` # What if the Drop-Down Arrow does not show? Then read the answer [here](https://stackoverflow.com/a/3151825/2287576) that explains the changes needed to your RC file.
Selecting max() of multiple columns Ok, here's my table: ``` product_id version_id update_id patch_id 1 1 0 0 1 1 1 0 1 1 1 1 1 1 2 0 1 1 2 1 2 1 0 0 2 2 0 0 2 3 0 0 2 3 0 1 3 1 0 0 3 1 0 1 ``` Now I want to select the latest version of a product, so the version with the highest update\_id & patch\_id. For example, the latest version of - product 1 should return 1, 2, 1 - product 2 should return 3, 0, 1 - product 3 should return 1, 0, 1 I was trying all kinds of stuff with GROUP BY and HAVING, tried subqueries, but I still can't figure out a way to accomplish this. Can anybody help me out to find the right query, or should I think of writing a php function for this? **Edit** Some additional info: - The columns together are the primary key (there are more columns, but for this problem they don't matter) - None of the columns is auto-increment This is the table: ``` CREATE TABLE IF NOT EXISTS `db`.`patch` ( `product_id` INT NOT NULL , `version_id` INT NOT NULL , `update_id` INT NOT NULL , `patch_id` INT NOT NULL , PRIMARY KEY (`product_id`, `version_id`, `update_id`, `patch_id`) , INDEX `fk_patch_update1` (`product_id` ASC, `version_id` ASC, `update_id` ASC) ) ``` **Edit 2** Flagged as duplicate, it is not: The other question looks for records higher than a value for any of the three different columns. In this question we look for the highest version number grouped by the product\_id. **Edit 3** rgz's answer tells me again that this is a duplicate. First of all: this question is older. Secondly, I don't think the answer is the same. rgz suggests using the following query: ``` SELECT product_id, GREATEST(version_id, update_id, patch_id) AS latest_version FROM patch ``` GREATEST(1,2,3) returns 3, right? 
What if we have these values: ``` product_id version_id update_id patch_id 1 1 0 0 1 1 2 8 1 3 0 0 ``` As I understand, this query will return: ``` product_id latest_version 1 1 1 8 1 3 ``` But it should return: ``` product_id version_id update_id patch_id 1 3 0 0 ``` I don't think GREATEST could help. If you think it will, please prove me wrong.
This is one example of when unique identifiers come in useful. Imagine you have an auto-incrementing ID field; you can then find the id you want for each product by using a correlated sub-query... ``` SELECT * FROM yourTable WHERE id = ( SELECT id FROM yourTable AS lookup WHERE lookup.product_id = yourTable.product_id ORDER BY version_id DESC, update_id DESC, patch_id DESC LIMIT 1 ) ``` The equivalent *without* a unique identifier requires multiple correlated sub-queries... ``` SELECT * FROM yourTable WHERE version_id = ( SELECT MAX(version_id) FROM yourTable AS lookup WHERE lookup.product_id = yourTable.product_id ) AND update_id = ( SELECT MAX(update_id) FROM yourTable AS lookup WHERE lookup.product_id = yourTable.product_id AND lookup.version_id = yourTable.version_id ) AND patch_id = ( SELECT MAX(patch_id) FROM yourTable AS lookup WHERE lookup.product_id = yourTable.product_id AND lookup.version_id = yourTable.version_id AND lookup.update_id = yourTable.update_id ) ``` This would be significantly slower than on a table with a unique identifier column. Another alternative (without a unique identifier) is to self-join on different levels of aggregation. (Note that `update` is a reserved word in MySQL, so the middle derived table is aliased `upd`.) ``` SELECT yourTable.* FROM (SELECT product_id, MAX(version_id) AS max_version_id FROM yourTable GROUP BY product_id) AS version INNER JOIN (SELECT product_id, version_id, MAX(update_id) AS max_update_id FROM yourTable GROUP BY product_id, version_id) AS upd ON upd.product_id = version.product_id AND upd.version_id = version.max_version_id INNER JOIN (SELECT product_id, version_id, update_id, MAX(patch_id) AS max_patch_id FROM yourTable GROUP BY product_id, version_id, update_id) AS patch ON patch.product_id = upd.product_id AND patch.version_id = upd.version_id AND patch.update_id = upd.max_update_id INNER JOIN yourTable ON yourTable.product_id = patch.product_id AND yourTable.version_id = patch.version_id AND yourTable.update_id = patch.update_id AND yourTable.patch_id = patch.max_patch_id ```
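The surrogate-key variant can be checked against the question's sample data with Python's built-in `sqlite3` module (the `id` column is the hypothetical auto-increment key the answer proposes; the SQL itself is portable to MySQL):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE patch (
        id INTEGER PRIMARY KEY,  -- surrogate key added for the lookup
        product_id INTEGER, version_id INTEGER,
        update_id INTEGER, patch_id INTEGER)""")
rows = [(1, 1, 0, 0), (1, 1, 1, 0), (1, 1, 1, 1), (1, 1, 2, 0), (1, 1, 2, 1),
        (2, 1, 0, 0), (2, 2, 0, 0), (2, 3, 0, 0), (2, 3, 0, 1),
        (3, 1, 0, 0), (3, 1, 0, 1)]
db.executemany(
    "INSERT INTO patch (product_id, version_id, update_id, patch_id)"
    " VALUES (?, ?, ?, ?)", rows)

# One correlated sub-query picks the single "latest" row id per product.
latest = db.execute("""
    SELECT product_id, version_id, update_id, patch_id
    FROM patch
    WHERE id = (SELECT id
                FROM patch AS lookup
                WHERE lookup.product_id = patch.product_id
                ORDER BY lookup.version_id DESC,
                         lookup.update_id DESC,
                         lookup.patch_id DESC
                LIMIT 1)
    ORDER BY product_id""").fetchall()

print(latest)  # → [(1, 1, 2, 1), (2, 3, 0, 1), (3, 1, 0, 1)]
```

The output matches the expected results from the question: (1, 2, 1) for product 1, (3, 0, 1) for product 2, and (1, 0, 1) for product 3.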
Doesn't the Reflection API break the very purpose of data encapsulation? Very recently I came across the Reflection API and to my surprise we can access and even alter the private variables. I tried the following code

```
import java.lang.reflect.Field;

public class SomeClass{
    private String name = "John";
}

public class Test{
    public static void main(String args[]) throws Exception {
        SomeClass myClass = new SomeClass();
        Field fs = myClass.getClass().getDeclaredField("name");
        fs.setAccessible(true);
        System.out.println("Variable is " + fs.getName() + " and value is " + fs.get(myClass));
        fs.set(myClass, "Sam");
        System.out.println("Variable is " + fs.getName() + " and value is " + fs.get(myClass));
    }
}
```

and I got the following output.

```
Variable is name and value is John
Variable is name and value is Sam
```

We say Java is an Object oriented language and its main features are Data Encapsulation, Inheritance, Polymorphism.. etc. Isn't the reflection API changing the very purpose of Data Encapsulation? Why do we have to use Reflection API? I read in some sites that it can be used for testing purpose but according to me modules are tested and that can be done easily using JUnit test cases. So can anyone explain why do we have such a hack?
> 
> Isn't the reflection API changing the very purpose of Data Encapsulation?
> 
> 

Yes and no.

- Yes, some uses of the reflection API *can* break data encapsulation.
- No, not all uses of the reflection API *do* break data encapsulation. Indeed, a wise programmer only breaks encapsulation via the reflection API when there is a good reason to do so.
- No, the reflection API does not change the *purpose* of data encapsulation. The *purpose* of data encapsulation remains the same ... even if someone wilfully breaks it.

> 
> Why do we have to use Reflection API?
> 
> 

There are **many** uses of reflection that **DO NOT** break encapsulation; e.g. using reflection to find out what super types a class has, what annotations it has, what members it has, to invoke accessible methods and constructors, read and update accessible fields and so on. And there are situations where it is acceptable (to varying degrees) to use the encapsulation-breaking varieties of reflection:

- You might need to look inside an encapsulated type (e.g. access / modify private fields) as the simplest way (or only way) to implement certain unit tests.
- Some forms of Dependency Injection (aka IoC), Serialization and Persistence entail accessing and/or updating private fields.
- Very occasionally, you need to break encapsulation to work around a bug in some class that you cannot fix.

> 
> I read in some sites that it can be used for testing purpose but according to me modules are tested and that can be done easily using JUnit test cases. So can anyone explain why do we have such a hack?
> 
> 

That depends on the design of your class. A class that is designed to be testable will either be testable without the need to access "private" state, or will expose that state (e.g. `protected` getters) to allow testing. If the class doesn't do this, then a JUnit test may need to use reflection to look inside the abstraction. 
This is not desirable (IMO), but if you are writing unit tests for a class that someone wrote, and you can't "tweak" the APIs to improve testability, then you may have to choose between using reflection or not testing at all. --- The bottom line is that data encapsulation is an ideal that we strive to achieve (in Java), but there are situations where the pragmatically correct thing to do is to break it or ignore it. Note that not all OO languages support strong data encapsulation like Java does. For example, Python and Javascript are both unarguably OO languages, yet both make it easy for one class to access and modify the state of objects of another class ... or even change the other classes behaviour. Strong data abstraction is not central to everyone's view of what Object-Oriented means.
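For illustration, here is a minimal sketch of the unit-testing use case mentioned above: reading a private field purely to verify behaviour in a test. The `Counter` class, its `count` field and the `readCount` helper are hypothetical names made up for this example:

```java
import java.lang.reflect.Field;

class Counter {
    private int count = 0;           // deliberately no getter

    void increment() {
        count++;
    }
}

public class ReflectionTestSketch {

    // Reads the private "count" field, for test verification only.
    static int readCount(Counter c) {
        try {
            Field f = Counter.class.getDeclaredField("count");
            f.setAccessible(true);   // knowingly breaks encapsulation
            return (int) f.get(c);
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        Counter c = new Counter();
        c.increment();
        c.increment();
        System.out.println("count = " + readCount(c)); // prints "count = 2"
    }
}
```

This kind of access is best confined to the test suite; if the class is under your control, exposing the state properly is the better fix.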
Dell WD19TB Thunderbolt dock - Failed to authorize device In pursuit of an answer to [this question](https://askubuntu.com/q/1188043/254743), I purchased a [Dell WD19TB](https://www.dell.com/en-us/work/shop/dell-thunderbolt-dock-wd19tb/apd/210-arik/pc-accessories) dock for my XPS 13 7390 "Developer Edition" running 18.04.3 LTS. After connecting everything, the HDMI external monitor works fine, but the USB 3 and gigabit ethernet ports don't work at all. I looked at my devices->Thunderbolt, and the dock shows up as "pending": [![pending](https://i.stack.imgur.com/9wdc1.png)](https://i.stack.imgur.com/9wdc1.png) I clicked on the dock name in that screen and this dialog popped up, asking me to "authorize and connect" to the dock: [![authorize](https://i.stack.imgur.com/aD7dN.png)](https://i.stack.imgur.com/aD7dN.png) I click the "authorize and connect" button, and after typing in my sudo password, I got another dialog stating "Failed to authorize device: kernel error." [![failed](https://i.stack.imgur.com/GAWHE.png)](https://i.stack.imgur.com/GAWHE.png) After this, back on the devices->Thunderbolt screen, the dock shows with an error: [![error](https://i.stack.imgur.com/vAo9M.png)](https://i.stack.imgur.com/vAo9M.png) Dell support has no idea what to do with Ubuntu users, it seems, and directed me to the official Ubuntu forum, where I will be posting as well. Do I need to install some drivers for the dock? If so, what drivers and where do I get them? How do I get a Dell WD19TB dock working with my system?
I figured this out, mostly by accident. In the BIOS, there are several settings related to Thunderbolt. One is "Thunderbolt Security Level," which defaults to "User Authorization." I changed this setting to "No Security," reconnected the dock, booted it up, and now it sees everything that is connected to the dock. [![BIOS Thunderbolt Security](https://i.stack.imgur.com/IkJM5.jpg)](https://i.stack.imgur.com/IkJM5.jpg) Now, when I look in "Devices -> Thunderbolt," the dock shows up as "Authorized." [![Thunderbolt Authorized](https://i.stack.imgur.com/emcyM.png)](https://i.stack.imgur.com/emcyM.png) Hope this helps someone in the future.
Do all Linux distros have the same boot files and all the main files? I have a question: are the boot files, GRUB files and kernel files (the ones essential to run them) the same across all Linux distros, with only the ISO image of the distro being different? I have Fedora installed on my system; can I replace it with Manjaro by changing the GRUB entry? How safe is it?
Various distributions of course have different packages of pretty much everything. Nevertheless, three components are typically rather well isolated from each other: bootloader, kernel, userspace programs.

1. The bootloader needs to be able to boot various kernels, otherwise its usability would be quite limited.
2. The kernel doesn't really depend much on the userspace, since it provides a basic environment for the userspace to run in.
3. The userspace has some dependence on the kernel, but usually not for the basic tasks. It may require various kernel functionality for certain aspects (even rather substantial ones), but it is often possible to use a kernel from another distribution from "around the same time" (a couple of months of difference should not matter, unless the userspace is using some bleeding-edge features).

If you just want to do a piece-wise bootstrap of a new distribution:

- run dist-A
- install dist-B-without-kernel (e.g. in `chroot`)
- boot dist-B-with-kernel-A
- install kernel-B
- boot dist-B-with-kernel-B

it should work just fine (source: *"been there, done that"*). Depending on your particular use case (kernel features needed by the userspace), you may even be able to happily run a "foreign" kernel without issues.
Possible fix for getting NaN in java I have a method in the Candy Class named pricePerHundredGrams and what it is supposed to do is multiply the variable price times 100.00 and divide that answer by the variable weightGrams, and finally return that result to the variable wammy. When the variable wammy is called for in the very 2nd last statement of this code, it is supposed to pass the answer to return result. And ultimately c1 and c2 should display that result as well...but I get NaN for "per hundred grams". What is wrong with my code?

```
public class whatever {
    public static void main (String[] args) {
        processCandies();
        System.out.println("end of processing");
    }

    public static void processCandies() {
        Candy c1 = new Candy("Hershey", 145, 4.35, 233);
        Candy c2 = new Candy("Milky Way", 390, 2.66, 126);
        System.out.println(c1);
        System.out.println(c2);
    }
}

class Candy {
    private String name;
    private int calories;
    private double price;
    private double weightGrams;

    double wammy = pricePerHundredGrams(price, weightGrams);

    /**
     Constructor
     @param name
     @param calories
     @param price
     @param gram
     */
    public Candy(String n, int cal, double p, double wG) {
        name = n;
        calories = cal;
        price = p;
        weightGrams = wG;
    }

    public String getName() {
        return name;
    }

    public int getCalories() {
        return calories;
    }

    public double getPrice() {
        return price;
    }

    public double getWeightGrams() {
        return weightGrams;
    }

    public double pricePerHundredGrams(double price, double weightGrams) {
        return (price * 100.00) / weightGrams;
    }

    public String toString() {
        String result;
        result = name + "\n"
                + calories + " calories\n"
                + weightGrams + " grams\n"
                + wammy + " per hundred grams\n";
        return result;
    }
}
```
You are initializing `wammy` with the result of `pricePerHundredGrams`, but `price` and `weightGrams` haven't been initialized yet, so they're both `0`. For `double` arithmetic, `0` divided by `0` is `NaN` (it's indeterminate in math). Initialize `wammy` *after* `price` and `weightGrams` have valid values in your constructor: ``` public Candy(String n, int cal, double p, double wG) { name = n; calories = cal; price = p; weightGrams = wG; // Initialize wammy here. wammy = pricePerHundredGrams(price, weightGrams); } ``` Additionally, since they are already instance variables, you don't need to pass `price` and `weightGrams` as parameters to `pricePerHundredGrams`.
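As a quick, self-contained illustration of why this produces `NaN` rather than an exception: unlike integer division, `double` division by zero does not throw; `0.0 / 0.0` silently evaluates to `NaN`, which then flows straight into the string output.

```java
public class NanDemo {
    public static void main(String[] args) {
        // This is effectively what happened before the fix:
        double price = 0.0;        // not yet assigned by the constructor
        double weightGrams = 0.0;  // not yet assigned by the constructor
        double wammy = (price * 100.00) / weightGrams;

        System.out.println(wammy);                 // prints NaN
        System.out.println(Double.isNaN(wammy));   // prints true
        System.out.println(wammy == wammy);        // prints false: NaN is not equal to itself
    }
}
```

This is also why a check like `wammy == Double.NaN` never works; use `Double.isNaN(wammy)` instead.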
I am trying to find a solution to build a CSS string within my MVC ASP.NET application, Thoughts please? I am trying to find a solution to build a CSS string within my ASP.NET MVC Web Application. I would expect this to be at the selector level. So for example I may have a class "TableFormat" which might have the following CSS string. ``` font-family: Arial, Helvetica, sans-serif; font-size: 12px; font-style: normal; text-align: left; background-color: green; color: White; ``` Instead of having my users needing to know CSS, it would be far better to have a widget that allows them to select a font, color etc and then behind the scenes the widget would construct the above string. I then store this away into the DB for future use in a Razor View. I suspect it is the sort of thing that might exist as a JS widget. However I have not found anything apart from the dedicated CSS builders within bigger packages such as DW. **So my question is:** Do you know of a Javascript Control that does this, or another control perhaps within the ASP.NET MVC world that provides the above features. This would map to a textarea field. Otherwise I may need to write my own control.
You need to build a backend part where the users would select the properties they want and store it in the database. Then you can create a new `Controller` that would handle those custom values, something like this

```
public class CssController : Controller
{
    public ActionResult Custom()
    {
        ViewData["fontFamily"] = "Arial, Helvetica, sans-serif"; //get from database
        ViewData["fontSize"] = "12px"; //get from database
        ViewData["selector"] = ".TableFormat"; //get from database

        return View();
    }
}
```

`Custom.cshtml` View would look something like this

```
@{
    Layout = null;
    Response.ContentType = "text/css";
}

@ViewData["selector"] {
    font-family: @ViewData["fontFamily"];
    font-size: @ViewData["fontSize"];
}
```

If you navigate to /css/custom you would see something like this

```
.TableFormat {
    font-family: Arial, Helvetica, sans-serif;
    font-size: 12px;
}
```

Now in the page that you want dynamically generated css you can include the view as a regular css file, for example:

```
<link href="/Css/Custom" rel="stylesheet" />
```
Is there a string type with 8-bit chars? I need to store many strings in RAM. They do not contain special Unicode characters; they all contain only characters from "ISO 8859-1", which is one byte each. Now I could convert every string, store it in memory and convert it back to use it with .Contains() and methods like this, but this would be overhead (in my opinion) and slow. Is there a string class that is fast and reliable and offers some methods of the original string class like .Contains()? I need this to store more strings in memory with less RAM used. Or is there another way to do it?

Update: Thank you for your comments and your answer. I have a class that stores strings. Then with one method call I need to figure out if I already have that string in memory. I have about **1000 strings a second** to check against the list, and hundreds of millions in total. The average size of a string is about 20 chars. It is really the RAM that concerns me. I even thought about compressing some millions of strings and storing these packages in memory. But then I would need to decompress them every time I need to access the values. I also tried to use a HashSet, but the needed memory amount was even higher. I don't need the true value, just to know whether the value is in the list. So if there is a hash value that can do it, even better. But everything I found needed more memory than the pure string. Currently there is no plan for further internationalization. So it is something I would deal with when it is time to :-) I don't know if using a database would solve it. I don't need to fetch anything, just to know if the value was stored in the class. And I need to do this fast.
It is very unlikely that you will win any significant performance from this. However, if you need to save memory, this strategy may be appropriate. - To convert a `string` to a `byte[]` for this purpose, use `Encoding.Default.GetBytes()`[1]. - To convert a `byte[]` back to a `string` for display or other string-based processing, use `Encoding.Default.GetString()`. - You can make your code look nicer if you use extension methods defined on `string` and `byte[]`. Alternatively, you can wrap the `byte[]` in a wrapper type and put the methods there. Make this wrapper type a `struct`, not a `class`, otherwise it will incur extra heap allocations, which is what you’re trying to avoid. I want to warn you, though — you are throwing away the ability to have Unicode in your application. You should normally have all alarm bells go off every time you *think* you need to do this. It is best if you structure your code in such a way that you can easily go back to using `string` when memory sizes will have gone up and memory consumption stops being an issue. --- [1] `Encoding.Default` returns the current 8-bit codepage of the running operating system. The default for this on English-language Windows is Windows-1252, which is what you want. For Russian Windows it will be Windows-1251 (Cyrillic) etc.
data modeling in Cassandra with columns that can be text or numbers I have a table with 5 columns.

```
1. ID - a number, but it can be stored as text or number
2. name - text
3. date - a date value, but it can be stored as date or text
4. time - a number, but it can be stored as text or number
5. rating - a number, but it can be stored as text or number
```

I want to find out which data types will make my table fastest for writes. How can I find this out? Is there a cassandra-stress YAML for this?
Regarding the [answer](https://stackoverflow.com/questions/21360688/key-validation-class-type-in-cassadra-utf8-or-longtype) that @BryceAtNetwork23 provided, it will be the same with Cassandra 2.1 or Cassandra 2.2 (but Cassandra 3.0 will probably be a different story as the team is currently rewriting the storage engine, see [CASSANDRA-8099](https://issues.apache.org/jira/browse/CASSANDRA-8099)). Data is still stored in binary. However, there's more to say here, and you may want to consider the actual data being stored and the performance your project needs to achieve (queries per second, etc.). Depending on these goals or constraints, an interesting approach is to have a look at the size of the serialized data for a given [type on cassandra](http://docs.datastax.com/en/cql/3.0/cql/cql_reference/cql_data_types_c.html).

- If the data is a number, for example a `long` in Java that has a size of 8 bytes, it matches the Cassandra `bigint` type in size, which means there's no cost associated with serializing it; a plain copy will do. This also has the benefit that the key is small enough that it doesn't *stress* the Cassandra key cache.
- If the data is a piece of text, for example a `String` in Java, it is encoded in UTF-16 in the runtime, but when serialized in Cassandra with the `text` type, UTF-8 is used. UTF-16 always uses 2 bytes *per character* (and sometimes 4 bytes), but UTF-8 is space efficient and, depending on the character, can be 1, 2, 3 or 4 bytes long. That means there's CPU work to serialize such data for encoding/decoding purposes. Also, depending on the text, for example `158786464563`, the data will be stored with 12 bytes. That means more space is used, and more IO as well. Note Cassandra offers the `ascii` type that follows the US-ASCII character set and always uses [1 byte per character](https://github.com/apache/cassandra/blob/cassandra-2.1/src/java/org/apache/cassandra/serializers/AsciiSerializer.java). 
- If the data is a UUID (a value of 128 bits), in Java the `UUID` type uses 2 `long`s so it is 16 bytes long, and Cassandra stores them as 16 bytes as well ([they use the Java UUID type](https://github.com/apache/cassandra/blob/cassandra-2.1/src/java/org/apache/cassandra/serializers/UUIDSerializer.java)).

Again, it always depends on the mileage of your project: what the goals are, what the existing constraints are. But here are my *un-educated* options:

- If the data that has to be inserted is always a number inside the long range `[−9,223,372,036,854,775,808 ; +9,223,372,036,854,775,807]`, I'd go for a `bigint` type
- UUID is fine
- If the cluster is not under heavy load (like 100k queries per second) and space is not an issue, then `text` is not an issue either; but if it is, or if usage may grow, I'd avoid `text` for keys if possible.

Another option is to use the `blob` type, i.e. a binary type, where it is possible to use any data the way you want according to the business of the software. This could allow space-efficient, IO-efficient storage, and be CPU efficient as well. But depending on the needs, it may be necessary to manage a lot of things in the client code, like ordering, serialization, comparison, mapping, etc...
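As a small, self-contained Java check of the sizes discussed above (using the same example value from the answer; `Long.BYTES` requires Java 8+):

```java
import java.nio.charset.StandardCharsets;

public class SerializedSizes {
    public static void main(String[] args) {
        // A long (Cassandra bigint) is a fixed 8 bytes regardless of the value.
        System.out.println("long: " + Long.BYTES + " bytes");

        // The same number stored as text costs one byte per digit in UTF-8.
        String asText = "158786464563";
        System.out.println("text: " + asText.getBytes(StandardCharsets.UTF_8).length + " bytes");

        // UTF-8 is variable-width: a non-ASCII character like '\u00e9' takes 2 bytes.
        String eAcute = "\u00e9";
        System.out.println("e-acute: " + eAcute.getBytes(StandardCharsets.UTF_8).length + " bytes");
    }
}
```

So for this particular value, `bigint` costs 8 bytes where `text` costs 12, before any encoding/decoding CPU work.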
git stash equivalents in other revision control systems? Do hg, svn or others have an option like git stash?
The general name for that feature is: **Shelving**: the ability to actually upload intermediate revisions to the server without really checking them in. In a CVCS (Centralized VCS), you actually need to upload that intermediate data to a central server. But in a [DVCS (Distributed VCS)](https://stackoverflow.com/questions/2563836/sell-me-distributed-revision-control/2563917#2563917), you just need to store it in the local repository. There is:

- the [shelve extension](https://www.mercurial-scm.org/wiki/ShelveExtension) for Mercurial
- [temporary branches for SVN](https://stackoverflow.com/questions/1554278/temporarily-put-away-uncommited-changes-in-subversion-a-la-git-stash), or [patch files](http://www.benrady.com/2009/09/svn-replacement-for-git-stash.html)
- [p4tar](http://public.perforce.com/wiki/P4tar) (again patch based) for Perforce, even though [Perforce 2009.2 now has shelve and unshelve features](https://stackoverflow.com/questions/2104184/is-there-an-equivalent-to-git-stash-in-perforce/2105898#2105898).
- [saved checked-out data in Plastic SCM](http://codicesoftware.blogspot.com/2006/11/branching-and-shelving.html) (for shelving data)

You can find all the other SCM shelving commands in this [SCM comparison table on Wikipedia](http://en.wikipedia.org/wiki/Comparison_of_revision_control_software#Advanced_commands).

- Accurev: ~~keep / co~~ (this is disputed [in this question](https://stackoverflow.com/q/19765112/6309))
- Bazaar: shelve / unshelve
- Darcs: revert / unrevert
Serialize data-attributes I have a drag and drop system, and each dropped item receives three data-attributes.

- data-id
- data-order
- data-content

Now I should put these attributes into a multidimensional array. It should look like this:

data-order = "0" [data-id = "1", data-content = "blabla"];
data-order = "1" [data-id = "2", data-content = "another content"];

But I have no idea how to achieve this. I haven't worked with arrays in jQuery yet. Thanks for the help! Here's something I've been trying, but without success:

```
<li class="dropped" data-order="0" data-id="1" data-content="blabla"></li>
<li class="dropped" data-order="0" data-id="2" data-content="another content"></li>

$(".send").click(function() {
    var itterate = $(".dropped");
    var data_array = new Array();
    for (var i in itterate.data()) {
        var sub_array = new Array();
        sub_array['data-order'] = i;
        sub_array['data-id'] = itterate.data()[i];
        sub_array['data-content'] = itterate.data([i]);
        data_array.push(sub_array);
    }
    var serialized = $.param(itterate.serializeArray().concat(data_array));
    $("#result").text(serialized);
});
```

Here's a [jsfiddle](http://jsfiddle.net/hk120Lhq/)
<http://jsfiddle.net/hk120Lhq/1/> ``` $(".send").click(function() { var data_array = new Array(); $(".dropped").each(function(){ var item = {}; item['data-order'] = $(this).data('order'); item['data-id'] = $(this).data('id'); item['data-content'] = $(this).data('content'); data_array.push(item); }); var serialized = JSON.stringify(data_array); $("#result").text(serialized); }); ``` So `.each` is the same as the loop you are trying to do. I'm not sure what you were doing in your loop, but I think this is the result you wanted. Further simplified: <http://jsfiddle.net/hk120Lhq/3/> If you just want to get all the data attributes from your li and aren't sure how many there will be or what they will be called, do this: <http://jsfiddle.net/hk120Lhq/4/> ``` $(".send").click(function() { var data_array = new Array(); $(".dropped").each(function(){ var item = {}; for(var i in $(this).data()){ item[i] = $(this).data(i); } data_array.push(item); }); var serialized = JSON.stringify(data_array); $("#result").text(serialized); }); ```
Rendering view after multiple SELECT queries in Express I'm a bit new in Node.JS and the Express framework and I have a great problem with the code below:

```
app.get('/student', function(req, res) {
    var dbRequest = 'SELECT * FROM Students WHERE IDCard = \'' + req.query['id'] + '\'';
    db.all(dbRequest, function(error, rows) {
        if(rows.length !== 0) {
            /* Save data. */
        }
        else res.render('incorrect_student'); /* Render the error page. */
    });

    dbRequest = 'SELECT * FROM Groups WHERE Name = \'' + req.query['group'] + '\'';
    db.all(dbRequest, function(error, rows) {
        /* Add selected data to previous saved data. */
    });

    res.render('student', {data: /* data from both queries above */});
});
```

As I have written in the comment blocks, I would like to: execute the first select query, save data from the *rows* object, execute the second query, again save the received data in another object, and then finally render the page passing data from both queries. My question is, what is the best way to do that? I know that there is a problem caused by the anonymous function. I have tried to fix the problem for over five hours as follows:

1. Clone the *rows* object to another one in the anonymous function and then pass it to *res.render*. This solution doesn't work, because the values of the copied object are not visible (undefined) outside this function - only inside it.
2. Render the *student* page twice - it was really naive of course.
3. Change the *db.all* command to *db.prepare* and then *db.run* - it wasn't working either.
4. Return an object from the anonymous function and then assign it to an external object defined between *app.get* and *var dbRequest*. The result was as described in the 1st point.

I also have an idea to create "subpages" containing parts of the *student* page, which need variables from only one query. The other idea is to use some other functions of the *db*, *req*, *res* or *app* objects. But, as I said before, I'm new in Express and I don't know how to realize my above ideas. 
Please note that it is impossible to join tables - in fact, I want to make 4-5 queries and then render my view. I'm using SQLite3 database. Thank you very much for your help! I hope that you'll help me to solve my problem.
In your situation, I would split up the database calls into separate calls, and make use of the `next` middleware function. It would looks something like: ``` function findStudent(req, res, next) { var dbRequest = 'SELECT * FROM Students WHERE IDCard = \'' + req.query['id'] + '\''; db.all(dbRequest, function(error, rows) { if(rows.length !== 0) { req.students = rows; return next(); } res.render('incorrect_student'); /* Render the error page. */ }); } function findGroups(req, res, next) { dbRequest = 'SELECT * FROM Groups WHERE Name = \'' + req.query['group'] + '\''; db.all(dbRequest, function(error, rows) { /* Add selected data to previous saved data. */ req.groups = rows; next(); } }); } function renderStudentsPage(req, res) { res.render('student', { students: req.students, groups: req.groups }); } app.get('/student', findStudent, findGroups, renderStudentsPage); ``` When you GET `/student`, you first call `findStudent`. Once the db call is finished, it will either render an error page, or call next(). Calling next goes to the next function, `findGroups`, which will then call `renderStudentsPage`. You can store the data on the req object as you move down the line of functions. Hope this helps, and here is more info: <http://expressjs.com/guide/using-middleware.html> --- edit/note: I did not mention it earlier, but if you pass in an argument when calling `next()`, you will trigger the error handling state. Convention dictates `next()` is left parameter-less unless you have met an error instance. You want to separate out the UI rendering aspect from the database call, so going further, your code could look like: ``` function findStudent(req, res, next) { var dbRequest = 'SELECT * FROM Students WHERE IDCard = \'' + req.query['id'] + '\''; db.all(dbRequest, function(error, rows) { if (error || !rows.length) { return next(error); } req.students = rows; return next(); }); } ``` And then elsewhere in your code you can handle rendering error pages.
Unable to get AppCompat ActionBar/Toolbar to become an overlay So, I have a Toolbar which I'm setting as my supportActionBar, which I want to be in overlay mode. At this point I feel like I've tried everything, but nothing seems to work. Here are the styles I currently have:

```
<style name="Theme.ArgleBargle" parent="@style/Theme.AppCompat.Light">
    <item name="colorPrimary">@color/primary</item>
    <item name="colorAccent">@color/accent</item>
    <item name="vpiIconPageIndicatorStyle">@style/Widget.IconPageIndicator</item>
    <item name="vpiTabPageIndicatorStyle">@style/Widget.TabPageIndicator</item>
    <item name="windowActionBar">false</item>
    <item name="windowActionBarOverlay">true</item>
    <item name="android:windowActionBarOverlay">true</item>
    <item name="android:windowBackground">@color/background</item>
</style>

<style name="CustomActionBar" parent="@style/ThemeOverlay.AppCompat.Light">
    <item name="android:windowActionBarOverlay">true</item>
    <!-- Support library compatibility -->
    <item name="windowActionBarOverlay">true</item>
</style>
```

And here's the code for my Toolbar:

```
<android.support.v7.widget.Toolbar
    android:id="@+id/actionbar"
    android:layout_height="wrap_content"
    android:layout_width="match_parent"
    android:minHeight="?attr/actionBarSize"
    android:background="?attr/colorPrimary"
    app:theme="@style/CustomActionBar"/>
```

And finally, here's where I'm setting it in the Activity:

```
actionbar = (Toolbar) findViewById(R.id.actionbar);
setSupportActionBar(actionbar);
```

I've also tried calling `getWindow().requestFeature(Window.FEATURE_ACTION_BAR_OVERLAY)` with no luck.
So, it turns out the issue was simultaneously totally simple, and yet not obvious (to me, at least). Basically the issue is that although the Toolbar is being set as the Actionbar, it is still at the end of the day a layout element declared in XML. So, the trick is of course to make sure it overlaps in the XML. Here's a sample from my layout, now: ``` <RelativeLayout android:layout_width="match_parent" android:layout_height="match_parent" android:orientation="vertical"> <android.support.v7.widget.Toolbar android:id="@+id/actionbar" android:layout_height="wrap_content" android:layout_width="match_parent" android:minHeight="?attr/actionBarSize" android:background="?attr/colorPrimary"/> <FrameLayout android:id="@+id/container" android:layout_width="match_parent" android:layout_height="match_parent" /> </RelativeLayout> ``` All works as expected now. Hope this can help someone else if they have the same issue!
How do I add a swift class to my framework header? In Xcode 6 (beta 4 6A267n), I added a Framework (selecting Swift as the language) and the framework header has this comment: // In this header, you should import all the public headers of your framework using statements like #import Through SO I found that you have to append "-Swift" to your header file name; however, I was not able to get this to work. I've also added $(SRCROOT) to my header search path.
It sounds like you are trying to create your own Swift library? In which case, you do not need to create an external interface as described here: [Swift: how can I create external interface for static library (public headers analog in Objective-C .h)](https://stackoverflow.com/questions/24070722/swift-how-can-i-create-external-interface-for-static-library-public-headers-an) With Swift you identify your public interface by marking classes and methods as `public`. The header that has the `-Swift` suffix is a [bridging header](https://developer.apple.com/library/prerelease/ios/documentation/Swift/Conceptual/BuildingCocoaApps/MixandMatch.html) and is what you use to bridge Objective-C code so that it can be used in your Swift application.
Creating sub items in solution explorer We use a lot of partial classes in C# to categorise our code for large classes. In the solution explorer we will have code files listed as such:

```
- MyClass.cs
- MyClass.Partial1.cs
- MyClass.Partial2.cs
```

But what we want to have is something like this (as you would see with designer files etc.):

```
- MyClass.cs
  ˪ MyClass.Partial1.cs
  ˪ MyClass.Partial2.cs
```

I'm aware that you can modify the project file manually to set up this dependency like this:

```
<Compile Include="MyClass.cs" />
<Compile Include="MyClass.Partial1.cs">
  <DependentUpon>MyClass.cs</DependentUpon>
</Compile>
<Compile Include="MyClass.Partial2.cs">
  <DependentUpon>MyClass.cs</DependentUpon>
</Compile>
```

Is there a short-cut in Visual Studio? Or maybe an extension that will help with this? (Without needing to edit the project file manually)
Duplicate of [this](https://stackoverflow.com/questions/3621345/how-to-group-partial-class-files-in-solution-explorer-vs2010) and [this](https://stackoverflow.com/questions/3617539/group-files-in-visual-studio) question. Their answer: `Group Items` as available in [VSCommands for Visual Studio 2010](http://visualstudiogallery.msdn.microsoft.com/d491911d-97f3-4cf6-87b0-6a2882120acf/) There is also a plugin that does **only** this: [NestIn](http://visualstudiogallery.msdn.microsoft.com/9d6ef0ce-2bef-4a82-9a84-7718caa5bb45) **Update** There is a second plugin that does pretty much the same thing: [File Nesting](https://visualstudiogallery.msdn.microsoft.com/3ebde8fb-26d8-4374-a0eb-1e4e2665070c)
Hyper-V vs Virtual Machine Platform vs Windows Hypervisor Platform settings in Programs and Features? Windows 10 shows these three separate features that seem to be the same thing or related. - Hyper-V (note that in the latest Windows 10 version this has disappeared from the "Windows Features" checklist depicted below); it only shows in the services list. - Virtual Machine Platform - Windows Hypervisor Platform What does each setting do exactly, and how do they correlate? [![enter image description here](https://i.stack.imgur.com/WVui5.jpg)](https://i.stack.imgur.com/WVui5.jpg)
> > How do they correlate? > > > They are separate independent features and do not directly correlate with one another. > > What does each setting do exactly? > > > - `Hyper-V` is Microsoft's Hypervisor. - `Virtual Machine Platform` - "Enables platform support for virtual machines" and is required for [WSL2](https://docs.microsoft.com/en-us/windows/wsl/wsl2-about). Virtual Machine Platform can be used to create MSIX Application packages for an App-V or MSI. - `Windows Hypervisor Platform` - "Enables virtualization software to run on the Windows hypervisor" and at one time was required for [Docker on Windows](https://docs.docker.com/docker-for-windows/install/#system-requirements). The Hypervisor platform is an API that third-party developers can use in order to use Hyper-V. Oracle VirtualBox, Docker, and QEMU are examples of these projects. > > The Windows Hypervisor Platform adds an extended user-mode API for > third-party virtualization stacks and applications to create and > manage partitions at the hypervisor level, configure memory mappings > for the partition, and create and control the execution of virtual > processors. > > > Sources: - [Windows Hypervisor Platform](https://docs.microsoft.com/en-us/virtualization/api/) - [MSIX documentation](https://docs.microsoft.com/en-us/windows/msix/) - [Why can't VirtualBox or VMware run with Hyper-V enabled on Windows 10](https://superuser.com/questions/1208850/why-cant-virtualbox-or-vmware-run-with-hyper-v-enabled-on-windows-10)
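Each of the three features can also be toggled from an elevated command prompt. A sketch using DISM (these are the feature names as of recent Windows 10 builds; a reboot is usually required afterwards):

```
dism.exe /online /enable-feature /featurename:Microsoft-Hyper-V-All /all /norestart
dism.exe /online /enable-feature /featurename:VirtualMachinePlatform /all /norestart
dism.exe /online /enable-feature /featurename:HypervisorPlatform /all /norestart
```

Replace `/enable-feature` with `/disable-feature` to turn a feature off; the PowerShell equivalent is `Enable-WindowsOptionalFeature -Online -FeatureName <name>`.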
ESLint Unexpected use of isNaN I'm trying to use the `isNaN` global function inside an arrow function in a Node.js module but I'm getting this error: `[eslint] Unexpected use of 'isNaN'. (no-restricted-globals)` This is my code:

```
const isNumber = value => !isNaN(parseFloat(value));

module.exports = {
  isNumber,
};
```

Any idea what I'm doing wrong? PS: I'm using the Airbnb style guide.
As the [documentation suggests](https://github.com/airbnb/javascript#standard-library--isnan), use [`Number.isNaN`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Number/isNaN). ``` const isNumber = value => !Number.isNaN(Number(value)); ``` --- Quoting Airbnb's documentation: > > Why? The global isNaN coerces non-numbers to numbers, returning true > for anything that coerces to NaN. If this behavior is desired, make it > explicit. > > > ``` // bad isNaN('1.2'); // false isNaN('1.2.3'); // true // good Number.isNaN('1.2.3'); // false Number.isNaN(Number('1.2.3')); // true ```
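The behaviour the rule guards against is easy to see side by side; a short sketch (the `isNumber` helper mirrors the one above, with explicit `Number()` coercion):

```javascript
// Number.isNaN does not coerce: it is true only for the actual NaN value.
console.log(Number.isNaN('abc')); // false -- 'abc' is a string, not NaN
console.log(isNaN('abc'));        // true  -- global isNaN coerces to NaN first

// Making the coercion explicit satisfies the lint rule and keeps intent clear.
const isNumber = value => !Number.isNaN(Number(value));

console.log(isNumber('1.2'));   // true
console.log(isNumber('1.2.3')); // false -- Number('1.2.3') is NaN
console.log(isNumber(''));      // true  -- Number('') is 0
```

Note that `Number('')` is `0` while `parseFloat('')` is `NaN`, so the two variants disagree on empty strings; pick the coercion that matches your definition of "number".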
How do I return a Python bottle template that prints out key-value pairs from a dictionary? The purpose is to extract data from a dictionary and return the key-value pairs in the form of a table. This is the section of my Python code:

```
dictionary = dict()
dictionary = {'hello': 1, 'hi': 2, 'go': 3}
output = template('make_table', wordList=dictionary)
return output
```

This is part of my make\_table.tpl file:

```
<table>
%for index in wordList:
 <tr>
 <td>{{index}} </td>
 </tr>
%end
</table>
```

Unfortunately the tpl file only displays the keys: 'hello', 'hi' and 'go', but not their values. What I want is to be able to display:

> 
> hello 1 hi 2 go 3
> 
> 

Can anybody tell me how I can access the values as well in the tpl file?
You can iterate over the dict items in the template using `iteritems()`:

```
<table>
%for key, value in wordList.iteritems():
 <tr>
 <td>{{key}} </td>
 <td>{{value}} </td>
 </tr>
%end
</table>
```

Demo:

```
>>> from bottle import template
>>> t = """
... <table>
... %for key, value in wordList.iteritems():
... <tr>
... <td>{{key}} </td>
... <td>{{value}} </td>
... </tr>
... %end
... </table>
... """
>>> print template(t, wordList={'hello': 1, 'hi': 2, 'go': 3})

<table>
 <tr>
 <td>go </td>
 <td>3 </td>
 </tr>
 <tr>
 <td>hi </td>
 <td>2 </td>
 </tr>
 <tr>
 <td>hello </td>
 <td>1 </td>
 </tr>
</table>
```

Note that `iteritems()` is Python 2 only; on Python 3, use `wordList.items()` instead.
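On Python 3 the same loop uses `items()`. A plain-Python sketch of the markup the template ends up producing (standard library only; the dict mirrors the question):

```python
word_list = {'hello': 1, 'hi': 2, 'go': 3}

# Build one <tr> per key/value pair, like the %for loop in the template.
rows = [
    "<tr><td>{} </td><td>{} </td></tr>".format(key, value)
    for key, value in word_list.items()
]
table = "<table>\n" + "\n".join(rows) + "\n</table>"
print(table)
```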
How to prevent a label in a stack layout from getting cut off I've got two labels next to each other. The first is pretty long and should get truncated if necessary. The second should be displayed completely - no matter what. **This is my code:** ``` MainPage = new ContentPage { Content = new StackLayout { Orientation = StackOrientation.Horizontal, VerticalOptions = LayoutOptions.CenterAndExpand, BackgroundColor = Color.Orange, Padding = 10, Children = { new Label { Text = "The answer to life the universe and everything", HorizontalOptions = LayoutOptions.Start, LineBreakMode = LineBreakMode.TailTruncation, BackgroundColor = Color.Yellow, }, new Label { Text = "42", HorizontalOptions = LayoutOptions.EndAndExpand, LineBreakMode = LineBreakMode.NoWrap, BackgroundColor = Color.Yellow, }, }, }, }; ``` **This is what I get:** ![](https://i.stack.imgur.com/JYr4j.png) **My question:** How can I prevent "42" from getting cut off? **Explanation of the current result:** I think I know, why the second label is too small. The size of both labels is negotiated *before* the first text is truncated. So both labels need to shrink a bit, although the first label could handle it better. **Idea 1:** I experimented with a relative layout (instead of the stack layout). However, it's a rather complicated approach and I get different results on iOS and Android. So I hope for a simpler fix. **Idea 2:** Should I override one of the layouting or size negotiation methods? Maybe the "42" layout can enforce getting it's desired width. **Idea 3:** Maybe we can come up with custom label renderers for iOS and Android that handle the size negotiation as expected. --- **Update** "42" is just an example. In my final app, this will change dynamically and can be a string of different length. So setting `MinimumWidthRequest` doesn't help without knowing the width of the rendered string.
> > How can I prevent "42" from getting cut off? > > > You can set the `MinimumWidthRequest` in the "42" `Label` to prevent it from being cut off. For details about `MinimumWidthRequest` please refer to remarks section of the [document](https://developer.xamarin.com/api/property/Xamarin.Forms.VisualElement.MinimumWidthRequest/). Code example: ``` MainPage = new ContentPage { Content = new StackLayout { Orientation = StackOrientation.Horizontal, VerticalOptions = LayoutOptions.CenterAndExpand, BackgroundColor = Color.Orange, Padding = 10, Children = { new Label { Text = "The answer to life the universe and everything The answer to life the universe and everything", HorizontalOptions = LayoutOptions.Start, LineBreakMode = LineBreakMode.TailTruncation, BackgroundColor = Color.Yellow, }, new Label { MinimumWidthRequest = 50, Text = "42", HorizontalOptions = LayoutOptions.EndAndExpand, LineBreakMode = LineBreakMode.NoWrap, BackgroundColor = Color.Yellow, }, }, }, }; ``` Here is the result: [![enter image description here](https://i.stack.imgur.com/0tHCT.png)](https://i.stack.imgur.com/0tHCT.png)
omni xml create xml What is the fastest way to create an XML file in this format:

```
<?xml version="1.0" encoding="Unicode" standalone="yes"?>
<A V1="string" V2="string">
 <B>
 <C V3="1" V4="1" V5="0"/>
 </B>
 <C V6="14.25" V7="0.2"/>
 <D>
 <E V8="1" V9="1" V10="2">
 </E>
 <E V8="2" V9="1" V10="2">
 <F V11="a" V12="B">
 <G>0</G>
 </F>
 </E>
 <E V8="1" V9="1" V10="2">
 </E>
 <E V8="2" V9="1" V10="2">
 <F V11="a" V12="B">
 <G>0</G>
 </F>
 </E>
 </D>
</A>
```

There are a lot of E elements, which I can generate in a loop. However I can't seem to grasp the best approach with OmniXML. Creating 10 to 20 objects for so many constants seems messy and excessive. And could you also mention how to set the encoding when generating the file?
This should get you started: ``` uses OmniXML, OmniXMLUtils; procedure GetEAttr(var v8, v9, v10: integer); begin v8 := Random(10); v9 := Random(10); v10 := Random(10); end; procedure TForm54.FormCreate(Sender: TObject); var i : integer; node1 : IXMLNode; node2 : IXMLNode; root : IXMLNode; v10 : integer; v8 : integer; v9 : integer; xmlDoc: IXMLDocument; begin xmlDoc := CreateXMLDoc; xmlDoc.AppendChild(xmlDoc.CreateProcessingInstruction('xml', 'version="1.0" encoding="Unicode" standalone="yes"')); root := AppendNode(xmlDoc, 'A'); SetNodeAttr(root, 'V1', 'string'); SetNodeAttr(root, 'V2', 'string'); node1 := AppendNode(root, 'B'); node2 := AppendNode(node1, 'C'); SetNodeAttr(node2, 'V3', '1'); SetNodeAttr(node2, 'V4', '1'); SetNodeAttr(node2, 'V5', '0'); node1 := AppendNode(root, 'C'); SetNodeAttr(node1, 'V6', '14.25'); SetNodeAttr(node1, 'V7', '0.2'); node1 := AppendNode(root, 'D'); for i := 1 to 4 do begin GetEAttr(v8, v9, v10); node2 := AppendNode(node1, 'E'); SetNodeAttrInt(node2, 'V8', v8); SetNodeAttrInt(node2, 'V9', v9); SetNodeAttrInt(node2, 'V10', v10); end; XMLSaveToFile(xmlDoc, 'test.xml', ofIndent); end; ```
How can I use Perl default array variable @\_ with push? In the [perlvar documentation](http://perldoc.perl.org/perlvar.html) there is this text about `@_`:

> 
> Within a subroutine the array @\_ contains the parameters passed to
> that subroutine. Inside a subroutine, @\_ is the default array for the
> array operators push, pop, shift, and unshift.
> 
> 

It is common to use `shift` without arguments to get the first element of the array. It is very often used as:

```
sub some_method {
    my $self = shift; # the same as `my $self = shift @_;`
    ...
}
```

But although the documentation says it can be used with `push`, I can't create a working example without explicitly passing `@_` to `push`. From reading this doc I expected `push 123;` to push onto `@_`, but it does not work.
I believe you've found an error in the documentation. `@_` is the default for `pop` and `shift`, but not for `push` and `unshift`. For both `push` and `unshift`, the array has to be specified explicitly. `perldoc -f push` shows the syntax as: ``` push ARRAY,LIST push EXPR,LIST ``` with no option to leave the array unspecified; likewise for `perldoc -f unshift`. (The OP has submitted a Perl bug report; see <https://gist.github.com/bessarabov/2e938f4bbb79e78d1941>) **UPDATE:** This has been corrected. In the git repo for Perl, it was corrected in commit `256ca3d37ed232e10fbb2884be45691e745be7a9`, 2015-06-21. The fix appears in releases 5.23.1 and 5.24.0. This doesn't seem to be mentioned in any of the `perldelta*.pod` files.
What is the difference between [Class new] and [[Class alloc] init] in iOS? > > **Possible Duplicate:** > > [alloc, init, and new in Objective-C](https://stackoverflow.com/questions/3330963/alloc-init-and-new-in-objective-c) > > > I am a little confused about `[Class new]` and `[[Class alloc] init]`. I have defined an object `content` using `[Class new]` and `[[Class alloc] init]`. ``` (1). NSMutableArray *content = [NSMutableArray new]; (2). NSMutableArray *content = [[NSMutableArray alloc] init]; ``` My question is about the differences between `[Class new]` and `[[Class alloc] init]`. For me, (1) and (2) are similar. If (1) and (2) are similar, then why do we use `[[Class alloc] init]` most of the time, compared to `[Class new]`? I think that there must be some difference. Kindly explain the differences, pros & cons of both?
**Alloc:** Class method of NSObject. Returns a new instance of the receiving class.

**Init**: Instance method of NSObject. Implemented by subclasses to initialize a new object (the receiver) immediately after memory for it has been allocated.

**New**: Class method of NSObject. Allocates a new instance of the receiving class, sends it an init message, and returns the initialized object.

**Release**: Instance method of NSObject. Decrements the receiver’s reference count.

**Autorelease**: Instance method of NSObject. Adds the receiver to the current autorelease pool.

**Retain:** Instance method of NSObject. Increments the receiver’s reference count.

**Copy:** Instance method of NSObject. Returns a new instance that’s a copy of the receiver.

To conclude: **alloc goes with init**, and **new = alloc + init**
Slow performance on Azure DocumentDB I'm currently facing quite slow response times from Azure DocumentDB (first time trying it). There are 31 objects in a collection, which I am going to fetch and return to the caller. The code I am using is this:

```
public async Task<List<dynamic>> Get(string collectionName = null)
{
    // Lookup from Dictionary, takes literally no time
    var collection = await GetCollectionAsync(collectionName);

    var sw = Stopwatch.StartNew();

    var query = await
        _client.CreateDocumentQuery(collection.DocumentsLink,
            new FeedOptions { MaxItemCount = 1000 })
            .AsDocumentQuery()
            .ExecuteNextAsync();

    Trace.WriteLine($"Get documents: {sw.ElapsedMilliseconds} ms");

    return query.ToList();
}
```

To instantiate the client, I'm using the following code:

```
_client = new DocumentClient(new Uri(endpoint), authKey, new ConnectionPolicy
{
    ConnectionMode = ConnectionMode.Direct,
    ConnectionProtocol = Protocol.Tcp
});
```

The response times I am getting from the `Stopwatch` are between 360ms and 1200ms to return 31 objects. For me, that is *quite* slow. Without the custom `ConnectionPolicy` the average response time is about 950ms. Am I doing something wrong here? Is it possible to speed these requests up somehow? Here is the output from the Trace, printing out the Stopwatch's elapsed time:

```
Get documents: 1984 ms
Get documents: 1252 ms
Get documents: 1246 ms
Get documents: 359 ms
Get documents: 356 ms
Get documents: 356 ms
Get documents: 351 ms
Get documents: 1248 ms
Get documents: 1314 ms
Get documents: 1250 ms
```
**Updated to reflect latest service changes (1/22/2017):** DocumentDB guarantees p99 read latency < 10 ms and p99 write latency < 15 ms with SLAs on the database side. The tips below still apply to achieve low latency reads using the SDKs\*\* **Updated to reflect latest service changes (6/14/2016):** There is no need to cache self-links when using routing via user-defined ids. Also added a few more tips.\*\* Reads typically take <1 ms on the DocumentDB storage partition itself; and the bottleneck is often the network latency between the application and the database. Thus, it is best to have the application running in the same datacenter as the database. Here are some general tips on SDK usage: **Tip #1: Use a singleton DocumentDB client for the lifetime of your application** Note that each DocumentClient instance is thread-safe and performs efficient connection management and address caching when operating in Direct Mode. To allow efficient connection management and better performance by DocumentClient, it is recommended to use a single instance of DocumentClient per AppDomain for the lifetime of the application. ~~**Tip #2: Cache document and collection SelfLinks for lower read latency**~~ In Azure DocumentDB, each document has a system-generated selfLink. These selfLinks are guaranteed to be unique and immutable for the lifetime of the document. Reading a single document using a selfLink is the most efficient way to get a single document. Due to the immutability of the selfLink, you should cache selfLinks whenever possible for best read performance. ``` Document document = await client.ReadDocumentAsync("/dbs/1234/colls/1234354/docs/2332435465"); ``` Having said that, it may not be always possible for the application to work with a document’s selfLink for read scenarios; in this case, the next most efficient way to retrieve a document is to query by the document’s user provided Id property. 
For example: ``` IDocumentQuery<Document> query = (from doc in client.CreateDocumentQuery(colSelfLink) where doc.Id == "myId" select document).AsDocumentQuery(); Document myDocument = null; while (query.HasMoreResults) { FeedResponse<Document> res = await query.ExecuteNextAsync<Document>(); if (res.Count != 0) { myDocument = res.Single(); break; } } ``` **Tip #3: Tune page size for queries/read feeds for better performance** When performing a bulk read of documents using read feed functionality (i.e. ReadDocumentFeedAsync) or when issuing a DocumentDB SQL query, the results are returned in a segmented fashion if the result set is too large. By default, results are returned in chunks of 100 items or 1 MB, whichever limit is hit first. In order to reduce the number of network round trips required to retrieve all applicable results, you can increase the page size using x-ms-max-item-count request header to up to 1000. In cases where you need to display only a few results, e.g., if your user interface or application API returns only ten results a time, you can also decrease the page size to 10 in order to reduce the throughput consumed for reads and queries. You may also set the page size using the available DocumentDB SDKs. For example: ``` IQueryable<dynamic> authorResults = client.CreateDocumentQuery(documentCollection.SelfLink, "SELECT p.Author FROM Pages p WHERE p.Title = 'About Seattle'", new FeedOptions { MaxItemCount = 1000 }); ``` **A few more tips (6/14/2016):** - Use point-reads (e.g. 
read document instead of query document) for lookup by id - Configure the DocumentDB client (using ConnectionPolicy) to use direct connectivity over gateway - Collocate clients in the same Azure Region as your database - Call OpenAsync() to prevent higher first call latency - You can debug LINQ queries by calling ToString() on the queryable to see the SQL query sent over the wire For more performance tips, check out this [blog post](http://azure.microsoft.com/en-us/blog/performance-tips-for-azure-documentdb-part-1-2/).
ContextSwitchDeadlock Was Detected error in C# I am running a C# application, and during run-time I get the following error: > > The CLR has been unable to transition from COM context 0x20e480 to COM context 0x20e5f0 for 60 seconds. The thread that owns the destination context/apartment is most likely either doing a non pumping wait or processing a very long running operation without pumping Windows messages. This situation generally has a negative performance impact and may even lead to the application becoming non responsive or memory usage accumulating continually over time. To avoid this problem, all single threaded apartment (STA) threads should use pumping wait primitives (such as CoWaitForMultipleHandles) and routinely pump messages during long running operations. > > > Can anyone please help me out with the problem here? Thanks a lot.
The main thread of your program has been busy executing code for a minute. It is not taking care of its normal duties, pumping the message loop. That's illegal when you use COM servers in a worker thread: calls to their methods cannot be dispatched until your main thread goes idle again. It should be readily visible, your UI should be dead as a door nail. Windows should have replaced your main window with a ghost that displays "Not Responding". Closing the window won't work, no click events have any effect. Whatever your main thread is doing should be done by a worker thread instead. The `BackgroundWorker` class is good for that, you'll find many usage help in the MSDN Library article for it. Use Debug + Break All, Debug + Windows + Threads if you have no idea what the main thread is doing. One more possible cause: be sure to install service pack 1 if you are using the RTM version of VS2005.
How do functional languages model side-effects? Since side-effects break referential transparency, don't they go against the point of functional languages?
There are two techniques that are used by purely functional programming languages to model side effects: 1) A world type that represents external state, where each value of that type is guaranteed by the type system to be used only once. In a language that uses this approach the function `print` and `read` might have the types `(string, world) -> world` and `world -> (string, world)` respectively. They might be used like this: ``` let main w = let w1 = print ("What's your name?", w) in let (name, w2) = read w1 in let w3 = print ("Your name is " ^ name, w2) in w3 ``` But not like this: ``` let main w = let w1 = print ("What's your name?", w) in let (name, w2) = read w in let w3 = print ("Your name is " ^ name, w2) in w3 ``` (because w is used twice) All built-in functions with side-effects would take and return a world value. Since all functions with side-effects are either built-ins or call other functions with side-effects, this means that all functions with side-effects need to take and return a world. This way it is not possible to call a function with side-effects twice with the same arguments and referential transparency is not violated. 2) An IO monad where all operations with side effects have to be executed inside that monad. With this approach all operations with side effects would have type `io something`. For example `print` would be a function with type `string -> io unit` and `read` would have type `io string`. The only way to access the value of performing operation would be to use the "monadic bind" operation (called >>= in haskell for example) with the IO operation as one argument and a function describing what to do with the result as the other operand. The example from above would look like this with monadic IO: ``` let main = (print "What's your name?") >>= (lambda () -> read >>= (lambda name -> print ("Your name is " ^ name))) ```
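The monadic-bind plumbing of the second technique can be modelled in any language. A toy sketch in Python (all names here are illustrative; real Haskell hides the explicit lambdas behind do-notation, and the console is simulated so the example is self-contained):

```python
class IO:
    """A deferred computation: nothing runs until .run() is called."""
    def __init__(self, thunk):
        self.thunk = thunk

    def run(self):
        return self.thunk()

    def bind(self, f):
        # Sequence two effects: run self, pass its result to f,
        # then run the IO action that f returns.
        return IO(lambda: f(self.run()).run())


def pure(value):
    # Lift a plain value into IO without performing any effect.
    return IO(lambda: value)


# Simulated console, so the sketch is testable.
out = []

def io_print(s):
    return IO(lambda: out.append(s))

def io_read(answer):
    return pure(answer)  # pretend the user typed `answer`

program = io_print("What's your name?").bind(
    lambda _: io_read("Ada").bind(
        lambda name: io_print("Your name is " + name)))

assert out == []   # building the program performed no effects yet
program.run()      # only now do the effects happen, in order
```

The key property is the same as in the Haskell version: `program` is an ordinary value describing effects, and the order of effects is fixed entirely by the chain of `bind` calls.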
Javascript Countdown Timer - Client Time vs MySQL Time The following code is a countdown timer. It pulls an ending datetime stamp from MySQL and uses it as the target to count down to. The issue is that the MySQL time may be in a different time zone than the client who is looking at the page with the timer. I also pull the current timestamp from MySQL with NOW(), thinking that this would allow the timer to count as the user who created it intended. If I put the NOW() value in this snippet

```
var timeDiff = target - (new Date());
```

like so

```
var nt='2015-03-11 05:12:15'.split(/[- :]/);
var timeDiff = target - (new Date(nt[0],nt[1]-1,nt[2],nt[3],nt[4],nt[5]));
```

the counter shows the correct time left when the page loads but does not count interactively any longer. I think I need to get the difference in hours between the client's local time and the MySQL NOW() and adjust the date in this line to get the interactive timer to run.

```
var timeDiff = target - (new Date());
```

Nothing I try seems to work. This is the working script if the client happens to be in the same time zone.

```
<script language="javaScript">
document.write(hrs);
function timeDiff(target) {
function z(n) {return (n<10? 
'0' : '') + n;} var timeDiff = target - (new Date()); var hours = timeDiff / 3.6e6 | 0; var minutes = timeDiff % 3.6e6 / 6e4 | 0; var seconds = timeDiff % 6e4 / 1e3 | 0; if (hours<0 || minutes<0 || seconds<0) { document.getElementById('divBody').style.display='none'; document.getElementById('divExpired').style.display=''; return '<b>EXPIRED</b>'; } else { return '<b>' + z(hours) + '</b> Hours, <b>' + z(minutes) + '</b> Mins, <b>' + z(seconds) + '</b> Secs'; } } function doCountDown(target) { document.getElementById('timer').innerHTML = '<img src=\"/backhaul/images/my/al-active.png\" class=\"vm2\" /> <span style=\"color:#c40000\"><b>EXPIRES IN</b></span>: ' + timeDiff(target); var lag = 1020 - (new Date() % 100); setTimeout(function(){doCountDown(target);}, lag); } window.onload = function() { //Insert Expiratin Date from mySQL into t var var t='2015-03-12 00:00:00'.split(/[- :]/); doCountDown(new Date(t[0],t[1]-1,t[2],t[3],t[4],t[5])); } </script> ```
There are many ways of doing this, but I'll elaborate on two ways. ## Method 1 : Adjust the time on the client side One way is what you are trying to do which is to get the current time of the server and find the difference with the client's current time. You can simply adjust the server target time to the client's time. This can be done with ``` var serverDifference=Date.parse(mysql_data.now)-Date.now() var target=Date.parse(mysql_data.server_time_in_two_hours)-serverDifference ``` Then you can input it into your function without problem. ## Method 2: Calculate the times remaining, server side Since you just need a countdown timer, I think it's more appropriate to simply send the seconds left server side. This can be done with SQL using ``` select timestampdiff(SECOND,now(),end_time) seconds_left from timers; ``` Then you simply just make a timer that counts down based on the number of seconds left instead of a target date. You can calculate the number of seconds left by deducting the time that the javascript has run from the time received from the server. So something like ``` var client_start=Date.now() function timeDiff_fromseconds(target) { function z(n) {return (n<10? '0' : '') + n;} var timeDiff =target-(Date.now()-client_start) var hours = timeDiff / 3.6e6 | 0; var minutes = timeDiff % 3.6e6 / 6e4 | 0; var seconds = timeDiff % 6e4 / 1e3 | 0; if (hours<0 || minutes<0 || seconds<0) { return '<b>EXPIRED</b>'; } else { return '<b>' + z(hours) + '</b> Hours, <b>' + z(minutes) + '</b> Mins, <b>' + z(seconds) + '</b> Secs'; } } ``` There is also [`performance.now()`](https://developer.mozilla.org/en-US/docs/Web/API/Performance/now) as suggested by [@dandavis](https://stackoverflow.com/users/2317490/dandavis). This returns the number of milliseconds since the tab opened and is accurate to 1/1000th of a millisecond. And this doesn't change even if the system clock of the client browser changes. 
For full support, you should use a [polyfill](https://gist.github.com/paulirish/5438650) ([As of the time of this writing, iOS Safari doesn't support it](http://caniuse.com/#search=performance.now)). In this context we can replace `Date.now()` with it. ## JSFiddle Demo: [<http://jsfiddle.net/3o3u5r5j/1/>](http://jsfiddle.net/3o3u5r5j/1/)
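The offset arithmetic in method 1 can be sanity-checked with fixed timestamps (the dates below are made up; in the real page `clientNow` would come from `Date.now()` and `serverNow` from the MySQL `NOW()` value):

```javascript
const serverNow = Date.parse('2015-03-11T05:12:15Z'); // mysql NOW(), sent by the server
const clientNow = Date.parse('2015-03-11T03:12:15Z'); // client clock, 2 hours behind here
const offset    = serverNow - clientNow;

// Shift the server-side target into client time, then count down as usual.
const serverTarget = Date.parse('2015-03-12T00:00:00Z');
const clientTarget = serverTarget - offset;

console.log(offset / 3.6e6);                                        // 2 (hours)
console.log(serverTarget - serverNow === clientTarget - clientNow); // true
```

Computing the offset once at page load keeps the remaining time identical to what the server sees, regardless of the client's time zone or a skewed clock.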
Ansible | Set\_fact with condition I have the list:

```
val_list:
  - val: "D"
  - val: "A"
  - val: "B"
  - val: "C"
```

I would like to set a variable to true if the list contains `val: B`, without a separate task and registered variable. For example:

```
- set_fact:
    containB: false

- set_fact:
    containB: true
  when: item.val=='B'
  with_items: "{{ val_list }}"
```

I would like to convert it to something like this:

```
- set_fact:
    containB: true if item.val=='B' | default(false)
    #or containB: true if any val in val_list is 'B'
  with_items: "{{ val_list }}"
```
1. extract values from the list with the `map` filter 2. check that 'B' is an element of the resulting list In a nutshell: ``` - hosts: localhost vars: val_list: - val: "D" - val: "A" - val: "B" - val: "C" list_contains_B: "{{ 'B' in (val_list | map(attribute='val')) }}" tasks: - name: Does list contain B ? debug: var: list_contains_B ``` Which gives: ``` PLAY [localhost] *********************************************************************************************************************************************************************************************************************** TASK [Gathering Facts] ***************************************************************************************************************************************************************************************************************** ok: [localhost] TASK [Does list contain B ?] *********************************************************************************************************************************************************************************************************** ok: [localhost] => { "list_contains_B": true } PLAY RECAP ***************************************************************************************************************************************************************************************************************************** localhost : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 ```
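The same membership test can also be written without `map`, using the Jinja2 `selectattr` filter to pick out matching items (same variables as above):

```yaml
list_contains_B: "{{ val_list | selectattr('val', 'equalto', 'B') | list | length > 0 }}"
```

Both variants evaluate lazily, so the fact is computed only when it is first used.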
TYPO3 extbase & IRRE: add existing records with 'foreign\_selector' I "kickstarted" an extension with the extbase extension builder that contains some 1:1 and 1:n relations. It automatically set the field types to 'inline' and displayed a nice IRRE UI in the backend. But by default, there is no way to select an existing record, just create new ones. ![enter image description here](https://i.stack.imgur.com/GHscV.png) I found various explanations on how to achieve this with 'foreign\_selector', but all of them very sketchy. The feature itself should be working, see <https://forge.typo3.org/issues/43239> Can someone walk me through this or point to a working example in the TER? I could create a step-by-step tutorial from the example, once I get it to work. PS The field's TCA config as generated by `extension_builder`: ``` 'myfield' => array( 'exclude' => 1, 'label' => 'LLL:EXT:myextension/Resources/Private/Language/locallang_db.xlf:tx_myextension_domain_model_myitem.myfield', 'config' => array( 'type' => 'inline', 'foreign_table' => 'tx_myextension_domain_model_myfield', 'foreign_field' => 'myitem', 'maxitems' => 9999, 'appearance' => array( 'collapseAll' => 0, 'levelLinksPosition' => 'top', 'showSynchronizationLink' => 1, 'showPossibleLocalizationRecords' => 1, 'showAllLocalizationLink' => 1 ), ), ), ```
The main problem is that IRRE relations of type 1:n work like this: A child record holds the uid of its parent. So your table tx\_myext\_domain\_model\_city holds the UID of your (imaginary) tx\_myext\_domain\_model\_address. Therefore with the default configuration you will not be able to select a city multiple times as it can only have exactly one parent. So you will need to use a relation table for this field. This table needs to contain a uid field for both the address (uid\_address) and the city (uid\_city): ``` CREATE TABLE tx_irreforeignselectordemo_address_city_mm ( uid int(11) NOT NULL auto_increment, pid int(11) DEFAULT '0' NOT NULL, uid_address int(11) unsigned DEFAULT '0' NOT NULL, uid_city int(11) unsigned DEFAULT '0' NOT NULL, sorting int(11) unsigned DEFAULT '0' NOT NULL, PRIMARY KEY (uid), KEY parent (pid) ); ``` And it needs to have a TCA configuration for these fields (while the table itself can be hidden): ``` return array( 'ctrl' => array( 'title' => 'Relation table', 'hideTable' => TRUE, 'sortby' => 'sorting', ), 'columns' => array( 'uid_address' => Array( 'label' => 'Address', 'config' => Array( 'type' => 'select', 'foreign_table' => 'tx_irreforeignselectordemo_domain_model_address', 'size' => 1, 'minitems' => 0, 'maxitems' => 1, ), ), 'uid_city' => Array( 'label' => 'City', 'config' => Array( 'type' => 'select', 'foreign_table' => 'tx_irreforeignselectordemo_domain_model_city', 'foreign_table_where' => ' AND sys_language_uid IN (0,-1)', 'size' => 1, 'minitems' => 0, 'maxitems' => 1, ), ), ), 'types' => array( '0' => array('showitem' => 'uid_address,uid_city') ), 'palettes' => array() ); ``` You can then configure the TCA of your address to make it an IRRE field: ``` 'type' => 'inline', 'foreign_table' => 'tx_yourext_address_city_mm', 'foreign_field' => 'uid_address', 'foreign_label' => 'uid_city', 'foreign_selector' => 'uid_city', 'foreign_unique' => 'uid_city', 'foreign_sortby' => 'sorting', ``` Note that `foreign_unique` tells TYPO3 that a 
city can only be selected once. And you need to define the relation from the other side (from your city TCA):

```
'addresses' => array(
    'exclude' => 1,
    'label' => 'Addresses',
    'config' => array(
        'type' => 'inline',
        'foreign_table' => 'tx_irreforeignselectordemo_address_city_mm',
        'foreign_field' => 'uid_city',
        'foreign_label' => 'uid_address',
    ),
),
```

Once your configuration is complete, you will be able to use this in the Backend. Since this is a non-standard MM relation, Extbase will not be able to deal with it by default. But we can compare this to the sys_file_reference table that was introduced in TYPO3 6. So we build an Extbase model for the CityRelation with the properties "address" and "city" and map this model to our mm table:

```
config.tx_extbase.persistence.classes {
    Visol\Irreforeignselectordemo\Domain\Model\CityRelation {
        mapping {
            tableName = tx_irreforeignselectordemo_address_city_mm
            columns {
                uid_address.mapOnProperty = address
                uid_city.mapOnProperty = city
            }
        }
    }
}
```

Now in our address model, we define the city (or cities, if you allow more than one choice) as ObjectStorage of type CityRelation:

```
/**
 * Cities
 *
 * @var \TYPO3\CMS\Extbase\Persistence\ObjectStorage<\Visol\Irreforeignselectordemo\Domain\Model\CityRelation>
 */
protected $cities = NULL;
```

We now have a property "cities" that contains the references to all selected cities. You can iterate through them and use them:

```
<f:for each="{address.cities}" as="cityRelation">
 <li>{cityRelation.city.name}</li>
</f:for>
```

Since I couldn't find an all-in-one demo for this and was interested in the topic, I created a demo extension that does what I just described - based on the Core and two extensions that deal with the topic: <https://github.com/lorenzulrich/irreforeignselectordemo> The solution is an m:n approach anyway (because 1:n wouldn't work for the reasons stated above) so I decided to use "cities" instead of "city". 
While this might not make sense for selecting a city (as suggested by your post), it might make sense in other scenarios. Feel free to replace "cities" with "city" and set maxItems in the inline configuration to one - then you effectively have a 1:n relation.
How are environment variables used in Jenkins with Windows Batch Command? I'm trying to use **Jenkins (Global) environment variables** in my xcopy script. ``` ${WORKSPACE} doesn't work "${WORKSPACE}" doesn't work '${WORKSPACE}' doesn't work ```
I know nothing about Jenkins, but it looks like you are trying to access environment variables using some form of unix syntax - that won't work. If the name of the variable is WORKSPACE, then the value is expanded in Windows batch using `%WORKSPACE%`. That form of expansion is performed at parse time. For example, this will print to screen the value of WORKSPACE ``` echo %WORKSPACE% ``` If you need the value at execution time, then you need to use delayed expansion `!WORKSPACE!`. Delayed expansion is not normally enabled by default. Use `SETLOCAL EnableDelayedExpansion` to enable it. Delayed expansion is often needed because blocks of code within parentheses and/or multiple commands concatenated by `&`, `&&`, or `||` are parsed all at once, so a value assigned within the block cannot be read later within the same block unless you use delayed expansion. ``` setlocal enableDelayedExpansion set WORKSPACE=BEFORE ( set WORKSPACE=AFTER echo Normal Expansion = %WORKSPACE% echo Delayed Expansion = !WORKSPACE! ) ``` The output of the above is ``` Normal Expansion = BEFORE Delayed Expansion = AFTER ``` Use `HELP SET` or `SET /?` from the command line to get more information about Windows environment variables and the various expansion options. For example, it explains how to do search/replace and substring operations.
How to start launch the HSQLDB server as described in the Hibernate tutorial? Trying to follow step #4 in [this Hibernate tutorial](http://docs.jboss.org/hibernate/orm/3.3/reference/en/html/tutorial.html#tutorial-firstapp-configuration): ``` mvn exec:java -Dexec.mainClass="org.hsqldb.Server" -Dexec.args="-database.0 file:target/data/tutorial" ``` I am getting this error: ``` [INFO] Scanning for projects... [INFO] Searching repository for plugin with prefix: 'exec'. [INFO] ------------------------------------------------------------------------ [INFO] Building First Hibernate Tutorial [INFO] task-segment: [exec:java] [INFO] ------------------------------------------------------------------------ [INFO] Preparing exec:java [INFO] No goals needed for project - skipping [INFO] [exec:java {execution: default-cli}] [WARNING] java.lang.ClassNotFoundException: org.hsqldb.Server at java.net.URLClassLoader$1.run(URLClassLoader.java:202) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:190) at java.lang.ClassLoader.loadClass(ClassLoader.java:306) at java.lang.ClassLoader.loadClass(ClassLoader.java:247) at org.codehaus.mojo.exec.ExecJavaMojo$1.run(ExecJavaMojo.java:285) at java.lang.Thread.run(Thread.java:662) [INFO] ------------------------------------------------------------------------ [ERROR] BUILD ERROR [INFO] ------------------------------------------------------------------------ [INFO] An exception occured while executing the Java class. 
org.hsqldb.Server [INFO] ------------------------------------------------------------------------ [INFO] For more information, run Maven with the -e switch [INFO] ------------------------------------------------------------------------ [INFO] Total time: < 1 second [INFO] Finished at: Mon Dec 17 16:35:42 EST 2012 [INFO] Final Memory: 6M/15M [INFO] ------------------------------------------------------------------------ ``` This is despite downloading the latest hsqldb package and installing it per [the FAQ](http://hsqldb.org/web/hsqlFAQ.html). The `hsqldb.jar` file is located in `C:\hsqldb-2.2.9\hsqldb-2.2.9\hsqldb\lib` and the classpath env var points at it: ``` CLASSPATH=C:\hsqldb-2.2.9\hsqldb-2.2.9\hsqldb\lib ``` So why am I receiving this **ClassNotFoundException: org.hsqldb.Server** error? What am I doing wrong?
First of all, your CLASSPATH is not correct: to put a jar on the classpath, the jar file itself must be specified, not just its directory. For example: ``` CLASSPATH=C:\hsqldb-2.2.9\hsqldb-2.2.9\hsqldb\lib\hsqldb.jar ``` Second, CLASSPATH is not necessary for this tutorial, as it is a Maven project. You should add the relevant dependency to the pom.xml file that is described in section 1.1.1 of the tutorial you mention. Simply add this block to the ones listed in the pom.xml, inside the `<dependencies>` tag: ``` <dependency> <groupId>org.hsqldb</groupId> <artifactId>hsqldb</artifactId> <version>2.2.9</version> </dependency> ```
AMP Form submission redirect or response Good morning, I have a form on a test AMP page (it is AMP-valid) and the form works: I receive the email with the result (I use a PHP script to handle it). I don't know the syntax (I made some attempts but I think I'm still missing an example) to let the AMP page respond correctly (now I get "Subscription failed!" but I do get the email) or to redirect after submission. Here is my example: [AMP page with form](https://pizzaoventhailand.com/TMP/contattaci_standby_AMP.htm) (I receive the form result after submission but I don't know how to redirect or get a "Subscription successful!" message) [Non-AMP page with form](https://pizzaoventhailand.com/TMP/contattaci_standby_NO_AMP.htm) (I receive the form result and it redirects correctly) action-xhr destination file code: [Code of the php here in txt format](https://pizzaoventhailand.com/TMP/form_AMP_EN_nocaptcha_3.php.txt) (the file that handles the form result) Any example would be great, even one only about the redirect. Thank you
You can redirect user after successful submission, however it can only be done if you are submitting values to a secure URL `(like https://www.example.com)`. **HTML With AMP-Form** ``` <!doctype html> <html amp lang="en"> <head> <meta charset="utf-8"> <script async src="https://cdn.ampproject.org/v0.js"></script> <title>Hello, AMPs</title> <link rel="canonical" href="http://example.ampproject.org/article-metadata.html" /> <meta name="viewport" content="width=device-width,minimum-scale=1,initial-scale=1"> <script type="application/ld+json"> { "@context": "http://schema.org", "@type": "NewsArticle", "headline": "Open-source framework for publishing content", "datePublished": "2015-10-07T12:02:41Z", "image": [ "logo.jpg" ] } </script> <style amp-boilerplate>body{-webkit-animation:-amp-start 8s steps(1,end) 0s 1 normal both;-moz-animation:-amp-start 8s steps(1,end) 0s 1 normal both;-ms-animation:-amp-start 8s steps(1,end) 0s 1 normal both;animation:-amp-start 8s steps(1,end) 0s 1 normal both}@-webkit-keyframes -amp-start{from{visibility:hidden}to{visibility:visible}}@-moz-keyframes -amp-start{from{visibility:hidden}to{visibility:visible}}@-ms-keyframes -amp-start{from{visibility:hidden}to{visibility:visible}}@-o-keyframes -amp-start{from{visibility:hidden}to{visibility:visible}}@keyframes -amp-start{from{visibility:hidden}to{visibility:visible}}</style><noscript><style amp-boilerplate>body{-webkit-animation:none;-moz-animation:none;-ms-animation:none;animation:none}</style></noscript> <script async custom-element="amp-form" src="https://cdn.ampproject.org/v0/amp-form-0.1.js"></script> </head> <body> <h1>Hello World!</h1> <form target="_top" action-xhr="https://test.php" method="post" name="test"> <input type="text" name="name" value="ABRA KA DABRA!"> <input type="submit"/> </form> </body> </html> ``` **SUBMISSION REQUEST HANDLER CODE IN PHP** ``` <?php if(!empty($_POST)) { $name = $_POST['name']; /*/ this is the email we get from visitors*/ $domain_url = 
(isset($_SERVER['HTTPS']) ? "https" : "http") . "://$_SERVER[HTTP_HOST]"; $redirect_url = 'https://example.com/thank-you'; /*//-->MUST BE 'https://';*/ header("Content-type: application/json"); header("Access-Control-Allow-Credentials: true"); header("Access-Control-Allow-Origin: *.ampproject.org"); header("AMP-Access-Control-Allow-Source-Origin: ".$domain_url); /*/ For Sending Error Use this code /*/ if(!mail("[email protected]" , "Test submission" , "email: $name <br/> name: $name" , "From: $name\n ")){ header("HTTP/1.0 412 Precondition Failed", true, 412); echo json_encode(array('errmsg'=>'There is some error while sending email!')); die(); } else { /*/--Assuming all validations are good here--*/ if( empty($redirect_url)) { header("Access-Control-Expose-Headers: AMP-Access-Control-Allow-Source-Origin"); } else { header("AMP-Redirect-To: ".$redirect_url); header("Access-Control-Expose-Headers: AMP-Redirect-To, AMP-Access-Control-Allow-Source-Origin"); } echo json_encode(array('successmsg'=>$_POST['name'].'My success message. [It will be displayed shortly(!) if with redirect]')); die(); } }?> ```
Replace a substring in a string except when the string is inside quotes Regex dialect: Java The problem: given a string, replace all occurrences of a substring inside it, except when these occurrences are inside quotes. Example1: ``` string: "test substr 'test substr' substr" substring: "substr" replacement: "YYYY" output: "test YYYY 'test substr' YYYY" ``` Example2: ``` string: "test sstr 'test sstr' sstr" substring: "substr" replacement: "YYYY" output: "test sstr 'test sstr' sstr" ``` Example3: ``` string: "test 'test substr'" substring: "substr" replacement: "YYYY" output: "test 'test substr'" ``` This is my best try thus far: ``` Regex: ((?:[^']*'[^']+')*?[^']*?)substring Replace: $1replacement ``` The problem with it is that it needs a substring outside the quotes after the last string within the quotes otherwise it doesn't work, so Example3 will fail (output: "test 'test YYYY'"). Many thanks for your help.
Here's a way: ``` public class Main { public static void main(String [] args) { String[] tests = { "test substr 'test substr' substr", "test sstr 'test sstr' sstr", "test 'test substr'" }; String regex = "substr(?=([^']*'[^']*')*[^']*$)"; for(String t : tests) { System.out.println(t.replaceAll(regex, "YYYY")); } } } ``` prints: ``` test YYYY 'test substr' YYYY test sstr 'test sstr' sstr test 'test substr' ``` Note that this does not work if `'` can be escaped with a `\` for example. A quick explanation: The following: `([^']*'[^']*')*` will match 0 or an even number of single quotes with non quotes in between, and `[^']*$` matches any non-quotes and the end-of-string. So, the complete regex `substr(?=([^']*'[^']*')*[^']*$)` matches any `"substr"` that has 0 or an even number of single quotes ahead of it, *when looking all the way to the end-of-string!* Looking all the way to the end-of-string is the key here. If you wouldn't do that, the following `"substr"` would also be replaced: ``` aaa 'substr' bbb 'ccc ddd' eee ^ ^ ^ | | | i ii iii ``` because it "sees" an even number of single quotes ahead of it (**i** and **ii**). You must force it to look at the entire string to the right of it (all the way to `$`)!
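The pattern itself is not Java-specific: the lookahead trick works in any dialect that supports lookahead. As a quick way to experiment with it, here is the same pattern in Python's `re` module (my own port, with the same caveat about escaped quotes):

```python
import re

# Replace "substr" only when zero or an even number of single quotes
# (i.e. only complete quoted sections) lies between it and the
# end of the string -- i.e. when it is not inside quotes itself.
pattern = r"substr(?=([^']*'[^']*')*[^']*$)"

tests = [
    "test substr 'test substr' substr",
    "test sstr 'test sstr' sstr",
    "test 'test substr'",
]
for t in tests:
    print(re.sub(pattern, "YYYY", t))
# prints:
# test YYYY 'test substr' YYYY
# test sstr 'test sstr' sstr
# test 'test substr'
```

As in the Java version, the key is that the lookahead scans all the way to `$`.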
Alternate method of iterating than os.walk Recently, I became tired of how slowly `os.walk` seems to run (really, it should be called `os.crawl`), and made something recursive which moves much faster. I was using it to take directories I wanted and add them to an array, but this time I need to add all files to an array that aren't in an 'exclude array.' Does anyone have any advice? ``` import os exclude_folders = ['excludefolder1', 'excludefolder2'] product_path_names = ['folder1', 'folder2', 'folder3'] # This function replaces the os.walk, it is faster and stops scanning subdirectories # when it find a good folder name. def findDir(path): for directory in os.listdir(path): # If it's a folder that we want to scan, go for it. if(os.path.isdir(os.path.join(path, directory))): # If folder is a product name, add it. if(directory.lower() in product_path_names): print os.path.join(path, directory) product_dirs.append(os.path.join(path, directory)) with open(event_log_path, "a") as log: log.write(os.path.join(path, directory) + "\n") break # If folder is not a product name, scan it. else: #print os.path.join(path, directory) if(directory.lower() not in exclude_folders): findDir(os.path.join(path, directory)) ```
## Style - According to PEP 8, functions should be named using `lower_case()`. - Python code typically doesn't have parentheses around `if` conditions. - You call `os.path.join()` everywhere. It would be worthwhile to assign the joined path to a variable. ## Error handling If any operation fails (for example, if you do not have permission to list some directory), then the whole search aborts with an exception. ## Modularity and reusability The inclusion and exclusion lists should be parameters rather than globals. By the Single Responsibility Principle, a function should do just one thing. The function you have written not only finds the directories that have the desired names, it also prints their path to `sys.stdout`, appends the results to a list, and logs the occurrence. By making the function into a generator, you would give the caller the flexibility to do whatever it wants with the results. The caller can even have the option to terminate early after the first result. `find_dirs()` would then become a more generically useful function. ``` from __future__ import print_function import os import sys def find_dirs(root_dir, names, exclude_folders=[]): try: for entry in os.listdir(root_dir): entry_path = os.path.join(root_dir, entry) entry_lowercase = entry.lower() if os.path.isdir(entry_path): if entry_lowercase in names: yield entry_path elif entry_lowercase not in exclude_folders: for result in find_dirs(entry_path, names, exclude_folders): yield result except OSError as e: print(e, file=sys.stderr) product_dirs = [] event_log_path = '/tmp/findlog.txt' with open(event_log_path, 'a') as log: for lib in find_dirs('/', ['lib', 'library'], ['user']): print(lib) product_dirs.append(lib) log.write(lib + "\n") break # Stop after finding the first match ```
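If you can rely on Python 3.5 or later, the `os.listdir` + `os.path.isdir` pair can be replaced with `os.scandir`, which reads the entry type straight from the directory listing and usually avoids one `stat` call per entry — a real speedup, which was the original motivation for avoiding `os.walk`. A sketch of the same generator on top of it (my adaptation, not part of the review above):

```python
import os
import sys

def find_dirs_scandir(root_dir, names, exclude_folders=()):
    """Yield paths of directories whose lowercased name is in `names`,
    pruning any subtree whose directory name is in `exclude_folders`."""
    try:
        entries = list(os.scandir(root_dir))
    except OSError as e:  # e.g. permission denied: report and keep going
        print(e, file=sys.stderr)
        return
    for entry in entries:
        # is_dir() on a DirEntry normally needs no extra stat call;
        # follow_symlinks=False also guards against symlink loops
        if not entry.is_dir(follow_symlinks=False):
            continue
        lowered = entry.name.lower()
        if lowered in names:
            yield entry.path
        elif lowered not in exclude_folders:
            for result in find_dirs_scandir(entry.path, names, exclude_folders):
                yield result
```

The call site from the review stays the same apart from the function name.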
Allow some SQL in public API? I'm exposing a more or less public API that allows the user to query datasets from a database. Since the user will need to filter out specific datasets I'm tempted to accept the WHERE-part of the SELECT statement as an API parameter. Thus the user could perform queries as complex as she'd like without worrying about a cluttered API interface. I'm aware of the fact that I would have to catch SQL-injection attempts. Do you think that this would circumvent the purpose of an API wrapping a database too much or would you consider this a sane approach?
In general, I'd recommend against letting them embed actual sql in their requests. You can allow them to submit `where` conditions in their request pretty easily: ``` <where> <condition "field"="name" "operator"="equal" "value"="Fred"/> </where> ``` or something similar. The value of doing this is many-fold: 1. You parse each condition and make sure they're correct before running them 2. You can create 'fake' fields, such as "full\_name", that may not exist. 3. You can limit the columns they can put conditions on 4. You can isolate the users from actual changes in your underlying database. I think the last point is actually the most important. The day will come when you'll need to make changes to the underlying schema of the database. Eventually, it will happen. At that point you'll appreciate having some 'translation' layer between what the users send in and the queries. It will allow you to isolate the users from actual changes in the underlying database. The API should present an 'abstracted' version of the actual tables that meets the users' needs and isolates them from changes to the actual underlying database.
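To make the idea concrete, here is a minimal sketch of such a translation layer in Python (the field names, the operator set, and the `?` placeholder style are all my own assumptions, not part of the answer above): each condition is checked against a column whitelist and an operator map, and values are handed back as bind parameters, so no client-supplied SQL is ever executed.

```python
ALLOWED_FIELDS = {"name", "full_name", "created_at"}  # hypothetical schema
OPERATORS = {"equal": "=", "not_equal": "<>", "less": "<", "greater": ">"}

def build_where(conditions):
    """Translate [{'field':..., 'operator':..., 'value':...}, ...] into
    a parameterized WHERE clause plus a parameter list. Raises
    ValueError on anything outside the whitelist."""
    clauses, params = [], []
    for cond in conditions:
        field, op = cond["field"], cond["operator"]
        if field not in ALLOWED_FIELDS:
            raise ValueError("unknown field: %r" % field)
        if op not in OPERATORS:
            raise ValueError("unknown operator: %r" % op)
        # only whitelisted identifiers and placeholders enter the SQL text
        clauses.append("%s %s ?" % (field, OPERATORS[op]))
        params.append(cond["value"])
    return " AND ".join(clauses), params
```

Because the clause is assembled only from whitelisted identifiers, renaming a real column later only requires changing the mapping, not the API — the 'translation layer' benefit described above.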
How can I set the value of a struct variable on an interface slice? How can I set `testEntity.Val` that is contained in `[]interface{}{}` using reflection? ``` type testEntity struct { Val int } func main() { slice := []interface{}{testEntity{Val: 3}} sliceValue := reflect.ValueOf(slice) elemValue := sliceValue.Index(0) // Prints: can set false fmt.Println("can set", elemValue.Elem().Field(0).CanSet()) } ``` <http://play.golang.org/p/lxtmu9ydda> Can someone explain why it works with `[]testEntity{}` but not `[]interface{}{}` as shown here: <http://play.golang.org/p/HW5tXEUTlP>
According to [this issue](https://code.google.com/p/go/issues/detail?id=8321) this behaviour is correct. ## Semantics The reason is that there is no equivalent syntax in Go that allows you to take the address of a value masked by an `interface{}` value. This behaviour can be demonstrated in a simplified example. If you have the following code ``` slice := []testEntity{testEntity{Val: 3}} slice[0].Val = 5 fmt.Println(slice) ``` then the line ``` slice[0].Val = 5 ``` actually translates to ``` (&slice[0]).Val = 5 ``` To modify a slice element it must be addressable, or else the value would not have been propagated back to the slice. Go does that automatically for you. Now let us modify the example and use an `interface{}` slice instead: ``` slice := []interface{}{testEntity{Val: 3}} slice[0].Val = 5 fmt.Println(slice) ``` This example will obviously not work since `interface{}` does not have a field called `Val`. So, we would need to assert the type of `slice[0]`: ``` slice[0].(testEntity).Val = 5 ``` While the type system is now content with that line, the addressability rules are not. Now the change of `Val` would be lost since we are operating on a copy of `slice[0]`. To counter this we would have to take the address of `slice[0]` and this, as you will see, leads us nowhere: ``` slice := []interface{}{testEntity{Val: 3}} (&slice[0]).(*testEntity).Val = 5 fmt.Println(slice) ``` This code cannot work since `(&slice[0])` is not of type `interface{}` anymore but `*interface{}`. As a consequence we cannot assert that value to `*testEntity` since only interface values can be asserted. This means that **there is no Go syntax for setting an interface value in a slice** and since reflection only models Go's behaviour, this will not work with reflection as well, even if it is technically possible. We could imagine some syntax that takes the address of the underlying value of `slice[0]` while retaining the type but this syntax does not exist. The same goes for reflection. 
Reflection *knows* the underlying type of the interface and could easily use that information to provide a type-safe pointer to `slice[0]`'s underlying value so we can use `Set()` on that, but it does not since Go does not. ## Technical reason The reflect package uses several flags to mark the abilities of `Value` objects. The two flags related to setting values using `Set` are `flagAddr` for addressability and `flagRO` for marking `Value` objects as read-only (e.g. unexported attributes). To set using `Set()`, `flagRO` must be unset and `flagAddr` must be set. If you look into the definition of [`Value.Elem()` in `reflect/value.go`](https://code.google.com/p/go/source/browse/src/pkg/reflect/value.go?name=go1.3#827) you will find the [line responsible](https://code.google.com/p/go/source/browse/src/pkg/reflect/value.go?name=go1.3#844) for handing the kind flags to the new value: ``` fl := v.flag & flagRO // ... return Value{typ, val, fl} ``` In the snipped `v` is the current value, `elementValue` in your case. As you can see, this will only copy the read-only flag and **not the `flagAddr`** that is needed for setting a value using `Value.Set()`. Changing that line to ``` fl := v.flag & (flagRO|flagAddr) ``` will enable us to use `Set()` but will also make it possible to change the underlying value of a `interface{}` value to any other value, breaking type safety.
How to merge list of list into single list in pyspark In spark dataframe, I have 1 column that contain list of list as rows. I want to merge the list of strings into one. ``` INPUT DATAFRAME: +-------+--------------------+ | name |friends | +-------+--------------------+ | Jim |[["C","A"]["B","C"]]| +-------+--------------------+ | Bill |[["E","A"]["F","L"]]| +-------+--------------------+ | Kim |[["C","K"]["L","G"]]| +-------+--------------------+ OUTPUT DATAFRAME: +-------+--------------------+ | name |friends | +-------+--------------------+ | Jim |["C","A","B"] | +-------+--------------------+ | Bill |["E","A","F","L"] | +-------+--------------------+ | Kim |["C","K","L","G"] | +-------+--------------------+ ``` I want to merge the list of list into single list and remove the duplicates as well. Thanks in advance
I think you can rely on a combination of `explode` to deconstruct the lists and `collect_set` to rebuild them: ``` import pyspark from pyspark.sql import SparkSession from pyspark import SparkContext import pandas as pd from pyspark.sql import functions as F from pyspark.sql import Window sc = SparkContext.getOrCreate() spark = SparkSession(sc) columns = ['name', 'friends'] data = [("Jim", [["C","A"], ["B","C"]]), ("Bill", [["E","A"], ["F","L"]]), ("Kim", [["C","K"], ["L","G"]])] pd_data = pd.DataFrame.from_records(data=data, columns=columns) spark_data = spark.createDataFrame(pd_data) first_explode = spark_data.withColumn("first_explode", F.explode((F.col("friends")))) first_explode.show() +----+----------------+-------------+ |name| friends|first_explode| +----+----------------+-------------+ | Jim|[[C, A], [B, C]]| [C, A]| | Jim|[[C, A], [B, C]]| [B, C]| |Bill|[[E, A], [F, L]]| [E, A]| |Bill|[[E, A], [F, L]]| [F, L]| | Kim|[[C, K], [L, G]]| [C, K]| | Kim|[[C, K], [L, G]]| [L, G]| +----+----------------+-------------+ ``` First level deconstructed. 
Now for the second one: ``` second_explode = first_explode.withColumn("second_explode", F.explode(F.col("first_explode"))) second_explode.show() +----+----------------+-------------+--------------+ |name| friends|first_explode|second_explode| +----+----------------+-------------+--------------+ | Jim|[[C, A], [B, C]]| [C, A]| C| | Jim|[[C, A], [B, C]]| [C, A]| A| | Jim|[[C, A], [B, C]]| [B, C]| B| | Jim|[[C, A], [B, C]]| [B, C]| C| |Bill|[[E, A], [F, L]]| [E, A]| E| |Bill|[[E, A], [F, L]]| [E, A]| A| |Bill|[[E, A], [F, L]]| [F, L]| F| |Bill|[[E, A], [F, L]]| [F, L]| L| | Kim|[[C, K], [L, G]]| [C, K]| C| | Kim|[[C, K], [L, G]]| [C, K]| K| | Kim|[[C, K], [L, G]]| [L, G]| L| | Kim|[[C, K], [L, G]]| [L, G]| G| +----+----------------+-------------+--------------+ ``` Reconstruct the list, discarding duplicates: ``` grouped = second_explode.groupBy("name").agg(F.collect_set(F.col("second_explode")).alias("friends")) grouped.show() +----+------------+ |name| friends| +----+------------+ | Jim| [C, B, A]| |Bill|[F, E, A, L]| | Kim|[K, C, G, L]| +----+------------+ ```
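Per row, the explode → explode → `collect_set` pipeline above amounts to "flatten one level, then deduplicate". For clarity, here is that semantics in plain Python (illustration only; note that `collect_set`, like a set, gives no ordering guarantee, which is why the grouped output above is in a different order than the question's expected output):

```python
def flatten_dedupe(list_of_lists):
    """Flatten one nesting level and drop duplicates,
    keeping first-seen order (a plain set would lose the order)."""
    seen = []
    for sub in list_of_lists:
        for item in sub:
            if item not in seen:
                seen.append(item)
    return seen

print(flatten_dedupe([["C", "A"], ["B", "C"]]))
# prints: ['C', 'A', 'B']
```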
Android: specify circular reveal animation starting position? I currently have a circular reveal animation that starts from the very right of the toolbar. I want the initial circle center to start from the "search" icon which had me struggle to find an answer. I tried altering the `cx` and `xy` values and am unsuccessful. Any help is appreciated. ``` final Toolbar search_bar = (Toolbar) findViewById(R.id.search_toolbar); if (android.os.Build.VERSION.SDK_INT >= Build.VERSION_CODES.LOLLIPOP) { int cx = search_bar.getWidth(); int cy = search_bar.getHeight()/2; float finalRadius = (float) Math.hypot(cx, cy); Animator anim = ViewAnimationUtils.createCircularReveal(search_bar, cx, cy, 0, finalRadius); search_bar.setVisibility(View.VISIBLE); anim.start(); } ``` [![enter image description here](https://i.stack.imgur.com/2Yf5c.png)](https://i.stack.imgur.com/2Yf5c.png)
Try this: ``` public void startCircularReveal() { final View view = findViewById(R.id.linearLayout); final View startView = findViewById(R.id.button_container); int cx = (startView.getLeft() + startView.getRight()) / 2; int cy = (startView.getTop() + startView.getBottom()) / 2; view.setBackgroundColor(Color.parseColor("#6FA6FF")); int finalRadius = Math.max(cy , view.getHeight() - cy); Animator anim = ViewAnimationUtils.createCircularReveal(view, cx, cy, 0, finalRadius); anim.addListener(new Animator.AnimatorListener() { @Override public void onAnimationStart(Animator animation) { } @Override public void onAnimationEnd(Animator animation) { } @Override public void onAnimationCancel(Animator animation) { } @Override public void onAnimationRepeat(Animator animation) { } }); anim.setDuration(200); view.setVisibility(View.VISIBLE); anim.start(); } ``` Change `view` to the view you want the reveal to be drawn on top of (usually the root view), and `startView` to your "search" icon view; in onAnimationEnd do whatever you want after the reveal is finished. **UPDATE** If the reveal is not on top of your views, there is a little trick to fix it: add this view at the bottom of your layout ``` <LinearLayout android:id="@+id/linearLayout" android:layout_width="match_parent" android:layout_height="match_parent" android:background="@color/custom_theme_color" android:orientation="vertical" android:visibility="invisible"> </LinearLayout> ``` and make sure that the variable `view` is set to the id of that layout ``` final View view = findViewById(R.id.linearLayout); ``` Basically what I did is add a LinearLayout with its width and height set to match\_parent, so this view is on top of everything and is the one the reveal is drawn on. To make sure the view doesn't affect your layout, I set its visibility to invisible and set it to visible right at the beginning of the reveal: view.setVisibility(View.VISIBLE); I hope it helps.
set ng-init on select with scope variable I have a directive that I am trying to set the `ng-init` to a variable set with `$scope` ``` <select ng-init="safe.id=currentSafe" ng-options="safe as safe.name for safe in safes track by safe.id" ng-model="safe" ng-change="getSafeUrl(safe.id)"></select> ``` In the `link` function on the directive I have: ``` $scope.currentSafe = '72824ca7-99ab-4f16-a56c-3c98328c73fd'; ``` This is not working. However if I change the template to read: ``` <select ng-init="safe.id='72824ca7-99ab-4f16-a56c-3c98328c73fd'" ng-options="safe as safe.name for safe in safes track by safe.id" ng-model="safe" ng-change="getSafeUrl(safe.id)"></select> ``` Why am I not able to use a string from `$scope` but I can use it directly?
1) You should not use [select as with track by](https://code.angularjs.org/1.3.10/docs/api/ng/directive/select#-select-as-and-track-by-): they are not designed to work together. See the documentation for details. 2) Do not use [ng-init](https://docs.angularjs.org/api/ng/directive/ngInit) for initializing a property; that should be done in the controller, since ng-init expressions are not watched. Set `$scope.safe.id` in the link/controller function of the directive itself. In your case you might be setting `currentSafe` asynchronously, but ng-init would already have evaluated the expression by then. ``` <select ng-options="safe.name for safe in safes track by safe.id" ng-model="safe" ng-change="getSafeUrl(safe.id)"></select> ``` and set: ``` $scope.safe = {id:whateverid} ```
Can Lync / UCWA be used to set the presence-information for an other user on the Lync platform? We have a telephony platform, and we are looking to change the presence status of Lync users, based on events in the telephony platform. We would prefer to use the UCWA REST api, as we are familiar with REST api's and have most of the infrastructure in place already. Checking the documentation, it quickly becomes apparent that a user can set its own presence. However, we don't want to keep track of the authentication information for every single user. Instead, can some users (presumably an administrator-like user) alter the presence status of other users using UCWA? Any other practical possibilities for managing the presence-status of Lync users externally?
*Via an answer I provided on the MSDN Forums: [Can UCWA be used to set the presence-information for an other user on the Lync platform?](http://social.msdn.microsoft.com/Forums/lync/en-US/5954c4c8-b606-40d0-9435-522f7f4bd794/can-ucwa-be-used-to-set-the-presenceinformation-for-an-other-user-on-the-lync-platform?forum=ucwebapi)* **Simple Answer**: This is not possible in UCWA as the logged in user (endpoint) does not have privileges to take those type of actions. **Is it possible another way Answer**: You could make use of UCMA and create a Trusted Application which would allow creation of a UserEndpoint which would not need to know the user's credentials to make changes to their presence data. You would need to wrap this logic into a Web API and connect it to your specific application/scenario.
Using LINQ to group social networking links for each user I have a model named `User`. Each user can have some social links. Now I want to get all users with their social link as a single row in the result. Here are my models: ``` public class User { public int Id { get; set; } public string Name { get; set; } public ICollection<SocialLink> SocialLinks { get; set; } } public class SocialLink { public string Url { get; set; } public SocialType Type { get; set; } public User User { get; set; } public int UserId { get; set; } } public enum SocialType { FaceBook = 0, Twitter = 1, Youtube = 2, Linkedin = 3, GooglePlus = 4, Telegram = 5, Instagram = 6 } public class UserViewModel { public string Name { get; set; } public string Facebook { get; set; } public string Twitter { get; set; } public string Youtube { get; set; } public string Linkedin { get; set; } public string GooglePlus { get; set; } public string Telegram { get; set; } public string Instagram { get; set; } } ``` I want my query to be like this in the result: ``` Name | Facebook | Twitter | Youtube | Linkedin | GooglePlus | Telegram | Instagram ====================================================================================== Someone | facebook | @someone | youtube | linkedin | google | @tele | @someins ``` I have written following query, It works as I expected but I wonder if there's a better way for doing this: ``` users.Select(p => new UserViewModel { Name = p.Name, Facebook = p.SocialLinks.FirstOrDefault(x => x.Type == SocialType.FaceBook).Url ?? "#", Twitter = p.SocialLinks.FirstOrDefault(x => x.Type == SocialType.Twitter).Url ?? "#", Youtube = p.SocialLinks.FirstOrDefault(x => x.Type == SocialType.Youtube).Url ?? "#", Linkedin = p.SocialLinks.FirstOrDefault(x => x.Type == SocialType.Linkedin).Url ?? "#", Telegram = p.SocialLinks.FirstOrDefault(x => x.Type == SocialType.Telegram).Url ?? "#", Instagram = p.SocialLinks.FirstOrDefault(x => x.Type == SocialType.Instagram).Url ?? "#", }).ToList(); ```
You could use a `Dictionary` and have the properties of `UserViewModel` read from and write to this dictionary, as follows: ``` public class UserViewModel { private readonly Dictionary<SocialType, string> SocialLinks; public UserViewModel(User user) { SocialLinks = (user.SocialLinks ?? new SocialLink[0]) .ToDictionary(x => x.Type, x => x.Url); Name = user.Name; } private string GetUrl(SocialType socialType) { string url; return SocialLinks.TryGetValue(socialType, out url) && url != null ? url : "#"; } public string Name { get; set; } public string Facebook { get { return GetUrl(SocialType.FaceBook); } set { SocialLinks[SocialType.FaceBook] = value; } } public string Twitter { get { return GetUrl(SocialType.Twitter); } set { SocialLinks[SocialType.Twitter] = value; } } public string Youtube { get { return GetUrl(SocialType.Youtube); } set { SocialLinks[SocialType.Youtube] = value; } } ... } ``` Then your query will look like: ``` users.Select(u => new UserViewModel(u)).ToList(); ```
Common HTTPclient and proxy I am using apache's common httpclient library. Is it possible to make HTTP request over proxy? More specific, I need to use proxy list for multithreaded POST requests (right now I am testing with single threaded GET requests). I tried to use: ``` httpclient.getHostConfiguration().setProxy("67.177.104.230", 58720); ``` I get errors with that code: ``` 21.03.2012. 20:49:17 org.apache.commons.httpclient.HttpMethodDirector executeWithRetry INFO: I/O exception (java.net.ConnectException) caught when processing request: Connection refused: connect 21.03.2012. 20:49:17 org.apache.commons.httpclient.HttpMethodDirector executeWithRetry INFO: Retrying request 21.03.2012. 20:49:19 org.apache.commons.httpclient.HttpMethodDirector executeWithRetry INFO: I/O exception (java.net.ConnectException) caught when processing request: Connection refused: connect 21.03.2012. 20:49:19 org.apache.commons.httpclient.HttpMethodDirector executeWithRetry INFO: Retrying request 21.03.2012. 20:49:21 org.apache.commons.httpclient.HttpMethodDirector executeWithRetry INFO: I/O exception (java.net.ConnectException) caught when processing request: Connection refused: connect 21.03.2012. 
20:49:21 org.apache.commons.httpclient.HttpMethodDirector executeWithRetry INFO: Retrying request org.apache.commons.httpclient.ProtocolException: The server xxxxx failed to respond with a valid HTTP response at org.apache.commons.httpclient.HttpMethodBase.readStatusLine(HttpMethodBase.java:1846) at org.apache.commons.httpclient.HttpMethodBase.readResponse(HttpMethodBase.java:1590) at org.apache.commons.httpclient.HttpMethodBase.execute(HttpMethodBase.java:995) at org.apache.commons.httpclient.ConnectMethod.execute(ConnectMethod.java:144) at org.apache.commons.httpclient.HttpMethodDirector.executeConnect(HttpMethodDirector.java:495) at org.apache.commons.httpclient.HttpMethodDirector.executeWithRetry(HttpMethodDirector.java:390) at org.apache.commons.httpclient.HttpMethodDirector.executeMethod(HttpMethodDirector.java:170) at org.apache.commons.httpclient.HttpClient.executeMethod(HttpClient.java:396) at org.apache.commons.httpclient.HttpClient.executeMethod(HttpClient.java:324) at test.main(test.java:42) ``` When I remove that line, everything runs fine as expected.
For httpclient 4.1.x you can set the proxy like this (taken from [this example](http://hc.apache.org/httpcomponents-client-ga/httpclient/examples/org/apache/http/examples/client/ClientExecuteProxy.java)): ``` HttpHost proxy = new HttpHost("127.0.0.1", 8080, "http"); DefaultHttpClient httpclient = new DefaultHttpClient(); try { httpclient.getParams().setParameter(ConnRoutePNames.DEFAULT_PROXY, proxy); HttpHost target = new HttpHost("issues.apache.org", 443, "https"); HttpGet req = new HttpGet("/"); System.out.println("executing request to " + target + " via " + proxy); HttpResponse rsp = httpclient.execute(target, req); ... } finally { // When HttpClient instance is no longer needed, // shut down the connection manager to ensure // immediate deallocation of all system resources httpclient.getConnectionManager().shutdown(); } ```
Why is it necessary to freeze all inner state of a Batch Normalization layer when fine-tuning? The following content comes from a Keras tutorial > > This behavior has been introduced in TensorFlow 2.0, in order to enable layer.trainable = False to produce the most commonly expected behavior in the convnet fine-tuning use case. > > > Why should we freeze the layer when fine-tuning a convolutional neural network? Is it because of some mechanism in TensorFlow Keras or because of the algorithm of batch normalization? I ran an experiment myself and I found that if trainable is not set to false the model tends to catastrophically forget what it has learned before and returns a very large loss in the first few epochs. What's the reason for that?
During training, varying batch statistics act as a regularization mechanism that can improve the ability to generalize. This can help to minimize overfitting when training for a high number of iterations. Indeed, using a very large batch size [can harm generalization](https://arxiv.org/abs/1804.07612) as there is less variation in batch statistics, decreasing regularization. When fine-tuning on a new dataset, batch statistics are likely to be very different if the fine-tuning examples have different characteristics to the examples in the original training dataset. Therefore, if batch normalization is not frozen, the network will learn new batch normalization parameters (gamma and beta in the [batch normalization paper](https://arxiv.org/abs/1502.03167)) that are different to what the other network parameters have been optimised for during the original training. Relearning all the other network parameters is often undesirable during fine-tuning, either due to the required training time or the small size of the fine-tuning dataset. Freezing batch normalization avoids this issue.
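To make the "very different batch statistics" point concrete, here is a small NumPy-only sketch (not Keras code; the distributions and sizes are made up for illustration) comparing the running statistics a batch normalization layer would have stored during the original training with the statistics of a batch drawn from a shifted fine-tuning domain:

```python
import numpy as np

rng = np.random.default_rng(0)

# Activations as seen during the original training: roughly N(0, 1)
original = rng.normal(loc=0.0, scale=1.0, size=10_000)
running_mean, running_var = original.mean(), original.var()

# Activations produced by fine-tuning examples from a different domain
new_domain = rng.normal(loc=3.0, scale=2.0, size=10_000)
batch_mean, batch_var = new_domain.mean(), new_domain.var()

# An unfrozen batch-norm layer would start adapting to the new statistics,
# which are far from what the rest of the network was optimised against
print(f"running stats: mean={running_mean:.2f}, var={running_var:.2f}")
print(f"batch stats:   mean={batch_mean:.2f}, var={batch_var:.2f}")
```

Freezing the layer (`layer.trainable = False` in TensorFlow 2.x) keeps both the running statistics and gamma/beta fixed, so downstream layers keep receiving inputs in the regime they were trained for.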
Parsing a lightweight language in Python Say I define a string in Python like the following: ``` my_string = "something{name1, name2, opt1=2, opt2=text}, something_else{name3, opt1=58}" ``` I would like to parse that string in Python in a way that allows me to index the different structures of the language. For example, the output could be a dictionary `parsing_result` that allows me to index the different elements in a structred manner. For example, the following: ``` parsing_result['names'] ``` would hold a `list` of strings: `['name1', 'name2']` whereas `parsing_result['options']` would hold a dictionary so that: - `parsing_result['something']['options']['opt2']` holds the string `"text"` - `parsing_result['something_else']['options']['opt1']` holds the string `"58"` My first question is: How do I approach this problem in Python? Are there any libraries that simplify this task? For a working example, I am not necessarily interested in a solution that parses the exact syntax I defined above (although that would be fantastic), but anything close to it would be great. ### Update 1. It looks like the general right solution is using a parser and a lexer such as [ply](http://www.dabeaz.com/ply/) (thank you @Joran), but the documentation is a bit intimidating. Is there an easier way of getting this done when the syntax is **lightweight**? 2. I found [this thread](https://stackoverflow.com/questions/1648537/how-to-split-a-string-by-commas-positioned-outside-of-parenthesis) where the following regular expression is provided to partition a string around outer commas: ``` r = re.compile(r'(?:[^,(]|\([^)]*\))+') r.findall(s) ``` But this is assuming that the grouping character are `()` (and not `{}`). I am trying to adapt this, but it doesn't look easy.
I highly recommend **[pyparsing](http://pyparsing.wikispaces.com/)**: > > The pyparsing module is an alternative approach to creating and > executing simple grammars, vs. the traditional lex/yacc approach, or > the use of regular expressions. > > > The Python representation of the grammar is quite > **readable**, owing to the **self-explanatory class names**, and the use of > '+', '|' and '^' operator definitions. The parsed results returned from parseString() **can be accessed as a nested list, a dictionary, or an object with named attributes**. > > > **Sample code** (Hello world from the pyparsing docs): ``` from pyparsing import Word, alphas greet = Word( alphas ) + "," + Word( alphas ) + "!" # <-- grammar defined here hello = "Hello, World!" print (hello, "->", greet.parseString( hello )) ``` **Output:** ``` Hello, World! -> ['Hello', ',', 'World', '!'] ``` ***Edit:*** Here's a solution to your sample language: ``` from pyparsing import * import json identifier = Word(alphas + nums + "_") expression = identifier("lhs") + Suppress("=") + identifier("rhs") struct_vals = delimitedList(Group(expression | identifier)) structure = Group(identifier + nestedExpr(opener="{", closer="}", content=struct_vals("vals"))) grammar = delimitedList(structure) my_string = "something{name1, name2, opt1=2, opt2=text}, something_else{name3, opt1=58}" parse_result = grammar.parseString(my_string) result_list = parse_result.asList() def list_to_dict(l): d = {} for struct in l: d[struct[0]] = {} for ident in struct[1]: if len(ident) == 2: d[struct[0]][ident[0]] = ident[1] elif len(ident) == 1: d[struct[0]][ident[0]] = None return d print json.dumps(list_to_dict(result_list), indent=2) ``` **Output:** (pretty printed as JSON) ``` { "something_else": { "opt1": "58", "name3": null }, "something": { "opt1": "2", "opt2": "text", "name2": null, "name1": null } } ``` Use the [pyparsing API](http://pythonhosted.org/pyparsing/) as your guide to exploring the functionality of pyparsing and 
understanding the nuances of my solution. I've found that the quickest way to master this library is trying it out on some simple languages you think up yourself.
Is there a good Ruby book for learning sockets programming? Is there a good book for Ruby sockets programming, or do we have to rely on Unix C books for theory and source?
If the goal is to learn socket programming I would recommend using C and reading through some Unix system programming books. The books will probably be much better and go into more detail than a sockets book that is Ruby specific (mainly because C, Unix and sockets have been around much longer than Ruby and my quick Googling didn't find a Ruby specific book). Books I would recommend: - [Unix Systems Programming](https://rads.stackoverflow.com/amzn/click/com/0130424110) - [Unix Network Programming Volume 1](https://rads.stackoverflow.com/amzn/click/com/0131411551) After getting a handle on the sockets in general (even if you don't write any C) the [Ruby API](http://www.ruby-doc.org/stdlib/libdoc/socket/rdoc/classes/Socket.html) for sockets will make much more sense.
Keyboard shortcut for C# Console Class I know the shortcut for `Console.WriteLine()` is to type **cw** and press `Tab` twice. Does anyone know the shortcut for `Console.ReadLine()`?
Save the following as a `.snippet` and import it (used the cw snippet as a template) ``` <?xml version="1.0" encoding="utf-8" ?> <CodeSnippets xmlns="http://schemas.microsoft.com/VisualStudio/2005/CodeSnippet"> <CodeSnippet Format="1.0.0"> <Header> <Title>cr</Title> <Shortcut>cr</Shortcut> <Description>Code snippet for Console.ReadLine</Description> <Author>Whoever you want it to be</Author> <SnippetTypes> <SnippetType>Expansion</SnippetType> </SnippetTypes> </Header> <Snippet> <Declarations> <Literal Editable="false"> <ID>SystemConsole</ID> <Function>SimpleTypeName(global::System.Console)</Function> </Literal> </Declarations> <Code Language="csharp"><![CDATA[$SystemConsole$.ReadLine();]]> </Code> </Snippet> </CodeSnippet> </CodeSnippets> ``` Once imported, you can hit **cr** and `tab` twice to get `Console.Readline();`
Flutter FutureBuilder gets constantly called I'm experiencing interesting behavior. I have a FutureBuilder in a Stateful widget. If I return the FutureBuilder alone, everything is OK. My API gets called only once. However, if I put in extra logic and make a choice between two widgets - I can see in Chrome that my API gets called tens of times. I know that the `build` method can execute at any time, but how does that extra logic completely break the Future's behavior? Here is an example where the API is called once. ``` @override Widget build(BuildContext context) { return FutureBuilder(..); } ``` Here is an example where the API is called multiple times if `someBooleanFlag` is `false`. ``` @override Widget build(BuildContext context) { if(someBooleanFlag){ return Text('Hello World'); } else{ return FutureBuilder(..); } } ``` Thanks
Even if your code is working in the first place, you are not doing it correctly. As stated in the official documentation of [FutureBuilder](https://api.flutter.dev/flutter/widgets/FutureBuilder-class.html), > > The `future` must be obtained earlier, because if the `future` is created at the same time as the FutureBuilder, then every time the `FutureBuilder`'s parent is rebuilt, the asynchronous task will be restarted. > > > Following are the correct ways of doing it. Use either of them: - Lazily initializing your `Future`. ``` // Create a late instance variable and assign your `Future` to it. late final Future myFuture = getFuture(); @override Widget build(BuildContext context) { return FutureBuilder( future: myFuture, // Use that variable here. builder: (context, snapshot) {...}, ); } ``` - Initializing your `Future` in `initState`: ``` // Create an instance variable. late final Future myFuture; @override void initState() { super.initState(); // Assign that variable your Future. myFuture = getFuture(); } @override Widget build(BuildContext context) { return FutureBuilder( future: myFuture, // Use that variable here. builder: (context, snapshot) {}, ); } ```
What is the Gradle build system in Kotlin? I was reading the Kotlin documentation and I came across the statement, *By default, your project will use the Gradle build system with Kotlin DSL.* What does that mean? I've seen the Gradle Kotlin option while making a new project in IntelliJ: [![Screenshot of IntelliJ project configuration](https://i.stack.imgur.com/tOGXc.png)](https://i.stack.imgur.com/tOGXc.png) Can somebody explain these options to me, and which one I should be using as a beginner?
A **build system** combines and simplifies some of the key tasks involved in building and distributing your program. The main things a build system does include: - Downloading any **dependencies** your application has - Running **tests** against your application - **Compiling** your code - **Packaging** up your application and its dependencies into a form you can share with others You could run all of these tasks separately yourself, but build systems make it a lot easier and less prone to mistakes. In practice, all but the smallest projects use some kind of build system. Gradle is one such tool, but you can also use Maven, or the tools built into an IDE like IntelliJ. ## Which one should I use? If this is a personal project, the build system and tools built into an IDE like IntelliJ are more than good enough. If you're working with other people, you might want to consider a standalone build system instead. That's because standalone build systems like Gradle can be used with multiple IDEs, and can also be used on the command line without an IDE at all. Large projects with many contributors will often run a build server that runs the build system in an automated way against all new changes, to make sure the code builds and runs as expected. IDEs like IntelliJ have very good integration with the common build systems, including Maven and Gradle, so you won't disadvantage yourself by choosing them over the built-in IDE tools. ## Maven, Gradle, or Gradle with Kotlin? There are plenty of other resources you can find comparing Maven with Gradle. The crucial difference, though, is the way you write the **build script** that allows you to customise the dependencies, tests, and other parameters of your build. - In **Maven**, your build script is an **XML** file. It follows a rigid structure, providing inputs and configuration to existing tasks and plugins. 
- In **Gradle**, the build script was historically written in **Groovy**, a loosely-typed language that gives you a lot of flexibility. As well as configuring tasks and plugins, you can easily add your own tasks and functions. - You can also choose to write Gradle build scripts in **Kotlin**. This offers the same flexibility and customisation as Groovy, but the addition of a type system means the IDE can give you much more help with writing the script correctly.
C++: Accessing parent methods and variables In which way should I access this parent method and parent variable? ``` class Base { public: std::string mWords; Base() { mWords = "blahblahblah"; } }; class Foundation { public: void Write( std::string text ) { std::cout << text; } }; class Child : public Base, public Foundation { void DoSomething() { this->Write( this->mWords ); // or Foundation::Write( Base::mWords ); } }; ``` Thanks. Edit: And what if there is ambiguity?
The two syntaxes you use in your code (`this->...` and qualified names) are only necessary specifically when there is ambiguity or some other name lookup issue, like name hiding, template base class etc. When there's no ambiguity or other problems you don't need any of these syntaxes. All you need is a simple unqualified name, like `Write` in your example. Just `Write`, not `this->Write` and not `Foundation::Write`. The same applies to `mWords`. I.e. in your specific example a plain `Write( mWords )` will work just fine. --- To illustrate the above, if your `DoSomething` method had `mWords` parameter, as in ``` DoSomething(int mWords) { ... ``` then this local `mWords` parameter would hide inherited class member `mWords` and you'd have to use either ``` DoSomething(int mWords) { Write(this->mWords); } ``` or ``` DoSomething(int mWords) { Write(Foundation::mWords); } ``` to express your intent properly, i.e. to break through the hiding. --- If your `Child` class also had its own `mWords` member, as in ``` class Child : public Base, public Foundation { int mWords ... ``` then this name would hide the inherited `mWords`. The `this->mWords` approach in this case would not help you to unhide the proper name, and you'd have to use the qualified name to solve the problem ``` DoSomething(int mWords) { Write(Foundation::mWords); } ``` --- If both of your base classes had an `mWords` member, as in ``` class Base { public: std::string mWords; ... }; class Foundation { public: int mWords; ... ``` then in `Child::DoSomething` the `mWords` name would be ambiguous, and you'd have to do ``` DoSomething(int mWords) { Write(Foundation::mWords); } ``` to resolve the ambiguity. --- But, again, in your specific example, where there's no ambiguity and no name hiding all this is completely unnecessary.
Can I create a class [] operator in Typescript Is there a way to override the [] operation in typescript? We use 1.4 so we're fine with a solution that requires 1.4. **Update:** I was horribly unclear in my initial question. What I meant was can I add that as an operator to a class. In my class I presently have a method: ``` public get(index : number) : LinkedListNode<t> { if (this._first === null || (index >= this._count)) { return null; } var node = this._first; while (index-- > 0) { node = node._next; } return node; } ``` I would prefer to be able to call data[5] instead of data.get(5). Is there a way to do this? thanks & sorry for the incredibly inaccurate initial question.
## Update In response to your update to your question, no it's not possible to overload the index operator for a class—you can't do `data[5]` instead of `data.get(5)`. In my opinion, the reason this has not been implemented is because JavaScript allows accessing an object's properties using brackets and that creates some ambiguity. For example, if `data.myProperty` is a property that exists and `data['myProperty']` is called, then it would be difficult to decide if it should return the `myProperty` property or if the string 'myProperty' should be passed to the index overload. ## Original Answer It's not possible to change the result of: ``` var a = []; ``` Imagine the problems that could occur if people were allowed to change that behaviour? Library A could define it one way and then Library B could overwrite that with its own behaviour... meaning Library B now uses Library A's `[]` behaviour. What you can do is add methods to `Array`'s prototype: ``` interface Array { log: () => void; } Array.prototype.log = function() { console.log(JSON.stringify(this)); }; ``` Then use: ``` var a = []; a.log(); ``` *However, doing this is extremely not recommended!* You shouldn't be modifying objects you don't own because it can lead to unforeseen problems. The reason not to do this is similar to why changing `[]`'s behaviour would lead to problems: 1. Library A defines a `log` method. 2. Library B defines a `log` method that works differently. 3. Now the `log` method for Library A won't work as expected because it's using Library B's method. **Recommendation** I would suggest creating your own implementation of array: ``` class MyArray<T> implements Array<T> { private _underlyingArray : Array<T> = []; // implement methods for Array here log() { console.log(JSON.stringify(this._underlyingArray)); } } ``` Or create a helper class: ``` class ArrayHelper { static log<T>(a: Array<T>) { console.log(JSON.stringify(a)); } } ```
How to redirect domain A to domain B using A-Records and CNAME records only I have 2 domains hosted with different hosts. I need to redirect Domain A to Domain B. Unfortunately I can't do a 301 redirect from Host A, but can only modify/add DNS entries (A-Records and CNAMEs) at Host A. Surely it is possible to redirect www.DomainA.com to www.DomainB.com using only A-records and CNAMEs? At present, the DNS entries are: ``` DomainA.com. 3600 IN SOA ns1.HostA.net. www 3600 IN CNAME www.DomainB.com. DomainA.com. 3600 IN NS ns1.HostA.net. DomainA.com. 3600 IN NS ns2.HostA.net. DomainA.com. 3600 IN NS ns3.HostA.net. ``` I want to redirect ``` DomainA.com -> DomainB.com *.DomainA.com -> *.DomainB.com ``` I've tried the suggestion from this [other post](https://serverfault.com/questions/207705/domain-a-to-domain-b-cname-mapping) but it didn't work. How can I achieve this only with A-Records and CNAMEs please? Thank you for your advice. Prembo.
So you are not looking at redirection as such (that happens at the app level, i.e. on Apache/Nginx/wherever) but rather at DNS resolution. The host on which DomainA is hosted will, or should, never be hit, based on your description, as you want the DNS requests to be resolved to the IPs of DomainB. Unless I'm missing something in your request? As Shane pointed out, DNS is not capable of HTTP redirection - that's an application/webserver duty. You could make DomainA and DomainB resolve to the same IP in DNS and all would work. But if you're looking to do this on a per-URL/per-path basis then it is not possible - DNS is not capable of that - it's a simple name->IP service; what happens with the actual URL is the webserver's task. After the comment below, what I'd do is point all DNS records for DomainA at the same IP(s) as DomainB - this way the HTTP requests will hit DomainB's host, and then it's just a simple matter of: 1. creating a particular Apache Name Based Virtual host - which will serve files from its own DocumentRoot 2. creating a permanent redirect on Apache like this: ``` <VirtualHost *:80> ServerName DomainA Redirect permanent / http://DomainB/ </VirtualHost> ``` This will redirect anything coming in for DomainA over to DomainB, which can be hosted on the same server or somewhere else. I appreciate that the second option is probably an overhead and not necessary if you can/are allowed to create Name Based Virtual hosts on Apache. I'd go with 1. - point all DNS records of DomainA to the same IP(s) as DomainB points to and create the appropriate Name Based VirtualHosts on Apache.
NSFileManager creating folder (Cocoa error 513.) I'm trying to create a folder inside the /sounds folder of my app. ``` -(void)productPurchased:(UAProduct*) product { NSLog(@"[StoreFrontDelegate] Purchased: %@ -- %@", product.productIdentifier, product.title); NSFileManager *manager = [NSFileManager defaultManager]; NSString *bundleRoot = [[NSBundle mainBundle] bundlePath]; NSError *error; NSString *dataPath = [NSString stringWithFormat:@"%@/sounds/%@", bundleRoot, product.title]; if (![manager fileExistsAtPath:dataPath isDirectory:YES]) { [manager createDirectoryAtPath:dataPath withIntermediateDirectories:YES attributes:nil error:&error]; NSLog(@"Creating folder"); } NSLog(@"%@", error); } ``` But I get this error: ``` Error Domain=NSCocoaErrorDomain Code=513 "The operation couldn’t be completed. (Cocoa error 513.)" UserInfo=0x175120 {NSFilePath=/var/mobile/Applications/D83FDFF9-2600-4056-9047-05F82633A2E4/App.app/sounds/Test Tones, NSUnderlyingError=0x117520 "The operation couldn’t be completed. Operation not permitted"} ``` What am I doing wrong? Thanks.
If you search Google on the error domain [`NSCocoaErrorDomain`](http://developer.apple.com/mac/library/documentation/Cocoa/Reference/Foundation/Miscellaneous/Foundation_Constants/Reference/reference.html#//apple_ref/doc/c_ref/NSCocoaErrorDomain) you find that the code `513` translates to the error [`NSFileWriteNoPermissionError`](http://developer.apple.com/mac/library/documentation/Cocoa/Reference/Foundation/Miscellaneous/Foundation_Constants/Reference/reference.html#//apple_ref/doc/c_ref/NSFileWriteNoPermissionError). This provides you with the critical clue for solving this problem: > > [This is the bundle directory containing the application itself. Because an application must be signed, you must not make changes to the contents of this directory at runtime. Doing so may prevent your application from launching later.](http://developer.apple.com/iphone/library/documentation/iphone/conceptual/iphoneosprogrammingguide/RuntimeEnvironment/RuntimeEnvironment.html#//apple_ref/doc/uid/TP40007072-CH2-SW10) > > > Specifically, you cannot modify the contents of a compiled app's bundle folder. This is because the bundle is the compiled application. When you eventually distribute the app through the iTunes App Store, the application has a digital signature that validates the contents of the app. This signature is generated at compile time. If you try to change the bundle after compilation, the app changes and the digital signature is no longer valid. This invalidates the application — who knows what code is in there, right? — and so Apple has set up iOS to throw an error if you try to modify the application. Instead of writing to the bundle, your app can write to [one of three accepted app-specific folders](https://developer.apple.com/icloud/documentation/data-storage/): `<Application_Home>/Documents`, `<Application_Home>/tmp` and `<Application_Home>/Library/Caches`. Most likely, you will want to write to the `<Application_Home>/Documents` folder. 
These folders are only accessible to your app. No other app can access the contents of these folders. (Likewise, your app cannot access another app's folders.) You can set up your app to allow the end user to manage access to file data through iTunes, via [desktop file sharing support](http://developer.apple.com/iphone/library/documentation/iphone/conceptual/iphoneosprogrammingguide/StandardBehaviors/StandardBehaviors.html#//apple_ref/doc/uid/TP40007072-CH4-SW10).
Do I need to synchronize a call to the interrupt method? Consulting the JavaDocs and the source code of the `Thread.interrupt()` method in `Java SE 7` I found this: ``` public void interrupt() { if (this != Thread.currentThread()) checkAccess(); synchronized (blockerLock) { Interruptible b = blocker; if (b != null) { interrupt0(); // Just to set the interrupt flag b.interrupt(this); return; } } interrupt0(); //1, Outside of the synchronized block } //... private native void interrupt0(); ``` As can be seen, the native method invocation at `//1` is outside of the synchronized block. So, is it safe if don't put a call to the `interrupt()` method into a `synchronized` block? ``` Thread t; //something else t.interrupt(); //Not in a synchronized block ``` Will it be thread-safe? What if more than 1 thread will try to interrupt it simultaneously? How will the native method `interrupt0` behave then?
I would say yes ... it is thread-safe. Reasons: 1. If it was necessary for applications to call `interrupt()` in a `synchronized` block, then the spec (the javadoc) would say so, and also say what object you needed to synchronize on to get thread-safety. In fact, the javadoc says nothing about this. 2. If it was necessary for applications to call `interrupt()` in a `synchronized` block, then the Oracle Java Tutorial on Concurrency would mention it [on this page](https://docs.oracle.com/javase/tutorial/essential/concurrency/interrupt.html). It doesn't. 3. If external synchronization on the `Thread` object was necessary to make the `interrupt()` call thread-safe, then it is hard to explain why the method is doing internal synchronization as well. (They could / would have made the entire method synchronized if it was necessary.) The above evidence is (IMO) convincing, though not absolute proof. If you wanted proof that `interrupt()` is thread-safe, you would get it by a thorough analysis of the native code implementation for `interrupt0()`. I haven't looked at the native code, but I would expect that `interrupt0` is internally thread-safe, and that that is sufficient to make the `interrupt` method thread-safe.
Calculate Angle from Two Points and a Direction Vector I have two vectors in a game. One vector is the player, one vector is an object. I also have a vector that specifies the direction the player if facing. The direction vector has no z part. It is a point that has a magnitude of 1 placed somewhere around the origin. I want to calculate the angle between the direction the soldier is currently facing and the object, so I can correctly pan some audio (stereo only). The diagram below describes my problem. I want to calculate the angle between the two dashed lines. One dashed line connects the player and the object, and the other is a line representing the direction the player is facing from the point the player is at. ![Angle Problem](https://i.stack.imgur.com/FuXTH.png) At the moment, I am doing this (assume player, object and direction are all vectors with 3 points, x, y and z): ``` Vector3d v1 = direction; Vector3d v2 = object - player; v1.normalise(); v2.normalise(); float angle = acos(dotProduct(v1, v2)); ``` But it seems to give me incorrect results. Any advice? Test of code: ``` Vector3d soldier = Vector3d(1.f, 1.f, 0.f); Vector3d object = Vector3d(1.f, -1.f, 0.f); Vector3d dir = Vector3d(1.f, 0.f, 0.f); Vector3d v1 = dir; Vector3d v2 = object - soldier; long steps = 360; for (long step = 0; step < steps; step++) { float rad = (float)step * (M_PI / 180.f); v1.x = cosf(rad); v1.y = sinf(rad); v1.normalise(); float dx = dotProduct(v2, v1); float dy = dotProduct(v2, soldier); float vangle = atan2(dx, dy); } ```
You should always use `atan2` when computing angular deltas, and then normalize. The reason is that, for example, `acos` is a function with domain -1...1; even after normalizing, if the input's absolute value gets slightly bigger than 1 because of floating-point approximations, the function will fail, even though it's clear that in such a case you would have wanted an angle of 0 or PI instead. Also, `acos` cannot measure the full range -PI..PI and you'd need to use explicit sign tests to find the correct quadrant. `atan2`, instead, has its only singularity at `(0, 0)` (where of course it doesn't make sense to compute an angle) and its codomain is the full circle -PI...PI. Here is an example in C++ ``` // Absolute angle 1 double a1 = atan2(object.y - player.y, object.x - player.x); // Absolute angle 2 double a2 = atan2(direction.y, direction.x); // Relative angle double rel_angle = a1 - a2; // Normalize to -PI .. +PI rel_angle -= floor((rel_angle + PI)/(2*PI)) * (2*PI); ``` In the case of a general 3d orientation you need **two** orthogonal directions, e.g. the vector of where the nose is pointing to and the vector to where your right ear is. In that case the formulas are just slightly more complex, but simpler if you have the dot product handy: ``` // I'm assuming that '*' is defined as the dot product // between two vectors: x1*x2 + y1*y2 + z1*z2 double dx = (object - player) * nose_direction; double dy = (object - player) * right_ear_direction; double angle = atan2(dx, dy); // Already in -PI ... PI range ``` ![enter image description here](https://i.stack.imgur.com/SfKHC.png)
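The same approach is language-agnostic. As a quick sanity check of the `atan2`-based relative angle (sketched in Python here, with 2D points as plain tuples), the difference can also be wrapped into (-PI, PI] via `atan2(sin, cos)`, an equivalent and numerically robust normalization:

```python
import math

def relative_angle(player, obj, direction):
    """Signed angle from the facing direction to the object, in (-pi, pi]."""
    a1 = math.atan2(obj[1] - player[1], obj[0] - player[0])  # absolute angle 1
    a2 = math.atan2(direction[1], direction[0])              # absolute angle 2
    d = a1 - a2
    # atan2 of (sin, cos) wraps the difference into (-pi, pi] in one step
    return math.atan2(math.sin(d), math.cos(d))

# Values from the question's test: soldier at (1, 1) facing +x,
# object at (1, -1), i.e. straight "down" -> -90 degrees
print(math.degrees(relative_angle((1, 1), (1, -1), (1, 0))))
```

For audio panning, the sign of the result then tells you which channel to favour and its magnitude how strongly.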
How to extend a primitive data type in FHIR with an extension? I'm trying to integrate the `FHIR KBV_PR_Base_Observation_Heart_Rate` profile. In the coding segment of the FHIR Resource, the profile requires that the display element be provided with an extension for the German-speaking area. <https://fhir.kbv.de/StructureDefinition/KBV_PR_Base_Observation_Heart_Rate> How can I meet the requirements of the profile? I do not understand how I should include the extension at that point. I tried the following, but the validator didn't seem to like it (which is also logical, since the value is then no longer a primitive data type): ``` code: { coding: [{ system: 'http://loinc.org', version: '2.69', code: '8867-4', display: { value: 'Heart rate', extension: { url: 'https://fhir.kbv.de/StructureDefinition/KBV_EX_Base_Terminology_German', anzeigenameCodeLoinc: { extension: { content: { url: 'content', valueString: 'Herzfrequenz', }, }, }, }, }, }, { system: 'http://snomed.info/sct', version: '1.1.3', code: '364075005', display: { value: 'Heart rate (observable entity)', extension: { url: 'https://fhir.kbv.de/StructureDefinition/KBV_EX_Base_Terminology_German', anzeigenameCodeLoinc: { extension: { content: { url: 'content', valueString: 'Herzfrequenz', }, }, }, }, }, }], text: 'Heart rate', }, ``` The output of the validator: ``` Error @ Observation.code.coding[0].display (line 24, col25) : This property must be a simple value, not an object Error @ Observation.code.coding[1].display (line 43, col25) : This property must be a simple value, not an object ``` Without the extension: ``` code: { coding: [{ system: 'http://loinc.org', version: '2.69', code: '8867-4', display: 'Heart rate' }, { ... 
}], text: 'Heart rate', }, ``` Validator output: ``` Error @ Observation.code.coding[0].display (line 28, col8) : Observation.code.coding:loinc.display.extension:anzeigenameCodeLoinc: at least required = 1, but only found 0 Error @ Observation.code.coding[1].display (line 34, col8) : Observation.code.coding:codeSnomed.display.extension:anzeigenameCodeSnomed: at least required = 1, but only found 0 ```
JSON primitives are extended with an `_` before the property name <https://hl7.org/fhir/json.html#primitive>. This is a separate field from the display value itself. So your Observation would look like this ``` { "code": { "coding": [ { "system": "http://loinc.org", "version": "2.69", "code": "8867-4", "display": "Heart rate", "_display": { "extension": [{ "url": "https://fhir.kbv.de/StructureDefinition/KBV_EX_Base_Terminology_German", "extension": [{ "url": "content", "valueString": "Herzfrequenz" }] }] } }, { "system": "http://snomed.info/sct", "version": "1.1.3", "code": "364075005", "display": "Heart rate (observable entity)", "_display": { "extension": [{ "url": "https://fhir.kbv.de/StructureDefinition/KBV_EX_Base_Terminology_German", "extension": [{ "url": "content", "valueString": "Herzfrequenz" }] }] } } ], "text": "Heart rate" } } ```
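If you build the resource programmatically rather than by hand, the underscore-prefixed sibling is easy to attach mechanically. A small Python sketch (the helper name is invented for illustration; only the extension URL comes from the profile discussed above):

```python
def with_german_display(coding, german_text):
    """Copy a FHIR coding dict and attach the KBV German-display extension
    to the primitive `display` via its `_display` sibling property."""
    out = dict(coding)
    out["_display"] = {
        "extension": [{
            "url": "https://fhir.kbv.de/StructureDefinition/KBV_EX_Base_Terminology_German",
            "extension": [{"url": "content", "valueString": german_text}],
        }]
    }
    return out

coding = with_german_display(
    {"system": "http://loinc.org", "code": "8867-4", "display": "Heart rate"},
    "Herzfrequenz",
)
```

The plain `display` string stays a simple value, which is exactly what keeps the validator's "must be a simple value, not an object" error away.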
256bit AES/CBC/PKCS5Padding with Bouncy Castle I am having trouble mapping the following JDK JCE encryption code to Bouncy Castles Light-weight API: ``` public String dec(String password, String salt, String encString) throws Throwable { // AES algorithm with CBC cipher and PKCS5 padding Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding", "BC"); // Construct AES key from salt and 50 iterations PBEKeySpec pbeEKeySpec = new PBEKeySpec(password.toCharArray(), toByte(salt), 50, 256); SecretKeyFactory keyFactory = SecretKeyFactory.getInstance("PBEWithSHA256And256BitAES-CBC-BC"); SecretKeySpec secretKey = new SecretKeySpec(keyFactory.generateSecret(pbeEKeySpec).getEncoded(), "AES"); // IV seed for first block taken from first 32 bytes byte[] ivData = toByte(encString.substring(0, 32)); // AES encrypted data byte[] encData = toByte(encString.substring(32)); cipher.init( Cipher.DECRYPT_MODE, secretKey, new IvParameterSpec( ivData ) ); return new String( cipher.doFinal( encData ) ); } ``` The above works great, but is not very portable due to Oracle's restriction on encryption strengths. I've made several attempts at porting to Bouncy Castles Light-weight API but without success. 
``` public String decrypt1(String password, String salt, String encString) throws Exception { byte[] ivData = toByte(encString.substring(0, 32)); byte[] encData = toByte(encString.substring(32)); PKCS12ParametersGenerator gen = new PKCS12ParametersGenerator(new SHA256Digest()); gen.init(password.getBytes(), toByte(salt), 50); CBCBlockCipher cbcBlockcipher = new CBCBlockCipher(new RijndaelEngine(256)); CipherParameters params = gen.generateDerivedParameters(256, 256); cbcBlockcipher.init(false, params); PaddedBufferedBlockCipher aesCipher = new PaddedBufferedBlockCipher(cbcBlockcipher, new PKCS7Padding()); byte[] plainTemp = new byte[aesCipher.getOutputSize(encData.length)]; int offset = aesCipher.processBytes(encData, 0, encData.length, plainTemp, 0); int last = aesCipher.doFinal(plainTemp, offset); byte[] plain = new byte[offset + last]; System.arraycopy(plainTemp, 0, plain, 0, plain.length); return new String(plain); } ``` The above attempt results in a org.bouncycastle.crypto.DataLengthException: last block incomplete in decryption. I have searched for examples online, but there isn't many examples of providing your own IV data for 256bit AES with CBC using PKCS5/PKCS7 as padding. NB: The toByte function converts a String into a byte array using base64 or similar.
This should work for you: ``` public String dec(String password, String salt, String encString) throws Exception { byte[] ivData = toByte(encString.substring(0, 32)); byte[] encData = toByte(encString.substring(32)); // get raw key from password and salt PBEKeySpec pbeKeySpec = new PBEKeySpec(password.toCharArray(), toByte(salt), 50, 256); SecretKeyFactory keyFactory = SecretKeyFactory .getInstance("PBEWithSHA256And256BitAES-CBC-BC"); SecretKeySpec secretKey = new SecretKeySpec(keyFactory.generateSecret( pbeKeySpec).getEncoded(), "AES"); byte[] key = secretKey.getEncoded(); // setup cipher parameters with key and IV KeyParameter keyParam = new KeyParameter(key); CipherParameters params = new ParametersWithIV(keyParam, ivData); // setup AES cipher in CBC mode with PKCS7 padding BlockCipherPadding padding = new PKCS7Padding(); BufferedBlockCipher cipher = new PaddedBufferedBlockCipher( new CBCBlockCipher(new AESEngine()), padding); cipher.reset(); cipher.init(false, params); // create a temporary buffer to decode into (it'll include padding) byte[] buf = new byte[cipher.getOutputSize(encData.length)]; int len = cipher.processBytes(encData, 0, encData.length, buf, 0); len += cipher.doFinal(buf, len); // remove padding byte[] out = new byte[len]; System.arraycopy(buf, 0, out, 0, len); // return string representation of decoded bytes return new String(out, "UTF-8"); } ``` I assume that you're actually doing hex encoding for `toByte()` since your code uses 32 characters for the IV (which provides the necessary 16 bytes). While I don't have the code you used to do the encryption, I did verify that this code will give the same decrypted output as your code.
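One detail worth internalizing from the "last block incomplete" symptom is how the padding behaves: the ciphertext is always a whole number of blocks, and the padding only validates if the derived key and IV were right. The padding rule itself is tiny; here it is as a library-independent Python sketch:

```python
def pkcs7_pad(data: bytes, block_size: int = 16) -> bytes:
    """Append n copies of the byte n, where n is the number of bytes
    needed to reach a block boundary; aligned input gets a full extra block."""
    n = block_size - (len(data) % block_size)
    return data + bytes([n]) * n

def pkcs7_unpad(data: bytes, block_size: int = 16) -> bytes:
    """Strip the padding appended by pkcs7_pad, validating it first."""
    n = data[-1]
    if not 1 <= n <= block_size or data[-n:] != bytes([n]) * n:
        raise ValueError("invalid PKCS#7 padding")
    return data[:-n]
```

PKCS#5 is the same scheme fixed at 8-byte blocks; for AES's 16-byte blocks the two names are used interchangeably, which is why Bouncy Castle's `PKCS7Padding` lines up with the JCE's `PKCS5Padding`.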
How can I have an Akka actor executed every 5 min? I'd like to know if there are any mechanism in Akka that can have an actor executed periodically?
You don't really need an actor to do this; in Akka 1.3.1 you can schedule a function to be called every 5 minutes like this:

```
Scheduler.schedule(() => println("Do something"), 0L, 5L, TimeUnit.MINUTES)
```

However, if you do want it to be an actor for other reasons, you would call it like this:

```
case class Message()

val actor = actorOf(new Actor {
  def receive = {
    case Message() => println("Do something in actor")
  }
}).start()

Scheduler.schedule(actor, Message(), 0L, 5L, TimeUnit.MINUTES)
```

If you're using Akka 2.0 then it would be done like this:

```
val system = ActorSystem("MySystem")
system.scheduler.schedule(0 seconds, 5 minutes)(println("do something"))
```

Or send a message to an actor every 5 minutes like this:

```
case class Message()

class MyActor extends Actor {
  def receive = { case Message() => println("Do something in actor") }
}

val system = ActorSystem("MySystem")
val actor = system.actorOf(Props(new MyActor), name = "actor")
system.scheduler.schedule(0 seconds, 5 minutes, actor, Message())
```
How do I create an HTML-formatted ICS message body using ical.net? I need send reminders to users of a web application. To do this I use iCal.Net from nuget packages. Following instructions about the usage I'm able to send an email with an attachment containing the ics file. Is it possible to format the event description as html and make it working on both Outlook office client and Chrome calendar? On these clients I see plain text representation with all the html tags. I tested also on Windows 10 calendar where the event description is correctly displayed as html. Do I have to set something more? This is my calendar string generation ``` var now = DateTime.Now; var later = scadenza.Value; var e = new Event { DtStart = new CalDateTime(later), DtEnd = new CalDateTime(later), Description = mailText, IsAllDay = true, Created = new CalDateTime(DateTime.Now), Summary = $"Reminder for XYZ", }; var attendee = new Attendee { CommonName = "…..", Rsvp = true, Value = new Uri("mailto:" + ConfigurationManager.AppSettings["ReminderReceiver"]) }; e. Attendees = new List<IAttendee> { attendee }; var calendar = new Calendar(); calendar.Events.Add(e); var serializer = new CalendarSerializer(new SerializationContext()); var icalString = serializer.SerializeToString(calendar); ```
HTML formatting isn't part of the icalendar spec, so it's up to applications to add support for this via the non-standard property fields, which are designated with an `X-` prefix. ([Further reading on non-standard properties](https://www.rfc-editor.org/rfc/rfc5545#section-3.8.8.2) if you're interested.) **Google calendar** It doesn't look like Google calendar has any support for HTML-formatted events, so I think you're out of luck there. **Outlook** Outlook uses a number of these `X-` fields for various things; for HTML-formatted descriptions, it uses `X-ALT-DESC` and specifies `FMTTYPE=text/html` like this: ``` X-ALT-DESC;FMTTYPE=text/html:<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2//E N">\n<HTML>\n<HEAD>\n<META NAME="Generator" CONTENT="MS Exchange Server ve rsion rmj.rmm.rup.rpr">\n<TITLE></TITLE>\n</HEAD>\n<BODY>\n<!-- Converted from text/rtf format -->\n\n<P DIR=LTR><SPAN LANG="en-us"><FONT FACE="Cali bri">This is some</FONT></SPAN><SPAN LANG="en-us"><B> <FONT FACE="Calibri" >HTML</FONT></B></SPAN><SPAN LANG="en-us"><FONT FACE="Calibri"></FONT></SP AN><SPAN LANG="en-us"><U> <FONT FACE="Calibri">formatted</FONT></U></SPAN> <SPAN LANG="en-us"><FONT FACE="Calibri"></FONT></SPAN><SPAN LANG="en-us">< I> <FONT FACE="Calibri">text</FONT></I></SPAN><SPAN LANG="en-us"><FONT FAC E="Calibri">.</FONT></SPAN><SPAN LANG="en-us"></SPAN></P>\n\n</BODY>\n</HT ML> ``` You may be able to get away with much simpler HTML, you'll have to test to see what Outlook supports/allows. The HTML block above was generated using Outlook 2013. I have successfully used that block and run it through a serialization round-trip with ical.net, and Outlook opens it without any fuss. **ical.net** Unfortunately you can't *quite* do this in ical.net [due a bug](https://github.com/rianjs/ical.net/issues/209) with `CalendarProperty` serialization, but the workaround is just one line. 
``` const string formatted = //the text block from above starting with FMTTYPE=text/html: var start = DateTime.Now; var end = start.AddHours(1); var @event = new Event { Start = new CalDateTime(start), End = new CalDateTime(end), Description = "This is a description", }; var property = new CalendarProperty("X-ALT-DESC", formatted); @event.AddProperty(property); var calendar = new Calendar(); calendar.Events.Add(@event); var serialized = new CalendarSerializer().SerializeToString(calendar); //Workaround for the bug: serialized = serialized.Replace("X-ALT-DESC:FMTTYPE=text/html", "X-ALT-DESC;FMTTYPE=text/html"); ``` This will produce HTML-formatted Outlook calendar events. (When I fix the bug, I'll update this answer to remove the `string.Replace` call.)
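The post-serialization fix-up is just a text substitution, so it can live anywhere in your pipeline. For reference, a Python sketch of the same workaround (the function name is mine):

```python
def fix_alt_desc(ics_text: str) -> str:
    """Turn the mis-serialized `X-ALT-DESC:FMTTYPE=text/html` (parameter
    glued into the value) into `X-ALT-DESC;FMTTYPE=text/html` (a proper
    property parameter), mirroring the ical.net workaround above."""
    return ics_text.replace("X-ALT-DESC:FMTTYPE=text/html",
                            "X-ALT-DESC;FMTTYPE=text/html")

fixed = fix_alt_desc(
    "BEGIN:VEVENT\r\nX-ALT-DESC:FMTTYPE=text/html:<b>hello</b>\r\nEND:VEVENT\r\n"
)
```

The `;` versus `:` distinction matters because in iCalendar syntax everything after the first unescaped `:` is the property value, while parameters such as `FMTTYPE` must come before it, separated by `;`.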
R read.csv "More columns than column names" error I have a problem when importing `.csv` file into R. With my code: ``` t <- read.csv("C:\\N0_07312014.CSV", na.string=c("","null","NaN","X"), header=T, stringsAsFactors=FALSE,check.names=F) ``` R reports an error and does not do what I want: ``` Error in read.table(file = file, header = header, sep = sep, quote = quote, : more columns than column names ``` I guess the problem is because my data is not well formatted. I only need data from `[,1:32]`. All others should be deleted. Data can be downloaded from: <https://drive.google.com/file/d/0B86_a8ltyoL3VXJYM3NVdmNPMUU/edit?usp=sharing> Thanks so much!
That's one wonky CSV file. Multiple headers tossed about (try pasting it to [CSV Fingerprint](http://setosa.io/csv-fingerprint/)) to see what I mean. Since I don't know the data, it's impossible to be sure the following produces accurate results for you, but it involves using `readLines` and other R functions to pre-process the text: ``` # use readLines to get the data dat <- readLines("N0_07312014.CSV") # i had to do this to fix grep errors Sys.setlocale('LC_ALL','C') # filter out the repeating, and wonky headers dat_2 <- grep("Node Name,RTC_date", dat, invert=TRUE, value=TRUE) # turn that vector into a text connection for read.csv dat_3 <- read.csv(textConnection(paste0(dat_2, collapse="\n")), header=FALSE, stringsAsFactors=FALSE) str(dat_3) ## 'data.frame': 308 obs. of 37 variables: ## $ V1 : chr "Node 0" "Node 0" "Node 0" "Node 0" ... ## $ V2 : chr "07/31/2014" "07/31/2014" "07/31/2014" "07/31/2014" ... ## $ V3 : chr "08:58:18" "08:59:22" "08:59:37" "09:00:06" ... ## $ V4 : chr "" "" "" "" ... ## .. more ## $ V36: chr "" "" "" "" ... ## $ V37: chr "0" "0" "0" "0" ... # grab the headers headers <- strsplit(dat[1], ",")[[1]] # how many of them are there? length(headers) ## [1] 32 # limit it to the 32 columns you want (Which matches) dat_4 <- dat_3[,1:32] # and add the headers colnames(dat_4) <- headers str(dat_4) ## 'data.frame': 308 obs. of 32 variables: ## $ Node Name : chr "Node 0" "Node 0" "Node 0" "Node 0" ... ## $ RTC_date : chr "07/31/2014" "07/31/2014" "07/31/2014" "07/31/2014" ... ## $ RTC_time : chr "08:58:18" "08:59:22" "08:59:37" "09:00:06" ... ## $ N1 Bat (VDC) : chr "" "" "" "" ... ## $ N1 Shinyei (ug/m3): chr "" "" "0.23" "null" ... ## $ N1 CC (ppb) : chr "" "" "null" "null" ... ## $ N1 Aeroq (ppm) : chr "" "" "null" "null" ... ## ... continues ```
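The same strategy (read raw lines, drop the repeated headers, re-parse, truncate to the real columns) translates readily to other languages. A Python sketch with a synthetic example, since I don't have the original file at hand:

```python
import csv
import io

def clean_wonky_csv(text: str, keep_cols: int):
    """Keep the first header line, drop its later repeats, parse the rest,
    and truncate every row to keep_cols columns."""
    lines = text.splitlines()
    header = lines[0]
    body = [ln for ln in lines[1:] if ln != header]   # repeated headers out
    rows = [row[:keep_cols] for row in csv.reader(io.StringIO("\n".join(body)))]
    return header.split(",")[:keep_cols], rows

cols, rows = clean_wonky_csv("a,b,c\n1,2,3\na,b,c\n4,5,6,7\n", keep_cols=3)
```

As with the R version, pre-processing the text before handing it to the CSV parser is what sidesteps the "more columns than column names" class of error.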
Can't activate python venv environment from Makefile I'm trying to activate my virtual environment with a Makefile command but I'm getting an error when I run below. **Command** ``` make env ``` **Error** ``` Makefile:20: warning: overriding commands for target `make' Makefile:17: warning: ignoring old commands for target `make' source ../env/bin/activate make: source: No such file or directory make: *** [env] Error 1 ``` **Makefile** ``` ... env: source ../env/bin/activate ``` - The environment exists one directory above the directory with the Makefile. - Other Makefile commands work. - `source ../env/bin/activate` on the command line works. I wonder if there is something special about Makefiles I don't understand that is causing this to fail?
There are more layers to this and you will probably run into the next one soon, but one thing at a time. Recipes are executed by a shell; more specifically, by default (and I reckon in our case as well) `/bin/sh`, which does not understand `source`. So... change your `Makefile` to say:

```
env:
	. ../env/bin/activate
```

Or define the make `SHELL` variable to be `/bin/bash` and it will appear to work:

```
SHELL := /bin/bash

env:
	source ../env/bin/activate
```

But, next thing... each line of a recipe `fork()`s its own shell (environment), so changes you've made in one shell instance (by sourcing a script) do not affect the next one. You could get around that and spawn just one shell by concatenating multiple recipe commands into what is effectively one line, escaping newlines and separating commands with `;`:

```
env:
	. ../env/bin/activate ; \
	SOME_COMMAND
```

But this is still only effective for the commands that are part of that one command line. You might consider doing the same in multiple recipes, but really, if you need the environment to be active for whatever happens in the make, you may just want to activate it first before calling make.
How do I mask multi-output in Tensorflow 2 LSTM training? I am training an LSTM model in Tensorflow 2 to predict two outputs, streamflow and water temperature. - For some of the time steps there is a streamflow label ***and*** a temperature label, - For some there is only a streamflow label ***or*** a temperature label, - and for some there are ***neither***. So the loss function needs to ignore the temperature and streamflow loss when they don't have a label. I've done quite a bit of reading in the TF docs, but I'm struggling to figure out how to best do this. So far I've tried - specifying `sample_weight_mode='temporal'` when compiling the model and then included a `sample_weight` numpy array when calling `fit` When I do this, I get an error asking me to pass a 2D array. But that confuses me because there are 3 dimensions: `n_samples`, `sequence_length`, and `n_outputs`. Here's some code of what I am basically trying to do: ``` import tensorflow as tf import numpy as np # set up the model simple_lstm_model = tf.keras.models.Sequential([ tf.keras.layers.LSTM(8, return_sequences=True), tf.keras.layers.Dense(2) ]) simple_lstm_model.compile(optimizer='adam', loss='mae', sample_weight_mode='temporal') n_sample = 2 seq_len = 10 n_feat = 5 n_out = 2 # random in/out x = np.random.randn(n_sample, seq_len, n_feat) y_true = np.random.randn(n_sample, seq_len, n_out) # set the initial mask as all ones (everything counts equally) mask = np.ones([n_sample, seq_len, n_out]) # set the mask so that in the 0th sample, in the 3-8th time step # the 1th variable is not counted in the loss function mask[0, 3:8, 1] = 0 simple_lstm_model.fit(x, y_true, sample_weight=mask) ``` The error: ``` ValueError: Found a sample_weight array with shape (2, 10, 2). In order to use timestep-wise sample weighting, you should pass a 2D sample_weight array. ``` Any ideas? 
I must not understand what `sample_weights` do because to me it only makes sense if the `sample_weight` array has the same dimensions as the output. I could write a custom loss function and handle the masking manually, but it seems like there should be a more general or built in solution.
## 1. `sample_weights` Yes, you understand it incorrectly. In this case you have `2` samples, `10` timesteps with `5` features each. You could pass `2D` tensor like this so each timestep for each sample will contribute differently to total loss, all features are equally weighted (as is usually the case). **That's not what you are after at all**. You would like to mask certain loss values after their calculation so those do not contribute. ## 2. Custom loss One possible solution is to implement your own loss function which multiplies loss tensor by mask before taking `mean` or `sum`. Basically, you pass `mask` and `tensor` concatenated together somehow and split it within function for use. This is sufficient: ``` def my_loss_function(y_true_mask, y_pred): # Recover y and mask y_true, mask = tf.split(y_true_mask, 2) # You could user reduce_sum or other combinations return tf.math.reduce_mean(tf.math.abs(y_true - y_pred) * mask) ``` Now your code (no weighting as it's not needed): ``` simple_lstm_model = tf.keras.models.Sequential( [tf.keras.layers.LSTM(8, return_sequences=True), tf.keras.layers.Dense(2)] ) simple_lstm_model.compile(optimizer="adam", loss=my_loss_function) n_sample = 2 seq_len = 10 n_feat = 5 n_out = 2 x = np.random.randn(n_sample, seq_len, n_feat) y_true = np.random.randn(n_sample, seq_len, n_out) mask = np.ones([n_sample, seq_len, n_out]) mask[0, 3:8, 1] = 0 # Stack y and mask together y_true_mask = np.stack([y_true, mask]) simple_lstm_model.fit(x, y_true_mask) ``` And so it works. You could also stack the values in some other way, but I hope you get the feel of how one could do it. ## 3. Masking outputs Please notice above introduces a few problems. If you have a lot of zeroes and take `mean` you might get a really small loss value and inhibit learning. On the other hand, if you go with `sum` it might explode.
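The masking arithmetic itself is worth checking in isolation. A pure-Python sketch over flat lists (the TF equivalent would replace `sum` with `tf.math.reduce_sum`); note this variant divides by the number of unmasked entries rather than the total, which sidesteps the shrinking-mean caveat from point 3:

```python
def masked_mae(y_true, y_pred, mask):
    """Mean absolute error over unmasked entries only. Dividing by the
    count of unmasked entries (not the total) keeps masked positions
    from diluting the loss."""
    total = sum(abs(t - p) * m for t, p, m in zip(y_true, y_pred, mask))
    return total / sum(mask)
```

With three of four entries unmasked and absolute errors 1, 3, 4 the result is 8/3, whereas a plain mean over all four entries would report 2, silently shrinking the loss as masks grow.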
GLKit vs. Metal perspective matrix difference I'm reading a Metal [tutorial](https://www.raywenderlich.com/137398/ios-metal-tutorial-swift-part-5-switching-metalkit) on raywenderlich.com, where it introduces a pure Swift float4x4 [helper class](https://koenig-media.raywenderlich.com/uploads/2016/09/Metal_5_swift3_resources.zip). 99% it's just wrapper around GLKit functions, except one function which really puzzles me: ``` static func makePerspectiveViewAngle(_ fovyRadians: Float, aspectRatio: Float, nearZ: Float, farZ: Float) -> float4x4 { var q = unsafeBitCast(GLKMatrix4MakePerspective(fovyRadians, aspectRatio, nearZ, farZ), to: float4x4.self) let zs = farZ / (nearZ - farZ) q[2][2] = zs q[3][2] = zs * nearZ return q } ``` Why does it need to change `q[2][2]` and `q[3][2]`. Is this some incompatibility between Metal's and GLKit's coordinate system? Is this a particular choice this tutorial made? If not, are there any other incompatibilities between GLKit and Metal mathematics? --- **Update**: I found a nice illustration about Metal's clip space coordinate system from the WWDC 2016 Session: Adopting Metal I. [![clip-space](https://i.stack.imgur.com/RsPOX.jpg)](https://i.stack.imgur.com/RsPOX.jpg)
Quoting from [this forum question](https://forums.raywenderlich.com/t/ios-metal-tutorial-with-swift-part-5-switching-to-metalkit/19283) > > OpenGL uses different clip-space coordinates than Metal (in GL, z goes > from -1 to 1, while in Metal z goes from 0 to 1), so using > GLKMatrix4MakePerspective doesn't give you a matrix that properly > transforms from eye space to clip space. Instead, it transforms half > of the viewing volume behind the eye, causing sometimes-subtle > clipping and culling issues. You can fix up the matrix you get back > from GLK by setting the matrix elements that are relevant to depth by > adding the following code to makePerspectiveViewAngle: > > > let zs = farZ / (nearZ - farZ) > > > q[2][2] = zs > > > q[3][2] = zs \* nearZ > > >
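You can verify numerically that the two patched entries produce Metal's 0..1 depth range. The sketch below models only the z and w rows of the matrix, with `zs = farZ / (nearZ - farZ)` exactly as in the tutorial code and the usual `w' = -z_eye` of a right-handed perspective projection (an assumption of the sketch, not shown in the snippet above):

```python
def metal_ndc_z(z_eye: float, near: float, far: float) -> float:
    """Normalized-device z after the perspective divide, using the
    patched entries m[2][2] = zs and m[3][2] = zs * near."""
    zs = far / (near - far)
    z_clip = zs * z_eye + zs * near   # m[2][2]*z + m[3][2]*w, with w = 1
    w_clip = -z_eye                   # standard perspective w row
    return z_clip / w_clip
```

Points on the near plane map to 0 and points on the far plane map to 1, matching the clip-space diagram above, whereas GLKit's unpatched matrix targets OpenGL's -1..1 range.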
De-encode URL parameters I am talking to a server that used to send me HTTP strings like this: ``` /path/to/my/handler/?action-query&id=112&type=vca&info=ch=0&type=event&ev16[sts=begin (...) ``` So the "info" GET parameter included "=" and "&" characters. It was rather unorthodox but nevertheless we wrote a parser for it. However, recently they have decided to encode part of it, so now the string looks like this.. ``` /path/to/my/handler/?action=query&id=112&type=vca&info=ch%3D0%26type%3Devent%26ev46[sts%3Dbegin (...) ``` This breaks our parser, which expects a string like the first one. Can I somehow "de-encode" the string, so that I can use the old code (so that it's not broken as we re-write the parser)? As per answer below, we can use urllib.unquote() to clean the string up. However, we are relying on request.GET, which gets set up based on the first string. Is it possible to reconstruct the GET object based on the new converted string, or somehow force it to re-evaluate?
I suspect what you want is the `unquote` function from the `urllib` module. ``` >>> s = '/path/to/my/handler/?action=query&id=112&type=vca&info=ch%3D0%26type%3Devent%26ev46[sts%3Dbegin' >>> import urllib >>> urllib.unquote(s) '/path/to/my/handler/?action=query&id=112&type=vca&info=ch=0&type=event&ev46[sts=begin' ``` Edit: I'm not very familiar with Django, but the [Request and response object section of their docs](https://docs.djangoproject.com/en/dev/ref/request-response/) states the following: > > QueryDict instances are immutable, unless you create a copy() of them. That means you can't change attributes of request.POST and request.GET directly. > > > Based on my limited reading of those docs, you might be able to apply the `unquote()` function to the `HttpRequest.body` attribute and build a new `QueryDict` out of the results (and possibly use it to update your current one if necessary).
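A note for readers on Python 3: `urllib.unquote` moved to `urllib.parse.unquote`. The stdlib can also split the outer query string for you: `parse_qs` percent-decodes each value only after splitting on `&` and `=`, so the nested separators inside `info` come out as literal characters:

```python
from urllib.parse import parse_qs, unquote, urlsplit

s = ('/path/to/my/handler/?action=query&id=112&type=vca'
     '&info=ch%3D0%26type%3Devent%26ev46[sts%3Dbegin')

decoded = unquote(s)                           # whole-string equivalent of urllib.unquote
info = parse_qs(urlsplit(s).query)['info'][0]  # just the one nested parameter, decoded
```

Either form hands the old parser the same string it saw before the server started encoding.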
Spring - wrong method called
I am providing the following definition in my Spring config file:

```
<bean id="path" class="java.nio.file.Paths" factory-method="get">
    <constructor-arg>
        <value type="java.lang.String">${limits.path}</value>
    </constructor-arg>
</bean>
```

`Paths.get` can be called with either a String parameter or a URI parameter. Spring, in the above example, resolves it to URI, which is wrong... Any idea why?
Try with a specific index, it might have a problem since it has a varargs in the method signature. From the oracle [docs](http://docs.oracle.com/javase/7/docs/api/java/nio/file/Paths.html) : ``` get(String first, String... more) ``` Try Something like that : ``` <bean id="path" class="java.nio.file.Paths" factory-method="get"> <constructor-arg index="0" type="java.lang.String" value="${limits.path}" /> </bean> ``` If it doesn't work, try with an empty list : ``` <bean id="path" class="java.nio.file.Paths" factory-method="get"> <constructor-arg index="0"> <value type="java.lang.String">${limits.path}</value> </constructor-arg> <constructor-arg index="1"> <list></list> </constructor-arg> </bean> ```
Reading Cookie when using Curl in php, how to?
I am connecting to an API service which authenticates users using cookies. I make these two statements from the command prompt and it works.

```
curl -d "u=username&p=password" -c ~/cookiejar https://domain/login
curl -b https://domain/getData
```

Now I want to make two equivalent php files **login.php** and **get_data.php** using curl. I am using

```
curl_setopt ($ch, CURLOPT_COOKIEJAR, $ckfile);
```

in `login.php` and

```
curl_setopt($ch, CURLOPT_COOKIEFILE, $ckfile);
```

in `get_data.php`

It is not working. The cookie file is getting created, but the second curl call is not reading it. Is this the right way to do it? Do I have to read the cookie file separately and set the header `Cookie`? Any help would be appreciated. Thanks.
This will do the trick. I run it against Google.com as an example: ``` <?PHP // open a site with cookies $ch = curl_init(); curl_setopt($ch, CURLOPT_URL, "http://www.google.com"); curl_setopt($ch, CURLOPT_USERAGENT, 'Mozilla/5.0 (Windows NT 6.1; rv:11.0) Gecko/20100101 Firefox/11.0'); curl_setopt($ch, CURLOPT_HEADER ,1); curl_setopt($ch, CURLOPT_RETURNTRANSFER ,1); curl_setopt($ch, CURLOPT_FOLLOWLOCATION ,1); curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, 0); $content = curl_exec($ch); // get cookies $cookies = array(); preg_match_all('/Set-Cookie:(?<cookie>\s{0,}.*)$/im', $content, $cookies); print_r($cookies['cookie']); // show harvested cookies // basic parsing of cookie strings (just an example) $cookieParts = array(); preg_match_all('/Set-Cookie:\s{0,}(?P<name>[^=]*)=(?P<value>[^;]*).*?expires=(?P<expires>[^;]*).*?path=(?P<path>[^;]*).*?domain=(?P<domain>[^\s;]*).*?$/im', $content, $cookieParts); print_r($cookieParts); ?> ``` See [other examples](https://stackoverflow.com/questions/5574530/how-to-parse-a-cookie-string) for how to effectively parse such as string.
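As a side note, if part of your pipeline is in Python, the standard library already knows how to parse `Set-Cookie`-style strings, which is less brittle than a hand-rolled regex like the one above. A sketch with a made-up header value:

```python
from http.cookies import SimpleCookie

# One Set-Cookie header value (made up for the example)
header = "session=abc123; path=/; domain=.example.com"

cookie = SimpleCookie()
cookie.load(header)          # parses name, value, and attributes

morsel = cookie["session"]   # a Morsel holding the value plus its attributes
```

Each `Morsel` exposes the cookie value via `.value` and attributes such as `path` and `domain` via item access, so no custom parsing code is needed on that side.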
Looping within a Looping creates duplicates in DataTable C#
I'm writing a loop within a loop to write to a data table, but it produces duplicates. I'm not sure what the issue might be here. I'm getting duplicate IDs from this API as a result of this loop

```
var users = api.Users.GetAllUsers();

using (DataTable table = new DataTable())
{
    var properties = users.Users[0].GetType().GetProperties();
    for (int i = 0; i < properties.Count(); i++)
    {
        table.Columns.Add(properties[i].Name, typeof(String));
    }

    foreach (var user in users.Users)
    {
        DataRow newRow = table.NewRow();
        for (int j = 0; j < properties.Count(); j++)
        {
            var colName = properties[j].Name;
            newRow[colName] = user.GetType().GetProperty(colName).GetValue(user, null);
        }
        table.Rows.Add(newRow);

        foreach (DataRow row in table.Rows)
        {
            Console.WriteLine(row["id"]);
        }
    }
}
```
It seems to be just a display issue. You have a nested loop: you loop over all rows in the table while you're still adding new rows, so after each insert you output the whole table again. At the beginning you see only the one newly created row, but by the second user you're already seeing the first row again. This fixes it: move the inner loop after the main loop:

```
foreach (var user in users.Users)
{
    DataRow newRow = table.NewRow();
    for (int j = 0; j < properties.Count(); j++)
    {
        var colName = properties[j].Name;
        newRow[colName] = user.GetType().GetProperty(colName).GetValue(user, null);
    }
    table.Rows.Add(newRow);
}

foreach (DataRow row in table.Rows)
{
    Console.WriteLine(row["id"]);
}
```

As an aside, you don't need the `using`-statement for a `DataTable` or `DataSet`: it does not use unmanaged resources, and the `using` would otherwise prevent you from further processing it or returning it from the method. This is an exception, though; in general it is best practice to use the `using`-statement for everything implementing `IDisposable`.
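The underlying effect is language-independent: printing the whole collection from inside the loop that fills it re-emits every earlier element on each pass. A tiny Python sketch makes the difference concrete:

```python
def print_inside(ids):
    """Mimics the original code: dump the whole table after every insert."""
    table, printed = [], []
    for i in ids:
        table.append(i)
        for row in table:      # inner loop sees ALL rows added so far
            printed.append(row)
    return printed

def print_after(ids):
    """Mimics the fix: fill the table first, then dump it once."""
    table = list(ids)
    return [row for row in table]
```

For three inserted IDs the first variant emits six lines (1, 1, 2, 1, 2, 3), the second exactly three, which matches the "duplicate IDs" symptom in the question.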
Unclosed / misnested HTML tags extend past their parent I'm running into some interesting functionality when HTML tags aren't closed. Sometimes the browser inserts extra opening and closing tags to compensate, and other times it just inserts a closing tag. This is best explained through examples: **With the `<sup>` tag:** ``` first text node <div> This is a parent div <sup>superscript tag starts IN parent</div> text OUTSIDE node of parent ``` **With the `<s>` tag:** ``` first text node <div> This is a parent div <s>strikethrough tag starts IN parent</div> text OUTSIDE node of parent ``` As you can see in the first example the browser automatically closes the `<sup>` tag before its parent closes. However, in the second example the browser seems to close the `<s>` tag before the end of its parent *and then inserts* ***another*** starting `<s>` after the parent. I've looked through the [`<s>`](https://www.w3.org/TR/html5/text-level-semantics.html#the-s-element) and the [`<sup>`](https://www.w3.org/TR/html5/text-level-semantics.html#the-sub-and-sup-elements) specs - I can't seem to find anything specific to how browsers interpret and deal with unclosed tags.. At least nothing that explains this functionality. The reason I'm wanting to know this is for a live markdown parser I'm using - users may not finish their tags before it parses their source. I'd like to know how the browser deals with these things, so I can code for that use-case. At the present time the browser handles closing different tags in different ways (as you can see by my examples). Does anyone know *why* the browser does this? Or at least know a list of elements that act the same?
Thanks to **@Ankith Amtange** I found the [explanation](https://www.w3.org/TR/html5/syntax.html#an-introduction-to-error-handling-and-strange-cases-in-the-parser) of what happens. I'll write it out here for future readers. The `<s>` tag extends past its parent because it is a **formatting element**. The `<sup>` tag is automatically closed because the browser expected a closing `</sup>` tag before the end of the parent element. The `HTML` parser treats elements differently in its stack, which fall into the following categories ([source](https://www.w3.org/TR/html5/syntax.html#special)): > > **Special elements** > > > - The following elements have varying levels of special parsing rules: `HTML`'s `address`, `applet`, `area`, `article`, `aside`, `base`, `basefont`, `bgsound`, `blockquote`, `body`, `br`, `button`, `caption`, `center`, `col`, `colgroup`, `dd`, `details`, `dir`, `div`, `dl`, `dt`, `embed`, `fieldset`, `figcaption`, `figure`, `footer`, `form`, `frame`, `frameset`, `h1`, `h2`, `h3`, `h4`, `h5`, `h6`, `head`, `header`, `hgroup`, `hr`, `html`, `iframe`, `img`, `input`, `isindex`, `li`, `link`, `listing`, `main`, `marquee`, `meta`, `nav`, `noembed`, `noframes`, `noscript`, `object`, `ol`, `p`, `param`, `plaintext`, `pre`, `script`, `section`, `select`, `source`, `style`, `summary`, `table`, `tbody`, `td`, `template`, `textarea`, `tfoot`, `th`, `thead`, `title`, `tr`, `track`, `ul`, `wbr`, and `xmp`; `MathML`'s `mi`, `mo`, `mn`, `ms`, `mtext`, and `annotation-xml`; and `SVG`'s `foreignObject`, `desc`, and `title`. > > > **Formatting elements** > > > - The following `HTML` elements are those that end up in the list of active formatting elements: `a`, `b`, `big`, `code`, `em`, `font`, `i`, `nobr`, `s`, `small`, `strike`, `strong`, `tt`, and `u`. > > > **Ordinary elements** > > > - All other elements found while parsing an `HTML` document. 
> > > ## Explanation (from linked spec): The most-often discussed example of erroneous markup is as follows: ``` <p>1<b>2<i>3</b>4</i>5</p> ``` The parsing of this markup is straightforward up to the "3". At this point, the DOM looks like this: ``` ─html ├──head └──body └──p ├──"1" └──b ├──"2" └──i └──"3" ``` Here, the stack of open elements has five elements on it: `html`, `body`, `p`, `b`, and `i`. The list of active formatting elements just has two: `b` and `i`. The insertion mode is "in body". Upon receiving the end tag token with the tag name "`b`", the "adoption agency algorithm" is invoked. This is a simple case, in that the formatting element is the `b` element, and ***there is no furthest block***. Thus, the stack of open elements ends up with just three elements: `html`, `body`, and `p`, while the list of active formatting elements has just one: `i`. The `DOM` tree is unmodified at this point. The next token is a character ("4"), triggers the reconstruction of the active formatting elements, in this case just the `i` element. A new `i` element is thus created for the "4" Text node. After the end tag token for the "`i`" is also received, and the "5" Text node is inserted, the `DOM` looks as follows: ``` ─html ├──head └──body └──p ├──"1" ├──b │ ├──"2" │ └──i │ └──"3" ├──i │ └──"4" └──"5" ```
How to display debug info or console.log equivalent in Lua

I am creating many games using Lua and LOVE2D, but whenever I implement a new function and want to test it out, or simply want to know the value of a variable in Lua, I either display it on the game screen or just hope that it works.

Now my question is... IS THERE A WAY TO DISPLAY SOME INFO, such as A VARIABLE VALUE or something else, in the terminal or somewhere else? Just like `console.log` in JavaScript, which displays some content in the JavaScript console in the browser.

So, is there a way to do this in Lua, using LOVE2D? I am using a Mac, so I have a `terminal` and not a command prompt. Is there a way to display some content there? Anywhere else would also be fine; I just need to see whether those values are as expected or not.
Use a `conf.lua` file to enable the console, then you should be able to use a standard `print()`. You can read the wiki entry [here](https://love2d.org/wiki/Config_Files).

**Note: You have to run Lua and Love2D via the terminal for this to work. Running Lua and Love2D like this is required for the print statements to show:**

```
/Applications/love.app/Contents/MacOS/love "/Users/myuser/Desktop/love2d-test-proj"
```

You just need to add a `conf.lua` file in the same location as your `main.lua`. The file may be as simple as this:

```
function love.conf(t)
    t.console = true
end
```

But feel free to copy the whole configuration file from the above link and edit what you need. I can't be completely sure about this, because I have no access to a Mac, but the console is disabled by default, and even on Windows no prints are shown until you turn it on.

**Alternatively**

You can also display debug info in the game itself, like some games do. What I like to do is add something like `debugVariable = {}` for logging events that happen in each loop and `debugPermanent = {}` for events that happen rarely.
Possibly add convenience functions for writing to the variables: ``` function debugAddVariable(str) table.insert(debugVariable, str) end --..and similarly for debugPermanent ``` Now a function to draw our debug info: ``` function debugDraw() love.graphics.push() --remember graphics state love.graphics.origin() --clear any previous transforms love.graphics.setColor(--[[select color for debug info]]) love.graphics.setFont(--[[select font for debug info]]) for i, v in ipairs(debugPermanent) do love.graphics.print(v) love.graphics.translate(0, --[[fontHeight]]) end for i, v in ipairs(debugVariable) do love.graphics.print(v) love.graphics.translate(0, --[[fontHeight]]) end debugVariable = {} --clear debugVariable to prepare it for the next loop love.graphics.pop() --recall graphics state end ``` And we just call this draw function at the end of our `love.draw()` and the texts should appear. Obviously, this method can be refined further and further almost infinitely, displaying specific variables, and adding graphs for some other variables to clarify the information you want to show, but that's kind of outside of the scope of the question. **Lastly** Feel free to check [here](https://love2d.org/wiki/Category:Libraries) for debug libraries submitted by users.
Plot datetime.date / time series in a pandas dataframe

I created a pandas dataframe from some value counts on particular calendar dates. Here is how I did it:

```
time_series = pd.DataFrame(df['Operation Date'].value_counts().reset_index())
time_series.columns = ['date', 'count']
```

Basically, it is two columns: the first, "date", is a column with `datetime.date` objects, and the second, "count", holds simple integer values.

Now, I'd like to plot a scatter or a KDE to represent how the value changes over the calendar days. But when I try:

```
time_series.plot(kind='kde')
plt.show()
```

I get a plot where the x-axis runs from -50 to 150, as if it is parsing the `datetime.date` objects as integers somehow. Also, it is yielding two identical plots rather than just one. Any idea how I can plot them and see the calendar days along the x-axis?
you sure you got datetime? i just tried this and it worked fine: ``` df = date count 7 2012-06-11 16:51:32 1.0 3 2012-09-28 08:05:14 12.0 19 2012-10-01 18:01:47 4.0 2 2012-10-03 15:18:23 29.0 6 2012-12-22 19:50:43 4.0 1 2013-02-19 19:54:03 28.0 9 2013-02-28 16:08:40 17.0 12 2013-03-12 08:42:55 6.0 4 2013-04-04 05:27:27 6.0 17 2013-04-18 09:40:37 29.0 11 2013-05-17 16:34:51 22.0 5 2013-07-07 14:32:59 16.0 14 2013-10-22 06:56:29 13.0 13 2014-01-16 23:08:46 20.0 15 2014-02-25 00:49:26 10.0 18 2014-03-19 15:58:38 25.0 0 2014-03-31 05:53:28 16.0 16 2014-04-01 09:59:32 27.0 8 2014-04-27 12:07:41 17.0 10 2014-09-20 04:42:39 21.0 df = df.sort_values('date', ascending=True) plt.plot(df['date'], df['count']) plt.xticks(rotation='vertical') ``` [![enter image description here](https://i.stack.imgur.com/DOr6I.png)](https://i.stack.imgur.com/DOr6I.png) EDIT: if you want a scatter plot you can: ``` plt.plot(df['date'], df['count'], '*') plt.xticks(rotation='vertical') ``` [![enter image description here](https://i.stack.imgur.com/zHLZG.png)](https://i.stack.imgur.com/zHLZG.png)
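If you specifically want the KDE from the question, note that `datetime.date` values have to be mapped to plain numbers first. A minimal sketch of that preparation step (with made-up data standing in for the question's value counts, so the column names are assumptions):

```python
import pandas as pd

# Hypothetical stand-in for the question's value-counts dataframe.
time_series = pd.DataFrame({
    "date": pd.to_datetime(["2014-01-03", "2014-01-01", "2014-01-02"]),
    "count": [2, 3, 1],
})

# Sort chronologically so line/scatter plots read left to right.
time_series = time_series.sort_values("date")

# A KDE needs numeric input: map each date to its ordinal day number.
numeric_days = time_series["date"].map(pd.Timestamp.toordinal)
print(list(numeric_days))
```

From there, `plt.plot(time_series['date'], time_series['count'], '*')` gives the scatter shown above, and the ordinal column can be fed to any KDE routine that expects plain numbers.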
Deserialize Avro messages into specific datum using KafkaAvroDecoder I'm reading from a Kafka topic, which contains Avro messages serialized using the `KafkaAvroEncoder` (which automatically registers the schemas with the topics). I'm using the maven-avro-plugin to generate plain Java classes, which I'd like to use upon reading. The `KafkaAvroDecoder` only supports deserializing into `GenericData.Record` types, which (in my opinion) misses the whole point of having a statically typed language. My deserialization code currently looks like this: ``` SpecificDatumReader<event> reader = new SpecificDatumReader<>( event.getClassSchema() // event is my class generated from the schema ); byte[] in = ...; // my input bytes; ByteBuffer stuff = ByteBuffer.wrap(in); // the KafkaAvroEncoder puts a magic byte and the ID of the schema (as stored // in the schema-registry) before the serialized message if (stuff.get() != 0x0) { return; } int id = stuff.getInt(); // lets just ignore those special bytes int length = stuff.limit() - 4 - 1; int start = stuff.position() + stuff.arrayOffset(); Decoder decoder = DecoderFactory.get().binaryDecoder( stuff.array(), start, length, null ); try { event ev = reader.read(null, decoder); } catch (IOException e) { e.printStackTrace(); } ``` I found my solution cumbersome, so I'd like to know if there is a simpler solution to do this.
Thanks to the comment I was able to find the answer. The secret was to instantiate `KafkaAvroDecoder` with a `Properties` object specifying the use of the specific Avro reader, that is:

```
Properties props = new Properties();
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "...");
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
          io.confluent.kafka.serializers.KafkaAvroSerializer.class);
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
          io.confluent.kafka.serializers.KafkaAvroSerializer.class);
props.put(AbstractKafkaAvroSerDeConfig.SCHEMA_REGISTRY_URL_CONFIG, "...");
props.put(KafkaAvroDeserializerConfig.SPECIFIC_AVRO_READER_CONFIG, true);

VerifiableProperties vProps = new VerifiableProperties(props);
KafkaAvroDecoder decoder = new KafkaAvroDecoder(vProps);

MyLittleData data = (MyLittleData) decoder.fromBytes(input);
```

The same configuration applies when using the `KafkaConsumer<K, V>` class directly (I'm consuming from Kafka in Storm using the `KafkaSpout` from the storm-kafka project, which uses the `SimpleConsumer`, so I have to manually deserialize the messages. For the courageous there is the storm-kafka-client project, which does this automatically by using the new style consumer).
Extract directory entry from archive

Is it possible to extract a directory entry from an archive? How can I do it? Code to extract a file.txt from the archive:

```
let fileManager = FileManager()
let currentWorkingPath = fileManager.currentDirectoryPath
var archiveURL = URL(fileURLWithPath: currentWorkingPath)
archiveURL.appendPathComponent("archive.zip")
guard let archive = Archive(url: archiveURL, accessMode: .read) else {
    return
}
guard let entry = archive["file.txt"] else {
    return
}
var destinationURL = URL(fileURLWithPath: currentWorkingPath)
destinationURL.appendPathComponent("out.txt")
do {
    try archive.extract(entry, to: destinationURL)
} catch {
    print("Extracting entry from archive failed with error:\(error)")
}
```

If I use a subdirectory path of the zip file as the entry, can I extract that directory and all its contents?
ZIP archives don't store parent/child relationships for entries. An archive is a flat list of entries, that have a `path` property. Because archives are organized as a list - and not as a tree - there is no efficient way to obtain a subtree. In ZIP Foundation, `Archive` conforms to `Sequence`. So you can use `filter` to find all entries with a specific path prefix. e.g. ``` let entries = archive.filter { $0.path.starts(with: "Test/") } ``` You can then iterate over all qualifying entries and use the `extract` code from your question. There are some edge cases to consider though: - You will have to create (intermediate) directories yourself (e.g. with `FileManager.createDirectory(...)` during extraction. - ZIP archives *don't* require dedicated `.directory` entries. The best approach is, to create the directory hierarchy "on-demand". (E.g. if you encounter an entry with a parent path that doesn't exist yet, you create it) - There is no guaranteed order of entries. So you can't make any assumptions on already existing paths during extraction.
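The same flat-list layout can be observed with any ZIP tooling. As an illustration (using Python's standard `zipfile` module rather than ZIP Foundation), notice that the "directory" is nothing more than file entries sharing a path prefix:

```python
import io
import zipfile

# Build a small archive in memory. There is no dedicated entry for the
# "Test/" directory itself -- just file entries whose paths share the prefix.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("Test/a.txt", "alpha")
    zf.writestr("Test/sub/b.txt", "beta")
    zf.writestr("other.txt", "gamma")

with zipfile.ZipFile(buf) as zf:
    # Recover the "subtree" by filtering on the path prefix,
    # analogous to the Swift `filter` call above.
    subtree = [name for name in zf.namelist() if name.startswith("Test/")]

print(subtree)  # ['Test/a.txt', 'Test/sub/b.txt']
```

Extracting each filtered entry while creating intermediate directories on demand then rebuilds the subtree on disk.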
How to use Spring's WebTestClient to check for a string in Kotlin?

I am trying to use the WebTestClient to check a Controller that returns a string, but for some reason I get an error. I use Kotlin, so I tried to apply the Java examples I have found to it, but I can't figure out how to do it right. What am I missing?

```
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
class HelloResourceIT {

    @Test
    fun shouldReturnGreeting(@Autowired webClient: WebTestClient) {
        webClient.get()
            .uri("/hello/Foo")
            .accept(MediaType.TEXT_PLAIN)
            .exchange()
            .expectStatus()
            .isOk()
            .expectBody(String::class.java)
            .isEqualTo<Nothing>("Hello Foo!")
    }
}
```

When I try to use `String` or `java.lang.String` instead of `Nothing` I get the error message:

> Type argument is not within its bounds.
> Expected: Nothing!
> Found: String!

When I use `Nothing` I get an NPE. There is already [Type interference issue with the WebFlux WebTestClient and Kotlin](https://stackoverflow.com/questions/45295818/type-interference-issue-with-the-webflux-webtestclient-and-kotlin), but it works with a specific type; String does not seem to work here. What am I missing?
It looks like you are not using the extension function that was identified as a work-around. To use it, try updating the last two lines of your test as follows: ``` webClient.get() .uri("/hello/Foo") .accept(MediaType.TEXT_PLAIN) .exchange() .expectStatus() .isOk() .expectBody<String>() .isEqualTo("Hello Foo!") ``` which appears to work correctly. For reference: - [Type interference issue with the WebFlux WebTestClient and Kotlin](https://stackoverflow.com/questions/45295818/type-interference-issue-with-the-webflux-webtestclient-and-kotlin) - <https://github.com/sdeleuze/webflux-kotlin-web-tests/blob/master/src/test/kotlin/com/example/ControllerWebFluxTest.kt>
What did Silverman (1981) mean by 'critical bandwidth'? In the selection of a bandwidth for a Kernel Density Estimator, critical bandwidth according to my understanding is: "For every integer k, where `1<k<n`, we can find the minimum width `h(k)` such that the kernel density estimate has at most k maxima. Silverman calls these `h(k)` values “critical widths.” I don't intuitively understand this concept. Any help would be appreciated. Thank you!
I hate animations in Web pages, but this question begs for an animated answer: [![Figure](https://i.stack.imgur.com/tvzsv.gif)](https://i.stack.imgur.com/tvzsv.gif) These are KDEs for a set of three values (near -2.5, 0.5, and 2.5). Their bandwidths continually vary, growing from small to large. Watch as three peaks become two and ultimately one. --- A KDE puts a pile of "probability" at each data point. As the bandwidth widens, the pile "slumps." When you start with tiny bandwidths, each data value contributes its own discrete pile. As the bandwidths grow, the piles slump and merge and accumulate on top of each other (the thick blue line), ultimately becoming one single pile. Along the way, the maxima change discontinuously from the starting value of $n$ (assuming the kernel has a single maximum, which is almost always the case) to $1.$ The critical width for $k$ maxima is the first (smallest) width that reduces the KDE to a curve with no more than $k$ maxima.
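To make the picture concrete, here is a small numerical sketch (my own illustration, using NumPy and the three points from the animation) that counts the maxima of a Gaussian KDE as the bandwidth grows. The critical width $h(k)$ is then simply the smallest $h$ at which this count first drops to $k$ or fewer:

```python
import numpy as np

data = np.array([-2.5, 0.5, 2.5])      # the three points from the animation
grid = np.linspace(-8.0, 8.0, 2001)

def num_maxima(h):
    """Count local maxima of a Gaussian KDE with bandwidth h."""
    kde = np.exp(-0.5 * ((grid[:, None] - data[None, :]) / h) ** 2).sum(axis=1)
    interior = kde[1:-1]
    return int(np.sum((interior > kde[:-2]) & (interior > kde[2:])))

# As the bandwidth grows the piles slump together and modes merge,
# so the number of maxima can only stay the same or decrease.
bandwidths = [0.3, 1.2, 3.0]
counts = [num_maxima(h) for h in bandwidths]
print(counts)
```

Scanning $h$ upward (with a fine grid or bisection) and recording where the count first reaches each $k$ yields Silverman's critical widths; his test then asks whether the critical width for the hypothesised number of modes is suspiciously large.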
What is the difference between = and == in Hive

I accidentally used `==` instead of `=` in one of my HQL queries, but to my surprise it didn't throw any error and worked just as `=` would have. My question is: why do we have a `==` operator in HQL, and what is the exact difference between the two?
The `=` and `==` operators have the same functionality. They are implemented by the exact same class.

```
system.registerGenericUDF("=", GenericUDFOPEqual.class);
system.registerGenericUDF("==", GenericUDFOPEqual.class);
```

[FunctionRegistry.java](https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/exec/FunctionRegistry.java)

---

```
+----------+---------------------+----------------------------------------------------------------+
| Operator | Operand types       | Description                                                    |
+----------+---------------------+----------------------------------------------------------------+
| A = B    | All primitive types | TRUE if expression A is equal to expression B otherwise FALSE. |
| A == B   | All primitive types | Synonym for the = operator.                                    |
+----------+---------------------+----------------------------------------------------------------+
```

[LanguageManual UDF](https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF)
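Hive isn't unique here: SQLite, for instance, also accepts `==` as a synonym for `=`. As a quick analogy (SQLite, not Hive), this can be demonstrated with Python's built-in `sqlite3`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(1,), (2,), (3,)])

# Both spellings of the equality operator return the same rows.
single = conn.execute("SELECT x FROM t WHERE x = 2").fetchall()
double = conn.execute("SELECT x FROM t WHERE x == 2").fetchall()
print(single, double)  # [(2,)] [(2,)]
```

In both dialects the two spellings are parsed to the same equality operation, so there is no behavioural difference to worry about.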
How to port forward Google Compute Engine Instance?

I've set up a VPS using the Google Compute Engine platform. In the instance, I've established a MongoDB database that's hosted locally on the default port 27017. I've also set up a REST-API-based NodeJS server with Express listening for connections on port 8080.

Right now, I can only access the NodeJS site internally. How do I expose the VPS port 8080 on the external IP address so that I can access the API from anywhere? I tried following along with an answer to this post: [Enable Access Google Compute Engine Instance Via HTTP Port](https://stackoverflow.com/questions/22453803/enable-access-google-compute-engine-instance-via-http-port). But that did not solve my issue.
## [Default Firewall rules](https://cloud.google.com/compute/docs/vpc/firewalls#default_firewall_rules) Google Compute Engine firewall by default blocks all ingress traffic (i.e. incoming network traffic) to your Virtual Machines. If your VM is created on the default network, few ports like 22 (ssh), 3389 (RDP) are allowed. The [default firewall rules are documented here](https://cloud.google.com/compute/docs/vpc/firewalls#default_firewall_rules). ## Opening ports for ingress The [ingress firewall rules are described here](https://cloud.google.com/compute/docs/vpc/firewalls#ingress_cases). The recommended approach is to create a firewall rule which allows port `8080` to VMs containing a specific tag you choose. Then associate this tag on the VMs you would like to allow ingress `8080`. If you use `gcloud`, you can do that using the following steps: ``` # Create a new firewall rule that allows INGRESS tcp:8080 with VMs containing tag 'allow-tcp-8080' gcloud compute firewall-rules create rule-allow-tcp-8080 --source-ranges 0.0.0.0/0 --target-tags allow-tcp-8080 --allow tcp:8080 # Add the 'allow-tcp-8080' tag to a VM named VM_NAME gcloud compute instances add-tags VM_NAME --tags allow-tcp-8080 # If you want to list all the GCE firewall rules gcloud compute firewall-rules list ``` Here is [another stack overflow answer](https://stackoverflow.com/a/44645115/380757) which walks you through how to allow ingress traffic on specific ports to your VM using Cloud Console Web UI (in addition to `gcloud`). ## [Static IP addresses](https://cloud.google.com/compute/docs/configure-ip-addresses#reserve_new_static) The answer you linked only describes how to allocate a Static IP address and assign it to your VM. This step is independent of the firewall rules and hence can be used in combination if you would like to use static IP addresses.
How do I get git to use the cli rather than some GUI application when asking for GPG password?

Whenever I try to create a signed git commit, I need to enter my GPG key's passphrase. It spawns some GUI application to receive the password. It looked like the application was `seahorse`, so I uninstalled it, but git still uses some GUI app. Polybar doesn't report the application name, and its title is just `[process]@MYPC`. How do I get git to use the command line / pinentry?

Versions:

- gpg: 2.2.19
- git: 2.25.1
- pinentry: 1.1.0
What's in your `~/.gnupg/gpg-agent.conf`? I have `pinentry-program /usr/bin/pinentry-curses` in mine, and everything which uses `gpg` will ask for my pass-phrase in the terminal. NOTE: You will need to restart your gpg-agent (or send it a HUP signal) if you change its config. Just running `gpgconf --kill gpg-agent` will do, `gpg` will restart it when needed. ALSO NOTE: the environment variable GPG\_TTY needs to be your **current** tty (i.e. the tty you're currently running `gpg` in - or whatever calls gpg, such as `mutt`, `pass`, `git`, etc). So add the following to your ~/.bashrc (or whatever's appropriate for your shell): ``` GPG_TTY=$(tty) export GPG_TTY ``` See `man gpg-agent` for details.
Generating colour image gradient using numpy I wrote a function which generates 2 coloured image blocks: ``` def generate_block(): x = np.ones((50, 50, 3)) x[:,:,0:3] = np.random.uniform(0, 1, (3,)) show_image(x) y = np.ones((50, 50, 3)) y[:, :, 0:3] = np.random.uniform(0, 1, (3,)) show_image(y) ``` I would then like to combine those two colours to form a gradient, ie 1 image going from one colour to the other. I'm not sure how to continue, any advice? Using np.linspace() I can form a 1D array of steps but what then?
Is this what you are looking for?

```
def generate_block():
    x = np.ones((50, 50, 3))
    x[:, :, 0:3] = np.random.uniform(0, 1, (3,))
    plt.imshow(x)
    plt.figure()
    y = np.ones((50, 50, 3))
    y[:, :, 0:3] = np.random.uniform(0, 1, (3,))
    plt.imshow(y)
    plt.figure()
    c = np.linspace(0, 1, 50)[:, None, None]
    gradient = x + (y - x) * c
    plt.imshow(gradient)
    return x, y, gradient
```

To use `np.linspace` as you suggested, I've used broadcasting, which is a very powerful tool in numpy; read more [here](https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html).

`c = np.linspace(0, 1, 50)` creates an array of shape `(50,)` with 50 numbers from 0 to 1, evenly spaced. Adding `[:, None, None]` makes this array 3D, of shape `(50, 1, 1)`. When using it in `(y - x) * c`, since `y - x` is `(50, 50, 3)`, broadcasting happens for the last 2 dimensions. `c` is treated as an array we'll call `d` of shape `(50, 50, 3)`, such that for `i` in `range(50)`, `d[i, :, :]` is an array of shape `(50, 3)` filled with `c[i]`.

So the first line of `gradient` is `x[0, :, :] + c[0] * (y[0, :, :] - x[0, :, :])`, which is just `x[0, :, :]`. The second line is `x[1, :, :] + c[1] * (y[1, :, :] - x[1, :, :])`, etc. The `i`th line is the barycenter of `x[i]` and `y[i]` with coefficients `1 - c[i]` and `c[i]`.

[![enter image description here](https://i.stack.imgur.com/Rx4gx.png)](https://i.stack.imgur.com/Rx4gx.png)

You can do column-wise variation with `[None, :, None]` in the definition of `c`.
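A quick, plotting-free sanity check of the broadcasting and of the interpolation endpoints (the first row should be exactly the first colour, the last row exactly the second):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.ones((50, 50, 3)) * rng.uniform(0, 1, (3,))   # first colour block
y = np.ones((50, 50, 3)) * rng.uniform(0, 1, (3,))   # second colour block

c = np.linspace(0, 1, 50)[:, None, None]   # shape (50, 1, 1)
gradient = x + (y - x) * c                 # broadcasts to (50, 50, 3)

print(gradient.shape)                      # (50, 50, 3)
print(np.allclose(gradient[0], x[0]))      # True, since c[0] == 0
print(np.allclose(gradient[-1], y[-1]))    # True, since c[-1] == 1
```

Checking the two endpoint rows this way confirms that each row `i` really is the blend `(1 - c[i]) * x[i] + c[i] * y[i]`.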
Retrieve servletContext reference in quartz scheduler

I am using Quartz Scheduler with my Spring 3.0 based application. I am successfully able to create new schedulers and they are working fine. I have seen this [reference](http://www.mkyong.com/spring/spring-quartz-scheduler-example/comment-page-1/#comment-81592). But I am not able to retrieve the servletContext in my Quartz job file. Can anyone help me with how to retrieve the servletContext reference in the executeInternal() method?
I had a similar need. I sorted it out in a similar fashion to [the solution presented here.](https://stackoverflow.com/a/3216084/613495) In my servlet context listener I am setting the servlet context using the job data map object which then is set for a job: ``` @Override public void contextInitialized(ServletContextEvent sce) { try { //Create & start the scheduler. StdSchedulerFactory factory = new StdSchedulerFactory(); factory.initialize(sce.getServletContext().getResourceAsStream("/WEB-INF/my_quartz.properties")); scheduler = factory.getScheduler(); //pass the servlet context to the job JobDataMap jobDataMap = new JobDataMap(); jobDataMap.put("servletContext", sce.getServletContext()); // define the job and tie it to our job's class JobDetail job = newJob(ImageCheckJob.class).withIdentity("job1", "group1").usingJobData(jobDataMap).build(); // Trigger the job to run now, and then repeat every 3 seconds Trigger trigger = newTrigger().withIdentity("trigger1", "group1").startNow() .withSchedule(simpleSchedule().withIntervalInMilliseconds(3000L).repeatForever()).build(); // Tell quartz to schedule the job using our trigger scheduler.scheduleJob(job, trigger); // and start it off scheduler.start(); } catch (SchedulerException ex) { log.error(null, ex); } } ``` Then inside my job I am doing this: ``` @Override public void execute(JobExecutionContext context) throws JobExecutionException { ServletContext servletContext = (ServletContext) context.getMergedJobDataMap().get("servletContext"); //... } ``` **EDIT:** Also since you mention that you are using Spring [I found this link](http://forum.springsource.org/showthread.php?48920-Quartz-and-ServletContext), where in the last post a guy mentions to implement ServletContextAware. Personally, I would go with the JobDataMap, since that is its role.
Can I run Ubuntu with VMware?

I was looking to install this with or under VMware on Windows 7 64-bit. Is this possible, and where can I find instructions on how to do so? I have searched the site and cannot find anything for help on installing this with VMware. Also, I have VMware Workstation; do I need VMware Player?
Of course you can =) Actually trying Ubuntu in a virtualized environment is more than encouraged =) **Resources:** - [Info on how to install Ubuntu in VMware Player](http://www.howtogeek.com/howto/11287/how-to-run-ubuntu-in-windows-7-with-vmware-player/) - [Info on how to install Ubuntu in VMware Workstation (a little bit outdated)](http://www.sysprobs.com/install-ubuntu-1010-vmware-playerworkstation-working-vmware-tools) - [A blog post with info on how to install Ubuntu 11.04 on VMware workstation 7](http://linux4everybody.blogspot.gr/2011/04/how-to-install-ubuntu-1104-on-vmware.html)
Twitter Bootstrap 2 modal form dialogs

I have the following dialog form:

```
<div class='modal' id='myModal'>
  <div class='modal-header'>
    <a class='close' data-dismiss='modal'>×</a>
    <h3>Add Tags</h3>
  </div>
  <div class='modal-body'>
    <form accept-charset="UTF-8" action="/tagging" data-remote="true" method="post"><div style="margin:0;padding:0;display:inline"><input name="utf8" type="hidden" value="&#x2713;" /><input name="authenticity_token" type="hidden" value="mCNvbvoPFWhD7SoJm9FPDh+BcRvCG3d16P+oOFACPuc=" /></div>
      <input id="tags_string" name="tags_string" type="text" value="luca" />
      <input id="id" name="id" type="hidden" value="4f1c95fd1d41c80ff200067f" />
    </form>
  </div>
  <div class='modal-footer'>
    <div class='btn btn-primary'><input name="commit" type="submit" value="Add tag" /></div>
  </div>
</div>
```

and its JS:

```
<script>
//<![CDATA[
  $(function() {
    // wire up the buttons to dismiss the modal when shown
    $("#myModal").bind("show", function() {
      $("#myModal a.btn").click(function(e) {
        // do something based on which button was clicked
        // we just log the contents of the link element for demo purposes
        console.log("button pressed: "+$(this).html());
        // hide the dialog box
        $("#myModal").modal('hide');
      });
    });
    // remove the event listeners when the dialog is hidden
    $("#myModal").bind("hide", function() {
      // remove event listeners on the buttons
      $("#myModal a.btn").unbind();
    });
    // finally, wire up the actual modal functionality and show the dialog
    $("#myModal").modal({
      "backdrop"  : "static",
      "keyboard"  : true,
      "show"      : true // this parameter ensures the modal is shown immediately
    });
  });
//]]>
</script>
```

When I click x, which is `<a class='close' data-dismiss='modal'>×</a>`, the form closes, leaving me on the current page, while I'd like to go to the homepage.
Also, the "Add tag" button, which is `<div class='btn btn-primary'><input name="commit" type="submit" value="Add tag" /></div>`, does nothing, while just pressing ENTER on the keyboard does the job, and I'd like clicking "Add tag" to do the same. I'm not so skilled in JS and front-end programming, so any help is welcome.
Your submit button is outside of the form tags. It won't know what form to submit. Use javascript to connect it to the form. ``` <div class='modal-body'> <form id="modal-form" accept-charset="UTF-8" action="/tagging" data-remote="true" method="post"> <input name="something" value="Some value" /> </form> </div> <div class='modal-footer'> <a id="modal-form-submit" class='btn btn-primary' href="#">Submit</a> </div> <script> $('#modal-form-submit').on('click', function(e){ // We don't want this to act as a link so cancel the link action e.preventDefault(); // Find form and submit it $('#modal-form').submit(); }); </script> ``` As for the `<a class='close' data-dismiss='modal'>×</a>` that is supposed to link to the homepage, why not just remove the `data-dismiss='modal'` and make it act like a normal link using a standard `href='home.html'`. Here is some additional code to point you in the right direction for using AJAX to submit the form: ``` // Since we want both pressing 'Enter' and clicking the button to work // We'll subscribe to the submit event, which is triggered by both $('#modal-form').on('submit', function(){ //Serialize the form and post it to the server $.post("/yourReceivingPage", $(this).serialize(), function(){ // When this executes, we know the form was submitted // To give some time for the animation, // let's add a delay of 200 ms before the redirect var delay = 200; setTimeout(function(){ window.location.href = 'successUrl.html'; }, delay); // Hide the modal $("#my-modal").modal('hide'); }); // Stop the normal form submission return false; }); ```
Best logging plugin of Kong API gateway to utilise existing ELK stack

Can anyone help me choose a Kong API Gateway logging plugin, from all the available options (Tcp Log, Udp Log, Http Log, File Log, Syslog, Statsd, Loggly, etc.), that can feed logs into an existing ELK stack?
I have worked with the file-log, tcp-log and http-log plugins. All worked quite well for us in terms of the actual logging functionality. The main decision driver was the tech stack we used:

- file-log: We deployed Kong on OpenShift. As OpenShift has a built-in EFK stack and pushes all standard out via Fluentd to ElasticSearch, we wrote our Kong logs to a named pipe ("FIFO file") and made the Docker container continually print that file to stdout. It is something of a workaround, but it worked very well for our use case.
- tcp-log/http-log: We additionally had a requirement to send the logs to an external ELK stack. For this use case we used the tcp-log plugin. We later switched to http-log, as it additionally has a built-in basic auth feature, so you can transfer your logs to a publicly available endpoint with authentication. Both plugins sent the logs to Logstash, which listened on either a TCP or an HTTP port.

I hope this helps you choose a logging plugin according to your needs.