How to create an instance of an object in C#? Greetings for the day! I have had a question in my mind for some days and have been looking for an answer. If my understanding is correct, then the only difference between an instance and an object is: an instance means just creating a reference (a copy); an object means a memory location is associated with it (it is a runtime entity of the class), created by using the new operator. Now I want to know how to create an instance of an object. Please give an explanation with sample code. Any help will be appreciated. Thanks
By your explanation it's not called an instance, but a reference to an object. An instance of a class is called an object. I think your question is: "What is the difference between an object and a reference variable?" I'll try to explain it with some examples: ``` Foo f; ``` I just declared a reference variable. This is not an object but only a reference that refers to an object. ``` f = new Foo(); ``` Now I created a new object and assigned it to the `f` reference variable, so every time I do something to `f` I refer to the `Foo` object. Like when I call `f.Name = "MyFoo";` I refer to the foo object. ``` Foo otherFoo; ``` Now I declare another reference variable. ``` otherFoo = f; ``` What we have now is **ONE** object in memory but **TWO** reference variables referring to the same object. ``` f.IsFoo = true; bool isotherFooFoo = otherFoo.IsFoo; ``` The last line will give `true` because we changed the `IsFoo` property to `true` and `f` and `otherFoo` refer to the **same** object. I hope that explains everything. :)
fatal error: sqlite3.h: No such file or directory I'm trying to build a C application through cross compiling for a Zynq board (ARM architecture). When I type make without mentioning the ARM arch, it works fine on my laptop. But as soon as I modify the Makefile, I get an error saying: ``` main.c:20:43: fatal error: sqlite3.h: No such file or directory #include "sqlite3.h" //library for sqlite3 ^ compilation terminated. make: *** [ws_temp_server] Error 1 ``` The Makefile looks like this: ``` SOURCE=lib/base64_enc.c lib/websocket.c lib/sha1.c lib/sqlite/sqlite3.c main.c CC = arm-xilinx-linux-gnueabi-gcc LDFLAGS=-lpthread -ldl INCLUDES=lib/ PROGRAM=ws_temp_server all: $(PROGRAM) $(PROGRAM): $(SOURCE) $(CC) $(SOURCE) -I$(INCLUDES) -o$(PROGRAM) $(LDFLAGS) clean: rm $(PROGRAM) ``` What am I doing wrong? Thanks for any help I can get.
You don't provide enough information to say for sure: in particular, you don't say where the `sqlite3.h` file actually is on your filesystem. However, based on what you do show I suspect you need to change the `INCLUDES` variable, to this: ``` INCLUDES = lib/sqlite ``` (or else change the `#include` in your code to be `#include "sqlite/sqlite3.h"`). This is assuming that the header file is in the same directory as the `sqlite3.c` source file. Note that this is a bad/confusing implementation. You should be putting the `-I` flag in the `INCLUDES` variable: ``` INCLUDES = -Ilib/sqlite ... $(PROGRAM): $(SOURCE) $(CC) $(SOURCE) $(INCLUDES) -o$(PROGRAM) $(LDFLAGS) ``` `INCLUDES` is plural which may lead someone to believe they could add multiple directories in that variable, but if you leave it the way you have it, this will cause strange compiler errors: ``` INCLUDES = lib/sqlite another/dir ... $(PROGRAM): $(SOURCE) $(CC) $(SOURCE) -I$(INCLUDES) -o$(PROGRAM) $(LDFLAGS) ``` will add the flags `-Ilib/sqlite another/dir`... note how the second directory doesn't have a `-I` option. Of course, by convention you should be using `CPPFLAGS` (for C preprocessor flags), not `INCLUDES`, but... :)
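For reference, a cleaned-up version of the whole Makefile might look roughly like this (a sketch assuming `sqlite3.h` really sits in `lib/sqlite` next to `sqlite3.c`, and keeping `-Ilib` in case other headers live directly under `lib/`; recipe lines must be indented with tabs):

```make
SOURCE = lib/base64_enc.c lib/websocket.c lib/sha1.c lib/sqlite/sqlite3.c main.c
CC = arm-xilinx-linux-gnueabi-gcc
LDFLAGS = -lpthread -ldl
# Preprocessor flags: each include directory gets its own -I
CPPFLAGS = -Ilib -Ilib/sqlite
PROGRAM = ws_temp_server

all: $(PROGRAM)

$(PROGRAM): $(SOURCE)
	$(CC) $(CPPFLAGS) $(SOURCE) -o $(PROGRAM) $(LDFLAGS)

clean:
	rm -f $(PROGRAM)
```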
How do I get the data from an NSURLSession as a string? I'm using an API that gives me data in a neat format - How do I get this as a `String`? I can print it, but saving it as a string doesn't seem to work. I specifically want to update a UI element using the data from the `NSURLSession` task. ``` let url = NSURL(string: apiCall) let task = NSURLSession.sharedSession().dataTaskWithURL(url!) {(data, response, error) in //I want to replace this line below with something to save it to a string. println(NSString(data: data, encoding: NSUTF8StringEncoding)) } task.resume() ```
If your problem is that it is empty outside of the task, that is because it is going out of scope after the completion block ends. You need to save it somewhere that has a wider scope. ``` let url = NSURL(string: apiCall) var dataString:String = "" let task = NSURLSession.sharedSession().dataTaskWithURL(url!) {(data, response, error) in // Save the response data to the outer string. dataString = String(NSString(data: data, encoding: NSUTF8StringEncoding)) dispatch_async(dispatch_get_main_queue()) { // Update the UI on the main thread. self.textView.text = dataString } } task.resume() ``` Now when you access `dataString` it will be set to the data from the task. Be wary though: until the task is completed, `dataString` won't be set, so you should really try to use it in the completion block.
Laying out a data-entry form in HTML: table or no table? I want to create a data-entry form like the following: ``` Name: [ Name textbox ] Age: [ Age textbox ] label n: [ textbox n ] ``` Where the labels left-align, and the textboxes left-align. I know I can do this in a `table` element, but I'm also aware that "tables should only be for tabular data". While I partly agree/disagree with that statement, I'd like to know whether my desired layout could/should be considered "tabular data", and what an alternative layout would be to produce the same results without dozens of lines of complicated cross-browser CSS. I don't do web development much at the moment (strictly WinForms for some time now when I do UI work), so I appreciate there may be an elegant solution. Possibly involving an unordered list with the bullet points turned off and a bit of label->field y-position offsetting, perhaps?
Unordered lists with `label` elements should be the way to go here. The markup I would use should look something like: ``` <form id="person" method="post" action="process.php"> <ul> <li><label for="name">Name: </label><input id="name" name="name" type="text" /></li> <li><label for="age">Age: </label><input id="age" name="age" type="text" /></li> <li><label for="n">N: </label><input id="n" name="n" type="text" /></li> </ul> </form> ``` And this CSS is to get something similar to what you asked for: ``` #person ul { list-style: none; } #person li { padding: 5px 10px; } #person li label { float: left; width: 50px; margin-top: 3px; } #person li input[type="text"] { border: 1px solid #999; padding: 3px; width: 180px; } ``` See <http://jsfiddle.net/tZhUQ/1>, which contains some more interesting stuff you can try.
How To Get jQuery-UI Autocomplete To Trigger On Key Press I'm not quite sure if what I want is possible. But I currently have some code that populates an autocomplete list. The source is handled by an ajax call to a web api, that returns a set of items from the database (see code below). ``` $(".ItemSearch").on('keypress', function (event, ui) { var disabled = true; if (event.which === 13) { disabled = false; } }); function BindItemNumberSearch(hostItemForm) { if ($(".ItemSearch", hostItemForm).autocomplete({}).data("ui-autocomplete")) { $(".ItemSearch", hostItemForm).unbind("autocomplete"); $(".ItemSearch", hostItemForm).autocomplete({ close: function () { // some logic }, response: function (event, ui) { // some logic if the item is empty }, source: function (request, response) { // return if the search box is empty or is disabled if (request.term.trim().length <= 0 || disabled) { return; } $.ajax({ // some ajax call }); }, delay: 500, focus: function (event, ui) { return false; }, select: function (event, ui) { // return false if no item is selected if (ui.item.id != null) { return false; } // some logic to select the item } }).data("ui-autocomplete")._renderItem = RenderSearchResultItem; } } ``` The issue we are having is that sometimes the request to search can be sent before the user has finished typing out the search string. This used to be OK as the search would return quickly, but now we have too much data and it is causing slowness (we think due to multiple searches being kicked off as the user slowly types what they are looking for). So we would like to add a trigger on key press (such as the enter key) to kick off the search. I have found this [answer](https://stackoverflow.com/questions/11416727/jquery-autocomplete-action-on-enter-key), and it seems jQuery-ui does not support this. I've tried different attempts, the one included is the latest. However I can't seem to get it to work.
You can assign a flag for when your `autocomplete` should start searching. ``` // this will be the flag if autocomplete should begin searching // should become true when [Enter] key is pressed & input field is not empty window.BeginSearch = false; ``` After that, attach a DOM Event to your `autocomplete` element that would detect the `Enter` key ``` $(document).on("keydown", "#tags", function(e) { ... }) ``` Programatically instruct the `autocomplete` to start searching as needed when the `Enter` key is pressed ``` $("#tags").autocomplete("search"); ``` Inside the `source` callback, this is when the flag variable will come in handy. Use this to detect if `Enter` key was pressed and therefore have set `BeginSearch` to `true` ``` $("#tags").autocomplete({ source: function (request, response) { if (window.BeginSearch != true || request.term.trim().length <= 0) { response([]); window.BeginSearch = false; // reset the flag since searching is finished return; } else if (window.BeginSearch == true) { sample_async_function(request).then(function (return_data) { response(return_data); window.BeginSearch = false; // reset the flag since searching is finished }); } }, delay: 0 // no need for delay, as you can see }); ``` --- ## Sample Demo: ``` // this event will be responsible for tracking [Enter] key press $(document).on("keydown", "#tags", function(e) { // additional checks so that autocomplete search won't occur if conditions are not met if (e.key == "Enter" && $("#tags").val().trim().length > 0 && $(".sample-loader:visible").length < 1) { window.BeginSearch = true; $("#tags").autocomplete("search"); } }) $(document).ready(function() { // this will be the flag if autocomplete should begin searching // should become true when [Enter] key is pressed & input field is not empty window.BeginSearch = false; $("#tags").autocomplete({ source: function(request, response) { if (window.BeginSearch != true || request.term.trim().length <= 0) { response([]); window.BeginSearch = false; // reset the flag since searching is finished return; } else if (window.BeginSearch == true) { sample_async_function(request).then(function(return_data) { response(return_data); window.BeginSearch = false; // reset the flag since searching is finished }); } }, delay: 0 // no need for delay, as you can see }); }); // sample asynchronous function. mimics fetching data from server side (e.g., ajax) function sample_async_function(some_passed_string) { $(".sample-loader").show(); return new Promise(resolve => setTimeout(() => { $(".sample-loader").hide(); resolve( [ "ActionScript", "AppleScript", "Asp", "BASIC", "C", "C++", "Clojure", "COBOL", "ColdFusion", "Erlang", "Fortran", "Groovy", "Haskell", "Java", "JavaScript", "Lisp", "Perl", "PHP", "Python", "Ruby", "Scala", "Scheme" ].filter((val, index) => { if (val.toLowerCase().includes(some_passed_string.term.toLowerCase())) { return val; } }) ); }, 500)); // arbitrary value. 
// sample speed of the API XHR in unit milliseconds } ``` ``` .sample-loader { display: none; position: fixed; top: 0; right: 0; bottom: 0; left: 0; z-index: 1053; background: #000000dd; color: white; font-size: 20px; } ``` ``` <script src="https://code.jquery.com/jquery-3.3.1.min.js" integrity="sha256-FgpCb/KJQlLNfOu91ta32o/NMZxltwRo8QtmkMRdAu8=" crossorigin="anonymous"></script> <script src="https://code.jquery.com/ui/1.12.1/jquery-ui.min.js" integrity="sha256-VazP97ZCwtekAsvgPBSUwPFKdrwD3unUfSGVYrahUqU=" crossorigin="anonymous"></script> <link rel="stylesheet" href="https://code.jquery.com/ui/1.12.1/themes/base/jquery-ui.css"> <div class="ui-widget"> <label for="tags">AutoComplete: </label> <input id="tags"> </div> <div class="sample-loader">Loading...</div> ``` From a UX perspective, you are going to want to figure out how to temporarily disable interactions with the element when the searching is in progress. In this example, I have used a simple "loading" screen.
How to hit the WebSocket Endpoint? I see that there is a websocket endpoint which works fine with Java tests. In the logs I see ``` Connecting to: ws://127.0.0.1:8080/76f48a44-0af8-444c-ba97-3f1ed34afc91/tweets ``` Just like any other `REST` API I would like to hit it via browser or curl, but when I do that I see ``` ➜ tweetstream git:(master) ✗ curl ws://127.0.0.1:8080/b9b90525-4cd4-43de-b893-7ef107ad06c2/tweets curl: (1) Protocol ws not supported or disabled in libcurl ``` and ``` ➜ tweetstream git:(master) ✗ curl http://127.0.0.1:8080/b9b90525-4cd4-43de-b893-7ef107ad06c2/tweets <html><head><title>Error</title></head><body>Not Found</body></html>% ``` Is there a way to test websocket APIs with browser/curl?
If you mean literally to test the implementation of websockets, I found Autobahn's test suite to be very useful: <http://autobahn.ws/> If you just want to noodle with a websocket I would recommend using the developer tools in a browser like chrome to make a connection and send/recv data: ``` var ws = new WebSocket("ws://127.0.0.1:8080/76f48a44-0af8-444c-ba97-3f1ed34afc91/tweets"); ws.onclose = function() { // thing to do on close }; ws.onerror = function() { // thing to do on error }; ws.onmessage = function() { // thing to do on message }; ws.onopen = function() { // thing to do on open }; ws.send("Hello World"); ```
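If you still want to poke it from the command line, one workaround is to drive just the opening handshake with curl by sending the upgrade headers yourself; this will not speak the WebSocket framing protocol, but it does show whether the endpoint answers with `101 Switching Protocols` (the key below is just an arbitrary base64 value):

```
curl -i -N \
  -H "Connection: Upgrade" \
  -H "Upgrade: websocket" \
  -H "Sec-WebSocket-Version: 13" \
  -H "Sec-WebSocket-Key: SGVsbG8sIHdvcmxkIQ==" \
  http://127.0.0.1:8080/76f48a44-0af8-444c-ba97-3f1ed34afc91/tweets
```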
Delphi: open a zip archive from a stream -> extract to a stream Are there any zip components with such features? I need to download a zip archive from the Internet to a stream, then to open the archive from the stream and then to extract files to another stream. E.g. **ZipForge** can open an archive from a stream `ZipForge.OpenArchive(MyStream, false);` but how to extract to another one...? ``` procedure ExtractToStream(FileName: WideString; Stream: TStream); ``` **Description** > > Use ExtractToStream to decompress data stored in the file inside the > archive to a TStream descendant object like TFileStream, TMemoryStream > or TBlobStream. > > > The FileName parameter specifies file name being extracted. > > > And what use of the `OpenArchive(MyStream, false)` method if extraction isn't supported...
The zip file component that is built into XE2 will do this. There is an overloaded `Open` method that receives a `TStream` as its input parameters. To extract individual files you can call an overloaded `Read` method passing the name of the file that you wish to extract. The extracted file is returned as a new instance of `TStream`. You can that use `CopyFrom` on that instance to transfer the extracted file to your stream. ``` var ZipFile: TZipFile; DownloadedStream, DecompressionStream, MyStream: TStream; LocalHeader: TZipHeader; ... ZipFile := TZipFile.Create; try ZipFile.Open(DownloadedStream, zmRead); ZipFile.Read('myzippedfile', DecompressionStream, LocalHeader); try MyStream.CopyFrom(DecompressionStream, DecompressionStream.Size); finally DecompressionStream.Free; end; finally ZipFile.Free; end; ``` Note that I've not tested this code, I've just written it based on the source code for `TZipFile` and the documentation contained in that source code. There may be a few wrinkles in this but if the code behaves as advertised it meets your needs perfectly. --- OK, now I tested it because I was curious. Here's the program that shows that this all works as advertised: ``` program ZipTest; {$APPTYPE CONSOLE} uses System.SysUtils, System.Classes, System.Zip; procedure ExtractToFile( const ZipFileName: string; const ZippedFileIndex: Integer; const ExtractedFileName: string ); var ZipFile: TZipFile; DownloadedStream, DecompressionStream, OutputStream: TStream; LocalHeader: TZipHeader; begin DownloadedStream := TFileStream.Create(ZipFileName, fmOpenRead); try ZipFile := TZipFile.Create; try ZipFile.Open(DownloadedStream, zmRead); ZipFile.Read(ZippedFileIndex, DecompressionStream, LocalHeader); try OutputStream := TFileStream.Create(ExtractedFileName, fmCreate); try OutputStream.CopyFrom(DecompressionStream, DecompressionStream.Size); finally OutputStream.Free; end; finally DecompressionStream.Free; end; finally ZipFile.Free; end; finally DownloadedStream.Free; end; end; begin try ExtractToFile('C:\desktop\test.zip', 0, 'C:\desktop\out.txt'); except on E: Exception do Writeln(E.ClassName, ': ', E.Message); end; end. ``` Note that I extracted by index rather than file name since that was more convenient for me. And I used file streams rather than memory streams which I imagine you would use. However, since the `TZipFile` methods work with `TStream` I'm sure that the code will work with streams of any form. --- This is the latest in a series of questions about ZIP files. I know that you are using XE2 and I wonder why you seem reluctant to use the built in ZIP class that XE2 provides. I've not seen anything to indicate that it will not fulfil your requirements. In fact, it is precisely this ability to work directly with streams that makes me feel it has sufficient generality for any application.
MySQL trigger which triggers on either INSERT or UPDATE? Is there a way to create MySQL trigger which triggers on either UPDATE or INSERT? Something like ``` CREATE TRIGGER t_apps_affected BEFORE INSERT OR UPDATE ... ``` Obviously, the above don't work. So, any workarounds without creating two separate triggers? I need this in order to update running counter on another table.
Unfortunately, there is no shorthand form - you must create multiple triggers - one for each event. The [doc](http://dev.mysql.com/doc/refman/5.5/en/create-trigger.html) says: > > `trigger_event` indicates the kind of statement that activates the trigger. The trigger\_event can be **one** of the following: > > > **INSERT**: The trigger is activated whenever a new row is inserted into > the table; for example, through INSERT, LOAD DATA, and REPLACE > statements. > > > **UPDATE**: The trigger is activated whenever a row is modified; for > example, through UPDATE statements. > > > **DELETE**: The trigger is activated whenever a row is deleted from the > table; for example, through DELETE and REPLACE statements. However, > DROP TABLE and TRUNCATE TABLE statements on the table do not activate > this trigger, because they do not use DELETE. Dropping a partition > does not activate DELETE triggers, either. See Section 12.1.27, > “TRUNCATE TABLE Syntax”. > > >
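So for the running-counter use case you end up with two near-identical triggers; a sketch (the `apps` and `apps_counter` table/column names here are just placeholders):

```sql
DELIMITER $$

CREATE TRIGGER t_apps_affected_ins AFTER INSERT ON apps
FOR EACH ROW
BEGIN
    UPDATE apps_counter SET changes = changes + 1;
END$$

CREATE TRIGGER t_apps_affected_upd AFTER UPDATE ON apps
FOR EACH ROW
BEGIN
    UPDATE apps_counter SET changes = changes + 1;
END$$

DELIMITER ;
```

If the shared logic grows beyond one statement, put it in a stored procedure and `CALL` it from both triggers so it only has to be maintained in one place.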
using a non-blocking socket connection in C I'm changing a socket connection in a script to a non-blocking connection. In a tutorial I found the lines: ``` x=fcntl(s,F_GETFL,0); // Get socket flags fcntl(s,F_SETFL,x | O_NONBLOCK); // Add non-blocking flag ``` So I added them after I create my socket and before the connect statement. And it's no longer blocking :) but it also doesn't connect. I'm not getting any errors, the connect is just returning -1. If I comment these lines out it connects. What else do I need to add to get a non-blocking connection to connect?
Check return value of [`connect(2)`](http://www.kernel.org/doc/man-pages/online/pages/man2/connect.2.html) - you should be getting `-1`, and `EINPROGRESS` in [`errno(3)`](http://www.kernel.org/doc/man-pages/online/pages/man3/errno.3.html). Then add socket file descriptor to a poll set, and wait on it with [`select(2)`](http://www.kernel.org/doc/man-pages/online/pages/man2/select.2.html) or [`poll(2)`](http://www.kernel.org/doc/man-pages/online/pages/man2/poll.2.html). This way you can have multiple connection attempts going on at the same time (that's how e.g. browsers do it) and be able to have tighter timeouts.
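A rough sketch of that flow in C (error handling trimmed, names are mine):

```c
#include <errno.h>
#include <fcntl.h>
#include <sys/select.h>
#include <sys/socket.h>

/* Returns 0 once the non-blocking connect succeeds, -1 on error/timeout. */
int connect_nonblocking(int s, const struct sockaddr *addr, socklen_t addrlen)
{
    int flags = fcntl(s, F_GETFL, 0);          /* get socket flags        */
    fcntl(s, F_SETFL, flags | O_NONBLOCK);     /* add non-blocking flag   */

    if (connect(s, addr, addrlen) == 0)
        return 0;                              /* connected immediately   */
    if (errno != EINPROGRESS)
        return -1;                             /* real failure            */

    fd_set wfds;
    FD_ZERO(&wfds);
    FD_SET(s, &wfds);
    struct timeval tv = { 5, 0 };              /* 5 second timeout        */

    if (select(s + 1, NULL, &wfds, NULL, &tv) <= 0)
        return -1;                             /* timeout or select error */

    /* Writable means the attempt finished - success OR failure - so ask. */
    int err = 0;
    socklen_t len = sizeof(err);
    if (getsockopt(s, SOL_SOCKET, SO_ERROR, &err, &len) < 0 || err != 0)
        return -1;

    return 0;                                  /* connection established  */
}
```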
Why "decimal" is not a valid attribute parameter type? It is really unbelievable but real. This code will not work: ``` [AttributeUsage(AttributeTargets.Property|AttributeTargets.Field)] public class Range : Attribute { public decimal Max { get; set; } public decimal Min { get; set; } } public class Item { [Range(Min=0m,Max=1000m)] //compile error:'Min' is not a valid named attribute argument because it is not a valid attribute parameter type public decimal Total { get; set; } } ``` While this works: ``` [AttributeUsage(AttributeTargets.Property|AttributeTargets.Field)] public class Range : Attribute { public double Max { get; set; } public double Min { get; set; } } public class Item { [Range(Min=0d,Max=1000d)] public decimal Total { get; set; } } ``` Who can tell me why double is OK while decimal is not.
> > This is a CLR restriction. Only > primitive constants or arrays of > primitives can be used as attribute > parameters. The reason why is that an > attribute must be encoded entirely in > metadata. This is different than a > method body which is coded in IL. > Using MetaData only severely restricts > the scope of values that can be used. > In the current version of the CLR, > metadata values are limited to > primitives, null, types and arrays of > primitives (may have missed a minor > one). > > > Taken from [this](https://stackoverflow.com/questions/507528/use-decimal-values-as-attribute-params-in-c/507533#507533) answer by [JaredPar](https://stackoverflow.com/users/23283/jaredpar). > > Decimals while a basic type are not a > primitive type and hence cannot be > represented in metadata which prevents > it from being an attribute parameter. > > >
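If you need the range values as `decimal` anyway, a common workaround (just a sketch, not part of the quoted answer) is to accept a CLR-encodable primitive such as `double` (or a string) in the attribute's constructor and convert it to `decimal` inside:

```csharp
[AttributeUsage(AttributeTargets.Property | AttributeTargets.Field)]
public class RangeAttribute : Attribute
{
    // Exposed as decimal, but supplied as double, which the CLR can encode in metadata.
    public decimal Min { get; private set; }
    public decimal Max { get; private set; }

    public RangeAttribute(double min, double max)
    {
        Min = (decimal)min;
        Max = (decimal)max;
    }
}

public class Item
{
    [Range(0, 1000)] // doubles in the metadata, decimals at runtime
    public decimal Total { get; set; }
}
```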
Referencing "this" inside setInterval/setTimeout within object prototype methods Normally I'd assign an alternative "self" reference when referring to "this" within setInterval. Is it possible to accomplish something similar within the context of a prototype method? The following code errors. ``` function Foo() {} Foo.prototype = { bar: function () { this.baz(); }, baz: function () { this.draw(); requestAnimFrame(this.baz); } }; ```
Unlike in a language like Python, a Javascript method forgets it is a method after you extract it and pass it somewhere else. You can either ## Wrap the method call inside an anonymous function This way, accessing the `baz` property and calling it happen at the same time, which is necessary for the `this` to be set correctly inside the method call. You will need to save the `this` from the outer function in a helper variable, since the inner function will refer to a different `this` object. ``` var that = this; setInterval(function(){ return that.baz(); }, 1000); ``` ## Wrap the method call inside a fat arrow function In Javascript implementations that implement the [arrow functions](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Functions/Arrow_functions) feature, it is possible to write the above solution in a more concise manner by using the fat arrow syntax: ``` setInterval( () => this.baz(), 1000 ); ``` Fat arrow anonymous functions preserve the `this` from the surrounding function so there is no need to use the `var that = this` trick. To see if you can use this feature, consult a compatibility table like [this one](https://kangax.github.io/compat-table/es6/#test-arrow_functions). ## Use a binding function A final alternative is to use a function such as Function.prototype.bind or an equivalent from your favorite Javascript library. ``` setInterval( this.baz.bind(this), 1000 ); //dojo toolkit example: setInterval( dojo.hitch(this, 'baz'), 100); ```
Same send and receive buffers in MPI In my code, each process works on certain portion of an array. I want each process to send the portion it worked on to other processes and receive other portions from other processes. For this I used `MPI_Allgatherv` but I kept send and receive buffers the same: ``` MPI_Allgatherv (&vel[0], localSizesFaceV[rank], MPI_DOUBLE, &vel[0], localSizesFaceV, displsFaceV, MPI_DOUBLE, MPI_COMM_WORLD); ``` I used this function before for other purposes with different send and receive buffers and it worked. That is why I am sure there is no problem with other parameters. In the case of 2 processes, one of the processes does not return. When I copied send buffer to another `std::vector` ``` vector <double> vel2; vel2 = vel; ``` and used `vel2` as send buffer then all processes returned. Why?
Generally speaking, MPI requires that the argument is not aliased. This is explicitly mentioned [chapter 2.3 of the current standard](http://www.mpi-forum.org/docs/mpi-3.1/mpi31-report.pdf#section.2.3). > > Unless specified otherwise, an argument of type OUT or type INOUT > cannot be aliased with any other argument passed to an MPI procedure. > > > This explains why your code has problems. However, there is the possibility to solve your issue very easily, without having to explicitly copy your buffer: the `MPI_IN_PLACE` keyword. It specifies that the communication will be done "in-place" using the output buffer as an input buffer too wherever relevant. Your code would become: ``` MPI_Allgatherv( MPI_IN_PLACE, 0, MPI_DATATYPE_NULL, &vel[0], localSizesFaceV, displsFaceV, MPI_DOUBLE, MPI_COMM_WORLD); ``` NB: the actual type to use for the send buffer is irrelevant. You can keep `MPI_DOUBLE` if you want to, but I tend to prefer using `MPI_DATATYPE_NULL` to make clear that the parameter is ignored.
Finding next position in 'circular' array I have an array of 8 compass points numbered from SW, clockwise though to S: ``` 2 3 4 1 5 0 7 6 ``` I want to calculate if the shortest route from one point to another would be clockwise (+1) or anticlockwise (-1). E.g. to go from 7 to 5 would be -1, to go from 7 to 0 would be + 1. Simple problem I guess but I'm having a real brain freeze today. The closest I've got is `if abs(start - end) < 4, -1, 1` but that doesn't work if the start is 3. There is a similar problem [here](https://stackoverflow.com/questions/5214907/shortest-path-algorithm-for-an-analog-clock), the accepted answer for which is to use modulo, but doesn't explain how. I've thrown various calculations around without success.
Instead of using `abs`, add `8` (the number of entries) and then take modulo `8`, like this: ``` enum Direction { None, Clockwise, Counterclockwise } public static Direction GetDirection(int a, int b) { if (a == b) { return Direction.None; } return (a-b+8)%8 > 4 ? Direction.Clockwise : Direction.Counterclockwise; } ``` Adding `8` makes the difference non-negative; modulo-`8` brings it into `0`...`7` range. Note that when the number of steps is `4`, it does not matter which way you go, so the program prefers counterclockwise. You can change it by using `>=` in place of `>`.
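Using the examples from the question (a quick check of the arithmetic):

```csharp
GetDirection(7, 5); // (7 - 5 + 8) % 8 == 2, not > 4 -> Counterclockwise (-1)
GetDirection(7, 0); // (7 - 0 + 8) % 8 == 7, > 4     -> Clockwise (+1)
GetDirection(3, 6); // (3 - 6 + 8) % 8 == 5, > 4     -> Clockwise, so start = 3 works too
```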
Defining compare function for fibonacci heap in boost I need to use Fibonacci heap in my project and I am trying to use it from boost library. But I cannot figure out how to set up a user defined compare function for arbitrary data type. I need to construct a min heap for struct node defined as follows: ``` struct node { int id; int weight; struct node* next; /* dist is a global array of integers */ bool operator > (struct node b) //Boost generates a Max-heap. What I need is a min-heap. {return dist[id] < dist[b.id] ? 1:0 ;} //That's why "<" is used for "operator >". bool operator < (struct node b) {return dist[id] > dist[b.id] ? 1:0 ;} bool operator >=(struct node b) {return dist[id] <= dist[b.id] ? 1:0 ;} bool operator <=(struct node b) {return dist[id] >= dist[b.id] ? 1:0 ;} node() { id=0; weight=0; next=NULL; } }; ``` I looked up the documentation and there was a compare class. But it did not contain any element. Please tell me how to set up a user defined compare function. Thank you in advance.
`fibonacci_heap` takes a comparison *functor*, which is effectively a `struct` or `class` with a function call operator - `operator()`. I'm going to simplify your `node` struct, but you should be able to use this with minor modifications: ``` struct node { int id; node(int i) : id(i) { } }; ``` Now, we need to define a class that compares `node`s. This will have an `operator()` that takes 2 nodes by const reference, and return a `bool`: ``` struct compare_node { bool operator()(const node& n1, const node& n2) const { return n1.id > n2.id; } }; ``` We can then declare our heap as follows: ``` boost::heap::fibonacci_heap<node, boost::heap::compare<compare_node>> heap; ``` A full example: ``` #include <boost/heap/fibonacci_heap.hpp> #include <iostream> struct node { int id; node(int i) : id(i) { } }; struct compare_node { bool operator()(const node& n1, const node& n2) const { return n1.id > n2.id; } }; int main() { boost::heap::fibonacci_heap<node, boost::heap::compare<compare_node>> heap; heap.push(node(3)); heap.push(node(2)); heap.push(node(1)); for(const node& n : heap) { std::cout << n.id << "\n"; } } ```
ASP.NET Core - Starting the web server is taking longer than expected I'm attempting to debug a ASP.NET Core web app using either the Web API or Web Application templates: [![enter image description here](https://i.stack.imgur.com/dGzoD.png)](https://i.stack.imgur.com/dGzoD.png) [![enter image description here](https://i.stack.imgur.com/IEldR.png)](https://i.stack.imgur.com/IEldR.png) without adding additional code, etc. to the project. I use IIS Express to debug the application and the following message is displayed > > Starting the web server is taking longer than expected. > > > [![enter image description here](https://i.stack.imgur.com/X6UEk.png)](https://i.stack.imgur.com/X6UEk.png) After about 10 minutes of waiting, my processor utilization is less than 10%. It looks like the web server is not going to start with any more waiting, and so debugging is not going to start either. How do I get the web server to start so that I can proceed with debugging a .NET Core web app? My machine environment is as follows ``` Microsoft Visual Studio Enterprise 2015 Version 14.0.25123.00 Update 2 Microsoft .NET Framework Version 4.6.01055 .NET Command Line Tools (1.0.0-preview1-002702) Product Information: Version: 1.0.0-preview1-002702 Commit Sha: 6cde21225e Runtime Environment: OS Name: Windows OS Version: 10.0.10240 OS Platform: Windows RID: win10-x64 ```
For me the issue was the self signed SSL certificate install popup on start wasn't getting completed. This is what resolved the issue for me. My Setup: win10 VS 2015 community user is running as non admin .NET core asp.net framework site/app project configured to default to https using localhost startup Default browser on startup - Chrome Steps to resolve. Start VS debug with IISExpress VS hangs with popup stating "starting the web server is taking longer than expected" Right click on icon tray in lower right main window move mouse over IISExpress Icon and right click Under the View Sites context menu that pops up select your https enabled site This will open the window to your site and a popup menu asking you to trust your self signed SSL certificate will ask you to install the cert as a trusted SSL cert. From that point on I didn't receive the startup hang
why is IE11 choosing render mode: "IE7 Strict" and how to i make it use current browser? A website that was deployed has crashed, and it is because it is rendering it in "IE7 Strict". This test was determined by the following code snippet: ``` var vMode = document.documentMode; var rMode = 'IE5 Quirks Mode'; if(vMode == 8){ rMode = 'IE8 Standards Mode'; } else if(vMode == 7){ rMode = 'IE7 Strict Mode'; } alert('Rendering in: ' + rMode); ``` This is an ASP Web application. I was thinking that if it were opened with IE11, it would render it in IE11. It seems that is definitely NOT the case. How would i resolve this? Do i have to add something to the config file of my WebApplication, or is an IE module that needs to be removed? Are there meta tags i need to append to the MasterPage Header?
You can set `X-UA-Compatible` to `IE=edge` to make IE render with its latest engine: ``` <meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1"> ``` Check [What does <meta http-equiv="X-UA-Compatible" content="IE=edge"> do?](https://stackoverflow.com/questions/6771258/whats-the-difference-if-meta-http-equiv-x-ua-compatible-content-ie-edge-e) for more information. It can also be configured in web.config for all pages, which also makes sure that intranet websites render accordingly. I was facing this problem with an internal website even after adding the META tag, so I updated it in web.config: ``` <system.webServer> <httpProtocol> <customHeaders> <add name="X-UA-Compatible" value="IE=edge" /> </customHeaders> </httpProtocol> </system.webServer> ```
Java 8 groupingby with returning multiple field In Java 8 group by how to groupby on a single field which returns more than one field. In the below code by I am passing name and the field to be summed which is 'total' in this scenario. however I would like to return sum of 'total' and 'balance' field for every 'name' in the Customer list (can be a map with key and value as array). Can it be done by using a single groupingBy with the return values? ``` import java.util.ArrayList; import java.util.List; import java.util.Map; import java.util.Set; import java.util.stream.Collectors; public class Sample { public static void main(String str[]){ Customer custa = new Customer("A",1000,1500); Customer custa1 = new Customer("A",2000,2500); Customer custb = new Customer("B",3000,3500); Customer custc = new Customer("C",4000,4500); Customer custa2 = new Customer("A",1500,2500); List<Customer> listCust = new ArrayList<>(); listCust.add(custa); listCust.add(custa1); listCust.add(custb); listCust.add(custc); listCust.add(custa2); Map<String, Double> retObj = listCust.stream().collect(Collectors.groupingBy(Customer::getName,Collectors.summingDouble(Customer::getTotal))); System.out.println(retObj); } private static class Customer { private String name; private double total; private double balance; public Customer(String name, double total, double balance) { super(); this.name = name; this.total = total; this.balance = balance; } public String getName() { return name; } public void setName(String name) { this.name = name; } public double getTotal() { return total; } public void setTotal(double total) { this.total = total; } public double getBalance() { return balance; } public void setBalance(double balance) { this.balance = balance; } @Override public String toString() { return "Customer [name=" + name + ", total=" + total + ", balance=" + balance + "]"; } } } ``` Expected Output - ``` { A = [4500,6500], B = [3000,3500] , C = [4000,4500] } ```
You can write your own collector to sum total and balance ``` Collector<Customer, List<Double>, List<Double>> collector = Collector.of( () -> Arrays.asList(0.0, 0.0), (a, t) -> { a.set(0, a.get(0) + t.getTotal()); a.set(1, a.get(1) + t.getBalance()); }, (a, b) -> { a.set(0, a.get(0) + b.get(0)); a.set(1, a.get(1) + b.get(1)); return a; } ); Map<String, List<Double>> retObj = listCust .stream() .collect(Collectors.groupingBy(Customer::getName, collector)); System.out.println(retObj); ``` result ``` {A=[4500.0, 6500.0], B=[3000.0, 3500.0], C=[4000.0, 4500.0]} ```
How to apply modulo operation on a char array in C? Edited: I have a big number that C does not have a type for it natively. I have to use a char array to hold it. As an example, I create a 32-byte array. It represents a large number up to 2 ^ 256. ``` unsigned char num[32]; // The size could be any number for this question. ``` I want to apply modulo operation on it, for example, I want to mod the big number by a small divisor and get an integer type result. ``` int divisor = 1234; // Note that the divisor is much smaller than the big number int result; // do something here // to produce a result // like result = number mod divisor ``` I do not want to use other library. How can I do it?
To perform *mod* on a large number, use *mod* one `unsigned char` ([@Bathsheba](https://stackoverflow.com/questions/38571700/how-to-apply-modulo-operation-on-a-char-array-in-c/38572628#comment64532792_38571700)) at a time. `%` is C's *remainder* operator. For positive operands it has the same functionality as [mod](https://en.wikipedia.org/wiki/Modulo_operation). ``` unsigned mod_big(const unsigned char *num, size_t size, unsigned divisor) { unsigned rem = 0; // Assume num[0] is the most significant while (size-- > 0) { // Use math done at a width wider than `divisor` rem = ((UCHAR_MAX + 1ULL)*rem + *num) % divisor; num++; } return rem; } ```
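A small usage sketch for the function above (assuming, as the code does, that `num[0]` is the most significant byte):

```c
#include <stdio.h>

int main(void)
{
    unsigned char num[32] = {0};
    num[30] = 0x01;                 /* big number = 0x01FF = 511 */
    num[31] = 0xFF;

    /* 511 % 1234 == 511 */
    printf("%u\n", mod_big(num, sizeof num, 1234));
    return 0;
}
```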
Java - Timer is not being removed after execution I have an application that starts a timer to splash a message on user actions. In JDK profiler it seems that all other threads are being removed after execution by GC (I guess) but the timers a created is not being removed. What could be happening there? my timer: ``` /** * @param owner * @param added */ public static void splashParentWithAnimation(AnchorPane owner, Parent added,double posX,double posY) { // addParentWithAnimation(owner, added); owner.getChildren().add(added); AnchorPane.setLeftAnchor(added, posX); AnchorPane.setTopAnchor(added, posY); FadeTransition ft1 = new FadeTransition(Duration.millis(300), added); ft1.setFromValue(0.0); ft1.setToValue(1.0); ft1.play(); Timer messagePrinter = new Timer(); messagePrinter.schedule(new TimerTask() { @Override public void run() { Platform.runLater(() -> { if (!owner.getChildren().contains(added)) return; FadeTransition ft1 = new FadeTransition(Duration.millis(300), added); ft1.setFromValue(1.0); ft1.setToValue(0.0); ft1.play(); ft1.setOnFinished((e) -> { if (owner.getChildren().contains(added)) owner.getChildren().remove(added); }); }); } }, 1000); } ``` JDK profiler : [![enter image description here](https://i.stack.imgur.com/MMsrN.png)](https://i.stack.imgur.com/MMsrN.png) Is it because I am using a static method or should I destroy it myself?
Actually, you have no problem with timer termination here. The threads you see in the profiler are already terminated – they have a white box on the left side which indicates that they are dead. The profiler shows all the threads that were created during the program execution, even if those threads are already dead and garbage-collected. You can easily confirm that by doing the following: Instead of a lambda, create a subclass of `TimerTask` that will do the same and redefine its `finalize()` method to print something. You'll see that when garbage collections are performed, your tasks are finalized. That can only happen if the threads are stopped, because that is the only place in the `Thread` class where it drops the reference to its `Runnable` (which `TimerTask` implements). Another way to confirm that is just to select 'Live Threads' from the View dropdown list on top of the table. In addition, I would recommend substituting `Timer` with something better. It's too wasteful to create a thread every time you need to delay some task. Have a look at `ScheduledThreadPoolExecutor`, it seems much more appropriate for your task: ``` // Create a shared executor with a single thread private final ScheduledThreadPoolExecutor executor = new ScheduledThreadPoolExecutor(1); // Instead of creating a Timer, schedule the task executor.schedule(() -> { // Do what you need here }, 1, TimeUnit.SECONDS); // Don't forget to shut the executor down when you don't need it anymore executor.shutdown(); ``` You can add more than one thread to the executor if you have too many scheduled tasks at once and those tasks are not small enough.
Swift: How to handle view controllers for my game i have a general question about view controllers and how to handle them in a clean way when i develop a SpriteKit based game. What i did so far: - Use storyboard only for defining view controllers - SKScene's are presented in each view controller (Home, LevelSelection, Game) by presentScene - in each view controller i call **performSegueWithIdentifier** with the identifier i defined in the storyboard between the view controllers - all the content i show programmatically using SKSpritenode etc. on the SKScene's - on the storyboard i only have view controllers with segue relations and identifiers defined - all the stuff i do in viewDidDisappear is because it seems to be the only way to get my SKScene deinited correctly My problems are: - everytime i segue to another view, my memory raises, because the view controller is re-initialized, the old one keeps staying in the stack - it is not clear for me how to handle the segue's between the view controllers, on some tutorial pages i see people using the navigation controller, others are using strong references of some view controllers and using the singleton pattern for the view controller in order to decide either to init the view controller or just show it - my view controllers are not deiniting, i understand my home view can't because it is the initial one, but since ios is reiniting it anyways, why then not unloading it? What is the correct way for a Swift based game using SpriteKit to handle the view controller? Below you can see my initial view controller (Home) showing an SKScene with a simple play button which calls the play() function to segue to the levelselection ``` import UIKit import SpriteKit class Home : UIViewController { private var scene : HomeScene! override func viewDidLoad() { print(self) super.viewDidLoad() self.scene = HomeScene(size: view.bounds.size) self.scene.scaleMode = .ResizeFill let skView = view as! SKView skView.showsFPS = true skView.showsNodeCount = true skView.ignoresSiblingOrder = true NSNotificationCenter.defaultCenter().addObserver(self, selector: #selector(play), name: Constants.Events.Home.play, object: nil) skView.presentScene(self.scene) } override func viewDidDisappear(animated: Bool) { super.viewDidDisappear(animated) let v = view as! SKView self.scene.dispose() v.presentScene(nil) NSNotificationCenter.defaultCenter().removeObserver(self) self.scene = nil self.view = nil print("home did disappear") } func play() { self.performSegueWithIdentifier("home_to_levelselection", sender: nil) } deinit { print("Home_VC deinit") } } ```
Your way seems very complicated to essentially present 3 scenes. It's not what you are supposed to do for SpriteKit games; you only really need 1 view controller (GameViewController). Load your first scene from GameViewController (e.g. HomeScene) and nothing else. Create your playButton and other UI directly in HomeScene. Use SpriteKit APIs for your UI (SKLabelNodes, SKNodes, SKSpriteNodes etc). You should never really use UIKit (UIButtons, UILabels) in SpriteKit. There are some exceptions to this, like maybe using UICollectionViews for massive level select menus, but basic UI should be done with SpriteKit APIs. There are plenty of tutorials to google on how to create SpriteKit buttons, how to use SKLabelNodes etc. Xcode has a SpriteKit level editor so you can do all that visually, similar to storyboards. Then from HomeScene transition to the LevelSelectScene and then to the GameScene and vice versa. It's super easy to do. ``` /// Home Scene class HomeScene: SKScene { ... func loadLevelSelectScene() { // Way 1 // code only, no Xcode/SpriteKit visual level editor used let scene = LevelSelectScene(size: self.size) // same size as current scene // Way 2 // with Xcode/SpriteKit visual level editor // fileNamed is the LevelSelectScene.sks you need to create that goes with your LevelSelectScene class. guard let scene = LevelSelectScene(fileNamed: "LevelSelectScene") else { return } let transition = SKTransition.SomeTransitionYouLike view?.presentScene(scene, withTransition: transition) } } /// Level Select Scene class LevelSelectScene: SKScene { .... func loadGameScene() { // Way 1 // code only, no Xcode/SpriteKit visual level editor used let scene = GameScene(size: self.size) // same size as current scene // Way 2 // with Xcode/SpriteKit visual level editor // fileNamed is the GameScene.sks you need to create that goes with your GameScene class. guard let scene = GameScene(fileNamed: "GameScene") else { return } let transition = SKTransition.SomeTransitionYouLike view?.presentScene(scene, withTransition: transition) } } /// Game Scene class GameScene: SKScene { .... } ``` I strongly recommend you scratch your storyboard and ViewController approach, and just use different SKScenes and 1 GameViewController. Hope this helps
Running a script in FreeBSD after boot/reboot I have a simple script: ``` #!/bin/sh PROVIDE: test REQUIRE: LOGIN NETWORKING . /etc/rc.subr name="test" load_rc_config $name rcvar=test_enable cd /home/deploy/projects/test /usr/sbin/daemon -u deploy /usr/local/bin/node /home/deploy/projects/test/server.js run_rc_command "$1" ``` inside `/usr/local/etc/rc.d`. It is executable. It is registered in /etc/rc.conf. I need it to start after boot/reboot. I managed to do it with Cron using ``` @reboot ``` but it doesn't look legit. What is the proper way to run that script automatically after boot/reboot?
First of all, there's an article in the official documentation explaining how to write rc scripts: [Practical rc.d scripting in BSD](https://www.freebsd.org/doc/en/articles/rc-scripting/). It will probably answer most of your questions. When it comes to your script: 1. The keywords like `PROVIDE`, `REQUIRE`, etc. have to be comments. See the [rc(8) manual page](https://www.freebsd.org/cgi/man.cgi?query=rc&sektion=8&manpath=FreeBSD+11.2-RELEASE+and+Ports) and the [rcorder(8) manual page](https://www.freebsd.org/cgi/man.cgi?query=rcorder&apropos=0&sektion=8&manpath=FreeBSD%2011.2-RELEASE%20and%20Ports&arch=default&format=html) for more details. ``` #!/bin/sh # # PROVIDE: test # REQUIRE: LOGIN NETWORKING ``` 2. I think you also miss setting `test_enable` to a default value. ``` : "${test_enable:="NO"}" ``` 3. You don't really want to just put the instructions to start your daemon in the global scope of the script. This part of your code is bad: ``` cd /home/deploy/projects/test /usr/sbin/daemon -u deploy /usr/local/bin/node /home/deploy/projects/test/server.js ``` You should try to define a `start_cmd` function (look for `argument_cmd` in the [rc.subr(8) manual page](https://www.freebsd.org/cgi/man.cgi?query=rc.subr&sektion=8&manpath=FreeBSD+11.2-RELEASE+and+Ports) for more information) or define the `command` variable. --- All in all, the best idea is to look at other scripts in `/etc/rc.d` and `/usr/local/etc/rc.d` to see how people write those and what are the standards. This is how I've learnt it recently as I was developing a daemon for the Keybase filesystem (KBFS). You may look at the code [here](https://github.com/0mp/kbfsd/blob/master/kbfsd.in). The manpages are also helpful. Start with [rc(8)](https://www.freebsd.org/cgi/man.cgi?query=rc&sektion=8&manpath=FreeBSD+11.2-RELEASE+and+Ports) and then look at other manuals listed in the *SEE ALSO* section.
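Putting those pieces together, a sketch of the whole script could look roughly like this (the `daemon` arguments mirror the ones you already had; `test_chdir` is the standard rc.subr knob that replaces the bare `cd` in your version):

```sh
#!/bin/sh
#
# PROVIDE: test
# REQUIRE: LOGIN NETWORKING

. /etc/rc.subr

name="test"
rcvar="test_enable"

load_rc_config $name
: "${test_enable:=NO}"
: "${test_chdir:=/home/deploy/projects/test}"

command="/usr/sbin/daemon"
command_args="-u deploy /usr/local/bin/node /home/deploy/projects/test/server.js"

run_rc_command "$1"
```

With `test_enable="YES"` in /etc/rc.conf (which you already have) the service starts at boot, and you can also drive it manually with `service test start`.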
What happens when you have two jQuery $(document).ready calls in two JavaScript files used on the same HTML page? I have a question on jQuery `$(document).ready` Let's say we have a HTML page which includes 2 **JavaScript** files ``` <script language="javascript" src="script1.js" ></script> <script language="javascript" src="script2.js" ></script> ``` Now let's say in both these script files, we have `$(document)` as follows Inside **script1.js**: ``` $(document).ready(function(){ globalVar = 1; }) ``` Inside **script2.js**: ``` $(document).ready(function(){ globalVar = 2; }) ``` Now my **Questions** are: > > 1. Will both these ready event function get fired ? > 2. If yes, what will the order in which they get fired, since the > document will be ready at the same > time for both of them? > 3. Is this approach recommended OR we should ideally have only 1 > $(document).ready ? > 4. Is the order of execution same across all the browsers (IE,FF,etc)? > > > Thank you.
> > 1. Will both these ready event function get fired ? > > > Yes, they will both get fired. > > 2. what will the order in which they get fired, since the document will be ready at the same time for both of them? > > > In the way they appear (top to bottom), because the ready event will be fired once, and all the event listeners will get notified one after another. > > 3. Is this approach recommended OR we should ideally have only 1 $(document).ready ? > > > It is OK to do it like that. If you can have them in the same block code it would be easier to manage, but that's all there is to it. *Update*: Apparently I forgot to mention, you will increase the size of your JavaScript code if you do this in multiple files. > > 4. Is the order of execution same across all the browsers (IE,FF,etc)? > > > Yes, because jQuery takes the cross-browser normalization at hand.
Reordering with Bootstrap 3 Can someone help me with the HTML to reorder the columns below using Bootstrap 3: ``` ----- ----- ----- | 1 | 2 | 3 | ----- ----- ----- ``` To this: ``` ----- | 2 | ----- | 1 | ----- | 3 | ----- ``` I know this has something to do with push/pull; I just can't seem to get it right. **Edit** And some code that I can't get to work: ``` <div class="row"> <div class="col-md-8 col-xs-12">2</div> <div class="col-md-2 col-xs-12 col-md-push-2">1</div> <div class="col-md-8 col-xs-12 col-md-pull-2">3</div> </div> ``` On mobile it looks good but not on desktop. **Solution** ``` <div class="row"> <div class="col-md-8 col-xs-12 col-md-push-2">2</div> <div class="col-md-2 col-xs-12 col-md-pull-8">1</div> <div class="col-md-8 col-xs-12">3</div> </div> ```
This is pretty straightforward, the key point is that we cannot reorder the columns in mobile mode. We should think of it as mobile first, then reorder the columns on larger screens using `.col-*-push-#`, `.col-*-pull-#` helper classes: ``` .red { background: red; } .blue { background: blue; } .green { background: green; } ``` ``` <link href="http://getbootstrap.com/dist/css/bootstrap.min.css" rel="stylesheet"/> <div class="container"> <div class="row"> <div class="red col-sm-4 col-sm-push-4">2</div> <div class="green col-sm-4 col-sm-pull-4">1</div> <div class="blue col-sm-4">3</div> </div> </div> ```
Using Pragma in Oracle Package Body I'd like to create an Oracle Package and two functions in it: A public function ( `function_public` ) and a private one ( `function_private` ). The public function uses the private one in an sql statement. Without pragma the code does not compile (`PLS-00231: function 'FUNCTION_PRIVATE' may not be used in SQL`) ``` CREATE OR REPLACE PACKAGE PRAGMA_TEST AS FUNCTION function_public(x IN VARCHAR2) RETURN VARCHAR2; END PRAGMA_TEST; CREATE OR REPLACE PACKAGE BODY PRAGMA_TEST AS FUNCTION function_private(y IN VARCHAR2) RETURN VARCHAR2 IS BEGIN return 'z'; END; FUNCTION function_public(x IN VARCHAR2) RETURN VARCHAR2 IS ret VARCHAR2(100); BEGIN SELECT 'x' || function_private(x) INTO ret FROM dual; return ret; END; END PRAGMA_TEST; ``` The code compiles if I add `WNDS, WNPS` pragma to `function_private`. It seems to me pragma can be used only in the package declaration, and not in package body, so I have to declare `function_private` in the package as well: ``` CREATE OR REPLACE PACKAGE PRAGMA_TEST AS FUNCTION function_private(y IN VARCHAR2) RETURN VARCHAR2; PRAGMA RESTRICT_REFERENCES( function_private, WNDS, WNPS); FUNCTION function_public(x IN VARCHAR2) RETURN VARCHAR2; END PRAGMA_TEST; CREATE OR REPLACE PACKAGE BODY PRAGMA_TEST AS FUNCTION function_private(y IN VARCHAR2) RETURN VARCHAR2 IS BEGIN return 'z'; END; FUNCTION function_public(x IN VARCHAR2) RETURN VARCHAR2 IS ret VARCHAR2(100); BEGIN SELECT 'x' || function_private(x) INTO ret FROM dual; return ret; END; END PRAGMA_TEST; ``` This solution makes my `function_private` public as well. Is there a solution to add pragma to a function which can be found only in the package body? **UPDATE:** Replaced the pseudo-code with a working (simplified) example. **UPDATE2**: Bugfixes in the code as suggested by Rob van Wijk.
Your problem has nothing to do with PRAGMAs. As Rob says, modern Oracle versions handle most of this automatically. The problem is you can't call private functions from a SQL statement, **even ones embedded in another subprogram within the same package.** When PL/SQL executes SQL, it is handed off to the SQL engine for execution, and that essentially takes you outside the scope of the package, so it has no access to private members. This compiles fine -- no pragmas, but making the "private" function public: ``` CREATE OR REPLACE PACKAGE PRAGMA_TEST AS FUNCTION function_public(x IN VARCHAR2) RETURN VARCHAR2; FUNCTION function_private(y IN VARCHAR2) RETURN VARCHAR2; END PRAGMA_TEST; CREATE OR REPLACE PACKAGE BODY PRAGMA_TEST AS FUNCTION function_private(y IN VARCHAR2) RETURN VARCHAR2 IS BEGIN return 'z'; END; FUNCTION function_public(x IN VARCHAR2) RETURN VARCHAR2 IS ret VARCHAR2(30); BEGIN SELECT 'x' || function_private(x) INTO ret FROM dual; RETURN ret; END; END PRAGMA_TEST; ``` If you want to keep the function private, you need to see if you can rewrite the public function in such a way that the call to the private function is done outside the SQL statement: ``` CREATE OR REPLACE PACKAGE PRAGMA_TEST AS FUNCTION function_public(x IN VARCHAR2) RETURN VARCHAR2; END PRAGMA_TEST; CREATE OR REPLACE PACKAGE BODY PRAGMA_TEST AS FUNCTION function_private(y IN VARCHAR2) RETURN VARCHAR2 IS BEGIN return 'z'; END; FUNCTION function_public(x IN VARCHAR2) RETURN VARCHAR2 IS ret VARCHAR2(30); BEGIN ret := function_private(x); SELECT 'x' || ret INTO ret FROM dual; RETURN ret; END; END PRAGMA_TEST; ```
iOS API difference between 4.x and 5.x I have an iPhone 4 with iOS 4.3.2, not jailbroken. I want to upgrade it to iOS 5.x. If I do this, can I still write apps for iOS 4.3? I mean, will they work on iOS 5.x the same as on iOS 4.3? And what are the main differences between the iOS 4.x and 5.x APIs?
If you don't make any mistakes, apps that run on iOS 4.3 should run on iOS 5 too. And if you set the deployment target to iOS 4.3 you can write apps for that version with the iOS 5 SDK and the newest Xcode version too. Make sure that you don't use any iOS 5-only features if you want to support iOS 4. As usual, Apple offers plenty of documentation. - [iOS 4.3 to iOS 5.0 API Differences](http://developer.apple.com/library/ios/#releasenotes/General/iOS50APIDiff/index.html) - [iOS 5 Readiness Checklist](http://developer.apple.com/devcenter/ios/checklist/) - [iOS SDK Release Notes for iOS 5.0](http://developer.apple.com/library/ios/#releasenotes/General/RN-iOSSDK-5_0/_index.html)
Why does the compiler stop the name lookup on overloads? I just read this article: [Fun with C++ namespaces](http://pizer.wordpress.com/2008/10/22/fun-with-c-namespaces-sarcasm/), where the author shows that the compiler stops looking for overloads when encountering the first one, here by using namespaces. ``` namespace A { void f(int x); // like our std::sqrt(double) } namespace B { struct S {}; // user-defined type with associated namespace B void f(S); void f(int, int); void test1() { using namespace A; // using DIRECTIVE f(1); // ERROR namespace A is not considered because // B contains two overloads for 'f' f(1,2); // OK B::f(int,int) f(B::S()); // OK B::f(S) } void test2() { using A::f; // using DECLARATION f(1); // OK A::f(int) f(1,2); // ERROR A::f hides B::f(int,int) f(B::S()); // OK B::f(S) due to ADL! } } namespace C { void test3() { using namespace A; // using DIRECTIVE f(1); // OK A::f(int) f(B::S()); // OK B::f(S) due to ADL! } void test4() { using A::f; // using DECLARATION f(1); // OK A::f(int) f(B::S()); // OK B::f(S) due to ADL! } } ``` Why is the compiler supposed to stop? EDIT #1: The question is indeed meant to be: Why does the standard say so? Thanks for all the answers!
> > the compiler stops looking for overloads when encountering the first one > > > No, it doesn't stop *"when encountering the first one"* otherwise you couldn't find both `B::f(int,int)` and `B::f(S)`. It finds all overloads in a given scope (not only the first one), but then doesn't look further in more distant scopes. That's like all name lookup in C++, if you have a global variable called `var` and in some function you also have a local variable called `var`, using the name within the function will refer to the local variable. It's more useful that way, it's more likely that you meant to use the variable that is in declared nearby, as it's in related code. If someone hands you a letter and tells you to give it to Fred, who is standing a few metres away wearing a badge that says "I am Fred", would you ignore him and go outside and keep looking for every other person in the world called Fred?
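The same hiding happens with functions, which is exactly the situation in the question; a tiny sketch:

```cpp
void f(double);        // outer (global) scope

namespace N {
    void f(int);       // nearer scope

    void test() {
        f(3.14);       // calls N::f(int): lookup finds an f in namespace N and stops,
                       // so ::f(double) is never even considered for overload resolution
    }
}
```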
Regex for overlapping matches For a linguistics project I am trying to match all occurrences of one or two consonants between vowels in some text. I am trying to write a very simple matcher in PHP (`preg_match_all`), but once the match is consumed, it cannot match again. The following is very simple and should do the trick, but only matches the first occurrence: `[aeiou](qu|[bcdfghjklmnprstvwxyz]{1,2})[aeiou]` In: `officiosior`: `offi` and `osi` are returned, but not `ici` because the trailing `i` is the first part of the match in the second match. As far as I can tell, it's impossible to do, but is there a decent way to work around the issue?
You can use a [**Positive Lookahead**](http://www.regular-expressions.info/lookaround.html) assertion to achieve this. ``` (?=([aeiou](?:qu|[^aeiou]{1,2})[aeiou])) ``` A lookahead does not **consume** any characters on the string. After looking, the regular expression engine is back at the same position on the string from where it started looking. From there, it can start matching again... **Explanation**: ``` (?= # look ahead to see if there is: ( # group and capture to \1: [aeiou] # any character of: 'a', 'e', 'i', 'o', 'u' (?: # group, but do not capture: qu # 'qu' | # OR [^aeiou]{1,2} # any character except: 'a', 'e', 'i', 'o', 'u' # (between 1 and 2 times) ) # end of grouping [aeiou] # any character of: 'a', 'e', 'i', 'o', 'u' ) # end of \1 ) # end of look-ahead ``` [**Working Demo**](https://eval.in/170230)
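For completeness, a rough PHP sketch of how this would be used with `preg_match_all` (the word is taken from the question; the overlapping matches end up in capture group 1):

```
<?php
$pattern = '/(?=([aeiou](?:qu|[^aeiou]{1,2})[aeiou]))/';
preg_match_all($pattern, 'officiosior', $matches);
print_r($matches[1]); // should list: offi, ici, osi
?>
```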
Laravel 4: pointing a form to a controller function I can't understand, how to set up the Form action to direct to a function of a specific controller. This is my blade code: ``` {{ Form::open(array('route'=>'user.search')) }} ``` But I get this error : ``` Unable to generate a URL for the named route "user.search" as such route does not exist. ``` the controller (`UserController`) has a function with this prototype ``` public function search(){ ... } ``` I have also tried to set up a route like this in route.php ``` Route::post('user/search', 'UserController@search'); ``` What is wrong with this code?
You can do it like ``` {{ Form::open( array('url' => URL::to('user/search')) ) }} ``` Because you don't have a name for the `route`. To define a name for the route, use following syntax, ``` Route::post('user/search', array( 'as' => 'userSearch', 'uses' => 'UserController@search' )); ``` So, you can use the route by it's name, as ``` {{ Form::open( array('route' => 'userSearch') ) }} // 'search' method will be invoked ``` Also, you can directly use the `action` of a controller as ``` {{ Form::open( array('action' => 'UserController@search') ) }} ``` Check [Routing](http://laravel.com/docs/routing#named-routes) and [Form](http://laravel.com/docs/html).
Using @RequestBody and @ModelAttribute together? I'm trying to get at the body of a `POST`, and I'd like the parameters of my method to bind to an object. Is this possible? My current declaration doesn't ever get hit: ``` @RequestMapping(method = RequestMethod.POST) public void doStuff(@RequestBody byte[] bodyData, @ModelAttribute Form form, Model model ) { ``` Looks like I'm getting this exception: ``` - 2011-02-25 16:57:30,354 - ERROR - http-8080-3 - org.springframework.web.portle t.DispatcherPortlet - Could not complete request java.lang.UnsupportedOperationException: @RequestBody not supported ```
For this to work correctly, you have to be sure you're using [AnnotationMethodHandlerAdapter](http://static.springsource.org/spring/docs/3.0.1.RELEASE/javadoc-api/org/springframework/web/servlet/mvc/annotation/AnnotationMethodHandlerAdapter.html). This overrides [HandlerMethodInvoker's](http://static.springsource.org/spring/docs/3.0.1.RELEASE/javadoc-api/org/springframework/web/bind/annotation/support/HandlerMethodInvoker.html) [createHttpInputMessage](http://static.springsource.org/spring/docs/3.0.1.RELEASE/javadoc-api/org/springframework/web/bind/annotation/support/HandlerMethodInvoker.html#createHttpInputMessage%28org.springframework.web.context.request.NativeWebRequest%29) (which is throwing the exception you're seeing). (It does this in a private class.) I believe you can just include the following in your \*-servlet.xml:

```
<bean class="org.springframework.web.servlet.mvc.annotation.AnnotationMethodHandlerAdapter"/>
```

WARNING: The below answer is for the case of needing @RequestBody and @RequestParam in the same handler method. It does not answer this question, but could be of use to someone.

I've tested this out using Spring 3.0.1. This is possible, but it's somewhat precarious. You MUST have your @RequestBody method argument before your @RequestParam argument. I'm guessing this is because [HandlerMethodInvoker](http://static.springsource.org/spring/docs/3.0.1.RELEASE/javadoc-api/org/springframework/web/bind/annotation/support/HandlerMethodInvoker.html) reads the request body (along with the GET parameters) when retrieving parameters (and the request body can only be read once).

Here's an example (WARNING: I code in Scala, so I've not compiled this Java code)

```
@RequestMapping(value = "/test", method = RequestMethod.POST)
public String test(
    @RequestBody String body,
    @RequestParam("param1") String param1,
    Map<Object, Object> model)
{
  model.put("param1", param1);
  model.put("body", body);
  return "Layout";
}
```

An alternative is to use @PathVariable. I've confirmed that this works.
Read text or number from mobile camera I don't know if it is possible to read specific text from the mobile camera with JavaScript. I'm trying to make a webapp which reads an ISBN number from a book and then imports it into a database. There are some websites which convert a webapp into an APK, and I need that because I want to use the mobile camera to read the ISBN and then have my script go find information with the Amazon API etc. But how can I read from the mobile camera, and is it possible? :p
> I don't know if it is possible to read specific text from the mobile camera with JavaScript.

The first challenge you will have is to read the image from the mobile device camera. You can do that with multiple approaches; one of them is a simple

```
<input type="file" accept="image/*;capture=camera">
```

Check this [reference](https://www.html5rocks.com/en/tutorials/getusermedia/intro/) for more options

> I'm trying to make a webapp which reads an ISBN number from a book.

The second challenge will be to read the ISBN from that image. One possibility is that the ISBN is printed on books as a barcode. So you have to read that from the static image, and for that you also have several approaches; one example is to use a JS barcode reader that works in the browser, like [quaggaJS](https://serratus.github.io/quaggaJS/)

If your ISBN is plain text, you can use an OCR library (like [tesseract](http://tesseract.projectnaptha.com/), for example) to extract the text and use some regex to match the ISBN from the text read from the image.

> and then imports it into a database

You will need a webserver for that which receives the ISBN value and stores it in the database. There are several solutions offered as services, like [Firebase Database](https://firebase.google.com/docs/database/) for example, that will make your job much easier in the beginning.

> There are some websites which convert a webapp into an APK, and I need that because I want to use the mobile camera to read the ISBN

I don't think you need that; everything I said so far is supposed to work in the browser just fine.
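A minimal sketch of the first step (the element selector and the decoding step are placeholders; hand the data URL to whichever decoder you pick, quaggaJS, tesseract, etc.):

```
const input = document.querySelector('input[type=file]');
input.addEventListener('change', () => {
  const file = input.files[0];
  const reader = new FileReader();
  reader.onload = () => {
    const dataUrl = reader.result;
    // pass dataUrl to the barcode/OCR step here
  };
  reader.readAsDataURL(file);
});

// If the OCR route returns plain text, a rough ISBN-13 match could look like:
// const isbn = text.match(/97[89][-\s]?\d{1,5}[-\s]?\d+[-\s]?\d+[-\s]?\d/);
```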
LUA: Seeking efficient and error-free means of assigning default arguments Instead of using long lists of arguments in my function definitions, I prefer to pass a few fixed parameters and a table of 'additional params' like this: ``` function:doit( text, params ) end ``` This is nice as it allows me to add new named parameters later without breaking old calls. The problem I am experiencing occurs when I try to force default values for some of the params: ``` function:doit( text, params ) local font = params.font or native.systemBold local fontSize = params.fontSize or 24 local emboss = params.emboss or true -- ... end ``` The above code works fine in all cases, except where I have passed in 'false' for emboss: ``` doit( "Test text", { fontSize = 32, emboss = false } ) ``` The above code will result in emboss being set to true when I really wanted false. To be clear, what I want is for the first non-NIL value to be assigned to emboss, instead I'm getting a first non-false and non-NIL. To combat this problem I wrote a small piece of code to find the first non-NIL value in a table and to return that: ``` function firstNotNil( ... ) for i = 1, #arg do local theArg = arg[i] if(theArg ~= nil) then return theArg end end return nil end ``` Using this function I would re-write the emboss assignment as follows: ``` local emboss = firstNotNil(params.emboss, true) ``` Now, this certainly works, but it seems so inefficient and over the top. I am hoping there is a more compact way of doing this. Please note: I found this ruby construct which looked promising and I am hoping lua has something like it: ``` [c,b,a].detect { |i| i > 0 } -- Assign first non-zero in order: c,b,a ```
Lua's relational operators evaluate to the value of one of the operands (i.e. the value is not coerced to boolean) so you can get the equivalent of C's ternary operator by saying `a and b or c`. In your case, you want to use `a` if it's not `nil` and `b` otherwise, so `a == nil and b or a`: ``` local emboss = (params.emboss == nil) and true or params.emboss ``` Not as pretty as before, but you'd only need to do it for boolean parameters. --- > > [snip - Lua code] > > > Now, this certainly works, but it seems so inefficient and over the top. > > > Please note: I found this ruby construct which looked promising and I am hoping lua has > something like it: > > > [c,b,a].detect { |i| i > 0 } -- Assign first non-zero in order: c,b,a > > > Your Lua function is no more over-the-top or inefficient. The Ruby construct is more *succinct*, in terms of source text, but the semantics are not really different from `firstNotNil(c,b,a)`. Both constructs end up creating a list object, initialize it with a set of values, running that through a function that searches the list linearly. In Lua you could skip the creation of the list object by using vararg expression with `select`: ``` function firstNotNil(...) for i = 1, select('#',...) do local theArg = select(i,...) if theArg ~= nil then return theArg end end return nil end ``` > > I am hoping there is a more compact way of doing this. > > > About the only way to do that would be to shorten the function name. ;)
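Putting that back into the original function, a minimal sketch could look like this (`native.systemBold` is the default taken from the question; the `params or {}` guard is an added convenience):

```
function doit( text, params )
    params = params or {}
    local font     = params.font or native.systemBold
    local fontSize = params.fontSize or 24
    local emboss   = (params.emboss == nil) and true or params.emboss
    -- ...
end
```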
XsendFile with apache and django I have my django served by apache using Vhost. The conf file is the following ``` WSGIPythonPath /srv/www/myproject/testproject/ <VirtualHost *:80> ServerAdmin [email protected] ServerName www.betarhombus.com WSGIScriptAlias / /srv/www/testproject/testproject/testproject/wsgi.py <Directory /srv/www/testproject/testproject/testproject> <Files wsgi.py> Require all granted </Files> </Directory> Alias /static/ /srv/www/testproject/testproject/static/ Alias /media/ /srv/www/testproject/testproject/media/ <Directory /srv/www/testproject/testproject/static> Require all granted </Directory> <Directory /srv/www/testproject/testproject/media> Require all granted </Directory> </VirtualHost> ``` I want to restrict media files to being served only on speicific logged users. So I ran into XsendFile. If I understand it correctly what it does is while you have django do all the checking for the media file you want to serve it is then served by Apache as static file. So the procedure is the follwing if I am guessing right 1. Activate XsendFile. 2. Create view that checks for media files permissions etc and serves them 3. associate with url in urls.py file Then I can use ` and will work normally like if it was served by using the initial media file url. Do I understand it correctly? My questions are the following: About 1.activating XSendFile. Should this be done in conf file inside my Vhost tag? Is setting XsendFile on enough? Should I remove the Alias for media directive and also the section for the media files? I want media files only to be served by my view? Is there anything else that I should be aware of? Edit: My setup is ``` <VirtualHost *:80> ServerAdmin [email protected] ServerName www.betarhombus.com WSGIScriptAlias / /srv/www/testproject/testproject/testproject/wsgi.py XSendFile On XsendFilePath /srv/www/testproject/testproject/media/ <Directory /srv/www/testproject/testproject/testproject> <Files wsgi.py> Require all granted </Files> </Directory> Alias /static/ /srv/www/testproject/testproject/static/ <Directory /srv/www/testproject/testproject/static> Require all granted </Directory> </VirtualHost> ``` my urls.py ``` #for xsendmedia file serving url(r'^media\/(?P<path>.*)$', 'customer.views.media_xsendfile'), ``` and my view ``` def media_xsendfile(request, path): #here will be checked if user can access media response = HttpResponse() response['Content-Type']='' response['X-Sendfile']= smart_str(os.path.join(settings.MEDIA_ROOT, path)) return response ``` My problem is that some of the media files are shared normally and some are not, and get an Internal Server Error
Also make sure to set the XSendFilePath in your Apache configuration file like so, > > > ``` > XSendFile on > XSendFilePath "//path/to/files/on/disk" > <Directory "//path/to/files/on/disk"> > Order Deny,Allow > Allow from all > </Directory> > > ``` > > and include this in your view while returning the response: > > response['X-Sendfile'] = smart\_str(file\_path) > > > And to answer your questions: - Activate the XSendFile in the vhost tag - I've written above what else needs to be done in the view - I'm not sure if you should remove the media alias, the log files should tell you if having them is a problem
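Putting the pieces together, the serving view might look roughly like this; the permission check and the content-type guessing are placeholders to adapt to your own rules:

```
import mimetypes
import os

from django.conf import settings
from django.http import HttpResponse, HttpResponseForbidden
from django.utils.encoding import smart_str

def media_xsendfile(request, path):
    if not request.user.is_authenticated():  # your own access check here
        return HttpResponseForbidden()
    full_path = os.path.join(settings.MEDIA_ROOT, path)
    content_type, _ = mimetypes.guess_type(full_path)
    response = HttpResponse(content_type=content_type or 'application/octet-stream')
    response['X-Sendfile'] = smart_str(full_path)
    return response
```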
Create a single element tuple of tuple I just noticed that if you want to create a tuple with single element, being a tuple itself, you cannot do it with constructor `tuple` only with `(,)` syntax. Why is that? Example: ``` >>> tuple(list('abc')) ('a', 'b', 'c') >>> tuple(tuple(list('abc'))) ('a', 'b', 'c') >>> (tuple(list('abc')),) (('a', 'b', 'c'),) ``` But then it holds for a list ``` >>> tuple([1],) (1,) >>> tuple([1]) (1,) ```
I don't really see the issue, this adheres to the documentation: > > > ``` > class tuple(object) > | tuple() -> empty tuple > | tuple(iterable) -> tuple initialized from iterable's items > | > | If the argument is a tuple, the return value is the same object. > > ``` > > --- So, `list('abc')` always evaluates to `['a', 'b', 'c']` which is an iterable. So in the first example (`tuple(['a', 'b', 'c'])`), the result is a tuple initialised from the iterable's items. I.e. ('a', 'b', 'c'). The second example takes the result of the first example (a tuple) and passes it into the `tuple()` function once more. As the documentation states (last line), the return value when passed a tuple is the same object which matches with our result. And for the third, once more, [the docs](https://docs.python.org/3/tutorial/datastructures.html#tuples-and-sequences) tell us what we need to know: > > A special problem is the construction of tuples containing 0 or 1 items: the syntax has some extra quirks to accommodate these. Empty tuples are constructed by an empty pair of parentheses; a tuple with one item is constructed by following a value with a comma (it is not sufficient to enclose a single value in parentheses). > > > --- Finally, your last two examples (`tuple([1])` and `tuple([1],)`) both produce a one-element tuple, because you are passing an iterable of length one. The docs once again state (at top): a tuple is initialized from iterable's items. So, to conclude, the case where you need a comma is when you want to create a tuple with one element. However, if passing an iterable of length one, this is not necessary as Python understands that you are not evaluating an expression. --- For completeness, the reason this awkward syntax is unavoidable is because statements like: `(1 + 2) * 3` would evaluate to `(3, 3, 3)` rather than the expected `9`. So instead, you must go out of your way through adding a comma: `(1 + 2,) * 3` to get the result of `(3, 3, 3)` which makes perfect sense.
Python: performance comparison of using `pickle` or `marshal` and using `re` I am calculating some very large numbers using Python, and I'd like to store previously calculated results in Berkeley DB. The problem is that Berkeley DB has to use strings, and I have to store an integer tuple for the calculation results. For example, I get `(m, n)` as my result, one way is to store this as `"%d,%d" % (m, n)` and read it out using `re`. I can also store the tuple using `pickle` or `marshal`. Which has the better performance?
For pure speed, `marshal` will get you the fastest results. Timings: ``` >>> timeit.timeit("pickle.dumps([1,2,3])","import pickle",number=10000) 0.2939901351928711 >>> timeit.timeit("json.dumps([1,2,3])","import json",number=10000) 0.09756112098693848 >>> timeit.timeit("pickle.dumps([1,2,3])","import cPickle as pickle",number=10000) 0.031056880950927734 >>> timeit.timeit("marshal.dumps([1,2,3])","import marshal", number=10000) 0.00703883171081543 ```
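For the use case in the question (storing an `(m, n)` tuple in Berkeley DB), the round trip is just a pair of calls; the key and the `db` handle below are placeholders:

```
import marshal

m, n = 12345, 67890               # stand-ins for your computed results
value = marshal.dumps((m, n))     # bytes, suitable as a Berkeley DB value
# db[b"some-key"] = value         # db being your Berkeley DB handle
m2, n2 = marshal.loads(value)
assert (m2, n2) == (m, n)
```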
What is constexpr? Ok, so I'm reading the C++ Primer, Fifth Edition, and I'm learning about constexpr for the first time. It starts by saying:

> It is important to understand that when we define a pointer in a constexpr declaration, the constexpr specifier applies to the pointer, not the type to which the pointer points:

```
const int *p = nullptr;     // p is a pointer to a const int
constexpr int *q = nullptr; // q is a const pointer to int
```

OK, so I think to myself... Well, if p is a pointer to a const, then it means that p (the pointer) itself is not a constant, so I may change it. So of course, I tried it out in my IDE:

```
#include <iostream>
#include <list>
#include <vector>
#include <string>

int main()
{
    const int x = 0;
    const int y = 30;
    const int *p = x;

    *p = &y;

    return 0;
}
```

Guess what. It gave me an error when I tried to assign \*p to the address of the constant y. Well, the error specifically:

```
error: assignment of read-only location '* p'|
```

Wow, I was surprised. I really thought the book said p is a pointer to a const. So I thought p is not a constant itself, so you can change it? Or is my analogy wrong??

And then of course it tells me:

```
constexpr int *q = nullptr; // q is a const pointer to int
```

Well, if my previous analogy was correct, then this pointer is an actual const itself. So it may not be changed? Or am I still wrong?

## Constexpr

OK guys, so I understood. I shouldn't dereference when I assign pointers to "objects" or anything. But now I get this error when I try out constexpr for the first time!

```
error: invalid conversion from 'const int*' to 'int*' [-fpermissive]|
```

And this is my code:

```
int main()
{
    const int a = 0;
    const int i = 5;

    constexpr int *w = &a;

    return 0;
}
```
You have a typo. When you do `*p` you are dereferencing the pointer, which gives you access to the underlying `const int` that you cannot change.

```
p = &y;
```

on the other hand changes what `p` points to. Specifically, it changes `p` to point to `y`, which is legal.

```
#include <iostream>

int main()
{
    const int x = 0;
    const int y = 30;
    const int *p = &x;

    std::cout << *p << "\n";
    p = &y;
    std::cout << *p;

    return 0;
}
```

Output:

```
0
30
```

[Live Example](http://coliru.stacked-crooked.com/a/4ae0e88094258f7f)

I also had to change

```
const int *p = x;
```

to

```
const int *p = &x;
```

otherwise you are trying to initialize the pointer with the value of `x` and not the address of `x`.

---

Your `constexpr` error has to do with the type of the pointer and what you are trying to point to.

```
constexpr int *w = &a;
```

says: give me an `int *`, have it point to `a`, and make this a `constexpr`. Now `a` is a `const int`, not an `int`, so trying to do that would remove the `const` from `a`, which is illegal. If we change it to

```
constexpr const int *w = &a;
```

then we have the right types, but now we have a new error. `a` is not a `constexpr`, so it cannot be used in a `constexpr` initialization, as it is a local variable and will only have an address at runtime. If we make `a` `static` or a global variable, then the address will be known at compile time and we can use it in a `constexpr`.
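To make that last paragraph concrete, a minimal sketch of the fixed declaration: keep the pointee `const`, and give `a` static storage duration so its address is known at compile time.

```
static const int a = 0;        // static (or global), so &a is a compile-time constant
constexpr const int *w = &a;   // OK: constexpr pointer to const int
```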
How to connect to Oracle 11 database from .NET What is the easiest way to connect a .NET web application to an Oracle 11g database? Can EntityFramework handle this right out of the box? Or will I need some sort of ODBC plugin from Oracle? \*I'm running from a locked-down environment, so I can't really test any of these scenarios at this time. I'm currently running VS2010, but I'm looking to see if they will let me run with VS2013 (no NuGet).
I know 17 ways to connect to an Oracle Database from a .NET application. - ODBC with driver from Oracle ``` var connectString = "Driver={Oracle in OraClient11g_home1};Uid=scott;Pwd=secret;DBQ=orcl1"; var con = new System.Data.Odbc.OdbcConnection(connectString); con.Open(); ``` (exact driver name `Oracle in OraClient11g_home1` depends on installed Oracle version) - ODBC with driver from Microsoft (only for 32bit, [deprecated](https://msdn.microsoft.com/en-us/library/ms713590%28v=vs.85%29.aspx), does not work anymore with Oracle Client 18c or newer) ``` var connectString = "Driver={Microsoft ODBC for Oracle};Uid=scott;Pwd=secret;Server=orcl1"; var con = new System.Data.Odbc.OdbcConnection(connectString); con.Open(); ``` - Oracle Provider for OLE DB ``` var connectString = "Provider=OraOLEDB.Oracle;Data Source=orcl1;Password=secret;User ID=scott"; var con = new System.Data.OleDb.OleDbConnection(connectString); con.Open(); ``` - Microsoft OLE DB Provider for Oracle (only for 32bit, [deprecated](https://msdn.microsoft.com/en-us/library/ms675851%28v=vs.85%29.aspx), does not work anymore with Oracle Client 18c or newer) ``` var connectString = "Provider=MSDAORA;Data Source=orcl1;Password=secret;User ID=scott"; var con = new System.Data.OleDb.OleDbConnection(connectString); con.Open(); ``` - Microsoft .NET Framework Data Provider for Oracle ([deprecated](https://learn.microsoft.com/de-de/archive/blogs/adonet/system-data-oracleclient-update)) ``` var connectString = "Data Source=orcl1;User ID=scott;Password=secret"; var con = new System.Data.OracleClient.OracleConnection(connectString); con.Open(); ``` - Oracle Data Provider for .NET (ODP.NET) ``` var connectString = "Data Source=orcl1;User ID=scott;Password=secret"; var con = new Oracle.DataAccess.Client.OracleConnection(connectString); con.Open(); ``` - Oracle Data Provider for .NET, Managed Driver (ODP.NET Managed Driver) ``` var connectString = "Data Source=orcl1;User ID=scott;Password=secret"; var con = new Oracle.ManagedDataAccess.Client.OracleConnection(connectString); con.Open(); ``` - dotConnect for Oracle from [Devart](https://www.devart.com/dotconnect/oracle/) (formerly known as OraDirect .NET from Core Lab) ``` var connectString = "Data Source=orcl1;User ID=scott;Password=secret"; var con = new Devart.Data.Oracle.OracleConnection(connectString); con.Open(); ``` - dotConnect Universal from Devart (uses deprecated `System.Data.OracleClient`) ``` var connectString = "Provider=OracleClient;Data Source=orcl1;User ID=scott;Password=secret"; var con = new Devart.Data.Universal.UniConnection(connectString); con.Open(); ``` - ODBC with driver from Devart ``` var connectString = "Driver={Devart ODBC Driver for Oracle};Uid=scott;Pwd=secret;Server=orcl1"; var con = new System.Data.Odbc.OdbcConnection(connectString); con.Open(); ``` - DataDirect Connect for ADO.NET from [Progress](https://www.progress.com/connectors/oracle-database) ``` var connectString = "Data Source=orcl1;User ID=scott;Password=secret"; var con = new DDTek.Oracle.OracleConnection(connectString); con.Open(); ``` - ODBC with driver from Progress ``` var connectString = "Driver={DataDirect 8.0 Oracle Wire Protocol};Uid=scott;Pwd=secret;ServerName=orcl1"; var con = new System.Data.Odbc.OdbcConnection(connectString); con.Open(); ``` - ODBC with Oracle Driver from [Easysoft](https://www.easysoft.com/products/data_access/odbc_oracle_driver/index.html) (did not work for me, I guess it does not support TNS alias resolution from OID/LDAP) ``` var connectString = "Driver={Easysoft 
ODBC-Oracle Driver};Database=orcl1;Uid=scott;Pwd=secret;Server=orcl1;SID=orcl1"; var con = new System.Data.Odbc.OdbcConnection(connectString); con.Open(); ``` - ODBC with Oracle WP Driver from Easysoft (did not work for me, I guess it does not support TNS alias resolution from OID/LDAP) ``` var connectString = "Driver={Easysoft ODBC-Oracle WP Driver};Database=orcl1;Uid=scott;Pwd=secret;Server=orcl1;SID=orcl1"; var con = new System.Data.Odbc.OdbcConnection(connectString); con.Open(); ``` - ADO.NET Provider for Oracle OCI from [CData](https://www.cdata.com/drivers/oracledb/) ``` var connectString = "Data Source=orcl1;User=scott;Password=secret"; var con = new System.Data.CData.OracleOci.OracleOciConnection(connectString); con.Open(); ``` - ODBC with Driver for Oracle OCI from CData ``` var connectString = "Driver={CData ODBC Driver for Oracle OCI};Data Source=orcl1;User=scott;Password=secret"; var con = new System.Data.Odbc.OdbcConnection(connectString); con.Open(); ``` - ODBC with Oracle Driver with SQL Connector from [Magnitude (formerly Simba)](https://www.magnitude.com/drivers/oracle-odbc-jdbc) ``` var connectString = "Driver={Simba Oracle ODBC Driver};TNS=orcl1;UID=scott;PWD=secret"; var con = new System.Data.Odbc.OdbcConnection(connectString); con.Open(); ``` In general all of them are working. For new application you should use *ODP.NET* or *ODP.NET Managed Driver*. *ODP.NET Managed Driver* is quite new and has still a few limitations and also the "newest" bugs. The third party providers may come with additional costs. Apart from *ODP.NET Managed Driver*, Progress and *Easysoft ODBC-Oracle WP Driver* all drivers/providers need to have an Oracle (Instant-) Client installed. I developed an application in [github](https://github.com/Wernfried/connection-tester) which runs all these 32 (17 64-bit + 15 32-bit) variants at once.
Trouble translating small C# "Random String" function to Haskell I saw the following code in my companies codebase and thought to myself "Damn that's a fine line of linq, I'd like to translate that to Haskell to see what it's like in an actual functional language" ``` static Random random = new Random(); static string RandomString(int length) { const string chars = "ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"; return new string(Enumerable.Repeat(chars, length) .Select(s => s[random.Next(s.Length)]) .ToArray()); } ``` However I'm having a bit of trouble getting a concise and direct translation to Haskell because of how awkward it is to generate random numbers in this language. I've considered a couple of approaches. The most direct translation of the C# code only generates a single random index and then uses that in place of the `random.Next(s.Length)`. But I need to generate multiple indexes, not a single one. Then I considered doing a list of `IO Int` random number actions, but I can't work out how to go through and convert the list of `IO` actions into actual random numbers. So the Haskell code that I end up writing ends up looking quite convoluted compared to the C# (which I wasn't expecting in this case) and I haven't even got it to work anyway. My question, what would be a natural translation of the C# to Haskell? Or more generally, how would you go about generating a random String of a specified length in Haskell (because this C# way doesn't seem to translate well to Haskell)? Note: I'm mainly interested in what the algorithm to generate a random string looks like in Haskell. I'm not really interested in any standard libraries which do the job for me
The natural translation to Haskell involves having some sort of `IO` (as you need randomness). Since you are essentially trying to perform the action of choosing a character `n` times, you want to use [`replicateM`](http://hackage.haskell.org/package/base-4.9.0.0/docs/Control-Monad.html#v:replicateM). Then, for getting a random number in a range, you can use [`randomRIO`](http://hackage.haskell.org/package/random-1.1/docs/System-Random.html#v:randomRIO).

```
import Control.Monad (replicateM)
import System.Random (randomRIO)

randomString :: Int -> IO String
randomString n = replicateM n (do r <- randomRIO (0,m); pure (chars !! r))
  where
    chars = ['A'..'Z'] ++ ['0'..'9']
    m = length chars - 1
```

This is somewhat complicated by the fact that you want a string of only characters in a certain range. Otherwise, you'd have a one liner: `randomString n = replicateM n randomIO`.

---

That said, the more faithful translation would use [`conduit`](https://www.stackage.org/haddock/lts-7.10/conduit-combinators-1.0.8.2/Data-Conduit-Combinators.html). I'm not going to include imports and language pragmas (because they are a bit painful). This looks a lot more like what you would write in C#:

```
randomString' :: Int -> IO String
randomString' n = runConduit
                $ replicate n chars
               .| mapM (\cs -> do r <- randomRIO (0,m); pure (cs !! r))
               .| sinkList
  where
    chars = ['A'..'Z'] ++ ['0'..'9']
    m = length chars - 1
```
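For a quick check, a minimal `main` using the first version:

```
main :: IO ()
main = randomString 10 >>= putStrLn
```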
What can lead to "IOError: [Errno 9] Bad file descriptor" during os.system()? I am using a scientific software including a Python script that is calling `os.system()` which is used to run another scientific program. While the subprocess is running, Python at some point prints the following: ``` close failed in file object destructor: IOError: [Errno 9] Bad file descriptor ``` I believe that this message is printed at the same time as `os.system()` returns. My questions now are: Which conditions can lead to this type of IOError? What does it exactly mean? What does it mean for the subprocess that has been invoked by `os.system()`?
You get this error message if a Python file was closed from "the outside", i.e. not from the file object's `close()` method: ``` >>> f = open(".bashrc") >>> os.close(f.fileno()) >>> del f close failed in file object destructor: IOError: [Errno 9] Bad file descriptor ``` The line `del f` deletes the last reference to the file object, causing its destructor `file.__del__` to be called. The internal state of the file object indicates the file is still open since `f.close()` was never called, so the destructor tries to close the file. The OS subsequently throws an error because of the attempt to close a file that's not open. Since the implementation of `os.system()` does not create any Python file objects, it does not seem likely that the `system()` call is the origin of the error. Maybe you could show a bit more code?
Unique index in MongoDB Can someone please help me with creation of unique index @ mongoDB for the following condition ? Suppose I have a schema like this, and I want a unique compound index on 1.path 2.verb 3."switches.name" 4."switches.value" ``` { "path": "somepath", "verb": "GET", "switches": [ { "name": "id", "value":"123" }, { "name": "age", "value":"25" } ] } ``` So if I try to insert ``` { "path": "somepath", "verb": "GET", "switches": [ { "name": "id", "value":"123" }, { "name": "age", "value":"25" } ] } ``` I should get duplicate error, but if I insert ``` { "path": "somepath", "verb": "GET", "switches": [ { "name": "id", "value":"123" }, { "name": "age", "value":"24" } ] } ``` or ``` { "path": "somepath", "verb": "GET", "switches": [ { "name": "id", "value":"123" }, { "name": "age", "value":"25" }, { "name": "foo", "value":"bar" } ] } ``` I shouldn't get any error. Basically, I should be allowed to insert documents with distinct array "switches".
I am afraid what you want to achieve cannot be by your current schema. First you have to understand how array on indexes work: <https://docs.mongodb.com/manual/core/index-multikey/#multikey-indexes> > > To index a field that holds an array value, **MongoDB creates an index key for each element in the array**. > > > This suggests that arrays are not indexed as single objects, but are *unwinded* into multiple objects. To achieve a similar result to what you want, you may consider using object property maps instead: ``` { "path" : "somepath", "verb" : "GET", "switches" : { "id": "123", "age": "25" } } ``` And create as unique index as usual: ``` db.yourCollection.createIndex({"path": 1, "verb": 1, "switches": 1}, {"unique": true}); ``` **However**, this is generally undesirable because querying for the key is non-trivial [(Also read about this here)](https://stackoverflow.com/questions/9589856/mongo-indexing-on-object-arrays-vs-objects). So alternatively, you can wrap the array within another object instead: ``` { "path" : "somepath", "verb" : "GET", "switches" : { "options": [ { "name" : "id", "value" : "123" }, { "name" : "age", "value" : "25" } ] } } ``` With the same index: ``` db.yourCollection.createIndex({"path": 1, "verb": 1, "switches": 1}, {"unique": true}); ``` If you plan to execute queries on the `switches.options` array, this would be the more desirable solution.
how to select element by class name in typescript? I am trying to design a responsive menu bar that collapses on small screens; however, I am using TypeScript. Is there an equivalent to this code in TypeScript?

```
function myFunction() {
    document.getElementsByClassName("topnav")[0].classList.toggle("responsive");
}
```

I changed it to this code in TypeScript, but it never works:

```
myFunction(): void {
    (<HTMLScriptElement[]><any>document.getElementsByClassName("topnav"))[0].classList.toggle("responsive");
}
```
There's no need to change anything because typescript is a superset of javascript, so even regular javascript can be typescript. With that being said, you can add some typescript features: ``` function myFunction(): boolean { let elements: NodeListOf<Element> = document.getElementsByClassName("topnav"); let classes: DOMTokenList = elements[0].classList; return classes.toggle("responsive"); } ``` But there's no need to break things apart like that, so you can just have your exact code, but maybe add a return type to the function signature: ``` function myFunction(): void { document.getElementsByClassName("topnav")[0].classList.toggle("responsive"); } ``` Or ``` function myFunction(): boolean { return document.getElementsByClassName("topnav")[0].classList.toggle("responsive"); } ```
RxSwift: Observable while a button is held down How to create an Observable which streams an event repeatedly while a button is held down?
Even I was looking for a solution for your question. I got help from RxSwift slack channel. ``` let button = submitButton.rx_controlEvent([.TouchDown]) button .flatMapLatest { _ in Observable<Int64>.interval(0.1, scheduler: MainScheduler.instance) .takeUntil(self.submitButton.rx_controlEvent([.TouchUpInside])) } .subscribeNext{ x in print("BOOM \(x)") } .addDisposableTo(disposeBag) //prints BOOM 0 BOOM 1 BOOM 2 BOOM 3 BOOM 4 BOOM 5 for every 0.1 seconds ``` And also Check [Interval Documentation](http://reactivex.io/documentation/operators/interval.html).Thanks to @jari of RxSwift slack channel.
Django how to change bounds value of IntegerRangeField I have model with ***IntegerRangeField*** in it, which returns data as, for example, ***NumericRange(1992, 1997, '[)')*** after adding objects to the database. I need second option of range to be also included in it, so the bounds should be like *'[]'*. I was looking for the solution in the [psycopg2 documentation](http://initd.org/psycopg/docs/extras.html#adapt-range), but unfortunatelly didn't find an answer. Please help! Thanks in advance. My current *models.py* ``` from django.contrib.postgres.fields import IntegerRangeField from django.contrib.postgres.validators import RangeMinValueValidator, RangeMaxValueValidator from django.core.validators import MinValueValidator, MaxValueValidator from django.db import models from datetime import datetime class Bancnote(models.Model): Dollar= 'Dollar' Euro= 'Euro' TYPE_CHOICES = ( (Dollar, 'Dollar'), (Euro, 'Euro') ) type = models.CharField(max_length=11, choices=TYPE_CHOICES, default=Dollar) par = models.PositiveIntegerField() year = IntegerRangeField() size = models.CharField(max_length=7) sign = models.TextField(max_length=250) desc = models.TextField(max_length=500) image = models.ImageField(upload_to='bons_images') def __str__(self): return str(self.par) + ' ' + self.type + ' ' + str(self.year.lower) + '-' + str(self.year.upper) ```
The official [documentation](https://docs.djangoproject.com/en/1.11/ref/contrib/postgres/fields/#integerrangefield) says: > > Regardless of the bounds specified when saving the data, PostgreSQL > always returns a range in a canonical form that includes the lower > bound and excludes the upper bound; that is [). > > > So I'm afraid you can't really change that. You can however store the data as usual: ``` from psycopg2.extras import NumericRange Bancnote.objects.create( # specify other fields year=NumericRange(1992, 1997, bounds='[]') ) # assuming the object we created is with id=1 bancnote = Bancnote.objects.get(id=1) print(bancnote.year) # NumericRange(1992, 1998, '[)') ``` which I suppose you do. It's not much of a help, but that's all I can contribute.
UIButton takes up its own size Autolayout What I tried was this :- ``` UIButton *btn = [UIButton buttonWithType:UIButtonTypeCustom]; [self.view addSubview:btn]; btn.translatesAutoresizingMaskIntoConstraints = NO; [btn addTarget:self action:@selector(bringUpNextViewController:) forControlEvents:UIControlEventTouchUpInside]; btn.titleLabel.font = [UIFont fontWithName:@"HelveticaNeue" size:14]; [btn setTitle:@"8" forState:UIControlStateNormal]; [self.view layoutIfNeeded]; NSLog(@"button size : %@", NSStringFromCGSize(btn.frame.size)); ``` As output, I get this : ``` button size : {30, 29} ``` Then I gave `setTitle` string as nothing. The button width was still 30. So why is this the case always? I also tried giving a `high compression resistance priority` and `high content hugging priority`. Doesn't shrink to nothing. The problem is also the fact that I want to reduce the width of the button simply based on its content, without giving any fixed width. I could take the width of text and give the button the width, but I shouldn't be needing to do that either if the button was taking up the content width. EDIT: Its not the insets either which is causing the width to be `30`. Ghost value.
A button is made of several subviews. It's very likely that the internal layout of a button has some default padding between the label and the button view itself. Making a button like yours and examining the constraints shows the following: ``` button constraints ( "<NSContentSizeLayoutConstraint:0x8c40a60 H:[UIButton:0x8f29840(30)] Hug:250 CompressionResistance:750>", "<NSContentSizeLayoutConstraint:0x8c55280 V:[UIButton:0x8f29840(29)] Hug:250 CompressionResistance:750>" ) ``` The 30 and 29 tie up with the size values you are seeing. The intrinsic content size property of the button also returns 30,29. Basically this is the minimum size for a button, in the absence of anything else. It's not quite clear what you want or why you are bothered by this. Anything smaller will be a poor touch target, and a button with no label or image will be invisible anyway. If you add a longer title, the button will get bigger. If you add other constraints to force particular sizes, then these will override the intrinsic content size. If you want the button to become invisible when it has no title, then you should explicitly hide it. This makes your intentions in the code much clearer and will prevent the user from accidentally hitting a button they can't really see.
NodeJS Managed Hostings vs VPS There are a bunch of managed cloud based hosting services for nodejs [out there](https://github.com/joyent/node/wiki/Node-Hosting) which seem relatively new and some still in Beta. Yet another path to host a nodejs app is setting up a stack on a VPS like Linode. I'm wondering what's the basic difference here between these two kinds of deployment. Which factors should one consider in choosing one over another? Which one is more suitable for production considering how young these services are. To be clear I'm not asking on choosing a provider but to decide whether to host on a managed nodejs specific hosting or on an old fashioned self setup VPS.
Using one of the services is for the most part hands off - you write your code and let them worry about managing the box, keep your process up, creating the publishing channel, patching the OS, etc... In contrast having your own VM gives you more control but with more up front and ongoing time investment. Another consideration is some hosters and cloud providers offer proprietary or distinct variations on technologies. They have reasons for them and they offer value but it does mean that if you want to switch cloud providers, it might mean you have to rewrite code, deployment scripts etc... On the other hand using VMs with standard OS as the baseline is pretty generic. If you automate/script/document the configuration of your VMs and your code stays generic, then your options stay open. If you do take a dependency on a proprietary cloud technology then it would be good to abstract it away behind an interface so it's a decoupled component and not sprinkled throughout your code. I've done both. I did the VM path recently mostly because I wanted the learning experience. I had to: - get the VM from the cloud provider - I had to update and patch the OS - I had to install and configure git as a publishing channel - I had to write some scripts and use things like forever to keep it running - I had to configure the reverse http-proxy to get it to run multiple sites. - I had to configure DNS with the cloud provider, open ports for git etc... The list goes on. In the end, it cost me more up front time not coding but I learned about a lot more things. If those are important to you, then give it a shot. If you want to focus on writing your code, then a node hosting provider may be for you. At the end of it, I had also had more options - I wanted to add a second site. I added an entry to my reverse proxy, append my script to start up another app with forever, voila, another site. More control. After that, I wanted to try out MongoDB - simple - installed it. Cost wise they're about the same but if you start hosting multiple sites with many other packages like databases etc..., then the VM can start getting cheaper. [Nodejitsu open sourced](http://www.nodejitsu.com/open-source.html) their tools which also makes it easier if you do your own. If you do it yourself, here's some links that may help you: Keeping the server up: <https://github.com/nodejitsu/forever/> <http://blog.nodejitsu.com/keep-a-nodejs-server-up-with-forever> <https://github.com/bryanmacfarlane/svchost> Upstart and Monit generic auto start and restart through monitoring <http://howtonode.org/deploying-node-upstart-monit> Cluster Node Runs one process per core <http://nodejs.org/docs/latest/api/cluster.html> Reverse Proxy <https://github.com/nodejitsu/node-http-proxy> <https://github.com/nodejitsu/node-http-proxy/issues/232> <http://blog.nodejitsu.com/http-proxy-middlewares> <https://github.com/nodejitsu/node-http-proxy/issues/168#issuecomment-3289492> <http://blog.argteam.com/coding/hardening-node-js-for-production-part-2-using-nginx-to-avoid-node-js-load/> Script the install <https://github.com/bryanmacfarlane/svcinstall> [Exit Shell Script Based on Process Exit Code](https://stackoverflow.com/questions/90418/exit-shell-script-based-on-process-exit-code) Publish Site [Using git to publish to a website](https://stackoverflow.com/questions/4938239/using-git-to-publish-to-a-website)
Understanding lifecycle\_rules and ignore\_changes I'll try to give some background, we're deploying a Grafana instance via Terraform and GitLab CI/CD Pipelines. The first time the pipeline runs the instance loads perfectly and we can access the grafana UI in a web browser. HOWEVER, if we then re-run the pipeline with changes, we will get a HTTP 500 error when trying to hit the grafana UI in a web browser again, every 'even' number run (2, 4, 6, 8, etc.) will cause this issue but the 'odd' number runs work fine. I've found the fix to be to add `ignore_changes` block to the ASG, ignore changes to the `load_balancers` and `target_group_arns` - as is recommended by Terraform (<https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/autoscaling_group>) However I'm struggling to understand what the implications of this change actually are, why does this fix the issue? I've had a Google to try and find some explanation but I can't say I understand any that I've read. Could anyone help explain what adding these lifecycle rules to the ASG actually do?
The `ignore_changes` setting causes terraform to not consider a resource to require an update if only ignored properties change, and to not touch those attributes when actually performing an update.

A typical case for this is anything related to autoscaling:

- you deploy the services using terraform and specify that you want 2 instances
- autoscaling takes care of upscaling to X instances
- the next deployment would come along and downgrade to 2 instances again - not ideal!

That is why you literally tell terraform to ignore the changes related to some attributes (e.g. `desired_count`), such that terraform does not roll back scaling changes that happened during the actual application lifecycle.

---

Another example: if you have a bucket specifying KMS SSE and then upload an object into that bucket using terraform but do not specify a KMS key for that object, then the object will inherit the KMS key of the bucket. But in the terraform code you did not specify the key, meaning that during the next deployment terraform would try to change / remove the encryption on the object. That is why we often have `ignore_changes` for the `kms_key_id` set up in that case.

---

If you want to figure out why the `ignore_changes` is required / recommended in your situation, I would advise you to look at the plan that terraform generates without the `ignore_changes` in place. Try to understand which property changes, and try to reason about which resources those properties point to and why a change in them might be expected / unexpected.

I am not familiar enough with the `autoscaling_group` to reason about it.
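For reference, the block in question would look roughly like this on the ASG resource (the resource name and the attribute list are assumed from the question; exact syntax depends on your Terraform version):

```
resource "aws_autoscaling_group" "grafana" {
  # ... existing configuration ...

  lifecycle {
    ignore_changes = [load_balancers, target_group_arns]
  }
}
```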
Typescript - Import vs "declare var" with type definitions Building an Angular 2 app using Typescript, I am attempting to import the popular `d3` library. I have installed the type definitions using `TSD`, and I am referencing the `tsd.d.ts` file correctly: `/// <reference path="../../../tools/typings/tsd/tsd.d.ts" />` Now, I want to `import` my `d3` node\_module. It was installed via `NPM`:

```
/// <reference path="../../../tools/typings/tsd/tsd.d.ts" />
import * as d3 from 'd3/d3';
```

This works, but I don't get any benefit from my type definitions. My IDE is not providing any type-ahead info or syntax highlighting. If I change it to this:

```
/// <reference path="../../../tools/typings/tsd/tsd.d.ts" />
import * as d3 from 'd3';
```

I now get all of the syntax highlighting/type-ahead definitions that I am expecting. However, my app is looking for a file at `node_modules/d3.js` which doesn't exist, so this obviously doesn't work. When I change my import statement to a `var` declaration, my app compiles correctly and I get all the appropriate typescript definitions:

```
/// <reference path="../../../tools/typings/tsd/tsd.d.ts" />
declare var d3 = require('d3/d3');
```

So, my question is simply what is the right approach? What is the difference in `import` vs `declare var`, and is there a way to get type definitions when using `import` if they are not included in the npm module itself? I've noticed things like `import {Component} from 'angular2/core';` work fine, but the type definitions are included within the same directory as the javascript file I am importing.
`import * as d3 from 'd3/d3';` should work fine with the type system (without the `///<reference .../>`) as long as the compiler options are correct, and the folder structure is correct in the typings folder. `declare var d3` is how to declare a variable that exists somewhere in the JS. Think of it like "Yeah yeah typescript, quit complaining, trust me it exists". `import {Component} from 'angular/core';` is how to pull a specific piece from a module. In node terms this translates to `var Component = require('angular/core').Component;` The important compiler option to have on is `"moduleResolution": "node"`, which should already be on for angular to function. So if `d3` was installed as a node\_module then you should be able to simply use: ``` npm install d3 npm install --save-dev @types/d3 tsc ``` then ``` import * as d3 from 'd3'; ```
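As a sketch, the relevant part of `tsconfig.json` would look something like this (the `module` and `target` values are assumptions that depend on your build setup):

```
{
  "compilerOptions": {
    "moduleResolution": "node",
    "module": "commonjs",
    "target": "es5"
  }
}
```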
sort 2d array by date in python 3.3 I have 3 arrays that are 2d and inside them is a string that is a date i would like to sort these by that date in them. The arrays are all structured like this: ``` array1 = [[1,'29-04-2013','U11'],[2,'20-05-2013','U11']] array2 = [[1,'06-05-2013','U13'],[2,'03-06-2013','U13']] array3 = [[1,'06-03-2013','U15'],[2,'03-07-2013','U15']] ``` I would like to get them into an array like this: ``` all = [[1,'06-03-2013','U15'],[1,'29-04-2013','U11'],[1,'06-05-2013','U13'],[2,'20-05-2013','U11'],[2,'03-06-2013','U13'],[2,'03-07-2013','U15']] ``` I just need some sort of way to approach this as i havent got a clue how i would do it.Thanks for the help in advance
```
big_array = array1 + array2 + array3

import dateutil.parser as p
print(sorted(big_array, key=lambda x: p.parse(x[1])))
```

if for some reason you are opposed to dateutil.parser

```
import datetime
print(sorted(big_array, key=lambda x: datetime.datetime.strptime(x[1], "%d-%m-%Y")))
```

the reason I recommend datetime over the regular time module is that datetime can see as far in the future as I've tested ... while the time module only works up to like 2035. However, you can also do it with the time module:

```
import time
print(sorted(big_array, key=lambda x: time.strptime(x[1], "%d-%m-%Y")))
```
Writing a simple webserver in Ruby I want to create an extremely simple web server for development purposes in Ruby (no, don’t want to use ready solutions). Here is the code: ``` #!/usr/bin/ruby require 'socket' server = TCPServer.new('127.0.0.1', 8080) while connection = server.accept headers = [] length = 0 while line = connection.gets headers << line if line =~ /^Content-Length:\s+(\d+)/i length = $1.to_i end break if line == "\r\n" end body = connection.readpartial(length) IO.popen(ARGV[0], 'r+') do |script| script.print(headers.join + body) script.close_write connection.print script.read end connection.close end ``` The idea is to run this script from the command line, providing another script, which will get the request on its standard input, and gives back the complete response on its standard output. So far so good, but this turns out to be really fragile, as it breaks on the second request with the error: ``` /usr/bin/serve:24:in `write': Broken pipe (Errno::EPIPE) from /usr/bin/serve:24:in `print' from /usr/bin/serve:24 from /usr/bin/serve:23:in `popen' from /usr/bin/serve:23 ``` Any idea how to improve the above code to be sufficient for easy use? Versions: Ubuntu 9.10 (2.6.31-20-generic), Ruby 1.8.7 (2009-06-12 patchlevel 174) [i486-linux]
The problem appears to be in the child script, since the parent script in your question runs on my box (Debian Squeeze, Ruby 1.8.7 patchlevel 249): I created the dummy child script bar.rb: ``` #!/usr/bin/ruby1.8 s = $stdin.read $stderr.puts s print s ``` I then ran your script, passing it the path to the dummy script: ``` $ /tmp/foo.rb /tmp/bar.rb ``` The I hit it with wget: ``` $ wget localhost:8080/index ``` And saw the dummy script's output: ``` GET /index HTTP/1.0^M User-Agent: Wget/1.12 (linux-gnu)^M Accept: */*^M Host: localhost:8080^M Connection: Keep-Alive^M ^M ``` I also saw that wget received what it sent: ``` $ cat index GET /index HTTP/1.0 User-Agent: Wget/1.12 (linux-gnu) Accept: */* Host: localhost:8080 Connection: Keep-Alive ``` It worked the same no matter how many times I hit it with wget.
Best practices for Denormalizing data from Relational to non-relational DBs I'm running a website that's starting to grow beyond simple performance and tuning. It's a PHP app with MySQL as backend. MySQL is properly tuned and the code is optimized. The thing is that I see that I can use some sort of denormalization to speed things up. Suppose you have a site similar to eBay or Amazon. You have products in your database with some related information (seller, customers who bought the product, city, state, etc). That would be multiple tables in a relational database, and it is good to keep it this way to make good queries. But, for example, for the home page, you could have one single denormalized document (for example in MongoDB). It could be a collection with the latest products, denormalized, similar to this:

```
products = {
    {
     id:13,
     name:"Some product",
     city:"aCity",
     state:"aState",
     price:"10"
    },
    {
     id:123,
     name:"another product",
     city:"aCity",
     state:"aState",
     price:"10"
    }
}
```

This way, I could query that collection instead of the MySQL database (with all the joins involved) and things could get really fast. Now, here is the question. When and how would you denormalize that data? For example, I could decide that I want to denormalize the data when it's inserted. So, in my "create-product.php" (simply put), I could do all the "insert into" for MySQL, and after that I could do the save to the Mongo collection. Or, I could just run a program on the server. Or make some cron to look for the latest products. All these are possibilities. What do you do? What is your experience? Thanks a lot.
Conceptually you are creating some kind of a cache, and you're foreseeing that populating it is going to be time-expensive, hence you want to have it persistent, on the reasonable assumption that loading from your persisted cache is going to be faster than going back to the real DB.

There are some variations on your idea: caching HTML page fragments or JSON strings, and using a distributed in-memory cache - not persistent but fault-tolerant.

The big question with all caching solutions is: "how stale can I afford to be?". For some data, being 24 hours out of date doesn't really matter. For example: Top 10 most popular books? Latest reviews? For those, just some batch update will do.

For more urgent stuff you may well need to ensure that there's a more rapid update, but you really want to avoid putting too much extra processing in the mainstream. For example, it would be a shame to give a customer a slow purchasing experience because he's waiting for an update to a cache. In those cases you might just drop a "Here's an update" message on a queue, or indeed a "your entry number 23 is now stale" message, let the cache pick that up at its leisure and, if need be, refresh itself.
HTML as sandboxed iframe's `src` (IE11/Edge) With HTML5, is there any way in IE11/Edge to populate a sandboxed iframe (`<iframe sandbox></iframe>`) with HTML other than using a `src` url? I am looking for a solution like [`srcdoc`](https://caniuse.com/#feat=iframe-srcdoc) which works with all other modern browsers. Using `src` with a [data URI](https://caniuse.com/#feat=datauri) is not an option [according to Microsoft](https://msdn.microsoft.com/en-us/library/cc848897(v=vs.85).aspx) as it "cannot be used [to] populate frame or iframe elements." Surprisingly, this does works for me in Edge but only for data URIs with less than 4096 characters. All other options that I have found, e.g. in *[Alternatives to iframe srcdoc?](https://stackoverflow.com/questions/13214419/alternatives-to-iframe-srcdoc)* and *[Html code as IFRAME source rather than a URL](https://stackoverflow.com/questions/6102636/html-code-as-iframe-source-rather-than-a-url)* do not work for a sandboxed iframe.
Assuming usage of `<iframe sandbox="allow-scripts">` is desired or acceptable, a possible workaround would be using [`window.postMessage()`](https://developer.mozilla.org/en-US/docs/Web/API/Window/postMessage) with the following setup: *index.html:* ``` <!doctype html> <html> <body> <iframe onload="connectIframe()" sandbox="allow-scripts" src="iframeConnect.html" name="srcdocloader"></iframe> <script> var SRCDOC_HTML = '<html><body><script src="https://code.jquery.com/jquery-3.2.1.js"><\/script><script>console.log("loaded srcdoc and dependencies", jQuery);<\/script><h1>done!</h1></body></html>'; var loaded; function connectIframe (event) { if (!loaded) { loaded = true; window.frames.srcdocloader.postMessage(SRCDOC_HTML, '*'); } else { onloadSrcdoc(); } } function onloadSrcdoc () { // ... } </script> </body> </html> ``` *iframeConnect.html:* ``` <!doctype html> <script> window.addEventListener("message", handler); function handler(event) { if (event.source === window.parent) { window.removeEventListener("message", handler); document.write(event.data); document.close(); } } </script> ``` Note that the iframe's `onload` event will be triggered two times. The second time will be after the `srcdoc` html and all its dependencies got loaded.
Java.util.Date: try to understand UTC and ET more I live in North Carolina, btw, which is on the East Coast. So I compile and run this code and it prints out the same thing. The documentation says that java.util.Date tries to reflect UTC time.

```
Date utcTime = new Date();
Date estTime = new Date(utcTime.getTime() + TimeZone.getTimeZone("ET").getRawOffset());

DateFormat format = new SimpleDateFormat("dd/MM/yy h:mm a");

System.out.println("UTC: " + format.format(utcTime));
System.out.println("ET: " + format.format(estTime));
```

And this is what I get:

```
UTC: 11/05/11 11:14 AM
ET: 11/05/11 11:14 AM
```

But if I go to this [website](http://tycho.usno.navy.mil/cgi-bin/timer.pl), which shows the time in different zones, UTC and ET are different. What did I do wrong here?
That's because `getRawOffset()` is returning 0 - it does that for me for "ET" as well, and in fact `TimeZone.getTimeZone("ET")` basically returns GMT. I suspect that's not what you meant. The best Olson time zone name for North Carolina is "America/New\_York", I believe. Note that you shouldn't just add the raw offset of a time zone to a UTC time - you should set the time zone of the formatter instead. A `Date` value doesn't really know about a time zone... it's always just milliseconds since January 1st 1970 UTC. So you can use:

```
import java.text.*;
import java.util.*;

Date date = new Date();
DateFormat format = new SimpleDateFormat("dd/MM/yy h:mm a zzz");

format.setTimeZone(TimeZone.getTimeZone("America/New_York"));
System.out.println("Eastern: " + format.format(date));

format.setTimeZone(TimeZone.getTimeZone("Etc/UTC"));
System.out.println("UTC: " + format.format(date));
```

Output:

```
Eastern: 11/05/11 11:30 AM EDT
UTC: 11/05/11 3:30 PM UTC
```

I'd also recommend that you look into using `java.time` now - which is much, much better than the `java.util` classes.
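As a rough sketch of what that might look like with `java.time` (assuming Java 8 or later; this is just an illustration, not the only way to write it):

```
import java.time.ZoneId;
import java.time.ZonedDateTime;
import java.time.format.DateTimeFormatter;

ZonedDateTime eastern = ZonedDateTime.now(ZoneId.of("America/New_York"));
DateTimeFormatter formatter = DateTimeFormatter.ofPattern("dd/MM/yy h:mm a zzz");

// The same instant, rendered in two zones - no offset arithmetic needed.
System.out.println("Eastern: " + eastern.format(formatter));
System.out.println("UTC: " + eastern.withZoneSameInstant(ZoneId.of("UTC")).format(formatter));
```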
CSS div doesn't show up **HTML:**

```
<html>
<head>
    <link rel="stylesheet" type="text/css" href="style.css">
</head>
<body>
    <div id= "body">
    Hello
        <div id= "notr">
            <div id="slika">
            </div>
            <div id="besedilo">
            </div>
        </div>
    </div>
</body>
</html>
```

**CSS:**

```
body {
    background-color: brown;
    font-family: "comic sans ms";
    border: 1 solid black;
    font-size: 200%;
}

notr {
    border: 1 solid black;
    height: 20px;
    width: 30px;
}
```

The div named "notr" is the problem: I can't see it. The page should look like this: <http://shrani.najdi.si/?3L/x5/3gUJm56s/sa.jpg> "notr" is the div that should have the light-brown color.
With CSS, you need to label IDs like this:

```
#notr {
    border:1px solid black;
    height:20px;
    width:30px;
}
```

Without the `#` it is looking for a tag, like `<notr>`, which is obviously not what you want. Heads up, I also took the liberty of correcting your `border` syntax ... you need to include a measurement unit (`px`, `em`, or `rem`), so it would be `1px solid black`. **Edit** The reason it has the "same background-color" as `body` (the parent of that `div`), is because by default `div`s have `background-color:transparent;`. If you wanted to give it a different color, try something like this:

```
#notr {
    background-color:white;
    border:1px solid black;
    height:20px;
    width:30px;
}
```

And voila ... magically it is a different color. ;) [Even have a jsFiddle to prove it!](http://jsfiddle.net/t3FQe/) **Another edit** Looking at your answer, *"notr" is the div that should have the light-brown color*. Well then it shouldn't be on the body bro!

```
body {
    font-family: "comic sans ms";
    font-size: 200%;
}

#notr {
    background-color: brown;
    border:1px solid black;
    height:20px;
    width:30px;
}
```

[Here is a second jsFiddle](http://jsfiddle.net/t3FQe/2/), which I think gives what you want.
How to draw an arc with SwiftUI? I'd like to draw an arc shape with SwiftUI. I was looking for something like a segment modifier to use on Circle(), but I couldn't find one. I should be able to set a start and end angle.
You should really check this: <https://developer.apple.com/tutorials/swiftui/drawing-paths-and-shapes> And here's a shortcut: [![enter image description here](https://i.stack.imgur.com/EUoeL.png)](https://i.stack.imgur.com/EUoeL.png) ``` import SwiftUI struct ContentView : View { var body: some View { MyShape() } } struct MyShape : Shape { func path(in rect: CGRect) -> Path { var p = Path() p.addArc(center: CGPoint(x: 100, y:100), radius: 50, startAngle: .degrees(0), endAngle: .degrees(90), clockwise: true) return p.strokedPath(.init(lineWidth: 3, dash: [5, 3], dashPhase: 10)) } } ```
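If you want the start and end angles to be configurable (as asked), a small sketch along these lines should work - the shape fits itself to whatever frame it is given, and the names here are just for illustration:

```
import SwiftUI

struct Arc: Shape {
    var startAngle: Angle
    var endAngle: Angle
    var clockwise: Bool = true

    func path(in rect: CGRect) -> Path {
        var p = Path()
        // Centre the arc in the rect and size it to the smaller dimension.
        p.addArc(center: CGPoint(x: rect.midX, y: rect.midY),
                 radius: min(rect.width, rect.height) / 2,
                 startAngle: startAngle,
                 endAngle: endAngle,
                 clockwise: clockwise)
        return p
    }
}

struct ContentView: View {
    var body: some View {
        Arc(startAngle: .degrees(0), endAngle: .degrees(120))
            .stroke(lineWidth: 3)
            .frame(width: 120, height: 120)
    }
}
```

Keep in mind that SwiftUI's flipped coordinate system means the `clockwise` flag can look inverted compared to what you might expect, so you may need to flip it for your use case.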
Number of rows changes even after `pandas.merge` with `left` option I am merging two data frames using `pandas.merge`. Even after specifying `how = left` option, I found the number of rows of merged data frame is larger than the original. Why does this happen? ``` panel = pd.read_csv(file1, encoding ='cp932') before_len = len(panel) prof_2000 = pd.read_csv(file2, encoding ='cp932').drop_duplicates() temp_2000 = pd.merge(panel, prof_2000, left_on='Candidate_u', right_on="name2", how="left") after_len = len(temp_2000) print(before_len, after_len) > 12661 13915 ```
This sounds like there being more than one row in `right` under `'name2'` that matches a key you have set on the `left`. Using the option `how='left'` with [`pandas.DataFrame.merge()`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.merge.html) only means that:

> 
> - left: use only keys from left frame
> 
> 

However, the actual number of rows in the result object is not necessarily going to be the same as the number of rows in the `left` object. Example:

```
In [359]: df_1
Out[359]:
   A    B
0  a  AAA
1  b  BBA
2  c  CCF
```

and then another DF that looks like this (notice that there is more than one entry for the key coming from the left):

```
In [360]: df_3
Out[360]:
  key  value
0   a      1
1   a      2
2   b      3
3   a      4
```

If I merge these two on `left.A`, here's what happens:

```
In [361]: df_1.merge(df_3, how='left', left_on='A', right_on='key')
Out[361]:
   A    B  key  value
0  a  AAA    a    1.0
1  a  AAA    a    2.0
2  a  AAA    a    4.0
3  b  BBA    b    3.0
4  c  CCF  NaN  NaN
```

This happened even though I merged with `how='left'`: as you can see above, there was simply more than one row to merge, and as shown here the resulting `pd.DataFrame` in fact has more rows than the `pd.DataFrame` on the `left`. I hope this helps!
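If the goal is to keep the row count of the left frame, one option (sketched below with the column names from the question; `file1`/`file2` are placeholder paths) is to de-duplicate the right frame on the merge key itself. Note that `drop_duplicates()` without a `subset` only removes rows that are identical in every column, so duplicates in just `name2` survive it:

```
import pandas as pd

file1 = "panel.csv"      # placeholder
file2 = "prof_2000.csv"  # placeholder

panel = pd.read_csv(file1, encoding='cp932')
prof_2000 = pd.read_csv(file2, encoding='cp932')

# How many merge keys on the right are duplicated?
print(prof_2000['name2'].duplicated().sum())

# Keep only one row per key before merging.
prof_unique = prof_2000.drop_duplicates(subset='name2')
temp_2000 = pd.merge(panel, prof_unique, left_on='Candidate_u',
                     right_on='name2', how='left')

print(len(panel), len(temp_2000))  # these should now match
```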
ASP.NET Identity OWIN Middleware Google OAuth2 AuthenticationManager SignIn not working I have created a simple ASP.NET MVC4 web site to test the new OWIN Authentication middleware, I decided to start with Google OAuth2, I have had struggle quite a bit with the configuration but I have managed to have Google to authorize the user, the problem I have right now is that OWIN is not authenticating the user. I think I have the proper settings in the web config ``` <system.web> <authentication mode="None" /> </system.web> <system.webServer> <modules> <remove name="FormsAuthenticationModule" /> </modules> </system.webServer> ``` Then I have in the Startup class a very simple configuration ``` public partial class Startup { public void Configuration(IAppBuilder app) { ConfigureAuth(app); } public void ConfigureAuth(IAppBuilder app) { app.UseExternalSignInCookie(DefaultAuthenticationTypes.ExternalCookie); // Enable the External Sign In Cookie. app.SetDefaultSignInAsAuthenticationType(DefaultAuthenticationTypes.ExternalCookie); // Enable Google authentication. app.UseGoogleAuthentication(GetGoogleOptions()); } private static GoogleOAuth2AuthenticationOptions GetGoogleOptions() { var reader = new KeyReader(); var keys = reader.GetKey("google"); var options = new GoogleOAuth2AuthenticationOptions() { ClientId = keys.Public, ClientSecret = keys.Private }; return options; } } ``` In the `AccountController` I have coded the actions the following way which is again very simple but it should work. ``` [AllowAnonymous, HttpPost, ValidateAntiForgeryToken] public ActionResult ExternalLogin(string provider, string returnUrl) { return new ChallengeResult(provider, Url.Action("ExternalLoginCallback", "Account", new { ReturnUrl = returnUrl })); } [AllowAnonymous, HttpGet] public async Task<ActionResult> ExternalLoginCallback(string returnUrl) { var loginInfo = await AuthenticationManager.GetExternalLoginInfoAsync(); if (loginInfo == null || !loginInfo.ExternalIdentity.IsAuthenticated) { return RedirectToAction("Login"); } var identity = new ClaimsIdentity(new[] { new Claim(ClaimTypes.Name, loginInfo.DefaultUserName), new Claim(ClaimTypes.Email, loginInfo.Email) }, DefaultAuthenticationTypes.ExternalCookie); AuthenticationManager.SignOut(DefaultAuthenticationTypes.ExternalCookie); AuthenticationManager.SignIn(new AuthenticationProperties { IsPersistent = false }, identity); return RedirectToLocal(returnUrl); } ``` The main problem I'm having is that the call to the method `AuthenticationManager.SignIn` doesn't appear to be doing anything, even though Google is granting access to the request, when the user is redirected to the home page in which I have the following code ``` @using Microsoft.AspNet.Identity @{ Layout = "~/Views/Shared/_Main.cshtml"; } <h2>Welcome</h2> @{ if (Request.IsAuthenticated) { <p>Welcome @User.Identity.GetUserName()</p> } else { @Html.ActionLink("Login", "Login", "Account") } } ``` The value of `Request.IsAuthenticated` is always false, anybody has an idea as to what am I missing here? From what I read online this should be working. I have cookies enabled in my browser and other Google OAuth samples that I have that rely on the `UserManager` class work but this simple implementation I have is not working
After countless hours of searching the web for answers I decided to debug the OWIN source code to find a solution to this problem. During the debugging session I came across this gem in the `AuthenticationHandler` class

```
if (BaseOptions.AuthenticationMode == AuthenticationMode.Active)
{
    AuthenticationTicket ticket = await AuthenticateAsync();
    if (ticket != null && ticket.Identity != null)
    {
        Helper.AddUserIdentity(ticket.Identity);
    }
}
```

In my original `Startup` class I was enabling the external sign in cookie with this method

```
app.UseExternalSignInCookie(DefaultAuthenticationTypes.ExternalCookie);
```

This method was using a default `CookieAuthenticationOptions` instance that had `AuthenticationMode = AuthenticationMode.Passive`, and this was preventing the class from reading the information stored in the cookie; that way, on every new request the OwinContext was not loading the authenticated identity, which resulted in `Request.IsAuthenticated` being false. After I realized this, all I did was replace `app.UseExternalSignInCookie(DefaultAuthenticationTypes.ExternalCookie);` with this

```
app.UseCookieAuthentication(new CookieAuthenticationOptions()
{
    AuthenticationMode = AuthenticationMode.Passive,
    AuthenticationType = DefaultAuthenticationTypes.ExternalCookie,
    ExpireTimeSpan = TimeSpan.FromMinutes(30)
});
```

and everything worked beautifully
Code First entity framework and foreign keys Imagine I have a database created by code first that looks like this.

```
Movie (table)
ID int PK
MoveiName nvarchar(100)

Actor (table)
ID int PK
ActorName nvarchar(100)
Movie_ID int FK
```

These are the models I would have used, obviously not exactly like this but you'll get my point.

```
class Movie
{
   int ID{get;set;}
   string MovieName {get;set;}
   List<Actor> Actors {get;set}
}

class Actor
{
   int ID{get;set}
   string Actorname{get;set}
   Movie Movie{get;set;}
   int Move_ID{get;set;}
}
```

Database first enables me to query the movie from an actor but also lets me set the foreign key property of the actor to update the database. I can't do this in code first because the foreign key isn't in the model, only the object. I prefer not to get the Movie object and set the property; if I know the ID of my movie I'd rather just set the foreign key without the call to get the movie. Make sense? lol I want to use code first and entity migrations but I want the same ability as database first to set a foreign key on an object. Cheers for the help :-D
First change your code to this

```
public class Movie
{
  public int ID { get; set; }
  public string MovieName { get; set; }
  public List<Actor> Actors { get; set; }
}

public class Actor
{
  public int ID { get; set; }
  public string ActorName { get; set; }
  public Movie Movie { get; set; }
  public int MovieID { get; set; }
}
```

By [convention](http://msdn.microsoft.com/en-gb/data/jj679962.aspx) EF will know that `MovieID` is the foreign key to `Movie`. Then change the `Actor` code to this:

```
public class Actor
{
  public int ID { get; set; }
  public string ActorName { get; set; }
  public List<Movie> Movies { get; set; }
}
```

Actors appear in many movies. Movies have many actors.... :-) But going back to your one-to-many where an actor only appears in one movie - if you really want the foreign key to be `Move_ID` then add a [data annotation](http://msdn.microsoft.com/en-gb/data/jj591583#relationships):

```
[ForeignKey("Move_ID")]
public Movie Movie { get; set; }
```

Or configure it using [`FluentAPI`](http://msdn.microsoft.com/en-us/data/jj591620)

```
modelBuilder.Entity<Actor>()
            .HasRequired(c => c.Movie)
            .WithMany(d => d.Actors)
            .HasForeignKey(c => c.Move_ID);
```
How to interpret errors.ubuntu.com graph data? The graph and bar chart at the Ubuntu [Error reports](https://errors.ubuntu.com/) page seem to contain a lot of information. But I'm puzzled about the meaning of some of the values, and the page doesn't refer to any documentation. - What does the "frequency" column measure, in what units? - What does the "Mean Time Between Failure" vertical axis on the graph mean? - What do the "If all updates were installed" vs "Actual" toggle mean? For me, clicking "Actual" just blanks the whole graph. - Where is the code that generates the page? **Update**: And where does the data come from? Is this related to [ErrorTracker: how can I track a bug that caused a crash and was reported via apport / whoopsie?](https://askubuntu.com/questions/140379/errortracker-how-can-i-track-a-bug-that-caused-a-crash-and-was-reported-via-app)?
> 
> What does the "frequency" column measure, in what units?
> 
> 

Number of instances of that problem for the selected period. An instance is one person experiencing a specific error. These errors have signatures which make them unique. A grouping of all of the instances with the same signature is a "problem". In simpler terms, the frequency is the number of times this specific problem was encountered and reported.

> 
> What does the "Mean Time Between Failure" vertical axis on the graph mean?
> 
> 

This has since been replaced with "Average number of crashes." This is the total number of reports seen in the day divided by the number of unique users sending those reports.

> 
> What do the "If all updates were installed" vs "Actual" toggle mean? For me, clicking "Actual" just blanks the whole graph.
> 
> 

This is a placeholder. "If all updates were installed" will show the graph only for those users who had completely up to date systems. The gap between this ideal line and the "actual" line tells us the degree to which we need to fix our updates mechanism.

> 
> Where is the code that generates the page?
> 
> 

[lp:errors](http://code.launchpad.net/errors)
selenium get current url after loading a page I'm using Selenium Webdriver in Java. I want to get the current url after clicking the "next" button to move from page 1 to page 2. Here's the code I have: ``` WebDriver driver = new FirefoxDriver(); String startURL = //a starting url; String currentURL = null; WebDriverWait wait = new WebDriverWait(driver, 10); foo(driver,startURL); /* go to next page */ if(driver.findElement(By.xpath("//*[@id='someID']")).isDisplayed()){ driver.findElement(By.xpath("//*[@id='someID']")).click(); driver.manage().timeouts().implicitlyWait(30, TimeUnit.SECONDS); wait.until(ExpectedConditions.visibilityOfElementLocated(By.xpath("//*[@id='someID']"))); currentURL = driver.getCurrentUrl(); System.out.println(currentURL); } ``` I have both the implicit and explicit wait calls to wait for the page to be fully loaded before I get the current url. However, it's still printing out the url for page 1 (it's expected to be the url for page 2).
Like you said, since the xpath for the next button is the same on every page it won't work. It's working as coded in that it does wait for the element to be displayed, but since it's already displayed the wait doesn't apply because it doesn't need to wait at all. Why don't you use the fact that the url changes, since from your code it appears to change when the next button is clicked? I do C# but I guess in Java it would be something like:

```
WebDriver driver = new FirefoxDriver();
String startURL = //a starting url;
String currentURL = null;
WebDriverWait wait = new WebDriverWait(driver, 10);

foo(driver,startURL);
/* go to next page */
if(driver.findElement(By.xpath("//*[@id='someID']")).isDisplayed()){
    final String previousURL = driver.getCurrentUrl();
    driver.findElement(By.xpath("//*[@id='someID']")).click();
    driver.manage().timeouts().implicitlyWait(30, TimeUnit.SECONDS);
    ExpectedCondition<Boolean> e = new ExpectedCondition<Boolean>() {
          public Boolean apply(WebDriver d) {
            return !d.getCurrentUrl().equals(previousURL);
          }
        };

    wait.until(e);
    currentURL = driver.getCurrentUrl();
    System.out.println(currentURL);
}
```
Chrome doesn't get the selected html string wrapping tags (contenteditable) I'm using [this](https://stackoverflow.com/a/6668159/1491124) solution by Tim Down to get selected html in a contenteditable div, and it's working fine (thank you Tim!) But using Chrome, if I select a html string exactly at the boundaries of a html tag, as in this image: <https://i.stack.imgur.com/tBqlf.png>: what I get it's just plain text (`test` in this case). If I expand the selection to a next character (letter `c` for example), instead I get the correct html (`<strong>test</strong> c`). Can I get the full html in Webkit by selecting a word like in the image? Thanks
Not really. WebKit normalizes each boundary of any range when it's added to the selection so that it conforms to WebKit's idea of valid selection/caret positions in the document. You could change the original function so that it detects the case of a selection containing all the text within an element and expanding the selection range to surround that element (without actually changing the selection). Here's a simple example (you may need something cleverer for a more general case, such as when the text is inside nested elements, detecting block/inline elements, etc.): Demo: <http://jsfiddle.net/btLeg/> Code: ``` function adjustRange(range) { range = range.cloneRange(); // Expand range to encompass complete element if element's text // is completely selected by the range var container = range.commonAncestorContainer; var parentElement = container.nodeType == 3 ? container.parentNode : container; if (parentElement.textContent == range.toString()) { range.selectNode(parentElement); } return range; } function getSelectionHtml() { var html = "", sel, range; if (typeof window.getSelection != "undefined") { sel = window.getSelection(); if (sel.rangeCount) { var container = document.createElement("div"); for (var i = 0, len = sel.rangeCount; i < len; ++i) { range = adjustRange( sel.getRangeAt(i) ); container.appendChild(range.cloneContents()); } html = container.innerHTML; } } else if (typeof document.selection != "undefined") { if (document.selection.type == "Text") { html = document.selection.createRange().htmlText; } } return html; } ```
OpenCV Assertion Error when converting image to greyscale When trying to convert an image to grey scale in opencv I get the following error message which can be seen here: <https://i.stack.imgur.com/9C3kg.png> Here's the code: ``` import cv2 img = cv2.imread('pictures\chessBoard.png',0) gray_image = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) ``` These error messages are pretty cryptic, especially to someone new to opencv. Thanks for the help!
the 0 flag in imread forces your image into grayscale already, thus the later conversion fails. so either skip the conversion: ``` gray_image = cv2.imread('pictures\chessBoard.png',0) cv2.imshow('image',gray_image) cv2.waitKey(0) ... ``` or read a bgr image, and convert later ``` img = cv2.imread('pictures\chessBoard.png') gray_image = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) cv2.imshow('image',gray_image) cv2.waitKey(0) ``` just as a reminder, here are the imread() flags again : ``` >>> help(cv2) ... IMREAD_ANYCOLOR = 4 IMREAD_ANYDEPTH = 2 IMREAD_COLOR = 1 IMREAD_GRAYSCALE = 0 IMREAD_LOAD_GDAL = 8 IMREAD_UNCHANGED = -1 ... ```
Quicksort- how pivot-choosing strategies affect the overall Big-oh behavior of quicksort? I have came up with several strategies, but I am not entirely sure how they affect the overall behavior. I know the average case is O(NlogN), so I would assume that would be in the answer somewhere. I want to just put NlogN+1 for if I just select the 1st item in the array as the the pivot for the quicksort, but I don't know whether that is either correct nor acceptable? If anyone could enlighten me on this subject that would be great. Thanks! Possible Strategies: a) Array is random: pick the first item since that is the most cost effective choice. b) Array is mostly sorted: pick middle item so we are likely to compliment the binary recursion of splitting in half each time. c) Array is relatively large: pick first, middle and last indexes in array and compare them, picking the smallest to ensure we avoid worst case. d) Perform 'c' with randomly generated indexes to make selection less deterministic.
An important fact you should know is that in an array of distinct elements, quicksort with a random choice of pivot will run in O(n lg n). There are many good proofs of this, and [the one on Wikipedia](http://en.wikipedia.org/wiki/Quicksort#Randomized_quicksort_expected_complexity) actually has a pretty good discussion of this.

If you're willing to go for a slightly less formal proof that's mostly mathematically sound, the intuition goes as follows. Whenever we pick a pivot, let's say that a "good" pivot is a pivot that gives us at least a 75%/25% split; that is, it's greater than at least 25% of the elements and at most 75% of the elements. We want to bound the number of times that we can get a pivot of this sort before the algorithm terminates.

Suppose that we get k splits of this sort and consider the size of the largest subproblem generated this way. It has size at most (3/4)^k · n, since on each iteration we're getting rid of at least a quarter of the elements. If we consider the specific case where k = log_{3/4}(1/n) = log_{4/3}(n), then the size of the largest subproblem after k good pivots are chosen will be 1, and the recursion will stop. This means that once we get O(lg n) good pivots, the recursion will terminate.

But on each iteration, what's the chance of getting such a pivot? Well, if we pick the pivot randomly, then there's a 50% chance that it's in the middle 50% of the elements, and so on expectation we'll choose two random pivots before we get a good pivot. Each step of choosing a pivot takes O(n) time, and so we should spend roughly O(n) time before getting each good pivot. Since we get at most O(lg n) good pivots, the overall runtime is O(n lg n) on expectation.

An important detail in the above discussion is that if you replace the 75%/25% split with any constant split - say, a (100 - k%) / k% split - the overall asymptotic analysis is the same. You'll get that quicksort takes, on average, O(n lg n) time.

The reason that I've mentioned this proof is that it gives you a good framework for thinking about how to choose a pivot in quicksort. If you can pick a pivot that's pretty close to the middle on each iteration, you can guarantee O(n lg n) runtime. If you can't guarantee that you'll get a good pivot on any iteration, but can say that on expectation it takes only a constant number of iterations before you get a good pivot, then you can also guarantee O(n lg n) expected runtime.

Given this, let's take a look at your proposed pivot schemes.

For (a), if the array is random, picking the first element as the pivot is essentially the same as picking a random pivot at each step, and so by the above analysis you'll get O(n lg n) runtime on expectation.

For (b), if you know that the array is mostly sorted, then picking the median is a good strategy. The reason is that if we can say that each element is "pretty close" to where it should be in the sorted sequence, then you can make an argument that every pivot you choose is a good pivot, giving you the O(n lg n) runtime you want. (The term "pretty close" isn't very mathematically precise, but I think you could formalize this without too much difficulty if you wanted to.)

As for (c) and (d), of the two, (d) is the only one guaranteed to get O(n lg n) on expectation. If you deterministically pick certain elements to use as pivots, your algorithm will be vulnerable to deterministic sequences that can degenerate it to O(n^2) behavior.
There's actually a really interesting paper on this called ["A Killer Adversary for Quicksort"](http://www.cs.dartmouth.edu/~doug/mdmspe.pdf) by McIlroy that describes how you can take any deterministic quicksort and construct a pathological worst-case input for it by using a malicious comparison function. You almost certainly want to avoid this in any real quicksort implementation, since otherwise malicious users could launch DoS attacks on your code by feeding in these killer sequences to force your program to sort in quadratic time and thus hang. On the other hand, because (d) is picking its sample points randomly, it is not vulnerable to this attack, because on any sequence the choice of pivots is random. Interestingly, though, for (d), while it doesn't hurt to pick three random elements and take the median, you don't need to do this. The earlier proof is enough to show that you'll get O(n lg n) on expectation with a single random pivot choice. I actually don't know if picking the median of three random values will improve the performance of the quicksort algorithm, though since quicksort is always Ω(n lg n) it certainly won't be asymptotically any better than just picking random elements as the pivots. I hope that this helps out a bit - I really love the quicksort algorithm and all the design decisions involved in building a good quicksort implementation. :-)
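For concreteness, here is a minimal sketch (in Python, purely for illustration) of quicksort with a uniformly random pivot - the strategy the expected-case analysis above applies to:

```
import random

def quicksort(a, lo=0, hi=None):
    """In-place quicksort with a uniformly random pivot (strategy (d) without the median-of-three)."""
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:
        return
    # Pick a random pivot and move it to the end.
    p = random.randint(lo, hi)
    a[p], a[hi] = a[hi], a[p]
    pivot = a[hi]
    i = lo
    for j in range(lo, hi):          # Lomuto partition
        if a[j] < pivot:
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi] = a[hi], a[i]        # put the pivot in its final place
    quicksort(a, lo, i - 1)
    quicksort(a, i + 1, hi)

data = [5, 2, 9, 1, 7, 3]
quicksort(data)
print(data)  # [1, 2, 3, 5, 7, 9]
```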
different behavior of keyword "this" between node.js and browsers I've tried following code in ie, firefox and node.js ``` var x = 10; var o = { x: 15 }; function f(){ console.log(this.x); } f(); f.call(o); ``` the results in browsers are 10, 15, but the result in node.js is undefined, 15. Please explain to me what is the different behavior of “this” keyword in browsers and node.js? I've read many pages but there wasn't any obvious answer. Thanks in advance.
Javascript files loaded in Nodejs are automatically wrapped in anonymous functions. So in Node what you are really running is:

```
(function(/* There are args here, but they aren't important for this answer */){
    var x = 10;
    var o = { x: 15 };
    function f(){
        console.log(this.x);
    }
    f();
    f.call(o);
})();
```

The browser does not do this. The issue is that now in Node `x` is just a normal variable in the scope of the function; it is not part of the global scope. When you call `f()` this way, `this` within `f` is the global scope. If you put `x` directly on the global scope, it will work in both cases.

```
this.x = 10;
```

That will place `x` on the `window` global object in the browser, and the `global` global object in Node. Generally, you do not load things globally in Node; instead you group your code into modules, as [described here](http://nodejs.org/docs/latest/api/modules.html). There is info about the various global things you can access [here](http://nodejs.org/api/globals.html). And if you are curious about the wrapper, you can see it [here](https://github.com/joyent/node/blob/master/src/node.js#L735).
How does Java differ on different platforms? I am currently in high school. I was recently browsing the internet looking at what employers in the software industry usually want and what the job requirements are like. I came across a job description and one of the requirements is:

> 
> Strong, object-oriented design and coding skills (C/C++ and/or Java
> preferably on a UNIX or Linux platform)
> 
> 

Note the last part: **Java preferably on a UNIX or Linux platform**. I don't understand this. Isn't Java run inside a virtual environment/machine? Why would it matter what OS it is running on since Java cannot directly interact with the OS?
A developer job description may require experience with some OS for several reasons: 1. First, as you noticed already, there are languages that talk directly to the OS and the code needs to be aware of the underlying OS (like C/C++, which are listed in your job description). 2. Secondly, even if the programming language abstracts away anything that's OS-specific from you (including the file-system / path separators), you are still going to deploy / configure / run / monitor your applications on top of some OS and you need to know (at least) the basics in order to do that. For a Java job description, If UNIX/Linux "is a plus", it usually means you're going to run your code on a UNIX/Linux system and you should know how to start a process (your own java app or some application server), how to deploy an application in a container, how to read log files and so on...
How can I understand REINFORCE with baseline is not an actor-critic algorithm? I read [Sutton's RL book](https://drive.google.com/file/d/1xeUDVGWGUUv1-ccUMAZHJLej2C7aAFWY/view) and I found on page 333

> 
> Although the REINFORCE-with-baseline method learns both a policy and a state-value function, we do not consider it to be an actor–critic method because its state-value function is used only as a baseline, not as a critic. That is, it is not used for bootstrapping (updating the value estimate for a state from the estimated values of subsequent states), but only as a baseline for the state whose estimate is being updated.
> 
> 

The pseudo code of REINFORCE-with-baseline is [![enter image description here](https://i.stack.imgur.com/1y1Zf.png)](https://i.stack.imgur.com/1y1Zf.png) And the pseudo code of actor-critic is [![enter image description here](https://i.stack.imgur.com/zFfxs.png)](https://i.stack.imgur.com/zFfxs.png) In the above pseudo code, how can I understand **bootstrapping**? I think REINFORCE-with-baseline and actor-critic are similar, and it is hard for beginners to tell them apart.
The difference is in how (and when) the prediction error estimate $\delta$ is calculated. In REINFORCE with baseline: $\qquad \delta \leftarrow G - \hat{v}(S\_t,\mathbf{w})\qquad$ ; after the episode is complete In Actor-critic: $\qquad \delta \leftarrow R +\gamma \hat{v}(S',\mathbf{w}) - \hat{v}(S,\mathbf{w})\qquad$ ; online *Bootstrapping* in RL is when the learned estimate $\hat{v}$ from a successor state $S'$ is used to construct the update for a preceding state $S$. This kind of self-reference to the learned model so far allows for updates at every step, but at the expense of initial bias towards however the model was initialised. On balance, the faster updates can often lead to more efficient learning. However the bias can lead to instability. In REINFORCE, the final return $G$ is used instead, which is the same value as you would use in Monte Carlo control. The value of $G$ is not a bootstrap estimate, it is a direct sample of the return seen when behaving with the current policy. As a result it is not biased, but you have to wait to the end of each episode before applying updates.
Does BDD/TDD imply an automatable client? I agree with all the fundamental ideas of BDD and try to use it as much as I can. However, one thing that strikes me is that the outside in development and tests that express a scenario need to have control over the client. The term outside in development refers to the practice of developing software by writing domain level/high level tests first, and using these as a guide to implement functionality. For an excellent guide to this approach, please see [Growing Object Oriented Software (GOOS) book](http://rads.stackoverflow.com/amzn/click/0321503627). Therefore, web client automation frameworks and desktop UI automation tools become a key component of BDD/TDD. In my case, I develop middle-tier and back-end software, so the clients of my software are usually service layers such as Undertow or Tomcat etc. or standalone executables that consume my code in the form of a library. In this case, I find myself writing code that describes what the client does such as `client.ConfirmsMessagesAreInTheQueue();` I am OK with this, because it also makes me think about and implement how a client interacts with my code but it also makes me think that no matter what the architecture at hand is, BDD assumes that there is either an automatable client, or I have to write one. Otherwise, I can't describe a feature without some sort of interaction with the client, which may be a UI operated by the user or a piece of code that runs as a service. Do I get it right? Is this the norm for BDD/TDD? **Update:** I must confess I got a bit too focused in scenarios in which the UI automation is included in the BDD scope. A very common example of this is when Selenium is used to automate UI interaction in end to end tests. Admittedly, this is not the scope definition for BDD. [This blog post](http://skipoleschris.blogspot.co.uk/2010/11/best-way-to-apply-bdd.html) seems to make the distinction nicely.
> > tests that express a scenario need to have control over the client > > > Tests are **a** client. There is no "the client". A client that uses a service relies on that service exhibiting a certain behavior. A test, be it unit, integration, acceptance, whatever, is an attempt to confirm that actual and expected behaviors of the service are the same. A test takes on the role of a client when it talks to, and listens to, the service the same way any other client would. That way a service under test can simply do what it always does. For this to work the service has to allow different clients to talk to it. When people talk about code being testable that's what they mean. So tests do not need to have "control over the client". They need to be a client that can be swapped in to replace your "operational" client(s).
RDD filter in scala spark I have a dataset and i want to extract those (review/text) which have (review/time) between x and y, for example ( 1183334400 < time < 1185926400), here are part of my data: ``` product/productId: B000278ADA product/title: Jobst Ultrasheer 15-20 Knee-High Silky Beige Large product/price: 46.34 review/userId: A17KXW1PCUAIIN review/profileName: Mark Anthony "Mark" review/helpfulness: 4/4 review/score: 5.0 review/time: 1174435200 review/summary: Jobst UltraSheer Knee High Stockings review/text: Does a very good job of relieving fatigue. product/productId: B000278ADB product/title: Jobst Ultrasheer 15-20 Knee-High Silky Beige Large product/price: 46.34 review/userId: A9Q3932GX4FX8 review/profileName: Trina Wehle review/helpfulness: 1/1 review/score: 3.0 review/time: 1352505600 review/summary: Delivery was very long wait..... review/text: It took almost 3 weeks to recieve the two pairs of stockings . product/productId: B000278ADB product/title: Jobst Ultrasheer 15-20 Knee-High Silky Beige Large product/price: 46.34 review/userId: AUIZ1GNBTG5OB review/profileName: dgodoy review/helpfulness: 1/1 review/score: 2.0 review/time: 1287014400 review/summary: sizes recomended in the size chart are not real review/text: sizes are much smaller than what is recomended in the chart. I tried to put it and sheer it!. ``` my Spark-Scala Code : ``` import org.apache.hadoop.conf.Configuration import org.apache.hadoop.io.{LongWritable, Text} import org.apache.hadoop.mapreduce.lib.input.TextInputFormat import org.apache.spark.{SparkConf, SparkContext} object test1 { def main(args: Array[String]): Unit = { val conf1 = new SparkConf().setAppName("golabi1").setMaster("local") val sc = new SparkContext(conf1) val conf: Configuration = new Configuration conf.set("textinputformat.record.delimiter", "product/title:") val input1=sc.newAPIHadoopFile("data/Electronics.txt", classOf[TextInputFormat], classOf[LongWritable], classOf[Text], conf) val lines = input1.map { text => text._2} val filt = lines.filter(text=>(text.toString.contains(tt => tt in (startdate until enddate)))) filt.saveAsTextFile("data/filter1") } } ``` but my code does not work well, how can i filter these lines?
It is much simpler than that. Try this:

```
object test1 {
  def main(args: Array[String]): Unit = {
    val conf1 = new SparkConf().setAppName("golabi1").setMaster("local")
    val sc = new SparkContext(conf1)

    def extractDateAndCompare(line: String): Boolean= {
        val from = line.indexOf("/time: ") + 7
        val to = line.indexOf("review/text: ") -1
        val date = line.substring(from, to).toLong
        date > startDate && date < endDate
    }

    sc.textFile("data/Electronics.txt")
        .filter(extractDateAndCompare)
        .saveAsTextFile("data/filter1")
  }
}
```

I usually find those intermediate auxiliary methods to make things much clearer. Of course, this assumes the boundary dates are defined somewhere and that the input file contains no format issues. I did this intentionally to keep this simple, but adding a try, returning an Option and using flatMap() can help you avoid errors if you have them. Also, your raw text is a little cumbersome, you might want to explore JSON, TSV files or some other alternative, easier format.
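As a rough sketch of that last suggestion (assuming the same `sc`, `startDate` and `endDate` as above), the extraction can return an `Option` so malformed records are simply dropped instead of throwing:

```
def extractDate(line: String): Option[Long] = {
  try {
    val from = line.indexOf("/time: ") + 7
    val to = line.indexOf("review/text: ") - 1
    Some(line.substring(from, to).trim.toLong)
  } catch {
    case _: Exception => None // lines without a parsable timestamp are skipped
  }
}

sc.textFile("data/Electronics.txt")
  .flatMap { line =>
    extractDate(line) match {
      case Some(date) if date > startDate && date < endDate => Seq(line)
      case _ => Seq.empty[String]
    }
  }
  .saveAsTextFile("data/filter1")
```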
How to create N-tuples in Python? What would be the easiest way to create a list of n-tuples in Python? For example, if I want to create for a number n (for e.g. 3): I'd want to generate the following set of tuples: ``` (1,1,1) (1,1,2) (1,1,3) (2,1,1) (2,1,2) (2,1,3) (3,1,1) (3,1,2) (3,1,3) (1,2,1) (1,2,2) (1,2,3) (2,2,1) (2,2,2) (2,2,3) (3,2,1) (3,2,2) (3,2,3) (1,3,1) (1,3,2) (1,3,3) (2,3,1) (2,3,2) (2,3,3) (3,3,1) (3,3,2) (3,3,3) ```
Use [`itertools.product`](http://docs.python.org/2/library/itertools.html#itertools.product): ``` >>> from itertools import product >>> list(product(range(1, 4), repeat=3)) [(1, 1, 1), (1, 1, 2), (1, 1, 3), (1, 2, 1), (1, 2, 2), (1, 2, 3), (1, 3, 1), (1, 3, 2), (1, 3, 3), (2, 1, 1), (2, 1, 2), (2, 1, 3), (2, 2, 1), (2, 2, 2), (2, 2, 3), (2, 3, 1), (2, 3, 2), (2, 3, 3), (3, 1, 1), (3, 1, 2), (3, 1, 3), (3, 2, 1), (3, 2, 2), (3, 2, 3), (3, 3, 1), (3, 3, 2), (3, 3, 3)] ```
Calculate autocorrelation using FFT in Matlab I've read some explanations of how autocorrelation can be more efficiently calculated using the fft of a signal, multiplying the real part by the complex conjugate (Fourier domain), then using the inverse fft, but I'm having trouble realizing this in Matlab at a detailed level.
Just like you stated, take the fft and multiply pointwise by its complex conjugate, then use the inverse fft (or in the case of cross-correlation of two signals: `Corr(x,y) <=> FFT(x)FFT(y)*`) ``` x = rand(100,1); len = length(x); %# autocorrelation nfft = 2^nextpow2(2*len-1); r = ifft( fft(x,nfft) .* conj(fft(x,nfft)) ); %# rearrange and keep values corresponding to lags: -(len-1):+(len-1) r = [r(end-len+2:end) ; r(1:len)]; %# compare with MATLAB's XCORR output all( (xcorr(x)-r) < 1e-10 ) ``` In fact, if you look at the code of `xcorr.m`, that's exactly what it's doing (only it has to deal with all the cases of padding, normalizing, vector/matrix input, etc...)
How to provide default value for a parameter of delegate type in C#? In C# we can provide default value of the parameters as such: ``` void Foo(int i =0) {} ``` But, when the method signature is: ``` void FooWithDelegateParam(Func<string,string> predicate) {} ``` How can we pass the default parameter: ``` void FooWithDelegateParam(Func<string,string> predicate = (string,string x)=> {return y;}) {} ``` But this won't compile. So, what is the proper syntax for doing so ? Note: I'm trying to provide a way to specify an **input-string to output-string mapper** through a delegate, and if not provided I simply want to return the input string. So, suggestions on any alternative approach to achieve this is highly appreciated as well. Thanks.
You can't, basically. Default values for parameters have to be compile-time constants. However, if you're happy to use `null` as a value meaning "use the default" you could have: ``` void FooWithDelegateParam(Func<string, string> predicate = null) { predicate = predicate ?? (x => x); // Code using predicate } ``` Or use an overload, as per Alireza's suggestion, of course. Each option has different implications: - The overload solution works with languages which don't support optional parameters (e.g. C# before 4.0) - The overload solution differentiates between `null` and "the default". This in itself has pros and cons: - If the caller should never provide a `null` value, the overload version can find bugs where it's *accidentally* doing so - If you don't believe there will be any such bugs, the optional parameter version allows the idea of "the default" to be represented in code - you could pass a "`null` meaning default" value through multiple layers, letting only the bottom-most layer determine what that default actually means, and do so more simply than having to explicitly call different overloads - The optional parameter version is simpler to express in an interface... - ... with the downside that the default value would still need to be expressed in the implementation. (This is somewhat common to the overload solution, mind you... in both cases, an abstract class implementing the interface could do the defaulting using the template method pattern.)
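For completeness, the overload alternative mentioned above might look something like this (just a sketch, not the only way to write it):

```
// Parameterless overload: no mapper supplied, so fall back to the identity mapping.
void FooWithDelegateParam()
{
    FooWithDelegateParam(x => x);
}

void FooWithDelegateParam(Func<string, string> predicate)
{
    // Code using predicate
}
```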
Decorator pattern versus subclassing I can solve the problem of adding functionality by subclassing, so why should I use the decorator pattern? What's the real advantage of the decorator pattern?
from [Decorator pattern at wikipedia](http://en.wikipedia.org/wiki/Decorator_pattern) > > The decorator pattern can be used to > make it possible to extend (decorate) > the functionality of a certain object > at **runtime**. > > > The whole point of decorator pattern is to dynamically add additional behaviour/functionality, which is of course not possible at design time. from the same article: > > The decorator pattern is an > alternative to subclassing. > *Subclassing adds behavior at compile > time*, and the change affects all > instances of the original class; > *decorating can provide new behavior at > runtime for individual objects*. > > >
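As a minimal sketch (the class names here are made up purely for illustration), decorating lets you choose the extra behaviour per object, at runtime:

```
public class Demo {
    interface Notifier { void send(String message); }

    static class EmailNotifier implements Notifier {
        public void send(String message) { System.out.println("email: " + message); }
    }

    // Decorator: wraps any Notifier and adds behaviour to that one instance at runtime.
    static class SmsDecorator implements Notifier {
        private final Notifier wrapped;
        SmsDecorator(Notifier wrapped) { this.wrapped = wrapped; }
        public void send(String message) {
            wrapped.send(message);
            System.out.println("sms: " + message);
        }
    }

    public static void main(String[] args) {
        boolean userWantsSms = true;               // decided at runtime
        Notifier notifier = new EmailNotifier();
        if (userWantsSms) {
            notifier = new SmsDecorator(notifier); // only this object gains the extra behaviour
        }
        notifier.send("hello");
    }
}
```

A subclass such as a hypothetical `EmailAndSmsNotifier` would bake that decision in at compile time and apply it to every instance of that class.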
Sql Server If condition with select What's wrong with this?

```
DECLARE @error int

If (SELECT ID_Projet FROM tblProjet WHERE No_Projet=@no_Projet)> 0
  SET @error=1
END IF
```
The `END IF` is incorrect. Do it like this: ``` DECLARE @error int If (SELECT ID_Projet FROM tblProjet WHERE No_Projet=@no_Projet)> 0 SET @error=1 ``` or this: ``` DECLARE @error int If (SELECT ID_Projet FROM tblProjet WHERE No_Projet=@no_Projet)> 0 Begin SET @error=1 End ``` --- Check [HERE](http://msdn.microsoft.com/en-us/library/aa933214%28v=sql.80%29.aspx) for documentation. --- If you're trying to see the number of rows with that restriction you should do it like this: ``` DECLARE @error int If (SELECT count(ID_Projet) FROM tblProjet WHERE No_Projet=@no_Projet)> 0 Begin SET @error=1 End ```
How to make django post\_save signal run only during creation I'm using django-notifications in my project and I want to notify a particular user whenever a model is created using the signal, but post\_save also runs when a model is being updated. How do I prevent this and only make the post\_save handler run when a model is created? models.py

```
class Card(models.Model):
    user = models.ForeignKey(User,on_delete=models.CASCADE)
    title = models.CharField(max_length=100)
    description = models.TextField(blank=True)
    list = models.ForeignKey(List, related_name='cards')
    story_points = models.IntegerField(null=True, blank=True)
    business_value = models.IntegerField(null=True, blank=True)

    def __str__(self):
        return "Card: {}".format(self.title)

def my_handler(sender, instance, **kwargs):
    if instance.pk is None:
        notify.send(instance.user, recipient=User.objects.get(pk=1), target=instance, verb='created')

post_save.connect(my_handler, sender=Card)
```

I tried using if instance.pk is None, but when I add this condition it doesn't run at all. EDITED: The code checking if created

```
def my_handler(sender, instance, created, **kwargs):
    if created:
        notify.send(instance.user, recipient=User.objects.get(pk=1), target=instance, verb='created')
```
There is a created named argument which will be set to True if it's a new object. Have a look here - <https://docs.djangoproject.com/en/1.10/ref/signals/#post-save>

```
def my_func(sender, instance, created, **kwargs):
    print("Created: ", created)

class MyModel(models.Model):
    x = models.CharField(max_length=255)

post_save.connect(my_func, sender=MyModel)

>>> MyModel.objects.create(x='asdf')
Created:  True
>>> m = MyModel.objects.all().first()
>>> m.x
'asdf'
>>> m.x = 'a'
>>> m.save()
Created:  False
```
Jupyter Notebook: How to relaunch all cells above when a crash occurs? ### Question 1: I am using jupyter 4 with python and I would need my script to do a **relaunch all the cells above** when a crash occurs. Is this possible ? ### Question 2: If I need to relaunch all some cells, can I ask python to **execute them according to some cell-id**? I could then create a list of the cells id which have to be re-executed when catching an exception...
You can always relaunch all cells above the active cell using `Cell > Run All Above`. But when it comes to doing so programmatically and *reliably*, I've got both good and bad news for you. --- **Let's get the bad news regarding question 2 out of the way: NO** --- ...at least not very reliably, because any ID of a cell would change if you insert or remove any other cell. According to [Execute specific cells through widgets and conditions](https://github.com/jupyter/notebook/issues/2660) on github: > > We don't have the Ids of of cell in order to handle them > programatically. > > > And further down on the same post: > > There are some APIs which can run cells identified by numbers, but > unfortunately the numbers change if you insert or delete a cell > somewhere above. > > > --- **And now to the good news about the first question: YES** --- ...but it's not 100% certain that it will solve your error handling needs as per the details in your question. But we'll get to that in a bit. Because the good news is that the answer to the question as it stands in the title > > How to relaunch all cells above when a crash occurs? > > > is **YES WE CAN!** The hard (maybe even impossible) part of this question is to implement it as a robust error handling method. If you're only interested in that, skip to the section **`The hard part`** at the end of my answer. For now, let's go on with the **`easy part`** that is to programmatically run the menu option `Cell > Run All` (as described in the answer by Nic Cottrell). You have two options: **Option 1 -** Run all cells above by executing a cell: If you insert the following snippet in a cell and run it, all cells above will be executed: ``` from IPython.display import Javascript display(Javascript('IPython.notebook.execute_cells_above()')) ``` **Option 2 -** Run all cells above by clicking a button: If you insert the following snippet in a cell and run it, all cells above will be executed when you click the appearing button: **Snippet:** ``` from IPython.core.display import display, HTML HTML('''<script> </script> <form action="javascript:IPython.notebook.execute_cells_above()"><input type="submit" id="toggleButton" value="Run all"></form>''') ``` **Output:** [![enter image description here](https://i.stack.imgur.com/L3SiI.png)](https://i.stack.imgur.com/L3SiI.png) --- ## **`THE HARD PART`** So, how can we set this up to handle an error when a crash occurs? I'm not an expert on this, but I think I've been able to make a setup that will work for you. But it will most likely depend on the type of error in question and the rest of your work flow. The following example builds on two different error messages. The first is a `NameError` that occurs when you try to assign a value to a variable that does not exist. And this will be useful since re-running some cells after an error will need an iterator that resets only when the notebook is restarted completely, and not when a cell is re-run as part of an error handling method. The name error will only occur when the kernel is restarted upon a fresh restart of your notebook. As part of the error handling, the value `0` is assigned to `x1`. When the cell is only re-run `x1` will increase by `1`. The second error will serve as a proxy for ***your*** error, and is an AssignmentError that occurs each time you try to [delete an element from a list that does not exist](https://stackoverflow.com/questions/15605925/how-to-get-the-last-exception-object-after-an-error-is-raised-at-a-python-prompt). 
And this leads us to the real challenge, since if your error handler re-runs all cells above every time the error is triggered, you'll quickly end up in a bad loop. But we'll handle that with a counter that exits the looping execution of cells after a few runs. It's also a bit problematic that there does not seem to exist a functionality to rerun your ***existing cell***, or the cell from where the `run cells above` functionality is initialized. But we'll handle that with another suggestion from the same github post as earlier: > > Doing the following helps me to execute the cell right below the code > cell. You can also change the values to get cells in other parts of > the notebook. > `display(Javascript('IPython.notebook.execute_cell_range(IPython.notebook.get_selected_index()+1, IPython.notebook.get_selected_index()+2)'))` > > > **Notebook with suggested work-flow:** Insert the four following snippets below in four cells. Click the menu option `Cell > Run all` once, and we're good to go! **Snippet 1 - Imports and setup** ``` import sys import os from IPython.core.display import display, HTML from IPython.display import Javascript from random import randint # Trigger to randomly raise en error in the next cell ErrorTrigger = randint(0, 9) # Assignment of variables at first run of the Norebook try: x1 except NameError: x1 = None if x1 is None: %qtconsole # opens a qtconsole (for variable inspection and debugging) x1 = 0 # counter for NameError x2 = 0 # counter for assignment error (used in cells below) mr = 0 # counter for manual relaunch by button ErrorTriggers=[] # container for ErroTriggers print('NameErrors = ', x1) else: x1 = x1 + 1 ErrorTriggers.append(ErrorTrigger) #print('Executions:', x1, '||', 'Triggers:', ErrorTriggers) ``` **Snippet 2 - Proxy for your error** ``` # PROXY ERROR => INSERT YOUR CODE FROM HERE ################################################################ list1 = [1,2,3,4] # 80 % chance of raising an error trying to delete an element that does not exist in the list if ErrorTrigger > 2: elemDelete = 8 # error else: elemDelete = 0 # not error try: del list1[elemDelete] print('Executions:', x1, '||', 'Triggers:', ErrorTriggers) print('Routine success on attempt', x2 + 1) print('Error mesg: None') ErrorTriggers=[] x2 = 0 # reset error counter # TO HERE ################################################################################################# except Exception: x2 = x2 + 1 # Will end error handler after 5 attempts if x2 < 3: # As long as we're UNDER the attempt limit, the next cell executed by: display(Javascript('IPython.notebook.execute_cell_range(IPython.notebook.get_selected_index()+1,'+ ' IPython.notebook.get_selected_index()+2)')) else: # If we're OVER the attempt limit, it all ends here. The next cell is NOT run. # And NEITHER is the last cell with the button to relaunch the whole thing. 
print('Executions:', x1, '||', 'Triggers:', ErrorTriggers) print('Routine aborted after attempt', x2) print('Error msg:', sys.exc_info()[1]) # Returns a message describing the error # reset variables ErrorTriggers = [] x2 = 0 ``` **Snippet 3 - Cell to rerun all cells above as error handler** ``` display(Javascript('IPython.notebook.execute_cells_above()')) ``` **Snippet 4 - Cell to rerun the whole thing with en error probability of 20%** ``` HTML('''<script> </script> <form action="javascript:IPython.notebook.execute_cells_above()"><input type="submit" id="toggleButton" value="Run again!"></form>''') ``` **Screenshot after a few test runs:** [![enter image description here](https://i.stack.imgur.com/tqOxj.png)](https://i.stack.imgur.com/tqOxj.png) I'll gladly add more details if the comments in the snippets are unclear. But if you run the notebook a few times by clicking `Run Again!` and at the same time have a look at the output of cell 3, you'll quickly grasp how the whole thing is put together: [![enter image description here](https://i.stack.imgur.com/2Oey2.png)](https://i.stack.imgur.com/2Oey2.png)
Django JWT auth: How to get user data? I'm trying to desperately understand how to use JWT auth with Django. This page explains how to get a token against username and password: <http://getblimp.github.io/django-rest-framework-jwt/> ``` $ curl -X POST -H "Content-Type: application/json" -d '{"username":"admin","password":"password123"}' http://localhost:8000/api-token-auth/ ``` `Now in order to access protected api urls you must include the Authorization: JWT <your_token> header.` 1) How can I get the user details (id, email..) of the "logged in" user from the server? If I used the session based auth I would just serialize and return `request.user` if it's logged in. I don't understand how the server would know who is who if nothing auth-related is persisted. 2) I don't even understand how the procedure described in that page is safe. Why can't the attacker just hijack the token and do what he wants? As I understood I just get a token and then send the same token back in every request. Is this even real JWT?
You use the typical Django auth mechanism with JWT. - You POST with the username and password and get the token back. Your auth view needs to have the following permission class: ``` from rest_framework.views import APIView class Authenticate(APIView): permission_classes = (AllowAny,) ``` - The next time you sent the token it goes through here: ``` REST_FRAMEWORK = { 'DEFAULT_PERMISSION_CLASSES': ( 'rest_framework.permissions.IsAuthenticated', ), 'DEFAULT_AUTHENTICATION_CLASSES': ( 'rest_framework.authentication.SessionAuthentication', 'rest_framework.authentication.BasicAuthentication', 'rest_framework_jwt.authentication.JSONWebTokenAuthentication', ), ``` - The authentication classes set `request.user` and you can use it as you normally do > > 2) I don't even understand how the procedure described in that page is safe. Why can't the attacker just hijack the token and do what he wants? As I understood I just get a token and then send the same token back in every request. Is this even real JWT? > > > You absolutely have to investigate the JWT refresh token mechanism. Tokens are usually short lived, the default is 5 minutes I think.
Configuring proxy for JAX-RS 2.0 client API I have an application that is running on a Java EE 7 application server (WildFly), that queries another service using REST resources. In previous applications I have used the Jersey 1.x client API. Access to the REST service is granted through a web proxy. In Jersey I create the `Client` instance like this: ``` public Client create() { Client client; if ( proxyConfiguration != null && proxyConfiguration.getHost() != null && !proxyConfiguration.getHost().trim().isEmpty() ) { HttpURLConnectionFactory urlConnectionFactory = new ProxyUrlConnectionFactory( proxyConfiguration ); client = new Client( new URLConnectionClientHandler( urlConnectionFactory ), clientConfig ); } else { client = Client.create( clientConfig ); } return client; } ``` Running on a Java EE 7 application server I wanted to use the JAX-RS 2.0 client API which is provided by the application server. Now I am having a really hard time to find information on how to configure the JAX-RS 2.0 client in a platform independent way. Setting the `http.proxyHost` and `http.proxyPort` system properties had no effect in WildFly (I would prefer to not configure it globally anyway). Does anyone know how to solve this?
I think there's no vendor independent solution (at least, I didn't find anything related to proxies in the JAX-RS API). For Jersey 2.x, you can try: ``` ClientConfig config = new ClientConfig(); config.property(ClientProperties.PROXY_URI, "192.168.1.254:8080"); Client client = ClientBuilder.withConfig(config).build(); ``` [`ClientProperties`](https://jersey.java.net/apidocs/2.9/jersey/org/glassfish/jersey/client/ClientProperties.html) is a class from Jersey API. --- For RESTEasy, the configuration is: ``` Client client = new ResteasyClientBuilder() .defaultProxy("192.168.1.254", 8080, "http") .build(); ``` [`ResteasyClientBuilder`](http://docs.jboss.org/resteasy/docs/3.0.13.Final/javadocs/org/jboss/resteasy/client/jaxrs/ResteasyClientBuilder.html) is a class from RESTEasy API.
JavaScript array to CSV I've followed this post [How to export JavaScript array info to csv (on client side)?](https://stackoverflow.com/questions/14964035/how-to-export-javascript-array-info-to-csv-on-client-side) to get a nested js array written as a csv file. The array looks like: ``` var test_array = [["name1", 2, 3], ["name2", 4, 5], ["name3", 6, 7], ["name4", 8, 9], ["name5", 10, 11]]; ``` The code given in the link works nicely except that after the third line of the csv file all the rest of the values are on the same line e.g. name1,2,3 name2,4,5 name3,6,7 name4,8,9name5,10,11 etc etc Can anyone shed any light on why this is? Same using Chrome or FF. Thanks **EDIT** jsfiddle <http://jsfiddle.net/iaingallagher/dJKz6/> Iain
The cited answer was wrong. You had to change ``` csvContent += index < infoArray.length ? dataString+ "\n" : dataString; ``` to ``` csvContent += dataString + "\n"; ``` As to why the cited answer was wrong (funny it has been accepted!): `index`, the second parameter of the `forEach` callback function, is the index in the looped-upon array, and it makes no sense to compare this to the size of `infoArray`, which is an item of said array (which happens to be an array too). ## EDIT Six years have passed now since I wrote this answer. Many things have changed, including browsers. The following was part of the answer: *START of aged part* BTW, the cited code is suboptimal. You should avoid to repeatedly append to a string. You should append to an array instead, and do an array.join("\n") at the end. Like this: ``` var lineArray = []; data.forEach(function (infoArray, index) { var line = infoArray.join(","); lineArray.push(index == 0 ? "data:text/csv;charset=utf-8," + line : line); }); var csvContent = lineArray.join("\n"); ``` *END of aged part* (Keep in mind that the CSV case is a bit different from generic string concatenation, since for every string you also have to add the separator.) Anyway, the above seems **not** to be true anymore, at least not for Chrome and Firefox (it seems to still be true for Safari, though). To put an end to uncertainty, I wrote a [jsPerf test](https://jsperf.com/csv-string-concatenation/1) that tests whether, in order to concatenate strings in a comma-separated way, it's faster to push them onto an array and join the array, or to concatenate them first with the comma, and then directly with the result string using the += operator. Please follow the link and run the test, so that we have enough data to be able to talk about facts instead of opinions.
What is the difference between class constants and class instance variables in Ruby? I will note that there are a lot of similarly worded questions that are distinct from what I believe I'm asking. What is the difference between the following in terms of functionality? E.g. how do they behave with regards to inheritance? ``` class Foo BAR = 'Hello' end ``` and ``` class Foo @bar = 'Hello' end ```
# Access Constants are public by default (we're disregarding [private constants](http://ruby-doc.org/core-2.3.4/Module.html#method-i-private_constant) here). Class instance variables are not accessible (except with stuff like `Object#instance_variable_get`, but that's typically not very good style) without a reader and/or writer method. # Inheritance Constants will refer to the value in the context in which they are used, not the current value of `self`. For example, ``` class Foo BAR = 'Parent' def self.speak puts BAR end end class FooChild < Foo BAR = 'Child' end Foo.speak # Parent FooChild.speak # Parent ``` While class instance variables are dependent on the value of `self`: ``` class Foo @bar = 'Parent' def self.speak puts @bar end end class FooChild < Foo @bar = 'Child' end Foo.speak # Parent FooChild.speak # Child ``` If you use an explicit reference to `self`, you can get the same behavior as constants, however: ``` class Foo BAR = 'Parent' def self.speak puts self::BAR end end class FooChild < Foo BAR = 'Child' end Foo.speak # Parent FooChild.speak # Child ```
Why do you need '-lpthread'? So my questions is: Why do you need '-lpthread' at the end of a compiling command? Why does this command work: ``` gcc -o name name.c -lpthread ``` but this won't: ``` gcc -o name name.c ``` I am using the pthread.h library in my c code. I already looked online for some answers but didn't really find anything that answered it understandably
`pthread.h` is not a library **it is just a header file** which gives you declaration (not the actual body of function) of functions which you will be using for multi-threading. using `-libpthread` or `-lpthread` while compiling actually links the GCC library `pthread` with your code. Hence the compiler flag, `-libLIBRARY_NAME` or `-lLIBRARY_NAME` is essential. If you don't include the flags `-l` or `-lib` with `LIBRARY_NAME` you won't be able to use the external libraries. In this case, say if you are using functions `pthread_create` and `pthread_join`, so you'll get an error saying: ``` undefined reference to `pthread_create' undefined reference to `pthread_join' ```
What is the benefit of std::literals::.. being inline namespaces? In the C++-Standard (eg. N4594) there are two definitions for `operator""s`: One for `std::chrono::seconds` : ``` namespace std { ... inline namespace literals { inline namespace chrono_literals { // 20.15.5.8, suffixes for duration literals constexpr chrono::seconds operator "" s(unsigned long long); ``` and one for `std::string` : ``` namespace std { .... inline namespace literals { inline namespace string_literals { // 21.3.5, suffix for basic_string literals: string operator "" s(const char* str, size_t len); ``` I wonder what is gained from those namespaces (and all the other namespaces inside `std::literals`), if they are `inline`. I thought they were inside separate namespaces so they do not conflict with each other. But when they are `inline`, this motivation is undone, right? *Edit:* Because [Bjarne explains](http://www.stroustrup.com/C++11FAQ.html#inline-namespace) the main motivation is "library versioning", but this does not fit here. I can see that the overloads for "Seconds" and "String" are distinct and therefor do not conflict. But would they conflict if the overloads were the same? Or does take the (`inline`?) `namespace` prevents that somehow? Therefore, what is gained from them being in an `inline namespace` at all? How, as @Columbo points out below, are overloading across inline namespaces resolved, and do they clash?
The user-defined literal `s` does not "clash" between `seconds` and `string`, even if they are both in scope, because they overload like any other pair of functions, on their different argument lists: ``` string operator "" s(const char* str, size_t len); seconds operator "" s(unsigned long long sec); ``` This is evidenced by running this test: ``` void test1() { using namespace std; auto str = "text"s; auto sec = 1s; } ``` With `using namespace std`, both suffixes are in scope, and yet do not conflict with each other. So why the `inline namespace` dance? The rationale is to allow the programmer to expose as few std-defined names as desired. In the test above, I've "imported" the entire std library into `test`, or at least as much as has been #included. `test1()` would not have worked had `namespace literals` not been `inline`. Here is a more restricted way to use the literals, without importing the entire std: ``` void test2() { using namespace std::literals; auto str = "text"s; auto sec = 1s; string str2; // error, string not declared. } ``` This brings in all std-defined literals, but not (for example) `std::string`. `test2()` would not work if `namespace string_literals` was not `inline` and `namespace chrono_literals` was not `inline`. You can also choose to *just* expose the string literals, and not the chrono literals: ``` void test3() { using namespace std::string_literals; auto str = "text"s; auto sec = 1s; // error } ``` Or just the chrono literals and not the string literals: ``` void test4() { using namespace std::chrono_literals; auto str = "text"s; // error auto sec = 1s; } ``` Finally there is a way to expose all of the chrono names *and* the chrono\_literals: ``` void test5() { using namespace std::chrono; auto str = "text"s; // error auto sec = 1s; } ``` `test5()` requires this bit of magic: ``` namespace chrono { // hoist the literals into namespace std::chrono using namespace literals::chrono_literals; } ``` In summary, the `inline namespace`s are a tool to make all of these options available to the developer. **Update** The OP asks some good followup questions below. They are (hopefully) addressed in this update. > > Is `using namespace std` not a good idea? > > > It depends. A `using namespace` is never a good idea at global scope in a header that is meant to be part of a general purpose library. You don't want to force a bunch of identifiers into your user's global namespace. That namespace belongs to your user. A global scope `using namespace` can be ok in a header if the header only exists for the application you are writing, and if it is ok with you that you have all of those identifiers available for everything that includes that header. But the more identifiers you dump into your global scope, the more likely it is that they will conflict with something. `using namespace std;` brings in a *bunch* of identifiers, and will bring in even more with each new release of the standard. So I don't recommend `using namespace std;` at global scope in a header even for your own application. However I could see `using namespace std::literals` or `using namespace std::chrono_literals` at global scope in a header, but only for an application header, not a library header. I like to use `using` directives at function scope as then the import of identifiers is limited to the scope of the function. With such a limit, if a conflict does arise, it is much easier to fix. And it is less likely to happen in the first place. 
std-defined literals will *probably* never conflict with one another (they do not today). But you never know... std-defined literals will *never* conflict with user-defined literals because std-defined literals will never start with `_`, and user-defined literals *have* to start with `_`. > > Also, for library developers, is it necessary (or good practice) to have no conflicting overloads inside several inline namespaces of a large library? > > > This is a really good question, and I posit that the jury is still out on this one. However I just happen to be developing a library that *purposefully* has conflicting user-defined literals in different inline namespaces! <https://github.com/HowardHinnant/date> ``` #include "date.h" #include "julian.h" #include <iostream> int main() { using namespace date::literals; using namespace julian::literals; auto ymd = 2017_y/jan/10; auto jymd = julian::year_month_day{ymd}; std::cout << ymd << '\n'; std::cout << jymd << '\n'; } ``` The above code fails to compile with this error message: ``` test.cpp:10:20: error: call to 'operator""_y' is ambiguous auto ymd = 2017_y/jan/10; ^ ../date/date.h:1637:1: note: candidate function operator "" _y(unsigned long long y) NOEXCEPT ^ ../date/julian.h:1344:1: note: candidate function operator "" _y(unsigned long long y) NOEXCEPT ^ ``` The `_y` literal is used to create `year` in this library. And this library has both a Gregorian calendar (in "date.h") and a Julian calendar (in "julian.h"). Each of these calendars has a `year` class: (`date::year` and `julian::year`). They are different types because the Gregorian year is not the same thing as a Julian year. But it is still convenient to name them both `year` and to give them both a `_y` literal. If I remove the `using namespace julian::literals;` from the code above then it compiles and outputs: ``` 2017-01-10 2016-12-28 ``` which is a demonstration that 2016-12-28 Julian is the same day as 2017-01-10 Gregorian. And this is also a graphic demonstration that the same day can have different years in different calendars. Only time will tell if my use of conflicting `_y`s will be problematic. To date it hasn't been. However not many people have used this library with non-Gregorian calendars.
show multiple annotation in map view i am new to map view in ios sdk. i want to show multiple annotation in map view using lat and long.basically all lat and long are coming from server side in json format. i am parsing all lat and long and saving it in different array. but how to show all annotation at single time. i am able to show only one annotation at a time.Below is code for single annotation i am using, ``` zoomLocation.latitude = latmpa.doubleValue; zoomLocation.longitude = logmpa.doubleValue; annotationPoint = [[MKPointAnnotation alloc] init]; annotationPoint.coordinate = zoomLocation; annotationPoint.title = @"masjid...."; [mapView selectAnnotation:annotationPoint animated:YES]; [mapView addAnnotation:annotationPoint]; mapView.centerCoordinate = annotationPoint.coordinate; MKCoordinateSpan span; span.latitudeDelta = 1.5; span.longitudeDelta = 1.0; MKCoordinateRegion newRegion; newRegion.center = zoomLocation; newRegion.span = span; [mapView setRegion:newRegion animated:YES]; ```
Try This ``` for ( int i=0; i<[yourLatLongarray count]; i++) { CLLocationCoordinate2D coord; coord.latitude=[[NSString stringWithFormat:@"%@",[yourLatitudeArray objectAtIndex:i]] floatValue]; coord.longitude=[[NSString stringWithFormat:@"%@", [yourLongitudeArray objectAtIndex:i]] floatValue]; MKCoordinateRegion region1; region1.center=coord; region1.span.longitudeDelta=20 ; region1.span.latitudeDelta=20; [mapview setRegion:region1 animated:YES]; NSString *titleStr =[namesArr objectAtIndex:i] ; // NSLog(@"title is:%@",titleStr); MyAnnotation* annotObj =[[MyAnnotation alloc]initWithCoordinate:coord title:titleStr]; [mapview addAnnotation:annotObj]; } ``` MyAnnotation.h is ``` @interface MyAnnotation : NSObject <MKAnnotation> { CLLocationCoordinate2D coordinate; NSString *title; NSString *subTitle; NSString *time; } @property (nonatomic)CLLocationCoordinate2D coordinate; @property (nonatomic, retain) NSString *title; @property (nonatomic, retain) NSString *subTitle; @property (nonatomic,retain) NSString *time; -(id)initWithCoordinate:(CLLocationCoordinate2D) c title:(NSString *) t subTitle:(NSString *)timed time:(NSString *)tim; -(id)initWithCoordinate:(CLLocationCoordinate2D) c title:(NSString *)tit; @end ``` MyAnnotation.m is ``` @implementation MyAnnotation @synthesize coordinate; @synthesize title; @synthesize time; @synthesize subTitle; -(id)initWithCoordinate:(CLLocationCoordinate2D) c title:(NSString *) t subTitle:(NSString *)timed time:(NSString *)tim { self.coordinate=c; self.time=tim; self.subTitle=timed; self.title=t; return self; } -(id)initWithCoordinate:(CLLocationCoordinate2D) c title:(NSString *)tit { self.coordinate=c; self.title=tit; return self; } @end ```
Speeding up element-wise array multiplication in python I have been playing around with numba and numexpr trying to speed up a simple element-wise matrix multiplication. I have not been able to get better results, they both are basically (speedwise) equivalent to numpys multiply function. Has anyone had any luck in this area? Am I using numba and numexpr wrong (I'm quite new to this) or is this altogether a bad approach to try and speed this up. Here is a reproducible code, thank you in advanced: ``` import numpy as np from numba import autojit import numexpr as ne a=np.random.rand(10,5000000) # numpy multiplication1 = np.multiply(a,a) # numba def multiplix(X,Y): M = X.shape[0] N = X.shape[1] D = np.empty((M, N), dtype=np.float) for i in range(M): for j in range(N): D[i,j] = X[i, j] * Y[i, j] return D mul = autojit(multiplix) multiplication2 = mul(a,a) # numexpr def numexprmult(X,Y): M = X.shape[0] N = X.shape[1] return ne.evaluate("X * Y") multiplication3 = numexprmult(a,a) ```
What about using [fortran](/questions/tagged/fortran "show questions tagged 'fortran'") and [ctypes](/questions/tagged/ctypes "show questions tagged 'ctypes'")? elementwise.F90: ``` subroutine elementwise( a, b, c, M, N ) bind(c, name='elementwise') use iso_c_binding, only: c_float, c_int integer(c_int),intent(in) :: M, N real(c_float), intent(in) :: a(M, N), b(M, N) real(c_float), intent(out):: c(M, N) integer :: i,j forall (i=1:M,j=1:N) c(i,j) = a(i,j) * b(i,j) end forall end subroutine ``` elementwise.py: ``` from ctypes import CDLL, POINTER, c_int, c_float import numpy as np import time fortran = CDLL('./elementwise.so') fortran.elementwise.argtypes = [ POINTER(c_float), POINTER(c_float), POINTER(c_float), POINTER(c_int), POINTER(c_int) ] # Setup M=10 N=5000000 a = np.empty((M,N), dtype=c_float) b = np.empty((M,N), dtype=c_float) c = np.empty((M,N), dtype=c_float) a[:] = np.random.rand(M,N) b[:] = np.random.rand(M,N) # Fortran call start = time.time() fortran.elementwise( a.ctypes.data_as(POINTER(c_float)), b.ctypes.data_as(POINTER(c_float)), c.ctypes.data_as(POINTER(c_float)), c_int(M), c_int(N) ) stop = time.time() print 'Fortran took ',stop - start,'seconds' # Numpy start = time.time() c = np.multiply(a,b) stop = time.time() print 'Numpy took ',stop - start,'seconds' ``` I compiled the Fortran file using ``` gfortran -O3 -funroll-loops -ffast-math -floop-strip-mine -shared -fPIC \ -o elementwise.so elementwise.F90 ``` The output yields a speed-up of ~10%: ``` $ python elementwise.py Fortran took 0.213667869568 seconds Numpy took 0.230120897293 seconds $ python elementwise.py Fortran took 0.209784984589 seconds Numpy took 0.231616973877 seconds $ python elementwise.py Fortran took 0.214708089828 seconds Numpy took 0.25369310379 seconds ```
Convert curl (with --data-urlencode) to ruby I am trying to convert the following curl command to ruby using `net/http` but I haven't figured out how to pass in the `--data-urlencode script@files/jql/events.js` part of the command. ``` curl https://mixpanel.com/api/2.0/jql -u <apikey>: --data-urlencode script@files/jql/events.js ``` Using `net/http` I had the following... ``` uri = URI.parse("https://mixpanel.com/api/2.0/jql") request = Net::HTTP::Get.new(uri) request.basic_auth("<apikey>", "") response = Net::HTTP.start(uri.hostname, uri.port, use_ssl: uri.scheme == "https") do |http| http.request(request) end ``` Is there anyway to do this? If not within `net/http` then maybe using another gem?
Mixpanel has it's [official ruby gem](https://github.com/mixpanel/mixpanel-ruby) I didn't actually work with it, but I assume it have all needed methods. But if you don't like to use it, you may use [Faraday](https://github.com/lostisland/faraday) an awesome HTTP client library for Ruby. I made a simple example with it. Please have a look: ``` class MixpanelClient def initialize(url = "https://mixpanel.com/api/2.0/jql", api_key = "ce08d087255d5ceec741819a57174ce5") @url = url @api_key = api_key end def query_data File.read("#{Rails.root}/lib/qry.js") end def query_params '{"from_date": "2016-01-01", "to_date": "2016-01-07"}' end def get_events resp = Faraday.new(url: @url, ssl: { verify: false }) do |faraday| faraday.request :url_encoded faraday.response :logger faraday.adapter Faraday.default_adapter faraday.basic_auth(@api_key, "") end.get do |req| req.params['script'] = query_data req.params['params'] = query_params end raise MixpanelError.new("Mixpanel error") unless resp.status == 200 JSON.parse(resp.body) end end class MixpanelError < StandardError; end ``` Here is the result: ``` [1] pry(main)> m = MixpanelClient.new => #<MixpanelClient:0x007fc1442d53b8 @api_key="ce08d087255d5ceec741819a57174ce5", @url="https://mixpanel.com/api/2.0/jql"> [2] pry(main)> m.get_events I, [2016-06-09T09:05:51.741825 #36920] INFO -- : get https://mixpanel.com/api/2.0/jql?params=%7B%22from_date%22%3A+%222016-01-01%22%2C+%22to_date%22%3A+%222016-01-07%22%7D&script=function+main%28%29%7B+return+Events%28params%29.groupBy%28%5B%22name%22%5D%2C+mixpanel.reducer.count%28%29%29+%7D D, [2016-06-09T09:05:51.741912 #36920] DEBUG -- request: Authorization: "Basic Y2UwOGQwODcyNTVkNWNlZWM3NDE4MTlhNTcxNzRjZTU6" User-Agent: "Faraday v0.9.2" I, [2016-06-09T09:05:52.773172 #36920] INFO -- Status: 200 D, [2016-06-09T09:05:52.773245 #36920] DEBUG -- response: server: "nginx/1.9.12" date: "Thu, 09 Jun 2016 03:05:52 GMT" content-type: "application/json" transfer-encoding: "chunked" connection: "close" vary: "Accept-Encoding" cache-control: "no-cache, no-store" access-control-allow-methods: "GET, POST, OPTIONS" access-control-allow-headers: "X-PINGOTHER,Content-Type,MaxDataServiceVersion,DataServiceVersion,Authorization,X-Requested-With,If-Modified-Since" => [{"key"=>["Change Plan"], "value"=>186}, {"key"=>["View Blog"], "value"=>278}, {"key"=>["View Landing Page"], "value"=>1088}, {"key"=>["login"], "value"=>1241}, {"key"=>["purchase"], "value"=>359}, {"key"=>["signup"], "value"=>116}] ``` A set `ssl: {verufy: false}` because Faraday need addtitional workaround to work with ssl certificates: <https://github.com/lostisland/faraday/wiki/Setting-up-SSL-certificates>
Why is a bean declared in @SpringBootApplication class registered even though it is not a stereotyped class? I have this main class in my project ``` @SpringBootApplication @EnableOAuth2Sso public class App { public static void main(String[] args) throws Exception { SpringApplication.run(App.class, args); } @Bean public RequestContextListener requestContextListener(){ return new RequestContextListener(); } } ``` As far as I know, component scan scans beans in **stereotyped** classes which are one of `@Component, @Service, @Repository, @Controller` if I am not wrong. From spring docs > > By default, classes annotated with @Component, @Repository, @Service, > @Controller, or a custom annotation that itself is annotated with > @Component are the only detected candidate components. > > > I cannot understand how the bean in this class is registered. As it is not a stereotyped class and no annotation is annotated with `@Component` it shouldn't be scanned in the first place but this code works perfectly. In fact for my use case having the bean in this class was the only way my problem was solved, but that is a different thing. Can anyone please explain this. Thanks !!
[`@SpringBootApplication`](https://github.com/spring-projects/spring-boot/blob/master/spring-boot-autoconfigure/src/main/java/org/springframework/boot/autoconfigure/SpringBootApplication.java#L50) is a meta annotation which looks like: ``` // Some details omitted @SpringBootConfiguration @EnableAutoConfiguration public @interface SpringBootApplication { ... } ``` [`@SpringBootConfiguration`](https://github.com/spring-projects/spring-boot/blob/master/spring-boot/src/main/java/org/springframework/boot/SpringBootConfiguration.java#L43) is also a meta annotation: ``` // Other annotations @Configuration public @interface SpringBootConfiguration { ... } ``` And [`@Configuration`](https://github.com/spring-projects/spring-framework/blob/master/spring-context/src/main/java/org/springframework/context/annotation/Configuration.java#L383) is: ``` // Other annotations @Component public @interface Configuration { ... } ``` It works Since: > > By default, classes annotated with @Component, @Repository, @Service, > @Controller, or **a custom annotation that itself is annotated with > @Component are the only detected candidate components.** > > >
AngualrJS $http returns undefined? According to AngularJS, my `$http` call through a service from my controller is returning undefined? What seems to be the issue here? I am trying to return the data called, but once passed to the controller the data becomes undefined? **JavaScript** ``` var myStore = angular.module('myStore', []) .controller('StoreController', ['$scope', 'dataService', function ($scope, dataService) { $scope.products = dataService.getData(); }]) .service('dataService', ['$http', function($http) { this.getData = function() { $http.get('assets/scripts/data/products.json') .then(function(data) { return data; }); }; }]); ``` **HTML** ``` <div class="content"> <ul> <li ng-repeat="product in products.products">{{product.productName}}</li> </ul> </div> ``` I understand that `$http`, `$q`, and `$resource` all return promises, but I thought I had covered that with .then.
The problem could be that you are not `return`ing the promise created by `$http.get` in your `dataService.getData` function. In other words, you may solve your `undefined` issue by changing what you have to this: ``` .service('dataService', ['$http', function($http) { this.getData = function() { return $http.get... }; } ``` If you had multiple calls to `$http.get` within `dataService.getData`, here is how you might handle them. ``` .service('dataService', ['$http', function($http) { this.getData = function() { var combinedData, promise; combinedData = {}; promise = $http.get(<resource1>); promise.then(function (data1) { combinedData['resource1Response'] = data1; return $http.get(<resource2>); }); return promise.then(function (data2) { combinedData['resource2Response'] = data2; return combinedData; }); }; }]); ``` A much cleaner way, however, would be to use [`$q.all`](https://docs.angularjs.org/api/ng/service/$q#all) ``` .service('dataService', ['$http', '$q', function($http, $q) { this.getData = function() { var combinedData, promises; combinedData = {}; promises = $q.all([ $http.get(<resource1>), $http.get(<resource2>) ]); return promises.then(function (allData) { console.log('resource1 response', allData[0]); console.log('resource2 response', allData[1]); return allData; }); }; }]); ```
What are some of the benefits of a "Micro-ORM"? I've been looking into the so-called "Micro ORMs" like Dapper and (to a lesser extent as it relies on .NET 4.0) Massive as these might be easier to implement at work than a full-blown ORM since our current system is highly reliant on stored procedures and would require significant refactoring to work with an ORM like NHibernate or EF. What is the benefit of using one of these over a full-featured ORM? It seems like just a thin layer around a database connection that still forces you to write raw SQL - perhaps I'm wrong but I was always told the reason for ORMs in the first place is so you didn't **have** to write SQL, it could be automatically generated; especially for multi-table joins and mapping relationships between tables which are a pain to do in pure SQL but trivial with an ORM. For instance, looking at an example of Dapper: ``` var connection = new SqlConnection(); // setup here... var person = connection.Query<Person>("select * from people where PersonId = @personId", new { PersonId = 42 }); ``` How is that any different than using a handrolled ADO.NET data layer, except that you don't have to write the command, set the parameters and I suppose map the entity back using a Builder. It looks like you could even use a stored procedure call as the SQL string. Are there other tangible benefits that I'm missing here where a Micro ORM makes sense to use? I'm not really seeing how it's saving anything over the "old" way of using ADO.NET except maybe a few lines of code - you still have to write to figure out what SQL you need to execute (which can get hairy) and you still have to map relationships between tables (the part that IMHO ORMs help the most with).
Benefits: - Similar performance to a raw SqlCommand with DataReader and parsing. - No need to roll your own conversion layer for the DataReader. That's pretty much it, to be honest. You've got a very lightweight wrapper to your sql connections that will do the object conversion for you. You can, obviously, fine-tune the queries without having to deal with any autogenerated SQL. Cons: - Not even slightly typesafe. If you make a typo in the SQL your CI server is not going to catch it, you'll have to hope it's caught during automated UI or functional testing. - A pain to maintain. You've got a bunch of inline SQL statements that do various queries that have no strong ties to the DB architecture. This can quite easily lead to queries that get "left behind" when the underlying DB structure changes, which, again, you will not see at build time. They have their place, and they're a very effective tool that can take away some of the "donkey work" from developers when interacting with the DB, but in reality they simply cannot take the place of a full ORM in any large-scale system for queries that are not performance-critical, simply due to the increased maintenance cost. If you are struggling with performance on DB queries I'd suggest that it would be better to use these mapping frameworks with Stored Procedures only, in order to get a compile-time indication of whether your SQL is valid (plus the additional performance benefits).
Force a total refresh of Blazor WebAssembly PWA app from Javascript This code is in index.html. I call the `updateVersion` with the latest version of my Blazor WebAssembly PWA app. The first line is the original registration of the service worker that was part of the Blazor app template. The rest is added by me. ``` navigator.serviceWorker.register('service-worker.js'); function updateVersion(newVersion) { var key = 'x-photish-version'; var oldVersion = localStorage.getItem(key); if (oldVersion == null) { localStorage.setItem(key, newVersion); } else if (newVersion != oldVersion) { localStorage.setItem(key, newVersion); // Reload service worker navigator.serviceWorker.register('service-worker.js').then(function (registration) { caches.delete("blazor-resources-/").then(function (e) { console.log("'blazor-resources-/' cache deleted"); }); registration.update(); window.location.reload(); }).catch(function (error) { // registration failed console.log(`Registration failed with ${error}`); }); } } ``` The version part works. It detects that the version is new and it correctly enters the `newVersion != oldVersion` part of the code where I want to make sure the app is completely refreshed. To test it, I release a new version of my app with some trivial changes to my app and it detects it's a new version and reloads the page. And my small app changes do not appear. It shows the old version of the app. It is essential I get a way to do this from the code as I don't want the users to retrieve the latest content on every page load. Only if I actually deployed a new version of the code. What can I do to ensure the serviceworker is refreshed and the cache of the blazor app itself is not cached? **UPDATE**: I thought it was solved after changing the code to the following but I am still seeing some cases where it does not work. ``` navigator.serviceWorker.register('service-worker.js'); function updateVersion(newVersion) { var key = 'x-photish-version'; var oldVersion = localStorage.getItem(key); if (oldVersion == null) { localStorage.setItem(key, newVersion); } else if (newVersion != oldVersion) { localStorage.setItem(key, newVersion); caches.delete("blazor-resources-/").then(function (e) { console.log("'blazor-resources-/' cache deleted"); }); // Reload service worker navigator.serviceWorker.register('/service-worker.js', { updateViaCache: 'none' }).then(function (registration) { window.location.reload(); }).catch(function (error) { // registration failed console.log(`Registration failed with ${error}`); }); } } ``` Still hoping someone out there can fix the code and make it bulletproof or simply declare that this is technically impossible for some reason.
I found a solution and confirmed over several days that it works. It also works from my PWA loaded into iOS. My existing function from the question works if combined with an updated const CACHE inside the wwwroot/**service-worker.published.js**. My post-build **Powershell** script looks like this: ``` $currentDate = get-date -format yyyy.MM.dd.HHmm; # Update service worker $curDir = Get-Location $inputFile = $curDir.Path +"\wwwroot\service-worker.published.js" $findString = "const CACHE_VERSION =.+" $replaceString = "const CACHE_VERSION = '$currentDate'" (Get-Content $inputFile) | ForEach-Object { $_ -replace $findString , $replaceString } | Set-Content $inputFile Write-Host "Service Worker Updated: $currentDate" ``` For it to work I have to manually add this line to **service-worker.published.js**: ``` const CACHE_VERSION = '2022.08.10.2222' ``` I am also using the following post-build **Powershell** script to update the version in version.txt which I always know will be updated. And then I also update the assembly version inside the csproj. That way I can compare the two values to see if I am actually running the newest code: ``` param([string]$ProjectDir, [string]$ProjectPath); $currentDate = get-date -format yyyy.MM.dd.HHmm; # Update version.txt $versionFilePath = $ProjectDir + "/wwwroot/version.txt" Set-Content -Path $versionFilePath -Value $currentDate -NoNewline Write-Host "version.txt Updated: $currentDate" # Update actual assembly version $find = "<Version>(.|\n)*?</Version>"; $replace = "<Version>" + $currentDate + "</Version>"; $csproj = Get-Content $ProjectPath $csprojUpdated = $csproj -replace $find, $replace Set-Content -Path $ProjectPath -Value $csprojUpdated Write-Host "Assembly Version Updated: $currentDate" ``` For your reference, I check the assembly version like this in **C#**: ``` assemblyVersion = Assembly.GetExecutingAssembly()?. GetCustomAttribute<AssemblyInformationalVersionAttribute>()?. InformationalVersion; ``` And I check the latest version with a simple http call but indicating no-cache in **C#**. ``` var message = new HttpRequestMessage { Method = HttpMethod.Get, RequestUri = new Uri(baseUri + "version.txt?nocache=" + DateTime.Now.ToString("yyyyMMddHHmmss")) }; message.Headers.CacheControl = new CacheControlHeaderValue { NoCache = true }; var versionResponse = await http.SendAsync(message); var version = await versionResponse.Content.ReadAsStringAsync(); return version; ``` I call my function in index.html with this line in Blazor: ``` await js.InvokeAsync<string>("updateVersion", state.Version); ``` And finally here is the **javascript** in index.html that will clear the cache and refresh the service-worker. ``` navigator.serviceWorker.register('service-worker.js'); function updateVersion(newVersion) { var key = 'x-photish-version'; var oldVersion = localStorage.getItem(key); if (oldVersion == null) { localStorage.setItem(key, newVersion); } else if (newVersion != oldVersion) { localStorage.setItem(key, newVersion); caches.delete("blazor-resources-/").then(function (e) { console.log("'blazor-resources-/' cache deleted"); }); // Reload service worker navigator.serviceWorker.register('/service-worker.js', { updateViaCache: 'none' }).then(function (registration) { window.location.reload(); }).catch(function (error) { // registration failed console.log(`Registration failed with ${error}`); }); } } ``` The first line is included in the standard Blazor template. The remaining is the function that is called from the interop in Blazor. Voila: Just code. 
On every build the version number is updated. If a user opens production code that is outdated, the page will automatically reload and it will then be running the latest code.
Good MSbuild log formatter? we are building a very large Visual C++ 2010 solution (about 150 projects, full build takes about an hour) on a build server (Jenkins) with MSBuild. Unfortunately when a project fails, it is difficiult to find out which and why, because the console log is so long and errors are hard to find. Is there a nice msbuild log parser or msbuild logger that nicely lists all failed projects and the error messages? The best we've found so far is the [Warnings Plugin](https://wiki.jenkins-ci.org/display/JENKINS/Warnings+Plugin) but this is pretty much useless in many cases.
I find the [Log Parser Plugin](https://wiki.jenkins-ci.org/display/JENKINS/Log+Parser+Plugin) very useful, though you need to provide your own parser file (in /jobs/vs\_parsing\_rules). I use the following, which works well for `msbuild` ``` error /[Ee]rror *:/ error /Error on line/ error /error [CM]/ error /fatal error/ error /unresolved external/ error /[aA]ccess is denied/ error /aborted/ warning /[Ww]arning *:/ warning /[Ww]arning C/ warning /WARNING/ warning /Couldn't/ warning /Can't/ warning /Cannot/ warning /ld.*unsupported/ error /[uU]ndefined symbols/ warning /[rR]eferenced from/ warning /[nN]ot found/ warning /exit status/ warning /no symbols/ error /ERROR[^A-Za-z]/ info /INFO/ start /BUILD/ ```
Apache Giraph on EMR Has any tried Apache Giraph on EMR? It seems to me the only requirements to run on EMR are to add proper bootstrap scripts to the Job Flow configuration. Then I should just need to use a standard Custom JAR launch step to launch the Giraph Runner with appropriate arguments for my Giraph program. Any documentation/tutorial or if you could just share your experience with Giraph on EMR, that will be much appreciated.
Yes, I run Giraph jobs on EMR regularly but I don't use "Job Flows", I manually login to the master node and use it as a normal Hadoop cluster (I just submit the job with `hadoop jar` command). You are right, you need to add bootstrap scripts to run Zookeeper and to add Zookeeper details to core-site config. Here is how I did it : Bootstrap actions - `Configure Hadoop s3://elasticmapreduce/bootstrap-actions/configure-hadoop --site-key-value, io.file.buffer.size=65536, --core-key-value, giraph.zkList=localhost:2181, --mapred-key-value, mapreduce.job.counters.limit=1200` `Run if s3://elasticmapreduce/bootstrap-actions/run-if instance.isMaster=true, s3://hpc-chikitsa/zookeeper_install.sh` The contents of zookeeper\_install.sh are : ``` #!/bin/bash wget --no-check-certificate http://apache.mesi.com.ar/zookeeper/zookeeper3.4./zookeeper3.4.5.tar.gz tar zxvf zookeeper-3.4.5.tar.gz cd zookeeper-3.4.5 mv conf/zoo_sample.cfg conf/zoo.cfg sudo bin/zkServer.sh start ``` Then copy your Giraph jar file to master node (using scp) and then ssh to master node and submit the job using `hadoop jar` command. Hope that helps. Here is a relevant mail-thread on giraph-user mailing list : <https://www.mail-archive.com/user%40giraph.apache.org/msg01240.html>
How to plot 2d math vectors with matplotlib? How can we plot 2D math vectors with `matplotlib`? Does anyone have an example or suggestion about that? I have a couple of vectors stored as 2D `numpy` arrays, and I would like to plot them as directed edges. The vectors to be plotted are constructed as below: ``` import numpy as np # a list contains 3 vectors; # each list is constructed as the tail and the head of the vector a = np.array([[0, 0, 3, 2], [0, 0, 1, 1], [0, 0, 9, 9]]) ``` --- **Edit:** I just added the plot of the final answer of `tcaswell` for anyone interested in the output and want to plot 2d vectors with matplotlib: ![enter image description here](https://i.stack.imgur.com/BkIId.png)
The suggestion in the comments by halex is correct, you want to use quiver ([doc](http://matplotlib.sourceforge.net/api/pyplot_api.html#matplotlib.pyplot.quiver)), but you need to tweak the properties a bit. ``` import numpy as np import matplotlib.pyplot as plt soa = np.array([[0, 0, 3, 2], [0, 0, 1, 1], [0, 0, 9, 9]]) X, Y, U, V = zip(*soa) plt.figure() ax = plt.gca() ax.quiver(X, Y, U, V, angles='xy', scale_units='xy', scale=1) ax.set_xlim([-1, 10]) ax.set_ylim([-1, 10]) plt.draw() plt.show() ```
How do I convert String to Number according to locale javascript If I do: `var number = 35000.25; alert(number.toLocaleString("de-DE"));` I will get `35.000,25` in German. But how can I convert it back to `35000.25` or I want something like: `var str='35.000,25'; alert(str.toLocaleNumber("en-US"));` So, that it can give `35,000.25`. Is it possible by JS?
The following function will first construct a NumberFormat based on the given locale. Then it will try to find the decimal separator for that language. Finally it will replace all but the decimal separator in the given string, then replace the locale-dependant separator with the default dot and convert it into a number. ``` function convertNumber(num, locale) { const { format } = new Intl.NumberFormat(locale); const [, decimalSign] = /^0(.)1$/.exec(format(0.1)); return +num .replace(new RegExp(`[^${decimalSign}\\d]`, 'g'), '') .replace(decimalSign, '.'); } // convertNumber('100,45', 'de-DE') // -> 100.45 ``` Keep in mind that this is just a quick proof of concept and might / will fail with more exotic locales that do not follow the assumptions made here (e.g. left-to-right, no weird number insertions, no whitespace, no signs etc.). You can however adapt this...
Is it possible to use 1 Kubernetes ingress object to route traffic to k8s services in different clusters? I have the following setup: k8s cluster A, containing service SA k8s cluster B, containing service SB, and an HTTP ingress that routes traffic to SB Is it possible to add service SA as the backend service for one of the path of the ingress? If so, how do I refer to it in the ingress configuration file? (using selectors in the usual way doesn't work, presumably because we are in different clusters)
Ingress objects help configure HTTP(S) load balancing for a single cluster. They don't have a concept of multiple clusters, so they aren't going to have a configuration language for what you are trying to accomplish (maybe they will with [Ubernetes](https://github.com/kubernetes/kubernetes/blob/master/docs/proposals/federation.md), but they certainly don't today). The upshot is that you can bypass the Ingress configuration and configure the routing manually (after all, Ingress is really just an ease-of-use shortcut for a typical L7 configuration). You can create your own L7 configuration in GCP and set up the path based forwarding to route to different backend groups. You can then assign the backend groups to a `NodePort` service that you configure in each of your clusters. The rough steps are: 1. Create a `NodePort` service in each cluster 2. Create an HTTP health check for each service 3. Add a firewall rule to allow http health checks to hit your backends 4. Add a service to the instance group for your cluster (e.g. `gcloud compute instance-groups managed set-named-ports ...`) 5. Add backend services for the load balancer (e.g. `gcloud compute backend-services create ...`) 6. Add a backend for your cluster to this backend service (e.g. `gcloud compute backend-services add-backend ...`) 7. Map that URL to your backend service (e.g. `gcloud compute url-maps create ...`) 8. Create a load balancing proxy for that backend service (e.g. `gcloud compute target-http-proxies create ...`) 9. Create a forwarding rule for that proxy (e.g. `gcloud compute forwarding-rules create ...`)
What does this refer to in a JavaScript function? ``` function Box(width, height) { this.width = width; this.height = height; } var myBox = new Box(5,5); ``` 1. What is the `new` keyword doing here technically? Is it creating a new function? Or is it creating a new object and applying the function to it? 2. If so then this is a way to create a "Box", does this mean the `this` keyword is actually referring to the object myBox?
It's creating a new object, using `Box` as its constructor. The value of `this` *in this case* (when the function is called with the `new` keyword) is the new instance being constructed. This new object will inherit from whatever is defined as `Box.prototype` (the default being `Object.prototype`). I said *in this case*, because in JavaScript the value of `this` is determined by how the function is called. I recommend reading the [MDN page on `this`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/this) for more information. --- Note: if this question is supposed to be closed, it should have been as a duplicate. Here are some possible duplicate links that might also help you: - [How does the "this" keyword work?](https://stackoverflow.com/questions/3127429/javascript-this-keyword) - [Javascript 'this' value changing, but can't figure out why](https://stackoverflow.com/questions/3562980/javascript-this-value-changing-but-cant-figure-out-why) - [this value in JavaScript anonymous function](https://stackoverflow.com/questions/8670877/this-value-in-javascript-anonymous-function) - [javascript this object](https://stackoverflow.com/questions/10176209/javascript-this-object) - [How does "this" keyword work within a function?](https://stackoverflow.com/questions/133973/how-does-this-keyword-work-within-a-javascript-object-literal)
Allocating memory for 2d matrix using 1 malloc call ``` We can allocate memory for 2d matrix using 1 malloc call as int (*a)[5]; int i,j; ``` a=malloc(sizeof(int\*) \* 5); //allocating 5 pointers and each pointer points to an array of 5 ints How can we free this memory allocated successfully? Using free(a) gives run-time error Using for(i=0;i<5;i++) free(a[i]); free(a); This also gives run-time error
Edit: THE WHOLE STORY. Previously I ignored **THREE** other ways to allocate 2d arrays. **Dynamic 2d array method 1:** This one works if you know the the number of columns at compile time. ``` #define CCOLS 200 int (*m)[CCOLS] = malloc(cRows * sizeof(*m)); m[iRow][iCol] = n; // sets the item at iRow*CCOLS + iCol ... free(m); ``` This works because m is declared as a pointer to an array of CCOLS ints. The compiler knows its size and does the math for you. m[iRow] = an array of CCOLS ints. You can only pass this to functions with this signature: ``` foo(int (*m)[CCOLS]) { ... } ``` and maybe this signature, depending upon your compiler and the switches you use: ``` foo(int m[][CCOLS]) { ... } ``` not this signature: ``` foo(int **m) { ... } ``` Since the memory layouts and sizes are different. int m[][CCOLS] looks like this: ``` +---------+---------+---------+---------+ | m[0][0] | m[0][1] | m[0][2] | m[0][3] | +---------+---------+---------+---------+ | m[1][0] | m[1][1] | m[1][2] | m[1][3] | +---------+---------+---------+---------+ | m[2][0] | m[2][1] | m[2][2] | m[2][3] | +---------+---------+---------+---------+ | m[3][0] | m[3][1] | m[3][2] | m[3][3] | +---------+---------+---------+---------+ ``` int \*\*m looks like this: ``` +----+ +----+----+----+----+----+ |m[0]| ---> | | | | | | +----+ +----+----+----+----+----+ |m[1]| ---> | | | | | | +----+ +----+----+----+----+----+ |m[2]| ---> | | | | | | +----+ +----+----+----+----+----+ |m[3]| ---> | | | | | | +----+ +----+----+----+----+----+ ``` **Dynamic 2d array method 2 (C99 which is not supported by all compilers):** This one is the same as the previous but you don't need to know dimensions at compile time. ``` int cCols, cRows, iCol, iRow; ... set cRows, cCols somehow, they could be passed in as parameters also ... int (*m)[cCols] = malloc(cRows * sizeof(*m)); m[iRow][iCol] = n; // sets the item at iRow*cCols + iCol ... free(m); ``` You can only pass this to functions with this signature: ``` foo(int cCols, m[][cCols]) {} ``` or this one ``` foo(int cRows, int cCols, m[cRows][cCols]) {} ``` If you use gcc, here is more [info](http://gcc.gnu.org/onlinedocs/gcc/Variable-Length.html). **Dynamic 2d array method 3 using the STACK! (C99 which is not supported by all compilers):** This lets you avoid malloc entirely if you are ok with your 2d array on the stack. ``` int cRows, cCols; ... set cRows, cCols somehow ... int m[cRows][cCols]; m[iRow][iCol] = n; ``` I assume you could declare a global variable this way too. You pass this to functions the same way as method 2. **Dynamic 2d array method 4:** This is the array of pointers method that a lot of people use. You use one malloc to allocate to be efficient. And of course you then only use one free. Only if you have HUGE arrays where contiguous memory becomes and issue, would you want to malloc each row individually. ``` int cCols = 10, cRows = 100, iRow; // allocate: // cCols*cRows*sizeof(int) = space for the data // cRows*sizeof(int*) = space for the row ptrs int **m = malloc(cCols*cRows*sizeof(int) + cRows*sizeof(int*)); // Now wire up the row pointers. They take the first cRows*sizeof(int*) // part of the mem becasue that is what m[row] expects. // we want each row pointer to have its own cCols sized array of ints. // We will use the space after the row pointers for this. // One way to calc where the space after the row pointers lies is to // take the address of the nth + 1 element: &m[cRows]. // To get a row ptr, cast &m[cRows] as an int*, and add iRow*cCols to that. 
for (iRow = 0; iRow < cRows; ++iRow) m[iRow] = (int*)&m[cRows] + iRow*cCols; // or for (p=(int*)&m[cRows] ; iRow = 0; iRow < cRows; ++iRow, p+=cCols) m[iRow] = p; // use it: ... m[iRow][iCol] = 10; ... // free it free(m); ```
How to read nltk.text.Text files from nltk.book in Python? i'm learning a lot about Natural Language Processing with nltk, can do a lot of things, but I'm not being able to find the way to read Texts from the package. I have tried things like this: ``` from nltk.book import * text6 #Brings the title of the text open(text6).read() #or nltk.book.text6.read() ``` But it doesn't seem to work, because it has no fileid. No one seems to have asked this question before, so I assume the answer should be easy. Do you know what's the way to read those texts or how to convert them into a string? Thanks in advance
Lets dig into the code =) Firstly, the `nltk.book` code resides on <https://github.com/nltk/nltk/blob/develop/nltk/book.py> If we look carefully, the texts are loaded as an `nltk.Text` objects, e.g. for `text6` from <https://github.com/nltk/nltk/blob/develop/nltk/book.py#L36> : ``` text6 = Text(webtext.words('grail.txt'), name="Monty Python and the Holy Grail") ``` The `Text` object comes from <https://github.com/nltk/nltk/blob/develop/nltk/text.py#L286> , you can read more about how you can use it from <http://www.nltk.org/book/ch02.html> The `webtext` is a corpus from `nltk.corpus` so to get to the raw text of `nltk.book.text6`, you could load the webtext directly, e.g. ``` >>> from nltk.corpus import webtext >>> webtext.raw('grail.txt') ``` The `fileids` comes only when you load a `PlaintextCorpusReader` object, not from the `Text` object (processed object): ``` >>> type(webtext) <class 'nltk.corpus.reader.plaintext.PlaintextCorpusReader'> >>> for filename in webtext.fileids(): ... print(filename) ... firefox.txt grail.txt overheard.txt pirates.txt singles.txt wine.txt ```
how to safely bypass Delphi Error: "types of formal and actual parameters must be identical" I need a way to write a generic procedure to act upon an object type or any of its descendants. My first attempt was to declare ``` procedure TotalDestroy(var obj:TMyObject); ``` but when using it with a descendant object ``` type TMyNewerObject = class(TMyObject); var someNewerObject: TMyNewerObject; TotalDestroy(someNewerObject); ``` I get the infamous error "types of formal and actual parameters must be identical" So, while strugling to find a solution, I looked at the source code of Delphi system `FreeAndNil` procedure. And I found this awesome declaration, along with this astonishing comment ``` { FreeAndNil frees the given TObject instance and sets the variable reference to nil. Be careful to only pass TObjects to this routine. } procedure FreeAndNil(var Obj); ``` It avoids the type checking error, but it uses no safety net. My question is ... is there any safe way to check the type of an untyped var parameter? or in other words, can you improve this Delphi source code so that the warning would not be needed? ``` procedure FreeAndNil(var Obj); var Temp: TObject; begin Temp := TObject(Obj); Pointer(Obj) := nil; Temp.Free; end; ```
I've written about this before, using an example very similar to [Lasse's](https://stackoverflow.com/questions/2051673/how-to-safely-bypass-delphi-error-types-of-formal-and-actual-parameters-must-be/2051769#2051769): - [Delphi Q&A: Why must the types of actual and formal var parameters be identical?](http://www.cs.wisc.edu/~rkennedy/var-identical) Unless you're writing an assignment statement to *change the value* of the input parameter itself, and not just one of its properties, you shouldn't pass a parameter by reference in the first place. If you *are* writing an assignment statement to change the parameter's value, then the compiler message really is true, and you should heed it. One reason for needing to by-pass the error is when you're writing a function like [`TApplication.CreateForm`](http://docwiki.embarcadero.com/VCL/en/Forms.TApplication.CreateForm). Its job is to change the input parameter's value, and the type of the new value varies and cannot be determined at compile time. If you're writing such a function, then your only option with Delphi is to use an [*untyped var*](http://www.cs.wisc.edu/~rkennedy/untyped) parameter, and then there is extra burden on both the caller and the receiver to make sure everything goes right. The caller needs to make sure it passes a variable that is capable of holding values of whatever type the function will put in it, and the function needs to make sure it stores a value of a type compatible with what the caller requested. In the case of `CreateForm`, the caller passes in a class-reference literal and a variable of that class type. The function instantiates the class and stores the reference in the variable. I don't think very highly of either `CreateForm` or `FreeAndNil`, largely because of the way their untyped parameters sacrifice type safety in return for comparatively little extra convenience. You haven't shown the implementation of your `TotalDestroy` function, but I suspect its var parameter will ultimately provide the same low utility as in those other two functions. See my articles on both: - [When should I use `FreeAndNil`?](http://www.cs.wisc.edu/~rkennedy/freeandnil) - [Why shouldn't I call `Application.CreateForm`?](http://www.cs.wisc.edu/~rkennedy/createform)
How to filter RDDs based on a given partition? Consider the following example: ``` JavaPairRDD<String, Row> R = input.textFile("test").mapToPair(new PairFunction<String, String, Row>() { public Tuple2<String, Row> call(String arg0) throws Exception { String[] parts = arg0.split(" "); Row r = RowFactory.create(parts[0],parts[1]); return new Tuple2<String, Row>(r.get(0).toString(), r); }}).partitionBy(new HashPartitioner(20)); ``` The code above creates an RDD named `R` which is partitioned in 20 pieces by hashing on the first column of a txt file named "test". Consider that the `test.txt` file is of the following form: ``` ... valueA1 valueB1 valueA1 valueB2 valueA1 valueB3 valueA1 valueB4 ... ``` In my context, I have a known value e.g., valueA1 and I want to retrieve all the other values. It is trivial to do it by using the existing filter operation with the specified value. However, I would like to avoid this since essentially the filter operation will be performed on the whole RDD. Assume that the hash(valueA1)=3, I would like to perform a given operation only on partition 3. More generally, I am interested in dropping/selecting specific partitions from an RDD and perform operations on them. From the SPARK API it seems that it is not possible directly is there a workaround to achieve the same thing?
For single keys you can use `lookup` method: ``` rdd.lookup("a") // Seq[Int] = ArrayBuffer(1, 4) ``` For an efficient lookup you'll need a RDD which is partitioned, for example using `HashPartitioner` as below. If you want to simply filter partitions containing specific keys it can be done with `mapPartitionsWithIndex`: ``` import org.apache.spark.HashPartitioner val rdd = sc.parallelize( Seq(("a", 1), ("b", 2), ("c", 3), ("a", 4), ("b", 5) // A particular number is used only to get a reproducible output )).partitionBy(new HashPartitioner(8)) val keys = Set("a", "c") val parts = keys.map(_.## % rdd.partitions.size) rdd.mapPartitionsWithIndex((i, iter) => if (parts.contains(i)) iter.filter{ case (k, _) => keys.contains(k) } else Iterator() ).collect // Array[(String, Int)] = Array((a,1), (a,4), (c,3)) ```
Why does Electron need to be saved as a developer dependency? As per the official website, the correct way to save electron files is: ``` npm install electron --save-dev ``` Electron is actually required for running the app (quite literally: `require()`) and this goes against the [top voted answer here](https://stackoverflow.com/questions/18875674/whats-the-difference-between-dependencies-devdependencies-and-peerdependencies). So why do we make this exception, if this is even one?
The fact that you `require` a package is irrelevant to whether it should be considered a dependency or a devDependency (in the npm sense). E.g. many projects use webpack API (i.e. `const webpack = require('webpack')`) but list it as a devDependency. The reason is also explained in the post you link to: when you `publish` your package, if the consumer project needs other packages to use yours, then these must be listed as `dependencies`. If your package uses some modules only for build, test, or bundles them into a dist file (i.e. what will be used by the consumer project), then those modules should not be mentioned in `dependencies`. We still list them in `devDependencies` for development. Now in the case of an electron app, there is little chance you will consume your app as a node module of a consumer project, therefore the above convention is not really relevant. Furthermore, we fall in the case where the `electron` package is bundled as part of the built output. There is no need for your user to get `electron` from npm to use your built app. Therefore it matches well the definition of a devDependency. That being said, IIRC some electron packagers bundle your `dependencies` into the built app, so you still need some rigour in filling this list.
Animate border left to right

I am trying to animate a bottom border on a div so that it looks like a line sliding in from the right. I am using jQuery but can't seem to work out how to achieve it. Can anyone point me in the direction of a tutorial or reading on it?
This is not possible using a border. You can only animate the width of the bottom border (which appears to be its height), not its left/right position or horizontal width.

Instead, look at creating an absolutely positioned element within the element and animating the width of that.

```
<div id="foo">
    Foo
    <div class="slider"></div>
</div>
```

```
#foo {
    font-size: 2em;
    position: relative;
    padding: 30px;
}
.slider {
    position: absolute;
    bottom: 0;
    left: 0;
    height: 2px;
    background-color: #C00;
    width: 0%;
}
```

```
$('.slider').animate({
    width: $('#foo').width()
}, 1000);
```

[**Example fiddle**](http://jsfiddle.net/RoryMcCrossan/qm66x51c/)
Lua Alien - Defined variables in the Win32 API WaitForSingleObject function I am using [Alien for Lua](http://alien.luaforge.net/) to reference the [WaitForSingleObject function](http://msdn.microsoft.com/en-us/library/windows/desktop/ms687032%28v=vs.85%29.aspx) in the Windows Kernel32.dll. I am pretty new to Windows programming, so the question I have is about the following [#define](http://msdn.microsoft.com/en-us/library/vstudio/yt3yck0x.aspx)d variables referenced by the WaitForSingleObject documentation: > > If dwMilliseconds is **INFINITE**, the function will return only when the object is signaled. > > > What is the **INFINITE** value? I would naturally assume it to be `-1`, but I cannot find this to be documented anywhere. Also, with the following table, it mentions the return values in hexadecimal, but I am confused as to why they have an `L` character after the last digit. Could this be something as simple as casting it to a Long? ![Chart](https://i.stack.imgur.com/7ZBqu.jpg) The reason I ask is because Lua uses a [Number data type](http://www.lua.org/pil/2.3.html), so I am not sure if I should be checking for this return value via Hex digits (0-F) or decimal digits (0-9)?
The thought crossed my mind to just open a C++ application and print out these values, so I did just that:

```
#include <windows.h>
#include <cstdlib>
#include <iostream>

int main()
{
    // Print each constant on its own line so the output is readable
    std::cout << "INFINITE:       " << INFINITE << std::endl;
    std::cout << "WAIT_OBJECT_0:  " << WAIT_OBJECT_0 << std::endl;
    std::cout << "WAIT_ABANDONED: " << WAIT_ABANDONED << std::endl;
    std::cout << "WAIT_TIMEOUT:   " << WAIT_TIMEOUT << std::endl;
    std::cout << "WAIT_FAILED:    " << WAIT_FAILED << std::endl;

    system("pause");
    return 0;
}
```

As for the `L` in the documentation: it is just the C/C++ `long` literal suffix, not part of the value itself. Lua's number type does not care whether you write a constant in hex or decimal either; `0xFFFFFFFF` and `4294967295` are the same number, so you can check the return value against whichever form you prefer.

The final Lua constants, based on what the program printed, are:

```
local INFINITE = 4294967295
local WAIT_OBJECT_0 = 0
local WAIT_ABANDONED = 128
local WAIT_TIMEOUT = 258
local WAIT_FAILED = 4294967295
```
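Purely as an illustration of using them (the `kernel32.WaitForSingleObject` wrapper and `handle` below are placeholders for whatever binding you already set up through Alien, not names taken from the post), checking the result is just an ordinary numeric comparison:

```
-- Placeholders: 'kernel32.WaitForSingleObject' stands for your existing Alien
-- binding and 'handle' for the HANDLE you are waiting on.
local result = kernel32.WaitForSingleObject(handle, INFINITE)

if result == WAIT_OBJECT_0 then
  print("object signaled")
elseif result == WAIT_ABANDONED then
  print("abandoned mutex")
elseif result == WAIT_TIMEOUT then
  print("timed out")
elseif result == WAIT_FAILED then
  print("call failed")
end
```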
How to define a constant string in Swagger OpenAPI 3.0

How do I define a constant string variable in Swagger OpenAPI 3.0? If I define an enum, it would look as follows:

```
"StatusCode": {
    "title": "StatusCode",
    "enum": [
        "success",
        "fail"
    ],
    "type": "string"
}
```

But enums can be a list of values. Is there any way to define a string constant in Swagger OpenAPI 3.0? The code can be executed from <http://editor.swagger.io/>
As @Helen already pointed out, and as you can read in the linked answer, currently it does not seem to get any better than an `enum` with only one value.

Full example that can be pasted into <http://editor.swagger.io/>:

```
{
  "openapi": "3.0.0",
  "info": {
    "title": "Some API",
    "version": "Some version"
  },
  "paths": {},
  "components": {
    "schemas": {
      "StatusCode": {
        "title": "StatusCode",
        "enum": [
          "The only possible value"
        ],
        "type": "string"
      }
    }
  }
}
```

There is a related topic on GitHub which is unresolved as of now: <https://github.com/OAI/OpenAPI-Specification/issues/1313>