How do I use subscript and superscript in Swift? I want my `UILabel` to display text in the following manner: 6.022*10²³ (i.e., with the exponent raised). What functions does Swift have for subscript and superscript?
Most of the answers+examples are in ObjC, but this is how to do it in Swift. ``` let font:UIFont? = UIFont(name: "Helvetica", size:20) let fontSuper:UIFont? = UIFont(name: "Helvetica", size:10) let attString:NSMutableAttributedString = NSMutableAttributedString(string: "6.022*1023", attributes: [.font:font!]) attString.setAttributes([.font:fontSuper!,.baselineOffset:10], range: NSRange(location:8,length:2)) labelVarName.attributedText = attString ``` This gives me: ![SuperScript Example](https://i.stack.imgur.com/rWEJa.png) In a more detailed explanation: 1. Get the `UIFont` you want for both the default and superscript styles; the superscript font must be smaller. 2. Create an `NSMutableAttributedString` with the full string and the default font. 3. Add an attribute to the characters you want to change (via an `NSRange`) with the smaller superscript `UIFont`; the `NSBaselineOffsetAttributeName` value is the amount you want to offset them vertically. 4. Assign it to your `UILabel`. *Hopefully this helps other Swift devs as I needed this as well.*
What is the purpose of `.*\\`? I have been playing around with `list.files()` and I wanted to only list `001.csv` through `010.csv` and I came up with this command: ``` list_files <- list.files(directory, pattern = ".*\\000|010", full.names = TRUE) ``` This code gives me what I want, but I do not fully understand what is happening with the pattern argument. How does `pattern = ".*\\000"` work?
`\\0` is a backreference that inserts the whole regex match to that point. Compare the following to see what that can mean: ``` sub("he", "", "hehello") ## [1] "hello" sub("he\\0", "", "hehello") ## [1] "llo" ``` With strings like `"001.csv"` or `"009.csv"`, what happens is that the `.*` matches zero characters, the `\\0` repeats those zero characters one time, and the `00` matches the first two zeros in the string. Success! This pattern **won't** match `"100.csv"` or `"010.csv"` because it can't find anything to match that is doubled and then immediately followed by two `0`s. It **will**, though, match `"1100.csv"`, because it matches `1`, then doubles it, and then finds two `0`s. So, to recap, `".*\\000"` matches any string beginning with `xx00` where `x` stands for any substring of zero or more characters. That is, it matches anything repeated twice and then followed by two zeros.
How to return and download Excel file using FastAPI? How do I return an excel file (version: Office365) using FastAPI? The documentation seems pretty straightforward. But, I don't know what `media_type` to use. Here's my code: ``` import os from fastapi import FastAPI from fastapi.responses import FileResponse from pydantic import BaseModel from typing import Optional excel_file_path = r"C:\Users\some_path\the_excel_file.xlsx" app = FastAPI() class ExcelRequestInfo(BaseModel): client_id: str @app.post("/post_for_excel_file/") async def serve_excel(item: ExcelRequestInfo): # (Generate excel using item.) # For now, return a fixed excel. return FileResponse( path=excel_file_path, # Swagger UI says 'cannot render, look at console', but console shows nothing. media_type='application/vnd.openxmlformats-officedocument.spreadsheetml.sheet' # Swagger renders funny chars with this argument: # 'application/vnd.ms-excel' ) ``` Assuming I get it right, how to download the file? Can I use Swagger UI generated by FastAPI to view the sheet? Or, curl? Ideally, I'd like to be able to download and view the file in Excel.
You could set the [`Content-Disposition` header using the `attachment` parameter](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Disposition#as_a_response_header_for_the_main_body), indicating to the browser that the file should be downloaded, as described in the answers [here](https://stackoverflow.com/a/71728386/17865804) and [here](https://stackoverflow.com/a/71639658/17865804). Swagger UI will provide a `Download file` link for you to download the file, as soon as you execute the request. ``` headers = {'Content-Disposition': 'attachment; filename="Book.xlsx"'} return FileResponse(excel_file_path, headers=headers) ``` To have the file viewed in the browser, one can use the `inline`, instead of `attachment`, parameter in the `Content-Disposition` header, as explained in the linked answers earlier. However, for the browser to be able to display the Excel file, one should set the correct `media_type` in the [`FileResponse`](https://fastapi.tiangolo.com/advanced/custom-response/#fileresponse) (for Excel files see [here](https://stackoverflow.com/questions/974079/setting-mime-type-for-excel-document)), as well as `.xlsx` (or `.xls`) must be a known file extension to the browser (this is usually achieved through browser extensions/plug-ins).
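As a minimal sketch of the `inline` variant described above (the endpoint name here is hypothetical, purely for demonstration):

```python
from fastapi import FastAPI
from fastapi.responses import FileResponse

app = FastAPI()

excel_file_path = r"C:\Users\some_path\the_excel_file.xlsx"

@app.get("/view_excel_file/")  # hypothetical endpoint for demonstration
async def view_excel():
    # 'inline' asks the browser to display the file instead of downloading it;
    # whether it actually renders depends on the browser and installed plug-ins
    headers = {'Content-Disposition': 'inline; filename="Book.xlsx"'}
    return FileResponse(
        excel_file_path,
        headers=headers,
        media_type='application/vnd.openxmlformats-officedocument.spreadsheetml.sheet',
    )
```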
Define member in class, but actual type not known till run time This is probably a basic question for a regular C++ user. Functionally, I have an ECG monitor, and want to choose which output format to use at runtime. I have set up two classes that expose the same methods and members (ECGRecordingDefaultFormat and ECGRecordingEDFFormat) eg. ->InsertMeasure, ->setFrequency, ->setPatientName etc. I know I could define one instance of each format class type, then put in: ``` if (ECGFormatToUse == ECGFormat.EDF) { ecgEDFFormat.InsertMeasure(x); } if (ECGFormatToUse == ECGFormat.Default) { ecgDefaultFormat.InsertMeasure(x); } ``` all throughout the code, but I think I might not be using C++'s dynamic typing to its full extent. The question is: can I define just one variable in main(), and after choosing the format I want at runtime, have the code use the right class with its exposed 'InsertMeasure' method, avoiding a whole lot of if/else's throughout the code? I'd be happy with just a reference to which aspect of inheritance/polymorphism (?) I should be using, and can google away the rest. Thanks guys. Pete
You can combine the [factory pattern](http://en.wikipedia.org/wiki/Factory_pattern) with C++'s polymorphism. ``` enum class ECGFormat { Default, EDF }; class Base { public: virtual ~Base() {} virtual void InsertMeasure() = 0; //pure virtual, makes the base class abstract }; class ECGRecordingDefaultFormat : public Base { public: virtual void InsertMeasure(); }; class ECGRecordingEDFFormat : public Base { public: virtual void InsertMeasure(); }; class Factory { public: static Base* create(ECGFormat format) { if ( format == ECGFormat::EDF ) return new ECGRecordingEDFFormat; if ( format == ECGFormat::Default ) return new ECGRecordingDefaultFormat; return NULL; } }; int main() { ECGFormat format = ECGFormat::Default; //set the format as appropriate Base* ECGRecordingInstance = Factory::create(format); ECGRecordingInstance->InsertMeasure(); delete ECGRecordingInstance; return 0; } ```
Regex for WhatsApp Chat Log I would like to return a WhatsApp conversation as an array. To do so, I am utilizing regex, however I can't get it to work. This is a sample chat as exported via WhatsApp (edited): ``` 6/13/18, 3:40 AM - Messages to this group are now secured with end-to-end encryption. Tap for more info. 6/13/18, 3:40 AM - You created group "Test Group" 6/13/18, 3:42 AM - Zack added Emma 6/13/18, 4:06 AM - Zack added Json 6/13/18, 2:35 PM - Zack: Let's meet tomorrow. 6/15/18, 5:34 PM - Emma: I'll create the Discord server by tonight. We'll look into making the parser. 7/15/18, 12:05 PM - Zack: Great, I'll add that to our schedule. 7/15/18, 12:05 PM - Json: On our team calander - TCal? 7/15/18, 12:05 PM - Zack: Yes, added on 7/15/18, 12:05 PM. 7/15/18, 12:05 PM - Emma: Are we going JS on this? 7/15/18, 12:05 PM - Json: You bet. 7/15/18, 12:05 PM - Zack: JS is love, JS is life. 7/15/18, 1:46 PM - Emma: Haha. 7/15/18, 4:53 PM - Json: ``` I've tried the following: ``` /\d{1,2}\/\d{1,2}\/\d{2},\s\d{1,2}:\d{2}\s[AP]M\s-.+\n?/g ``` which, as you'd expect, gets me the individual lines, but if a message has more than one line then the continuation gets skipped. E.g. for > 6/15/18, 5:34 PM - Emma: I'll create the Discord server by tonight. We'll look into making the parser. it extracts: > 6/15/18, 5:34 PM - Emma: I'll create the Discord server by tonight. but I would like it to extract as: > Emma: I'll create the Discord server by tonight. We'll look into making the parser.
You may use your (a bit shortened) pattern inside a positive lookahead: ``` s.split(/(?=^\d{1,2}\/\d{1,2}\/\d{2},\s\d{1,2}:\d{2}\s[AP]M)/m).filter(Boolean) ``` See the [regex demo](https://regex101.com/r/zaBeMU/1) Here, the pattern will match each line start position that is immediately followed with the `\d{1,2}\/\d{1,2}\/\d{2},\s\d{1,2}:\d{2}\s[AP]M` pattern. JS Demo: ``` var s = "6/13/18, 3:40 AM - Messages to this group are now secured with end-to-end encryption. Tap for more info.\r\n6/13/18, 3:40 AM - You created group \"Test Group\"\r\n6/13/18, 3:42 AM - Zack added Emma\r\n6/13/18, 4:06 AM - Zack added Json\r\n6/13/18, 2:35 PM - Zack: Let's meet tomorrow.\r\n6/15/18, 5:34 PM - Emma: I'll create the Discord server by tonight.\r\nWe'll look into making the parser.\r\n7/15/18, 12:05 PM - Zack: Great, I'll add that to our schedule.\r\n7/15/18, 12:05 PM - Json: On our team calander - TCal?\r\n7/15/18, 12:05 PM - Zack: Yes, added on 7/15/18, 12:05 PM.\r\n7/15/18, 12:05 PM - Emma: Are we going JS on this?\r\n7/15/18, 12:05 PM - Json: You bet.\r\n7/15/18, 12:05 PM - Zack: JS is love, JS is life.\r\n7/15/18, 1:46 PM - Emma: Haha.\r\n7/15/18, 4:53 PM - Json: "; console.log(s.split(/(?=^\d{1,2}\/\d{1,2}\/\d{2},\s\d{1,2}:\d{2}\s[AP]M)/m).filter(Boolean)); ``` You may trim each item in the resulting array if you add `.map(x => x.trim())` (or `.map(function(x) { return x.trim(); })`).
Different nautilus thumbnails rendering for text files I have Ubuntu 14.04 Gnome edition on both my laptop and the desktop; both are up-to-date but they have a different past history of installations and upgrades. Now, text-type thumbnails are different on the two machines, and I am unable to find where the difference is. This post: [What rendering engine does Nautilus use to show HTML previews/thumbnails?](https://askubuntu.com/questions/13648/what-rendering-engine-does-nautilus-use-to-show-html-previews-thumbnails) seems obsolete (no entries in `dconf` for them), and [Generate thumbnails for text?](https://askubuntu.com/questions/558846/generate-thumbnails-for-text) would help (but it's unanswered). Look at this composed shot: ![thumbnails for text](https://i.stack.imgur.com/24VEz.png) On the laptop the thumbnails show a small excerpt of the content of the file (the real content) while on my desktop they are simply blank. I dug around a bit but I can't find which option/package is responsible for the laptop behavior (which I do prefer, by the way). How can I enable the "text snapshot" in thumbnails? (And if they had syntax highlighting, like in the linked question, it would be great).
It seems that for an **icon theme** to let **Nautilus** display the contents of **plain text files** as thumbnails, its sub-directory for mime types (which might be named, for example, **mimes** or **mimetypes**) should have a file named **text-x-preview.icon** in each sub-directory for the different icon sizes (for example, 22, 24, 48), along with an icon file named **text-x-preview.png** or **text-x-preview.svg**, etc. serving as a ***backdrop for the file contents thumbnail***. The **.icon** file is actually a plain text file containing settings for the thumbnail. For example, I have added `~/.local/share/icons/Nitrux/mimetypes/48/text-x-preview.icon` (imported from the default **Humanity** icon theme) to my current icon theme, and only then did it begin displaying thumbnails for plain text files. The file has the following content: ``` [Icon Data] EmbeddedTextRectangle=180,100,680,900 AttachPoints=750,750|750,250|250,250|250,750 ``` I'll hopefully try and add more information about the settings above in due course. - It seems having **text-x-preview.icon** and **text-x-preview.svg** (or another valid image file type) in the sub-directory for size **48px** allows Nautilus to display thumbnails for all sizes **from 48px upwards**, but not for sizes below 48px, for which you also need **text-x-preview.icon** and **text-x-preview.svg** (or another valid image file type) at least in the sub-directories for sizes **22px and 24px** (this lets Nautilus display thumbnails also for 16px in some but not all icon themes). These two files in the **Humanity** icon theme have exactly the same content: ``` [Icon Data] EmbeddedTextRectangle=180,100,680,900 ``` --- **NOTE: *Unfortunately this whole thing no longer works under Ubuntu 15.04***
Mutable variable is accessible from closure. How can I fix this? I am using Typeahead by Twitter. I am running into this warning from IntelliJ. This is causing the "window.location.href" for each link to be the last item in my list of items. How can I fix my code? Below is my code: ``` AutoSuggest.prototype.config = function () { var me = this; var comp, options; var gotoUrl = "/{0}/{1}"; var imgurl = '<img src="/icon/{0}.gif"/>'; var target; for (var i = 0; i < me.targets.length; i++) { target = me.targets[i]; if ($("#" + target.inputId).length != 0) { options = { source: function (query, process) { // where to get the data process(me.results); }, // set max results to display items: 10, matcher: function (item) { // how to make sure the result select is correct/matching // we check the query against the ticker then the company name comp = me.map[item]; var symbol = comp.s.toLowerCase(); return (this.query.trim().toLowerCase() == symbol.substring(0, 1) || comp.c.toLowerCase().indexOf(this.query.trim().toLowerCase()) != -1); }, highlighter: function (item) { // how to show the data comp = me.map[item]; if (typeof comp === 'undefined') { return "<span>No Match Found.</span>"; } if (comp.t == 0) { imgurl = comp.v; } else if (comp.t == -1) { imgurl = me.format(imgurl, "empty"); } else { imgurl = me.format(imgurl, comp.t); } return "\n<span id='compVenue'>" + imgurl + "</span>" + "\n<span id='compSymbol'><b>" + comp.s + "</b></span>" + "\n<span id='compName'>" + comp.c + "</span>"; }, sorter: function (items) { // sort our results if (items.length == 0) { items.push(Object()); } return items; }, // the problem starts here when I start using target inside the functions updater: function (item) { // what to do when item is selected comp = me.map[item]; if (typeof comp === 'undefined') { return this.query; } window.location.href = me.format(gotoUrl, comp.s, target.destination); return item; } }; $("#" + target.inputId).typeahead(options); // lastly, set up the functions for the buttons $("#" + target.buttonId).click(function () { window.location.href = me.format(gotoUrl, $("#" + target.inputId).val(), target.destination); }); } } }; ``` With @cdhowie's help, some more code: I will update the updater and also the href for the click() ``` updater: (function (inner_target) { // what to do when item is selected return function (item) { comp = me.map[item]; if (typeof comp === 'undefined') { return this.query; } window.location.href = me.format(gotoUrl, comp.s, inner_target.destination); return item; }}(target))}; ```
You need to nest two functions here, creating a new closure that captures the value of the variable (instead of the variable itself) *at the moment the closure is created*. You can do this using arguments to an immediately-invoked outer function. Replace this expression: ``` function (item) { // what to do when item is selected comp = me.map[item]; if (typeof comp === 'undefined') { return this.query; } window.location.href = me.format(gotoUrl, comp.s, target.destination); return item; } ``` With this: ``` (function (inner_target) { return function (item) { // what to do when item is selected comp = me.map[item]; if (typeof comp === 'undefined') { return this.query; } window.location.href = me.format(gotoUrl, comp.s, inner_target.destination); return item; } }(target)) ``` Note that we pass `target` into the outer function, which becomes the argument `inner_target`, effectively capturing the value of `target` at the moment the outer function is called. The outer function returns an inner function, which uses `inner_target` instead of `target`, and `inner_target` will not change. (Note that you can rename `inner_target` to `target` and you will be okay -- the closest `target` will be used, which would be the function parameter. However, having two variables with the same name in such a tight scope could be very confusing and so I have named them differently in my example so that you can see what's going on.)
C header file is causing warning "ISO C requires a translation unit to contain at least one declaration" Using Qt Creator I made these plain C files just to test my understanding: `main.c` ``` #include <stdio.h> #include "linked.h" int main() { printf("Hello World!\n"); printf("%d", linked()); return 0; } ``` `linked.h` ``` #ifndef LINKED_H_ #define LINKED_H_ int linked(void); #endif // LINKED_H ``` `linked.c` ``` int linked() { return 5; } ``` The IDE shows a warning on the line of `linked.h` in-between `#define LINKED_H_` and `int linked(void);` which reads ``` ISO C requires a translation unit to contain at least one declaration ``` My best guess about what this means is that any header or other C file, if it is in a project, should get used in the main file at least once somewhere. I've tried searching the warning but if this has been answered elsewhere, I'm not able to understand the answer. It seems to me I've used the `linked` function and so it shouldn't give me this warning. Can anyone explain what's going on? The program compiles and runs exactly as expected.
I think the issue is that you don't `#include "linked.h"` from `linked.c`. The current `linked.c` file doesn't have any declarations; it only has one function definition. To fix this, add this line to `linked.c`: ``` #include "linked.h" ``` I don't know why it says this is an issue with `linked.h`, but it seems to be quite a coincidence that the line number you pointed out just happens to be the line number of the end of `linked.c`. Of course, that may be all this is: a coincidence. So, if that doesn't work, try putting some sort of external declaration in this file. The easiest way to do that is to include a standard header, such as `stdio.h`. I would still advise you to `#include "linked.h"` from inside `linked.c`, though.
Configuring SKS to sync with other KeyServers Is there a procedure that needs to be followed in order to get an [SKS keyserver](https://bitbucket.org/skskeyserver/sks-keyserver/wiki/Home) to sync with other keyservers (specifically, Ubuntu's keyservers)? If so, what is that process?
If it is a keyserver you manage, it should be a matter of adding peers to the "membership" file, and having the administrators of those peers add your server to their "membership" files. Depending on the counterparty servers, you may find that you need to use PKS synchronisation via email: this is probably best discussed with the counterparty administrators. Specific instructions are available in the SKS README, and if you have a peering relationship with one of the larger keyservers, the admins there can likely help with specifics related to their preferred peering arrangements. In the specific case of synchronisation with the Ubuntu keyservers, you may either work with other keyserver admins in the wide public set, with which the Ubuntu keyserver regularly synchronises, or request counterparty configuration directly from the Ubuntu keyserver by opening a request at <http://rt.ubuntu.com/>. Be aware that the administrators of this service are very busy, so there may be some delay before you get a response. Note that your keyserver may need to meet certain conditions in order to be accepted as a peer. If you seek synchronisation between some third-party keyserver and the Ubuntu keyserver, you would need to ask the admins of the third-party keyserver to request peering (either with the Ubuntu keyserver or with some keyserver with which the Ubuntu keyserver is peered (directly or indirectly)). In this case, it is probably easier to just upload your key(s) to one of the keyservers already in sync with the Ubuntu keyserver.
Setting the size of the plotting canvas in Matplotlib I would like Matplotlib/Pyplot to generate plots with a consistent canvas size. That is, the figures can well have different sizes to accommodate the axis descriptions, but the plotting area (the rectangle within which the curves are drawn) should always have the same size. Is there a simple way to achieve that? The option figsize of pyplot.figure() seems to set the overall size of the figure, not that of the canvas, so I get a different canvas size whenever the axis description occupies more or less space.
This is one of my biggest frustrations with Matplotlib. I often work with raster data where, for example, I want to add a colormap, legend and some title. Any simple example from the matplotlib gallery doing so will result in a different resolution and therefore resampled data. Especially when doing image analysis you don't want any (unwanted) resampling. Here is what I usually do, although I would love to know if there are simpler or better ways. Let's start with loading a picture and outputting it just as it is, with the same resolution: ``` import matplotlib.pyplot as plt import urllib2 # load the image img = plt.imread(urllib2.urlopen('http://upload.wikimedia.org/wikipedia/en/thumb/5/56/Matplotlib_logo.svg/500px-Matplotlib_logo.svg.png')) # get the dimensions ypixels, xpixels, bands = img.shape # get the size in inches dpi = 72. xinch = xpixels / dpi yinch = ypixels / dpi # plot and save in the same size as the original fig = plt.figure(figsize=(xinch,yinch)) ax = plt.axes([0., 0., 1., 1.], frameon=False, xticks=[],yticks=[]) ax.imshow(img, interpolation='none') plt.savefig('D:\\mpl_logo.png', dpi=dpi, transparent=True) ``` Note that I manually defined the axes position so that it spans the entire figure. In a similar way as above you could add some margin around the image to allow for labels or colorbars etc. This example adds a 20% margin above the image, which is then used for plotting a title: ``` fig = plt.figure(figsize=(xinch,yinch/.8)) ax = plt.axes([0., 0., 1., .8], frameon=False, xticks=[],yticks=[]) ax.imshow(img, interpolation='none') ax.set_title('Matplotlib is fun!', size=16, weight='bold') plt.savefig('D:\\mpl_logo_with_title.png', dpi=dpi) ``` So the figure y-size (height) is increased and the y-size of the axes is decreased equally. This gives a larger (overall) output image, but the axes area will still be the same size. It might be nice to have a figure or axes property like .set_scale() to force a true 1-on-1 output.
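To answer the sizing question directly, one way to get a constant plotting area across figures is to fix the axes rectangle in inches and derive the figure size from it. Below is a minimal sketch of that idea; the margin values and example data are arbitrary assumptions, not anything from the original setup:

```python
import matplotlib.pyplot as plt

# Desired size of the plotting area (the axes box) in inches -- assumed values
ax_w, ax_h = 4.0, 3.0
# Fixed margins (in inches) reserved for tick labels, axis labels, title, etc.
left, right, bottom, top = 1.0, 0.3, 0.8, 0.5

fig_w, fig_h = ax_w + left + right, ax_h + bottom + top
fig = plt.figure(figsize=(fig_w, fig_h))
# add_axes takes [x0, y0, width, height] as fractions of the figure size,
# so the axes box always comes out exactly ax_w x ax_h inches
ax = fig.add_axes([left / fig_w, bottom / fig_h, ax_w / fig_w, ax_h / fig_h])
ax.plot([0, 1, 2], [0, 1, 4])
ax.set_xlabel('x')
ax.set_ylabel('y')
plt.savefig('fixed_canvas.png')
```

The trade-off is that the margins are fixed rather than computed from the labels, so unusually long axis descriptions can overflow; but every figure produced this way has an identically sized canvas.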
Enrich Seq log statements with properties without also including them in the message line (using Serilog) I am using Serilog with Seq and want to enrich the logging that appears in Seq with my own properties. If I enter a log statement like... ``` Log.Information("ProcessCycle {Site} {Activity}", SiteName, ActivityName); ``` In Seq I get... [![enter image description here](https://i.stack.imgur.com/qyLdJ.png)](https://i.stack.imgur.com/qyLdJ.png) Notice the Site and Activity values are shown as enriched properties in Seq, but they are also displayed in the overall message. How can I log so that I get enriched properties, but not have the values appear in the text message line? Notice I have the NuGet package that adds a ThreadId to each call. I want the Site and Activity properties to be in the list of enriched props, but not necessarily printed in the message line. The answer for this might also require an understanding of our application. The application is a Windows service that spawns multiple activities that do different things. So the Windows service orchestrates the various activities contained within. On a schedule it simply calls 'Process' on each activity to go off and do some work. Each time Process is called by the orchestrator, I need all logging by that Activity to automatically include the Site and Activity values as shown above (along with many more property values, but I don't want it all printed in the message line). So instead of the above entry, we would see... Notice the message now reads just "ProcessCycle". [![enter image description here](https://i.stack.imgur.com/ISiYE.png)](https://i.stack.imgur.com/ISiYE.png)
``` Log.Information("ProcessCycle {Site} {Activity}", SiteName, ActivityName); ``` Needs to be changed to: ``` Log.ForContext("Site",SiteName) .ForContext("Activity",ActivityName) .Information("ProcessCycle") ``` To render as you desire. You can also do `Enrich.FromLogContext` and do `LogContext.PushProperty` to do it more globally (search for those two strings to find an example). **Added by John Livermore** More information on FromLogContext and other methods can be found at... <https://nblumhardt.com/2016/08/context-and-correlation-structured-logging-concepts-in-net-5/> FromLogContext creates an ILogger that can be used in scope for subsequent logging calls.
Reading in a price (eg. $89.95) into a double I am working on an assignment that requires that I read in data from a txt file. The data fields are for books, so I have a title, book id, price, quantity. Everything is working well except for reading in the price. I am using atof() for that, which works when I remove the '$' sign from the front of the price, but returns '0' when the '$' is there. How do I make it ignore the '$'? An example of the txt file: ``` Introduction to programming languages 1 $89.99 100 ``` Here is my code so far: ``` #include <iostream> #include <fstream> #include <string> #include <cstdlib> using namespace std; int main() { char title[50]; char strBookid[10]; char strPrice[10]; char strAmount[10]; int bookId; double price; int amount; ifstream filein("bookfile.txt"); filein.getline(title, 50); cout << "Title : " << title << endl; filein.getline(strBookid, 10); cout << "BookId as a string : " << strBookid << endl; filein.getline(strPrice, 10); cout << "Price as a string : " << strPrice << endl; filein.getline(strAmount, 10); cout << "Qty as a string: " << strAmount << endl; bookId = std::atoi(strBookid); cout << "The Book ID as an int : " << bookId << endl; price = std::atof(strPrice); cout << "The price as a double : " << price << endl; return 0; } ```
You see, the folks behind the C++ standard love money and they know we all do, so they put up a nice way to read money in a generic manner in C++: [`std::get_money`](http://en.cppreference.com/w/cpp/locale/money_get). You can do this: ``` #include <iostream> #include <fstream> #include <string> #include <cstdlib> #include <locale> //add this #include <iomanip> //add this: std::get_money lives here #include <limits> //add this: for std::numeric_limits below using namespace std; int main() { char title[50]; char strBookid[10]; char strAmount[10]; int bookId; long double price; //changed! get_money only supports long double int amount; ifstream filein("bookfile.txt"); filein.getline(title, 50); cout << "Title : " << title << endl; filein.getline(strBookid, 10); cout << "BookId as a string : " << strBookid << endl; filein.imbue(std::locale("en_US.UTF-8")); /// added filein >> std::get_money(price); ///changed price /= 100; //get_money uses the lowest denomination, in this case cents, so we convert it to $ by dividing the value by 100 filein.ignore(numeric_limits<streamsize>::max(), '\n'); //added: skip the rest of the price line so the next getline works cout << "The price as a long double : $" << price << endl; ///changed filein.getline(strAmount, 10); cout << "Qty as a string: " << strAmount << endl; bookId = std::atoi(strBookid); cout << "The Book ID as an int : " << bookId << endl; return 0; } ``` --- As a second alternative, you can modify your original code to test for the `$` sign manually... (see the snippet below) ``` ......many lines skipped ........... bookId = std::atoi(strBookid); cout << "The Book ID as an int : " << bookId << endl; price = std::atof(strPrice[0] == '$' ? strPrice+1 : strPrice ); //modified cout << "The price as a double : " << price << endl; return 0; } ```
IPv6 routing problem I've received a native IPv6 /64 subnet from my server provider, but I can't get it to play the way I want. I'm running virtual machines, and I want them to have their own public IP(s). Let's pretend I got the following information: IPs: 1:1:1:1::/64, gateway: 1:1:1:0::1 Now I want the host machine to have IP 1:1:1:1::1, and the rest routed from eth0 to the internal br0 where all the virtual machines are bridged. It's possible to ping6 ipv6.google.com from the root machine, so it seems to work so far. I've enabled routing, and it seems as if the packets are properly forwarded from the internal machine (from br0) through eth0 - but nothing more than that... no reply. Note, eth0 is not a part of br0. Any ideas?
Unless your provider really *routes* your subnet to your host, you'll have to set up your host so that it proxies ICMPv6 neighbourhood discovery (ND) for the IPs of your guests. Assuming you have been assigned `2001:0DB8:A::/64`, your host is `2001:0DB8:A::1` on `eth0`, and your guest VM uses `2001:0DB8:A::2` (on a virtual bridge `br0`). To tell your host to proxy ND queries issue the following command: ``` ip -6 neigh add proxy 2001:0DB8:A::2 dev eth0 ``` Also make sure that you have forwarding and ND proxying enabled for IPv6: ``` sysctl -w net.ipv6.conf.all.forwarding=1 sysctl -w net.ipv6.conf.all.proxy_ndp=1 ``` On the guest, you can either use the link-local address of the host on `br0` as the default gateway, or you can add an additional IP on the `br0` interface on the host and use that as the default gateway in the guests.
static thread function access non-static class member in C++ ``` class Test { int value; static void* thread_func(void* args){ value++; } void newthread(){ pthread_create(&thread_func,...); } }; ``` I'm trying to create a thread in `class Test`. Therefore the compiler forces me to make `thread_func` static. However I cannot access the non-static member "`value`" anymore. It says: ``` invalid use of member 'Class::value' in static member function ``` Is there a way around it?
> However I cannot access the non-static member "value" anymore. That is because a `static` function in your class doesn't have (and **cannot** have) a `this` pointer. All you need to do is pass a pointer to your `Test` object to the [pthread\_create()](http://www.kernel.org/doc/man-pages/online/pages/man3/pthread_create.3.html) function as the **fourth** argument, and then do this: ``` static void* thread_func(void* args) { Test *test = static_cast<Test*>(args); test->value++; //write return statement properly } ``` --- However, if you're doing too many things in `thread_func()` that require access to `Test` class members at many places, then I would suggest this design: ``` //this design simplifies the syntax to access the class members! class Test { //code omitted for brevity static void* thread_func(void* args) { Test *test = static_cast<Test*>(args); test->run(); //call the member function! //write return statement properly } void run() //define a member function to run the thread! { value++; //now you can do this, because it is same as 'this->value++' //you do have 'this' pointer here, as usual; //so access other members like 'value++'. } //code omitted for brevity }; ``` --- # Better design : define a reusable class! Even better would be to define a **reusable** class with *pure* virtual function `run()` to be implemented by the derived classes. Here is how it should be designed: ``` //runnable is a reusable class. All thread classes must derive from it! class runnable { public: virtual ~runnable() {} static void* run_thread(void *args) //note: must return void* for pthreads { runnable *prunnable = static_cast<runnable*>(args); prunnable->run(); return NULL; //write a more meaningful return value if needed } protected: virtual void run() = 0; //derived class must implement this! }; class Test : public runnable //derived from runnable! { public: void newthread() { //note &runnable::run_thread pthread_create(&runnable::run_thread,..., this); } protected: void run() //implementing the virtual function! { value++; // your thread function! } }; ``` Looks better?
Is it possible to boot Linux from a GPT disk on a BIOS system? I have an AMI BIOS computer [Asus EB1501P] with a Seagate ST9250315AS 250GB HDD. Is it possible to boot Linux from a GPT disk on such a BIOS system?
The BIOS generally doesn't care anything about your hard drives¹. It simply loads the MBR and transfers control to the boot loader in the MBR. Therefore technically it'll be possible to boot a GPT drive in BIOS mode, because the GPT drive still has a protective MBR at the beginning. You just need a bootloader that supports GPT disks (such as Grub and many other Linux bootloaders). However, here a small problem arises. On MBR drives the boot loaders often *cheat* a bit by storing a part of themselves in the next sectors, called the ["MBR gap", "boot track", or "embedding area"](https://en.wikipedia.org/wiki/GUID_Partition_Table#Features), which are often left empty by disk partitioning tools. On a GPT disk the sectors right after the MBR are GPT data structures, hence can't be used for that purpose, and you need to create a small [*BIOS Boot Partition*](https://en.wikipedia.org/wiki/BIOS_boot_partition) for Grub to store its data: > On a BIOS/GPT configuration, a [BIOS boot partition](https://www.gnu.org/software/grub/manual/grub/html_node/BIOS-installation.html#BIOS-installation) is required. GRUB embeds its `core.img` into this partition. Note: - Before attempting this method keep in mind that not all systems will be able to support this partitioning scheme. Read more on GUID partition tables. - This additional partition is only needed on a GRUB, BIOS/GPT partitioning scheme. Previously, for a GRUB, BIOS/MBR partitioning scheme, GRUB used the Post-MBR gap for embedding the core.img. GRUB for GPT, however, does not use the Post-GPT gap, to conform to GPT specifications that require 1_megabyte/2048_sector disk boundaries. - For UEFI systems this extra partition is not required, since no embedding of boot sectors takes place in that case. However, UEFI systems still require an EFI system partition. Create a mebibyte partition (+1M with fdisk or gdisk) on the disk with no file system and with partition type GUID `21686148-6449-6E6F-744E-656564454649`. - Select partition type `BIOS boot` for `fdisk`. - Select partition type code `ef02` for `gdisk`. - For `parted` set/activate the flag `bios_grub` on the partition. [GUID Partition Table (GPT) specific instructions](https://wiki.archlinux.org/index.php/GRUB#GUID_Partition_Table_.28GPT.29_specific_instructions) Grub also supports **hard coding the sector** that contains the next stage so it can boot without a post-MBR gap or BIOS boot partition, but that's fragile because you need to update Grub after every OS update. Therefore this isn't recommended. For more information you can read: - [How grub2 works on a MBR partitioned disk and GPT partitioned disk?](https://superuser.com/q/1165557/241386) - <https://wiki.archlinux.org/index.php/partitioning#Choosing_between_GPT_and_MBR> - [Legacy BIOS Issues with GPT](https://www.rodsbooks.com/gdisk/bios.html) - [Booting from GPT](http://www.rodsbooks.com/gdisk/booting.html) - [Grub BIOS installation official documentation](https://www.gnu.org/software/grub/manual/grub/html_node/BIOS-installation.html) --- Another way is to **convert the GPT drive back to MBR** if your HDD is not too big. In fact ***it's possible to have MBR disks above 2 TB***, up to ~2³³ sectors (i.e. 4 TB and 16 TB for disks with 512-byte and 4096-byte sectors respectively) with a big partition lasting from just before the half disk margin.
There are multiple tools to do the conversion without loss of data, like [gdisk](http://www.rodsbooks.com/gdisk/mbr2gpt.html), [MiniTool Partition Wizard](https://www.partitionwizard.com/), [AOMEI Partition Assistant](https://www.disk-partition.com/free-partition-manager.html), [EaseUS Partition Master](https://www.easeus.com/partition-master/convert-gpt-disk-to-mbr-disk.html)... (I'm not affiliated with any of them). Since your HDD is just 250 GB, it'll work fine as MBR. But using GPT is much safer because it has checksums and a backup table at the end, although it's a lot trickier if you dual boot Windows or some other OSes that are BIOS/GPT unfriendly --- ¹ There are some buggy BIOSes that eagerly do things they aren't supposed to do, like checking the [MBR signature](https://stackoverflow.com/q/53920440/995714) or the active [boot flag](https://en.wikipedia.org/wiki/Boot_flag), and refuse to boot from such drives
Javascript adding linebreak in mailto body I'm setting the body of an email using values from a form ``` firstname = bob lastname = dole ebody = 'First Name: ' + firstname + '\r\n' + 'Last Name: ' + lastname window.location.href = 'mailto:[email protected]?subject=test email&body=' + ebody; ``` If I do an "alert(ebody);" I get the linebreak between firstname & lastname, however when it opens up Outlook, the entire ebody string appears without a linebreak in the email body. I've tried just \n also. Is there something that can give me a line break? Thanks in advance
[RFC 2368](https://www.rfc-editor.org/rfc/rfc2368) says that mailto body content must be URL-encoded, using the %-escaped form for characters that would normally be encoded in a URL. Those characters include spaces and (as called out explicitly in section 5 of 2368) CR and LF. You could do this by writing ``` ebody = 'First%20Name:%20' + firstname + '%0D%0A' + 'Last%20Name:%20' + lastname; ``` but it's easier and better to have JavaScript do the escaping for you, like this: ``` ebody = 'First Name: ' + firstname + '\r\n' + 'Last Name: ' + lastname; ebody = encodeURIComponent(ebody); ``` Not only will that save you from having to identify and look up the hex values of characters that need to be encoded in your fixed text, it will also encode any goofy characters in the `firstname` and `lastname` variables.
Why does jetpack-compose state hoisting use callbacks? [This tutorial](https://developer.android.com/codelabs/jetpack-compose-basics#7) on state hoisting in jetpack-compose says: > > By passing a function and not a state to OnboardingScreen we are making this composable more reusable and protecting the state from being mutated by other composables. In general, it keeps things simple. > > > How does passing a callback function achieve these stated goals any better than passing the state directly?
It's because the composable function is now "stateless", meaning it will not change anything in its scope. It's simple because: 1. For every input, the function has the same output (since it only relies on its inputs and does not change anything on the outside). 2. It becomes very testable (because of 1). 3. It gives the power to anyone who is using it. They can "change" the state however they want. Imagine a `TextField(text: String, onTextChanged: (text: String) -> Unit)`. You can change the state however you want (for example, remove any digits from the string and then apply it to your state). If `TextField` handled the state itself, it would always change the state to the actual text (and you would need another mechanism for the said scenario). This brings simplicity to your composable, as you do not have to worry about how your composable (e.g. `TextField`) changes a state.
jquery - Click event not working for dynamically created button My requirement is to create a number of buttons equal to the JSON array count. I was successful in creating the buttons dynamically in jQuery, but the click handler set up in jQuery's .ready function is never called for them. I have tried searching on SO and found a few solutions, but nothing worked for me. I am very new to jQuery. Please help... my code: jQuery: ``` $(document).ready(function() { currentQuestionNo = 0; var questionsArray; $.getJSON('http://localhost/Sample/JsonCreation.php', function(data) { questionsArray = data; variable = 1; //CREATE QUESTION BUTTONS DYNAMICALLY ** NOT WORKING for (var question in questionsArray) { var button = $("<input>").attr("type", "button").attr("id", "questionButton").val(variable); $('body').append(button); //Tried using .next here - but it didn't work... //$('body').append('<button id="questionButton">' + variable + '</button>'); variable++; } displayQuestionJS(questionsArray[currentQuestionNo], document); }); $("button").click(function() { if ($(this).attr('id') == "nextQuestion") { currentQuestionNo = ++currentQuestionNo; } else if ($(this).attr('id') == "previousQuestion") { currentQuestionNo = --currentQuestionNo; } displayQuestionJS(questionsArray[currentQuestionNo], document); }); function displayQuestionJS(currentQuestion, document) { document.getElementById('questionNumber').innerText = currentQuestion.questionNumber; document.getElementById('questionDescription').innerText = currentQuestion.quesDesc; $('label[for=optionA]').html(currentQuestion.optionA); $('label[for=optionB]').html(currentQuestion.optionB); $('label[for=optionC]').html(currentQuestion.optionC); } }); ``` HTML content: ``` <form method="post" name="formRadio"> <label id="questionNumber"></label>. &nbsp; <label id="questionDescription"></label> <br /> <input type="radio" id="optionA"> </input> <label for="optionA"></label> <br /> <input type="radio" id="optionB"> </input> <label for="optionB"></label> <br /> <input type="radio" id="optionC"> </input> <label for="optionC"></label> <br /> <button id="previousQuestion">Previous Question</button> <button id="nextQuestion">Next Question</button> <br /> <br /> <input type="submit" id="submitButton" name="submitTest" value="Submit"></input> </form> ``` EDIT -- Sample .on Method code - Separate file: WORKING - THANKS A LOT ``` <script> $(document).ready(function() { $("button").click(function() { var button = '<input type="button" id="button2" value="dynamic button">'; $('body').append(button); }); }); $(document).on('click','#button2', function() { alert("Dynamic button action"); }); </script> </head> <body> <button id="button">Static Button</button> </body> ```
You create the buttons dynamically, which is why a handler bound directly at page load never sees them. In older versions of jQuery you would call them with the `.live()` method, but that method is deprecated since jQuery 1.7 and removed in 1.9 (you can see the list of all deprecated methods [here](http://api.jquery.com/category/deprecated/)). If you use jQuery 1.7 or above, you need to bind your buttons with a delegated event handler, in this way: ``` $(document).on('click', 'selector', function(){ // Your Code }); ``` For example, if your HTML is something like this ``` <div id="btn-list"> <div class="btn12">MyButton</div> </div> ``` you can write your jQuery like this ``` $(document).on('click', '#btn-list .btn12', function(){ // Your Code }); ```
Julia - Multiple conditions for filtering array Given an array like: ``` 5-element Array{String,1}: "Computer science" "Artificial intelligence" "Machine learning" "Algorithm" "Mathematics" ``` How does one filter it by multiple conditions in Julia? For example, I want to obtain all the values that are not "Computer science" or "Artificial intelligence", hence, I want to obtain: ``` 3-element Array{String,1}: "Machine learning" "Algorithm" "Mathematics" ```
Maybe something like this? ``` julia> x = ["Computer science", "Artificial intelligence", "Machine learning", "Algorithm", "Mathematics"] 5-element Array{String,1}: "Computer science" "Artificial intelligence" "Machine learning" "Algorithm" "Mathematics" # Note the double parentheses, in order to build the # ("Computer science", "Artificial intelligence") tuple # # It would also be possible (but probably less efficient) to put # those values in a vector julia> filter(!in(("Computer science", "Artificial intelligence")), x) 3-element Array{String,1}: "Machine learning" "Algorithm" "Mathematics" ``` **Edit:** as mentioned in comments, if the list of values to filter out is longer, it might be more efficient to build a `Set` instead of a `Tuple`: ``` julia> filter(!in(Set(("Computer science", "Artificial intelligence"))), x) 3-element Array{String,1}: "Machine learning" "Algorithm" "Mathematics" ```
How does one determine the amount of space allocated to their command line? How would one go about determining the amount of space allocated by their system for command lines? Is there anything else I need to be concerned with?
### xargs One method that I'm aware of is to use `xargs` to find this information out. ``` $ xargs --show-limits --no-run-if-empty < /dev/null Your environment variables take up 4791 bytes POSIX upper limit on argument length (this system): 2090313 POSIX smallest allowable upper limit on argument length (all systems): 4096 Maximum length of command we could actually use: 2085522 Size of command buffer we are actually using: 131072 ``` ### getconf The limit that `xargs` is displaying derives from this system configuration value. ``` $ getconf ARG_MAX 2097152 ``` Values such as these are typically "hard coded" on a system. See `man sysconf` for more on these types of values. I believe these types of values are accessible inside a C application, for example: ``` #include <unistd.h> ... printf("%ld\n", sysconf(_SC_ARG_MAX)); ``` ### References - [ARG\_MAX, maximum length of arguments for a new process](http://www.in-ulm.de/~mascheck/various/argmax/)
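The same limit is also exposed to scripting languages; for example, a quick check from Python on a POSIX system (just an illustration, equivalent to the `getconf` call above):

```python
import os

# ARG_MAX: upper bound on the combined size of the arguments and
# environment passed to a new process via exec()
print(os.sysconf('SC_ARG_MAX'))
```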
What's the name of this kind of graph? I would like to create this type of visualization. It comes from a paper about political self-placement in the left-right continuum: Attitudes, Ideological Associations and the Left-Right Divide in Latin America (Weisehomeier, Doyle, 2012). [![enter image description here](https://i.stack.imgur.com/I7kyQ.png)](https://i.stack.imgur.com/I7kyQ.png) Does anybody know what these graphs are called and how to create them in R?
Expanding on my reply-as-a-comment, here's a more qualified answer. This kind of graphical output is usually called a dot plot or a dot chart, after Cleveland.¹ One-dimensional scatter plots are also called dotplots in some statistical packages. As @mdewey mentioned, this can also be used to show the mean effect of a variable (with its standard error or confidence intervals) on a given outcome, but in this case each "point" is a different kind of aggregated statistic (i.e., the average of numerical values, rather than a frequency of count data). It is a nice alternative to more classical bar charts, because of its inherently high [data-ink ratio](https://stats.stackexchange.com/q/131400/930), valued by Edward Tufte.² Moreover, this kind of graphical representation fits nicely into the design of trellis displays, by Becker and coll.,³ which among other things allow for facetting (i.e., conditioning in statistical language) and more (e.g., "shingle"); Ross Ihaka has a nice presentation on this aspect in his [Topic in Computational Data Analysis and Graphics](https://www.stat.auckland.ac.nz/%7Eihaka/787/) course. You will probably find many such illustrations on this site, but the main R packages that are generally used are the builtin `dotchart` function, its [lattice](https://cran.r-project.org/web/packages/lattice/index.html) counterpart, `dotplot`, or the [ggplot2](https://ggplot2.tidyverse.org/) package, where `geom_point` can safely replace `geom_bar` or `geom_hbar` (note that `geom_dotplot` is the equivalent of R's builtin `stripchart`, for one-dimensional scatter plots). The [UC Business Analytics R Programming Guide](https://uc-r.github.io/cleveland-dot-plots) provides some nice illustrations using the latter approach as well. **References** ¹ Cleveland, W.S. and McGill, R. (1984). Graphical Perception: Theory, Experimentation, and Application to the Development of Graphical Methods. *Journal of the American Statistical Association*, 79(387), 531–554. ² Tufte, E. (2001). *The Visual Display of Quantitative Information*, (2nd ed.). Graphics Press. ³ Becker, R.A., Cleveland, W.S and Shyu, M.-J. (1996). The Visual Design and Control of Trellis Display. *Journal of Computational and Graphical Statistics*, 5(2), 123–155.
Increase number with character in Java I have a problem with increasing number and character combinations. What I want is to increase from **001** to **ZZZ** > Example: **001**, **002**,..., **999**, **00A**,..., **00Z**, **0AA**,..., **ZZZ** My code looks like this: ``` int numberA = 1000; int numberB = 1024; int numberC = 1025; /* * Some formulae here */ System.out.println(numberA); //Result: 00A System.out.println(numberB); //Result: 00Z System.out.println(numberC); //Result: 0A0 ``` Are there any formulae to solve this problem?
Maybe the following will help you get started ;-) ``` final Integer radix = 36; // that's 0-9 A-Z final Double limit = Math.pow(radix.doubleValue(), 3.0 /* max number of 'chars' */); Stream.iterate(0, i -> i+1) .map(i -> Integer.toString(i, radix)) .map(s -> String.format("000%S", s) .substring(s.length())) // leading 0, uppercase .limit(limit.longValue()) .forEach(System.out::println); ``` Or simply: ``` String radix36 = Integer.toString(yourIntThatYouCanIncrement, 36); ``` Of course if you require the 00#-format (leading zeros and uppercase) you need to apply those functions too. Holger's comment already contains a short variant of it to combine uppercase/leading zeros: ``` String formatted = String.format("000%S", radix36) .substring(radix36.length()); ```
How to check if a file is signed in C#? I'm writing a program that needs to load a few other files. Is there a way to check whether those files are signed or not, without right-clicking them and checking manually? I use 300-400 files that change every few days. I need to check DLL/EXE/CAB/OCX/MSI files (and maybe also VBS/JS). Is there a way to check this?
Assuming you want to check if a file is Authenticode signed and that the certificate is trusted you can pinvoke to [`WinVerifyTrust`](http://msdn.microsoft.com/en-us/library/aa388208%28v=vs.85%29.aspx) in `Wintrust.dll`. Below is a wrapper (more or less reproduced [from here](https://web.archive.org/web/20200630020153/http://geekswithblogs.net/robp/archive/2007/05/04/112250.aspx)) that can be called as follows: ``` AuthenticodeTools.IsTrusted(@"path\to\some\signed\file.exe") ``` Where `AuthenticodeTools` is defined as follows: ``` internal static class AuthenticodeTools { [DllImport("Wintrust.dll", PreserveSig = true, SetLastError = false)] private static extern uint WinVerifyTrust(IntPtr hWnd, IntPtr pgActionID, IntPtr pWinTrustData); private static uint WinVerifyTrust(string fileName) { Guid wintrust_action_generic_verify_v2 = new Guid("{00AAC56B-CD44-11d0-8CC2-00C04FC295EE}"); uint result=0; using (WINTRUST_FILE_INFO fileInfo = new WINTRUST_FILE_INFO(fileName, Guid.Empty)) using (UnmanagedPointer guidPtr = new UnmanagedPointer(Marshal.AllocHGlobal(Marshal.SizeOf(typeof (Guid))), AllocMethod.HGlobal)) using (UnmanagedPointer wvtDataPtr = new UnmanagedPointer(Marshal.AllocHGlobal(Marshal.SizeOf(typeof (WINTRUST_DATA))), AllocMethod.HGlobal)) { WINTRUST_DATA data = new WINTRUST_DATA(fileInfo); IntPtr pGuid = guidPtr; IntPtr pData = wvtDataPtr; Marshal.StructureToPtr(wintrust_action_generic_verify_v2, pGuid, true); Marshal.StructureToPtr(data, pData, true); result = WinVerifyTrust(IntPtr.Zero, pGuid, pData); } return result; } public static bool IsTrusted(string fileName) { return WinVerifyTrust(fileName) == 0; } } internal struct WINTRUST_FILE_INFO : IDisposable { public WINTRUST_FILE_INFO(string fileName, Guid subject) { cbStruct = (uint)Marshal.SizeOf(typeof(WINTRUST_FILE_INFO)); pcwszFilePath = fileName; if (subject != Guid.Empty) { pgKnownSubject = Marshal.AllocHGlobal(Marshal.SizeOf(typeof(Guid))); Marshal.StructureToPtr(subject, pgKnownSubject, true); } else { pgKnownSubject = IntPtr.Zero; } hFile = IntPtr.Zero; } public uint cbStruct; [MarshalAs(UnmanagedType.LPTStr)] public string pcwszFilePath; public IntPtr hFile; public IntPtr pgKnownSubject; #region IDisposable Members public void Dispose() { Dispose(true); } private void Dispose(bool disposing) { if (pgKnownSubject != IntPtr.Zero) { Marshal.DestroyStructure(this.pgKnownSubject, typeof(Guid)); Marshal.FreeHGlobal(this.pgKnownSubject); } } #endregion } enum AllocMethod { HGlobal, CoTaskMem }; enum UnionChoice { File = 1, Catalog, Blob, Signer, Cert }; enum UiChoice { All = 1, NoUI, NoBad, NoGood }; enum RevocationCheckFlags { None = 0, WholeChain }; enum StateAction { Ignore = 0, Verify, Close, AutoCache, AutoCacheFlush }; enum TrustProviderFlags { UseIE4Trust = 1, NoIE4Chain = 2, NoPolicyUsage = 4, RevocationCheckNone = 16, RevocationCheckEndCert = 32, RevocationCheckChain = 64, RecovationCheckChainExcludeRoot = 128, Safer = 256, HashOnly = 512, UseDefaultOSVerCheck = 1024, LifetimeSigning = 2048 }; enum UIContext { Execute = 0, Install }; [StructLayout(LayoutKind.Sequential)] internal struct WINTRUST_DATA : IDisposable { public WINTRUST_DATA(WINTRUST_FILE_INFO fileInfo) { this.cbStruct = (uint)Marshal.SizeOf(typeof(WINTRUST_DATA)); pInfoStruct = Marshal.AllocHGlobal(Marshal.SizeOf(typeof(WINTRUST_FILE_INFO))); Marshal.StructureToPtr(fileInfo, pInfoStruct, false); this.dwUnionChoice = UnionChoice.File; pPolicyCallbackData = IntPtr.Zero; pSIPCallbackData = IntPtr.Zero; dwUIChoice = UiChoice.NoUI; fdwRevocationChecks = 
RevocationCheckFlags.None; dwStateAction = StateAction.Ignore; hWVTStateData = IntPtr.Zero; pwszURLReference = IntPtr.Zero; dwProvFlags = TrustProviderFlags.Safer; dwUIContext = UIContext.Execute; } public uint cbStruct; public IntPtr pPolicyCallbackData; public IntPtr pSIPCallbackData; public UiChoice dwUIChoice; public RevocationCheckFlags fdwRevocationChecks; public UnionChoice dwUnionChoice; public IntPtr pInfoStruct; public StateAction dwStateAction; public IntPtr hWVTStateData; private IntPtr pwszURLReference; public TrustProviderFlags dwProvFlags; public UIContext dwUIContext; #region IDisposable Members public void Dispose() { Dispose(true); } private void Dispose(bool disposing) { if (dwUnionChoice == UnionChoice.File) { WINTRUST_FILE_INFO info = new WINTRUST_FILE_INFO(); Marshal.PtrToStructure(pInfoStruct, info); info.Dispose(); Marshal.DestroyStructure(pInfoStruct, typeof(WINTRUST_FILE_INFO)); } Marshal.FreeHGlobal(pInfoStruct); } #endregion } internal sealed class UnmanagedPointer : IDisposable { private IntPtr m_ptr; private AllocMethod m_meth; internal UnmanagedPointer(IntPtr ptr, AllocMethod method) { m_meth = method; m_ptr = ptr; } ~UnmanagedPointer() { Dispose(false); } #region IDisposable Members private void Dispose(bool disposing) { if (m_ptr != IntPtr.Zero) { if (m_meth == AllocMethod.HGlobal) { Marshal.FreeHGlobal(m_ptr); } else if (m_meth == AllocMethod.CoTaskMem) { Marshal.FreeCoTaskMem(m_ptr); } m_ptr = IntPtr.Zero; } if (disposing) { GC.SuppressFinalize(this); } } public void Dispose() { Dispose(true); } #endregion public static implicit operator IntPtr(UnmanagedPointer ptr) { return ptr.m_ptr; } } ```
C# IAsyncResult WaitAll In some of the implementations of WaitAll I have seen the following code ``` IAsyncResult result1 = Method.BeginInvoke(10, MyCallback, null) IAsyncResult result2 = Method.BeginInvoke(20, MyCallback, null) WaitHandle[] waitHandles = new WaitHandle[] { result1.AsyncWaitHandle, result2.AsyncWaitHandle}; WaitHandle.WaitAll(waitHandles) ``` Does this seem right? What are the chances that before the waitHandles array is created one of the calls completes? Regards, Dhananjay
Makes sense to me. ``` // Begin invoking the first and second method, and give them a callback IAsyncResult result1 = Method.BeginInvoke(10, MyCallback, null) IAsyncResult result2 = Method.BeginInvoke(20, MyCallback, null) // Any time past the point of invokation, MyCallback could be called. // Get the wait handles of the async results, regardless of whether they have finished or not WaitHandle[] waitHandles = new WaitHandle[] { result1.AsyncWaitHandle, result2.AsyncWaitHandle}; // Make sure to wait until the methods have finished. // They could have finished immediately, and MyCallback could have already been called, // but we will never get past this line unless they have finished. WaitHandle.WaitAll(waitHandles) // We may have waited on the function to finish, or they may have been done before we arrived at the previous line. Either way, they are done now. ``` What exactly do you find odd? Asking "what are the chances" is a bad sign. The chances are *it might happen* which means you will need to account for what the program needs to do if and when the methods complete before you WaitAll.
Concatenate values that are grouped by a column I have the following table: ``` Code Ref Value A1 Car A A1 Car - A1 Car B B2 Truck CC B2 Truck D B2 Truck - C3 Van E C3 Van F C3 Van - C3 Van G ``` The goal I am trying to accomplish, is a concatenated string grouping all of the values together like this: ``` Code Ref Value A1 Car A-B B2 Truck CCD- C3 Van EF-G ``` I went off of the example [here](https://stackoverflow.com/questions/15154644/sql-group-by-to-combine-concat-a-column), but got nowhere. Here is what I came up with: ``` SELECT [Table].[Code] , [Table].[Ref] , STUFF((SELECT DISTINCT [Value] FROM [Table2] FOR XML PATH ('')),1, 1,'') AS Values FROM [Table] LEFT JOIN [Table2] ON [Table2].[Code] = [Table].[Code] ``` Where am I going wrong? Is there a more efficient way to do this?
You have nothing linking your inner and outer references to `[Table]`, and you also need to make the outer reference distinct. Finally you need to either have no column name within your subquery, or it needs to be `[text()]`

```
SELECT  [Code]
        ,[Ref]
        ,STUFF((SELECT DISTINCT [Value] AS [text()]
                FROM [Table] AS T2
                WHERE T1.Code = T2.Code -- LINK HERE
                AND T1.Ref = T2.Ref -- AND HERE
                FOR XML PATH ('')
                ),1, 1,'') AS [Values]
FROM [Table] AS T1
GROUP BY T1.Code, T1.Ref; -- GROUP BY HERE
```

As an aside, you do not need to use `STUFF` as you have no delimiter; `STUFF` is typically used to remove the chosen delimiter from the start of the string. So when you have a string like `,value1,value2,value3`, `STUFF(string, 1, 1, '')` will replace the first character with `''` leaving you with `value1,value2,value3`. 

You should also use the `value` xquery method to ensure you are not tripped up by special characters. If you don't, and you try to concatenate `">>"` and `"<<"`, you would not end up with `">><<"` as you might want, you would get `"&gt;&gt;&lt;&lt;"`, so a better query would be:

```
SELECT  t1.Code,
        t1.Ref,
        [Values] = (SELECT DISTINCT [text()] = [Value]
                    FROM [Table] AS t2
                    WHERE T1.Code = T2.Code
                    AND T1.Ref = T2.Ref
                    FOR XML PATH (''), TYPE
                    ).value('.', 'NVARCHAR(MAX)')
FROM [Table] AS T1
GROUP BY t1.Code, t1.Ref;
```

---

**ADDENDUM**

Based on the latest edit to the question it appears as though your `Value` column is coming from another table, linked to the first table by `Code`. If anything this makes your query simpler. You don't need the `JOIN`, but you still need to make sure that there is an expression to link the outer table to the inner table in your subquery. I am assuming that the rows are unique in the first table, so you probably don't need the group by either:

```
SELECT  t1.Code,
        t1.Ref,
        [Values] = (SELECT DISTINCT [text()] = t2.[Value]
                    FROM [Table2] AS t2
                    WHERE T1.Code = T2.Code
                    FOR XML PATH (''), TYPE
                    ).value('.', 'NVARCHAR(MAX)')
FROM [Table] AS T1;
```

---

**WORKING EXAMPLE**

```
CREATE TABLE #Table1 (Code CHAR(2), Ref VARCHAR(10));
INSERT #Table1 VALUES ('A1', 'Car'), ('B2', 'Truck'), ('C3', 'Van');

CREATE TABLE #Table2 (Code CHAR(2), Value VARCHAR(2));
INSERT #Table2 VALUES 
    ('A1', 'A'), ('A1', '-'), ('A1', 'B'), 
    ('B2', 'CC'), ('B2', 'D'), ('B2', '-'), 
    ('C3', 'E'), ('C3', 'F'), ('C3', '-'), ('C3', 'G');

SELECT  t1.Code,
        t1.Ref,
        [Values] = (SELECT DISTINCT [text()] = t2.[Value]
                    FROM #Table2 AS t2
                    WHERE T1.Code = T2.Code
                    FOR XML PATH (''), TYPE
                    ).value('.', 'NVARCHAR(MAX)')
FROM #Table1 AS T1;
```
Do I need CSRF-protection without users or login?

I am building a Django application where people can register for events. Everyone can register, there's no user account or login, i.e. no authentication. Verification is done through an email with a link that has to be clicked in order to activate the registration.

I'm unsure whether I need to enable CSRF-protection for these forms. It boils down to the following question: Is CSRF-protection necessary for every POST-request (which doesn't leave the domain) or only for POST-requests by logged-in users?

What could be done with a CSRF-attack? I know you can use it to circumvent the same origin policy and post whatever you want in the name of the user, but can you also use it to alter a real post by the user or steal their data? If a malicious site could learn the data the user posted or silently alter their request that would be a reason for me to use it. If it just means that another website can create additional registrations then no, because so can everyone else.

(I know that it doesn't cost much to just use it everywhere and I might in fact do that, but I'm trying to understand the principle better)
Contrary to the other answer, CSRF fundamentally is not about sending cookies. CSRF is about another website being able to have a user visiting it send a request to your application. If there is a session, it needs to be via something like cookies for this to be successful, because cookies for example will be sent automatically. But there are other forms of authentication that will be sent automatically, for example client certificates. And if there is no authentication, requests can be made even more easily. That sounds like a problem in your case too.

What another website can do is: if a user visits it, it can have *that user* perform actions in *your application*. For example they can have them register for an event, without them even noticing it. Or the malicious website can deregister people from events if that's possible. They can do whatever *in the name of the victim user* that is possible on your website, without the victim knowing about it, just by having them visit the malicious website.

So to put it another way, the problem is not that another website can perform actions in your application - they could do that with CSRF protection enabled too, since there is no authentication. But without CSRF protection, they can have **your users** perform actions inadvertently **in your application**, just by having them visit the malicious website.

Only you can tell whether this is a problem in your case. Without more info, I think you should have CSRF protection enabled.
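For what it's worth, enabling it in Django costs almost nothing. A minimal sketch (the view and template names here are placeholders, not from your project):

```
# views.py -- minimal sketch. With django.middleware.csrf.CsrfViewMiddleware
# enabled (it is by default), any POST missing a valid token is rejected
# with a 403 before your view code runs.
from django.shortcuts import render

def register(request):
    if request.method == "POST":
        pass  # validate and store the registration here
    return render(request, "register.html")
```

and the registration form in the template only needs the token tag:

```
<form method="post">
  {% csrf_token %}
  <!-- registration fields -->
</form>
```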
How to customize TimePicker in material design android?

I am unable to change the selector color and other parts of the TimePicker. So far, I can change the header color and background, but I am unable to change the inner circle and the text color. An example of the custom theme I'm after: [link](https://material-design.storage.googleapis.com/publish/material_v_4/material_ext_publish/0B3321sZLoP_HcVRNejI4UTh6aHM/components_pickers_time1.png).

My code:

```
<TimePicker
    android:id="@+id/tp"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:timePickerMode="clock"
    android:headerBackground="#565466"
    android:amPmBackgroundColor="#500126"
    android:numbersBackgroundColor="#57e326"
    android:numbersTextColor="#995394"
    android:numbersSelectorColor="#675543"
    android:textColorSecondary="#897530"
    android:textColorPrimary="#359875" />
```
### Basic mode All you have to do is set your accent color in your Activity's theme: ``` <item name="colorAccent">#3333cc</item> ``` This will set all the colors for you so you wouldn't mess up the styling. (*This also means that you shouldn't set values like `amPmBackgroundColor` on your TimePicker directly, let Android do the things for you.*) ### Advanced mode If you want to specify all the possible values separately, do the following: First define this in your Activity's theme: ``` <item name="android:timePickerStyle">@style/Theme.MyTheme.TimePicker</item> ``` Then create the appropriate style: ``` <style name="Theme.MyTheme.TimePicker" parent="android:Widget.Material.Light.TimePicker"> <item name="android:timePickerMode">clock</item> <item name="android:headerBackground">#ff555555</item> <item name="android:numbersTextColor">?android:attr/textColorPrimaryActivated</item> <item name="android:numbersInnerTextColor">?android:attr/textColorSecondaryActivated</item> <item name="android:numbersSelectorColor">?android:attr/colorControlActivated</item> <item name="android:numbersBackgroundColor">#ff555555</item> <item name="android:amPmTextColor">?android:attr/textColorSecondary</item> </style> ``` Note that `numbersInnerTextColor` is only available from API level 23 and other styles (e.g. `headerTextColor`) can't be set (or at least I couldn't make it work). I'd advise against using the "advanced" mode as the TimePicker should have the same colors as the containing Activity and doing otherwise might impact your UX in a bad way.
How can I access built-in methods of an Object that is overridden? A webpage is setting a built-in javascript method to `null`, and I'm trying to find a way to call the overridden methods in a userscript. Consider the following code: ``` // Overriding the native method to something else document.querySelectorAll = null; ``` Now, if I try to execute `document.querySelectorAll('#an-example')`, I will get the exception **`Uncaught TypeError: null is not a function`**. The reason being the method has been changed to `null` and is no longer accessible. I'm looking for a way to somehow restore the reference to the method in my userscript. The problem is that the website can override the reference to anything (even including the `Document`, `Element` and `Object` constructors). Since the website can also easily set the reference to `null`, I need a way to find a way to access the `querySelectorAll` method that **the website won't be able to override**. The challenge is that **any method** such as `createElement` and `getElementsByTagName` (in addition to their `prototype`s) can get overridden to `null` at the point my userscript is executed on the page. My question is, how do I access the **`Document`** or **`HTMLDocument`** constructor methods, if they have also been overridden? ### Note: Since Tampermonkey [due to browser limitations](https://github.com/Tampermonkey/tampermonkey/issues/211#issuecomment-94441134) *cannot* run my script at the *beginning* of a document, I'm unable to save a reference to the method I'd like to use, with something like this: ``` // the following code cannot be run at the beginning of the document var _originalQuerySelectorAll = document.querySelectorAll; ```
There are at least 3 approaches:

1. Use the **userscript sandbox**. Alas, this currently only works on Greasemonkey (including version 4+) due to Tampermonkey and Violentmonkey design flaws / bugs. More below.
2. Use `@run-at document-start`. Except that this too will not work on fast pages.
3. **Delete the function override**. This usually works, but is liable to more interference with/from the target page, and can be blocked if the page alters the `prototype` of the function. See also **[Stop execution of Javascript function (client side) or tweak it](https://stackoverflow.com/questions/3972038/stop-execution-of-javascript-function-client-side-or-tweak-it)**

Note that all of the script and extension examples, below, are **complete working code**. And you can test them against [this JS Bin page](https://output.jsbin.com/kobegen) by changing:
      `*://YOUR_SERVER.COM/YOUR_PATH/*`
to:
      `https://output.jsbin.com/kobegen*`

---

## Userscript Sandbox:

This is the preferred method and works on Firefox+Greasemonkey (including Greasemonkey 4).

When setting `@grant` to other than none, the script engine is *supposed* to run the script in a sandbox that browsers specifically provide for that purpose.

In the proper sandbox, the target page can override `document.querySelectorAll` or other native functions all it wants, and **the userscript will see its own, completely untouched instances**, regardless.

This *should* always work:

```
// ==UserScript==
// @name     _Unoverride built in functions
// @match    *://YOUR_SERVER.COM/YOUR_PATH/*
// @grant    GM_addStyle
// @grant    GM.getValue
// ==/UserScript==
//- The @grant directives are needed to restore the proper sandbox.

console.log ("document.querySelectorAll: ", document.querySelectorAll);
```

and yield:

> document.querySelectorAll: function querySelectorAll() { [native code] }

However, **both Tampermonkey and Violentmonkey do not sandbox properly**, in neither Chrome nor Firefox. The target page can tamper with the native functions a Tampermonkey script sees, even with Tampermonkey's or Violentmonkey's version of the sandbox on.

This is not just a design flaw, it is **a security flaw** and a vector for potential exploits.

We know that Firefox and Chrome are not the culprits since (1) Greasemonkey-4 sets up the sandbox properly, and (2) a Chrome extension sets up the "Isolated World" properly. That is, this extension:

**manifest.json:**

```
{
    "manifest_version": 2,
    "content_scripts": [ {
        "js":      [ "Unoverride.js" ],
        "matches": [ "*://YOUR_SERVER.COM/YOUR_PATH/*" ]
    } ],
    "description": "Unbuggers native function",
    "name":        "Native function restore slash use",
    "version":     "1"
}
```

**Unoverride.js:**

```
console.log ("document.querySelectorAll: ", document.querySelectorAll);
```

Yields:

> document.querySelectorAll: function querySelectorAll() { [native code] }

as it should.

---

## Use `@run-at document-start`:

Theoretically, running the script at `document-start` should allow the script to catch the native function before it's altered. EG:

```
// ==UserScript==
// @name     _Unoverride built in functions
// @match    *://YOUR_SERVER.COM/YOUR_PATH/*
// @grant    none
// @run-at   document-start
// ==/UserScript==

console.log ("document.querySelectorAll: ", document.querySelectorAll);
```

And this sometimes works on slow enough pages and/or networks.

But, as the OP already noted, **neither Tampermonkey nor Violentmonkey actually inject and run before any other page code**, so this method fails on fast pages.
Note that a Chrome-extension content script set with `"run_at": "document_start"` in the manifest, *does* run at the correct time and/or fast enough.

---

## Delete the function override:

If the page (mildly) overrides a function like `document.querySelectorAll`, you can clear the override using `delete`, like so:

```
// ==UserScript==
// @name     _Unoverride built in functions
// @match    *://YOUR_SERVER.COM/YOUR_PATH/*
// @grant    none
// ==/UserScript==

delete document.querySelectorAll;

console.log ("document.querySelectorAll: ", document.querySelectorAll);
```

which yields:

> document.querySelectorAll: function querySelectorAll() { [native code] }

The drawbacks are:

1. Won't work if the page alters the prototype. EG: `Document.prototype.querySelectorAll = null;`
2. The page can see or remake such changes, especially if your script fires too *soon*.

Mitigate item 2 by making a private copy:

```
// ==UserScript==
// @name     _Unoverride built in functions
// @match    *://YOUR_SERVER.COM/YOUR_PATH/*
// @grant    none
// ==/UserScript==

var foobarFunc = document.querySelectorAll;
delete document.querySelectorAll;
var _goodfunc  = document.querySelectorAll;
var goodfunc   = function (params) {return _goodfunc.call (document, params); };

console.log (`goodfunc ("body"): `, goodfunc("body") );
```

which yields:

> goodfunc ("body"): NodeList [ 0: body, length: 1, ... ]

And `goodfunc()` will continue to work (for your script) even if the page remolests `document.querySelectorAll`.
CustomAttribute reflects html attribute MVC5

Hoping to find a way, when in MVC5 a custom attribute (or preferably the RegularExpressionAttribute) decorates a property in the model, to have the html control contain it as another attribute of the control. E.g.

```
class CoolModel
{
   [CustomHtmlAttribute("hello")]
   public string CoolValue {get;set;}
}
```

outputs...

```
<input type="text" customhtml="hello" />
```

Or something like that. So for the RegularExpressionAttribute the pattern attribute will be awesome.

```
class CoolModel
{
   [RegularExpressionAttribute("/d")]
   public string CoolValue {get;set;}
}
```

outputs...

```
<input type="text" pattern="/d" />
```

I need this output without enabling the Javascript unobtrusive option. So I'm thinking of a way to specify some attribute in the model that gets pushed down to the view. Not sure if the Data annotations provider could do this job. Not sure if a Helper could be extended to get this result.

Help is appreciated.
If using the standard helpers with the overload to add html attributes is not acceptable, then you can create an attribute that implements `IMetadataAware` and adds properties to `metadata.AdditionalValues`, which can then be used in custom html helpers.

A simple example might be

```
[AttributeUsage(AttributeTargets.Property)]
public class CustomHtmlAttribute : Attribute, IMetadataAware
{
    public static string ValueKey
    {
        get { return "Value"; }
    }
    public string Value { get; set; }
    public void OnMetadataCreated(ModelMetadata metadata)
    {
        if (Value != null)
        {
            metadata.AdditionalValues[ValueKey] = Value;
        }
    }
}
```

and to create a helper to render a textbox (only one overload shown here)

```
public static MvcHtmlString CustomHtmlTextBoxFor<TModel, TValue>(this HtmlHelper<TModel> helper, Expression<Func<TModel, TValue>> expression)
{
    ModelMetadata metaData = ModelMetadata.FromLambdaExpression(expression, helper.ViewData);
    object attributes = null;
    if (metaData.AdditionalValues.ContainsKey(CustomHtmlAttribute.ValueKey))
    {
        attributes = new { customhtml = (string)metaData.AdditionalValues[CustomHtmlAttribute.ValueKey] };
    }
    return InputExtensions.TextBoxFor(helper, expression, attributes);
}
```

and use it as

```
[CustomHtml(Value = "hello")]
public string CoolValue { get; set; }
```

and in the view

```
@Html.CustomHtmlTextBoxFor(m => m.CoolValue)
```

to make this a bit more flexible, you could add more properties to the attribute so you could apply it as

```
[CustomHtml(Value = "hello", Pattern="/d")]
public string CoolValue { get; set; }
```

and modify the helper to render all the html attributes you define.
How Can I Mask My Material-UI TextField?

I have a TextField for phone numbers in a short form, and I want to mask this form field like (0)xxx xxx xx xx. I'm trying to use the [react-input-mask](https://github.com/sanniassin/react-input-mask) plugin with [Material-UI](https://github.com/callemall/material-ui). But if I want to change the input value, it is not updating my main TextField.

```
<TextField
    ref="phone"
    name="phone"
    type="text"
    value={this.state.phone}
    onChange={this.onChange}
>
    <InputMask
        value={this.state.phone}
        onChange={this.onChange}
        mask="(0)999 999 99 99"
        maskChar=" "
    />
</TextField>
```

Actually, I couldn't find any documentation for masking with Material-UI. I'm trying to figure out how I can use it with other plugins.
# Update versions: material-ui 0.20.2, react-input-mask 2.0.4 Seems like the API changed a bit: ``` <InputMask mask="(0)999 999 99 99" value={this.state.phone} disabled={false} maskChar=" " > {() => <TextField />} </InputMask> ``` Demo [![Edit throbbing-bird-9qgw9](https://codesandbox.io/static/img/play-codesandbox.svg)](https://codesandbox.io/s/throbbing-bird-9qgw9?fontsize=14&hidenavigation=1&theme=dark) # Original This should do the trick: ``` <TextField ref="phone" name="phone" type="text" value={this.state.phone} onChange={this.onChange} > <InputMask mask="(0)999 999 99 99" maskChar=" " /> </TextField> ``` Demo: [![Edit yl8p9jvq9](https://codesandbox.io/static/img/play-codesandbox.svg)](https://codesandbox.io/s/yl8p9jvq9)
Clojure - walk with path

I am looking for a function similar to those in [clojure.walk](https://clojuredocs.org/clojure.walk/walk) that have an `inner` function that takes as argument:

- not a key and a value, as is the case with the clojure.walk/walk function
- but the vector of keys necessary to access a value from the top-level data structure.
- recursively traverses all data

Example:

```
;; not good since it takes `[k v]` as argument instead of `[path v]`, and is not recursive.
user=> (clojure.walk/walk (fn [[k v]] [k (* 10 v)]) identity {:a 1 :b {:c 2}})
;; {:a 10, :c 30, :b 20}

;; it should receive as arguments instead :
[[:a] 1]
[[:b :c] 2]
```

*Note:*

- It should work with arrays too, using the keys 0, 1, 2... (just like in `get-in`).
- I don't really care about the `outer` parameter, if that allows to simplify the code.
Currently learning clojure, I tried this as an exercise. I however found it quite tricky to implement it directly as a walk down the tree that applies the inner function as it goes. To achieve the result you are looking for, I split the task in 2: - First transform the nested structure into a dictionary with the path as key, and the value, - Then map the inner function over, or reduce with the outer function. My implementation: ``` ;; Helper function to have vector's indexes work like for get-in (defn- to-indexed-seqs [coll] (if (map? coll) coll (map vector (range) coll))) ;; Flattening the tree to a dict of (path, value) pairs that I can map over ;; user> (flatten-path [] {:a {:k1 1 :k2 2} :b [1 2 3]}) ;; {[:a :k1] 1, [:a :k2] 2, [:b 0] 1, [:b 1] 2, [:b 2] 3} (defn- flatten-path [path step] (if (coll? step) (->> step to-indexed-seqs (map (fn [[k v]] (flatten-path (conj path k) v))) (into {})) [path step])) ;; Some final glue (defn path-walk [f coll] (->> coll (flatten-path []) (map #(apply f %)))) ;; user> (println (clojure.string/join "\n" (path-walk #(str %1 " - " %2) {:a {:k1 1 :k2 2} :b [1 2 3]}))) ;; [:a :k1] - 1 ;; [:a :k2] - 2 ;; [:b 0] - 1 ;; [:b 1] - 2 ;; [:b 2] - 3 ```
How can I save token for reuse in O365 python library?

I am reading and sending mails of my Office 365 account using the python O365 library to automate some regular tasks. Every time I want to use this API I have to do verification and get a fresh token, and this token expires every 60 minutes. So after some thorough research I found out about a method called `FileSystemTokenBackend`, however I'm still unable to save the token.

This is how I'm trying to save it:

```
token_backend = O365.FileSystemTokenBackend(token_path='G:/Newfolder', token_filename='my_token.txt')
```

Even after this command, no token is being saved.
According to the [docs](https://pypi.org/project/O365/#authentication), your token will be stored at the specified path **when you authenticate**. After you have authenticated for the first time, the token will be read at the specified path. When you create and account with a token backend pointing to this path, this account will automatically be logged in. ``` token_backend = FileSystemTokenBackend(token_path='token_dir', token_filename='o365_token.txt') account = Account(CREDENTIALS, token_backend=token_backend) # If it's your first login, you will have to visit a website to authenticate and paste the redirected URL in the console. Then your token will be stored. # If you already have a valid token stored, then account.is_authenticated is True. if not account.is_authenticated: account.authenticate(scopes=['basic', 'message_all']) ```
What does (float)(par4 >> 16 & 255) / 255.0F; mean?

I found this line of code: `this.red = (float)(par4 >> 16 & 255) / 255.0F;` where red has been declared as a `float`. I am trying to understand what it does, especially because the full code is:

```
this.red = (float)(par4 >> 16 & 255) / 255.0F;
this.blue = (float)(par4 >> 8 & 255) / 255.0F;
this.green = (float)(par4 & 255) / 255.0F;
this.alpha = (float)(par4 >> 24 & 255) / 255.0F;
GL11.glColor4f(this.red, this.blue, this.green, this.alpha);
```

so I'm guessing this somehow uses different locations of an int (`par4`) to color text. `par4` is equal to `553648127` in this case.

What do those four lines mean, notably the `>> 16 & 255`?
RGB with alpha channel (usually known as [RGBA](http://en.wikipedia.org/wiki/RGBA_color_space) or aRGB) are four bytes packed into one integer.

```
AAAAAAAARRRRRRRRBBBBBBBBGGGGGGGG // the original par4, each char represents one bit.
                                 // where ARBG stands for alpha, red, blue and green bit.
```

The shift and AND operators are used to retrieve each individual byte. For example, `par4 >> 16 & 255` first right-shifts the integer 16 bits such that the original 3rd byte is located at the base, and the `255` serves as a mask to extract only one byte.

So `par4 >> 16` will right-shift the original value 16 bits:

```
0000000000000000AAAAAAAARRRRRRRR
```

Finally, applying `&255`, which is `00000000000000000000000011111111` in bit representation, will mask the last 8 bits:

```
  0000000000000000AAAAAAAARRRRRRRR
& 00000000000000000000000011111111
= 000000000000000000000000RRRRRRRR
```

This gives you the red byte.
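To make the arithmetic concrete, here is the value from the question pushed through the same expressions (a standalone check, not the original rendering code):

```
public class ChannelDemo {
    public static void main(String[] args) {
        int par4 = 553648127;  // 0x20FFFFFF
        // Same expressions as the original code (note its R/B/G ordering).
        float red   = (float)(par4 >> 16 & 255) / 255.0F;  // 0xFF -> 1.0
        float blue  = (float)(par4 >> 8  & 255) / 255.0F;  // 0xFF -> 1.0
        float green = (float)(par4       & 255) / 255.0F;  // 0xFF -> 1.0
        float alpha = (float)(par4 >> 24 & 255) / 255.0F;  // 0x20 -> ~0.125
        System.out.printf("r=%.3f b=%.3f g=%.3f a=%.3f%n",
                          red, blue, green, alpha);
        // Prints: r=1.000 b=1.000 g=1.000 a=0.125
    }
}
```

So `553648127` is a nearly transparent white: all three color channels are at full intensity, and only the alpha byte (0x20) is low.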
What is the proper way to comment code in Python?

I was reading PEP8 and some questions on Stack Overflow, but I was wondering about spaces between comments:

Let's say I have this code:

```
class MyBrowser(QWebPage):
    ''' Settings for the browser.'''
    def __init__(self):
        QWebPage.__init__(self)

        # Specifies whether images are automatically loaded in web pages.
        self.settings().setAttribute(QWebSettings.AutoLoadImages, True)

    def userAgentForUrl(self, url):
        ''' Returns a User Agent that will be seen by the website. '''
        return "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.15 (KHTML, like Gecko) Chrome/24.0.1295.0 Safari/537.15"
```

What is the most Pythonic way of putting blank lines between comments and the actual code? I want to show my program to some experts. And want my code to look more professional.
I don't know if this represents the "community standard" but here are [Google's Python style guides](https://google.github.io/styleguide/pyguide.html?showone=Comments#Comments) (as they relate to comments). Specifically classes: ``` class SampleClass(object): """Summary of class here. Longer class information.... Longer class information.... Attributes: likes_spam: A boolean indicating if we like SPAM or not. eggs: An integer count of the eggs we have laid. """ def __init__(self, likes_spam=False): """Inits SampleClass with blah.""" self.likes_spam = likes_spam self.eggs = 0 def public_method(self): """Performs operation blah.""" ```
Grouping googletest unit tests by categories Can googletest unit tests be grouped by categories? For example "SlowRunning", "BugRegression", etc. The closest thing I have found is the --gtest\_filter option. By appending/prepending category names to the test or fixture names I can simulate the existence of groups. This does not allow me to create groups that are not normally run. If categories do not exist in googletest, is there a good or best practice workaround? Edit: Another way is to use the --gtest\_also\_run\_disabled\_tests. Adding DISABLED\_ in front of tests gives you exactly one conditional category, but I feel like I'm misusing DISABLED when I do it.
One way is to use the gtest_filter option together with a naming convention for tests (as you describe in the question).

```
TEST_F(Foo, SlowRunning_test1) {...}
TEST_F(Foo, BugRegression_test1) {...}
TEST_F(Foo, SlowRunningBugRegression_test1) {...}
```

The other way is to use separate binaries/executables for each type of test. This approach has some limitations, because gtest uses static auto-registration: if you include a source file, all tests implemented in that source file will be included in the generated binary/executable.

In my opinion the first method is better. Additionally, I would implement a new test registration macro to make my life easier (preprocessor conditionals can't appear inside a macro body, so the `#ifdef` has to select between two complete definitions):

```
#ifdef NO_GROUP_TESTS
  #define GROUP_TEST_F(GroupName, TestBase, TestName) \
      TEST_F(TestBase, TestName)
#else
  #define GROUP_TEST_F(GroupName, TestBase, TestName) \
      TEST_F(TestBase, GroupName##_##TestName)
#endif
```
Prevent git from popping up gnome password box I have not asked a question of this nature before, so this may not be the correct site for this. I use the xfce terminal in drop-down mode connected to a hotkey. It closes when another window becomes active, which is just fine. What is not fine, however, is that when I use git and have it pull or push to an https url, it pops up a fun box to ask me for my password instead of just letting me enter it directly on the command line. Normally I would google around to find the answer to this, but sadly most people are trying to get git to stop asking for a password altogether rather than prevent a dialog box, so this is hard for me to google (trust me; I've tried for a couple months now on and off when I get annoyed enough). How can I prevent git from popping up any graphical windows for things like passwords? Git says that it is using `/usr/lib/seahorse/seahorse-ssh-askpass` for asking the password, so if there is some configuration option to prevent it from using that (or that has an equivalent effect), that would be great. Thanks very much for any help.
It seems like git is probably using the `GIT_ASKPASS` or `SSH_ASKPASS` environment variables to figure out if it should use a separate program to prompt for passwords. Try running `unset GIT_ASKPASS` or `unset SSH_ASKPASS`. Now try pushing or pulling from a git repository. If that works, add the appropriate command to `.bashrc`, `.zshrc`, or whatever file you use to run a command when your shell starts. You can also override the value of git's `core.askpass` setting with `git config --global core.askpass YOUR_PREFERRED_PROMPT_COMMAND`. **Relevant information from the [git-config man page](http://git-scm.com/docs/git-config):** > > core.askpass > > > Some commands (e.g. svn and http interfaces) that interactively ask for a password can be told to use an external program given via > the value of this variable. Can be overridden by the GIT\_ASKPASS > environment variable. If not set, fall back to the value of the > SSH\_ASKPASS environment variable or, failing that, a simple password > prompt. The external program shall be given a suitable prompt as > command line argument and write the password on its STDOUT. > > > **Original source:** <http://kartzontech.blogspot.com/2011/04/how-to-disable-gnome-ssh-askpass.html>
Print out all permutations of an Array I am working on a program, and I have a function that swaps the positions in an Array of length that is input by a user. However, I am trying to figure out how to print out this function call N! times, which would list all the permutations in the function. My code for the permutation function is: ``` static void nextPerm(int[] A){ for( int i = (n-1); i > 0; i-- ){ if( A[i] < A[i+1] ){ A[i] = pivot; continue; } if( A[i] >= A[i+1] ){ reverseArray(A); return; } } for( int i = n; i > 0; i--){ if( A[i] > pivot ){ A[i] = successor; continue; } } Swap(pivot, successor); int[] B = new int[pivot+1]; reverseArray(B); return; } ``` Should I write a loop in function main, that will print this out n! times?
Creating (or printing) the permutations of an array is much easier done as a combination of recursively and iteratively than purely iteratively. There are surely iterative ways to do it, but it is particularly simple with a combination. Specifically, note that there are by definition N! permutations of a length N array - N choices for the first slot, N-1 choices for the 2nd, etc etc. So, we can break an algorithm down into two steps *for each index i in the array*. 1. Select an element in the sub-array `arr[i....end]` to be the `ith` element of the array. Swap that element with the element currently at `arr[i]`. 2. Recursively permute `arr[i+1...end]`. We note that this will run in O(N!), as on the 1st call N sub calls will be made, each of which will make N-1 sub calls, etc etc. Moreover, every element will end up being in every position, and so long as only swaps are made no element will ever be duplicated. ``` public static void permute(int[] arr){ permuteHelper(arr, 0); } private static void permuteHelper(int[] arr, int index){ if(index >= arr.length - 1){ //If we are at the last element - nothing left to permute //System.out.println(Arrays.toString(arr)); //Print the array System.out.print("["); for(int i = 0; i < arr.length - 1; i++){ System.out.print(arr[i] + ", "); } if(arr.length > 0) System.out.print(arr[arr.length - 1]); System.out.println("]"); return; } for(int i = index; i < arr.length; i++){ //For each index in the sub array arr[index...end] //Swap the elements at indices index and i int t = arr[index]; arr[index] = arr[i]; arr[i] = t; //Recurse on the sub array arr[index+1...end] permuteHelper(arr, index+1); //Swap the elements back t = arr[index]; arr[index] = arr[i]; arr[i] = t; } } ``` Sample input, output: ``` public static void main(String[] args) { permute(new int[]{1,2,3,4}); } [1, 2, 3, 4] [1, 2, 4, 3] [1, 3, 2, 4] [1, 3, 4, 2] [1, 4, 3, 2] [1, 4, 2, 3] [2, 1, 3, 4] [2, 1, 4, 3] [2, 3, 1, 4] [2, 3, 4, 1] [2, 4, 3, 1] [2, 4, 1, 3] [3, 2, 1, 4] [3, 2, 4, 1] [3, 1, 2, 4] [3, 1, 4, 2] [3, 4, 1, 2] [3, 4, 2, 1] [4, 2, 3, 1] [4, 2, 1, 3] [4, 3, 2, 1] [4, 3, 1, 2] [4, 1, 3, 2] [4, 1, 2, 3] ```
PSTools psexec and PCI

I am just wondering if anyone knows of any reason why using psexec would cause the failure of a PCI DSS audit. I have never been able to find information, though I have always been told that it can't be used by administrators on anything in the CDE, or the surrounding environment.

I am wondering if the FUD is to do with the Metasploit script of the same name? Not sure what that does, but I've heard that it may have caused confusion.

Could anyone shed any light on whether this can be used legitimately or whether it is highly frowned upon/banned?

To put it into perspective, psexec gets treated the same as telnet being enabled on devices, such as printers, etc.

Thanks
psexec has [multiple issues](https://security.stackexchange.com/q/13911/11291) which make it inappropriate for use in a reasonably secure environment: 1. It's not encrypted. 2. It requires administrative shares to be made available. 3. It has a mode which can [trivially expose a backdoor administrative command prompt to the world](http://www.windowsecurity.com/articles-tutorials/misc_network_security/PsExec-Nasty-Things-It-Can-Do.html). And probably other issues I can't think of right now. --- If your environment is sufficiently modern (everything is 2008 or later), you can use [PowerShell remoting](http://technet.microsoft.com/en-us/magazine/ff700227.aspx) in its place. This runs over WinRM with HTTPS transport by default and doesn't require you to reduce your security.
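For comparison, a remote command over PowerShell remoting is a one-liner (`Server01` is a placeholder host name):

```
# Runs over WinRM; add -UseSSL to force HTTPS transport explicitly.
Invoke-Command -ComputerName Server01 -ScriptBlock { Get-Service W32Time }
```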
Fitting data vs. transforming data in scikit-learn In [scikit-learn](http://www.astroml.org/sklearn_tutorial/general_concepts.html), all estimators have a `fit()` method, and depending on whether they are supervised or unsupervised, they also have a `predict()` or `transform()` method. I am in the process of writing a [transformer](http://scikit-learn.org/stable/modules/generated/sklearn.base.TransformerMixin.html) for an unsupervised learning task and was wondering if there is a rule of thumb where to put which kind of learning logic. The official documentation is not very helpful in this regard: > > `fit_transform(X, y=None, **fit_params)` > > Fit to data, then transform it. > > > In this context, what is meant by both *fitting data* and *transforming data*?
Fitting finds the internal parameters of a model that will be used to transform data. Transforming applies the parameters to data. You may fit a model to one set of data, and then transform it on a completely different set. For example, you fit a linear model to data to get a slope and intercept. Then you use those parameters to transform (i.e., map) new or existing values of `x` to `y`. `fit_transform` is just doing both steps to the same data. A scikit example: You fit data to find the principal components. Then you transform your data to see how it maps onto these components: ``` from sklearn.decomposition import PCA pca = PCA(n_components=2) X = [[1,2],[2,4],[1,3]] pca.fit(X) # This is the model to map data pca.components_ array([[ 0.47185791, 0.88167459], [-0.88167459, 0.47185791]], dtype=float32) # Now we actually map the data pca.transform(X) array([[-1.03896057, -0.17796634], [ 1.19624651, -0.11592512], [-0.15728599, 0.29389156]]) # Or we can do both "at once" pca.fit_transform(X) array([[-1.03896058, -0.1779664 ], [ 1.19624662, -0.11592512], [-0.15728603, 0.29389152]], dtype=float32) ```
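To see the split between the two steps even more clearly, fit on one dataset and transform another. A tiny sketch using `StandardScaler`, but the pattern is the same for any transformer:

```
from sklearn.preprocessing import StandardScaler

X_train = [[1.0], [2.0], [3.0]]
X_test = [[4.0]]

scaler = StandardScaler().fit(X_train)  # learns mean/std from X_train only
print(scaler.mean_)                     # [2.]
print(scaler.transform(X_test))         # maps X_test with the *training* parameters
```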
ArrayList unexpected out of boundary result

Hi guys, I catch this exception:

> java.lang.IndexOutOfBoundsException: Index: 1, Size: 0

The code looks like this:

```
ArrayList decoded = new ArrayList(10);
decoded.add(1, "A");
```

***I really don't get what's wrong here?***

Maybe someone could help me on this.

---

Basically I want to do this: I have chars, for example like this:

```
ABCEDFG
```

And an array with numbers:

```
321211
```

I need to take the first char and the first number. The char is written into the S ArrayList:

```
S={"A"};
```

Then look at the number and write the same char again at that position - if the number is 3, then in the third place (if the number is 1, just write it in the ArrayList):

```
S={"A","","","A"}
```

And delete the number and char from the ArrayLists. Now they look like this:

```
BCEDFG
21211
```

and again the same till the end.
I suspect you're confused between *capacity* and *size*. This line:

```
ArrayList decoded = new ArrayList(10);
```

... creates a list with a *capacity* of 10 (it won't need to grow internally until the eleventh element is added), but a *size* of 0. The only index at which you can add anything is 0.

If you want to be able to set values at arbitrary positions, you should populate the list first, e.g. (using generics just because I hate using the raw types):

```
List<String> decoded = new ArrayList<String>(10);
for (int i = 0; i < 10; i++) {
    decoded.add(""); // Or perhaps null
}
...
decoded.set(1, "S"); // This is now fine, replacing element 1
```

Alternatively, if you already know the size you want, you could use an array to start with - they have a *fixed* size, and each element is the default value for the element type (so `null` for strings, for example):

```
String[] decoded = new String[10];
decoded[1] = "S";
```
Make Github push to a remote server when it receives updates What is the set up for having Github automatically push any updates to a remote server? This is useful for maintaining a codebase on Github, and having a website run off that codebase. 1. I have my repo on my own computer, this is where I work. 2. I commit my changes on my local repo, and push them to my Github repo. 3. I want my Github repo to then push these changes to my remote server. I've been researching all day, and using the 'hooks' sounds reasonable. Maybe using a 'post-receive' hook on Github which then runs a push command to my remote server. Any suggestions?
As I understand it, GitHub doesn't allow you to define "true" hooks, like post-receive. Instead they provide something called a [webhook](https://help.github.com/articles/post-receive-hooks) to developers. What you can do with this is issue a web request to any URL specified by you whenever there's a push to your repository.

So what you can do is: set up a webserver on your remote git server and configure GitHub to make an HTTP call to it on post-receive. Whenever GitHub notifies your remote server, do a pull on it from GitHub.

See here on how to use webhooks: <https://help.github.com/articles/post-receive-hooks>

P.S. A true hook mechanism would have been a possible security vulnerability for GitHub because it would allow you to execute custom code on their servers. So they have made something that does not allow you to execute anything but still allows you to do anything you want.
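As an illustration of the receiving side, here is a bare-bones Python 3 sketch using only the standard library. The port, path and repository location are arbitrary placeholders, and a real endpoint should also verify the request (e.g. GitHub's secret/signature) before pulling:

```
from http.server import BaseHTTPRequestHandler, HTTPServer
import subprocess

class HookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path == "/github-hook":
            # GitHub pushed; pull the latest changes into the local clone.
            subprocess.call(["git", "-C", "/var/www/myrepo", "pull"])
        self.send_response(200)
        self.end_headers()

HTTPServer(("", 8080), HookHandler).serve_forever()
```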
GetHashCode returns different Values I'm using Linq-to-Sql to query a SQL Server database. This query returns a List of an entity in my database. My underlying data is NOT changing. Once I have received the List, I call GetHashCode on it in order to test for equality. Oddly, the hash value is always different. Why would it always be different? Thank you,
They are different because they are different object references.

You need to override [Equals()](http://msdn.microsoft.com/en-us/library/bsc2ak47.aspx) and [GetHashCode()](http://msdn.microsoft.com/en-us/library/system.object.gethashcode.aspx) for your object, based on the object data, if you want them to behave that way. [Here](http://msdn.microsoft.com/en-us/library/ms173147%28VS.90%29.aspx) you have an example of how to do it, and [here](http://blogs.msdn.com/b/ericlippert/archive/2011/02/28/guidelines-and-rules-for-gethashcode.aspx) a blog post about the guidelines for overriding the `GetHashCode()` method.

Hope it helps.

```
class TwoDPoint : System.Object
{
    public readonly int x, y;

    public TwoDPoint(int x, int y)
    {
        this.x = x;
        this.y = y;
    }

    public override bool Equals(System.Object obj)
    {
        if (obj == null) return false;

        TwoDPoint p = obj as TwoDPoint;
        if (p == null) return false;

        // Return true if the fields match
        return (x == p.x) && (y == p.y);
    }

    public override int GetHashCode()
    {
        return x ^ y;
    }
}
```

As [Servy](https://stackoverflow.com/users/1159478/servy) said in his comment, keep in mind that even overriding the `GetHashCode()` method, you won't be able to have a collision-free hash with that type of data (ever); you can only reduce the collision rate. You need to use `Equals()` to ensure objects with the same hash are really the same.
How to reuse control template?

Can I create a parametrized control template, i.e. create a control template which might contain different controls? For instance, I can use the control template with a label or with a button - whatever I want in some position.

```
<ControlTemplate x:Key="MessageCTemplate">
    <Grid …>
        <Rectangle …/>
        <Rectangle …/>
        <Rectangle …/>
        …
        <!--I want to have here button, label or whatever I want-->
        <Label x:Name="MsgLabel"/>
    </Grid>
</ControlTemplate>

<Style x:Key="MsgStyle" TargetType="{x:Type Button}">
    <Setter Property="Opacity" Value="0.6" />
    <Setter Property="Template" Value="{StaticResource MessageCTemplate}" />
</Style>
```

I don't feel happy writing the same control templates which have only one different string in their code. Or, perhaps, I misunderstand something and another way of avoiding copy-paste exists.
What you're describing is a ContentControl. This is the base class for many common controls including Button, Label, ListBoxItem... The idea of a ContentControl is that it can define its own layout and some UI elements and also include a placeholder where whatever is set as its Content property can be injected. There is also a HeaderedContentControl that allows for 2 placeholders for other content.

```
<ControlTemplate x:Key="MessageCTemplate" TargetType="{x:Type ContentControl}">
    <Grid …>
        <Rectangle …/>
        <Rectangle …/>
        <Rectangle …/>
        <ContentPresenter/>  <!-- This is where the content shows up -->
    </Grid>
</ControlTemplate>

<Button Template="{StaticResource MessageCTemplate}">
    <Label Content="My label"/>
</Button>

<Button Template="{StaticResource MessageCTemplate}">
    <Ellipse Fill="Orange" Width="100" Height="30"/>
</Button>
```
angular runtime, styles and polyfills get a 404 error on my web server

I have made a little Angular application and built it on my computer, but when I try to open the index.html file I get 404 errors. I tried deploying it on a web server but it's the same:

GET <http://www.valhatech.com/runtime-es2015.edb2fcf2778e7bf1d426.js> net::ERR_ABORTED 404 (Not Found)
GET <http://www.valhatech.com/styles.365933d9cb35ac99c497.css> net::ERR_ABORTED 404 (Not Found)
GET <http://www.valhatech.com/polyfills-es2015.2987770fde9daa1d8a2e.js> net::ERR_ABORTED 404 (Not Found)

My index.html file is this:

```
<!doctype html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <title>McKenzieEquation</title>
  <base href="/">
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <link rel="icon" type="image/x-icon" href="favicon.ico">
  <script src="assets/js/app.js"></script>
</head>
<body>
  <app-root></app-root>
</body>
</html>
```

And the content of my app.module.ts is:

```
const routes: Routes = [
  { path: 'home', component: AppComponent },
  { path: '', redirectTo: 'home', pathMatch: 'full' },
  { path: '**', redirectTo: 'home' }
];

@NgModule({
  declarations: [
    AppComponent,
    MckenzieGraphComponent
  ],
  imports: [
    BrowserModule,FormsModule,RouterModule.forRoot(routes, { useHash: true })
  ],
  providers: [MckenzieEquationService],
  bootstrap: [AppComponent]
})
export class AppModule { }
```

The Firefox console reports a "CORS request not HTTP" error, or "URI of module source is not authorized in this document: « file:///polyfills-es2015.2987770fde9daa1d8a2e.js »".
I have found my error. On my web server my Angular page was located at the following path:

/www/mckenzieequation

And my index file was written like this:

```
<head>
  <meta charset="utf-8">
  <title>McKenzieEquation</title>
  <base href="/">
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <link rel="icon" type="image/x-icon" href="favicon.ico">
  <script src="assets/js/app.js"></script>
</head>
```

I had to replace the base parameter with the following code:

```
<base href="/mckenzieequation/">
```

The base href must point to the location of the index file.
Configure Pie Chart Colors Using Chart.js and PHP

I have been trying to enable different colors for different areas but I cannot find a way of having different colors for different objects. I am fetching data from the database and then working on it as follows. I would appreciate your views on how to change colors for different items.

```
//looping through the data
for(var i in data) {
    CarPrices.push(data[i].name);
    score.push(data[i].price);
}

//Working with data fetched from database
var chartdata = {
    labels: player,
    datasets : [
        {
            //Trying to change background color for each element
            label: 'Car Prices',
            backgroundColor: [
                "#FF6384",
                "#36A2EB",
                "#FFCE56"
            ],
            //Trying to change hover background color for each element
            hoverBackgroundColor: [
                "#FF6384",
                "#36A2EB",
                "#FFCE56"
            ],
        }
    ]
};
```
It is possible to control the color of each slice in a pie/doughnut chart using the `backgroundColor` dataset property. It looks like you were attempting to do this in your question, but here is a full working example so that you can leverage it for your needs. ``` var config = { type: 'doughnut', data: { datasets: [{ data: [300, 50, 100, 40, 10], backgroundColor: [ window.chartColors.red, window.chartColors.orange, window.chartColors.yellow, window.chartColors.green, window.chartColors.blue, ], }], labels: [ "Red", "Orange", "Yellow", "Green", "Blue" ] }, options: { responsive: true, legend: { display: true, labels: { padding: 20 }, }, tooltips: { enabled: false, } } }; ``` Here is a working [Codepen example](http://codepen.io/jordanwillis/pen/Wpxbge).
Why does an async single task run faster than a normal single task? I have a method which has just one task to do and has to wait for that task to complete: ``` public async Task<JsonResult> GetAllAsync() { var result = await this.GetAllDBAsync(); return Json(result, JsonRequestBehavior.AllowGet); } public async Task<List<TblSubjectSubset>> GetAllDBAsync() { return await model.TblSubjectSubsets.ToListAsync(); } ``` It is significantly faster than when I run it without async-await. We know > > The async and await keywords don't cause additional threads to be > created. Async methods don't require multithreading because an async > method doesn't run on its own thread. The method runs on the current > synchronization context and uses time on the thread only when the > method is active > > > According to this link: <https://msdn.microsoft.com/en-us/library/hh191443.aspx#BKMK_Threads>. What is the reason for being faster when we don't have another thread to handle the job?
"Asynchronous" does **not** mean "faster." "Asynchronous" means "performs its operation in a way that it does not require a thread for the duration of the operation, thus allowing that thread to be used for other work." In this case, you're testing a single request. The asynchronous request will "yield" its thread to the ASP.NET thread pool... which has no other use for it, since there are no other requests. I fully expect asynchronous handlers to run *slower* than synchronous handlers. This is for a variety of reasons: there's the overhead of the `async`/`await` state machine, and extra work when the task completes to have its thread enter the request context. Besides this, the Win32 API layer is still heavily optimized for synchronous calls (expect this to change gradually over the next decade or so). So, why use asynchronous handlers then? For scalability reasons. Consider an ASP.NET server that is serving more than one request - hundreds or thousands of requests instead of a single one. In *that* case, ASP.NET will be very grateful for the thread returned to it during its request processing. It can immediately use that thread to handle other requests. Asynchronous requests allow ASP.NET to handle more requests with fewer threads. This is assuming your backend can scale, of course. If every request has to hit a single SQL Server, then your scalability bottleneck will probably be your database, not your web server. But if your situation calls for it, asynchronous code can be a great boost to your web server scalability. For more information, see my article on [async ASP.NET](https://msdn.microsoft.com/en-us/magazine/Dn802603.aspx).
Deserialize dynamic object name for newtonsoft property

When I send a request to a certain API, it returns me a json, which is awesome. But the problem is that depending on the parameters I provide, the object name is always different while the data structure remains the same. So I am trying to convert the json to a C# class using the Newtonsoft library. The only way I've found to do this is by using JsonTextReader, but I'm wondering if there is a cleaner way of achieving this. I looked up the documentation and couldn't find anything to help me in that regard. I also tried using JValue.Parse for dynamic object mapping, but since the property name is always different, it doesn't help me.

Here is a code sample to illustrate the problem:

```
{
  "error": [],
  "result": {
    //This property name always changes
    "changingPropertyName": [
      [
        "456.69900",
        "0.03196000",
        1461780019.8014
      ]
    ]
  }
}
```

```
//C# mapping
public partial class Data
{
    [JsonProperty("error")]
    public object[] Error { get; set; }

    [JsonProperty("result")]
    public Result Result { get; set; }
}

public class Result
{
    [JsonProperty("changingPropertyName")]
    public object[][] changingPropertyName{ get; set; }
}
```
One way to deal with a variable property name is to use a `Dictionary<string, T>` in place of a strongly typed class (where `T` is the type of the variable property you trying to capture). For example: ``` public partial class Data { [JsonProperty("error")] public object[] Error { get; set; } [JsonProperty("result")] public Dictionary<string, object[][]> Result { get; set; } } ``` You can then get the first `KeyValuePair` from the dictionary and you will have both the name of the variable property and the value available from that. ``` string json = @" { ""error"": [], ""result"": { ""changingPropertyName"": [ [ ""456.69900"", ""0.03196000"", 1461780019.8014 ] ] } }"; var data = JsonConvert.DeserializeObject<Data>(json); KeyValuePair<string, object[][]> pair = data.Result.First(); Console.WriteLine(pair.Key + ":"); object[][] outerArray = pair.Value; foreach (var innerArray in outerArray) { foreach (var item in innerArray) { Console.WriteLine(item); } } ``` Fiddle: <https://dotnetfiddle.net/rlNKgw>
/bin/sh returns 0 when it did not find a command in an if statement, is that expected?

I wrote a C++ watchdog that runs a set of scripts to determine whether there is a problem on that system. The code is a bit hairy so I won't show it here, but it is equivalent to a system call as follows:

```
int const r(system("/bin/sh /path/to/script/test-health"));
```

Only, `r` is 0 when the script fails because a command is missing in an `if` statement. Here is the offensive bit of the script:

```
set -e

[...]

if unknown_command arg1 arg2
then
    [...]
```

The `unknown_command` obviously fails since... it is unknown. At that point the script ends because I have the `set -e` at the start. The exit code, though, is going to be 0 in that situation.

Would there be a way for me to get an exit code of 1 in such a situation? i.e. the question is detecting the error without having to add a test to know whether `unknown_command` exists. I know how to do that:

```
if ! test -x unknown_command
then
    exit 1
fi
```

My point is that when I write that script, I expect `unknown_command` to exist as I install it myself, but if something goes wrong or someone copies the script on another system without installing everything, I'd like to know that I got an error executing the script.
From the POSIX standard, [regarding `set -e`](http://pubs.opengroup.org/onlinepubs/9699919799.2016edition/utilities/V3_chap02.html#set):

> The `-e` setting shall be ignored when executing the compound list following the `while`, `until`, `if`, or `elif` reserved word, a pipeline beginning with the `!` reserved word, or any command of an AND-OR list other than the last.

This means that executing an unknown command in an `if` statement will *not* cause the script to terminate when running under `set -e`. Or rather, `set -e` will not cause it to terminate.

---

Use `command -v` to test whether a utility exists in the current `PATH`, unless you use full paths to the utilities that you invoke, in which case a `-x` test would be sufficient, as in your question.

See also: [Why not use "which"? What to use then?](https://unix.stackexchange.com/questions/85249/why-not-use-which-what-to-use-then)
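A portable guard near the top of the script would then look like:

```
if ! command -v unknown_command >/dev/null 2>&1; then
    echo 'error: unknown_command is not installed' >&2
    exit 1
fi
```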
iOS Nested View Controllers view inside UIViewController's view? Is it typically bad programming practice in iOS to have a nested view controller's view inside UIViewController's view? Say for instance I wanted to have some kind of interactive element that responds to user's touches, but only takes up maybe 25% of the screen. I suppose I would add this nested view controller to my UIViewController by saying something like: ``` [self.view addSubview: nestedViewController.view]; ```
No, this is generally good design, it helps keep your view controllers concise. However you should be using the view controller containment pattern, take a look at the following documentation. [Implementing a Container View Controller](https://developer.apple.com/library/ios/featuredarticles/ViewControllerPGforiPhoneOS/ImplementingaContainerViewController.html) This is incredibly simple to setup using Interface Builder with Storyboards as well, take a look at the Container View in the object library. Here is a contrived example in a Storyboard. In this example you would have 4 view controllers, one that holds the 3 containers, and one for each container. When you present the left most controller that has all of the containers, the Storyboard will automatically initialize and embed the other 3. You can access these child view controllers via the `childViewControllers` property or there is a method you can override `prepareForSegue:sender:` and capture the destination view controllers of the segue about to be called. This is also a good point to pass properties to the child view controllers if any are needed. ![enter image description here](https://i.stack.imgur.com/dX4le.png)
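If you wire it up in code rather than a Storyboard, the containment calls look roughly like this (`childController` here is a placeholder for your nested view controller):

```
// E.g. in the parent's viewDidLoad; the frame just pins the child
// to the bottom quarter of the screen, per the 25% example.
[self addChildViewController:childController];
childController.view.frame = CGRectMake(0, self.view.bounds.size.height * 0.75f,
                                        self.view.bounds.size.width,
                                        self.view.bounds.size.height * 0.25f);
[self.view addSubview:childController.view];
[childController didMoveToParentViewController:self];
```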
overlapping labels in flot pie chart

I use jquery flot for my pie charts and I have a problem with overlapping labels when the pie chart pieces are very small. Is there a good solution for that?

My pie chart:

```
series: {
    pie: {
        show: true,
        radius: 1,
        label: {
            show: true,
            radius: 5/8,
            formatter: function(label, series){
                return '<div style="font-size:12pt;text-align:center;padding:2px;color:black;margin-left:-80%;margin-top:-20%;">'+label+'<br/>'+Math.round(series.percent)+'%</div>';
            },
            background: { opacity: 0.5 }
        }
    }
},
legend: { show: false }
```

Thanks,
Arshavski Alexander.
A solution from Flot's Google Code issues by **Marshall Leggett** ([link](http://code.google.com/p/flot/issues/detail?id=530)):

> I've found that it seems common for **pie labels to overlap** in smaller pie charts making them unreadable, particularly if several slices have small percentage values. This is with the jquery.flot.pie plugin. Please see attached images. I've worked around this with the addition of an anti-collision routine in the label rendering code. I'm attaching a copy of the revised plugin as well. See **lines 472-501**, particularly the new functions getPositions() and comparePositions(). This is based in part on Šime Vidas' DOM-element collision detection code. Something like this might be a nice addition to the pie library.

![pie labels overlapping](https://i.stack.imgur.com/tZVQR.png) ![pie labels overlapping fixed](https://i.stack.imgur.com/6HMln.png)

long story short:

1. In jquery.flot.pie.js, **after** the line **463** that contains: `label.css('left', labelLeft);` add the following code:

```
// check to make sure that the label doesn't overlap one of the other labels
var label_pos = getPositions(label);
for(var j=0; j<labels.length; j++)
{
    var tmpPos = getPositions(labels[j]);
    var horizontalMatch = comparePositions(label_pos[0], tmpPos[0]);
    var verticalMatch = comparePositions(label_pos[1], tmpPos[1]);

    var match = horizontalMatch && verticalMatch;
    if(match)
    {
        var newTop = tmpPos[1][0] - (label.height() +1 );
        label.css('top', newTop);
        labelTop = newTop;
    }
}

function getPositions(box)
{
    var $box = $(box);
    var pos = $box.position();
    var width = $box.width();
    var height = $box.height();
    return [ [ pos.left, pos.left + width ], [ pos.top, pos.top + height ] ];
}

function comparePositions(p1, p2)
{
    var x1 = p1[0] < p2[0] ? p1 : p2;
    var x2 = p1[0] < p2[0] ? p2 : p1;
    return x1[1] > x2[0] || x1[0] === x2[0] ? true : false;
}

labels.push(label);
```

2. Add the following to `drawLabels()` and you are done: `var labels = [];`
run python script as cgi apache server

I am trying to make a python script run as CGI, using an Apache server. My script looks something like this:

```
#!/usr/bin/python

import cgi

if __name__ == "__main__":
    print("Content-type: text/html")
    print("<HTML>")
    print("<HEAD>")
```

I have done the necessary configuration in httpd.conf (in my opinion):

```
<Directory "/opt/lampp/htdocs/xampp/python">
    Options +ExecCGI
    AddHandler cgi-script .cgi .py
    Order allow,deny
    Allow from all
</Directory>
```

I have set the execution permission for the script with chmod. However, when I try to access the script via localhost I get an Error 500: End of script output before headers: script.py

What could be the problem? The script was created in a Unix-like environment, so I don't think it's a CRLF vs LF problem.

Thanks a lot.
I think you are missing a print statement after ``` print("Content-type: text/html") ``` The output of a CGI script should consist of two sections, separated by a blank line. The first section contains a number of headers, telling the client what kind of data is following. The second section is usually HTML, which allows the client software to display nicely formatted text with headers, in-line images, etc. It may look like ``` #!/usr/bin/env python print "Content-Type: text/html" print print """ <TITLE>CGI script ! Python</TITLE> <H1>This is my first CGI script</H1> Hello, world! """ ``` For more details visit [python-cgi](http://docs.python.org/2/library/cgi.html) For Python 3: ``` #!/usr/bin/env python3 print("Content-Type: text/html") print() print (""" <TITLE>CGI script ! Python</TITLE> <H1>This is my first CGI script</H1> Hello, world! """ ) ```
Application Request Routing doesn't route to same machine I have set up ARR on my IIS server. My requirement is to route a request arriving at this ARR server to an application hosted in the local IIS (the ARR server itself). Is it possible? I.e., suppose my ARR server has Application1; then I need the following: Browser -> ARR (IIS server) -> Application1 (which is on the ARR server itself). If so, how can I implement such a setup?
Yes, you can. You need at least 2 sites to implement this: one will receive the requests and route them to ARR via a rewrite rule, and the other one will be the node behind ARR. For example, considering your domain name is "example.com": ``` IIS - Site 1 (binding: http://example.com:80) - Site 2 (binding: http://127.0.0.1:22001) Rewrite Rule - Match All - {HTTP_HOST} matches example.com - {SERVER_PORT} does not match 22001 - Action: Route to Farm ``` This is a good starting point for zero-downtime deployment. If you want to know more, check this repo on GitHub: <https://github.com/yosoyadri/IIS-ARR-Zero-Downtime>
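For reference, a rough sketch of what that rule could look like in the first site's web.config (the farm name `myFarm` is made up; adjust the host and port patterns to your setup):

```xml
<rewrite>
  <rules>
    <rule name="RouteToFarm" stopProcessing="true">
      <match url=".*" />
      <conditions>
        <add input="{HTTP_HOST}" pattern="^example\.com$" />
        <add input="{SERVER_PORT}" pattern="^22001$" negate="true" />
      </conditions>
      <!-- route matching requests to the ARR server farm -->
      <action type="Rewrite" url="http://myFarm/{R:0}" />
    </rule>
  </rules>
</rewrite>
```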
How to detect language of a webpage Is there a `meta` tag that gives the original language of a webpage, or some library I could use to detect it? For example: ``` detect_language('https://play.google.com/store/movies/details?id=lzLX-xKfQhE') ==> DE (German) detect_language('https://itunes.apple.com/jp/movie/gon-garu-zi-mu-ban/id944521490?l=en') ==> JP (Japanese) ```
The language of both pages is, arguably, English! Much of the *content* on the page is in other languages, but the structure of the page (labels, links, etc) is English, and the meta tags on each page agree with this assessment. From the Google Play page: ``` <html lang="en_US"> ^^^^^^^^^^^^ ``` From the iTunes Store page: ``` <html prefix="og: http://ogp.me/ns#" xmlns="http://www.apple.com/itms/" lang="en"> ^^^^^^^^^ ``` There exist some APIs that can attempt to perform language detection. One such (commercial) example is [Google Translate's Detect Language call](https://cloud.google.com/translate/v2/using_rest#detect-language). It's a bit of a tossup what such an API would make of these pages, though; there's a strong argument that they're both English.
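If the declared language is enough for your purposes, a minimal sketch (assuming the `requests` and `beautifulsoup4` packages are available) could simply read that attribute:

```python
import requests
from bs4 import BeautifulSoup

def detect_declared_language(url):
    """Return the lang attribute of the page's <html> tag, or None if absent."""
    html = requests.get(url, timeout=10).text
    root = BeautifulSoup(html, "html.parser").find("html")
    return root.get("lang") if root else None

# returned "en_US" for the Google Play page at the time of the answer
print(detect_declared_language("https://play.google.com/store/movies/details?id=lzLX-xKfQhE"))
```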
Django admin: Prefill data when clicking the add-another button next to a ForeignKey dropdown In the Django Admin, when you modify an objects properties, if there is a ForeignKey, it will have a dropdown box of all the choices, plus a "+" button to add more choices. When I click on it, I want to prefill some of the data. I've noticed I could do it if I could modify the URL (example: `http://localhost:8000/admin/app/model/add/?field=value` where `field` and `value` are programmatically modified.) I figure I have to override something in the `forms.ModelForm` that the `admin.ModelAdmin` uses, but I'm not sure what.
Django allows you to replace a request's GET dict (which it uses to pre-populate the admin form). Django will automatically prefill form fields from URL GET parameters if you pass the model form's field values in the URL. For example, considering `"http://myhost/admin/app/model/add/?name=testname"`, it will prefill the `name` field of the form in the admin add-view template with the value `'testname'`. But if you are passing an id in your URL, you need to modify the GET parameters by overriding the `add_view` function. Taken from a Stack Overflow answer: ``` class ArticleAdmin(admin.ModelAdmin): # ... def add_view(self, request, form_url='', extra_context=None): source_id = request.GET.get('source',None) if source_id != None: source = FeedPost.objects.get(id=source_id) # any extra processing can go here... g = request.GET.copy() g.update({ 'title':source.title, 'contents':source.description + u"... \n\n[" + source.url + "]", }) request.GET = g return super(ArticleAdmin, self).add_view(request, form_url, extra_context) ``` It's just an example; adapt it to your own model and fields. :)
first\_value windowing function in pyspark I am using pyspark 1.5 getting my data from Hive tables and trying to use windowing functions. According to [this](https://databricks.com/blog/2015/07/15/introducing-window-functions-in-spark-sql.html) there exists an analytic function called `firstValue` that will give me the first non-null value for a given window. I know this exists in Hive but I can not find this in pyspark anywhere. Is there a way to implement this given that pyspark won't allow UserDefinedAggregateFunctions (UDAFs)?
**Spark >= 2.0**: `first` takes an optional `ignorenulls` argument which can mimic the behavior of `first_value`: ``` df.select(col("k"), first("v", True).over(w).alias("fv")) ``` **Spark < 2.0**: Available function is called [`first`](https://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.functions.first) and can be used as follows: ``` df = sc.parallelize([ ("a", None), ("a", 1), ("a", -1), ("b", 3) ]).toDF(["k", "v"]) w = Window().partitionBy("k").orderBy("v") df.select(col("k"), first("v").over(w).alias("fv")) ``` but if you want to ignore nulls you'll have to use Hive UDFs directly: ``` df.registerTempTable("df") sqlContext.sql(""" SELECT k, first_value(v, TRUE) OVER (PARTITION BY k ORDER BY v) FROM df""") ```
How to convert html & css to an image? I'm developing an ecard-maker (I know it's awful, not my idea...). Is there any way to convert html and css of a specific dom-element to an image without using flash? I know there is image magick and the like, but it seems to be quite complicated to align and order the text properly. Preferably I'm looking for a server-side solution, as posting an image from the client could take some time. I found this: <https://code.google.com/p/java-html2image/> but unfortunately I can only use php, ruby or client-side technologies.
## Client Side solution On the client side, you can use a library like html2canvas (which uses HTML5): <http://html2canvas.hertzen.com/> =) With it, you can do something like: ``` html2canvas(document.getElementById("element-id"), { onrendered: function(canvas) { var image = new Image(); image.src = canvas.toDataURL("image/png"); // Browsers reliably support image/png here, so use it for the broadest compatibility // Append image (yes, it is a DOM element!) to the DOM and etc here.. } }); ``` ## Server Side solution To get a server-side solution, you can use [PhantomJS](http://phantomjs.org) (which uses WebKit) or [SlimerJS](http://slimerjs.org/) (which uses Gecko). A good library that wraps these two is [CasperJS](http://casperjs.org/). Using CasperJS, the code could be: ``` casper.start(casper.cli.args[0], function() { this.captureSelector(casper.cli.args[1], casper.cli.args[2]); }); casper.run(); ``` Save it as `screenshot.js` (just an example; you can choose any name) and run it using something like: ``` casperjs screenshot.js (URL) (image path) (selector) ``` From **any** language. ## A (possibly better) alternative to Server Side Another option is to use [Selenium](http://code.google.com/p/selenium/), but this is only valid if you **can** run Java on your server (and install browsers manually); PhantomJS/SlimerJS/CasperJS, however, do not have these requirements. Use it only if you **need** to emulate a browser completely (maybe when using plugins...). The best part of Selenium is that you can use wrappers to connect to it (using Selenium Server); see the documentation to get the list: <http://code.google.com/p/selenium/w/list>
Most efficient way to merge collections preserving order? I have 3 channels: ``` byte[] Red; byte[] Green; byte[] Blue; ``` I need to copy all the values from them into a `byte[Red.Length+Green.Length+Blue.Length] PA`, so that: ``` PA[0] = Red[0]; PA[1] = Green[0]; PA[2] = Blue[0]; PA[3] = Red[1]; /// and so on ``` Here is an example with the above arrays: ``` byte[] Red = new byte[255]; byte[] Green = new byte[255]; byte[] Blue = new byte[255]; byte[] PA = new byte[Red.Length + Green.Length + Blue.Length]; for (int i = 0; i != 255; ++i) { PA[i*3 + 0] = Red[i]; PA[i*3 + 1] = Green[i]; PA[i*3 + 2] = Blue[i]; } ``` I'm assuming that the collections to be merged are of equal sizes and that they do have some order amongst themselves e.g. `[0] = Red`, `[1]=Green`, etc. that has to be preserved for the items in the "merged" collection. What is the most efficient way to do this in `C#`? The collections do not have to be arrays nor the items bytes (although collection types that accept bytes would be appreciated).
I tried to make a more efficient version by using pointers: ``` unsafe { fixed (byte* red = Red, green = Green, blue = Blue, pa = PA) { byte* r = red, g = green, b = blue, p = pa; for (int i = 0; i < 255; i++) { *p = *r; p++; r++; *p = *g; p++; g++; *p = *b; p++; b++; } } } ``` In x86 mode this is about twice as fast, but in x64 mode there is no difference. In conclusion, the code that you have is already fast enough for most applications. If you need it to be really fast you can optimise it a bit, but not much.
Disable browser zoom on certain elements in Firefox Is it possible to disable the in-browser, full-page zoom in Firefox (activated by Ctrl +) for a webpage? How about for certain elements in a webpage? I just notice that sometimes elements look really weird when they are zoomed, and it might be nice to just disable the zooming completely for those elements. *Note*: I know there a few ways to *find* the zoom level, but this is really wanting to actively work around it (which might not be a good idea anyway).
Using information gathered largely through this question: [Catch browser's "zoom" event in JavaScript](https://stackoverflow.com/questions/995914/catch-browsers-zoom-event-in-javascript) I've been playing around with attempting to track browser zoom for the last day or so, and this is about as close as you can get without a standard onZoom event that you can kill. ``` document.observe('keydown', function (ev) { var key, keys = ['0']; var isApple = (navigator.userAgent.indexOf('Mac') > -1), isCmmd, isCtrl; if (window.event) { key = window.event.keyCode; isCtrl = window.event.ctrlKey ? true : false; isCmmd = window.event.metaKey ? true : false; } else { key = ev.which; isCtrl = ev.ctrlKey ? true : false; isCmmd = ev.metaKey ? true : false; } if (isCtrl || (isCmmd && isApple)) { switch (key) { case 48: // 0 // do not stop, or user could get stuck break; case 187: // + case 189: // - ev.stop(); break; default: break; } } }); ``` Unfortunately, having played with this for a while now, there isn't any surefire way to really disable it. The zoom options are still available through the main application menus, and there is no real way to track zoom across page reloads (besides, WebKit exhibits some odd behavior when you attempt to track zoom at all). Although many people would like to keep browser zoom more hidden, I can personally see the possible benefits of being able to observe zoom separately from resize, as they are mostly indistinguishable at this point.
XGBoost: What does the parameter 'objective' set? I want to solve a regression problem with XGBoost. I'm confused with the Learning Task parameter objective [ default=reg:linear ] ([XGBoost](http://xgboost.readthedocs.io/en/latest/parameter.html)); **it seems that 'objective' is used for setting the loss function.** But I can't understand how 'reg:linear' influences the loss function. In the logistic regression demo ([XGBoost logistic regression demo](https://github.com/dmlc/xgboost/blob/master/demo/binary_classification/mushroom.conf)), objective = binary:logistic means the loss function is the logistic loss. So which loss function does 'objective=reg:linear' correspond to?
> > So 'objective=reg:linear' corresponds to which loss function? > > > **Squared error** You can take a look at the loss functions ( which are based on the gradient and hessian ) for both logistic regression and linear regression here <https://github.com/dmlc/xgboost/blob/master/src/objective/regression_obj.cc> Note the loss functions are reasonably similar. Just that the `SecondOrderGradient` is a constant in square loss ``` // common regressions // linear regression struct LinearSquareLoss { static float PredTransform(float x) { return x; } static bool CheckLabel(float x) { return true; } static float FirstOrderGradient(float predt, float label) { return predt - label; } static float SecondOrderGradient(float predt, float label) { return 1.0f; } static float ProbToMargin(float base_score) { return base_score; } static const char* LabelErrorMsg() { return ""; } static const char* DefaultEvalMetric() { return "rmse"; } }; // logistic loss for probability regression task struct LogisticRegression { static float PredTransform(float x) { return common::Sigmoid(x); } static bool CheckLabel(float x) { return x >= 0.0f && x <= 1.0f; } static float FirstOrderGradient(float predt, float label) { return predt - label; } static float SecondOrderGradient(float predt, float label) { const float eps = 1e-16f; return std::max(predt * (1.0f - predt), eps); } ``` the authors mention this here <https://github.com/dmlc/xgboost/tree/master/demo/regression>
Why can't Scala find implicit value for parameter scala.slick.session.Session? I am running a Scala Play 2.2 application with Slick 1.0.1. I am trying to wrap all of my database calls into a future try, for example: ``` object DbTeachers extends Table[DbTeacher]("edu_teachers") { ... def insertTeacher(school: Int, userId: String) (implicit ec: ExecutionContext, db: Database) = future { Try { db.withSession => { implicit s: Session => (DbTeachers.school ~ DbTeachers.teacher).insert(school, userId) }}} } ``` I find that the pattern `future { Try { db.withSession => { ACTUAL_CODE_GOES_HERE }}}` creates clutter and I would like to abstract it out as follows: ``` sealed class DbAsync[T](block: => T) { import play.api.libs.concurrent.Execution.Implicits.defaultContext implicit lazy val db = Database.forDataSource(DB.getDataSource()) def get: Future[Try[T]] = future { Try { db.withSession { implicit s: Session => block }}} } object DbAsync { def apply[T](block: => T): Future[Try[T]] = new DbAsync[T](block).get } ``` And then I can write my insertTeacher function as: ``` def insertTeacher(school: Int, userId: String) = DbAsync { (DbTeachers.school ~ DbTeachers.teacher).insert(school, userId) } ``` However, the scala compiler (2.10.2) complains about this: `could not find implicit value for parameter session: scala.slick.session.Session` According to my understanding, the `insert()` method does have an implicit session variable in scope within the DbAsync block, and because it is a call-by-name parameter, it shouldn't actually be evaluated until it is called within the DbAsync, at which time there would be an implicit session object in scope. So, my question is, how do I convince the Scala compiler that there actually is an implicit Session object in scope?
Your suggestion is incorrect. It doesn't matter where `call-by-name` parameter will be evaluated. All implicit parameters should be resolved at compile time in the place where they are required. You could make it work this way: ``` def dbAsync[T](block: Session => T): Future[Try[T]] = { import play.api.libs.concurrent.Execution.Implicits.defaultContext implicit lazy val db = Database.forDataSource(DB.getDataSource()) future { Try { db.withSession { block }}} } def insertTeacher(school: Int, userId: String) = dbAsync { implicit s: Session => (DbTeachers.school ~ DbTeachers.teacher).insert(school, userId) } ``` Note that you don't need class `DbAsync` nor object `DbAsync`. Note that you should not use `defaultContext` for blocking operations. You could create additional `ExecutionContext` with configured thread pool.
Why is the first bar so big in my R histogram? I'm playing around with R. I try to visualize the distribution of 1000 dice throws with the following R script: ``` cases <- 1000 min <- 1 max <- 6 x <- as.integer(runif(cases,min,max+1)) mx <- mean(x) sd <- sd(x) hist( x, xlim=c(min - abs(mx/2),max + abs(mx/2)), main=paste(cases,"Samples"), freq = FALSE, breaks=seq(min,max,1) ) curve(dnorm(x, mx, sd), add = TRUE, col="blue", lwd = 2) abline(v = mx, col = "red", lwd = 2) legend("bottomleft", legend=c(paste('Mean (', mx, ')')), col=c('red'), lwd=2, lty=c(1)) ``` The script produces the following histogram: [![histogram](https://i.stack.imgur.com/oX3mO.png)](https://i.stack.imgur.com/oX3mO.png) Can someone explain to me why the first bar is so big? I've checked the data and it looks fine. How can I fix this? Thank you in advance!
Histograms aren't good for discrete data, they're designed for continuous data. Your data looks something like this: ``` > table(x) x 1 2 3 4 5 6 174 138 162 178 196 152 ``` i.e. roughly equal numbers of each value. But when you put that in a histogram, you chose breakpoints at 1:6. The first bar has 174 entries on its left limit, and 138 on its right limit, so it displays 312. You could get a better looking histogram by specifying breaks at the half integers, i.e. `breaks = 0:6 + 0.5`, but it still doesn't make sense to be using a histogram for data like this. Simply running `plot(table(x))` or `barplot(table(x))` gives a more accurate depiction of the data.
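To see the difference, here is a small sketch using the `x` from the question; the first call gives each integer value its own bin, the second treats the data as the discrete counts they are:

```r
# half-integer breaks: one bin per die face
hist(x, breaks = 0:6 + 0.5, freq = FALSE, main = "1000 Samples")

# more honest for discrete data
barplot(table(x))
```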
IntelliJ IDEA + Maven what is the need for dependency entries in an iml file? In Maven, the dependencies of a project is specified in the pom.xml file. In IntelliJ IDEA, the same information is stored in an iml file even for Maven projects. What is the need for having the same infromation in two places?
When importing a Maven project, IDEA represents its dependency model in a format that it can understand and use internally. If you manage dependencies via Maven, you don't need to change the dependencies in the IntelliJ IDEA module settings. This can also be used to experiment with dependencies without changing the `pom.xml`. Note that all the modifications you make will be reverted on the next Maven import. In other words, IDEA doesn't understand the Maven model directly; it converts it to its own project model used by all the subsystems, and the internal project information needs to be stored somewhere, hence the `.iml` files and `.idea` project directory. This way IDEA doesn't need to analyze the pom file every time you open the project and resolve all the dependencies again; it's done only when the `pom.xml` changes. As you can build/run/test/deploy/debug Maven projects in IDEA without using Maven at all, the information needed for these tasks is stored in a format that the IDE itself can understand. It's faster, easier to maintain, and I guess was easier to implement than reading the Maven model directly.
GDB script to trace Objective-C calls on iOS device - issue I have a gdb script I am working on to trace all Objective-C method calls that pass through objc_msgSend, but I've encountered a problem I can't seem to deal with. After reviewing the Objective-C runtime source code I have developed the following script to print [ ] at each break on objc_msgSend. The issue is that there are situations in which data_NEVER_USE is not a valid pointer but is also not null. The only indicator that I can find of whether a class is initialized is in id->data_NEVER_USE->flags & RW_REALIZED. What aspect of class initialization am I missing here that would allow me to skip this case? ``` b objc_msgSend c commands silent if (*$r0 == 0) continue end set $id = (class_t *)$r0 set $sel = $r1 print *$id if($id->data_NEVER_USE != 0) set $data = (class_ro_t *) ($id->data_NEVER_USE) if (($data->flags & 0x80000000) && ($data->name)) set $classname = $data->name printf "[%s ", $classname else continue end end if ($sel != 0) printf "%s", $sel else printf "null" end printf "]\n" continue end ``` I appreciate any help on this. Thanks.
These 2 methods have worked reasonably well for me. Note that in my example I am manually starting "SomeApp" in order to monitor it as soon as it starts up. ``` gdb (gdb) attach --waitfor 'SomeApp' **this is where you manually start SomeApp on your device** call (void)instrumentObjcMessageSends(YES) ``` The "instrumentObjcMessageSends" enables/disables message logging from within the runtime. Here's some [more information on this method.](http://www.dribin.org/dave/blog/archives/2006/04/22/tracing_objc/) Another option, again still using GDB on your iDevice, is to write a small command like this: ``` FooPad:~ root# gdb (gdb) attach SBSettings Attaching to process 440. Reading symbols for shared libraries . done Reading symbols for shared libraries ............................. done 0x35686004 in mach_msg_trap () (gdb) break objc_msgSend Breakpoint 1 at 0x3323ef72 (gdb) commands Type commands for when breakpoint 1 is hit, one per line. End with a line saying just "end". >printf "-[%s %s]\n", (char *)class_getName(*(long *)$r0,$r1),$r1 >c >end (gdb) c Continuing. // a ton of information will follow ``` As soon as you press **"c"** (right above line that reads "Continuing.", your screen will fill with function names and arguments. And finally follow [these instructions](http://www.securitylearn.net/2012/05/20/installing-gnu-debugger-on-iphone/) to get a working GDB on your iDevice. For posterity I'll post the short instructions here: > > GNU Debugger (gdb) is used to analyze the run time behavior of an iOS > application. In recent iOS versions, GNU Debugger directly downloaded > from the Cydia is broken and not functioning properly. Following the > Pod 2g blog post also did not help me. > > > To get rid of this problem, add <http://cydia.radare.org> to cydia > source and download the latest GNU Debugger (build 1708). GDB build > 1708 is working for iOS 5.x. > > >
Use HERE file and redirect output of command I have the following code in a batch script: ``` mpirun -np 6 ./laplace <<END 100 100 100 0.01 100 3 2 1 END | tail -n 1 > output ``` But it isn't working. What I want it to do is to use the HERE file as input to the mpirun command, and then pipe the output to the tail command. However, I think the HERE file and tail output things are getting confused. How should I write this so that it does what I want?
What you've written in the first line looks like a complete command (a "(compound) list" in shell terminology), so the shell treats it as a complete command. Since there's a here-document start marker (`<<END`), the shell then reads the here-document contents, and then starts a new command. If you want to put the here-document in the middle of a list, you need to indicate to the shell that the list is not finished. Here are a couple of ways. ``` mpirun -np 6 ./laplace <<END | … END tail -n 1 > output ``` ``` { mpirun -np 6 ./laplace <<END … END } | tail -n 1 > output ``` Or, of course, you can make sure the command completely fits on the first line. ``` mpirun -np 6 ./laplace <<END | tail -n 1 > output … END ``` The rule to remember is that the here-document contents start after the first unquoted newline after the `<<END` indicator. For example, here's another obfuscated way of writing this script: ``` mpirun -np 6 ./laplace <<END \ | tail -n $( … END echo 1) > output ```
C code with ncurses compiled with libtinfo dependency I've recently written a minesweeper implementation in C using ncurses on Linux; everything works fine on my PC, but if I try to give the compiled binaries to someone else they often get the error: `error while loading shared libraries: libtinfo.so.5: cannot open shared object file: No such file or directory` If I have them recompile the code everything's fine. By looking around I discovered it's an issue with the separation between libtinfo and libncurses. It can be solved by making a few symlinks, but that solution is only viable when the user has root privileges. As of here (and other sources), <http://www.cyberspice.org.uk/blog/2009/12/24/tinfo-about-dash/> , it seems that it's an issue that can be solved by writing the code in a different way or maybe compiling differently. I'd rather be able to solve the problem that way than by forcing people to make symlinks. Any pointers in the right direction to understand how to solve my issue? I can add any code or details, if needed, but it seems overkill to post *everything*, so please tell me what can be added (if needed) to better understand the problem. The only thing I'm posting for now is the makefile: ``` CC=gcc -std=gnu89 -pedantic -Wall -Wno-unused-but-set-variable CFLAGS=-c -g LDFLAGS=-lncurses NAME=campo_ex OBJECTS=error.o interface.o utilities.o main.o grid.o DEBUG_NAME=debug DEBUG_OBJECTS=error.o interface.o utilities.o debug.o $(NAME): $(OBJECTS) $(CC) -o $(NAME) $(OBJECTS) $(LDFLAGS) main.o: main.c interface.h grid.h $(CC) $(CFLAGS) main.c debug.o: debug.c interface.h $(CC) $(CFLAGS) debug.c error.o: error.c error.h $(CC) $(CFLAGS) error.c utilities.o: utilities.c utilities.h $(CC) $(CFLAGS) utilities.c interface.o: interface.c interface.h error.h utilities.h $(CC) $(CFLAGS) interface.c grid.o: grid.c grid.h error.h $(CC) $(CFLAGS) grid.c .PHONY: clean clean: @-rm -f $(OBJECTS) $(NAME) $(DEBUG_NAME) $(DEBUG_OBJECTS) .PHONY: debug debug: $(DEBUG_OBJECTS) $(CC) -o $(DEBUG_NAME) $(DEBUG_OBJECTS) $(LDFLAGS) ```
Executing `readelf -d` on your compiled program will probably show the connection to libtinfo.so.5 ``` $ readelf -d /path/to/your/program | grep NEEDED [...] 0x0000000000000001 (NEEDED) Shared library: [libtinfo.so.5] [...] ``` This may be pulled in because your `libncurses.so` pulls it in somehow, e.g. by containing something like: ``` INPUT(... -ltinfo) ``` (Or something similar. I can just guess here.) You can try adding `-Wl,--as-needed` to your `LDFLAGS` and hope that your program is not referencing any symbols from `libtinfo` directly, so that the linker does not need to add a dependency for `libtinfo` to your program. ``` LDFLAGS=-Wl,--as-needed -lncurses ``` Recompile with the new `LDFLAGS` and check again with `readelf -d` whether it got compiled and linked without errors. Using `--as-needed` can be problematic if `libncurses` uses symbols from `libtinfo` but does not include a dependency to `libtinfo` itself. If this happens your build will fail and complain about `unreferenced symbols` or similar. So if this does not work you may want to fix your curses installation or use the (in my opinion very dirty) symlink hack you already mentioned. Or let users compile the code on their systems; if you do not want to share code, you can also just do the linking on the target system. To fix the symlink-needs-root-privileges issue you can also add `-Wl,-rpath,'$ORIGIN/../lib'` to your linker flags and expand the library search path of your program. This enables users to install your binary to `/home/user/bin/program` and search libraries in `/home/user/bin/../lib`. So they can do "dirty" symlink hacks in `/home/user/lib`. It is always problematic when distributing binaries only.
Android Mobile Vision API and ML Kit installed without Google Play Services I have an Android app that uses Android Mobile Vision API to recognise text (OCR). However, the device that the app is installed on has no Google Play Services installed. I want to find out if it is possible to install ONLY Mobile Vision API or ML Kit without installing Google Play Services. OS: Lollipop [Android Mobile Vision API](https://developers.google.com/vision/android/text-overview) [ML Kit](https://firebase.google.com/docs/ml-kit/recognize-text)
`TL;DR`: Both require Play Services Firebase ML Kit, either the off-line Latin or the on-line version has a prerequisite of firebase-core and google-services (Google Play Services 15.0.0 or above). ``` |-com.google.firebase:firebase-ml-vision |-com.google.firebase:firebase-core |-com.google.gms:google-services ``` > > Prerequisites > > > - A device running Android 4.0 (Ice Cream Sandwich) or newer, and Google Play services 15.0.0 or higher. > > > re: <https://firebase.google.com/docs/android/setup> Google's Mobile Vision API, replaced by ML Kit, also requires Play Services. > > - Have an Android device for testing, that runs Android 2.3 (Gingerbread) or higher and includes the Google Play Store. > > >
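For illustration, wiring ML Kit into an app therefore looks roughly like this in `app/build.gradle` (a sketch; the version numbers are placeholders from that era, not necessarily current):

```gradle
dependencies {
    // both pull in Google Play services components
    implementation 'com.google.firebase:firebase-core:16.0.1'
    implementation 'com.google.firebase:firebase-ml-vision:16.0.0'
}

// processes google-services.json; the device still needs Google Play services at runtime
apply plugin: 'com.google.gms.google-services'
```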
How to move to the next line when adding text using Apache PDFBox I've just started using Apache PDFBox and been experimenting with various examples I've found. However, I haven't been able to find an easy way to move to the next line when adding text. E.g. ``` PDPageContentStream content = new PDPageContentStream(document, page); PDFont font = PDType1Font.HELVETICA; content.beginText(); content.setFont(font, 12); content.moveTextPositionByAmount(x, y); content.drawString("Some text."); content.endText(); ``` To add another line of text underneath I had to repeatedly experiment with the value of y in `moveTextPositionByAmount` until it wasn't overwriting the previous line. Is there a more intuitive way to figure out what the coordinates of the next line are? TIA
The PDFBox API allows low-level content generation. This implies that you have to do (but also that you are enabled to do) much of the layout work yourself, among that deciding how much to move down to get to the next baseline. That distance (called *leading* in this context) depends on a number of factors: - the font size used (obviously) - how tightly or loosely spaced the text shall appear - the presence of elements on the lines involved positioned outside the regular line, e.g. superscripts, subscripts, formulas, ... The *standard is arranged so that the nominal height of tightly spaced lines of text is 1 unit* for a font drawn at size 1. **Thus, usually you will use a leading of 1..1.5 times the font size unless there is material on the line reaching beyond it.** BTW, if you have to forward to the next line by the same amount very often, you can use the combination of the `PDPageContentStream` methods `setLeading` and `newLine` instead of `moveTextPositionByAmount`: ``` content.setFont(font, 12); content.setLeading(14.5f); content.moveTextPositionByAmount(x, y); content.drawString("Some text."); content.newLine(); content.drawString("Some more text."); content.newLine(); content.drawString("Still some more text."); ``` PS: It looks like `moveTextPositionByAmount` will be deprecated in the 2.0.0 version and be replaced by `newLineAtOffset`. PPS: As the OP indicates in a comment, > > There is no PDPageContentStream method called setLeading. I'm using PDFBox version 1.8.8. > > > Indeed, I was looking at the current 2.0.0-SNAPSHOT development version. They are currently implemented like this: ``` /** * Sets the text leading. * * @param leading The leading in unscaled text units. * @throws IOException If there is an error writing to the stream. */ public void setLeading(double leading) throws IOException { writeOperand((float) leading); writeOperator("TL"); } /** * Move to the start of the next line of text. Requires the leading to have been set. * * @throws IOException If there is an error writing to the stream. */ public void newLine() throws IOException { if (!inTextMode) { throw new IllegalStateException("Must call beginText() before newLine()"); } writeOperator("T*"); } ``` One can easily implement external helper methods doing the equivalent using `appendRawCommands((float) leading); appendRawCommands(" TL");` and `appendRawCommands("T*");`
How can I use a python interpreter in a singularity/docker image in visual studio code I want to be able to use a python interpreter inside a singularity image from visual studio code. It seems that all of the options to point VSC to python interpreters involve a direct path, but using python within an image requires a command: ``` singularity exec path/to/image.img python3.6 ``` I tried putting this in the VSC settings.json file: ``` "[python]": { "python.pythonPath": "singularity exec /home/sryadgir/all/docker/py_dock/pydock_v0.img python3.6" } ``` with no luck, running any python code from VSC uses the python interpreter here: `/usr/bin/python3`
The easiest way is to use the singularity image's runscript and set `"python.pythonPath": "path/to/python.img"` e.g., ``` $ sudo singularity build py36.simg docker://python:3.6 Docker image path: index.docker.io/library/python:3.6 Cache folder set to /root/.singularity/docker [9/9] |===================================| 100.0% Importing: base Singularity environment Exploding layer: sha256:6f2f362378c5a6fd915d96d11dda1e0223ccf213bf121ace56ae0f6616ea1dc8.tar.gz Exploding layer: sha256:494c27a8a6b820f9167ec7e368b3a9bb47d7029f4dc8c97c67091f3757a5bc4e.tar.gz Exploding layer: sha256:7596bb83081b6c8410df557d538a0ae45922cbf81e469c6f4cfa835247cb24ab.tar.gz Exploding layer: sha256:372744b62d49eba993652ee4a1201801fe278b687d85489101e07e7b9a4900e0.tar.gz Exploding layer: sha256:615db220d76c063138a2e6c5849703a7a80d682a682f7e1a841e6e7ed5f43879.tar.gz Exploding layer: sha256:1865698adfb04b47d1aa53e0f8dac0a511d78285cb4dda39b4f3b0b3b091bb2e.tar.gz Exploding layer: sha256:7159b3304cc0ff68a7903c2660aa37fdae97a02164449400c6ef283a6aaf3879.tar.gz Exploding layer: sha256:ad0713808ef687d1e541819f50497506f5dce12604d1af54dbae153d61d5cf21.tar.gz Exploding layer: sha256:7ba59390457320287875a9c381fee7936b50ecfd21abfe3c50278ac2f39b9786.tar.gz Exploding layer: sha256:14b2fefd5f8a77dd860f2f455a2108a55836dd0062ced0df5fbd636ce3188ff7.tar.gz Building Singularity image... Singularity container built: py36.simg Cleaning up... $ ./py36.simg --version Python 3.6.8 # this is equivalent to: $ singularity exec py36.simg python3 --version Python 3.6.8 ``` If you're using a custom singularity image with multiple versions of python, you'll probably need to make a wrapper script and then use that. e.g., ``` #!/bin/bash exec singularity exec python.img python3.6 "$@" ```
Angularjs Check all in scope of scope I want all checkboxes checked, inside a loop of a loop. But how? ``` <section ng-repeat="user in users"> <p>{{ user.name }}</p> <label for="#"> <input type="checkbox" ng-click="checkAllRoles(user)"> CHECK ALL </label> <ul> <li ng-repeat="role in user.roles"> <input type="checkbox" value="{{ role.id }}"> {{ role.name }} </li> </ul> </section> ```
What you can do is to use `ngChange` handler to loop through all roles changing some property like `checked` bound to `ngModel` directive: ``` $scope.checkAllRoles = function(user) { user.roles.forEach(function(role) { role.checked = user.allChecked; }); }; ``` Where HTML is this: ``` <section ng-repeat="user in users"> <p>{{ user.name }}</p> <label for="#"> <input type="checkbox" ng-model="user.allChecked" ng-change="checkAllRoles(user)"> CHECK ALL </label> <ul> <li ng-repeat="role in user.roles"> <input type="checkbox" value="{{ role.id }}" ng-model="role.checked">{{ role.name }} </li> </ul> </section> ``` **Demo:** <http://plnkr.co/edit/a3I4DKyz6DRUKTIt50Xj?p=preview>
Simply compress 1 folder in batch with WinRAR command line? Using the WinRAR command line `(C:\Program Files\WinRAR\rar.exe)`, all I'm trying to do is compress a single folder `(C:\Users\%username%\desktop\someFolder)` and possibly change the name of the .zip file created. I've tried just `rar.exe a "C:\Users\%username%\desktop\someFile"` and it works, but it compresses another folder (not the one I put). What am I doing wrong? Can you also provide an explanation (and maybe tell me what recursion is, because I'm unfamiliar with it)? Thanks
Use either ``` "%ProgramFiles%\WinRAR\Rar.exe" a -ep1 -idq -r -y "Name of RAR file with path" "%UserProfile%\Desktop\someFolder" ``` or ``` "%ProgramFiles%\WinRAR\Rar.exe" a -ep1 -idq -r -y "Name of RAR file with path" "%UserProfile%\Desktop\someFolder\" ``` to create a **RAR** archive file with the specified name after command `a` (add to archive) and the switches - `-idq` ... enable quiet mode to display only error messages, - `-ep1` ... exclude base directory from specified file/folder names, - `-r` ... recursively archive/compress all files and subdirectories, - `-y` ... assume **Yes** on all queries. The folder `someFolder` **is included** in the archive with the first command line **without a backslash at the end**. The folder `someFolder` **is NOT included** in the archive, just the files and the subdirectories of this folder, with the second command line **with the backslash at the end**. In other words, option `-ep1` results in omitting everything from the path up to the last backslash in the specified file or folder name on adding the file or folder to the archive, which explains the difference between specifying a folder without or with a backslash at the end on the command line. [Recursion](https://en.wikipedia.org/wiki/Recursion) means to add not only the files in the specified folder, but also all subfolders and all files in all subfolders. So RAR must first search the specified folder for a subfolder. If found, go into this subfolder and search again for a subfolder. If one is found, go into this subfolder and search for a subfolder. If none is found, add the files in this subfolder into the archive, or just the folder name if the subfolder is empty. Then go back to the parent folder and continue searching for the next subfolder. If none is found, add the files of this subfolder. Then go back to the parent folder and continue searching for a subfolder, and so on. As you can read, the same procedure is done again and again for each branch of the entire folder tree until all subfolders have been processed. This is done using a recursion: the subroutine searching for subfolders calls itself every time a subfolder is found. **NOTE:** Console version `Rar.exe` supports only creation/extraction of RAR archives. It does not support ZIP archives. This is clearly written at the top of the text file `Rar.txt`, which is the manual for the console version of *WinRAR*. It would be necessary to use `WinRAR.exe` instead of `Rar.exe` to create ZIP archives. Example 1: ``` "%ProgramFiles%\WinRAR\WinRAR.exe" a -afzip -ep1 -ibck -r -y "Name of ZIP file with path" "%UserProfile%\Desktop\someFolder" ``` Example 2: ``` "%ProgramFiles%\WinRAR\WinRAR.exe" a -afzip -ep1 -ibck -r -y "Name of ZIP file with path" "%UserProfile%\Desktop\someFolder\" ``` GUI version `WinRAR.exe` has many commands and switches identical to console version `Rar.exe`, but there are differences, as shown here with `-afzip` supported only by `WinRAR.exe`, and `-ibck` instead of `-idq` to run *WinRAR* in the background, which means minimized to the system tray instead of in the foreground with a visible progress window.
For help on creating a `WinRAR.exe` command line, start *WinRAR*, click in the last main menu **Help** on the first menu item **Help topics**, select the help tab **Contents**, expand the list item **Command line mode**, and make use of the help pages: - *Command line syntax* - *Alphabetic commands list* - *Alphabetic switches list* It is advisable to read the help pages in the listed order, respectively the text file `Rar.txt` from top to bottom, on creating the `WinRAR.exe` or `Rar.exe` command line for usage in a batch file or in a shortcut file (\*.lnk).
Dijkstra with negative edges that leave the source node Dijkstra's Algorithm fails when in a graph we have edges with negative weights. However, to this rule there is an exception: If In a directed acyclic graph only the edges that leave the source node are negative (all the other edges are positive), then we can successfully use Dijkstra's Algorithm. Now my question is, what if in the above exception the graph has a cycle? I believe Dijkstra won't work, but I cannot come up with an example of a directed graph that has cycles, and the only negative edges are those leaving the source node which does not work with Dijkstra. Anyone can suggest an example?
In the scenario you describe, Dijkstra's algorithm will work just fine. The reason it fails in the general case with negative weights is that it greedily chooses which node to "close" at each step, and a closed node is never reopened. Now, assume the source `s` has `k` out edges, to `k` different nodes. Let their order be `v_1, v_2, ..., v_k` (`v_1` being the smallest). Note that for each `v_i`, `v_j` such that `i < j`, there is no path from `s` to `v_i` through `v_j` with a "better" cost than the direct edge to `v_i`; thus the order of investigating these first nodes will never change (and since it doesn't change, no node is ever closed before its shortest path is indeed found). Thus, overall, no harm is done: once a node is closed, you will never find a "shorter" path to it, since the negative edges only leave the source. --- Here I assume the `source` in your question means `d_in(source)=0`, the same as a "source" in a DAG. If you mean edges out of the source vertex in general, it could be a problem: consider a graph on 2 vertices with `w(s,t) = -2` and `w(t,s) = 1`; it contains a negative cycle. So, for the above explanation to work, you must assume `d_in(s) = 0`.
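To see the argument in action, here is a small sketch (plain Python, textbook Dijkstra with no reopening) on a cyclic graph whose only negative edges leave `s`; it still returns the true distances:

```python
import heapq

# cyclic graph: the only negative edges are those leaving s
graph = {
    's': [('a', -2), ('b', -5)],
    'a': [('b', 1)],
    'b': [('a', 2)],   # a -> b -> a forms a (positive-weight) cycle
}

def dijkstra(src):
    dist = {src: 0}
    closed = set()
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u in closed:
            continue
        closed.add(u)          # a closed node is never reopened
        for v, w in graph[u]:
            if v not in dist or d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist

print(dijkstra('s'))  # {'s': 0, 'a': -3, 'b': -5} -- both shortest paths are correct
```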
How do I sort a List by List? In my c# MVC project I have a list of items in that I want to sort in order of another list ``` var FruitTypes = new List<Fruit> { new Fruit { Id = 1, Name = "Banana"}, new Fruit { Id = 2, Name = "Apple" }, new Fruit { Id = 3, Name = "Orange" }, new Fruit { Id = 4, Name = "Plum"}, new Fruit { Id = 5, Name = "Pear" }, }; SortValues = new List<int> {5,4,3,1,2}; ``` Currently my list is showing as default of fruit type. How can I sort the Fruit list by SortValues?
It's unclear if you are sorting by the indexes in `SortValues` or whether `SortValues` contains corresponding `Id` values that should be joined. In the first case: First you have to [Zip](https://msdn.microsoft.com/en-us/library/dd267698(v=vs.110).aspx) your two lists together, then you can sort the composite type that Zip generates, then select the FruitType back out. ``` IEnumerable<FruitType> sortedFruitTypes = FruitTypes .Zip(SortValues, (ft, idx) => new {ft, idx}) .OrderBy(x => x.idx) .Select(x => x.ft); ``` However, this is simply sorting the first list by the ordering indicated in `SortValues`, ***not*** joining the ids. In the second case, a simple join will suffice: ``` IEnumerable<FruitType> sortedFruitTypes = SortValues .Join(FruitTypes, sv => sv, ft => ft.Id, (_, ft) => ft); ``` This works because `Enumerable.Join` maintains the order of the "left" hand side of the join.
Why are $\_SERVER["PHP\_AUTH\_USER"] and $\_SERVER["PHP\_AUTH\_PW"] not set? Before I begin, I'd like to point out that I've browsed Stack Overflow and found other similar questions - [PHP\_AUTH\_USER not set?](https://stackoverflow.com/questions/3663520/php-auth-user-not-set) and [HTTP Auth via PHP - PHP\_AUTH\_USER not set?](https://stackoverflow.com/questions/7053306/http-auth-via-php-php-auth-user-not-set) - and these have pointed out that the authentication $\_SERVER variables won't be set if ''Server API'' is set to ''CGI/FCGI'', but I checked my ''phpinfo()'' output and my ''Server API'' is set to ''Apache 2.0 Handler''. Ok so I have a simple script as follows: ``` <?php echo "Username: " . $_SERVER["PHP_AUTH_USER"] . ", Password: " . $_SERVER["PHP_AUTH_PW"]; ?> ``` ... which I am calling remotely via the following: ``` wget -v --http-user=johnsmith --http-password=mypassword http://www.example.com/myscript.php ``` ... but which only outputs: ``` Username: , Password: ``` I have also tried calling the script using PHP cURL and setting the authentication parameters appropriately as follows: ``` curl_setopt($ch, CURLOPT_USERPWD, "johnsmith:mypassword"); ``` ... but I get the same output as above. Any idea what I'm doing wrong? Perhaps there is something else I need to enable / configure?
I've finally discovered the answer thanks to the help of Naktibalda in ##php on irc.freenode.net The following page summarises the issue: <http://php.net/manual/en/features.http-auth.php> To quote the relevant bits: > > As of PHP 4.3.0, in order to prevent someone from writing a script which reveals the password for a page that was authenticated through a traditional external mechanism, the PHP\_AUTH variables will not be set if external authentication is enabled for that particular page and safe mode is enabled. Regardless, REMOTE\_USER can be used to identify the externally-authenticated user. So, you can use $\_SERVER['REMOTE\_USER']. > > > ... > > > PHP uses the presence of an AuthType directive to determine whether external authentication is in effect. > > >
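In practice that means a fallback along these lines (a sketch; which variable is populated depends on how Apache handled the authentication):

```php
<?php
if (isset($_SERVER['PHP_AUTH_USER'])) {
    $user = $_SERVER['PHP_AUTH_USER'];   // PHP handled the auth itself
} elseif (isset($_SERVER['REMOTE_USER'])) {
    $user = $_SERVER['REMOTE_USER'];     // external (e.g. Apache) authentication
} else {
    $user = null;                        // no authentication information available
}
echo "Username: " . $user;
?>
```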
How to assign a list of instance methods to a class variable? I have this class which works: ``` class Point: def __init__(self): self.checks = [self.check1, self.check2] def check1(self): return True def check2(self): return True def run_all_checks(self): for check in self.checks: check() ``` The instance variable `checks` is not specific to instances, so I want to move it to class level; here is my attempt: ``` class Point: def __new__(cls, *args, **kwargs): cls.checks = [cls.check1, cls.check2] return super(Point, cls).__new__(cls, *args, **kwargs) def check1(self): return True def check2(self): return True def run_all_checks(self): for check in self.checks: check() ``` The class definition seems to work (in the sense that there are no syntax errors), but when I run it, I get this error: ``` TypeError: Point.check1() missing 1 required positional argument: 'self' ``` *Update* With @juanpa.arrivillaga's solution, my problem is solved: ``` class ParentFuncs: def check1(self): print("check1") def check2(self): print("check2") checks = [check1, check2] def run_all_checks(self): for check in self.checks: check(self) class ChildFuncs(ParentFuncs): def check3(self): print("check3") def check4(self): print("check4") checks = ParentFuncs.checks + [check3, check4] ChildFuncs().run_all_checks() # Output check1 check2 check3 check4 ```
Just do it in the class body, and then in `run_all_checks`, make sure to pass the instance explicitly, because they will just be functions, not bound methods (which get created *when accessing a method through and instance*): ``` class Point: def check1(self): return True def check2(self): return False checks = [check1, check2] def run_all_checks(self): for check in self.checks: print(check(self)) point = Point() point.run_all_checks() ``` Note, this doesn't behave the same way in the case of inheritance. But that is probably OK, depending on your use-case. You *could* do something like `checks = ['check1', 'check2']` and then in `run_all_checks`, something like `getattr(self, check)()`, but IMO, this is more brittle. So I would go with your original approach in that case if I intended these methods to be overridden in subclasses.
Getting only 1st elements of list using sapply I've read into R a data frame (df<-read.csv(data,as.is=T)) with n columns, one of which is a char column like this: ``` df$qual =========== 1/5 12/17 ... 0/3 9/14 ``` I'd like to convert this column to a vector of numbers, keeping ONLY the 1st element of every row. ``` df$qual =========== 1 12 ... 0 9 ``` I guess there are simpler ways (ideas welcome!), but I tried sapply: ``` sapply(df$qual,strsplit() ,simplify=T) ``` Now the problem is how/where do I pass the argument split="/" so that this works? The R help is not helping much. thanks in advance, p.
Perhaps ``` sapply(strsplit(df$qual,split="/") , "[[", 1) ``` ? The explanation: `strsplit` generates a list of results, i.e. a character vector for each character element in the original input. The `"[["` is a short-hand way to call the indexing operator, and `1` says to pass the additional argument `1` to `[[` -- i.e., take the first element. Two other, possibly more transparent ways to do the same thing: ``` sapply(strsplit(df$qual,split="/"), function(x) x[[1]]) ``` or ``` sapply(strsplit(df$qual,split="/") , head, 1) ``` You may want to consider `as.numeric()` at the end.
What are the differences between Hibernate and JPA? When I was in college learning about web programming, they told us about Hibernate. We used it for a while, I even had the chance to work with it in a real scenario in a company for almost 8 months. Now that I completely switched to Java EE 6, I use JPA for my ORM needs. It has been a few months since I use it, but I don't really understand what are the differences between one and other. Why some people say one or other is better or worse? The way I do my mappings and annotations in both is almost the same. Maybe you can solve some of my doubts: - What are the advantages and disadvantages of each? - Does Hibernate uses JPA or the other way around (do they depend on each other)? - From the point of view of features, what features does one have that does not have the other? - Any other differences between both?
JPA (Java Persistence API) is an API specification; JPA 2.0 was defined by JSR 317. Basically it's a framework to manage relational data by using ORM (Object Relational Mapping) for data persistence. Hibernate is an ORM library that maps your POJOs/JavaBeans to your data persistence. Both use object/relational metadata (XML, annotations) for mapping POJOs to DB tables. > > Does Hibernate use JPA or the other > way around (do they depend on each > other)? > > > Hibernate 3 now supports JPA 2.0. JPA is a specification describing the relation and management of relational data using an object model. Since JPA is an API, Hibernate implements this API. All you have to do is write your program using the JPA API classes/interfaces, configure Hibernate as a JPA resource, and voila, you've got JPA running. > > What are the advantages and > disadvantages of each? > > > Advantages: - Avoids low-level JDBC and SQL code. - It's free (EclipseLink e.g. for JPA). - JPA is a standard and part of EJB3 and Java EE. That's all I know of Hibernate.
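To make the relationship concrete, here is a minimal sketch written purely against the JPA API (the persistence unit name and the `Customer` entity are made up; Hibernate would be named as the provider in `persistence.xml`):

```java
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

public class JpaDemo {
    public static void main(String[] args) {
        // "demoUnit" must match a persistence-unit in persistence.xml,
        // where Hibernate is configured as the JPA provider
        EntityManagerFactory emf = Persistence.createEntityManagerFactory("demoUnit");
        EntityManager em = emf.createEntityManager();

        em.getTransaction().begin();
        em.persist(new Customer("Alice")); // Customer is a hypothetical @Entity
        em.getTransaction().commit();

        em.close();
        emf.close();
    }
}
```

Swapping Hibernate for another JPA implementation (e.g. EclipseLink) would only require changing the provider configuration, not this code.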
Simulate autofit column in xslxwriter I would like to simulate the Excel autofit function in Python's xlsxwriter. According to this url, it is not directly supported: <http://xlsxwriter.readthedocs.io/worksheet.html> However, it should be quite straightforward to loop through each cell on the sheet and determine the maximum size for the column and just use worksheet.set\_column(row, col, width) to set the width. The complications that is keeping me from just writing this are: 1. That URL does not specify what the units are for the third argument to set\_column. 2. I can not find a way to measure the width of the item that I want to insert into the cell. 3. xlsxwriter does not appear to have a method to read back a particular cell. This means I need to keep track of each cell width as I write the cell. It would be better if I could just loop through all the cells, that way a generic routine could be written.
[NOTE: as of Jan 2023 xlsxwriter added a new method called autofit. See jmcnamara's answer below] As a general rule, you want the width of the columns a bit larger than the size of the longest string in the column. The width of one unit of the xlsxwriter columns is about equal to the width of one character. So, you can simulate autofit by setting each column to the max number of characters in that column. For example, I tend to use the code below when working with pandas dataframes and xlsxwriter. It first finds the maximum width of the index, which is always the left column for a pandas-to-excel rendered dataframe. Then, it returns the maximum of all values and the column name for each of the remaining columns moving left to right. It shouldn't be too difficult to adapt this code for whatever data you are using. ``` def get_col_widths(dataframe): # First we find the maximum length of the index column idx_max = max([len(str(s)) for s in dataframe.index.values] + [len(str(dataframe.index.name))]) # Then, we concatenate this to the max of the lengths of column name and its values for each column, left to right return [idx_max] + [max([len(str(s)) for s in dataframe[col].values] + [len(col)]) for col in dataframe.columns] for i, width in enumerate(get_col_widths(dataframe)): worksheet.set_column(i, i, width) ```
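With a recent XlsxWriter (the `autofit` method mentioned in the note above, available from 3.0.6 on), the whole exercise reduces to a sketch like:

```python
import xlsxwriter

workbook = xlsxwriter.Workbook("out.xlsx")
worksheet = workbook.add_worksheet()

worksheet.write_row(0, 0, ["short", "a considerably longer cell value"])
worksheet.autofit()   # sizes columns based on the data written so far

workbook.close()
```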
Haskell: runInteractiveCommand: how to ignore the output given so far? I run an external interactive program in Haskell via "runInteractiveCommand" and want to read the output of the program until it shows the program's prompt. So far I have ``` checkConsistency cal nets = do (sparqIn, sparqOut, sparqErr, sparqId) <- runInteractiveCommand "sparq -i" mapM_ (flip hSetBinaryMode False) [sparqIn, sparqOut] hSetBuffering sparqIn LineBuffering hSetBuffering sparqOut NoBuffering hPutStrLn sparqIn ("load-calculus " ++ cal) -- Here I need to wait for output and ignore it until the prompt is shown! sparqAnswers <- mapM (checkConsistencyWithSparq sparqIn sparqOut) nets return sparqAnswers ``` I think I should use "hReady" and "hGetChar" in a while-loop, but I am not sure how to do it exactly. Or maybe there is a better way? Kind regards, Annaluise
What it seems you want to do is [Parse](http://en.wikipedia.org/wiki/Parsing) the output of your interactive program. While this can get very hairy (depending on the format of the output, your semantic requirements etc.), often you can get away with some very simple constructs. Like the recursive `parseUntilPrompt` in the following example: ``` import Control.Applicative import System.IO import System.Process main = do (inn, out, err, idd) <- runInteractiveCommand "mongo" mapM_ (flip hSetBinaryMode False) [inn, out] hSetBuffering inn LineBuffering hSetBuffering out NoBuffering hPutStrLn inn "help" parsedIntro <- parseUntilPrompt out mapM_ (putStrLn . \x -> "PARSED:: " ++ x) parsedIntro parseUntilPrompt :: Handle -> IO [String] parseUntilPrompt out = do latest <- hGetLine out if latest == "" then return [] else (:) <$> return latest <*> parseUntilPrompt out ``` which happily parses the output of my `help` command: ``` *Interactive> main PARSED:: MongoDB shell version: 1.8.0 PARSED:: connecting to: test PARSED:: > help PARSED:: db.help() help on db methods PARSED:: db.mycoll.help() help on collection methods PARSED:: rs.help() help on replica set methods PARSED:: help connect connecting to a db help PARSED:: help admin administrative help PARSED:: help misc misc things to know PARSED:: help mr mapreduce help *Interactive> ```
Bash script processing limited number of commands in parallel I have a bash script that looks like this: ``` #!/bin/bash wget LINK1 >/dev/null 2>&1 wget LINK2 >/dev/null 2>&1 wget LINK3 >/dev/null 2>&1 wget LINK4 >/dev/null 2>&1 # .. # .. wget LINK4000 >/dev/null 2>&1 ``` But processing each line until the command is finished then moving to the next one is very time consuming, I want to process for instance 20 lines at once then when they're finished another 20 lines are processed. I thought of `wget LINK1 >/dev/null 2>&1 &` to send the command to the background and carry on, but there are 4000 lines here this means I will have performance issues, not to mention being limited in how many processes I should start at the same time so this is not a good idea. One solution that I'm thinking of right now is checking whether one of the commands is still running or not, for instance after 20 lines I can add this loop: ``` while [ $(ps -ef | grep KEYWORD | grep -v grep | wc -l) -gt 0 ]; do sleep 1 done ``` Of course in this case I will need to append & to the end of the line! But I'm feeling this is not the right way to do it. So how do I actually group each 20 lines together and wait for them to finish before going to the next 20 lines, this script is dynamically generated so I can do whatever math I want on it while it's being generated, but it DOES NOT have to use wget, it was just an example so any solution that is wget specific is not gonna do me any good.
Use the `wait` built-in: ``` process1 & process2 & process3 & process4 & wait process5 & process6 & process7 & process8 & wait ``` For the above example, 4 processes `process1` ... `process4` would be started in the background, and the shell would wait until those are completed before starting the next set. From the [GNU manual](http://www.gnu.org/software/bash/manual/bashref.html#Job-Control-Builtins): > > > ``` > wait [jobspec or pid ...] > > ``` > > Wait until the child process specified by each process ID pid or job specification jobspec exits and return the exit status of the last > command waited for. If a job spec is given, all processes in the job > are waited for. If no arguments are given, all currently active child > processes are waited for, and the return status is zero. If neither > jobspec nor pid specifies an active child process of the shell, the > return status is 127. > > >
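If the links come from a file instead of a generated script, the same idea can be written as a loop that waits after every 20 background jobs (a sketch; `links.txt` is assumed to hold one URL per line):

```bash
#!/bin/bash
count=0
while read -r link; do
    wget "$link" >/dev/null 2>&1 &
    count=$(( count + 1 ))
    if (( count % 20 == 0 )); then
        wait   # block until the current batch of 20 finishes
    fi
done < links.txt
wait   # catch the final, possibly partial batch
```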
Draw multiple Bootstrap curves in R I am wondering how to draw these multiple bootstrap curves in R. My code so far is:

```
dat2 <- read.delim("bone.data", sep ="\t", header= TRUE)
y <- dat2[,4]
x <- dat2[,2]
plot(x,y,xlab="age",ylab="BMD",col=ifelse(dat2[,3]=="female","red","blue"))
```

The multiple bootstrap curves I am after look like the bottom-left panel of Figure 8.2 in this book: [ESL](http://www-stat.stanford.edu/~tibs/ElemStatLearn/) ![enter image description here](https://i.stack.imgur.com/Umrz4.png) The data, named Bone Mineral Density, can be obtained from this website: [data](http://www-stat.stanford.edu/~tibs/ElemStatLearn/data.html) The direct link to the file is [here](http://www-stat.stanford.edu/~tibs/ElemStatLearn/datasets/bone.data "here").
You can plot a spline curve using `smooth.spline` and `lines`: ``` plot.spline = function(x, y, ...) { s = smooth.spline(x, y, cv=TRUE) lines(predict(s), ...) } ``` So to perform bootstrapping, as per the instructions in the book, you sample random rows from the data with replacement, and call `plot.spline` on the resampled data: ``` bootstrap.curves = function(dat, nboot, ...) { for (i in 1:nboot) { subdata = dat[sample(NROW(dat), replace=TRUE), ] plot.spline(subdata$age, subdata$spnbmd, ...) } } ``` You can thus use this function to run separate plots for male and female: ``` bootstrap.curves(dat2[dat2$gender == "female", ], 10, col="red") bootstrap.curves(dat2[dat2$gender == "male", ], 10, col="blue") ``` End result: ![enter image description here](https://i.stack.imgur.com/8aeqW.png) *Note:* This code will produce a number of warnings (not errors) that look like: ``` 1: In smooth.spline(x, y, cv = TRUE) : crossvalidation with non-unique 'x' values seems doubtful ``` This is because of the bootstrap resampling. `smooth.spline` uses cross validation to decide on the number of degrees of freedom to give a spline, but it prefers not to do so with duplicate `x` values (as there effectively always will be with bootstrap resampling). You could get around this by choosing your own number of degrees of freedom, but this is probably fine for this purpose.
How to stop reading a spreadsheet with xlrd at the first empty row? I'm using xlrd to sniff through a directory structure and pull out spreadsheets, reading the second row (row 1) down to "do stuff." The issue is that I don't know how to stop the reading/printing at the first empty row. I understand that rows are not "empty" objects, but I'd appreciate a little help showing how to check if all cells are empty. Here is the code I'm working with:

```
import xlrd
import os

def excel_file_filter(filename, extensions=['.xls', '.xlsx']):
    return any(filename.endswith(e) for e in extensions)

def get_filenames(root):
    filename_list = []
    for path, subdirs, files in os.walk(root):
        for filename in filter(excel_file_filter, files):
            filename_list.append(os.path.join(path, filename))
    return filename_list

spreadsheets = get_filenames('C:\\Temp')
for s in spreadsheets:
    with xlrd.open_workbook(s) as wb:
        cs = wb.sheet_by_index(0)
        num_cols = cs.ncols
        for row_index in range(1, cs.nrows):
            print('Row: {}'.format(row_index))
            for col_index in range(0, num_cols):
                cell_obj = cs.cell(row_index, col_index)
                if cell_obj is not xlrd.empty_cell:
                    print('Col #: {} | Value: {}'.format(col_index, cell_obj))
```

What ends up happening is that it prints all the way through almost 1000 rows, when only the first, say, 25 rows have content in them. The amount of content between the spreadsheets varies, so a generic solution (one that does not rely on other optional libraries) that helps me understand how to detect an empty row and then break would be appreciated.
**First:** To get the cell value and check whether it is empty, use one of the methods explained in the answer to the question [How to detect if a cell is empty when reading Excel files using the xlrd library?](https://stackoverflow.com/questions/11603009/how-to-detect-if-a-cell-is-empty-when-reading-excel-files-using-the-xlrd-library)

1. When using `cell_val = cs.cell(row_index, col_index).value` to get the value:
   - To check if it is empty, simply write `if cell_val == ''`
2. When using `cell_object = cs.cell(row_index, col_index)` to get the cell:
   - To check if it is empty, first get the cell type with `cell_type = cs.cell_type(row_index, col_index)`, then check `if cell_type == xlrd.XL_CELL_EMPTY`

**Second:** To check whether the whole row is empty, you can do the following:

1. Define a counter (`count_empty = 0`) for the number of empty cells in the row, and a boolean (`empty_cell = False`)
2. Check if the cell is empty: if it is, increment the counter and set `empty_cell` to `True`; if not, set `empty_cell` to `False`
3. If `empty_cell` is `False`, print the value of the cell
4. After looping through the columns in the row, if `count_empty` equals the number of columns, the whole row is empty, so break and stop looping through rows

**The Code:**

```
# define empty_cell boolean
empty_cell = False
with xlrd.open_workbook(s) as wb:
    cs = wb.sheet_by_index(0)
    num_cols = cs.ncols
    num_rows = cs.nrows
    for row_index in range(1, num_rows):
        # reset the count of empty cells for this row
        count_empty = 0
        print('Row: {}'.format(row_index))
        for col_index in range(0, num_cols):
            # get cell value
            cell_val = cs.cell(row_index, col_index).value
            # check if cell is empty
            if cell_val == '':
                # set empty_cell to True
                empty_cell = True
                # increment counter
                count_empty += 1
            else:
                # set empty_cell to False
                empty_cell = False
            # check if cell is not empty
            if not empty_cell:
                # print value of cell
                print('Col #: {} | Value: {}'.format(col_index, cell_val))
        # if the counter equals num_cols, the whole row is empty
        if count_empty == num_cols:
            print('Row is empty')
            # stop looping to next rows
            break
```

Note: I used the first method, `cell_val = cs.cell(row_index, col_index).value`, to get the value of cells, as I find it simpler. If you want to use the other method, change the following:

```
cell_val = cs.cell(row_index, col_index)        # remove .value
cell_type = cs.cell_type(row_index, col_index)  # add this line

# check if cell is empty
if cell_type == xlrd.XL_CELL_EMPTY:             # replaces: if cell_val == '':
```

Other links that helped me understand how to check if a cell is empty: [xlrd.XL\_CELL\_EMPTY](http://nullege.com/codes/search/xlrd.XL_CELL_EMPTY) and [Validating a cell value using XLRD](https://stackoverflow.com/questions/29907072/validating-a-cell-value-using-xlrd)
How do I "run" an XSLT file? I've received a great answer about sorting XML - I need to [use XSLT](https://superuser.com/questions/355115/what-windows-app-can-sort-a-huge-xml-file/355175#355175). But how do I actually do that? What software is required? **What command or application do I need to start** to get a "converted" XML output file, given that I've got an XML file and an XSLT file as input? I don't have any development environment installed; this is a regular office computer with WinXP+IE7. **Update:** With help from this site, I created a small package that I want to share: [XML-Sorter\_v0.3.zip](http://dl.dropbox.com/u/135090/XML-Sorter_v0.3.zip)
First decide whether you want to use XSLT 1.0 or 2.0. XSLT 2.0 is a much richer language, and the only reason for preferring XSLT 1.0 is that it's supported in a wider range of environments (for example, in the browser).

Then decide what XSLT processor you want to use. There's a wide choice for XSLT 1.0; a rather narrower choice for XSLT 2.0.

Then look in the documentation for that XSLT processor to find out how to run it.

Given that you seem to be OK with running the transformation from the Windows command line, I would recommend using Saxon-HE, which you can get from <http://saxon.sf.net/>. You will need to install Java, and then you can run Saxon. The documentation is here: <http://www.saxonica.com/documentation/index.html#!using-xsl/commandline>

If you prefer a simple GUI interface, consider "Kernow for Saxon".

If you want a development environment with a nice editor and debugger, you will have to pay for it, but Stylus Studio and oXygen are both good value, and both give you a choice of XSLT engines.
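For concreteness, a typical Saxon-HE invocation from the Windows command line looks like the following. The jar file name varies with the version you download (check what yours is actually called), and the file names here are placeholders:

```
java -jar saxon9he.jar -s:input.xml -xsl:transform.xsl -o:output.xml
```

Here `-s:` names the source XML, `-xsl:` the stylesheet, and `-o:` the output file.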
how to get uri from bitmap in android 10 I'm facing a problem getting a URI in Android 10. It works on lower devices, but I don't know how to do it in Android 10. I am uploading a video to the server.

```
bitmap = Bitmap.createScaledBitmap(bitmap, newWidth, newHeight, false);
String path = MediaStore.Images.Media.insertImage(mContext.getContentResolver(), bitmap, "Title", null);
Uri.parse(path);
```

I also tried `ContentValues`, but I couldn't understand how to use them; I am getting errors like "cannot resolve Q, RELATIVE_PATH, IS_PENDING":

```
import android.provider.MediaStore;

ContentValues contentValues = new ContentValues();
contentValues.put(MediaStore.MediaColumns.DISPLAY_NAME, System.currentTimeMillis()+"");
contentValues.put(MediaStore.MediaColumns.MIME_TYPE, "image/jpeg");
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.Q) {
    contentValues.put(MediaStore.MediaColumns.RELATIVE_PATH, relativeLocation);
    contentValues.put(MediaStore.MediaColumns.IS_PENDING, 1);
}
```

Can anyone help me with how to get the URI in Android 10? Any help would be highly appreciated.
Use the following snippet to insert new bitmaps into the MediaStore on Android 10 and get the Uri back. Take into account that this code is for Android 10 only. The compress format must match the mime-type parameter; for example, with a JPEG compress format the mime type would be "image/jpeg", and so on. The subFolder parameter is optional.

```
@NonNull
private Uri saveBitmap(@NonNull final Context context, @NonNull final Bitmap bitmap,
                       @NonNull final Bitmap.CompressFormat format, @NonNull final String mimeType,
                       @NonNull final String displayName, @Nullable final String subFolder) throws IOException {
    String relativeLocation = Environment.DIRECTORY_PICTURES;

    if (!TextUtils.isEmpty(subFolder)) {
        relativeLocation += File.separator + subFolder;
    }

    final ContentValues contentValues = new ContentValues();
    contentValues.put(MediaStore.MediaColumns.DISPLAY_NAME, displayName);
    contentValues.put(MediaStore.MediaColumns.MIME_TYPE, mimeType);
    contentValues.put(MediaStore.MediaColumns.RELATIVE_PATH, relativeLocation);

    final ContentResolver resolver = context.getContentResolver();

    OutputStream stream = null;
    Uri uri = null;

    try {
        final Uri contentUri = MediaStore.Images.Media.EXTERNAL_CONTENT_URI;
        uri = resolver.insert(contentUri, contentValues);

        if (uri == null) {
            throw new IOException("Failed to create new MediaStore record.");
        }

        stream = resolver.openOutputStream(uri);

        if (stream == null) {
            throw new IOException("Failed to get output stream.");
        }

        if (!bitmap.compress(format, 95, stream)) {
            throw new IOException("Failed to save bitmap.");
        }

        return uri;
    } catch (IOException e) {
        if (uri != null) {
            // Don't leave an orphan entry in the MediaStore
            resolver.delete(uri, null, null);
        }

        throw e;
    } finally {
        if (stream != null) {
            stream.close();
        }
    }
}
```
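A quick usage sketch of the helper above; the display name and the null subfolder are placeholders, and the call needs a try/catch since the helper throws `IOException`:

```
try {
    Uri uri = saveBitmap(context, bitmap, Bitmap.CompressFormat.JPEG,
            "image/jpeg", "my_image_" + System.currentTimeMillis(), null);
    // hand the uri to your upload code
} catch (IOException e) {
    e.printStackTrace();
}
```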
How to subtract rows (lines) with AWK I'm trying to figure out how I can use [AWK](http://en.wikipedia.org/wiki/AWK) to subtract lines. For example, imagine the input file is: ``` 30 20 ``` The output would be: ``` 10 ``` Now, as a test I am trying to calculate the "Used" memory column from: ``` $ cat /proc/meminfo ``` So at the moment I have written this: ``` $ grep -P 'MemTotal|MemFree' /proc/meminfo | \ -- Here comes the calculation using AWK ``` I have tried the following: ``` $ grep -P 'MemTotal|MemFree' /proc/meminfo | \ awk '{print $2}' | awk '{$0-s}{s=$0} END {print s}' ``` But this just gives me the last row of data. I've found a working solution, but I doubt it's the most optimal one. All my coding experience tells me that hard coding the amount of rows is terrible :P ``` $ grep -P 'MemTotal|MemFree' /proc/meminfo | \ awk '{print $2}' | awk 'NR == 1{s=$0} NR == 2 {s=s-$0} END {print s}' ```
You can also do this using `awk`, `paste`, and `bc`. I find this approach easier to remember; the syntax of `awk` always requires me to look things up to confirm. **NOTE:** This approach has the advantage of being able to contend with multiple lines of output, subtracting the 2nd, 3rd, 4th, etc. numbers from the 1st.

```
$ grep -P 'MemTotal|MemFree' /proc/meminfo | \ 
    awk '{print $2}' | paste -sd- - | bc 7513404 ```

### Details

The above uses `awk` to select the column that contains the numbers we want to subtract.

```
$ grep -P 'MemTotal|MemFree' /proc/meminfo | \
    awk '{print $2}'
7969084
408432
```

We then use `paste` to combine these 2 values and add the minus sign between them. (The MemFree figure changes between runs, which is why the numbers differ slightly across these examples.)

```
$ grep -P 'MemTotal|MemFree' /proc/meminfo | \
    awk '{print $2}'| paste -sd- -
7969084-346660
```

When we pass this to `bc` it performs the calculation.

```
$ grep -P 'MemTotal|MemFree' /proc/meminfo | \
    awk '{print $2}'| paste -sd- - | bc
7513404
```
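If you'd rather stay inside a single `awk` invocation (which also avoids hard-coding the number of rows, the concern raised in the question), the final attempt from the question can be collapsed into one program that reads the grep output directly; this sketch subtracts every subsequent value from the first:

```
$ grep -P 'MemTotal|MemFree' /proc/meminfo | \
    awk 'NR == 1 {s = $2} NR > 1 {s -= $2} END {print s}'
```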
VS Code: Line break being removed on save We have an app written in angularJS and I want to align the directives vertically for easier reading. However, when I insert a line break in a tag and save the file, the line break is removed and the directives remain on the same line. I've disabled all extensions and it still doesn't work on save. Why is VS Code ignoring the line break? **Before saving**: *(what I want)* [![enter image description here](https://i.stack.imgur.com/jB7ON.png)](https://i.stack.imgur.com/jB7ON.png) **After saving**: *(what I don't want)* [![enter image description here](https://i.stack.imgur.com/Vgh7J.png)](https://i.stack.imgur.com/Vgh7J.png) Has anyone had this issue? If so, how did you resolve it? Thanks.
After more research, I came across this post [VSCode automatically adding new lines on save](https://stackoverflow.com/questions/50241599/vscode-automatically-adding-new-lines-on-save) via a google search of "*line break in visual studio code is removed on save*". Even though the issue I was having was that I **DID** want a new line, and this question was **just the opposite**, I looked at the answer anyway since everything else I was finding was about "VSCode removing or adding new lines at the end of file" (*for example*) -- which isn't what I wanted. Unchecking the **Format On Save** setting as shown worked for what I wanted to do. [![enter image description here](https://i.stack.imgur.com/MOCuV.png)](https://i.stack.imgur.com/MOCuV.png) I also added extension: Formatting Toggle. [![enter image description here](https://i.stack.imgur.com/kmoiw.png)](https://i.stack.imgur.com/kmoiw.png) The extension shows up on the right side of the status bar in editor [![enter image description here](https://i.stack.imgur.com/32rwx.png)](https://i.stack.imgur.com/32rwx.png)
How can I make a loopback file descriptor in bash? I want to be able to write data to a file descriptor then read it back later in bash. This would look like: ``` # some line to create LOOPBACK_FD echo foo >${LOOPBACK_FD} cat <${LOOPBACK_FD} # foo is printed ``` I know that I can use variables to store the output of commands like in: ``` MYMESSAGE=$(echo foo) echo ${MYMESSAGE} ``` `memfd_create` does something *like* what I want but there'd probably be issues with the file offset and it seems bash doesn't even use it. Maybe there's a special device that can do this? Edit: I'm looking for a solution which doesn't require making a new node in the filesystem.
> 
> ```
> echo foo >${LOOPBACK_FD}
> cat <${LOOPBACK_FD}
> 
> ```
> 

That will not work -- no matter how `LOOPBACK_FD` is created or opened, or whether it's accessible via the filesystem or not. The `cat` will keep waiting for an EOF which will not come, as long as the calling script (which is waiting for `cat` to terminate) keeps the writing end of the pipe open. Something that *may* work with simple examples (but will run into deadlocks and buffering issues with anything non-trivial) is using `read` instead of `cat` to get just a line from the pipe, without trying to slurp it whole.

> 
> Edit: I'm looking for a solution which doesn't require making a new node in the filesystem.
> 
> 

There are no "anonymous" pipes in Linux; even those created with the `pipe(2)` system call or with `|` in the shell are accessible via `/proc/{pid}/fd/{fd}` or `/proc/self/fd/{fd}`; they *will* always "create" a new node in the filesystem. But you can use that feature to open the writing end of an input pipe (or vice versa), which could make for a "clever" answer to your XY problem:

```
#! /bin/bash

# Open the read end of a pipe on a fresh descriptor, then reopen that
# descriptor read-write through /proc, so we can both write to and read from it.
exec {fd}< <(:) {fd}<>/proc/self/fd/$fd

echo eenie >&$fd
read message <&$fd; echo "$message"

echo meenie >&$fd
read message <&$fd; echo "$message"
```
What does (l+v) :: ...(list + integer :: (cons operand) ..) mean in SML?

```
- fun addto (l,v) =
= if null l then nil
= else hd l + v :: addto (tl l,v);
val addto = fn : int list * int -> int list
addto ([1,2,3],2);
val it = [3,4,5] : int list
- addto ([1,2,3],~2);
val it = [~1,0,1] : int list
```

This is an SML function from my slides. I don't get how `(l+v)` can work here. But it actually does work:

```
addto ([1,2,3],2);
val it = [3,4,5] : int list
```

I am thinking it goes something like this:

```
addto([1,2,3],2);
addto([2,3], 2);
addto([3], 2);
addto([],2)
```

and now `l` is actually nil, so it returns to `addto([3], 2)`. But what does `hd l + v :: addto (tl l,v);` mean anyway? I thought that the "cons" operator `::` had to be used something like `<list_object_of_a_certain_type> :: <list_of_that_type>`. And here my pseudo-name `<list_object_of_a_certain_type>` is actually integers in the example with `addto([1,2,3],2)`. But inside my function we have the phrase `(l+v :: ..)` while **l is a list and v is an int**, so what is `l+v`?

p.s. I am a complete beginner, so forgive me if it's all too simple.
First, `::` is defined as:

```
datatype 'a list = nil | :: of 'a * 'a list
```

So, for instance, `1 :: [2, 3]` is the list `[1, 2, 3]`. Then in your code, your expression is interpreted as:

```
((hd l) + v) :: (addto (tl l, v))
```

So basically, your function can be rewritten as follows:

```
fun addto (l,v) =
    if null l
    then nil                                  (* if l is empty the result is the empty list *)
    else
        let
            val head = hd l                   (* take the first element of list l *)
            val sum = head + v                (* then add v to it *)
            val result_tail = addto (tl l,v)  (* and compute x + v for all x in the tail of my list *)
        in
            sum :: result_tail                (* the result is the sum I just computed followed by what the recursive call gave me *)
        end
```

Finally, note that in most cases you don't need functions such as `null` or `hd`, because we write functions using pattern-matching constructs. This greatly enhances readability, and it often removes the need for such functions altogether. For instance, `addto` can be rewritten more simply as:

```
fun addto ([], _) = []
  | addto (x :: tl, v) = (x + v) :: addto (tl, v)
```

Simpler, isn't it?
how to forward http 2 protocol version via proxy? I have a simple rails application

```
(browser) -> (nginx latest; proxy_pass) -> rails (latest)
```

How do I configure nginx to notify Rails that nginx received an HTTP/2 request, via an extra header, e.g. `my_http_version = 2.0`? proxy_pass communicates with Rails via HTTP/1.1, and I want to know whether the original request was HTTP/2. Thank you!
This is similar to what the `X-Forwarded-For` and `X-Forwarded-Proto` headers are used for, but there is no [standard header](https://en.wikipedia.org/wiki/List_of_HTTP_header_fields) used to communicate the HTTP protocol version to the backend. I recommend using this:

```
proxy_set_header X-Forwarded-Proto-Version $http2;
```

The `$http2` variable comes from [ngx\_http\_v2\_module](http://nginx.org/en/docs/http/ngx_http_v2_module.html), which I presume you are using with Nginx to serve HTTP/2. The difference between `$http2` and `$server_protocol` is that `$http2` works more like a boolean, appearing blank if the HTTP/1 protocol was used. [`$server_protocol`](http://nginx.org/en/docs/http/ngx_http_core_module.html#var_server_protocol) will contain values like "HTTP/1.1" or "HTTP/2.0", so it could also be a good choice depending on your needs.
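Putting it together, a minimal sketch of the relevant server block might look like this; the port, server name, and upstream address are placeholders for your setup:

```
server {
    listen 443 ssl http2;
    server_name example.com;

    location / {
        proxy_pass http://127.0.0.1:3000;                   # Rails, spoken over HTTP/1.1
        proxy_set_header X-Forwarded-Proto-Version $http2;
    }
}
```

On the Rails side the value then arrives as `request.headers['X-Forwarded-Proto-Version']`: empty for HTTP/1.x requests, `"h2"` for HTTP/2 over TLS.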
Knockout Js "You cannot apply bindings multiple times to the same element" I am using kendo mobile app builder and I am using knockout js for bindings but I am getting error "**You cannot apply bindings multiple times to the same element**". I have two javascript file which consist bindings, below my code ``` //Employee.js// function EmployeeViewModel() { this.EmployeeName= ko.observable(); this.EmployeeMobile= ko.observable(); this.EmployeeEmail= ko.observable(); } ko.applyBindings(new EmployeeViewModel()); //Company.js// function CompanyViewModel() { this.CompanyName= ko.observable(); this.CompanyMobile= ko.observable(); this.CompanyEmail= ko.observable(); } ko.applyBindings(new CompanyViewModel()); //In index page i am using this both script file drag and drop// <html> <head> </head> <body> <script src="Employee.js"></script> <script src="Company.js"></script> </body> </html> ```
The "ko.applyBindings" function takes 2 arguments: ``` applyBindings(viewModelOrBindingContext, rootNode); ``` first - view model second - DOM node the binding will be applied to You call "ko.applyBindings" method twice - in both functions, with the first parameter only. This means you are going to bind two different models to the same node - document root. This causes the error. You can use two approaches: - create one global view model with submodels and apply binding only once: ``` //Employee.js// function EmployeeViewModel() { this.EmployeeName= ko.observable(); this.EmployeeMobile= ko.observable(); this.EmployeeEmail= ko.observable(); } //Company.js// function CompanyViewModel() { this.CompanyName= ko.observable(); this.CompanyMobile= ko.observable(); this.CompanyEmail= ko.observable(); } //In index page i am using this both script file drag and drop// <html> <head> </head> <body> <script src="Employee.js"></script> <script src="Company.js"></script> <script> ko.applyBindings({ employee: new EmployeeViewModel(), company: new CompanyViewModel() }); </script> </body> </html> ``` - apply bindings to different nodes: ``` ``` //Employee.js function EmployeeViewModel() { this.EmployeeName= ko.observable(); this.EmployeeMobile= ko.observable(); this.EmployeeEmail= ko.observable(); ko.applyBindings(new EmployeeViewModel(), document.getElementById("employee")); } //Company.js function CompanyViewModel() { this.CompanyName= ko.observable(); this.CompanyMobile= ko.observable(); this.CompanyEmail= ko.observable(); ko.applyBindings(new CompanyViewModel(), document.getElementById("company")); } //In index page i am using this both script file drag and drop// <html> <body> <script src="Employee.js"></script> <script src="Company.js"></script> <div id="employee"></div> <div id="company"></div> </body> </html> ``` ```
ModelForm with a reverse ManytoMany field I'm having trouble getting ModelMultipleChoiceField to display the initial values of a model instance. I haven't been able to find any documentation about the field, and the examples I've been reading are too confusing. [Django: ModelMultipleChoiceField doesn't select initial choices](https://stackoverflow.com/q/488036/700673) seems to be similar, but the solution that was given there is not dynamic to the model instance. Here is my case (each database user is connected to one or more projects): models.py ``` from django.contrib.auth.models import User class Project(Model): users = ManyToManyField(User, related_name='projects', blank=True) ``` forms.py ``` from django.contrib.admin.widgets import FilteredSelectMultiple class AssignProjectForm(ModelForm): class Meta: model = User fields = ('projects',) projects = ModelMultipleChoiceField( queryset=Project.objects.all(), required=False, widget=FilteredSelectMultiple('projects', False), ) ``` views.py ``` def assign(request): if request.method == 'POST': form = AssignProjectForm(request.POST, instance=request.user) if form.is_valid(): form.save() return HttpResponseRedirect('/index/') else: form = AssignProjectForm(instance=request.user) return render_to_response('assign.html', {'form': form}) ``` The form that it returns is not selecting the instance's linked projects (it looks like: [Django multi-select widget?](https://stackoverflow.com/q/1698435/700673)). In addition, it doesn't update the user with any selections made when the form is saved. Edit: Managed to solve this using the approach here: <http://code-blasphemies.blogspot.com/2009/04/dynamically-created-modelmultiplechoice.html>
Here's a solution that is better than the older ones, which really don't work. You have to both load the existing related values from the database when creating the form, and save them back when saving the form. I use the `set()` method on the related name (manager) which does all the work for you: taking away existing relations that are not selected anymore, and adding new ones which have become selected. So you don't have to do any looping or checking. ``` class AssignProjectForm(ModelForm): def __init__(self, *args, **kwargs): super(AssignProjectForm, self).__init__(*args, **kwargs) # Here we fetch your currently related projects into the field, # so that they will display in the form. self.fields['projects'].initial = self.instance.projects.all( ).values_list('id', flat=True) def save(self, *args, **kwargs): instance = super(AssignProjectForm, self).save(*args, **kwargs) # Here we save the modified project selection back into the database instance.projects.set(self.cleaned_data['projects']) return instance ``` Aside from simplicity, using the `set()` method has another advantage that comes into play if you use Django signals (eg. `post_save` etc) on your m2m relation: If you add and remove entries one at a time in a loop, you'll get signals for each object. But if you do it in one operation using `set()`, you'll get just one signal with a list of objects. If the code in your signal handler does significant work, this is a big deal.
NullPointerException when setting attribute? For example I have a servlet code that sets attribute to a HttpServletRequest: ``` request.setAttribute("someValue", someValue()); RequestDispatcher rd = getServletContext().getRequestDispatcher("/SomeJsp.jsp"); rd.forward(this.request, this.response); return; ``` How do I make sure that the code above is thread safe? This is the stacktrace I am getting: ``` java.lang.NullPointerException at org.apache.catalina.connector.Request.notifyAttributeAssigned(Request.java:1552) at org.apache.catalina.connector.Request.access$000(Request.java:105) at org.apache.catalina.connector.Request$3.set(Request.java:3342) at org.apache.catalina.connector.Request.setAttribute(Request.java:1504) at org.apache.catalina.connector.RequestFacade.setAttribute(RequestFacade.java:541) at org.apache.catalina.core.ApplicationHttpRequest.setAttribute(ApplicationHttpRequest.java:281) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:286) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210) at org.apache.catalina.core.ApplicationDispatcher.invoke(ApplicationDispatcher.java:684) at org.apache.catalina.core.ApplicationDispatcher.processRequest(ApplicationDispatcher.java:471) at org.apache.catalina.core.ApplicationDispatcher.doForward(ApplicationDispatcher.java:402) at org.apache.catalina.core.ApplicationDispatcher.forward(ApplicationDispatcher.java:329) at com.mycompany.myapp.servlet.SomeServlet.doRequest(SomeServlet.java:103) at com.mycompany.myapp.servlet.SomeServlet.doGet(SomeServlet.java:159) ```
```
rd.forward(this.request, this.response);
```

This (pun intended) suggests that you've assigned `request` and `response` as instance variables of a class. Your concrete problem in turn suggests that the instance of the class itself is not thread-safe. Assuming that it's actually the servlet itself, then you've found the cause of your problem right there. Servlets are not thread-safe at all. There's only one instance of it, created during the webapp's startup, which is then shared across *all* requests application-wide. You should **never** assign request or session scoped data as an instance variable of the servlet. It would be overridden whenever another HTTP request takes place at the same moment. That would make your code thread-unsafe, as you encountered yourself. Here's some code which illustrates that:

```
public class MyServlet extends HttpServlet {

    private Object thisIsNOTThreadSafe;

    protected void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
        Object thisIsThreadSafe;

        thisIsNOTThreadSafe = request.getParameter("foo"); // BAD!! Shared among all requests!
        thisIsThreadSafe = request.getParameter("foo"); // OK, this is thread safe.
    }
}
```

Assigning the HTTP request itself as an instance variable of a servlet is actually an epic mistake. When user Y fires another request at the moment the servlet is dealing with the request of user X, then user X would instantly get the `request` and `response` objects of user Y in hand. This is definitely thread-unsafe. The NPE is caused because the `request` is at that moment "finished" with processing for user Y and thus released/destroyed.

### See also:

- [How do servlets work? Instantiation, sessions, shared variables and multithreading](https://stackoverflow.com/questions/3106452/how-do-servlets-work-instantiation-session-variables-and-multithreading)
Firefox 9.0.1 Broke Internal Wiki Layout The most recent version of Firefox has messed up our internal wiki layout so that the left bar menus are displayed below the content on all pages now. I've tried using multiple resolutions and window sizes and the problem persists, so that doesn't appear to be the issue. There are no problems with Chrome or IE, however. A screenshot of the problem is attached here for you to see.

![enter image description here](https://i.stack.imgur.com/TpGTa.png)
You're using an old MediaWiki version. It does UA sniffing and sends different code to different browsers, relying on bugs in some of the browsers to make the code sent to them render correctly. Firefox 9 removed a nonstandard property that only Gecko used to implement. MediaWiki was using that property to decide that the rendering engine was Gecko. With it gone, MediaWiki now decides that you're using KHTML and sends CSS rules tailored to KHTML bugs... but Gecko happens not to have those bugs. So the rendering ends up wrong. Your options are to either update to MediaWiki 1.16 or newer (which you should do anyway, because your version is not getting security fixes anymore!) or wait for Firefox 10 to come out, because chances are it will restore the nonstandard property, since too many sites were sniffing for it. See also <https://bugzilla.mozilla.org/show_bug.cgi?id=683151> and <https://bugzilla.wikimedia.org/show_bug.cgi?id=31807> for more info.
How to display multi Jenkins jobs execution status in all node? My project has several jobs running in Jenkins, and each job is configured to run on 4~8 nodes (we have 8 nodes in total). We want to have a web page view to monitor the status, like the following: [![enter image description here](https://i.stack.imgur.com/DKzmv.png)](https://i.stack.imgur.com/DKzmv.png) Is there any plugin that could help show this? Thanks!
Depending on the version of your Jenkins instance, your exact configuration and your needs, you may want to take a look at the following plugins:

1. [Build Monitor Plugin](https://wiki.jenkins-ci.org/display/JENKINS/Build+Monitor+Plugin). My favorite one. It allows you to easily build a dashboard with selected builds and display their status in a clear and even animated way.
2. [Radiator View Plugin](https://wiki.jenkins-ci.org/display/JENKINS/Radiator+View+Plugin). This one is not as nice in design, but it appears to provide a few more settings.
3. [Dashboard View](https://wiki.jenkins-ci.org/display/JENKINS/Dashboard+View). It is more old-school in design, like the 1.50\* versions of Jenkins, but it can display way more than just a job status. See the plugin page for details; I'll just mention configurable axes and the availability of custom extensions.
4. [Jenkins Job Dashing Widget](https://kj187.github.io/dashing-jenkins_job/). Last, but not least in my review. I never used this one, and it appears to have some Node.js dependencies, but it creates an easily understandable dashboard and it looks like it can be customized more or less easily.

These are some of the options you have here. I'm not sure any of the listed plugins will allow you to draw a grid exactly like the one in the question, but some of them may well suit your needs.
Functional programming in Java 7 Does Java 7 have functional programming support, or do I still need to use FunctionalJava or another lib? I thought it had support for this, but I didn't find much info about it.
It would depend on your definition of Functional Programming. In any case, the answer to your question would be No. Lambdas were due to appear in Java 7 at one point, but they will appear only in Java 8.

It *looks* like with Java 8 you'll be able to [do a lot with the new lambda notation in conjunction with the regular JDK8 class libraries](http://cr.openjdk.java.net/~briangoetz/lambda/collections-overview.html) (collections in particular) before you need something like FunctionalJava, but that sort of depends on how much you want to do. A lot of OO folk will be very happy with just a flavor of FP - a common example is collections with `map`, `filter` etc. That by itself would no doubt move Java *closer* to FP - and might just be FP enough for you.

The question is, even then, would that allow true (even if 'impure') functional programming in Java? Yes, and No. *Yes*, because any language with lexical closures and a lambda notation could in theory be enough. *No*, because FP as supported by languages such as Haskell, F#, OCaml and Scala would still be impractical. Some examples:

1. The lack of [Algebraic Data Types](http://en.wikipedia.org/wiki/Algebraic_data_type) - these are regarded as a key component of the statically typed family of FP languages, and go especially well with many FP idioms.
2. While not exactly a requirement for FP, nearly all statically typed FP languages feature some form of *type inference*.
3. Statements need to behave like expressions for a lot of Functional Programming idioms to be convenient - `if`, `try`, etc. need to *return a value*.
4. Enforcement (as in Haskell) or encouragement (as in Scala) of *single assignment* as well as *immutability*, and a useful collection of data structures and libraries to that end.

Other languages like Lisp/Scheme or Erlang are also considered functional, but in a less strict sense; requirements (1) and (2) do not apply because they are *dynamically typed* to begin with. One can say, then, that Javascript is about as functional as Lisp (an impure, dynamic functional language), because Javascript has always had lambdas and first-class functions. But Java, being in the *statically* typed family, does not fare any better (than Javascript) and certainly not as well as the existing statically typed FP languages.

Regarding (4) (immutability/side-effect freedom), it appears that in JDK8 the existing mutable classes will be *retrofitted* with lambda-consuming methods, so that's something that will (at least for a while) limit how far you can take FP paradigms in Java 8.

I found these links very useful - I haven't been following up for a while though, so I'm not sure if they are the best/latest info regarding this. But worth reading:

<http://cr.openjdk.java.net/~briangoetz/lambda/lambda-state-final.html>

<http://cr.openjdk.java.net/~briangoetz/lambda/collections-overview.html>
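For reference, the collections-with-lambdas flavor described above ended up looking like this when Java 8 eventually shipped; this sketch uses the final stream API (the draft API in the links above differed in details):

```
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class LambdaDemo {
    public static void main(String[] args) {
        List<Integer> numbers = Arrays.asList(1, 2, 3, 4, 5);

        // map/filter on collections: the "flavor of FP" most OO folk are happy with
        List<Integer> doubledEvens = numbers.stream()
                .filter(n -> n % 2 == 0)   // keep the even numbers
                .map(n -> n * 2)           // double each one
                .collect(Collectors.toList());

        System.out.println(doubledEvens); // prints [4, 8]
    }
}
```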
Store ES6 class in variable and instance it through Object.Create I currently need to pass a **class** through a variable to a function which dynamically calls its constructor, and so on. Short example:

```
class MyClass {
  constructor ($q) {
    console.log($q);
  }

  expose () {
    return {};
  }
}

const myclass = function () {
  // My concern is this ------.
  // I need to pass it        |
  // as a variable           \|/
  //                          '
  let obj = Object.create(MyClass.prototype);
  obj.constructor.apply(obj, arguments);
  return {};
  // other stuff goes here
};

myclass.$inject = ['$q'];

export {myclass};
```

My idea is to pass MyClass as a variable, to have it this way:

```
const myclass = function (classVariable) {
  let obj = Object.create(classVariable.prototype);
  obj.constructor.apply(obj, arguments);
  return {};
  // other stuff goes here
};
```

So that I can call it like `let newclass = myclass(MyClass);`. This way, if I have different classes (MyClass2, MyClass3, and so on), I don't have to repeat code in each file. If I try to do it this way, though, Chrome throws an error saying that it cannot use a class as a variable, or something like that.

So, what would be the best approach to do this and avoid repeating code? It's actually part of a redesign I'm working on to fix a copy-paste disaster made by someone else. Thanks in advance!
I think your confusion is that you think that class constructors cannot be referenced by variables. They can, they're just functions. So: ``` class Foo { message() { console.log("I'm a Foo"); } } class Bar { message() { console.log("I'm a Bar"); } } function test(C) { let obj = new C(); obj.message(); // "I'm a Foo" or "I'm a Bar", depending on // whether Foo or Bar was passed in } test(Foo); test(Bar); ``` Your pattern of calling `var obj = Object.create(X.prototype)` followed by `X.apply(obj, /*...args here...*/)` would work in ES5 and earlier, but ES2015's classes don't allow it. To construct instances from them, you *have* to use the `new` operator. The reason for that has to do with subclassing and setting `new.target`, so that if the instance has reason to create new instances (as `Promise` does), it can do that in an unambiguous way. Which seems like it could be a step back, but if for some reason you have the constructor arguments as an array rather than discrete items, spread notation lets you use `new` anyway: ``` let obj = new C(...args); ``` So **if** you need a generic function that accepts a class constructor and an array of arguments and needs to return an instance of that class using those arguments, it would look like this: ``` function createInstanceOf(C, args) { return new C(...args); } ```
Why 'X' + Y = 'XY' in TypeScript? I expected an error I am trying to understand TypeScript and how it works, so this is a very simple example:

```
const X: string = '5';
const Y: number = 6;
console.log(X+Y) // output '56' ! why not an error ? and why not 11 ?
```

With JavaScript, such a result is acceptable, but since TypeScript is designed to avoid logic errors, why does it not mind if we try to append a "string" to a "number"? Logically speaking, this makes no sense, even for concatenation. Also, why is the default behavior to treat Y as a `string` and concatenate? Why not try to convert X to a `number` and add?
One of the tenets of TypeScript is that "your existing working JavaScript code is also TypeScript code" [[link](https://www.typescriptlang.org/docs/handbook/typescript-in-5-minutes.html)]. There's plenty of existing JavaScript code with bits like this: ``` const x = '5'; const y = 6; console.log(x + y); // '56' ``` and that do expect the behavior of JavaScript. It would be a significant barrier to migration if TypeScript forced developers to change the above. (And it's a nonstarter to suggest that it should output `11`; we couldn't trust TypeScript if renaming a file from `.js` to `.ts` could introduce bugs detectable only at runtime.) Now, you might want to object that your code is different, in that you explicitly annotate the variables with their types. But TypeScript provides type inference, and it would be a horrible mess if ``` const x : string = '5'; ``` produced a variable with slightly different behavior than ``` const x = '5'; ``` Language design always involves tradeoffs, and TypeScript has decided that some JavaScript quirks are worth keeping for compatibility.
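The compiler will happily infer and allow string-plus-number, but if you want mixed-type arithmetic rejected in your own code, you can funnel it through an explicitly typed helper. This is just a sketch; `add` is a made-up helper, not anything built into TypeScript:

```
function add(a: number, b: number): number {
  return a + b;
}

const x = '5';
const y = 6;

// add(x, y); // compile error: Argument of type 'string' is not
//            // assignable to parameter of type 'number'.
console.log(add(Number(x), y)); // 11, after an explicit conversion
```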
How do I sum array elements every second I'm trying to sum the elements of an array, one every second. My array has 20 elements, so it should be fully summed in 20 seconds. I'm using `setTimeout` in a `for` loop, but it's not working; the loop finishes before the first second has passed. Is there any way to achieve this?

```
for (var o = 0; o < 20; o++) {
    setTimeout(function () {
        tempPsum += array[o];
    }, 1000);
}
```
Right, the loop finishes before the first `setTimeout` callback occurs. Your code schedules 20 callbacks that all occur, one after another, a second later. You have two choices: 1. Schedule each one a second later than the previous one, or 2. Don't schedule the next one until the previous one finishes #1 is simpler, so let's do that: ``` for (let o = 0; o < 20; o++) { // ^^^βˆ’βˆ’βˆ’βˆ’βˆ’βˆ’βˆ’βˆ’βˆ’βˆ’βˆ’βˆ’βˆ’βˆ’βˆ’βˆ’βˆ’βˆ’βˆ’βˆ’βˆ’βˆ’βˆ’βˆ’βˆ’βˆ’βˆ’βˆ’βˆ’βˆ’βˆ’ setTimeout(function () { tempPsum += array[o]; }, 1000 * o); // ^^^^βˆ’βˆ’βˆ’βˆ’βˆ’βˆ’βˆ’βˆ’βˆ’βˆ’βˆ’βˆ’βˆ’βˆ’βˆ’βˆ’βˆ’βˆ’βˆ’βˆ’βˆ’βˆ’βˆ’βˆ’βˆ’βˆ’ } ``` By multiplying the delay time by `o`, we get `0` for the first timer, `1000` for the second, `3000` for the third... Note the change from `var` to `let`. That way, there's a separate `o` for the callback to close over for each loop iteration. (See [answers here](http://stackoverflow.com/questions/750486/javascript-closure-inside-loops-simple-practical-example), or...Chapter 2 of my new book. :-) ) If you can't use ES2015+ features in your environment, you can use the extra arguments to `setTimeout` instead: ``` for (var o = 0; o < 20; o++) { setTimeout(function (index) { // ^^^^^βˆ’βˆ’βˆ’βˆ’βˆ’βˆ’βˆ’βˆ’βˆ’βˆ’βˆ’ tempPsum += array[index]; // ^^^^^βˆ’βˆ’βˆ’βˆ’βˆ’βˆ’βˆ’βˆ’βˆ’βˆ’βˆ’ }, 1000 * o, o); // ^^^^βˆ’βˆ’^βˆ’βˆ’βˆ’βˆ’βˆ’βˆ’βˆ’βˆ’βˆ’βˆ’βˆ’βˆ’βˆ’βˆ’βˆ’βˆ’βˆ’βˆ’βˆ’βˆ’βˆ’βˆ’βˆ’ } ``` That tells `setTimeout` to pass the third argument you give it to the callback as its first argument.
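For completeness, here's a sketch of choice #2, where each callback does its work and only then schedules the next one (using `array` and `tempPsum` from the question):

```
let tempPsum = 0;
let o = 0;

function step() {
    tempPsum += array[o];
    if (++o < 20) {
        setTimeout(step, 1000); // schedule the next addition one second from now
    }
}

setTimeout(step, 1000); // the first addition happens after one second
```

This version also keeps the timing honest if each step did real work: the one-second gap is measured from the end of one step to the start of the next, rather than from a single point at the beginning.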
Reuse MonoTouch library on MonoDroid I was able to easily port an existing .NET desktop app to iPhone using MonoTouch. As part of that process, I had to create new iOS 'versions' of all my projects - MonoTouch class library projects linking the files from my existing projects. This works great. Now I want to do a MonoDroid port. Can I just reference my iOS libraries? Or do they need to be MonoDroid class libraries? The MonoTouch and MonoDroid profiles seem to be the same, based on the documentation: <http://monotouch.net/Documentation/Assemblies> <http://mono-android.net/Documentation/Assemblies> Has anyone used a MonoTouch class library with MonoDroid? Thanks in advance
If you're using MonoTouch 4.0, *and* the assembly doesn't reference `monotouch.dll` (directly or indirectly), then you just need to reference the assembly from your Mono for Android project. Note that at this time you can only reference the assembly (`.dll`) and not the Project (`.csproj`). The reverse should also be true (reference a Mono for Android assembly and use it in your MonoTouch project, as long as the Mono for Android assembly doesn't reference `Mono.Android.dll`). If this breaks, please file a bug: we fully intend for the MonoTouch and Mono for Android APIs to be compatible with each other. (Whenever possible, that is; things such as `System.Reflection.Emit` will never be supported on both platforms unless/until Apple removes the JIT restriction.)
How to highlight lowercase letters with JavaScript? For a paragraph like this ``` <p> This is an exaMPLe </p> ``` how to highlight the lowercase letters with a different color using Javascript?
This is a quick and dirty solution using regex find/replace. ``` var p = document.querySelector("p"); var html = p.innerHTML; html = html.replace(/([a-z]+)/g, "<strong>$1</strong>"); p.innerHTML = html; ``` First, you get the paragraph, read its inner HTML. Then use regex to find lowercase a-z letters and wrap them with `strong` HTML tags. You can also use `span` and apply a CSS class if you wish. After regex find/replace, set back the HTML value. Working example: <http://jsbin.com/vodevutage/1/edit?html,js,console,output> **EDIT**: If you wish to have a different colour, then change the one line as follows: ``` html = html.replace(/([a-z]+)/g, "<span class='highlight'>$1</span>"); ``` and also define a CSS class as follows: ``` .highlight { background-color: yellow; } ```
Differences-in-Differences coefficients meaning (from Mostly Harmless Econometrics) In *Mostly Harmless Econometrics*, section 5.2.1 (Regression DD), pages 233-234, equation (5.2.3) defines $Y\_{ist}=\alpha + \gamma NJ\_{s}+\lambda d\_{t}+\delta (NJ\_{s}\cdot d\_{t})+\epsilon\_{ist}$, where $NJ\_{s}$ is a dummy denoting observations from New Jersey and $d\_{t}$ is a dummy for observations obtained in November. Considering $E[\epsilon\_{ist} \vert s,t]=0$, the book states that:

$\alpha=E[Y\_{ist} \vert s=PA, t=Feb] = \gamma\_{PA} + \lambda\_{Feb}$

$\gamma=E[Y\_{ist} \vert s=NJ, t=Feb] - E[Y\_{ist} \vert s=PA, t=Feb] = \gamma\_{NJ} - \gamma\_{PA}$

$(...)$

However, in **my opinion**:

$E[Y\_{ist} \vert s=NJ, t=Feb]=\alpha+\gamma$, so that

$E[Y\_{ist} \vert s=NJ, t=Feb] - E[Y\_{ist} \vert s=PA, t=Feb] =\alpha+\gamma -\alpha=$

$=(\gamma\_{PA}+\lambda\_{Feb}+\gamma\_{NJ})-(\gamma\_{PA}+\lambda\_{Feb})=\gamma\_{NJ}$, where $\gamma\_{NJ} \equiv \gamma$

which has a different meaning from what the book states (cited above). Could anyone explain what I am missing, or whether there is a mistake in the book? I found nothing about it at the [blog](http://www.mostlyharmlesseconometrics.com/tag/corrections/).
The book is correct, but it is easier to see if you insert the corresponding values of the dummies and check what happens to the regression equation. Let's go through the possible values together.

- $NJ\_s = 0$ is Pennsylvania
- $NJ\_s = 1$ is New Jersey
- $d\_t = 0$ is February
- $d\_t = 1$ is November

So your baseline regression is $NJ\_s = 0$ and $d\_t = 0$, i.e. the control group in the pre-treatment period, which gives the regression: $$Y\_{ist} = \alpha + \epsilon\_{ist}$$ So $\alpha$ captures the employment level for the state of Pennsylvania in February: $$\alpha = E[Y\_{ist}|s=PA, t=Feb] = \gamma\_{PA} + \lambda\_{Feb}$$ Now if you "switch on" the $NJ\_s$ dummy to $NJ\_s = 1$, the regression becomes: $$Y\_{ist} = \alpha + \gamma NJ\_s + \epsilon\_{ist}$$ in which $\gamma$ gives you the difference between the two states at time $d\_t = 0$ (February), and it can be written as: $$\gamma= E[Y\_{ist}|s=NJ, t=Feb]- E[Y\_{ist}|s=PA, t=Feb] = \gamma\_{NJ} - \gamma\_{PA}$$ In chapter 2, section 2.3, they explain how these differences work out. In this case $NJ\_s = 1$ measures the deviation from the baseline, which we said was Pennsylvania in February. So if this deviation is positive and significant, with $\gamma > 0$, then employment levels in February were higher in New Jersey than in Pennsylvania. Finally, let's also switch the time dummy $d\_t$ to one; then the regression is the full model you've described in the question. The coefficient $\delta$ then gives the difference in employment levels between two differences (which is your difference in differences):

- New Jersey in November minus New Jersey in February
- Pennsylvania in November minus Pennsylvania in February
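To make that last step explicit, write each conditional expectation out from the full model; nothing beyond the model above is assumed. The two differences are

$$E[Y\_{ist} \vert s=NJ, t=Nov] - E[Y\_{ist} \vert s=NJ, t=Feb] = (\alpha + \gamma + \lambda + \delta) - (\alpha + \gamma) = \lambda + \delta$$

$$E[Y\_{ist} \vert s=PA, t=Nov] - E[Y\_{ist} \vert s=PA, t=Feb] = (\alpha + \lambda) - \alpha = \lambda$$

and subtracting the second from the first gives

$$(\lambda + \delta) - \lambda = \delta,$$

so the common time effect $\lambda$ cancels and the interaction coefficient $\delta$ is exactly the difference in differences.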