Enum.Parse returning unexpected members Consider the following piece of code: ``` namespace ConsoleApplication1 { class Program { public static void Main (string[] args) { var en = (TestEnum)Enum.Parse(typeof(TestEnum), "AA"); Console.WriteLine(en.ToString()); Console.ReadKey(); } } public enum TestEnum { AA = 0x01, AB = 0x02, AC = 0x03, BA = 0x01, BB = 0x02, BC = 0x03 } } ``` If you execute this, the variable `en` will get the value of `TestEnum.BA`. Now I have learned from this that enum flags should be unique, or you get these kinds of unexpected things, but I still fail to understand what is happening here. The even weirder part is that when I add the [Flags] attribute to the TestEnum, it solves the problem and returns TestEnum.AA instead of TestEnum.BA, but for the original enum (which is much larger, around 200 members) for which I have discovered this problem this does not make a difference. My understanding is that enums are a value type, so when you define your own flags it will store the value in memory as 0x01 in the case of TestEnum.AA, and when you cast it from object to TestEnum it will do the lookup for that flag value and find TestEnum.BA. This is also confirmed by running the following line: ``` var en = (TestEnum)(object)TestEnum.AA; Console.WriteLine(en.ToString()); ``` Which will output: `BA` So my question is: what exactly is happening here? And more importantly, why does adding the Flags attribute make a difference?
Firstly, this is nothing to do with `Enum.Parse()`. The underlying type of an enum by default is `int`, so in your example `TestEnum.AA` and `TestEnum.BA` are both stored as `1` and there is no way to distinguish them. Witness the following code: ``` Console.WriteLine(TestEnum.AA); // Prints BA Console.WriteLine(TestEnum.BA); // Prints BA ``` Secondly, the reason that setting the `[Flags]` attribute changes the output is because a different code path is taken when determining the string. Here's [the code from ReferenceSource](https://referencesource.microsoft.com/#mscorlib/system/enum.cs,e91b5f6f66834f75): ``` private static String InternalFormat(RuntimeType eT, Object value) { if (!eT.IsDefined(typeof(System.FlagsAttribute), false)) // Not marked with Flags attribute { // Try to see if its one of the enum values, then we return a String back else the value String retval = GetName(eT, value); if (retval == null) return value.ToString(); else return retval; } else // These are flags OR'ed together (We treat everything as unsigned types) { return InternalFlagsFormat(eT, value); } } ``` Note how `GetName()` is called if `[Flags]` is not set, otherwise `InternalFlagsFormat()` is called. The implementation of `GetName()` ends up doing a binary search to find the value, whereas `InternalFlagsFormat()` winds up doing a linear search to find the value. `InternalFlagsFormat()` must do a linear search because it may need to set multiple values (e.g. "X|Y|Z") so Microsoft implemented an O(N) solution for it. However for `GetName()` they went for a more efficient O(Log2(N)) solution. A binary search can (and does) find a different duplicate value than the linear search does, hence the difference.
matplotlib inline syntax error I'm testing a Python program which contains calls to IPython, but I got errors in the following code. If I use ``` %matplotlib inline ``` I get a syntax error at the "%" symbol. I found a solution to this problem using: ``` from IPython import get_ipython get_ipython().run_line_magic('matplotlib', 'inline') ``` but then I got an error saying: ``` AttributeError: 'NoneType' object has no attribute 'run_line_magic' ``` I'm using Ubuntu 16.04 and running the code via the command line. How can I fix this?
The `%matplotlib` magic is used to display graphs (of `matplotlib.pyplot` objects). This needs a UI to display, so it cannot be used from the plain command prompt. According to the [IPython documentation](http://ipython.readthedocs.io/en/stable/interactive/plotting.html), > > If the `%matplotlib` magic is called without an argument, the output of a plotting command is displayed using the default matplotlib backend in a separate window. Alternatively, the backend can be explicitly requested using, for example: > > > `%matplotlib gtk` > > > A particularly interesting backend, provided by IPython, is the inline backend. This is available only for the Jupyter Notebook and the Jupyter QtConsole. It can be invoked as follows: > > > `%matplotlib inline` > > > A simple solution would be to replace `%matplotlib inline` with `%matplotlib` and run the program using `ipython`. Alternatively, you could download Jupyter Notebook and run that code there. Or, as @tihom said in the comments, you could comment out or remove that line and run the code, but this wouldn't display the graphs and other output.
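A related workaround (not from the original answer, just a sketch of the idea) is to call the magic only when an IPython shell is actually present and fall back to an ordinary matplotlib window otherwise, so the same script runs both under IPython and from the plain command line:

```python
# Sketch: use the inline magic only if an IPython shell exists,
# otherwise fall back to plain matplotlib from the command line.
import matplotlib.pyplot as plt

try:
    from IPython import get_ipython
    shell = get_ipython()
    if shell is not None:
        shell.run_line_magic('matplotlib', 'inline')
except ImportError:
    pass  # IPython is not installed; plain matplotlib is fine

plt.plot([1, 2, 3], [4, 5, 6])
plt.show()  # opens a separate window when run as `python script.py`
```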
Give a second name to a variable in JavaScript My source code contains a long variable name (`postCustomThumbnailsScrollerHeight`). I don't want to rename this variable throughout the whole source so that I can easily continue the project, but I would like a shorthand for its name. I tried the following solution (which works) at the first declaration of the variable, but I am not sure if it is the correct way to do so, and `d` is highlighted in a different color in my IDE: ``` var postCustomThumbnailsScrollerHeight= d= $('.post-scroller').outerHeight(); ``` I am seeking your usual expert advice with this question.
No, this isn't really correct: you're not **declaring** the `d` variable, only **assigning** to it, and thus 1. making it global (which may or may not be desired) 2. making your code incompatible with [strict mode](https://developer.mozilla.org/en/docs/Web/JavaScript/Reference/Strict_mode#Converting_mistakes_into_errors) Here's a solution: ``` var d = $('.post-scroller').outerHeight(), postCustomThumbnailsScrollerHeight = d; ``` Note that this should only be done for readability/typing issues, not for downloaded script size: minifiers should be used for that latter goal. Also be careful that you're not making an alias, but really two variables: if you assign to one, you won't change the other one. It's hard to give definitive advice without more information, but the usual solution is to have a namespaced object. Assuming you have a structure like ``` myApp.thumbnailScrollers.postCustom = {height:... ``` then you would just assign that latter object to a local variable in a module or function: ``` var s = myApp.thumbnailScrollers.postCustom ``` In *this* case, changing `s.height` would also change `myApp.thumbnailScrollers.postCustom.height`.
How can i add a disclosure button to a MKAnnotation? I want to add a disclosure button to a `MKAnnotation` to segue to another view. The button should look like this one: [Image](https://i.stack.imgur.com/Ivuzn.png) Here's my **.h** and **.m** files. --- ## .h file ``` // // POI.h // #import <Foundation/Foundation.h> #import <MapKit/MapKit.h> @interface POI : NSObject <MKAnnotation> { NSString *title; NSString *subtitle; CLLocationCoordinate2D coordinate; } @property (nonatomic, copy) NSString *title; @property (nonatomic, copy) NSString *subtitle; @property (nonatomic, assign) CLLocationCoordinate2D coordinate; - (id)initWithCoordinate:(CLLocationCoordinate2D)_coordinate title:(NSString *)_titolo andSubTitle:(NSString *)_sottotitolo; @end ``` --- ## .m file ``` // // POI.m #import "POI.h" @implementation POI @synthesize title, subtitle, coordinate; -(id)initWithCoordinate:(CLLocationCoordinate2D)_coordinate title:(NSString *)_titolo andSubTitle:(NSString *)_sottotitolo { [self setTitle:_titolo]; [self setSubtitle:_sottotitolo]; [self setCoordinate:_coordinate]; return self; } @end ``` in my ViewController i call this using: ``` pinLocation.latitude = 4.8874; pinLocation.longitude = 1.400; POI *poi = [[POI alloc] initWithCoordinate:pinLocation title:@"foo" andSubTitle:@"bar"]; [_mapView addAnnotation:poi]; ```
Three steps. 1) In your header file (.h) *or* your implementation file's (.m) class extension conform to `MKMapViewDelegate`: ``` @interface ViewController : UIViewController <MKMapViewDelegate> { ... } ``` 2) Set your view controller as the delegate of `MKMapViewDelegate` to receive delegate callbacks. Commonly done in `viewDidLoad`: ``` - (void)viewDidLoad { [super viewDidLoad]; self.mapView.delegate = self; } ``` 3) Implement the following delegate function to show the disclosure button: ``` - (MKAnnotationView *) mapView:(MKMapView *)mapView viewForAnnotation:(id <MKAnnotation>) annotation { MKPinAnnotationView *newAnnotation = [[MKPinAnnotationView alloc] initWithAnnotation:annotation reuseIdentifier:@"pinLocation"]; newAnnotation.canShowCallout = YES; newAnnotation.rightCalloutAccessoryView = [UIButton buttonWithType:UIButtonTypeDetailDisclosure]; return newAnnotation; } ``` the following function will assist in determining what action (in your case, presenting a view) is taken upon touching the disclosure button. ``` - (void)mapView:(MKMapView *)mapView annotationView:(MKAnnotationView *)view calloutAccessoryControlTapped:(UIControl *)control { //launch a new view upon touching the disclosure indicator TestVCViewController *tvc = [[TestVCViewController alloc] initWithNibName:@"TestVCViewController" bundle:nil]; [self presentViewController:tvc animated:YES completion:nil]; } ```
Regex that finds consecutive words with first letter capitalized I am looking for a regex that can identify consecutive words in a sentence that start with capital letters. If we take the text below as an example: > > The A-Z Group is a long-established > market leader in the provision of > information for the global air cargo > community, and also for the defence > and security sectors through BDEC > Limited, publishers of the British > Defence Equipment Catalogue and > British Defence Industry Directory. > > > I want to be able to retrieve the following: > > The A-Z Group > > > BDEC Limited > > > British Defence Equipment Catalogue > > > British Defence Industry Directory > > > Is this even possible with a regex? If so, can anyone suggest one?
(Update: I misunderstood your question at first.) A simple case is ``` /([A-Z][\w-]*(\s+[A-Z][\w-]*)+)/ ``` It may need to be modified if there are special cases of different language construct. `ruby-1.9.2-p0 > %Q{The A-Z Group is a long-established market leader in the provision of information for the global air cargo community, and also for the defence and security sectors through BDEC Limited, publishers of the British Defence Equipment Catalogue and British Defence Industry Directory.}.scan(/([A-Z][\w-]*(\s+[A-Z][\w-]*)+)/).map{|i| i.first}` `=> ["The A-Z Group", "BDEC Limited", "British Defence Equipment Catalogue", "British Defence Industry Directory"]`
Extend memory size limit in R I have an R program that combines 10 files, each of size 296MB, and I have increased the memory size to 8GB (the size of my RAM) ``` --max-mem-size=8192M ``` and when I ran this program I got an error saying ``` In type.convert(data[[i]], as.is = as.is[i], dec = dec, na.strings = character(0L)) : Reached total allocation of 7646Mb: see help(memory.size) ``` Here is my R program ``` S1 <- read.csv2("C:/Sim_Omega3_results/sim_omega3_1_400.txt"); S2 <- read.csv2("C:/Sim_Omega3_results/sim_omega3_401_800.txt"); S3 <- read.csv2("C:/Sim_Omega3_results/sim_omega3_801_1200.txt"); S4 <- read.csv2("C:/Sim_Omega3_results/sim_omega3_1201_1600.txt"); S5 <- read.csv2("C:/Sim_Omega3_results/sim_omega3_1601_2000.txt"); S6 <- read.csv2("C:/Sim_Omega3_results/sim_omega3_2001_2400.txt"); S7 <- read.csv2("C:/Sim_Omega3_results/sim_omega3_2401_2800.txt"); S8 <- read.csv2("C:/Sim_Omega3_results/sim_omega3_2801_3200.txt"); S9 <- read.csv2("C:/Sim_Omega3_results/sim_omega3_3201_3600.txt"); S10 <- read.csv2("C:/Sim_Omega3_results/sim_omega3_3601_4000.txt"); options(max.print=154.8E10); combine_result <- rbind(S1,S2,S3,S4,S5,S6,S7,S8,S9,S10) write.table(combine_result,file="C:/sim_omega3_1_4000.txt",sep=";", row.names=FALSE,col.names=TRUE, quote = FALSE); ``` Can anyone help me with this? Thanks, Shruti.
I suggest incorporating the suggestions in `?read.csv2`: > > Memory usage: > > > > ``` > These functions can use a surprising amount of memory when reading > large files. There is extensive discussion in the ‘R Data > Import/Export’ manual, supplementing the notes here. > > Less memory will be used if ‘colClasses’ is specified as one of > the six atomic vector classes. This can be particularly so when > reading a column that takes many distinct numeric values, as > storing each distinct value as a character string can take up to > 14 times as much memory as storing it as an integer. > > Using ‘nrows’, even as a mild over-estimate, will help memory > usage. > > Using ‘comment.char = ""’ will be appreciably faster than the > ‘read.table’ default. > > ‘read.table’ is not the right tool for reading large matrices, > especially those with many columns: it is designed to read _data > frames_ which may have columns of very different classes. Use > ‘scan’ instead for matrices. > > ``` > >
Sinatra and question mark I need to make some methods with Sinatra that should look like: > > http://localhost:1234/add?string_to_add > > > But when I declare it like this: ``` get "/add?:string_to_add" do ... end ``` it doesn't see the `string_to_add` param. How should I declare my method and use this parameter to make things work?
In a URL, a question mark separates the [path](https://www.rfc-editor.org/rfc/rfc3986#section-3.3) part from the [query](https://www.rfc-editor.org/rfc/rfc3986#section-3.4) part. The query part normally consists of name/value pairs, and is often constructed by a web browser to match the data a user has entered into a form. For example a url might look like: ``` http://example.com/submit?name=John&age=93 ``` Here the path section in `/submit`, and the query sections is `name=John&age=93` which refers to the value “John” for the `name` key, and “93” for the `age`. When you [create a route](http://www.sinatrarb.com/intro.html#Routes) in Sinatra, you only specify the path part. Sinatra then parses the query, and makes the data in it available in the `params` object. In this example you could do something like this: ``` get '/submit' do name = params[:name] age = params[:age] # use name and age variables ... end ``` If you use a `?` character when defining a Sinatra route, it makes part of the url optional. In the example you used (`get "/add?:string_to_add"`), it will actually match any url starting with `/ad`, then optionally another `d`, and then anything else will be put in the `:string_to_add` key of the params hash, and the query section will be parsed separately. In other words the question mark makes the preceding `d` character optional. If you want to get the ‘raw’ text of the query string in Sinatra, you can use the [`query_string` method of the `request` object](http://www.sinatrarb.com/intro.html#Accessing%20the%20Request%20Object). In your example that would look something like this: ``` get '/add' do string_to_add = request.query_string ... end ``` Note that the route doesn’t include the `?` character, just the base `/add`.
Recursion (or while loops) in Scheme ``` (define (orderedTriples n) (set! i n) (set! j n) (set! k n) (while (>= i 0) (while (>= j 0) (while (>= k 0) (printf "(~a, ~a, ~a)" i j k) (set! k (- k 1))) (set! j (- j 1))) (set! i (- i 1)))) ``` So my issue is...I am confused as to how to make while loops work in scheme (I'm very new to this so excuse the syntax if I am WAY off). I typed while into here just for the purpose of working through a problem and to show what I am trying to accomplish. Could anyone help me with a simple recursion example or nested recursion?
Depending on the Scheme interpreter in use, there are several ways to implement the required loops. For example, in Racket it's as simple as using [iterations and comprehensions](http://docs.racket-lang.org/guide/for.html): ``` (define (orderedTriples n) (for* ([i (in-range n -1 -1)] [j (in-range n -1 -1)] [k (in-range n -1 -1)]) (printf "(~a, ~a, ~a)" i j k))) ``` The style of programming shown in the question (assuming it worked) is heavily discouraged in Scheme - using mutation (the `set!` operation) for looping is a big no-no, that's how you'd solve the problem in a C-like language, but in Scheme in particular (and in Lisp in general) there are other constructs for implementing iteration in a program (the solution given by @TerjeD demonstrates the use of `do`, for instance), and even if such constructs didn't exist, a recursive solution or a solution using [higher-order procedures](http://en.wikipedia.org/wiki/Higher-order_function) would be preferred. For example, here's another possible solution, using [nested mappings](http://mitpress.mit.edu/sicp/full-text/book/book-Z-H-15.html#%_sec_2.2.3) with only standard procedures (with the exception of `printf`, which is non-standard): ``` (define (range n) (if (negative? n) '() (cons n (range (- n 1))))) (define (orderedTriples n) (for-each (lambda (i) (for-each (lambda (j) (for-each (lambda (k) (printf "(~a, ~a, ~a)" i j k)) (range n))) (range n))) (range n))) ```
common lisp cons creates a list from two symbols, clojure cons requires a seq to cons onto? (Disclaimer - I'm aware of the significance of Seqs in Clojure) In common lisp the cons function can be used to combine two symbols into a list: ``` (def s 'x) (def l 'y) (cons s l) ``` In clojure - you can only cons onto a sequence - cons hasn't been extended to work with two symbols. So you have to write: ``` (def s 'x) (def l 'y) (cons s '(l)) ``` Is there a higher level pattern in Clojure that explains this difference between Common LISP and Clojure?
In Clojure, unlike traditional Lisps, lists are not the primary data structures. The data structures can implement the [ISeq](http://github.com/clojure/clojure/blob/master/src/jvm/clojure/lang/ISeq.java) interface - which is another view of the data structure it's given - allowing the same functions to access elements in each. (Lists already implement this. `seq?` checks whether something implements [ISeq](http://github.com/clojure/clojure/blob/master/src/jvm/clojure/lang/ISeq.java).`(seq? '(1 2)), (seq? [1 2]))` Clojure simply acts differently (with good reason), in that when `cons` is used, a sequence (it's actually of type `clojure.lang.Cons`) constructed of `a` and `(seq b)` is returned. (`a` being arg 1 and `b` arg 2) Obviously, symbols don't and can't implement [ISeq](http://github.com/clojure/clojure/blob/master/src/jvm/clojure/lang/ISeq.java). [Clojure.org/sequences](http://clojure.org/sequences) [Sequences screencast/talk by Rich Hickey](http://clojure.blip.tv/file/734409/) However, note that `rest` has changed, and it's previous behaviour is now in `next`, and that `lazy-cons` has been replaced by `lazy-seq` and `cons`. [clojure.lang.RT](http://github.com/hiredman/clojure/blob/master/src/jvm/clojure/lang/RT.java)
How to change the title color when using Theme.AppCompat.Light.NoActionBar I am creating my own action bar with menu items and therefore using `Theme.AppCompat.Light.NoActionBar`. I want to have the action bar purple and the title color white, but at the moment it is black. ``` <style name="AppTheme" parent="Theme.AppCompat.Light.NoActionBar"> <!-- Customize your theme here. --> <item name="colorPrimaryDark">@color/colorPrimaryDark</item> <item name="colorAccent">@color/colorAccent</item> </style> ``` my action bar xml is: ``` <android.support.v7.widget.Toolbar xmlns:android="http://schemas.android.com/apk/res/android" android:layout_width="match_parent" android:layout_height="wrap_content" android:background="@color/colorPrimary" android:id="@+id/app_bar" > ```
As explained in this [Google+ pro-tip](https://plus.google.com/+AndroidDevelopers/posts/AV2ooBWY1iy), you can use a `ThemeOverlay` to customize only certain things. This is useful for when you need to make text light on a dark background, which can be achieved by adding `android:theme="@style/ThemeOverlay.AppCompat.Dark.ActionBar"` to your `Toolbar`: ``` <android.support.v7.widget.Toolbar xmlns:android="http://schemas.android.com/apk/res/android" android:layout_width="match_parent" android:layout_height="wrap_content" android:background="@color/colorPrimary" android:id="@+id/app_bar" android:theme="@style/ThemeOverlay.AppCompat.Dark.ActionBar" > ``` This technique was further discussed in the [Theming with AppCompat blog+video](https://medium.com/google-developers/theming-with-appcompat-1a292b754b35).
How to output a label with an icon on one line? The text of the label jumps to the next line when I use `setIcon()`. How can I output it on one line? ``` Label info = new Label("Reports are generated every day at 5 pm", ContentMode.HTML); info.setIcon( new ThemeResource("img/icons/information.png") ) ; info.setSizeUndefined(); JobsLayout.addComponent( info ); JobsLayout.setComponentAlignment(info, Alignment.MIDDLE_RIGHT); ```
You can use icon inside label text: ``` Label date = new Label(FontAwesome.CALENDAR.getHtml() + " " + new Date()); date.setContentMode(ContentMode.HTML); ``` Source: [Vaadin Wiki](https://vaadin.com/wiki?p_p_id=36&p_p_lifecycle=0&p_p_state=normal&p_p_mode=view&p_p_col_id=row-1&p_p_col_pos=1&p_p_col_count=3&_36_struts_action=%2Fwiki%2Fview&p_r_p_185834411_nodeName=vaadin.com+wiki&p_r_p_185834411_title=Using+font+icons#section-Using+font+icons-Using+FontAwesome). Result: [![Vaadin label with icon in the same line.](https://i.stack.imgur.com/VHTbk.png)](https://i.stack.imgur.com/VHTbk.png)
upgraded from perl 5.8 (32bit) to 5.16 (64bit) - regex performance hit I'm running a series of regexes against blocks of data. We recently upgraded from ActiveState Perl 5.8 32bit (I know... extremely old!) to Perl 5.16 64bit. All the hardware stayed the same (Windows). We are noticing a performance hit: whereas before our parse loop would take about 2.5 seconds, now it takes about 5 seconds. Can anybody give me a hint as to what would cause the change? I was expecting an increase in performance, as my understanding was that the engine had improved greatly. Any docs on what I should be doing differently would be greatly appreciated.
Yes, the regex engine improved greatly after v8. In v10 alone, we saw: - pattern recursion - named captures - possessive quantifiers - backtrack control verbs like `(*FAIL)` or `(*SKIP)`. - The `\K` operator - … and some more Also, more internals were made Unicode-aware. In v12, the Unicode support was cleaned up. The `\p` and `\X` operators in regexes are now greatly enhanced. In v14, the Unicode support was bumped to 6.0. Charnames for the `\N` operator were improved (see also the `charnames` pragma). The new character model can treat any unsigned integer as a codepoint. In the regex engine, - regexes can now carry charclass modifiers like `/u`, `/d`, `/l`, `/a`, `/aa`. - Non-destructive substitution with `/r` was implemented. - The RE engine is now reentrant, so embedded code can use regexes. - `\p` was cleaned up - regex compilation is faster when a switch to unicode semantics is necessary. In v16, perl almost supports Unicode 6.1. In the regex engine, - efficiency of `\p` charclasses was increased. - Various regex bugs (often involving case-insensitive matching) were fixed. Obviously, not all of these features come at a price, but especially Unicode-awareness makes internals more complicated, and slower. You also cannot wave a hand and state that the execution time of a script doubled from perl5 v8 x86 to perl5 v16 x64; there are too many variables: - were both Perls compiled with the same flags? - are both Perls threaded builds (disabling threading support makes them faster)? - how big are your integers? 64 bit or 32 bit? - what compiler optimizations were chosen? - did your previous Perl have some distribution-specific patches applied? Basically, you have to compare the whole `perl -V` output. --- If you are hitting a performance ceiling with regexes, they may be the wrong tool for extensive parsing. At the very least, you may use the newer features to optimize the regexes to eliminate some backtracking. If your parsing code describes a (roughly) context-free language (i.e. you don't use `(?{...})`, `(?=...)` or related regex features), and parsing means doing something like generating a tree, then [Marpa::R2](https://metacpan.org/module/Marpa%3a%3aR2%3a%3aScanless) might speed things up considerably.
Can I search/index a custom datasource in Orchard via Lucene? I am currently working on a site to allow users to search through a custom product catalog. I have been looking around and would love to leverage Orchard CMS to help me develop this site. I have currently gone through [Ron Petersons youtube series](http://www.youtube.com/watch?v=Iv7rA-viyTw) on custom Orchard Modules and the [Skywalker blog series](http://skywalkersoftwaredevelopment.net/blog/writing-an-orchard-webshop-module-from-scratch-part-1). I feel like my goal is possible, but I'm looking for some validation on whether my strategy will work within the Orchard framework. This is my current situation: 1. I have an default Orchard configuration pointing to a SQL DB (named Product-Orchard) 2. I have a custom DAL that points to another SQL DB (named Products). 3. Products are made up of your typical information (Product Name, Description, Price, etc). 4. The custom DAL has a POCO model called Product (with a Repository to interact with) with the properties Name, Description, Price. Now, based on the information I read about creating Orchard modules it seems like the method of creating a custom module with custom content is to: 1. Create a Module through code gen tools (We'll call it ProductModule) 2. Create a custom Content Part (ProductPart) 3. Create a custom Content Part Record (ProductPartRecord) to act as the data model for the part. 4. Create a custom ContentPartHandler (ProductPartHandler) that handles the persistance of the Content Part. 5. Create a custom Driver that is the entry for preparing the Shapes for rendering of the UI. 6. Potentially create a Service that interacts with the Drivers? This is where things start to get jumbled and I'm not sure if this is possible or not. What I would like to do is to create a custom Content Type that is backed by my custom DAL rather than having the data be stored through the ContentPartRecord inside the Product-Orchard DB, but still allow it to be indexed by the Lucene module to allow for searching of the Product catalog. Is it possible to create a custom ContentType and/or ContentPart that is backed by a different datasource and still leverage the Lucene search capabilities? In high level terms I'd like a Product ContentType where the ContentItems are actually stored in my secondary database, not the Orchard database (and still want to be able to leverage Lucene search via Projections).
For those searching for a similar answer, the following solution is what I settled on. There is no easy mechanism I could find to interact with a separate DAL and perform the Lucene indexing. 1. Create the Orchard Module 2. Create new Content Part/Type via aMigration 3. Use Orchard Command infrastructure to import data from your secondary database 4. Use the OnIndexing event in the Content Part handler to allow Lucene to index your datasource. 5. Create a lookup property (I called mine ConcreateProperty) that is populated through a Service I created in the module to interact with the secondary DAL in the OnLoaded event. My final Handler looked like this: ``` public class HomePartHandler : ContentHandler { public HomePartHandler(IRepository<HomePartRecord> repository, IHomeSearchMLSService homeSearchService) { Filters.Add(StorageFilter.For(repository)); OnLoaded<HomePart>((ctx, part) => { part.ConcreteProperty = homeSearchService.GetByMlsNumber(part.MlsId) ?? new PropertyDetail(); }); OnIndexing<HomePart>((context, homePart) => context.DocumentIndex .Add("home_StreetFullName", homePart.Record.StreetFullName).RemoveTags().Analyze().Store() .Add("home_City", homePart.Record.City).RemoveTags().Analyze().Store() .Add("home_State", homePart.Record.State).RemoveTags().Analyze().Store() .Add("home_Zip", homePart.Record.Zip).RemoveTags().Analyze().Store() .Add("home_Subdivision", homePart.Record.Subdivision).RemoveTags().Analyze().Store() .Add("home_Beds", homePart.Record.Beds).RemoveTags().Analyze().Store() .Add("home_Baths", homePart.Record.Baths).RemoveTags().Analyze().Store() .Add("home_SquareFoot", homePart.Record.SquareFoot).RemoveTags().Analyze().Store() .Add("home_PropertyType", homePart.Record.PropertyType).RemoveTags().Analyze().Store() .Add("home_ListPrice", homePart.Record.ListPrice).RemoveTags().Analyze().Store() .Add("home_MlsId", homePart.Record.MlsId).RemoveTags().Analyze().Store() .Add("home_Latitude", (double)homePart.Record.Latitude).RemoveTags().Analyze().Store() .Add("home_Longitude", (double)homePart.Record.Longitude).RemoveTags().Analyze().Store() ); } } ``` This allows me to create a search service for searching through all my data and then hook it up to the model via the Concrete Property, which actually works better from a performance standpoint anyway.
How to get mypy to accept an unpacked dict? I am having problems with mypy. I have this code: ``` def func(arg1, arg2, arg3=0.0, arg4=0.0): # type: (float, float, float, float) -> float # do something and return a float. dict_with_other_arguments = {'arg3': 0.5, 'arg4': 1.4} a = func(arg1, arg2, **dict_with_other_arguments) ``` The problem is that mypy does not check what's in the dictionary for types; instead, I get an error like this: > > error: Argument 3 to "func" has incompatible type "\*\*Dict[str, float]"; expected "float" > > > Any ideas how to fix this without changing the code?
Mypy is correct in flagging your function calls. The following code illustrates why: ``` def func(str_arg='x', float_arg=3.0): # type: (str, float) -> None print(str_arg, float_arg) kwargs1 = {'float_arg': 8.0} kwargs2 = {'str_arg': 13.0} # whoops func(float_arg=5.0) # prints "x 5.0" -- good func(**kwargs1) # prints "x 13.0" -- good but flagged by Mypy func(**kwargs2) # prints "13.0 3.0" -- bad ``` In this example, `kwargs1` and `kwargs2` are both of type `Dict[str, float]`. The type checker does not consider the content of the keys, only their types, so the second and third calls to `func` look identical to Mypy. They must either both be errors or both be acceptable, and they can't both be acceptable since the third call violates the type system. The only way that the type checker can be sure that you're not passing incorrect types in the dict is if all of the arguments that haven't been explicitly passed share the type of the dict's values. Note, however, that mypy will not protect you from errors caused by respecifying a keyword argument in a dict: ``` # This works fine: func('x', **kwargs1) # This is technically type safe and accepted by mypy, but at runtime raises # `TypeError: func() got multiple values for argument 'str_arg'`: func('x', **kwargs2) ``` There is some further discussion of this issue here: <https://github.com/python/mypy/issues/1969>
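As a hedged side note (not part of the original answer): on reasonably recent mypy versions, one way to get per-key checking when unpacking is to type the dictionary as a `TypedDict`, so the checker knows exactly which keys exist and what their types are. The names below are illustrative only:

```python
# Sketch: a TypedDict lets mypy match dict keys to keyword parameters.
# `total=False` marks every key as optional.
from typing_extensions import TypedDict  # or `from typing import TypedDict` on Python 3.8+

class FuncKwargs(TypedDict, total=False):
    arg3: float
    arg4: float

def func(arg1: float, arg2: float, arg3: float = 0.0, arg4: float = 0.0) -> float:
    return arg1 + arg2 + arg3 + arg4

kwargs: FuncKwargs = {'arg3': 0.5, 'arg4': 1.4}
a = func(1.0, 2.0, **kwargs)  # mypy can now check each key individually
```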
How to print an object's representation using the string format function? I'm creating a Money class, and I'd like to pass the object directly to the string format() function and get the money representation with 2 decimals and the currency symbol. What method should I override to print with the string format function? Overriding `__str__` and `__repr__` did not work. ``` from decimal import Decimal class Money(Decimal): def __str__(self): return "$" + format(self, ',.2f') def __repr__(self): return "$" + format(self, ',.2f') m = Money("123.44") print(m) # $123.44. Good. m # $123.44. Good. print("Amount: {0}".format(m)) # 123.44. Bad. I wanted $123.44 print(f"Amount: {m}") # 123.44. Bad. I wanted $123.44 ```
You can give your class a [`__format__` method](https://docs.python.org/3/reference/datamodel.html#object.__format__); in this case just call overridden version: ``` def __format__(self, spec): spec = spec or ',.2f' # set a default spec when not explicitly given return '$' + super().__format__(spec) ``` From the linked documentation: > > Called by the `format()` built-in function, and by extension, evaluation of formatted string literals and the `str.format()` method, to produce a “formatted” string representation of an object. The `format_spec` argument is a string that contains a description of the formatting options desired. The interpretation of the `format_spec` argument is up to the type implementing `__format__()`, however most classes will either delegate formatting to one of the built-in types, or use a similar formatting option syntax. > > > You'll want to drop your `__str__` and `__repr__` implementations now, or at least not add another `'$'` on top of the one `__format__` now adds (which `format(self, ...)` will trigger). Demo: ``` >>> from decimal import Decimal >>> class Money(Decimal): ... def __format__(self, spec): ... spec = spec or ',.2f' # set a default spec when not explicitly given ... return '$' + super().__format__(spec) ... >>> m = Money("123.44") >>> print("Amount: {0}".format(m)) Amount: $123.44 >>> print(f"Amount: {m}") Amount: $123.44 ```
Issue updating Ruby on Mac with Xcode 4.3.1 I'm using RVM to install it and it gives me this error: ``` The provided compiler '/usr/bin/gcc' is LLVM based, it is not yet fully supported by ruby and gems, please read `rvm requirements`. ``` I'm on Lion 10.7.3 and I have Xcode 4.3.1.
The short answer is that you can grab the RVM master branch (not stable) to build it with LLVM (not gcc, I mistyped initially). It has the appropriate patches to make 1.9.3-p125 run (at least better) with Xcode 4.3.1 by default. I provided the patch. If you have already installed RVM, `rvm get head` will install the master branch. With the command line tools installed with Xcode 4.3.1, you can successfully install Ruby 1.9.3-p125. **Background** This happens due to a simple configuration issue in Ruby 1.9.3-p125 that doesn't allow dynamically linked modules to work. It occurs if you're using Xcode 4.3.x ([Ruby Issue#6080](https://bugs.ruby-lang.org/issues/6080)). This issue has been fixed in [change set r34840](http://svn.ruby-lang.org/cgi-bin/viewvc.cgi?revision=34840&view=revision). RVM has a patch system which applies patches on a per-version basis. This patch is included in RVM (the master branch for now) and is now the default for the p125 configuration steps. **Xcode 4.3.x Command Line Tools** First, with Xcode 4.3.x, you need to install the command line tools AFTER installing Xcode 4.3.x, by following these directions: 1) Launch Xcode, 2) Open “Preferences” from the “Xcode” item on the menu bar. 3) Select the “Downloads” tab (icon). 4) Click the “Install” button for “Command Line Tools” (directions borrowed from my friend's site [here](http://draft.scyphus.co.jp/macosx/lion.html)). If Xcode 4.3.1 is correctly installed, then `cc --version` should emit: ``` % cc --version Apple clang version 3.1 (tags/Apple/clang-318.0.54) (based on LLVM 3.1svn) Target: x86_64-apple-darwin11.3.0 Thread model: posix ``` **autoconf and automake** You need autoconf and automake, since Xcode 4.3.x doesn't include them. Install them with either `brew` or `MacPorts`. With `MacPorts`: ``` sudo port install autoconf automake ``` **Recommended installation steps with RVM** Then, to install a specific branch of RVM, you can: ``` REPO=wayneeseguin BRANCH=master # stable for the stable branch curl -s https://raw.github.com/${REPO}/rvm/${BRANCH}/binscripts/rvm-installer > /tmp/rvm-installer.sh bash /tmp/rvm-installer.sh --branch ${REPO}/${BRANCH} ``` Or if RVM is already installed: ``` rvm get head # master branch, for stable branch "rvm get stable" ``` After that, install openssl, iconv and readline using the rvm pkg command for best results. I have been doing the following lately; part of this might need to be included in RVM. ``` rvm pkg install openssl rvm pkg install readline # if you prefer GNU readline ``` Then, finally, install Ruby. ``` rvm install 1.9.3-p125 --patch xcode-debugopt-fix-r34840.diff --with-readline-dir=$rvm_path/usr --with-openssl-dir=$rvm_path/usr --with-tcl-lib=/usr --with-tk-lib=/usr ``` rvm pkg's help recommends different parameters, but the help is broken, so use the above for now. You need the tcl/tk parameters if you have them via MacPorts (like me). By the way, it is possible to install an old Xcode and then run rvm with `export CC="gcc-4.2" rvm install 1.9.3-p125`, but I personally think clang (LLVM) is the way to go for the future, if possible. Hope this helps. **Additional note on 2012/3/31** iconv does not need to be installed; also added the autoconf/automake requirements for clarification. **Additional note on 2012/4/13** The latest stable does not include the patch required for dynamic linking to work by default. You need to add `--patch xcode-debugopt-fix-r34840.diff` when you build 1.9.3-p125.
JScrollPane does not appear when using it on a JPanel I have been trying for hours to find a way to solve the issue, but I had no luck with that. Here is a sample code: ``` import java.awt.BorderLayout; import javax.swing.BoxLayout; import javax.swing.JButton; import javax.swing.JFrame; import javax.swing.JPanel; import javax.swing.JScrollPane; import javax.swing.SwingUtilities; public class Example extends JFrame { private static final long serialVersionUID = 1L; public Example() { JPanel contentPane = (JPanel) getContentPane(); contentPane.setLayout(new BorderLayout()); JPanel panTop = new JPanel(new BorderLayout()); //JPanel panBottom = new JPanel(new BorderLayout()); JPanel panTopCenter = new JPanel(); //JPanel panTopLeft = new JPanel(); //JPanel panTopRight = new JPanel(); panTop.add(panTopCenter, BorderLayout.CENTER); //panTop.add(panTopLeft, BorderLayout.WEST); //panTop.add(panTopRight, BorderLayout.EAST); contentPane.add(panTop, BorderLayout.CENTER); //contentPane.add(panBottom, BorderLayout.SOUTH); JPanel pan = new JPanel(); pan.setLayout(new BoxLayout(pan, BoxLayout.Y_AXIS)); for(int i = 0; i < 50; i++) pan.add(new JButton("Button " + i)); JScrollPane scrollPane = new JScrollPane(pan); panTopCenter.add(scrollPane); pack(); setLocationRelativeTo(null); setVisible(true); } public static void main(String[] args) { SwingUtilities.invokeLater(new Runnable() { @Override public void run() { new Example(); } }); } } ``` **Snapshot:** ![enter image description here](https://i.stack.imgur.com/p9r77.png)
I always have to set the viewport's preferred size like this. ``` import java.awt.*; import javax.swing.*; public class Example extends JFrame { public Example() { setDefaultCloseOperation(EXIT_ON_CLOSE); Box box = new Box(BoxLayout.Y_AXIS); for (int i = 0; i < 50; i++) { box.add(new JButton("Button " + i)); } JScrollPane sp = new JScrollPane(box); Dimension d = new Dimension(box.getComponent(0).getPreferredSize()); sp.getVerticalScrollBar().setUnitIncrement(d.height); d.height *= 10; // Show at least 10 buttons sp.getViewport().setPreferredSize(d); add(sp, BorderLayout.CENTER); pack(); setLocationRelativeTo(null); setVisible(true); } public static void main(String[] args) { SwingUtilities.invokeLater(new Runnable() { @Override public void run() { Example e = new Example(); } }); } } ```
pandas how to swap or reorder columns I know that there are ways to swap the column order in Python pandas. Let's say I have this example dataset: ``` import pandas as pd employee = {'EmployeeID' : [0,1,2], 'FirstName' : ['a','b','c'], 'LastName' : ['a','b','c'], 'MiddleName' : ['a','b', None], 'Contact' : ['(M) 133-245-3123', '(F)[email protected]', '(F)312-533-2442 [email protected]']} df = pd.DataFrame(employee) ``` One basic way to do it would be: ``` neworder = ['EmployeeID','FirstName','MiddleName','LastName','Contact'] df=df.reindex(columns=neworder) ``` However, as you can see, I only want to swap two columns. It was doable just because there are only a few columns, but what if I have 100 columns? What would be an effective way to swap or reorder columns? There might be 2 cases: 1. when you just want 2 columns swapped. 2. when you want 3 columns reordered. (I am pretty sure that this case can be applied to more than 3 columns.)
**Two column Swapping** ``` cols = list(df.columns) a, b = cols.index('LastName'), cols.index('MiddleName') cols[b], cols[a] = cols[a], cols[b] df = df[cols] ``` **Reorder column Swapping (2 swaps)** ``` cols = list(df.columns) a, b, c, d = cols.index('LastName'), cols.index('MiddleName'), cols.index('Contact'), cols.index('EmployeeID') cols[a], cols[b], cols[c], cols[d] = cols[b], cols[a], cols[d], cols[c] df = df[cols] ``` **Swapping Multiple** Now it comes down to how you can play with list slices - ``` cols = list(df.columns) cols = cols[1::2] + cols[::2] df = df[cols] ```
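A small follow-up sketch (my own illustration, not from the answer above): the two-column swap can be wrapped in a reusable helper so it reads the same regardless of how many columns the frame has:

```python
import pandas as pd

def swap_columns(df: pd.DataFrame, col1: str, col2: str) -> pd.DataFrame:
    """Return df with the positions of col1 and col2 exchanged."""
    cols = list(df.columns)
    a, b = cols.index(col1), cols.index(col2)
    cols[a], cols[b] = cols[b], cols[a]
    return df[cols]

# e.g. df = swap_columns(df, 'LastName', 'MiddleName')
```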
Format a string into columns Is there a cool way to take something like this: ``` Customer Name - City, State - ID Bob Whiley - Howesville, TN - 322 Marley Winchester - Old Towne, CA - 5653 ``` and format it to something like this: ``` Customer Name - City, State - ID Bob Whiley - Howesville, TN - 322 Marley Winchester - Old Towne, CA - 5653 ``` Using string format commands? I am not too hung up on what to do if one is very long. For example this would be ok by me: ``` Customer Name - City, State - ID Bob Whiley - Howesville, TN - 322 Marley Winchester - Old Towne, CA - 5653 Super Town person - Long Town Name, WA- 45648 ``` To provide some context. I have a drop down box that shows info very similar to this. Right now my code to create the item in the drop down looks like this: ``` public partial class CustomerDataContract { public string DropDownDisplay { get { return Name + " - " + City + ", " + State + " - " + ID; } } } ``` I am looking for a way to format this better. Any ideas? --- This is what I ended up with: ``` HttpContext.Current.Server.HtmlDecode( String.Format("{0,-27} - {1,-15}, {2, 2} - {3,5}", Name, City, State, ID) .Replace(" ", "&nbsp;")); ``` The HtmlDecode changes the   to a space that can withstand the space removing formatting of the dropdown list.
You can specify the number of columns occupied by the text as well as the alignment using `Console.WriteLine` or `String.Format`: ``` // Prints "--123       --" Console.WriteLine("--{0,-10}--", 123); // Prints "--       123--" Console.WriteLine("--{0,10}--", 123); ``` The number specifies the number of columns you want to use and the sign specifies alignment (a negative width for left alignment, a positive width for right alignment). So, if you know the number of columns available, you could write for example something like this: ``` public string DropDownDisplay { get { return String.Format("{0,-10} - {1,-10}, {2, 10} - {3,5}", Name, City, State, ID); } } ``` If you'd like to calculate the number of columns based on the entire list (e.g. the longest name), then you'll need to get that number in advance and pass it as a parameter to your `DropDownDisplay` - there is no way to do this automatically.
How to stringify an object of the type Map ``` class test{ name : string children : Map<string,string> =new Map() constructor(){ this.name='KIANA' this.children.set('name','OTTO') } } var t = new test() console.log(t) console.log(JSON.stringify(t)) ``` The result is: ``` test { children: Map { 'name' => 'OTTO' }, name: 'KIANA' } {"children":{},"name":"KIANA"} ``` How can I get the whole data, like: ``` {"children":{'name':'OTTO'},"name":"KIANA"} ``` or ``` {"children":['name':'OTTO'],"name":"KIANA"} ``` Or is there a simpler way to describe the 'key value' relationship in JSON and TypeScript?
Preface: Class names should start with an uppercase character, so I've changed `test` to `Test` in the below. Since `Map` isn't stringify-able by default, you have at least three choices: 1. Implement `toJSON` on your `Test` class and return an object with a replacement for `children` (probably an array of arrays), or 2. Implement a subclass of `Map` that has `toJSON` and use that in `Test` 3. Implement a *replacer* that you use with `JSON.stringify` that handles `Map` instances. While #1 works, it means you have to edit your `toJSON` method every time you add or remove properties from `Test`, which seems like a maintenance issue: ``` class Test { name: string children: Map<string, string> = new Map() constructor() { this.name = 'KIANA' this.children.set('name', 'OTTO') } toJSON() { return { name: this.name, children: [...this.children.entries()] } } } var t = new Test() console.log(JSON.stringify(t)) ``` Live Example: ``` class Test { name/*: string*/ children/*: Map<string, string>*/ = new Map() constructor() { this.name = 'KIANA' this.children.set('name', 'OTTO') } toJSON() { return { name: this.name, children: [...this.children.entries()] } } } var t = new Test() console.log(JSON.stringify(t)) ``` `[...this.children.entries()]` creates an array of `[name, value]` arrays for the map. But I prefer #2, a JSON-compatible `Map`: ``` class JSONAbleMap extends Map { toJSON() { return [...this.entries()] } } ``` ...which you then use in `Test`: ``` class Test { name: string children: Map<string, string> = new JSONAbleMap() constructor() { this.name = 'KIANA' this.children.set('name', 'OTTO') } } var t = new Test() console.log(JSON.stringify(t)) ``` Live Example: ``` class JSONAbleMap extends Map { toJSON() { return [...this.entries()] } } class Test { name/*: string*/ children/*: Map<string, string>*/ = new JSONAbleMap() constructor() { this.name = 'KIANA' this.children.set('name', 'OTTO') } } var t = new Test() console.log(JSON.stringify(t)) ``` Or #3, a replacer function you use with `JSON.stringify`: ``` function mapAwareReplacer(key: string|Symbol, value: any): any { if (value instanceof Map && typeof value.toJSON !== "function") { return [...value.entries()] } return value } ``` ...which you use when calling `JSON.stringify`: ``` console.log(JSON.stringify(t, mapAwareReplacer)) ``` Live Example: ``` function mapAwareReplacer(key, value) { if (value instanceof Map && typeof value.toJSON !== "function") { return [...value.entries()] } return value } class Test { name/*: string*/ children/*: Map<string, string>*/ = new Map() constructor() { this.name = 'KIANA' this.children.set('name', 'OTTO') } } var t = new Test() console.log(JSON.stringify(t, mapAwareReplacer)) ```
Switching ViewControllers with UISegmentedControl in iOS5 I am trying something very simple but somehow I can't get it to work. All I try to do is switching between 2 View Controllers using an UISegmentedControl as you can see it for example in the App Store application in the Highlights tab. I am using iOS5 and Storyboards. Here's my Storyboad line up: ![enter image description here](https://i.stack.imgur.com/bpoZ0.png) So I have a root View Controller and two UITableViews - This 2 TableViews I want to switch. Here's how the implementation file looks like ``` #import "SegmentedLocationViewController.h" #import "PastEventsLocationViewController.h" #import "FutureEventsLocationViewController.h" @interface SegmentedLocationViewController() @property (weak, nonatomic) IBOutlet UISegmentedControl *segmentedControl; @property (strong, nonatomic) NSArray *viewControllers; @end @implementation SegmentedLocationViewController @synthesize segmentedControl = _segmentedControl; @synthesize viewControllers = _viewControllers; - (IBAction)indexDidChangeForSegmentedControl:(UISegmentedControl*)segmentedControl { NSLog(@"index: %d", segmentedControl.selectedSegmentIndex); } - (void)setupViewControllers { PastEventsLocationViewController *pastEventsLocationViewController = [[PastEventsLocationViewController alloc] initWithStyle:UITableViewStylePlain]; FutureEventsLocationViewController *futureEventsLocationViewController = [[FutureEventsLocationViewController alloc] initWithStyle:UITableViewStylePlain]; self.viewControllers = [NSArray arrayWithObjects:pastEventsLocationViewController, futureEventsLocationViewController, nil]; } - (void)setupUI { [self.segmentedControl addTarget:self action:@selector(indexDidChangeForSegmentedControl:) forControlEvents:UIControlEventValueChanged]; } // Implement viewDidLoad to do additional setup after loading the view, typically from a nib. - (void)viewDidLoad { [super viewDidLoad]; [self setupViewControllers]; [self setupUI]; } - (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation { return YES; } @end ``` I can trigger the switch event and can log the currently selected index. But I don't have any idea where to go from here. Maybe someone can turn my attention towards a certain direction...?
This code works pretty well for your purpose, I use it for one of my new apps. It uses the new UIViewController containment APIs that allow UIViewControllers inside your own UIViewControllers without the hassles of manually forwarding stuff like `viewDidAppear:` ``` - (void)viewDidLoad { [super viewDidLoad]; // add viewController so you can switch them later. UIViewController *vc = [self viewControllerForSegmentIndex:self.typeSegmentedControl.selectedSegmentIndex]; [self addChildViewController:vc]; vc.view.frame = self.contentView.bounds; [self.contentView addSubview:vc.view]; self.currentViewController = vc; } - (IBAction)segmentChanged:(UISegmentedControl *)sender { UIViewController *vc = [self viewControllerForSegmentIndex:sender.selectedSegmentIndex]; [self addChildViewController:vc]; [self transitionFromViewController:self.currentViewController toViewController:vc duration:0.5 options:UIViewAnimationOptionTransitionFlipFromBottom animations:^{ [self.currentViewController.view removeFromSuperview]; vc.view.frame = self.contentView.bounds; [self.contentView addSubview:vc.view]; } completion:^(BOOL finished) { [vc didMoveToParentViewController:self]; [self.currentViewController removeFromParentViewController]; self.currentViewController = vc; }]; self.navigationItem.title = vc.title; } - (UIViewController *)viewControllerForSegmentIndex:(NSInteger)index { UIViewController *vc; switch (index) { case 0: vc = [self.storyboard instantiateViewControllerWithIdentifier:@"FooViewController"]; break; case 1: vc = [self.storyboard instantiateViewControllerWithIdentifier:@"BarViewController"]; break; } return vc; } ``` I got this stuff from chapter 22 of Ray Wenderlichs book [iOS5 by tutorial](http://www.raywenderlich.com/store/ios-5-by-tutorials). Unfortunately I don't have a public link to a tutorial. But there is a WWDC 2011 video titled "Implementing UIViewController Containment" **EDIT** `self.typeSegmentedControl` is outlet for your `UISegmentedControl` `self.contentView` is outlet for your container view `self.currentViewController` is just a property that we're using to store our currently used `UIViewController`
Check if an input box is empty How can I check if a given input control is empty? I know there is a `$pristine` property on the field which tells you if a given field is empty initially, but what if someone fills the field and then yanks the whole content out again? I think this feature is necessary, as it's important for telling the user that the field is required. Any ideas will be appreciated!
[Quite simple](http://plnkr.co/edit/ZZwkvLiBU912xsHpygjy?p=preview): ``` <input ng-model="somefield"> <span ng-show="!somefield.length">Please enter something!</span> <span ng-show="somefield.length">Good boy!</span> ``` You could also use `ng-hide="somefield.length"` instead of `ng-show="!somefield.length"` if that reads more naturally for you. --- A better alternative might be to really take advantage of the [form abilities of Angular](http://docs.angularjs.org/api/ng.directive%3ainput.text): ``` <form name="myform"> <input name="myfield" ng-model="somefield" ng-minlength="5" required> <span ng-show="myform.myfield.$error.required">Please enter something!</span> <span ng-show="!myform.myfield.$error.required">Good boy!</span> </form> ``` [Updated Plnkr here.](http://plnkr.co/edit/hfIWCfZkpPkOKo1L6BhV?p=preview)
Flatten a nested object in MongoDB and rename Suppose I have documents like this ``` { "id" : "1415166669", "color" : { "14" : "Silver" }, "name":"Random Name" } ``` where the key inside color can change, i.e Next Document may look like this ``` { "id" : "1415126969", "color" : { "11" : "Gold" }, "name":"Random Name 2" } ``` I want to flatten and rename them so that my documents have a similar structure as follows: ``` { "id" : "1415126969", "color" : "Gold" "name":"Random Name 2" } ``` and ``` { "id" : "1415166669", "color" : "Silver" "name":"Random Name" } ``` I tried using the aggregation pipeline but I am not sure how to proceed further. Also can anyone suggest a way using Robomongo, that would be nice as well.
I would run a simple JavaScript loop for this, since it's going to be quite quick and easy to code. It can also be applied to the same collection: ``` var ops = []; db.collection.find().forEach(function(doc) { let color; Object.keys(doc.color).forEach(function(key) { color = doc.color[key]; }); ops = [ ...ops, { "updateOne": { "filter": { "_id": doc._id }, "update": { "$set": { "color": color } } }} ]; if ( ops.length >= 500 ) { db.collection.bulkWrite(ops); ops = []; } }) if ( ops.length > 0 ) { db.collection.bulkWrite(ops); ops = []; } ``` So you basically traverse the "keys" of the object and get the value, which later we write back to the document with [`$set`](https://docs.mongodb.com/manual/reference/operator/update/set/) Alternately, since MongoDB 3.4 you can run an aggregation statement using `$objectToArray` in order to access the keys. But you really only should do this where the collection has millions of documents. The requirements of [`$out`](https://docs.mongodb.com/manual/reference/operator/aggregation/out/) means the result is in new collection, and not the current one being updated: ``` db.collecion.aggregate([ { "$addFields": { "color": { "$arrayElemAt": [ { "$map": { "input": { "$objectToArray": "$color" }, "as": "c", "in": "$$c.v" }}, 0 ] } }}, { "$out": "newcollection" } ]) ``` This works because `$objectToArray` turns your structure like this: ``` "color": { "11": "Gold" } ``` Into this: ``` "color": [{ "k": "11", "v": "Gold" }] ``` So we can then use the [`$map`](https://docs.mongodb.com/manual/reference/operator/aggregation/map/) operator in order to extract the `"color.v"` path value and [`$arrayElemAt`](https://docs.mongodb.com/manual/reference/operator/aggregation/arrayElemAt/) to turn this into a singular value rather than an array. Generally speaking such conversions would be a lot more complex for aggregation pipeline statements than for what you can achieve simply by manipulating the document structure in code and then writing back to the target collection. --- Given the existing documents: ``` { "_id" : ObjectId("59389951fc04695e84e7f4ae"), "id" : "1415166669", "color" : { "14" : "Silver" }, "name" : "Random Name" } { "_id" : ObjectId("59389a75fc04695e84e7f4af"), "id" : "1415126969", "color" : { "11" : "Gold" }, "name" : "Random Name 2" } ``` Output from both methods is: ``` { "_id" : ObjectId("59389951fc04695e84e7f4ae"), "id" : "1415166669", "color" : "Silver", "name" : "Random Name" } { "_id" : ObjectId("59389a75fc04695e84e7f4af"), "id" : "1415126969", "color" : "Gold", "name" : "Random Name 2" } ```
Store Photos in Blobstore or as Blobs in Datastore - Which is better/more efficient/cheaper? I have an app where each Datastore entity of a specific kind can have a number of photos associated with it. (Imagine a car sales website - one Car has multiple photos.) Originally, since all the data is being sourced from another site, I was limited to storing the photos as Datastore Blobs, but now that it's possible to write Blobstore items programmatically, I'm wondering if I should change my design and store the photos as Blobstore items. So, the question is: is it 'better' to store the photos in the Blobstore, or as Blobs in the Datastore? Both are possible solutions, but which would be the better/cheaper/more efficient approach, and why?
Images served from BlobStore have several advantages over Datastore: 1. Images are served directly from BlobStore, so request does not go through GAE frontend instance. So you are saving on frontend instances time and hence cost. 2. BlobStore storage cost is roughly half of Datastore storage cost ($0.13 vs $0.24). With Datastore you'd additionally pay for get() or query(). 3. BlobStore automatically uses Google cache service, so the only cost is cost of bandwidth ($0.12/GB). You can also set this on frontend instance via cache control, but the difference is that this is done automatically for BlobStore. 4. Images in BlobStore can be served via ImageService and can be [transformed on the fly](http://code.google.com/appengine/docs/java/images/overview.html#Transforming_Images_from_the_Blobstore), e.g. creating thumbnails. Transformed images are also automatically cached. 5. Binary blobs in Datastore are limited to 1Mb in size. One downside of BlobStore is that it has no access controls. Anybody with an URL to blob can download it. If you need ACL (Access Control List) take a look at [Google Cloud Storage](http://code.google.com/appengine/docs/java/googlestorage/). **Update:** Cost wise the biggest saving will come from properly caching the images: 1. Every image should have a permanent URL. 2. Every image URL should be served with proper cache control HTTP headers: ``` // 32M seconds is a bit more than one year Cache-Control: max-age=32000000, must-revalidate ``` you can do this in java via: ``` httpResponse.setHeader("Cache-Control", "max-age=32000000, must-revalidate"); ``` **Update 2:** As Dan correctly points out in the comments, BlobStore data is served via a frontend instance, so access controls can be implemented by user code.
What is this backtick at the beginning of a directory name? (perl) I am trying to understand a program. Correct me if I'm wrong, but backticks are used to execute commands in a shell, so I'm not sure what its purpose is in the following code: ``` my $end = $` if $dir =~ m/\/foldername/; foreach my $folder (@dirs_) { my $start_from = "$dir" . "\/" . "$folder"; my $move_to = "$end" . "\/" . "$folder"; rmtree $move_to; dircopy($start_from, $move_to); } ```
It's not very pretty, is it. The `$`` variable is one of the trinity `$``, `$&` and `$'` which represent the pre-match, match, and post-match parts of the last string that was subjected to a successful regex comparison. For instance, if I have ``` my $s = 'abcdef'; ``` then after ``` $s =~ /c./; ``` you will find that `$`` is `ab`, `$&` is `cd`, and `$'` is `ef`. It's important to remember that, just like the capture variables `$1`, `$2` etc., these three are unaffected by failed regex matches. (They are not set to `undef`.) So it's vital to check whether a regex pattern matched before using any of them. This is archaic Perl, maintained primarily for backward compatibility. It was a good idea at the time because Perl was keeping close to shell syntax (as were awk and sed, which still do). Nowadays it is best to use *regex captures*, or perhaps `substr` in conjunction with the newer `@-` and `@+` arrays. All of the special built-in variables are documented in [perldoc perlvar](http://perldoc.perl.org/perlvar.html)
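To make the "use captures instead" advice concrete, here is a hedged sketch of how the snippet from the question might be rewritten; the imports are inferred from the calls (`rmtree` from File::Path, `dircopy` from File::Copy::Recursive) and `$dir`/`@dirs_` are assumed to be set by the surrounding program:

```perl
use strict;
use warnings;
use File::Path qw(rmtree);
use File::Copy::Recursive qw(dircopy);

# Capture everything before the first "/foldername" instead of relying on $`
my $end;
if ( $dir =~ m{^(.*?)/foldername} ) {
    $end = $1;
}
die "no /foldername component in '$dir'\n" unless defined $end;

for my $folder (@dirs_) {
    my $start_from = "$dir/$folder";
    my $move_to    = "$end/$folder";
    rmtree($move_to);
    dircopy( $start_from, $move_to );
}
```

This also avoids the `my $end = ... if ...;` construct from the original, which leaves `$end` in an unreliable state when the condition is false.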
How to change knitr options mid chunk Hi I would like to change chunk options, mid chunk, without having to create a new chunk.. running the following code I would expect to get two very different size outputs, but for some reason this does not seem to be the case. Also the second plot doesn't plot at all...(it does when you change it to plot(2:1000)...but either way the second output is the same size as the first. both `fig.width=7`. What am I doing wrong? Pls note the importance of 'mid chunk' the reason for this is that I would like to change the chunk options several times when running a function to get different outputs of different sizes. ``` ```{r} sessionInfo() opts_chunk$set(fig.width=3) plot(1:1000) opts_chunk$set(fig.width=10) plot(1:1000) ``` ``` the sessionInfo output is as follows: ``` ## R version 2.15.1 (2012-06-22) ## Platform: i386-pc-mingw32/i386 (32-bit) ## ## locale: ## [1] LC_COLLATE=English_United Kingdom.1252 ## [2] LC_CTYPE=English_United Kingdom.1252 ## [3] LC_MONETARY=English_United Kingdom.1252 ## [4] LC_NUMERIC=C ## [5] LC_TIME=English_United Kingdom.1252 ## ## attached base packages: ## [1] stats graphics grDevices datasets utils methods base ## ## other attached packages: ## [1] knitr_0.7 ## ## loaded via a namespace (and not attached): ## [1] digest_0.5.2 evaluate_0.4.2 formatR_0.5 parser_0.0-16 ## [5] plyr_1.7.1 Rcpp_0.9.13 stringr_0.6 tools_2.15.1 ``` ![html preview](https://i.stack.imgur.com/7tpjL.jpg)
Two questions: When you want both figures to be keep, use ``` ```{r fig.keep='all'} ``` Default only keeps the unique plots (because your two plots are identical, the second one is removed; see the [knitr graphics manual](https://github.com/downloads/yihui/knitr/knitr-graphics.pdf) for details). Global chunk options are active when the next chunk(s) open: ``` ```{r} opts_chunk$set(fig.width=10) ``` ```{r} opts_chunk$set(fig.width=2) # Our figure is 10 wide, not 2 plot(1:1000) ``` ```{r} # Our figure is 2 wide, not 10 opts_chunk$set(fig.width=10) plot(1:1000) ``` ```
How do you get over tooling problems in a communal open-source project? Two different teams (from different companies) are uniting to work on a communal open-source project. Agreeing on technical design is something we have no trouble with, but I'm struggling with tooling/workflow problems. I really like a BDD testing tool called `phpspec` (analogous to RSpec), whilst a lot of my teammates stick to what they know (`phpunit`) regardless of the pros and cons of either tool. How do you move forward with a project when members are in disagreement like this? Should you enforce a standard testing tool? Is there a way of using both? I think it boils down to whether members will break out of their comfort zones to learn new technologies that are better for the job. I'm of the strong opinion that you should always be willing to learn new things, but I get the impression that others are purely concerned with getting things done in the quickest way possible - thereby using tools they've used before.
I think you answered your own question; whilst you'd like to use 1 tool, "a lot" of your colleagues prefer to use something else. Without a 'boss' to decide tooling, the majority rules and you need to go with the preferred tool. Now, there's no reason why you can't try to engage with them in discussion of tooling. You will have to take the initiative and lead the debate around changing, you will also have to persuade them to agree with you and that will take a bit of effort. Think of this question from their point of view. Unless you collaborate with them, one of them might post a question "how to deal with one developer who insists on using a different tool to our standard?". How would you react if one developer decided to change a tool you have already agreed to use? Ultimately you have to agree to use common tools, just like common design. If you don't there will be chaos.
Is it always the case that sizeof(T) >= alignof(T) for all object types T? For any object type `T` is it always the case that `sizeof(T)` is at least as large as `alignof(T)`? Intuitively it seems so, since even when you adjust the alignment of objects like: ``` struct small { char c; }; ``` above what it would normally be, their "size" is also adjusted upwards so that the relationship between objects in an array makes sense while maintaining alignment (at least in my [testing](http://coliru.stacked-crooked.com/a/d27a419d81c12f2d)). For example: ``` struct alignas(16) small16 { char c; }; ``` Has both a size and alignment of 16.
At least in standard C++, for anything you can make an array of (with length > 1), this will have to be true. If you have ``` Foo arr[2]; ``` and `alignof(Foo) > sizeof(Foo)`, then `arr[0]` and `arr[1]` can't both be aligned. As [Zalman Stern's example](https://stackoverflow.com/a/46457954) shows, though, at least some compilers will allow you to declare a type with alignment greater than its size, with the result that the compiler simply won't let you declare an array of that type. This is not standards-compliant C++ (it uses type attributes, which [are a GCC extension](https://gcc.gnu.org/onlinedocs/gcc-7.2.0/gcc/Common-Type-Attributes.html)), but it means that you can have `alignof(T) > sizeof(T)` in practice. The array argument assumes `sizeof(Foo) > 0`, which is true for any type supported by the standard, but [o11c shows](https://stackoverflow.com/a/46460970) an example where compiler extensions break that guarantee: some compilers allow 0-length arrays, with 0 `sizeof` and positive `alignof`.
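A small standard-C++ illustration of that array argument (the struct names are only for the example):

```cpp
#include <cstddef>

struct small            { char c; };   // sizeof 1, alignof 1
struct alignas(16) wide { char c; };   // both sizeof and alignof become 16

static_assert(sizeof(small) >= alignof(small), "holds for ordinary types");
static_assert(sizeof(wide)  >= alignof(wide),  "padding keeps this true");

// Each array element occupies a whole alignment slot, so arr[0] and arr[1]
// can both land on 16-byte boundaries.
static_assert(sizeof(wide[2]) == 2 * alignof(wide), "no gaps inside the array");
```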
Why am I getting "rawModule is undefined" when adding Vuex modules? I recently was struggling with implementing modules in Vuex for the first time. I couldn't find much info on the console error message I was getting ( `rawModule is undefined` ), so I thought I'd share the issue I ran into and the solution. I was doing a quick, simple version of a module implementation as I was working through some examples: ``` export const store = new Vuex.Store({ state: { loggedIn: false, user: {}, destination: '' }, mutations: { login: state => state.loggedIn = true, logout: state => state.loggedIn = false, updateUser: ( state, user ) => { state.user = user }, updateDestination: ( state, newPath ) => { state.destination = newPath } }, modules: { project }, }); const project = { state: {} } ```
The issue ultimately was that I had declared my module *after* I tried to add it to the Vuex store. I had thought it would have been okay to declare the module later thanks to variable hoisting, but that doesn't appear to be the case. Here is the code that does work: ``` const project = { state: {} } export const store = new Vuex.Store({ state: { loggedIn: false, user: {}, destination: '' }, mutations: { login: state => state.loggedIn = true, logout: state => state.loggedIn = false, updateUser: ( state, user ) => { state.user = user }, updateDestination: ( state, newPath ) => { state.destination = newPath } }, modules: { project }, }); ``` Hopefully this saves some people some time. I didn't see anything in the documentation requiring a certain ordering, so I'm surprised it mattered. If anyone has some insight into why it works this way, I'd be really interested in hearing it! Perhaps because the `Vuex.Store()` function gets called before the `project` value is set, so the project module's value is encapsulated as `undefined`, and that causes the error?
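Regarding the closing question about hoisting: declarations are hoisted, but initialisations are not, and `let`/`const` bindings cannot be read before their declaration line at all. A minimal sketch in plain JavaScript, independent of Vuex:

```javascript
// Function declarations are hoisted together with their bodies:
hoisted(); // works
function hoisted() {}

// var declarations are hoisted, but the assignment still happens in place:
console.log(value); // undefined, not 1
var value = 1;

// let/const sit in the "temporal dead zone" until their declaration runs:
// console.log(later); // would throw a ReferenceError
const later = 2;
```

So by the time `Vuex.Store({ ... })` runs in the original snippet, the `project` binding has not been initialised yet; depending on how the module is transpiled, reading it early surfaces as either `undefined` or a ReferenceError, which matches the behaviour described above.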
HowTo setup Tomcat serving two SSL Certificates using SNI? According to these two answers [[1]](https://stackoverflow.com/a/6343059) [[2]](https://stackoverflow.com/a/10173447) it's possible to have two SSL certificates serving from the same [**`Apache Tomcat`**](http://tomcat.apache.org/) using Server Name Indication (SNI). My question is then, how to setup this? I could setup two virtual hosts but I still have then just one connector which presents the specified SSL certificate to the client. In the connector one can specify the keystore and alias to use for the certificate but there is no parameter saying for which virtual host this connector is for or which certificate he should present to the client according to the used domain. How can I tell tomcat which SSL certificate (or to be more correct which keystore) he has to use while using SNI? [1] [https://stackoverflow.com/a/10173447](https://stackoverflow.com/a/6343059) [2] [https://stackoverflow.com/a/6343059](https://stackoverflow.com/a/10173447)
You need to re-read the answers to those questions. SNI is not supported on the server side until Java 8. The minimum Java version that Tomcat 8 has to support is Java 7, so at the moment there is no SNI support in Tomcat. It may be possible to optionally support SNI if Tomcat is running on Java 8 or later but that would need code changes in Tomcat for which there are currently no plans. **Update as of December 2014:** Adding SNI support is on the [TODO](http://svn.apache.org/viewvc/tomcat/trunk/TOMCAT-NEXT.txt?view=annotate) list for Tomcat 9. That TODO list is quite long and SNI is not currently at the top of the list. As always patches are welcome. Once SNI is implemented in Tomcat 9 it is possible that SNI support might be back-ported to Tomcat 7 and Tomcat 8. Again, patches welcome. **Update as of June 2015:** SNI has been implemented for Tomcat 9. It is supported by all three HTTP connector implementations (NIO, NIO2 and APR/native). To use SNI with NIO or NIO2 you will need to compile Tomcat 9 (a.k.a. trunk) from source. To use SNI with APR/native you will also need to compile tc-native trunk (**not the 1.1.x branch currently used by the Tomcat releases**). TLS configuration has changed significantly to support SNI. Details will be in the docs web application once you have built Tomcat 9. **Update as of November 2016:** SNI support is included in Tomcat 8.5.x. It is unlikely it will be back-ported further. i.e. It is unlikely to make it to 8.0.x or 7.0.x.
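For readers arriving here on Tomcat 8.5 or 9, the SNI configuration ends up looking roughly like the sketch below: one `<SSLHostConfig>` per hostname inside a single connector. The hostnames, keystore paths and passwords are placeholders.

```xml
<Connector port="8443"
           protocol="org.apache.coyote.http11.Http11NioProtocol"
           SSLEnabled="true"
           defaultSSLHostConfigName="www.example.com">

    <SSLHostConfig hostName="www.example.com">
        <Certificate certificateKeystoreFile="conf/example-com.jks"
                     certificateKeystorePassword="changeit"
                     type="RSA" />
    </SSLHostConfig>

    <SSLHostConfig hostName="www.example.org">
        <Certificate certificateKeystoreFile="conf/example-org.jks"
                     certificateKeystorePassword="changeit"
                     type="RSA" />
    </SSLHostConfig>
</Connector>
```

Tomcat selects the `SSLHostConfig` whose `hostName` matches the SNI value sent by the client, and falls back to `defaultSSLHostConfigName` otherwise.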
Horizontal UIStackView with two label (one multiline label, one one-line) I have a horizontal `UIStackView` which has two `UILabel` in it. First UILabel is multiline (two line) and other one is one line only. They both have default content compression and resistance priorities. **My problem** is that even there is a gap between labels, "best" word in first text goes second line. I noticed that first label doesn't goes beyond half of total width. **What I want** is that second label should always show itself and first label should size It self for remaining space. If It can't fit to one line It should be two line. However, If second label is too short and first label is a long one but both of them can fit, first label should go beyond half of the width. **P.S** I need to use UIStackView in this scenario because there are other cases. I know putting two label inside UIView may solve the problem. ``` UIStackView: - Distribution: Horizontal - Alignment: Center - Spacing: 0 UILabel: - Number of line: 2 - Line break: Word wrap UILabel: - Number of line: 1 ``` [![Screenshot](https://i.stack.imgur.com/4gxe1.png)](https://i.stack.imgur.com/4gxe1.png) View hierarchy: [![enter image description here](https://i.stack.imgur.com/Mhe3G.png)](https://i.stack.imgur.com/Mhe3G.png) **Desired Result:** [![enter image description here](https://i.stack.imgur.com/xTISc.png)](https://i.stack.imgur.com/xTISc.png) **OR** [![enter image description here](https://i.stack.imgur.com/6zaXX.png)](https://i.stack.imgur.com/6zaXX.png) **EDIT:** I calculate the width of second label and give width constraint. I think It solved my problem, I'll test a bit. ``` //Give specific width to second label to make first one calculate right number of lines. if let font = UIFont(name: "Metropolis-ExtraBold", size: 15) { let fontAttributes = [NSAttributedString.Key.font: font] let size = (secondLabelText as NSString).size(withAttributes: fontAttributes) secondLabel.widthAnchor.constraint(equalToConstant: size.width).isActive = true } ```
To try and simplify... Forget calculating any widths... what matters is the horizontal `Content Hugging` and `Content Compression Resistance` Leave the left (blue) label at the defaults: ``` Content Hugging Priority Horizontal: 251 Content Compression Resistance Priority: Horizontal: 750 ``` But set the right (orange) label to: ``` Content Hugging Priority Horizontal: 1000 Content Compression Resistance Priority: Horizontal: 1000 ``` Results: [![enter image description here](https://i.stack.imgur.com/yLd4Z.png)](https://i.stack.imgur.com/yLd4Z.png) The only other issue would be if the text in the right-side label exceeds the full width of the view -- but you haven't indicated that you might need that much text.
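The same priorities can be applied in code instead of the storyboard; a short Swift sketch, assuming both labels are already arranged subviews of the horizontal stack view:

```swift
// Left (multi-line) label: keep the default priorities.
leftLabel.numberOfLines = 2

// Right (one-line) label: never squeeze it, never stretch it.
rightLabel.numberOfLines = 1
rightLabel.setContentHuggingPriority(.required, for: .horizontal)
rightLabel.setContentCompressionResistancePriority(.required, for: .horizontal)
```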
Python Pandas Dataframe GroupBy Size based on condition I have a dataframe 'df' that looks like this: ``` id date1 date2 1 11/1/2016 11/1/2016 1 11/1/2016 11/2/2016 1 11/1/2016 11/1/2016 1 11/1/2016 11/2/2016 1 11/2/2016 11/2/2016 2 11/1/2016 11/1/2016 2 11/1/2016 11/2/2016 2 11/1/2016 11/1/2016 2 11/2/2016 11/2/2016 2 11/2/2016 11/2/2016 ``` What I would like to do is to groupby the id, then get the size for each id where date1=date2. The result should look like: ``` id samedate count 1 11/1/2016 2 1 11/2/2016 1 2 11/1/2016 2 2 11/2/2016 2 ``` I have tried this: ``` gb=df.groupby(id').apply(lambda x: x[x.date1== x.date2]['date1'].size()) ``` And get this error: ``` TypeError: 'int' object is not callable ``` You could certainly flag each instance where the date1 and date2 are equal, then count those flags for each id by each samedate, but I have to believe there is a groupby option for this.
You can use [`boolean indexing`](http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing) first and then aggregate [`size`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.size.html): ``` df.date1 = pd.to_datetime(df.date1) df.date2 = pd.to_datetime(df.date2) df = df[df.date1 == df.date2] gb=df.groupby(['id', 'date1']).size().reset_index(name='count') print (gb) id date1 count 0 1 2016-11-01 2 1 1 2016-11-02 1 2 2 2016-11-01 2 3 2 2016-11-02 2 ``` **Timings**: ``` In [79]: %timeit (df[df.date1 == df.date2].groupby(['id', 'date1']).size().reset_index(name='count')) 100 loops, best of 3: 3.84 ms per loop In [80]: %timeit (df.groupby(['id', 'date1']).apply(lambda x: (x['date1'] == x['date2']).sum()).reset_index()) 100 loops, best of 3: 7.57 ms per loop ``` **Code for timings**: ``` #len df = 10k df = pd.concat([df]*1000).reset_index(drop=True) #print (df) df.date1 = pd.to_datetime(df.date1) df.date2 = pd.to_datetime(df.date2) ```
Catalyst - How to skip rendering a view In one of my controllers, I'm doing some SSE async streaming (see [here](http://search.cpan.org/~jjnapiork/Catalyst-Runtime-5.90064/lib/Catalyst/Response.pm#%24res-%3Ewrite_fh)), and I have it working great in a barebones test project. In the test project, I don't have a default view set, so it seems to just pass through - perfect! Now I'm trying to put it into my existing larger project, however, I'm finding that forwarding it to any view messes it up and I can't figure out how to simply skip the rendering of a view. Because I have a default view now, it refuses to just pass through. I've blindly tried a few things: `$c->detach`, `$c->forward(undef)`, overriding the "**end**" method. None have succeeded in skipping the view rendering - it always passes it on to my default view. Any ideas? **Edit** Not super relevant, but the action in question: ``` sub time_server : Path('/events') { my ( $self, $c ) = @_; $c->res->content_type('text/event-stream'); $timer_model->( $c, $c->response->write_fh ); } ```
`Catalyst::Action::Renderview` has a small set of criteria it uses when deciding whether or not to call the view. It will skip running the view if: - The request was a `HEAD` request. - Something has already set `$c->response->body` to a defined value. - `$c->response->status` is set to 204 ("No Content") or any 3xx (redirection). - `$c->error` contains one or more errors and `$c->stash->{template}` hasn't been set (so that `finalize_error` can do its job instead). Honestly this isn't the best possible arrangement, but what I would try in your situation is setting `$c->res->body("");` in your `time_server` action. An empty body won't write anything, and your headers are already finalized since you've called `write_fh`, but an empty string is still defined so it'll keep RenderView from doing anything.
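Applied to the action from the question, that suggestion would look roughly like this (untested sketch):

```perl
sub time_server : Path('/events') {
    my ( $self, $c ) = @_;

    $c->res->content_type('text/event-stream');

    # A defined (empty) body keeps Catalyst::Action::RenderView
    # from forwarding the request to the default view.
    $c->res->body(q{});

    $timer_model->( $c, $c->response->write_fh );
}
```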
Visual Studio for SSRS 2008 - How to organize reports into subfolders in Solution Explorer? Right now I have a project called reports with several reports. In solution explorer it looks like this: ``` Shared Data Sources -- DEV Reports -- Report1 -- Report2 -- Report3 ``` I want to make it look like this and have the same structure carry over to the report manager website when I click deploy. ``` Shared Data Sources -- DEV Folder A -- Report1 Folder B -- Report2 -- Report3 ``` Anyone know how to do this?
I'm using SSRS 2005 - I *think* this part of it works in the same way as 2008. As far as I can tell, you can't have folders within projects, but you **can** have multiple projects within a solution. To create a new folder, right-click on the solution in the Solution Explorer and select Add>New Project... Type in your new Project Name (eg. MyProject), and select Report Server Project from the list of Visual Studio installed templates. Click on OK, and your new Project should appear at the end of the list of projects in the Solution Explorer. (There are other ways of setting up a new Reports project, but this seems to be the quickest.) If you now right-click on your new Report Project and select Properties, you can see the TargetReportFolder, which will default to your new Project Name (eg. MyProject). When you deploy reports from SSRS, they are deployed to this location. (You can change the location, if you wish - I find it easier to keep track of what's going where by using the Project name.) You will need to copy any data sources to be used in each project, into the data sources folder of all projects that use that data source. By default, OverwriteDataSources is set to false, so when you deploy a new report, it will use the data source already deployed to the Report Manager environment. So to get the Report Manager structure that you want to see: - Create Projects called Folder A and Folder B - Move/copy Report1 into the Reports folder in Project Folder A - Move/copy Report2 and Report3 into the Reports folder in Project Folder B - Move/copy data source DEV into the Shared Data Sources folders in Projects Folder A and Folder B - Deploy your reports Don't forget to check your changes into source control.
Using JavaScript to properly sign a string using HmacSHA256 In the Houndify API Docs for Authentication, you have the following block of content: --- ### An Example of Authenticating a Request Let's assume we have the following information: ``` UserID: ae06fcd3-6447-4356-afaa-813aa4f2ba41 RequestID: 70aa7c25-c74f-48be-8ca8-cbf73627c05f Timestamp: 1418068667 ClientID: KFvH6Rpy3tUimL-pCUFpPg== ClientKey: KgMLuq-k1oCUv5bzTlKAJf_mGo0T07jTogbi6apcqLa114CCPH3rlK4c0RktY30xLEQ49MZ-C2bMyFOVQO4PyA== ``` 1. Concatenate the UserID string, RequestID string, and TimeStamp string in the following format: `{user_id};{request_id}{timestamp}` 2. With the values from the example, the expected output would be in this case: `ae06fcd3-6447-4356-afaa-813aa4f2ba41;70aa7c25-c74f-48be-8ca8-cbf73627c05f1418068667` 3. Sign the message with the decoded ClientKey. The result is a 32-byte binary string (which we can’t represent visually). After base-64 encoding, however, the signature is: `myWdEfHJ7AV8OP23v8pCH1PILL_gxH4uDOAXMi06akk=` 4. The client then generates two authentication headers *Hound-Request-Authentication* and *Hound-Client-Authentication*. 5. The Hound-Request-Authentication header is composed by concatenating the UserID and RequestID in the following format: `{user-id};{request-id}`. Continuing the example above, the value for this header would be: Hound-Request-Authentication: `ae06fcd3-6447-4356-afaa-813aa4f2ba41;70aa7c25-c74f-48be-8ca8-cbf73627c05f` 6. The Hound-Client-Authentication header is composed by concatening the ClientID, the TimeStamp string and the signature in the following format: `{client-id};{timestamp};{signature}`. Continuing the example above, the value for this header would be: `Hound-Client-Authentication: KFvH6Rpy3tUimL-pCUFpPg==;1418068667;myWdEfHJ7AV8OP23v8pCH1PILL_gxH4uDOAXMi06akk=` --- For Number 3, it says "Sign the message with the decoded ClientKey". The "message" and "ClientKey" are two distinct strings. My question(s): How do you sign one string with another string i.e. what exactly does that mean? And how would you do that in JavaScript? ``` var message = 'my_message'; var key = 'signing_key'; //??what next?? ``` I'm trying to figure all this out so I can create a pre-request script in Postman to do a proper HmacSHA256 hash.
According to the documentation, if you're using one of their SDKs, it will automatically authenticate your requests: > > SDKs already handle authentication for you. You just have to provide > the SDK with the Client ID and Client Key that was generated for your > client when it was created. If you are not using an SDK, use the code > example to the right to generate your own HTTP headers to authenticate > your request. > > > However, if you want to do it manually, I believe you need to compute the [HMAC](https://en.wikipedia.org/wiki/Hash-based_message_authentication_code) value of the string they describe in the link in your question and then send it base64 encoded as part of the `Hound-Client-Authentication` header in your requests. They provide an [example for node.js](https://www.houndify.com/docs/code-examples): ``` var uuid = require('node-uuid'); var crypto = require('crypto'); function generateAuthHeaders (clientId, clientKey, userId, requestId) { if (!clientId || !clientKey) { throw new Error('Must provide a Client ID and a Client Key'); } // Generate a unique UserId and RequestId. userId = userId || uuid.v1(); // keep track of this requestId, you will need it for the RequestInfo Object requestId = requestId || uuid.v1(); var requestData = userId + ';' + requestId; // keep track of this timestamp, you will need it for the RequestInfo Object var timestamp = Math.floor(Date.now() / 1000), unescapeBase64Url = function (key) { return key.replace(/-/g, '+').replace(/_/g, '/'); }, escapeBase64Url = function (key) { return key.replace(/\+/g, '-').replace(/\//g, '_'); }, signKey = function (clientKey, message) { var key = new Buffer(unescapeBase64Url(clientKey), 'base64'); var hash = crypto.createHmac('sha256', key).update(message).digest('base64'); return escapeBase64Url(hash); }, encodedData = signKey(clientKey, requestData + timestamp), headers = { 'Hound-Request-Authentication': requestData, 'Hound-Client-Authentication': clientId + ';' + timestamp + ';' + encodedData }; return headers; }; ```
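Since the question specifically asks about a Postman pre-request script, here is a hedged adaptation of the same signing logic using the CryptoJS object available in Postman's script sandbox. The environment variable names and IDs are placeholders, and this follows the Houndify scheme quoted above rather than any official Postman example:

```javascript
var clientId  = pm.environment.get('houndClientId');   // e.g. "KFvH6Rpy3tUimL-pCUFpPg=="
var clientKey = pm.environment.get('houndClientKey');  // base64url-encoded key
var userId    = pm.environment.get('houndUserId');
var requestId = pm.variables.replaceIn('{{$guid}}');
var timestamp = Math.floor(Date.now() / 1000);

var requestData = userId + ';' + requestId;

// "Sign the message with the decoded ClientKey": decode base64url to bytes,
// HMAC-SHA256 the message with those bytes, then base64url-encode the result.
var keyBytes  = CryptoJS.enc.Base64.parse(clientKey.replace(/-/g, '+').replace(/_/g, '/'));
var signature = CryptoJS.HmacSHA256(requestData + timestamp, keyBytes)
                        .toString(CryptoJS.enc.Base64)
                        .replace(/\+/g, '-')
                        .replace(/\//g, '_');

pm.request.headers.add({ key: 'Hound-Request-Authentication', value: requestData });
pm.request.headers.add({ key: 'Hound-Client-Authentication',
                         value: clientId + ';' + timestamp + ';' + signature });
```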
What does $IFS$() mean? Just as the title states, I'm confused about `$IFS$()`, I saw it in a website which said that `$IFS$()` can replace the blank space, but I don't know how. Can anyone help me?
By `$IFS$()` they probably mean they change IFS from default white space, to end of string. From bash manpages: > > IFS The Internal Field Separator that is used for word splitting after expansion and to split lines into words with the read builtin command. The default value is ```<space><tab><newline>''`. > > > They mean they do `IFS=$()` which acts the same as doing `IFS=$'\0'`, declaring that field separator is null character, which marks end of a string. `$()` means return the output of command inside parenthesizes, which the is the same as you just pressing Enter key in terminal. Example: ``` $ cat test 1 2 3 4 5 6 7 8 9 ``` It will take every number as new variable, because every whitespace (be it single space, tab or new line is considered field separator) ``` $ for i in $(cat test); do echo $i; done 1 2 3 4 5 6 7 8 9 ``` If we change IFS to `$()`, output is the same as is in the file: ``` $ IFS=$();for i in $(cat test); do echo $i; done 1 2 3 4 5 6 7 8 9 ``` Unset IFS and it goes back to looking whitespace as IFS ``` $ unset IFS $ for i in $(cat test); do echo $i; done 1 2 3 4 5 6 7 8 9 ``` you can similarly make IFS change to null character with `$'\0'` ``` $ IFS=$'\0';for i in $(cat test); do echo $i; done 1 2 3 4 5 6 7 8 9 ``` `IFS=$()` is basically the same as `IFS=` or `IFS=""`, you are declaring it equal to empty string so bash looks for end of strings as separators.
freertos vTaskDelete(NULL) no free memory I am starting to learn FreeRTOS. Just now I am trying to make a print task function with this code: ``` static void vTaskPrint(void *pvParameters) { taskENTER_CRITICAL(); printf("%s", (char *)pvParameters); printf("xPortGetFreeHeapSize: %d\r\n", xPortGetFreeHeapSize()); taskEXIT_CRITICAL(); vTaskDelete(NULL); } ``` But after 14 calls to: ``` xTaskCreate(vTaskPrint, (char *)"vTaskPrint", configMINIMAL_STACK_SIZE, (void *)buffer, 3, (xTaskHandle *)NULL); ``` The cortex M3 run out of memory: ``` ************** TEST GPIO & LEDS FREERTOS OPEN103Z EVB ************** vTaskLeds xPortGetFreeHeapSize: 7832 vTaskReadKeys xPortGetFreeHeapSize: 7232 [INFO] vTaskPrint created successful Key pressed RIGHT xPortGetFreeHeapSize: 6632 [INFO] vTaskPrint created successful Key pressed CENTER KEY xPortGetFreeHeapSize: 6032 [INFO] vTaskPrint created successful Key pressed RIGHT xPortGetFreeHeapSize: 5432 [INFO] vTaskPrint created successful Key pressed LEFT xPortGetFreeHeapSize: 4832 [INFO] vTaskPrint created successful Key pressed RIGHT xPortGetFreeHeapSize: 4232 [INFO] vTaskPrint created successful Key pressed LEFT xPortGetFreeHeapSize: 3632 [INFO] vTaskPrint created successful Key pressed RIGHT xPortGetFreeHeapSize: 3032 [INFO] vTaskPrint created successful Key pressed LEFT xPortGetFreeHeapSize: 2432 [INFO] vTaskPrint created successful Key pressed RIGHT xPortGetFreeHeapSize: 1832 [INFO] vTaskPrint created successful Key pressed LEFT xPortGetFreeHeapSize: 1232 [INFO] vTaskPrint created successful Key pressed RIGHT xPortGetFreeHeapSize: 632 [INFO] vTaskPrint created successful Key pressed LEFT xPortGetFreeHeapSize: 32 [INFO] vTaskPrint created successful [ERROR] vTaskPrint not created successful ``` How can I free memory?
When you delete a task, the memory allocated to the task is freed from the Idle task. Therefore you have to let the idle task run at some point. This is clearly stated in the documentation for the function you are calling: <http://www.freertos.org/a00126.html> In most applications the idle task will be the task that runs most, so your test is somewhat artificial. As an aside: it is really not a good idea to call long functions such as printf from a critical section, and you are breaking the FreeRTOS API usage rules by calling xPortGetFreeHeapSize() from a critical section (although you will probably get away with it in this case, the general rule of thumb is not to call RTOS API functions from a critical section or when the scheduler is locked). The FAQ page covering this is here: <http://www.freertos.org/FAQHelp.html> There is lots of information, plus a dedicated support forum with FreeRTOS experts waiting, all available for your free use on the FreeRTOS website, so I never understand why people ask FreeRTOS questions anywhere else.
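A minimal sketch of the point about letting the idle task run: after creating the short-lived task, block the creating task briefly so the idle task (priority 0) gets CPU time and can reclaim the memory of anything that called `vTaskDelete(NULL)`. Note that `pdMS_TO_TICKS` exists in recent FreeRTOS versions; older ports use `portTICK_RATE_MS` instead.

```c
xTaskCreate( vTaskPrint, "vTaskPrint", configMINIMAL_STACK_SIZE,
             (void *) buffer, 3, NULL );

/* The creating task runs at priority 3, above the idle task, so it must
   block for the idle task to run and free the deleted task's TCB/stack. */
vTaskDelay( pdMS_TO_TICKS( 10 ) );
```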
Difference between pushing a docker image and installing helm image I need to learn a CI pipeline in which there is a step for building and pushing an image using a Dockerfile and another step for creating a helm chart image in which there is a definition of the image created by the docker file. After that, there's a CD pipeline in which there's an installation of what was created by the helm chart only. What is the difference between the image created directly by a Dockerfile and the one which is created by the helm chart? Why isn't the Docker image enough?
**Amount to effort** To deploy a service on Kubernetes using docker image you need to manually create various configuration files like `deployment.yaml`. Such files keep on increasing as you have more and more services added to your environment. In the Helm chart, we can provide a list of all services that we wish to deploy in `requirements.yaml` file and Helm will ensure that all those services get deployed to the target environment using `deployment.yaml`, `service.yaml` & `values.yaml` files. **Configurations to maintain** Also adding configuration like routing, config maps, secrets, etc becomes manually and requires configuration over-&-above your service deployment. For example, if you want to add an Nginx proxy to your environment, you need to separately deploy it using the Nginx image and all the proxy configurations for your functional services. But with Helm charts, this can be achieved by configuring just one file within your Helm chart: `ingress.yaml` **Flexibility** Using docker images, we need to provide configurations for each environment where we want to deploy our services. But using the Helm chart, we can just override the properties of the existing helm chart using the environment-specific `values.yaml` file. This becomes even easier using tools like `ArgoCD`. **Code-Snippet:** Below is one example of `deployment.yaml` file that we need to create if we want to deploy one service using docker-image. Inline, I have also described how you could alternatively populate a generic `deployment.yaml` template in Helm repository using different files like `requirements.yaml` and `Values.yaml` **deployment.yaml for one service** ``` crazy-project/charts/accounts/templates/deployment.yaml apiVersion: apps/v1 kind: Deployment metadata: name: accounts spec: replicas: 1 selector: matchLabels: app.kubernetes.io/name: accounts app.kubernetes.io/instance: crazy-project template: metadata: labels: app.kubernetes.io/name: accounts app.kubernetes.io/instance: crazy-project spec: serviceAccountName: default automountServiceAccountToken: true imagePullSecrets: - name: regcred containers: - image: "image.registry.host/.../accounts:1.2144.0" <-- This version can be fetched from 'requirements.yaml' name: accounts env: <-- All the environment variables can be fetched from 'Values.yaml' - name: CLUSTERNAME value: "com.company.cloud" - name: DB_URI value: "mongodb://connection-string&replicaSet=rs1" imagePullPolicy: IfNotPresent volumeMounts: - name: secretfiles mountPath: "/etc/secretFromfiles" readOnly: true - name: secret-files mountPath: "/etc/secretFromfiles" readOnly: true ports: - name: HTTP containerPort: 9586 protocol: TCP resources: requests: memory: 450Mi cpu: 250m limits: memory: 800Mi cpu: 1 volumes: - name: secretFromfiles secret: secretName: secret-from-files - name: secretFromValue secret: secretName: secret-data-vault optional: true items:... ``` Your `deployment.yaml` in Helm chart could be a generic template(code-snippet below) where the details are populated using `values.yaml` file. ``` env: {{- range $key, $value := .Values.global.envVariable.common }} - name: {{ $key }} value: {{ $value | quote }} {{- end }} ``` Your `Values.yaml` would look like this: ``` accounts: imagePullSecrets: - name: regcred envVariable: service: vars: spring_data_mongodb_database: accounts_db spring_product_name: crazy-project ... ``` Your `requirements.yaml` would be like below. 'dependencies' are the services that you wish to deploy. 
``` dependencies: - name: accounts repository: "<your repo>" version: "= 1.2144.0" - name: rollover repository: "<your repo>" version: "= 1.2140.0" ``` The following diagram will help you visualize what I have mentioned above: [![Docker image v/S Helm chart](https://i.stack.imgur.com/gdzeC.png)](https://i.stack.imgur.com/gdzeC.png)
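To round off the flexibility point from the CD side: once the chart exists, deploying the same image to different environments is a single command with environment-specific value overrides. The release, chart and file names below are hypothetical.

```bash
# Staging
helm upgrade --install crazy-project ./crazy-project \
     --namespace staging \
     -f values.yaml -f values-staging.yaml

# Production: same chart, different overrides and image tag
helm upgrade --install crazy-project ./crazy-project \
     --namespace production \
     -f values.yaml -f values-production.yaml \
     --set accounts.image.tag=1.2144.0
```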
Mock Principal for Spring Rest controller I have the following REST controller: ``` @RequestMapping(path = "", method = RequestMethod.GET) public ExtendedGetUserDto getCurrentUser(Principal principal) { CustomUserDetails userDetails = userDetailsService.loadByUsername(principal.getName()) // ..... } ``` `CustomUserDetails` has a number of fields, including `username` and `password` I want to mock principal in the controller method (or pass from test to the controller method). How should I do that ? I read many posts, but none of them actually answered this question. **Edit 1** ``` @Test public void testGetCurrentUser() throws Exception { RequestBuilder requestBuilder = MockMvcRequestBuilders.get( USER_ENDPOINT_URL).accept(MediaType.APPLICATION_JSON); MvcResult result = mockMvc.perform(requestBuilder).andReturn(); MockHttpServletResponse response = result.getResponse(); int status = response.getStatus(); Assert.assertEquals("response status is wrong", 200, status); } ```
You can mock a principal in your test case, set some expectations on it and then pass this mock down the mvc call using `MockHttpServletRequestBuilder.principal()`. I've updated your example: ``` @Test public void testGetCurrentUser() throws Exception { Principal mockPrincipal = Mockito.mock(Principal.class); Mockito.when(mockPrincipal.getName()).thenReturn("me"); RequestBuilder requestBuilder = MockMvcRequestBuilders .get(USER_ENDPOINT_URL) .principal(mockPrincipal) .accept(MediaType.APPLICATION_JSON); MvcResult result = mockMvc.perform(requestBuilder).andReturn(); MockHttpServletResponse response = result.getResponse(); int status = response.getStatus(); Assert.assertEquals("response status is wrong", 200, status); } ``` With this approach, your controller method will receive the mocked instance of `Principal`. I have verified this behaviour locally.
Force SSL for every URL except one path using .htaccess I'm trying to enforce SSL for every URL except one with the first segment `/preview`. ``` http://test.example.com/preview/blah/blah ``` should be ignored by the rule; every other URL should be forced to use SSL. I'm using CentOS 6.4, Apache, and CodeIgniter. My .htaccess file: ``` RewriteEngine On RewriteBase / RewriteCond %{HTTPS} !=on RewriteCond %{REQUEST_URI} !^/preview/ RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301] RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule ^(.*)$ index.php?/$1 [L] ``` Normally CI URLs look like this before they are rewritten by the last rule: ``` http://test.example.com/index.php?/preview/blah/blah ``` I've tried: ``` RewriteCond %{HTTPS} !=on RewriteCond %{REQUEST_URI} !^/index.php?/preview/ RewriteCond %{REQUEST_URI} !^/preview/ RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301] ``` That doesn't work either. What am I doing wrong?
You almost had it. The full solution is: ``` RewriteEngine On RewriteBase / RewriteCond %{HTTPS} !=on RewriteCond %{REQUEST_URI} !^/preview RewriteCond %{QUERY_STRING} !^/preview RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301] RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule ^(.*)$ index.php?/$1 [L] ``` The reason it failed was that `http://test.domain.com/preview/blah/blah` first gets resolved to `http://test.domain.com/index.php?/preview/blah/blah` and that this URL immediately get rewritten again (the htaccess loops with the new URL). The new URL (`http://test.domain.com/index.php?/preview/blah/blah` did not match your conditions, because the part after the ? is not considered part of the REQUEST\_URI, but of the QUERY\_STRING. See the description of REQUEST\_URI at <http://httpd.apache.org/docs/2.4/en/mod/mod_rewrite.html#rewritecond>
Load testing at 350,000 requests per minute I need a way to reproduce the sudden burst of traffic our site experienced a while ago, to have a chance to keep our stack working. Our load balancer reported at a certain point that some 350k requests were performed per minute, trashing everything, from the operating system to knocking off the backends. I tried looking into services that performs this kind of burst test, but it appears that when tweaking the test settings to match the desired conditions it would simply cost too much. How can I load test my stack in a way that matches these conditions?
So, this is 5,833 requests per second, which is a lot, but its doable. I would recommend [using a tool](http://blog.remarkablelabs.com/2012/11/benchmarking-and-load-testing-with-siege) called [`siege`](https://www.joedog.org/siege-home/). Experiment with different concurrency options until you reach a peak `Transactions per second:` rate in `siege`'s output. You can also make use of [`ab`](https://httpd.apache.org/docs/2.4/programs/ab.html) (ApacheBench) from the Apache Project, but `siege` is much more powerful. Then, add more machines until you can get to a total of, say, 6,000. You can probably make cost-effective use of Amazon EC2, or even just set up a few machines on different internet uplinks in different locations with `ssh` access to them. You could make use of `cron` or `at` to start `siege` at the same time on all of the machines (ensure that the clocks are in sync first). As far as making use of an external company to do the test, yes, this might be prohibitively expensive, and I don't think I'm allowed to point to any particular company, but I've worked with a few in the past. This is definitely something you can do on your own with a few hours of EC2 time. **Edit** As [@HBruijn](https://serverfault.com/users/37681/hbruijn) awesomely pointed out, there is a great open-source tool called [Bees with Machine Guns](https://github.com/newsapps/beeswithmachineguns) which will automate the creation, teardown, and coordination of load generation via EC2. This sounds like the right approach for your situation.
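For reference, a hedged example of what each load generator might run (the target URL is a placeholder; tune `-c` until siege's reported transaction rate stops climbing):

```bash
# -b  benchmark mode (no think time between requests)
# -c  concurrent simulated users
# -t  test duration (5M = five minutes)
siege -b -c 255 -t 5M "https://test.example.com/"

# Kick off every generator at the same moment, e.g. with at(1):
echo 'siege -b -c 255 -t 5M "https://test.example.com/" > /tmp/siege.log 2>&1' | at 14:00
```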
Plot multiple ggplot2 on same page I have a working loop which generates and can save individual plots from each file saved in a directory. I want to plot all of the returned plots in a single file as a 2x2 grid over multiple pages but cannot do this. I have tried to save the plot objects in a list ``` pltList <- list() pltList[] for (f in 1:length(files)){ plot_object <- ggplot2(...) #make ggplot2 plot print(plot_object) pltList[[f]] <- plot_object #save ggplot2 plot in list } jpeg(filename.jpg) par(mfrow=c(2,2)) #to generate 2x2 plot per page print(pltList[[1]]) print(pltList[[2]]) ... print(pltList[[f]]) dev.off() ``` The problem is that the resulting saved .jpg file only contains the last plot and not a 2x2 grid of all plots over many pages which is what I want. **EDIT** My first problem is how to save each plot from the loop in the list - how can I view the saved objects from the list to make sure they have been saved correctly? When I do `print(pltList[1])`, the resulting output is: ``` function (x, y, ...) UseMethod("plot") <bytecode: 0x0000000010f43b78> <environment: namespace:graphics> ``` rather than the actual plot. It seems that the plots are not being saved in the list as expected. How can I correct for this? Hopefully, once this is fixed, your plotting suggestions will work.
Assuming you need a PDF output where every page has multiple plots plotted as one, e.g.: if there are 12 plots then 4 plots per page. Try this example: ``` library(ggplot2) library(cowplot) # list of 12 dummy plots, only title is changing. pltList <- lapply(1:12, function(i){ ggplot(mtcars,aes(mpg,cyl)) + geom_point() + ggtitle(paste("Title",i))}) # outputs 3 jpeg files with 4 plots each. for(i in seq(1,12,4)) ggsave(paste0("Temp",i,".jpeg"), plot_grid(pltList[[i]], pltList[[i+1]], pltList[[i+2]], pltList[[i+3]],nrow = 2)) # or we can output into 1 PDF with 3 pages using print pdf("TempPDF.pdf") for(i in seq(1,12,4)) print(plot_grid(pltList[[i]], pltList[[i+1]], pltList[[i+2]], pltList[[i+3]],nrow = 2)) dev.off() ``` --- **EDIT:** Another way using gridExtra, as suggested by @user20650: ``` library(gridExtra) #output as PDF pdf("multipage.pdf") #use gridExtra to put plots together marrangeGrob(pltList, nrow=2, ncol=2) dev.off() ```
C++11 uint types vs u\_int I just stumbled upon the type `u_int8_t` because it did not compile in Windows+MinGW (but compiled fine under Linux). According to [this site](http://www.cplusplus.com/reference/cstdint/) the C++11 standard defines the type `uint8_t`. I just used the latter and everything worked. The questions that arised are: 1. Is there any difference between `u_int8_t` and `uint8_t`? 2. Is there a reason (besides legacy code) to use `u_int8_t`? 3. Is it safe to assume that `uint8_t` will be present if I use a C++11 compiler (on different OS or architectures)? 4. Are the answers to the above questions also valid for the other types (`intX_t` and `uintX_t`)?
> > Is there any difference between `u_int8_t` and `uint8_t`? > > > `u_int8_t` is just a very old name that was not standardised. Avoid it. > > Is there a reason (besides legacy code) to use `u_int8_t`? > > > Suicide by coworker. > > Is it safe to assume that `uint8_t` will be present if I use a C++11 compiler (on different OS or architectures)? > > > The C++ standard requires it to be present on all implementations that have an unsigned 8-bit type available (today that means everything that is not exotic). > > Are the answers to the above questions also valid for the other types (`intX_t` and `uintX_t`)? > > > Pretty much, yes.
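A short illustration of the standard spellings (C++11 and later):

```cpp
#include <cstdint>   // the standard header for the fixed-width typedefs

static_assert(sizeof(std::uint8_t)  == 1, "exactly 8 bits where it exists");
static_assert(sizeof(std::uint32_t) == 4, "exactly 32 bits where it exists");

std::uint8_t  flags = 0xFF;   // guaranteed 8-bit unsigned
std::uint64_t nanos = 0;      // guaranteed 64-bit unsigned
```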
What exactly does pygame.init() do? Every pygame tutorial I have ever read/seen have said to squeeze 'pygame.init()' into your code before you think about doing anything else. Apparently, it initializes pygame modules or something, which seems pretty important. This was until I thought to remove the 'pygame.init()' line from my code, just to see what would happen. Lo and behold, my game works exactly the same. I took to the web and once again, every where I went, I was told 'pygame.init()' is necessary. But no, it is clearly not, as I just took it out of my code which works just fine. So, needless to say, I am pretty confused. I would really appreciate it if someone explained: a) the function of pygame.init() b) whether or not it is required in a pygame program. if it is, then why did my game work and if it is not, then when is it needed? c) any other things you think I should know
From [www.pygame.org](https://www.pygame.org/docs/ref/pygame.html) > > pygame.init() > initialize all imported pygame modules. No exceptions will be raised if a module fails, but the total number of successful and failed inits will be returned as a tuple. You can always initialize individual modules manually, but pygame.init() is a convenient way to get everything started. The init() functions for individual modules will raise exceptions when they fail. > > > Furthermore, here we can see that it's safe to do it in the beginning: > > It is safe to call this init() more than once as repeated calls will have no effect. This is true even if you have pygame.quit() all the modules. > > > And finally, the main reason, in my opinion, for why it is always used in the beginning: > > This will attempt to initialize all the pygame modules for you. Not all pygame modules need to be initialized, but this will automatically initialize the ones that do. > > > So the main conclusion is that `pygame.init()` safely initializes all imported pygame modules regardless of whether the modules actually need to be initialized; but since it does for the ones that do, it saves the trouble of manually initializing each module individually.
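A small sketch of both styles the documentation describes; initialise whichever modules your program actually uses:

```python
import pygame

# Convenient: initialise every imported pygame module at once.
# Returns a (succeeded, failed) tuple instead of raising on failure.
succeeded, failed = pygame.init()

# Explicit: initialise only the modules you need; these raise on failure.
pygame.display.init()
pygame.font.init()
```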
why a variable with value 0 is not admited asking if exist Im developing a very big application on Scalable Javascript Aplication Architecture, in the core i have a simple function to asign a css property to a element. ``` establishCSSvalue : function( element, property, value ) { if( element && property && value ) { jQuery( element ).css( property, value ); } else { //log wrong arguments } } ``` If the variable value is equal to 0 the if go to the else, if i delete the value condition o fix the 0: ``` if( element && property ) { if( element && property && ( value || value == 0 ) ) { ``` It works. Anyone can explain me what its happening, or why? I cant understand or my knowlegde of javascript its not enough, in php is easy with isset. For me a variable with value = 0 exist... Thanks.
If you are simply trying to see that all three function arguments actually exist and are not undefined, then you can't just check them for a falsey value as you are. You must specifically check to see if they are not undefined. You can do that like this: ``` if (element !== undefined && property !== undefined && value !== undefined) ``` The reason you cannot do this: ``` if (element && property && value) ``` is because any falsey value will cause that statement to fail. falsey values include `null`, `false`, `0`, `""`, `undefined` and `NaN`. So, if you ONLY want to check to make sure it's not undefined, then you need to specifically check for that rather than any falsey value. --- Note: If all you're really trying to tell is whether the caller actually passed three arguments to the function when they called it, you can also check `arguments.length` like this: ``` if (arguments.length === 3) ``` or just check to see that the last variable is not undefined: ``` if (value !== undefined) ```
ggplot2: Extend stat\_smooth regression line to entire x-range of plot area I have two separate regression lines in my ggplot, each corresponding to a separate variable. However, the second line corresponding to `local` does not extend across the entire graph. Is there a workaround for this or a way to make both ablines extend equally across the area of the graph? ``` ggplot(metrics, aes(x=popDensity, y= TPB, color = factor(type))) + geom_point() +theme_minimal() + stat_smooth(method = "lm", se = FALSE) + geom_label_repel(aes(label= rownames(metrics)), size=3, show.legend = FALSE) + theme(axis.title = element_text(family = "Trebuchet MS", color="#666666", face="bold", size=12)) + labs(x = expression(paste( "Populatin Density ", km^{2})), y = expression(paste("Rating")))+ theme(legend.position="top", legend.direction="horizontal") + theme(legend.title=element_blank()) ``` [![enter image description here](https://i.stack.imgur.com/GGVSr.png)](https://i.stack.imgur.com/GGVSr.png) Here is a sample of the data: ``` > dput(metrics) structure(list(popDensity = c(4308, 27812, 4447, 5334, 4662, 2890, 1689, 481, 4100), TPB = c(2.65, 4.49, 2.37, 2.87, 3.87, 2.95, 1.18, 1.62, 1.87), type = c("Global", "Global", "Global", "Global", "Global", "Global", "Local", "Local", "Local")), .Names = c("popDensity", "TPB", "type"), row.names = c("City1", "City2", "City3", "City4", "City5", "City6", "City7", "City8", "City9"), class = "data.frame") ```
Add `fullrange = T` to `stat_smooth` will make the fit span the full range of the plot: ``` ggplot(metrics, aes(x = popDensity, y = TPB, color = factor(type))) + geom_point() + theme_minimal() + stat_smooth(method = "lm", se = FALSE, fullrange = T) + geom_label_repel(aes(label = rownames(metrics)), size = 3, show.legend = FALSE) + theme(axis.title = element_text( family = "Trebuchet MS", color = "#666666", face = "bold", size = 12 )) + labs(x = expression(paste("Populatin Density ", km ^ {2})), y = expression(paste("Rating"))) + theme(legend.position = "top", legend.direction = "horizontal") + theme(legend.title = element_blank()) ```
Django doesn't serve static files with NGINX + GUNICORN Everything worked very well before gunicorn and nginx, static files were served to the website. But now, it doesn't work anymore. Settings.py ``` STATICFILES_DIRS = [ '/root/vcrm/vcrm1/static/' ] STATIC_ROOT = os.path.join(BASE_DIR, 'vcrm/static') STATIC_URL = '/static/' MEDIA_ROOT = '/root/vcrm/vcrm1/vcrm/media/' MEDIA_URL = '/media/' ``` /etc/nginx/sites-available/vcrm ``` server { listen 80; server_name 195.110.58.168; location = /favicon.ico { access_log off; log_not_found off; } location /static { root /root/vcrm/vcrm1/vcrm; } location = /media { root /root/vcrm/vcrm1/vcrm; } location / { include proxy_params; proxy_pass http://unix:/run/gunicorn.sock; } ``` } When I run collectstatic: ``` You have requested to collect static files at the destination location as specified in your settings: /root/vcrm/vcrm1/vcrm/static This will overwrite existing files! Are you sure you want to do this? ``` and then: ``` Found another file with the destination path 'admin/js/vendor/jquery/jquery.min.js'. It will be ignored since only the first encountered file is collected. If this is not what you want, make sure every static file has a unique path. 0 static files copied to '/root/vcrm/vcrm1/vcrm/static', 251 unmodified. ```
**NGINX + Gunicorn + Django** Django project: ``` djangoapp - ... - database - djangoapp - settings.py - urls.py - ... - media - static - manage.py - requirements.txt ``` Server: install venv, requirements.txt: ``` sudo apt-get update sudo apt-get install -y git python3-dev python3-venv python3-pip supervisor nginx vim libpq-dev --> cd djangoapp python3 -m venv venv source venv/bin/activate (venv) pip3 install -r requirements.txt ``` Server: install NGINX: ``` sudo apt-get install nginx sudo vim /etc/nginx/sites-enabled/default ``` Server: NGINX config: ``` server { listen 80 default_server; listen [::]:80 default_server; location /static/ { alias /home/ubuntu/djangoapp/static/; } location /media/ { alias /home/ubuntu/djangoapp/media/; } location / { proxy_pass http://127.0.0.1:8000; proxy_set_header X-Forwarded-Host $server_name; proxy_set_header X-Real-IP $remote_addr; proxy_redirect off; add_header P3P 'CP="ALL DSP COR PSAa OUR NOR ONL UNI COM NAV"'; add_header Access-Control-Allow-Origin *; } } ``` Server: setup supervisor: ``` cd /etc/supervisor/conf.d/ sudo vim djangoapp.conf ``` Server: supervisor config: ``` [program:djangoapp] command = /home/ubuntu/djangoapp/venv/bin/gunicorn djangoapp.wsgi -b 127.0.0.1:8000 -w 4 --timeout 90 autostart=true autorestart=true directory=/home/ubuntu/djangoapp stderr_logfile=/var/log/game_muster.err.log stdout_logfile=/var/log/game_muster.out.log ``` Server: update supervisor with the new process: ``` sudo supervisorctl reread sudo supervisorctl update sudo supervisorctl restart djangoapp ```
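One point worth adding, since the original error involved `collectstatic`: the nginx `alias` paths above must be the directories Django actually collects into. A hedged sketch matching that layout (the paths are from the example above, not from the asker's project):

```python
# settings.py: collectstatic writes here, nginx serves it under /static/
STATIC_URL = '/static/'
STATIC_ROOT = '/home/ubuntu/djangoapp/static/'

MEDIA_URL = '/media/'
MEDIA_ROOT = '/home/ubuntu/djangoapp/media/'
```

Then run `python manage.py collectstatic --noinput` on each deploy so the files nginx serves stay in sync with the project.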
Django : 'tag' is not a registered tag library error I'm having this error when loading a custom tag in my template. I've visited many topics about this and I made sure to verify that I didn't commit some common errors : - The file containing the tags is in the `templatetags` folder. - This `templatetags` folder contains a `__init__.py` file. - The app `actualites` is in the `INSTALLED_APPS` list from the settings. - I'm using `{% load mes_tags %}` at the beginning of my template. Here is the file structure of my app : ``` actualites/ __init__.py SOME FILES templatetags/ __init__.py mes_tags.py ``` mes\_tags.py ``` from django import template register = template.Library() @register.simple_tag(takes_context=True) def param_replace(context, **kwargs): d = context['request'].GET.copy() for k, v in kwargs.items(): d[k] = v for k in [k for k, v in d.items() if not v]: del d[k] return d.urlencode() ``` The error I get is the following : ``` TemplateSyntaxError at / 'mes_tags' is not a registered tag library. Must be one of: LIST OF TAGS ``` Can someone tell me what I did wrong ? Thanks in advance !
You need to add this tags library in settings (for Django >= 1.9): ``` TEMPLATES = [ { 'BACKEND': 'django.template.backends.django.DjangoTemplates', 'DIRS': [], 'APP_DIRS': True, 'OPTIONS': { 'context_processors': [ 'django.template.context_processors.debug', 'django.template.context_processors.request', 'django.contrib.auth.context_processors.auth', 'django.contrib.messages.context_processors.messages', 'app.apptemplates.load_setting', ], 'libraries':{ 'custom_templatetag': 'actualites.templatetags.mes_tags', } }, }] ``` You can read more [here](https://docs.djangoproject.com/en/3.0/topics/templates/#module-django.template.backends.django)
Poisson or quasi poisson in a regression with count data and overdispersion? I have count data (demand/offer analysis with counting number of customers, depending on - possibly - many factors). I tried a linear regression with normal errors, but my QQ-plot is not really good. I tried a log transformation of the answer: once again, bad QQ-plot. So now, I'm trying a regression with Poisson Errors. With a model with all significant variables, I get: ``` Null deviance: 12593.2 on 53 degrees of freedom Residual deviance: 1161.3 on 37 degrees of freedom AIC: 1573.7 Number of Fisher Scoring iterations: 5 ``` Residual deviance is larger than residual degrees of freedom: I have overdispersion. How can I know if I need to use quasipoisson? What's the goal of quasipoisson in this case? I read this advise in "The R Book" by Crawley, but I don't see the point nor a large improvement in my case.
When trying to determine what sort of glm equation you want to estimate, you should think about plausible relationships between the expected value of your target variable given the right hand side (rhs) variables and the variance of the target variable given the rhs variables. Plots of the residuals vs. the fitted values from from your Normal model can help with this. With Poisson regression, the assumed relationship is that the variance equals the expected value; rather restrictive, I think you'll agree. With a "standard" linear regression, the assumption is that the variance is constant regardless of the expected value. For a quasi-poisson regression, the variance is assumed to be a linear function of the mean; for negative binomial regression, a quadratic function. However, you aren't restricted to these relationships. The specification of a "family" (other than "quasi") determines the mean-variance relationship. I don't have The R Book, but I imagine it has a table that shows the family functions and corresponding mean-variance relationships. For the "quasi" family you can specify any of several mean-variance relationships, and you can even write your own; see the [R documentation](http://stat.ethz.ch/R-manual/R-patched/library/stats/html/family.html). It may be that you can find a much better fit by specifying a non-default value for the mean-variance function in a "quasi" model. You also should pay attention to the range of the target variable; in your case it's nonnegative count data. If you have a substantial fraction of low values - 0, 1, 2 - the continuous distributions probably won't fit well, but if you don't, there's not much value in using a discrete distribution. It's rare that you'd consider Poisson and Normal distributions as competitors.
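A hedged R sketch of the comparison being described (the data frame name, response and formula are placeholders for the demand/offer data in the question):

```r
# Same model, two error structures
fit_pois  <- glm(customers ~ ., family = poisson(link = "log"),      data = demand)
fit_quasi <- glm(customers ~ ., family = quasipoisson(link = "log"), data = demand)

# Rough overdispersion check without refitting:
deviance(fit_pois) / df.residual(fit_pois)   # e.g. 1161.3 / 37 is about 31, far above 1

# The quasi-Poisson fit reports an estimated dispersion parameter; standard
# errors (and hence p-values) are inflated by its square root.
summary(fit_quasi)$dispersion
```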
Replace Column if equal to a specific value I'm looking to replace the fourth column in a CSV if it equals `N/A`. I'm trying to change it to `-1`. I can't seem to get this to work. ``` awk -F , '{ if($4 == "N/A") {$4 = -1} }' test.csv ```
You can use the following `awk`: ``` awk -F, '{ $4 = ($4 == "N/A" ? -1 : $4) } 1' OFS=, test.csv ``` - We set the input and output field separators to `,` to preserve the delimiters in your csv file - We check the forth field if it is equal to "N/A" then we assign it the value `-1` if not we retain the value as is. - `1` at the end prints your line with or without modified 4th column depending if our test was successful or not. - `($4=="N/A"?-1:$4)` is a ternary operator that checks if the condition `$4=="N/A"` is true or not. If true `?` then we assign `-1` and if false `:` we keep the field as is. ### Test run on sample file: ``` $ cat file a,b,c,d,e,f 1,2,3,4,5,6 44,2,1,N/A,4,5 24,sdf,sdf,4,2,254,5 a,f,f,N/A,f,4 ``` ``` $ awk -F, '{ $4 = ($4 == "N/A" ? -1 : $4) } 1' OFS=, file a,b,c,d,e,f 1,2,3,4,5,6 44,2,1,-1,4,5 24,sdf,sdf,4,2,254,5 a,f,f,-1,f,4 ```
Contact us functionality in Rails 3 I want to make a contact us form in Rails 3 with the following fields:

- Name
- Email
- Message title
- Message body

The posted messages are intended to go to my email address, so I don't necessarily need to store the messages in the database. Do I have to use `ActionMailer`, or any gem or plugin for it?
[This](http://www.railsmine.net/2010/03/rails-3-action-mailer-example.html) tutorial is an excellent example - and it's Rails 3 **Update:** [This article](http://matharvard.ca/posts/2011/aug/22/contact-form-in-rails-3/) is a better example than the one I posted earlier, works flawlessly **Second Update:** I would also recommend merging-in some of the techniques outlined in [this railscast](http://railscasts.com/episodes/326-activeattr?view=asciicast) on the [active\_attr](https://github.com/cgriego/active_attr) gem, where Ryan Bates walks you through the process of setting up a tabless model for a contact page. **Third Update:** I wrote my own [test-driven blog post](https://www.murdo.ch/blog/build-a-contact-form-with-ruby-on-rails-part-1) about it
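To answer the `ActionMailer` part directly: plain ActionMailer is enough, no extra gem is required. A rough Rails 3 sketch (the class, method, and address names here are made up for illustration; you also need a matching view template for the mail body):

```
# app/mailers/contact_mailer.rb
class ContactMailer < ActionMailer::Base
  default :to => "you@example.com"

  def contact_email(name, email, title, body)
    @name = name
    @body = body
    mail(:from => email, :subject => title)
  end
end

# in the controller action that receives the form post
ContactMailer.contact_email(params[:name], params[:email],
                            params[:title], params[:body]).deliver
```

Since the messages only need to reach your inbox, you can skip the database entirely, as the articles above do.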
How to restore a continuous sequence of IDs as primary keys in a SQL database? I'm using a SQLite database and Django's `QuerySet` API to access this database. I wrote data sequentially into the database and each entry has a simple ID as the primary key (this is the Django default). This means that there is a continuous sequence of IDs now in the database (entry 1 has ID 1, entry 2 has ID 2, and so on). Now I needed to delete some entries again. This means that the sequence of IDs is discontinuous now (entry 1 has ID 1, but entry 2 might have ID 3, 8, 1432 or anything else, but not 2). How can I restore this continuous sequence of IDs again and associate them with the remaining entries in the database? Is there a way to do this with Django's `QuerySet` API or do I need to use plain SQL? I have no experience with plain SQL, so some working code would be very helpful in this case. Thank you!
I cannot think of any situation in which doing this would be desirable. The best primary keys are immutable (although that's not a technical requirement) and the very *purpose* of using non-meaningful integer primary keys is to avoid having to update them. I would even go so far as to say that if you require a meaningful, unbroken sequence of integers, create a separate column in your table, keep the primary key with its sequence breaks, and renumber the new "sequence" column when needed. However, you may have requirements that I can't think of. If you *really* **need** to change the values in those keys, make sure that all the references to that column in your database are protected by FOREIGN KEY constraints and check out the [ON UPDATE CASCADE](http://www.sqlite.org/foreignkeys.html#fk_actions) option when you declare a foreign key. It will instruct the database to do the updating for you. But if you don't *have to* do this, don't.
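For reference, the cascading update looks roughly like this in SQLite (a sketch with made-up table names; note that SQLite only enforces foreign keys when they are switched on for the connection):

```
PRAGMA foreign_keys = ON;

CREATE TABLE author (
    id INTEGER PRIMARY KEY
);

CREATE TABLE book (
    id INTEGER PRIMARY KEY,
    author_id INTEGER REFERENCES author(id) ON UPDATE CASCADE
);

-- Renumbering an author id now propagates to book.author_id automatically
UPDATE author SET id = 2 WHERE id = 37;
```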
How do I define a function that accepts any Task producing an IEnumerable? I'm looking to create a function that accepts any task that produces an `IEnumerable<T>`. To illustrate, consider the following function signature. ``` void DoWork<TElement>(Task<IEnumerable<TElement>> task) { } ``` Now, I would like to call this method as follows: ``` Task<int[]> task = Task.FromResult(new[] { 1, 2, 3 }); DoWork(task); ``` Clearly, this doesn't work since the two `Task` types are not the same, and that covariance doesn't exist for Tasks. However, I am wondering if there are some clever tricks that will allow this to work, inspired by the following example. ``` async Task<IEnumerable<int>> GetTask() { return await Task.FromResult(new int[] { 1, 2, 3 }); } ``` Here, `await` is effectively creating a new task with the result of the inline task, hence the illusion of a type conversion. To give a more detailed example, I'd like to allow for users to call `DoWork` without too much of a burden in conversions: ``` // Service proxy method Task<int[]> GetInts() { // simplified for brevity return Task.FromResult(new[] { 1, 2, 3 }); } // Service proxy method Task<long[]> GetLongs() { // simplified for brevity return Task.FromResult(new[] { 100L, 200L, 300L }); } async Task<IEnumerable<T>> DoWork<T>(Func<Task<IEnumerable<T>>> getData, Func<T, bool> predicate) { return (await getData()).Where(predicate); } // GOAL: DoWork(GetInts, i => i % 2 == 0); DoWork(GetLongs, l => l % 40 == 0); ```
You could introduce one more Type parameter and do something like this:

```
async Task<IEnumerable<TElement>> DoWork<T, TElement>(Func<Task<T>> getData, Func<TElement, bool> predicate) where T : IEnumerable<TElement>
{
    return (await getData()).Where(predicate);
}

Task<int[]> GetInts()
{
    return Task.Run(() => new[] { 1, 2, 3 });
}

Task<long[]> GetLongs()
{
    return Task.Run(() => new[] { 100L, 200L, 300L });
}
```

Then you could do

```
static void Main()
{
    var ints = DoWork<int[], int>(GetInts, i => i % 2 == 0).Result;
    var longs = DoWork<long[], long>(GetLongs, i => i % 2 == 0).Result;
}
```

Or as noted by OP in comments you can make the compiler infer the types if you specify `TElement` explicitly.

```
var ints = DoWork(GetInts, (int i) => i % 2 == 0).Result;
```

Your code doesn't work since `Task<T>` is not "Covariant" on `T`. You may be aware that classes can't be covariant - variance annotations are only supported on interfaces and delegates.
Looking for an alternative to the image onLoad > > **Possible Duplicate:** > > [jQuery or JavaScript: Determine when image finished loading](https://stackoverflow.com/questions/4494437/jquery-or-javascript-determine-when-image-finished-loading) > > > The problem is known, but I can't find any simple solution: ``` var img = new Image(); img.onLoad = function() { console.log("load"); // this event doesn't fire }; img.src = "img/domino/21.png"; console.log(img.complete); // false setTimeout(function() { console.log(img.complete); // sometimes true, sometimes false }, 10); ``` I was looking for an implementation of onComplete event, but I can't find anything. Would you help?
The proper spelling of the event handler property is all lowercase `.onload`, not `.onLoad`. ``` var img = new Image(); img.onload = function() { console.log("load"); // works every time }; img.src = "img/domino/21.png"; ``` Javascript is case sensitive so you MUST use the proper case for all object properties. --- The alternatives besides `.onload` are to use: ``` img.addEventListener("load", function(e) {...}); ``` or (in older versions of IE): ``` img.attachEvent("onload", function(e) {...}); ``` If you're only using one event handler and you aren't using a cross platform library that already abstracts event handlers for you, then using `.onload` is the simplest. And, here's a simple cross browser way to add event handlers: ``` // add event cross browser function addEvent(elem, event, fn) { if (elem.addEventListener) { elem.addEventListener(event, fn, false); } else { elem.attachEvent("on" + event, function() { // set the this pointer same as addEventListener when fn is called return(fn.call(elem, window.event)); }); } } ```
While reading file on Python, I got a UnicodeDecodeError. What can I do to resolve this? This is one of my own projects. This will later help benefit other people in a game I am playing (AssaultCube). Its purpose is to break down the log file and make it easier for users to read. I kept getting this issue. Anyone know how to fix this? Currently, I am not planning to write/create the file. I just want this error to be fixed. The line that triggered the error is a blank line (it stopped on line 66346). This is what the relevant part of my script looks like: ``` log = open('/Users/Owner/Desktop/Exodus Logs/DIRTYLOGS/serverlog_20130430_00.15.21.txt', 'r') for line in log: ``` and the exception is: ``` Traceback (most recent call last): File "C:\Users\Owner\Desktop\Exodus Logs\Log File Translater.py", line 159, in <module> main() File "C:\Users\Owner\Desktop\Exodus Logs\Log File Translater.py", line 7, in main for line in log: File "C:\Python32\lib\encodings\cp1252.py", line 23, in decode return codecs.charmap_decode(input,self.errors,decoding_table)[0] UnicodeDecodeError: 'charmap' codec can't decode byte 0x81 in position 3074: character maps to <undefined> ```
Try:

```
enc = 'utf-8'
log = open('/Users/Owner/Desktop/Exodus Logs/DIRTYLOGS/serverlog_20130430_00.15.21.txt', 'r', encoding=enc)
```

If that doesn't work, try:

```
enc = 'utf-16'
log = open('/Users/Owner/Desktop/Exodus Logs/DIRTYLOGS/serverlog_20130430_00.15.21.txt', 'r', encoding=enc)
```

you could also try it with

```
enc = 'iso-8859-15'
```

also try:

```
enc = 'cp437'
```

which is very old but it also has the "ü" at 0x81, which would fit the string "üßer" which I found on the homepage of AssaultCube.

If all the codings are wrong, try to contact some of the guys developing AssaultCube or, as mentioned in a comment, have a look at <https://pypi.python.org/pypi/chardet>
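A rough sketch of using chardet (assuming it has been installed with pip) to guess the encoding before opening the file:

```
import chardet

path = '/Users/Owner/Desktop/Exodus Logs/DIRTYLOGS/serverlog_20130430_00.15.21.txt'

# Read the raw bytes and let chardet guess the encoding
with open(path, 'rb') as f:
    raw = f.read()

guess = chardet.detect(raw)
print(guess)  # e.g. {'encoding': 'Windows-1252', 'confidence': 0.73}

# Reopen as text with the detected encoding
log = open(path, 'r', encoding=guess['encoding'])
```

The confidence value tells you how trustworthy the guess is; for short or mixed-encoding logs it can be quite low.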
Finding near neighbors I need to find "near" neighbors among a set of points.

![pointSet](https://i.stack.imgur.com/fR5pn.png)

There are 10 points in the above image. Red lines are edges from the [Delaunay Triangulation](http://www.mathworks.com/help/techdoc/ref/delaunay.html), black stars mark the mid-points of the edges, blue lines are the [Voronoi tessellation](http://www.mathworks.com/help/techdoc/ref/voronoi.html). Point 1 has three "near" neighbors, i.e. 4, 6, and 7, but not 2 and 3, which are almost in line with the edge 1-7, but much further away. What is a good way to identify the near neighbors (or "good" edges)? Looking at the figure, it seems to me that either selecting edges whose mid-point falls onto the intersection with the Voronoi lines, or considering as "near" neighbors those with touching Voronoi cells, could be a good solution (the classification of 3-5 can go either way). Is there an efficient way of implementing either of the solutions in Matlab (I'd be happy to get a good general algorithm that I can then translate to Matlab, btw)?
You can implement your first idea of selecting edges whose mid-points fall on the intersection with the Voronoi lines by making use of the [`DelaunayTri` class](http://www.mathworks.com/help/techdoc/ref/delaunaytriclass.html) and its [`edges`](http://www.mathworks.com/help/techdoc/ref/trirep.edges.html) and [`nearestNeighbor`](http://www.mathworks.com/help/techdoc/ref/delaunaytri.nearestneighbor.html) methods. Here's an example with 10 random pairs of `x` and `y` values: ``` x = rand(10,1); %# Random x data y = rand(10,1); %# Random y data dt = DelaunayTri(x,y); %# Compute the Delaunay triangulation edgeIndex = edges(dt); %# Triangulation edge indices midpts = [mean(x(edgeIndex),2) ... %# Triangulation edge midpoints mean(y(edgeIndex),2)]; nearIndex = nearestNeighbor(dt,midpts); %# Find the vertex nearest the midpoints keepIndex = (nearIndex == edgeIndex(:,1)) | ... %# Find the edges where the (nearIndex == edgeIndex(:,2)); %# midpoint is not closer to %# another vertex than it is %# to one of its end vertices edgeIndex = edgeIndex(keepIndex,:); %# The "good" edges ``` And now `edgeIndex` is an N-by-2 matrix where each row contains the indices into `x` and `y` for one edge that defines a "near" connection. The following plot illustrates the Delaunay triangulation (red lines), Voronoi diagram (blue lines), midpoints of the triangulation edges (black asterisks), and the "good" edges that remain in `edgeIndex` (thick red lines): ``` triplot(dt,'r'); %# Plot the Delaunay triangulation hold on; %# Add to the plot plot(x(edgeIndex).',y(edgeIndex).','r-','LineWidth',3); %# Plot the "good" edges voronoi(dt,'b'); %# Plot the Voronoi diagram plot(midpts(:,1),midpts(:,2),'k*'); %# Plot the triangulation edge midpoints ``` ![enter image description here](https://i.stack.imgur.com/lchJw.png) ## How it works... The Voronoi diagram is comprised of a series of Voronoi polygons, or cells. In the above image, each cell represents the region around a given triangulation vertex which encloses all the points in space that are closer to that vertex than any other vertex. As a result of this, when you have 2 vertices that aren't close to any other vertices (like vertices 6 and 8 in your image) then the midpoint of the line joining those vertices falls on the separating line between the Voronoi cells for the vertices. However, when there is a third vertex that is close to the line joining 2 given vertices then the Voronoi cell for the third vertex may extend between the 2 given vertices, crossing the line joining them and enclosing that lines midpoint. This third vertex can therefore be considered a "nearer" neighbor to the 2 given vertices than the 2 vertices are to each other. In your image, the Voronoi cell for vertex 7 extends into the region between vertices 1 and 2 (and 1 and 3), so vertex 7 is considered a nearer neighbor to vertex 1 than vertex 2 (or 3) is. In some cases, this algorithm may not consider two vertices as "near" neighbors even though their Voronoi cells touch. Vertices 3 and 5 in your image are an example of this, where vertex 2 is considered a nearer neighbor to vertices 3 or 5 than vertices 3 or 5 are to each other.
Use 2 data sets in one chart in SSRS Is it possible to have 2 data sets and display the data for them in one chart on an SSRS report or will I need to combine the data sets? I have number of calls answered in one dataset and number of calls missed in another and want to show them both in a graph. The data is held in different areas which is why I didn't create it in one data set to start with.
This may not work for all types of charts, and does require your datasets to be constructed with common axis values: - Select the chart so that the "chart data" panel appears. - Click the green "+" above the "Values" pane. You'll see a list of fields in the dataset bound to the charts data region. Rather than choosing any of those, choose "Expression" on the very bottom. - Add a value from your other dataset - note that it will *probably* need to be wrapped in an aggregate function, like SUM or FIRST. For example: =sum(Fields!YourField.Value, "2ndDatasetName") - All datasets will need to have common axis values, otherwise you're in for a bad time. If you need to split them up, you can have TWO sets of axis values for each orientation (vertical, horizontal); to change which axis position is used, bring up the "Series Properties", choose the 2nd tab on the left ("Axes and Chart Area"), and choose the Primary or Secondary axis accordingly.
Why isn't application:openFile: called when the application is already running? I would like to handle open events when a user double-clicks on a file that was created by my application, or drags such a file onto the dock icon. I've therefore implemented `NSApplicationDelegate`'s [`application:openFile:`](https://developer.apple.com/documentation/appkit/nsapplicationdelegate/1428612-application?language=objc) and [`application:openFiles:`](https://developer.apple.com/documentation/appkit/nsapplicationdelegate/1428742-application?language=objc) methods, which work as expected when the application is not running. However, if the application is already running when the open event occurs, the application becomes focused, but the above methods are never called (breakpoints inside them are not hit) and the files do not open. I also tried implementing [`application:openURLs:`](https://developer.apple.com/documentation/appkit/nsapplicationdelegate/2887193-application?language=objc). This method has the same behaviour: it is not called if the application is already running when the event occurs. Do I need to implement different functions to handle opening files when the application is already running, or is there something else I need to do/set in order for the existing functions to be called in those circumstances?
This is not mentioned in [the documentation](https://developer.apple.com/documentation/appkit/nsapplicationdelegate/1428612-application?language=objc), but [according to this answer](https://stackoverflow.com/a/506955/328936) the way that `application:openFile:` works is that it `NSApplication` forwards `odoc` Apple Events to its delegate. Armed with this knowledge, I was able to find the following old Carbon call in the app: ``` osError = AEInstallEventHandler(kCoreEventClass, kAEOpenDocuments, g_OpenDocumentsUPP, 0L, false); ``` I'm presuming this existing event handler consumed the Apple Event before `NSApplication` had a chance to deal with it. However, when the app is not already running, `NSApplication` handles the event before the line above setting up the event handler is called, hence the different behaviour. Removing this old code from the build caused the `NSApplicationDelegate` methods to be called, thus fixing the issue.
How do I merge a pull request on someone else's project in git? I cloned this repo on my computer: <https://github.com/derobins/wmd.git> There are several bugs with it though, and it looks like another user has fixed them and issued "Pull requests" (I assume these are requests for their changes to be committed?) Is it possible to merge those changes into my local version? EDIT: just to be clear, this is *not* my repository. I am using the WMD editor from derobins, but it has several bugs which those pull requests purport to fix. I have cloned the repo on Ubuntu (not in github) and was hoping to merge those changes in if possible.
(GitHub has [very thorough documentation](https://help.github.com/articles/merging-a-pull-request/) on how to deal with pull requests.) Essentially, you'll want to add a remote for the repository of the person who made the pull requests, e.g.: ``` git remote add helpful git://github.com/helpful-person/whatever.git ``` ... then fetch their changes into remote-tracking branches: ``` git fetch helpful ``` ... so that now you have all the commits from that person's GitHub repository in your clone of the upstream repository. If you look at the additional commits within that pull request you could: 1. merge the latest one, e.g. `git merge 75708aeab5` 2. cherry pick each of those changes, e.g. `git cherry-pick 2142db89`, `git cherry-pick 75708aeab5` 3. create a local branch to work on them further, e.g. `git checkout -b fix-for-issue3 75708aeab5` 4. etc. etc. An alternative is to just clone the repository of the contributor who made the pull requests instead, if that's the same but for those fixes.
Does git branch renaming affect branch hierarchy? I'm working on a multi-branch project managed by GIT. Our workflow is currently this one:

```
master
 |-> v1/master
      |-> v1/dev
```

Note: Each developer makes a "fork" of v1/dev to work on a task. Potentially we have a number of branches derived from v1/dev.

We need to add a branch derived from v1/master, named v1/debug. And the current v1/dev branch has to derive from v1/debug. We are targeting this new workflow:

```
master
 |-> v1/master
      |-> v1/debug
           |-> v1/dev
```

Note: Everyone has to continue making "forks" of v1/dev to work on unit tasks.

---

I'm looking for a solution to add the intermediate branch v1/debug. After some research, I would use the git rename branch command ([How do I rename a local Git branch?](https://stackoverflow.com/questions/6591213/rename-a-local-git-branch)).

Does this command preserve branch hierarchy? Could I rename v1/dev to v1/debug and after that create the new v1/dev branch, without causing trouble in the development branches that came from the current v1/dev branch? Will developers be able to merge unit branches that came from the renamed v1/dev into v1/debug?
First of all, **don't rename the branch**. You could rename your local branch, but this will only apply to you. Remember that git is a distributed system. People have the right to name their local branch using a different name than the associated remote tracking branch. For example, `v2/debug` could be the name of a local branch tracking the remote tracking branch `origin/v1/master` (I know, this doesn't make sense, but because git is a distributed system, people can name things like they want **locally**. **Don't rename the branch remotely either**. This would mess up everything, because it wouldn't change the local repositories of your colleagues. Their local branches will continue to point to the same remote tracking branches (same name). You just need to create a new branch and make it start at the point `v1/master` currently is. To create it and switch to it directly: ``` git checkout -b v1/debug v1/master ``` Or to only create it but stay on your current branch: ``` git branch v1/debug v1/master ``` The branch is created locally and still needs to be pushed for others to be able to see it. Later, the only thing you need is to change your merge workflow. From now on, stop to merge `v1/dev` directly into `v1/master`, and only merge it into `v1/debug`. And merge `v1/debug` into `v1/master` whenever the code base is ready. You were talking about **branch hierarchy**. Actually, branch hierarchy is « unknown » to git. It's just the way you merge (which branch into which one) which in the end makes the hierarchy. ## Workflow example with pictures 1. Initial state (only `v1/master` and `v1/dev`). Here, it is assumed that `v1/dev` is 1 commit ahead of `v1/master`. It is also assumed that we are currently on branch `v1/master`. [![Initial state](https://i.stack.imgur.com/WXnku.png)](https://i.stack.imgur.com/KUp1N.png) 2. Run `git branch v1/debug v1/master`. This only creates a label pointing on the same commit currently pointed by `v1/master`. [![Create the new branch v1/debug starting from v1/master](https://i.stack.imgur.com/4DzZG.png)](https://i.stack.imgur.com/WJ212.png) 3. Once the branch `v1/dev` is ready, merge it into `v1/debug`. Run `git checkout v1/debug && git merge v1/dev`. [![Merge v1/dev into v1/debug](https://i.stack.imgur.com/impAp.png)](https://i.stack.imgur.com/aVg51.png) 4. Once the branch `v1/debug` is ready, merge it into `v1/master`. Run `git checkout v1/master && git merge v1/debug`. From now on, what you call the « hierarchy » starts to appear clearly. [![Merge v1/debug into v1/master](https://i.stack.imgur.com/zGpNh.png)](https://i.stack.imgur.com/4kySO.png) 5. Do some work on branch `v1/dev`. Run `git checkout v1/dev` and do several commits. [![Do some work on v1/dev (commits)](https://i.stack.imgur.com/ZGLIq.png)](https://i.stack.imgur.com/ZiuPg.png) 6. Merge it into `v1/debug`. Run `git checkout v1/debug && git merge v1/dev`. [![Merge v1/dev into v1/debug](https://i.stack.imgur.com/MaxPy.png)](https://i.stack.imgur.com/p5SAC.png) 7. Merge it into `v1/master`. Run `git checkout v1/master && git merge v1/debug`. [![Merge v1/debug into v1/master](https://i.stack.imgur.com/59imC.png)](https://i.stack.imgur.com/qLKvI.png) Now you have 3 clear branches with the wanted workflow! Note that the graph would only look like this if you use `git merge --no-ff`. It makes things clearer in the picture so I assumed the merges are **non fast-forward**, so that we always see what happened, even if those merge commits are actually useless.
How to retrieve passphrase for private key? I generated a DSA key using Ubuntu. Saved the public key on a remote server, such that keys are required before connecting via ssh. I have forgotten the passphrase for that key. How can I retrieve it?
If it was a reasonably secure password, the answer is probably "not at all".

According to the ssh-keygen man page, the private key is encrypted using 128-bit AES. Although this algorithm has some weaknesses, the complexity is still high enough to make it reasonably [secure](https://en.wikipedia.org/wiki/Advanced_Encryption_Standard#Security).

So, assuming a strong password and highly parallel decryption (for instance using GPGPU) with 2^10 threads, each having a very optimistic rate of 2^30 operations per second, after a day you could have run about 2^56 operations. With an effective complexity of around 2^100, it would take about 3 billion years to break the key...

If it was an insecure password, you might have a chance to break it with [brute force](http://pentestmonkey.net/blog/phrasendrescher) though. It seems the community enhanced edition of [John the Ripper](http://www.openwall.com/john/) has a GPGPU module for brute-force attacks on OpenSSH key files (didn't try it, I don't use proprietary drivers).

By the way, a [similar question](https://stackoverflow.com/questions/1110152/breaking-aes-encryption-using-decrypted-data), not about retrieving the key, but instead breaking the encryption itself, was already asked.
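If you do want to attempt the brute-force/dictionary route mentioned above, the rough workflow with the jumbo build of John the Ripper looks something like this (the helper script name and wordlist path vary between builds and distributions, so treat this only as a sketch):

```
# Convert the private key into a hash format John understands
ssh2john ~/.ssh/id_dsa > key.hash

# Run a dictionary attack against it
john --wordlist=/usr/share/wordlists/rockyou.txt key.hash

# Show any passphrase that was recovered
john --show key.hash
```

If the passphrase isn't in your wordlist, you are back to the complexity estimate above.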
Why is Oracle eating my string? I am currently trying to execute the following query on an Oracle DB

```
select tzname || ' (UTC'|| tz_offset(tzname) || ')' from v$timezone_names
```

It doesn't seem very complicated. Just the name of the timezone and the UTC offset in braces. But when I execute the query with PL/SQL Developer on Windows, it always eats up the last brace. So I went to sqlplus and executed it there, and now I get my last brace but also an additional whitespace before the last brace as an extra goody.

I've tried it with nested `to_char()` and `trim()` but nothing changes. I also tried it on different DBs but it's always the same. Does anybody know if there is a problem with `tz_offset` and string concatenation?
Issuing the following query: ``` select dump(tz_offset(tzname)) from v$timezone_names; ``` You get results like these: ``` Typ=1 Len=7: 43,48,49,58,48,48,0 Typ=1 Len=7: 43,48,49,58,48,48,0 Typ=1 Len=7: 43,48,49,58,48,48,0 Typ=1 Len=7: 43,48,49,58,48,48,0 Typ=1 Len=7: 43,48,49,58,48,48,0 Typ=1 Len=7: 43,48,49,58,48,48,0 ... ``` This shows that `tz_offset()` returns null-terminated strings (maybe a bug). So for your query, Oracle is returning ``` "Africa/Algiers (UTC+01:00\0)" // Note \0 -> null character "Africa/Cairo (UTC+03:00\0)" // Note \0 -> null character ... ``` Having that in mind, I guess that PL/SQL Developer interprets \0 as end-of-string (maybe another bug, SQL strings are not null-terminated) and so it does not bother writing the rest of the string, so you lose the trailing brace. SQL\*PLus chooses instead to print a whitespace instead of that null and then proceeds with the rest of the string, printing the closing brace. As a workaround, you can replace `tz_offset(...)` with `replace(tz_offset(...), chr(0))`. This will delete nulls from whatever `tz_offset(...)` returns.
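Applied to the query from the question, the workaround would look something like this (untested sketch):

```
select tzname || ' (UTC' || replace(tz_offset(tzname), chr(0)) || ')'
from v$timezone_names;
```

`replace()` with no third argument simply removes every occurrence of the trailing null byte, so both PL/SQL Developer and SQL*Plus should then show the closing brace without the extra character.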
How do I add a model specific configuration option to a rails concern? I'm in the process of writing an Importable concern for my rails project. This concern will provide a generic way for me to import a csv file into any model that includes Importable. I need a way for each model to specify which field the import code should use to find existing records. Are there any recommended ways of adding this type of configuration to a concern?
Rather than including the concern in each model, I'd suggest creating an `ActiveRecord` submodule and extend `ActiveRecord::Base` with it, and then add a method in that submodule (say `include_importable`) that does the including. You can then pass the field name as an argument to that method, and in the method define an instance variable and accessor (say for example `importable_field`) to save the field name for reference in your `Importable` class and instance methods.

So something like this:

```
module Importable
  extend ActiveSupport::Concern

  module ActiveRecord
    def include_importable(field_name)
      # create a reader on the class to access the field name
      class << self; attr_reader :importable_field; end
      @importable_field = field_name.to_s

      include Importable

      # do any other setup
    end
  end

  module ClassMethods
    # reference field name as self.importable_field
  end

  module InstanceMethods
    # reference field name as self.class.importable_field
  end
end
```

You'll then need to extend `ActiveRecord` with this module, say by putting this line in an initializer (`config/initializers/active_record.rb`):

```
ActiveRecord::Base.extend(Importable::ActiveRecord)
```

(If the concern is in your `config.autoload_paths` then you shouldn't need to require it here, see the comments below.)

Then in your models, you would include `Importable` like this:

```
class MyModel
  include_importable 'some_field'
end
```

And the `importable_field` reader will return the name of the field:

```
MyModel.importable_field #=> 'some_field'
```

In your `InstanceMethods`, you can then set the value of the imported field in your instance methods by passing the name of the field to [`write_attribute`](http://api.rubyonrails.org/classes/ActiveRecord/AttributeMethods/Write.html#method-i-write_attribute), and get the value using [`read_attribute`](http://api.rubyonrails.org/classes/ActiveRecord/AttributeMethods/Read.html#method-i-read_attribute):

```
m = MyModel.new
m.write_attribute(m.class.importable_field, "some value")
m.some_field #=> "some value"

m.read_attribute(m.class.importable_field) #=> "some value"
```

Hope that helps. This is just my personal take on this, though, there are other ways to do it (and I'd be interested to hear about them too).
My Ping command does not seem to ever finish. Does that mean it is succeeding? I am pinging a remote IP. I know very little about the Ping command. When I ping the IP, it keeps going and going... I am not using the [-t option](http://pcsupport.about.com/od/commandlinereference/p/ping-command.htm).

```
Me$ ping 137.30.124.104
PING 137.30.124.104 (137.30.124.104): 56 data bytes
64 bytes from 137.30.124.104: icmp_seq=0 ttl=62 time=3.378 ms
64 bytes from 137.30.124.104: icmp_seq=1 ttl=62 time=3.825 ms
64 bytes from 137.30.124.104: icmp_seq=2 ttl=62 time=4.882 ms
64 bytes from 137.30.124.104: icmp_seq=3 ttl=62 time=1.822 ms
64 bytes from 137.30.124.104: icmp_seq=4 ttl=62 time=4.572 ms
....
64 bytes from 137.30.124.104: icmp_seq=290 ttl=62 time=3.273 ms
```

Does that mean it is successfully pinging the IP? Or that it is trying and failing and trying again? How do I get it to stop?
That's how the ping command works. You can control it using the count switch, `-c`.

### Example

```
$ ping -c 2 skinner
PING skinner.bubba.net (192.168.1.3) 56(84) bytes of data.
64 bytes from skinner.bubba.net (192.168.1.3): icmp_req=1 ttl=64 time=1.00 ms
64 bytes from skinner.bubba.net (192.168.1.3): icmp_req=2 ttl=64 time=1.13 ms

--- skinner.bubba.net ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 1.001/1.069/1.138/0.075 ms
```

### Breakdown of output

Lines like this mean that it is successfully pinging the other host:

```
64 bytes from skinner.bubba.net (192.168.1.3): icmp_req=2 ttl=64 time=1.13 ms
```

These lines show details about each "ping" as it occurs between your host and the host you're attempting to ping.

### 6th column

The column that contains this value, `icmp_req=2`, is telling you which ICMP packet this is regarding. The `ping` command manufactures network packets. There are many types, you've probably heard of TCP or perhaps UDP packets. Another type is ICMP. ICMP is similar to SMS in cell phone networks. Its primary purpose is for command and control over the network.

### 7th column

The next interesting column is `ttl=64`. TTL - "aka. Time to Live" - means that the packet will only traverse at most 64 nodes before timing out. So if the system is more than 64 "hops" away from your system, you can't ping it, unless you increase the TTL.

### 8th column

This column shows how long, in time, the ping took to occur (in milliseconds). This would be the column that looks like this: `time=1.13 ms`.

The other columns are fairly self explanatory.

### Ping versions

Different `ping` commands are implemented differently across the various Unixes. So you need to pay attention to the version.

```
$ ping -V
ping utility, iputils-sss20100418
```

I'm on a Fedora 14, Linux system.
Troubleshooting anti-forgery token problems I have a form post that consistently gives me an anti-forgery token error. Here is my form: ``` @using (Html.BeginForm()) { @Html.AntiForgeryToken() @Html.EditorFor(m => m.Email) @Html.EditorFor(m => m.Birthday) <p> <input type="submit" id="Go" value="Go" /> </p> } ``` Here is my action method: ``` [HttpPost] [ValidateAntiForgeryToken] public ActionResult Join(JoinViewModel model) { //a bunch of stuff here but it doesn't matter because it's not making it here } ``` Here is the machineKey in web.config: ``` <system.web> <machineKey validationKey="mykey" decryptionKey="myotherkey" validation="SHA1" decryption="AES" /> </system.web> ``` And here is the error I get: ``` A required anti-forgery token was not supplied or was invalid. ``` I've read that changing users on the HttpContext will invalidate the token, but this isn't happening here. The HttpGet on my Join action just returns the view: ``` [HttpGet] public ActionResult Join() { return this.View(); } ``` So I'm not sure what's going on. I've searched around, and everything seems to suggest that it's either the machineKey changing (app cycles) or the user/session changing. What else could be going on? How can I troubleshoot this?
After help from Adam, I get the MVC source added to my project, and was able to see there are many cases that result in the same error. Here is the method used to validate the anti forgery token: ``` public void Validate(HttpContextBase context, string salt) { Debug.Assert(context != null); string fieldName = AntiForgeryData.GetAntiForgeryTokenName(null); string cookieName = AntiForgeryData.GetAntiForgeryTokenName(context.Request.ApplicationPath); HttpCookie cookie = context.Request.Cookies[cookieName]; if (cookie == null || String.IsNullOrEmpty(cookie.Value)) { // error: cookie token is missing throw CreateValidationException(); } AntiForgeryData cookieToken = Serializer.Deserialize(cookie.Value); string formValue = context.Request.Form[fieldName]; if (String.IsNullOrEmpty(formValue)) { // error: form token is missing throw CreateValidationException(); } AntiForgeryData formToken = Serializer.Deserialize(formValue); if (!String.Equals(cookieToken.Value, formToken.Value, StringComparison.Ordinal)) { // error: form token does not match cookie token throw CreateValidationException(); } string currentUsername = AntiForgeryData.GetUsername(context.User); if (!String.Equals(formToken.Username, currentUsername, StringComparison.OrdinalIgnoreCase)) { // error: form token is not valid for this user // (don't care about cookie token) throw CreateValidationException(); } if (!String.Equals(salt ?? String.Empty, formToken.Salt, StringComparison.Ordinal)) { // error: custom validation failed throw CreateValidationException(); } } ``` My problem was that condition where it compares the Identity user name with the form token's user name. In my case, I didn't have the user name set (one was null, the other was an empty string). While I doubt many will run into this same scenario, hopefully others will find it useful seeing the underlying conditions that are being checked.
ipython: how to set terminal width When I use `ipython terminal` and want to print a `numpy.ndarray` which has many columns, the lines are automatically broken somewhere around 80 characters (i.e. the width of the lines is about 80 chars):

```
z = zeros((2,20))
print z
```

Presumably, ipython expects that my terminal has 80 columns. In fact, however, my terminal has a width of 176 characters and I would like to use the full width. I have tried changing the following parameter, but this has no effect:

```
c.PlainTextFormatter.max_width = 160
```

**How can I tell `ipython` to use the full width of my terminal?**

I am using `ipython 1.2.1` on Debian Wheezy
After some digging through the code, it appears that the variable you're looking for is `numpy.core.arrayprint._line_width`, which is 75 by default. Setting it to 160 worked for me: ``` >>> numpy.zeros((2, 20)) array([[ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], [ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]]) ``` The function used by default for array formatting is `numpy.core.numeric.array_repr`, although you can change this with `numpy.core.numeric.set_string_function`.
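For completeness, recent numpy versions also expose this through the public printing API, so you don't have to poke at the private module variable (a short sketch; adjust the width to your terminal):

```
import numpy as np

# Allow printed arrays to use lines up to 160 characters wide
np.set_printoptions(linewidth=160)

print(np.zeros((2, 20)))
```

You can put this in an IPython startup file (e.g. under `~/.ipython/profile_default/startup/`) so it applies to every session.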
how to merge nodes (with the same name) to single node output using xslt? I would like to merge nodes such as:

```
<sourcePatientInfo>PID-3|1428eab4645a4ce^^^&amp;1.3.6.1.4.1.21367.2008.2.1&amp;ISO</sourcePatientInfo>
<sourcePatientInfo>PID-5|WILKINS^CHARLES^^^</sourcePatientInfo>
<sourcePatientInfo>PID-8|M</sourcePatientInfo>
```

To a single node like this (don't worry about the node value, I have it handled):

```
<sourcePatientInfo>
  <patientIdentifier> </patientIdentifier>
  <patientName> </patientName>
  <patientSex></patientSex>
</sourcePatientInfo>
```

I found a few posts:

[post 1](https://stackoverflow.com/questions/4321764/xslt-combine-multiple-nodes-into-single-node)
[Post 2](https://stackoverflow.com/questions/2775424/how-to-select-multiple-nodes-in-single-for-eachin-xslt)

But they are merging nodes with different names in the source xml. For now I have this:

```
<xsl:template match="sourcePatientInfo">
  <sourcePatientInfo>
    <xsl:choose>
      <xsl:when test="matches(., 'PID-3')">
        <patientIdentifier />
      </xsl:when>
      <xsl:when test="matches(., 'PID-5')">
        <patientName />
      </xsl:when>
      <xsl:when test="matches(., 'PID-8')">
        <patientSex />
      </xsl:when>
    </xsl:choose>
  </sourcePatientInfo>
</xsl:template>
```

I excluded some details to avoid too much code. What I get with it is 3 separate `sourcePatientInfo` elements, which is no good. Any help? Thank you!!!!
This stylesheet: ``` <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:l="http://localhost" exclude-result-prefixes="l"> <l:n id="PID-3">patientIdentifier</l:n> <l:n id="PID-5">patientName</l:n> <l:n id="PID-8">patientSex</l:n> <xsl:variable name="vNames" select="document('')/*/l:n"/> <xsl:template match="node()|@*"> <xsl:copy> <xsl:apply-templates select="node()|@*"/> </xsl:copy> </xsl:template> <xsl:template match="sourcePatientInfo"/> <xsl:template match="sourcePatientInfo[1]"> <xsl:copy> <xsl:apply-templates select=".|following-sibling::sourcePatientInfo" mode="merge"/> </xsl:copy> </xsl:template> <xsl:template match="sourcePatientInfo" mode="merge"> <xsl:apply-templates select="$vNames[@id=substring-before(current(),'|')]"> <xsl:with-param name="pCurrent" select="."/> </xsl:apply-templates> </xsl:template> <xsl:template match="l:n"> <xsl:param name="pCurrent" select="/.."/> <xsl:element name="{.}"> <xsl:value-of select="substring-after($pCurrent,'|')"/> </xsl:element> </xsl:template> </xsl:stylesheet> ``` With this input: ``` <item> <sourcePatientInfo>PID-3|1428eab4645a4ce^^^&amp;1.3.6.1.4.1.21367.2008.2.1&amp;ISO</sourcePatientInfo> <sourcePatientInfo>PID-5|WILKINS^CHARLES^^^</sourcePatientInfo> <sourcePatientInfo>PID-8|M</sourcePatientInfo> </item> ``` Output: ``` <item> <sourcePatientInfo> <patientIdentifier>1428eab4645a4ce^^^&amp;1.3.6.1.4.1.21367.2008.2.1&amp;ISO</patientIdentifier> <patientName>WILKINS^CHARLES^^^</patientName> <patientSex>M</patientSex> </sourcePatientInfo> </item> ``` **EDIT**: Applying templates to inline map's nodes for "complex" further processing.
Sum columns in mysql then use the result in where clause I want to be able to do this: ``` SELECT dept.id, (invoices.col1 + invoices.col2 + invoices.col3) as sumTotal FROM dept INNER JOIN invoices ON invoices.id_dept = dept.id WHERE sumTotal > 10000 ``` But I am getting an unknown column on using "sumTotal". Is this possible?
Use `HAVING`:

```
SELECT dept.id, (invoices.col1 + invoices.col2 + invoices.col3) as sumTotal
FROM dept
INNER JOIN invoices ON invoices.id_dept = dept.id
HAVING sumTotal > 10000
```

The problem is that the `WHERE` clause is *executed* before the `SELECT` statement. Therefore the `sumTotal` column is not yet available. The `HAVING` clause is *executed* after the `SELECT` statement. It kind of filters the results out after you have selected everything.

Bear in mind, though, that because of this, using `HAVING` is slower. It operates on the whole set of rows.

From the [MySQL documentation](http://dev.mysql.com/doc/refman/5.1/en/select.html):

> The HAVING clause is applied nearly last, just before items are sent to the client, with no optimization. (LIMIT is applied after HAVING.)
>
> The SQL standard requires that HAVING must reference only columns in the GROUP BY clause or columns used in aggregate functions. However, MySQL supports an extension to this behavior, and permits HAVING to refer to columns in the SELECT list and columns in outer subqueries as well.

---

> The HAVING clause can refer to aggregate functions, which the WHERE clause cannot:

```
SELECT user, MAX(salary) FROM users GROUP BY user HAVING MAX(salary) > 10;
```

> Do not use HAVING for items that should be in the WHERE clause.
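If the `HAVING` overhead matters, another option (a sketch using the tables from the question) is to compute the sum in a derived table and then filter it with an ordinary `WHERE`:

```
SELECT t.id, t.sumTotal
FROM (
    SELECT dept.id,
           (invoices.col1 + invoices.col2 + invoices.col3) AS sumTotal
    FROM dept
    INNER JOIN invoices ON invoices.id_dept = dept.id
) AS t
WHERE t.sumTotal > 10000;
```

This keeps the expression in one place while still letting you filter on the computed column.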
How to send data from JQuery AJAX request to Node.js server **What i want to do:** Simply send some data (json for example), to a node.js http server, using jquery ajax requests. For some reason, i can't manage to get the data on the server, cause it never fires the 'data' event of the request. **Client code:** ``` $.ajax({ url: server, dataType: "jsonp", data: '{"data": "TEST"}', jsonpCallback: 'callback', success: function (data) { var ret = jQuery.parseJSON(data); $('#lblResponse').html(ret.msg); }, error: function (xhr, status, error) { console.log('Error: ' + error.message); $('#lblResponse').html('Error connecting to the server.'); } }); ``` **Server code:** ``` var http = require('http'); http.createServer(function (req, res) { console.log('Request received'); res.writeHead(200, { 'Content-Type': 'text/plain' }); req.on('data', function (chunk) { console.log('GOT DATA!'); }); res.end('callback(\'{\"msg\": \"OK\"}\')'); }).listen(8080, '192.168.0.143'); console.log('Server running at http://192.168.0.143:8080/'); ``` As i said, it never gets into the 'data' event of the request. **Comments:** 1. It logs the 'Request received' message; 2. The response is fine, im able to handle it back on the client, with data; Any help? Am i missing something? Thank you all in advance. **EDIT:** Commented final version of the code, based on the answer: **Client code:** ``` $.ajax({ type: 'POST' // added, url: server, data: '{"data": "TEST"}', //dataType: 'jsonp' - removed //jsonpCallback: 'callback' - removed success: function (data) { var ret = jQuery.parseJSON(data); $('#lblResponse').html(ret.msg); }, error: function (xhr, status, error) { console.log('Error: ' + error.message); $('#lblResponse').html('Error connecting to the server.'); } }); ``` **Server code:** ``` var http = require('http'); http.createServer(function (req, res) { console.log('Request received'); res.writeHead(200, { 'Content-Type': 'text/plain', 'Access-Control-Allow-Origin': '*' // implementation of CORS }); req.on('data', function (chunk) { console.log('GOT DATA!'); }); res.end('{"msg": "OK"}'); // removed the 'callback' stuff }).listen(8080, '192.168.0.143'); console.log('Server running at http://192.168.0.143:8080/'); ``` Since i want to allow Cross-Domain requests, i added an implementation of [CORS](http://en.wikipedia.org/wiki/Cross-Origin_Resource_Sharing). Thanks!
To get the 'data' event to fire on the node.js server side, you have to POST the data. That is, the 'data' event only responds to POSTed data. Specifying 'jsonp' as the data format forces a GET request, since jsonp is defined in the jquery documentation as: > > "jsonp": Loads in a JSON block using JSONP. Adds an extra "?callback=?" to the end of your URL to specify the callback > > > Here is how you modify the client to get your data event to fire. ### Client: ``` <html> <head> <script language="javascript" type="text/javascript" src="jquery-1.8.3.min.js"></script> </head> <body> response here: <p id="lblResponse">fill me in</p> <script type="text/javascript"> $(document).ready(function() { $.ajax({ url: 'http://192.168.0.143:8080', // dataType: "jsonp", data: '{"data": "TEST"}', type: 'POST', jsonpCallback: 'callback', // this is not relevant to the POST anymore success: function (data) { var ret = jQuery.parseJSON(data); $('#lblResponse').html(ret.msg); console.log('Success: ') }, error: function (xhr, status, error) { console.log('Error: ' + error.message); $('#lblResponse').html('Error connecting to the server.'); }, }); }); </script> </body> </html> ``` Some helpful lines to help you debug the server side: ### Server: ``` var http = require('http'); var util = require('util') http.createServer(function (req, res) { console.log('Request received: '); util.log(util.inspect(req)) // this line helps you inspect the request so you can see whether the data is in the url (GET) or the req body (POST) util.log('Request recieved: \nmethod: ' + req.method + '\nurl: ' + req.url) // this line logs just the method and url res.writeHead(200, { 'Content-Type': 'text/plain' }); req.on('data', function (chunk) { console.log('GOT DATA!'); }); res.end('callback(\'{\"msg\": \"OK\"}\')'); }).listen(8080); console.log('Server running on port 8080'); ``` The purpose of the data event on the node side is to build up the body - it fires multiple times per a single http request, once for each chunk of data that it receives. This is the asynchronous nature of node.js - the server does other work in between receiving chunks of data.
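If you actually want to read the posted body on the server, the usual pattern (a sketch in the same style as the server above) is to accumulate the chunks and parse them once the 'end' event fires:

```
var body = '';
req.on('data', function (chunk) {
    body += chunk; // each 'data' event delivers one chunk of the POSTed body
});
req.on('end', function () {
    var parsed = JSON.parse(body); // e.g. {"data": "TEST"}
    console.log('Got data: ' + parsed.data);
    res.end('{"msg": "OK"}');
});
```

Note that `res.end()` moves into the 'end' handler here, so the response isn't sent before the whole body has arrived.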
Node.js and Typescript, how to dynamically access imported modules I am working on creating a discord bot in TypeScript. I wanted to create a generic command dispatcher and here is my work so far:

**app.ts:**

```
import * as Discord from 'discord.js';
import * as config from '../config'
import * as commands from './Commands/index'

const token : string = config.Token;
const _client = new Discord.Client();

_client.on('message', (msg) => {
    let args : Array<string> = msg.content.split(' ')
    let command : string = args.shift() || " ";
    if(!command.startsWith("!")) return;
    else{
        commands[`${command.toLowerCase().substring(1)}`]
    }
})
```

**Commands/Index.ts**

```
export {default as ping} from './ping';
export {default as prong} from './prong';
```

**Ping.ts** : same structure for all commands

```
import { Message } from "discord.js";

export default {
    name : 'ping',
    description: 'Ping!',
    execute(message: Message, args: Array<string>){
        message.channel.send('Pong.');
    }
}
```

When indexing the commands import I can successfully call the right execute function using this: `commands['ping'].execute()` however, when trying to dynamically index it like this: `commands[command].execute()` I receive the following error:

> `Element implicitly has an 'any' type because expression of type 'string' can't be used to index type 'typeof import("c:/Users/alexs/Desktop/Discord Bot/src/Commands/index")'. No index signature with a parameter of type 'string' was found on type 'typeof import("c:/Users/alexs/Desktop/Discord Bot/src/Commands/index")'`

Is there any way I can typecast the command import as some kind of object or collection? If not, is there a way I could create some kind of accessor to make this work? I am newer to typescript and am curious what is possible.
I suggest a different approach for your commands, this approach fixes 2 things:

- You don't forget to export files properly
- You get type safe commands

Let's first create an interface for your commands, this interface describes the metadata, add as many as you want

```
import { Message } from "discord.js"

export interface Command {
  name: string
  description: string
  // Making `args` optional
  execute(message: Message, args?: string[]): any
}
```

Now that you have a shape for your command, let's make sure all your commands have the right shape

```
import { Message } from "discord.js"
import { Command } from "./types"

// This will complain if you don't provide the right types for each property
const command: Command = {
  name: "ping",
  description: "Ping!",
  execute(message: Message, args: string[]) {
    message.channel.send("Pong")
  }
}

export = command
```

The next part is loading your commands, discord.js has [glob](https://www.npmjs.com/package/glob) as a dependency which can help you read files in a directory easily, let's use some utilities so we can have nice async / await usage

```
import glob from "glob" // included by discord.js
import { promisify } from "util" // Included by default
import { Command } from "./types"

// Make `glob` return a promise
const globPromise = promisify(glob)

const commands: Command[] = []

client.once("ready", async () => {
  // Load all JavaScript / TypeScript files so it works properly after compiling
  const commandFiles = await globPromise(`${__dirname}/commands/*.{js,ts}`)

  for (const file of commandFiles) {
    // I am not sure if this works, you could go for require(file) as well
    const command = await import(file) as Command
    commands.push(command)
  }
})

const prefix = "!"

client.on("message", message => {
  // Prevent the bot from replying to itself or other bots
  if (message.author.bot) {
    return
  }

  const [commandName, ...args] = message.content
    .slice(prefix.length)
    .split(/ +/)

  const command = commands.find(c => c.name === commandName)

  if (command) {
    command.execute(message, args)
  }
})
```

I hope this gives you a good starting point and shows you the power of TypeScript
Calculate equity changes using percentage change of price I have a data frame that looks like this: ``` Ret % 0 0.02 1 0.01 2 0.04 3 -0.02 4 -0.01 5 0.04 6 0.02 7 -0.01 8 0.04 9 -0.02 10 0.01 11 0.04 ``` I need to create a column named 'Equity' that shows how equity changes from a starting amount every time the percentage change (return %) of the first column is applied to this amount. The result should look like this assuming that the starting amount is 100: ``` Ret % Equity 0 0.02 102.00 1 0.01 103.02 2 0.04 107.14 3 -0.02 105.00 4 -0.01 103.95 5 0.04 108.11 6 0.02 110.27 7 -0.01 109.17 8 0.04 113.53 9 -0.02 111.26 10 0.01 112.37 11 0.04 116.87 ``` I found a solution using a "for" loop however I need to increase the performance so I'm looking for a vectorized solution with a panda/numpy method. Is there a way to do this? Thank you in advance
Try using [`df.cumprod`](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.cumprod.html): ``` base = 100 df['Equity'] = (1 + df['Ret %']).cumprod()*base print(df) Ret % Equity 0 0.02 102.000000 1 0.01 103.020000 2 0.04 107.140800 3 -0.02 104.997984 4 -0.01 103.948004 5 0.04 108.105924 6 0.02 110.268043 7 -0.01 109.165362 8 0.04 113.531977 9 -0.02 111.261337 10 0.01 112.373951 11 0.04 116.868909 ``` Or [`np.cumprod`](https://numpy.org/doc/stable/reference/generated/numpy.cumprod.html) if you prefer: ``` import numpy as np df['Equity'] = np.cumprod(1+df['Ret %'])*base ``` You could round these values of course to end up with your expected output: ``` df['Equity'] = df['Equity'].round(2) ```
View is not getting notified when value of static Property Changes I have a ViewModelBase class as follows: ``` public class ViewModelBase : INotifyPropertyChanged { public event PropertyChangedEventHandler PropertyChanged; public void OnPropertyChanged(string propertyName) { if (PropertyChanged != null) { PropertyChanged(this, new PropertyChangedEventArgs(propertyName)); } } public static event PropertyChangedEventHandler GlobalPropertyChanged = delegate { }; public static void OnGlobalPropertyChanged(string propertyName, Type className) { GlobalPropertyChanged(className,new PropertyChangedEventArgs(propertyName)); } } ``` Now, I have another viewModel called GroupViewModel which inherits ViewModelBase: ``` public class GroupViewModel : ViewModelBase { public GroupsViewModel() { CurrentGroup = new Group(); } private static Group _currentGroup; public static Group CurrentGroup { get { return _currentGroup; } set { _currentGroup = value; OnGlobalPropertyChanged("CurrentGroup", typeof(Group)); } } } ``` Now in Groups.xaml Page : ``` <Grid DataContext="{Binding CurrentGroup}"> ..... ..... <TextBlock Text="{Binding GroupName, TargetNullValue=''}" /> ..... ..... </Grid> ``` I have another ViewModel called MainWindowViewModel, I try to save CurrentGroup to Database like below code and then I set `CurrentGroup = new Group();` but in Group.xaml the text of TextBox is not cleared : ``` Group group = GroupViewModel.CurrentGroup; db.Groups.Add(group); db.SaveChanges(); GroupViewModel.CurrentGroup = new Group(); ``` **Update:** If I use the below code in GroupsViewModel, the output is as expected. I mean View is updated when Static property changes. ``` public static event EventHandler<PropertyChangedEventArgs> StaticPropertyChanged = delegate { }; private static void NotifyStaticPropertyChanged(string propertyName) { StaticPropertyChanged(null, new PropertyChangedEventArgs(propertyName)); } ``` If I use that same code in ViewModelBase (Please note that GroupsViewModel inherits ViewModelBase) then View is not updated when value of static property changes. Also I have marked the NotifyStaticPropertyChanged as public in this case to avoid the compile time errors like errors about protection level.
For `Static PropertyChanged` you have to create generic static event like this in your class: ``` public static event EventHandler<PropertyChangedEventArgs> StaticPropertyChanged = delegate { }; private static void NotifyStaticPropertyChanged(string propertyName) { StaticPropertyChanged(null, new PropertyChangedEventArgs(propertyName)); } ``` and you have to call like you use to do instance properties: ``` NotifyStaticPropertyChanged("CurrentGroup"); ``` But main catch is in XAML where you are binding - > > You'll use parentheses around the namespace, class, and property > because WPF binding engine parse the path as ClassName.PropertyName > rather than PropertyName.PropertyName. > > > So, it will be like this: ``` <Grid DataContext="{Binding Path=(local:GroupViewModel.CurrentGroup)}"> ..... ..... <TextBlock Text="{Binding GroupName, TargetNullValue=''}" /> ..... ..... </Grid> ``` Source here [INPC for static properties](http://10rem.net/blog/2011/11/29/wpf-45-binding-and-change-notification-for-static-properties). --- **UPDATE** > > If I use that same code in ViewModelBase (Please note that > GroupsViewModel inherits ViewModelBase) then View is not updated when > value of static property changes. > > > **`StaticPropertyChangedEvent` have to be in same class where property resides**. It won't work like traditional `INotifyPropertyChanged` for instance properties. I don't have any MSDN documentation to assert that but I verified it by tweaking event code a bit to see if XAML is hooking onto `StaticPropertyChangedEvent` from XAML. Replace event code to this and you can see yourself: ``` private static event EventHandler<PropertyChangedEventArgs> staticPC = delegate { }; public static event EventHandler<PropertyChangedEventArgs> StaticPropertyChanged { add { staticPC += value; } remove { staticPC -= value; } } protected static void NotifyStaticPropertyChanged(string propertyName) { staticPC(null, new PropertyChangedEventArgs(propertyName)); } ``` Put a breakpoint on add and you will see it will get hit since WPF binding engine internally hook to it to listen to static property changed events. But as soon as you move that to base class ViewModelBase, breakpoint won't get hit. Since, WPF haven't hook to it so any changes in property won't update UI obviously.
Why does adding floats give a different result depending on how it's performed I have run into an issue when adding floats in C#. The following is taken from the immediate window in Visual Studio. I have declared f as float, and now do a simple calculation in two different ways:

```
f = 5.75f + 0.075f*37
8.525001
f = 0.075f*37
2.775
f = f + 5.75f
8.525
```

As you can see, the results differ between the two ways of doing the same calculation. How can that be? As I see it, the numbers are nowhere near the precision limit of the float, so what is going on?
> > As i see it, the numbers are nowhere near the precision limit of the float > > > They really are - or rather, the difference is. The *exact* values involved are 8.52500057220458984375 (rounded up to 8.525001 for display) and 8.5249996185302734375 (rounded up to 8.525 for display). The difference between those two values is 0.0000009536743164062. Given that `float` only has 7 decimal digits of precision (as per the [documentation](http://msdn.microsoft.com/en-us/library/system.single.aspx)), that's a pretty reasonable inaccuracy, in my view. It's not actually the ordering of the operations that matters here. You get *exactly* the same results if you switch them round: ``` float f = 0.075f * 37 + 5.75f; Console.WriteLine(DoubleConverter.ToExactString(f)); f = 5.75f; Console.WriteLine(DoubleConverter.ToExactString(f)); f = f + 0.075f * 37; Console.WriteLine(DoubleConverter.ToExactString(f)); ``` (Using my [`DoubleConverter`](http://pobox.com/~skeet/csharp/DoubleConverter.cs) class.) The difference is that in the first version, *all* the information is available in one go - and actually at compile-time. The compiler does the arithmetic, and I suspect it actually performs it at a higher accuracy, and then reduces the accuracy of the overall *result* to 32 bits afterwards.
Reading a file one int at a time in C++ I am working on a program that requires reading integers from a file into a 2D array. The concept is easy and I'm generally OK with file I/O. My problem is that the file contains 20 rows of 18 numbers. The numbers are not separated by white space. An example is:

123456789987654321
192837465564738291

I have to read each individual number into the 2D array. I have created a for loop, but I'm not getting the required output from the file I/O part of the loop. Is there a way to do this, or do I need to use a workaround such as reading the line into a string/array and dividing it? It's driving me mad. In the code, infile has been opened and tested, GRIDSIZE has a value of 9, and grid is the 2D array:

```
int n;
for(int i=0; i<GRIDSIZE; i++)
{
    for(int j=0; j<GRIDSIZE; j++)
    {
        infile.get()>>grid[i][j];//This is causing the problem
        // infile >> n //Also tried this, not working
        // grid[i][j] = n;
        cout<<grid[i][j]<<endl;
    }
}
```
Calling [get() on an ifstream](http://www.cplusplus.com/reference/iostream/istream/get/) returns a single character, casted to an int. So try changing ``` infile.get()>>grid[i][j]; ``` to ``` grid[i][j] = infile.get(); ``` That will give you the ASCII value of the digit. You can then use [isdigit()](http://www.cplusplus.com/reference/clibrary/cctype/isdigit/) (as noted by stefaanv) to make sure you actually have a digit, and then just subtract 0x30 (= 48 or '0') from them to get the integer values (as evident from an [ASCII chart](http://www.antrak.org.tr/gazete/062006/cizimler/tolga-tastan/ascii_table.jpg), the digits go from 0x30 to 0x39). So for instance: ``` int n = infile.get(); if(isdigit(n)) { grid[i][j] = n - '0'; } ```
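Putting that together, here is a minimal sketch of the full read loop (assuming an already-open `std::ifstream` named `infile` and a square `GRIDSIZE`×`GRIDSIZE` grid as in the question); it also skips the newline characters between rows:

```cpp
#include <cctype>
#include <fstream>
#include <iostream>

const int GRIDSIZE = 9;
int grid[GRIDSIZE][GRIDSIZE];

void readGrid(std::ifstream& infile)
{
    for (int i = 0; i < GRIDSIZE; i++)
    {
        for (int j = 0; j < GRIDSIZE; j++)
        {
            int c = infile.get();                 // one character, as an int
            while (c != EOF && !std::isdigit(c))  // skip '\n', '\r', spaces, ...
                c = infile.get();
            if (c == EOF)
                return;                           // ran out of input
            grid[i][j] = c - '0';                 // ASCII digit -> 0..9
            std::cout << grid[i][j] << std::endl;
        }
    }
}
```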
Cache behaviour in Compute Capability 7.5 These are my assumptions: 1. There are two types of loads, cached and uncached. In the first one, the traffic goes through L1 and L2, while in the second one, the traffic goes only through L2. 2. The default behaviour in Compute Capability 6.x and 7.x are cached accesses. 3. A L1 cache line is 128 bytes and a L2 cache line is 32 bytes, so for every L1 transaction generated, there should be four L2 transactions (one per each sector.) 4. In Nsight, a SM->TEX Request means a warp-level instruction merged from 32 threads. L2->TEX Returns and TEX->SM Returns is a measure of how many sectors are transfered between each memory unit. Assuming Compute Capability 7.5, these are my questions: 1. The third assumption seems to imply that L2->TEX Returns should always be a multiple of four for global cached loads, but that's not always the case. What is happening here? 2. Is there still a point in marking pointers with const and \_\_restrict\_\_ qualifiers? That used to be a hint to the compiler that the data is read-only and therefore can be cached in L1/texture cache, but now all data is cached there, both read-only and not read-only. 3. From my fourth assumption, I would think that whenever TEX->SM Returns is greater than L2->TEX Returns, the difference comes from cache hits. That's because when there's a cache hit, you get some sectors read from L1, but none from L2. Is this true?
CC 6.x/7.x - L1 cache line size is 128 bytes divided into 4 32 byte sectors. On a miss only addressed sectors will be fetched from L2. - L2 cache line size is 128 bytes divided into 4 32 byte sectors. - CC 7.0 (HBM) 64B promotion is enabled. If there is a miss to the lower 64 bytes of the cache line the lower 64 bytes will be fetched from DRAM. If there is a miss to the upper 64 bytes of the cache line then the upper 64 bytes will be fetched. - CC 6.x/7.5 only accessed 32B sectors will be fetched from DRAM. - In terms of L1 cache policy - CC 6.0 has load caching enabled by default - CC 6.x has load caching disabled by default - see programming guide - CC 7.x has load caching enabled by default - see PTX for details on cache control In Nsight Compute the term requests varies between 6.x and 7.x. - For 5.x-6.x the number of requests per instruction varied by the type of operation and the width of the data. For example 32-bit load is 8 threads/request, 64-bit load is 4 threads/request, and 128-bit load is 2 threads/request. - For 7.x requests should be equivalent to instructions unless access pattern has address divergence that causes serialization. Answering your CC 7.5 Questions > > 1. The third assumption seems to imply that L2->TEX Returns should always be a multiple of four for global cached loads, but that's not > always the case. What is happening here? > > > The L1TEX unit will only fetch the missed 32B sectors in a cache line. > > 2. Is there still a point in marking pointers with const and **restrict** qualifiers? That used to be a hint to the compiler that the data is read-only and therefore can be cached in L1/texture cache, > but now all data is cached there, both read-only and not read-only. > > > The compiler can perform additional optimizations if the data is known to be read-only. - See [PTX Cache Operators](https://docs.nvidia.com/cuda/parallel-thread-execution/index.html#cache-operators) - See [PTX Memory Consistency Model](https://docs.nvidia.com/cuda/parallel-thread-execution/index.html#memory-consistency-model) > > 3. From my fourth assumption, I would think that whenever TEX->SM Returns is greater than L2->TEX Returns, the difference comes from > cache hits. That's because when there's a cache hit, you get some > sectors read from L1, but none from L2. Is this true? > > > L1TEX to SM return B/W is 128B/cycle. L2 to SM return B/W is in 32B sectors. The Nsight Compute Memory Workload Analysis | L1/TEX Cache table shows - Sector Misses to L2 (32B sectors) - Returns to SM (cycles == 1-128B)
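Regarding question 2, here is a minimal sketch (names are illustrative, not from the original code) of what the qualifiers still buy you: with `const` and `__restrict__` the compiler knows the input cannot alias the output, so it is free to reorder, batch, or hoist the loads even though the L1/TEX caching behaviour on CC 7.x is the same either way:

```cuda
__global__ void scale(const float* __restrict__ in,
                      float* __restrict__ out,
                      float s, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = s * in[i];  // the no-alias guarantee lets the compiler optimize this load/store pair
}
```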
Get File from Google Drive using Intent I am trying to upload a document from my app. Everything works fine except when I choose a file from Drive, where the intent data looks like this:

```
data=Intent { act=android.intent.action.VIEW dat=content://com.google.android.apps.docs.storage.legacy/enc=ckpgt5KcEEF_JYniJQafRV_5pEnu_D5UAI1WF-Lu6h2Z_Vw4 (has extras) }}
```

Does anybody know how to handle this file? I have already handled all other files and images; I'm only facing a problem with Google Drive files. I am getting `content://com.google.android.apps.docs.storage.legacy/enc=ckpgt5KcEEF_JYniJQafRV_5pEnu_D5UAI1WF-Lu6h2Z_Vw4` as the intent data Uri.
Handle `Uri` received by Google-Drive files when selected through file chooser. as stated earlier it receives **Virtual File Uri**. I found this sample code simple and easy to understand. the given code sample worked for me .hope it works in your case. 1.So detect this Uri is received by google drive. ``` public static File getFileFromUri(final Context context, final Uri uri) throws Exception { if (isGoogleDrive(uri)) // check if file selected from google drive { return saveFileIntoExternalStorageByUri(context, uri); }else // do your other calculation for the other files and return that file return null; } public static boolean isGoogleDrive(Uri uri) { return "com.google.android.apps.docs.storage.legacy".equals(uri.getAuthority()); } ``` 2.if yes,the uri is stored to external path(here its root directory u can change it according to your need) and the file with that uri is created. ``` public static File saveFileIntoExternalStorageByUri(Context context, Uri uri) throws Exception { InputStream inputStream = context.getContentResolver().openInputStream(uri); int originalSize = inputStream.available(); BufferedInputStream bis = null; BufferedOutputStream bos = null; String fileName = getFileName(context, uri); File file = makeEmptyFileIntoExternalStorageWithTitle(fileName); bis = new BufferedInputStream(inputStream); bos = new BufferedOutputStream(new FileOutputStream( file, false)); byte[] buf = new byte[originalSize]; bis.read(buf); do { bos.write(buf); } while (bis.read(buf) != -1); bos.flush(); bos.close(); bis.close(); return file; } public static String getFileName(Context context, Uri uri) { String result = null; if (uri.getScheme().equals("content")) { Cursor cursor = context.getContentResolver().query(uri, null, null, null, null); try { if (cursor != null && cursor.moveToFirst()) { result = cursor.getString(cursor.getColumnIndex(OpenableColumns.DISPLAY_NAME)); } } finally { cursor.close(); } } if (result == null) { result = uri.getPath(); int cut = result.lastIndexOf('/'); if (cut != -1) { result = result.substring(cut + 1); } } return result; } public static File makeEmptyFileIntoExternalStorageWithTitle(String title) { String root = Environment.getExternalStorageDirectory().getAbsolutePath(); return new File(root, title); } ``` **Note:Here the virtual file is retrieved from Intent `getData()` and used in `context.getContentResolver().openInputStream(intent.getData())`, this will return an `InputStream`. It's handle to get selected file from google drive.** [for more info go through this link](https://gist.github.com/walkingError/915c73ae48882072dc0e8467a813046f)
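For completeness, here is a hypothetical call site showing where `getFileFromUri()` would typically be invoked; `PICK_FILE_REQUEST` and the `FileUtils` class name are assumptions, not part of the original code:

```java
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    super.onActivityResult(requestCode, resultCode, data);
    if (requestCode == PICK_FILE_REQUEST && resultCode == RESULT_OK && data != null) {
        try {
            // Works for Google Drive picks as well as regular documents,
            // because getFileFromUri() branches on the Uri authority.
            File file = FileUtils.getFileFromUri(this, data.getData());
            if (file != null) {
                // upload the file or show its name in the UI here
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
```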
Greek/latin scientific JLabel in Java Swing application For a scientific application I want to design an input form which lets the user enter certain parameters. Some of them are designated using greek letters, some of them have latin letters. The parameter names should be displayed using ordinary `JLabel` controls. On Windows, the *Tahoma* font (which is used for Labels by default) contains both latin and greek letters, so I simply set the Text property of the label to a greek (unicode) string and everything works fine. I'm wondering if this works also without modifications on Linux and OSX systems resp. for which Java/OS versions this would work. Also I'm curious if there's an easy way to show subscripts in labels ("\eta\_0" in TeX), but this is not that important for my application ...
I have no doubt that the vast majority of Unicode fonts include the Greek block, on all platforms and for all locales. When Unicode blocks are missing, it's for space-saving concerns, and the 50 or so characters in the Greek block are nothing compared with the thousands of East Asian characters (which my last Linux desktop actually included by default, btw). Speaking of fancy Unicode: [<http://en.wikipedia.org/wiki/Unicode_subscripts_and_superscripts>](http://en.wikipedia.org/wiki/Unicode_subscripts_and_superscripts) Of course, despite any confidence that you or I may have, you should test your application on as many configurations as you can before deploying. Java tries its best, but in practice I've always found a few things that needed tweaking.
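A minimal sketch of both ideas — plain Greek letters plus the Unicode-subscript (or HTML) route for the η₀-style parameter names; the label texts are only illustrative:

```java
import javax.swing.JLabel;

// \u03B7 is the Greek small letter eta, \u2080 is SUBSCRIPT ZERO,
// so "\u03B7\u2080" renders roughly like TeX's \eta_0.
JLabel etaLabel   = new JLabel("\u03B7\u2080 (reference viscosity)");
JLabel alphaLabel = new JLabel("\u03B1 = 0.05");

// If the chosen font lacks the subscript glyph, JLabel's built-in HTML support is a fallback:
JLabel htmlLabel  = new JLabel("<html>\u03B7<sub>0</sub></html>");
```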
cryptojs: How to generate AES passphrase I want to generate a 256bit password for my AES encryption. When I check the password after the encryption it is different from my initial password. What am I doing wrong? Or is there some security mechanism I am not aware of? My code: ``` password=Generate_key(); var encrypted = CryptoJS.AES.encrypt("Message", password); //Those two should be the same document.write(password+"<br>"); document.write(encrypted.key); function Generate_key() { var key = ""; var hex = "0123456789abcdef"; for (i = 0; i < 64; i++) { key += hex.charAt(Math.floor(Math.random() * 16)); //Initially this was charAt(chance.integer({min: 0, max: 15})); } return key; } ``` The output is i.e. > > 0b05308c9a00f07044416bad7a51bacd282fc5c0c999551a4ff15c302b268b20 > 4df875993770411044fb35953166ee7833c32ca0741e9fec091dfa10138039e8 > > > Is this normal or am I doing something wrong here? Thanks for help!
Encryption is done with a key, which is a set of binary bits, not a password, which implies a human-readable string. To go from a password to a key, one can use a Password Based Key Derivation Function, such as PBKDF2. Crypto-JS already has a [PBKDF2 function built-in](https://code.google.com/p/crypto-js/#PBKDF2), i.e. ``` <script src="http://crypto-js.googlecode.com/svn/tags/3.1.2/build/rollups/pbkdf2.js"></script> <script> var salt = CryptoJS.lib.WordArray.random(128/8); var key128Bits = CryptoJS.PBKDF2("Secret Passphrase", salt, { keySize: 128/32 }); var key256Bits = CryptoJS.PBKDF2("Secret Passphrase", salt, { keySize: 256/32 }); var key512Bits = CryptoJS.PBKDF2("Secret Passphrase", salt, { keySize: 512/32 }); var key512Bits1000Iterations = CryptoJS.PBKDF2("Secret Passphrase", salt, { keySize: 512/32, iterations: 1000 }); </script> ``` In general, use as high an iteration count as you can get away with. Salt should be a random value, as in the example above; you'll need, of course, to store that value along with the iteration count in order to get the same key given the same passphrase.
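A short sketch of then feeding the derived key to AES (this also needs the aes.js rollup). Note that when the key is a `WordArray` rather than a passphrase string, CryptoJS will not derive an IV for you, so generate one and store it, together with the salt, next to the ciphertext:

```javascript
// Hedged sketch: derive the key once, keep salt + iv alongside the ciphertext.
var salt = CryptoJS.lib.WordArray.random(128 / 8);
var iv   = CryptoJS.lib.WordArray.random(128 / 8);
var key  = CryptoJS.PBKDF2("Secret Passphrase", salt, { keySize: 256 / 32, iterations: 1000 });

var encrypted = CryptoJS.AES.encrypt("Message", key, { iv: iv });

// To decrypt, re-derive (or reuse) the same key and pass the same iv:
var decrypted = CryptoJS.AES.decrypt(encrypted, key, { iv: iv });
console.log(decrypted.toString(CryptoJS.enc.Utf8)); // "Message"
```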
Why this map function does not give traits' simple names I try to get names of all trait a class extends using `getInterfaces` which returns an array of trait's names. When I manually access each member of the array, the method `getName` returns simple names like this ``` trait A trait B class C() extends A, B val c = C() val arr = c.getClass.getInterfaces arr(0).getName // : String = A arr(1).getName // : String = B ``` However, when I use `map` function on `arr`. The resulting array contains a cryptic version of trait's names ``` arr.map(t => t.getName) // : Array[String] = Array(repl$.rs$line$1$A, repl$.rs$line$2$B) ``` The goal of this question is not about how to get the resulting array that contains simple names (for that purpose, I can just use `arr.map(t => t.getSimpleName)`.) What I'm curious about is that why accessing array manually and using a `map` do not yield a compatible result. Am I wrong to think that both ways are equivalent?
I believe you run things in Scala REPL or Ammonite. When you define: ``` trait A trait B class C() extends A, B ``` classes `A`, `B` and `C` aren't defined in top level of root package. REPL creates some isolated environment, compiles the code and loads the results into some inner "anonymous" namespace. Except this is not true. Where this bytecode was created is reflected in class name. So apparently there was something similar (not necessarily identical) to ``` // repl$ suggest object object repl { // .rs sound like nested object(?) object rs { // $line sounds like nested class class line { /* ... */ } // $line$1 sounds like the first anonymous instance of line new line { trait A } // import from `above // $line$2 sounds like the second anonymous instance of line new line { trait B } // import from above //... } } ``` which was made because of how scoping works in REPL: new line creates a new scope with previous definitions seen and new added (possibly overshadowing some old definition). This could be achieved by creating a new piece of code as code of new anonymous class, compiling it, reading into classpath, instantiating and importing its content. Byt putting each new line into separate class REPL is able to compile and run things in steps, without waiting for you to tell it that the script is completed and closed. When you are accessing class names with runtime reflection you are seeing the artifacts of how things are being evaluated. One path might go trough REPLs prettifiers which hide such things, while the other bypass them so you see the raw value as JVM sees it.
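A quick way to convince yourself is to compile the same definitions in an ordinary source file, where no REPL wrapper scopes are involved; the `Demo` object below is just a container so the sketch compiles as a single file:

```scala
object Demo:
  trait A
  trait B
  class C() extends A, B

  def main(args: Array[String]): Unit =
    val names = C().getClass.getInterfaces.map(_.getName)
    // Prints "Demo$A, Demo$B": the nesting inside Demo is still visible,
    // but the repl$.rs$line$... wrappers are gone.
    println(names.mkString(", "))
```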
Cannot access a disposed object when running function more than once I have an ASP.NET Core 2.1 application and I am getting the error: > > Cannot access a disposed object. > Object name: 'Amazon.S3.AmazonS3Client' > > > when trying to call my AWS S3 read object service. This service works the first time and fails on the second and subsequent times. I have the following in startup.cs: ``` services.AddSingleton<IAWSService, AWSService>(); services.AddAWSService<IAmazonS3>(); ``` (I have tried configuring AsScoped() to no effect.) This is the function that is causing problems: ``` public class AWSService : IAWSService { private readonly IAmazonS3 _s3Client; public AWSService(IAmazonS3 s3Client) { _s3Client = s3Client; } public async Task<byte[]> ReadObjectFromS3Async(string bucketName, string keyName) { try { GetObjectRequest request = new GetObjectRequest { BucketName = bucketName, Key = keyName }; using (_s3Client) { MemoryStream ms = new MemoryStream(); using (var getObjectResponse = await _s3Client.GetObjectAsync(request)) { getObjectResponse.ResponseStream.CopyTo(ms); } var download = new FileContentResult(ms.ToArray(), "application/pdf"); return download.FileContents; } } catch (AmazonS3Exception e) { Console.WriteLine("Error encountered ***. Message:'{0}' when writing an object", e.Message); } catch (Exception e) { Console.WriteLine("Unknown encountered on server. Message:'{0}' when writing an object", e.Message); } return null; } } ``` } The first time I run the function, a breakpoint shows that this.s3client is not disposed, however subsequent attempts at running this function show that the s3client is disposed, hence the error. **Update** I'm calling this function from a controller: ``` public class CorrespondenceItemController : Controller { private IAWSService _awsService; public CorrespondenceItemController(IAWSService aWSService) { _awsService = aWSService; } public async Task<ActionResult<dynamic>> Send([FromBody]CorrespondenceItemSendViewModel model) { var attachment = await _awsService.ReadObjectFromS3Async(bucket, key) } } ```
That's because you wrap the `_s3Client`'s usage in a `using` block, which disposes the instance afterwards. Don't do that. Let your IoC container handle that for you, by not explicitly or implicitly disposing your `_s3Client`. It's fine to register your wrapper as a singleton, given the answer to [Is the Amazon .NET AWS SDK's AmazonS3 thread safe?](https://stackoverflow.com/questions/7743488/is-the-amazon-net-aws-sdks-amazons3-thread-safe) is "yes". This means that your application has one instance of your `AWSService` at any given time, and that class will be using the same instance of `IAmazonS3` for all requests. Then you only need to dispose it at the end of your application lifetime, and your IoC container will handle that.
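A sketch of the same method with that change applied — the injected client is left untouched and only the response and stream are disposed (error handling elided for brevity):

```csharp
public async Task<byte[]> ReadObjectFromS3Async(string bucketName, string keyName)
{
    var request = new GetObjectRequest { BucketName = bucketName, Key = keyName };

    // No using block around _s3Client: its lifetime belongs to the container, not this method.
    using (var response = await _s3Client.GetObjectAsync(request))
    using (var ms = new MemoryStream())
    {
        await response.ResponseStream.CopyToAsync(ms);
        return ms.ToArray();
    }
}
```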
Delphi XE4 IDE, how to always hide the bottom panes of the Object Inspector The two panes located at the bottom of the Object Inspector have no use at all and consume screen real estate unnecessarily, as illustrated in the screenshot below. How can I disable those two panes so that they stay hidden even after restarting the IDE? Built-in options or third-party plugins would be OK with me. Thanks. [![enter image description here](https://i.stack.imgur.com/qeeNW.png)](https://i.stack.imgur.com/qeeNW.png)
The XE4 code below shows how to hide the items you want to remove: They are instances of the classes `THotCommands` and `TDescriptionPane`. **Update** The original version of this answer required a package including an add-in form and a button to refresh the Object Inspector to hide the two unwanted items. In the code below, I've removed the form entirely and the hiding of the items should now be fully automatic. To achieve this, I replaced the previous IDENotifier by a `DesignNotification` object and use its `SelectionChanged` event to invoke the code which hides the `THotCommands` and `TDescriptionPane` controls. `TDesignNotification` implements the `IDesignNotification` interface in DesignIntf.Pas The other detail which turned out to be critical to getting the hiding process to work automatically is to set the `Height` of the `THotCommands` and `TDescriptionPane` controls to 0, because the IDE seems to reset their `Visible` property to `True` after the component selection in the OI is changed. Fortunately, whatever code does that does not also reset their Heights to a non-zero value. Obviously, to use you add a unit containing the code to a package (.Dpk) file and then compile and install the package in the IDE. Code: ``` interface uses [...]ToolsApi, DesignIntf; type TDesignNotification = class(TInterfacedObject, IDesignNotification) procedure ItemDeleted(const ADesigner: IDesigner; AItem: TPersistent); procedure ItemInserted(const ADesigner: IDesigner; AItem: TPersistent); procedure ItemsModified(const ADesigner: IDesigner); procedure SelectionChanged(const ADesigner: IDesigner; const ASelection: IDesignerSelections); procedure DesignerOpened(const ADesigner: IDesigner; AResurrecting: Boolean); procedure DesignerClosed(const ADesigner: IDesigner; AGoingDormant: Boolean); constructor Create; destructor Destroy; override; private procedure HideItems; procedure HideFormItems(Form: TForm); end; var DesignNotification : TDesignNotification; implementation procedure SetUp; begin DesignNotification := TDesignNotification.Create; RegisterDesignNotification(DesignNotification); end; constructor TDesignNotification.Create; begin inherited Create; end; procedure TDesignNotification.DesignerClosed(const ADesigner: IDesigner; AGoingDormant: Boolean); begin end; procedure TDesignNotification.HideFormItems(Form : TForm); var j, l : Integer; Panel : TPanel; C : TComponent; HideCount : Integer; procedure HideControl(AControl : TControl); begin AControl.Height := 0; // This is necessary because the IDE seems to reset // Visible to True when the Object Inspector is refreshed. 
AControl.Visible := False; end; begin HideCount := 0; for j := 0 to Form.ComponentCount - 1 do begin C := Form.Components[j]; if C is TPanel then begin Panel := TPanel(C); for l := 0 to Panel.ControlCount - 1 do begin if CompareText(Panel.Controls[l].ClassName, 'TDescriptionPane') = 0 then begin HideControl(Panel.Controls[l]); Inc(HideCount); end else if CompareText(Panel.Controls[l].ClassName, 'THotCommands') = 0 then begin HideControl(Panel.Controls[l]); Inc(HideCount); end; if HideCount >= 2 then // we're done exit; end; end; end; end; procedure TDesignNotification.HideItems; var i : Integer; Form : TForm; begin for i := 0 to Screen.FormCount - 1 do begin Form := Screen.Forms[i]; if CompareText(Form.ClassName, 'TPropertyInspector') = 0 then begin HideFormItems(Form); Break; end; end; end; procedure TDesignNotification.DesignerOpened(const ADesigner: IDesigner; AResurrecting: Boolean); begin end; var DestroyCount : Integer; destructor TDesignNotification.Destroy; begin Inc(DestroyCount); inherited; end; procedure TDesignNotification.ItemDeleted(const ADesigner: IDesigner; AItem: TPersistent); begin end; procedure TDesignNotification.ItemInserted(const ADesigner: IDesigner; AItem: TPersistent); begin end; procedure TDesignNotification.ItemsModified(const ADesigner: IDesigner); begin end; procedure TDesignNotification.SelectionChanged(const ADesigner: IDesigner; const ASelection: IDesignerSelections); var C : TComponent; begin // This can get called with ADesigner = Nil if ADesigner = Nil then exit; C := ADesigner.Root; if C <> Nil then begin HideItems; end end; initialization SetUp; finalization if DesignNotification <> Nil then begin UnRegisterDesignNotification(DesignNotification); end; end. ```
What do the Git “pairing broken” and “unknown” statuses mean, and when do they occur? Some options in `git diff`, for instance `--name-status`, cause the output of a status letter next to a file name. They are: > > A, C, D, M, R, T, U, X, B > > > … and they mean > > Added (A), Copied (C), Deleted (D), Modified (M), Renamed (R), > type (i.e. regular file, symlink, submodule, …) changed (T), > Unmerged (U), Unknown (X), or pairing Broken (B). > > > **Question:** how should the `X` and `B` statuses be interpreted, and which circumstances lead to their appearance? Can you provide a series of steps leading to such statuses appearing in the output of `git-diff`, and possibly ways to fix them?
The `B` “broken pair” status never appears directly in `--name-status` output, it is only useful as an argument to the option `--diff-filter` when also using the option `-B` (`--break-rewrites`). Using it as a filter selects files that have had at least a certain percentage of their content deleted or changed. This “breaking” is not be terribly useful with `--name-status` since the point of “breaking” is mostly to change how the diff text is generated: it eliminates context lines (unchanged lines) from the diff output instead of generating the add and remove lines that would be required around whatever “random” common subsequences the diff algorithm happened to find. ``` git init broken-pairs cd broken-pairs nums() { seq "$1" "$2" 2>/dev/null || jot $(($2 - $1 + 1)) "$1"; } nums 0 99 > a nums 100 199 > b git add a b git commit -ma=0-99,b=100-199 nums 200 299 > a { nums 100 149; nums 350 399; } > b git diff --name-status --diff-filter=B # selects nothing git diff --name-status --diff-filter=B -B # M100 a git diff --name-status --diff-filter=B -B/50 # M100 a M050 b ``` --- The `X` “unknown” status should never actually appear. If it does appear, it means a pathname that is neither unmerged, added, deleted, modified or had its type changed (effectively: unchanged) unexpectedly made it to the core of the internal diff machinery; the error `feeding unmodified <pathname> to diffcore` will also be generated. It appears to be left over from some old mode of operation.
display images in <img> tag using NSData Is this possible? My app downloads a zip file from the server and I save all the images in an array like this:

```
ZipReadStream *read = [unzipFile readCurrentFileInZip];
NSMutableData *data = [[NSMutableData alloc] initWithLength:info.length];
int bytesRead= [read readDataWithBuffer:data];
if(bytesRead > 0)
{
    [imagesData addObject:data];
    [imagesName addObject:info.name];
}
```

Then I filter which images are to be displayed inside the UIView and which are to be displayed inside the UIWebView. I display the images inside the UIView like this:

```
UIImage *imageForThisQuestion = [[UIImage alloc]initWithData:[imagesData objectAtIndex:indexOfThisImage]];
```

This works fine inside the UIView, but how do I display some of the images inside the UIWebView? Can I use the tag here? Also, my UIWebView content might appear in this format:

```
"blah blah blah blah <img src...> blah blah blah <img src...>"
```
You don't want to save the image to a file just so you can refer to it in HTML like < img src="myimage.png">, right? You can use the [data URI scheme](http://en.wikipedia.org/wiki/Data_URI_scheme) to encode your image directly into the URI, and use an img tag like < img src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAUA AAAFCAYAAACNbyblAAAAHElEQVQI12P4//8/w38GIAXDIBKE0DHxgljNBAAO 9TXL0Y4OHwAAAABJRU5ErkJggg==" > Read [here](http://cocoawithlove.com/2009/06/base64-encoding-options-on-mac-and.html) how to base64 encode the NSData of your image.
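A rough sketch of that round trip for the asker's setup — it assumes iOS 7+ for `base64EncodedStringWithOptions:` (older projects used an NSData+Base64 category, as in the linked article) and that the stored data is PNG:

```objc
NSData *imageData = [imagesData objectAtIndex:indexOfThisImage];
NSString *base64 = [imageData base64EncodedStringWithOptions:0];

// Build the <img> tag with an inline data: URI, then splice it into the HTML string.
NSString *imgTag = [NSString stringWithFormat:
    @"<img src=\"data:image/png;base64,%@\" />", base64];
NSString *html = [NSString stringWithFormat:
    @"blah blah blah %@ blah blah", imgTag];

[self.webView loadHTMLString:html baseURL:nil];
```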
How to manage test data fixtures for acceptance testing in large projects? Suppose we have a large, complex system with a large amount of data and complicated business logic. **How do we manage test data (Oracle DB) to have fast, reliable acceptance (Selenium etc.) tests starting from a known state?** Because of the scale and complexity, the tests should:

- run quite fast (1. fast revert to a known DB state before each test/suite 2. definitely no creating test data through the UI before each suite)
- be based on data created with the UI (no direct `INSERTS` into the database - risky duplication of business logic)
- have several versions/snapshots of DB state (a stable group of users with related data - to avoid conflicts between assertions and new data created during ongoing automation development)
What you're describing is called a [Sandbox DB](https://en.wikipedia.org/wiki/Sandbox_(software_development)). For every new deploy you'll have to provision/populate this DB with the data you need and, after the tests are done, drop it.

> have several versions/snapshots of DB state

This is what the [Fresh Fixture pattern](http://xunitpatterns.com/Fresh%20Fixture.html) and the [Prebuilt Fixture pattern](http://xunitpatterns.com/Prebuilt%20Fixture.html) will help you with. You could also look at the [Fixture Teardown patterns](http://xunitpatterns.com/Fixture%20Teardown%20Patterns.html).

Here you can find some considerations when dealing with such [big-data sandbox strategies](http://www.techrepublic.com/blog/10-things/10-successful-big-data-sandbox-strategies/), like scheduling, a master data repository, and monitoring.

To successfully manage all that, a [CI](https://en.wikipedia.org/wiki/Continuous_integration) server has to be put to work. Since you've tagged Java, good options are:

- [Jenkins](https://jenkins-ci.org/) and the [Database plugin](https://wiki.jenkins-ci.org/display/JENKINS/Database+Plugin)
- [Bamboo](https://www.atlassian.com/software/bamboo)
- [Hudson](http://hudson-ci.org/)
How to make XMLHttpRequest cross-domain withCredentials, HTTP Authorization (CORS)? I'm unable to make a cross-domain request with an Authorization header (testing with Firefox). I have requests working without authentication, but once I set `withCredentials` to true I am no longer able to read the response from the server. On the server I send back these headers (using an `after_request` method in Flask): ``` resp.headers['Access-Control-Allow-Origin'] = '*' resp.headers['Access-Control-Allow-Credentials'] = 'true' resp.headers['Access-Control-Allow-Methods'] = 'POST, OPTIONS' resp.headers['Access-Control-Allow-Headers'] = 'Authorization' ``` *No OPTIONS call is ever actually made by Firefox*. On the client I make an XMLHttpRequest call: ``` var xhr = new XMLHttpRequest() xhr.open( 'POST', 'http://test.local:8002/test/upload', true) xhr.withCredentials = true xhr.onreadystatechange = function() { console.log( xhr.status, xhr.statusText ) } xhr.send(fd) ``` Without `withCredentials` set the log statement will log the expecting information to the console. Once I set the value however the xhr doesn't allow access and I just write a 0 value and an empty string. I haven't set the authorization header here, but that shouldn't affect my ability to read the result. If I attempt to add a username/password to the "open" command I get a `NS_ERROR_DOM_BAD_URI: Access to restricted URI denied` error. What am I doing wrong?
*I've [written an article](http://mortoray.com/2014/04/09/allowing-unlimited-access-with-cors/) with a complete CORS setup.* I found several issues that can result in this problem: 1. The `Access-Control-Allow-Origin` cannot be a wildcard if credentials are being used. It's easiest just to copy the `Origin` header of the request to this field. It's entirely unclear why the standard would disallow a wildcard. 2. Firefox caches the Access-Control results even if you clear the cache (perhaps for the session). Restarting forced it to do a new `OPTIONS` request. To aid in debugging I added the header `Access-Control-Max-Age: 1` 3. The username/password of the `open` command is apparently not usable as the credentials. You must add an `Authorization` header yourself. `xhr.setRequestHeader( 'Authorization', 'Basic ' + btoa( user + ':' + pass ) )` Overall the `withCredentials` system is rather braindead. It's easier to simply write a server that accepts the authorization as part of the body of the request.
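For reference, the first two fixes applied to the Flask `after_request` hook from the question might look like this (a sketch; it assumes `from flask import request` is in scope):

```python
@app.after_request
def apply_cors(resp):
    # Echo the caller's Origin instead of '*', which is rejected when credentials are used.
    resp.headers['Access-Control-Allow-Origin'] = request.headers.get('Origin', '')
    resp.headers['Access-Control-Allow-Credentials'] = 'true'
    resp.headers['Access-Control-Allow-Methods'] = 'POST, OPTIONS'
    resp.headers['Access-Control-Allow-Headers'] = 'Authorization'
    # Keep the preflight cache very short while debugging (see point 2).
    resp.headers['Access-Control-Max-Age'] = '1'
    return resp
```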
Java -- How to set the keyboard scroll speed for a JScrollPane A `JPanel` has a `JScrollPane` that contains yet another `JPanel` or two. My life depends on increasing the scroll speed using a keyboard's directional arrows. After careful deliberation, the powers that be decided that: `sc.getVerticalScrollBar().setUnitIncrement(240);` should only be applicable to a mouse, in a clever ruse to elicit minor annoyances amongst java developers. Is there anything that can be done to increase scroll speed? My life hangs in the balance.
You have to use a combination of [InputMap.put](http://download.oracle.com/javase/1.4.2/docs/api/javax/swing/InputMap.html#put%28javax.swing.KeyStroke,%20java.lang.Object%29) and [ActionMap.put](http://download.oracle.com/javase/1.4.2/docs/api/javax/swing/ActionMap.html#put%28java.lang.Object,%20javax.swing.Action%29) to capture the keyboard events for the components contained on your `JScrollPane` and process the keyboard events when the `JScrollPane` has the focus. Since the default increment value for scrolling is 1 you should add or substract the desired increment value to the current value of the scrollbar for `JScrollPane` which you can get with `JScrollPane.getVerticalScrollBar().getValue()` and set with `JScrollPane.getVerticalScrollBar().setValue(int).` An example of capturing events for the contained elements withing JScrollPane can be done with this code, I've done with buttons, but you get the point (Sorry for the bad organization of the code): ``` import javax.swing.*; import java.awt.*; import java.awt.event.*; public class Test { public static void main(String[] args) { final JFrame f = new JFrame(""); JPanel panel = new JPanel(); panel.setLayout(new GridLayout(2000,1)); for(int i = 0; i != 2000; i++) { JButton btn = new JButton("Button 2"); panel.add(btn); } final JScrollPane sPane = new JScrollPane(panel); final int increment = 5000; sPane.getVerticalScrollBar().setUnitIncrement(increment); KeyStroke kUp = KeyStroke.getKeyStroke(KeyEvent.VK_UP, 0); KeyStroke kDown = KeyStroke.getKeyStroke(KeyEvent.VK_DOWN, 0); sPane.getInputMap(JComponent.WHEN_ANCESTOR_OF_FOCUSED_COMPONENT).put(kUp,"actionWhenKeyUp"); sPane.getActionMap().put("actionWhenKeyUp", new AbstractAction("keyUpAction") { public void actionPerformed(ActionEvent e) { final JScrollBar bar = sPane.getVerticalScrollBar(); int currentValue = bar.getValue(); bar.setValue(currentValue - increment); } } ); sPane.getInputMap(JComponent.WHEN_ANCESTOR_OF_FOCUSED_COMPONENT).put(kDown,"actionWhenKeyDown"); sPane.getActionMap().put("actionWhenKeyDown", new AbstractAction("keyDownAction") { public void actionPerformed(ActionEvent e) { final JScrollBar bar = sPane.getVerticalScrollBar(); int currentValue = bar.getValue(); bar.setValue(currentValue + increment); } } ); f.add(sPane); f.pack(); SwingUtilities.invokeLater( new Runnable() { public void run() { f.setVisible(true); } } ); } } ``` We register to listen and process that event with: ``` sPane.getInputMap(JComponent.WHEN_ANCESTOR_OF_FOCUSED_COMPONENT).put(kUp,"actionWhenKeyUp"); sPane.getActionMap().put("actionWhenKeyUp", new AbstractAction("keyUpAction") { public void actionPerformed(ActionEvent e) { final JScrollBar bar = sPane.getVerticalScrollBar(); int currentValue = bar.getValue(); bar.setValue(currentValue - increment); } } ); ``` The key code that perform the value of `JScrollBar` increment is of the AbstractAction (in this case when the user press the up key). ``` public void actionPerformed(ActionEvent e) { final JScrollBar bar = sPane.getVerticalScrollBar(); int currentValue = bar.getValue(); bar.setValue(currentValue - increment); } ``` What you should do is to complete the events when your JScrollPane has the focus, but that should be trivial. Hope it helps to save your life :P or at least serve you as a starting point.
Abstract exception super type If throwing `System.Exception` is considered so bad, why wasn't `Exception` made `abstract` in the first place? That way, it would not be possible to call: ``` throw new Exception("Error occurred."); ``` This would enforce using derived exceptions to provide more details about the error that occurred. For example, when I want to provide a custom exception hierarchy for a library, I usually declare an abstract base class for my exceptions: ``` public abstract class CustomExceptionBase : Exception { /* some stuff here */ } ``` And then some derived exception with a more specific purpose: ``` public class DerivedCustomException : CustomExceptionBase { /* some more specific stuff here */ } ``` Then when calling any library method, one could have this generic try/catch block to directly catch any error coming from the library: ``` try { /* library calls here */ } catch (CustomExceptionBase ex) { /* exception handling */ } ``` Is this a good practice? Would it be good if `Exception` was made abstract? EDIT : My point here is that even if an exception class is marked `abstract`, you can still catch it in a catch-all block. Making it abstract is only a way to forbid programmers to throw a "super-wide" exception. Usually, when you voluntarily throw an exception, you should know what type it is and why it happened. Thus enforcing to throw a more specific exception type.
I don't know the actual reasons why it was done this way, and to a certain degree I agree that preventing infinitely wide exceptions from being thrown would be a good thing. BUT... when coding small demo apps or proofs of concept, I don't want to start designing 10 different exception subclasses, or spend time trying to decide which is the "best" exception class for the situation. I'd rather just throw `Exception` and pass a string that explains the details. When it's throw-away code, I don't care about these things, and if I were *forced* to care about such things, I'd either create my own `GenericException` class and throw *that* everywhere, or move to a different tool/language. For some projects, I agree that properly creating relevant Exception subclasses is important, but not all projects require that.
Can you listen to Firestore updates when iOS app is in the background? I'm very new to Firestore and trying to understand how the real-time updates work. I'm using something like this now to get the updates from Firestore: ``` db .collection(Collections.session) .whereField("participants", arrayContains:userID) .addSnapshotListener { querySnapshot, error in ``` I noticed that the listener block is not getting fired when the app is in the background, but only when it's brought back to foreground. Is there a way to get the update when the app is running in the background too? Maybe somehow send a push notification or something? Any kind of help is highly appreciated.
> > Is there a way to get the update when the app is running in the background too? > > > Since backgrounded apps are eventually killed by the OS, you don't have a way to run a listener reliably when the app is not actively being used by the user. You are correct in that the only way to (reliably) notify your app of some change in your backend is to send a push notification. A very common approach is to use [Cloud Functions](https://firebase.google.com/docs/functions) to write a [Firestore trigger](https://firebase.google.com/docs/functions/firestore-events) that gets invoked when a document of interest is created, updated, or deleted. You can use this to write backend code that uses [Firebase Cloud Messaging](https://firebase.google.com/docs/cloud-messaging) and the [Firebase Admin SDK](https://firebase.google.com/docs/cloud-messaging/server#firebase-admin-sdk-for-fcm) to send a notification to your app with a payload that tells it to respond to that change.
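As an illustration of that approach, here is a rough Cloud Functions sketch — the collection name matches the query in the question, but the per-session FCM topic and the message contents are assumptions, not a drop-in solution:

```javascript
const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp();

// Fires whenever a document in the "session" collection is created, updated, or deleted.
exports.onSessionWrite = functions.firestore
  .document('session/{sessionId}')
  .onWrite((change, context) => {
    if (!change.after.exists) return null; // document was deleted

    // Assumes the iOS clients subscribe to an FCM topic per session.
    const message = {
      topic: `session-${context.params.sessionId}`,
      notification: {
        title: 'Session updated',
        body: 'One of your sessions has changed.',
      },
    };
    return admin.messaging().send(message);
  });
```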
How to get the system uptime in Windows? I am using Windows 7 and XP. I want to know the uptime of the system. What is the command/procedure for getting the uptime?
Following are eight ways to find the **Uptime** in Windows OS. # 1: By using the Task Manager In Windows Vista and Windows Server 2008, the Task Manager has been beefed up to show additional information about the system. One of these pieces of info is the server’s running time. 1. Right-click on the Taskbar, and click Task Manager. You can also click `CTRL`+`SHIFT`+`ESC` to get to the Task Manager. 2. In Task Manager, select the Performance tab. 3. The current system uptime is shown under System or Performance ⇒ CPU for Win 8/10. !["Up Time" field in Task Manager](https://i.stack.imgur.com/ln4Af.png) # 2: By using the System Information Utility The `systeminfo` command line utility checks and displays various system statistics such as installation date, installed hotfixes and more. Open a Command Prompt and type the following command: ``` systeminfo ``` You can also narrow down the results to just the line you need: ``` systeminfo | find "System Boot Time:" ``` ![enter image description here](https://i.stack.imgur.com/viwWl.png) # 3: By using the Uptime Utility Microsoft have published a tool called Uptime.exe. It is a simple command line tool that analyses the computer's reliability and availability information. It can work locally or remotely. In its simple form, the tool will display the current system uptime. An advanced option allows you to access more detailed information such as shutdown, reboots, operating system crashes, and Service Pack installation. Read the following KB for more info and for the download links: - MSKB232243: [Uptime.exe Tool Allows You to Estimate Server Availability with Windows NT 4.0 SP4 or Higher](http://support.microsoft.com/kb/232243). To use it, follow these steps: 1. Download uptime.exe from the above link, and save it to a folder, preferably in one that's in the system's path (such as SYSTEM32). 2. Open an elevated Command Prompt window. To open an elevated Command Prompt, click Start, click All Programs, click Accessories, right-click Command Prompt, and then click Run as administrator. You can also type CMD in the search box of the Start menu, and when you see the Command Prompt icon click on it to select it, hold `CTRL`+`SHIFT` and press `ENTER`. 3. Navigate to where you've placed the uptime.exe utility. 4. Run the `uptime.exe` utility. You can add a /? to the command in order to get more options. ![enter image description here](https://i.stack.imgur.com/56jIB.png) It does not offer many command line parameters: ``` C:\uptimefromcodeplex\> uptime /? usage: Uptime [-V] -V display version C:\uptimefromcodeplex\> uptime -V version 1.1.0 ``` # 3.1: By using the old Uptime Utility There is an older version of the "uptime.exe" utility. This has the advantage of NOT needing .NET. (It also has a lot more features beyond simple uptime.) Download link: [Windows NT 4.0 Server Uptime Tool (uptime.exe) (final x86)](https://www.microsoft.com/en-us/download/details.aspx?id=14732) ``` C:\uptimev100download>uptime.exe /? UPTIME, Version 1.00 (C) Copyright 1999, Microsoft Corporation Uptime [server] [/s ] [/a] [/d:mm/dd/yyyy | /p:n] [/heartbeat] [/? | /help] server Name or IP address of remote server to process. /s Display key system events and statistics. /a Display application failure events (assumes /s). /d: Only calculate for events after mm/dd/yyyy. /p: Only calculate for events in the previous n days. /heartbeat Turn on/off the system's heartbeat /? Basic usage. /help Additional usage information. 
``` # 4: By using the NET STATISTICS Utility Another easy method, if you can remember it, is to use the approximate information found in the statistics displayed by the NET STATISTICS command. Open a Command Prompt and type the following command: ``` net statistics workstation ``` The statistics should tell you how long it’s been running, although in some cases this information is not as accurate as other methods. ![enter image description here](https://i.stack.imgur.com/aCq6z.png) # 5: By Using the Event Viewer Probably the most accurate of them all, but it does require some clicking. It does not display an exact day or hour count since the last reboot, but it will display important information regarding why the computer was rebooted and when it did so. We need to look at Event ID 6005, which is an event that tells us that the computer has just finished booting, but you should be aware of the fact that there are virtually hundreds if not thousands of other event types that you could potentially learn from. Note: BTW, the 6006 Event ID is what tells us when the server has gone down, so if there’s much time difference between the 6006 and 6005 events, the server was down for a long time. Note: You can also open the Event Viewer by typing eventvwr.msc in the Run command, and you might as well use the shortcut found in the Administrative tools folder. 2. Click on Event Viewer (Local) in the left navigation pane. 3. In the middle pane, click on the Information event type, and scroll down till you see Event ID 6005. Double-click the 6005 Event ID, or right-click it and select View All Instances of This Event. 4. A list of all instances of the 6005 Event ID will be displayed. You can examine this list, look at the dates and times of each reboot event, and so on. 5. Open Server Manager tool by right-clicking the Computer icon on the start menu (or on the Desktop if you have it enabled) and select Manage. Navigate to the Event Viewer. ![enter image description here](https://i.stack.imgur.com/qsYTG.png) # 5.1: Eventlog via PowerShell ``` Get-WinEvent -ProviderName eventlog | Where-Object {$_.Id -eq 6005 -or $_.Id -eq 6006} ``` # 6: Programmatically, by using [GetTickCount64](http://msdn.microsoft.com/en-us/library/windows/desktop/ms724411%28v=vs.85%29.aspx) > > GetTickCount64 retrieves the number of milliseconds that have elapsed since the system was started. > > > # 7: By using WMI ``` wmic os get lastbootuptime ``` # 8: The new uptime.exe for Windows XP and up Like the tool from Microsoft, but compatible with all operating systems up to and including Windows 10 and Windows Server 2016, [this uptime utility](https://neosmart.net/blog/2017/uptime-for-windows/) does not require an elevated command prompt and offers an option to show the uptime in both DD:HH:MM:SS and in human-readable formats (when executed with the `-h` command-line parameter). Additionally, this version of `uptime.exe` will run and show the system uptime even when launched normally from within an explorer.exe session (i.e. not via the command line) and pause for the uptime to be read: [![enter image description here](https://i.stack.imgur.com/QCRJN.png)](https://i.stack.imgur.com/QCRJN.png) and when executed as `uptime -h`: [![enter image description here](https://i.stack.imgur.com/hwOxd.png)](https://i.stack.imgur.com/hwOxd.png)
Flutter: How to change color of all text before a particular character in TextField? I am trying to create a TextField widget in a Flutter app in which I want to allow the user to insert a text string like this: **USER-0123456789** The text **USER** (all text before the '**-**' character) should be in red and the rest should be in black. The problem is that I don't know any method to do this. After some research, I found that I can do this for a normal Text widget by using the RichText widget, but I don't know of any similar widget for TextField. Please help me get out of this situation.
I am able to solve the Question by using an if statement to create two TextSpan as suggested by pskink. The **MyTextController Class:** ``` class MyTextController extends TextEditingController { @override TextSpan buildTextSpan({TextStyle style, bool withComposing}) { List<InlineSpan> children = []; if(text.contains('-')){ children.add(TextSpan(style: TextStyle(color: Colors.redAccent), text: text.substring(0, text.indexOf('-')))); children.add(TextSpan(text: text.substring(text.indexOf('-')))); } else { children.add(TextSpan(style: TextStyle(color: Colors.redAccent), text: text)); } return TextSpan(style: style, children: children); } } ``` Usage in **TextFormField**: ``` TextFormField( keyboardType: TextInputType.text, controller: MyTextController(), ), ``` **Update regarding cursor misplacement bug :** I am unable to find a solution for this bug. If I find it in near future, I will update it here. What I did is, just hide the cursor itself so that it can't be noticed.
Is there something like a class that can be implemented? I'd like to write a **class X** (this) which **inherits from A** (base), can **execute the methods of B** (?), and must **implement the members of C** (interface). Implementing A and C is not a problem, but since X cannot derive from multiple classes it seems impossible to have X inherit the logic of both A and B. Note that A is the very important base class and B is almost an interface but contains executable behaviour. The reason I don't want B to be an interface is that the behaviour is the same for every class that inherits or implements it. **Do I really have to declare B as an interface and implement the exact same 10 lines of code for each X that needs the behaviour of B?** --- *2 months later* I am currently learning C++ in order to use it in UE4 (Unreal Engine 4). Since C++ is a lot less strict than C#, it actually contains a ~~pattern~~ ~~implementation~~ ~~idiom~~ term that describes this behaviour: these are called **mixin**s. You can read a paragraph about the C++ mixin [here](http://www.bobtacoindustries.com/Content/Devs/CsToCpp-ASomewhatShortGuide.pdf) on page 9 (second paragraph).
> > Do I really must declare `B` as an interface and implement the exact same 10 lines of code for each `X` that needs the behaviour of `B`? > > > Yes and no. You do need to make `B` an interface. But common method implementations should not be duplicated across all implementations of the interface. Instead, they should go into a class extension for interface `B`: ``` public interface B { void MethodX(); void MethodY(); } public static class ExtensionsB { public static void MethodZ(this B b) { // Common implementations go here } } ``` Extension methods provide a way to share implementations "horizontally", without having your classes inherit from a second class. Extension methods behave as if they were regular methods of the class: ``` class X : A, B { public void MethodX() {...} public void MethodY() {...} } public static void Main(string[] args) { var x = new X(); x.SomeMethodFromA(); x.MethodX(); // Calls method from X x.MethodY(); // Calls method from X x.MethodZ(); // Calls method from ExtensionsB } ```
How to update an array element in a MongoDB document I've got a problem updating an array element in MongoDB. This is the structure of a document: ``` { "_id" : ObjectId("57e2645e11c979157400046e"), "site" : "BLABLA", "timestamp_hour" : 1473343200, "values" : [ { "1473343200" : 66 }, { "1473344100" : 230 }, { "1473345000" : 479 }, { "1473345900" : 139 } ] } ``` Now I want to update the element with key "1473345900". How can I do this? I've tried: ``` db.COLLECTIONNAME.update({"values.1473345900": {$exists:true}}, {$set: {"values.$": 0}}) ``` But after that the document looks like: ``` { "_id" : ObjectId("57e2645e11c979157400046e"), "site" : "BLABLA", "timestamp_hour" : 1473343200, "values" : [ { "1473343200" : 66 }, { "1473344100" : 230 }, { "1473345000" : 479 }, 0 ] } ``` What I'm doing wrong? I only want to update the value of 1473345900 to any value... I don't want to update the complete element... Thanks a lot!!!
You need to add an additional query in your update that matches the array element you want to update. A typical query would involve checking for the element's value not equal to the one being updated. The following example update shows this where the **`$`** positional operator identifies the correct index position of the hash key array element `{ "1473345900": 139 }`. If you try to run the update operation without the **`$`** positional operator: ``` db.COLLECTIONNAME.update( { "values.1473345900": { "$exists": true } }, { "$set": { "values.1473345900": 0 } } ) ``` mongo will treat the timestamp 1473345900 as the index position and thus you will get the error > > can't backfill array to larger than 1500000 elements > > > Thus the correct way should be: ``` var val = 32; db.COLLECTIONNAME.update( { "values.1473345900": { "$ne": val, "$exists": true } }, { "$set": { "values.$.1473345900": val } } ) ```
Knockout.js - Multiple ViewModels per page; page-wide functions with different model contexts I am building a huge page with multiple forms on it that the user will fill out in sequence. I want to make each section its own model and was planning on having a masterViewModel that imports the subModels. Each section, however, has Edit & Save buttons that have the same functions: 1. Edit toggles the model into Edit mode 2. Save validates the inputs, saves the data (via ajax), and toggle the state of the model back The only difference between the sets of buttons is their model context. I'm having trouble making a page-level save & edit function that can reference different models using the masterViewModel/subViewModels. Does anyone have any guidance on this? Thanks.
If you had a function on your root view model, then you can call it from anywhere like `click: $root.save`. When Knockout calls the function it will set the context (this) to the current data and also pass it as the first argument. So, the first argument will contain your current model and you can process it from there. Here is a sample: <http://jsfiddle.net/rniemeyer/v22gd/> ``` var viewModel = { one: { name: ko.observable("Bob") }, two: { name: ko.observable("Sue") }, save: function(model) { alert(ko.toJSON(model)); } }; ko.applyBindings(viewModel); ``` with markup like: ``` <div data-bind="with: one"> <input data-bind="value: name" /> <button data-bind="click: $root.save">Save</button> </div> <div data-bind="with: two"> <input data-bind="value: name" /> <button data-bind="click: $root.save">Save</button> </div> ```
How to automatically run the executable in GDB? I'd like to have `gdb` immediately run the executable, as if I'd typed "run" (motivation: I dislike typing "run"). One way is to pipe the command to `gdb` like this: ``` $ echo run | gdb myApp ``` But the problem with this approach is that you lose interactivity with `gdb`, eg. if a breakpoint triggers or `myApp` crashes, `gdb` quits. This method is discussed [here](https://stackoverflow.com/questions/322110/invoke-gdb-to-automatically-pass-arguments-to-the-program-being-debugged). Looking at the options in `--help`, I don't see a way to do this, but perhaps I'm missing something.
``` gdb -ex run ./a.out ``` If you need to pass arguments to `a.out`: ``` gdb -ex run --args ./a.out arg1 arg2 ... ``` EDIT: [Orion](https://stackoverflow.com/users/230851/orion-elenzil) says this doesn't work on Mac OSX. The `-ex` flag has been available since `GDB-6.4` (released in 2005), but OSX uses Apple's fork of GDB, and the latest XCode for Leopard contains `GDB 6.3.50-20050815 (Apple version gdb-967)`, so you are out of luck. Building current `GDB-7.0.1` release is one possible solution. Just be sure to read [this](http://sourceware.org/gdb/wiki/BuildingOnDarwin).
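A few related conveniences built on the same flag (the alias name is just a suggestion):

```
# shell alias so you never have to type "run" (or even "-ex run") again
alias gdbr='gdb -ex run --args'
gdbr ./a.out arg1 arg2

# set a breakpoint first, then start running
gdb -ex 'break main' -ex run ./a.out

# fully non-interactive: run, then print a backtrace if it stopped (e.g. crashed)
gdb -batch -ex run -ex bt ./a.out
```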
MongoDB sort the result after limit Is there a way to sort the documents returned from limit? Example:

```
//returns 10 documents:
db.users.find()
.sort({last_online_timestamp:-1})
.limit(10)

//returns another 10 documents
db.users.find()
.sort({last_online_timestamp:-1})
.limit(10)
.sort({messages_count:1});
```

What I want is to get the 10 most recently logged-in users and then sort them by message count.
You can use an aggregate such as ``` db.users.aggregate([ {"$sort": {"last_online_timestamp":1}}, {"$limit": 10}, {"$sort": {"messages_count": 1}} ]) ``` This will go through stages where the the documents in the collection will be: 1. Sorted by the `last_online_timestamp` field in ascending order 2. Limited to 10 documents 3. Those 10 documents will be sorted by the `messages_count` field in ascending order You may want to change `{"$sort": {"last_online_timestamp":1}` to `{"$sort": {"last_online_timestamp":-1}` depending on the actual values of `last_online_timestamp` For more information on Mongo aggregation, see <https://docs.mongodb.com/manual/aggregation/>.
How to find out which file is affected by a bad sector? Consider you've been informed about a bad sector like this: ``` [48792.329933] Add. Sense: Unrecovered read error - auto reallocate failed [48792.329936] sd 0:0:0:0: [sda] CDB: [48792.329938] Read(10): ... [48792.329949] end_request: I/O error, dev sda, sector 1545882485 [48792.329968] md/raid1:md126: sda: unrecoverable I/O read error for block 1544848128 [48792.330018] md: md126: recovery interrupted. ``` How do I find out which file might include this sector? How to map a sector to file? Or how to find out if it just maps to free filesystem space? The mapping process should be able to deal with the usual storage stack. For example, in the above example, the stack looks like this: ``` /dev/sda+sdb -> Linux MD RAID 1 -> LVM PV -> LVM VG -> LVM LV -> XFS ``` But, of course, it could even look like this: ``` /dev/sda+sdb -> Linux MD RAID 1 -> DM_CRYPT -> LVM PV -> LVM VG -> LVM LV -> XFS ```
The traditional way is to copy all files elsewhere and see which one triggers a read error. Of course, this does not answer the question at all if the error is hidden by the redundancy of the RAID layer. Apart from that I only know the manual approach. Which is way too bothersome to actually go through with, and if there is a tool that does this magic for you, I haven't heard of it yet, and I'm not sure if more generic tools (like `blktrace`) would help in that regard. For the filesystem, you can use `filefrag` or `hdparm --fibmap` to determine block ranges of all files. Some filesystems offer tools to make the lookup in the other direction (e.g. `debugfs icheck`) but I don't know of a syscall that does the same, so there seems to be no generic interface for block->file lookups. For LVM, you can use `lvs -o +devices` to see where each LV is stored; you also need to know the `pvs -o +pe_start,vg_extent_size` for Physical Extent offset/sizes. It may actually be more readable in the `vgcfgbackup`. This should allow you to translate the filesystem addresses to block addresses in each PV. For LUKS, you can see the offset in `cryptsetup luksDump`. For mdadm, you can see the offset in `mdadm --examine`. If the RAID level is something other than 1, you will also need to do some math, and more specifically, you need to know the RAID layout in order to understand which address on the `md` device may translate to which block of which RAID member device. Finally you will need to take partition offsets into account, unless you were using the disks directly without any partitioning.
Parse JSON to C# object using NewtonSoft JSON.net I am trying to deserialize a JSON response I get from a webservice, using Newtonsoft Json.NET. I am trying this to parse the response:

```
var results = JArray.Parse(response.Content);
```

I get the following exception:

> Newtonsoft.Json.JsonReaderException occurred HResult=0x80131500
> Message=Error reading JArray from JsonReader. Current JsonReader item
> is not an array: StartObject. Path '', line 1, position 1.
> Source=Newtonsoft.Json

I probably need to define the object to deserialize into, but I am not sure how to specify it for the following response (sorry about the formatting, the indentation was removed by the editor here):

```
{"result": [
  {
    "recordType": "sys_ui_script",
    "hits": [],
    "tableLabel": "UI Script"
  },
  {
    "recordType": "sys_script",
    "hits": [
      {
        "name": "Approval Events (Non-Task)",
        "className": "sys_script",
        "tableLabel": "sys_script",
        "matches": [
          {
            "field": "script",
            "fieldLabel": "Script",
            "lineMatches": [
              {
                "line": 21,
                "context": "   updateRecord(current, current.approver.getDisplayValue() + \" rejected the task.\", ",
                "escaped": "   updateRecord(current, current.approver.getDisplayValue() + &quot; rejected the task.&quot;, "
              }
            ],
            "count": 2
          }],
        "sysId": "ad15c8149f4010008f88ed93ee4bcc9f",
        "modified": 1489179469000
      }
    ],
    "tableLabel": "Business Rule"
  }
]}
```
Define classes that match the JSON structure and deserialize into the root type:

```
var results = JsonConvert.DeserializeObject<RootObject>(response.Content);

public class LineMatch
{
    public int line { get; set; }
    public string context { get; set; }
    public string escaped { get; set; }
}

public class Match
{
    public string field { get; set; }
    public string fieldLabel { get; set; }
    public List<LineMatch> lineMatches { get; set; }
    public int count { get; set; }
}

public class Hit
{
    public string name { get; set; }
    public string className { get; set; }
    public string tableLabel { get; set; }
    public List<Match> matches { get; set; }
    public string sysId { get; set; }
    public long modified { get; set; }
}

public class Result
{
    public string recordType { get; set; }
    public List<Hit> hits { get; set; }
    public string tableLabel { get; set; }
}

public class RootObject
{
    public List<Result> result { get; set; }
}
```
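Once it's deserialized you can walk the object graph directly; here is a quick illustrative usage, continuing from the snippet above (so `results` is the `RootObject`):

```
// Print each record type and the hits it contains.
foreach (var result in results.result)
{
    Console.WriteLine(result.tableLabel + ": " + result.hits.Count + " hit(s)");
    foreach (var hit in result.hits)
    {
        Console.WriteLine("  " + hit.name + " (" + hit.sysId + ")");
    }
}
```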
What are some good tasks to assign co-op students/interns? My company is investigating hiring a University Computer Science co-op student (BSc year 3) for a 4- or 8-month work term. (I'm not sure how internationally-recognized the term co-op is - it's essentially a paid internship, after which the student returns to their studies.) My team develops a web application in ASP.NET and handheld thick clients on iOS, BlackBerry, and Android. For any of you that have brought interns into an experienced development team, what kind of tasks did you find for them to do? I realize that's a hard question to ask since any answer can be quite specific to an individual business. I'd expect the following: - Tasks that require mentoring but not babysitting - Tasks that will take a few weeks to finish (so the mentor can remain productive) - Tasks that expand the student's understanding of software development I've considered things like expanding code coverage in our unit tests, or developing a feature that's been designed, or improving/writing missing requirements documentation from features that were added without any supporting documentation. I never did an internship so I don't know what sort of tasks are valuable to both the student and the company. Any recommendations? Edit: These are all excellent and thoughtful answers that have helped me a great deal. Thank you all very much. I can only choose one answer, so I'll go along with the crowd and accept the most popular one.
We used to have our co-op students work on internal tools that were not critical to the main application we delivered, but helped in areas such as build automation, making a complex dataset easily viewable in a web application, or converting an internal HTML site to a dynamic (and easy-to-update!!) application. I think one co-op student once wrote a log viewer that made it easier to manage logs from one particular application that had very verbose output. I think some co-op students also got assigned to the testing team, so I imagine they were writing and executing tests.

These applications were great for co-ops because the code bases were small and easy to get into. Since the projects were only used internally on our team, the students didn't have to go through lengthy processes to get approval for deployment and implementation. They got to make their own design decisions (with guidance from senior devs) and learnt quite a lot.

Occasionally we'd give them simpler defects from the production applications, but only if there wasn't enough in the internal-tools area to keep them busy.
How to set up completion when defining my own function I wrote a function to insert some special characters in Emacs; it asks for a name and inserts the corresponding special character. Example: when I type "alpha", the function inserts `α` instead. But when I type these names, I cannot use auto-completion. I defined alpha as `α` and beta as `β`, but when I type `al[Tab]` it just inserts a tab. How can I define auto-completion for my function?

--Edit--------------------------------

Sorry, I'm still new to Emacs and only started writing elisp functions yesterday. I implemented this function in a straightforward way, using `interactive` with the code `s` to read a string. That's why I wonder how to get auto-completion done: I didn't find any tutorials about this, and maybe it's my lack of elisp knowledge that kept me from finding the right answer. Anyway, here is my code:

```
(defun sp-char (char)
  "Insert some special chars."
  (interactive "sInput char name: ")
  (let ((funky-char))
    (setq funky-char (cond
                      ((string= char "Alpha" ) "Α")
                      ((string= char "alpha" ) "α")
                      ((string= char "Beta" ) "Β")
                      ((string= char "beta" ) "β")
                      ((string= char "Gamma" ) "Γ")
                      ((string= char "gamma" ) "γ")
                      ((string= char "Delta" ) "Δ")
                      ((string= char "delta" ) "δ")
                      ))
    (insert funky-char)))
```

I looked at `read-string`, but I'm still confused about how to get completion working. I'd appreciate it if you could show me a small piece of code as an example.
If it's just Unicode chars you're after, there are other ways to solve the problem:

1. There's `insert-char` bound to `C-x 8 RET`. It's got completion, so `*alpha` will get you everything with `alpha` in it.
2. If it's just Greek chars you're after, and you're on Linux, you can use `xmodmap` to redefine your keyboard: each key, like `w`, has not two but four states: normal, shift, Mod4, and Mod4+shift. I have it configured like this: `w W ω Ω`.
3. You could use `yasnippet` to accomplish this task. Like the second method, and unlike the first one and your own, it has the big advantage that it doesn't break your flow: you're not distracted by having to look at the minibuffer and/or read the possible choices - you just type what you wanted to type and expand.
4. `expand-abbrev` is similar to `yasnippet` but easier to add to, since each abbrev is just an element in a list instead of a file. But it doesn't have fields/mirrors. (A small sketch follows at the end of this answer.)

But it all really depends on what you're trying to do.

## UPD

Corrected version of your code. This completion will exit as soon as the candidate is unique.

```
(defvar zz-minibuffer-map (copy-keymap minibuffer-local-must-match-map))
(define-key zz-minibuffer-map [remap self-insert-command] 'zz-self-insert-complete-and-exit)

(defun zz-self-insert-complete-and-exit (n)
  (interactive "p")
  (self-insert-command n)
  (ignore-errors (completion--do-completion nil 'expect-exact))
  (let ((candidates (completion-all-sorted-completions)))
    (cond ((null candidates)
           (backward-delete-char-untabify 1)
           (minibuffer-complete))
          ((eq 1 (safe-length candidates))
           (minibuffer-complete-and-exit)))))

(defun sp-char (char)
  "Insert some special chars."
  (interactive (list
                (let ((minibuffer-local-must-match-map zz-minibuffer-map))
                  (completing-read "Input char name: " special-char-alist nil t))))
  (insert (cadr (assoc char special-char-alist))))

(defvar special-char-alist
  '(("Alpha" "Α")
    ("alpha" "α")
    ("Beta" "Β")
    ("beta" "β")
    ("Gamma" "Γ")
    ("gamma" "γ")
    ("Delta" "Δ")
    ("delta" "δ")))
```
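Going back to option 4, a couple of abbrev definitions for your table would look roughly like this (a minimal sketch; it assumes you enable `abbrev-mode` in the buffers where you want the expansion):

```
;; With abbrev-mode enabled, typing "alpha" followed by a space or
;; punctuation expands it to the Greek letter.
(define-abbrev global-abbrev-table "alpha" "α")
(define-abbrev global-abbrev-table "beta" "β")
```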
onLocationChanged callback is made on what thread? The main UI thread? When this callback is made in my app, I have quite a bit of work to do (reading & writing to a SQL db through an ORM lib and a number of distance-based calculations). Naturally I am concerned about not blocking the main UI thread, so I've been trying (unsuccessfully) to find out if this is the thread on which the callback is made. If it is, I intend to do all the aforementioned work on an AsyncTask triggered when the callback is made. This same AsyncTask will receive events from 2 separate activity classes as well (responding to user input, etc.).

A lot of the discussion I have found around this callback seems to be based around people trying to change the thread on which the callback is actually received. This makes no sense to me. Surely the platform determines the context of this callback, and the sensible thing to do when it's received is to offload any serious work onto another thread, for which AsyncTask seems appropriate.

If anyone can outline a successful pattern they've used here it would be really useful.
According to the Android reference documentation for [`LocationManager`](http://developer.android.com/reference/android/location/LocationManager.html#requestLocationUpdates%28java.lang.String,%20long,%20float,%20android.location.LocationListener%29):

> The calling thread must be a Looper thread such as the main thread of
> the calling Activity.

This means that the thread that calls `requestLocationUpdates()` must be the main thread or another `Looper` thread, and the callback is delivered on that thread's Looper.

I've found the best way to handle this is to register the location listener on the main thread. Then, in my `onLocationChanged` callback I create a Runnable and hand it off to a background thread, where I perform any long-running tasks (like writing to the database).

```
ExecutorService mThreadPool = Executors.newSingleThreadExecutor();

@Override
public void onLocationChanged(Location location) {
    mThreadPool.execute(new Runnable() {
        @Override
        public void run() {
            // Perform your long-running tasks here.
            //...
        }
    });
}
```
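One more thing worth noting: shut the executor down once you no longer need location updates, so its thread doesn't linger. A minimal sketch, assuming this code lives in an Activity that implements `LocationListener` and keeps its `LocationManager` in a hypothetical `mLocationManager` field:

```
@Override
protected void onDestroy() {
    super.onDestroy();
    // Stop receiving location callbacks and release the background thread.
    mLocationManager.removeUpdates(this);
    mThreadPool.shutdown();
}
```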