Q: How to convert a date to YYYY-mm-dd format in bash How can I get a date into a specific format? I have the following date: 18APR01, but that is not the format I want. The code I have is the following: date --date=18APR01 +%Y-%m-%d Output: 2001-04-18 What I actually want is for the output to be: 2018-04-01 How can I achieve that result? A: Although the date command is very flexible about the formats it accepts in --date, it does not let you specify your own format, and the one you are using means something different to it. Specifically, when you use a pattern of the form XXYYYZZ where XX and ZZ are numbers but YYY is letters, it assumes that XX is the day, ZZ the year, and YYY the abbreviated month name, which does not match your case. If you give it only YYYZZ, then it assumes that YYY is the month name and ZZ the day, and takes the current year as the year. In your case this would work, at least in the example you posted, since the year (18) matches the current year. So one option would be to remove the first two characters of your string. Example (in a shell script): FECHA=18APR01 date --date=${FECHA:2} +%Y-%m-%d Result: 2018-04-01 Naturally, if not all of your dates start with 18 this trick is no good. In that case you will not be able to use the date command, but you can fall back on another scripting language with a good date-handling library, such as Python. The following is an example of a one-liner you could include in a shell script, which uses Python (yes, perhaps a sledgehammer to crack a nut) to convert that date format: FECHA=15APR01 python -c "from datetime import datetime;print(datetime.strptime(\"$FECHA\", \"%y%b%d\").strftime(\"%Y-%m-%d\"))" Result: 2015-04-01 For completeness, although I doubt it will help you here, if you are on OSX or BSD the date command is different from the one that ships with Linux (GNU date), and it does accept an option to specify the input date format. In that case you could write: $ LC_ALL=POSIX date -jf %y%b%d +%Y-%m-%d 18APR01 2018-04-01 (the LC_ALL part is in case your locale is Spanish, in which case date would not understand the month name)
Q: Populate array to ssh in bash Just some background: I have a file with 1000 servers in it, newline delimited. I have to read them into an array, then run about 5 commands over SSH. I have been using heredoc notation but that seems to fail. Currently I get an error saying the host isn't recognized. IFS='\n' read -d '' -r -a my_arr < file my_arr=() for i in "${my_arr[@]}"; do ssh "$1" bash -s << "EOF" echo "making back up of some file" cp /path/to/file /path/to/file.bak exit EOF done I get output that lists the first server but then all the ones in the array as well. I know that I am missing a redirect for STDIN that causes this. Thanks for the help. A: Do you need an array? What is wrong with: while read -r host do ssh "$host" bash -s << "EOF" echo "making back up of some file" cp /path/to/file /path/to/file.bak EOF done < file
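For completeness, here is a sketch of the array-based variant from the question (my own sketch, reusing the same placeholder backup command): mapfile does the newline splitting for you, the array must not be reset after it is filled, the loop variable itself has to be passed to ssh, and ssh is given -n (or a heredoc, as in the answer) so it does not swallow standard input.

    #!/bin/bash
    mapfile -t my_arr < file                 # one host per line into the array

    for host in "${my_arr[@]}"; do
        # -n keeps ssh from consuming stdin; the quoted command is the same placeholder as above
        ssh -n "$host" 'echo "making back up of some file"; cp /path/to/file /path/to/file.bak'
    done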
Q: sed replace using string containing backslashes I need to replace text in a file with a Windows-style directory path containing backslash (REVERSE SOLIDUS) characters. I am already using an alternative expression delimiter. The backslashes appear to be treated as escape characters. How can I keep the backslashes in the output? $ echo DIR=foobar | sed -e "s#DIR=.*#$(cygpath -w $(pwd))#" C:gwin64homelit The desired output is: C:\cygwin64\home\lit A: You'll have to escape metacharacters in sed replacement pattern. Fortunately, there are only three of those: &, \, and a delimiter / (see this question and this). In your case, since you're using # for delimiter, you'll have to escape # instead of /. You can create a helper shell function (like here): escapeSubst() { sed 's/[&#\]/\\&/g'; } and then pass your string through it before giving it to sed, like this: $ echo DIR=foobar | sed -e "s#DIR=.*#$(cygpath -w $(pwd) | escapeSubst)#" C:\cygwin64\home\lit
Q: Angular dependency injection: Parent is null when compiling with AoT (Ahead of Time) I got a directive which injects its parent directive (of the same type) in the constructor. It is using 'SkipSelf' so it will not get the directive which is defined on the component it is placed on (but up the tree on the parents). @Directive({ selector: '[myDirective]' }) export class MyDirective { constructor(@SkipSelf() @Optional() public parent?: MyDirective) { console.log('Directive: This is my parent: ' + this.parent); } } Additionally I got a Component which injects the same Directive but is looking at itself (so no SkipSelf used here). import {Component, Optional} from '@angular/core'; import {MyDirective} from '../directives/my-directive.directive'; @Component({ selector: 'my-component', template: './my-component.html' }) export class MyComponent { constructor(@Optional() public parent?: MyDirective) { console.log('Component: This is my parent: ' + this.parent); } } An example-html could look something like this: <div myDirective> <!-- (Directive A) does not have a parent (of course) --> <div> <div myDirective> <!-- (Directive B) gets the parent (Directive A) --> <my-component myDirective> <!-- (Directive C) myDirective gets the parent (Directive B) --> <!-- myComponent gets the parent directive (Directive C) --> </my-component> </div> </div> </div> Everything works as expected when I compile 'normally' with angular-cli (ng serve). Each Directive gets a reference to its parent (when there is one) and each component instance gets a reference to the directive which is defined on it. But now I ran into a problem when compiling with the aot option (ng serve --aot). The directives still get their parent but the component does not. (Instead this.parent is null) Anyone got an idea what might be the problem? Could it be a bug in angular-cli? I am using following versions: @angular/cli: 1.0.0 node: 7.6.0 os: win32 x64 @angular/...: 4.0.1 @angular/cli: 1.0.0 @angular/compiler-cli: 4.0.1 A: I updated angular and angular-cli and fortunately the issue is fixed. @angular/cli: 1.1.1 node: 7.6.0 os: win32 x64 @angular/...: 4.2.2 @angular/cli: 1.1.1 @angular/compiler-cli: 4.2.2
Q: On add existing item in VS2010, why can't I "add as link" a file from the same project? I have two folders in my library project, folder A and folder B. Folder A will contain all the real files, but Folder B (and a bunch of other folders) need to contain links to the folder A files. I tried going Add existing item (go to folder A)-> add (down arrow) -> add as link but the add existing item dialog window just closes and nothing happens. It seems I can add links to files outside the library project though. What's going on here? -Isaac A: For some reason Visual Studio seems to silently ignore possible problems with adding file as a link. I just had the same problem and the solution was to: Check if project folder already contains a file with the name of file being linked, if so delete or rename this resource. Visual Studio 2010 seems to cache project directory contents, as (1) was not enough to successfully link the file. Restarting VS helps.
Q: Modifying remote System ODBC DSNs on win7, both 32-bit and 64-bit? I am trying to update DSNs on multiple different user boxes, which should be running Windows 7 x64. People have sometimes created their own DSNs (maybe System, maybe User), and in other cases admins have created them. I want to replace the server name, when it's a particular value, with a CNAME for that box. I read this article, which seemed a good start: http://www.sqldataplatform.com/Blog/Post/9/Modifying-ODBC-Settings-with-WMI-and-PowerShell However, when testing this on my box, I ran into a problem where I don't see the System DSNs I expect. When I run the 64-bit "Data Sources (ODBC)" (C:\Windows\system32\odbcad32.exe), which is the default when you go to Start->Administrative Tools->Data Sources, then I see the data source I created. However, this doesn't work: Get-ChildItem -path "HKLM:\SOFTWARE\ODBC\ODBC.INI\" Instead, I get a System DSN that I created in the 32-bit version of Data Sources (ODBC), aka "C:\Windows\SysWOW64\odbcad32.exe" Oddly, if I run this, I get the exact same 32-bit DSN, where I'd expect to get the 32-bit and the 64-bit, even though I see them in different nodes when I open my registry. Get-ChildItem -path "HKLM:\SOFTWARE\ODBC\ODBC.INI\" Get-ChildItem -path "HKLM:\SOFTWARE\Wow6432Node\ODBC\ODBC.INI\" So, any idea how I go about getting the other DSN? Thanks. A: To see the 32-bit one, you need to run C:\windows\SysWOW64\odbcad32.exe. To see the 64-bit one, just run odbcad32.exe (from System32). If you're running a 32-bit PowerShell session, you will only see the 32-bit one. If you're running a 64-bit session, you can see both.
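As a quick illustration of the answer (my own sketch; the registry paths are the standard DSN locations already used in the question): run this from a 64-bit PowerShell session and it lists both views explicitly, so the 64-bit DSNs and the 32-bit (Wow6432Node) DSNs show up side by side.

    # A 32-bit session gets HKLM:\SOFTWARE silently redirected to Wow6432Node, so check first
    if ([IntPtr]::Size -ne 8) { Write-Warning "32-bit PowerShell session: both paths below will show the 32-bit view" }

    "64-bit System DSNs:"
    Get-ChildItem -Path "HKLM:\SOFTWARE\ODBC\ODBC.INI" | Select-Object -ExpandProperty PSChildName

    "32-bit System DSNs:"
    Get-ChildItem -Path "HKLM:\SOFTWARE\Wow6432Node\ODBC\ODBC.INI" | Select-Object -ExpandProperty PSChildName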
Q: Minimal polynomial of $T(A) = A^t - A$ As said in the title, I need to find the minimal polynomial of that linear transform. The matrices are in $M_n(\mathbb{C})$. I've figured out that $T^2(A) = 2A - 2A^t$, so the polynomial $p(t) = t^2 + 2t$ satisfies $p(T) = 0$. Now $p(t)$ factors as $t(t+2)$, but neither factor kills $T$. Therefore $p(t)$ is the minimal polynomial. I'm having trouble with this, because I guessed $p(t)$, and I'm not sure how to actually find the polynomial. For example, I have no idea how to find a matrix, because of that transpose. Is there another way to do this? A: I do not know if I can say anything better than what you have done. You have seen what $T^2$ would be, and that is what you actually have to do: see what $T, T^2, T^3, \cdots$ would be and check for a linear combination that results in the zero map. You have seen the very first non-trivial power of $T$, namely $T^2$, and realized it equals $-2T$. So you have $T^2 = -2T$, and the remaining thing I want to say is not any better than yours. So, what you have done is natural to me. P.S.: All this is just for your statement "I guessed $p(t)$ and I'm not sure how to actually find the polynomial".
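For completeness, here is the one-line verification behind the identity used above (my own addition, just spelling out the computation): $$T^2(A) = T(A^t - A) = (A^t - A)^t - (A^t - A) = (A - A^t) - (A^t - A) = -2(A^t - A) = -2\,T(A),$$ so $T^2 + 2T = 0$. Neither factor of $t(t+2)$ annihilates $T$ on its own: $T \neq 0$ (it is nonzero on any non-symmetric matrix) and $T + 2\,\mathrm{id} \neq 0$ (it sends a symmetric matrix $A$ to $2A$), hence $t^2 + 2t$ is indeed the minimal polynomial.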
Q: How to fix the error: Cannot load shared library, symbol undefined I have a python script that is converted to a 'one file executable' using pyinstaller. The executable runs on my computer without any problem. When it runs on another computer, one of the threads seems to stop working where gtk and wnck are used. Failed to load shared library 'libwnck-3.so.0' referenced by the typelib: /usr/lib/x86_64-linux-gnu/libwnck-3.0.so.0: undefined symbol: gdk_display_get_monitor_at_window The above warning is displayed as soon as the executable is run in the other computer's terminal (I guess when it reads the import statement). An error is thrown when it reaches the following line; screen = Wnck.Screen.get_default() GLib.GError: g-invoke-error-quark: Could not locate wnck_screen_get_default: /usr/lib/x86_64-linux-gnu/libwnck-3.0.so.0: undefined symbol: gdk_display_get_monitor_at_window (1) The following function, which runs in a thread, is where the error occurs: import gi gi.require_version('Wnck', '3.0') gi.require_version('Gtk', '3.0') from gi.repository import Gtk, Wnck def my_window(): screen = Wnck.Screen.get_default() -- this line throws error screen.force_update() while True: time.sleep(.5) while Gtk.events_pending(): Gtk.main_iteration() new_window = screen.get_active_window() .... .... I am using - Ubuntu 16.04, xenial. | version of libgtk-3-0: 3.18.9 Other computer uses - Ubuntu 18.04.4 bionic | version of libgtk-3-0: 3.22.30 A: The copy of libwnck you're using was compiled against a version of GDK that contains the gdk_display_get_monitor_at_window() function, but the copy of GDK you have installed on your system does not contain this function. The gdk_display_get_monitor_at_window() function was introduced in GTK 3.22, so you must make sure that you have GDK 3.22 or later installed.
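Two quick checks on the machine where the error appears can confirm the mismatch the answer describes (these are standard commands, not from the original post; the library path is the one named in the error message and may differ on other distributions):

    # Installed GTK/GDK version (requires the development package for pkg-config); 3.22 or later is needed
    pkg-config --modversion gtk+-3.0

    # Does the installed GDK actually export the symbol libwnck wants?
    objdump -T /usr/lib/x86_64-linux-gnu/libgdk-3.so.0 | grep gdk_display_get_monitor_at_window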
Q: Batch - Take Multiple Arguments from User Input I have a batch file that takes arguments from user input when calling the batch from the command line. I'm not very good at batch, hence why I'm here. I have multiple files that I wish to use for compiling C++ code and what I need is a way to get multiple arguments without specifying how many are there. echo | set /p=g++ -c %1\%2.cpp > run.bat pkg-config --cflags --libs gtkmm-3.0 >> run.bat echo PAUSE >> run.bat echo exit >> run.bat start /wait run.bat del run.bat move %2.o %1\ Example run: compile_sing helloworld main This compiles main.cpp in ..\helloworld\ and moves the generated .o file to ..\helloworld\. I also have another batch that runs the program. echo | set /p=g++ %1\%2.o %1\%3.o -o %1\%4 > run.bat pkg-config --cflags --libs gtkmm-3.0 >> run.bat echo PAUSE >> run.bat echo exit >> run.bat start /wait run.bat del run.bat start %1\%4 Example run: run_mult helloworldv2 main file2 execute This will create the executable execute and run it. What I would like is a way to enter multiple file names without having to put %#. The number of files can range from 1-n. For the second one, I'm sure i'll need a special character to put in front of the last argument to specify that it is the ending one. A: Okay, after learning SHIFT, thanks to the tip of JosefZ, I have my answer. To compile multiple files, I used the following set folder_path=%1 SHIFT :start echo | set /p=g++ -c %folder_path%\%1.cpp > run.bat pkg-config --cflags --libs gtkmm-3.0 >> run.bat echo exit >> run.bat start /wait run.bat del run.bat move %1.o %folder_path% SHIFT if not "%1"=="" (goto :start) PAUSE And to run the compiled files, I used the following set folder_path=%1 set files= set var= SHIFT :set_files set files=%files% %folder_path%\%1.o SHIFT set var=%1 if not "%var:~0,1%"=="/" (goto :set_files) echo | set /p=g++ %files% -o %folder_path%\%var:~1% > run.bat pkg-config --cflags --libs gtkmm-3.0 >> run.bat echo PAUSE >> run.bat echo exit >> run.bat start /wait run.bat del run.bat start %folder_path%\%var:~1%
Q: Generate next combination of size k from integer vector Lets say I have a vector of integers v = {0, 1,..., N-1} of size N. Given a size k, I want to generate all k-sized combinations of v. for example: k = 2, N = 10 {0,1}, {0,2}, ..., {0,9}, {1,2}, ..., {8,9} But I want to do it one by one, using a method called NextCombination: bool NextCombination(vector<int>& v, int k, int N){ if( is not the last combination){ turn v into it's next combination return true; } return false; } that means, given the current state of v, the size k of the combination and the total number of elements, I'd like to change v (if possible) and return a bool indicating it was possible to get some next combination out of v. I could not figure out how to make this without some boring recursions, and since this is just small problem of something I'm doing, I would like to figure out some smart/small solution to that. A: MBo's answer involving std::next_permutation is better as far as readability is concerned. However, that requires making an N-sized vector of 1s and 0s that you can do without if you really want to save on memory. The following solution essentially does the same thing in-place. bool NextCombination(vector<int>& v, int k, int N) { // We want to find the index of the least significant element // in v that can be increased. Let's call that index 'pivot'. int pivot = k - 1; while (pivot >= 0 && v[pivot] == N - k + pivot) --pivot; // pivot will be -1 iff v == {N - k, N - k + 1, ..., N - 1}, // in which case, there is no next combination. if (pivot == -1) return false; ++v[pivot]; for (int i = pivot + 1; i < k; ++i) v[i] = v[pivot] + i - pivot; return true; }
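A small usage sketch for the answer's NextCombination (my own illustration, not part of the answer): start from the first combination {0, 1, ..., k-1} and keep advancing until the function reports that no further combination exists. Link it together with the definition given above.

    #include <iostream>
    #include <vector>
    using namespace std;

    // defined as in the answer above
    bool NextCombination(vector<int>& v, int k, int N);

    int main() {
        int k = 2, N = 5;
        vector<int> v(k);
        for (int i = 0; i < k; ++i) v[i] = i;     // first combination {0, 1}
        do {
            for (int x : v) cout << x << ' ';
            cout << '\n';
        } while (NextCombination(v, k, N));       // prints all C(5,2) = 10 combinations
        return 0;
    }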
Q: Compare Lines of file to every other line of same file I am trying to write a program that will print out every line from a file with another line of that file added at the end, basically creating pairs from a portion of each line. If the line is the same, it will do nothing. Also, it must avoid repeating the same pairs. A B is the same as B A In short FileInput: otherstuff A otherstuff B otherstuff C otherstuff D Output: A B A C A D B C B D C D I was trying to do this with a BASH script, but was having trouble because I could not get my nested while loops to work. It would read the first line, compare it to each other line, and then stop (Basically only outputting the first 3 lines in the example output above, the outer while loop only ran once). I also suspect I might be able to do this using MATLAB, so suggestions using that are also welcome. Here is the bash script that I have thus far. As I said, it is not printing out correctly for me, as the outer loop only runs once. #READS IN file from terminal FILE1=$1 #START count at 0 count0= exec 3<&0 exec 0< $FILE1 while read LINEa; do while read LINEb; do eventIDa=$(echo $LINEa | cut -c20-23) eventIDb=$(echo $LINEb | cut -c20-23) echo $eventIDa $eventIDb done done A: Using bash: #!/bin/bash [ -f "$1" ] || { echo >&2 "File not found"; exit 1; } mapfile -t lines < <(cut -c20-23 <"$1" | sort | uniq) for i in ${!lines[@]}; do elem1=${lines[$i]} unset lines[$i] for elem2 in "${lines[@]}"; do echo "$elem1" "$elem2" done done This will read a file given as a parameter on the command line, sort and filter out duplicates, and output all combinations. You can modify the parameter to cut to adjust to your particular input file. Due to the particular way you seem to intend to use cut, your input example above won't work. Instead, use something with the correct line length, such as: 123456789012345678 A 123456789012345678 B 123456789012345678 C 123456789012345678 D
Q: Update two databases with liquibase command-line I would like to have two databases be updated when I run liquibase update; one being my development database, and the other being the database I run tests against. The credentials and data structure are the same. Note that I'm not using any build automation tools, and merely using the command line. A: You mean you have two separate connections you want to update at the same time? Liquibase can only handle one connection at a time, so you will need to run liquibase update twice, once for each connection.
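A hedged sketch of what that looks like on the command line (the changelog name, JDBC URLs and credentials are made-up placeholders; adjust them to your setup). It is simply the same changelog applied over two connections:

    # development database
    liquibase --changeLogFile=db.changelog.xml --url="jdbc:postgresql://localhost/dev_db" --username=dev --password=devpass update

    # test database
    liquibase --changeLogFile=db.changelog.xml --url="jdbc:postgresql://localhost/test_db" --username=test --password=testpass update

Since you are not using a build tool, wrapping the two invocations in a small shell script (or keeping two defaults files and pointing at them with --defaultsFile) gives you a single command that updates both.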
Q: NSNotifications For A Date In The Past I am coding an app that has a lot of dates that are based in the past. For instance, an anniversary date. Let's say this date is December 25th, 2000. The user picks this date from a date picker and then the date is saved to the user's device. (so imagine the date saved is December 25th, 2000) While thinking of how I was going to code the NSNotifications, I realized my biggest task (now seeming impossible) is how I will be able to send the user a reminder at a date in the future that is based on a date in the past. Example: Anniversary date is December 25th, 2000 Remind User Every Year of December 25th. I imagine that there must be a way, but my searches have come up empty-handed. A: Not sure what language you are using, but the basic logic here is: once the user selects a date, set up a local notification for the closest upcoming date, then set the repeat interval to kCFCalendarUnitYear. Example code in Objective-C: -(void)setAlert:(NSDate *)date{ //Note date here is the closest anniversary date in future you need to determine first UILocalNotification *localNotif = [[UILocalNotification alloc]init]; localNotif.fireDate = date; localNotif.alertBody = @"Some text here..."; localNotif.timeZone = [NSTimeZone defaultTimeZone]; localNotif.repeatInterval = kCFCalendarUnitYear; //repeat yearly //other customization for the notification, for example attach some info using //localNotif.userInfo = @{@"id":@"some Identifier to look for more detail, etc."}; [[UIApplication sharedApplication]scheduleLocalNotification:localNotif]; } Once you have set up the alert and the alert fires, you can handle the notification in the AppDelegate.m file by implementing - (void)application:(UIApplication *)application handleActionWithIdentifier:(NSString *)identifier forLocalNotification:(UILocalNotification *)notification completionHandler:(void(^)())completionHandler{ //handling notification code here. } Edit: To get the closest date, you can implement a method to do that: -(NSDate *) closestNextAnniversary:(NSDate *)selectedDate { // selectedDate is the old date you just selected, the idea is extract the month and day component of that date, append it to the current year, if that date is after today, then that's the date you want, otherwise, add the year component by 1 to get the date in next year NSCalendar *calendar = [NSCalendar currentCalendar]; NSInteger month = [calendar component:NSCalendarUnitMonth fromDate:selectedDate]; NSInteger day = [calendar component:NSCalendarUnitDay fromDate:selectedDate]; NSInteger year = [calendar component:NSCalendarUnitYear fromDate:[NSDate date]]; NSDateComponents *components = [[NSDateComponents alloc] init]; [components setYear:year]; [components setMonth:month]; [components setDay:day]; NSDate *targetDate = [calendar dateFromComponents:components]; // now if the target date is after today, then return it, else add one year // special case for Feb 29th, see comments below // your code to handle Feb 29th case. if ([targetDate timeIntervalSinceDate:[NSDate date]]>0) return targetDate; [components setYear:++year]; return [calendar dateFromComponents:components]; } One thing you need to think about is how to treat February 29th: do you want to alert every year on Feb. 28th (in non-leap years), or do you want to alert every four years? Then you need to implement your own logic.
Q: Draw Larger and Smaller UIImageView size when scrolling in iOS I have a photo catalog app on my iPhone. This app shows three images on screen with a scroll view. I want to enlarge/shrink the images while scrolling: expand an image when it is centered, and draw it smaller as it scrolls away from the center to the right/left. I think this behaviour needs to be implemented in scrollViewDidScroll. Do you know how to achieve this effect? A: So you want a Cover Flow effect; iCarousel may be the best control for it. Take a look: https://github.com/nicklockwood/iCarousel
Q: Keep track of ast.Walk() parsing errors in Go I'm writing a custom parser and would like to keep track of errors I come across. How do I keep track of errors during parsing without using a global variable when doing a ast.Walk? type visitor struct { err error } func (v visitor) Visit(n ast.Node) ast.Visitor { switch d := n.(type) { case *ast.BinaryExpr: if d.Op != token.LAND { v.err = fmt.Errorf("Illegal operator :%s", d.Op) // NOT WORKING return v } } return v } I use the above code as:- var v visitor ast.Walk(v, astTree) This probably doesn't work as, in func (v visitor), v is not a pointer to struct. What's the recommended way of keeping track of this? A: Collecting the errors in a struct is a good approach, but you need to use a pointer receiver to make it work. func (v *visitor) Visit(n ast.Node) ast.Visitor { // change to pointer receiver ... } ... var v visitor ast.Walk(&v, astTree) // pass pointer to visitor
Q: How to preserve the dd/mm/yyyy format? I want to store a date to which a certain number of days was added, using the setDate() function. Example: 28/02/2018 + 1 day = 01/3/2018. So I added an if and it does add the 0, but when the value is stored it does not work: fecha_termino.setDate(fecha_termino.getDate() + diasNum); //alert(fechaDate.getDate() + '/' + (fechaDate.getMonth() + 1) + '/' + fechaDate.getFullYear()); if((fecha_termino.getMonth() + 0) < 10) { $('#TFecha_termino').val(fecha_termino.getDate() + '/' + '0' +(fecha_termino.getMonth() + 1) + '/' + fecha_termino.getFullYear()); } else { $('#TFecha_termino').val(fecha_termino.getDate() + '/' + (fecha_termino.getMonth() + 1) + '/' + fecha_termino.getFullYear()); } A: For the date to keep the dd/mm/yyyy format, which is what I understand you want to do, you will have to check not only the month but also the day to know when to add the leading 0, since currently you only do it for the month; that is why when the date rolls over to March 1st it only shows the day as 1. Example: var fecha_termino = new Date(2018,1,28) var diasMas= 1; //Increment the date fecha_termino.setDate(fecha_termino.getDate() + diasMas); let dia = fecha_termino.getDate(); let mes = fecha_termino.getMonth()+1; //If the day is less than 10, add the 0 if(dia<10) dia='0'+dia; //If the month is less than 10, add the 0 if(mes<10) mes='0'+mes; //assign by concatenating the values document.getElementById('TFecha_termino').value = dia+ "/"+ mes + "/" + fecha_termino.getFullYear() ; //jQuery //$('#TFecha_termino').val(dia+ "/"+ mes + "/" + fecha_termino.getFullYear()); <input type="text" id="TFecha_termino"> Another, somewhat more rudimentary, way is to use a few array methods: slice to extract the whole date, then split to break the string at the -, reverse to put the day first and the year last, and join to concatenate the result. var fecha_termino = new Date(2018,1,28) var diasNum = 1; fecha_termino.setDate(fecha_termino.getDate() + diasNum); $('#TFecha_termino').val(fecha_termino.toJSON().slice(0,10).split('-').reverse().join ('/')); <script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script> <input type="text" id="TFecha_termino">
Q: Customize Bootstrap checkboxes I'm using Bootstrap in my Angular application and all other styles are working like they should, but checkbox style doesn't. It just look like plain old checkbox. <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css" integrity="sha384-BVYiiSIFeK1dGmJRAkycuHAHRg32OmUcww7on3RYdg4Va+PmSTsz/K68vbdEjh4u" crossorigin="anonymous"> <div class="container"> <form class="form-signin"> <h2 class="form-signin-heading">Please log in</h2> <label for="inputEmail" class="sr-only">User name</label> <input [(ngModel)]="loginUser.Username" type="username" name="username" id="inputEmail" class="form-control" placeholder="User name" required autofocus> <label for="inputPassword" class="sr-only">Password</label> <input [(ngModel)]="loginUser.Password" type="password" name="password" id="inputPassword" class="form-control" placeholder="Password" required> <a *ngIf="register == false" (click)="registerState()">Register</a> <div class="checkbox"> <label> <input type="checkbox" [(ngModel)]="rememberMe" name="rememberme"> Remember me </label> </div> <button *ngIf="register == false" (click)="login()" class="btn btn-lg btn-primary btn-block" type="submit">Log in</button> </form> </div> What it looks like: What I want it to look like with Bootstrap style: A: Since Bootstrap 3 doesn't have a style for checkboxes I found a custom made that goes really well with Bootstrap style. Checkboxes .checkbox label:after { content: ''; display: table; clear: both; } .checkbox .cr { position: relative; display: inline-block; border: 1px solid #a9a9a9; border-radius: .25em; width: 1.3em; height: 1.3em; float: left; margin-right: .5em; } .checkbox .cr .cr-icon { position: absolute; font-size: .8em; line-height: 0; top: 50%; left: 15%; } .checkbox label input[type="checkbox"] { display: none; } .checkbox label input[type="checkbox"]+.cr>.cr-icon { opacity: 0; } .checkbox label input[type="checkbox"]:checked+.cr>.cr-icon { opacity: 1; } .checkbox label input[type="checkbox"]:disabled+.cr { opacity: .5; } <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css" integrity="sha384-BVYiiSIFeK1dGmJRAkycuHAHRg32OmUcww7on3RYdg4Va+PmSTsz/K68vbdEjh4u" crossorigin="anonymous"> <!-- Default checkbox --> <div class="checkbox"> <label> <input type="checkbox" value=""> <span class="cr"><i class="cr-icon glyphicon glyphicon-ok"></i></span> Option one </label> </div> <!-- Checked checkbox --> <div class="checkbox"> <label> <input type="checkbox" value="" checked> <span class="cr"><i class="cr-icon glyphicon glyphicon-ok"></i></span> Option two is checked by default </label> </div> <!-- Disabled checkbox --> <div class="checkbox disabled"> <label> <input type="checkbox" value="" disabled> <span class="cr"><i class="cr-icon glyphicon glyphicon-ok"></i></span> Option three is disabled </label> </div> Radio .checkbox label:after, .radio label:after { content: ''; display: table; clear: both; } .checkbox .cr, .radio .cr { position: relative; display: inline-block; border: 1px solid #a9a9a9; border-radius: .25em; width: 1.3em; height: 1.3em; float: left; margin-right: .5em; } .radio .cr { border-radius: 50%; } .checkbox .cr .cr-icon, .radio .cr .cr-icon { position: absolute; font-size: .8em; line-height: 0; top: 50%; left: 13%; } .radio .cr .cr-icon { margin-left: 0.04em; } .checkbox label input[type="checkbox"], .radio label input[type="radio"] { display: none; } .checkbox label input[type="checkbox"]+.cr>.cr-icon, .radio label 
input[type="radio"]+.cr>.cr-icon { opacity: 0; } .checkbox label input[type="checkbox"]:checked+.cr>.cr-icon, .radio label input[type="radio"]:checked+.cr>.cr-icon { opacity: 1; } .checkbox label input[type="checkbox"]:disabled+.cr, .radio label input[type="radio"]:disabled+.cr { opacity: .5; } <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css" integrity="sha384-BVYiiSIFeK1dGmJRAkycuHAHRg32OmUcww7on3RYdg4Va+PmSTsz/K68vbdEjh4u" crossorigin="anonymous"> <link rel="stylesheet" href="https://use.fontawesome.com/releases/v5.0.10/css/all.css" integrity="sha384-+d0P83n9kaQMCwj8F4RJB66tzIwOKmrdb46+porD/OvrJ+37WqIM7UoBtwHO6Nlg" crossorigin="anonymous"> <!-- Default radio --> <div class="radio"> <label> <input type="radio" name="o3" value=""> <span class="cr"><i class="cr-icon fa fa-circle"></i></span> Option one </label> </div> <!-- Checked radio --> <div class="radio"> <label> <input type="radio" name="o3" value="" checked> <span class="cr"><i class="cr-icon fa fa-circle"></i></span> Option two is checked by default </label> </div> <!-- Disabled radio --> <div class="radio disabled"> <label> <input type="radio" name="o3" value="" disabled> <span class="cr"><i class="cr-icon fa fa-circle"></i></span> Option three is disabled </label> </div> Custom icons You can choose your own icon between the ones from Bootstrap or Font Awesome by changing [icon name] with your icon. <span class="cr"><i class="cr-icon [icon name]"></i> For example: glyphicon glyphicon-remove for Bootstrap, or fa fa-bullseye for Font Awesome .checkbox label:after, .radio label:after { content: ''; display: table; clear: both; } .checkbox .cr, .radio .cr { position: relative; display: inline-block; border: 1px solid #a9a9a9; border-radius: .25em; width: 1.3em; height: 1.3em; float: left; margin-right: .5em; } .radio .cr { border-radius: 50%; } .checkbox .cr .cr-icon, .radio .cr .cr-icon { position: absolute; font-size: .8em; line-height: 0; top: 50%; left: 15%; } .radio .cr .cr-icon { margin-left: 0.04em; } .checkbox label input[type="checkbox"], .radio label input[type="radio"] { display: none; } .checkbox label input[type="checkbox"]+.cr>.cr-icon, .radio label input[type="radio"]+.cr>.cr-icon { opacity: 0; } .checkbox label input[type="checkbox"]:checked+.cr>.cr-icon, .radio label input[type="radio"]:checked+.cr>.cr-icon { opacity: 1; } .checkbox label input[type="checkbox"]:disabled+.cr, .radio label input[type="radio"]:disabled+.cr { opacity: .5; } <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css" integrity="sha384-BVYiiSIFeK1dGmJRAkycuHAHRg32OmUcww7on3RYdg4Va+PmSTsz/K68vbdEjh4u" crossorigin="anonymous"> <link rel="stylesheet" href="https://use.fontawesome.com/releases/v5.0.10/css/all.css" integrity="sha384-+d0P83n9kaQMCwj8F4RJB66tzIwOKmrdb46+porD/OvrJ+37WqIM7UoBtwHO6Nlg" crossorigin="anonymous"> <div class="checkbox"> <label> <input type="checkbox" value="" checked> <span class="cr"><i class="cr-icon glyphicon glyphicon-remove"></i></span> Bootstrap - Custom icon checkbox </label> </div> <div class="radio"> <label> <input type="radio" name="o3" value="" checked> <span class="cr"><i class="cr-icon fa fa-bullseye"></i></span> Font Awesome - Custom icon radio checked by default </label> </div> <div class="radio"> <label> <input type="radio" name="o3" value=""> <span class="cr"><i class="cr-icon fa fa-bullseye"></i></span> Font Awesome - Custom icon radio </label> </div> A: You have to use Bootstrap version 4 with the custom-* classes 
to get this style: <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0-beta/css/bootstrap.min.css" integrity="sha384-/Y6pD6FV/Vv2HJnA6t+vslU6fwYXjCFtcEpHbNJ0lyAFsXTsjBbfaDjzALeQsN6M" crossorigin="anonymous"> <script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script> <!-- example code of the bootstrap website --> <label class="custom-control custom-checkbox"> <input type="checkbox" class="custom-control-input"> <span class="custom-control-indicator"></span> <span class="custom-control-description">Check this custom checkbox</span> </label> <!-- your code with the custom classes of version 4 --> <div class="checkbox"> <label class="custom-control custom-checkbox"> <input type="checkbox" [(ngModel)]="rememberMe" name="rememberme" class="custom-control-input"> <span class="custom-control-indicator"></span> <span class="custom-control-description">Remember me</span> </label> </div> Documentation: https://getbootstrap.com/docs/4.0/components/forms/#checkboxes-and-radios-1 Custom checkbox style on Bootstrap version 3? Bootstrap version 3 doesn't have custom checkbox styles, but you can use your own. In this case: How to style a checkbox using CSS? These custom styles are only available since version 4. A: /* The customcheck */ .customcheck { display: block; position: relative; padding-left: 35px; margin-bottom: 12px; cursor: pointer; font-size: 22px; -webkit-user-select: none; -moz-user-select: none; -ms-user-select: none; user-select: none; } /* Hide the browser's default checkbox */ .customcheck input { position: absolute; opacity: 0; cursor: pointer; } /* Create a custom checkbox */ .checkmark { position: absolute; top: 0; left: 0; height: 25px; width: 25px; background-color: #eee; border-radius: 5px; } /* On mouse-over, add a grey background color */ .customcheck:hover input ~ .checkmark { background-color: #ccc; } /* When the checkbox is checked, add a blue background */ .customcheck input:checked ~ .checkmark { background-color: #02cf32; border-radius: 5px; } /* Create the checkmark/indicator (hidden when not checked) */ .checkmark:after { content: ""; position: absolute; display: none; } /* Show the checkmark when checked */ .customcheck input:checked ~ .checkmark:after { display: block; } /* Style the checkmark/indicator */ .customcheck .checkmark:after { left: 9px; top: 5px; width: 5px; height: 10px; border: solid white; border-width: 0 3px 3px 0; -webkit-transform: rotate(45deg); -ms-transform: rotate(45deg); transform: rotate(45deg); } <div class="container"> <h1>Custom Checkboxes</h1></br> <label class="customcheck">One <input type="checkbox" checked="checked"> <span class="checkmark"></span> </label> <label class="customcheck">Two <input type="checkbox"> <span class="checkmark"></span> </label> <label class="customcheck">Three <input type="checkbox"> <span class="checkmark"></span> </label> <label class="customcheck">Four <input type="checkbox"> <span class="checkmark"></span> </label> </div>
Q: How to VBScript find file path? ok so I was creating an HTML that opens without toolbars or anything just by itself but I can't make it work for other computers this is what I got set webbrowser = createobject("internetexplorer.application") webbrowser.statusbar = false webbrowser.menubar = false webbrowser.toolbar = false webbrowser.visible = true webbrowser.navigate2 ("C:\Users\unknown\Desktop\Folder\myhtml.html") A: You should handle that: The user desktop folder location can be changed The desktop a user sees is a virtual view of more than one folder in the filesystem. Directly searching for the folder inside the user desktop will leave out the desktop folder configured for all the users. So, it is better to ask the OS to retrieve the required information Option Explicit ' folder in desktop and file in folder Const FOLDER_NAME = "Folder" Const FILE_NAME = "myhtml.html" Dim oFolder Const ssfDESKTOP = &H00& ' Retrieve a reference to the virtual desktop view and try to retrieve a reference ' to the folder we are searching for With WScript.CreateObject("Shell.Application").Namespace( ssfDESKTOP ) Set oFolder = .ParseName(FOLDER_NAME) End With ' If we don't have a folder reference, leave with an error If oFolder Is Nothing Then WScript.Echo "ERROR - Folder not found in desktop" WScript.Quit 1 End If Dim strFolderPath, strFilePath ' Retrieve the file system path of the requested folder strFolderPath = oFolder.Path ' Search the required file and leave with an error if it can not be found With WScript.CreateObject("Scripting.FileSystemObject") strFilePath = .BuildPath( strFolderPath, FILE_NAME ) If Not .FileExists( strFilePath ) Then WScript.Echo "ERROR - File not found in desktop folder" WScript.Quit 1 End If End With ' We have a valid file reference, navigate to it With WScript.CreateObject("InternetExplorer.Application") .statusBar = False .menubar = False .toolbar = False .visible = True .navigate2 strFilePath End With You can find more information on shell scriptable objects here
Q: Passing struct by value from an explicitly loaded dll built with a different compiler As far as I know it is safe to pass a struct across libraries, if the padding is compatible. So I wrote a test with a struct containing a single member, yet I still get a runtime error when reading the returned struct. PluginInterface.h: #ifndef PLUGIN_INTERFACE_H #define PLUGIN_INTERFACE_H typedef struct { const char *name; } DataStruct; class PluginInterface { public: virtual DataStruct __cdecl getData() = 0; }; #endif // PLUGIN_INTERFACE_H Plugin.h: #ifndef PLUGIN_H #define PLUGIN_H #include "PluginInterface.h" class Plugin : public PluginInterface { public: DataStruct __cdecl getData(); }; #endif // PLUGIN_H Plugin.cpp: #include "Plugin.h" DataStruct Plugin::getData() { DataStruct data; data.name = "name of plugin"; return data; } extern "C" __declspec(dllexport) PluginInterface* getInstance() { return new Plugin; } main.cpp: #include <iostream> #include "windows.h" #include "PluginInterface.h" typedef PluginInterface* (*PluginCreator) (); int main() { HINSTANCE handle = LoadLibrary("Plugin.dll"); if (handle == nullptr) { std::cout << "Unable to open file!" << std::endl; return 0; } PluginCreator creator = (PluginCreator)GetProcAddress(handle, "getInstance"); if (creator == nullptr) { std::cout << "Unable to load file!" << std::endl; return 0; } PluginInterface* plugin = creator(); if (plugin == nullptr) { std::cout << "Unable to create plugin!" << std::endl; return 0; } DataStruct data = plugin->getData(); std::cout << "so far so good" << std::endl; std::cout << data.name << std::endl; // Access violation return 0; } I compiled the plugin with mingw, the executable with VS2012. I also tried to replace the const char* with an int, in which case I get a random integer. I know passing a struct with a single element doesn't make much sense, but I still wonder what the problem is. A: The problem is not with passing the struct by value, as this would work if the function returning the struct was a non-member declared extern "C". The problem is with the call to the virtual function getData(). In your example, the VS2012 compiler generates the code to call the virtual function via a pointer to an object, but the object was created by code generated by a different compiler. This fails because the C++ ABI differs between the two compilers - which means that the underlying implementation of virtual functions is different. The call to creator succeeds because both compilers have the same underlying implemention of the C ABI. If you want to use C++ across library boundaries, you need to ensure that the libraries are compiled with the same C++ ABI version. Note that this can differ between different versions of the same compiler.
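To illustrate the answer's last point (this is my own sketch of a common workaround, not something from the original post): since the plain C ABI is what both compilers agree on, you can keep the plugin boundary pure C, exposing an opaque handle and free functions instead of a class with virtual methods, and load each function with GetProcAddress just like getInstance.

    // PluginC.h -- hypothetical C-style boundary shared by host and plugin
    #ifdef __cplusplus
    extern "C" {
    #endif

    typedef struct PluginHandle PluginHandle;    // opaque to the host

    __declspec(dllexport) PluginHandle* plugin_create(void);
    __declspec(dllexport) void plugin_destroy(PluginHandle* p);
    __declspec(dllexport) const char* plugin_get_name(PluginHandle* p);

    #ifdef __cplusplus
    }
    #endif

The plugin implements these in terms of its internal C++ class; the host never calls a virtual function or relies on C++ object layout across the DLL boundary.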
Q: Meaning of the adjective "clear" in the context of "blips of energy" It was in Crash Course Big History. It is at 3 minutes and 16 seconds. Here is the context: When the universe was still very, very small at the quantum scale, tiny fluctuations were popping in and out of existence. These tiny blips of energy usually don't affect the physics of the larger world. But, during inflation, they suddenly were clear, when the universe became big, causing slight inequalities in matter and energy. I cannot understand how tiny blips can become clear, or what it means here. Could you please rephrase the sentence for me? A: From the context, I think the speaker meant to say "they suddenly became clear". That is to say, that which did not previously affect the physics of the larger world suddenly resolved more fully into existence.
Q: How does C Handle Integer Literals with Leading Zeros, and What About atoi? When you create an integer with leading zeros, how does C handle it? Is it different for different versions of C? In my case, they just seem to be dropped (but maybe that is what printf does?): #include <stdio.h> int main() { int a = 005; printf("%i\n", a); return 0; } I know I can use printf to pad with 0s, but I am just wondering how this works. A: Leading zeros indicate that the number is expressed in octal, or base 8; thus, 010 = 8. Adding additional leading zeros has no effect; just as you would expect in math, x + 0*8^n = x; there's no change to the value by making its representation longer. One place you often see this is in UNIX file modes; 0755 actually means 7*8^2+5*8+5 = 493; or with umasks such as 0022 = 2*8+2 = 18. atoi(nptr) is defined as equivalent to strtol(nptr, (char **) NULL, 10), except that it does not detect errors - as such, atoi() always uses decimal (and thus ignores leading zeros). strtol(nptr, anything, 0) does the following: The string may begin with an arbitrary amount of white space (as determined by isspace(3)) followed by a single optional '+' or '-' sign. If base is zero or 16, the string may then include a "0x" prefix, and the number will be read in base 16; otherwise, a zero base is taken as 10 (decimal) unless the next character is '0', in which case it is taken as 8 (octal). So it uses the same rules as the C compiler. A: Be careful! In this statement 005 is an octal constant. int a = 005; In this case it doesn't matter because a single digit octal constant has the same value as the equivalent decimal constant but in C: 015 != 15 Whether an integer literal is expressed in octal, decimal or hexadecimal, once it is parsed by the compiler it is just treated as a value. How an integer is output via printf depends only on its type, its value and the format specifiers (and the active locale). A: The fact that a leading zero indicates a number is octal is something that's often forgotten. I've seen it cause confusion several times, such as when someone tried to input an IP address using a nice, regular format for the octets: 192.168.010.073 and the parser interpreted the last 2 octets as octal numbers. The only thing worse than C's unfortunate use of leading zeros to make a number octal is Javascript's handling of leading zeros to sometimes make a number octal (the number is octal if the rest of the digits are OK - less than 8 - decimal otherwise). In Javascript, (017 == 15) but (018 == 18). I'd rather there be an error; actually I'd rather drop octal literal support altogether. At least use a more in-your-face prefix, like maybe 0t10 (ocTal 8) 0k17 (oKtal 15) But I'm about 35 years too late with my proposal.
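A small self-contained C program to make the distinction concrete (my own example, not from the answers); it shows that the same characters "015" mean different things as a literal, to strtol with base 0, and to atoi:

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        printf("%d\n", 015);                        /* octal literal -> 13 */
        printf("%d\n", 15);                         /* decimal literal -> 15 */
        printf("%ld\n", strtol("015", NULL, 0));    /* base 0: leading zero means octal -> 13 */
        printf("%ld\n", strtol("015", NULL, 10));   /* forced decimal -> 15 */
        printf("%d\n", atoi("015"));                /* atoi always uses decimal -> 15 */
        return 0;
    }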
Q: Use SQL data for flot chart with C# and ASP I am new to flot charts and more a SQL guy than a C# programmer. I am attempting to have a bar chart reflect monthly sales. I can't seem to get my data through to the chart. I have been searching all over for a direct answer on getting this to work and have had zero luck after 4 days. Here is what the stored procedure returns: MonthID SoldCount MonthName 4 101 Apr 8 118 Aug 2 74 Feb 1 74 Jan 7 113 Jul 6 126 Jun 3 114 Mar 5 129 May 9 47 Sep Here is my code behind using System; using System.Collections.Generic; using System.Linq; using System.Web; using System.Web.UI; using System.Web.UI.WebControls; using System.Data; using System.Data.SqlClient; using System.Text; using System.Configuration; using SkywebReporter.Classes; namespace SkywebReporter { public partial class DefaultObject : System.Web.UI.Page { protected void Page_Load(object sender, EventArgs e) { } string conn = ConfigurationManager.ConnectionStrings["sqlConn"].ConnectionString; [System.Web.Services.WebMethod]//public static web method in code behind public static List<PNMACsales> GetData() //int StartRowindex, { List<PNMACsales> myResult = new List<PNMACsales>(); using (SqlConnection conn = new SqlConnection(ConfigurationManager.ConnectionStrings["sqlConn"].ConnectionString)) { //string sqlString = "SelectbyYearTotalProductAssign"; string sqlString = "PNMAC.procReportSalesCounts"; using (SqlCommand cmd = new SqlCommand(sqlString, conn)) { cmd.CommandType = System.Data.CommandType.StoredProcedure; conn.Open(); SqlDataReader rdr = cmd.ExecuteReader(CommandBehavior.CloseConnection); while (rdr.Read()) { PNMACsales obj = new PNMACsales(); obj.SoldCount = Convert.ToInt32(rdr["SoldCount"]); obj.MonthName = rdr["MonthName"].ToString(); myResult.Add(obj); } conn.Close(); } } return myResult; } } } This is my JS file Dashboard.js function DrowChart() { jQuery("#placeholder").html(''); var list12 = []; jQuery.ajax({ type: "POST", url: "DefaultObject.aspx/GetData", contentType: "application/json; charset=utf-8", dataType: "json", async: false, data: "{}", success: function (data) { jQuery.map(data.d, function (item) { var list = []; list.push("'" + item.MonthName + "'"); list.push(item.SoldCount); list12.push(list); }); var plot1 = jQuery.jqplot('chart1', [list12], ); } }); } A: Assuming you are successfully getting a response from your GetData() method, you need to serialize the List that you are returning from your GetData() function. This is because JavaScript can't do anything with a C# List. It needs to be converted to a string format (JSON) that JS can recognize. As some of the comments suggested, adding JSON.Net to your C# project is an easy way to do this. Once you have installed JSON.Net, you can change the return type of GetData() to string, and do the following: return JsonConvert.SerializeObject(myResult); In your JavaScript, you'll need to parse the serialized list: var list = JSON.parse(data); The list should be parsed as an array of objects, which you can then pass into flot in the format you need.
Q: d3 csv data loading I am trying to adapt a simple scatterplot program in D3 to accept a CSV file. When I use the data in the file it works just fine, but when I try to load the CSV file it simply won't work. Is there something simple I am missing? The contents of the CSV file "datatest.csv" are the same as the dataset in the code. I have checked that the browser is loading the data, and it seems to all be there. I figure I'm simply missing a step. <!DOCTYPE html> <html lang="en"> <head> <meta charset="utf-8"> <title>D3 Demo: Linear scales</title> <script type="text/javascript" src="../d3/d3.v3.js"></script> <style type="text/css"> /* No style rules here yet */ </style> </head> <body> <script type="text/javascript"> //Width and height var w = 900; var h = 500; var padding = 20; var dataset = []; // var dataset = [ // [5, 20], [480, 90], [250, 50], [100, 33], [330, 95], // [410, 12], [475, 44], [25, 67], [85, 21], [220, 88], // [600, 150] // ]; d3.csv("datatest.csv", function(data) { dataset=data }); //Create scale functions var xScale = d3.scale.linear() .domain([0, d3.max(dataset, function(d) { return d[0]; })]) .range([padding, w - padding * 2]); var yScale = d3.scale.linear() .domain([0, d3.max(dataset, function(d) { return d[1]; })]) .range([h - padding, padding]); var rScale = d3.scale.linear() .domain([0, d3.max(dataset, function(d) { return d[1]; })]) .range([2, 5]); //Create SVG element var svg = d3.select("body") .append("svg") .attr("width", w) .attr("height", h); svg.selectAll("circle") .data(dataset) .enter() .append("circle") .attr("cx", function(d) { return xScale(d[0]); }) .attr("cy", function(d) { return yScale(d[1]); }) .attr("r", function(d) { return rScale(d[1]); }); </script> </body> </html> This is the content of the CSV file: x-coordinate, y-coordinate 5,20 480,90 250,50 100,33 330,95 410,12 475,44 25,67 85,21 220,88 600,150 A: IMPORTANT: While the answer here works, there's a builtin method d3.csv.parseRows(), which achieves the same result. For that, see @ryanmt's answer (also on this page). However, keep in mind that regardless of the method you use, if your CSV has numbers in it then you'll need to convert them from strings to javascript Numbers. You can do it by prefixing the parsed values with a +. For example, in the solution I provided here — which doesn't use parseRows() — that conversion is achieved via +d["x-coordinate"]. THE ANSWER: The CSV parser produces an array of objects, rather than the array of arrays that you need. It looks like this: [ {"x-coordinate":"5"," y-coordinate":"20"}, {"x-coordinate":"480"," y-coordinate":"90"}, {"x-coordinate":"250"," y-coordinate":"50"}, {"x-coordinate":"100"," y-coordinate":"33"}, ... ] To transform it, you need to use a map() function: d3.csv("datatest.csv", function(data) { dataset = data.map(function(d) { return [ +d["x-coordinate"], +d["y-coordinate"] ]; }); }); (Note, map() is not available in older IE. If that matters, then there are plenty of workarounds with d3, jQuery, etc) A: D3 provides a builtin for this very thing. Instead of calling .parse() on your data, call .parseRows. This provides just the data as an Array, rather than an Object (based upon the header line). see the Documentation.
Q: Getting rid of duplicate entries in a Rails SQLite database I am learning Ruby on Rails. I started by importing some data into a SQLite database from a CSV file. Then I successfully transferred that data into my Rails environment. Upon inspection of the database, I realized that I had created 5 copies of each entry in the database. I wanted to clean the database and I was wondering what the best options to do that would be? Here is what I guess I need to do, but please suggest better ways if you think of them: Write a method in Rails that invokes raw SQL that removes the possible duplicates from the table and enters them into another table called "duplicates" Then go through the entries in table "Duplicates" and decide whether to keep them or delete them. Finally after the check is done, transfer the entries to be retained back to the original table Also, where should I put this method to remove duplicates? In the "model" or somewhere else? A: The easiest solution is to just clear your database and re-import so you have just one copy. Or you could use an SQLite client and clean it up in SQL directly. For a utility method like this, if you choose that route, generally you would make a Rake task. So it would go in lib/tasks.
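If you go the Rake-task route the answer mentions, a minimal sketch could look like this (the Entry model and the columns used to define a "duplicate" are hypothetical; adapt them to your schema). It keeps the first row of each duplicate group and destroys the rest:

    # lib/tasks/dedupe.rake
    namespace :db do
      desc "Remove duplicate Entry rows, keeping the lowest id in each group"
      task dedupe: :environment do
        Entry.group(:name, :value).having("COUNT(*) > 1").count.each_key do |name, value|
          Entry.where(name: name, value: value).order(:id).to_a.drop(1).each(&:destroy)
        end
      end
    end

Run it with rake db:dedupe, and once the data is confirmed clean, add a unique index so the duplicates cannot come back.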
Q: Only downloading new folders/files with rsync I periodically run an rsync command which downloads new files from my remote server. The files that are downloaded are stored in folders; once I have downloaded them to my local machine, I may delete folders (and their contents) that are no longer required. When I run my rsync command again it will download any new folders as well as the old folders that I have deleted from my local machine, which I don't want. What I would like the rsync command to do is store the folder names in a file (like downloaded.log) and then use this as my exclude file for the next time I run rsync, so it will not download these folders again. I think it would be more efficient to store only the folder names rather than folders and filenames, as by skipping the folder you would skip the file anyway. Could someone explain how I could have the rsync command output the folder names? Current rsync command: rsync -avz --dry-run remote-host:downloads/ ~/Downloads/ A: Use the --exclude-from=FILE option and put the directories you don't want in this file. For example, if you have a dir test with folders a, b and c inside and you want to sync it to a folder test2 but want to ignore folders b and c, you need to create a file like the following: $ cat ignore /b /c and then run the command rsync -avz --exclude-from=ignore test/ test2/ Edit: To fit your command: rsync -avz --dry-run --exclude-from=/path/to/ignore-file remote-host:downloads/ ~/Downloads/ and in the file /path/to/ignore-file make a list of the contents that are on remote-host in the downloads folder, like this: subfolder1/ subfolder2/ Edit 2: To make it automatic you can create a script like this (/home/youruser/scripts/add-to-ignore.sh): #!/bin/bash for filepath in ~/Downloads/* do filename=$(basename $filepath) echo "$filename/" >> /home/youruser/.ignorelist done And then run it like this: rsync -avz --dry-run --exclude-from=/path/to/ignore-file remote-host:downloads/ ~/Downloads/ && bash /home/youruser/scripts/add-to-ignore.sh That should do the trick, and the list will keep the old dirs. You could also use --log-file and --log-file-format to log what you've just copied to a file and then have a script strip the beginning of each line, so you could use this file as a source for --exclude-from.
Q: BigQuery create table (native or external) linked to Google Cloud Storage I have some files uploaded to Google Cloud Storage (CSV and JSON). I can create BigQuery tables, native or external, linking to these files in Google Cloud Storage. In the process of creating the tables, I can check "Schema Automatically detect". The automatic schema detection works well with the newline-delimited JSON file. But with the CSV file, whose first row contains the column names, BigQuery cannot auto-detect the schema; it treats the first line as data, and the schema BigQuery creates ends up as string_field_1, string_field_2, etc. Is there anything I need to do to my CSV file to make BigQuery's "Schema Automatically detect" work? The CSV file I have is a "Microsoft Excel Comma Separated Value File". Update: If the first column is empty, BigQuery autodetect doesn't detect the headers: custom id,asset id,related isrc,iswc,title,hfa song code,writers,match policy,publisher name,sync ownership share,sync ownership territory,sync ownership restriction ,A123,,,Medley of very old Viennese songs,,,,,,, ,A234,,,Suite de pièces No. 3 en Ré Mineur HWV 428 - Allemande,,,,,,, But if the first column is not empty, it is OK: custom id,asset id,related isrc,iswc,title,hfa song code,writers,match policy,publisher name,sync ownership share,sync ownership territory,sync ownership restriction 1,A123,,,Medley of very old Viennese songs,,,,,,, 2,A234,,,Suite de pièces No. 3 en Ré Mineur HWV 428 - Allemande,,,,,,, Should this be a feature improvement request for BigQuery? A: CSV autodetect does detect the header line in CSV files, so there must be something special about your data. It would be good if you could provide a real data snippet and the actual commands you used. Here is an example that demonstrates how it works: ~$ cat > /tmp/people.csv Id,Name,DOB 1,Bill Gates,1955-10-28 2,Larry Page,1973-03-26 3,Mark Zuckerberg,1984-05-14 ~$ bq load --source_format=CSV --autodetect dataset.people /tmp/people.csv Upload complete. Waiting on bqjob_r33dc9ca5653c4312_0000015af95f6209_1 ... (2s) Current status: DONE ~$ bq show dataset.people Table project:dataset.people Last modified Schema Total Rows Total Bytes Expiration Labels ----------------- ----------------- ------------ ------------- ------------ -------- 22 Mar 21:14:27 |- Id: integer 3 89 |- Name: string |- DOB: date
Q: Solving sparse linear least squares or a positive definite 5-band matrix system fast I want to quickly solve the linear least squares problem for $x \in \mathbb{R}^n$ $$\min_x \left\| A x - b \right\|_2^2$$ with a special sparse structure where each row in $A$ has only up to 4 consecutive non-zero entries. This makes its normal matrix $$C = A^T A$$ a positive definite 7-band matrix with a condition number between $8^2$ and $400^2$. So, I think these condition numbers aren't so bad that solving the Gauß normal equation system $$ C x = A^T b $$ instead would get me into trouble numerically. What are my options? I could try conjugate gradient methods, but I would prefer direct solvers that can deal with these kinds of special cases in $O(n)$ time independent of the condition. I'm aware of algorithms for the tridiagonal case and I guess I could try to adapt them for 5 bands (?). But before reinventing the wheel and/or testing many different algorithms, I wanted to ask you about what approach might be the most efficient in terms of time, because I have lots of these problems (millions) with values for $n$ of around 5000 where $A$ has about $4n$ rows. A: The LU factorization of $C$ along with forward and backward substitution works well in this case. The factorization can still be done completely in-place. So, there is no need to touch or create other off-band elements.
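The banded factorization the answer describes is available off the shelf; here is a hedged Python/SciPy sketch (the way A is generated below is a throwaway placeholder so the snippet runs standalone; substitute your real A and b). It forms the banded normal equations and solves them with a banded Cholesky, which is O(n) work for fixed bandwidth:

    import numpy as np
    import scipy.sparse as sp
    from scipy.linalg import solveh_banded

    # Placeholder problem: A is m x n with 4 consecutive nonzeros per row
    n, m, p = 5000, 4 * 5000, 3                      # p sub/superdiagonals -> 7-band C
    rng = np.random.default_rng(0)
    starts = np.arange(m) % (n - 3)                  # ensures every column is touched
    rows = np.repeat(np.arange(m), 4)
    cols = (starts[:, None] + np.arange(4)).ravel()
    A = sp.csr_matrix((rng.standard_normal(4 * m), (rows, cols)), shape=(m, n))
    b = rng.standard_normal(m)

    # Normal equations: C x = A^T b, with C symmetric positive definite and banded
    C = (A.T @ A).tocsc()
    rhs = A.T @ b

    # Pack the lower bands of C into the storage layout solveh_banded expects
    ab = np.zeros((p + 1, n))
    for i in range(p + 1):
        ab[i, : n - i] = C.diagonal(-i)

    x = solveh_banded(ab, rhs, lower=True)           # banded Cholesky solve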
{ "pile_set_name": "StackExchange" }
Q: Textarea validation When using input type text, I validate using the following code. <input type="text" name="subject" value="<?php echo $form->value("subject"); ?>"> <?php echo $form->error("subject"); ?> <textarea name="body" cols="10" rows="10"></textarea> <?php echo $form->error("body"); ?> As you can see, I am also using a textarea. How would I add the value="<?php echo $form->value("body"); ?>" to the textarea? Thanks A: Put it in between the <textarea> tags. <textarea name="body" cols="10" rows="10"><?php echo $form->value("body"); ?></textarea>
{ "pile_set_name": "StackExchange" }
Q: cursor is jumping when pressing the arrow keys I have a textbox, where a forbidden character cant be typed. #. This works, however, when the textbox is filled in with data, and I put the focus on the middle of the textbox and then I use the arrow keys to go left and right, then it jumps to the end of the textbox. If I type a character also in the middle of the textbox, it goes to the end again $('[id$=txtClient]').keyup(function () { EnableClientValidateButton(); // When the textbox changes, the user has the ability to validate the client ChangeColorClient("0"); // The color is changed to white, to notify the user the client is not validated yet. var $el = $('[id$=txtClient]'); // the text element to seach for forbidden characters. var text = $el.val(); // The value of the textbox text = text.split("#").join("");//remove occurances of forbidden characters, in this case # $el.val(text);//set it back on the element }); A: Javascript allows you to set the cursor position for inputs. I found two useful functions: getCaretPosition - https://stackoverflow.com/a/2897229/2335291 setCaretPosition - https://stackoverflow.com/a/512542/2335291 And the solution could look like this: function getCaretPosition (elem) { // Initialize var iCaretPos = 0; // IE Support if (document.selection) { // Set focus on the element elem.focus (); // To get cursor position, get empty selection range var oSel = document.selection.createRange (); // Move selection start to 0 position oSel.moveStart ('character', -elem.value.length); // The caret position is selection length iCaretPos = oSel.text.length; } // Firefox support else if (elem.selectionStart || elem.selectionStart == '0') iCaretPos = elem.selectionStart; // Return results return (iCaretPos); } function setCaretPosition(elem, caretPos) { if(elem != null) { if(elem.createTextRange) { var range = elem.createTextRange(); range.move('character', caretPos); range.select(); } else { if(elem.selectionStart) { elem.focus(); elem.setSelectionRange(caretPos, caretPos); } else elem.focus(); } } } $('[id$=txtClient]').keyup(function () { EnableClientValidateButton(); // When the textbox changes, the user has the ability to validate the client ChangeColorClient("0"); // The color is changed to white, to notify the user the client is not validated yet. var $el = $('[id$=txtClient]'); // the text element to seach for forbidden characters. var text = $el.val(); // The value of the textbox text = text.split("#").join("");//remove occurances of forbidden characters, in this case # var pos = getCaretPosition(this); $el.val(text);//set it back on the element setCaretPosition(this, pos); }); A: This is a bit unpleasant, and I'm not 100% happy, but it solves all the given issues that you've had... $("[id$=txtClient]").keyup(function (e) { var text = $(this).val(); if (text.indexOf("#") > -1) { text = text.replace("#", ""); $(this).val(text); } }); Here's a jsFiddle example... http://jsfiddle.net/E4cBK/
{ "pile_set_name": "StackExchange" }
Q: Qt - Writing integer data into JSON I am using Qt (5.5) and I want to exchange data in JSON format in a client-server application. So the format is constant: { "ball": { "posx": 12, "posy": 35 } } I would like to be able to define a ByteArray or string like so: QByteArray data = "{\"ball\":{\"posx\":%s,\"posy\":%s}}" and then just write whatever the values for that into the string. How do I do that? A: QtJson is baked into Qt 5. It is easy to use, and gets it all ready for you pretty easily. #include <QCoreApplication> #include <QDebug> #include <QJsonObject> #include <QJsonDocument> void saveToJson(QJsonObject & json); int main(int argc, char *argv[]) { QCoreApplication a(argc, argv); QJsonObject jsonObject; saveToJson(jsonObject); QJsonDocument jsonDoc(jsonObject); qDebug() << "Example of QJsonDocument::toJson() >>>"; qDebug() << jsonDoc.toJson(); qDebug() << "<<<"; return a.exec(); } void saveToJson(QJsonObject & json) { QJsonObject ball; ball["posx"] = 12; ball["posy"] = 35; json["ball"] = ball; } output Example of QJsonDocument::toJson() >>> "{ "ball": { "posx": 12, "posy": 35 } } " <<< Note: qDebug() wraps QString objects in quotes when printing. To get rid of that, pass your QString into qPrintable(). And it puts endl in for you at the end of each line. For a more complex example see the official: JSON Save Game Example http://doc.qt.io/qt-5/qtcore-json-savegame-example.html Hope that helps.
{ "pile_set_name": "StackExchange" }
Q: Cost for Google Map API use in android

I have an Android app which uses a map. I am using my own tile provider: I created a TileProvider class and gave it the URL of my map server. I also have a proper authentication key in AndroidManifest.xml. I want to know if I need to pay Google for using the maps in my Android application even though I am using my own tile provider (the auth key is from Google).

A: No. You don't have to pay for Google Maps Android API v2 even if you don't use your own TileProvider.
{ "pile_set_name": "StackExchange" }
Q: How to insert records for particular store field in extjs

I am getting Month, Target and Target1 values from a web service, and they are available in a store. I want to calculate the total value and insert it into the total field of the same store. I am doing the calculation like this, but I don't know how to insert the total value into the total field. Can anybody tell me how to do this?

chartstore.each(function (rec) {
    total = parseFloat(rec.get('target')) + parseFloat(rec.get('target1'));
});

Month  Target  Target1  Total
Jan    25      25       50
Mon    50      50       100

A: You should use a convert function in your model for the total field, like below:

{
    name : 'total',
    convert : function( value, record ) {
        var totalValue = record.get('Target') + record.get('Target1');
        return totalValue;
    },
    type: 'number'
},
{ "pile_set_name": "StackExchange" }
Q: Error 400 when protecting a worksheet Based on the value of a particular cell I need to, potentially, unprotect a worksheet, set a range to locked and reprotect the worksheet. Conversely if the value of the cell (in this case cell B4 is equal to "work") then I need to unprotect the worksheet, unlock the cells, and then reprotect the worksheet. The reason for this is that I want to stop the user tabbing to cells A8:B19 when cell B4 is not equal to work. When B4 = "work" the user can input numbers into cells A8:B19. There are limited options to input when B4 <> "work" and I have the setting to only tab between unlocked cells checked which makes input easier. Currently I have this: Private Sub Worksheet_Change(ByVal Target As Range) If Not Intersect(Target, Range("B4")) Is Nothing Then If Range("B4").Value = "work" Then Shapes("Rectangle 1").Visible = False Application.EnableEvents = False ActiveSheet.Unprotect ActiveSheet.Range("A8:B19").Locked = True ActiveSheet.Protect ‘When the code hits this line it throws error 400 Application.EnableEvents = True End If If Range("B4").Value <> "work" Then Shapes("Rectangle 1").Visible = True Application.EnableEvents = False ActiveSheet.Unprotect ActiveSheet.Range("A8:B19").Locked = True ActiveSheet.Protect ‘When the code hits this line it throws error 400 Application.EnableEvents = True End If End If End Sub Clearly it depends on the value of cell B4 as to which "ActiveSheet.Protect" causes the error to occur, but it always does. Commenting out the offending line allows the VBA code to run as expected aside however it leaves the worksheet unlocked. I've tried moving the "ActiveSheet.Protect" line to further down the sub, calling it in a different sub etcetera and no luck, it always causes error 400. I'm aware that Elseif would no doubt be better practise however I changed from If, ElseIf End If to see if it made any difference. It didn't. Curiously, I tried a similar thing to prove the principle in another Excel sheet with the following code: Private Sub Worksheet_Change(ByVal Target As Range) If Not Intersect(Target, Range("A1")) Is Nothing Then Dim i As Integer If Cells(1, 1) = "unlock" Then i = 1 Shapes("Oval 1").Visible = False Application.EnableEvents = False ActiveSheet.Unprotect For i = 1 To 5 Cells(i, 2) = i Next i ActiveSheet.Range("C1:C5").Locked = False ActiveSheet.Protect ElseIf Cells(1, 1) <> "unlock" Then i = 1 Shapes("Oval 1").Visible = True Application.EnableEvents = False ActiveSheet.Unprotect For i = 1 To 5 Cells(i, 2) = "" Cells(i, 3) = "" Next i ActiveSheet.Range("C1:C5").Locked = True ActiveSheet.Protect End If Application.EnableEvents = True End If End Sub This works exactly as I'd expect, and doing a similar thing. Aside from the different cells and a for loop I can't see any difference between the two samples of code above with regards to the unprotect, change locked variable, protect process. Baffled, any help much appreciated. A: Use the UserInterfaceOnly argument - this protects the sheet, but still allows any programmatic changes to occur without the need to unprotect and reprotect: Sheets("Some Sheet").Protect Password:="Pass123", UserInterfaceOnly:=True Sheets("Some Sheet").Range("A1").Value = "Foo" '// code runs without error You can use the Workbook_Open event to ensure any required sheets are locked in this way and then there's no need to manage it in any further code: Private Sub Workbook_Open() For Each ws In ThisWorkbook.Sheets ws.Protect UserInterfaceOnly:=True Next End Sub
{ "pile_set_name": "StackExchange" }
Q: Why are APK'S buggier than Play store downloaded apps? I had downloaded an early copy of some game, as its apk was put on apitoide after the app was available for 15 mins. Its an app that allows you to fling spheres at imaginary monsters. Anyway (Ha-Ha!) The apk was a horrible, buggy mess, whilst the play store version was significantly cleaner. ** Is there any reason to why APK's are more 'broken' than their play-store versions? ** A: The Play Store version is the official version, which often has updates pushed out on a regular basis. Developers put effort into maintaining the play store version. If you are using an APK version downloaded from elsewhere, there is both the danger that you have put malware on your device and the high probability that it's older than the official release.
{ "pile_set_name": "StackExchange" }
Q: remote disable apache server and mysql database I have a potential client who would like to use my web app but will only do so if they can have it on a server in their own office instead of using it on my hosted server. I am tempted to just say no, but at the same time I could use the extra client and wanted to investigate options of being able to disable the service in the event of non-payment. I was thinking along the lines of having a cron script check a specific location on my server each day and based on the response either keep operating or disable the apache/mysql services. I could do this except one aspect eludes me. In the old days I used to be able to write a small c app using setuid(0) to execute commands as root. This seems to be no longer the case due to security which is fair enough, but I will need something like this to be able to shutdown apache and mysql. Is there another option? I have just also thought as I was typing that my cron script (if told to disable) could write a .htaccess file redirecting everything to a disabled message. Has anyone done this before and if so how did you do it? The server will be running Ubuntu. A: Don't try to solve communication problems technically, because you will fail. Instead, only hand over the work if payment has been done. Create a contract that handles the details, incl. usage rights (how many copies etc.). Job done. Extra client for you. No hassle. No complex script that might even break your site.
{ "pile_set_name": "StackExchange" }
Q: What if the Chelyabinsk meteor had been a black hole of equivalent mass? The Chelyabinsk meteor is estimated to have had a mass of 10,000 - 13,000 metric tons. A black hole of mass $13\times10^6\ \mathrm{kg}$ has a radius of $$ r = \frac{2GM}{c^2} = \frac{(2 \times 6.674\times10^{-11}\ \mathrm{m^3\ kg^{-1}\ s^{-2}}) \times (13\times10^6\ \mathrm{kg})}{9\times10^{16}\ \mathrm{m^2\ s^{-2}}} \approx 1.93 \times 10^{-20}\ \mathrm m \approx \frac{1}{22000} r_\text{proton}$$ According to this Hawking Radiation Calculator a black hole of this mass has a lifetime of about 185000 seconds, a little over 2 days. Assume for a moment that somehow such a black hole could exist and collide with the earth, am I correct in assuming that, since its interaction cross-section is so small, it would sail through the earth without much happening? Looking it at from another point of view, calculating the gravitational attraction to the black hole at short distances, I find that at $1\ \mathrm m$ the force is negligible ($.0009\ \mathrm N$). However the inverse-square law applies, so at $1\ \mathrm{mm}$ the force is $867\ \mathrm N$, meaning maybe the "cross-section" isn't so small after all. The Hawking Radiation Calculator also gives a luminosity of $\rlap{\raise{0.5ex}{\rule{17ex}{1px}}}\approx 3.56 \times 10^8\ \mathrm W$. At a distance of $\rlap{\raise{0.5ex}{\rule{5ex}{1px}}}1\ \mathrm{km}$ the intensity would be only $\rlap{\raise{0.5ex}{\rule{10ex}{1px}}}28\ \mathrm{W/m^2}$. You probably wouldn't want to get too close to it. (see below) So what happens: Not much, sails right through Lots of fireworks but no lasting damage Immediate global cataclysm (1) or (2) initially but the black hole settles at the center of the earth and eventually consumes the planet (4) but Hawking Radiation prevents net matter inflow and the black hole eventually evaporates in a burst of energy... but then, how much energy? Correction: I did something wrong the first time in the Hawking Radiation Calculator... the actual luminosity would be $\approx 2.1 \times 10^{18}\ \mathrm W$, greater by 10 orders of magnitude. At $1000\ \mathrm{km}$ the flux would be about $167\ \mathrm{kW/m^2}$. So basically a significant fraction of the Earth's surface under the black hole's path would be sterilized, and as it approached the surface it would induce fusion. Not a pretty sight, and we're getting closer to Option 3 for a lot of people. A: Update: The values of luminosity of the blackhole and the composition of the Hawking radiation emitted by a hot blackhole claimed in this answer seem to be inaccurate. Check @A.V.S's answer for a more accurate description. For a blackhole of a mass of a few thousand metric tons, the Hawking blackbody radiation would correspond to an astronomically high temperature of about $10^{16}$ K ($ \because T = \frac{1.227 \times 10^{23}}{M}$ ). The radiation from such a hot blackhole would mostly be in high energy gamma rays with each photon carrying TeVs of energy. Note that the temperature claimed above is orders of magnitude higher than that is required to start nuclear fusion. As most meteors have velocities well above the escape velocity of earth, the blackhole might just pass through the earth in a hyperbolic orbit around the core of the earth ( I assume that there is no other mechanism, electromagnetical or otherwise that would cause the blackhole to lose energy but I could be wrong). 
$2 \times 10^{18} W$ of energy is enormous, infact an order of magnitude more than the amount of solar radiation received by the earth and all that energy is in high energy gamma rays. This could be the end of all life forms on earth and would permanently deface earth. Update: Not just a fraction of the earth and or a 'lot of people' as you say in your question's update, I believe that the power output would be enough to sterilize the entire planet or atleast the macroscopic life forms. Consider the fact that all the nuclear bombs ever tested and deployed on Earth till now together total about $10^{18} J$. So that would mean the power output of the blackhole would be equivalent to blowing up all those bombs every second. Moreover the temperature of the explosion and the energy of photons produced would be orders of magnitude higher than that produced by any normal fission or a fusion nuclear bomb. The runaway reactions that would follow such an event would have direct and indirect consequences on life on Earth. Also consider that fact that the power output and the temperature of the black hole increase more and more as the black hole evaporates and reduces it mass. A: The temperature of Hawking radiation in units of energy (for a $1.3\cdot10^{7}\,\text{kg}$ black hole) is $$T=\frac{\hbar c^3}{8\pi G M} \approx 813\,\text{GeV} .$$ As explained in my answer to the question What is the relative composition of Hawking radiation? for such high temperatures Hawking radiation consists mainly from quarks and gluons that quickly hadronize producing jets. And since quarks and gluons have a lot of degrees of freedom (due to color and flavor) as a result, total power radiated by a black hole with such a high temperature would be much larger than the number from Xaonon's Hawking radiation calculator. The paper cited in my above-mentioned answer: MacGibbon, J. H., & Webber, B. R. (1990). Quark-and gluon-jet emission from primordial black holes: The instantaneous spectra. Physical Review D, 41(10), 3052, doi. provides the following figure for total power of Hawking radiation (I will omit uncertainties): $$ P_\text{tot}\approx 3.2\times10^{24}\left(\frac{T}{\text{GeV}}\right)^{2.1}\,\text{GeV sec}^{-1}. $$ For a $13\,000$ metric tons black hole, this gives $P_\text{tot}\approx 6.2\cdot10^{20}\,\text{W}$, two orders of magnitude larger that calculator's figure. (Actually, the power should be even more if emission of Higgs bosons is included). Such power corresponds to at least $7 \,\text{tons}\,\text{sec}^{-1}$ mass loss for the black hole, and so the total lifetime (from $13\,000\,\text{tons}$) would be less than 10 minutes. This power also greatly exceed Eddington limit, so there would be no matter absorption. About half of this power would be released in the form of neutrinos/antineutrinos (and small fraction in form of gravitons) and so would not be (noticeably) interacting with any matter. The rest of the energy would be in the form of energetic gamma rays and relativistic hadrons and leptons and would definitely produce a planetary cataclysm. The exact structure of such cataclysm depends on the geometry of a collision. Even if we assume a glancing flythrough of the black hole when the final stage of its explostion happened far enough from the Earth surface, the energy absorbed by Earth would still be enormous. Remember that Tsar Bomba releases only about $2\,\text{kg}\cdot c^2$ of energy, while our black hole releases about 1000 times that energy (in the form of charged particles) every second. 
Even if we completely disregard the irradiation of Earth during black hole approach (the considerable share of energy would be absorbed by upper atmosphere over large surface area), a 5 second flight inside the atmosphere would produce a fireball equivalent to several thousands Tsar Bombas. The shock wave created would be destructive across the whole globe. If considerable energy is deposited inside the earth core, this would produce an earthquake of an enormous magnitude. Nevertheless, while such cataclysm has the potential to wipe human civilization, in terms of heat capacities of world oceans (and the Earth itself) the total energy is relatively minor. So life inside the ocean would likely survive. A: That meteor was observed quite clearly. And it was moving at speeds that didn't surprise for a meteor, so call it 20 km/s just to have a number. 3E8 watts is going to be visible for a quite good distance. I think we'd have seen it coming. If it didn't stay in the Earth, then in the 2 days of life it has, it gets only about 3E6 km. If that much mass turned to radiation inside that distance, I think we notice. It gets a lot brighter towards the end IIRC. I think with that radius for the horizon, and that much radiation coming out, it does not much notice a little thing like rock to fly through. I think it winds up burning a hole ahead of it, with radiation pressure shoving the debris out of the way. So I'm estimating 2-ish. Seriously intensely bright light, pin-point sized, goes pretty much straight along, comes out the other side. If it hits you, it will be pretty nasty. Probably a lot of serious hard radiation coming from it. It then travels a few times the distance to the moon, then goes with a final super big flash. How bad that final flash would be is more than I can calculate.
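For anyone who wants to reproduce the headline figures quoted above, the short script below evaluates the textbook photon-only Hawking formulas for the temperature $T=\hbar c^3/(8\pi G M k_B)$, the power $P=\hbar c^6/(15360\pi G^2 M^2)$ and the lifetime $t=5120\pi G^2 M^3/(\hbar c^4)$ at $M=1.3\times10^{7}\,\mathrm{kg}$. As the second answer stresses, the photon-only estimate undercounts the real emission once quark and gluon degrees of freedom open up, so treat the power as a lower bound; the constants and formulas are standard, the rest of the script is just illustration.

# Back-of-the-envelope check of the figures quoted above (photon-only formulas).
import math

hbar = 1.054571817e-34   # J s
c    = 2.99792458e8      # m/s
G    = 6.67430e-11       # m^3 kg^-1 s^-2
kB   = 1.380649e-23      # J/K

M = 1.3e7                # kg, roughly 13 000 metric tons

r_s   = 2 * G * M / c**2                                # Schwarzschild radius
T     = hbar * c**3 / (8 * math.pi * G * M * kB)        # Hawking temperature, K
P     = hbar * c**6 / (15360 * math.pi * G**2 * M**2)   # photon-only power, W
tau   = 5120 * math.pi * G**2 * M**3 / (hbar * c**4)    # photon-only lifetime, s
T_GeV = kB * T / 1.602176634e-10                        # k_B T expressed in GeV

print(f"radius      : {r_s:.3e} m")
print(f"temperature : {T:.3e} K  (~{T_GeV:.0f} GeV)")
print(f"power       : {P:.3e} W")
print(f"lifetime    : {tau:.3e} s")

Running it gives a radius of about 2e-20 m, a temperature of about 8e2 GeV, a power of about 2e18 W and a lifetime of about 1.9e5 s, matching the numbers in the question and the first two answers.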
{ "pile_set_name": "StackExchange" }
Q: How to download via Chrome on Nexus 7 with Jelly Bean 4.2.2 I have a Nexus 7 updated to Jelly Bean 4.2.2, with all apps updated. Chrome is version 28.0.1500.94. I navigated to http://diablo.incgamers.com/blog/comments/diablo-3-podcast-102-public-game-leeches-economics-and-hardcore-dh-follies in Chrome and long-clicked the "Download" hyperlink located below the video. This is a link to a .MP3 file (audio podcast). A short message appeared, indicating that the download had started. However, no indicator appeared in the notification bar at the top and nothing is present in "Apps, Downloads", either when the download starts or a while afterwards. I looked in the Downloads folder via Rhythm Software's "File Manager". The file appeared there for a short while, but vanished soon afterwards; it vanished long before it would have finished downloading. Neither uninstalling "File Manager" nor restarting the tablet solved the problem. Single-clicking the "Download" link causes Chrome to start playing the file, which is not what I want; I want to download the .MP3 file. II only have a few other apps installed, which are not download/file manager apps. A: I tried to download the link using Chrome for Android SEVERAL times, but it always disappears. Like I've been telling myself before, Android for Chrome is not mature enough and lacking in (so many) features. I would suggest installing a 3rd-party browser like Dolphin or Boat, which are both highly-customizable. I downloaded the MP3 file successfully using Boat (which is my main browser), and it appears in my Download folder.
{ "pile_set_name": "StackExchange" }
Q: document.ready and on load function in angular js?

I learned in jQuery that when only the HTML has loaded, the document.ready function fires, and when all images and everything else have loaded, the onload function fires. I need to find the same thing in Angular: could you please tell me which function fires first, which fires when the HTML has loaded, and which fires when everything has loaded in AngularJS? Thanks

A: Let's say that you have a div to which you want to attach an on-load function:

<div ng-controller="MainCtrl">
    <div ng-view></div>
</div>

From MainCtrl you can listen for the event:

$scope.$on('$viewContentLoaded', function(){ });

You can listen in your controllers defined in routes, like myController and myRouteController, for the $viewContentLoaded event. $viewContentLoaded is emitted every time the ngView content is reloaded and should provide functionality similar to document.ready when routing in AngularJS.
{ "pile_set_name": "StackExchange" }
Q: Can't parse XML effectively using Python import urllib import xml.etree.ElementTree as ET def getWeather(city): #create google weather api url url = "http://www.google.com/ig/api?weather=" + urllib.quote(city) try: # open google weather api url f = urllib.urlopen(url) except: # if there was an error opening the url, return return "Error opening url" # read contents to a string s = f.read() tree=ET.parse(s) current= tree.find("current_condition/condition") condition_data = current.get("data") weather = condition_data if weather == "<?xml version=": return "Invalid city" #return the weather condition #return weather def main(): while True: city = raw_input("Give me a city: ") weather = getWeather(city) print(weather) if __name__ == "__main__": main() gives error , I actually wanted to find values from google weather xml site tags A: Instead of tree=ET.parse(s) try tree=ET.fromstring(s) Also, your path to the data you want is incorrect. It should be: weather/current_conditions/condition This should work: import urllib import xml.etree.ElementTree as ET def getWeather(city): #create google weather api url url = "http://www.google.com/ig/api?weather=" + urllib.quote(city) try: # open google weather api url f = urllib.urlopen(url) except: # if there was an error opening the url, return return "Error opening url" # read contents to a string s = f.read() tree=ET.fromstring(s) current= tree.find("weather/current_conditions/condition") condition_data = current.get("data") weather = condition_data if weather == "<?xml version=": return "Invalid city" #return the weather condition return weather def main(): while True: city = raw_input("Give me a city: ") weather = getWeather(city) print(weather)
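Since the Google endpoint may not respond as expected, it can help to check the parsing logic offline first. The snippet below feeds a hand-written XML sample (shaped like the structure the corrected code expects, not a captured response) through the same ElementTree calls, so you can confirm the "weather/current_conditions/condition" path independently of the network.

# Offline check of the parsing logic; the XML below is an illustrative sample.
import xml.etree.ElementTree as ET

sample = """<xml_api_reply>
  <weather>
    <current_conditions>
      <condition data="Partly Cloudy"/>
      <temp_c data="21"/>
    </current_conditions>
  </weather>
</xml_api_reply>"""

root = ET.fromstring(sample)
condition = root.find("weather/current_conditions/condition")
print(condition.get("data"))   # prints: Partly Cloudy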
{ "pile_set_name": "StackExchange" }
Q: Set a custom SessionStore for ConfigureApplicationCookie without BuildServiceProvider() I have a .NET Core 3 project (recently upgraded from 2.2) that uses a Redis distributed cache and cookie authentication. It currently looks something like this: public void ConfigureServices(IServiceCollection services) { // Set up Redis distributed cache services.AddStackExchangeRedisCache(...); ... services.ConfigureApplicationCookie(options => { ... // Get a service provider to get the distributed cache set up above var cache = services.BuildServiceProvider().GetService<IDistributedCache>(); options.SessionStore = new MyCustomStore(cache, ...); }): } The problem is that BuildServiceProvider() causes a build error: Startup.cs(...): warning ASP0000: Calling 'BuildServiceProvider' from application code results in an additional copy of singleton services being created. Consider alternatives such as dependency injecting services as parameters to 'Configure'. This doesn't appear to be an option - ConfigureApplicationCookie is in Startup.ConfigureServices and can only configure new services, Startup.Configure can use the new services, but can't override CookieAuthenticationOptions.SessionStore to be my custom store. I've tried adding services.AddSingleton<ITicketStore>(p => new MyCustomRedisStore(cache, ...)) before ConfigureApplicationCookie, but this is ignored. Explicitly setting CookieAuthenticationOptions.SessionStore appears to be the only way to get it to use anything other than the local memory store. Every example I've found online uses BuildServiceProvider(); Ideally I want to do something like: services.ConfigureApplicationCookieStore(provider => { var cache = provider.GetService<IDistributedCache>(); return new MyCustomStore(cache, ...); }); Or public void Configure(IApplicationBuilder app, ... IDistributedCache cache) { app.UseApplicationCookieStore(new MyCustomStore(cache, ...)); } And then CookieAuthenticationOptions.SessionStore should just use whatever I've configured there. How do I make the application cookie use an injected store? A: Reference Use DI services to configure options If all the dependencies of your custom store are injectable, then just register your store and required dependencies with the service collection and use DI services to configure options public void ConfigureServices(IServiceCollection services) { // Set up Redis distributed cache services.AddStackExchangeRedisCache(...); //register my custom store services.AddSingleton<ITicketStore, MyCustomRedisStore>(); //... //Use DI services to configure options services.AddOptions<CookieAuthenticationOptions>(IdentityConstants.ApplicationScheme) .Configure<ITicketStore>((options, store) => { options.SessionStore = store; }); services.ConfigureApplicationCookie(options => { //do nothing }): } If not then work around what is actually registered For example //Use DI services to configure options services.AddOptions<CookieAuthenticationOptions>(IdentityConstants.ApplicationScheme) .Configure<IDistributedCache>((options, cache) => { options.SessionStore = new MyCustomRedisStore(cache, ...); }); Note: ConfigureApplicationCookie uses a named options instance. - @KirkLarkin public static IServiceCollection ConfigureApplicationCookie(this IServiceCollection services, Action<CookieAuthenticationOptions> configure) => services.Configure(IdentityConstants.ApplicationScheme, configure); The option would need to include the name when adding it to services.
{ "pile_set_name": "StackExchange" }
Q: Problems with vectorization using the Intel C++ compiler in Visual Studio

The code below comes from a project I am developing; basically it is the multiplication of a square matrix. However, the results I got by parallelizing the application with the OpenMP API were better than the results I got using SIMD from the same API. What am I doing wrong? Is it the syntax? Some information that may be relevant to identifying the problem: I am using the Intel compiler through the Visual Studio IDE; Visual Studio's OpenMP is version 2.0 (which does not support SIMD), but I think the one that ships with the compiler being used is 4.0. Anyway, parallel processing is a new activity for me, so if you can clarify things I would be deeply grateful. Here is the code:

#include "stdafx.h"
#include <iostream>
#include <time.h>
#include <omp.h>

using namespace std;

int lin = 800, col = 800; // Row and column sizes

int main()
{
    // --------------------------------------
    // Create matrix 1
    int** m1 = new int*[lin];
    for (int i = 0; i < lin; ++i)
        m1[i] = new int[col];
    // --------------------------------------

    // --------------------------------------
    // Create matrix 2
    int** m2 = new int*[lin];
    for (int i = 0; i < lin; ++i)
        m2[i] = new int[col];
    // --------------------------------------

    // --------------------------------------
    // Create the result matrix
    int** res = new int*[lin];
    for (int i = 0; i < lin; ++i)
        res[i] = new int[col];
    // --------------------------------------

    cout << "created matrices" << endl;

    // FILL m1 and m2
    // ----------------------------------------------------------------------------
    // PARALLEL BLOCK
    #pragma omp simd collapse (2)
    for (int i = 0; i < lin; ++i) {
        for (int j = 0; j < lin; ++j) {
            m1[i][j] = (i + 1);
        }
    }
    // END OF PARALLEL BLOCK

    // PARALLEL BLOCK
    #pragma omp simd collapse (2)
    for (int i = 0; i < lin; ++i) {
        for (int j = 0; j < lin; ++j) {
            m2[i][j] = (i + 1);
        }
    }
    // END OF PARALLEL BLOCK

    cout << "filled" << endl;
    // ----------------------------------------------------------------------------

    // make the magic happen
    clock_t timer = clock(); // timing variables

    // ----------------------------------------------------------------------------
    cout << "started" << endl;

    #pragma omp simd collapse (2)
    for (int i = 0; i < lin; i++) {
        for (int j = 0; j < lin; j++) {
            res[i][j] = 0;
            for (int k = 0; k < lin; k++)
                res[i][j] += m1[i][k] * m2[k][j];
        }
    }

    cout << "finished" << endl;
    // ----------------------------------------------------------------------------

    // record the final time and display it
    timer = clock() - timer;
    cout << "Program finished in " << ((float)timer) / CLOCKS_PER_SEC << " Seconds" << endl;
    system("Pause");
}
// This code is contributed
// by Soumik Mondal

A: This question is similar to one asked on the English Stack, and the answer to it is the same as the one it received there. In any case, since the purpose of this forum is to provide answers in Portuguese, I will translate @jonathan-dursi's answer:

The standard provided in the link is relatively clear (page 13, lines 19+20): when any thread encounters a simd construct, the iterations of the loop associated with the construct may be executed by the SIMD lanes that are available to the thread.

SIMD is something internal to a thread. More concretely, on a CPU you can picture using simd directives to specifically request vectorization of chunks of loop iterations that individually belong to the same thread.

This exposes the multiple levels of parallelism that exist in a single multicore processor, in a platform-independent way. See, for example, the discussion (along with the accelerator material) in this post on the Intel blog.

So, basically, you will want to use omp parallel to distribute the work across different threads, which can migrate to several cores; and you will want to use omp simd to make use of the vector pipelines (for example) within each core. Normally, omp parallel goes around the outside of the code section to handle the coarser-grained parallel distribution of work, and omp simd goes around tight loops inside it to exploit fine-grained parallelism.

In short: SIMD on its own has little potential to beat an ordinary parallel region. The simd directive does not create a parallel region. SIMD code will be more efficient than parallel code only if it handles common weaknesses such as cache misses better, or if it has more lanes than your processor has cores, which is quite uncommon. Besides that, the simd directive is a hint to the preprocessor; there is no guarantee that your code will be vectorized. You may see gains if you combine the directives.

(and I accidentally answered the English question instead of this one :p )
{ "pile_set_name": "StackExchange" }
Q: Singularities of secant varieties of rational normal curves Let $C\subset\mathbb{P}^n$ be a rational normal curve of degree $n$, and let $Sec_k(C)\subset\mathbb{P}^n$ be its $k$-th secant variety. By Theorem 1.1 in this paper: http://ac.els-cdn.com/S0022404908002387/1-s2.0-S0022404908002387-main.pdf?_tid=120cfede-1405-11e4-91e8-00000aab0f6c&acdnat=1406297453_6fca2d4de380c88c04cfe110390a8418 we have that $Sec_k(C)$ is normal and $Sing(Sec_k(C)) = Sec_{k-1}(C)$. Does $Sec_k(C)$ have ordinary singularities of multiplicity two along $Sec_{k-1}(C)\setminus Sec_{k-2}(C)$? More precisely let $f:X\rightarrow\mathbb{P}^n$ be the blow-up of $\mathbb{P}^n$ along $Sec_{k-1}(C)$ with exceptional divisor $E$, and let $Y$ be the strict transform of $Sec_k(C)$. Is the following statement true? The strict transform $Y$ is smooth, it intersects $E$ transversally and we have $$Y = f^{*}Sec_k(C)-2E.$$ I guess this should be true for instance when we consider a rational normal curve $C$ of degree four and $Sec_2(C)$ which is a cubic hypersurface. A: The following is a consequence of Theorem 1 in "A. Bertram, Moduli of Rank-$2$ Vector Bundles, Theta divisors, and the geometry of curves in projective space, J. Differential Geom. 35, 1992, 429-469." Let $C\subset\mathbb{P}^{2h}$ be a degree $2h$ rational normal curve. Consider the following sequence of blow-ups: $\pi_1:X_1\rightarrow\mathbb{P}^{2h}$ the blow-up of $C$, $\pi_2:X_2\rightarrow X_{1}$ the blow-up of the strict transform of $Sec_2(C)$, $\vdots$ $\pi_{h-1}:X_{h-1}\rightarrow X_{h-2}$ the blow-up of the strict transform of $Sec_{h-1}(C)$. Let $\pi:X\rightarrow\mathbb{P}^{2h}$ be the composition of these blow-ups. Then, for any $k\leq h$ the strict transform of $Sec_{k-1}(C)$ is smooth, irreducible and transverse to all exceptional divisors. In particular $Y$ is smooth and the divisor in $Y$ given by the union of the exceptional divisors and the strict transform of $Sec_{h}(C)$ is simple normal crossing. It is enough to apply Theorem 1 of the above cited paper and observe that The rational normal curve is given by the Veronese embedding induced by the line bundle $L = \mathcal{O}_{\mathbb{P}^1}(2h)$ on $\mathbb{P}^1$. Now, $$h^{0}(\mathbb{P}^1,L(-2h)) = 1 = 2h+1-2h= h^{0}(\mathbb{P}^1,L)-2h.$$ This means that $C\subset\mathbb{P}^{2h}$ is embedded by a $2h$-very ample line bundle.
{ "pile_set_name": "StackExchange" }
Q: Dynamically update iframe content in jquery I have an iframe container inside the body section and a button. Any click upon the button makes a call to server and fetches some html content. Now I want the html content which comes as response from my server to be placed inside the iframe. Here is the code I am doing:- var url = 'http://www.google.com'; var request = new XMLHttpRequest(); var serverURL = "http://localhost:8080/myServer/getPage.htm"; request.open('GET', serverURL + '?url=' + url, false); request.send(null); if(request.status == 200) { console.log(request.responseText); var resp = eval('(' + request.responseText + ')'); var data = resp[0].data; //alert(data); var path = 'http://localhost:8080/test/about.html'; $("#newPage").attr('src',data); } On alerting data variable it is giving me correct html code but the last line is not working. I don't know where i am doing wrong. For testing purpose I placed path instead of data then it worked perfectly. Please help! A: I think there is difference between string (javascript variable data here) and an actual html page. You might need to save your data variable in some html page and then update your iframe. I am not too sure about my solution. Check this out How do I dynamically change the content in an iframe using jquery?
{ "pile_set_name": "StackExchange" }
Q: How do I sync a folder from one EBS volume to another for the same EC2?

I have an EC2 instance with EBS volumes A and B attached to it, and I want to copy/replicate/sync the data from a specific folder in EBS A to EBS B. EBS A is the primary volume which hosts application installation data and user data, and I'm looking to effectively back up the user data (which is just a specific directory) to EBS B in the event that the application install gets corrupted or needs to be blown away. That way I can simply stand up a new EC2 with a new primary EBS, call it C, attach EBS B to it, and push the user data from EBS B into EBS C.

I am using Amazon Linux 2 and have already gone through the process of formatting and mounting the backup EBS. I can manually copy data from EBS A to EBS B, but I was hoping someone could point me towards best practices for keeping the directory data in sync between the two volumes. I have found recommendations for rsync, a cron task, and Gluster for similar use cases. Would it be considered good practice to use one of these for my use case?

A: While you can use rsync, a better alternative is Data Lifecycle Manager, which will make automated EBS snapshots. The reason that it's better is that you can specify a fixed number of snapshots, at a fixed time interval, so you don't need to restore the latest (important if the "current" data is corrupted). To use this most effectively, I would separate the boot volume from the application/data volume(s). So you could just restore the snapshot, spin up a new instance, and mount the restored volume to it.
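If you want to set the Data Lifecycle Manager policy up from code rather than the console, a rough boto3 sketch is below. The role ARN, target tag and schedule are placeholders of mine (nothing here comes from the answer itself), so verify the parameter layout against the current DLM API documentation before relying on it.

# Minimal sketch of creating a DLM snapshot policy with boto3.
import boto3

dlm = boto3.client("dlm")

response = dlm.create_lifecycle_policy(
    ExecutionRoleArn="arn:aws:iam::123456789012:role/AWSDataLifecycleManagerDefaultRole",
    Description="Twice-daily snapshots of the user-data volume",
    State="ENABLED",
    PolicyDetails={
        "ResourceTypes": ["VOLUME"],
        "TargetTags": [{"Key": "Backup", "Value": "user-data"}],   # placeholder tag
        "Schedules": [
            {
                "Name": "12h-snapshots",
                "CreateRule": {"Interval": 12, "IntervalUnit": "HOURS", "Times": ["09:00"]},
                "RetainRule": {"Count": 7},   # keep the last 7 snapshots
                "CopyTags": True,
            }
        ],
    },
)
print(response["PolicyId"])

The policy only targets volumes carrying that tag, so tag the user-data volume (and not the boot volume) accordingly, in line with the advice above to keep them separate.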
{ "pile_set_name": "StackExchange" }
Q: How to create https-connections?

I have NOT done much web development and I would like to learn about it: how you can create HTTPS connections. I am currently using Apache with MySQL/PHP as my software development tools. Are the connections/ports to be used always configured on the server, or do you need some scripts for it in PHP? And do you need to do anything else in MySQL except enable have_ssl? The tutorials that I have managed to find were a bit confusing and not very thorough, so I was hoping that someone here might be so kind as to explain the subject briefly or maybe give me a link to some good tutorial. If somebody could give me a link to some "easy to read" tutorial or briefly explain how the system works. Thank you!

A: You should use Apache-SSL, or mod_ssl within the Apache server. I suggest using mod_ssl. If you are new to developing with PHP/MySQL, try XAMPP or WAMP instead of installing Apache and configuring mod_ssl manually; you can easily activate mod_ssl in both of them. If you want to do it manually: http://tud.at/programm/apache-ssl-win32-howto.php3

If you want to redirect a certain path to use https instead of http, for example http://yoursite.com/secure/ to https://yoursite.com/secure/, modify the .htaccess file and add this:

RewriteRule "^(/secure/.*)" "https://%{HTTP_HOST}$1" [R=301,L]
{ "pile_set_name": "StackExchange" }
Q: Programmatically manipulate GPX data I need to do the following in Python open OSM & GPX files (I have packages for this) transform points from GPX (like stick a track to roads) calculate the results (distances, cumulative distances, etc.) I need to do this repeatedly, so a Python script is much more preferred. Desktop software with plugins is not suitable. I'll prefer some Python & C modules than a quest of installing plugins. PostGIS may be an option too. Shapely (Python package) seems to not be able to do this (it works only in 2D on a plane, and mentions it has no projections). I don't need 3D, but I have the input as lat&lon coordinates, and need to do geometric transformations (project a point on a polyline) and calculate distances in metres. What modules should I use? A: osgeo.ogr can read all these formats: OGR Vector Formats osgeo.ogr and shapely support 3D: from osgeo import ogr point = ogr.Geometry(ogr.wkbPoint25D) point.AddPoint(5,4,4) point.GetZ() 4.0 from shapely.geometry import Point point1 = Point(5,4,4) point1.has_z True point1.z 4.0 you can change projections with osgeo.ogr: see Projecting shapefile with transformation using OGR in python and many, many other examples transform the geometries between ogr and shapely is easy: from shapely.wkb import loads point = ogr.Geometry(ogr.wkbPoint25D) point.AddPoint(5,4,4) point_shapely = loads(point.ExportToWkb()) point_shapely.has_z True inverse point_ogr = ogr.CreateGeometryFromWkb(point_shapely.wkb) print point_ogr.GetX(), point_ogr.GetY(), point_ogr.GetZ() 5.0 4.0 0.0 so you can use ogr or pyproj to change the projection of a shapely geometry, (see Measuring distance in spherical Mercator vs zoned UTM for example) and shapely or analytical geometry allows to project a point on a PoLyline (see How to draw perpendicular lines in QGIS?, with PyQGIS, but it is similar with ogr) As one example of the process, here are the results of the creation of geological cross-sections from 3D points (from Python: Using vector and raster layers in a geological perspective, without GIS software, in French, but the scripts and the figures are universal). 3D representation (distance between points): cumulative distance (geological cross-section)
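To make the "project a point onto a polyline and measure in metres" step concrete, here is a small sketch that combines pyproj and shapely: reproject lon/lat to a metric CRS first, then use shapely's project/interpolate. EPSG:32633 (UTM zone 33N) and the sample coordinates are placeholders; pick the UTM zone or local projection that actually matches your data.

# Snap a GPX fix onto a road polyline and get distances in metres.
from pyproj import Transformer
from shapely.geometry import LineString, Point
from shapely.ops import transform

to_metric = Transformer.from_crs("EPSG:4326", "EPSG:32633", always_xy=True).transform

# a road segment (e.g. from OSM) and a GPX fix, both as lon/lat
road = LineString([(15.000, 46.000), (15.002, 46.001), (15.004, 46.001)])
fix  = Point(15.0015, 46.0012)

road_m = transform(to_metric, road)
fix_m  = transform(to_metric, fix)

snapped  = road_m.interpolate(road_m.project(fix_m))   # closest point on the line
offset_m = fix_m.distance(snapped)                     # offset from the road, metres
along_m  = road_m.project(fix_m)                       # distance along the line, metres

print(f"offset from road: {offset_m:.1f} m, chainage: {along_m:.1f} m")

Cumulative distance along a track then falls out of summing the metric segment lengths, or of calling project() for successive points.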
{ "pile_set_name": "StackExchange" }
Q: Why is there no * in this method declaration? Here is the method declaration midway in Apple's documentation: Learning Objective-C: A Primer - (void)insertObject:(id) anObject atIndex:(NSUInteger) index Why is there no * right after NSUInteger. I thought all objects were pointer types and all strongly typed pointers had to have a * character after it. A: NSUInteger is not an object type, it is a typedef to unsigned int. The only reason that you would actually want to use a * in this context would be if you wanted to get the address of an int and store something in it. (Some libraries do this with error messaging). An example of this: -(void) methodName: (NSUInteger *) anInt { *anInt = 5; } NSUInteger a; [obj methodName: &a]; //a is now 5
{ "pile_set_name": "StackExchange" }
Q: Is it OK to include Java Swing with JSP? I tried using Swing code in a JSP page. To my surprise it does work well and fine. But I cannot judge if it is OK to use Swing with JSP? Basically I want to display some pop up reports from Database. I was thinking to display a JFrame pop up/ applet to do the trick. But do a web browser require any additional plugin for this? Or is it fine to do such a thingy? Any guidance will be helpful. A: Always remember that every java fragment you insert into your JSP is executed server-side, so it can be deceitful (it may seem to work in your development local machine, but it is only because the server and the client side are running on the same box). The proper way to do this would be to write an Applet and include it into your page - this way, the browser will download it to client side and run it there. You should subclass JApplet (http://docs.oracle.com/javase/8/docs/api/javax/swing/JApplet.html) and then you will be able to use Swing components at will A: The library works but your controls will never be shown at the client side (browser) but at the server (if it is that you have a working window service: Ms Windows, X11, Xorg,...). I don't think that is a good practice and I would only use Swing library classes not to show GUI components but to use some classes to store special objects such as ImageIcon to store icons. But never to try to paint them. I have a project where I use JLaTeXMath to generate a PNG within a JSP representing some math equations, in this context, I use javax.swing.JLabel to generate the image: TeXFormula formula = new TeXFormula(texCode); TeXIcon texImg = formula.createTeXIcon(TeXConstants.STYLE_DISPLAY, 25); BufferedImage img = new BufferedImage(texImg.getIconWidth(), texImg.getIconHeight(), BufferedImage.TYPE_4BYTE_ABGR); texImg.paintIcon(new JLabel(),img.getGraphics(), 0, 0); try { OutputStream os = res.getOutputStream(); res.setContentType("image/png"); ImageIO.write(img, "png", os); os.close(); res.flushBuffer(); } catch (Exception ex) { log.warn("LaTeX renderer: " + ex.toString() + "\t" + "Msg: " + ex.getMessage()); return; }
{ "pile_set_name": "StackExchange" }
Q: Laptop won't wake up after suspend I have a laptop HP Spectre x360 with an Nvidia GeForce MX150. Yesterday I did a fresh Ubuntu 18.04 install and almost everything worked just fine. The only issue that I have is that when I suspend the laptop, when I want to use it again I only get a black screen with this message: [187.425322] NVRM: Xid (PCI:0000:01:00): 32, Channel ID 00000000 intr 800400000 After that, I can't do anything. Just force shutdown and restart again. What can it be? I'm using the proprietary drivers (nvidia-driver-390). A: I had the same problem and I was able to fix it by updating the kernel to the version 4.18 with the Ukuu app. Beware, the kernel version 4.17 doesn't fix the problem. You can get the Ukuu app via this link
{ "pile_set_name": "StackExchange" }
Q: How does $ \frac{1}{x}\left(\frac{\pi}{2} - \arctan\frac{1}{x}\right)$ simplify to $\frac{1}{x} \arctan x $? Here is a solution I read when trying to solve a problem, and I can't figure out how it jumped in this step here: $$ \frac{1}{x}\left(\frac{\pi}{2} - \arctan\frac{1}{x}\right) = \frac{1}{x} \arctan x $$ This was related to a limit and integral problem where $x \to 0^+$. Please let me know if more information are needed and I will edit! A: I assume you mean this holds for all $x>0$. The factor $\frac{1}{x}$ on both sides is not really helping, since for any $x\neq 0$ your statement is equivalent to $$ \frac{\pi}{2} - \arctan \frac{1}{x} = \arctan x, \qquad x > 0 \tag{1} $$ Now, rearrange the terms: (1) becomes equivalent to $$ \frac{\pi}{2} = \arctan \frac{1}{x} + \arctan x, \qquad x > 0 \tag{2} $$ which is a known identity. One way (maybe not the most elegant) to prove this last identity is to observe that the function $f\colon (0,\infty)\to \mathbb{R}$ defined by $f(x) = \arctan \frac{1}{x} + \arctan x$ is differentiable, and (using the derivative $\arctan' x = \frac{1}{1+x^2}$) that $f'(x) = 0$ for all $x>0$. So $f$ is constant, and since $$\lim_{x\to\infty} f(x) = \arctan 0 + \lim_{x\to\infty}\arctan x = \lim_{x\to\infty}\arctan x = \frac{\pi}{2}$$ you get the result.
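For completeness, the derivative computation behind that last argument, written out for $x>0$:
$$ f'(x) = \frac{1}{1+\frac{1}{x^{2}}}\cdot\left(-\frac{1}{x^{2}}\right) + \frac{1}{1+x^{2}} = -\frac{1}{x^{2}+1} + \frac{1}{1+x^{2}} = 0. $$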
{ "pile_set_name": "StackExchange" }
Q: Rails 4 - strong parameters with scaffold - params.fetch I use scaffold commands to make my components in my Rails 4 app. Recently, the terminology used in the method to set the strong params has changed from params.require to params.fetch and now there are curly braces in the setup. private # Never trust parameters from the scary internet, only allow the white list through. def engagement_params params.fetch(:engagement, {}) end I can't find any documentation explaining the change or how to use it. Can I still write params.fetch(:engagement).permit(:opinion) into the fetch command? I don't know what to do with the curly braces. How do I complete the strong params using this new form of expression? A: I never came across this situation but here, I found the reference to fetch method http://api.rubyonrails.org/classes/ActionController/Parameters.html#method-i-fetch Can I still write params.fetch(:engagement).permit(:opinion) into the fetch command? Yes, you can still use params.fetch(:engagement).permit(:attributes, :you, :want, :to, :allow) I don't know what to do with the curly braces. It's a default value which will be returned if key is not present or it will throw an error params.fetch(:engagement) #=> *** ActionController::ParameterMissing Exception: param is missing or the value is empty: engagement params.fetch(:engagement, {}) #=> {} params.fetch(:engagement, 'Francesco') #=> 'Francesco' How do I complete the strong params using this new form of expression? params.fetch(:engagement).permit(:attributes, :you, :want, :to, :allow)
{ "pile_set_name": "StackExchange" }
Q: iPad Safari IOS 5 window.close() closing wrong window We have an iPad application that's working on our older iPads. We open external links using var x = window.open(url) at the end of the day, when the user closes this part of the app, we go through all the windows it opened and do x.close() for each one and everything is okie dokie. Testing on the new iPad with IOS 5 and the lovely tabs, opening the new windows (although now they open as tabs) is fine, but doing x.close() doesn't seem to necessarily close x, it may close window y or z. Doing x.focus() or y.focus() works just fine, the correct tab comes into focus, but close seems to just pick whatever tab it wants. Is this a bug or am I doing something wrong? Example program: <html> <head></head> <body> <script> //The openWindow array will hold the handles of all open child windows var openWindow = new Array(); var win1; var win2; //Track open adds the new child window handle to the array. function trackOpen(winName) { openWindow[openWindow.length]=winName; } //loop over all known child windows and try to close them. No error is //thrown if a child window(s) was already closed. function closeWindows() { var openCount = openWindow.length; for(r=openCount-1;r>=0;r--) { openWindow[r].close(); } } //Open a new child window and add it to the tracker. function open1() { win1 = window.open("http://www.yahoo.com"); trackOpen(win1); } //Open a different child window and add it to the tracker. function open2() { win2 = window.open("http://www.google.com"); trackOpen(win2); } //Open whatever the user enters and add it to the tracker function open3() { var newURL = document.getElementById("url").value; var win3= window.open(newURL); trackOpen(win3); } </script> <input type="button" value="Open 1" onclick="open1()"> <input type="button" value="Open 2" onclick="open2()"> <input type="button" value="Focus 1" onclick="win1.focus()"> <input type="button" value="Focus 2" onclick="win2.focus()"> <input type="button" value="Close 1" onclick="win1.close()"> <input type="button" value="Close 2" onclick="win2.close()"> URL: <input type="text" id="url"> <input type="button" value="Open URL" onclick="open3()"> <input type="button" value="Close All" onclick="closeWindows()"> </body> </html> A: That did the trick for me (iPad 2 and 3; 3 with iOS 5.1.1) var host=window.opener; window.focus(); /* solves the iPad3 problem */ window.close(); /* the actual closing we want to achieve... */ /* makes the focus go back to opener on iPad2, fails silently on iPad3 */ try { host.focus(); } catch(e) {}
{ "pile_set_name": "StackExchange" }
Q: join in cakephp 2.x How can I implement this query in cakephp? I can't get the information from Users table. SELECT * FROM Manufacture LEFT JOIN Order ON Manufacture.order_id = Order.id LEFT JOIN User ON Order.user_id = User.id; Manufacture: id order_id Order: id user_id User: id name class Manufacture extends AppModel { public $belongsTo = array( 'Order' => array( 'className' => 'Order', 'foreignKey' => 'order_id' ) ); } class Order extends AppModel { public $belongsTo = array( 'User' => array( 'className' => 'User', 'foreignKey' => 'user_id' )); } class User extends AppModel { } In controller: $this->Paginator->settings = array( 'limit' => 15 ); $this->set('entities', $this->Paginator->paginate('Manufacture')); A: Read the join section of the official documentation. It comes with examples. When you read that and still have questions let us know.
{ "pile_set_name": "StackExchange" }
Q: Creating keys by using openssl in java I need to use openssl in java code. e.g. $ openssl genrsa -out private.pem 2048 $ openssl pkcs8 -topk8 -in private.pem -outform DER -out private.der -nocrypt $ openssl rsa -in private.pem -pubout -outform DER -out public.der Is there any library or method to implement this? A: The best way is to use Java library for this actions. I can't write exact code right now but it isn't very hard. Look at java.security.KeyPairGenerator and so on. And it will be good experience in understanding of cryptography. But if you need only to call this three command line, Process.waitFor() call is the answer. You can use this class. package ru.donz.util.javatools; import java.io.*; /** * Created by IntelliJ IDEA. * User: Donz * Date: 25.05.2010 * Time: 21:57:52 * Start process, read all its streams and write them to pointed streams. */ public class ConsoleProcessExecutor { /** * Start process, redirect its streams to pointed streams and return only after finishing of this process * * @param args process arguments including executable file * @param runtime just Runtime object for process * @param workDir working dir * @param out stream for redirecting System.out of process * @param err stream for redirecting System.err of process * @throws IOException * @throws InterruptedException */ public static void execute( String[] args, Runtime runtime, File workDir, OutputStream out, OutputStream err ) throws IOException, InterruptedException { Process process = runtime.exec( args, null, workDir ); new Thread( new StreamReader( process.getInputStream(), out ) ).start(); new Thread( new StreamReader( process.getErrorStream(), err ) ).start(); int rc = process.waitFor(); if( rc != 0 ) { StringBuilder argSB = new StringBuilder( ); for( String arg : args ) { argSB.append( arg ).append( ' ' ); } throw new RuntimeException( "Process execution failed. Return code: " + rc + "\ncommand: " + argSB ); } } } class StreamReader implements Runnable { private final InputStream in; private final OutputStream out; public StreamReader( InputStream in, OutputStream out ) { this.in = in; this.out = out; } @Override public void run() { int c; try { while( ( c = in.read() ) != -1 ) { out.write( c ); } out.flush(); } catch( IOException e ) { e.printStackTrace(); } } }
{ "pile_set_name": "StackExchange" }
Q: NGINX Websockets, and SSL Configuration I'm trying to setup a websocket connection (wss). My domain uses ssl (certbot) and is powered by Nginx. I am unsure how to configure my /etc/nginx/sites-available/domain.com file. server { listen 443 ssl http2; listen [::]:443 ssl http2; ... } I added the following into my config block: location /websocket { proxy_pass https://domain.com; proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection "upgrade"; proxy_set_header Host $host; } When connecting via: wss://domain.com/, I am getting an error of WebSocket connection to 'wss://domain.com/' failed: Error during WebSocket handshake: Unexpected response code: 200 Most of the examples out there are using a nodejs framework to serve their sites, but I am using php. A: Your websocket endpoint is configured in the provided nginx configuration as hosted at the location /websocket within the server, but you are attempting to connect to the root URL wss://domain.com. The websocket connection is an ordinary HTTP (with a TLS wrapper in this case) session until the upgrade takes place, so you must ensure the URL used to access the service includes all relevant path designators, e.g. wss://domain.com/websocket.
{ "pile_set_name": "StackExchange" }
Q: How can I convert a std::process::Command into a command line string? For example: let mut com = std::process::Command::new("ProgramA"); com.env("ENV_1", "VALUE_1") .arg("-a") .arg("foo") .arg("-b") .arg("--argument=bar"); // Get the command line string somehow here. com.output().unwrap(); This will spawn a process with this command line "ProgramA" -a foo -b "--argument=with space" associated with it. Is there a way to get this back out from the com object? A: It turns out Command implements Debug; this will give me the desired result: let answer = format!("{:?}", com);
{ "pile_set_name": "StackExchange" }
Q: A, B matrices and Av, Bv dependent vectors
Let A, B be n×n complex matrices. Prove that there exists a nonzero vector v such that Av and Bv are linearly dependent.
Extra question: what if A, B are real matrices?

A: If $\det B=0$, then the proof is trivial (take a nonzero $v$ in the kernel of $B$, so that $Bv=0$). If $\det B\ne 0$, then it boils down to proving that $\exists z\in\mathbb C,\exists v\in \mathbb C^n$, $v\ne 0$: $(A-zB)v=0$, which is equivalent to proving that $\exists z\in\mathbb C$: $\det (A-zB)=0$, or even further, $\exists z\in\mathbb C$: $\det (AB^{-1}-zI)=0$. Clearly, this polynomial has roots in $\mathbb C$, so we can conclude the proof.
If we want to work only in $\mathbb R$, then we can build a counterexample for even dimensions:
$$A=\begin{pmatrix}1&0\\0&1\end{pmatrix},\quad B=\begin{pmatrix}0&1\\-1&0\end{pmatrix}.$$
$Av=v$, but $B$ doesn't have any eigenvectors in $\mathbb R^2$.
Edit
In the case of odd dimensions, however, the hypothesis holds. Indeed, take the reasoning for the complex case and replace $\mathbb C$ by $\mathbb R$ everywhere until the part $\exists z\in\mathbb R$: $\det (AB^{-1}-zI)=0$. In an odd-dimensional space this determinant is a polynomial of odd degree, hence it has roots in $\mathbb R$, and thus we can find such $v$ that $Av$ and $Bv$ are dependent.
To summarize:
Complex case. Such $v$ exists.
Real case, odd dimension. Such $v$ exists.
Real case, even dimension. Depends on the matrices; we can give examples where such $v$ exists and where it does not.
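A quick check that the even-dimensional counterexample really admits no real solution:
$$\det(A-zB)=\det\begin{pmatrix}1 & -z\\ z & 1\end{pmatrix}=1+z^2>0\quad\text{for all }z\in\mathbb R,$$
so $\det(A-zB)$ never vanishes over $\mathbb R$, while over $\mathbb C$ its roots are $z=\pm i$, consistent with the complex case.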
{ "pile_set_name": "StackExchange" }
Q: How to build a secure AE scheme with generic composition?
I am looking at using a secure Encrypt-then-MAC AE scheme, and I am considering either an existing "ready-to-use" dedicated AEAD mode (GCM, OCB, CCM, EAX, etc.) or an alternate composed CTR-then-CMAC scheme (mainly in order to avoid the apparent fragility of GHASH, so as to limit the impact of accidental IV reuse with GCM).
My questions are the following:
If, for instance, I plan to build a composed CTR-then-CMAC scheme, I think to proceed as follows:
derive two Authentication / Encryption keys from a unique input key
encrypt plaintext P using CTR mode and an input Initial Counter Block (ICB)
lastly, authenticate the sequence composed of IV, AD and Ciphertext using CMAC.
Is this correct?
Where / how can we find rules to correctly build a composed AE scheme from "secure" Encryption and MAC/Authentication modes?
The only standard which addresses such AE generic composition seems to be ISO/IEC 19772:2009, which includes an "Encrypt-then-MAC" mode beside other AE/AEAD modes; but since that ISO standard has to be ordered, I have not yet been able to get information about the content of the "Encrypt-then-MAC" mode section.

A: You might look at EAX mode, which combines a block cipher in CTR mode with a CMAC. However, EAX does differ from generic Encrypt-then-MAC in that EAX uses a single master key, which would be an absolute no-no for Encrypt-then-MAC. Normally using the same key for both purposes is quite dangerous, but EAX is explicit about the use of the master key for the cipher and MAC, and EAX comes with a formal proof of security for that usage (and the entire mode).
Given that EAX (and CCM, to a less flexible degree) provide a proven method for encrypting and authenticating data, for practical purposes my advice would be to just use one of the well known and trusted AEAD modes.
For theoretical purposes, the answers to "Why choose an authenticated encryption mode instead of a separate MAC?" and "Should we MAC-then-encrypt or encrypt-then-MAC?" should give you a good idea of the steps required to define your own general composition, and how tricky it is to get all of those right. The EAX paper also describes and proves the security of an EAX2 mode, which is a generic composition approach to create a two-key AEAD mode based on a cipher and a MAC.
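To make the composition order concrete, here is a minimal Encrypt-then-MAC sketch in Java. It mirrors the three steps listed in the question (independent encryption and MAC keys, CTR encryption, then a tag over IV, AD and ciphertext), but it substitutes HMAC-SHA256 for CMAC because the standard JDK ships no CMAC implementation, and it simply takes the two keys as parameters instead of showing a key-derivation step; both are simplifications for illustration, not a description of what ISO/IEC 19772 specifies.
import javax.crypto.Cipher;
import javax.crypto.Mac;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;

public class EncryptThenMac
{
    // Returns IV || ciphertext || tag. encKey and macKey must be independent keys; iv must be 16 bytes for AES-CTR.
    public static byte[] seal( byte[] encKey, byte[] macKey, byte[] iv, byte[] aad, byte[] plaintext )
            throws Exception
    {
        // Step 1: encrypt with AES-CTR under the encryption key
        Cipher ctr = Cipher.getInstance( "AES/CTR/NoPadding" );
        ctr.init( Cipher.ENCRYPT_MODE, new SecretKeySpec( encKey, "AES" ), new IvParameterSpec( iv ) );
        byte[] ciphertext = ctr.doFinal( plaintext );

        // Step 2: MAC over IV, associated data and ciphertext under the separate MAC key
        Mac mac = Mac.getInstance( "HmacSHA256" );
        mac.init( new SecretKeySpec( macKey, "HmacSHA256" ) );
        mac.update( iv );
        mac.update( aad );
        mac.update( ciphertext );
        byte[] tag = mac.doFinal();

        byte[] out = new byte[iv.length + ciphertext.length + tag.length];
        System.arraycopy( iv, 0, out, 0, iv.length );
        System.arraycopy( ciphertext, 0, out, iv.length, ciphertext.length );
        System.arraycopy( tag, 0, out, iv.length + ciphertext.length, tag.length );
        return out;
    }
}
A real scheme would additionally encode the lengths of the AD and ciphertext into the MAC input (to remove any ambiguity between the two) and would verify the tag with a constant-time comparison before decrypting.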
{ "pile_set_name": "StackExchange" }
Q: Rotate 2D shape around origin in a 3D space
I have a 2D square ABCD in a 3D space, with side length 2s, that is represented by four vectors (one for each vertex) and a fifth vector v for the center point. The square lies standing upright on the plane z = -1 (xy plane).
v  = < 0, 0, -1>
OA = <-s, s, -1>
OB = < s, s, -1>
OC = <-s,-s, -1>
OD = < s,-s, -1>
Now, consider point P anywhere in the 3D space. I want to rotate the square around the origin, such that v aligns with OP. The result that I want is mainly the resulting rotated OA, OB, OC and OD vectors.
Screenshot of scenario.
The point P is arbitrary and can be any point in the 3D space. Any help is greatly appreciated!

A: You should take a look at the Wikipedia page on rotation matrices, specifically the section on forming a rotation matrix from an axis and an angle.
In your case, you have a vector $\vec{v}$ that you would like to align with another vector, $\vec{OP}$. The axis of rotation should be the unit vector $\hat{u}$ normal to these two vectors given by
$$\vec{n}=\vec{v}\times\vec{OP}, ~~\hat{u}=\dfrac{\vec{n}}{||\vec{n}||}. $$
The order of the cross product is important. The angle between the two vectors is given by
$$\theta = \operatorname{acos}\left(\dfrac{\vec{v}\cdot\vec{OP}}{\left|\left|\vec{v}\right|\right|~\left|\left|\vec{OP}\right|\right|}\right) $$
where $\theta\in[0,\pi]$.
Once you have an angle and an axis of rotation you can form the rotation matrix $R$, as given on the Wikipedia page. To rotate your square (and any other vectors or points of interest) so that $\vec{v}$ is aligned with $\vec{OP}$, simply multiply all the coordinates $\vec{OA}$, $\vec{OB}$, $\vec{OC}$, and $\vec{OD}$ by the rotation matrix $R$.
As mentioned in a comment, you could then also rotate the resulting points by any angle you wish about the axis $\vec{v}_{\text{new}}=\vec{OP}$ while still maintaining the same orientation for $\vec{v}_{\text{new}}$.
I believe the only cases where this will fail are if $\vec{OP}=\pm\vec{v}$, in which case the cross product is zero. If $\vec{OP}=\vec{v}$ then there is nothing to do. If $\vec{OP}=-\vec{v}$ then $\theta = \pi$ and you can choose $\vec{n} = (a,b,0)$ for any $a$ and $b$ you wish.
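For reference, the axis-angle (Rodrigues) form of that rotation matrix, with $\hat u=(u_x,u_y,u_z)$ and $\theta$ as defined above, is
$$R=\cos\theta\, I+\sin\theta\,[\hat u]_\times+(1-\cos\theta)\,\hat u\hat u^{\mathsf T},\qquad
[\hat u]_\times=\begin{pmatrix}0 & -u_z & u_y\\ u_z & 0 & -u_x\\ -u_y & u_x & 0\end{pmatrix},$$
and each rotated vertex is then simply $R\,\vec{OA}$, $R\,\vec{OB}$, and so on.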
{ "pile_set_name": "StackExchange" }
Q: Y-scale of addRSI graph [Quantmod]
Using Quantmod I am able to plot RSI of equity data with addRSI(), but the plot's y-scale is not adjustable and what I need is the ability to adjust the y-scale to 0 - 100 like any normal RSI plot does.
addRSI(n = 14) %>% print
However, after using the following logic to plot the RSI, the error message popped up and I do not have any clue on how to set the price parameter of RSI() as the documentation did not specify the exact meaning of this parameter.
print(addTA(RSI(price = 100, n = 14), yrange = c(0,100)))
Is there any solution where I can plot the RSI with the y-scale of 0 - 100?

A: Instead of using addRSI, which for some reason has a fixed range based on the values, you can use addTA and the yrange option.
Using quantmod:
library(quantmod)
goog <- getSymbols("GOOGL", from = "2019-01-01", auto.assign = F)
rsi <- RSI(goog$GOOGL.Close)
chartSeries(goog, TA = NULL)
addTA(rsi, yrange = c(0, 100))
Or quantmod's chart_Series function. This adds the rsi in the 0-100 range, but it doesn't show those labels, only the labels at 70 and 30.
chart_Series(goog)
add_RSI()
Using rtsplot (code straight from the help): shows the rsi range from 0 to 100 in steps of 20 and highlights the 0-30 and 70-100 bands.
library(rtsplot)
layout(c(1,1,1,2))
rtsplot(goog, type = "candle")
rtsplot(rsi, type = 'l', ylim = c(0,100),
  y.highlight = c(c(0,30), c(70,100)),
  y.highlight.col = grDevices::adjustcolor(c('green','red'), 50/255)
)
{ "pile_set_name": "StackExchange" }
Q: How to run a post-install script after every "npm install <package>" run
I am maintaining the following directory structure:
/home/user/Desktop/
|-- app/
|   |-- package.json
|   `-- server.js
|-- node/
|   |-- bin/
|   |   |-- node
|   |   `-- npm
|   |-- include/
|   |-- lib/
|   `-- share/
`-- npm.sh
I want all my locally installed node modules to reside in the directory node. That is, if I run npm install inside the directory app, initially it'll install the modules inside the current directory (app) and then move the node_modules folder to the external directory called node. For this purpose I've written a script npm.sh and placed the mv (move) command inside the postinstall script of package.json. These are the files npm.sh and package.json.
content of npm.sh:
#/bin/bash
export PATH=/home/user/Desktop/node/bin:$PATH
export NODE_PATH=/home/user/Desktop/node/node_modules
export NODE_MODULE_ROOT=/home/user/Desktop/node
/bin/bash
content of app/package.json:
{
  "name": "app",
  "version": "1.0.0",
  "scripts": {
    "postinstall": "mv node_modules $NODE_MODULE_ROOT",
    "start": "node server.js"
  },
  "dependencies": {
    "jwt-simple": "^0.5.1"
  }
}
But the problem is: when I do ./npm.sh && cd app && npm install, everything works as intended. But when I do npm install jwt-simple, the postinstall script is not getting executed.
Is there a way to make it work for individual npm install <package>? Or is there any better way to accomplish this?

A: You can use npm hook scripts to do something after a package is installed.
Create a node_modules/.hooks/postinstall executable and it will be run also after npm install <package>.
NOTE: I have noticed problems with npm hook scripts between npm version 5.1.0 until 6.0.1. So if you have problems with hooks, check your npm version and upgrade if necessary.
{ "pile_set_name": "StackExchange" }
Q: Column with color values in tmap
I have a SpatialPolygonsDataFrame with columns containing hex-color values. I want to draw the map like this with the package tmap:
tm_shape(full.shp) + tm_fill(col="konf1900")
But then it is treated as a categorical variable, resulting in this:
I am not sure how to tell tmap that it should plot the color values directly on the map... Can anyone help on this?
edit: see the answers below - the problem was that the dataframe column was not encoded with as.character. I think this might help someone sometime...

A: Apparently, the problem was the type of the column:
full.shp$konf1900char <- as.character(full.shp$konf1900)
tm_shape(full.shp) + tm_fill(col="konf1900char")
It needs to be converted to characters with as.character. Also, it is important that there are no NA values; they can be converted to white (#ffffff in hex format):
full.shp$konf1900char[is.na(full.shp$konf1900char)] <- "#ffffff"
With these transformations, it works nicely with tmap and tm_fill takes the color values from the variable.
edit: this is the resulting image (compare to screenshot in the question above):
{ "pile_set_name": "StackExchange" }
Q: TimeTCPClient and TimeUDPClient both timing out
I need the time from an NTP server. I tried this:
TimeUDPClient client = new TimeUDPClient();
try {
    client.open();
    client.setSoTimeout(10000);
    client.getTime(InetAddress.getByName(host));
    client.close();
} catch (IOException exp) {
    System.out.println("NTP connection error");
    exp.printStackTrace();
    return;
}
After 10 seconds I get this exception:
java.net.SocketTimeoutException: Receive timed out
    at java.net.PlainDatagramSocketImpl.receive0(Native Method)
    at java.net.AbstractPlainDatagramSocketImpl.receive(AbstractPlainDatagramSocketImpl.java:143)
    at java.net.DatagramSocket.receive(DatagramSocket.java:812)
    at org.apache.commons.net.time.TimeUDPClient.getTime(TimeUDPClient.java:84)
    at org.apache.commons.net.time.TimeUDPClient.getTime(TimeUDPClient.java:98)
    at de.modusoft.opt.viewer.TimeSyncThread.run(TimeSyncThread.java:34)
    at java.lang.Thread.run(Thread.java:748)
I also tried this:
TimeTCPClient client = new TimeTCPClient();
client.setConnectTimeout(10000);
try {
    client.connect(host);
    Date ntpDate = client.getDate();
    client.disconnect();
    System.out.println("ntpDate = " + ntpDate);
} catch (IOException exp) {
    System.out.println("NTP connection error");
    exp.printStackTrace();
    return;
}
And also got a timeout exception.
java.net.SocketTimeoutException: connect timed out
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.net.Socket.connect(Socket.java:589)
    at org.apache.commons.net.SocketClient.connect(SocketClient.java:182)
    at org.apache.commons.net.SocketClient.connect(SocketClient.java:203)
    at org.apache.commons.net.SocketClient.connect(SocketClient.java:296)
    at de.modusoft.opt.viewer.TimeSyncThread.run(TimeSyncThread.java:29)
    at java.lang.Thread.run(Thread.java:748)
host is a String; I tried "0.de.pool.ntp.org" and "ntp.xs4all.nl".
Thanks for your help.

A: If you need the time from an NTP server, you need to use the NTP protocol. The TimeUDPClient and TimeTCPClient classes use the Time Protocol, not NTP.
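Since the question already uses Apache Commons Net, a minimal sketch of the NTP route with that library's NTPUDPClient is shown below; the pool host is just an example and the method names should be double-checked against the Commons Net version in use.
import java.net.InetAddress;
import java.util.Date;
import org.apache.commons.net.ntp.NTPUDPClient;
import org.apache.commons.net.ntp.TimeInfo;

public class NtpExample
{
    public static void main( String[] args ) throws Exception
    {
        NTPUDPClient client = new NTPUDPClient();
        client.setDefaultTimeout( 10000 );  // 10 second timeout
        client.open();
        try {
            InetAddress host = InetAddress.getByName( "0.de.pool.ntp.org" );
            // Sends a real NTP request (port 123), unlike TimeUDPClient which speaks the Time Protocol
            TimeInfo info = client.getTime( host );
            Date ntpDate = info.getMessage().getTransmitTimeStamp().getDate();
            System.out.println( "ntpDate = " + ntpDate );
        } finally {
            client.close();
        }
    }
}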
{ "pile_set_name": "StackExchange" }
Q: Laptop speaker is not in options > sound. Headphones work without problems
After I reinstalled Ubuntu, I have been unable to play sound from my (internal) speakers. Only headphones are listed, even when no headphones are plugged in. The speakers simply aren't shown in Settings > Sound. I dual boot with Windows 10, which plays sound from the speakers without problems.
I've tried a lot of things, including reinstalling various packages. The solution that almost worked is here: No sound from speakers, but headphones work
I've edited the file /etc/modprobe.d/alsa-base.conf and added the line options snd-hda-intel model=generic. This added a display port output to the options > sound menu. I've tried to play sound through the display port, but it obviously didn't work.
I think the issue may be in that file, because the speakers option is missing from the settings > sound menu. Latest Ubuntu (18), everything updated.
Ask for additional information if you need any; I'll edit my post and add it here.
lspci -nnk | awk -v n='[0403]' 'p&&/^\S/{p=0}!p{p=index($0,n)}p'
aplay -l
pactl list short sinks:
0 alsa_output.pci-0000_00_1b.0.analog-stereo module-alsa-card.c s16le 2ch 44100Hz SUSPENDED
Edit: After I reinstalled my system, the solution from @OpenSage started working. Thanks OpenSage!

A: I've fixed it. I followed instructions on ubuntuforums.org and added the following lines to /etc/modprobe.d/alsa-base.conf:
alias snd-card-0 snd-hda-intel
alias sound-slot-0 snd-hda-intel
options snd-hda-intel model=dell-m4-1
options snd-hda-intel enable_msi=1
Some of them might be redundant, but I'm not going to test it; it works.
These lines (after a reboot) added Line Out Built-in Audio, which works and plays sound from the speakers.
{ "pile_set_name": "StackExchange" }
Q: Can't edit existing form in MS Access, opening .accdb file opens form automatically with no ribbons or menus
I'm very new to MS Access 2016 and my boss has asked me to add a simple "add new employee" button to an existing form made by a previous intern. I have access to both an .accdb and an .accdr file. I also have access to the SQL Server where the form's database and tables are located.
However, I haven't found a way to edit the form. Whenever I try to open the .accdb file, it automatically opens the form employees here use daily, but there are no ribbons or menus other than a stripped-down version of File with only Print, Privacy Options, and Close in it.
I have tried using different workstations, earlier versions of Access, and made sure I'm the only one that has the .accdb file open. I've tried using the Shift bypass several times to no avail. Privacy Options is empty with only a checkbox for helping improve the program by sending data back to Microsoft. F11, Alt-F11, and Ctrl-G don't do anything either. I also can't open MSACCESS.EXE by itself without it spitting out a "Cannot find specified database" error.
Is there anything else I could do to be able to edit or design the form?

A: Um, are you sure you have a full version of Access installed, not a runtime version?
-- Edit: re-reading your question, that sounds very much like a runtime version
After checking the different workstations, they were indeed all running runtime versions of Access, which is why I couldn't edit anything.
{ "pile_set_name": "StackExchange" }
Q: Split a string with varying factors in SQL Server
I have serialized data stored in a column. I want to convert the list values to a temp table. But there is more than one factor in the row values, like
["2","3","4"]
[&quot;1&quot;,&quot;2&quot;,&quot;3&quot;]
[]
["Select option B","Select option C","Select option D"]
["Moderate","Heavy","Heavy, Big & Abnormal"]
If I parse out the double quotes, then a comma inside a string value will create that as a different entity.

A: This is tagged with [sql-server-2012] - what a pity... With v2016+ you could call for STRING_SPLIT or even JSON methods...
The following is a rather hacky approach but works - at least with your provided test data...
Create a mockup-table (please do this yourself the next time).
DECLARE @tbl TABLE(ID INT IDENTITY, YourString VARCHAR(100));
INSERT INTO @tbl VALUES
 ('["2","3","4"]')
,('[&quot;1&quot;,&quot;2&quot;,&quot;3&quot;]')
,('[]')
,('["Select option B","Select option C","Select option D"]')
,('["Moderate","Heavy","Heavy, Big & Abnormal"]');

--This is the query:
SELECT t.ID
      --,t.YourString
      ,C.Separated.value('text()[1]','nvarchar(max)') AS Parted
FROM @tbl t
CROSS APPLY(SELECT REPLACE(REPLACE(REPLACE(YourString,'&quot;','"'),'["',''),'"]','')) A(replaced)
CROSS APPLY(SELECT CAST('<x>' + REPLACE((SELECT A.replaced [*] FOR XML PATH('')),'","','</x><x>') + '</x>' AS XML)) B(casted)
CROSS APPLY B.casted.nodes('/x') C(Separated);
The idea in short:
First of all I use multiple REPLACE() to clean and harmonise your data. The second CROSS APPLY will then use XML to split up your strings, by replacing each comma (together with the quotes!) with XML tags. Thus we can prevent splitting at internal commas. But before, we have to use FOR XML on the original string, to allow characters such as the & in Big & Abnormal. The rest is rather easy XPath/XQuery.
The result
+----+-----------------------+
| ID | Parted                |
+----+-----------------------+
| 1  | 2                     |
+----+-----------------------+
| 1  | 3                     |
+----+-----------------------+
| 1  | 4                     |
+----+-----------------------+
| 2  | 1                     |
+----+-----------------------+
| 2  | 2                     |
+----+-----------------------+
| 2  | 3                     |
+----+-----------------------+
| 3  | []                    |
+----+-----------------------+
| 4  | Select option B       |
+----+-----------------------+
| 4  | Select option C       |
+----+-----------------------+
| 4  | Select option D       |
+----+-----------------------+
| 5  | Moderate              |
+----+-----------------------+
| 5  | Heavy                 |
+----+-----------------------+
| 5  | Heavy, Big & Abnormal |
+----+-----------------------+
{ "pile_set_name": "StackExchange" }
Q: How can I create a modern cross compile toolchain for the Raspberry Pi 1?
At least Debian does not provide a usable toolchain to cross develop for the Raspberry Pi 1. The Linaro toolchain is, at the time of this writing, too old for the Qt5 developer branch.
There is a project, crosstool-ng, which allows to easily build custom toolchains for all kinds of systems. It supports a fairly modern GCC 4.9.1. The configuration is a bit trial and error, but the main problem is that the toolchain does not find all the include files or libraries.
How is crosstool-ng to be configured so it can be used to compile Qt5 for the Raspberry Pi 1?
A followup on how a Raspberry Pi with Raspbian has to be prepared to use this toolchain can be found here: How do I prepare a Raspberry Pi with Raspbian so I can cross compile Qt5 programs from a Linux host?

A: I start with the not-found include/library problem first, since this goes a bit beyond the normal crosstool-ng installation/usage.
The problem is that crosstool-ng rightfully creates a gcc compiler with a target tuple like arm-vendor-linux-gnueabihf. This is totally correct. However, Raspbian installs includes and libs in folders without the vendor string: /lib/arm-linux-gnueabihf. Looks like pkg-config cannot handle this. crosstool-ng might be right with the tuple, but is also a bit heavy handed by refusing to add a function to remove this vendor string. The functions in crosstool-ng which allow to modify the tuple and the vendor string are not an alternative. They just create symbolic links with a new name, but the tuple is hardcoded in GCC.
The only way to properly get rid of the vendor string is to patch the crosstool-ng sources.
So the first step to get a functional Raspberry Pi/Raspbian gcc 4.9.1 toolchain is to clone the crosstool-ng repository:
git clone git://crosstool-ng.org/crosstool-ng
Second is to patch the sources:
diff --git a/scripts/config.guess b/scripts/config.guess
index dbfb978..9a35943 100755
--- a/scripts/config.guess
+++ b/scripts/config.guess
@@ -176,7 +176,7 @@ case "${UNAME_MACHINE}:${UNAME_SYSTEM}:${UNAME_RELEASE}:${UNAME_VERSION}" in
             sh3el) machine=shl-unknown ;;
             sh3eb) machine=sh-unknown ;;
             sh5el) machine=sh5le-unknown ;;
-            *) machine=${UNAME_MACHINE_ARCH}-unknown ;;
+            *) machine=${UNAME_MACHINE_ARCH} ;;
         esac
         # The Operating System including object format, if it has switched
         # to ELF recently, or will in the future.
diff --git a/scripts/config.sub b/scripts/config.sub
index 6d2e94c..f92db2b 100755
--- a/scripts/config.sub
+++ b/scripts/config.sub
@@ -317,7 +317,7 @@ case $basic_machine in
         | we32k \
         | x86 | xc16x | xstormy16 | xtensa \
         | z8k | z80)
-                basic_machine=$basic_machine-unknown
+                basic_machine=$basic_machine
                 ;;
         c54x)
                 basic_machine=tic54x-unknown
The rest is the standard configure/make/make install.
The next step is to configure crosstool-ng correctly to build the desired toolchain. This is done with ct-ng menuconfig. Going through every single config item would be extremely lengthy, so I added a working config file here: http://pastebin.com/MhQKnhpN
It can be imported with Load an Alternate Configuration File. Finally ct-ng build builds a new toolchain in a few minutes.
The toolchain is created in {HOME}/x-tools3, as defined in the config file. To change this, change 'Prefix directory' in 'Path and misc options'. But the toolchain can also be moved manually after the build.
The next question/answer will show how to use this toolchain to build a very modern Qt5 for the Raspberry Pi.
{ "pile_set_name": "StackExchange" }
Q: Encrypting SFMC email link parameter values
The email address of the subscriber is currently appended as a parameter to our SFMC email links. Rather than have the email as raw text, we would like to encrypt the value. For example, the link url is now:
http://www.test.com/?email=[email protected]
With encryption, it should be:
http://www.test.com/?email=[encrypted value]
What would be the best way to achieve this? Note, we want to use encryption, because we need to decrypt the value as well.
Addition: Decryption needs to be done in node. So how can we encrypt in SFMC and decrypt in node?

A: Look up: https://developer.salesforce.com/docs/atlas.en-us.noversion.mc-programmatic-content.meta/mc-programmatic-content/EncryptSymmetric.htm
You will want to do something like this (using RedirectTo):
%%=RedirectTo(Concat('http://www.test.com/?email=', EncryptSymmetric(_emailaddr, 'AES', @null, 'password', @null, '0000000000000000', @null, '00000000000000000000000000000000')))=%%
{ "pile_set_name": "StackExchange" }
Q: Transform Constraint
I have 3 animated cubes, two of which, Cube 1 and Cube 2, start stacked on top of each other and off to the negative X of the other one, Cube 3. Cube 1 moves forward along the X axis. Cube 2 moves along the X axis under Cube 1 (transform constraint) while rotating around its Z axis. Cube 3 moves from Y=0 to Y=2 for every full (360°) rotation of Cube 2, then it jumps back to Y=0 as Cube 2 begins another rotation.
I would like to change this behavior so that, within one 360° rotation of Cube 2, Cube 3 would move from Y=0 to Y=2 and then smoothly back to Y=0, behaving like a pendulum. See the animation in my .blend file; I hope that helps you understand me.

A: That sounds like a job for a driver.
Remove the constraint on Cube.002, right click on its Y location value and select Add Single Driver. The Y value will then be filled with purple. You edit the driver value in the graph editor with its mode set to Drivers. More info on editing drivers is available in the blender wiki.
You want to use the Z rotation of Cube.001; displaying the debug value shows that it varies from -180 to 180. You want to ignore the negative part of the value, so abs(var) will remove that, then divide that by 180 and multiply it to get a larger range of movement. That leaves you with a calculation of (abs(var)/180)*100. Changing the 100 used will change the distance Cube.002 moves.
You can add a value at the end to move the range of movement away from 0.0.
{ "pile_set_name": "StackExchange" }
Q: Where should I store log files of an application running in Container(docker/kubernetes)?
I have to port an application (C/C++) to a docker container. The application writes to log files in the file system. How and where should the log files be stored when the application runs in a container?

A: I would recommend the following for writing the log file of the application, so that you can see the stdout and stderr of the logs when running the command
$ docker logs <docker-name>
This requires a change in the Dockerfile, and this is what you can see in most images such as nginx and httpd.
# forward request and error logs to docker log collector
RUN ln -sf /dev/stdout /var/log/<app-name>/access.log \
    && ln -sf /dev/stderr /var/log/<app-name>/error.log
{ "pile_set_name": "StackExchange" }
Q: Using JQuery to sum integers within table, then find the largest sum
I have a large HTML table that is updated automatically every 24 hours with new data. The 5th column contains multiple numbers in each row separated by a line break, each containing class .remaining-detail. I'm looking to add the numbers from each row, and then find which row contains the largest sum from column 5.
<table>
<tr class="row-3 odd">
<td class="column-1 ">ID</td><br><br><td class="column-2 ">
<a href="*" class="ex" title="Big Money">Name</a><br>
<br><td class="column ">
<a href="*" class="ex" title="Big Money">Area</a><br>
<br>
</td><td class="column-3 ">$5</td><td class="column-4 ">
<div class="remaining-detail">$100,000.00</div>
<div class="remaining-detail">$10,000.00</div>
<div class="remaining-detail">$1,000.00</div>
<div class="remaining-detail">$500.00</div>
<div class="remaining-detail">$400.00</div>
<div class="remaining-detail">$100.00</div><br><br>
</td><td class="column-5 ">
<div class="remaining-detail">1</div><br>
<div class="remaining-detail">0</div><br>
<div class="remaining-detail">36</div><br>
<div class="remaining-detail">64</div><br>
<div class="remaining-detail">100</div><br>
<div class="remaining-detail">972</div><br>
</td>
</tr></table>
<br>
I am adding these numbers like this:
$(document).ready(function(){
    var sum = 0;
    $('.row-2 .column-5 .remaining-detail').each(function () {
        sum += parseInt($(this).html().replace(',',''));
    });
    $('#sum2').text(sum);
});
This works for a single instance. How would I go about doing this for .ROW N .column-5 .remaining-detail and then find the row with the largest sum?
Here is a fiddle with what I have right now: http://jsfiddle.net/3LHb8/

A: You can do that with two nested .each() loops. Here's an example:
$(document).ready(function(){
    var sums = [];
    $('tbody tr').each(function() {
        var rowSum = 0;
        $(this).find('.remaining-detail').each(function () {
            rowSum += parseInt($(this).html().replace(',',''));
        });
        sums.push(rowSum);
    });
    $('#sum2').text("Biggest sum is in row " + (1 + sums.indexOf(Math.max.apply(Math, sums))));
});
Here's the jsFiddle. I'm storing the sums of each row and then printing the row with the highest sum, but you can do whatever variation you need. Hope it helps.
{ "pile_set_name": "StackExchange" }
Q: How to remove a row in presence of values less than 2 in any of the columns in R dataframe?
I am new to R and this may be a very basic question. I am working on microarray data where there are thousands of columns in a dataframe. I am trying to remove all those rows that have a value less than 2 and greater than -2 in any of the columns. Therefore, I cannot specify the column name. How can I remove all those rows that have any value less than 2 and greater than -2 in any column. Any help would be greatly appreciated.

A: indices <- which(apply(DF, 1, function(row) any(abs(row) < 2)))
DF[-indices,]
First, you want to find the relevant rows to remove. You can achieve that by going over each row (apply with 1 as the second argument) and then check if it has any values between -2 and 2 (not including them). In other words, the absolute value is less than 2. (You can ask if any of the absolute values in the row are less than 2, or if the minimum absolute one is.)
This will give you a boolean vector. Applying the which function to it will produce a vector of indices where the value was TRUE. Now you just need to remove those rows from the data.frame (I called it DF).
{ "pile_set_name": "StackExchange" }
Q: I want to fade the border and background of a span without affecting its contents in jQuery
I'm making a navigation system on some of the pages of my website that is roughly like this:
<a href="link"><span>Page</span></a>
Where each navigation link would look like that. I want the link representing the current page to have the following properties:
background-color: buttonface;
border: 2px solid grey;
border-radius: 5px;
and all navigation links to have these properties:
padding: 0 6px 0 6px;
In addition I wanted to make the border and background of the current page's link fade into any link on .mouseenter() and fade out on .mouseleave(), unless it is the current page, in which case it should not fade out.
I am relatively new to jQuery and I have no idea how to do this. It isn't completely necessary for the links to be in the format I put above, as long as they're listed horizontally across the page and have the properties I specified.
If it matters, my site also uses the following code for style already:
body{font-family: 'Raleway'; background: #F07400; color: black; font-size: 18px; margin: 20px 0 20px 20px;}
button{font-size: 18px; font-family: 'Raleway'; border-radius: 5px; border: 2px solid grey;}
and
$(document).ready(function() {
    widthval = $(window).width()-40;
    $("body").css("width", widthval);
    $("body").fadeOut(0);
    $("body").fadeIn(1600);
    $(window).resize(function(){
        widthval = $(window).width()-40;
        $("body").css("width", widthval);
    });
});

A: You could layer two body layers, by placing a body2 positioned absolute as a child element of body, which would draw it on top of body. Have the body2 contain the border information and body contain the content. Then fade body2. But this solution would require the content to exist in both body and body2, because clicks would be blocked to body and processed through body2.
updated
<div style="width:needed; height:needed;">
<div2 style="position:absolute; width:sameAsBody; height:sameasbody" class="fade">
this content will fade.
</div2>
content
content here will be faded to. If changing just the background it would be literally the same, hence it appears that the content did not fade.
</div>
$(document).ready(function(){
    $('.fade').fadeTo(2000, 1.0).fadeTo(4000, 0.0);
});
{ "pile_set_name": "StackExchange" }
Q: JavaScript website error: Uncaught TypeError: Failed to execute 'appendChild' on 'Node'
Hey, I am trying to make a website with just JavaScript but I get this error:
Uncaught TypeError: Failed to execute 'appendChild' on 'Node': parameter 1 is not of type 'Node' at html:12:9
Here is my html code:
<!DOCTYPE html>
<html>
<head>
<meta charset="UTF-8">
</head>
<body>
<script>
var divEl = document.createElement("div");
document.body.appendChild(divEl);
var ptag = document.createElement("p").setAttribute("id", "lol");
divEl.appendChild(ptag);
</script>
</body>
</html>

A: setAttribute does not return a node, so your ptag variable is getting set to undefined. From the doc:
Adds a new attribute or changes the value of an existing attribute on the specified element.
Try calling setAttribute in a separate statement:
<script>
var divEl = document.createElement("div");
document.body.appendChild(divEl);
var ptag = document.createElement("p");
ptag.setAttribute("id", "lol");
divEl.appendChild(ptag);
</script>
JSBin: http://jsbin.com/wibeyewuqo/edit?html,output
{ "pile_set_name": "StackExchange" }
Q: What is exclusive arc in database and why it's evil?
I was reading the "most common database design mistakes made by developers" Q&A on Stack Overflow. The first answer contained this phrase about exclusive arcs:
An exclusive arc is a common mistake where a table is created with two or more foreign keys where one and only one of them can be non-null. Big mistake. For one thing it becomes that much harder to maintain data integrity. After all, even with referential integrity, nothing is preventing two or more of these foreign keys from being set (complex check constraints notwithstanding).
I really don't understand why an exclusive arc is evil. Probably I didn't understand the basics of it. Is there any good explanation of exclusive arcs?

A: As far as I understood it a long time ago, in an exclusive arc a table contains a number of columns that are foreign keys to other tables, but only one of these can be set at a time (due to some logical constraint on the domain following from the real world). As this rule cannot be enforced on the database, a corrupt record could be created where more than one of these foreign keys has a value.
I'll make an example. Consider an application where a company keeps track of the trucks it uses to deliver goods. A truck can only be in one of three places at the same time: it can be with an employee, it can be in a parking garage or it can be in a maintenance shop. This could be modeled by having a Truck table with employeeId, parkingGarageId and maintenanceShopId, referencing the Employee, ParkingGarage and MaintenanceShop tables. There is no way to enforce the rule that only one of these fields is filled out on the level of the database. Bad code or somebody with direct access to the database could insert a record that has two or three fields filled, which amounts to data corruption in the database.

A: There is nothing evil about exclusive arcs. Simply enforce the corresponding business rule using a check constraint. Most major database management systems support check constraints (Oracle, SQL Server, PostgreSQL). If you're using a data modeling tool then there is a good chance that your tool will automatically generate the code to implement the check constraint.
{ "pile_set_name": "StackExchange" }
Q: List global variables in a C program
Is there a tool around that will list all the global variables in a C program? More specifically, is there a simple commandline tool that will do this, i.e. not a heavyweight IDE, CASE, graphical toolkit system etc., but just something that can be run like foo *.c?

A: If you happen to compile the file, on most unixes you have nm, which just lists all linker symbols. These symbols are classified into different groups (a bit platform dependent), so you should easily find out which ones the variables are.

A: ctags -R -x --sort=yes --c-kinds=v --file-scope=no file "c:\my sources" > c:\ctagop.txt

A: Try ctags. Or, gcc with -aux-info. There is also gccxml and libclang but those two aren't very simple.
{ "pile_set_name": "StackExchange" }
Q: Bower: Installing legacy bootstrap (2.3.2) with bower
I have an open source webapp using bootstrap 2.3.2 which currently can't move to bootstrap 3 (completely different grid system). I'm trying to move the webapp to use bower to handle dependencies, but bower install bootstrap#2.3.2 fetches something that looks like the raw code repo, not a built distribution: for example, no css folder, just the separate less files; no single minified bootstrap.min.js but multiple different plugins (not concatenated), etc.
Is this fixable? Should I install differently?

A: You are doing everything correct. Take a look at the README for v2.3.2 on github
Bootstrap includes a makefile with convenient methods for working with the framework. Before getting started, be sure to install the necessary local dependencies:
$ npm install
When completed, you'll be able to run the various make commands provided:
build - make
Runs the recess compiler to rebuild the /less files and compiles the docs. Requires recess and uglify-js.
test - make test
Runs jshint and qunit tests headlessly in phantomjs (used for ci). Depends on having phantomjs installed.
watch - make watch

A: The solution I ended up with:
forked the bootstrap repo, cleaned it up (other branches etc..)
git reset --hard to the v2.3.2 tag - checked it into another branch and pushed it to github.
more cleanup (removing the old master branch, higher tags).
ran the build processes and set up an "updated" dist folder for the v2.3.2 tag.
changed the bower package and published it as a bootstrap2.3.2 package.
Now I (and everyone else) can install with bower install from this repo. The results are here if someone wants to use it.

A: I just ran into this and was basically setting up my own fork of Bootstrap 2.3.2 (before I noticed your answer). As I dove in, I noticed that Bootstrap 2.3.2 actually does come with pre-built assets you can use in your project.
While it's not as obvious as the dist/ directory in Bootstrap 3, you can find the location of the prebuilt assets in the project's bower.json:
"main": ["./docs/assets/js/bootstrap.js", "./docs/assets/css/bootstrap.css"],
So there you go. Add {"bootstrap": "~2.3.2"} to your bower.json dependencies like you normally would, then use it like this:
<link href="bower_components/bootstrap/docs/assets/css/bootstrap.css" rel="stylesheet">
<script src="bower_components/bootstrap/docs/assets/js/bootstrap.js"></script>
No build step required.
{ "pile_set_name": "StackExchange" }
Q: ACL role assignments not showing on screen, but still working When I go to civicrm/acl/entityrole it shows no role assignments and says "There are no Role Assignments. You can add one now." But they exist, have not been deleted and are still working. I can see them in the database table (select * from civicrm_acl_entity_role;) Anyone else have this problem? It looks like a bug A: Once I saw a second person report the same issue, I suspected a bug, so I successfully replicated the problem on the demo site. See the CiviCRM bug reporting page. I determined that this is a side effect of changes made in CRM-20351. Because this seemed like an easy enough bug to fix, I went ahead and did so. I filed it as CRM-21076, and submitted a solution as pull request #10866. It's a one-line fix, so feel free to fix manually. I'm going to argue that this should go into 4.7.24, but 4.7.25 seems almost a certainty.
{ "pile_set_name": "StackExchange" }
Q: How can I call a clean method before get_or_create saves to the database?
I've got a model in Django in which some data is frequently invalid. Usually I call a clean() method that I've written to deal with these situations, however, get_or_create() seems to call save() before I get a chance to call clean(). How can I clean my data before get_or_create() attempts to write to the database?
Here's the relevant parts of my model:
class Article(models.Model):
    optional_attribute = models.CharField(max_length = 250)

    def clean(self):
        if not self.optional_attribute:
            self.optional_attribute = 'Default'

A: It might be more appropriate to override the save method:
class Article(models.Model):
    optional_attribute = models.CharField(max_length = 250)

    def save(self, *args, **kwargs):
        if not self.optional_attribute:
            self.optional_attribute = 'Default'
        super(Article, self).save(*args, **kwargs)
Alternatively, you could use a pre_save signal handler:
from django.db.models.signals import pre_save

class Article(models.Model):
    optional_attribute = models.CharField(max_length = 250)

    @classmethod
    def pre_save_handler(cls, sender, instance, **kwargs):
        if not instance.optional_attribute:
            instance.optional_attribute = 'Default'

pre_save.connect(Article.pre_save_handler, sender=Article)
If you want to retain your clean method, you could simply use one of these techniques and call your clean method from within.
{ "pile_set_name": "StackExchange" }
Q: Reading file in React action
I am new to JavaScript and have problems with its asynchronous behavior. I am trying to read a file in a React action. The important part looks like this:
if(file){
    const reader = new FileReader();
    reader.readAsDataURL(inquiryFile);
    reader.onload = function() {
        body = JSON.stringify({
            fileToUpload: reader.result,
        });
        return dispatch(basicRequest(body));
    };
    reader.onerror = function(error) {
        console.error('Error uploadingFile: ', error);
    };
}
else{
    return dispatch(basicRequest());
}
The component, which is responsible for calling this action, needs to dispatch another action depending on either a success or error result.
return submitFileAction(masterId, data).then((result) => {
    if (!result.error) {
        console.log('success');
    } else {
        console.log('error');
    }
});
The problem is that the result arriving in the 'then' part is undefined and filereader.onload is called after I get the error. I would like to ask how to await the result from the filereader. Thanks.

A: You probably want to wrap the FileReader into a Promise.
if (file) {
    return new Promise(function(resolve, reject){
        ...
        reader.onload = function(){
            ...
            resolve(dispatch(basicRequest(body)));
        };
        reader.onerror = function(error) {
            reject(error);
        };
    });
}
The error would be handled as:
return submitFileAction(masterId, data).then((result) => {
    console.log('success');
}).catch((error) => {
    console.log('error');
});
This presumes dispatch() also returns a Promise.
{ "pile_set_name": "StackExchange" }
Q: driver expression dot.product function
I'm coming from 3ds Max and I would like to rebuild a scene in Blender. It has a controller that outputs the angle between a bone and the world z-axis via the dot product function. That's pretty simple in 3ds Max with e.g.
myvec=$.transform.row1
globalz=[0,0,1]
theAngle = acos(dot (normalize myvec) (normalize globalz))
The script controller calculates the angle between the bone's X-axis (roll axis) and the world z-axis. This works for realtime transformations and animations as well. How could I do this with a Blender driver? So far I have written a script which runs in the console line by line (sorry for the noobish syntax):
import bpy
import mathutils
import math
from mathutils import Matrix, Vector

myvec=bpy.data.objects['Armature'].pose.bones['Bone'].matrix.col[1]
gloma= Matrix()
ori=gloma.col[2]
ori=ori.dot(myvec)
ori=math.acos(ori)
ori=math.degrees(ori)
print (ori)
After I created a driver function with bpy.app.driver_namespace, that driver didn't get evaluated. Any help would be very appreciated.

A: Script:
import bpy
import math

def test_fn2(pose_bone):
    rig = pose_bone.id_data
    mat = rig.matrix_world @ pose_bone.matrix
    vec = mat.col[1].normalized()
    angle = math.degrees(math.acos(vec[2]))
    return angle

bpy.app.driver_namespace["test_fn2"] = test_fn2
Settings: Turn on Register in the Text Editor. And Auto Run Python Scripts in User Preferences if you need.
Expression:
test_fn2(bones["Bone"])/45
Additions: The original script has two mistakes.
1. The transformation matrix for converting to world space is object.matrix_world @ pose_bone.matrix
2. The vector myvec needs to be normalized.
{ "pile_set_name": "StackExchange" }
Q: Assign final variable in a try block
Very short question: Is there a more elegant way to do this:
Object tmp;
try {
    tmp = somethingThatCanFail();
} catch (Fail f) {
    tmp = null;
}
final Object myObject = tmp;
// now I have a final myObject, which can be used in anonymous classes

A: You could extract the creation of the value in its own method:
final Object myObject = getObjectOrNull();

public Object getObjectOrNull() {
    try {
        return somethingThatCanFail();
    } catch (Fail f) {
        return null;
    }
}
It's longer, but depending on your definition of "elegant" it might be more elegant.
{ "pile_set_name": "StackExchange" }
Q: Weird inheritance pattern
Suppose I have a base class like this:
class Abstract {
public:
    /* This type should be the deriver of this class */
    virtual DerivedType foo(void) = 0;
};
And I want DerivedType to be different depending on who derives from this class. In fact I want DerivedType to be the type that derives from Abstract. I realize that I could do something like this:
template<typename der_t>
class Abstract {
public:
    virtual der_t foo(void) = 0;
};
And then it should be used like this:
class Derived : public virtual Abstract<Derived> { };
Unfortunately there is no way to force someone to pass in the right type in the template. That is, someone could do this:
class Derived : public virtual Abstract<int> { };
So is there any better way to do this, or is there a way to force someone to pass in the right parameter?

A: The usual trick for CRTPs is to have a private constructor that only the passed-in class can access via a friend directive:
template <class Derived>
struct Crtp {
private:
    friend Derived;
    Crtp() = default;
};
It isn't perfect, but guards against errors.
Note: static_asserting is not a practical solution, because at the time Crtp is instantiated Derived is still incomplete, and can't be checked for base classes.
{ "pile_set_name": "StackExchange" }
Q: Displaying JFrames inside JSplitPane Java
I'm making a GUI using JSplitPane and I want to display a JFrame in the left side of the JSplitPane and another JFrame inside the right side of the JSplitPane. The names of the other JFrames are Minimize.java and Diagram.java. My problem is how I can call these and display them in the left and right sides of the JSplitPane?
An update for this post: I converted my JFrame to a JPanel and displayed it successfully, but the problem now is that it didn't perform the function/method. Here's my code for the main form.
public LogicGates() {
    Minimize mi = new Minimize();
    //mi.setVisible(true);
    JLabel iExp = new JLabel("Inputted Expression: ");
    p.add(iExp);
    j1= new JLabel("");
    j1.setVisible(false);
    p.add(j1);
    JScrollPane aaScrollPane = new JScrollPane(aa);
    //here is my problem, when i run the code it displays the label and jcombobox but didn't perform the function
    gatessplit = new JSplitPane(JSplitPane.HORIZONTAL_SPLIT, mi, aaScrollPane);
    gatessplit.setOneTouchExpandable(true);
    gatessplit.setDividerLocation(300);
    //Provide minimum sizes for the two components in the split pane.
    Dimension minimumSize = new Dimension(150, 80);
    //frame.setMinimumSize(minimumSize);
    aaScrollPane.setMinimumSize(minimumSize);
    //Provide a preferred size for the split pane.
    gatessplit.setPreferredSize(new Dimension(900, 500));
}

A: Could you not use JPanels here, combined with a layout manager on the JFrame? This will allow you to create a JFrame, then add a layout to it comprising two sections on the left and right. You can then add a JPanel to the left and right and add components to each JPanel.
JPanel tutorial: http://docs.oracle.com/javase/tutorial/uiswing/components/panel.html
Layout tutorial: http://docs.oracle.com/javase/tutorial/uiswing/layout/visual.html
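A minimal, self-contained sketch of the JPanel approach described in this answer; the class name and label texts are made up for illustration, and in the real project Minimize and Diagram would extend JPanel instead of JFrame and be dropped in where the placeholder panels are.
import java.awt.BorderLayout;
import javax.swing.JFrame;
import javax.swing.JLabel;
import javax.swing.JPanel;
import javax.swing.JSplitPane;
import javax.swing.SwingUtilities;

public class SplitPaneDemo
{
    public static void main( String[] args )
    {
        SwingUtilities.invokeLater( () -> {
            // Left and right sides are plain JPanels, not JFrames
            JPanel minimizePanel = new JPanel( new BorderLayout() );
            minimizePanel.add( new JLabel( "Minimize content goes here" ), BorderLayout.CENTER );

            JPanel diagramPanel = new JPanel( new BorderLayout() );
            diagramPanel.add( new JLabel( "Diagram content goes here" ), BorderLayout.CENTER );

            JSplitPane split = new JSplitPane( JSplitPane.HORIZONTAL_SPLIT, minimizePanel, diagramPanel );
            split.setOneTouchExpandable( true );
            split.setDividerLocation( 300 );

            // Only one top-level JFrame; the split pane is its content
            JFrame frame = new JFrame( "Logic Gates" );
            frame.setDefaultCloseOperation( JFrame.EXIT_ON_CLOSE );
            frame.getContentPane().add( split, BorderLayout.CENTER );
            frame.setSize( 900, 500 );
            frame.setVisible( true );
        } );
    }
}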
{ "pile_set_name": "StackExchange" }
Q: How do I implement a role Based Permission system on a webpage?
For a little project I'm doing, I need to restrict user access on my HTML page. It currently uses PHP, HTML and a MySQL database. What I need to do is have an Admin role and a regular User role. The website has several tables with data where the Admin will be able to view, edit, remove and add data, while the regular User should only be able to see the tables with no way of messing with them.
I've done some research, but I never found anything for HTML-specific pages. What I tried was looking up RBAC, but I don't know if that is fitting for my kind of problem.
<div class="anonymous">
<center><h1>Welcome Anonymous User!</h1></center>
</div>
<div class="end_user">
<center><h1>Welcome End-User!</h1></center>
</div>
<div class="agent">
<center><h1>Welcome Agent!</h1></center>
</div>
<div class="manager">
<center><h1>Welcome Manager!</h1></center>
</div>
I have found a little bit of this code online, which mixes it with Js and CSS, however, I am not sure if this is the way to go.

A: The first thing you should do is create a session variable for the user. Somewhere in your login code you could put the following:
$_SESSION["user_role"] = "admin"; // if the admin were logging in
Then when the user loads a webpage, you can check the session and build the HTML based on the permissions that user has.
<html><body>
<?php
if ($_SESSION["user_role"] == "admin")
    echo "<p> This text is only visible to an admin! </p>";
else
    echo "<p> This text is visible to non-admins. </p>";
?>
</body></html>
{ "pile_set_name": "StackExchange" }
Q: Can True Polymorph be used repeatedly to never age?
Let's say we have a level 20 human wizard named Dumbledore. Dumbledore is wicked old. Dumbledore would like to not die of old age, as he has a few enemies left to take care of.
Can Dumbledore cast True Polymorph on himself, concentrate for the full duration, and permanently de-age himself into
A younger version of any other creature
A long-lived creature (dragon, elf, etc)
And if yes to any of these, will this permanent transformation extend Dumbledore's lifespan? Would doing so permit Dumbledore to repeatedly cast True Polymorph and never die of old age?
Note: Dumbledore could obviously still die of other causes, such as Power Word: Kill cast by a talented former student.

A: Yes you can.
The spell never makes any specific claims about the age of a creature, only its challenge rating.
If you turn a creature into another kind of creature, the new form can be any kind you choose whose challenge rating is equal to or less than the target's (or its level, if the target doesn't have a challenge rating). [PHB 283]
So you could turn yourself into an elf or similarly long-lived creature which is also a level 20 wizard.
If you wanted to turn yourself into a creature like a dragon with the intent of doing this repeatedly, you need to be sure to note the spellcasting abilities of your new form. From the spell description:
"The creature is limited in the actions it can perform by the nature of its new form, and it can't speak, cast spells, or take any other action that requires hands or speech unless its new form is capable of such actions." [PHB 283]
If you choose a creature that can innately spellcast (or it has hands and can speak), then this isn't a problem.

A: Probably, no
A True Polymorph transformation isn't actually permanent. RAW it is still "a creature transformed by magic", so the permanent transformation can still be dispelled, through the Dispel Magic spell or an antimagic field. When you are permanently polymorphed, a creature with Truesight can still see your true form. All these things indicate that a polymorphed creature still has its true (but somehow hidden) form. Another piece of evidence is that Power Word Kill affects the true form, not the assumed one.
Being polymorphed into a younger being doesn't automatically prevent your true form from aging. RAW imply it will age, because there is no rule that says your true form resides in stasis while you are polymorphed. However, the rules don't clarify this point explicitly, so I suggest figuring it out by using the general RAI idea about aging.
Can D&D magic prevent aging?
Death by old age has special meaning in D&D magic. Even the True Resurrection spell won't work:
True Resurrection
You touch a creature that has been dead for no longer than 200 years and that died for any reason except old age...
nor other spells:
Resurrection
You touch a dead creature that has been dead for no more than a century, that didn't die of old age...
Revivify
You touch a creature that has died within the last minute. That creature returns to life with 1 hit point. This spell can't return to life a creature that has died of old age...
The only spell that can return you from the dead in this case is the rare druidic Reincarnation spell, which doesn't return your original body, but gives you a new body (hence, a new true form) instead.
The only potion that actually can make your true form younger is the Potion of Longevity, but it has a special condition - you can't drink it forever, because you will age eventually (straight away, if you're unlucky):
Potion of longevity
When you drink this potion, your physical age is reduced by 1d6 + 6 years, to a minimum of 13 years. Each time you subsequently drink a potion of longevity, there is a 10 percent cumulative chance that you instead age by 1d6 + 6 years.
If no magic can easily give you eternal life, why should True Polymorph be different?
It's like the picture of Dorian Gray
Dumbledore could actually try to prolong his life by all the magic means, including the True Polymorph method. How it ends is up to the DM. I can suggest the Dorian Gray scenario. Dumbledore assumes an elf form and lives for 200 years. Eventually he loses the assumed form. Maybe a powerful envier casts Dispel Magic on him. All 200 years return to his original form, turning him into an incredibly old man. He dies from old age right at that moment.

A: I will add this to $hamwowters' answer. While the following is open to interpretation
Choose one creature or nonmagical object that you can see within range. You transform the creature into a different creature, the creature into an object, or the object into a creature (the object must be neither worn nor carried by another creature). [PHB 283]
the intent of the spell is to turn the target into a different form, whether it is living or non-living. The trope that the spell derives from likewise has all manner of things and people turning into different things. To use the spell to transform the target into a younger version of himself is a more limited use of the spell. Obviously it has benefits in terms of age, but going back to the original trope, we have myths and legends where people were transformed into a statue or something and the climax occurred decades or even centuries later. So given the level of the spell I don't see any issue with using it as a form of anti-aging.
In terms of mechanics, the spell wording is more concerned with CR and combat effectiveness than with this type of use of the spell. In other words, of all the implications of this spell, the designers are more concerned with somebody transforming into, say, a dragon and gaining all the dragon's powers than with the other implications, like anti-aging.
{ "pile_set_name": "StackExchange" }
Q: Android LinearLayout: Layout inside a Layout
I am confused by Android layout design. I want to achieve the layout in the following image:
But the code below is producing something like this image:
My code
<LinearLayout android:orientation="horizontal" android:layout_width="fill_parent" android:layout_height="50sp"
    android:layout_below="@+id/view" android:layout_alignParentLeft="true" android:layout_alignParentStart="true"
    android:background="@drawable/bg_shadow" android:animateLayoutChanges="true" android:layout_marginTop="5dp"
    android:id="@+id/detailsLay">
    <CheckBox android:layout_width="wrap_content" android:layout_height="wrap_content" android:text="@string/repeat"
        android:id="@+id/repeatCheckBox" android:longClickable="false" android:textColor="@android:color/background_light"
        android:paddingTop="15sp" android:paddingRight="15sp" android:paddingLeft="1sp" android:paddingBottom="15sp" />
    <TextView android:layout_width="30dp" android:layout_height="20sp" android:textAppearance="?android:attr/textAppearanceSmall"
        android:text="@string/sat" android:id="@+id/satTextView" android:layout_gravity="center_vertical|right"
        android:textColor="@android:color/holo_orange_light" android:layout_marginLeft="2sp" />
    <ImageView android:layout_width="wrap_content" android:layout_height="wrap_content" android:id="@+id/arrowDown"
        android:src="@android:drawable/arrow_down_float" android:layout_gravity="center_vertical" />
    <LinearLayout android:layout_width="match_parent" android:layout_height="match_parent" android:orientation="vertical">
        <CheckBox android:layout_width="wrap_content" android:layout_height="wrap_content" android:text="@string/everyday"/>
    </LinearLayout>
</LinearLayout>

A: This is just a sample XML; learn from this.
<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent" android:layout_height="match_parent" android:orientation="vertical" >
    <LinearLayout android:layout_width="match_parent" android:layout_height="wrap_content" >
        <Button android:id="@+id/button1" android:layout_width="0dp" android:layout_height="wrap_content"
            android:layout_weight="1" android:text="Button" />
        <TextView android:id="@+id/textView1" android:layout_width="0dp" android:layout_height="wrap_content"
            android:layout_weight="1" android:text="Large Text" android:textAppearance="?android:attr/textAppearanceLarge" />
        <ImageView android:id="@+id/imageView1" android:layout_width="0dp" android:layout_height="wrap_content"
            android:layout_weight="1" android:src="@drawable/ic_launcher" />
    </LinearLayout>
    <LinearLayout android:layout_width="match_parent" android:layout_height="wrap_content" >
        <Button android:id="@+id/button2" android:layout_width="0dp" android:layout_height="wrap_content"
            android:layout_weight="1" android:text="Button" />
        <CheckBox android:id="@+id/checkBox1" android:layout_width="0dp" android:layout_height="wrap_content"
            android:layout_weight="1" android:text="CheckBox" />
        <TextView android:id="@+id/textView2" android:layout_width="0dp" android:layout_height="wrap_content"
            android:layout_weight="1" android:text="Large Text" android:textAppearance="?android:attr/textAppearanceLarge" />
    </LinearLayout>
</LinearLayout>
Use attributes like weight and the orientation of LinearLayout.
{ "pile_set_name": "StackExchange" }
Q: Does nHibernate Escape strings? From a user report: 'when editing foo, textboxBar allows special characters which produce a warning .net error on save.' textboxBar.Text is assigned to an object and saved using nHibernate <property name="TextboxBar" length="255" not-null="false" /> I thought it may be nHibernate not escaping strings, but I can't find anything about it in the docs. Does nHibernate 1.2.0 automatically escape strings? A link would be appreciated. A: I doubt that it even needs to escape strings - I'd expect values to be passed in parameterised statements. I strongly suspect this has nothing to do with nHibernate - I suspect this is just an ASP.NET error, although admittedly that's assuming that it's an ASP.NET application. If this is the case, you probably just want to turn off validation for that page. See the ASP.NET FAQ page on validation for more details.
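To illustrate the parameterised-statements point in a language-neutral way, here is a minimal sketch using Python's sqlite3 module (an arbitrary stand-in for any parameterised database API - nothing nHibernate-specific, and the table and value are made up): the value travels separately from the SQL text, so no manual escaping is needed.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE foo (textbox_bar TEXT)")   # hypothetical table

value = "O'Brien <b>& friends</b>"                     # "special" characters from user input
conn.execute("INSERT INTO foo (textbox_bar) VALUES (?)", (value,))  # placeholder, no escaping

print(conn.execute("SELECT textbox_bar FROM foo").fetchone()[0])    # round-trips unchanged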
{ "pile_set_name": "StackExchange" }
Q: SDL2 Wont draw Rectangles Correctly I'm creating a window and drawing a box but for some reason instead of drawing a box, the screen is just changed to that color. I have attached a photo of how the window looks and I will attach the source code. #include <iostream> #include <SDL.h> #undef main using namespace std; int SCREEN_WIDTH = 650; int SCREEN_HEIGHT = 650; int main() { if (SDL_Init(SDL_INIT_VIDEO) != 0) { std::cout << "SDL_Init Error: " << SDL_GetError() << std::endl; return 1; } SDL_Window *window = SDL_CreateWindow("Cells", 100, 100, SCREEN_WIDTH, SCREEN_HEIGHT, SDL_WINDOW_SHOWN); if (window == nullptr) { std::cout << "SDL_CreateWindow Error: " << SDL_GetError() << std::endl; SDL_Quit(); return 1; } SDL_Renderer *renderer = SDL_CreateRenderer(window, -1, SDL_RENDERER_ACCELERATED | SDL_RENDERER_PRESENTVSYNC); if (renderer == nullptr) { std::cout << "SDL_CreateRenderer Error: " << SDL_GetError() << std::endl; SDL_DestroyWindow(window); SDL_Quit(); return 1; } SDL_Event event; bool quit = false; while (!quit) { while (SDL_PollEvent(&event)) { if (event.type == SDL_QUIT) { quit = true; } } SDL_RenderClear(renderer); // renderTextures SDL_Rect fillRect = { 122, 122, 122, 122}; SDL_SetRenderDrawColor(renderer, 255, 255, 255, 255); SDL_RenderFillRect(renderer, &fillRect); SDL_RenderPresent(renderer); } return 0; } Doesn't Draw Correctly A: SDL_RenderClear uses current draw colour, which you modified, so your clear and rectangle colour is the same. Set different clear colour (the one you want at background where nothing else is drawn) with e.g. SDL_SetRenderDrawColor(renderer, 0, 0, 0, 255); SDL_RenderClear(renderer); // now draw your rectangles with different col
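The same clear-then-draw idea, sketched in Python with pygame (pygame wraps SDL, but this is only a rough conceptual parallel to the C++ code above, not a drop-in fix, and it assumes pygame is installed): give the clear its own colour, then switch colours for the rectangle.
import pygame

pygame.init()
screen = pygame.display.set_mode((650, 650))

running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
    screen.fill((0, 0, 0))                       # background / "clear" colour
    pygame.draw.rect(screen, (255, 255, 255),    # separate draw colour for the box
                     pygame.Rect(122, 122, 122, 122))
    pygame.display.flip()

pygame.quit()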
{ "pile_set_name": "StackExchange" }
Q: Create a recursive view that has a "with recursive" statement in Teradata I would like to create a recursive view in Teradata (i.e., CREATE RECURSIVE VIEW) from the following reproducible example: CREATE VOLATILE TABLE vt1 ( foo VARCHAR(10) , counter INTEGER , bar INTEGER ) ON COMMIT PRESERVE ROWS; INSERT INTO vt1 VALUES ('a', 1, '1'); INSERT INTO vt1 VALUES ('a', 2, '2'); INSERT INTO vt1 VALUES ('a', 3, '2'); INSERT INTO vt1 VALUES ('a', 4, '4'); INSERT INTO vt1 VALUES ('a', 5, '1'); INSERT INTO vt1 VALUES ('b', 1, '3'); INSERT INTO vt1 VALUES ('b', 2, '1'); INSERT INTO vt1 VALUES ('b', 3, '1'); INSERT INTO vt1 VALUES ('b', 4, '2'); WITH RECURSIVE cte (foo, counter, bar, rsum) AS ( SELECT foo , counter , bar , bar AS rsum FROM vt1 QUALIFY ROW_NUMBER() OVER (PARTITION BY foo ORDER BY counter) = 1 UNION ALL SELECT t.foo , t.counter , t.bar , CASE WHEN cte.rsum < 3 THEN t.bar + cte.rsum ELSE t.bar END FROM vt1 t JOIN cte ON t.foo = cte.foo AND t.counter = cte.counter + 1 ) SELECT cte.* , CASE WHEN rsum < 5 THEN 0 ELSE 1 END AS tester FROM cte ORDER BY foo , counter ; This creates this output: ╔═════╦═════════╦═════╦══════╦════════╗ ║ foo ║ counter ║ bar ║ rsum ║ tester ║ ╠═════╬═════════╬═════╬══════╬════════╣ ║ a ║ 1 ║ 1 ║ 1 ║ 0 ║ ║ a ║ 2 ║ 2 ║ 3 ║ 0 ║ ║ a ║ 3 ║ 2 ║ 5 ║ 1 ║ ║ a ║ 4 ║ 4 ║ 4 ║ 0 ║ ║ a ║ 5 ║ 1 ║ 5 ║ 1 ║ ║ b ║ 1 ║ 3 ║ 3 ║ 0 ║ ║ b ║ 2 ║ 1 ║ 4 ║ 0 ║ ║ b ║ 3 ║ 1 ║ 5 ║ 1 ║ ║ b ║ 4 ║ 2 ║ 2 ║ 0 ║ ╚═════╩═════════╩═════╩══════╩════════╝ Which I would ultimately like to "save" as a view. I have tried CREATE RECURSIVE VIEW and several variants, but I think I'm not understanding how to get around the WITH RECURSIVE cte statement. For a related question to understand what's going on, see this question A: Okay, that was actually harder than I thought: create recursive view db.test_view ( foo, counter,bar,rsum) as (SELECT foo, counter, bar, bar AS rsum FROM vt1 QUALIFY ROW_NUMBER() OVER (PARTITION BY foo ORDER BY counter) = 1 UNION ALL SELECT t.foo, t.counter, t.bar, CASE WHEN cte.rsum < 5 THEN t.bar + cte.rsum ELSE t.bar END FROM vt1 t JOIN test_view cte ON t.foo = cte.foo AND t.counter = cte.counter + 1 ) Don't qualify the recursive join to the view. IE, JOIN test_view, not JOIN db.test_view.
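For anyone trying to follow what the recursion actually computes, here is a plain-Python sketch of the same running-sum-with-reset logic (using the answer's threshold of 5 - note the question's CTE snippet says < 3 in one place, but its expected output matches 5). The data just mirrors vt1; nothing here is Teradata-specific.
from itertools import groupby

rows = [("a", 1, 1), ("a", 2, 2), ("a", 3, 2), ("a", 4, 4), ("a", 5, 1),
        ("b", 1, 3), ("b", 2, 1), ("b", 3, 1), ("b", 4, 2)]   # foo, counter, bar

for foo, grp in groupby(sorted(rows), key=lambda r: r[0]):
    rsum = 0
    for _, counter, bar in grp:
        # add to the running sum only while the previous sum is still below 5
        rsum = bar + rsum if rsum < 5 else bar
        print(foo, counter, bar, rsum, 0 if rsum < 5 else 1)   # last column = "tester"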
{ "pile_set_name": "StackExchange" }
Q: Computed Observable not updating on selection change I've looked at this for a few hours now and can't figure out what's missing. I've tried retooling the computed observable into a pureComputed, but it didn't help. I've looked at adding an ".extend({notify: 'always'});" to the end per KnockoutJS Forcing A Computed Observable to Re-Compute, but didn't help. I made the numStudents an observable, but that didn't help either. Not sure what combination I'm missing. Also made the availableClassSize to have observables in it as well. I set up the Num Students dropdown like this <select data-bind=" options: $root.availableClassSize, optionsText: 'name', optionsValue: 'value', value: numStudents()"> </select> and the computed function is // computed functions self.totalClassSize = ko.computed(function() { var total = 0; ko.utils.arrayForEach(self.assistants(), function (asst) { total += asst.numStudents(); }); return total; }); Since it's not a writeable/updateable, I didn't think it needed the valueHasMutated() option. I'm out of ideas, and would think that with observables behind the scenes the computed would update when you change the value of the Num Students dropdown. When I add an assistant, it does update the total, but that's for a new row. TIA, Steve jsFiddle A: All you need to do to get it to work is change value: numStudents()"> to value: numStudents">. See the "optionsValue" section here, which explains (emphasis added by me): Similar to optionsText, you can also pass an additional parameter called optionsValue to specify which of the objects’ properties should be used to set the value attribute on the elements that KO generates. The key point is that you're specifying the property you want to have the updated value. And here is the updated Fiddle.
{ "pile_set_name": "StackExchange" }
Q: IF/ELSE IF/ELSE with Observable as conditional I am trying to create a method to return a setting that could be stored in one of three places. I have created three observables, each to attempt to retrieve the setting value from a source. I would like to be able to "join" these in such a way as to create an IF/ELSE IF/ELSE construct so that only one of the Observables emits the setting value in the end. The condition to move away from one observable to the next is that the previous one "failed", i.e., emitted a value that did not pass a condition. The order in which each option is tried is important. So my method looks something like this so far: getSetting(settingName: string): Observable<any> { const optionOne$ = this.getItemAtFirst(settingName).pipe( take(1), filter(setting => setting && this.isSettingValid(settingName, setting))); const optionTwo$ = this.getItemAtSecond().pipe( take(1), filter(setting => setting && this.isSettingValid(settingName, setting))); const optionThree$ = of(this.defaultSettingValue); return merge(optionOne$, optionTwo$, optionThree$); } This obviously does not work because merge does not provide the selection effect that I need. Any suggestions? Thanks in advance! A: EDIT: The TL;DR Replace the merge with concat and add first to a pipe on your concat: concat(optionOne$, optionTwo$, optionThree$) .pipe( first() ); concat subscribes sequentially to each observable: optionOne$ is subscribed to first, it emits its values, when it completes it is unsubscribed from, then concat subscribes to optionTwo$, and so on until all the observables complete, with the values emitted in that order as a single sequence. Adding first simply cuts this process short, since it completes the concat after the first emitted value. You really need to check that all the observables inside the concat will complete, or you could end up with a condition where one of them never emits or completes, and that will mean you will never get a value emitted from concat. --- Assuming that your isSettingValid() returns false if it fails validation, just add a pipe() with first() to the concat(). See the slightly modified version below, and the stackblitz here. This article is also informative. And I suggest this article, which has cool gifs for explaining the operators and how they work - I always come back to it when I'm trying to figure out something like this. I think this answers your question because it emits the first value which passes the validation and ignores the rest. import { of, Observable, concat } from 'rxjs'; import { take, filter, first } from 'rxjs/operators'; function isSettingValid(setting): boolean { return setting !== 'error' ? true : false; } function getSetting(): Observable<any> { const optionOne$ = of('error') .pipe( take(1), filter(setting => isSettingValid(setting)) ); const optionTwo$ = of('second') .pipe( take(1), filter(setting => isSettingValid(setting)) ); const optionThree$ = of('default').pipe( take(1), filter(setting => isSettingValid(setting)) ); return concat(optionOne$, optionTwo$, optionThree$) .pipe( first() // emits only the first value which passes validation, then completes ); } getSetting() .subscribe((setting) => { console.log(setting); }) I also suggest removing the setting &&, like in my snippet, and putting the null check inside your validation function - it just makes the filter easier to read if you do it this way imo.
To make it more concise, you could consider replacing the take(1) and filter() with first and a predicate function that will have the same result: const optionOne$ = of('error') .pipe( first(setting => isSettingValid(setting)) );
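If it helps to see the control flow the operators implement without any RxJS at all, here is the same ordered-fallback idea as a plain, synchronous Python sketch (the names and sources are made up, and it ignores the async/completion concerns discussed above):
def get_setting(sources, is_valid, default):
    for source in sources:        # ordered, like concat()
        value = source()
        if is_valid(value):       # like the filter()/first() predicate
            return value          # stop at the first hit, like first()
    return default                # the ELSE branch

sources = [lambda: "error", lambda: "second", lambda: "third"]
print(get_setting(sources, lambda v: v != "error", "default"))   # -> second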
{ "pile_set_name": "StackExchange" }
Q: javascript remove event in handler, please have a look at http://jsbin.com/nubeb/1/edit (function(){ var func = function(e){ console.log("mouse move"); document.removeEventListener("mousemove",func); }; document.addEventListener("mousemove",func); console.log("working"); }()); I want to know: is it possible to replace 'func' in document.removeEventListener("mousemove",func); with some other keyword? I want to write it like the code below (function(){ document.addEventListener("mousemove",function(e){ document.removeEventListener("mousemove",***); }); }()); A: We have 2 different options. The first one is using arguments.callee, which is going to be deprecated in the near future. Using arguments.callee we have access to the current function being executed, so you can do something like this: (function(){ document.addEventListener("mousemove",function mylistener(e){ document.removeEventListener("mousemove", arguments.callee); }); }()); Warning: The 5th edition of ECMAScript (ES5) forbids use of arguments.callee() in strict mode. Read this for more info: arguments.callee As you see, other than getting deprecated in the near future, you cannot use arguments.callee in strict mode, which can bring us some trouble. We have an alternative which lets us avoid arguments.callee. OK, let's say we have a function like this: var myfunc = function yourfunc(){ //yourfunc is accessible }; //but here yourfunc is not accessible In this code, we can use yourfunc only in the function body; outside that context we only have myfunc. It sounds like we kind of have a private pointer in the scope of the function which is accessible and can be used instead of arguments.callee. So this is the alternative, which can also be used in strict mode; in your code you can do it like this: (function(){ document.addEventListener("mousemove",function mylistener(e){ document.removeEventListener("mousemove", mylistener); }); }());
{ "pile_set_name": "StackExchange" }
Q: Use emacs to copy kill-ring to window/buffer based on position This may be too involved. Considering: In emacs, in r-mode or lisp-mode (etc.), information can be sent directly (copied, pasted, evaluated) from one buffer to the R or Lisp interpreter. I typically configure an emacs session to have 3 windows - a large horizontal window on top and two windows beneath it. (How) could I configure this, and which keys/commands might I use to send the kill-ring to the last cursor position of the top window / buffer? The buffer / window will not always necessarily have the same contents/file. (How) could I name it upon initialization? Similar to C-X, C-B or C-X, B, how might I specify which of the three window positions to go to (based on position)? A: See window-at. For example, (defun yank-into-top-window (&optional arg) (interactive "*P") (with-selected-window (window-at 0 0) (yank arg)))
{ "pile_set_name": "StackExchange" }
Q: How to add JWT Bearer Token to ALL requests to an API I'm in the process of trying to put together a small project which uses Asp.Net Core Identity, Identity Server 4 and a Web API project. I've got my MVC project authenticating correctly with IdS4, from which I get a JWT that I can then add to the header of a request to my Web API project; this all works as expected. The issue I have is how I'm actually adding the token to the HttpClient: basically I'm setting it up for every request, which is obviously wrong (otherwise I'd have seen other examples online), but I haven't been able to determine a good way to refactor this. I've read many articles and I have found very little information about this part of the flow, so I'm guessing it could be so simple that it's never detailed in guides, but I still don't know! Here is an example MVC action that calls my API: [HttpGet] [Authorize] public async Task<IActionResult> GetFromApi() { var client = await GetHttpClient(); string testUri = "https://localhost:44308/api/TestItems"; var response = await client.GetAsync(testUri, HttpCompletionOption.ResponseHeadersRead); var data = await response.Content.ReadAsStringAsync(); GetFromApiViewModel vm = new GetFromApiViewModel() { Output = data }; return View(vm); } And here is the GetHttpClient() method which I call (currently residing in the same controller): private async Task<HttpClient> GetHttpClient() { var client = new HttpClient(); var expat = HttpContext.GetTokenAsync("expires_at").Result; var dataExp = DateTime.Parse(expat, null, DateTimeStyles.RoundtripKind); if ((dataExp - DateTime.Now).TotalMinutes < 10) { //SNIP GETTING A NEW TOKEN IF ITS ABOUT TO EXPIRE } var accessToken = await HttpContext.GetTokenAsync("access_token"); client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", accessToken); return client; } My StartUp classes are pretty standard from what I gather, but if they could be useful, then I'll add them in. A: I've read many articles and I have found very little information about this part of the flow, so I'm guessing it could be so simple that it's never detailed in guides, but I still don't know! The problem is that the docs are really spread all over, so it's hard to get a big picture of all the best practices. I'm planning a blog series on "Modern HTTP API Clients" that will collect all these best practices. First, I recommend you use HttpClientFactory rather than new-ing up an HttpClient. Next, adding an authorization header is IMO best done by hooking into the HttpClient's pipeline of message handlers. A basic bearer-token authentication helper could look like this: public sealed class BackendApiAuthenticationHttpClientHandler : DelegatingHandler { private readonly IHttpContextAccessor _accessor; public BackendApiAuthenticationHttpClientHandler(IHttpContextAccessor accessor) { _accessor = accessor; } protected override async Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, CancellationToken cancellationToken) { var expat = await _accessor.HttpContext.GetTokenAsync("expires_at"); var dataExp = DateTime.Parse(expat, null, DateTimeStyles.RoundtripKind); if ((dataExp - DateTime.Now).TotalMinutes < 10) { //SNIP GETTING A NEW TOKEN IF ITS ABOUT TO EXPIRE } var token = await _accessor.HttpContext.GetTokenAsync("access_token"); // Use the token to make the call.
request.Headers.Authorization = new AuthenticationHeaderValue("Bearer", token); return await base.SendAsync(request, cancellationToken); } } This can be hooked up via DI: services.AddTransient<BackendApiAuthenticationHttpClientHandler>(); services.AddHttpClient<MyController>() .ConfigureHttpClient((provider, c) => c.BaseAddress = new Uri("https://localhost:44308/api")) .AddHttpMessageHandler<BackendApiAuthenticationHttpClientHandler>(); Then you can inject an HttpClient into your MyController, and it will magically use the auth tokens: // _client is an HttpClient, initialized in the constructor string testUri = "TestItems"; var response = await _client.GetAsync(testUri, HttpCompletionOption.ResponseHeadersRead); var data = await response.Content.ReadAsStringAsync(); GetFromApiViewModel vm = new GetFromApiViewModel() { Output = data }; return View(vm); This pattern seems complex at first, but it separates the "how do I call this API" logic from "what is this action doing" logic. And it's easier to extend with retries / circuit breakers / etc, via Polly.
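The same "attach the token at the transport layer, not per call" idea exists in other stacks too; as a rough, non-ASP.NET illustration, here is a Python sketch using requests' pluggable auth hook (the token source here is a made-up placeholder):
import requests

class BearerAuth(requests.auth.AuthBase):
    def __init__(self, get_token):
        self.get_token = get_token            # callback that could refresh near expiry

    def __call__(self, request):
        request.headers["Authorization"] = "Bearer " + self.get_token()
        return request

session = requests.Session()
session.auth = BearerAuth(lambda: "eyJ...")   # hypothetical token provider
# every call made through `session` now carries the Authorization header, e.g.:
# session.get("https://localhost:44308/api/TestItems")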
{ "pile_set_name": "StackExchange" }
Q: convert json to xml without changing the order of parameters in python I'm using the Json2xml module to convert JSON format to XML format, but while converting it changes the order of the parameters. How do I convert without changing the order of parameters? Here's my Python code. from json2xml.json2xml import Json2xml data = Json2xml.fromjsonfile('example.json').data data_object = Json2xml(data) xml_output = data_object.json2xml() print xml_output example.json { "action": { "param1": "aaa", "param2": "bbb" } } The output is <action> <param2>bbb</param2> <param1>aaa</param1> </action> Is there a way to convert json to xml without changing the order of parameters? A: Try using an OrderedDict: from collections import OrderedDict from json2xml.json2xml import Json2xml data = Json2xml.fromjsonfile('example.json').data data = OrderedDict(data) data_object = Json2xml(data) xml_output = data_object.json2xml() print xml_output
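A hedged side note on the JSON-parsing step itself (separate from whatever Json2xml does internally): plain dicts only preserve insertion order from Python 3.7 on, so on older interpreters it is safer to parse the file straight into an OrderedDict, for example:
import json
from collections import OrderedDict

with open("example.json") as f:
    data = json.load(f, object_pairs_hook=OrderedDict)

print(list(data["action"].keys()))   # ['param1', 'param2'] - file order kept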
{ "pile_set_name": "StackExchange" }
Q: Can I coordinate colors between seaborn and networkx? I'm trying to coordinate colors on a networkx network chart with the colors on seaborn charts. When I use the same color pallet (Dark2) and same group ids, the two plots still come out different. To be clear, the nodes in group 0 should be the same as the bars in group 0. The same should hold true for groups 1 & 2. Here is the chart I get when I run the code below, showing that the colors don't remain consistent: When I run the code below, in the network plot, the colors for group 0 are the same as they are in the count plot. But groups 1 and 2 change colors between the network plot and the count plot. Does anyone know how to coordinate the colors? I have compared the color mappings in plt.cm.Dark2.colors to the mappings in sns.color_palette('Dark2', 3) and they appear to be the same (besides the fact that sns only includes the first 3 colors. Also worth noting, seaborn is following the expected order of colors, networkx is not. import pandas as pd import networkx as nx import matplotlib.pyplot as plt import seaborn as sns # create dataframe of connections df = pd.DataFrame({ 'from':['A', 'B', 'C','A'], 'to':['D', 'A', 'E','C']}) # create graph G = nx.Graph() for i, r in df.iterrows(): G.add_edge(r['from'], r['to']) # create data frame mapping nodes to groups groups_df = pd.DataFrame() for i in G.nodes(): if i in 'AD': group = 0 elif i in 'BC': group = 1 else: group = 2 groups_df.loc[i, 'group'] = group # make sure it's in same order as nodes of graph groups_df = groups_df.reindex(G.nodes()) # create node node and count chart where color is group id fig, ax = plt.subplots(ncols=2) nx.draw(G, with_labels=True, node_color=groups_df['group'], cmap=plt.cm.Dark2, ax=ax[0]) sns.countplot('index', data=groups_df.reset_index(), palette='Dark2', hue='group', ax=ax[1]) A: networkx distributes the values equally over the colors of the colormap. Since it apparently cannot take in a norm (which would be the usual way to tackle this), you would need to create a new colormap with only the colors you're interested in. cmap = ListedColormap(plt.cm.Dark2(np.arange(3))) Also, seaborn will desaturate the colors to be used, so to get the same colors as with the colormap, you need to set saturation=1 in the seaborn call. import numpy as np import pandas as pd import networkx as nx import matplotlib.pyplot as plt from matplotlib.colors import ListedColormap import seaborn as sns # create dataframe of connections df = pd.DataFrame({ 'from':['A', 'B', 'C','A'], 'to':['D', 'A', 'E','C']}) # create graph G = nx.Graph() for i, r in df.iterrows(): G.add_edge(r['from'], r['to']) # create data frame mapping nodes to groups groups_df = pd.DataFrame() for i in G.nodes(): if i in 'AD': group = 0 elif i in 'BC': group = 1 else: group = 2 groups_df.loc[i, 'group'] = group # make sure it's in same order as nodes of graph groups_df = groups_df.reindex(G.nodes()) # create node node and count chart where color is group id fig, ax = plt.subplots(ncols=2) # create new colormap with only the first 3 colors from Dark2 cmap = ListedColormap(plt.cm.Dark2(np.arange(3))) nx.draw(G, with_labels=True, node_color=groups_df['group'], cmap=cmap, ax=ax[0]) sns.countplot('index', data=groups_df.reset_index(), palette=cmap.colors, hue='group', ax=ax[1], saturation=1) plt.show()
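As a quick sanity check of the claim that the palettes themselves match (so the mismatch really comes from the value-to-colour mapping plus seaborn's desaturation), something like this should print True - assuming matplotlib, seaborn and numpy are installed:
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns

cmap_rgb = plt.cm.Dark2(np.arange(3))[:, :3]        # first three Dark2 colours, alpha dropped
sns_rgb = np.array(sns.color_palette('Dark2', 3))   # seaborn's first three Dark2 colours

print(np.allclose(cmap_rgb, sns_rgb))               # expected: True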
{ "pile_set_name": "StackExchange" }
Q: How Do I get Compiler to Recognize Reference to Library? I downloaded the json-c library and was trying to do some basic tests in my environment (Ubuntu with Atom and gcc). However, I seem to be missing something in my makefile because every time I try to compile I get undefined reference errors. Below is what I'm trying to run, #include "json.h" #include <stdio.h> int main() { struct json_object *jobj; char *str = "{ \"msg-type\": [ \"0xdeadbeef\", \"irc log\" ], \ \"msg-from\": { \"class\": \"soldier\", \"name\": \"Wixilav\" }, \ \"msg-to\": { \"class\": \"supreme-commander\", \"name\": \"[Redacted]\" }, \ \"msg-log\": [ \ \"soldier: Boss there is a slight problem with the piece offering to humans\", \ \"supreme-commander: Explain yourself soldier!\", \ \"soldier: Well they don't seem to move anymore...\", \ \"supreme-commander: Oh snap, I came here to see them twerk!\" \ ] \ }"; printf("str:\n---\n%s\n---\n\n", str); jobj = json_tokener_parse(str); printf("jobj from str:\n---\n%s\n---\n", json_object_to_json_string_ext(jobj, JSON_C_TO_STRING_SPACED | JSON_C_TO_STRING_PRETTY)); return 0; } According to website, I'm supposed to use the two flags to link to the library. CFLAGS += $(shell pkg-config --cflags json-c) LDFLAGS += $(shell pkg-config --libs json-c) I do that in the makefile below but it still doesn't work. CFLAGS += $(shell pkg-config --cflags json-c) LDFLAGS += $(shell pkg-config --libs json-c) CCBIN=/usr/bin/gcc CC=$(CCBIN) $(CFLAGS) $(LDFLAGS) -Wall -Wextra -pedantic -std=c99 -g -fsanitize=address default: json json: json_type.c $(CC) -O3 -o json json_type.c -lm .PHONY: clean clean: rm -Rf *.o lib/*.o json *.dSYM .PHONY: package package: tar -cvzf json.tar * When I run 'make json' I get the following errors, make json /usr/bin/gcc -I/usr/local/include/json-c -L/usr/local/lib -ljson-c -Wall -Wextra -pedantic -std=c99 -g -fsanitize=address -O3 -o json json_type.c -lm /tmp/ccXo2zav.o: In function `main': /home/zachary/Atom_Projects/Projects/SandBox/JSON/json_type.c:25: undefined reference to `json_tokener_parse' /home/zachary/Atom_Projects/Projects/SandBox/JSON/json_type.c:26: undefined reference to `json_object_to_json_string_ext' collect2: error: ld returned 1 exit status makefile:10: recipe for target 'json' failed make: *** [json] Error 1 I'm really new to writing makefiles so hopefully it's something silly. Any advice would be helpful, thanks! A: The order of object files and libraries on the linker command line is significant. Libraries are searched in the order they are encountered in order to find function to satisfy dependencies in the preceding objects. You are putting all the libraries before the objects that need them, so they won't actually be used to resolve any of those undefined references. Additionally, you are using some well-known make variables unconventionally, and making no use at all of make's built-in defaults: variable CC, if you choose to redefine it, should contain the essential command for running the C compiler. Usually this does not include any flags: CC = gcc # or CC = /usr/bin/gcc you would then put all the C compilation flags you need into the CFLAGS variable: CFLAGS = -O3 $(shell pkg-config --cflags json-c) -Wall -Wextra -pedantic -std=c99 -g -fsanitize=address I have to assume that json-c means to give you an illustrative example, not a recipe, when it suggests putting the json-c library options in LDFLAGS. 
With GNU make, the output of pkg-config --libs would be better put in variable LDLIBS: LDLIBS = $(shell pkg-config --libs json-c) With that said, using built-in variables in the conventional way matters most if you also rely on make's built-in rules. These will be entirely sufficient for your case, so you don't actually have to express a build command in your makefile, just the appropriate dependencies: json: json_type.o So, supposing that json-c and pkg-config are properly installed, and you're using GNU make, this Makefile should be sufficient: CC = /usr/bin/gcc CFLAGS = -std=c99 -O3 -g CFLAGS += $(shell pkg-config --cflags json-c) CFLAGS += -fsanitize=address -Wall -Wextra -pedantic LDLIBS = $(shell pkg-config --libs json-c) all: json json: json_type.o clean: rm -rf *.o json .PHONY: clean
{ "pile_set_name": "StackExchange" }
Q: Loop through MySql Table and output in Particular order with PHP The code below loops through a table in my MySql database and shows all the car records that match in a table cell. I know I can use the ORDER BY command in the MySql query to order by a particular column, but how can I place a particular record on top and then list all the remaining matches? For example, I have a simple form before this where the user selects their favourite car ($_POST['user_car']) and the required spec ($_POST['user_spec']) with the $_POST method. So my query shows all records that match the spec selected, but I want the selected car to be at the very top of the list with the remaining to follow. $query_params = array( ':spec' => $_POST['user_spec'] ); $query = " SELECT * FROM table WHERE spec=:spec "; try { $stmt = DB::get()->prepare($query); $stmt->execute($query_params); $rows = $stmt->fetchAll(); } catch(PDOException $ex) {} foreach($rows as $row): $cell .= '<td> <input type="checkbox" '.$check.' name="spec[]" value="'.$row['id'].'"/><span> '.$row['spec'].' </td>'; endforeach; A: The ORDER BY does not limit you to just columns; you can put evaluated expressions in there too. SELECT * FROM table WHERE spec=:spec ORDER BY car_name=:user_car DESC, car_name ASC If the user's favourite car is 'mini', then the record with a car_name of 'mini' will evaluate to 1 and the rest to 0, so it will appear first. The rest of the cars will then be ordered by their name.
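The same boolean-sort-key trick, shown outside SQL as a tiny Python sketch just for intuition (the car list and favourite here are made up):
cars = ["audi", "mini", "bmw", "citroen"]
favourite = "mini"                     # what the user picked in the form

ordered = sorted(cars, key=lambda name: (name != favourite, name))
print(ordered)                         # ['mini', 'audi', 'bmw', 'citroen']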
{ "pile_set_name": "StackExchange" }
Q: Check if value entered has digits in it I'm trying to get my program to check if the value entered for password_entered has a digit somewhere in it. I currently have C code that looks like: int CFD(char password_entered[]); int main() { char password_entered[20]; //max is 20 char /*do{*/ printf("password?? \n"); scanf("%s", password_entered); if(CFD(password_entered)) { //contains digit(s) character. } else{ // no digits } /*}*/ return 0; } int CFD(char password_entered[]){ int i; for(i=0; i<strlen(password_entered); i++){ if( isdigit(password_entered[i]) ){ printf("\ndigit(s).\n"); return true; } else{ printf("\nno digit(s).\n"); return false; } } } The issue is, when I run my code, it keeps falling into the else{}. What am I doing wrong here? Part of the issue: if I enter 3 it works, but not John3 - how can I better evaluate this? Thank you! A: You only test the first character of your password and then immediately return - the else branch returns false on the very first iteration. Try this: int CFD(char password_entered[]) { int i; for (i=0; i<strlen(password_entered); i++) { if ( isdigit(password_entered[i]) ) { printf("\ndigit(s).\n"); return 1; } } printf("\nno digit(s).\n"); return 0; }
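For comparison only (not part of the C fix), the same "decide after looking at every character" check is a one-liner in Python:
def contains_digit(password):
    return any(ch.isdigit() for ch in password)

print(contains_digit("3"))       # True
print(contains_digit("John3"))   # True
print(contains_digit("John"))    # False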
{ "pile_set_name": "StackExchange" }