<issue_start>username_0: I want to minimize the following LPP: c=60x+40y+50z subject to 20x+10y+10z>=350, 10x+10y+20z>=400, x,y,z>=0. My code snippet is the following (I'm using the scipy package for the first time): ``` from scipy.optimize import linprog c = [60, 40, 50] A = [[20,10], [10,10],[10,20]] b = [350,400] res = linprog(c, A, b) print(res) ``` The output is: [screenshot of the output in Pycharm](https://i.stack.imgur.com/ZEhMh.png) 1. Can someone explain the parameters of the linprog function in detail, especially how the bounds will be calculated? 2. Have I written the parameters right? I am new to LPP basics, and I think I am misunderstanding the parameters.<issue_comment>username_1: [`linprog`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.linprog.html) expects `A` to have one row per inequality and one column per variable, and not the other way around. Try this: ``` from scipy.optimize import linprog c = [60, 40, 50] A = [[20, 10, 10], [10, 10, 20]] b = [350, 400] res = linprog(c, A, b) print(res) ``` Output: ``` fun: -0.0 message: 'Optimization terminated successfully.' nit: 0 slack: array([ 350., 400.]) status: 0 success: True x: array([ 0., 0., 0.]) ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: The message is telling you that your `A_ub` matrix has incorrect dimensions. It is currently a 3x2 matrix which cannot left-multiply your 3x1 optimization variable `x`. You need to write: ``` A = [[20,10, 10], [10,10,20]] ``` which is a 2x3 matrix and can left-multiply `x`. Upvotes: 1
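One detail neither answer touches: `linprog` minimizes subject to `A_ub @ x <= b_ub`, while the constraints in the question are `>=` constraints. With the corrected shape alone, `x = 0` stays feasible and optimal, which is why the accepted answer's output shows `fun: -0.0` and `x: array([0., 0., 0.])`. A sketch (assuming a recent SciPy) that multiplies each `>=` row by -1 to encode it as `<=`:

```python
# Minimize 60x + 40y + 50z subject to
#   20x + 10y + 10z >= 350
#   10x + 10y + 20z >= 400,  and x, y, z >= 0.
# linprog expects "<=" rows, so each ">=" row is negated.
from scipy.optimize import linprog

c = [60, 40, 50]
A_ub = [[-20, -10, -10],   # -(20x + 10y + 10z) <= -350
        [-10, -10, -20]]   # -(10x + 10y + 20z) <= -400
b_ub = [-350, -400]

# Variable bounds default to (0, None), i.e. x, y, z >= 0.
res = linprog(c, A_ub=A_ub, b_ub=b_ub)
print(res.fun, res.x)  # optimum 1350.0 at x = 10, y = 0, z = 15
```

Checking the reported optimum by hand: 20·10 + 10·0 + 10·15 = 350 and 10·10 + 10·0 + 20·15 = 400, so both constraints bind, and the objective is 60·10 + 50·15 = 1350.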
<issue_start>username_0: I have some troubles using the pipe operator (%>%) with the unique function. ``` df = data.frame( a = c(1,2,3,1), b = 'a') unique(df$a) # no problem here df %>% unique(.$a) # not working here # I got "Error: argument 'incomparables != FALSE' is not used (yet)" ``` Any idea?<issue_comment>username_1: What is happening is that `%>%` takes the object on the left hand side and feeds it into the first argument of the function by default, and then will feed in other arguments as provided. Here is an example: ``` df = data.frame( a = c(1,2,3,1), b = 'a') MyFun<-function(x,y=FALSE){ return(match.call()) } > df %>% MyFun(.$a) MyFun(x = ., y = .$a) ``` What is happening is that `%>%` is matching `df` to `x` and `.$a` to `y`. So for `unique` your code is being interpreted as: `unique(x=df, incomparables=.$a)` which explains the error. For your case you need to pull out `a` before you run unique. If you want to keep with `%>%` you can use `df %>% .$a %>% unique()` but obviously there are lots of other ways to do that. Upvotes: 3 <issue_comment>username_2: As other answers mention : `df %>% unique(.$a)` is equivalent to `df %>% unique(.,.$a)`. To force the dots to be explicit you can do: ``` df %>% {unique(.$a)} # [1] 1 2 3 ``` An alternative option from `magrittr` ``` df %$% unique(a) # [1] 1 2 3 ``` Or possibly stating the obvious: ``` df$a %>% unique() # [1] 1 2 3 ``` Upvotes: 4 [selected_answer]
<issue_start>username_0: I used the following code to include my js file: ```php Meta::addJs('admin_js', '/resources/assets/admin_js/admin_app.js'); ``` The file exists but in the console I see status 404. If I move the file to 'public' folder - all ok. But I want that this file be stored in 'resources' directory<issue_comment>username_1: 1. you have to move js files to `public` or store into `storage` and make symlinks. 2. or you need to create symlinks the `resources` directory to a `public` directory(which is not recommended). 3. you need to use the most recommended and effective method of Laravel of using Laravel mix. Please use the link below to read about laravel mix a solution. <https://laravel.com/docs/5.6/mix> It will allow you to place your `js` assets into `resources` directory and make compressed js file in `public` which will be used by the setup. Upvotes: 3 <issue_comment>username_2: i don't know if i am late but when i needed to load js file from directories and sub directories in my view file, i did this and it worked perfectly for me. BTW i use laravel 5.7. first of all i wrote a function that searched for any file in any given directory with this . ```php /** * Search for file and return full path. * * @param string $pattern * @return array */ function rglob($pattern, $flags = 0) { $files = glob($pattern, $flags); foreach (glob(dirname($pattern).'/*', GLOB_ONLYDIR|GLOB_NOSORT) as $dir) { $files = array_merge($files, rglob($dir.'/'.basename($pattern), $flags)); } return $files; } ``` the above will return a full path for each ".js" file in my "resources/view" directory. Then i made a call to the above function to copy the content of each js to a single js file (new.js) which i created in my public/js using the below function. ```php /** * Load js file from each view subdirectory into public js. 
* * @return void */ function load_js_function(){ //call to the previous function that returns all js files in view directory $files = rglob(resource_path('views').'/*/*.js'); foreach($files as $file) { copy($file, base_path('public/js/new.js')); } } ``` After this i made call to the load\_js\_function() in my master blade layout(where i load all my css,js etc) immediately after loading the public/js/new.js, you can see below. ```html {{ load_js_function() }} ``` These solution updates the file in public as you update the content of the original file. Vote up if it works for you and comment if you have an issue implementing this, i will be glad to shed more light. cheers Upvotes: 0 <issue_comment>username_3: Update for LARAVEL 7.0 : Adding a vanilla JavaScript file to ‘resources’ : 1. Create the file in `resources/js` (called '**file.js**' in this example) 2. Go to `webpack.mix.js` file (at the bottom) 3. Add there: `mix.js('resources/js/file.js', 'public/js');` 4. Run in terminal : `npm run production` Upvotes: 2 <issue_comment>username_4: **Update for LARAVEL 9.0:** Laravel now by default uses Vite instead of Mix Adding a vanilla JavaScript file to ‘resources’ : 1. Create the file in resources/js (called 'file.js' in this example) 2. Go to vite.config.js file and add your filename to input ``` input: [ // rest of your inputs 'resources/js/file.js', ], ``` 3. Add reference to your : ``` {{-- rest of your head --}} @vite('resources/js/file.js') ``` 4. Run `npm run dev` (run `npm install` if you haven't already) More information can be found in the [documentation](https://laravel.com/docs/10.x/vite). Upvotes: 0
<issue_start>username_0: I did everything step by step as mentioned in angular site and still requests aren't proxied. 8080 - my springboot app and backend 4200 - my Angular2 frontend In Angular2 project I have file proxy.cons.json with content like this: ``` { "/api": { "target": "http://localhost:8080", "secure": false, "changeOrigin": true, "logLevel": "debug" } } ``` In Angular2 package.json I changed start procedure to "start": "ng serve --proxy-config proxy.conf.json" When I type inside commander npm start then at the start I can see Proxy created: /api -> <http://localhost:8080>. Well, so far is good I guess. I'm trying to send a request (Angular2) Below is my service.ts file ``` import { Injectable } from '@angular/core'; import { Headers, Http } from '@angular/http'; import 'rxjs/add/operator/toPromise'; @Injectable() export class SpipService { constructor(private http: Http) { } callRest(): Promise { let context:String = window.location.pathname.split('/').slice(0,-1).join('/'); console.log("Please check--------------->"); return this.http.get("/api/serviceContext/rest/hello/") .toPromise() .then(response => response.text() as string) .catch(this.handleError); } private handleError(error: any): Promise { console.error('Some error occured', error); return Promise.reject(error.message || error); } } ``` package.json: ``` { "name": "angular", "version": "0.0.0", "license": "MIT", "scripts": { "ng": "ng", "start": "ng serve --proxy-config proxy.conf.json", "build": "ng build --prod", "test": "ng test", "lint": "ng lint", "e2e": "ng e2e" }, "private": true, "dependencies": { "@angular/animations": "^5.2.0", "@angular/common": "^5.2.0", "@angular/compiler": "^5.2.0", "@angular/core": "^5.2.0", "@angular/forms": "^5.2.0", "@angular/http": "^5.2.0", "@angular/platform-browser": "^5.2.0", "@angular/platform-browser-dynamic": "^5.2.0", "@angular/router": "^5.2.0", "core-js": "^2.4.1", "rxjs": "^5.5.6", "zone.js": "^0.8.19" }, "devDependencies": { "@angular/cli": 
"1.6.8", "@angular/compiler-cli": "^5.2.0", "@angular/language-service": "^5.2.0", "@types/jasmine": "~2.8.3", "@types/jasminewd2": "~2.0.2", "@types/node": "~6.0.60", "codelyzer": "^4.0.1", "jasmine-core": "~2.8.0", "jasmine-spec-reporter": "~4.2.1", "karma": "~2.0.0", "karma-chrome-launcher": "~2.2.0", "karma-coverage-istanbul-reporter": "^1.2.1", "karma-jasmine": "~1.1.0", "karma-jasmine-html-reporter": "^0.2.2", "protractor": "~5.1.2", "ts-node": "~4.1.0", "tslint": "~5.9.1", "typescript": "~2.5.3" } } ``` I'm getting an error that <http://localhost:4200/api/serviceContext/rest/hello/> 404 (Not Found). As we can see, nothing has been proxied. Why? To be clear. When I go manually, all works fine. Please guide.<issue_comment>username_1: Your JSON should be this ``` { "/api": { "target": "http://localhost:8080", "secure": false, "changeOrigin": true, "logLevel": "debug" } } ``` Making your calls look like this ``` return this.http.get("/api/serviceContext/rest/hello/"). ``` **EDIT** My own configuration : ``` { "/api": { "target": "http://private-url-your-peeker:8765/", "secure": false, "changeOrigin": true, "logLevel": "debug" } } ``` My calls ``` return this.http.get("api/serviceContext/rest/hello/"). ``` Upvotes: 1 <issue_comment>username_2: After i modified by `proxy.conf.json` as below all things seems to be working well. Hope it helps someone in future: ``` { "/api/*": { "target": "http://localhost:8080", "secure": false, "pathRewrite": {"^/api" : ""}, "changeOrigin": true, "logLevel": "debug" } } ``` Upvotes: 2
<issue_start>username_0: I am new to React, so I am looking at how I can render the data of image, name and user name from the render method of one component in other components. ``` class Avatar extends React.Component { render() { return ( <img src={''} /> ) } } class Label extends React.Component { render() { return ( <h1>Name:</h1> ) } } class ScreenName extends React.Component { render() { return ( <h3>Username:</h3> ) } } class Badge extends React.Component { render() { return ( ) } } ``` And the render method is this. How to read this image, username and name into other components and update the view? Tried using {this.props.name} and also {this.props.user.name} but I am getting name as undefined. ``` ReactDOM.render( , document.getElementById('app') ); ``` And the HTML is this ``` ```<issue_comment>username_1: You can pass the data via props <https://codesandbox.io/s/o4nz576jn5> Upvotes: 0 <issue_comment>username_2: You pass data via the component's props. It looks like this: ``` class Avatar extends React.Component { render() { return ( <img src={this.props.img} /> ) } } class Label extends React.Component { render() { return ( <h1>{this.props.name}</h1> ) } } class ScreenName extends React.Component { render() { return ( <h3>{this.props.username}</h3> ) } } class Badge extends React.Component { render() { return ( ) } } ``` And after some refactoring, you end up with this: ``` const Avatar = ({img}) => ( <img src={img} /> ); const Label = ({name}) => ( <h1>{name}</h1> ); const ScreenName = ({username}) => ( <h3>{username}</h3> ); const Badge = ({user}) => ( ) ``` Note that here we made use of so-called **functional stateless components**, which can make your code a lot shorter and often more elegant. See [here](https://hackernoon.com/react-stateless-functional-components-nine-wins-you-might-have-overlooked-997b0d933dbc). Upvotes: 1
<issue_start>username_0: I am working on an Android project. I am having a problem with the login activity. I am getting a JSONException for not converting String into JSONObject, and also a connection timeout error. I am using the Android Volley library and a MySQL XAMPP server. Here is my Login.php ``` php include("Connection.php"); if(isset($_POST["email"]) && isset($_POST["password"])) { $email=$_POST["email"]; $password=$_POST["password"]; $result = mysqli_query($conn, "select * from user_master where email='$email' && password='$password'"); if(mysqli_num_rows($result) > 0) { echo "success"; exit; } else { echo "INVALID"; exit; } } ?> ``` LoginRequest.java ``` package com.talentakeaways.ttpms; import com.android.volley.AuthFailureError; import com.android.volley.Response; import com.android.volley.toolbox.StringRequest; import java.util.HashMap; import java.util.Map; /** * Created by chand on 15-03-2018. */ public class LoginRequest extends StringRequest { private static final String LOGIN_URL = "http://10.26.16.22:80/Ttpms/login.php"; private Map parameters; LoginRequest(String username, String password, Response.Listener listener, Response.ErrorListener errorListener) { super(Method.POST, LOGIN_URL, listener, errorListener); parameters = new HashMap<>(); parameters.put("email", username); parameters.put("password", <PASSWORD>); } @Override protected Map getParams() throws AuthFailureError { return parameters; } } ``` Ttpm_Login.java ``` package com.talentakeaways.ttpms; import android.app.ProgressDialog; import android.content.Intent; import android.os.Bundle; import android.support.v7.app.AppCompatActivity; import android.util.Log; import android.view.View; import android.widget.Button; import android.widget.EditText; import android.widget.Toast; import com.android.volley.NetworkError; import com.android.volley.RequestQueue; import com.android.volley.Response; import com.android.volley.ServerError; import com.android.volley.TimeoutError; import
com.android.volley.VolleyError; import com.android.volley.toolbox.Volley; import org.json.JSONException; import org.json.JSONObject; import info.androidhive.androidsplashscreentimer.R; public class Ttpm_Login extends AppCompatActivity { //declaration of edit text, button and string values EditText tenantname, passWord; Button bt_login; String userName, password; protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_ttpm_login); setTitle("Login"); //set title of the activity initialize(); final RequestQueue requestQueue = Volley.newRequestQueue(Ttpm_Login.this); //onClickListener method for button bt_login.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View v) { //assigning String variables to the text in edit texts userName = tenantname.getText().toString(); password = <PASSWORD>(); //Validating the String values if (validateUsername(userName) && validatePassword(password)) { //Start ProgressDialog final ProgressDialog progressDialog = new ProgressDialog(Ttpm_Login.this); progressDialog.setTitle("Please Wait"); progressDialog.setMessage("Logging You In"); progressDialog.setCancelable(false); progressDialog.show(); //LoginRequest from class LoginRequest LoginRequest loginRequest = new LoginRequest(userName, password, new Response.Listener() { @Override public void onResponse(String response) { Log.i("Login Response", response); progressDialog.dismiss(); try { JSONObject jsonObject = new JSONObject(response); //If Success then start Dashboard Activity if (jsonObject.getBoolean("success")) { Intent loginSuccess = new Intent(getApplicationContext(), Ttpm\_Dashboard.class); startActivity(loginSuccess); finish(); } //else Invalid else { if (jsonObject.getString("status").equals("INVALID")) Toast.makeText(getApplicationContext(), "User Not Found", Toast.LENGTH\_SHORT).show(); else { Toast.makeText(getApplicationContext(), "Passwords Don't Match", Toast.LENGTH\_SHORT).show(); 
} } } catch (JSONException e) { e.printStackTrace(); Log.getStackTraceString(e); Toast.makeText(Ttpm\_Login.this, "Bad Response from the Server", Toast.LENGTH\_SHORT).show(); } } }, new Response.ErrorListener() { @Override public void onErrorResponse(VolleyError error) { progressDialog.dismiss(); if (error instanceof ServerError) { Toast.makeText(Ttpm\_Login.this, "Server Error", Toast.LENGTH\_SHORT).show(); } else if (error instanceof TimeoutError) { Toast.makeText(Ttpm\_Login.this, "Connection Timed Out", Toast.LENGTH\_SHORT).show(); } else if (error instanceof NetworkError) { Toast.makeText(Ttpm\_Login.this, "Bad Network Connection", Toast.LENGTH\_SHORT).show(); } } }); requestQueue.add(loginRequest); } } }); } private void initialize() { tenantname = findViewById(R.id.tenantname); passWord = findViewById(R.id.password); bt\_login = findViewById(R.id.login); } private boolean validateUsername(String string) { //Validating the entered USERNAME if (string.equals("")) { tenantname.setError("Enter a Username"); return false; } else if (string.length() > 50) { tenantname.setError("Maximum 50 Characters"); return false; } else if (string.length() < 6) { tenantname.setError("Minimum 6 Characters"); return false; } tenantname.setEnabled(false); return true; } private boolean validatePassword(String string) { //Validating the entered PASSWORD if (string.equals("")) { passWord.setError("Enter Your Password"); return false; } else if (string.length() > 32) { passWord.setError("Maximum 32 Characters"); return false; } // else if (string.length() < 8) { // passWord.setError("Minimum 8 Characters"); // return false; // } // passWord.setEnabled(false); return true; } } ``` please help. thanks in advance<issue_comment>username_1: With this line ``` JSONObject jsonObject = new JSONObject(response); ``` you are trying to read the response as a Json, but the response is not a Json at all. 
Please open your PHP page with Postman, so you can see the response you get and properly understand the issue. Upvotes: 2 [selected_answer]<issue_comment>username_2: When your PHP server returns information, it's the characters ``` success ``` or ``` INVALID ``` But if you try to parse these values as JSON, you will get a failure ``` JSON.parse("success") 09:59:02.442 VM271:1 Uncaught SyntaxError: Unexpected token a in JSON at position 0 at JSON.parse ``` Valid JSON values are [string, number, array, object, true, false, and null](https://json.org/) So you could fix your PHP server to echo a string, number, array, object, true, false or null, like so ``` if(mysqli_num_rows($result) > 0) { echo '"success"'; exit; } else { echo '"INVALID"'; exit; } ``` Upvotes: 0
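The failure mode username_2 describes is easy to reproduce outside the browser as well; the same rule applies in Python's standard `json` module, shown here purely as an illustration (the client-side Java code also expects a JSON object with `success` and `status` fields, so an object payload is sketched too):

```python
import json

# A bare word is not valid JSON, which is exactly why the Android
# client's `new JSONObject(response)` call throws a JSONException.
try:
    json.loads("success")
except json.JSONDecodeError as e:
    print("not JSON:", e)

# A quoted string *is* a valid JSON value.
print(json.loads('"success"'))  # -> success

# An object such as {"success": true, "status": "success"} matches what
# the Java code reads via jsonObject.getBoolean() / getString().
payload = json.dumps({"success": True, "status": "success"})
print(json.loads(payload)["status"])  # -> success
```

So the PHP side should emit a valid JSON value (ideally an object), and the Java side can keep parsing the response with `new JSONObject(response)`.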
<issue_start>username_0: I Have two arrays ``` "index":`[2004,2005,2006,2007,2008,2009,2010,2011,2012,2013,2014,2015,2016,2017,2018]` ``` , and ``` "data":`[[null,null,null,63.0], [null,null,57.2307692308,null], [null,null,15.0,27.6666666667], [null,null,10.75,12.0], [null,213.0,14.6666666667,48.5], [null,128.0,21.25,23.0], [627.7647058824,113.0,36.7,19.5], [1201.0,132.0,523.7950819672,146.3708333333], [5414.1873111782,74.6666666667,668.5256916996,739.8781725888], [14697.0882352941,130.5,2812.1279069767,1258.0739856802], [11784.9188034188,1700.5353982301,4192.4097560976,1443.0708661417], [11256.6581196581,1218.4015748031,9908.7030075188,2055.599078341], [14321.3364485981,1032.9083333333,22745.9067357513,2683.6695652174], [16341.3267326733,491.2529411765,23721.7028571429,9549.88252149], [10479.1470588235,347.04,21638.5,16300.375]]} ``` ` Please suggest the best and suitable method in javascript.. ``` { x: 2004, y: null }, { x: 2005, y: null }, { x: 2006, y: null }, { x: 2007, y: null }, { x: 2008, y: 213.0 }, { x: 2009, y: 128.0 }, { x: 2010, y: 627.7647058824}, { x: 2011, y: 1201.0}, { x: 2012, y: 5414.1873111782}, { x: 2013, y: 14697.0882352941}, { x: 2014, y: 11784.9188034188}, { x: 2015, y: 11256.6581196581}, { x: 2016, y: 14321.3364485981}, { x: 2017, y: 16341.3267326733}, { x: 2018, y: 10479.1470588235} ``` like wise total 4 arrays for 4 elements in array 2. Please help me with the code in javascript<issue_comment>username_1: You should use the funcion `reduce` for this, this will allow you to iterate into one array and have easier control on the new array. Read the docs on how to use this: <https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/Reduce> . Hope this helps! Upvotes: 0 <issue_comment>username_2: The main problem is, you have an array which is switched index wise for mapping. 
You need a different approach and with `reduce`, you could iterate the outer array and the inner array and switch the indices for pushing the result object. ```js var data = [[null, null, null, 63.0], [null, null, 57.2307692308, null], [null, null, 15.0, 27.6666666667], [null, null, 10.75, 12.0], [null, 213.0, 14.6666666667, 48.5], [null, 128.0, 21.25, 23.0], [627.7647058824, 113.0, 36.7, 19.5], [1201.0, 132.0, 523.7950819672, 146.3708333333], [5414.1873111782, 74.6666666667, 668.5256916996, 739.8781725888], [14697.0882352941, 130.5, 2812.1279069767, 1258.0739856802], [11784.9188034188, 1700.5353982301, 4192.4097560976, 1443.0708661417], [11256.6581196581, 1218.4015748031, 9908.7030075188, 2055.599078341], [14321.3364485981, 1032.9083333333, 22745.9067357513, 2683.6695652174], [16341.3267326733, 491.2529411765, 23721.7028571429, 9549.88252149], [10479.1470588235, 347.04, 21638.5, 16300.375]], index = [2004, 2005, 2006, 2007, 2008, 2009, 2010, 2011, 2012, 2013, 2014, 2015, 2016, 2017, 2018], result = data.reduce((r, a, i) => (a.forEach((y, j) => (r[j] = r[j] || []).push({ x: index[i], y })), r), []); console.log(result); ``` ```css .as-console-wrapper { max-height: 100% !important; top: 0; } ``` Upvotes: 3 [selected_answer]<issue_comment>username_3: I think my js tools can help you <https://github.com/wm123450405/linqjs> ```js let index = [2004,2005,2006,2007,2008,2009,2010,2011,2012,2013,2014,2015,2016,2017,2018]; let data = 
[[null,null,null,63.0],[null,null,57.2307692308,null],[null,null,15.0,27.6666666667],[null,null,10.75,12.0],[null,213.0,14.6666666667,48.5],[null,128.0,21.25,23.0],[627.7647058824,113.0,36.7,19.5],[1201.0,132.0,523.7950819672,146.3708333333],[5414.1873111782,74.6666666667,668.5256916996,739.8781725888],[14697.0882352941,130.5,2812.1279069767,1258.0739856802],[11784.9188034188,1700.5353982301,4192.4097560976,1443.0708661417],[11256.6581196581,1218.4015748031,9908.7030075188,2055.599078341],[14321.3364485981,1032.9083333333,22745.9067357513,2683.6695652174],[16341.3267326733,491.2529411765,23721.7028571429,9549.88252149],[10479.1470588235,347.04,21638.5,16300.375]]; console.log(index.asEnumerable().zip(data, (i, d) => ({ x: i, y: d.asEnumerable().max() })).toArray()); ``` Or you can use Array.prototype.map ```js let index = [2004,2005,2006,2007,2008,2009,2010,2011,2012,2013,2014,2015,2016,2017,2018]; let data = [[null,null,null,63.0],[null,null,57.2307692308,null],[null,null,15.0,27.6666666667],[null,null,10.75,12.0],[null,213.0,14.6666666667,48.5],[null,128.0,21.25,23.0],[627.7647058824,113.0,36.7,19.5],[1201.0,132.0,523.7950819672,146.3708333333],[5414.1873111782,74.6666666667,668.5256916996,739.8781725888],[14697.0882352941,130.5,2812.1279069767,1258.0739856802],[11784.9188034188,1700.5353982301,4192.4097560976,1443.0708661417],[11256.6581196581,1218.4015748031,9908.7030075188,2055.599078341],[14321.3364485981,1032.9083333333,22745.9067357513,2683.6695652174],[16341.3267326733,491.2529411765,23721.7028571429,9549.88252149],[10479.1470588235,347.04,21638.5,16300.375]]; console.log(index.map((x, i) => ({ x, y: Math.max(...data[i]) }))); ``` Upvotes: 0
<issue_start>username_0: I have a subprocess that either quits with a returncode, or asks something and waits for user input. I would like to detect when the process asks the question and quit immediately. The fact that the process asks the question or not is enough for me to decide the state of the system. The problem is that I cannot read the question because the child process probably does not flush standard output. So I cannot rely on parsing `subprocess.Popen().stdout`: when trying to read it, well, it blocks because input is being read first. A bit like this ``` # ask.py, just asks something without printing anything if a condition is met # here, we'll say that the condition is always met input() ``` Of course, the actual subprocess is a third party binary, and I cannot modify it easily to add the necessary flush calls, which would solve it. I could also try the Windows equivalent of `unbuffer` ([What is the equivalent of unbuffer program on Windows?](https://stackoverflow.com/questions/11516258/what-is-the-equivalent-of-unbuffer-program-on-windows)) which is called `winpty`, which would (maybe) allow me to detect output and solve my current issue, but I'd like to keep it simple and I'd like to solve the standard *input* issue first... I tried... well, lots of things that don't work, including trying to pass a fake file as `stdin` argument, which doesn't work because `subprocess` takes the `fileno` of the file, and we cannot feed it rubbish... ``` p = subprocess.Popen(["python","ask.py"],...) ``` Using `communicate` with a string doesn't work either, because you cannot control when the string is read to be fed to the subprocess (probably through a system pipe). 
Those questions were promising but either relied on standard output, or only apply to Linux * [Detecting when a child process is waiting for input](https://stackoverflow.com/questions/18107541/detecting-when-a-child-process-is-waiting-for-input) * [How can I know whether my subprocess is waiting for my input ?(in python3)](https://stackoverflow.com/questions/12180073/how-can-i-know-whether-my-subprocess-is-waiting-for-my-input-in-python3) What I'm currently doing is running the process with a timeout, and if the timeout is reached, I then decide that the program is blocked. But it costs the timeout waiting time. If I could decide as soon as `stdin` is read by the subprocess, that would be better. I'd like to know if there's a native python solution (possibly using `ctypes` and windows extensions) to detect read from stdin. But a native solution that doesn't use Python but a non-Microsoft proprietary language could do.<issue_comment>username_1: My idea to find out if the subprocess reads user input is to (ab)use the fact that file objects are stateful: if the process reads data from its stdin, we should be able to detect a change in the stdin's state. The procedure is as follows: 1. Create a temporary file that'll be used as the subprocess's stdin 2. Write some data to the file 3. Start the process 4. 
Wait a little while for the process to read the data (or not), then use the [`tell()`](https://docs.python.org/3/library/io.html#io.IOBase.tell) method to find out if anything has been read from the file This is the code: ``` import os import time import tempfile import subprocess # create a file that we can use as the stdin for the subprocess with tempfile.TemporaryFile() as proc_stdin: # write some data to the file for the subprocess to read proc_stdin.write(b'whatever\r\n') proc_stdin.seek(0) # start the thing cmd = ["python","ask.py"] proc = subprocess.Popen(cmd, stdin=proc_stdin, stdout=subprocess.PIPE) # wait for it to start up and do its thing time.sleep(1) # now check if the subprocess read any data from the file if proc_stdin.tell() == 0: print("it didn't take input") else: print("it took input") ``` Ideally the temporary file could be replaced by some kind of pipe or something that doesn't write any data to disk, but unfortunately I couldn't find a way to make it work without a real on-disk file. Upvotes: 2 <issue_comment>username_2: if we not want let to child process process user input, but simply kill it in this case, solution can be next: * start child process with redirected *stdin* to pipe. * pipe server end we create in asynchronous mode and main set pipe buffer to 0 size * before start child - write 1 byte to this pipe. * because pipe buffer is 0 size - operation not complete, until another side not read this byte * after we write this 1 byte and operation in progress (pending) - start child process. * finally begin wait what complete first: write operation or child process ? 
* if write complete first - this mean, that child process begin read from *stdin* - so kill it at this point one possible implementation on *c++*: ``` struct ReadWriteContext : public OVERLAPPED { enum OpType : char { e_write, e_read } _op; BOOLEAN _bCompleted; ReadWriteContext(OpType op) : _op(op), _bCompleted(false) { } }; VOID WINAPI OnReadWrite(DWORD dwErrorCode, DWORD dwNumberOfBytesTransfered, OVERLAPPED* lpOverlapped) { static_cast(lpOverlapped)->\_bCompleted = TRUE; DbgPrint("%u:%x %p\n", static\_cast(lpOverlapped)->\_op, dwErrorCode, dwNumberOfBytesTransfered); } void nul(PCWSTR lpApplicationName) { ReadWriteContext wc(ReadWriteContext::e\_write), rc(ReadWriteContext::e\_read); static const WCHAR pipename[] = L"\\\\?\\pipe\\{221B9EC9-85E6-4b64-9B70-249026EFAEAF}"; if (HANDLE hPipe = CreateNamedPipeW(pipename, PIPE\_ACCESS\_DUPLEX|FILE\_FLAG\_OVERLAPPED, PIPE\_TYPE\_BYTE|PIPE\_READMODE\_BYTE|PIPE\_WAIT, 1, 0, 0, 0, 0)) { static SECURITY\_ATTRIBUTES sa = { sizeof(sa), 0, TRUE }; PROCESS\_INFORMATION pi; STARTUPINFOW si = { sizeof(si)}; si.dwFlags = STARTF\_USESTDHANDLES; si.hStdInput = CreateFileW(pipename, FILE\_GENERIC\_READ|FILE\_GENERIC\_WRITE, 0, &sa, OPEN\_EXISTING, 0, 0); if (INVALID\_HANDLE\_VALUE != si.hStdInput) { char buf[256]; if (WriteFileEx(hPipe, "\n", 1, &wc, OnReadWrite)) { si.hStdError = si.hStdOutput = si.hStdInput; if (CreateProcessW(lpApplicationName, 0, 0, 0, TRUE, CREATE\_NO\_WINDOW, 0, 0, &si, π)) { CloseHandle(pi.hThread); BOOLEAN bQuit = true; goto \_\_read; do { bQuit = true; switch (WaitForSingleObjectEx(pi.hProcess, INFINITE, TRUE)) { case WAIT\_OBJECT\_0: DbgPrint("child terminated\n"); break; case WAIT\_IO\_COMPLETION: if (wc.\_bCompleted) { DbgPrint("child read from hStdInput!\n"); TerminateProcess(pi.hProcess, 0); } else if (rc.\_bCompleted) { \_\_read: rc.\_bCompleted = false; if (ReadFileEx(hPipe, buf, sizeof(buf), &rc, OnReadWrite)) { bQuit = false; } } break; default: \_\_debugbreak(); } } while (!bQuit); 
CloseHandle(pi.hProcess); } } CloseHandle(si.hStdInput); // let execute pending apc SleepEx(0, TRUE); } CloseHandle(hPipe); } } ``` --- another variant of code - use event completion, instead apc. however this not affect final result. this variant of code give absolute the same result as first: ``` void nul(PCWSTR lpApplicationName) { OVERLAPPED ovw = {}, ovr = {}; if (ovr.hEvent = CreateEvent(0, 0, 0, 0)) { if (ovw.hEvent = CreateEvent(0, 0, 0, 0)) { static const WCHAR pipename[] = L"\\\\?\\pipe\\{221B9EC9-85E6-4b64-9B70-249026EFAEAF}"; if (HANDLE hPipe = CreateNamedPipeW(pipename, PIPE_ACCESS_DUPLEX|FILE_FLAG_OVERLAPPED, PIPE_TYPE_BYTE|PIPE_READMODE_BYTE|PIPE_WAIT, 1, 0, 0, 0, 0)) { static SECURITY_ATTRIBUTES sa = { sizeof(sa), 0, TRUE }; PROCESS_INFORMATION pi; STARTUPINFOW si = { sizeof(si)}; si.dwFlags = STARTF_USESTDHANDLES; si.hStdInput = CreateFileW(pipename, FILE_GENERIC_READ|FILE_GENERIC_WRITE, 0, &sa, OPEN_EXISTING, 0, 0); if (INVALID_HANDLE_VALUE != si.hStdInput) { char buf[256]; if (!WriteFile(hPipe, "\n", 1, 0, &ovw) && GetLastError() == ERROR_IO_PENDING) { si.hStdError = si.hStdOutput = si.hStdInput; if (CreateProcessW(lpApplicationName, 0, 0, 0, TRUE, CREATE_NO_WINDOW, 0, 0, &si, π)) { CloseHandle(pi.hThread); BOOLEAN bQuit = true; HANDLE h[] = { ovr.hEvent, ovw.hEvent, pi.hProcess }; goto __read; do { bQuit = true; switch (WaitForMultipleObjects(3, h, false, INFINITE)) { case WAIT_OBJECT_0 + 0://read completed __read: if (ReadFile(hPipe, buf, sizeof(buf), 0, &ovr) || GetLastError() == ERROR_IO_PENDING) { bQuit = false; } break; case WAIT_OBJECT_0 + 1://write completed DbgPrint("child read from hStdInput!\n"); TerminateProcess(pi.hProcess, 0); break; case WAIT_OBJECT_0 + 2://process terminated DbgPrint("child terminated\n"); break; default: __debugbreak(); } } while (!bQuit); CloseHandle(pi.hProcess); } } CloseHandle(si.hStdInput); } CloseHandle(hPipe); // all pending operation completed here. 
} CloseHandle(ovw.hEvent); } CloseHandle(ovr.hEvent); } } ``` Upvotes: 3 [selected_answer]
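For contrast with the Windows-specific code above, the same probe-byte trick can be sketched on Linux in Python: pre-load one byte into the pipe serving as the child's stdin, then watch with the `FIONREAD` ioctl whether the byte gets drained; if it does, the child is reading stdin and can be killed. Everything here (the helper name, the poll interval, the use of `FIONREAD` on the parent's read end) is an illustrative assumption, not part of the answer above:

```python
import fcntl
import os
import struct
import subprocess
import sys
import termios
import time


def took_stdin_input(argv, timeout=5.0):
    """Return True if the child read from stdin before exiting (Linux-only sketch)."""
    r, w = os.pipe()
    os.write(w, b"\n")  # the one probe byte the child may read
    proc = subprocess.Popen(argv, stdin=r,
                            stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    consumed = False
    deadline = time.monotonic() + timeout
    try:
        while time.monotonic() < deadline:
            # FIONREAD on the read end: how many unread bytes sit in the pipe?
            (pending,) = struct.unpack(
                "i", fcntl.ioctl(r, termios.FIONREAD, struct.pack("i", 0)))
            if pending == 0:              # child drained the probe byte: it reads stdin
                consumed = True
                proc.terminate()          # kill it, as in the answer above
                break
            if proc.poll() is not None:   # child exited without touching stdin
                break
            time.sleep(0.05)
        try:
            proc.wait(timeout=timeout)
        except subprocess.TimeoutExpired:
            proc.kill()
    finally:
        os.close(r)
        os.close(w)
    return consumed


if __name__ == "__main__":
    print(took_stdin_input([sys.executable, "-c", "input()"]))  # True
    print(took_stdin_input([sys.executable, "-c", "pass"]))     # False
```

Unlike the zero-sized-buffer trick above, this version polls, so it is a sketch of the idea rather than a drop-in equivalent.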
2018/03/21
2,523
9,222
<issue_start>username_0: I'm using `gcloud sql instances create` to create CloudSQL instances. More often than not, that command times out, but provides a command to wait until database creation is done. ``` $ gcloud sql instances create mydb Creating Cloud SQL instance...failed. ERROR: (gcloud.sql.instances.create) Operation https://www.googleapis.com/sql/v1beta4/projects/foobar/operations/abcd is taking longer than expected. You can continue waiting for the operation by running `gcloud sql operations wait --project foobar abcd` $ gcloud sql operations wait --project foobar abcd Waiting for [https://www.googleapis.com/sql/v1beta4/projects/foobar/operations/abcd]...done. NAME TYPE START END ERROR STATUS abcd CREATE 2018-03-19T15:04:29.477+00:00 2018-03-19T15:10:08.561+00:00 - DONE ``` I wonder if it's possible to configure the timeout for `gcloud sql instances create`? I can't seem to find anything in the [docs](https://cloud.google.com/sdk/gcloud/reference/sql/instances/create), not even the timeout it's using by default. Another option would be to start instance creation asynchronously, list the pending operations and wait for their completion: ``` $ gcloud sql instances create mydb --async $ PENDING_OPERATIONS=$(gcloud sql operations list --instance=mydb --filter='status!=DONE' --format='value(name)') $ gcloud sql operations wait "${PENDING_OPERATIONS}" ``` But then again, would `gcloud sql operations wait` timeout at some point? The [docs](https://cloud.google.com/sdk/gcloud/reference/sql/operations/wait) don't mention any timeout either.
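The `--async` route sketched in the question reduces to a generic pattern: poll the operation's status until it reports DONE, under a deadline you control. A library-free Python sketch of that pattern follows; the helper name and intervals are made up, and the status check is a stand-in for shelling out to `gcloud sql operations list` as shown above:

```python
import time


def wait_until_done(check, timeout_s=600.0, poll_s=5.0):
    """Poll `check()` (returns True once the operation is DONE) until it
    succeeds or `timeout_s` elapses.  Returns True on success, False on
    timeout, so the caller owns the deadline instead of the CLI."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if check():
            return True
        # never sleep past the deadline
        time.sleep(min(poll_s, max(0.0, deadline - time.monotonic())))
    return False
```

In real use, `check` would run something like `gcloud sql operations list --filter='status!=DONE'` (the command quoted in the question) and return True when the list comes back empty; that command name comes from the question, everything else here is an illustrative stand-in.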
2018/03/21
229
730
<issue_start>username_0: ``` render() { console.log("render") return ( {() => { console.log("works!") if(condition) return( this.change(event)} >) }} ``` I try to put a condition inside the return but appear this error [![enter image description here](https://i.stack.imgur.com/xY2Gd.png)](https://i.stack.imgur.com/xY2Gd.png)<issue_comment>username_1: As the error is trying to tell you, you're telling React to print a function, which doesn't really make sense. You need to call the function: ``` {(() => { ... return ...}())} ``` Upvotes: 1 <issue_comment>username_2: You can simplify this: ```js render() { return ( { condition && this.change(event)}> } ); } ``` Upvotes: 3 [selected_answer]
2018/03/21
838
2,873
<issue_start>username_0: I'd like to be able to replace a subset of element values in a vector within my data.frame object in R. Toy examples I've found thus far are simple (and small) enough to manually type and replace the few elements you want to target with those you want to replace. While this toy example will again be small enough to manually type the target and replacement elements, I'm hoping it serves as an easy representation in which there are *many more unique names*, yet the problem remains the same: ``` SampleID <- rep(c("Keith", "Mick", "Brian", "Ronnie"), times = 3) Trial <- sort(rep(c(1,2,3), times = 4)) set.seed(10) Scores <- sample.int(100, 12) df <- data.frame(SampleID, Trial, Scores) ``` Now take this example and extend it way out to include thousands of unique individual SampleID names; let's say this study actually has a list of like 5000 unique individuals, and your dataset needed to be recoded such that 100 individuals needed to be renamed. Is there a way utilize two vectors that represent the lists of identified `target` names you want to replace with the `replacement` names you want to recode with, without having to type something like: ``` df$SampleID <- recode(df$SampleID, "Mick" = "jagger", ... 99 other "target" = "replacement" values) ``` Perhaps the trick is to iterate with a for loop? 
Many thanks.<issue_comment>username_1: I would create a named vector and use it with `forcats::fct_recode` (spliced in with `!!!`, since `fct_recode` expects named arguments): ``` library(forcats) library(dplyr) names(target) <- replacement df <- df %>% mutate(SampleID = fct_recode(SampleID, !!!target)) ``` Upvotes: 2 <issue_comment>username_2: I would recommend creating a reference data frame that contains `target` and `replacement` fields, like so: ``` new_df <- data.frame(target = 'Mick', replacement = 'Jagger') ``` Then you can merge this into your current `df`: ``` df <- merge(df, new_df, by.x = 'SampleID', by.y = 'target', all.x = TRUE) ``` Then it's just a matter of using an `ifelse()` statement to replace values in `SampleID` with the values in `replacement` where `!is.na(replacement)`: ``` df$SampleID <- ifelse(!is.na(df$replacement), df$replacement, df$SampleID) ``` Upvotes: 1 [selected_answer]<issue_comment>username_3: To explicitly put in a toy replacement vector to work with: ``` replace_list <- list(target = c('Keith', 'Mick', 'Brian', 'Ronnie'), replacement = c('Richards', 'Jagger', 'Jones', 'Wood')) ``` There's probably a more elegant solution, but if you just want to use string vectors, a simple lookup function mapped over each element works. Note the membership test: indexing `replace_list$target` by name would fail, since `target` is unnamed, and `as.character()` guards against `SampleID` being a factor: ``` replace_funct <- function(x) { if (x %in% replace_list$target) replace_list$replacement[[which(replace_list$target == x)]] else x } library(purrr) df$NewSampleID <- map_chr(as.character(df$SampleID), replace_funct) ``` Upvotes: 0
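Language aside, every answer above is one pattern: a lookup table from old name to new name, applied elementwise with a pass-through default for names that have no replacement. For comparison only, that pattern in stdlib Python (the toy names come from the question; the language switch is purely illustrative):

```python
# Lookup table: old name -> new name (toy values matching the question's data).
recode = {"Keith": "Richards", "Mick": "Jagger", "Brian": "Jones", "Ronnie": "Wood"}

sample_ids = ["Keith", "Mick", "Brian", "Ronnie"] * 3

# dict.get(x, x) keeps any id that has no replacement, mirroring the
# `ifelse(!is.na(replacement), ...)` fallback in the selected answer.
renamed = [recode.get(x, x) for x in sample_ids]
print(renamed[:4])  # ['Richards', 'Jagger', 'Jones', 'Wood']
```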
2018/03/21
693
2,115
<issue_start>username_0: I need only the date in the **yyyy-MM-dd** format, but I'm getting the date in the wrong format (20/03/2018 0:00:00). ``` var d = Convert.ToDateTime("2018-03-20T00:00:00.000",CultureInfo.InvariantCulture).ToString("yyyy-MM-dd"); var finaldate = DateTime.TryParseExact(d, "yyyy-MM-dd", null); ``` Output I am getting -- 20/03/2018 0:00:00; expected -- **2018-03-20**<issue_comment>username_1: I will try to explain what the others meant when they wrote "DateTime has no format". `DateTime` is a C# type that has properties for year, month, day, etc. If you want to print a `DateTime`, you first have to convert it to a `string`. During this conversion, you can define the output format. Also, if you parse a `string` into a `DateTime`, you can define the expected input format. This is what the "[Standard Date and Time Format Strings](https://learn.microsoft.com/en-us/dotnet/standard/base-types/standard-date-and-time-format-strings)" / "[Custom Date and Time Format Strings](https://learn.microsoft.com/en-us/dotnet/standard/base-types/custom-date-and-time-format-strings)" are for. An example (note: `DateTime.TryParseExact` needs an `out` parameter and returns a `bool`, so `ParseExact` is used here): ``` string d = Convert.ToDateTime("2018-03-20T00:00:00.000", CultureInfo.InvariantCulture).ToString("yyyy-MM-dd"); DateTime finaldate = DateTime.ParseExact(d, "yyyy-MM-dd", CultureInfo.InvariantCulture); // a DateTime, not a string! int year = finaldate.Year; // year == 2018 (a number!) int month = finaldate.Month; // month == 3 (a number again!) string isoFormat = finaldate.ToString("yyyy-MM-dd"); // isoFormat == "2018-03-20" string usFormat = finaldate.ToString("MM/dd/yyyy"); // usFormat == "03/20/2018" // and so on... ``` Note that if you just call `ToString()` without specifying any format, the result will depend on the culture of the current thread (probably "en-US" judging from the output you have shown). See [DateTime.ToString Method](https://msdn.microsoft.com/en-us/library/k494fzbf(v=vs.110).aspx). Upvotes: 2 <issue_comment>username_2: A `DateTime` value always carries a time part; there is no date-only `DateTime`. To get the expected output you have to produce a string, e.g. `DateTime.Now.ToString("yyyy-MM-dd")`. Upvotes: 0
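The object-versus-representation distinction in these answers is not C#-specific. For illustration only, the same round trip in Python's stdlib `datetime` (Python chosen just to show the concept, not because the question involves it):

```python
from datetime import datetime

# Parse a string *into* an object; the format argument describes the input.
d = datetime.strptime("2018-03-20T00:00:00.000", "%Y-%m-%dT%H:%M:%S.%f")

# The object itself stores components, not a format.
print(d.year, d.month, d.day)        # 2018 3 20

# A format only exists once you render the object back to a string.
print(d.strftime("%Y-%m-%d"))        # 2018-03-20
print(d.strftime("%d/%m/%Y %H:%M"))  # 20/03/2018 00:00
```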
2018/03/21
737
2,956
<issue_start>username_0: When is it preferred to use [min-max-scaler](http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.MinMaxScaler.html) and when [Standard Scaler](http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html)? I think it depends on the data. Are there any features of the data to look at in order to decide which preprocessing method to go for? I looked at the [docs](http://scikit-learn.org/stable/modules/preprocessing.html#scaling-features-to-a-range) but can someone give me more insight into it?<issue_comment>username_1: I hope this helps. When to use MinMaxScaler, RobustScaler, StandardScaler, and Normalizer: <https://towardsdatascience.com/scale-standardize-or-normalize-with-scikit-learn-6ccc7d176a02> Upvotes: 1 <issue_comment>username_2: The right scaling will indeed depend on the type of data that you have. For most cases, `StandardScaler` is the scaler of choice. If you know that you have some outliers, go for the `RobustScaler`. Then, when you deal with features that have a weird distribution (for instance the digits dataset), these scalers will not be the best choice. Indeed, in that dataset there are a lot of pixels at zero, meaning the distribution has a peak at zero, so dividing by the standard deviation will not be beneficial. So basically, when the distribution of a feature is far from Normal, you need an alternative. In the case of the digits, the `MinMaxScaler` is a much better choice. However, if you want to keep zeros at zero (because you use sparse matrices), you will go for a `MaxAbsScaler`. NB: also look at the `QuantileTransformer` and the `PowerTransformer` if you want a feature to follow a Normal/Uniform distribution whatever the original distribution was. Upvotes: 3 <issue_comment>username_3: ### StandardScaler **StandardScaler** assumes the data is normally distributed within each feature and scales it to zero mean and unit standard deviation. Use `StandardScaler()` if you know the data distribution is normal. For most cases, **StandardScaler** would do no harm, and for variance-sensitive methods (PCA, clustering, logistic regression, SVMs, perceptrons, neural networks) standardization is in fact important. On the other hand, it will not make much of a difference if you are using tree-based classifiers or regressors. ### MinMaxScaler **MinMaxScaler** will transform each value in the column proportionally into the range [0, 1]. This is acceptable in cases where we are not concerned about standardising along the variance axes, e.g. image processing, or neural networks expecting values between 0 and 1. --- * [Guide to Scaling and Standardizing](https://www.kaggle.com/code/discdiver/guide-to-scaling-and-standardizing/notebook) * [Compare the effect of different scalers on data with outliers](https://scikit-learn.org/stable/auto_examples/preprocessing/plot_all_scaling.html) Upvotes: 1
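To make the trade-off concrete without depending on scikit-learn itself, here is a stdlib-only Python sketch of the two formulas (StandardScaler: subtract the mean, divide by the population standard deviation; MinMaxScaler: map linearly onto [0, 1]), applied to a column with one outlier. The data is made up for illustration:

```python
def standard_scale(col):
    """(x - mean) / std, using the population std as sklearn's StandardScaler does."""
    m = sum(col) / len(col)
    sd = (sum((v - m) ** 2 for v in col) / len(col)) ** 0.5
    return [(v - m) / sd for v in col]


def min_max_scale(col):
    """(x - min) / (max - min): squeezes every value into [0, 1]."""
    lo, hi = min(col), max(col)
    return [(v - lo) / (hi - lo) for v in col]


col = [1.0, 2.0, 3.0, 500.0]  # one extreme outlier

z = standard_scale(col)
u = min_max_scale(col)

# MinMax: the outlier claims the whole range and pins the other values near 0.
print([round(v, 3) for v in u])
# Standard: zero mean, unit variance by construction, but the outlier still
# dominates the mean and standard deviation.
print([round(v, 3) for v in z])
```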
2018/03/21
677
2,815
<issue_start>username_0: I have a git repository on BitBucket with multiple branches and I want to fork this repository with all its branches to give an access to an external collaborator. My question is it possible to merge a commit from a specific branch of the original repository to the forked one knowing that that nothing has changed in the forked repository? If yes, what are the necessary steps to do that?
2018/03/21
799
3,114
<issue_start>username_0: So basically I have to print out the worker's name in reverse, and loop through the string character by character. This is what I have for that method under Worker.java: ``` public String printRev(String fName) { for (int i = 0; i < fName.length(); i++) { fName = fName.charAt(i) + fName; } return fName; } ``` And for UseWorker.java I have: ``` anil.printRev(); jasmin.printRev(); fred.printRev(); ``` But it's giving me the error message: "The method printRev(String) in the type Worker isn't applicable for the arguments ()." It's also worth mentioning that I am not allowed to modify UseWorker, only Worker. How can I go about fixing the problem?
2018/03/21
585
1,363
<issue_start>username_0: I have a data frame like this: ``` id info 1 0 1 0 2 0 2 10 3 20 3 20 ``` I want to remove the rows for all "id"s that have no change in their "info", that is, remove all rows where the "info" is identical for a certain "id". For the example above, I would end up: ``` id info 2 0 2 10 ```<issue_comment>username_1: Here is a solution with `data.table`: ``` library("data.table") DT <- fread( "id info 1 0 1 0 2 0 2 10 3 20 3 20") DT[, .N, .(id, info)][N==1, .(id, info)] # > DT[, .N, .(id, info)][N==1, .(id, info)] # id info # 1: 2 0 # 2: 2 10 ``` a variant: ``` DT[, if (.N==1) TRUE, .(id, info)][, .(id, info)] ``` Here is a solution using an anti-join: ``` DT[!DT[duplicated(DT)], on=names(DT)] ``` Upvotes: 2 <issue_comment>username_2: A base R solution, ``` df[!with(df, ave(info, id, FUN = function(i)var(i) == 0)),] #slightly different syntax (as per @lmo) #df[ave(df$info, df$id, FUN=var) > 0,] ``` which gives, > > > ``` > id info > 3 2 0 > 4 2 10 > > ``` > > Upvotes: 3 [selected_answer]<issue_comment>username_3: Another `data.table` solution using `.SD` magic variable. ``` df <- data.table(id = c(1,1,2,2,3,3), info=c(0,0,0,10,20,20)) df[,.SD[uniqueN(.SD)>1],id] id info 1: 2 0 2: 2 10 ``` Upvotes: -1
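All three answers encode the same rule: keep an `id` only if its `info` values are not all identical. For comparison only, the same rule in stdlib Python on the question's toy data (the language switch is purely illustrative):

```python
from collections import defaultdict

rows = [(1, 0), (1, 0), (2, 0), (2, 10), (3, 20), (3, 20)]  # (id, info)

# Collect the distinct info values per id, then keep ids with more than one
# distinct value: the same test as `uniqueN(.SD) > 1` / `var(info) > 0` above.
by_id = defaultdict(set)
for id_, info in rows:
    by_id[id_].add(info)

kept = [(id_, info) for id_, info in rows if len(by_id[id_]) > 1]
print(kept)  # [(2, 0), (2, 10)]
```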
2018/03/21
544
2,021
<issue_start>username_0: I have SOAP wsdl from Magento and I have to get simple product list from it. Logic is to get 'simple' products from catalogProductList. So far in VS2015 I have create console app, and in reference add service reference, paste url of wsdl, but further I have no idea what to do, also didn't find any similar examples. In PHP code would be like this: ``` $proxy = new SoapClient('http://magentowebshop/api/v2_soap/?wsdl'); $sessionId = $proxy->login('user', 'pass'); $complexFilter = array( 'complex_filter' => array( array( 'key' => 'type', 'value' => array('key' => 'in', 'value' => 'simple') ) ) ); $result = $proxy->catalogProductList($sessionId, $complexFilter); var_dump($result); ```<issue_comment>username_1: When you added the serviceReference it asked you to input a NameSpace [![Add Web reference screenshot](https://i.stack.imgur.com/W1atY.png)](https://i.stack.imgur.com/W1atY.png) In your code use that namespace to create a client a be able to call the exposed methods of the WSDL ``` private void testMethod { ServiceReference1.ExampleClient client ; client = new ServiceReference1.ExampleClient(); client.exampleMethod() ; } ``` when you type `ServiceReference1.` you will see a list of the clients created based on the wsdl that you added. and then when you type `client.` you will see methods that you can get to achieve what you want Upvotes: 1 <issue_comment>username_2: Ok i find a solution: ``` ServiceReference1.PortTypeClient client = new ServiceReference1.PortTypeClient(); string sessionID = client.login("user", "pass"); filters filter = new filters(); filter.complex_filter = new[] { new complexFilter { key = "type", value = new associativeEntity { key = "in", value = "simple"} } }; var list = client.catalogProductList(sessionID, filter, "catalog"); client.endSession(sessionID); ``` Upvotes: 0
2018/03/21
426
1,618
<issue_start>username_0: I have a repository class something like ... ``` public class StuffRepository { public Stuff save(Stuff v); } ``` I want to make a mock with @MockBean returning first parameter v. How can I do that? ``` @MockBean private StuffRepository stuffRepository; public void test() { given(stuffRepository.save(??)).willReturn(??); } ```
2018/03/21
566
2,032
<issue_start>username_0: I am fetching some records : ``` $companies = \App\User::where('type', 'pet-salon')->orWhere('type', 'veterinarian')->get(); return response()->json($companies); ``` The data coming back is an array of objects: ``` [{ id: 2, type: "xxx", deactivate: 0, xxx: "Hello", middle_name: "Mid", lastname: "xxx", //... }] ``` This is the jQuery typeahead code: ``` $('#getCompaniesForConnection').typeahead({ source: function (query, process) { return $.get('/all/companies', { query: query }, function (data) { return process(data); }); } }); ``` The exception its giving me : > > Uncaught TypeError: b.toLowerCase is not a function > > > And the results drop-down is not showing too, What am i missing here ?
2018/03/21
937
3,311
<issue_start>username_0: How do I throttle in React a function called on scroll that uses bind like the following? ``` this.getElementPosition = this.getElementPosition.bind(this); ``` What I have so far is the following, but it returns me `TypeError: Cannot read property 'bind' of undefined` ``` throttle() { _.throttle((this.getElementPosition = this.getElementPosition.bind(this)), 100) } componentDidMount() { window.addEventListener('scroll', this.throttle) } componentWillUnmount() { window.removeEventListener('scroll', this.throttle); } getElementPosition() { var rect = this.elementPosition.getBoundingClientRect(); if ((rect.top > 0) && (rect.top < (window.innerHeight * 0.75))) { this.elementPosition.setAttribute("data-position","in-viewport"); } else if (rect.top > window.innerHeight) { this.elementPosition.setAttribute("data-position","below-viewport"); } else if (rect.bottom < 0) { this.elementPosition.setAttribute("data-position","above-viewport"); } } render() { return ( { this.elementPosition = element; }} data-position="below-viewport"> ); } ```<issue_comment>username_1: Try like this, per lodash docs <https://lodash.com/docs/4.17.4#throttle>: ``` componentDidMount() { window.addEventListener('scroll', _.throttle(this.doThrottle, 100)) } doThrottle = () => { this.getElementPosition = this.getPosition() } getPosition = () => { // stuff } ``` Note the arrow function on `doThrottle` and `getPosition` turns it into an implicit `bind(this)` in the component. The original problem was in your own throttle method in which `this` was not bound to your component. Upvotes: 1 <issue_comment>username_2: You need to bind 'this' to throttle as well in the constructor.
``` constructor() { super(); this.throttle = this.throttle.bind(this); this.getElementPosition = this.getElementPosition.bind(this) } throttle() { _.throttle(this.getElementPosition, 100) } ``` Upvotes: 1 <issue_comment>username_3: This is my approach ``` import React, { Component } from 'react' import throttle from 'lodash.throttle' class Scroller extends Component { constructor(props) { super(props) this.element = React.createRef() this.onScroll = this.onScroll.bind(this) this.onScrollThrottled = throttle(this.onScroll, 300) } componentDidMount() { this.element.current.addEventListener('scroll', this.onScrollThrottled) } componentWillUnmount() { this.element.current.removeEventListener('scroll', this.onScrollThrottled) } onScroll(e) { console.log(e.target.scrollTop) } render() { return ( ... ) } } ``` Lodash `throttle` returns a function, so wrapping it with another function (method class) will force you to invoke it ``` onScrollThrottled() { return throttle(this.onScroll, 300) } componentDidMount() { this.element.current.addEventListener('scroll', this.onScrollThrottled()) } ``` And `throttle(this.onScroll, 300)` **!==** `throttle(this.onScroll, 300)`, so you'll not be able to unsubscribe properly on `componentWillUnmount`. That's why I used class property instead. Also, I used standalone version of lodash throttle, which can be added via npm ([see doc](https://www.npmjs.com/package/lodash.throttle)) > > npm i --save lodash.throttle > > > Upvotes: 2
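username_3's unsubscribe point (each `throttle(fn, ms)` call returns a *new* function, so you must keep one reference for `removeEventListener`) is easy to demonstrate outside React. A minimal leading-edge-only throttle in Python, for illustration only; lodash's real throttle also handles trailing calls:

```python
import time


def throttle(fn, interval_s):
    """Return a wrapper that invokes fn at most once per interval_s seconds."""
    last = [float("-inf")]  # mutable cell shared with the wrapper

    def wrapper(*args, **kwargs):
        now = time.monotonic()
        if now - last[0] >= interval_s:
            last[0] = now
            return fn(*args, **kwargs)
        return None  # call dropped, as with a leading-edge-only throttle

    return wrapper


calls = []
on_scroll = throttle(calls.append, 0.2)

for i in range(5):
    on_scroll(i)  # a burst of "scroll events": only the first gets through
print(calls)      # [0]

# Two throttle() calls wrap the same fn but produce *different* objects, which
# is why addEventListener/removeEventListener must be given the same reference.
assert throttle(calls.append, 0.2) is not throttle(calls.append, 0.2)
```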
2018/03/21
1,194
4,099
<issue_start>username_0: So I am showing a spinner while loading some images and in Chrome, Firefox, Edge... it disappears after the content is loaded. The spinner is replaced by a picture. However in Internet Explorer version 9 & 10 it stays in place and the picture appears underneath. From component template: ``` ``` From component: ``` private getContent() { this.loadingContent = true; this.flightsService.getContent().map(data => { this.flights = data; }).subscribe(() => { this.loadingContent = true; }, () => { this.loadingContent = false; }, () => { this.loadingContent = false; }); } ``` spinner.css ``` .spinner { width: 100%; height: 50px; margin: 100px auto; animation-name: spin; animation-duration: 400ms; animation-iteration-count: infinite; animation-timing-function: linear; -ms-animation-name: spin; -ms-animation-duration: 400ms; -ms-animation-iteration-count: infinite; -ms-animation-timing-function: linear; -webkit-animation-name: spin; -webkit-animation-duration: 400ms; -webkit-animation-iteration-count: infinite; -webkit-animation-timing-function: linear; -moz-animation-name: spin; -moz-animation-duration: 400ms; -moz-animation-iteration-count: infinite; -moz-animation-timing-function: linear; } @keyframes spin { from { transform: rotate(0deg); } to { transform: rotate(360deg); } } @-ms-keyframes spin { from { -ms-transform: rotate(0deg); } to { -ms-transform: rotate(360deg); } } @-moz-keyframes spin { from { -moz-transform: rotate(0deg); } to { -moz-transform: rotate(360deg); } } @-webkit-keyframes spin { from { -webkit-transform: rotate(0deg); } to { -webkit-transform: rotate(360deg); } } ```<issue_comment>username_1: Try like this, per lodash docs <https://lodash.com/docs/4.17.4#throttle>: ``` componentDidMount() { window.addEventListener('scroll', _.throttle(this.doThrottle, 100)) } doThrottle = () => { this.getElementPosition = this.getPosition() } getPosition = () => { // stuff } ``` Note the arrow function on `doThrottle` and `getPosition` turns 
it into an implicit `bind(this)` in the component. The original problem was in your own throttle method in which `this` was not bound to your component. Upvotes: 1 <issue_comment>username_2: you need to bind 'this' to throttle as well in constructor. ``` constructor() { super(); this.throttle = this.throttle.bind(this); this.getElementPosition = this.getElementPosition.bind(this) } throttle() { _.throttle(this.getElementPosition, 100) } ``` Upvotes: 1 <issue_comment>username_3: This is my approach ``` import React, { Component } from 'react' import throttle from 'lodash.throttle' class Scroller extends Component { constructor(props) { super(props) this.element = React.createRef() this.onScroll = this.onScroll.bind(this) this.onScrollThrottled = throttle(this.onScroll, 300) } componentDidMount() { this.element.current.addEventListener('scroll', this.onScrollThrottled) } componentWillUnmount() { this.element.current.removeEventListener('scroll', this.onScrollThrottled) } onScroll(e) { console.log(e.target.scrollTop) } render() { return ( ... ) } } ``` Lodash `throttle` returns a function, so wrapping it with another function (a class method) will force you to invoke it ``` onScrollThrottled() { return throttle(this.onScroll, 300) } componentDidMount() { this.element.current.addEventListener('scroll', this.onScrollThrottled()) } ``` And `throttle(this.onScroll, 300)` **!==** `throttle(this.onScroll, 300)`, so you'll not be able to unsubscribe properly on `componentWillUnmount`. That's why I used a class property instead. Also, I used the standalone version of lodash throttle, which can be added via npm ([see doc](https://www.npmjs.com/package/lodash.throttle)) > > npm i --save lodash.throttle > > > Upvotes: 2
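Throttling itself is not lodash-specific; the idea the answers rely on — let the first call through, then drop calls that arrive within the wait window — can be sketched in a few lines of Python. The clock is injected so the demo is deterministic; all names here are illustrative:

```python
import time

def throttle(wait, clock=time.monotonic):
    """Decorator factory: let a call through only if at least `wait`
    seconds have passed since the last call that was let through."""
    def decorate(fn):
        last = [None]  # mutable cell shared with the wrapper

        def wrapper(*args, **kwargs):
            now = clock()
            if last[0] is None or now - last[0] >= wait:
                last[0] = now
                return fn(*args, **kwargs)
            return None  # call was throttled
        return wrapper
    return decorate

# Deterministic demo using a fake clock instead of real time:
fake_now = [0.0]
calls = []

@throttle(0.1, clock=lambda: fake_now[0])
def on_scroll(pos):
    calls.append(pos)

for t, pos in [(0.00, 1), (0.05, 2), (0.11, 3), (0.15, 4), (0.25, 5)]:
    fake_now[0] = t
    on_scroll(pos)

print(calls)  # → [1, 3, 5]
```

The calls at t=0.05 and t=0.15 are dropped because they arrive within 0.1s of the last accepted call — the same behavior `_.throttle` gives the scroll handler above.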
2018/03/21
944
3,462
<issue_start>username_0: I've Firebase Database where each user has own email and username. How to check unique username? I tried to make it like this, but my code doesn't work properly therefore different users can have the same username ``` usernameField.isHidden = false let username:String = self.usernameField.text! if (usernameField.text?.isEmpty == false){ ref.child("users").queryOrdered(byChild("username").queryEqual(toValue: username).observeSingleEvent(of: .value, with: { snapshot in if snapshot.exists(){ print("username exist") }else{ ref.root.child("users").child(userID).updateChildValues(["username": username]) } }) } ``` [![enter image description here](https://i.stack.imgur.com/xCyJh.png)](https://i.stack.imgur.com/xCyJh.png) I'm a little bit newbie in Firebase I store email and username for each user like this `newUserReference.setValue(["username":String(), "email" : self.emailTextField.text!])`. On next view, user can type `username` in `usernameField.text` and this value will be added in Firebase Database. But if the next user (user 2) will type the same username like previous user, it must be blocked, because username should be unique
2018/03/21
815
2,761
<issue_start>username_0: I have a table with 3 fields: ``` id order date 1 1 null 1 2 not null 1 3 null 2 1 null 2 2 null 2 3 null 2 4 not null 3 1 null ``` I need the "id" in which: * ALL the "order" in (1,2,3) and * ALL the "date" is null (so it is id 2) I've tried as follows: `where order in (1,2,3) and date is null` but it returns both id 2 and id 1 (I'm expecting id 2 only). Thanks for the help.
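The "all rows for an id satisfy a condition" requirement in the question maps naturally onto `GROUP BY` with a `HAVING` clause. A runnable sketch using Python's built-in sqlite3 module — the table and data mirror the question, `"order"` must be quoted because it is a reserved word, and the exact SQL may need adjusting for other databases:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute('CREATE TABLE t (id INTEGER, "order" INTEGER, date TEXT)')
rows = [  # mirrors the table in the question; None stands for null
    (1, 1, None), (1, 2, "2018-03-21"), (1, 3, None),
    (2, 1, None), (2, 2, None), (2, 3, None), (2, 4, "2018-03-21"),
    (3, 1, None),
]
conn.executemany("INSERT INTO t VALUES (?, ?, ?)", rows)

# Keep only rows whose "order" is in (1,2,3); an id then qualifies when
# all three of those orders are present and none of them has a date.
query = """
    SELECT id
    FROM t
    WHERE "order" IN (1, 2, 3)
    GROUP BY id
    HAVING COUNT(DISTINCT "order") = 3
       AND SUM(date IS NOT NULL) = 0
"""
print([r[0] for r in conn.execute(query)])  # → [2]
```

The plain `WHERE ... AND date IS NULL` from the question matches *any* id with at least one qualifying row; the `HAVING` clause is what enforces the "ALL" part.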
2018/03/21
1,489
5,233
<issue_start>username_0: I have two maps `Map`. I want to merge both maps, sort in descending order, and get top 5. In case of duplicate keys in merge I need to sum the values. I have the following code that works: ``` Map topFive = (Stream.concat(map1.entrySet().stream(), map2.entrySet().stream()) .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue, Long::sum))) .entrySet() .stream() .sorted(Map.Entry.comparingByValue(Comparator.reverseOrder())) .limit(5) .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue, (v1, v2) -> v1, LinkedHashMap::new)); ``` But I would like to know if there is a better solution.<issue_comment>username_1: A better solution might be to use an accumulator that keeps the top 5, rather than sorting the whole stream. Now you're doing an estimated n \* log(n) comparisons instead of something between n and n \* log(5). Upvotes: 0 <issue_comment>username_2: I would focus on making the code easier to read: ``` // Merge Map merged = new HashMap<>(map1); map2.forEach((k, v) -> merged.merge(k, v, Long::sum)); // Sort descending List> list = new ArrayList<>(merged.entrySet()); list.sort(Map.Entry.comparingByValue(Comparator.reverseOrder())); // Select top entries Map top5 = new LinkedHashMap<>(); list.subList(0, Math.min(5, list.size())) .forEach(e -> top5.put(e.getKey(), e.getValue())); ``` Also, by not using streams, this solution will surely have better performance. Upvotes: 0 <issue_comment>username_3: If you mean **better** in terms of **performance**, and you have large collections, and only need a few top elements, you can avoid sorting the entire map, given the `n*log(n)` complexity. If you already have Guava, you can use [MinMaxPriorityQueue](https://google.github.io/guava/releases/22.0/api/docs/com/google/common/collect/MinMaxPriorityQueue.html) to store only the best *N* results. And then just sort these few constant *N* elements.
``` Comparator> comparator = Entry.comparingByValue(reverseOrder()); Map merged = Stream.of(map1, map2) .map(Map::entrySet) .flatMap(Set::stream) .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue, Long::sum)); MinMaxPriorityQueue> tops = MinMaxPriorityQueue.orderedBy(comparator) .maximumSize(5) .create(merged.entrySet()); Map sorted = tops.stream() .sorted(comparator) .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue, (m1, m2) -> m1, LinkedHashMap::new)); ``` If you don't have/want to use Guava, you can simulate the `MinMaxPriorityQueue` by using a custom `TreeMap` (Also a class that receives the max size in constructor can be created, if you don't want to use an anonymous class [this is to show the functionality]). ``` Set> sorted = new TreeSet>(comparator) { @Override public boolean add(Entry entry) { if (size() < 5) { // 5 can be constructor arg in custom class return super.add(entry); } else if (comparator().compare(last(), entry) > 0) { remove(last()); return super.add(entry); } else { return false; } } }; ``` And add all the elements to the set with top. ``` sorted.addAll(merged); ``` You can also change the merge function, to use something similar to the merge mentioned by Federico. ``` Map merged = new HashMap<>(map1); map2.forEach((k, v) -> merged.merge(k, v, Long::sum)); ``` This tends to be faster that using streams, and after that, once you have the merged map, you can select the top N elements with `MinMaxPriorityQueue` or `TreeSet`, avoiding again the unnecessary need of sorting the entire collection. Upvotes: 2 <issue_comment>username_4: Just adding another solution using a `Collector`. It uses a `TreeSet` as the intermediate accumulation type, converting the set to a map with the finisher. 
``` private > Collector, Map> toMap(BinaryOperator mergeFunction, Comparator comparator, int limit) { Objects.requireNonNull(mergeFunction); Objects.requireNonNull(comparator); Supplier> supplier = () -> new TreeSet<>(comparator); BiConsumer, E> accumulator = (set, entry) -> accumulate(set, entry, mergeFunction); BinaryOperator> combiner = (destination, source) -> { source.forEach(e -> accumulator.accept(destination, e)); return destination; }; Function, Map> finisher = s -> s.stream() .limit(limit) .collect(Collectors.toMap(E::getKey, E::getValue, (v1, v2) -> v1, LinkedHashMap::new)); return Collector.of(supplier, accumulator, combiner, finisher); } private > void accumulate( TreeSet set, E newEntry, BinaryOperator mergeFunction) { Optional entryFound = set.stream() .filter(e -> Objects.equals(e.getKey(), newEntry.getKey())) .findFirst(); if (entryFound.isPresent()) { E existingEntry = entryFound.get(); set.remove(existingEntry); existingEntry.setValue(mergeFunction.apply(existingEntry.getValue(), newEntry.getValue())); set.add(existingEntry); } else { set.add(newEntry); } } ``` This is how you would use it, comparing the entries by value (in reverse) and using the `Long::sum` merge function for entry collisions. ``` Comparator> comparator = Map.Entry.comparingByValue(Comparator.reverseOrder()); Map topFive = Stream.of(map1, map2) .map(Map::entrySet) .flatMap(Collection::stream) .collect(toMap(Long::sum, comparator, 5)); ``` Upvotes: 0
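The merge-then-take-top-N idea discussed above is easy to prototype outside Java; a sketch in Python, where `heapq.nlargest` keeps a bounded heap instead of sorting the whole merged map (names and sample data are illustrative, not from the question):

```python
import heapq

def top_n(map1, map2, n=5):
    # Merge the two maps, summing values for duplicate keys.
    merged = dict(map1)
    for k, v in map2.items():
        merged[k] = merged.get(k, 0) + v
    # nlargest keeps a heap of size n instead of sorting every entry:
    # roughly O(m log n) for m merged entries, not O(m log m).
    top = heapq.nlargest(n, merged.items(), key=lambda kv: kv[1])
    return dict(top)  # dicts keep insertion order, so this is descending

m1 = {"a": 10, "b": 7, "c": 3, "d": 1}
m2 = {"b": 5, "e": 9, "f": 2, "g": 8}
print(top_n(m1, m2))  # → {'b': 12, 'a': 10, 'e': 9, 'g': 8, 'c': 3}
```

This is the same trade-off username_1 and username_3 describe: the full sort is only avoided because the heap never grows past `n` entries.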
2018/03/21
999
3,052
<issue_start>username_0: I use laravel 5.6 I have a field. The data type of the field is json The value of the field (desc field) like this : ``` [ {"code": "1", "club": "CHE", "country": "ENGLAND"}, {"code": "2", "club": "BAY", "country": "GERMANY"}, {"code": "3", "club": "JUV", "country": "ITALY"}, {"code": "4", "club": "RMA", "country": "SPAIN"}, {"code": "5", "club": "CHE", "country": "ENGLAND"}, {"code": "6", "club": "BAY", "country": "GERMANY"}, {"code": "7", "club": "JUV", "country": "ITALY"}, {"code": "8", "club": "RMA", "country": "SPAIN"}, {"code": "CODE", "club": "CLUB", "country": "COUNTRY"} ] ``` I want to check the key of club have value "CHE" or not I try like this : ``` ->where('desc->club','=', 'CHE')->get(); ``` But it does not work How can I solve this problem?<issue_comment>username_1: Just use a SQL `LIKE` operator ``` ->where('desc', 'like', '%"club": "CHE"%'); ``` Upvotes: 4 [selected_answer]<issue_comment>username_2: What version of MySQL are you using? The JSON operator `->` is not implemented until MySQL 5.7. See <https://dev.mysql.com/doc/refman/5.7/en/json-search-functions.html#operator_json-column-path> Also, you aren't using the operator correctly. For the example you show, it should be: ``` desc->>'$[0].club' = 'CHE' ``` The double-`>>` is so that the value returned does not include double-quotes. But *which* club are you searching? `$[0]` only searches the first entry in your JSON array. If you want to find if *any* entry has that club = CHE, then this will work: ``` JSON_SEARCH(j->'$[*].club', 'one', 'CHE') IS NOT NULL ``` You can read more about the JSON functions here: <https://dev.mysql.com/doc/refman/5.7/en/json-search-functions.html> And path syntax here: <https://dev.mysql.com/doc/refman/5.7/en/json-path-syntax.html> I showed expressions in raw SQL. I'll leave it to you to adapt that to Laravel. 
Upvotes: 0 <issue_comment>username_3: Try this: ``` ->whereRaw('JSON_CONTAINS(`desc`, \'{"club":"CHE"}\')') ``` Upvotes: 3 <issue_comment>username_4: ``` use Illuminate\Support\Facades\DB; MyModel::where(DB::raw('metadata->>\'$.original_text\''), 'LIKE', $originalText)->first(); ``` Upvotes: 0 <issue_comment>username_5: ``` MyModel::whereJsonContains('desc->club', 'CHE')->first(); ``` Upvotes: 2 <issue_comment>username_6: you can add this scope to model and use ``` public function scopeJsonArrayHasCondition($query , $col , $keys , $function , $as = 'temp_table') { $columns = []; foreach ($keys as $key){ $columns[] = "$key VARCHAR(150) PATH '$.$key'"; } $columns = implode(',',$columns); return $query->join(\DB::raw("JSON_TABLE($col, '$[*]' COLUMNS($columns)) as $as") , function(){}) ->where($function); } ``` how to use ``` BranchContract::query()->jsonArrayHasCondition('payments',[ 'club' , 'country' ],function ($q){ $q->where('temp_table.country' , 'ENGLAND')->where('temp_table.club' , 'CHE'); })->get() ``` Upvotes: 0
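Independent of MySQL or Laravel, the check being expressed — does any object in the JSON array have `club` equal to `"CHE"` — is small; a sketch with Python's stdlib `json` module, useful for validating expectations against the stored column value (the sample data mirrors the question):

```python
import json

# A JSON column value shaped like the "desc" field in the question:
desc = json.dumps([
    {"code": "1", "club": "CHE", "country": "ENGLAND"},
    {"code": "2", "club": "BAY", "country": "GERMANY"},
    {"code": "3", "club": "JUV", "country": "ITALY"},
])

def has_club(raw, club):
    # Parse the column value and test each object's "club" key.
    return any(item.get("club") == club for item in json.loads(raw))

print(has_club(desc, "CHE"), has_club(desc, "RMA"))  # → True False
```

This is essentially what `JSON_CONTAINS(desc, '{"club":"CHE"}')` or `whereJsonContains` evaluates inside the database.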
2018/03/21
377
1,451
<issue_start>username_0: Those of you who have used flask-login will know the decorator `fresh_login_required`. I am trying to implement something similar by myself. It doesn't have to be a decorator. I just want to be able to do some work after the user has been totally disconnected (example: closed the browser). And is it a good idea to have a global integer variable to count the total active/online users? Thanks in advance :)<issue_comment>username_1: You should be able to do it with: ``` @app.before_request def make_session_permanent(): session.permanent = False ``` From the docs: <http://flask.pocoo.org/docs/0.11/api/#flask.session.permanent> Upvotes: 0 <issue_comment>username_2: Flask sessions are handled by the flask session interface. Flask-Login just checks if the current session is fresh. ``` from flask import session def login_fresh(): ''' This returns ``True`` if the current login is fresh. ''' return session.get('_fresh', False) def login_user(): ''' Your login code here ''' session['_fresh'] = True def logout_user(): ''' Your logout code here ''' if '_fresh' in session: session.pop('_fresh') ``` The code snippet above is from the Flask-Login source code, which can be accessed [here](https://github.com/maxcountryman/flask-login/blob/master/flask_login/utils.py). If you need the user to log in again, just check if their login is fresh.
2018/03/21
988
3,496
<issue_start>username_0: I am trying to send Richtext data to PHP using Ajax all things working correctly excepts for the font color when I apply the font color it's commenting all the code after that because color is in HEX format i have tried change font color to RGB but it also change after sending to PHP now i have removed it from Textrich but i really need to know how to escape this problem. **This is the code printed by console log** ``` test ``` **this what PHP echo** ``` ![enter image description here](https://i.stack.imgur.com/py5zj.png) now my problem is i am trying to store text-area data in the database but when text area data process by the PHP it's commenting all the code after `#` i need a way to escape that problem This is my javascript code ``` `$('#save_btn').click(function(event){ event.preventDefault(); console.log($("#example").val()); if(text()){ $.ajax({ url:'../web/php/addoffer.php?offertitle='+$('#offer_title').val()+'&offer_desc='+$("#example").val()+'&offerstart='+$("#offer_s_date")+'&offerend='+$("#offer_e_date").val(), type:'GET', success:function(confirmation){ console.log(confirmation); } }); } }); ``` This is my php code ``` if(isset($_GET['offertitle']) && isset($_GET['offer_desc']) && isset($_GET['offerstart']) && isset($_GET['offerend'])){ echo $_GET['offer_desc']; } ``` This is the [text-rich](https://www.jqueryscript.net/text/Rich-Text-Editor-jQuery-RichText.html) editor i am using ```<issue_comment>username_1: Edit : `Test` With: `Test` That should fix it :) NOTE: There is no need to add if "Test" is all that is in the div you can use the style option within the div. Or you could even create and give them a class and style that specific class Upvotes: 2 <issue_comment>username_2: try by givind inline css ``` Test ``` Upvotes: 1 <issue_comment>username_3: When AJAX requests fail, you should always examine the request in your browser to determine what *exactly* is being sent to the server. 
Your request is being made to the following URL: ``` url:'../web/php/addoffer.php?offertitle='+$('#offer_title').val()+'&offer_desc='+$("#example").val()+'&offerstart='+$("#offer_s_date")+'&offerend='+$("#offer_e_date").val() ``` However we already know that the value of `$("#example").val()` is: ``` test ``` Assuming the other values you have in the URL are just regular strings without any special characters, what you have is: ``` url:'../web/php/addoffer.php?offertitle=somestring&offer_desc=test&offerstart=somestring&offerend=somestring' ``` As you'll notice, this contains the `#` character, which means that everything after that is treated as a [fragment identifier](https://www.ibm.com/support/knowledgecenter/en/SSGMCP_5.1.0/com.ibm.cics.ts.internet.doc/topics/dfhtl_uricomp.html), and not part of the URL itself. Therefore, everything from the first `#` onwards is silently dropped before the request reaches PHP. You need to url encode all values in the URL so that PHP receives what you would expect. If it is based on user input, I would also recommend encoding the other values as well. **HOWEVER** ----------- With all that said, you should not be using GET to perform an action. This should be done using POST. [Using GET is totally inappropriate for this](https://stackoverflow.com/a/6834275/811240). Also with POST you don't have to worry about URL encoding your variables. Upvotes: 2 [selected_answer]
2018/03/21
395
1,229
<issue_start>username_0: I have created an ontology by grouping together many ontologies. I want to use SPARQL to identify all middle terms (relationships) from one ontology from the group ontology. The following approach only produces a `pyparsing error`. ``` g = rdflib.Graph() result = g.parse("without-bfo.owl") qres = g.query( """ PREFIX sudo: SELECT ?v WHERE { ?s sudo:?v ?o. }""") ``` If I remove the `sudo:` prefix, this query returns all triples.<issue_comment>username_1: You can check if the relation starts with your namespace with `CONTAINS` ``` SELECT ?v WHERE { ?s ?v ?o. FILTER CONTAINS(?v, "http://purl.url/sudo/ontology#") } ``` You can also try `STRSTARTS` see [w3 documentation](https://www.w3.org/TR/sparql11-query/#func-contains) Upvotes: 1 <issue_comment>username_2: @username_1 was close. The following worked for me. One has to declare `?v` as a string using `STR`. [This SO post](https://stackoverflow.com/questions/9597981/sparql-restricting-result-resource-to-certain-namespaces#9605812) suggested the syntax. ``` qres = g.query( """ SELECT DISTINCT ?v WHERE { ?s ?v ?o . FILTER CONTAINS(STR(?v), "sudo") }""") ``` for row in qres: print row Upvotes: 0
2018/03/21
299
887
<issue_start>username_0: Tell me please. I can configure two TFS servers and replicate them. Thank you
2018/03/21
310
990
<issue_start>username_0: Is there any way I can have the complete Puppet documentation available offline? It's hard checking module documentation every time via browser. I hope it will help other Puppet module developers. Thank you.
2018/03/21
1,812
6,136
<issue_start>username_0: I am trying to use argparse to create my script with parameters but I am not being able to. The name of my script is `pipeline` and it has some options arguments like -b, -c, -i and -r. If you call the script `./pipeline -b` should give an error asking for a git repository path but I am not being able to do this. ``` from git import Repo import os import sys import subprocess import argparse class Ci: def build(self,args): cloned_repo = Repo.clone_from(args) print("clonning repository " + args) cloned_repo dir = git.split('/')(-1) if os.path.isdir(dir): print("repository cloned successfully") else: print("error to clone repository") if __name__ == '__main__': parser = argparse.ArgumentParser() parser.add_argument('-b','-build',action='store_true', help='execute mvn clean install') parser.add_argument('-c','-compress',action='store_true',help='zip a directory recursively') parser.add_argument('-i','-integration',action='store_true',help='execute mvn verify') parser.add_argument('-r','-release',action='store_true',help='execute build,integration and compress respectively') args = parser.parse_args() if args.b: a = Ci() a.build() if len(sys.argv) < 2: parser.print_help() sys.exit(1) ``` I can't make this sub-parameter work and I can't find a way to pass this parameter to my `build` function. 
e.g: ```none ./pipeline -b Output: error missins git path ./pipeline -b https://git/repo Output: clonning repo ``` and the string `"https://git/repo"` has to be passed as the argument to my `build` function: How can I make it work?<issue_comment>username_1: first a note about convention: usually the longer option name is preceded by two hyphens like this `'--build'` second, `'store_true'` is the action you perform with `'-b'`, which means argparse doesnt expect an argument after it, it just sets the `args.build` variable to `True` (and if the argument wasn't there it would set it to `False`) try removing the `action='store_true'` and then it will default to storing the next value it finds in the argument list into `args.build` Upvotes: 3 [selected_answer]<issue_comment>username_2: Reducing your code to: ``` import argparse class Ci: def build(self,args): print("clonning repository " + args) if __name__ == '__main__': parser = argparse.ArgumentParser() parser.add_argument('-b','-build',action='store_true', help='execute mvn clean install') parser.add_argument('-c','-compress',action='store_true',help='zip a directory recursively') parser.add_argument('-i','-integration',action='store_true',help='execute mvn verify') parser.add_argument('-r','-release',action='store_true',help='execute build,integration and compress respectively') args = parser.parse_args() print(args) if args.b: a = Ci() a.build() ``` I get: ``` 1313:~/mypy$ python3 stack49408644.py -b Namespace(b=True, c=False, i=False, r=False) Traceback (most recent call last): File "stack49408644.py", line 22, in a.build() TypeError: build() missing 1 required positional argument: 'args' ``` The parser runs fine, seeing `args.b` to `True`. But the call to `build` is wrong. It does not match the method's definition. 
Providing a 'directory' doesn't help either, because `-b` is `True/False` ``` 1313:~/mypy$ python3 stack49408644.py -b foo usage: stack49408644.py [-h] [-b] [-c] [-i] [-r] stack49408644.py: error: unrecognized arguments: foo ``` You need to either change `-b` to take an value, or add another argument that takes a value. @username_1 showed how to change `-b`. Instead let's add: ``` parser.add_argument('adir', help='a directory for build') ``` and change the `build` call ``` a.build(args.adir) ``` Now the value is passed on to the method: ``` 1322:~/mypy$ python3 stack49408644.py -b usage: stack49408644.py [-h] [-b] [-c] [-i] [-r] adir stack49408644.py: error: the following arguments are required: adir 1322:~/mypy$ python3 stack49408644.py -b foo Namespace(adir='foo', b=True, c=False, i=False, r=False) clonning repository foo ``` Instead redefining `-b`: ``` parser.add_argument('-b','-build', help='execute mvn clean install') if args.b is not None: a = Ci() a.build(args.b) ``` test runs: ``` 1322:~/mypy$ python3 stack49408644.py -b usage: stack49408644.py [-h] [-b B] [-c] [-i] [-r] stack49408644.py: error: argument -b/-build: expected one argument 1324:~/mypy$ python3 stack49408644.py -b foo Namespace(b='foo', c=False, i=False, r=False) clonning repository foo ``` So your parser needs to accept a value. And you need to pass that value on to your code. You seem to have read enough of the `argparse` docs to get things like `print_help` and `store_true`, but missed the simpler use of `store` (default) or positional. Were you trying to do something more sophisticated? Upvotes: 1 <issue_comment>username_3: I agree with @username_2 (why isn't an accepted answer?). I guess you found your problem, i.e., store\_true does not take argument, then follow username_2 indications. In addition, I opened the question because of its title, I was expecting something different such as the following. 
I wanted to find a way to pass the argparse arguments to a function and possibly modify them with the function arguments. Here is the solution I wrote in case others come looking for this. It may need to be adjusted to account for positional arguments, I also highlight the possible use of vars(args) to get a dictionary from the argparse arguments to compare dict-to-dict with the args\_dict: ``` def get_options(args_dict: dict): """ get options from command-line, update with function args_dict if needed """ args = get_cmd() # this is the function taking cmd-line arguments for key, val in args_dict.items(): if not hasattr(args, key): raise AttributeError('unrecognized option: ', key) else: setattr(args, key, val) return(args) ``` Upvotes: 1
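To make the accepted fix concrete: dropping `action='store_true'` turns `-b` into an option that consumes the next token, which can then be handed to `build`. A minimal runnable sketch — it parses an explicit argv list instead of the real command line, and the `build` body is a stand-in for the real clone logic:

```python
import argparse

def build(repo_url):
    # Stand-in for the clone/build logic in the question.
    return "cloning repository " + repo_url

parser = argparse.ArgumentParser(prog="pipeline")
# No action='store_true': -b now expects a value (the git repo path).
parser.add_argument("-b", "--build", metavar="REPO",
                    help="git repository to clone and build")

args = parser.parse_args(["-b", "https://git/repo"])
if args.build is not None:
    print(build(args.build))  # → cloning repository https://git/repo

# With the flag but no value, argparse itself reports the error:
#   ./pipeline -b   →   error: argument -b/--build: expected one argument
```

So the "error asking for a git repository path" the question wants comes for free from argparse once the option takes a value.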
2018/03/21
498
1,694
<issue_start>username_0: We're using VSTS to build and release our front end code (JS + WebPack) We now have 2 separate builds for Dev and Test. Build tasks: 1. Get sources 2. npm install 3. npm build dev 4. Archive dist files 5. Copy Publish Artifact: drop (+release pipelines) In the "Triggers" section in VSTS, it is possible to listen to multiple branches. It seems unnecessary to have two such similar builds when we have individual release pipelines. The only difference is step 3 (*npm build dev* and *npm build test*) My question is: Is it possible to dynamically, at build time, determine the build environment based on the source branch that triggered the build? And dynamically set the arg in step 3?<issue_comment>username_1: Sure, you can add a PowerShell task to check source branch (using [built-in variable](https://learn.microsoft.com/en-us/vsts/build-release/concepts/definitions/build/variables?view=vsts&tabs=batch), such as Build.SourceBranch), then add or modify the variable through Logging Commands (e.g. `Write-Host "##vso[task.setvariable variable=currentEnv;]Dev"`). After that you can use that variable (currentEnv) in npm task (e.g. Command and arguments: `run $(currentEnv)`) Upvotes: 1 <issue_comment>username_2: **Thank you** @starian :+1: Ended up creating a branch selector shell script (.sh) [![branchSelector](https://i.stack.imgur.com/Clbpa.png)](https://i.stack.imgur.com/Clbpa.png) The script [![buildTask](https://i.stack.imgur.com/6BUp9.png)](https://i.stack.imgur.com/6BUp9.png) VSTS Build tasks [![triggers](https://i.stack.imgur.com/iW0l2.png)](https://i.stack.imgur.com/iW0l2.png) VSTS Triggers (default development) Upvotes: 1 [selected_answer]
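The branch-selector idea from the accepted answer can be sketched in Python as well: read the predefined `BUILD_SOURCEBRANCH` environment variable (VSTS sets it to `refs/heads/<branch>`), map the branch to a build flavor, and fall back to a default. The mapping below is illustrative, not taken from the screenshots:

```python
import os

# Illustrative branch → npm build flavor mapping (not from the question).
BRANCH_TO_ENV = {
    "master": "prod",
    "test": "test",
    "develop": "dev",
}

def build_env(source_branch):
    # VSTS exposes Build.SourceBranch as e.g. "refs/heads/test";
    # keep only the last path segment as the branch name.
    branch = source_branch.rsplit("/", 1)[-1]
    return BRANCH_TO_ENV.get(branch, "dev")  # default flavor

branch = os.environ.get("BUILD_SOURCEBRANCH", "refs/heads/develop")
print("npm build " + build_env(branch))
```

A script like this (or its shell equivalent) runs as one extra build step, so a single build definition triggered by multiple branches can pick the right argument for step 3.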
2018/03/21
292
1,121
<issue_start>username_0: I got this error message in IntelliJ while running a spring-boot server: ``` 'mvn' is not recognized as an internal or external command, operable program or batch file ``` It didn't recognize Maven when I was trying to build the project standing in my project folder with: mvn spring-boot:run I solved this by downloading Maven: <https://maven.apache.org/install.html> and manually added the bin file in the apache-maven-3.5.3 folder to libraries. It seems a bit backwards. Is there any other/better way?<issue_comment>username_1: If Maven is installed manually via: <http://maven.apache.org/download.cgi> and: `'mvn' is not recognized as an internal or external command, operable program or batch file`. Just **restart your IntelliJ IDEA** and run your spring-boot server again! Upvotes: 2 <issue_comment>username_2: In my case, I had to uncheck `Shell integration` in IntelliJIDEA Menu -> Preferences -> Tools -> Terminal [enter image description here](https://i.stack.imgur.com/7Smjt.png) Upvotes: 0 <issue_comment>username_3: In Windows, run your IntelliJ as administrator Upvotes: 0
2018/03/21
2,538
9,605
<issue_start>username_0: I have a view that uses ExpandableListView, with a ton of logic around it and in the adapters. E.g., it looks like this [![enter image description here](https://i.stack.imgur.com/sAWp7.png)](https://i.stack.imgur.com/sAWp7.png) I have a requirement to display the same view with a different skin that has the expand/collapse hidden and has a border around a parent and its children, something like this [![enter image description here](https://i.stack.imgur.com/QbRyP.png)](https://i.stack.imgur.com/QbRyP.png) I see attributes for a border around the whole control, or just the parent, or an individual child, but nothing for a border around a parent together with its children. Has anyone done something like this? Short of dropping ExpandableListView and recreating the view, is there any way I can achieve the border? Edit 1: [Here is a gist](https://gist.github.com/balarajagopal/903f9ec214dcee020b7e43a837cd5652) that has the template for what I am trying to do. Edit 2: I have a solution playing with parent and child borders,

```
setting parent to ┎─┒
and all-but-last children to ┃ ┃
and last child to ┖─┚
```

[Here is the gist for the solution I have so far](https://gist.github.com/balarajagopal/1df7c06c4d52e6cb59473417626554ee) I am still open to a better solution and will offer the bounty to anything that is less of a kludge than my solution.<issue_comment>username_1: You can try using [this](https://github.com/huajianjiang/ExpandableRecyclerView) library. It is a custom RecyclerView that implements features like ExpandableListView. Upvotes: 0 <issue_comment>username_2: Well, I have a solution for you, though it's better to use RecyclerView instead of ListView. We can draw a line on each needed side, e.g. for the parent group it will be something like `┎─┒`, for all children except the last it will be something like `┎ ┒`, and for the last child it will be like `──`. The code: *groupbg.xml*:
```
<?xml version="1.0" encoding="UTF-8"?>
```
*normalchild.xml*:
```
```
*bottomchild.xml*:
```
```
Now set it to your adapter:
```
private int childrenCount;

@Override
public int getChildrenCount(int groupPosition) {
    return childrenCount = data.get(groupPosition).getItems().length;
}

@Override
public View getGroupView(int groupPosition, boolean isExpanded, View convertView, ViewGroup parent) {
    View view;
    if (convertView == null) {
        view = LayoutInflater.from(context).inflate(R.layout.item, parent, false);
    } else {
        view = convertView;
    }
    view.setBackground(context.getResources().getDrawable(R.drawable.groupbg));
    TextView lblNumber = view.findViewById(R.id.lblNumber);
    TextView lblName = view.findViewById(R.id.lblName);
    lblNumber.setText((groupPosition + 1) + ".");
    lblName.setText(((TestModel) getGroup(groupPosition)).getCategory());
    return view;
}

@Override
public View getChildView(int groupPosition, int childPosition, boolean isLastChild, View convertView, ViewGroup parent) {
    View view;
    if (convertView == null) {
        view = LayoutInflater.from(context).inflate(R.layout.item_child, parent, false);
    } else {
        view = convertView;
    }
    TextView lblNumber = view.findViewById(R.id.lblNumber);
    TextView lblName = view.findViewById(R.id.lblName);
    lblNumber.setText((childPosition + 1) + ".");
    lblName.setText((String) getChild(groupPosition, childPosition));
    // Only the last child gets the bottom-border background.
    if (childPosition < childrenCount - 1)
        view.setBackground(context.getResources().getDrawable(R.drawable.normalchild));
    else
        view.setBackground(context.getResources().getDrawable(R.drawable.bottomchild));
    return view;
}
```
Upvotes: 1 <issue_comment>username_3: **EDIT** So I've added an `ItemDecoration` feature to `ExpandableListView`; it pretty much works like the `RecyclerView`'s `ItemDecoration`. Here is the code: Subclass the `ExpandableListView`:
```
public class ExpandableListViewItemDecoration extends ExpandableListView {

    private List itemDecorations = new ArrayList<>(1);

    /\* ...
\*/ public void addItemDecoration(ItemDecorationListView item){ itemDecorations.add(item); } @Override public void draw(Canvas canvas) { super.draw(canvas); final int count = itemDecorations.size(); for (int i = 0; i < count; i++) { itemDecorations.get(i).onDrawOver(canvas, this); } } ``` ItemDecorationListView: ``` public abstract class ItemDecorationListView { public void onDrawOver(Canvas c, ListView parent) { } } ``` The ItemDecorator: ``` public class ItemDecoratorBorderListView extends ItemDecorationListView { private final Paint paint = new Paint(); private final int size; public ItemDecoratorBorderListView(int size, @ColorInt int color) { this.size = size; paint.setColor(color); paint.setStrokeWidth(size); paint.setStyle(Paint.Style.STROKE); } public static final String TAG = ItemDecoratorBorderListView.class.getSimpleName(); @Override public void onDrawOver(Canvas c, ListView parent) { super.onDrawOver(c, parent); int childCount = parent.getChildCount(); for (int i = 0; i < childCount; i++) { View child = parent.getChildAt(i); if (isHeader(child, parent, i)) { for (int j = i + 1; j < childCount; j++) { View childEnd = parent.getChildAt(j); boolean end = isHeader(childEnd, parent, i) || j == childCount - 1; if (end) { if (BuildConfig.DEBUG) { Log.d(TAG, String.format(Locale.ENGLISH, "Draw called i: %d, j: %d", i, j)); } childEnd = parent.getChildAt(j - 1); if (j == childCount - 1) { childEnd = parent.getChildAt(j); } float top = child.getTop() + child.getTranslationY() + size + child.getPaddingTop(); float bottom = childEnd.getBottom() + childEnd.getTranslationY() - size - childEnd .getPaddingBottom(); float right = child.getRight() + child.getTranslationX() - size - child.getPaddingRight(); float left = child.getLeft() + child.getTranslationX() + size + child.getPaddingLeft(); c.drawRect(left, top, right, bottom, paint); i = j - 1; break; } } } } } public boolean isHeader(View child, ListView parent, int position) { //You need to set an Id for your layout 
return child.getId() == R.id.header; } ``` } And just add it to your `ExpandableListView`: ``` expandableList.addItemDecoration(new ItemDecoratorBorderListView( getResources().getDimensionPixelSize(R.dimen.stroke_size), Color.GRAY )); ``` ![](https://i.stack.imgur.com/hkUQa.png) ***Old Answer:*** This is an implementation with `RecyclerView` and `ItemDecoration`, I've already written this solution before knowing you're stuck with legacy code, So I'm sharing this anyway. Item Decoration: ``` public class ItemDecoratorBorder extends RecyclerView.ItemDecoration { private final Paint paint = new Paint(); private final int size; public ItemDecoratorBorder(int size, @ColorInt int color) { this.size = size; paint.setColor(color); paint.setStrokeWidth(size); paint.setStyle(Paint.Style.STROKE); } public static final String TAG = ItemDecoratorBorder.class.getSimpleName(); @Override public void onDrawOver(Canvas c, RecyclerView parent, RecyclerView.State state) { super.onDrawOver(c, parent, state); if (parent.getLayoutManager() == null) { return; } int childCount = parent.getChildCount(); RecyclerView.LayoutManager lm = parent.getLayoutManager(); for (int i = 0; i < childCount; i++) { View child = parent.getChildAt(i); if (isHeader(child, parent)) { for (int j = i + 1; j < childCount; j++) { View childEnd = parent.getChildAt(j); boolean end = isHeader(childEnd, parent) || j == childCount - 1; if (end) { if (BuildConfig.DEBUG) { Log.d(TAG, String.format(Locale.ENGLISH, "Draw called i: %d, j: %d", i, j)); } childEnd = parent.getChildAt(j - 1); if (j == childCount - 1) { childEnd = parent.getChildAt(j); } float top = child.getTop() + child.getTranslationY() + size + child.getPaddingTop(); float bottom = lm.getDecoratedBottom(childEnd) + childEnd.getTranslationY() - size - childEnd.getPaddingBottom(); float right = lm.getDecoratedRight(child) + child.getTranslationX() - size - child.getPaddingRight(); float left = lm.getDecoratedLeft(child) + child.getTranslationX() + size + 
child.getPaddingLeft(); c.drawRect(left, top, right, bottom, paint); i = j - 1; break; } } } } } public boolean isHeader(View child, RecyclerView parent) { int viewType = parent.getLayoutManager().getItemViewType(child); return viewType == R.layout.layout_header; } ``` I'm finding where a group starts and ends using the view types and draw a rectangle around the start and end position. The code is available at [my github repo](https://github.com/MRezaNasirloo/GroupBorder) ![](https://i.stack.imgur.com/Ok90h.png) Upvotes: 2
2018/03/21
2,579
6,850
<issue_start>username_0: I have three SQL tables as below (the "orders" table below is not complete): [![enter image description here](https://i.stack.imgur.com/COHrF.jpg)](https://i.stack.imgur.com/COHrF.jpg) How can I solve the following problem using just one SQL query: Select the customers who ordered in 2014 all the products (at least) that the customers named 'Smith' ordered in 2013. Is this possible? I have thought about this: 1. Firstly, find all the products that the client named "Smith" ordered in 2013. 2. Secondly, find the list of customers who have ordered at least all the above products in 2014. Which brings me to a SQL query like this:

```
SELECT cname
FROM customers NATURAL JOIN orders
WHERE YEAR(odate) = '2014'
AND "do_something_here" (SELECT DISTINCT pid
                         FROM orders NATURAL JOIN customers
                         WHERE LOWER(cname)='smith' AND YEAR(odate)='2013') as results;
```

in which all the subquery results (the list of products that "Smith" ordered in 2013) should be used to find the clients needed. But I don't know if this is a good approach. Sorry for my English because I am not a native speaker.
If you want to test it out on phpMyAdmin, here's the SQL: ``` -- phpMyAdmin SQL Dump -- version 4.7.5 -- https://www.phpmyadmin.net/ -- -- Host: localhost -- Generation Time: Mar 21, 2018 at 02:49 PM -- Server version: 5.7.20 -- PHP Version: 7.1.7 SET SQL_MODE = "NO_AUTO_VALUE_ON_ZERO"; SET AUTOCOMMIT = 0; START TRANSACTION; SET time_zone = "+00:00"; /*!40101 SET @OLD_CHARACTER_SET_CLIENT=@@CHARACTER_SET_CLIENT */; /*!40101 SET @OLD_CHARACTER_SET_RESULTS=@@CHARACTER_SET_RESULTS */; /*!40101 SET @OLD_COLLATION_CONNECTION=@@COLLATION_CONNECTION */; /*!40101 SET NAMES utf8mb4 */; -- -- Database: `tp1` -- -- -------------------------------------------------------- -- -- Table structure for table `customers` -- CREATE TABLE `customers` ( `cid` int(11) NOT NULL, `cname` varchar(30) NOT NULL, `residence` varchar(50) DEFAULT NULL ) ENGINE=InnoDB DEFAULT CHARSET=utf8; -- -- Dumping data for table `customers` -- INSERT INTO `customers` (`cid`, `cname`, `residence`) VALUES (0, 'didnotorder', 'Great Britain'), (1, 'Jones', 'USA'), (2, 'Blake', NULL), (3, 'Dupond', 'France'), (4, 'Smith', 'Great Britain'), (5, 'Gupta', 'India'), (6, 'Smith', 'France'); -- -------------------------------------------------------- -- -- Table structure for table `orders` -- CREATE TABLE `orders` ( `pid` int(11) NOT NULL, `cid` int(11) NOT NULL, `odate` date NOT NULL, `quantity` int(11) NOT NULL ) ENGINE=InnoDB DEFAULT CHARSET=utf8; -- -- Dumping data for table `orders` -- INSERT INTO `orders` (`pid`, `cid`, `odate`, `quantity`) VALUES (1, 1, '2014-12-12', 2), (1, 4, '2014-11-12', 6), (2, 1, '2014-06-02', 6), (2, 1, '2014-08-20', 6), (2, 1, '2014-12-12', 2), (2, 2, '2010-11-12', 1), (2, 2, '2014-07-21', 3), (2, 3, '2014-10-01', 1), (2, 3, '2014-11-12', 1), (2, 4, '2014-01-07', 1), (2, 4, '2014-02-22', 1), (2, 4, '2014-03-19', 1), (2, 4, '2014-04-07', 1), (2, 4, '2014-05-22', 1), (2, 4, '2014-09-12', 4), (2, 6, '2014-10-01', 1), (3, 1, '2014-12-12', 1), (3, 2, '2013-01-01', 1), (3, 4, '2015-10-12', 
1), (3, 4, '2015-11-12', 1), (4, 1, '2014-12-12', 3), (4, 2, '2014-06-11', 2), (4, 5, '2014-10-12', 1), (4, 5, '2014-11-13', 5), (5, 2, '2015-07-21', 3), (6, 2, '2015-07-21', 7), (6, 3, '2014-12-25', 1); -- -------------------------------------------------------- -- -- Table structure for table `products` -- CREATE TABLE `products` ( `pid` int(11) NOT NULL, `pname` varchar(30) NOT NULL, `price` decimal(7,2) NOT NULL, `origin` varchar(20) DEFAULT NULL ) ENGINE=InnoDB DEFAULT CHARSET=utf8; -- -- Dumping data for table `products` -- INSERT INTO `products` (`pid`, `pname`, `price`, `origin`) VALUES (0, 'wasnotordered', '11.00', NULL), (1, 'chocolate', '5.00', 'Belgium'), (2, 'sugar', '0.75', 'India'), (3, 'milk', '0.60', 'France'), (4, 'tea', '10.00', 'India'), (5, 'chocolate', '7.50', 'Switzerland'), (6, 'milk', '1.50', 'France'); -- -- Indexes for dumped tables -- -- -- Indexes for table `customers` -- ALTER TABLE `customers` ADD PRIMARY KEY (`cid`); -- -- Indexes for table `orders` -- ALTER TABLE `orders` ADD PRIMARY KEY (`pid`,`cid`,`odate`), ADD KEY `orders_fk_cid` (`cid`); -- -- Indexes for table `products` -- ALTER TABLE `products` ADD PRIMARY KEY (`pid`); -- -- Constraints for dumped tables -- -- -- Constraints for table `orders` -- ALTER TABLE `orders` ADD CONSTRAINT `orders_fk_cid` FOREIGN KEY (`cid`) REFERENCES `customers` (`cid`), ADD CONSTRAINT `orders_fk_pid` FOREIGN KEY (`pid`) REFERENCES `products` (`pid`); COMMIT; /*!40101 SET CHARACTER_SET_CLIENT=@OLD_CHARACTER_SET_CLIENT */; /*!40101 SET CHARACTER_SET_RESULTS=@OLD_CHARACTER_SET_RESULTS */; /*!40101 SET COLLATION_CONNECTION=@OLD_COLLATION_CONNECTION */; ```<issue_comment>username_1: You can try something like the following. Basically force join the customers with all the products from smith of 2013, then `LEFT JOIN` with the products each customer bought of 2014. If both counts are equal means that all products from smith of 2013 were bought at least once in 2014, for each customer. 
```
SELECT C.cid
FROM Customers C
CROSS JOIN (
    SELECT DISTINCT P.pid
    FROM Customers C
    INNER JOIN Orders O ON C.cid = O.cid
    INNER JOIN Products P ON O.pid = P.pid
    WHERE C.cname = 'Smith' AND YEAR(O.odate) = 2013) X
LEFT JOIN (
    SELECT DISTINCT C.cid, P.pid
    FROM Customers C
    INNER JOIN Orders O ON C.cid = O.cid
    INNER JOIN Products P ON O.pid = P.pid
    WHERE YEAR(O.odate) = 2014) R
    ON C.cid = R.cid AND X.pid = R.pid
GROUP BY C.cid
HAVING COUNT(X.pid) = COUNT(R.pid)
```

If you want to see the customers even if there are no products from Smith of 2013, you can switch the `CROSS JOIN` for a `FULL JOIN (...) X ON 1 = 1`. Upvotes: 2 [selected_answer]<issue_comment>username_2: You can solve this by finding all the products that each `cid` has in common with the Smith customers. Then, just check that the count covers all the products:

```
select o2014.cid, count(distinct o2013.pid) as num_products,
       group_concat(distinct o2013.pid) as products
from orders o2013 join
     orders o2014
     on o2013.pid = o2014.pid and
        year(o2013.odate) = 2013 and
        year(o2014.odate) = 2014
where o2013.cid in (select c.cid from customers c where c.cname = 'Smith')
group by o2014.cid
having num_products = (select count(distinct o2013.pid)
                       from orders o2013
                       where o2013.cid in (select c.cid from customers c where c.cname = 'Smith')
                      );
```

Upvotes: 0
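The accepted answer is a form of relational division: every product Smith ordered in 2013 must appear among a customer's 2014 products, which the HAVING clause verifies by comparing counts. Stripped of SQL, the check is a subset test; a Python sketch over hypothetical sample data (not the dump above):

```python
# Hypothetical sample data: one (customer, product_id, year) tuple per order line.
orders = [
    ("Smith", 3, 2013),
    ("Jones", 1, 2014), ("Jones", 2, 2014), ("Jones", 3, 2014),
    ("Blake", 2, 2014),
]

# Products Smith ordered in 2013 -- the set every candidate must cover.
smith_2013 = {p for c, p, y in orders if c == "Smith" and y == 2013}

def ordered_all_smith_products(customer):
    bought_2014 = {p for c, p, y in orders if c == customer and y == 2014}
    return smith_2013 <= bought_2014  # subset test

matches = sorted({c for c, _, y in orders if y == 2014 and ordered_all_smith_products(c)})
print(matches)  # -> ['Jones']
```

The COUNT comparison in the SQL plays the role of the `<=` subset operator here.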
2018/03/21
946
3,255
<issue_start>username_0: I'm kind of new and learning Microsoft SQL. So I have these 2 tables:

```
CREATE TABLE Char_item(
    char_id VARCHAR(5) NOT NULL REFERENCES Character(char_id) ON DELETE CASCADE ON UPDATE CASCADE,
    item_id VARCHAR(5) NOT NULL REFERENCES Item(item_id) ON DELETE CASCADE ON UPDATE CASCADE,
    item_qty INTEGER NOT NULL CHECK(item_qty >= 0),
    PRIMARY KEY(char_id, item_id)
);

CREATE TABLE Item(
    item_id VARCHAR(5) NOT NULL UNIQUE,
    item_name VARCHAR(20) NOT NULL UNIQUE,
    item_desc VARCHAR(50) NOT NULL,
    unit_price FLOAT NOT NULL,
    armor_id VARCHAR(5) REFERENCES Armor(armor_id) ON DELETE CASCADE ON UPDATE CASCADE,
    weapon_id VARCHAR(5) REFERENCES Weapon(weapon_id) ON DELETE CASCADE ON UPDATE CASCADE
);
```

Char_item has the item and its quantity for each item in the inventory, while the Item table holds the unit_price per item. I want to create a stored procedure that helps me in getting the TOTAL PRICE of my ENTIRE inventory. Using SELECT item_id will give me all item_ids in one go, which I don't want, because then I won't be able to calculate the total price for each specific item found in the inventory. I was thinking about making a loop but I am kind of stuck on how to proceed.
2018/03/21
486
1,634
<issue_start>username_0: I have a large dump of data in json that looks like: ``` [{ "recordList" : { "record" : [{ "Production" : { "creator" : { "name" : "A" } } }, { "Production" : {} }, { "Production" : [{ "creator" : { "name" : "B" }, "creator" : { "name" : "C" } }] }] } }] ``` I need to check if there is at least one creator in a record or not. If there is I give a 1 else a 0 for that field in a CSV-file. My code: ``` jq -r '.[].recordList.record[]|"\(if ((.Production.creator.name)? // (.Production[]?.creator.name)?) == null or ((.Production.creator.name)?|length // (.Production[]?.creator.name)?|length) == 0 then 0 else 1 end),"' file.json ``` The problem is that the field 'Production' is only an array when there are multiple creators. The result I want to get in this case is: ``` 1, 0, 1, ```<issue_comment>username_1: **`jq`** solution: ``` jq -r '.[].recordList.record[].Production | "\(if ((type == "array" and .[0].creator.name !="") or (type == "object" and .creator.name and .creator.name !="")) then 1 else 0 end),"' file.json ``` The output: ``` 1, 0, 1, ``` Upvotes: 2 [selected_answer]<issue_comment>username_2: Simplified jq solution: ``` jq -r '.[].recordList.record[].Production | ((type == "array" and .[0].creator.name) or .creator.name) | if . then "1," else "0," end' file.json ``` Upvotes: 0
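For comparison, the same has-a-creator check can be written in Python, where normalising `Production` to a list first sidesteps the array-versus-object problem the jq answers have to branch on. This is an illustrative sketch, not a jq replacement:

```python
records = [
    {"Production": {"creator": {"name": "A"}}},
    {"Production": {}},
    {"Production": [{"creator": {"name": "B"}}]},
]

def has_creator(record):
    production = record["Production"]
    if not isinstance(production, list):  # wrap the single-object case
        production = [production]
    # 1 if any entry carries a non-empty creator name, else 0
    return int(any(p.get("creator", {}).get("name") for p in production))

flags = [has_creator(r) for r in records]
print(flags)  # -> [1, 0, 1]
```

In jq, the `type == "array"` tests do the same normalisation by hand.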
2018/03/21
540
1,573
<issue_start>username_0: By following many docs/tutorials I implemented SSL with Kestrel and a reverse proxy in my SF cluster. I made it work, but the access point URL is as follows: <https://mycluster.westeurope.cloudapp.azure.com:19081> Before I implemented HTTPS, I had a CNAME mycustomdomain.com redirecting to mycluster.westeurope.cloudapp.azure.com, which was working fine. So now, I would like to know if there's a way to call <http://mycustomdomain.com> and access the actual URI. Is there a way with what I already have in place, through probes/lbrules for example? Or do I have to implement an Application Gateway, use API Management, or something else? Edit: LBRules+Probes

```
AppPortProbe : 44338 (backend ssl port in the SF)
FabricGatewayProbe : 19000
FabricHttpGatewayProbe : 19080
SFReverseProxyProbe : 19081

[Rule : Probe]
[AppPortLBRule (TCP/80 to TCP/19081) : 19081]
[LBHttpRule (TCP/19080) : 19080]
[LBRule (TCP/19000) : 19000]
[LBSFReverseProxyRule (TCP/19081 to TCP/44338) : 44338]
```
2018/03/21
444
1,614
<issue_start>username_0: I'm posting this because I searched stackoverflow and docs for a long time without finding an answer -- hopefully this helps somebody out. The question is, for testing purposes, how do I find the URL that's related to admin actions for a specific model? Admin model URLs can all be found by `reverse("admin:appname_modelname_*")`, where `*` is the action (change, delete, etc). But I couldn't find one for the admin actions, and since I was defining custom actions, I'd like to get the URL.<issue_comment>username_1: The answer, which is hard to find, is that actions are referenced by `reverse("admin:appname_modelname_changelist")` Upvotes: 0 <issue_comment>username_2: This took a fair bit of digging, I couldn't find anything in the Django docs about it and I ended up having to inspect the source code of a third party library. Essentially there are 2 URL patterns, one for bulk actions and one for object actions:

* Bulk: `r'admin/<app>/<model>/actions/(?P<tool>\w+)/$'`
* Object: `r'admin/<app>/<model>/(?P<pk>.+)/actions/(?P<tool>\w+)/$'`

The URL name pattern is `<app>_<model>_actions`. Therefore we can **reverse the bulk view**:

* Using args: `reverse("admin:appname_modelname_actions", args=["foo"])`
* Using kwargs: `reverse("admin:appname_modelname_actions", kwargs={"tool": "foo"})`

and **reverse the object view:**

* Using args: `reverse("admin:appname_modelname_actions", args=[1, "foo"])`
* Using kwargs: `reverse("admin:appname_modelname_actions", kwargs={"pk": 1, "tool": "foo"})`

Upvotes: 1 <issue_comment>username_3: The URL for all custom actions is `reverse("admin:appname_modelname_changelist")`, but the action name is specified in the `action` field of the POST data. Upvotes: 1
2018/03/21
370
1,241
<issue_start>username_0: I want to SELECT multiple rows and then INSERT them INTO another table/database. My current query only works with 1 result; I need it to work with, for example, 100:

```
DECLARE @var INT;

SELECT @var = column FROM database.dbo.table1 -- this will produce for example, 100 results

IF (@var IS NULL) -- which it is all 100 times
INSERT INTO database.dbo.table2 (column)
VALUES (@var)
```

How do I do this, can this even be done? I'm using Microsoft SQL Server Management Studio 2016.<issue_comment>username_1: I assume you want:

```
INSERT INTO database.dbo.table2(column)
    SELECT column
    FROM database.dbo.table1
    WHERE column IS NULL;
```

Upvotes: 3 [selected_answer]<issue_comment>username_2: You can use a cursor to insert the data, like below:

```
DECLARE @var INT;

DECLARE AIX CURSOR FOR
    SELECT column FROM database.dbo.table1;

OPEN AIX;
FETCH NEXT FROM AIX INTO @var; -- this will produce for example, 100 results

WHILE @@FETCH_STATUS = 0
BEGIN
    IF (@var IS NULL) -- which it is all 100 times
        INSERT INTO database.dbo.table2 (column) VALUES (@var)
    FETCH NEXT FROM AIX INTO @var;
END

CLOSE AIX;
DEALLOCATE AIX;
```

Upvotes: 0
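The accepted set-based `INSERT ... SELECT` can be tried end to end in SQLite; the table and column names below are placeholders standing in for the ones in the question (`column` itself is a reserved word, so the sketch uses `col`):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE table1 (col INTEGER)")
con.execute("CREATE TABLE table2 (col INTEGER)")
con.executemany("INSERT INTO table1 (col) VALUES (?)", [(1,), (None,), (None,)])

# One set-based statement copies every matching row; no cursor loop needed.
con.execute("INSERT INTO table2 (col) SELECT col FROM table1 WHERE col IS NULL")

copied = con.execute("SELECT COUNT(*) FROM table2").fetchone()[0]
print(copied)  # -> 2
```

This is also why the cursor answer is usually unnecessary: the WHERE clause already expresses the per-row IF.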
2018/03/21
948
3,197
<issue_start>username_0: Source used before asking: [Pandas: Iterate through a list of DataFrames and export each to excel sheets](https://stackoverflow.com/questions/36973867/pandas-iterate-through-a-list-of-dataframes-and-export-each-to-excel-sheets?rq=1) [Splitting dataframe into multiple dataframes](https://stackoverflow.com/questions/19790790/splitting-dataframe-into-multiple-dataframes) I have managed to do all of this:

```
# sort the dataframe
df.sort(columns=['name'], inplace=True)
# set the index to be this and don't drop
df.set_index(keys=['name'], drop=False, inplace=True)
# get a list of names
names = df['name'].unique().tolist()
# now we can perform a lookup on a 'view' of the dataframe
joe = df.loc[df.name=='joe']
# now you can query all 'joes'
```

I have managed to make this work: `joe = df.loc[df.name=='joe']` gave exactly the result I was looking for. As a solution to make it work for a big amount of data, I found this potential approach:

```
writer = pandas.ExcelWriter("MyData.xlsx", engine='xlsxwriter')
List = [Data, ByBrand]
for i in List:
    i.to_excel(writer, sheet_name=i)
writer.save()
```

Currently I have:

```
teacher_names = ['Teacher A', 'Teacher B', 'Teacher C']
```

df =

```
              ID    Teacher_name  Student_name
Teacher_name
Teacher A     1.0   Teacher A     Student 1
Teacher A     NaN   Teacher A     Student 2
Teacher B     0.0   Teacher B     Student 3
Teacher C     2.0   Teacher C     Student 4
```

If I use `test = df.loc[df.Teacher_name=='Teacher A']`, I receive the exact result. **Question:** How can I automate this so that each filtered result ("test") is saved to a separate sheet named after the teacher (`.to_excel(writer, sheet_name=Teacher_name)`), for every teacher that exists in the database?
``` names = df['name'].unique().tolist() writer = pandas.ExcelWriter("MyData.xlsx", engine='xlsxwriter') for myname in names: mydf = df.loc[df.name==myname] mydf.to_excel(writer, sheet_name=myname) writer.save() ``` Upvotes: 4 [selected_answer]<issue_comment>username_2: @username_1, the text 'sheetname' is to be replaced with 'sheet\_name'. Also once the 'name' variable is converted to list, upon running the for loop for creating multiple sheets based on unique name value, I get the following error: ``` InvalidWorksheetName: Invalid Excel character '[]:*?/\' in sheetname '['. ``` Alternate way of creating multiple worksheets (in a single excel file) based on a column value (through a function): ``` def writesheet(g): a=g['name'].tolist()[0] g.to_excel(writer,sheet_name = str(a),index=False) df.groupby('name').apply(writesheet) writer.save() ``` Source: [How to split a large excel file into multiple worksheets based on their given ip address using pandas python](https://stackoverflow.com/questions/54996477/how-to-split-a-large-excel-file-into-multiple-worksheets-based-on-their-given-ip) Upvotes: 0
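Underneath both answers is a plain group-by on the name column: partition the rows by each unique value, then write one sheet per bucket. The partitioning step, sketched without pandas for illustration:

```python
from collections import defaultdict

# Hypothetical rows standing in for the DataFrame's records.
rows = [
    {"name": "joe", "score": 1},
    {"name": "ann", "score": 2},
    {"name": "joe", "score": 3},
]

groups = defaultdict(list)
for row in rows:
    groups[row["name"]].append(row)  # one bucket per unique name

# Each bucket would become one worksheet, keyed by its sheet name.
sheet_names = sorted(groups)
print(sheet_names)  # -> ['ann', 'joe']
```

`df.groupby('name')` performs exactly this bucketing, which is why the `apply`-based answer needs no explicit filtering loop.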
2018/03/21
2,081
6,099
<issue_start>username_0: I'm working on an Ionic 3 project in which the response http data is in JSON array format like this (from the console): Country Array (5) ``` 0: {record_id: "1", local_TimeStamp: "16:00:00", country: "USA"} 1: {record_id: "2", local_TimeStamp: "17:00:00", country: "Japan"} 2: {record_id: "3", local_TimeStamp: "17:00:00", country: "Korea"} 3: {record_id: "4", local_TimeStamp: "15:00:00", country: "Thailand"} 4: {record_id: "5", local_TimeStamp: "16:00:00", country: "China"} ``` How to (1) delete one of the above items (2) append to the last index of the above JSON array. Note: Due to some special view requirements, step 1 and 2 need to be separate. So the result JSON array will look like this: ``` 0: {record_id: "1", local_TimeStamp: "16:00:00", country: "USA"} 1: {record_id: "2", local_TimeStamp: "17:00:00", country: "Japan"} 3: {record_id: "4", local_TimeStamp: "15:00:00", country: "Thailand"} 4: {record_id: "5", local_TimeStamp: "16:00:00", country: "China"} 2: {record_id: "3", local_TimeStamp: "17:00:00", country: "Korea"} <- Moved ``` I'd tried this code: ``` country.splice(country.findIndex(e => e.country === 'Korea'),1); country = [...country]; // From console it's OK. Record deleted. // Next append the element back: country.push({record_id: "3", local_TimeStamp: "17:00:00", country: "Korea"}); country = [...country]; // From console looks OK. Element appended to the last index of the Json array. 
``` But if I run the code again: ``` country.splice(country.findIndex(e => e.country === 'Korea'),1); ``` It can not find the element Korea anymore.<issue_comment>username_1: Use `array.prototype.slice`, `array.prototype.find` and `spread`: ```js var arr = [ {record_id: "1", local_TimeStamp: "16:00:00", country: "USA"}, {record_id: "2", local_TimeStamp: "17:00:00", country: "Japan"}, {record_id: "3", local_TimeStamp: "17:00:00", country: "Korea"}, {record_id: "4", local_TimeStamp: "15:00:00", country: "Thailand"}, {record_id: "5", local_TimeStamp: "16:00:00", country: "China"} ]; var index = arr.findIndex(e => e.country === "Korea"); var a = arr.slice(0, index); var b = arr.slice(index + 1); var c = arr.find(e => e.country === "Korea"); var result = [...a, ...b, c]; console.log(result); ``` Upvotes: 0 <issue_comment>username_2: [<NAME>](https://stackoverflow.com/users/91403/renato-gama)'s comment might do the trick for you. But *if* you need to do these steps separately: > > How to (1) delete one of the above items > > > You can use `Array.prototype.filter` to get an array without certain record, for example: `myArray.filter(record => record.country !== "Korea")`. But considering you want to add the record again, you could use `Array.prototype.splice`, like this: ``` var removed = myArray.splice(/* arguments */); ``` Above would be step 1 and step 2 would be pushing it again - `myArray.push(removed)`; > > But if I run the code again: > > > country.splice(country.findIndex(e => e.country === 'Korea'),1); > > > It can not find the element Korea anymore. > > > You can't find the element, since you removed it with `splice` and didn't save removed element to a variable. Upvotes: 0 <issue_comment>username_3: It looks like you're doing some unnecessary destructuring with the spread operator. 
Here's a simple example using functions: **Example**

```
let myCountries = [
    {record_id: "1", local_TimeStamp: "16:00:00", country: "USA"},
    {record_id: "2", local_TimeStamp: "17:00:00", country: "Japan"},
    {record_id: "3", local_TimeStamp: "17:00:00", country: "Korea"},
    {record_id: "4", local_TimeStamp: "15:00:00", country: "Thailand"},
    {record_id: "5", local_TimeStamp: "16:00:00", country: "China"}
];

function removeCountry(country, countries) {
    const index = countries.findIndex(c => c.country === country);
    // Note: Country not found will mean index is -1
    // which will remove the last country from the array.
    return countries.splice(index, 1);
}

function appendCountry(country, countries) {
    if (!country) return;
    countries.push(country);
    return countries;
}

// Usage (splice returns an array of the removed elements):
const removed = removeCountry('Korea', myCountries);
appendCountry(removed[0], myCountries);
```

**Update** I've given an example of how to use this and save the removed country to a variable for easier appending. Upvotes: 2 [selected_answer]<issue_comment>username_4: you could do it like this:

```js
let country = [
    { record_id: "1", local_TimeStamp: "16:00:00", country: "USA" },
    { record_id: "2", local_TimeStamp: "17:00:00", country: "Japan" },
    { record_id: "3", local_TimeStamp: "17:00:00", country: "Korea" },
    { record_id: "4", local_TimeStamp: "15:00:00", country: "Thailand" },
    { record_id: "5", local_TimeStamp: "16:00:00", country: "China" }
];

const korea = country.splice(country.findIndex(e => e.country === 'Korea'), 1);
country = [...country, ...korea];
```

Upvotes: 1 <issue_comment>username_5: Something like this might do it, although the indices you supply in your expected output don't make sense to me:

```js
const originalCountries = [
    {record_id: "1", local_TimeStamp: "16:00:00", country: "USA"},
    {record_id: "2", local_TimeStamp: "17:00:00", country: "Japan"},
    {record_id: "3", local_TimeStamp: "17:00:00", country: "Korea"},
    {record_id: "4", local_TimeStamp: "15:00:00", country: "Thailand"},
    {record_id: "5",
local_TimeStamp: "16:00:00", country: "China"} ]; const removeCountry = (name, countries) => { const idx = countries.findIndex(c => c.country === name); return idx > -1 ? { countries: countries.slice(0, idx).concat(countries.slice(idx + 1)), country: countries[idx] } : {countries, country: null}; }; const addCountry = (country, countries) => countries.concat(country); console.log('Step 1'); const {countries, country} = removeCountry('Korea', originalCountries); console.log(countries); console.log('Step 2'); const updatedCountries = addCountry(country, countries); console.log(updatedCountries); ``` Upvotes: 0
2018/03/21
143
524
<issue_start>username_0: How do I find out what format the data I am trying to request is in? The data can be found at the following address: <https://api.coinmarketcap.com/v1/ticker/> Thank you :)<issue_comment>username_1: The `Content-Type` header field in the response is what you're looking for. Upvotes: 0 <issue_comment>username_2: The response headers for this request include ``` content-type: application/json ``` as your browser will tell you. So it is [JSON](https://json.org/). Upvotes: 2 [selected_answer]
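To check the header programmatically rather than in the browser's dev tools, you can parse the `Content-Type` value. A minimal Python sketch (the header string here is hard-coded as an example rather than fetched over the network):

```python
from email.message import Message

# The raw header value as it would appear in the HTTP response.
raw = "application/json; charset=utf-8"

# email.message.Message understands the same "type; param=value" syntax
# that HTTP Content-Type headers use.
msg = Message()
msg["Content-Type"] = raw

print(msg.get_content_type())    # the MIME type, without parameters
print(msg.get_param("charset"))  # an individual parameter
```

The same parsing works for any response object that exposes its headers as strings.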
2018/03/21
428
1,119
<issue_start>username_0: I want to find an elegant way in Python to change a URL like `"file:///C:/AAA/BBB"` to `"C:\AAA\BBB"`.<issue_comment>username_1: You can `split` then `join`:

```
'\\'.join(s[len('file:///'):].split('/'))
```

Upvotes: 2 <issue_comment>username_2:

```
'\\'.join(s.split('/')[3:])
```

Edit: `s.split('/')` yields `['file:', '', '', 'C:', 'AAA', 'BBB']`, so slicing from index 3 drops `'file:'` and the two empty strings and no filtering is needed. Upvotes: 1 <issue_comment>username_3: Using `str.replace()`:

```
s = "file:///C:/AAA/BBB"
s_new = s.replace("file:///", "").replace("/", "\\")
print(s_new)  # C:\AAA\BBB
```

**Timing Results** On my laptop running python 2.x

```
#@liliscent's solution
%%timeit '\\'.join(s[len('file:///'):].split('/'))
#1000000 loops, best of 3: 603 ns per loop

#@username_3's solution
%%timeit s_new = s.replace("file:///", "").replace("/", "\\")
#1000000 loops, best of 3: 555 ns per loop

#combination of both above
%%timeit s[len('file:///'):].replace('/', '\\')
#1000000 loops, best of 3: 396 ns per loop

#username_2's solution
%%timeit '\\'.join(s.split('/')[3:])
#1000000 loops, best of 3: 696 ns per loop
```

Upvotes: 0
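The answers above treat the URL as a plain string. A slightly more robust sketch using only the standard library, which also decodes percent-escapes such as `%20` (the drive-letter handling assumes `file:///C:/...`-style Windows URLs):

```python
from urllib.parse import urlsplit, unquote

def file_url_to_windows_path(url):
    # "file:///C:/AAA/BBB" -> "/C:/AAA/BBB" (the path component, decoded)
    path = unquote(urlsplit(url).path)
    # Drop the slash in front of the drive letter, then flip the separators.
    if len(path) > 2 and path[0] == "/" and path[2] == ":":
        path = path[1:]
    return path.replace("/", "\\")

print(file_url_to_windows_path("file:///C:/AAA/BBB"))  # C:\AAA\BBB
```

Unlike the plain `replace("file:///", "")` approach, this also handles URLs like `file:///C:/My%20Docs/x.txt`.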
2018/03/21
1,175
3,065
<issue_start>username_0: I have an AJAX request to my REST webservice with a custom header "login". Here is my rest configuration:

```
restConfiguration()
    .component("netty4-http")
    .bindingMode(RestBindingMode.json)
    .dataFormatProperty("prettyPrint", "true")
    .enableCORS(true)
    .corsAllowCredentials(true)
    .corsHeaderProperty("Access-Control-Allow-Headers",
        "Origin, Accept, X-Requested-With, Content-Type, Access-Control-Request-Method, Access-Control-Request-Headers, login")
    .contextPath(contextPath).host(host).port(port);
```

I'm getting a 200 response to the OPTIONS preflight request, but the "login" header is not in the Access-Control-Allow-Headers and my browser never sends the actual request. Also, I have not done any CORS configuration in my route. Here are my request headers:

```
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:58.0) Gecko/20100101 Firefox/58.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: fr,fr-FR;q=0.8,en-US;q=0.5,en;q=0.3
Accept-Encoding: gzip, deflate
Access-Control-Request-Method: GET
Access-Control-Request-Headers: login
Origin: http://127.0.0.1:8081
DNT: 1
Connection: keep-alive
```

And response headers:

```
content-length: 0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Encoding: gzip, deflate
Accept-Language: fr,fr-FR;q=0.8,en-US;q=0.5,en;q=0.3
Access-Control-Allow-Credentials: true
Access-Control-Allow-Headers: Origin, Accept, X-Requested-With, Content-Type, Access-Control-Request-Method, Access-Control-Request-Headers
Access-Control-Allow-Methods: GET, HEAD, POST, PUT, DELETE, TRACE, OPTIONS, CONNECT, PATCH
Access-Control-Allow-Origin: http://127.0.0.1:8081
Access-Control-Max-Age: 3600
Access-Control-Request-Headers: login
Access-Control-Request-Method: GET
breadcrumbId: ID-resitt-ws-1521624297667-0-6
DNT: 1
Origin: http://127.0.0.1:8081
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:58.0) Gecko/20100101 Firefox/58.0
connection: keep-alive
```

I feel like my CORS configuration isn't changing anything.<issue_comment>username_1: We found an answer: `.enableCORS(true)` must be placed after the `.contextPath(contextPath).host(host).port(port)` line, like this:

```
restConfiguration()
    .component("netty4-http")
    .bindingMode(RestBindingMode.json)
    .dataFormatProperty("prettyPrint", "true")
    .contextPath(contextPath).host(host).port(port)
    .enableCORS(true)
    .corsAllowCredentials(true)
    .corsHeaderProperty("Access-Control-Allow-Headers",
        "Origin, Accept, X-Requested-With, Content-Type, Access-Control-Request-Method, Access-Control-Request-Headers, login");
```

Upvotes: 3 [selected_answer]<issue_comment>username_2: In my case the CORS headers weren't in the HTTP response when Camel threw an exception and returned a 500 error. Upvotes: 0
2018/03/21
430
1,688
<issue_start>username_0: I am working on saving a bitmap to an application folder. On Android below Nougat it works fine, but on Nougat I have an issue. Can anyone help me? Below is my code for saving the bitmap:

```
String getRoot = Environment.getExternalStorageDirectory().getAbsolutePath();
File myDir = new File(getRoot + "/" + folderNAme);
String fname = picName + format;
File file = new File(myDir, fname);
try {
    FileOutputStream out = new FileOutputStream(file);
    finalBitmap.compress(form, quality, out);
    out.flush();
    out.close();
} catch (Exception e) {
    e.printStackTrace();
}
Intent mediaScanIntent = new Intent(Intent.ACTION_MEDIA_SCANNER_SCAN_FILE);
Uri contentUri = Uri.fromFile(file);
mediaScanIntent.setData(contentUri);
act.sendBroadcast(mediaScanIntent);
return file;
```

Thanks<issue_comment>username_1: Please try this:

```
private void SaveImage(Bitmap finalBitmap) {
    String root = Environment.getExternalStorageDirectory().toString();
    File myDir = new File(root + "/saved_images");
    myDir.mkdirs();
    String fname = "filename.jpg";
    File file = new File(myDir, fname);
    if (file.exists())
        file.delete();
    try {
        FileOutputStream out = new FileOutputStream(file);
        finalBitmap.compress(Bitmap.CompressFormat.JPEG, 90, out);
        out.flush();
        out.close();
    } catch (Exception e) {
        e.printStackTrace();
    }
}
```

Upvotes: 1 <issue_comment>username_2: You have to request the storage permission at runtime (required since Android 6.0, so it also applies on Nougat) before saving your file. See this: <https://developer.android.com/about/versions/nougat/android-7.0-changes.html> Upvotes: 0
2018/03/21
677
2,073
<issue_start>username_0: I always get confused about what the best method is to position content and `divs` inside of a `div`.

```css
.container {
  width: 300px;
  height: 500px;
  background: #f5f5f5;
}

.content {
  position: absolute;
  bottom: 0px;
}
```

```html
<div class="container">
  <h3 class="content">Title</h3>
</div>
```

In the example, I want the text to be positioned at the bottom and centered. I can do this by using `position: absolute` and then adding margin. Is there a better method of doing this which would **avoid** placing content out of the page's layout flow using `absolute` coordinates such as `bottom`, `left`, etc?<issue_comment>username_1: The same thing can be achieved with flexbox. Here is the updated CSS:

```
.container {
  width: 300px;
  height: 500px;
  background: #f5f5f5;
  display: flex;
  align-items: flex-end;
  justify-content: center;
}
```

Upvotes: 2 <issue_comment>username_2: Flexbox would be ideal for this (Supported on IE11 & all other major browsers). If you want to impact only `.content`'s content you could do:

```css
.container {
  width: 300px;
  height: 500px;
  background: #f5f5f5;
}

.content {
  /* new */
  display: flex;
  align-items: flex-end;
  justify-content: center;
  height: 100%;
}
```

```html
<div class="container">
  <h3 class="content">Title</h3>
</div>
```

If you want to impact `.container`'s content considering more content on it, you could do:

```css
.container {
  width: 300px;
  height: 500px;
  background: #f5f5f5;
  /* new */
  display: flex;
  flex-flow: column wrap;
  align-items: center;
  justify-content: flex-end;
}
```

```html
<div class="container">
  <h3>Title</h3>
  <h3>content 2</h3>
  <h3>content 3</h3>
</div>
```

Upvotes: 2 <issue_comment>username_3: By placing an absolute element (child) in a relatively positioned element (parent), the position (`top|right|bottom|left`) of the child will be relative to the parent, and not the window. Like this:

```css
.container {
  width: 300px;
  height: 500px;
  background: #f5f5f5;
  position: relative;
}

.content {
  position: absolute;
  bottom: 0;
  left: 0;
  right: 0;
  text-align: center;
}
```

```html
<div class="container">
  <h3 class="content">Title</h3>
</div>
```

Upvotes: 0
2018/03/21
315
1,364
<issue_start>username_0: We are using Spring Boot Actuator to get the health status of an application. My understanding is that a health-check request will be handled by a thread out of the same thread pool that is used to serve actual service requests. Is there a way to limit the number of requests to the health endpoint to prevent a DDoS-type starvation?<issue_comment>username_1: You can use the Spring Boot Throttling community library. I think you could restrict DDoS access to your endpoints (Actuator or otherwise) using its configuration. <https://github.com/weddini/spring-boot-throttling> Upvotes: 1 <issue_comment>username_2: Another possibility to reduce DDoS vulnerability on the `/health` endpoint is to have your health checks run on a separate thread pool. This ensures that:

* no more than one health indicator concurrently runs at any given time against an underlying service
* your `/health` endpoint returns instantly (as it returns health statuses pre-calculated on different threads).

For this purpose, and if you are using Spring Boot >= 2.2, you can use the separate library [spring-boot-async-health-indicator](https://github.com/antoinemeyer/spring-boot-async-health-indicator/) to run your health checks on a separate thread pool by simply annotating them with `@AsyncHealth`. *Disclaimer: I created this library to address this issue (among others)* Upvotes: 0
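The throttling library in the first answer is one packaging of the underlying idea: a rate limiter in front of the endpoint. A framework-agnostic sketch of the token-bucket variant (illustrative only; in a real Spring Boot app you would implement this as a filter/interceptor or use a library as suggested above):

```python
import time

class TokenBucket:
    """Allow at most `capacity` requests per `per_seconds` window (smoothed)."""

    def __init__(self, capacity, per_seconds, clock=time.monotonic):
        self.capacity = capacity
        self.refill_rate = capacity / per_seconds  # tokens added per second
        self.tokens = float(capacity)
        self.clock = clock
        self.last = clock()

    def allow(self):
        now = self.clock()
        # Refill proportionally to the time elapsed, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

A health-check handler would call `allow()` first and return HTTP 429 when it comes back `False`, so bursts against `/health` cannot starve the worker pool.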
2018/03/21
558
2,251
<issue_start>username_0: I am trying to use pySerial on a Windows 10 machine (Python 3.6.4, 32 bit) to read serial data from a piece of lab equipment that would normally log its data to a serial ASCII printer. Connecting using a USB-to-serial adaptor. If I connect the computer to the printer, I can print using serial.write(), so I know my adaptor is working. However, when I connect the computer to the lab equipment and try to read data using the following code, I get nothing at all:

```
import serial

ser = serial.Serial('COM5')
while True:
    if ser.in_waiting != 0:
        datastring = ser.read(size=ser.in_waiting)
        print(str(datastring))
```

I know the lab equipment is transmitting when I run the code. Have also tried connecting two USB-to-serial adaptors to the computer with a serial cable in-between the adaptors and sending data from one serial port to the other. Again, I can write without problem, but the other port receives nothing. EDIT: It turned out to be a hardware problem. I had connected the lab equipment to my USB-to-serial adaptor (and, for testing purposes, the two USB-to-serial adaptors to each other) using a standard serial cable. Connecting using a null modem solved the problem.
2018/03/21
350
1,161
<issue_start>username_0: I got an exception while adding many emails to the queue on a Laravel 4.2 app.

> exception 'Pheanstalk\_Exception' with message 'JOB\_TOO\_BIG: job data exceeds server-enforced limit'

I am confused whether this error is due to a single job or because the queue is too long. As the job is an email, I don't think it is different from any other email jobs which were added to the queue before it threw this exception.<issue_comment>username_1: The limit applies to a single job's payload, not to the length of the queue. After sending the command line and body, the client waits for a reply, which may be:

* "JOB\_TOO\_BIG\r\n" The client has requested to put a job with a body larger than max-job-size bytes.

[protocol.txt](https://github.com/kr/beanstalkd/blob/b7b4a6a14b7e8d096dc8cbc255b23be17a228cbb/doc/protocol.txt#L168) See also: [Change max-job size of beanstalkd](https://stackoverflow.com/questions/35782549/change-max-job-size-of-beanstalkd) Upvotes: 0 <issue_comment>username_2: It could be related to the queue driver that you are using: if you are using the sync driver, the jobs are executed directly on the main thread. If that is the case, try the database or redis driver. Upvotes: -1
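Since the protocol excerpt above says the limit is per job body, a quick way to check a payload before queueing is to compare its serialized size against the server's limit. An illustrative sketch (the 65535-byte figure is beanstalkd's default max-job-size, configurable with the `-z` flag; confirm the value your server actually runs with):

```python
DEFAULT_MAX_JOB_SIZE = 65535  # beanstalkd default, bytes; see `beanstalkd -z`

def fits_in_beanstalkd(payload: bytes, limit: int = DEFAULT_MAX_JOB_SIZE) -> bool:
    """Return True when a single serialized job body fits under the limit."""
    return len(payload) <= limit
```

Checking this before pushing (e.g. on the serialized email job) turns a server-side `JOB_TOO_BIG` failure into an explicit client-side decision, such as storing the body elsewhere and queueing only a reference.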
2018/03/21
614
2,236
<issue_start>username_0: I am trying to use Microsoft Graph API V2.0 to access user's OneNote Notebooks. I am trying to authorize via OAuth using the follow link sample: ``` https://login.microsoftonline.com/organizations/oauth2/v2.0/authorize? response_type=code &client_id={id} &redirect_uri={url} &scope=Notes.Read%20offline_access &state={state} ``` When I login with a work account I get a message saying that: > > {app\_name} needs permission to access resources in your organisation that only an admin can grant. Please ask an admin to grant permission to this app before you can use it. > > > And: > > Message: AADSTS90094: The grant requires admin permission. > > > Using an admin account I have no problems. From [Notes permissions](https://developer.microsoft.com/en-us/graph/docs/concepts/permissions_reference#notes-permissions), none of Note scopes (i.e. `Notes.Read`, `Notes.ReadWrite`, `Notes.Create` or `Notes.ReadWriteAll`) *require Admin Consent*. Is there any reason for this to request admin permissions?<issue_comment>username_1: This occurs when the Azure AD instance/tenant has disabled "Users can consent to apps accessing company data on their behalf". This is a global User Setting in Azure AD: [![enter image description here](https://i.stack.imgur.com/HmG2N.png)](https://i.stack.imgur.com/HmG2N.png) When this option is set to `No`, user's will be blocked from executing the [User Consent](http://massivescale.com/microsoft-v2-endpoint-user-vs-admin/) flow: [![enter image description here](https://i.stack.imgur.com/acN75.png)](https://i.stack.imgur.com/acN75.png) To get around this, an Admin will either need to consent on the User's behalf or they need to re-enable the User Consent option (this is the recommended solution, there are few rational reasons to entirely turn off User Consent). 
Upvotes: 2 [selected_answer]<issue_comment>username_2: We wrote a detailed article dealing with the [need admin approval issue](https://appfluence.com/help/article/microsoft-teams-need-admin-approval/), addressing the most common situations. I suspect the problem is addressed in the first section, and your IT admin needs to allow users to request access. Upvotes: 0
2018/03/21
1,736
6,581
<issue_start>username_0: I have the following UI: [![enter image description here](https://i.stack.imgur.com/dsVTJ.png)](https://i.stack.imgur.com/dsVTJ.png) and from there I need to change dynamically the button `Archive` and `Unarchive` as well as the icon from FontAwesome depending on the action taken and the response from the backend. The response basically looks like: ``` {"row_id":"12518","status":true} ``` In a few words: * In the blue row (archived item): if I click on `Unarchive` button and the response is `true` I should: remove the background color, change the button text to `Archive`, change the button class from `unarchive-form` to `archive-form` and change the FontAwesome icon class from `fa fa-arrow-circle-up` to `fa fa-arrow-circle-down`. * In the other row (non archived item): if I click on `Archive` and the response is `true` then I should: add the blue background, change the button text to `Unarchive`, change the button class from `archive-form` to `unarchive-form` and change the FontAwesome icon class from `fa fa-arrow-circle-down` to `fa fa-arrow-circle-up`. 
I have almost everything covered (and I will show the sources for just one case since it's pretty much the same and I don't want to make a big post):

```
$.ajax({
    url: '/ajax/forms/archive',
    type: 'POST',
    dataType: 'json',
    data: {
        form_id: ids
    }
}).done(function (response) {
    $.each(response, function (index, value) {
        var this_row_tr = $('input[data-form=' + value.row_id + ']').closest('tr'),
            this_btn = this_row_tr.find('.archive-form');

        if (value.status === true) {
            // here I add the background color to the TR
            this_row_tr.addClass('info');
            // here I swap the button class
            this_btn.removeClass('archive-form').addClass('unarchive-form');
            // here I swap the button text and also the whole FontAwesome part
            this_btn.text(' Unarchive');
        } else if (value.status === false) {
            this_row_tr.addClass('error');
            all_archived = false;
        }
    });

    if (all_archived) {
        toastr["success"]("The forms were archived successfully.", "Information");
    } else {
        toastr["error"]("Something went wrong, the forms could not be archived.", "Error");
    }
}).error(function (response) {
    console.log(response);
    toastr["error"]("Something went wrong, the forms could not be archived.", "Error");
});
```

But the problem comes when I change the text for the FontAwesome part, since I get plain text instead of the icon. For example, clicking on `Unarchive` in the first row turns it into this: [![enter image description here](https://i.stack.imgur.com/Eb5Oz.png)](https://i.stack.imgur.com/Eb5Oz.png) As you can see, everything is fine but the FontAwesome icon. Is there any way to refresh the button so the changes take effect?
Or do you know any other way to achieve this?<issue_comment>username_1: As per my understanding you only need the Archive and Unarchive functionality. Could you please try the following script:

```
$(document).ready(function () {
    $(".unarchive-form").click(unarchiveToArchive);
    $(".archive-form").click(archiveToUnarchive);

    function unarchiveToArchive() {
        var btn = $(this);
        var rowId = $(this).parent().parent().parent().find(".to-upload").attr("data-form");
        var response = [{"row_id": rowId, "status": true}];
        $.each(response, function (index, value) {
            var this_row_tr = $('input[data-form=' + value.row_id + ']').closest('tr');
            if (value.status === true) {
                this_row_tr.attr('class', '');
                $(btn).text("Archive");
                $(btn).removeClass("unarchive-form");
                $(btn).addClass("archive-form");
                $(btn).click(archiveToUnarchive)
            } else if (value.status === false) {
                this_row_tr.attr('class', 'error');
                all_deleted = false;
            }
        });
    }

    function archiveToUnarchive() {
        var btn = $(this);
        var rowId = $(this).parent().parent().parent().find(".to-upload").attr("data-form");
        var response = [{"row_id": rowId, "status": true}];
        $.each(response, function (index, value) {
            var this_row_tr = $('input[data-form=' + value.row_id + ']').closest('tr');
            if (value.status === true) {
                this_row_tr.attr('class', 'info');
                $(btn).text("Unarchive");
                $(btn).removeClass("archive-form");
                $(btn).addClass("unarchive-form");
                $(btn).click(unarchiveToArchive)
            } else if (value.status === false) {
                this_row_tr.attr('class', 'error');
                all_deleted = false;
            }
        });
    }
});
```

I hope this will solve your problem. Let me know if you need further details. Upvotes: 0 <issue_comment>username_2: You don't have to change the text like you are doing now; use `.html()` instead and replace only the necessary strings. This way you maintain the structure of the button's content.
Try this:

```
// this one is for the Archive action
$.each(response, function (index, value) {
    var this_row_tr = $('input[data-form=' + value.row_id + ']').closest('tr');
    var archiveToUnarchive = this_row_tr.find('.btn[class*="archive"]');

    if (value.status === true) {
        this_row_tr.addClass('info');
        archiveToUnarchive.toggleClass('archive-form unarchive-form')
            .html(archiveToUnarchive.html()
                .replace('Archive', 'Unarchive')
                .replace('fa-arrow-circle-down', 'fa-arrow-circle-up'));
    } else if (value.status === false) {
        this_row_tr.addClass('error');
        all_deleted = false;
    }
});

// this one is for the Unarchive action
$.each(response, function (index, value) {
    var this_row_tr = $('input[data-form=' + value.row_id + ']').closest('tr');
    var unarchiveToArchive = this_row_tr.find('.btn[class*="archive"]');

    if (value.status === true) {
        unarchiveToArchive.toggleClass('archive-form unarchive-form')
            .html(unarchiveToArchive.html()
                .replace('Unarchive', 'Archive')
                .replace('fa-arrow-circle-up', 'fa-arrow-circle-down'));
        this_row_tr.removeClass('info'); // note: removeClass, not removeClassName
    } else if (value.status === false) {
        this_row_tr.addClass('error');
        all_deleted = false;
    }
});
```

Upvotes: 2 [selected_answer]
2018/03/21
741
2,668
<issue_start>username_0: Hi, I have a macro which is supposed to click a button on a web page. The problem is there is no ID behind it, just this code. It's an intranet web page. Beginning of my code:

```
Set IE = New InternetExplorerMedium
IE.Navigate "some website"
While IE.ReadyState <> READYSTATE_COMPLETE
    DoEvents
Wend
IE.Visible = True

IE.document.getElementById("CategoryCombo").Value = "78"
IE.document.getElementById("CategoryCombo").FireEvent ("onchange")

'~~> wait until element is present on web page
Do
    Set ieobj = Nothing
    On Error Resume Next
    Set ieobj = IE.document.getElementById("text1265")
    DoEvents
Loop Until Not ieobj Is Nothing

'~~> search box
IE.document.getElementById("text1265").Value = "some value"

'~~> button click example
IE.document.getElementById("subBtn").Click
```

Part of website code:

```
[![View document](Images/DocTypes/PDF.gif)](javascript:FSResults_fsopenWindow('index.fsp?pn=DOLViewDocument&d=78&q=78-17158635-1&o=78-17158635-1&p=DOCUMENT_NAME#ensureVisible') "View document")
```

I tried

```
IE.document.getElementByTitle("View document").FireEvent ("onclick")
```

I also tried

```
IE.document.getElementByTagName("a").FireEvent ("onclick")
```

Thank you<issue_comment>username_1:

> I also tried `IE.document.getElementByTagName("a").FireEvent ("onclick")`

This should work for you, but you are missing something. You need to specify which tag you are looking for, so `.getElementsByTagName("a")` turns into `.getElementsByTagName("a")(i)`, where `i` is the index of the tag you are looking for (you get the index by counting, in order, every tag in the HTML, starting from 0). The index is needed because `.getElementsByTagName("tagName")` returns an array, not a single value. Upvotes: 3 [selected_answer]<issue_comment>username_2: In the end I came up with this.
Thank you for the help username_1, your answer sent me in the right direction.

```
Set elements = IE.document.getElementsByTagName("a")

For Each element In elements
    If Left(element, 33) = "javascript:FSResults_fsopenWindow" Then
        Debug.Print element
        IE.Navigate element
    End If
Next
```

Upvotes: 0 <issue_comment>username_3: You could have used CSS selectors. For example, an attribute selector:

```
ie.document.querySelector("[title='View document']").Click
```

Or the same thing targeting the href by its value:

```
ie.document.querySelector("[href*=fsopenWindow]").Click
```

The last one looks for an href containing `'fsopenWindow'`. Both of these avoid looping over collections of elements and use CSS, so they are faster. Upvotes: 1
2018/03/21
389
1,215
<issue_start>username_0: I have the following *very* simple Y86 program in file `foo.ys`: ``` irmovl $1, %eax ``` Running the following command: ``` $ yas foo.ys ``` I get the following (utterly useless) feedback from `yas`: ``` Invalid Line ``` So--I have a few questions: 1. Where does the Y86 documentation live? 2. What is invalid about the above program?<issue_comment>username_1: It looks like Y86 is *supposed* to be a subset of the IA-32, [whose documentation can be found here](https://software.intel.com/sites/default/files/managed/a4/60/253665-sdm-vol-1.pdf). It seems, in IA-32, move instructions ending in `l` don't move long data types; they do conditional moves and things irrelevant to Y86. Yet, some versions of Y86 support `irmovl`, some don't. If you're using a 64-bit version of `yas`, use quadword instructions and addresses: ``` irmovq $1, %rax ``` Upvotes: -1 <issue_comment>username_2: Looks like you are using the 64 bit version, y86-64. I have checked, that in fact produces the given error message. While x86-64 does support 32 bit, apparently y86-64 doesn't. You should use `irmovq $1, %rax` instead (note the change of instruction suffix and register prefix). Upvotes: 1
2018/03/21
452
1,566
<issue_start>username_0: I'm having trouble figuring out how to type this utility function that can receive some data in any form of Object / Array / Number / String. It then calls the respective handler, like `parseArray`, which receives an array and then calls `parseData` on each item of the array. So this function will pretty much get some data in any structure, and return the same structure with some parsing applied to it. Would love some guidance on this one.

```
function parseData(data) {
    if (Array.isArray(data)) {
        return parseArray(data);
    }
    if (typeof data === 'object') {
        return parseObject(data);
    }
    if (typeof data === 'string') {
        return parseString(data);
    }
    return data;
}
```
2018/03/21
1,146
3,697
<issue_start>username_0: I have a lot (>100,000) lowercase strings in a list, where a subset might look like this:

```
str_list = ["hello i am from denmark", "that was in the united states", "nothing here"]
```

I further have a dict like this (in reality this is going to have a length of around ~1000):

```
dict_x = {"denmark" : "dk", "germany" : "ger", "norway" : "no", "united states" : "us"}
```

For all strings in the list which contain any of the dict's keys, I want to replace the **entire string** with the corresponding dict value. The expected result should thus be:

```
str_list = ["dk", "us", "nothing here"]
```

What is the most efficient way to do this given the number of strings I have and the length of the dict? Extra info: There is never more than one dict key in a string.<issue_comment>username_1: Something like this would work. Note that this will convert the string to the first encountered key fitting the criteria. If there are multiple you may want to modify the logic based on whatever fits your use case.

```
strings = [str1, str2, str3]
converted = []

for string in strings:
    updated_string = string
    for key, value in dict_x.items():
        if key in string:
            updated_string = value
            break
    converted.append(updated_string)

print(converted)
```

Upvotes: 1 <issue_comment>username_2: Try

```
str_list = ["hello i am from denmark", "that was in the united states", "nothing here"]
dict_x = {"denmark" : "dk", "germany" : "ger", "norway" : "no", "united states" : "us"}

for k, v in dict_x.items():
    for i in range(len(str_list)):
        if k in str_list[i]:
            str_list[i] = v

print(str_list)
```

This iterates through the key, value pairs in your dictionary and looks to see if the key is in the string. If it is, it replaces the string with the value.
Upvotes: 1 <issue_comment>username_3: This seems to be a good way:

```
input_strings = ["hello i am from denmark", "that was in the united states", "nothing here"]
dict_x = {"denmark" : "dk", "germany" : "ger", "norway" : "no", "united states" : "us"}

output_strings = []
for string in input_strings:
    for key, value in dict_x.items():
        if key in string:
            output_strings.append(value)
            break
    else:
        output_strings.append(string)

print(output_strings)
```

Upvotes: 2 <issue_comment>username_4: Assuming:

```
lst = ["hello i am from denmark", "that was in the united states", "nothing here"]
dict_x = {"denmark" : "dk", "germany" : "ger", "norway" : "no", "united states" : "us"}
```

You can do:

```
res = [dict_x.get(next((k for k in dict_x if k in my_str), None), my_str) for my_str in lst]
```

which returns:

```
print(res)
# -> ['dk', 'us', 'nothing here']
```

The cool thing about this (apart from it being a python-ninjas favorite weapon aka *list-comprehension*) is the `get` with a default of `my_str` and `next` with a `StopIteration` value of `None` that triggers the above default. Upvotes: 2 [selected_answer]<issue_comment>username_5: You can subclass `dict` and use a list comprehension. In terms of performance, I advise you try a few different methods and see what works best.

```
class dict_contains(dict):
    def __getitem__(self, value):
        key = next((k for k in self.keys() if k in value), None)
        return self.get(key)

str1 = "hello i am from denmark"
str2 = "that was in the united states"
str3 = "nothing here"
lst = [str1, str2, str3]

dict_x = dict_contains({"denmark" : "dk", "germany" : "ger", "norway" : "no", "united states" : "us"})

res = [dict_x[i] or i for i in lst]
# ['dk', 'us', "nothing here"]
```

Upvotes: 1
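For the sizes mentioned in the question (~100,000 strings, ~1,000 keys), all of the answers above scan every key for every string, which is O(strings × keys) substring searches. A sketch of an alternative (not from the answers) that compiles all keys into a single alternation regex, so each string is scanned once:

```python
import re

str_list = ["hello i am from denmark", "that was in the united states", "nothing here"]
dict_x = {"denmark": "dk", "germany": "ger", "norway": "no", "united states": "us"}

# Longest keys first, so "united states" would win over a hypothetical "united" key.
pattern = re.compile("|".join(re.escape(k)
                              for k in sorted(dict_x, key=len, reverse=True)))

def replace_all(strings, mapping, pattern):
    out = []
    for s in strings:
        m = pattern.search(s)
        out.append(mapping[m.group(0)] if m else s)
    return out

print(replace_all(str_list, dict_x, pattern))  # ['dk', 'us', 'nothing here']
```

Whether this beats the simple nested loop depends on the key set, so it is worth timing against the accepted answer on real data.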
2018/03/21
1,409
4,681
<issue_start>username_0: Firstly I appreciate that there are many answers out there explaining this topic but I just can't understand it at the moment. I want to loop through a JavaScript object I have created and then perform various actions like making a request to an API and then storing some data in Redis. This is what I have so far ``` const params = { "handle1": { "screen_name": "handle1", "hash_tag": "#hashtag1"}, "handle2": { "screen_name": "handle2", "hash_tag": "#hashtag2"} } for (const k of Object.keys(params)) { console.log("Searching for " + params[k]['screen_name']) client.get('statuses/user_timeline', { screen_name: params[k]['screen_name']}) .then(function (tweets) { for (const key of Object.keys(tweets)) { const val = tweets[key]['text']; if(val.includes(params[k]['hash_tag'])) { console.log("Found") r_client.hset(params[k]['screen_name'], 'tweet_id', tweets[key]['id'], 'tweet_text', tweets[key]['text'], function (err, res) { console.log(res) }); r_client.hgetall(params[k]['screen_name'], function(err, object) { console.log(object); }); } } r_client.quit(); }) .catch(function (error) { throw error; }); } ``` When I run this the output is as follows ``` Searching for handle1 Searching for handle2 Found 0 { tweet_id: '123456789', tweet_text: 'text found in tweet' } Found undefined undefined ``` So straight away I have a problem in that the first loop hasn't event finished and it's moved onto the second loop. I would like to run this in sequential order (if that's the best way), but more importantly I was hoping someone could break down my code and explain how I should be approaching this to have it run correctly.<issue_comment>username_1: Something like this would work. Note that this will convert the string to the first encountered key fitting the criteria. If there are multiple you may want to modify the logic based on whatever fits your use case. 
``` strings = [str1, str2, str3] converted = [] for string in strings: updated_string = string for key, value in dict_x.items() if key in string: updated_string = value break converted.append(updated_string) print(converted) ``` Upvotes: 1 <issue_comment>username_2: Try ``` str_list = ["hello i am from denmark", "that was in the united states", "nothing here"] dict_x = {"denmark" : "dk", "germany" : "ger", "norway" : "no", "united states" : "us"} for k, v in dict_x.items(): for i in range(len(str_list)): if k in str_list[i]: str_list[i] = v print(str_list) ``` This iterates through the key, value pairs in your dictionary and looks to see if the key is in the string. If it is, it replaces the string with the value. Upvotes: 1 <issue_comment>username_3: This seems to be a good way: ``` input_strings = ["hello i am from denmark", "that was in the united states", "nothing here"] dict_x = {"denmark" : "dk", "germany" : "ger", "norway" : "no", "united states" : "us"} output_strings = [] for string in input_strings: for key, value in dict_x.items(): if key in string: output_strings.append(value) break else: output_strings.append(string) print(output_strings) ``` Upvotes: 2 <issue_comment>username_4: Assuming: ``` lst = ["hello i am from denmark", "that was in the united states", "nothing here"] dict_x = {"denmark" : "dk", "germany" : "ger", "norway" : "no", "united states" : "us"} ``` You can do: ``` res = [dict_x.get(next((k for k in dict_x if k in my_str), None), my_str) for my_str in lst] ``` which returns: ``` print(res) # -> ['dk', 'us', 'nothing here'] ``` The cool thing about this (apart from it being a python-ninjas favorite weapon aka *list-comprehension*) is the `get` with a default of `my_str` and `next` with a `StopIteration` value of `None` that triggers the above default. Upvotes: 2 [selected_answer]<issue_comment>username_5: You can subclass `dict` and use a list comprehension. 
In terms of performance, I advise you try a few different methods and see what works best.

```
class dict_contains(dict):
    def __getitem__(self, value):
        key = next((k for k in self.keys() if k in value), None)
        return self.get(key)

str1 = "hello i am from denmark"
str2 = "that was in the united states"
str3 = "nothing here"

lst = [str1, str2, str3]

dict_x = dict_contains({"denmark" : "dk", "germany" : "ger",
                        "norway" : "no", "united states" : "us"})

res = [dict_x[i] or i for i in lst]  # ['dk', 'us', 'nothing here']
```

Upvotes: 1
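All of the answers above scan every dictionary key for every string. If the mapping grows large, the keys can instead be folded into one compiled regex and looked up in a single pass. A sketch, using the same sample data as the answers above:

```python
import re

dict_x = {"denmark": "dk", "germany": "ger", "norway": "no", "united states": "us"}
strings = ["hello i am from denmark", "that was in the united states", "nothing here"]

# One alternation of all (escaped) keys, compiled once.
pattern = re.compile("|".join(re.escape(k) for k in dict_x))

def convert(s):
    # Replace the whole string by the mapped value of the first key found,
    # or leave it unchanged when no key matches.
    m = pattern.search(s)
    return dict_x[m.group(0)] if m else s

print([convert(s) for s in strings])  # ['dk', 'us', 'nothing here']
```

As with the other answers, the behaviour when a string contains several keys depends on which one the regex engine finds first.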
2018/03/21
348
1,329
<issue_start>username_0: I have a product X in several sizes (added as variation products of the main product X). For small sizes the price is available and customers can shop it online. However, large sizes of this product X are not purchasable online. Therefore, for some variations of this product X I would like to remove the add to cart button and change it to a button that goes to my "request a quote" page. My idea was that if I set the variable product price to 0, the add to cart button goes away and the "request a quote" button appears. Any ideas on how to do this in WooCommerce (PHP)?<issue_comment>username_1: You just need a condition with an **if** statement:

```
$price = 2000;
$response = 'Add To Cart';

if ($size == 'XL') {
    $price = 0;
    $response = 'You can request another size';
}
```

Upvotes: 0 <issue_comment>username_2: Put this function in your functions.php:

```
add_filter('woocommerce_get_price_html', 'requestQuote', 10, 2);

function requestQuote($price, $product) {
    if ( $price == wc_price( 0.00 ) ) {
        remove_action( 'woocommerce_single_product_summary', 'woocommerce_template_single_add_to_cart', 30 );
        return 'Request Quote';
    } else {
        return $price;
    }
}
```

Upvotes: 1
2018/03/21
573
1,970
<issue_start>username_0: In R, suppose I have a logical vector of the same length as the data. I would like to change the background color of the ggplot depending on the logical vector. In the example below `background_change` is the logical vector.

```
library(ggplot2)
background_change <- economics$unemploy < 7777
ggplot(economics, aes(date, psavert)) +
  geom_line()
```

Note that this is different from [other](https://stackoverflow.com/questions/33322061/change-background-color-panel-based-on-year-in-ggplot-r) questions posted on stackoverflow, which change the background manually. This is too tedious for my application.<issue_comment>username_1: Not a perfect solution, but it works for the given example. For each value, generate a `geom_rect` whose color is defined by `unemploy < 7777`, with coordinate `x` used as both `xmin` and `xmax` (start = end).

```
thresholdUnemploy <- 7777

library(ggplot2)
ggplot(economics, aes(date, psavert)) +
  geom_rect(aes(xmin = date, xmax = date, ymin = -Inf, ymax = Inf,
                color = unemploy < thresholdUnemploy),
            size = 0.7, show.legend = FALSE) +
  geom_line()
```

[![enter image description here](https://i.stack.imgur.com/p8rAW.png)](https://i.stack.imgur.com/p8rAW.png)

**Why this is not perfect:** depending on the density of the x-axis points you might need to adjust the size of the `geom_rect`.

Upvotes: 3 [selected_answer]<issue_comment>username_2: Here is a slight variation on that answer that does not require manually adjusting the size parameter. It works by adding 1 month to the xmax argument (note that `months(1)` used this way on a `Date` needs lubridate loaded).

```
library(lubridate)

ggplot(economics, aes(date, psavert)) +
  geom_rect(aes(xmin = date, xmax = date + months(1), ymin = -Inf, ymax = Inf,
                fill = unemploy < 7777),
            show.legend = FALSE) +
  geom_line()
```

[![enter image description here](https://i.stack.imgur.com/KWdd1.png)](https://i.stack.imgur.com/KWdd1.png)

Upvotes: 1
2018/03/21
549
1,816
<issue_start>username_0: I have seen the boards hub in several videos on how to use VSTS. It is a preview feature and shows up right between "Work Items" and "Back Logs". I have opted in on all the preview features but cannot get it to show up. Here is a spot in a video talking about it: <https://youtu.be/16gOTI_OBw8?t=141> Here is what my opt in features shows: [![opt in picture](https://i.stack.imgur.com/JwsVS.png)](https://i.stack.imgur.com/JwsVS.png) What am I missing??
2018/03/21
816
3,059
<issue_start>username_0: I want to get a parametrized class which is able to return an object of type T AND children of T. This is the code:

```
import java.lang.reflect.InvocationTargetException;

class A {};
class B extends A {};

public class testGenerics<T extends A> {

    T a;

    T getA() {
        return getA(B.class);
        // Compilation problem:
        // The method getA(Class<T>) in the type testGenerics<T> is not applicable for the arguments (Class<B>)
    }

    T getA(Class<T> clazz) throws InstantiationException, IllegalAccessException,
            IllegalArgumentException, InvocationTargetException, NoSuchMethodException, SecurityException {
        return clazz.getConstructor().newInstance();
    }
}
```

This is how I think: I am declaring a generic type T that extends A. Therefore, I can create from `Class clazz` an instance of type T extending A. However, when I try to get A from B.class (which extends A):

```
getA(B.class)
```

I get the following error:

> The method getA(Class&lt;T&gt;) in the type testGenerics&lt;T&gt; is not applicable for the arguments (Class&lt;B&gt;)

Why is this? How can I fix it?<issue_comment>username_1: This does not work because `B` is not `T`. Yes, `B` extends `A`, but it is not `T`. Imagine you also have a class `C extends A` and create an instance of `testGenerics<C>`. Your `getA()` method would instantiate `B` but should return `C`.

The following, however, works fine with a few tweaks:

```
testGenerics<B> tg = new testGenerics<B>();
B b = tg.getA(B.class);
```

How to fix this depends on what you actually want to do. This, for instance, allows you to instantiate any subclass of `A`:

```
public <S extends A> S getA(Class<S> clazz) throws InstantiationException, IllegalAccessException,
        IllegalArgumentException, InvocationTargetException, NoSuchMethodException, SecurityException {
    return clazz.getConstructor().newInstance();
}
```

However, you can't really use `T` here: since `T` is a type parameter, you have no guarantees on whether `B` is or extends `T`.
Upvotes: 0 <issue_comment>username_2: Your problem is the class definition `class testGenerics<T extends A>`. That means that `T` is defined when creating an instance of that class and could be bound to any subclass of `A` - which might not be `B` but `C` etc. Thus passing `B.class` isn't guaranteed to match. To fix that, put the definition of `T` at the method level:

```
<T extends A> A getA(Class<T> clazz) throws InstantiationException, IllegalAccessException,
        IllegalArgumentException, InvocationTargetException, NoSuchMethodException, SecurityException {
    return clazz.getConstructor().newInstance();
}

// No generics needed here, since the method above returns A anyways.
// If it'd return T you'd have to change the return type here to B, since getA(B.class) binds T to be B now
A getA() throws Exception {
    return getA(B.class);
}
```

Since the method-level `T` hides the definition at the class level you need to do something about that: either use a different name (e.g. `S`) or remove the definition at the class level (it doesn't make much sense there anyways).

Upvotes: 2 [selected_answer]
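A self-contained sketch of the accepted fix (the `Factory`/`create` names here are made up for illustration): declaring the type variable on the method lets each call site bind it independently, so `create(B.class)` infers the type as `B`.

```java
class A {}
class B extends A {}

class Factory {
    // S is bound per call site, so any subclass of A can be requested.
    static <S extends A> S create(Class<S> clazz) {
        try {
            // getDeclaredConstructor also sees the implicit package-private
            // no-arg constructor; getConstructor only finds public ones.
            return clazz.getDeclaredConstructor().newInstance();
        } catch (ReflectiveOperationException e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        B b = create(B.class); // S is inferred as B, so this compiles
        A a = create(A.class); // S is inferred as A
        System.out.println(b.getClass().getSimpleName()); // prints B
        System.out.println(a.getClass().getSimpleName()); // prints A
    }
}
```

Unlike a class-level type parameter, nothing here forces `B` and the caller's `T` to be the same type, which is exactly why the compiler accepted the original code only after the move.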
2018/03/21
652
2,508
<issue_start>username_0: I'm getting an HTML string from an AJAX request, that looks something like that: ``` ``` What i need, is to be able use this string in a React component, as it were valid JSX. I tried using the `dangerouslySetInnerHTML` as instructed in this discussion: [how-to-parse-html-to-react-component](https://stackoverflow.com/questions/44643424/how-to-parse-html-to-react-component) I've also tried React libraries like `react-html-parse` and `html-react-parser`. No success When i was working with *AngularJS(1)*, i was using some small directive that would take care of that situation. I need to achieve the same with *React*. Any ideas? EDIT: if anybody is interested, i found a library that takes care of the issue: <https://www.npmjs.com/package/react-jsx-parser><issue_comment>username_1: my suggestion is you do something like the following. first set a state variable to equal the HTML you want to return ``` this.setState({html: response.data.html}) ``` then in your return you can do the following ``` return ( {this.state.html} ); ``` Upvotes: -1 <issue_comment>username_2: You're not supposed to be doing things this way. Although React components look like html tags when using JSX, they are actually either classes or functions. What you're trying to do is equivalent to trying to parse: ``` "document.write('foo')" ``` You're literally mixing a javascript function name in string form with html markup in the same string. You will have to parse the string to separate the React component from the surrounding html markup, map the React components in string form to their actual React counterparts, and use ReactDOM.render to add the component to the page. 
``` let mapping = [ {"name":"","component":}, {"name":"","component":} ]; let markup = ""; let targetComponent; mapping.forEach((obj) => {if(markup.indexOf(obj.name)>-1){ targetComponent=obj.component;//acquired a reference to the actual component markup.replace(obj.name,"");//remove the component markup from the string } }); /*place the empty div at the target location within the page give it an "id" attribute either using element.setAttribute("id","label") or by just modifying the string to add id="label" before the div is placed in the DOM ie, */ //finally add the component using the id assigned to it ReactDOM.render(targetComponent,document.getElementById("label")); ``` Never tried this before. I wonder if this will actually work :) Upvotes: -1
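Independent of React, the splitting step described above (separating known component markers from the surrounding markup) can be sketched in plain JavaScript. The `<MyWidget/>` tag syntax and the names here are hypothetical placeholders, not React API:

```javascript
// Pull known component tags out of an HTML string so the remainder can be
// treated as ordinary markup.
function splitMarkup(html, componentNames) {
  const components = [];
  let markup = html;
  for (const name of componentNames) {
    const tag = "<" + name + "/>";
    if (markup.includes(tag)) {
      components.push(name);
      markup = markup.split(tag).join(""); // strip every occurrence
    }
  }
  return { components, markup };
}

const result = splitMarkup("<div><MyWidget/><p>hi</p></div>", ["MyWidget", "Other"]);
console.log(result.components); // [ 'MyWidget' ]
console.log(result.markup);     // <div><p>hi</p></div>
```

The found names would then be mapped to actual components and mounted with `ReactDOM.render`, as the answer suggests.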
2018/03/21
659
2,618
<issue_start>username_0: I'm trying to retrieve a set of variables by defining them inside a scope and using the scope filtering in `tf.get_collection()`:

```
with tf.variable_scope('inner'):
    v = tf.get_variable(name='foo', shape=[1])
    ... # more variables ...

variables = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, 'inner')
# do stuff with variables
```

This normally works fine, but sometimes my code is called by a module that's already defined its own scope, and `get_collection()` doesn't find the variables anymore:

```
with tf.variable_scope('outer'):
    with tf.variable_scope('inner'):
        v = tf.get_variable(name='foo', shape=[1])
        ... # more variables ...
```

I believe the filtering is a regex because I can make `get_collection()` work by prefixing my scope search term with `.*`, but that's a bit hacky. Is there a better way to handle this?
2018/03/21
1,146
4,085
<issue_start>username_0: I was developing my own node module to put it on the npm website. This node module has some interactions with a database. I need to receive three values form the user (dbName, server, port) and set them in my module so that I can connect to the database. The first thing that came to my mind was this: asking the user to open the config file and change the code (assign the values to the three variables): ``` var dbConf = { server: '', port: 0, dbName: '' }; ``` But I think this approach is totally wrong. I tried to create a function and ask the user to first call that function with the three parameters(dbName, server, port) and that function does the work for me. Then the user first require my module, then call the function and finally make use of the module: ``` var myModule = require('myModule'); myModule.config('TestDB', 'localhost', 27017); myModule.someMethod()... ``` But I don't know how to write my index.js file to do this job! I wrote something like this: (index.js) ``` var config = function(dbName, server, port ) { var dbConf = { server: '', port: 0, dbName: '' }; dbConf.server = server; dbConf.port = port; dbConf.dbName = dbName ; return 'mongodb://' + dbConf.server + ':' + dbConf.port + '/' + dbConf.dbName; } //connect to mongoDB local server mongoose.connect(config); module.exports = { config: config, mongoose: mongoose }; ``` But it didn't work. How can I do this job? update: index.js: ``` function gridFS(dbName, server, port) { var dbUrl = 'mongodb://' + server + ':' + port + '/' + dbName; this.mongoose = mongoose.connect(dbUrl); this.db = mongoose.connection; this.gfs = gridfsLockingStream(this.mongoose.connection.db, this.mongoose.mongo); //if the connection goes through this.db.on('open', function (err) { if (err) throw err; console.log("connected correctly to the server"); }); this.db.on('error', console.error.bind(console, 'connection error:')); } gridFS.prototype.putFile = function putFile(...) 
{};

gridFS.prototype.getFileById = function getFileById(id, callback) {
    this.putFile(); // here is the problem
}
```

<issue_comment>username_1: In your module.js file

```
let alertMethod = (message) => {
    console.log(message);
}

let myModule = (database, server, port) => {
    return {
        alert: alertMethod
    }
}

module.exports = myModule;
```

app.js file

```
let myModule = require('./module');

const _module = myModule('database', 'server', 'port');
_module.alert('YO!');
```

Upvotes: 1 <issue_comment>username_2: It depends on what you want to do with this `myModule.someMethod` of yours. It's easier if you post a [Minimal, Complete, and Verifiable example](https://stackoverflow.com/help/mcve).

---

Despite that, your code has some easy-to-spot problems:

First of all, `mongoose.connect` expects a `String` parameter, as you can see [here](http://mongoosejs.com/docs/connections.html), but you are passing a `function` instead. Your `config` returns a `String`, yes, but it is a `function` (you would need to call it in order to get a `String`).

That brings us to the second problem: when you call `mongoose.connect` like you did, it will be called as soon as you first import the module, before you are able to call `myModule.config` and pass your parameters to create a connection string.

---

An example of how you could do it is organizing your module like this:

```
var config = function(dbName, server, port) {
    this.connectionString = 'mongodb://' + server + ':' + port + '/' + dbName;
}

var connect = function() {
    mongoose.connect(this.connectionString);
}

module.exports = {
    config: config,
    connect: connect
}
```

Then you would use it like this:

```
var myModule = require('./index.js')
myModule.config('TestDB', 'localhost', 27017);
myModule.connect();
```

This example should also give you an idea of how your other methods (like `myModule.someMethod`) will look.
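The configure-then-use shape in the answer above can be demonstrated without mongoose at all. In this self-contained sketch (names are illustrative), the connect step is replaced by a getter so the pattern can be exercised without a database:

```javascript
// Minimal sketch of the config/connect module pattern. A real module would
// hand the string to mongoose.connect(); here we just expose it.
function createModule() {
  let connectionString = null;
  return {
    config(dbName, server, port) {
      connectionString = "mongodb://" + server + ":" + port + "/" + dbName;
    },
    getConnectionString() {
      if (connectionString === null) {
        throw new Error("call config() before connecting");
      }
      return connectionString;
    },
  };
}

const myModule = createModule();
myModule.config("TestDB", "localhost", 27017);
console.log(myModule.getConnectionString()); // mongodb://localhost:27017/TestDB
```

Guarding the getter makes the ordering requirement explicit: connecting before `config()` has been called fails loudly instead of passing `null` downstream.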
2018/03/21
1,080
3,251
<issue_start>username_0: Using JavaScript, I am appending HTML to a table when conditions are met. When conditions are met, I'd like to place a button to the right of a table's row. Currently, I can append the button to the row by using a `td` tag, however the button would take on all styling of the row it is in. Instead, I have added the button to a new row that I'd like to place to the right side of its row above. Below is HTML that adds a new row for the button: ```html | | [Task Name](javascript: "Task Name, Click to sort by Task Name") [Open Task Name sort and filter menu](javascript:; "Open Task Name sort and filter menu") | | [Task Status](javascript: "Task Status, Click to sort by Task Status") [Open Task Status sort and filter menu](javascript:; "Open Task Status sort and filter menu") | [Priority](javascript: "Priority, Click to sort by Priority") [Open Priority sort and filter menu](javascript:; "Open Priority sort and filter menu") | [Due Date](javascript: "Due Date, Click to sort by Due Date") [Open Due Date sort and filter menu](javascript:; "Open Due Date sort and filter menu") | [% Complete](javascript: "% Complete, Click to sort by % Complete") [Open % Complete sort and filter menu](javascript:; "Open % Complete sort and filter menu") | | | --- | --- | --- | --- | --- | --- | --- | --- | | | | | | --- | | [Test](https://portal.oldnational.com/divisions/testing/_layouts/15/listform.aspx?PageType=4&ListId=%7BC23BD4E4%2DE8EF%2D4666%2DBE60%2D1CBB5B6645F4%7D&ID=1&ContentTypeID=0x0108007E8FE1AB05F4B74B8B0683F809D34CF0) | | [Open Menu](# "Open Menu dialog for selected item") | Completed | (2) Normal | 3/30/2018 | 100 % | | 1 | | | | | | --- | | [Test2](https://portal.oldnational.com/divisions/testing/_layouts/15/listform.aspx?PageType=4&ListId=%7BC23BD4E4%2DE8EF%2D4666%2DBE60%2D1CBB5B6645F4%7D&ID=4&ContentTypeID=0x0108007E8FE1AB05F4B74B8B0683F809D34CF0) | | [Open Menu](# "Open Menu dialog for selected item") | In Progress | (2) Normal | 3/19/2018 | 50 % | 
| 4 | ``` How can I make ``` | 1 | ``` or ``` | 4 | ``` display to the right of the row above it without adding it as a `td` in the row? I have tried `float:right` for the row and replacing `| |` with .<issue_comment>username_1: You are missing col-span on the `|`. Assuming 7 cells for each row. ``` | 4 | ``` Or if you want to try to avoid the `float: right;` see if this will be enough. You might have to add `text-align: right`. ``` | | 4 | ``` ---- EDIT To line it up to the right, of the desired row, you need to add a `|` via JS and make sure you add a `|` on the top or the last `th` should look like `... |` Personally, would make last header like `... |` and then add a blank `|` where you inject button here if needed. Upvotes: 0 <issue_comment>username_2: You could have `|` there all the time, just empty without the button. And when the conditions are met, just fill this `|` with button code. In your .css file you would already have appropriate styling for that. (And having `|` in the row all the time won't break your row when you add new one, and there will be different count of td's in each row) Upvotes: 1
2018/03/21
567
1,843
<issue_start>username_0: I am trying to create a donate button for our school website and so far its gone good, But my teacher asked for me to possibly find a way to change the button color, I was wondering how I could do exactly that, and does it have to be specific colors ? Here is my code: ``` ```<issue_comment>username_1: Just use a background-color: ```html ``` Upvotes: 2 [selected_answer]<issue_comment>username_2: For the background color you can use the "background-color" property like that : ```html ``` You can get the color codes online (<http://www.color-hex.com/>) Upvotes: 1 <issue_comment>username_3: To change the background color of an element, you can use the CSS property "background", or "background-color". Some colors have a color name equivalent, but you can get more specific by using hexadecimal colors. There are many websites that can help you find an exact color, such as <https://htmlcolorcodes.com>. Here's an example with a red background: With the color name: With a hexadecimal color: Upvotes: 2 <issue_comment>username_4: **Please try this follow code:** ```js $("#button").click(function() { $("#button").addClass('button-clicked'); }); ``` ```css .button-clicked { background: red; font-size:99px; height: 180px; width: 700px } ``` ```html ``` Upvotes: 1 <issue_comment>username_5: Try following code: ```css .button { background-color: #4CAF50; /* Green */ border: none; color: white; padding: 15px 32px; text-align: center; text-decoration: none; display: inline-block; font-size: 16px; margin: 4px 2px; cursor: pointer; } .btn2 {background-color: #008CBA;} /* Blue */ .btn3 {background-color: #f44336;} /* Red */ .btn4 {background-color: #e7e7e7; color: black;} /* Gray */ .btn5 {background-color: #555555;} /* Black */ ``` ```html ``` Upvotes: 1
2018/03/21
570
1,833
<issue_start>username_0: I would like to declare an attribute and assign a value to it in the component's HTML tag, like this:

```
```

Then I'd like to get the value of this attribute (`my-attr`) from inside my controller. Is this possible using AngularJS 1.6 and TypeScript, and if yes how?
2018/03/21
300
1,192
<issue_start>username_0: How to display variable value in Report.html file of RobotFramework. I am using RIDE Thanks!<issue_comment>username_1: The fact that you're using RIDE is irrelevant, just so you know. RIDE is just a Robot Framework test data editor and doesn't influence the execution itself. Now to answer your actual question, you could simply use the `Log` keyword from the BuiltIn library. ``` Log ${my_variable_name} ``` However this will display your variable in the log file, rather than in the report file. The report file is meant to be a summary report and I'm not sure why you'd want to log a variable's value in there, but if you insist on doing so, perhaps you could use the keyword `Set Test Documentation` to append your variable to the test's documentation, which is displayed in the report. Upvotes: 1 <issue_comment>username_2: Let ${result} be the variable, you want to display in the report.html file. Then, you can use the following keyword ``` Set Test Message ${result} ``` If the ${result} variable is returned by the **RUN PROCESS** keyword, then you can use ${result.stdout}, ie, ``` Set Test Message ${result.stdout} ``` Upvotes: 0
2018/03/21
517
1,976
<issue_start>username_0: In my Angular CLI components, I get an error when defining a variable as a 'var' or 'let' outside of a function. I also get an error using the keyword 'function'. If I declare just by name it's fine. For instance in this stripped down component:

```
import { Component } from '@angular/core';

@Component({
  selector: 'home',
  templateUrl: 'home.component.html',
  styleUrls: ['home.component.scss']
})
export class HomeComponent {
  constructor() {}

  var aaa = '1'; // nope
  let bbb = '2'; // nope
  ccc = '3'; // ok

  method1(){ // ok
    let ddd = '4'; // ok
  }

  function method2(){ // nope
  }
}
```

For aaa, bbb, and method2 it is throwing the error: "[ts] Unexpected token. A constructor, method, accessor, or property was expected." What am I doing wrong here? I suspect it is a configuration problem with my project?
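For context, the error in the question is a TypeScript grammar rule rather than a project configuration issue: a class body may only contain member declarations (properties, methods, accessors, constructors), while `var`, `let` and `function` are statements that must live inside a body such as a method. A minimal sketch, stripped of the Angular decorator:

```typescript
class HomeComponent {
  ccc = '3';            // property declaration: allowed at class level

  method1(): string {   // method declaration: allowed
    let ddd = '4';      // statements are fine inside a method body
    return ddd;
  }

  // var aaa = '1';        // not allowed: `var` is a statement, not a member
  // function method2() {} // not allowed: write `method2() {}` instead
}

console.log(new HomeComponent().method1()); // 4
```

So `ccc = '3'` works because it is a property initializer, and `method2` compiles once the `function` keyword is dropped.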
2018/03/21
1,696
5,499
<issue_start>username_0: I want to filter Strings from documents the same way sklearn's [CountVectorizer](http://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html) does. It uses the following RegEx: `(?u)\b\w\w+\b`. This Java code should behave the same way:

```
Pattern regex = Pattern.compile("(?u)\\b\\w\\w+\\b");
Matcher matcher = regex.matcher("this is the document.!? äöa m²");
while (matcher.find()) {
    String match = matcher.group();
    System.out.println(match);
}
```

But this doesn't produce the desired output, as it does in Python:

```
this
is
the
document
äöa
m²
```

It instead outputs:

```
this
is
the
document
```

What can I do to include non-ASCII characters, as the Python RegEx does?<issue_comment>username_1: There is one more step left: you need to specify that `\w` includes Unicode characters too. `Pattern.UNICODE_CHARACTER_CLASS` to the rescue:

```
Pattern regex = Pattern.compile("(?u)\\b\\w\\w+\\b", Pattern.UNICODE_CHARACTER_CLASS);
//                                                   ^^^^^^^^^^
Matcher matcher = regex.matcher("this is the document.!? äöa m²");
while (matcher.find()) {
    String match = matcher.group();
    System.out.println(match);
}
```

Upvotes: 0 <issue_comment>username_2: As suggested by Wiktor in the comments, you could use `(?U)` to turn on the flag `UNICODE_CHARACTER_CLASS`. While this does allow matching `äöa`, it still doesn't match `m²`. That's because `UNICODE_CHARACTER_CLASS` with `\w` doesn't recognize `²` as a valid alphanumeric character.

As a replacement for `\w`, you can use `[\pN\pL_]`. This matches Unicode numbers `\pN` and Unicode letters `\pL` (plus `_`). The `\pN` Unicode character class includes the `\pNo` character class, which covers the *Latin-1 Supplement* punctuation and symbols block (it includes `²³¹`). Alternatively, you could just add the `\pNo` Unicode character class to a character class with `\w`.
This means the following regular expressions correctly match your strings:

```
[\pN\pL_]{2,}    # Matches any Unicode number or letter, and underscore
(?U)[\w\pNo]{2,} # Uses UNICODE_CHARACTER_CLASS so that \w matches Unicode.
                 # Adds \pNo to additionally match ²³¹
```

So why doesn't `\w` match `²` in Java but it does in Python?

---

Java's interpretation
---------------------

Looking at [OpenJDK 8-b132's `Pattern` implementation](http://grepcode.com/file/repository.grepcode.com/java/root/jdk/openjdk/8-b132/java/util/regex/Pattern.java), we get the following information (I removed information irrelevant to answering the question):

> Unicode support
> ---------------
>
> The following **Predefined Character classes** and **POSIX character classes** are in conformance with the recommendation of *Annex C: Compatibility Properties* of *Unicode Regular Expression*, when the `UNICODE_CHARACTER_CLASS` flag is specified.
>
> `\w` A word character: `[\p{Alpha}\p{gc=Mn}\p{gc=Me}\p{gc=Mc}\p{Digit}\p{gc=Pc}\p{IsJoin_Control}]`

Great! Now we have a *definition* for `\w` when the `(?U)` flag is used. Plugging these Unicode character classes into [this amazing tool](https://unicode.org/cldr/utility/list-unicodeset.jsp) will tell you exactly what each of these Unicode character classes matches. Without making this post super long, I'll just go ahead and tell you that none of the following classes matches `²`:

* `\p{Alpha}`
* `\p{gc=Mn}`
* `\p{gc=Me}`
* `\p{gc=Mc}`
* `\p{Digit}`
* `\p{gc=Pc}`
* `\p{IsJoin_Control}`

---

Python's interpretation
-----------------------

So why does Python match `²³¹` when the `u` flag is used in conjunction with `\w`? This one was very difficult to track down, but I went digging into [Python's source code (I used Python 3.6.5rc1 - 2018-03-13)](https://www.python.org/downloads/source/).
After removing a lot of the fluff for how this gets called, basically the following happens:

* `\w` is defined as `CATEGORY_UNI_WORD`, which is then prefixed with `SRE_`. `SRE_CATEGORY_UNI_WORD` calls `SRE_UNI_IS_WORD(ch)`.
* `SRE_UNI_IS_WORD` is defined as `(SRE_UNI_IS_ALNUM(ch) || (ch) == '_')`.
* `SRE_UNI_IS_ALNUM` calls `Py_UNICODE_ISALNUM`, which is, in turn, defined as `(Py_UNICODE_ISALPHA(ch) || Py_UNICODE_ISDECIMAL(ch) || Py_UNICODE_ISDIGIT(ch) || Py_UNICODE_ISNUMERIC(ch))`.
* The interesting one here is `Py_UNICODE_ISDECIMAL(ch)`, defined as `_PyUnicode_IsDecimalDigit(ch)` (the digit and numeric checks follow the same pattern).

Now, let's take a look at the method `_PyUnicode_IsDecimalDigit(ch)`:

```
int _PyUnicode_IsDecimalDigit(Py_UCS4 ch)
{
    if (_PyUnicode_ToDecimalDigit(ch) < 0)
        return 0;
    return 1;
}
```

As we can see, this method returns `0` if `_PyUnicode_ToDecimalDigit(ch)` is negative, and `1` otherwise. So what does `_PyUnicode_ToDecimalDigit` look like?

```
int _PyUnicode_ToDecimalDigit(Py_UCS4 ch)
{
    const _PyUnicode_TypeRecord *ctype = gettyperecord(ch);

    return (ctype->flags & DECIMAL_MASK) ? ctype->decimal : -1;
}
```

Great, so basically: the character's Unicode *type record* is looked up, and the mask is tested against that record's `flags` field (not against the code point value itself); when the flag is set, a value greater than or equal to `0` is returned. `²` carries a digit/numeric value in the Unicode database, so one of the `Py_UNICODE_ISDECIMAL` / `Py_UNICODE_ISDIGIT` / `Py_UNICODE_ISNUMERIC` checks succeeds, `Py_UNICODE_ISALNUM` is true, and `²` is deemed to be a valid Unicode alphanumeric character in Python. Thus `\w` with the `u` flag matches `²`.

Upvotes: 3 [selected_answer]
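The conclusion is easy to confirm from Python itself: the exact pattern the question feeds to Java keeps both `äöa` and `m²` when run under Python 3 (where `(?u)` is redundant for `str` patterns but still accepted at the start of the expression):

```python
import re

# The same pattern CountVectorizer uses, applied to the question's sample text.
tokens = re.findall(r"(?u)\b\w\w+\b", "this is the document.!? äöa m²")
print(tokens)  # ['this', 'is', 'the', 'document', 'äöa', 'm²']
```

So to mirror sklearn's tokenization in Java, the `[\pN\pL_]`-style classes above are needed, since Java's `\w` never reaches `²` even with `UNICODE_CHARACTER_CLASS`.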
2018/03/21
212
751
<issue_start>username_0: I installed Docker CE from the official docker site on Ubuntu server 16.04.4 LTS, Release: 16.04, Codename: xenial. I can not find the `/var/lib/docker` file even after I install docker on my machine. When I try to run an Ubuntu container for example with: ``` docker run -it ubuntu ``` an error message: ``` docker: Can not connect to the Docker daemon at unix: ///var/run/docker.sock ``` Is the docker daemon running?<issue_comment>username_1: Please try with sudo : `sudo docker run -it ubuntu` Upvotes: 0 <issue_comment>username_2: First you have to ckeck if the docker service is running `service docker status` if it's not running, start it: `service docker start` then you can use `docker run ....` Upvotes: 2
2018/03/21
161
533
<issue_start>username_0: I have a column of data in excel containing a number. What I need is a formula that can add the word NA ( repeated with a space) for whatever the number value is, for example: ``` A1 A2 3 NA NA NA 1 NA 8 NA NA NA NA NA NA NA NA ``` Is this possible?<issue_comment>username_1: Use the formula : `=REPT("NA ",A1)` Upvotes: 2 <issue_comment>username_2: Use the formula ``` =TRIM(REPT("NA ",A1)) ``` to have spaces in between but not have a space in the end. Upvotes: 1
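For anyone wanting the same behaviour outside Excel, the repeat-then-trim idea is easy to sketch; `rept_na` below is a hypothetical Python helper mirroring `TRIM(REPT("NA ",A1))`:

```python
def rept_na(count: int) -> str:
    """Mimic Excel's TRIM(REPT("NA ", count)): repeat, then drop the trailing space."""
    return ("NA " * count).strip()

for n in (3, 1, 8, 0):
    print(repr(rept_na(n)))
```

Like Excel's `TRIM`, the `strip()` call removes the trailing space left by the final repetition, and a count of 0 yields an empty string.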
2018/03/21
250
808
<issue_start>username_0: i have an angular 4 project which i want to serve via `ng serve`. If i'm doing it inside the command line, inside the project directory (`/hardware/angular-src`) it is no problem. But when i now try to run `ng serve /hardware/angular-src` from root directory `root@myPc:/$ ng serve /hardware/angular-src` it says `node_modules appears empty you may need to run npm install` **So this works:**
```
user@myPc:/$ cd /hardware/angular-src
user@myPc:/hardware/angular-src$ ng serve
```
**This works not**
```
user@myPc:/$ ng serve /hardware/angular-src
```
why?
2018/03/21
586
1,829
<issue_start>username_0: Is there a way to customize the attachment urls so instead of ``` /rails/active_storage/representations/ /rails/active_storage/blobs/ ``` We could have something like this: ``` /custom_path/representations/ /custom_path/blobs/ ```<issue_comment>username_1: Monkey patching is always on your side . Just for interest with next patch, I could change ActiveStorage Controller path: ```rb module MapperMonkeypatch def map_match(paths, options) paths.collect! do |path| path.is_a?(String) ? path.sub('/rails/active_storage/', '/custom_path/') : path end super end end ActionDispatch::Routing::Mapper.prepend(MapperMonkeypatch) ``` and everything seems works . Upvotes: 3 <issue_comment>username_2: Recently, there was an addition which makes the route prefix configurable: <https://github.com/rails/rails/commit/7dd9916c0d5e5d149bdde8cbeec42ca49cf3f6ca> Just in master branch now, but should be integrated into **~> 5.2.2** have been integrated into **Rails 6.0.0** and higher. Then, it's just a matter of configuration: ``` Rails.application.configure do config.active_storage.routes_prefix = '/whereever' end ``` Upvotes: 4 [selected_answer]<issue_comment>username_3: Tested the solution by @username_2. The `config.active_storage.routes_prefix` option only works for rails 6.0.0alpha and higher branch. The patch is not available for 5.2.2 I made a fork for 5.2.2 with the patch. <https://github.com/StaymanHou/rails/tree/v5.2.2.1> To use it, simply replace the line `gem 'rails', '~> 5.2.2'` with `gem 'rails', '5.2.2.1', git: 'https://github.com/StaymanHou/rails.git', tag: 'v5.2.2.1'`. And run `bundle install --force` `coffee-rails` gem will be required if you don't have it yet for rails edge install - <https://github.com/rails/rails/issues/28965> Upvotes: 0
2018/03/21
282
812
<issue_start>username_0: as it is said in the title, I want to compute the variance per row in my dataset, all columns are continuous : I have tried to use the rowVars function from the package matrixStats, but it doesn't work ``` x[, variance := rowVars(.SD), .SDcols=varQuant] ``` I have the following error : ``` Error in rowVars(.SD): Argument 'x' must be a matrix or a vector. ```<issue_comment>username_1: Try this code: ``` #Toy data.frame with two rows df<-data.table(rbind(runif(100,10,100),runif(100,10,100))) #Apply var function to each row apply(df,1,var) [1] 726.3197 652.2919 ``` Upvotes: 0 <issue_comment>username_2: Another way, using data.table: ``` library(matrixStats) library(data.table) x[, variance := rowVars(as.matrix(.SD))] ``` You just forgot the as.matrix part. Upvotes: 2
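For readers coming from outside R, the quantity `rowVars` computes is the sample variance of each row (denominator n - 1, like R's `var`); this is a minimal sketch of the same idea using only Python's standard library:

```python
from statistics import variance

rows = [
    [1.0, 2.0, 3.0],     # sample variance 1.0
    [10.0, 10.0, 10.0],  # constant row, variance 0.0
]

# statistics.variance uses the sample (n - 1) definition, matching R's var().
row_vars = [variance(row) for row in rows]
print(row_vars)  # [1.0, 0.0]
```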
2018/03/21
333
1,089
<issue_start>username_0: I was wondering if it is possible to start Acitivty from service running in background and then move MainActivity to background. I don't want to finish() MainActivity. Just hide it.
```
Intent it=new Intent(context, NewPopupRecognizer.class);
it.setFlags(Intent.FLAG_ACTIVITY_NEW_TASK);
context.startActivity(it);
```
I tried this code but it is always starting Activity with delay. I want to create popup Activity which i can turn on/off with floating button. I used to use WindowManger but it was very problematic so I decided to try doing it with Activity. The popup should be like: Facebook Messenger or Google Assistant.
2018/03/21
612
2,093
<issue_start>username_0: Hi I am new to Xamarin Forms and am from native iOS development. For showing alerts / action sheets I am using this <https://github.com/aritchie/userdialogs>. For implementing this I followed this [How to use Acr.UserDialogs](https://devlinduldulao.pro/how-to-use-acr-userdialogs/). I am getting alert successfully by following this, but now **I need to customise the OK / Cancel Button's background color, alignment, frame values, hide and show the buttons.**. Thanks in advance.<issue_comment>username_1: > > I need to customise the OK / Cancel Button's background color, alignment, frame values, hide and show the buttons. > > > As the author [said](https://github.com/aritchie/userdialogs/issues/504), you could do this by creating a style and apply it in AlertConfig. For example: In the style.xml: ``` <!-- Used for the buttons --> <item name="colorAccent">#AAAAAA</item> <!-- Used for the title and text --> <item name="android:textColorPrimary">#FFFFFF</item> <!-- Used for the background --> <item name="android:background">#DDDDDD</item> <!-- Used for the Alignment --> <item name="android:gravity">center\_horizontal</item> ``` And you could find this style Id in the `Resource.Designer.cs`. ``` // aapt resource value: 0x7f0b0189 public const int AlertDialogCustom = 2131427721; ``` Then in the code create a AlertConfig to config the alertdialog: ``` AlertConfig alertConfig = new AlertConfig(); alertConfig.OkText = "OKOK"; alertConfig.Message = "Message"; alertConfig.Title = "Title"; alertConfig.AndroidStyleId=2131427721; UserDialogs.Instance.Alert(alertConfig); ``` Upvotes: 2 <issue_comment>username_2: With [Rg.Plugins.Popup](https://github.com/rotorgames/Rg.Plugins.Popup) Nuget you can customize the popup. Upvotes: 1 <issue_comment>username_3: Finally I am not using any nugget packages. Now I created my own CustomAlert class. I hope it will helpful for any one. 
@Miguel Angel, please look at the code below.
In my CustomAlert.xaml file:
```
xml version="1.0" encoding="UTF-8"?
```
Thank you Upvotes: 0
2018/03/21
940
2,326
<issue_start>username_0: Please see the simplified example: ``` A=[(721,'a'),(765,'a'),(421,'a'),(422,'a'),(106,'b'),(784,'a'),(201,'a'),(206,'b'),(207,'b')] ``` I want group adjacent tuples with attribute 'a', every two pair wise and leave tuples with 'b' alone. So the desired tuple would looks like: ``` A=[[(721,'a'),(765,'a')], [(421,'a'),(422,'a')], [(106,'b')], [(784,'a'),(201,'a')], [(206,'b')],[(207,'b')]] ``` What I can do is to build two separated lists contains tuples with `a` and `b`. Then pair tuples in `a`, and add back. But it seems not very efficient. Any faster and simple solutions?<issue_comment>username_1: You can use `itertools.groupby`: ``` import itertools A=[(721,'a'),(765,'a'),(421,'a'),(422,'a'),(106,'b'),(784,'a'),(201,'a'),(206,'b'),(207,'b')] def split(s): return [s[i:i+2] for i in range(0, len(s), 2)] new_data = [i if isinstance(i, list) else [i] for i in list(itertools.chain(*[split(list(b)) if a == 'a' else list(b) for a, b in itertools.groupby(A, key=lambda x:x[-1])])) ``` Output: ``` [[(721, 'a'), (765, 'a')], [(421, 'a'), (422, 'a')], [(106, 'b')], [(784, 'a'), (201, 'a')], [(206, 'b')], [(207, 'b')]] ``` Upvotes: 2 <issue_comment>username_2: Assuming `a` items are *always* in pairs, a simple approach would be as follows. Look at the first item - if it's an `a`, use it and the next item as a pair. Otherwise, just use the single item. Then 'jump' forward by 1 or 2, as appropriate: ``` A=[(721,'a'),(765,'a'),(421,'a'),(422,'a'),(106,'b'),(784,'a'),(201,'a'),(206,'b'),(207,'b')] result = [] count = 0 while count <= len(A)-1: if A[count][1] == 'a': result.append([A[count], A[count+1]]) count += 2 else: result.append([A[count]]) count += 1 print(result) ``` Upvotes: 3 [selected_answer]<issue_comment>username_3: No need to use two lists. 
Edit: if the 'a' items are not assumed to always come as adjacent pairs:
```
A = [(721,'a'),(765,'a'),(421,'a'),(422,'a'),(106,'b'),(784,'a'),(201,'a'),(206,'b'),(207,'b')]
new_list = []
i = 0
while i < len(A):
    if i == len(A)-1:
        new_list.append([A[i]])
        i += 1
    elif A[i][1] == A[i+1][1] == 'a':
        new_list.append([A[i], A[i+1]])
        i += 2
    else:
        new_list.append([A[i]])
        i += 1
print(new_list)
```
Upvotes: 0
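As a quick end-to-end check of the walking approach in the accepted answer, here is a minimal sketch (a slight variant that also guards against a trailing unpaired 'a') reproducing the desired grouping for the sample data:

```python
A = [(721, 'a'), (765, 'a'), (421, 'a'), (422, 'a'), (106, 'b'),
     (784, 'a'), (201, 'a'), (206, 'b'), (207, 'b')]

grouped = []
i = 0
while i < len(A):
    # Pair an 'a' with the adjacent 'a' that follows; 'b' items stay alone.
    if A[i][1] == 'a' and i + 1 < len(A) and A[i + 1][1] == 'a':
        grouped.append([A[i], A[i + 1]])
        i += 2
    else:
        grouped.append([A[i]])
        i += 1

print(grouped)
```

The bounds check `i + 1 < len(A)` means a final lone 'a' would simply end up in its own singleton group instead of raising an `IndexError`.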
2018/03/21
673
2,455
<issue_start>username_0: I'm following a step-by-step guide written by a Microsoft field engineer in the documentation below to create a BizTalk 2016 AOAG with SQL 2016 Enterprise edition, in conjunction with Server Management Studio (14.0.17224.0):
<https://learn.microsoft.com/en-us/biztalk/core/high-availability-using-sql-server-always-on-availability-groups?redirectedfrom=MSDN>
My question: when preparing the availability group using Server Management Studio (14.0.17224.0), is the "supports database DTC" option new functionality added to SQL Management Studio (14.0.17224.0)? During the creation of his lab environment he mentions that DTC support cannot be applied from SSMS and must be done in a script. I created the AG through the SSMS GUI and ran a query to find the DTC status on the database, as below [![enter image description here](https://i.stack.imgur.com/Pd38i.png)](https://i.stack.imgur.com/Pd38i.png) :
--===Checking whether DTC support is ENABLED or NOT for BIZTALK 2016 SSODB======
`SELECT NAME,DTC_SUPPORT FROM sys.availability_groups`<issue_comment>username_1: **HOLD ON!** The first thing you should do is really, double, triple check you really, absolutely 110% need (as in no other option) to use AOAG with BizTalk Server. While supported, AOAG is in practice a **net negative with BizTalk Server** over regular failover clustering alone since it adds *significant additional complexity* to the setup, while providing *no incremental benefit* to BizTalk Server. Note, this situation is specific to BizTalk Server. Many apps can and do benefit from AOAG, just not BizTalk Server. Upvotes: 2 <issue_comment>username_2: Availability Groups is the better H/A solution in my opinion. It comes with database redundancy and a much faster failover, which a BizTalk environment will benefit from. From SQL 2016 SP2 and SQL Server 2017 it also supports cross-database transactions inside instances in an AOAG, so you only need one instance (instead of 4 instances in SQL 2016 SP1).
I believe that a Windows Failover Cluster is more complex because you have a lot more shared resources, while in Availability Groups it is only the listener that is a resource in the WFC. Starting with SQL Server 2016 (13.x) Service Pack 2 you can alter an availability group for distributed transactions. For SQL Server 2016 (13.x) versions before Service Pack 2, you need to drop and recreate the availability group with the DTC\_SUPPORT = PER\_DB setting. Upvotes: 1 [selected_answer]
2018/03/21
1,093
3,145
<issue_start>username_0: ``` Select lot_loc.whse, lot_loc.item, item.Ufprofile, item.UfColor, item.Uflength, item.unit_weight*Lot_loc.qty_on_hand 'QTY LBS. On Hand', item.unit_weight*Lot_loc.qty_rsvd 'QTY LBS. Reserved', item.UfQtyPerSkid, lot_loc.loc, Lot_loc.lot, Lot_loc.qty_on_hand, Lot_loc.qty_rsvd, itemwhse.qty_reorder, DateDiff(day, lot.Create_Date, GetDate())'Days Old', lot_loc.CreateDate, coitem.co_num, coitem.co_line, coitem.co_cust_num, custaddr.name, coitem.due_date, item.description, item.unit_weight*item.lot_size 'STD Run Size (Lbs.)' from lot_loc_mst lot_loc left outer join rsvd_inv_mst rsvd_inv on lot_loc.lot = rsvd_inv.lot LEFT OUTER JOIN coitem_mst coitem ON coitem.co_num = rsvd_inv.ref_num AND coitem.co_line = rsvd_inv.ref_line AND coitem.item = rsvd_inv.item left join custaddr_mst custaddr on coitem.co_cust_num = custaddr.cust_num and coitem.cust_seq = custaddr.cust_seq Left join item_mst item on lot_loc.item = item.item left join itemwhse_mst itemwhse on lot_loc.item = itemwhse.item and lot_loc.whse = itemwhse.whse inner join lot_mst lot on Lot_loc.lot = lot.lot ``` I need to group the data by qty lbs on hand. Not sure how to do it. I get a error message when i try to do it. > > Error :Column 'lot\_loc\_mst.whse' is invalid in the select list because > it is not contained in either an aggregate function or the GROUP BY > clause. > > ><issue_comment>username_1: all the columns that are not being aggregated must be in the group by for example select a, b, sum(c) from tab group by a,b <-- maybe you are missing this part Upvotes: 0 <issue_comment>username_2: Just an example to try and help you with aggregation. I havent included every column but hopefully you can understand it enough to complete it yourself. If you want to group by a column all other columns must be either included in the group by clause as well or use an aggregate function such as SUM, MIN, MAX depending on what you want from that column. 
Otherwise SQL doesnt know what to do with these columns e.g. ``` SELECT * FROM ( Select min(item.Uflength) AS [Item Length], SUM(item.unit_weight*Lot_loc.qty_on_hand) [QTY LBS. On Hand], SUM(item.unit_weight*Lot_loc.qty_rsvd) [QTY LBS. Reserved], min(item.UfQtyPerSkid) AS [Qty Per Skid], DateDiff(day, min(lot.Create_Date), GetDate())[Days Old], min(item.description) AS [Item Description], SUM(item.unit_weight*item.lot_size) [STD Run Size (Lbs.)] from lot_loc_mst lot_loc left outer join rsvd_inv_mst rsvd_inv on lot_loc.lot = rsvd_inv.lot LEFT OUTER JOIN coitem_mst coitem ON coitem.co_num = rsvd_inv.ref_num AND coitem.co_line = rsvd_inv.ref_line AND coitem.item = rsvd_inv.item left join custaddr_mst custaddr on coitem.co_cust_num = custaddr.cust_num and coitem.cust_seq = custaddr.cust_seq Left join item_mst item on lot_loc.item = item.item left join itemwhse_mst itemwhse on lot_loc.item = itemwhse.item and lot_loc.whse = itemwhse.whse inner join lot_mst lot on Lot_loc.lot = lot.lot ) P GROUP BY [QTY LBS. On Hand] ``` Upvotes: 1
2018/03/21
538
1,788
<issue_start>username_0: Consider I have two datasets: ``` data dataset_1; input CASENO X; datalines; 1 100 2 200 3 300 ; data dataset_2; input CASENO Y; datalines; 2 200000 3 300000 ; ``` I'm looking to find how many CASENOs appear in both lists: in the example above, I would get 2. My data is very large. It is taking a long time to get this sort of result by using a merge. ``` data result; merge dataset_1 (in = a) and dataset_2 (in = b); by CASENO; if a and b; RUN; ``` I'm looking for a more efficient way - edit: for clarity, is there a way to return the number of matches in two datasets without SAS having to write out the resulting file?<issue_comment>username_1: Try: ``` proc sql; select count(caseno) as Number from dataset_1 where caseno in (select caseno from dataset_2); quit; ``` Upvotes: 1 <issue_comment>username_2: If the datasets are already sorted, the data step merge is incredibly efficient. It passes over each row in each table exactly once. Of course, if you just want the *count*, you don't need to output all of the rows to a dataset, you can just: ``` data _null_; merge dataset_1 (in = a keep=caseno) dataset_2 (in = b keep=caseno) end=eof; by CASENO; if a and b then count+1; if eof then call symputx('count',count); RUN; ``` This will be much faster to run since you're not writing anything out. I also add KEEP statements (as Tom points out in comments) to the incoming datasets to only read in the by variable, this produces a speed-up of about 10%. If the datasets are *indexed*, you have some additional options that will be faster as they will be doing index scans (such as using SQL). But sorted, non-indexed tables, it's hard to improve on the data step merge. Upvotes: 3 [selected_answer]
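The number being produced by the merge is just the size of the intersection of the two CASENO sets; here is a hypothetical Python sketch of the same count on the sample data, for readers outside SAS:

```python
dataset_1 = {1: 100, 2: 200, 3: 300}   # CASENO -> X
dataset_2 = {2: 200000, 3: 300000}     # CASENO -> Y

# CASENOs appearing in both tables, like the "if a and b" merge condition.
matches = len(dataset_1.keys() & dataset_2.keys())
print(matches)  # 2
```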
2018/03/21
499
2,007
<issue_start>username_0: Currently, to update a k8s deployment image, we use the `kubectl set image` command like this: ``` kubectl set image deployment/deployment_name container=url_to_container ``` While this command updates the URL used for the main container in the deployment, it does not update the URL for the `initContainer` also set within the deployment. Is there a similar `kubectl` command I can use to update the `initContainer` to the same URL?<issue_comment>username_1: The [documentation](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#updating-a-deployment) seems to suggests that only containers are concerned. Maybe you could switch to `kubectl patch` ? (I know it's more tedious...) ``` kubectl patch deployment/deployment_name --patch "{\"spec\": {\"template\": {\"spec\": {\"initContainers\": [{\"name\": \"container_name\",\"image\": \"url_to_container\"}]}}}}" ``` Upvotes: 4 [selected_answer]<issue_comment>username_2: Since the accepted answer, the team [developed](https://github.com/kubernetes/kubernetes/pull/72276) the ability to set image for Kubernetes init containers. Using the same command, simply use the init container name for the container part of the command you supplied. ``` kubectl set image deployment/deployment_name myInitContainer=url_to_container ``` In case you want to update both container images in a single command, use: ``` kubectl set image deployment/deployment_name myInitContainer=url_to_container container=url_to_container ``` Upvotes: 4 <issue_comment>username_3: The snippet above is based on solution provided by @username_1. Therefore, here I removed the empty spaces on the json argument and added a namespace example. I did that because I faced issues when I was writing a pipeline for drone.io. ``` kubectl patch deployment deployment_name -n namespace_name -p "{\"spec\":{\"template\":{\"spec\":{\"initContainers\":[{\"name\":\"deployment_name\",\"image\":\"url_to_container"}]}}}}" ``` Upvotes: 0
2018/03/21
553
1,967
<issue_start>username_0: ```
Table A1 Table B1
A --100 id =1 A --100 id=1
B -- 100 id =2 A -100 id=1
C -- 200 id=3 A - 100 id =1
```
Need to sum all values from two tables where id =1.
```
select (SUM(A1.A) + SUM(nvl(B1.A,0))) SUM
from A1 a, B1 b
where a.id='1' AND b.id='1';
```
I am getting sum as 600 but it should be 400
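A note on where 600 comes from: `FROM A1 a, B1 b` is an implicit cross join, so the single id=1 row of A1 is paired with each of the three id=1 rows of B1 before `SUM` runs. A small Python sketch with the sample values reproduces both totals:

```python
a1_id1 = [100]             # A1 rows with id = 1
b1_id1 = [100, 100, 100]   # B1 rows with id = 1

# The comma join pairs every A1 row with every B1 row before SUM runs.
pairs = [(x, y) for x in a1_id1 for y in b1_id1]
cross_join_total = sum(x for x, _ in pairs) + sum(y for _, y in pairs)

intended_total = sum(a1_id1) + sum(b1_id1)
print(cross_join_total, intended_total)  # 600 400
```

Summing each table separately (for example in scalar subqueries) avoids the row multiplication.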
2018/03/21
501
1,693
<issue_start>username_0: ``` int x = 6; ``` it works, but ``` Int32 x = new Int32(6); ``` does not. Why Int32's default constructor parameterless? **how does it assigns 6 to x?** edit: more explanation.. <https://referencesource.microsoft.com/#mscorlib/system/int32.cs,225942ed7b7a3252> line 38: ``` internal int m_value; ``` how compiler does assign 6 to this m\_value? Int32's constructor is parameterless.<issue_comment>username_1: > > How does it assign 6 to `x`? > > > `6`, the literal, is already an integer. The language is designed in a way that there are *literal expressions* within the syntax which are directly interpreted by the the compiler. A plain `6` is an integer literal and already corresponds to an `Int32` object with the value 6. The compiler does not actually need to call a constructor for literals but can create the objects directly. Depending on the type, there may be different syntaxes for different literals. For example a string literal `"foo"` also makes the compiler create a string object with the value *“foo”* directly. Note that this is nothing special to C# and its typing system. So whether `Int32` is a value type or not does not actually matter (`String` is not even a value type and there are still literals). Upvotes: 2 <issue_comment>username_2: ``` x = 6; ``` After compilation becomes: ``` IL_0001: ldc.i4.6 IL_0002: stloc.0 ``` So as you see .NET has instruction for creation integer values from literals. It's done under the hood. ldc.i4.6 creates "6" and put it onto the top of the stack. (that is actually creation of integer value) stloc.0 copies value from the top of the stack in to variable x Upvotes: 2
2018/03/21
916
3,094
<issue_start>username_0: Are there any limitation on the amount of arguments that can be send in an event? I have a function in which I want to trigger event that has 12 arguments of which 6 arguments are arrays. I get Stack too deep, try using less variables. Without the event the function works normally. I am guessing event arguments have some limitations or count towards max arguments in a solidity function but I cannot find any documentation around it. Can anyone clarify this? Edit: The contract looks something like this: I'm using safe math and the \_getAddressSubArrayTo is an internal pure function that gets a sub array from index to index. ``` event LogTemp(address a, address b, address[] c, uint256[] d, address[] e, uint256[] f, address[] g, uint256[] h, uint256 i, uint256 j, uint256 k, bytes32 l); function test(address[] _addresses, uint256[] _uints, uint8 _v, bytes32 _r, bytes32 _s, bool test) public returns (bool) { Temp memory temp = Temp({ a: _addresses[0], b: _addresses[1], c: _getAddressSubArrayTo(_addresses, 2, _uints[3].add(2)), d: _getUintSubArrayTo(_uints, 5, _uints[3].add(5)), e: _getAddressSubArrayTo(_addresses, _uints[3].add(2), (_uints[3].add(2)).add(_uints[4])), f: _getUintSubArrayTo(_uints, _uints[3].add(5), (_uints[3].add(5)).add(_uints[4])), g: _getAddressSubArrayTo(_addresses, (_uints[3].add(2)).add(_uints[4]), _addresses.length), h: _getUintSubArrayTo(_uints,(_uints[3].add(5)).add(_uints[4]), _uints.length), i: _uints[0], j: _uints[1], k: _uints[2], l: hash( _addresses, _uints ) }); LogTemp( temp.a, temp.b, temp.c, temp.d, temp.e, temp.f, temp.g, temp.h, temp.i, temp.j, temp.k, temp.l ); } ```<issue_comment>username_1: Yes, there are limits. You can have up to three indexed arguments in your event. Non-indexed arguments are less restrictive as it’s not limited by the event data structure itself, but is limited by the block gas size for storage (at a cost of 8 gas per byte of data stored in the log). 
[Solidity event documentation](http://solidity.readthedocs.io/en/latest/contracts.html#events) Upvotes: 3 <issue_comment>username_2: I was able to find an answer: If you look at the ContractCompiler.cpp where FunctionDefinition is declared, you see there is a limit of 17 elements on the stack ; ``` if (stackLayout.size() > 17) BOOST_THROW_EXCEPTION( CompilerError() << errinfo_sourceLocation(_function.location()) << errinfo_comment("Stack too deep, try removing local variables.") ); ``` Events are defined as functions, as can be seen in ExpressionCompiler.cpp. Simply put Events are treated as functions so they have a limit of 17 arguments. Array counts as 2 so in my example where I have 6 arrays + 6 normal arguments this equals 18 and I'm breaking the stack by 1. Upvotes: 3 [selected_answer]
2018/03/21
632
1,672
<issue_start>username_0: my question is how to create 3 image full size linkable boxes responsive [![https://imgur.com/a/RfruG](https://i.stack.imgur.com/Lt09s.jpg)](https://i.stack.imgur.com/Lt09s.jpg) Please advise, what framework to use?<issue_comment>username_1: For a setup as straightforward as this, you can give each image a percentage width of `33.33%`. ```css div { color: rgb(255, 255, 255); font-weight: bold; text-align: center; line-height: 180px; } img { float: left; width: 33.33%; height: 180px; } img:nth-of-type(1) { background-color: rgb(127, 0, 0); } img:nth-of-type(2) { background-color: rgb(127, 127, 0); } img:nth-of-type(3) { background-color: rgb(0, 63, 0); } ``` ```html ![Image 1]() ![Image 2]() ![Image 3]() ``` Upvotes: 0 <issue_comment>username_2: We normally don't do your work here but I made an exception and hope this is your solution: ```css html, body { height: 100%; } body { display: flex; align-items: stretch; align-content: stretch; } body>div { flex-grow: 1; height: 100%; width: 100%; background-position: center; background-repeat: no-repeat; background-size: cover; text-align: center; display: table; } body>div>h1 { vertical-align: middle; display: table-cell; } @media (max-device-width: 20cm) { body { flex-direction: column; } body>div { height: 33%; } } ``` ```html Text ==== Text ==== Text ==== ``` Upvotes: 2 [selected_answer]<issue_comment>username_3: You Can Use col-md-4 property of bootstrap for responsive part and insert the desired link in the link tag. I hope it will help you. Upvotes: 0
2018/03/21
614
1,680
<issue_start>username_0: i have a table named "Post" and each post has post image,the image saved in table as a string , when i am viewing the post i need to put the image as a css background image ,the code i wrote is: and it isn't get the image !!
2018/03/21
368
1,355
<issue_start>username_0: I have HTML template which have injected bootsrap modal, but modal opening only with the button. I want that modal open automatically when page load, without button. So this is the button code: ``` Small modal ``` This is the modal code HTML: ``` × #### Modal title Lorem ipsum dolor sit amet, consectetuer adipiscing elit. Aenean commodo ligula eget dolor. Aenean massa. Cum sociis natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Donec quam felis, ultricies nec, pellentesque eu, pretium quis, sem. Nulla consequat massa quis enim. Donec pede justo, fringilla vel, aliquet nec, vulputate eget, arcu. In enim justo, rhoncus ut, imperdiet a, venenatis vitae, justo. Close Save changes ```<issue_comment>username_1: You could load the modal on `document.ready` After the document loads you can either trigger the click event on the button as ``` Small modal $( document ).ready(function() { $('#smallModalButton').click(); }); ``` or you could simply toggle the modal as ``` $( document ).ready(function() { $('.bs-example-modal-sm').modal('toggle'); }); ``` Upvotes: 0 <issue_comment>username_2: I found a solution. Change the class from fade to show, and add onclick to close button. Code: ``` ``` Button: ``` Close ``` Upvotes: -1
2018/03/21
432
1,572
<issue_start>username_0: I made a Python module in C/C++ with Python C API. I use setuptools.Extension in my setup.py. It creates one .py file which loads a python module from some compiled .pyd file:
```
def __bootstrap__():
 global __bootstrap__, __loader__, __file__
 import sys, pkg_resources, imp
 __file__ = pkg_resources.resource_filename(__name__, 'zroya.cp36-win32.pyd')
 __loader__ = None; del __bootstrap__, __loader__
 imp.load_dynamic(__name__,__file__)
__bootstrap__()
```
But it does not generate python stubs for IDE autocomplete feature. I would like all exported functions and classes to be visible from .py file:
```
def myfunction_stub(*args, **kwargs):
 """ ... function docstring """
 pass
```
Is it possible? Or do I have to create some python "preprocessor" which loads data from .pyd file and generate stubs with docstrings? Source code is available on [github](https://github.com/malja/zroya/tree/zroya2/zroya).
2018/03/21
732
2,558
<issue_start>username_0: I have an issue where my query is throwing an error even though it is **commented**. I don't understand why it is happening. I found this error in one of my big query scripts where we are trying to execute everything in one shot. I also tried executing this commented script in a separate window, but it still shows me the error message. Can anyone help me with the reason and a solution for this? Thanks in advance.
```
/*
USE [MY_DATABASE]
GO
/*002458 -- End */
GO
/****** Object: Synonym [dbo].[SYN_BCM_POLICYINVOICE_D] Script Date: 2/1/2018 7:15:33 PM ******/
IF NOT EXISTS (select * from sys.synonyms where name='SYN_IM_ARCH_PREMIUMBATCH')
BEGIN
CREATE SYNONYM [dbo].[SYN_IM_ARCH_PREMIUMBATCH]
FOR [VUEDATA_Billing].[dbo].[IM_ARCH_PREMIUMBATCH]
END
GO
*/
```
**Error Message:**

> 
> Msg 113, Level 15, State 1, Line 10 Missing end comment mark '*/'. Msg
> 102, Level 15, State 1, Line 2 Incorrect syntax near '*'.
> 
> 

<issue_comment>username_1: As you can see in the code formatting of your question, the comment is being closed by your line of `/*002458 -- End */`. That's why the `GO` is colored as it is. If you have this code in SSMS (or really any decent code editor) you should likewise see that the comment has been closed. Because of that your code is not commented out, and in addition you have the trailing `*/` at the end, which would also cause a syntax error. Upvotes: 1 <issue_comment>username_2: When you comment code with `/*` Commented\_text `*/`, the `/*` and `*/` go in pairs. That means you have two blocks of commented code. It looks like you wanted to comment out all the code, seeing the `/*` at the start and the `*/` at the end. But they don't work as nested here. The first comment ends when it meets `*/`. So the last line is not commented. 
Upvotes: 0 <issue_comment>username_3: Be careful when you use comments in SQL Server: `--` will only comment out the line (but can be problematic if you unwrap the text in another client language, where the whole query may end up commented). `/* */` will comment out all the content between the opening `/*` and the **FIRST** closing `*/` after it. The line with `/**** object: ... ***/` has a closing tag `*/`. The `*/` at the end of the script throws an error because it tries to terminate a comment which is not open. If you want properly commented code, I suggest using `/* */` on each functional block. Finally, you can also just remove the automatic `/**** object: ... ***/` line and the code will run perfectly (read: nothing will happen because it's commented). Upvotes: 2
2018/03/21
581
2,288
<issue_start>username_0: I am running a raw query like the one below on an MSSQL database:

```
sequelize.query("DELETE FROM vehicle where vehicleId IN ('1', '2', '3')")
```

The return is always `[ [], [] ]`. This should contain the metadata and an array result. We also have a MySQL database with the same tables. When the same query is run there, the results come with the expected metadata and results. According to the [documentation](http://docs.sequelizejs.com/manual/tutorial/raw-queries.html), if I include `{ type: sequelize.QueryTypes.DELETE}` as the second parameter, it would return only the results without the metadata. Instead, it returns `undefined`. What am I missing here?<issue_comment>username_1: As you can see in the code formatting of your question, the comment is being closed by your line `/*002458 -- End */`. That's why the `GO` is colored as it is. If you have this code in SSMS (or really any decent code editor) you should likewise see that the comment has been closed. Because of that, your code is not commented out, and in addition you have the trailing `*/` at the end, which would also cause a syntax error. Upvotes: 1 <issue_comment>username_2: When you comment code with `/*` commented\_text `*/`, the `/*` and `*/` go in pairs. That means you have two blocks of commented code. It looks like you wanted to comment out all the code, seeing `/*` at the start and `*/` at the end. But they don't work as nested here: a comment ends as soon as the first `*/` is met. So the last line is not commented. Upvotes: 0 <issue_comment>username_3: Be careful when you use comments in SQL Server: `--` will only comment out the line (but can be problematic if you unwrap the text in another client language, where the whole query may end up commented). `/* */` will comment out all the content between the opening `/*` and the **FIRST** closing `*/` after it. The line with `/**** object: ... ***/` has a closing tag `*/`. The `*/` at the end of the script throws an error because it tries to terminate a comment which is not open. If you want properly commented code, I suggest using `/* */` on each functional block. Finally, you can also just remove the automatic `/**** object: ... ***/` line and the code will run perfectly (read: nothing will happen because it's commented). Upvotes: 2
2018/03/21
634
2,760
<issue_start>username_0: I'm using token-based authentication in my WebAPI application. For each login, OAuth generates an access token for the user. If a user logs in more than once, they may end up owning several valid tokens. Is there a limitation on this process? Here is my Startup class:

```
public void Configuration(IAppBuilder app)
{
    HttpConfiguration config = new HttpConfiguration();
    ConfigureOAuth(app);
    WebApiConfig.Register(config);
    app.UseCors(Microsoft.Owin.Cors.CorsOptions.AllowAll);
    app.UseWebApi(config);
    //Rest of code is here;
}

public void ConfigureOAuth(IAppBuilder app)
{
    OAuthAuthorizationServerOptions OAuthServerOptions = new OAuthAuthorizationServerOptions()
    {
        AllowInsecureHttp = true,
        TokenEndpointPath = new PathString("/token"),
        AccessTokenExpireTimeSpan = TimeSpan.FromDays(1),
        Provider = new SimpleAuthorizationServerProvider()
    };

    // Token Generation
    app.UseOAuthAuthorizationServer(OAuthServerOptions);
    app.UseOAuthBearerAuthentication(new OAuthBearerAuthenticationOptions());
}
```

and here is the "GrantResourceOwnerCredentials" method:

```
public override async Task GrantResourceOwnerCredentials(OAuthGrantResourceOwnerCredentialsContext context)
{
    context.OwinContext.Response.Headers.Add("Access-Control-Allow-Origin", new[] { "*" });

    using (AuthRepository _repo = new AuthRepository())
    {
        IdentityUser user = await _repo.FindUser(context.UserName, context.Password);
        if (user == null)
        {
            context.SetError("invalid_grant", "The user name or password is incorrect.");
            return;
        }
    }

    var identity = new ClaimsIdentity(context.Options.AuthenticationType);
    identity.AddClaim(new Claim("sub", context.UserName));
    identity.AddClaim(new Claim("role", "user"));

    context.Validated(identity);
}
```
<issue_comment>username_1: I am afraid the token is valid until it expires, and it will contain all the info related to the user.
So to do what you want, you have to create your own layer to validate whether the user has a token or not, for example by creating a mapping table and then a custom filter to reject the request if the user is not using the last token generated for them. Upvotes: 0 <issue_comment>username_2: One of the main limitations of an OAuth token is its expiry. So if you generate a long-living token, it is valid for a long time. Some common approaches to handle such a scenario are: * issue short-living tokens with an additional refresh token * store tokens in the database and, every time a new token is generated, set the old token's status to expired. Then you can write your custom authorize attribute to check whether the token is expired or not. Upvotes: 2 [selected_answer]
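The second approach (store the token, expire older ones) can be sketched as a last-token-wins store. This is an illustration only; the class name, the single-token policy, and the in-memory dict stand in for the mapping table and custom authorize attribute described above:

```python
class TokenStore:
    """Keeps only the most recent token per user; older tokens are rejected.
    In a real WebAPI app this would be a database table consulted by a
    custom authorize attribute, not an in-memory dict."""

    def __init__(self):
        self._latest = {}  # user_id -> most recently issued token

    def issue(self, user_id, token):
        # Storing the new token implicitly invalidates any older one.
        self._latest[user_id] = token

    def is_valid(self, user_id, token):
        return self._latest.get(user_id) == token

store = TokenStore()
store.issue("alice", "token-1")
store.issue("alice", "token-2")            # second login supersedes the first
print(store.is_valid("alice", "token-1"))  # False
print(store.is_valid("alice", "token-2"))  # True
```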
2018/03/21
296
1,358
<issue_start>username_0: I'm working on a React Native project. I generated a keystore and released the signed APK to the Play Store a few days ago. While working, I mistakenly copied another keystore into the `android/app` directory in the project, and since then I could not return to the older version anymore. Checking the version history of the keystore also states it was created today rather than modified today. How can I get the keystore back when I don't have a Time Machine setup?<issue_comment>username_1: I am afraid the token is valid until it expires, and it will contain all the info related to the user. So to do what you want, you have to create your own layer to validate whether the user has a token or not, for example by creating a mapping table and then a custom filter to reject the request if the user is not using the last token generated for them. Upvotes: 0 <issue_comment>username_2: One of the main limitations of an OAuth token is its expiry. So if you generate a long-living token, it is valid for a long time. Some common approaches to handle such a scenario are: * issue short-living tokens with an additional refresh token * store tokens in the database and, every time a new token is generated, set the old token's status to expired. Then you can write your custom authorize attribute to check whether the token is expired or not. Upvotes: 2 [selected_answer]
2018/03/21
754
2,412
<issue_start>username_0: Assume I have `users`, which is a `List`, and the `User` class has `Type` and `Age` properties. I want to filter that list of users by some condition and do something per item, based on a condition. Say this list has 10 users, and some of them are of Type `"complex"` and some of them `"simple"`. I want my final list to be the users with age above 30, and if the Type of a user is "complex", to do something with it and then add it to the final list. It should be something like:

```
var users = users.Where(u => u.Age > 30 and if u.Type = "complex" ? u = doSomething(u)).ToList();
```

and if `doSomething(u)` returns null, skip adding the current "u" to the list. It is half correct code and half pseudocode, because I don't know how to fit the `if u.Type = "complex" ? u = doSomething(u)` part into a LINQ expression. How can it be done? EDIT: And how to do it if I want in the final list users with Age > 30 OR users with Type = "complex" (with doSomething() applied to complex users)?<issue_comment>username_1:

```
var users = users.Where(u => u.Age > 30) // common filter
    .Select(u => u.Type == "complex" ? doSomething(u) : u) // select user or transformed
    .Where(u => u != null) // not null only
    .ToList();
```

Upvotes: 2 <issue_comment>username_2: I would do something as follows:

```
users.Where(u => u.Age > 30).Select((u) =>
{
    if (u.Type == "complex")
    {
        // Do something
    }
    return u;
}).ToList();
```

Select all users with age > 30, then alter the result depending on the Type property. EDIT: And for your edit question, just add the condition to the Where:

```
[...].Where(u => u.Age > 30 || u.Type == "complex")[...]
```

Upvotes: 0 <issue_comment>username_3: Well, I came up with some sick variant...
``` public bool DoSomething() { // Do anything return true; } var v = users.Where(x => x!= null && x.Age > 30 && x.Type == "Complex" && x.DoSomething() == true).ToList(); ``` For your edit: ``` var v = users.Where(x => x!= null && (x.Age > 30 || x.Type == "Complex") && x.DoSomething() == true).ToList(); ``` Upvotes: 0 <issue_comment>username_4: AND QUERY ``` var andList = users.Where(u => u.Age > 30 && (u.Type == "complex" && doSomething(u) != null)).ToList(); ``` OR QUERY ``` var orList = users.Where(u => u.Age > 30 || (u.Type == "complex" && doSomething(u) != null)).ToList(); ``` Upvotes: 0
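The filter, transform, and drop-nulls pipeline from the accepted answer translates directly to other languages. A Python sketch of the same shape (the `do_something` stub and the sample data are made up for illustration):

```python
def do_something(user):
    # Stand-in for the transformation; returning None drops the user,
    # mirroring the `Where(u => u != null)` step in the C# answer.
    if user["age"] > 60:
        return None
    return {**user, "processed": True}

users = [
    {"type": "complex", "age": 35},
    {"type": "simple", "age": 40},
    {"type": "complex", "age": 65},
    {"type": "simple", "age": 20},
]

transformed = (do_something(u) if u["type"] == "complex" else u
               for u in users if u["age"] > 30)          # filter + transform
result = [u for u in transformed if u is not None]       # drop the Nones

print(result)
# [{'type': 'complex', 'age': 35, 'processed': True}, {'type': 'simple', 'age': 40}]
```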
2018/03/21
408
1,295
<issue_start>username_0: Hi, I would like to add a page navigation on the side of my R Markdown file. It would be nice if it could look like the sidebar [on this page](https://rmarkdown.rstudio.com/rmarkdown_websites.html#site_navigation).

```
---
title: "My Title"
author: "My Name"
date: "`r format(Sys.time(), '%d %B, %Y')`"
output:
  html_document:
    theme: united
    highlight: tango
    code_folding: hide
    df_print: paged
    toc: true
    toc_depth: 2
---

# Section 1
abc

## Section 1.1
abcabc

# Section 2
abcabcabc
```

Can anyone help?<issue_comment>username_1: I've tried this and it works pretty well:

```
---
title: "TEST"
output:
  html_document:
  toc: true
  toc_float: true
  toc_collapsed: true
  toc_depth: 3
number_sections: true
theme: lumen
---
```

For more info please check the answer at this link: [RMarkdown: Floating TOC and TOC at beginning](https://stackoverflow.com/questions/48261379/rmarkdown-floating-toc-and-toc-at-beginning) Upvotes: 3 <issue_comment>username_2: Adding one more answer, which fixes the incorrect indentation in the answer given by @username_1:

```yaml
---
title: "TEST"
output:
  html_document:
    toc: true
    toc_float: true
    toc_collapsed: true
    toc_depth: 3
    number_sections: true
    theme: lumen
---
```

Upvotes: 2
2018/03/21
1,345
4,545
<issue_start>username_0: The AddDeviceAsync method of the registry throws an ArgumentNull exception. The parameter, the registryManager, and the output parameter: nothing is null, but it still throws the exception.

Exception: {"Message":"ErrorCode:ArgumentNull;BadRequest","ExceptionMessage":"Tracking ID:adf7e83e7db046969086702500cbe73b-G:2-TimeStamp:03/21/2018 14:09:50"} Method: ProjectXXX.NTB.FN.DeviceRegistration.IdentityCreationService.CreateDeviceIdentity() Description: Unexpected exception

Is this an issue related to the Microsoft.Azure.Devices library?

Code

```
RegistryManager registryManager = null;

void Initialize()
{
    registryManager = RegistryManager.CreateFromConnectionString(AppSetting.IoTHubConnectionString);
}

public async Task CreateDeviceIdentity(string deviceId, string deviceKey)
{
    Device device = new Device(deviceId);
    Device newdevice = new Device();
    Exception exception = null;
    bool bExceptionHasOccured = false;
    string token = string.Empty;

    string primarySasToken = Base64Encode(deviceKey);
    string secondarySasToken = Base64Encode($"{deviceId}-{deviceId}");

    device.Authentication = new AuthenticationMechanism
    {
        SymmetricKey = new SymmetricKey
        {
            PrimaryKey = primarySasToken,
            SecondaryKey = secondarySasToken
        }
    };

    try
    {
        newdevice = await registryManager.AddDeviceAsync(device);
        break;
    }
    catch (DeviceAlreadyExistsException)
    {
        token = GetDeviceToken(deviceId);
        break;
    }
    catch (IotHubThrottledException e)
    {
    }
    catch (SocketException e)
    {
    }
    catch (Exception e)
    {
    }

    token = newdevice.Authentication.SymmetricKey.PrimaryKey;
    return token;
}
```

Stack Trace System.ArgumentException: at Microsoft.Azure.Devices.HttpClientHelper+d\_\_36.MoveNext (Microsoft.Azure.Devices, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35) at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess (mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089) at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification (mscorlib, Version=4.0.0.0, 
Culture=neutral, PublicKeyToken=b77a5c561934e089) at Microsoft.Azure.Devices.HttpClientHelper+d\_\_12`1.MoveNext (Microsoft.Azure.Devices, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35) at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess (mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089) at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification (mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089) at System.Runtime.CompilerServices.TaskAwaiter`1.GetResult (mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089) at ProjectXXX.NTB.FN.DeviceRegistration.IdentityCreationService+d\_\_6.MoveNext (ProjectXXX.NTB.FN.DeviceRegistration, Version=1.0.0.0, Culture=neutral, PublicKeyToken=nullProjectXXX.NTB.FN.DeviceRegistration, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null: D:\ProjectXXX\ProjectXXX.NTB\ProjectXXX.NTB\DeviceRegistrationLib\ProjectXXX.NTB.FN.DeviceRegistration\IdentityCreationService.csProjectXXX.NTB.FN.DeviceRegistration, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null: 122)<issue_comment>username_1: A valid SymmetricKey must be a Base64-encoded string between 16 and 64 bytes in length. I have no idea what kind of value your `deviceKey` is. I use a GUID and test with the following code piece, and the device can be successfully created. 
``` public static async Task CreateDeviceIdentity(string deviceId) { Device device = new Device(deviceId); Device newdevice = new Device(); string token = string.Empty; var primaryKey = Guid.NewGuid(); var secondaryKey = Guid.NewGuid(); byte[] bytes = Encoding.UTF8.GetBytes(primaryKey.ToString()); string base64PrimaryKey = Convert.ToBase64String(bytes); bytes = Encoding.UTF8.GetBytes(secondaryKey.ToString()); string base64SecondaryKey = Convert.ToBase64String(bytes); try { device.Authentication = new AuthenticationMechanism { SymmetricKey = new SymmetricKey { PrimaryKey = base64PrimaryKey, SecondaryKey = base64SecondaryKey } }; newdevice = await registryManager.AddDeviceAsync(device); } catch (Exception ex) { } token = newdevice.Authentication.SymmetricKey.PrimaryKey; return token; } ``` Upvotes: 1 <issue_comment>username_2: It worked when I downgraded Microsoft.Azure.Devices from 1.5.1 to 1.3.2 Upvotes: 3 [selected_answer]<issue_comment>username_3: by downgrading the Microsoft.Azure.Devices from 1.5.1 to 1.3.2 it should work Upvotes: 1
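As a sanity check on the accepted explanation, the key material can be validated before calling `AddDeviceAsync`. A Python sketch (the 16-64 byte bounds are taken from the answer above; the helper name is made up):

```python
import base64
import uuid

def make_symmetric_key(raw=None):
    """Base64-encode key material after verifying it is 16-64 bytes long.
    The length is checked on the raw bytes, before encoding."""
    raw = raw if raw is not None else uuid.uuid4().bytes  # 16 random bytes
    if not (16 <= len(raw) <= 64):
        raise ValueError(f"key material is {len(raw)} bytes; must be 16-64")
    return base64.b64encode(raw).decode("ascii")

key = make_symmetric_key(b"0123456789abcdef")  # exactly 16 bytes
print(key)  # MDEyMzQ1Njc4OWFiY2RlZg==
```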
2018/03/21
980
3,862
<issue_start>username_0: I'm having trouble understanding what runs on the main thread during an async/await operation and would be grateful for some answers. Let's say I have a button that is supposed to log the user in. It is supposed to block all other user input while the login process transpires, show a progress view, and then display the result when it comes in. Here is the method that performs the login:

```
button_clicked(object sender, EventArgs e)
{
    do_login(); //I do not await the result
    do_some_other_stuff(); //this doesn't actually exist I just put it here to ask my questions
}

async Task do_login()
{
    string user_name = txtUser.Text;
    string password = <PASSWORD>;

    show_progress(true); //this is not an async method;

    string error = await _login.do_login(user_name, password); //this is an async method that can take up to 20 seconds to complete;

    show_progress(false);

    if (error != null)
    {
        show_error(error);
    }
    else
    {
        show_next_screen();
    }
}
```

I have three questions on the above example: a) What will run on the main thread? If I understand it correctly, only _login.do_login will run on a separate thread; everything else will be on the main thread. Is this correct? b) In what order will the methods be executed? Again, if I understand it correctly, it will be: * do_login() * show_progress(true); * _login.do_login starts; * do_some_other_stuff(); * _login.do_login finishes; * show_progress(false); and it will continue from there. Is this correct? If not, how can I achieve such behaviour? c) If my code above is correct, then why do I keep receiving a warning that do_login() is not awaited? I do not wish to await it; I just want it to run what it can and return when it wants. Should I ignore that warning?<issue_comment>username_1: The answer to your main question is: *it depends*. The `_login.do_login` method will *likely* be put onto its own thread, but it actually depends on the .NET task scheduler. 
In WPF and ASP.NET it will be scheduled onto the thread pool if it doesn't immediately return a completed task. The important part is that you know it will not block execution of the calling (in your case, the main) thread. Your understanding of the method flow is correct since you don't await `do_login`. As far as the warning goes, you can mark `do_login` as `async void` to avoid it, though generally you only do that for event handlers, which can then await a `Task`-returning method. If you do go the `async void` route, make sure to put a `try/catch` in, as such methods will throw all the way up to the root handler and can cause your app to crash. Upvotes: 2 [selected_answer]<issue_comment>username_2: Technically, depending on the implementation of `do_login`, everything could run on the main thread. In this case I assume you're contacting a web server, so that part won't, but this is not always true. An asynchronous operation does not necessarily execute on another thread. An operation is asynchronous when: * it doesn't block the calling thread. Usually, UI threads run an 'event loop', so an asynchronous task could simply put a new piece of work into the event queue to be executed whenever the scheduler determines, but on the same thread. In this case you don't use two threads, but still, you don't have to wait for the task to complete and you don't know when it will finish. To be precise, all the code in your post will run on the main thread. Only the part of `do_login` that manages the connection with the server, waiting for and retrieving data, will execute asynchronously. You're mostly right about the sequence, with a few adjustments: 1. do\_login() (until the await) 2. \_login.do\_login() starts executing 3. do\_some\_other\_stuff() 4. ... 5. \_login.do\_login finishes 6. show\_progress(false) Upvotes: 2
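The ordering asked about in question b) can be reproduced outside .NET with any event loop. A Python `asyncio` sketch that mirrors the names in the question (an illustration of the interleaving only, not of the .NET scheduler; `asyncio.sleep(0)` stands in for the 20-second network call):

```python
import asyncio

order = []

async def _login_do_login():
    order.append("_login.do_login starts")
    await asyncio.sleep(0)  # stand-in for the slow network call
    order.append("_login.do_login finishes")

async def do_login():
    order.append("show_progress(true)")
    await _login_do_login()
    order.append("show_progress(false)")

async def button_clicked():
    task = asyncio.create_task(do_login())  # fire without awaiting
    await asyncio.sleep(0)  # let do_login run up to its first suspension
    order.append("do_some_other_stuff()")
    await task  # only so the demo can observe the full sequence

asyncio.run(button_clicked())
print(order)
# ['show_progress(true)', '_login.do_login starts', 'do_some_other_stuff()',
#  '_login.do_login finishes', 'show_progress(false)']
```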
2018/03/21
678
2,095
<issue_start>username_0: I want to detect the current day and, after that, add "active" to a class to highlight the current day.

```
var date = new Date();
var day = date.getDay();

if ( day == 'Montag' ){
    $(".monday").addClass("active");
}
if ( day == 'Dienstag' ){
    $(".tuesday").addClass("active");
}
if ( day == 'Mittwoch' ){
    $(".wednesday").addClass("active");
}
if ( day == 'Donnerstag' ){
    $(".thursday").addClass("active");
}
if ( day == 'Freitag' ){
    $(".friday").addClass("active");
}
if ( day == 'Samstag' ){
    $(".saturday").addClass("active");
}
```

I know, switch would be better. Thx<issue_comment>username_1:

```
if ( day == 1 ){
    $(".monday").addClass("active");
}
...
```

or

```
switch (day) {
    case 1:
        $(".monday").addClass("active");
        break;
    ...
}
```

or

```
var dayClasses = ["sunday", "monday", ...];
$("." + dayClasses[day]).addClass("active");
```

Upvotes: 0 <issue_comment>username_2: You could do something like this to have more readable code.

```
var daysOfWeek = ['sunday', 'monday', 'tuesday', 'wednesday', 'thursday', 'friday', 'saturday'];
var date = new Date();
$('.' + daysOfWeek[date.getDay()]).addClass("active");
```

Remember that [`getDay()`](https://www.w3schools.com/jsref/jsref_getday.asp) returns an integer between 0 and 6 included (and 0 corresponds to Sunday). Upvotes: 1 <issue_comment>username_3: `getDay()` returns a Number from 0 to 6. [Reference](https://www.w3schools.com/jsref/jsref_getday.asp) To make your code even better, you can store your class names in an array, with index 0 being Sunday to match `getDay()`, and use the value returned by `getDay()` directly in your selector:

```js
var days = ["sunday", "monday", "tuesday", "wednesday", "thursday", "friday", "saturday"];
var date = new Date();
var day = date.getDay();

$("." + days[day]).addClass("active");
```

```css
span {
  color: blue;
}

.active {
  color: red;
}
```

```html
Montag Dienstag Mittwoch Donnerstag Freitag Samstag Sonntag
```

Upvotes: 2 [selected_answer]
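The indexing pitfall generalizes across languages. Here is the same lookup idea in Python, where `datetime.date.weekday()` counts from Monday = 0 rather than Sunday = 0 as JavaScript's `getDay()` does (the class names are the ones used in the answers):

```python
from datetime import date

# Index 0 is Sunday, matching JavaScript's Date.prototype.getDay()
JS_DAY_CLASSES = ["sunday", "monday", "tuesday", "wednesday",
                  "thursday", "friday", "saturday"]

def css_class_for(d):
    """Map a date to the CSS class the jQuery snippet would activate.
    Python's weekday() is Monday=0..Sunday=6, so convert to the
    JavaScript convention (Sunday=0) before indexing."""
    js_day = (d.weekday() + 1) % 7
    return JS_DAY_CLASSES[js_day]

print(css_class_for(date(2018, 3, 21)))  # -> wednesday
```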
2018/03/21
352
1,303
<issue_start>username_0: My model looks like:

```
protected $appends = array('status');

public function getStatusAttribute()
{
    if ($this->someattribute == 1) {
        $status = 'Active';
    } elseif ($this->someattribute == 2) {
        $status = 'Canceled';
        ...
    } else {
        $status = 'Some other status';
    }

    return $status;
}
```

And I want to order a collection of these models by this `status` attribute. Is it possible? `Model::where(...)->orderBy(???)` P.S. I need exactly an `orderBy`, not a `sortBy`, solution.<issue_comment>username_1: Either you can use

```
Model::where(...)->orderBy('someattribute')->get();
```

in which case you will only get the integer value in place of `someattribute`, or you can use a `DB` query as follows:

```
DB::select(DB::raw('(CASE WHEN someattribute = 1 THEN "Active" WHEN someattribute = 2 THEN "Canceled" ELSE "Some other status" END) AS status'))
    ->orderBy('status', 'desc');
```

Upvotes: -1 <issue_comment>username_2: There is no way to make Eloquent do this, because it only creates a SQL query. It does not have the ability to translate the PHP logic in your code into a SQL query. A workaround is to loop over the results afterwards and manually check the appended fields. Collections may be useful here. Upvotes: 0
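The workaround in the last answer (compute the appended attribute in application code, then sort the already-loaded results) has this shape; a Python sketch with an invented status mapping, purely to show the compute-then-sort structure:

```python
STATUS_NAMES = {1: "Active", 2: "Canceled"}  # illustrative mapping

def status_of(row):
    # Equivalent of the accessor: derive the status from the raw column.
    return STATUS_NAMES.get(row["someattribute"], "Some other status")

rows = [{"id": 1, "someattribute": 2},
        {"id": 2, "someattribute": 1},
        {"id": 3, "someattribute": 9}]

# Compute the derived attribute in application code, then sort by it;
# the database never sees the accessor logic, mirroring the Eloquent case.
ordered = sorted(rows, key=status_of)
print([status_of(r) for r in ordered])  # ['Active', 'Canceled', 'Some other status']
```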
2018/03/21
829
2,619
<issue_start>username_0: I'm trying to make a page where I've got 4 background pictures, one behind another, and when I hover the mouse over them, the first picture goes to the top left, the second to the bottom left, the third to the top right, and the last one to the bottom right. I've managed to work this out with the first one; it is exactly what I need. But the other pictures do not even appear behind the first one: when I hover the mouse, it is just blank space. I'm using CSS because it is all I know at the moment; I have some notions of JavaScript, but I do not know if it can help me with this issue. I put the code here for better understanding. In the end, I just need to repeat this code 4 times, just changing the position, but I can't make it work as simply as I thought.

```css
body, html {
  height: 100%;
  margin: 0;
}

.bg {
  /* The image used */
  background-image: url("i3.jpg"), url("i4.jpg"), url("class.jpg"), url("prod.jpg");
  /* Full height */
  height: 100%;
  /* Center and scale the image nicely */
  background-position: center, center, center, center;
  background-repeat: no-repeat, no-repeat, no-repeat, no-repeat;
  background-size: cover, cover, cover, cover;
}

.bg {
  /* This makes the image black and white */
  -moz-filter: grayscale(100%);
  -ms-filter: grayscale(100%);
  filter: grayscale(100%);
  filter: gray; /* IE 6-9 */
  transition: filter .5s ease-in-out;
}

.bg:hover {
  /* This enables the hover */
  -webkit-filter: none;
  -moz-filter: none;
  -ms-filter: none;
  filter: none;
}

.bg {
  /* This is for hover, part 1 */
  transition: 1s ease;
}

.bg:hover {
  /* This is for hover, part 2 */
  -webkit-transform: scale(0.8);
  -ms-transform: scale(0.8);
  transform: scale(0.5);
  transform-origin: left top;
  transition: 1s ease;
}
```
<issue_comment>username_1: I would break up the images into four separate images (with different class names) and use the z-index property. 
<https://www.w3schools.com/cssref/pr_pos_z-index.asp> Hope that helps Upvotes: 0 <issue_comment>username_2: You will need to change the `background-position` of all the images on hover... ```css body, html { height: 100%; margin: 0; } .bg { background-image: url("http://via.placeholder.com/80x80"), url("http://via.placeholder.com/80x80"), url("http://via.placeholder.com/80x80"), url("http://via.placeholder.com/80x80"); height: 100%; background-position: center; background-repeat: no-repeat; transition: 1s ease; } .bg:hover { background-position: top left, top right, bottom left, bottom right; } ``` Upvotes: 2