id | text | original_text | subdomain | metadata
stringlengths 40 | stringlengths 29 to 2.03k | stringlengths 3 to 154k | stringclasses 20 values | dict
---|---|---|---|---|
fc447d8ae3b78d4cf17fb16815e918f31ae8f286
|
Stackoverflow Stackexchange
Q: In GHCi, what is the difference between ':module' and 'import'? In GHCi, there appear to be two ways to include installed modules:
Prelude> :module Database.HDBC
Prelude Database.HDBC>
and
Prelude> import Database.HDBC
Prelude Database.HDBC>
Is there any difference between these? If not, why the duplicate commands?
I've always used import, and now am noticing :module used in Real World Haskell's databases chapter.
A: From the GHCi docs:
The :module command provides a way to do two things that cannot be
done with ordinary import declarations:
* :module supports the * modifier on modules, which opens the full top-level scope of a module, rather than just its exports.
* Imports can be removed from the context, using the syntax :module -M. The import syntax is cumulative (as in a Haskell module), so this is the only way to subtract from the scope.
|
Q: In GHCi, what is the difference between ':module' and 'import'? In GHCi, there appear to be two ways to include installed modules:
Prelude> :module Database.HDBC
Prelude Database.HDBC>
and
Prelude> import Database.HDBC
Prelude Database.HDBC>
Is there any difference between these? If not, why the duplicate commands?
I've always used import, and now am noticing :module used in Real World Haskell's databases chapter.
A: From the GHCi docs:
The :module command provides a way to do two things that cannot be
done with ordinary import declarations:
* :module supports the * modifier on modules, which opens the full top-level scope of a module, rather than just its exports.
* Imports can be removed from the context, using the syntax :module -M. The import syntax is cumulative (as in a Haskell module), so this is the only way to subtract from the scope.
|
stackoverflow
|
{
"language": "en",
"length": 140,
"provenance": "stackexchange_0000F.jsonl.gz:861463",
"question_score": "12",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44531586"
}
|
711be75f7b7eb887ca257b4acabd5611c11562d3
|
Stackoverflow Stackexchange
Q: Laravel 5.4 upload original file name and extension How can I upload the original filename (file.jpg) to the database when submitting a file via a form? Controller:
public function addCv(Request $request){
    $cv = Cv::create($request->all());
    $file = $request->file_name;
    $filename = $file->getClientOriginalName();
    Storage::putFileAs('public/uploads', $file, $filename);
    return redirect()->back();
}
At the moment, this function stores a path like C:\xampp\tmp\php18DD.tmp in the database.
Instead of that I want just the filename and extension (file.extension).
Storage is working fine - it stores the file with its original name.
A: You could try
$file = $request->image->getClientOriginalName(); //Get Image Name
$extension = $request->image->getClientOriginalExtension(); //Get Image Extension
$fileName = $file.'.'.$extension; //Concatenate both to get FileName (eg: file.jpg)
|
Q: Laravel 5.4 upload original file name and extension How can I upload the original filename (file.jpg) to the database when submitting a file via a form? Controller:
public function addCv(Request $request){
    $cv = Cv::create($request->all());
    $file = $request->file_name;
    $filename = $file->getClientOriginalName();
    Storage::putFileAs('public/uploads', $file, $filename);
    return redirect()->back();
}
At the moment, this function stores a path like C:\xampp\tmp\php18DD.tmp in the database.
Instead of that I want just the filename and extension (file.extension).
Storage is working fine - it stores the file with its original name.
A: You could try
$file = $request->image->getClientOriginalName(); //Get Image Name
$extension = $request->image->getClientOriginalExtension(); //Get Image Extension
$fileName = $file.'.'.$extension; //Concatenate both to get FileName (eg: file.jpg)
A: I would suggest adding enctype="multipart/form-data" in the form tag in the view, from where you are uploading the file:
<form enctype="multipart/form-data">
A: You can use like below,
$this->getRequest()->files['name_of_file_field_in_post']->getClientOriginalName();
Reference: Get Uploaded File's Original Name
|
stackoverflow
|
{
"language": "en",
"length": 134,
"provenance": "stackexchange_0000F.jsonl.gz:861481",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44531652"
}
|
3595bbe4f0e14ddd6c74b205e6cffd89aa9fdf7a
|
Stackoverflow Stackexchange
Q: javascript - How to check if array has at least one negative value? I created an array of integers and wanted to know if it had one or more negative values in it.
I do not want to create a for() loop and check if each element in the array is positive or negative because I only want to return once (ex: I don't want my function to "return false;" a million times).
One option I considered was multiplying each value in the array by the absolute value of its reciprocal, so I get an array of 1s or -1s (or undefined if value is 0) and then I could sum all of the values in this second array to see if it equals the length of the array.
However, the problem with this method is it does not account for 1/0, and also it is tedious. I want to know if there is a faster way to check if an array contains at least one negative value.
--from a beginner JavaScript programmer
A: Why not just find the min value of the array and check whether it is negative?
See JavaScript: min & max Array values?
Math.min(...array)
|
Q: javascript - How to check if array has at least one negative value? I created an array of integers and wanted to know if it had one or more negative values in it.
I do not want to create a for() loop and check if each element in the array is positive or negative because I only want to return once (ex: I don't want my function to "return false;" a million times).
One option I considered was multiplying each value in the array by the absolute value of its reciprocal, so I get an array of 1s or -1s (or undefined if value is 0) and then I could sum all of the values in this second array to see if it equals the length of the array.
However, the problem with this method is it does not account for 1/0, and also it is tedious. I want to know if there is a faster way to check if an array contains at least one negative value.
--from a beginner JavaScript programmer
A: Why not just find the min value of the array and check whether it is negative?
See JavaScript: min & max Array values?
Math.min(...array)
A: You could leverage Array.prototype.some which will return true or false if an item in the array matches the given condition. It'll also stop checking remaining values if the condition matches an element:
let values = [1, 4, 6, -10, -83];
let hasNegative = values.some(v => v < 0);
A: What's the issue with the for loop needing to return false all the time?
function containsNegative(myArray) {
    for (var i = 0; i < myArray.length; i++) {
        if (myArray[i] < 0) {
            return true;
        }
    }
    return false;
}
and if you wanted to get the amount of negative numbers
function getNegative(myArray) {
    var count = 0;
    for (var i = 0; i < myArray.length; i++) {
        if (myArray[i] < 0) {
            count++;
        }
    }
    return count;
}
A: This will only return 1 value and break as soon as a negative is found.
function doesArrayContainNegative(array) {
    // if negative is found return true (breaking loop)
    for (var arr of array) {
        if (arr < 0) return true;
    }
    // if no elements are negative return false
    return false;
}
var array1 = [9, -3, 5, 8]
console.log(doesArrayContainNegative(array1))
|
stackoverflow
|
{
"language": "en",
"length": 365,
"provenance": "stackexchange_0000F.jsonl.gz:861490",
"question_score": "6",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44531677"
}
|
a42d251a8bd891b09f4e43250fda6673d017a2d7
|
Stackoverflow Stackexchange
Q: How can I do less than, greater than in JSON Postgres fields? If I have some json:
id = 1, json = {'key':95}
id = 2, json = {'key':90}
id = 3, json = {'key':50}
Is there a way I can use Postgres fields to query for key >= 90?
A: If you use postgres version >= 9.3, then you can:
select * from t
where (json->>'key')::numeric >= 90
|
Q: How can I do less than, greater than in JSON Postgres fields? If I have some json:
id = 1, json = {'key':95}
id = 2, json = {'key':90}
id = 3, json = {'key':50}
Is there a way I can use Postgres fields to query for key >= 90?
A: If you use postgres version >= 9.3, then you can:
select * from t
where (json->>'key')::numeric >= 90
A: Use the operator ->> (Get JSON object field as text), e.g.
with my_table(id, json) as (
values
(1, '{"key":95}'::json),
(2, '{"key":90}'),
(3, '{"key":50}')
)
select *
from my_table
where (json->>'key')::int >= 90;
id | json
----+------------
1 | {"key":95}
2 | {"key":90}
(2 rows)
|
stackoverflow
|
{
"language": "en",
"length": 117,
"provenance": "stackexchange_0000F.jsonl.gz:861496",
"question_score": "12",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44531689"
}
|
ac7a2b7e725ea745b4a066912c1925aed6f124d4
|
Stackoverflow Stackexchange
Q: Pandas: Selecting rows for which groupby.sum() satisfies condition In pandas I have a dataframe of the form:
>>> import pandas as pd
>>> df = pd.DataFrame({'ID':[51,51,51,24,24,24,31], 'x':[0,1,0,0,1,1,0]})
>>> df
ID x
51 0
51 1
51 0
24 0
24 1
24 1
31 0
For every 'ID' the value of 'x' is recorded several times; it is either 0 or 1. I want to select those rows from df that contain an 'ID' for which 'x' is 1 at least twice.
For every 'ID' I manage to count the number of times 'x' is 1, by
>>> df.groupby('ID')['x'].sum()
ID
51 1
24 2
31 0
But I don't know how to proceed from here. I would like the following output:
ID x
24 0
24 1
24 1
A: Use groupby and filter
df.groupby('ID').filter(lambda s: s.x.sum()>=2)
Output:
ID x
3 24 0
4 24 1
5 24 1
|
Q: Pandas: Selecting rows for which groupby.sum() satisfies condition In pandas I have a dataframe of the form:
>>> import pandas as pd
>>> df = pd.DataFrame({'ID':[51,51,51,24,24,24,31], 'x':[0,1,0,0,1,1,0]})
>>> df
ID x
51 0
51 1
51 0
24 0
24 1
24 1
31 0
For every 'ID' the value of 'x' is recorded several times; it is either 0 or 1. I want to select those rows from df that contain an 'ID' for which 'x' is 1 at least twice.
For every 'ID' I manage to count the number of times 'x' is 1, by
>>> df.groupby('ID')['x'].sum()
ID
51 1
24 2
31 0
But I don't know how to proceed from here. I would like the following output:
ID x
24 0
24 1
24 1
A: Use groupby and filter
df.groupby('ID').filter(lambda s: s.x.sum()>=2)
Output:
ID x
3 24 0
4 24 1
5 24 1
A: df = pd.DataFrame({'ID':[51,51,51,24,24,24,31], 'x':[0,1,0,0,1,1,0]})
df.loc[df.groupby(['ID'])['x'].transform(func=sum)>=2,:]
out:
ID x
3 24 0
4 24 1
5 24 1
A: Using np.bincount and pd.factorize
An alternative, more advanced technique for better performance:
f, u = df.ID.factorize()
df[np.bincount(f, df.x.values)[f] >= 2]
ID x
3 24 0
4 24 1
5 24 1
In obnoxious one-liner form
df[(lambda f, w: np.bincount(f, w)[f] >= 2)(df.ID.factorize()[0], df.x.values)]
ID x
3 24 0
4 24 1
5 24 1
np.bincount and np.unique
I could've used np.unique with the return_inverse parameter to accomplish the same exact thing. But, np.unique will sort the array and will change the time complexity of the solution.
u, f = np.unique(df.ID.values, return_inverse=True)
df[np.bincount(f, df.x.values)[f] >= 2]
One-liner
df[(lambda f, w: np.bincount(f, w)[f] >= 2)(np.unique(df.ID.values, return_inverse=True)[1], df.x.values)]
Timing
%timeit df[(lambda f, w: np.bincount(f, w)[f] >= 2)(df.ID.factorize()[0], df.x.values)]
%timeit df[(lambda f, w: np.bincount(f, w)[f] >= 2)(np.unique(df.ID.values, return_inverse=True)[1], df.x.values)]
%timeit df.groupby('ID').filter(lambda s: s.x.sum()>=2)
%timeit df.loc[df.groupby(['ID'])['x'].transform(func=sum)>=2]
%timeit df.loc[df.groupby(['ID'])['x'].transform('sum')>=2]
small data
1000 loops, best of 3: 302 µs per loop
1000 loops, best of 3: 241 µs per loop
1000 loops, best of 3: 1.52 ms per loop
1000 loops, best of 3: 1.2 ms per loop
1000 loops, best of 3: 1.21 ms per loop
large data
np.random.seed([3,1415])
df = pd.DataFrame(dict(
ID=np.random.randint(100, size=10000),
x=np.random.randint(2, size=10000)
))
1000 loops, best of 3: 528 µs per loop
1000 loops, best of 3: 847 µs per loop
10 loops, best of 3: 20.9 ms per loop
1000 loops, best of 3: 1.47 ms per loop
1000 loops, best of 3: 1.55 ms per loop
larger data
np.random.seed([3,1415])
df = pd.DataFrame(dict(
ID=np.random.randint(100, size=100000),
x=np.random.randint(2, size=100000)
))
1000 loops, best of 3: 2.01 ms per loop
100 loops, best of 3: 6.44 ms per loop
10 loops, best of 3: 29.4 ms per loop
100 loops, best of 3: 3.84 ms per loop
100 loops, best of 3: 3.74 ms per loop
|
stackoverflow
|
{
"language": "en",
"length": 460,
"provenance": "stackexchange_0000F.jsonl.gz:861497",
"question_score": "6",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44531696"
}
|
05be764c431782325720edbc8a917edbad33831e
|
Stackoverflow Stackexchange
Q: PyCharm: option to warn user on typing mismatches I would like PyCharm to warn me on the following python3 code:
def foo() -> str:
    return 'abc'
x: int = foo() # I want to be warned here
Is there an option I can enable to get this warning?
The motivation here is that I have functions whose return-types are not as easily deducible at first glance like in this example. I want to declare what I think the types of my variables should be, for readability, and I want PyCharm to deduce whether what I think is correct.
A: Turns out this is an open issue in PyCharm (PY-24832).
|
Q: PyCharm: option to warn user on typing mismatches I would like PyCharm to warn me on the following python3 code:
def foo() -> str:
    return 'abc'
x: int = foo() # I want to be warned here
Is there an option I can enable to get this warning?
The motivation here is that I have functions whose return-types are not as easily deducible at first glance like in this example. I want to declare what I think the types of my variables should be, for readability, and I want PyCharm to deduce whether what I think is correct.
A: Turns out this is an open issue in PyCharm (PY-24832).
|
stackoverflow
|
{
"language": "en",
"length": 110,
"provenance": "stackexchange_0000F.jsonl.gz:861504",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44531717"
}
|
0ff2b6a457a0e3a293f6d45cd11be0dc334642b2
|
Stackoverflow Stackexchange
Q: trouble with erb-loader, is not working with .vue.erb files I'm working with Vue and everything is going great, but when I try to compile with ERB, the console shows me an error:
ERROR in ./app/javascript/packs/onboarding/HomeForm.vue.erb
Module parse failed: /Users/yorch/SitesRails/homie/node_modules/rails-erb-
loader/index.js??ref--3!/Users/yorch/SitesRails/homie/app/javascript/packs/onboarding/HomeForm.vue.erb
Unexpected token (1:0)
You may need an appropriate loader to handle this file type.
<template>
<div>
<form action="/" style="margin-top: 50px;" @submit.prevent="login">
error Command failed with exit code 2.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this
command.
Please, I hope you can help me!
Thanks.
<template>
<div>
<form action="/" style="margin-top: 50px;" @submit.prevent="login">
<div class="form-group">
<label for="name">Name</label>
<input type="text" class="form-control" id="name" v-model="home.name">
</div>
<div class="form-group">
<input type="submit" class="btn btn-primary" value="Login">
</div>
<label>{{greeting}}</label>
</form>
</div>
</template>
<script>
export default {
props: [
'user'
],
data() {
return {
home: {
name: ''
},
}
},
computed: {
greeting() {
return `${this.home.name} ${this.user}`
}
},
methods: {
login() {
alert('login');
}
}
}
</script>
This is the content of the file.
At the moment it is only HTML and plain JS, but with the .erb extension it does not compile.
|
Q: trouble with erb-loader, is not working with .vue.erb files I'm working with Vue and everything is going great, but when I try to compile with ERB, the console shows me an error:
ERROR in ./app/javascript/packs/onboarding/HomeForm.vue.erb
Module parse failed: /Users/yorch/SitesRails/homie/node_modules/rails-erb-
loader/index.js??ref--3!/Users/yorch/SitesRails/homie/app/javascript/packs/onboarding/HomeForm.vue.erb
Unexpected token (1:0)
You may need an appropriate loader to handle this file type.
<template>
<div>
<form action="/" style="margin-top: 50px;" @submit.prevent="login">
error Command failed with exit code 2.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this
command.
Please, I hope you can help me!
Thanks.
<template>
<div>
<form action="/" style="margin-top: 50px;" @submit.prevent="login">
<div class="form-group">
<label for="name">Name</label>
<input type="text" class="form-control" id="name" v-model="home.name">
</div>
<div class="form-group">
<input type="submit" class="btn btn-primary" value="Login">
</div>
<label>{{greeting}}</label>
</form>
</div>
</template>
<script>
export default {
props: [
'user'
],
data() {
return {
home: {
name: ''
},
}
},
computed: {
greeting() {
return `${this.home.name} ${this.user}`
}
},
methods: {
login() {
alert('login');
}
}
}
</script>
This is the content of the file.
At the moment it is only HTML and plain JS, but with the .erb extension it does not compile.
|
stackoverflow
|
{
"language": "en",
"length": 174,
"provenance": "stackexchange_0000F.jsonl.gz:861514",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44531746"
}
|
dff87cdce419662049fd15f171292629925a9c8d
|
Stackoverflow Stackexchange
Q: Selenium webdriver with Phantomjs save_screenshot doesn't work in Docker container The very same code works on my local machine, but doesn't work in a Docker container. On my local machine, it saves an image of the desired website as it's supposed to. In the Docker container, it saves a .png file with the right name, but the image is only 8kB and is blank. There is no error message. The Docker container has access to the Internet because pinging google.com from the container's bash shows that Internet connection is working. Similarly, if I try to get it to show me the html from this page, it fails in Docker but succeeds on my local system. Any idea what's wrong here?
Here's the code that invokes Selenium and phantomjs:
def init_driver():
    driver = webdriver.PhantomJS()
    driver.set_window_size(1600, 1200)
    # must give the page enough time to fully render
    driver.implicitly_wait(WAIT_TIME)
    return driver
def render_page(driver, url):
    driver.get(url)
def save_image(driver, path):
    driver.save_screenshot(path)
IMAGE_NAME = 'test_image.png'
WAIT_TIME = 10
url = 'https://www.google.com/'
driver = phantom_tools.init_driver()
render_page(driver, url)
save_image(driver, IMAGE_NAME)
|
Q: Selenium webdriver with Phantomjs save_screenshot doesn't work in Docker container The very same code works on my local machine, but doesn't work in a Docker container. On my local machine, it saves an image of the desired website as it's supposed to. In the Docker container, it saves a .png file with the right name, but the image is only 8kB and is blank. There is no error message. The Docker container has access to the Internet because pinging google.com from the container's bash shows that Internet connection is working. Similarly, if I try to get it to show me the html from this page, it fails in Docker but succeeds on my local system. Any idea what's wrong here?
Here's the code that invokes Selenium and phantomjs:
def init_driver():
    driver = webdriver.PhantomJS()
    driver.set_window_size(1600, 1200)
    # must give the page enough time to fully render
    driver.implicitly_wait(WAIT_TIME)
    return driver
def render_page(driver, url):
    driver.get(url)
def save_image(driver, path):
    driver.save_screenshot(path)
IMAGE_NAME = 'test_image.png'
WAIT_TIME = 10
url = 'https://www.google.com/'
driver = phantom_tools.init_driver()
render_page(driver, url)
save_image(driver, IMAGE_NAME)
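Not part of the original post: when PhantomJS produces blank screenshots of HTTPS pages, a commonly suggested adjustment is to relax its SSL settings via service_args. A minimal sketch, assuming PhantomJS and Selenium are installed inside the container (the function name below is hypothetical):
from selenium import webdriver

def init_driver_with_ssl_workaround(wait_time=10):
    # Hypothetical variant of init_driver(); the service_args below relax
    # PhantomJS's SSL handling, a frequent cause of blank HTTPS screenshots.
    driver = webdriver.PhantomJS(service_args=['--ignore-ssl-errors=true', '--ssl-protocol=any'])
    driver.set_window_size(1600, 1200)
    driver.implicitly_wait(wait_time)
    return driver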
|
stackoverflow
|
{
"language": "en",
"length": 173,
"provenance": "stackexchange_0000F.jsonl.gz:861521",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44531761"
}
|
da9a404899bd73eeff8177f8ba1a7c58bdf83e0b
|
Stackoverflow Stackexchange
Q: Android Studio XML Error Currently testing out Android Instant Apps using Android Studio 3.0 Canary 3 and I'm getting this error when I try to build the app and emulate it.
Any ways to fix it? (I'm making a multi-feature Instant App).
Error:
~/Documents/GitHub/AndroidInstantApp/android-topeka/topeka-ui/build/intermediates/manifests/full/feature/debug/AndroidManifest.xml:2
attribute 'split' in tag is not a valid split name
Error:com.android.builder.internal.aapt.AaptException: AAPT2 link
failed: Error:java.util.concurrent.ExecutionException:
com.android.builder.internal.aapt.AaptException: AAPT2 link failed:
Error:Execution failed for task
':topeka-ui:processDebugFeatureResources'.
Failed to execute aapt
A: I think we may have found a bug in this alpha release.
I solved the problem by removing the dash ("-") from the module name:
Apparently it is not well supported for split names.
The strange part is, both the codelabs and my project were initially working correctly with the dash.
|
Q: Android Studio XML Error Currently testing out Android Instant Apps using Android Studio 3.0 Canary 3 and I'm getting this error when I try to build the app and emulate it.
Any ways to fix it? (I'm making a multi-feature Instant App).
Error:
~/Documents/GitHub/AndroidInstantApp/android-topeka/topeka-ui/build/intermediates/manifests/full/feature/debug/AndroidManifest.xml:2
attribute 'split' in tag is not a valid split name
Error:com.android.builder.internal.aapt.AaptException: AAPT2 link
failed: Error:java.util.concurrent.ExecutionException:
com.android.builder.internal.aapt.AaptException: AAPT2 link failed:
Error:Execution failed for task
':topeka-ui:processDebugFeatureResources'.
Failed to execute aapt
A: I think we may have found a bug in this alpha release.
I solved the problem by removing the dash ("-") from the module name:
Apparently it is not well supported for split names.
The strange part is, both the codelabs and my project were initially working correctly with the dash.
|
stackoverflow
|
{
"language": "en",
"length": 125,
"provenance": "stackexchange_0000F.jsonl.gz:861537",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44531814"
}
|
5dd0b38aacb5808282d6cd2baf18c56f3dee5085
|
Stackoverflow Stackexchange
Q: Chrome 59 support for basic auth credentials in URLs alternative for usage with Chromedriver? With Chrome 59, support for putting basic auth credentials in URLs (https://user:password@host) has ended - this was warned about a while ago at https://www.chromestatus.com/feature/5669008342777856.
Has anyone had to work around this with Selenium and Chromedriver yet? Specifically within Python?
A: In our situation (automated testing using WebDriver via C# with NTLM auth) we found that once you hit the page with the credentials, you are still authorized for that browser session, even though the sub-resources on that page fail to load.
So we go to a page that we don't want to test (in our case the home page) with valid credentials in order to get authorized at the start of our test suite. From then on we browse to the pages we want to test without any credentials and so long as we don't close the session everything works.
|
Q: Chrome 59 support for basic auth credentials in URLs alternative for usage with Chromedriver? With Chrome 59, support for putting basic auth credentials in URLs (https://user:password@host) has ended - this was warned about a while ago at https://www.chromestatus.com/feature/5669008342777856.
Has anyone had to work around this with Selenium and Chromedriver yet? Specifically within Python?
A: In our situation (automated testing using WebDriver via C# with NTLM auth) we found that once you hit the page with the credentials, you are still authorized for that browser session, even though the sub-resources on that page fail to load.
So we go to a page that we don't want to test (in our case the home page) with valid credentials in order to get authorized at the start of our test suite. From then on we browse to the pages we want to test without any credentials and so long as we don't close the session everything works.
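Not from the answer above, but since the question asks about Python specifically, here is a minimal sketch of the same "authorize once, then keep the session" idea with Selenium and Chromedriver; the URLs and credentials are placeholders:
from selenium import webdriver

driver = webdriver.Chrome()
# Hit one throwaway page with the credentials embedded in the URL so the
# browser session becomes authorized (placeholder credentials and host).
driver.get("https://user:secret@example.com/")
# From here on, navigate without credentials; the session stays authorized
# for as long as the driver/browser is not closed.
driver.get("https://example.com/page-under-test")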
|
stackoverflow
|
{
"language": "en",
"length": 156,
"provenance": "stackexchange_0000F.jsonl.gz:861586",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44531972"
}
|
0686627c312802b343fe715a2eaa31e7120ce980
|
Stackoverflow Stackexchange
Q: How to check if local file is same as S3 object without downloading it with boto3? How can I check whether a local file is the same as a file stored in S3 without downloading it? The goal is to avoid downloading large files again and again. S3 objects have ETags, but they are difficult to compute if a file was uploaded in parts, and the solution from this question doesn't seem to work. Is there some easier way to avoid unnecessary downloads?
A: I would just compare the last modified time and download if they are different. Additionally you can also compare the size before downloading. Given a bucket, key and a local file fname:
import boto3
import os.path
def isModified(bucket, key, fname):
    s3 = boto3.resource('s3')
    obj = s3.Object(bucket, key)
    return int(obj.last_modified.strftime('%s')) != int(os.path.getmtime(fname))
|
Q: How to check if local file is same as S3 object without downloading it with boto3? How can I check whether a local file is the same as a file stored in S3 without downloading it? The goal is to avoid downloading large files again and again. S3 objects have ETags, but they are difficult to compute if a file was uploaded in parts, and the solution from this question doesn't seem to work. Is there some easier way to avoid unnecessary downloads?
A: I would just compare the last modified time and download if they are different. Additionally you can also compare the size before downloading. Given a bucket, key and a local file fname:
import boto3
import os.path
def isModified(bucket, key, fname):
    s3 = boto3.resource('s3')
    obj = s3.Object(bucket, key)
    return int(obj.last_modified.strftime('%s')) != int(os.path.getmtime(fname))
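Not part of the original answer: a short usage sketch for the isModified() helper above; the bucket, key, and local path are placeholders.
# Hypothetical usage: only download when the timestamps differ.
if isModified('my-bucket', 'data/big-file.csv', '/tmp/big-file.csv'):
    boto3.client('s3').download_file('my-bucket', 'data/big-file.csv', '/tmp/big-file.csv')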
A: Can you use a small local database, e.g. a text file?
* Download an S3 object once. Note its ETag.
* Compute whatever signature you want.
* Put the (ETag, signature) pair into the 'database'.
Next time, before you proceed with downloading, look up the ETag in the 'database'. If it's there, compute the signature of your existing file, and compare with the signature corresponding to the ETag. If they match, the remote file is the same that you have.
There's a possibility that the same file will be re-uploaded with different chunking, thus changing the ETag. Unless this is very probable, you can just ignore the false negative and re-download the file in that rare case.
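A minimal sketch of this idea, assuming a JSON file as the local 'database' and MD5 as the chosen signature (the file name and helper names are illustrative, not from the answer):
import hashlib
import json
import os

DB_PATH = 'etag_cache.json'  # hypothetical local "database"

def file_md5(path):
    # Signature of the local file; any digest would do.
    h = hashlib.md5()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(8192), b''):
            h.update(chunk)
    return h.hexdigest()

def matches_remote(etag, local_path):
    # Look the ETag up in the cache and compare stored vs. current signature.
    db = json.load(open(DB_PATH)) if os.path.exists(DB_PATH) else {}
    return etag in db and db[etag] == file_md5(local_path)

def remember(etag, local_path):
    # Record the (ETag, signature) pair after a successful download.
    db = json.load(open(DB_PATH)) if os.path.exists(DB_PATH) else {}
    db[etag] = file_md5(local_path)
    with open(DB_PATH, 'w') as f:
        json.dump(db, f)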
A: If you don't need an immediate answer, you can generate an S3 storage inventory, then import it into your database for future use.
Compute the local file's ETag as shown here for a normal file and for a huge multipart file.
|
stackoverflow
|
{
"language": "en",
"length": 280,
"provenance": "stackexchange_0000F.jsonl.gz:861616",
"question_score": "7",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44532078"
}
|
595b680f05a3ed6902cde59143d17f09da1fe131
|
Stackoverflow Stackexchange
Q: Xcode 9 Storyboards are not displaying properly On Xcode 9, all my storyboards seem to be displaying in blue outlines only. Doesn't seem to be a new feature....Anyone has the same issue?
A: I had this issue in two different storyboards, where I could see an error at the top which said:
"An internal error occurred..."
In one of them I could solve the issue by removing Derived Data, following these steps in Xcode:
Preferences > Locations > Derived Data > click the arrow to open in
Finder > Delete it
In the other one I wasn't able to solve it, so I'm using Xcode 8 for that storyboard (you can have both Xcode 8 and Xcode 9b2 open and code with them at the same time)
|
Q: Xcode 9 Storyboards are not displaying properly On Xcode 9, all my storyboards seem to be displaying in blue outlines only. Doesn't seem to be a new feature....Anyone has the same issue?
A: I had this issue in two different storyboards, where I could see an error at the top which said:
"An internal error occurred..."
In one of them I could solve the issue by removing Derived Data, following these steps in Xcode:
Preferences > Locations > Derived Data > click the arrow to open in
Finder > Delete it
In the other one I wasn't able to solve it, so I'm using Xcode 8 for that storyboard (you can have both Xcode 8 and Xcode 9b2 open and code with them at the same time)
A: After a week of trying all the suggestions on the internet, I found that Xcode 9 requires some libraries from macOS High Sierra.
So install it and then it will work:
https://www.apple.com/lae/macos/high-sierra/
Hopefully this will help someone
A: This recommendation might be helpful for others who are not able to solve this issue.
The issue can occur if the Xcode debugging tools are not installed properly
while installing Xcode 9 on macOS 10.12.x.
What I would recommend is that you go to the URL below and install the additional tools (additional tools, command line tools, ...) compatible with your OS version (v10.12 in this case).
https://developer.apple.com/download/more/
A: I fixed this issue by hiding bounds rectangles:
go to Editor -> Canvas -> uncheck Show Bounds Rectangles
NOTE: check/uncheck twice if it doesn't work on the first try
A: I had the exact same issue after upgrading to macOS High Sierra. Re-installing Xcode + command line tools did not help.
I changed this configuration in Xcode and it made the problem disappear: "Preferences" > "General" > "Locked Files" > "automatically unlock files".
A: For me, I had both Xcode 8 and Xcode 9 open on same computer. Closing Xcode 8 and reopening Xcode 9 solved the problem.
|
stackoverflow
|
{
"language": "en",
"length": 323,
"provenance": "stackexchange_0000F.jsonl.gz:861625",
"question_score": "13",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44532110"
}
|
41ede694726fb62ca62a02426e8d7f6bcb3f03c3
|
Stackoverflow Stackexchange
Q: Sklearn on aws lambda I want to use sklearn on AWS lambda. sklearn has dependencies on scipy(173MB) and numpy(75MB). The combined size of all these packages exceeds AWS Lambda disk space limit of 256 MB.
How can I use AWS lambda to use sklearn?
A: This guy gets it down to 40MB, though I have not tried it myself yet.
The relevant Github repo.
|
Q: Sklearn on aws lambda I want to use sklearn on AWS lambda. sklearn has dependencies on scipy(173MB) and numpy(75MB). The combined size of all these packages exceeds AWS Lambda disk space limit of 256 MB.
How can I use AWS lambda to use sklearn?
A: This guy gets it down to 40MB, though I have not tried it myself yet.
The relevant Github repo.
A: There are two ways to do this:
1) installing the modules dynamically
2) AWS Batch
1) Installing the modules dynamically
def lambdahandler():
    # install the numpy package
    # numpy code
    # uninstall the numpy package
    ## now install the scipy package
    # execute scipy code
or vice versa, depending on your code (a hedged sketch of this pattern follows after this answer)
2) Using AWS Batch
This is the best way, since you don't have any limitation regarding memory space.
You just need to build a Docker image and list all the required packages and libraries in the requirements.txt file.
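A hedged sketch of option 1, installing a package into /tmp at run time and putting it on sys.path. It assumes pip is importable in the Lambda runtime and that the package fits into /tmp; treat it as an illustration of the pattern, not as the answerer's exact code:
import subprocess
import sys

def install_to_tmp(package):
    # /tmp is the only writable location in Lambda; install there and make
    # the package importable for the remainder of this invocation.
    target = '/tmp/pkgs'
    subprocess.check_call([sys.executable, '-m', 'pip', 'install', '--target', target, package])
    if target not in sys.path:
        sys.path.insert(0, target)

def lambda_handler(event, context):
    install_to_tmp('numpy')
    import numpy as np  # only importable after the install above
    return {'mean': float(np.mean([1, 2, 3]))}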
A: I wanted to do the same, and it was very difficult indeed. I ended up buying this layer that includes scikit-learn, pandas, numpy and scipy.
https://www.awslambdas.com/layers/3/aws-lambda-scikit-learn-numpy-scipy-python38-layer
There is another layer that includes xgboost as well.
|
stackoverflow
|
{
"language": "en",
"length": 190,
"provenance": "stackexchange_0000F.jsonl.gz:861642",
"question_score": "8",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44532156"
}
|
73c613b8702b80f0792b89a2d7cea4d9415702de
|
Stackoverflow Stackexchange
Q: 'Newtonsoft.Json' already has a dependency defined for 'Microsoft.CSharp' I'm installing Newtonsoft.Json for parsing JSON in a .NET application. When I'm using Visual Studio (VS) 2012, it cannot be installed via NuGet.
This is the error I have got:
'Newtonsoft.Json' already has a dependency defined for 'Microsoft.CSharp'
I tried to copy the DLL over and just use it; it seems some dependencies are screwed up in this version (10.0.2).
After a few hours of research, I finally found out it is a compatibility problem between VS2012 and Newtonsoft.Json 10.0.2.
A: Because NuGet Package Manager (version 2.8.60318.667) for VS 2012 does not support .NET Standard (used by the latest Newtonsoft.Json library):
https://github.com/NuGet/Home/issues/3131
I resolved this issue by installing an older version of Newtonsoft.Json:
PM> Install-Package Newtonsoft.Json -Version 9.0.1
More details on:
https://github.com/NuGet/Home/issues/5162
|
Q: 'Newtonsoft.Json' already has a dependency defined for 'Microsoft.CSharp' I'm installing Newtonsoft.Json for parsing JSON in a .NET application. When I'm using Visual Studio (VS) 2012, it cannot be installed via NuGet.
This is the error I have got:
'Newtonsoft.Json' already has a dependency defined for 'Microsoft.CSharp'
I tried to copy the DLL over and just use it; it seems some dependencies are screwed up in this version (10.0.2).
After a few hours of research, I finally found out it is a compatibility problem between VS2012 and Newtonsoft.Json 10.0.2.
A: Because NuGet Package Manager (version 2.8.60318.667) for VS 2012 does not support .NET Standard (used by the latest Newtonsoft.Json library):
https://github.com/NuGet/Home/issues/3131
I resolved this issue by installing an older version of Newtonsoft.Json:
PM> Install-Package Newtonsoft.Json -Version 9.0.1
More details on:
https://github.com/NuGet/Home/issues/5162
A: I had the same issue using VS2015 and creating a NuGet package with dependency on Newtonsoft.Json version=10.0.3. I used the approach suggested by Vin.X in his answer as the work around.
After installing Newtonsoft.Json version=9.0.1 into your project, add following description in your .nuspec file.
<dependencies>
<dependency id="Newtonsoft.Json" version="10.0.3" />
</dependencies>
Application that consumes your package will install Newtonsoft.Json version=10.0.3 along with your package as a dependency into your project.
A: Try removing the existing version of the package from the solution's packages directory and then
try the following command. It worked for me.
PM> Install-Package Newtonsoft.Json -Version 9.0.1
A: Installing/restoring NuGet packages which target .NET standard requires NuGet.exe version 3.4+.
From the release notes for v3.4: https://learn.microsoft.com/en-us/nuget/release-notes/nuget-3.4
New Features
* Support for the netstandard and netstandardapp framework monikers
This version of NuGet comes with VS2015 Update 2
NuGet 3.4 was released March 30, 2016 as part of the Visual Studio 2015 Update 2 and Visual Studio 15 Preview Release
A: I ran into the same issue. I think you need to update NuGet for VS2013 (probably VS2012 also):
https://marketplace.visualstudio.com/items?itemName=NuGetTeam.NuGetPackageManagerforVisualStudio2013
A: This question isn't specifically about TFS/Azure Devops, but I ran into the exception in the title this morning, and my resolution gets around having to downgrade versions.
We updated Visual Studio on our build servers and all of our builds broke.
Below are the versions I'm currently targeting:
* NuGet: 5.4.0
* Newtonsoft.Json: 12.0.3
* Azure Devops Server (on prem): 2019
* Visual Studio 2019: 16.5.2
We found that we needed to add a task called NuGet Tool Installer in the beginning of our task list to force it to use version 5.4.0 because auto-discovery was selecting an older version and failing.
Once this was functional, and packages restored, it failed to package our source for distribution. So we have the latest NuGet.exe, .NET Framework reference of NewtonSoft.Json (i.e. not netstandard), but still it wasn't working. We were using NuGet Packager previously, and I'm not entirely sure when the task became deprecated, but it was still functional for us until the VS updates. There is a new task called NuGet which has a drop down for the different features.
Selecting Pack, and configuring the fields to mimic what the deprecated task had led to a successful build.
A: Try installing Newtonsoft MsgPack; it will install the Newtonsoft.Json DLL into your project.
|
stackoverflow
|
{
"language": "en",
"length": 519,
"provenance": "stackexchange_0000F.jsonl.gz:861646",
"question_score": "51",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44532170"
}
|
854d29811eb5e46eefc4abdb035cad676d47973f
|
Stackoverflow Stackexchange
Q: Git Bash: how to change directory when activating virtual environment Created virtualenvs on Python 3.6.1 using virtualenvwrapper, using Git Bash mingw64 terminal, Win7.
How can I have the working directory automatically changed to another location when I activate a specific virtual environment?
Example: When I run workon temp_env I want the working directory to be changed to as if I just ran cd "/c/Users/me/Desktop/temp_env".
A: I wanted a setup where the working directory would automatically change to a location defined for a specific virtual environment.
* After installing virtualenvwrapper I added the following lines to ~/.bashrc per the docs:
export WORKON_HOME=$HOME/.virtualenvs
source virtualenvwrapper.sh
* Then I created a new virtual env: mkvirtualenv temp_env
* Inside the $HOME/.virtualenvs/temp_env directory, I added a line to the postactivate script (which was created with the virtual env) to change the working directory:
cd "/path/to/folder/"
More info on ways to define behavior when activating, deactivating, etc. virtual environments is located here.
|
Q: Git Bash: how to change directory when activating virtual environment Created virtualenvs on Python 3.6.1 using virtualenvwrapper, using Git Bash mingw64 terminal, Win7.
How can I have the working directory automatically changed to another location when I activate a specific virtual environment?
Example: When I run workon temp_env I want the working directory to be changed to as if I just ran cd "/c/Users/me/Desktop/temp_env".
A: I wanted a setup where the working directory would automatically change to a location defined for a specific virtual environment.
* After installing virtualenvwrapper I added the following lines to ~/.bashrc per the docs:
export WORKON_HOME=$HOME/.virtualenvs
source virtualenvwrapper.sh
* Then I created a new virtual env: mkvirtualenv temp_env
* Inside the $HOME/.virtualenvs/temp_env directory, I added a line to the postactivate script (which was created with the virtual env) to change the working directory:
cd "/path/to/folder/"
More info on ways to define behavior when activating, deactivating, etc. virtual environments is located here.
|
stackoverflow
|
{
"language": "en",
"length": 153,
"provenance": "stackexchange_0000F.jsonl.gz:861651",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44532175"
}
|
cae68f99d4dff10dd21dab464ea43c8135f7bfaa
|
Stackoverflow Stackexchange
Q: how to customize hoverinfo in plotly histogram? Hi there,
I created a histogram with plotly and tried to put the CDF in the hoverinfo. Somehow the "CDF"s shown were not right. I think I didn't use the correct bin info. Here is the code.
cdf <- ecdf(iris$Sepal.Length)
plot_ly(iris) %>%
add_trace(x=~Sepal.Length,
type='histogram',
hoverinfo='text+x+y',
text=~cdf(Sepal.Length))
Thanks!
|
Q: how to customize hoverinfo in plotly histogram? Hi there,
I created a histogram with plotly and tried to put the CDF in the hoverinfo. Somehow the "CDF"s shown were not right. I think I didn't use the correct bin info. Here is the code.
cdf <- ecdf(iris$Sepal.Length)
plot_ly(iris) %>%
add_trace(x=~Sepal.Length,
type='histogram',
hoverinfo='text+x+y',
text=~cdf(Sepal.Length))
Thanks!
|
stackoverflow
|
{
"language": "en",
"length": 53,
"provenance": "stackexchange_0000F.jsonl.gz:861655",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44532181"
}
|
2d0b35a2319a3e2c92c49ca911b5c0a4801b1e42
|
Stackoverflow Stackexchange
Q: SpriteKit: run code or block when action removed? The run function for SKNode lets you run a block when the action completes, but what if the action is cancelled/removed via removeAllActions?
Cancelling an action doesn't invoke the completion block from the run function.
Is there a callback or way to run code when the action is cancelled/removed?
A: Yes, if you remove an action before it has completed, the completion block will not run. Per Docs:
The run(:completion:) method is identical to the run(:) method, but after the action completes, your block is called. This callback is only called if the action runs to completion. If the action is removed before it completes, the completion handler is never called.
|
Q: SpriteKit: run code or block when action removed? The run function for SKNode lets you run a block when the action completes, but what if the action is cancelled/removed via removeAllActions?
Cancelling an action doesn't invoke the completion block from the run function.
Is there a callback or way to run code when the action is cancelled/removed?
A: Yes, if you remove an action before it has completed, the completion block will not run. Per Docs:
The run(:completion:) method is identical to the run(:) method, but after the action completes, your block is called. This callback is only called if the action runs to completion. If the action is removed before it completes, the completion handler is never called.
A: A work around could be:
class YourSpriteNode: SKNode {
    func doSomethingAtCompletionAction() {
        // all your stuff
    }
    override func removeAllActions() {
        super.removeAllActions()
        doSomethingAtCompletionAction()
    }
}
|
stackoverflow
|
{
"language": "en",
"length": 144,
"provenance": "stackexchange_0000F.jsonl.gz:861663",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44532198"
}
|
a259c323eb314334144bfa79be5933e7c0bc78fa
|
Stackoverflow Stackexchange
Q: Required output I have a list like this
[[{1:"one",2:"two"},{1:"one"}],[{3:"three",4:"four"},{3:"three"}]]
required output:
[{1:"one",2:"two"},{1:"one"},{3:"three",4:"four"},{3:"three"}]
Can someone please tell me how to proceed?
A: Iterate over the list's sublists and append each dictionary to another list.
list_1 = [[{1:"one",2:"two"},{1:"one"}],[{3:"three",4:"four"},{3:"three"}]]
list_2 = []
for list in list_1:
    for dictionary in list:
        list_2.append(dictionary)
print(list_2) # [{1: 'one', 2: 'two'}, {1: 'one'}, {3: 'three', 4: 'four'}, {3: 'three'}]
|
Q: Required output I have a list like this
[[{1:"one",2:"two"},{1:"one"}],[{3:"three",4:"four"},{3:"three"}]]
required output:
[{1:"one",2:"two"},{1:"one"},{3:"three",4:"four"},{3:"three"}]
Can someone please tell me how to proceed?
A: Iterate over the list's sublists and append each dictionary to another list.
list_1 = [[{1:"one",2:"two"},{1:"one"}],[{3:"three",4:"four"},{3:"three"}]]
list_2 = []
for list in list_1:
    for dictionary in list:
        list_2.append(dictionary)
print(list_2) # [{1: 'one', 2: 'two'}, {1: 'one'}, {3: 'three', 4: 'four'}, {3: 'three'}]
A: You can try this:
from itertools import chain
l = [[{1:"one",2:"two"},{1:"one"}],[{3:"three",4:"four"},{3:"three"}]]
new_l = list(chain(*l))
Final Output:
[{1: 'one', 2: 'two'}, {1: 'one'}, {3: 'three', 4: 'four'}, {3: 'three'}]
|
stackoverflow
|
{
"language": "en",
"length": 91,
"provenance": "stackexchange_0000F.jsonl.gz:861758",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44532532"
}
|
6e842088c3ada4faa7784e8f0f4a4d04fb9f08f8
|
Stackoverflow Stackexchange
Q: Flask import app from parent directory I have an app structured like so:
name
-app.py
-__init__.py
-folder1
-views.py
-models.py
-__init__.py
The content of my app.py:
from flask import Flask
from flask_cors import CORS
app = Flask(__name__)
CORS(app)
if __name__ == '__main__':
    app.run('0.0.0.0')
And __init__.py in the name folder is:
from app import app
How would I import this app into views.py? Currently, I'm using
from name import app
from models import Class1
app.add_url_rule('/', view_func=Class1.as_view('class1'))
, but then when I run the app it returns a 404 error.
A: This is what I did to my apps:
In __init__.py:
from .app import app
with app.app_context():
    from .folder1 import models, views  # noqa
In folder1/views.py:
from flask import current_app as app
# then use `app` as usual
from .models import Class1
app.add_url_rule('/', view_func=Class1.as_view('class1'))
The "app_context()" injects the current app object into the current_app proxy. Read this to understand the mechanism.
Also it is recommended to explicitly use relative imports (with the extra dots ".").
|
Q: Flask import app from parent directory I have an app structured like so:
name
-app.py
-__init__.py
-folder1
-views.py
-models.py
-__init__.py
The content of my app.py:
from flask import Flask
from flask_cors import CORS
app = Flask(__name__)
CORS(app)
if __name__ == '__main__':
    app.run('0.0.0.0')
And __init__.py in the name folder is:
from app import app
How would I import this app into views.py? Currently, I'm using
from name import app
from models import Class1
app.add_url_rule('/', view_func=Class1.as_view('class1'))
, but then when I run the app it returns a 404 error.
A: This is what I did to my apps:
In __init__.py:
from .app import app
with app.app_context():
    from .folder1 import models, views  # noqa
In folder1/views.py:
from flask import current_app as app
# then use `app` as usual
from .models import Class1
app.add_url_rule('/', view_func=Class1.as_view('class1'))
The "app_context()" injects the current app object into the current_app proxy. Read this to understand the mechanism.
Also it is recommended to explicitly use relative imports (with the extra dots ".").
|
stackoverflow
|
{
"language": "en",
"length": 163,
"provenance": "stackexchange_0000F.jsonl.gz:861770",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44532557"
}
|
6fce763311fe3d2d687a4789327b71f1214e5ff7
|
Stackoverflow Stackexchange
Q: Order in which files are read using os.listdir? When executing the following code, is there an order in which Python loops through the files in the provided directory? Is it alphabetical? How do I go about establishing an order in which these files are looped through (either by date created/modified or alphabetically)?
import os
for file in os.listdir(path):
    df = pd.read_csv(path+file)
    # do stuff
A: As per documentation: "The list is in arbitrary order"
https://docs.python.org/3.6/library/os.html#os.listdir
If you wish to establish an order (alphabetical in this case), you could sort it.
import os
for file in sorted(os.listdir(path)):
    df = pd.read_csv(path+file)
    # do stuff
|
Q: Order in which files are read using os.listdir? When executing the following code, is there an order in which Python loops through the files in the provided directory? Is it alphabetical? How do I go about establishing an order in which these files are looped through (either by date created/modified or alphabetically)?
import os
for file in os.listdir(path):
    df = pd.read_csv(path+file)
    # do stuff
A: As per documentation: "The list is in arbitrary order"
https://docs.python.org/3.6/library/os.html#os.listdir
If you wish to establish an order (alphabetical in this case), you could sort it.
import os
for file in sorted(os.listdir(path)):
    df = pd.read_csv(path+file)
    # do stuff
A: You asked several questions:
* Is there an order in which Python loops through the files?
No, Python does not impose any predictable order. The docs say 'The list is in arbitrary order'. If order matters, you must impose it. Practically speaking, the files are returned in the same order used by the underlying operating system, but one mustn't rely on that.
* Is it alphabetical?
Probably not. But even if it were you mustn't rely upon that. (See above).
* How could I establish an order?
for file in sorted(os.listdir(path)):
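The question also asks about ordering by creation/modification date; neither answer shows that, so here is a minimal sketch that sorts by modification time (it reuses path and pd from the question's snippet):
import os

# Sort by each file's modification time as reported by the OS.
for file in sorted(os.listdir(path), key=lambda f: os.path.getmtime(os.path.join(path, f))):
    df = pd.read_csv(os.path.join(path, file))
    # do stuff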
|
stackoverflow
|
{
"language": "en",
"length": 192,
"provenance": "stackexchange_0000F.jsonl.gz:861799",
"question_score": "13",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44532641"
}
|
97568825adc3a557152026627834fb82dcaf9de6
|
Stackoverflow Stackexchange
Q: Best way to measure code coverage for python without checking every imported module I am in the process of trying to integrate code coverage into our development pipeline, and we are having a hard time using nosetests to accurately estimate our code coverage because it also reports coverage for each of the libraries imported in our Python packages. So we end up with code coverage percentages for things like import os, which keeps us from really seeing the data we want.
I have looked into using coverage.py, but that's in the very early stages.
I thought to ask if anyone else has had this issue and how they overcame it.
Thanks in advance!
A: Nosetests has an option to only produce coverage for named packages e.g.
--cover-package=foo --cover-package=bar
This is what I've done in the past.
However, I moved over to pytest. I liked this better because it produced better error messages, including dictionary diffs.
|
Q: Best way to measure code coverage for python without checking every imported module I am in the process of trying to integrate code coverage into our development pipeline, and we are having a hard time using nosetests to accurately estimate our code coverage because it also reports coverage for each of the libraries imported in our Python packages. So we end up with code coverage percentages for things like import os, which keeps us from really seeing the data we want.
I have looked into using coverage.py, but that's in the very early stages.
I thought to ask if anyone else has had this issue and how they overcame it.
Thanks in advance!
A: Nosetests has an option to only produce coverage for named packages e.g.
--cover-package=foo --cover-package=bar
This is what I've done in the past.
However, I moved over to pytest. I liked this better because it produced better error messages, including dictionary diffs.
|
stackoverflow
|
{
"language": "en",
"length": 162,
"provenance": "stackexchange_0000F.jsonl.gz:861802",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44532647"
}
|
0e458ae8b0c090645e072c64f991d165433515a7
|
Stackoverflow Stackexchange
Q: Python Seaborn rotate x axis labels but align labels to axis I created a Seaborn barplot using the below code:
import seaborn as sns
barplot = sns.barplot(x='abc'
,y='def'
,data=df
,ci=None
)
barplot.set_xticklabels(barplot.get_xticklabels(), rotation=45)
barplot.figure
This code produced a barplot that looks like this:
Is there a way to rotate and align the x axis labels so the barplot looks something like this? So the end of the x axis labels are aligned to the x axis?
I am using Python 3.6 and seaborn 0.7.1
|
Q: Python Seaborn rotate x axis labels but align labels to axis I created a Seaborn barplot using the below code:
import seaborn as sns
barplot = sns.barplot(x='abc'
,y='def'
,data=df
,ci=None
)
barplot.set_xticklabels(barplot.get_xticklabels(), rotation=45)
barplot.figure
This code produced a barplot that looks like this:
Is there a way to rotate and align the x axis labels so the barplot looks something like this? So the end of the x axis labels are aligned to the x axis?
I am using Python 3.6 and seaborn 0.7.1
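No answer is recorded for this entry; as an assumption on my part (not from the post), a commonly used matplotlib approach is to right-align the rotated labels so their ends sit at the ticks:
# Hypothetical tweak: anchor the end of each rotated label at its tick.
barplot.set_xticklabels(barplot.get_xticklabels(), rotation=45, ha='right')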
|
stackoverflow
|
{
"language": "en",
"length": 85,
"provenance": "stackexchange_0000F.jsonl.gz:861817",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44532716"
}
|
35832da15245bcde9e09aa0324bc2891d5288939
|
Stackoverflow Stackexchange
Q: Is alternative icon available for watchOS? Since iOS 10.3, developers can set alternative icons which is very nice.
Is it supported in the companion watchOS app?
My app was recently rejected due to this.
Specifically, we noticed that for the user selected alternative icon themes no alternative Apple Watch icons matching the iPhone icon theme were submitted.
A: watchOS does not support alternate icons, as of the watchOS 4.0 beta.
I ended up answering the Apple Review team that the app is intended not to provide alternate icons in the Watch app.
|
Q: Is alternative icon available for watchOS? Since iOS 10.3, developers can set alternative icons which is very nice.
Is it supported in the companion watchOS app?
My app was recently rejected due to this.
Specifically, we noticed that for the user selected alternative icon themes no alternative Apple Watch icons matching the iPhone icon theme were submitted.
A: watchOS does not support alternate icons, as of the watchOS 4.0 beta.
I ended up answering the Apple Review team that the app is intended not to provide alternate icons in the Watch app.
A: From Apple's Human Interface Guidelines, you should provide all kinds of sizes for your alternate icon.
In your case, you should provide the icon in the watchOS app's sizes.
Like your primary app icon, each alternate app icon is delivered as a collection of related images that vary in size. When the user chooses an alternate icon, the appropriate sizes of that icon replace your primary app icon on the Home screen, in Spotlight, and elsewhere in the system. To ensure that alternate icons appear consistently throughout the system—the user shouldn't see one version of your icon on the Home screen and a completely different version in Settings, for example—provide them in the same sizes you provide for your primary app icon (with the exception of the large App Store icon). See App Icon Sizes.
How to:
The value of CFBundleIconFiles in Info.plist is an array. You should fill it with the names of the icon files:
* no @2x/@3x suffix
* no forced rules about file naming; the app will automatically pick the right resolution for the size it needs
* the order does not matter
|
stackoverflow
|
{
"language": "en",
"length": 272,
"provenance": "stackexchange_0000F.jsonl.gz:861818",
"question_score": "8",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44532717"
}
|
b07a4bfaa514c513df1f8e7749a00c47ced1c101
|
Stackoverflow Stackexchange
Q: Algorithmics issue, python string, no idea I have an algorithm problem with Python and strings.
My issue:
My function should find the maximum sum of substring values.
For example:
ae-afi-re-fi -> 2+6+3+5=16
but
ae-a-fi-re-fi -> 2-10+5+3+5=5
I tried using the string.count function and counting substrings, but this method is not good.
What would be the best way to do this in Python? Thanks in advance.
string = "aeafirefi"
Sum the value of substrings.
A: Probably having a dictionary with:
key = substring: value = value
So if you have:
string = "aeafirefi"
First you look for the whole string in the dictionary; if you don't find it, you cut off the last letter so you have "aeafiref", and so on, until you find a substring or are left with a single letter.
Then you skip the letters used: for example, if you found "aeaf", you start all over again using string = "iref".
|
Q: Algorithmics issue, python string, no idea I have an algorithm problem with Python and strings.
My issue:
My function should find the maximum sum of substring values.
For example:
ae-afi-re-fi -> 2+6+3+5=16
but
ae-a-fi-re-fi -> 2-10+5+3+5=5
I tried using the string.count function and counting substrings, but this method is not good.
What would be the best way to do this in Python? Thanks in advance.
string = "aeafirefi"
Sum the value of substrings.
A: Probably having a dictionary with:
key = substring: value = value
So if you have:
string = "aeafirefi"
First you look for the whole string in the dictionary; if you don't find it, you cut off the last letter so you have "aeafiref", and so on, until you find a substring or are left with a single letter.
Then you skip the letters used: for example, if you found "aeaf", you start all over again using string = "iref".
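A minimal sketch of the greedy longest-prefix idea described above, assuming a values dictionary like the one used in the other answers and a -10 penalty per unmatched letter (both are assumptions, since the question never defines them). Note that a greedy scan is not guaranteed to find the true maximum:
values = {'ae': 2, 'afi': 6, 're': 3, 'fi': 5}  # assumed substring values
PENALTY = -10  # assumed cost of a single unmatched letter

def greedy_sum(s):
    total = 0
    while s:
        # Try the longest prefix first, shrinking until a known substring matches.
        for end in range(len(s), 0, -1):
            prefix = s[:end]
            if prefix in values:
                total += values[prefix]
                break
        else:
            # No known prefix: charge the penalty for one unmatched letter.
            prefix = s[0]
            total += PENALTY
        s = s[len(prefix):]
    return total

print(greedy_sum("aeafirefi"))  # 16 with the assumed values above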
A: Here's a brute force solution:
values_dict = {
    'ae': 2,
    'qd': 3,
    'qdd': 5,
    'fir': 4,
    'afi': 6,
    're': 3,
    'fi': 5
}

def get_value(x):
    return values_dict[x] if x in values_dict else -10

def next_tokens(s):
    """Returns possible tokens"""
    # Return any tokens in values_dict
    for x in values_dict.keys():
        if s.startswith(x):
            yield x
    # Return single character.
    yield s[0]

def permute(s, stack=[]):
    """Returns all possible variations"""
    if len(s) == 0:
        yield stack
        return
    for token in next_tokens(s):
        perms = permute(s[len(token):], stack + [token])
        for perm in perms:
            yield perm

def process_string(s):
    def process_tokens(tokens):
        return sum(map(get_value, tokens))
    return max(map(process_tokens, permute(s)))

print('Max: {}'.format(process_string('aeafirefi')))
A: In my solution I'll use permutations from the itertools module in order to list all the possible permutations of the substrings that you gave in your question, stored in a dict called vals. Then I iterate through the input string and split it by all the permutations found below. Then I sum the values of each permutation and finally get the max.
PS: The key of this solution is the get_sublists() method.
This is an example with some tests:
from itertools import permutations

def get_sublists(a, perm_vals):
    # Find the sublists in the input string
    # based on the permutations of the dict vals.keys()
    for k in perm_vals:
        if k in a:
            a = ''.join(a.split(k))
            # Yield the sublist if we found any
            yield k

def sum_sublists(a, sub, vals):
    # Join the sublists and compare them to the input string
    # to get the difference in length
    diff = len(a) - len(''.join(sub))
    # Sum the value of each sublist (on every permutation)
    return sub, sum(vals[k] for k in sub) - diff * 10

def get_max_sum_sublists(a, vals):
    # Get all the possible permutations
    perm_vals = permutations(vals.keys())
    # Remove duplicates if there are any
    sub = set(tuple(get_sublists(a, k)) for k in perm_vals)
    # Get the sum of each possible permutation
    aa = (sum_sublists(a, k, vals) for k in sub)
    # Return the max of the above operation
    return max(aa, key=lambda x: x[1])
vals = {'ae': 2, 'qd': 3, 'qdd': 5, 'fir': 4, 'afi': 6, 're': 3, 'fi': 5}
# Test
a = "aeafirefi"
final, s = get_max_sum_sublists(a, vals)
print("Sublists: {}\nSum: {}".format(final, s))
print('----')
a = "aeafirefiqdd"
final, s = get_max_sum_sublists(a, vals)
print("Sublists: {}\nSum: {}".format(final, s))
print('----')
a = "aeafirefiqddks"
final, s = get_max_sum_sublists(a, vals)
print("Sublists: {}\nSum: {}".format(final, s))
Output:
Sublists: ('ae', 'afi', 're', 'fi')
Sum: 16
----
Sublists: ('afi', 'ae', 'qdd', 're', 'fi')
Sum: 21
----
Sublists: ('afi', 'ae', 'qdd', 're', 'fi')
Sum: 1
Please try this solution with as many input strings as you can, and don't hesitate to comment if you find any wrong results.
|
stackoverflow
|
{
"language": "en",
"length": 578,
"provenance": "stackexchange_0000F.jsonl.gz:861821",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44532721"
}
|
996d5284f18ab010df40b78c7c086e63035430d5
|
Stackoverflow Stackexchange
Q: How is Testim.io different from Selenium? Testim.io is an automated testing platform. How is it different from Selenium?
A: Testim.io is a SaaS applying machine learning to test automation.
Usage: You use testim's plugin to record test cases or bugs (& submit to Trello/JIRA). You can then edit and add javascript or image validation to your tests, and run them via testim's site or via CLI and your CI/CD cloud to execute your tests.
Machine Learning: When you record tests, testim uses machine learning weighting rather than individual CSS Selectors or XPath to identify DOM elements to test. When you execute tests, the tests rebalance weighting, so you don't have to continually fix your !@#$% tests because you (or React.js) changed the name of the selector or XPath element.
|
Q: How is Testim.io different from Selenium? Testim.io is an automated testing platform. How is it different from Selenium?
A: Testim.io is a SaaS applying machine learning to test automation.
Usage: You use testim's plugin to record test cases or bugs (& submit to Trello/JIRA). You can then edit and add javascript or image validation to your tests, and run them via testim's site or via CLI and your CI/CD cloud to execute your tests.
Machine Learning: When you record tests, testim uses machine learning weighting rather than individual CSS Selectors or XPath to identify DOM elements to test. When you execute tests, the tests rebalance weighting, so you don't have to continually fix your !@#$% tests because you (or React.js) changed the name of the selector or XPath element.
A: It is a browser add-on and runs inside your browser. It supports record and playback. Also, it is cloud-based: all your scripts are stored in the cloud. It supports multiple locators, like QTP.
|
stackoverflow
|
{
"language": "en",
"length": 164,
"provenance": "stackexchange_0000F.jsonl.gz:861827",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44532742"
}
|
dd7ebc780db014107cc28c7195adc43512bba8bf
|
Stackoverflow Stackexchange
Q: Cloud Functions with Firebase: signInWithEmailAndPassword is not a function I'm beginning writing code with Cloud Functions with Firebase.
Of the functions below, testCreateUserAccount succeeds.
testLogin fails with a Type Error at runtime, stating "signInWithEmailAndPassword is not a function"
From what I have seen in the documentation, createUser is under the same class as signInWithEmailAndPassword, so it's not clear to me why attempting to call signInWithEmailAndPassword would fail. Any ideas? Thanks!
"use strict";
var functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp(functions.config().firebase);
exports.testCreateUserAccount = functions.https.onRequest ((req, res) => {
var email = "[email protected]";
var password = "joejoe";
admin.auth().createUser({
email: email,
password: password,
disabled: false
});
} );
exports.testLogin = functions.https.onRequest ((req, res) => {
var email = "[email protected]";
var password = "joejoe";
admin.auth().signInWithEmailAndPassword(email, password);
} );
A: You used admin.auth().signInWithEmailAndPassword(email, password) on the server side; signInWithEmailAndPassword belongs to the client SDK, so you must use it on the client side.
|
Q: Cloud Functions with Firebase: signInWithEmailAndPassword is not a function I'm beginning writing code with Cloud Functions with Firebase.
Of the functions below, testCreateUserAccount succeeds.
testLogin fails with a Type Error at runtime, stating "signInWithEmailAndPassword is not a function"
From what I have seen in the documentation, createUser is under the same class as signInWithEmailAndPassword, so it's not clear to me why attempting to call signInWithEmailAndPassword would fail. Any ideas? Thanks!
"use strict";
var functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp(functions.config().firebase);
exports.testCreateUserAccount = functions.https.onRequest ((req, res) => {
var email = "[email protected]";
var password = "joejoe";
admin.auth().createUser({
email: email,
password: password,
disabled: false
});
} );
exports.testLogin = functions.https.onRequest ((req, res) => {
var email = "[email protected]";
var password = "joejoe";
admin.auth().signInWithEmailAndPassword(email, password);
} );
A: You used admin.auth().signInWithEmailAndPassword(email, password) on the server side; signInWithEmailAndPassword belongs to the client SDK, so you must use it on the client side.
A: Now you could use the Identity Toolkit's REST endpoint:
https://identitytoolkit.googleapis.com/v1/accounts:signUp?key=[API_KEY]
Here you have the documentation:
https://firebase.google.com/docs/reference/rest/auth/
A: You can use the Identity Toolkit REST API like this; the response contains an idToken, which can be used to verify an account with admin.auth().verifyIdToken(idToken) in a Cloud Function.
https://identitytoolkit.googleapis.com/v1/accounts:signInWithPassword?key=YOUR_PROJECT_API_KEY_THAT_ALLOWS_IDENTITY_TOOLKIT_SERVICE&[email protected]&password=samplepass&returnSecureToken=true
|
stackoverflow
|
{
"language": "en",
"length": 185,
"provenance": "stackexchange_0000F.jsonl.gz:861849",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44532818"
}
|
708816bd220f523e5c84add5cbe255503e21f57d
|
Stackoverflow Stackexchange
Q: How do I refresh an access token from Azure AD using django-rest-framework-social-oauth2? The documentation gives an example of how to convert an Azure access_token that the user already has from the login process, but I'm not seeing anything about how to refresh that token. I managed to roll my own using adal, the Azure AD library for python, but I'm wondering if there's a better way using the tools included in DRF social oauth 2 or other django oauth packages that I'm just not finding. Please advise. Below is the function that refreshes my Azure AD token.
def refresh_social_access_token(self, request):
    """
    This function leverages adal
    https://github.com/AzureAD/azure-activedirectory-library-for-python
    to refresh an expired access token.

    .acquire_token_with_refresh_token(self, refresh_token, azure_ad_app_key,
                                      resource, azure_ad_app_secret)
    """
    user_social_auth = request.user.social_auth.filter(user=request.user) \
        .values('provider', 'extra_data')[0]
    context = AuthenticationContext(f'https://login.microsoftonline.com/{self.TENANT_ID}')
    token = context.acquire_token_with_refresh_token(
        user_social_auth['extra_data']['refresh_token'],
        SOCIAL_AUTH_AZUREAD_OAUTH2_KEY,
        user_social_auth['extra_data']['resource'],
        client_secret=SOCIAL_AUTH_AZUREAD_OAUTH2_SECRET
    )
    try:
        expiry = convert_iso_to_epoch(token["expiresOn"])
        user_social_auth = request.user.social_auth.get(user=request.user)
        user_social_auth.extra_data['expires_on'] = expiry
        user_social_auth.save()
    except KeyError:
        HttpError('Oauth2 token could not be refreshed as configured.')
|
Q: How do I refresh an access token from Azure AD using django-rest-framework-social-oauth2? The documentation gives an example of how to convert an Azure access_token that the user already has from the login process, but I'm not seeing anything about how to refresh that token. I managed to roll my own using adal, the Azure AD library for python, but I'm wondering if there's a better way using the tools included in DRF social oauth 2 or other django oauth packages that I'm just not finding. Please advise. Below is the function that refreshes my Azure AD token.
def refresh_social_access_token(self, request):
    """
    This function leverages adal
    https://github.com/AzureAD/azure-activedirectory-library-for-python
    to refresh an expired access token.

    .acquire_token_with_refresh_token(self, refresh_token, azure_ad_app_key,
                                      resource, azure_ad_app_secret)
    """
    user_social_auth = request.user.social_auth.filter(user=request.user) \
        .values('provider', 'extra_data')[0]
    context = AuthenticationContext(f'https://login.microsoftonline.com/{self.TENANT_ID}')
    token = context.acquire_token_with_refresh_token(
        user_social_auth['extra_data']['refresh_token'],
        SOCIAL_AUTH_AZUREAD_OAUTH2_KEY,
        user_social_auth['extra_data']['resource'],
        client_secret=SOCIAL_AUTH_AZUREAD_OAUTH2_SECRET
    )
    try:
        expiry = convert_iso_to_epoch(token["expiresOn"])
        user_social_auth = request.user.social_auth.get(user=request.user)
        user_social_auth.extra_data['expires_on'] = expiry
        user_social_auth.save()
    except KeyError:
        HttpError('Oauth2 token could not be refreshed as configured.')
|
stackoverflow
|
{
"language": "en",
"length": 157,
"provenance": "stackexchange_0000F.jsonl.gz:861857",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44532839"
}
|
4c6eecdc4c1b22e17e05953b8aea1b776adda1d1
|
Stackoverflow Stackexchange
Q: Background-clip: content-box won't work I'm learning web development and I'm currently "finishing" with CSS, but I am stuck on the background-clip property, more precisely the content-box value. Whatever I try, it just won't work, and it looks like I set it to padding-box. Also notice I didn't use background (the shorthand property).
There must be something I'm missing. My main source of learning this property is CSS Tricks and as you can see, my example follows it almost to the letter. Anyway, here's the JSFiddle link and see it for yourself: https://jsfiddle.net/av857arj/1/
A: Your boxes have no padding, so the padding-box and content-box will look the same. When you add padding to all three boxes, you can see the difference.
#clip-ex-container {
width: 95%;
margin: auto;
padding: 10px 0;
}
.clip-ex-bb, .clip-ex-pb, .clip-ex-cb {
width: 20%;
margin: 1em;
height: 50px;
float: left;
background-color: rgb(189, 218, 49);
border: 0.6em solid rgba(54, 80, 65, 0.49);
padding: 1em;
}
.clip-ex-bb {
background-clip: border-box;
margin-left: 2.9em;
}
.clip-ex-pb {background-clip: padding-box;}
.clip-ex-cb {background-clip: content-box;}
<div id="clip-ex-container" class="clearfix">
<div class="clip-ex-bb">
<p>Border Box</p>
</div>
<div class="clip-ex-pb">
<p>Padding Box</p>
</div>
<div class="clip-ex-cb">
<p>Content Box</p>
</div>
</div>
|
Q: Background-clip: content-box won't work I'm learning web development and I'm currently "finishing" with CSS, but I am stuck on the background-clip property, more precisely the content-box value. Whatever I try, it just won't work, and it looks like I set it to padding-box. Also notice I didn't use background (the shorthand property).
There must be something I'm missing. My main source of learning this property is CSS Tricks and as you can see, my example follows it almost to the letter. Anyway, here's the JSFiddle link and see it for yourself: https://jsfiddle.net/av857arj/1/
A: Your boxes have no padding, so the padding-box and content-box will look the same. When you add padding to all three boxes, you can see the difference.
#clip-ex-container {
width: 95%;
margin: auto;
padding: 10px 0;
}
.clip-ex-bb, .clip-ex-pb, .clip-ex-cb {
width: 20%;
margin: 1em;
height: 50px;
float: left;
background-color: rgb(189, 218, 49);
border: 0.6em solid rgba(54, 80, 65, 0.49);
padding: 1em;
}
.clip-ex-bb {
background-clip: border-box;
margin-left: 2.9em;
}
.clip-ex-pb {background-clip: padding-box;}
.clip-ex-cb {background-clip: content-box;}
<div id="clip-ex-container" class="clearfix">
<div class="clip-ex-bb">
<p>Border Box</p>
</div>
<div class="clip-ex-pb">
<p>Padding Box</p>
</div>
<div class="clip-ex-cb">
<p>Content Box</p>
</div>
</div>
|
stackoverflow
|
{
"language": "en",
"length": 186,
"provenance": "stackexchange_0000F.jsonl.gz:861863",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44532849"
}
|
4796edaa36ff8a250404dfbdea9b6878073cdbfc
|
Stackoverflow Stackexchange
Q: Nested route disappear immediately instead of playing :leave animation I have outer routes: /posts and /about. /posts route has nested routes: / and /pages/:pageNumber.
When navigating between the nested routes (/ and /pages/:pageNumber), animations work well. But when navigating to /about, the nested route disappears immediately.
animateChild() doesn't help. In parent router component animation:
transition(':leave', [
query('@*', animateChild()),
animate('/*some easing*/', style({/*some styles*/}))
])
This causes the error query("@*") returned zero elements. So the nested route is removed immediately, and the parent component can't see it.
Angular version: 4.2.2
A: For me, the cause of the problem was that, when navigating from /page1 to /page2, in page2.component.ts I was loading some data from the server in the constructor. Moving that code to ngOnInit solved this issue, after 2 weeks of breaking my head.
|
Q: Nested route disappear immediately instead of playing :leave animation I have outer routes: /posts and /about. /posts route has nested routes: / and /pages/:pageNumber.
When navigating between the nested routes (/ and /pages/:pageNumber), animations work well. But when navigating to /about, the nested route disappears immediately.
animateChild() doesn't help. In parent router component animation:
transition(':leave', [
query('@*', animateChild()),
animate('/*some easing*/', style({/*some styles*/}))
])
This causes the error query("@*") returned zero elements. So the nested route is removed immediately, and the parent component can't see it.
Angular version: 4.2.2
A: For me, the cause of the problem was that, when navigating from /page1 to /page2, in page2.component.ts I was loading some data from the server in the constructor. Moving that code to ngOnInit solved this issue, after 2 weeks of breaking my head.
|
stackoverflow
|
{
"language": "en",
"length": 127,
"provenance": "stackexchange_0000F.jsonl.gz:861875",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44532886"
}
|
a8ba2cbc78143e3b54f837c5f58950bc3bd88e71
|
Stackoverflow Stackexchange
Q: Exclude columns by names in mutate_at in dplyr I am trying to do something very simple, and yet can't figure out the right way to specify. I simply want to exclude some named columns from mutate_at. It works fine if I specify position, but I don't want to hard code positions.
For example, I want the same output as this:
mtcars %>% mutate_at(-c(1, 2), max)
But, by specifying mpg and cyl column names.
I tried many things, including:
mtcars %>% mutate_at(-c('mpg', 'cyl'), max)
Is there a way to work with names and exclusion in mutate_at?
A: You can use vars to specify the columns, which works the same way as select() and allows you to exclude columns using -:
mtcars %>% mutate_at(vars(-mpg, -cyl), max)
|
Q: Exclude columns by names in mutate_at in dplyr I am trying to do something very simple, and yet can't figure out the right way to specify. I simply want to exclude some named columns from mutate_at. It works fine if I specify position, but I don't want to hard code positions.
For example, I want the same output as this:
mtcars %>% mutate_at(-c(1, 2), max)
But, by specifying mpg and cyl column names.
I tried many things, including:
mtcars %>% mutate_at(-c('mpg', 'cyl'), max)
Is there a way to work with names and exclusion in mutate_at?
A: You can use vars to specify the columns, which works the same way as select() and allows you to exclude columns using -:
mtcars %>% mutate_at(vars(-mpg, -cyl), max)
A: One option is to pass the strings inside one_of
mtcars %>%
mutate_at(vars(-one_of("mpg", "cyl")), max)
|
stackoverflow
|
{
"language": "en",
"length": 140,
"provenance": "stackexchange_0000F.jsonl.gz:861876",
"question_score": "39",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44532888"
}
|
3c0f62d9f58e39d3770935dc0486f9d9e05facad
|
Stackoverflow Stackexchange
Q: How can I plot a style like gnuplot's 'with impulses' with matplotlib? I'd like to create a plot like the one below with python/pandas/matplotlib. The upper clip is no problem, but I haven't been able to get a plot like the lower clip to work. I can do it in gnuplot where the equivalent plot style is 'with impulses'. Is this possible with matplotlib? If it is not possible with matplotlib is there another python graphics package that would work?
A: The easiest way to create such a plot is to use pyplot.stem.
An example can be found here.
import matplotlib.pyplot as plt
import numpy as np
x = np.linspace(0.1, 6*np.pi, 50)
plt.stem(x, np.cos(x)+1, linefmt='g-', markerfmt=' ')
plt.stem(x, -np.sin(x)-1, linefmt='r-', markerfmt=' ', basefmt="gray")
plt.show()
Another option is to use pyplot.vlines.
|
Q: How can I plot a style like gnuplot's 'with impulses' with matplotlib? I'd like to create a plot like the one below with python/pandas/matplotlib. The upper clip is no problem, but I haven't been able to get a plot like the lower clip to work. I can do it in gnuplot where the equivalent plot style is 'with impulses'. Is this possible with matplotlib? If it is not possible with matplotlib is there another python graphics package that would work?
A: The easiest way to create such a plot is to use pyplot.stem.
An example can be found here.
import matplotlib.pyplot as plt
import numpy as np
x = np.linspace(0.1, 6*np.pi, 50)
plt.stem(x, np.cos(x)+1, linefmt='g-', markerfmt=' ')
plt.stem(x, -np.sin(x)-1, linefmt='r-', markerfmt=' ', basefmt="gray")
plt.show()
Another option is to use pyplot.vlines.
A: Here is a worked example using vlines as @ImportanceOfBeingErnes suggested, which raises another question. Is one solution preferable to the other? More efficient or better in some way?
import matplotlib.pyplot as plt
import numpy as np
x = np.linspace(0.1, 6*np.pi, 50)
plt.vlines(x, 0, np.cos(x)+1, color='g')
plt.vlines(x, 0, -np.sin(x)-1, color='r')
plt.show()
|
stackoverflow
|
{
"language": "en",
"length": 182,
"provenance": "stackexchange_0000F.jsonl.gz:861900",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44532951"
}
|
24aca26fc43d81b044b41b9090862eaffc1c0d6a
|
Stackoverflow Stackexchange
Q: How can I change CSS class name dynamically with reagent? About reagent.
I need to change some CSS class name dynamically.
How should I do that?
Sample code is here.
(defn app []
(let [array [1, 2, 3]]
(fn []
[:div
(for [index array]
;; I wanna change this classname like `item-1, item-2, ...`
^{:key index} [:div.i-wanna-change-this-classname-dynamically index])])))
A: Change
[:div.i-wanna-change-this-classname-dynamically index]
to
[:div {:class (str "item-" index)} index]
Reagent provides a shorthand syntax of :div.class1.class2#id, but you can also set these in a map as the first item in the vector after :div.
Also keep in mind the CSS :nth-child() selector as another option for dynamic styling.
|
Q: How can I change CSS class name dynamically with reagent? About reagent.
I need to change some CSS class name dynamically.
How should I do that?
Sample code is here.
(defn app []
(let [array [1, 2, 3]]
(fn []
[:div
(for [index array]
;; I wanna change this classname like `item-1, item-2, ...`
^{:key index} [:div.i-wanna-change-this-classname-dynamically index])])))
A: Change
[:div.i-wanna-change-this-classname-dynamically index]
to
[:div {:class (str "item-" index)} index]
Reagent provides a shorthand syntax of :div.class1.class2#id, but you can also set these in a map as the first item in the vector after :div.
Also keep in mind the CSS :nth-child() selector as another option for dynamic styling.
|
stackoverflow
|
{
"language": "en",
"length": 109,
"provenance": "stackexchange_0000F.jsonl.gz:861986",
"question_score": "7",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44533203"
}
|
a66eec4ed22ffe536209e3bfcd1aa2326dfe8d0e
|
Stackoverflow Stackexchange
Q: Make ScrollView size automatically up to a max height I have a ScrollView that I want to grow as needed up to a certain amount, but it seems to always be the size of maxHeight:
<ScrollView style={{flex: 1, maxHeight: "50%"}}><Text>Top</Text></ScrollView>
<View style={{flex: 1}}><Text>Bottom</Text></View>
What I wanted was for the Top view to be pretty small. And if the text there was longer, it would get taller as needed, but never taller than 50% of the screen. Is that possible?
A: I have had to do something similar and found the solution to be:
<View style={{maxHeight:"50%"}}>
<ScrollView style={{flexGrow:0}}>
<Text>Top</Text>
</ScrollView>
</View>
<View style={{flex: 1}}><Text>Bottom</Text></View>
|
Q: Make ScrollView size automatically up to a max height I have a ScrollView that I want to grow as needed up to a certain amount, but it seems to always be the size of maxHeight:
<ScrollView style={{flex: 1, maxHeight: "50%"}}><Text>Top</Text></ScrollView>
<View style={{flex: 1}}><Text>Bottom</Text></View>
What I wanted was for the Top view to be pretty small. And if the text there was longer, it would get taller as needed, but never taller than 50% of the screen. Is that possible?
A: I have had to do something similar and found the solution to be:
<View style={{maxHeight:"50%"}}>
<ScrollView style={{flexGrow:0}}>
<Text>Top</Text>
</ScrollView>
</View>
<View style={{flex: 1}}><Text>Bottom</Text></View>
|
stackoverflow
|
{
"language": "en",
"length": 104,
"provenance": "stackexchange_0000F.jsonl.gz:861989",
"question_score": "12",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44533225"
}
|
60e4734fd7d68ada981532bcdd28adfb3440f01d
|
Stackoverflow Stackexchange
Q: How to print out 'Live' mouse position coordinates using pyautogui? I have used lots of different source code, and even copied and pasted, but I keep getting random symbols that shift when I move my mouse over them.
here is my code...
import pyautogui, time, sys

print('Press Ctrl-C to quit.')
try:
    while True:
        CurserPos = pyautogui.position()
        print('\b' * len(CurserPos), end='\r')
        sys.stdout.flush()
except KeyboardInterrupt:
    pass
I will show the output as an image.
I am rather new to Python and would really appreciate some expert advice.
Thanks
A: This code will print the live position of your mouse every second.
import pyautogui as py  # Import pyautogui
import time             # Import time

while True:  # Start loop
    print(py.position())
    time.sleep(1)
Pyautogui can programmatically control the mouse & keyboard.
More information about it can be found here https://pypi.org/project/PyAutoGUI/
|
Q: How to print out 'Live' mouse position coordinates using pyautogui? I have used lots of different source code, and even copied and pasted, but I keep getting random symbols that shift when I move my mouse over them.
here is my code...
import pyautogui, time, sys

print('Press Ctrl-C to quit.')
try:
    while True:
        CurserPos = pyautogui.position()
        print('\b' * len(CurserPos), end='\r')
        sys.stdout.flush()
except KeyboardInterrupt:
    pass
I will show the output as an image.
I am rather new to Python and would really appreciate some expert advice.
Thanks
A: This code will print the live position of your mouse every second.
import pyautogui as py  # Import pyautogui
import time             # Import time

while True:  # Start loop
    print(py.position())
    time.sleep(1)
Pyautogui can programmatically control the mouse & keyboard.
More information about it can be found here https://pypi.org/project/PyAutoGUI/
A: Code :
import pyautogui
pyautogui.displayMousePosition()
Here is some output :
Press Ctrl-C to quit.
X: 0 Y: 1143 RGB: ( 38, 38, 38)
Here is the video where this is being demonstrated https://youtu.be/dZLyfbSQPXI?t=809
A: Use pyautogui.displayMousePosition() instead of pyautogui.position()
A: If you want the coordinates of displayMousePosition stored in a variable, try this:
import pyautogui

def getMousePosition():
    pyautogui.displayMousePosition()
    coords = pyautogui.position()
    return coords
|
stackoverflow
|
{
"language": "en",
"length": 196,
"provenance": "stackexchange_0000F.jsonl.gz:861996",
"question_score": "11",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44533241"
}
|
cca813ecdbb6e957c9038e103e6589dc0ffe4346
|
Stackoverflow Stackexchange
Q: Getting Data from API and Parsing For context, I'm really new to web-development.
Is there a better way of getting the data from this website than removing non-numeric characters from the string you get from .read(), such as shown in this solution and then separating the two numbers?
If the python script is calling the API and getting data, how would you automate that process to refresh the data in a time period (e.g. every minute)?
A: This data is in JSON format, you can retrieve it as a Python dict using the requests library:
>>> import requests
>>> data = requests.get("https://min-api.cryptocompare.com/data/price?fsym=ETH&tsyms=BTC,USD,EUR"
).json()
>>> data
{'BTC': 0.1432, 'EUR': 343.04, 'USD': 388.04}
If you want to run it regularly there are a few different options; you could use cron (or taskscheduler on windows), or you could use a loop with time.sleep(60).
|
Q: Getting Data from API and Parsing For context, I'm really new to web-development.
Is there a better way of getting the data from this website than removing non-numeric characters from the string you get from .read(), such as shown in this solution and then separating the two numbers?
If the python script is calling the API and getting data, how would you automate that process to refresh the data in a time period (e.g. every minute)?
A: This data is in JSON format, you can retrieve it as a Python dict using the requests library:
>>> import requests
>>> data = requests.get("https://min-api.cryptocompare.com/data/price?fsym=ETH&tsyms=BTC,USD,EUR"
).json()
>>> data
{'BTC': 0.1432, 'EUR': 343.04, 'USD': 388.04}
If you want to run it regularly there are a few different options; you could use cron (or taskscheduler on windows), or you could use a loop with time.sleep(60).
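A minimal sketch of the sleep-loop option, polling the same URL from the question every 60 seconds (the print call is just a placeholder for whatever processing you need):
import time
import requests

URL = "https://min-api.cryptocompare.com/data/price?fsym=ETH&tsyms=BTC,USD,EUR"

while True:
    data = requests.get(URL).json()
    print(data)      # e.g. {'BTC': 0.1432, 'EUR': 343.04, 'USD': 388.04}
    time.sleep(60)   # wait a minute before refreshing
For anything long-running in production, cron (or Task Scheduler on Windows) is usually the more robust choice, since the script gets a fresh process on every run.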
A: That data is in JSON format, which is roughly equivalent to a dictionary in Python. I'm not an expert in Python, but I believe that you'll need to import the json module and parse the data with .loads() - then you can access the values as properties of the dictionary.
So for example, your data looks like this:
{"BTC":0.1434,"USD":387.92,"EUR":343.51}
In your script, you'll import json, put the data into a variable, and parse it as a dictionary:
import json
json_string = '{"BTC":0.1434,"USD":387.92,"EUR":343.51}'
parsed_json = json.loads(json_string)
Now if you reference parsed_json, you can access the values:
print parsed_json['BTC']
# 0.1434
print parsed_json['EUR']
# 343.51
And so on.
Edit
After re-reading your question, I feel like what you are aiming for is some combination of the accepted answer and mine. Here's what I think you're looking for (borrowing from the accepted answer):
>>> import requests
>>> data = requests.get("https://min-api.cryptocompare.com/data/price?fsym=ETH&tsyms=BTC,USD,EUR"
).json()
>>> data['USD']
387.92
>>> data['BTC']
0.1434
The data returned by requests.get() is already parsed, so there's no need to parse it again with json.loads(). To access the value of a dictionary's attribute, type the name of the dictionary and then the attribute in brackets.
A: Python has the ability to parse a JSON response from an API into a dictionary.
https://pythonspot.com/en/json-encoding-and-decoding-with-python/ gives a good tutorial on using json. To automate it to run every minute, take a look at What is the best way to repeatedly execute a function every x seconds in Python?. I hope this is helpful.
|
stackoverflow
|
{
"language": "en",
"length": 387,
"provenance": "stackexchange_0000F.jsonl.gz:861997",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44533244"
}
|
b95fddc05c1c70650e14842d3970856dded25265
|
Stackoverflow Stackexchange
Q: How to assign more memory to docker container As the title reads, I'm trying to assign more memory to my container. I'm using an image from docker hub called "aallam/tomcat-mysql" in case that's relevant.
When I start it normally without any special flags, there's a memory limit of 2GB (even though I read that memory is unbounded if not set)
Here are my docker stats
CONTAINER CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
ba57d6c9e9d2 0.22% 145.6 MiB / 1.952 GiB 7.29% 508 B / 508 B 0 B / 6.91 MB 68
I tried setting memory explicitly like so but with same results
docker run -d --memory=10g --memory-swap=-1 -e MYSQL_PASSWORD=password -p 3307:3306 -p 8081:8080 aallam/tomcat-mysql
I've read that perhaps the VM is what's restricting it. But then why does docker stats show that container size limit is 2GB?
A: Screen shots for Docker Desktop V3.3.3 (Mac)
|
Q: How to assign more memory to docker container As the title reads, I'm trying to assign more memory to my container. I'm using an image from docker hub called "aallam/tomcat-mysql" in case that's relevant.
When I start it normally without any special flags, there's a memory limit of 2GB (even though I read that memory is unbounded if not set)
Here are my docker stats
CONTAINER CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
ba57d6c9e9d2 0.22% 145.6 MiB / 1.952 GiB 7.29% 508 B / 508 B 0 B / 6.91 MB 68
I tried setting memory explicitly like so but with same results
docker run -d --memory=10g --memory-swap=-1 -e MYSQL_PASSWORD=password -p 3307:3306 -p 8081:8080 aallam/tomcat-mysql
I've read that perhaps the VM is what's restricting it. But then why does docker stats show that container size limit is 2GB?
A: Screen shots for Docker Desktop V3.3.3 (Mac)
A: Allocate maximum memory to your Docker machine from (Docker preferences -> Advanced).
Screenshot of the Advanced settings:
This will set the maximum limit Docker consumes while running containers. Now run your image in a new container with the -m=4g flag for 4 GB of RAM or more, e.g.
docker run -m=4g {imageID}
Remember to apply the RAM limit increase changes, then restart Docker and double-check that the RAM limit did increase. This can be one of the reasons you don't see the RAM limit increase in Docker containers.
A: That 2GB limit you see is the total memory of the VM (virtual machine) on which docker runs.
If you are using Docker Desktop you can easily increase it from the Whale icon in the task bar, then go to Preferences -> Advanced:
But if you are using VirtualBox behind the scenes, open VirtualBox, then select and configure the memory assigned to the docker-machine.
See this for Mac:
https://docs.docker.com/desktop/settings/mac/#advanced
MEMORY
By default, Docker for Mac is set to use 2 GB runtime memory, allocated from the total available memory on your Mac. You can increase the RAM on the app to get faster performance by setting this number higher (for example to 3) or lower (to 1) if you want Docker for Mac to use less memory.
For Windows:
https://docs.docker.com/desktop/settings/windows/#advanced
Memory - Change the amount of memory the Docker for Windows' Linux VM uses
A: If you want to change the default docker-machine VM and you are using VirtualBox, you can do it via the command line / CLI:
docker-machine stop
VBoxManage modifyvm default --cpus 2
VBoxManage modifyvm default --memory 4096
docker-machine start
|
stackoverflow
|
{
"language": "en",
"length": 416,
"provenance": "stackexchange_0000F.jsonl.gz:862020",
"question_score": "189",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44533319"
}
|
6fc499d80c6282813ab4765a8ecb670cece05bda
|
Stackoverflow Stackexchange
Q: Compiler fails to import type from global.d.ts Here is the full NPM package on GitHub.
tsconfig.json
{
"compilerOptions": {
"target": "es5",
"module": "commonjs"
}
}
global.d.ts
interface Foo { }
index.ts
const x: Foo = {};
This is what happens when we build:
$ \node_modules\.bin\tsc .\index.ts
index.ts(1,10): error TS2304: Cannot find name 'Foo'.
This is our version:
$ .\node_modules\.bin\tsc --version
Version 2.3.4
These are the files that tsc lists:
$ .\node_modules\.bin\tsc --listFiles
C:/temp/node_modules/typescript/lib/lib.d.ts
C:/temp/global.d.ts
C:/temp/index.ts
How can we automatically load Foo into the index.ts file?
Research
The documentation on global.d.ts indicates that the above should work.
A: You have to pass the global.d.ts file as part of the tsc's argument as well:
$ \node_modules\.bin\tsc .\index.ts .\global.d.ts
But note that by specifying the files you are ignoring your tsconfig.json file. So if you want to use your tsconfig.json file, just call tsc without any parameters and it will use the files listed when you do tsc --listFiles.
From the documentation:
When input files are specified on the command line, tsconfig.json
files are ignored.
|
Q: Compiler fails to import type from global.d.ts Here is the full NPM package on GitHub.
tsconfig.json
{
"compilerOptions": {
"target": "es5",
"module": "commonjs"
}
}
global.d.ts
interface Foo { }
index.ts
const x: Foo = {};
This is what happens when we build:
$ \node_modules\.bin\tsc .\index.ts
index.ts(1,10): error TS2304: Cannot find name 'Foo'.
This is our version:
$ .\node_modules\.bin\tsc --version
Version 2.3.4
These are the files that tsc lists:
$ .\node_modules\.bin\tsc --listFiles
C:/temp/node_modules/typescript/lib/lib.d.ts
C:/temp/global.d.ts
C:/temp/index.ts
How can we automatically load Foo into the index.ts file?
Research
The documentation on global.d.ts indicates that the above should work.
A: You have to pass the global.d.ts file as part of the tsc's argument as well:
$ \node_modules\.bin\tsc .\index.ts .\global.d.ts
But note that by specifying the files you are ignoring your tsconfig.json file. So if you want to use your tsconfig.json file, just call tsc without any parameters and it will use the files listed when you do tsc --listFiles.
From the documentation:
When input files are specified on the command line, tsconfig.json
files are ignored.
|
stackoverflow
|
{
"language": "en",
"length": 173,
"provenance": "stackexchange_0000F.jsonl.gz:862023",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44533334"
}
|
d48109424fc25a665f7260a79bb6881189903884
|
Stackoverflow Stackexchange
Q: Neo4j / Cypher: Is CREATE UNIQUE deprecated? When I write a simple Cypher query like this:
MATCH (r:Person {name:'Jon'})
MATCH (s:Person {name:'Ana'})
CREATE UNIQUE (r)-[:FRIEND_OF]->(s)
I'm receiving an alert messsage in the Neo4j browser. The alert message says:
The RULE planner is not available in the current CYPHER version, the
query has been run by an older CYPHER version. CREATE UNIQUE is
unsupported for current CYPHER version, the query has been execute by
an older CYPHER version
Here a print screen of the alert message:
I searched for this message in the Neo4j GitHub and did not find anything. Also, the docs make no mention of any deprecation.
My question is: Is CREATE UNIQUE deprecated? Why?
I'm using Neo4j 3.2.1.
Thanks.
PS: I know my query can be refactored. It is only an example. Also, all refactorings of the query using CREATE UNIQUE show the same alert message in the Neo4j browser.
A: CREATE UNIQUE is set to be completely replaced by MERGE. So your syntax would be :
MATCH (r:Person {name:'Jon'})
MATCH (s:Person {name:'Ana'})
MERGE (r)-[:FRIEND_OF]->(s)
Regards,
Tom
|
Q: Neo4j / Cypher: Is CREATE UNIQUE deprecated? When I write a simple Cypher query like this:
MATCH (r:Person {name:'Jon'})
MATCH (s:Person {name:'Ana'})
CREATE UNIQUE (r)-[:FRIEND_OF]->(s)
I'm receiving an alert messsage in the Neo4j browser. The alert message says:
The RULE planner is not available in the current CYPHER version, the
query has been run by an older CYPHER version. CREATE UNIQUE is
unsupported for current CYPHER version, the query has been execute by
an older CYPHER version
Here a print screen of the alert message:
I searched for this message in the Neo4j GitHub and did not find anything. Also, the docs make no mention of any deprecation.
My question is: Is CREATE UNIQUE deprecated? Why?
I'm using Neo4j 3.2.1.
Thanks.
PS: I know my query can be refactored. It is only an example. Also, all refactorings of the query using CREATE UNIQUE show the same alert message in the Neo4j browser.
A: CREATE UNIQUE is set to be completely replaced by MERGE. So your syntax would be :
MATCH (r:Person {name:'Jon'})
MATCH (s:Person {name:'Ana'})
MERGE (r)-[:FRIEND_OF]->(s)
Regards,
Tom
A: Try this
MATCH (lft:Person {name:'Jon'}),(rgt)
WHERE rgt.name IN ['Ana']
CREATE UNIQUE (lft)-[r:KNOWS]->(rgt)
RETURN r
note that you can search for multiple names too like this
MATCH (lft:Person {name:'Jon'}),(rgt)
WHERE rgt.name IN ['Ana','Maria']
CREATE UNIQUE (lft)-[r:KNOWS]->(rgt)
RETURN r
|
stackoverflow
|
{
"language": "en",
"length": 219,
"provenance": "stackexchange_0000F.jsonl.gz:862097",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44533551"
}
|
2243d14dda4763cc4104bed04a93f591ff3ef628
|
Stackoverflow Stackexchange
Q: swagger-codegen add annotation based on custom property description I am trying to do something like the following:
In my schema JSON models section:
"MyObject": {
"type": "object",
"description": "my description",
"properties": {
"type": "string",
"description": "my property description",
"customAnnotation": "true"
}
}
So right out of the gate, I'm trying to extend JSON Schema - likely my first problem. However, I do not know how to do this legitimately, if that is even possible.
Snippet of the use case for "customAnnotation" in the mustache template (-l spring):
{{#vars}}
{{^customAnnotation}}@CustomAnnotation {{/customAnnotation}}public {{{datatypeWithEnum}}} {{getter}}() {
return {{name}};
}
{{/vars}}
Can I actually do something like this? Clues are helpful (yes, I'm a newbie in this area)!
Note: I would also like to use the count of found "customAnnotation" > 0 to annotate a class. Something like:
{{^containsCustomAnnotations}}@ContainsCustomAnnotations {{/hasCustomAnnotation}}public void MyClass {
}
Thanks!
A: For the first part, schema:
"MyObject": {
"type": "object",
"description": "my description",
"properties": {
"foo": {
"type": "string",
"description": "my property description",
"x-customAnnotation": true
}
}
}
and template:
{{#vendorExtensions.x-customAnnotation}}@CustomAnnotation {{/vendorExtensions.x-customAnnotation}}public {{{datatypeWithEnum}}} {{getter}}() {
return {{name}};
}
cf. Swagger Codegen :- Vendor Extensions are not accessible
|
Q: swagger-codegen add annotation based on custom property description I am trying to do something like the following:
In my schema JSON models section:
"MyObject": {
"type": "object",
"description": "my description",
"properties": {
"type": "string",
"description": "my property description",
"customAnnotation": "true"
}
}
So right out of the gate, I'm trying to extend JSON Schema - likely my first problem. However, I do not know how to do this legitimately, if that is even possible.
Snippet of the use case for "customAnnotation" in the mustache template (-l spring):
{{#vars}}
{{^customAnnotation}}@CustomAnnotation {{/customAnnotation}}public {{{datatypeWithEnum}}} {{getter}}() {
return {{name}};
}
{{/vars}}
Can I actually do something like this? Clues are helpful (yes, I'm a newbie in this area)!
Note: I would also like to use the count of found "customAnnotation" > 0 to annotate a class. Something like:
{{^containsCustomAnnotations}}@ContainsCustomAnnotations {{/hasCustomAnnotation}}public void MyClass {
}
Thanks!
A: For the first part, schema:
"MyObject": {
"type": "object",
"description": "my description",
"properties": {
"foo": {
"type": "string",
"description": "my property description",
"x-customAnnotation": true
}
}
}
and template:
{{#vendorExtensions.x-customAnnotation}}@CustomAnnotation {{/vendorExtensions.x-customAnnotation}}public {{{datatypeWithEnum}}} {{getter}}() {
return {{name}};
}
cf. Swagger Codegen :- Vendor Extensions are not accessible
|
stackoverflow
|
{
"language": "en",
"length": 186,
"provenance": "stackexchange_0000F.jsonl.gz:862108",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44533587"
}
|
570d461831505e93b0aa5d7da3fe4813d29b58dc
|
Stackoverflow Stackexchange
Q: How to use ImmutableJS's reduce function with an Immutable accumulator? I'm trying to use ImmutableJS's reduce function like so:
const myChanges = data
.getIn(['a', 'b']) // Immutable.List
.reduce((accum, data) => {
// console.log('accum', accum);
// console.log('data', data); <--- an Immutable.Map
accum.push(List([ data.get('id'), data.get('time') ]));
return accum;
}, List());
However, accum always console logs as an empty Immutable.List. My guess is that it's because List() can't be mutated, so each time accum is returning a new empty Immutable.List().
What I would like to get is a list of lists, where each of the inner lists (essentially a tuple) consists of id and time.
How can I get this to work? Do I need to use withMutations?
A: In Immutable.js all data is 'readonly'. When you call the push method, it returns a new object with the changes.
For this code to work you need to return directly:
const myChanges = data.getIn(['a', 'b'])
.reduce((accum, data) => {
return accum.push(List([ data.get('id'), data.get('time') ]));
}, List());
or you can store the result in a variable and return it
const myChanges = data.getIn(['a', 'b'])
.reduce((accum, data) => {
let result = accum.push(List([ data.get('id'), data.get('time') ]));
return result;
}, List());
|
Q: How to use ImmutableJS's reduce function with an Immutable accumulator? I'm trying to use ImmutableJS's reduce function like so:
const myChanges = data
.getIn(['a', 'b']) // Immutable.List
.reduce((accum, data) => {
// console.log('accum', accum);
// console.log('data', data); <--- an Immutable.Map
accum.push(List([ data.get('id'), data.get('time') ]));
return accum;
}, List());
However, accum always console logs as an empty Immutable.List. My guess is that it's because List() can't be mutated, so each time accum is returning a new empty Immutable.List().
What I would like to get is a list of lists, where each of the inner lists (essentially a tuple) consists of id and time.
How can I get this to work? Do I need to use withMutations?
A: In Immutable.js all data is 'readonly'. When you call the push method, it returns a new object with the changes.
For this code to work you need to return directly:
const myChanges = data.getIn(['a', 'b'])
.reduce((accum, data) => {
return accum.push(List([ data.get('id'), data.get('time') ]));
}, List());
or you can store the result in a variable and return it
const myChanges = data.getIn(['a', 'b'])
.reduce((accum, data) => {
let result = accum.push(List([ data.get('id'), data.get('time') ]));
return result;
}, List());
A: Your problem is that you return the original accum, not the new value that includes the data. Remember that push returns a new list, unlike a mutable JS Array!
const myChanges = data
.getIn(['a', 'b'])
.reduce((accum, d) => {
return accum.push(List([ d.get('id'), d.get('time') ]));
}, List());
|
stackoverflow
|
{
"language": "en",
"length": 242,
"provenance": "stackexchange_0000F.jsonl.gz:862134",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44533642"
}
|
cb135f7367f6386f46c8a49f7f4f5259f07f55f8
|
Stackoverflow Stackexchange
Q: Where Can I Find My Zend Studio License Code I purchased my Zend Studio license from Zend about 18 hours ago and have received the receipt email and can see the license numbers sitting in my Zend account, but I cannot find how to get my license code from that.
Anyone purchased a license for Zend Studio and can tell me what the process is to convert a license number into a license code?
Thanks
Edit #1
A check of the Zend Studio support page at http://files.zend.com/help/Zend-Studio/content/registering_your_license.htm?Highlight=license shows that the instructions say that I need to go to the help menu and click on the Register link in order to enter my order number and license number, but when I check my Help menu this does not appear anywhere. The only place anything to do with a license is in the Zend Studio License window, but it is asking for a license key, and when I enter my license number, it says it is invalid.
|
Q: Where Can I Find My Zend Studio License Code I purchased my Zend Studio license from Zend about 18 hours ago and have received the receipt email and can see the license numbers sitting in my Zend account, but I cannot find how to get my license code from that.
Anyone purchased a license for Zend Studio and can tell me what the process is to convert a license number into a license code?
Thanks
Edit #1
A check of the Zend Studio support page at http://files.zend.com/help/Zend-Studio/content/registering_your_license.htm?Highlight=license shows that the instructions say that I need to go to the help menu and click on the Register link in order to enter my order number and license number, but when I check my Help menu this does not appear anywhere. The only place anything to do with a license is in the Zend Studio License window, but it is asking for a license key, and when I enter my license number, it says it is invalid.
|
stackoverflow
|
{
"language": "en",
"length": 166,
"provenance": "stackexchange_0000F.jsonl.gz:862172",
"question_score": "6",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44533741"
}
|
06fcaa7bdb648676c7f077c3b1d8f5c260cbd7ce
|
Stackoverflow Stackexchange
Q: How to validate Angular2 form controls that depend on eachother? I am trying to create a time picker component that binds to a reactive form control. I want the form control to be invalid if the end time is before the start time. I am aware of the built in validators that Angular comes with, but I want to be able to validate against another form control. If someone could point me in the right direction that would be great.
A: This is the approach I take:
* Define the validator on the FormGroup containing the two controls.
* In the validator, use the FormGroup object that is passed in to retrieve the values of the two FormControl objects and perform the validation.
|
Q: How to validate Angular2 form controls that depend on eachother? I am trying to create a time picker component that binds to a reactive form control. I want the form control to be invalid if the end time is before the start time. I am aware of the built in validators that Angular comes with, but I want to be able to validate against another form control. If someone could point me in the right direction that would be great.
A: This is the approach I take:
* Define the validator on the FormGroup containing the two controls.
* In the validator, use the FormGroup object that is passed in to retrieve the values of the two FormControl objects and perform the validation.
|
stackoverflow
|
{
"language": "en",
"length": 123,
"provenance": "stackexchange_0000F.jsonl.gz:862187",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44533770"
}
|
6f1537ff5d63a98e821e7d3ec4d285b2d0ec9edd
|
Stackoverflow Stackexchange
Q: Maximum PID on osx I found this question about what's the maximum PID for Linux and my question is exactly the same for OSX :
OSX doesn't seem to have the /proc/sys/kernel/pid_max file containing this value on Linux.
Is there an equivalent file or another way to find out what the range of PIDs is on an OSX system?
A: The maximum PID on macOS is 99998.
Unlike on Linux, this value is not tunable. I'm not aware of any way to retrieve it in a program; the only assumption you should make is that the value of a process ID will fit into the pid_t type.
|
Q: Maximum PID on osx I found this question about what's the maximum PID for Linux and my question is exactly the same for OSX :
OSX doesn't seem to have the /proc/sys/kernel/pid_max file containing this value on Linux.
Is there an equivalent file or another way to find out what the range of PIDs is on an OSX system?
A: The maximum PID on macOS is 99998.
Unlike on Linux, this value is not tunable. I'm not aware of any way to retrieve it in a program; the only assumption you should make is that the value of a process ID will fit into the pid_t type.
|
stackoverflow
|
{
"language": "en",
"length": 109,
"provenance": "stackexchange_0000F.jsonl.gz:862201",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44533802"
}
|
3fbb898371d8a1451338bfab517413afd02f80ba
|
Stackoverflow Stackexchange
Q: Get list of all available pip packages and their versions I am writing a system where I need to get a list of all available packages that can be installed via the pip running on my machine and their default versions. The reason being I need a way to make a production build of my system reproducible, even if someone manually upgraded a single package for pip.
I currently have this one liner to accomplish it, but it doesn't always work cleanly and I'd prefer to steer away from text parsing if at all possible.
$ pip search * | awk '{print $1 $2}' | cut -d ')' -f 1 | awk -F'(' '{print $1"=="$2}'
Is there an easy way to do this in pip? It would be nice if there was an equivalent to pip freeze but for all the available packages instead of just what's installed.
A: See PyPI Simple API on how to get the list of all available packages without versions.
|
Q: Get list of all available pip packages and their versions I am writing a system where I need to get a list of all available packages that can be installed via the pip running on my machine and their default versions. The reason being I need a way to make a production build of my system reproducible, even if someone manually upgraded a single package for pip.
I currently have this one liner to accomplish it, but it doesn't always work cleanly and I'd prefer to steer away from text parsing if at all possible.
$ pip search * | awk '{print $1 $2}' | cut -d ')' -f 1 | awk -F'(' '{print $1"=="$2}'
Is there an easy way to do this in pip? It would be nice if there was an equivalent to pip freeze but for all the available packages instead of just what's installed.
A: See PyPI Simple API on how to get the list of all available packages without versions.
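A minimal sketch of fetching that list with requests, assuming the https://pypi.org/simple/ index (the response is a large HTML page with one link per package name):
import re
import requests

html = requests.get("https://pypi.org/simple/").text
packages = re.findall(r'<a[^>]*>([^<]+)</a>', html)

print(len(packages))   # number of packages on PyPI
print(packages[:10])   # first few package names
Note that, as the answer says, the Simple API only lists package names, not versions.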
|
stackoverflow
|
{
"language": "en",
"length": 166,
"provenance": "stackexchange_0000F.jsonl.gz:862212",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44533833"
}
|
e64860b5745f663c8a9a9d6a57fc4096f4c89f5e
|
Stackoverflow Stackexchange
Q: Space after symbol with JS Intl I want to format a currency with NumberFormat of Intl and get the returned value with a space " " between the symbol and the number.
new Intl.NumberFormat('pt-br', { style: 'currency', currency: 'USD' }).format(12345)
// "US$12.345,00"
new Intl.NumberFormat('pt-br', { style: 'currency', currency: 'BRL' }).format(12345)
// "R$12.345,00"
What I want: "US$ 12.345,00", "R$ 12.345,00"
Any ideas?
A:
console.log(formatPrice(1200, 'en-US', 'USD'));
console.log(formatPrice(1200, 'fr-FR', 'EUR'));
function formatPrice(value, locale, currency) {
return new Intl.NumberFormat(locale, { style: 'currency', currency })
.format(value)
// if the price begins with digit, place the space after the digit
.replace(/^([\d,.]+)/, '$1 ')
// if the price ends with digit, place the space before the digit
.replace(/([\d,.]+)$/, ' $1')
}
|
Q: Space after symbol with JS Intl I want to format a currency with NumberFormat of Intl and get the returned value with a space " " between the symbol and the number.
new Intl.NumberFormat('pt-br', { style: 'currency', currency: 'USD' }).format(12345)
// "US$12.345,00"
new Intl.NumberFormat('pt-br', { style: 'currency', currency: 'BRL' }).format(12345)
// "R$12.345,00"
What I want: "US$ 12.345,00", "R$ 12.345,00"
Any ideas?
A:
console.log(formatPrice(1200, 'en-US', 'USD'));
console.log(formatPrice(1200, 'fr-FR', 'EUR'));
function formatPrice(value, locale, currency) {
return new Intl.NumberFormat(locale, { style: 'currency', currency })
.format(value)
// if the price begins with digit, place the space after the digit
.replace(/^([\d,.]+)/, '$1 ')
// if the price ends with digit, place the space before the digit
.replace(/([\d,.]+)$/, ' $1')
}
A: You can use replace to further format the currency.
var usd = new Intl.NumberFormat('pt-br', { style: 'currency', currency: 'USD' }).format(12345).replace(/^(\D+)/, '$1 ');
var euro = new Intl.NumberFormat('pt-br', { style: 'currency', currency: 'EUR' }).format(12345).replace(/^(\D+)/, '$1 ');
var deEuro = new Intl.NumberFormat('de', { style: 'currency', currency: 'EUR' }).format(12345).replace(/^(\D+)/, '$1 ');
console.log(usd);
console.log(euro);
console.log(deEuro);
Update
There is currently an issue with Intl.js where some browsers put a space between the currency and the value resulting in the output that OP wanted. In that case, the formatting above will result in 2 spaces (as seen in the comments below).
You can add on .replace(/\s+/g, ' ') to replace multiple spaces with a single space. This will ensure that if a space was added by the browser due to the above issue, the final output will still have a single space as expected.
var usd = new Intl.NumberFormat('pt-br', { style: 'currency', currency: 'USD' }).format(12345).replace(/^(\D+)/, '$1 ').replace(/\s+/, ' ');
var euro = new Intl.NumberFormat('pt-br', { style: 'currency', currency: 'EUR' }).format(12345).replace(/^(\D+)/, '$1 ').replace(/\s+/, ' ');
var deEuro = new Intl.NumberFormat('de', { style: 'currency', currency: 'EUR' }).format(12345).replace(/^(\D+)/, '$1 ').replace(/\s+/, ' ');
console.log(usd);
console.log(euro);
console.log(deEuro);
|
stackoverflow
|
{
"language": "en",
"length": 303,
"provenance": "stackexchange_0000F.jsonl.gz:862241",
"question_score": "13",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44533919"
}
|
d0d9746198044c31e743123265e5469c219d1edb
|
Stackoverflow Stackexchange
Q: Hide search highlight in VSCode Vim I'm trying to switch from MacVim to VSCode and I use VSCode Vim extension. The most annoying thing I found so far is: if I search with / command - I can't disable a highlighting of search results.
Could you pls help me to find a way how to hide search result highlighting after I've done with search?
A: Alternatively, you can also just set vim.hlsearch to false.
|
Q: Hide search highlight in VSCode Vim I'm trying to switch from MacVim to VSCode and I use VSCode Vim extension. The most annoying thing I found so far is: if I search with / command - I can't disable a highlighting of search results.
Could you pls help me to find a way how to hide search result highlighting after I've done with search?
A: Alternatively, you can also just set vim.hlsearch to false.
A: You can also just bind it to escape, which isn't bound to anything in normal mode.
"vim.normalModeKeyBindingsNonRecursive": [
{
"before": [
"<Esc>"
],
"commands": [
":nohl"
],
}
]
A: I've found an answer:
in settings.json
"vim.normalModeKeyBindingsNonRecursive": [
{
"before":["<C-n>"],
"after":[],
"commands": [
{
"command": ":nohl"
}
]
}
]
|
stackoverflow
|
{
"language": "en",
"length": 126,
"provenance": "stackexchange_0000F.jsonl.gz:862243",
"question_score": "11",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44533925"
}
|
9c0ed3e55e64bd08ae75d520bccf7dc42183be10
|
Stackoverflow Stackexchange
Q: V8/Node.js increase max allowed String length AFAIK V8 has a known hard limit on the length of allowed Strings. Trying to parse >500MB Strings will pop the error:
Invalid String Length
Using V8 flags to increase the heap size doesn't make any difference
$ node --max_old_space_size=5000 process-large-string.js
I know that I should be using Streams instead. However is there any way to increase the maximum allowed String length anyway?
Update: Answer from @PaulIrish below indicates they upped it to 1GB - but it's still not user-configurable
A: Sorry, no, there is no way to increase the maximum allowed String length.
It is hard-coded in the source, and a lot of code implicitly relies on it, so while allowing larger strings is known to be on people's wishlist, it is going to be a lot of work and won't happen in the near future.
|
Q: V8/Node.js increase max allowed String length AFAIK V8 has a known hard limit on the length of allowed Strings. Trying to parse >500MB Strings will pop the error:
Invalid String Length
Using V8 flags to increase the heap size doesn't make any difference
$ node --max_old_space_size=5000 process-large-string.js
I know that I should be using Streams instead. However is there any way to increase the maximum allowed String length anyway?
Update: Answer from @PaulIrish below indicates they upped it to 1GB - but it's still not user-configurable
A: Sorry, no, there is no way to increase the maximum allowed String length.
It is hard-coded in the source, and a lot of code implicitly relies on it, so while allowing larger strings is known to be on people's wishlist, it is going to be a lot of work and won't happen in the near future.
A: In summer 2017, V8 increased the maximum size of strings from ~256MB to ~1GB. Specifically, from 2^28 - 16 to 2^30 - 25 on 64-bit platforms. V8 ticket.
This change landed in:
*
*V8: 6.2.100
*Chromium: 62.0.3167.0
*Node.js: 9.0.0
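For strings beyond the limit, the stream-based approach the question mentions is the practical route; a minimal Node.js sketch (the file name is a placeholder):
const fs = require('fs');
const readline = require('readline');

// Process the file line by line instead of loading it into one giant string.
const rl = readline.createInterface({
  input: fs.createReadStream('huge-file.txt', { encoding: 'utf8' }),
  crlfDelay: Infinity
});

let lines = 0;
rl.on('line', () => { lines += 1; });
rl.on('close', () => console.log(`processed ${lines} lines`));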
|
stackoverflow
|
{
"language": "en",
"length": 184,
"provenance": "stackexchange_0000F.jsonl.gz:862266",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44533966"
}
|
1fc574a669f7168c12d86df59440cca580d4a87c
|
Stackoverflow Stackexchange
Q: How to invoke method by function pointer? Invoking a method
Normal way :
QMetaObject::invokeMethod(obj, "function");
But instead of using string.This is what I want :
QMetaObject::invokeMethod(obj, function());
// or any macro like SLOT
QMetaObject::invokeMethod(obj, FUNC_NAME(function()));
A: I strongly recommend you to use the normal way, i.e. using QMetaObject::invokeMethod(obj, "function"). However, if you want you can use the following stringify macro:
#define FUNC_NAME(a) (QString(#a).remove(QRegExp("\\((.*)\\)")).trimmed().toLatin1().constData())
//usage
QMetaObject::invokeMethod(obj, FUNC_NAME(function()));
The above macro converts its argument to a string and then removes the method/function arguments between (...).
|
Q: How to invoke method by function pointer? Invoking a method
Normal way :
QMetaObject::invokeMethod(obj, "function");
But instead of using string.This is what I want :
QMetaObject::invokeMethod(obj, function());
// or any macro like SLOT
QMetaObject::invokeMethod(obj, FUNC_NAME(function()));
A: I strongly recommend you to use the normal way, i.e. using QMetaObject::invokeMethod(obj, "function"). However, if you want you can use the following stringify macro:
#define FUNC_NAME(a) (QString(#a).remove(QRegExp("\\((.*)\\)")).trimmed().toLatin1().constData())
//usage
QMetaObject::invokeMethod(obj, FUNC_NAME(function()));
The above macro converts its argument to a string and then removes the method/function arguments between (...).
|
stackoverflow
|
{
"language": "en",
"length": 80,
"provenance": "stackexchange_0000F.jsonl.gz:862312",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44534090"
}
|
0e8af3c0801ceaf75e9e0c2b1244d1a93b685500
|
Stackoverflow Stackexchange
Q: Cordova - Youtube iFrame Embed - Fullscreen No Video This is a weird issue. Running on Cordova Android, if I have a iFrame embed Youtube video with width and height set to 100%:
<iframe width="100%" height="100%" src="https://www.youtube.com/embed/2Xk744838J4" frameborder="0" allowfullscreen></iframe>
When switching to fullscreen using the native control, the screen becomes black. No video, but audio keeps playing. Rotating the screen doesn't help.
If I set a fixed width and height:
<iframe width="560" height="315" src="https://www.youtube.com/embed/2Xk744838J4" frameborder="0" allowfullscreen></iframe>
When switching to fullscreen, at first the screen is still black. Once I rotate the screen, the video shows. Better than the first scenario.
Sometimes it just works for both scenarios, but most of the time it has the issues described above.
I'm running on Android 7.0, Samsung S7. Is it just my device, or some kind of bug?
|
Q: Cordova - Youtube iFrame Embed - Fullscreen No Video This is a weird issue. Running on Cordova Android, if I have a iFrame embed Youtube video with width and height set to 100%:
<iframe width="100%" height="100%" src="https://www.youtube.com/embed/2Xk744838J4" frameborder="0" allowfullscreen></iframe>
When switching to fullscreen using the native control, the screen becomes black. No video, but audio keeps playing. Rotating the screen doesn't help.
If I set a fixed width and height:
<iframe width="560" height="315" src="https://www.youtube.com/embed/2Xk744838J4" frameborder="0" allowfullscreen></iframe>
When switching to fullscreen, at first the screen is still black. Once I rotate the screen, the video shows. Better than the first scenario.
Sometimes it just works for both scenarios, but most of the time it has the issues described above.
I'm running on Android 7.0, Samsung S7. Is it just my device, or some kind of bug?
|
stackoverflow
|
{
"language": "en",
"length": 136,
"provenance": "stackexchange_0000F.jsonl.gz:862323",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44534132"
}
|
f26df75ad0a1f82cc11bbb2bc68c3b71dc9b58d6
|
Stackoverflow Stackexchange
Q: Laravel how to add new field in query result How can I add a new field to each item? I have used put(), but it only adds to the last item.
return self::where('latest', 1)
->where('competitionId',$competitionId)
->orderBy('won','desc')
->orderBy('teamName','asc')
->get(['teamName','played','won','lost','percentage', 'streak'])
->put('test', ['123', '345'])
->toJson();
Result:
{
"0": {"teamName": "A"},
"1": {"teamName": "B"},
"2": {"teamName": "C", "test": ['123', '345']},
}
Expected output:
{
"0": {"teamName": "A", "test": "qwerty"},
"1": {"teamName": "B", "test": "qwerty"},
"2": {"teamName": "C", "test": "qwerty"},
}
A: you can use map()
->map(function ($item) {
$item['test'] = ['123', '345'];
return $item;
});
|
Q: Laravel how to add new field in query result How can I add a new field to each item? I have used put(), but it only adds to the last item.
return self::where('latest', 1)
->where('competitionId',$competitionId)
->orderBy('won','desc')
->orderBy('teamName','asc')
->get(['teamName','played','won','lost','percentage', 'streak'])
->put('test', ['123', '345'])
->toJson();
Result:
{
"0": {"teamName": "A"},
"1": {"teamName": "B"},
"2": {"teamName": "C", "test": ['123', '345']},
}
Expected output:
{
"0": {"teamName": "A", "test": "qwerty"},
"1": {"teamName": "B", "test": "qwerty"},
"2": {"teamName": "C", "test": "qwerty"},
}
A: you can use map()
->map(function ($item) {
$item['test'] = ['123', '345'];
return $item;
});
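Plugged into the original query, the map() call might sit like this (an untested sketch reusing the question's model and columns):
return self::where('latest', 1)
    ->where('competitionId', $competitionId)
    ->orderBy('won', 'desc')
    ->orderBy('teamName', 'asc')
    ->get(['teamName', 'played', 'won', 'lost', 'percentage', 'streak'])
    ->map(function ($item) {
        $item['test'] = 'qwerty'; // add the extra field to every item
        return $item;
    })
    ->toJson();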
|
stackoverflow
|
{
"language": "en",
"length": 92,
"provenance": "stackexchange_0000F.jsonl.gz:862326",
"question_score": "7",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44534142"
}
|
be58d424552bac03ed41e17ca9083649b9af1d6d
|
Stackoverflow Stackexchange
Q: How to download a file from a server with express I am using node.js and express. I want to create a file on the server and then download when the end point is hit.
Here is the code I currently have.
router.get('/download', (req, res) => {
const fileController = new FileController();
fileController.generateJSONFile()
.then((file) => {
fs.writeFile('fooFile.json', file, 'utf8');
}).then((success) => {
res.download('fooFile.json');
})
.catch((error) => {
res.status(500).send();
});
});
I would also like it to immediately delete the file off the server after the download is completed.
I do not need to use res.download() if there is a better way to accomplish this goal.
A: Would this help? This should tell the browser to download the file as fooFile.json. This would not require saving a temporary file.
router.get('/download', (req, res) => {
const fileController = new FileController();
fileController.generateJSONFile()
.then((file) => {
res.setHeader('Content-Type', 'application/octet-stream; charset=utf-8');
res.setHeader('Content-Disposition', 'attachment; filename="fooFile.json"');
res.send(file);
})
.catch((error) => {
res.status(500).send();
});
});
if you just want to send json.
res.setHeader('Content-Type', 'application/json; charset=utf-8');
res.send(file);
|
Q: How to download a file from a server with express I am using node.js and express. I want to create a file on the server and then download when the end point is hit.
Here is the code I currently have.
router.get('/download', (req, res) => {
const fileController = new FileController();
fileController.generateJSONFile()
.then((file) => {
fs.writeFile('fooFile.json', file, 'utf8');
}).then((success) => {
res.download('fooFile.json');
})
.catch((error) => {
res.status(500).send();
});
});
I would also like it to immediately delete the file off the server after the download is completed.
I do not need to use res.download() if there is a better way to accomplish this goal.
A: Would this help? This should tell the browser to download the file as fooFile.json. This would not require saving a temporary file.
router.get('/download', (req, res) => {
const fileController = new FileController();
fileController.generateJSONFile()
.then((file) => {
res.setHeader('Content-Type', 'application/octet-stream; charset=utf-8');
res.setHeader('Content-Disposition', 'attachment; filename="fooFile.json"');
res.send(file);
})
.catch((error) => {
res.status(500).send();
});
});
if you just want to send json.
res.setHeader('Content-Type', 'application/json; charset=utf-8');
res.send(file);
|
stackoverflow
|
{
"language": "en",
"length": 167,
"provenance": "stackexchange_0000F.jsonl.gz:862365",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44534252"
}
|
aac41bf1afb6a54de6e116ff088313229824e77b
|
Stackoverflow Stackexchange
Q: Change platform on Elastic Beanstalk from PHP to Node.js I'm trying to change the platform on an existing Elastic Beanstalk instance from PHP 7 to Node.js. However, via the AWS Dashboard, I can only change/upgrade the version of PHP.
Is it currently possible to make this change through the dashboard or command line?
A: I think you can use this solution:
aws elasticbeanstalk list-available-solution-stacks
aws elasticbeanstalk update-environment --solution-stack-name "64bit Amazon Linux 2017.09 v4.4.4 running Node.js" --environment-name "example-env" --region "eu-west-1"
|
Q: Change platform on Elastic Beanstalk from PHP to Node.js I'm trying to change the platform on an existing Elastic Beanstalk instance from PHP 7 to Node.js. However, via the AWS Dashboard, I can only change/upgrade the version of PHP.
Is it currently possible to make this change through the dashboard or command line?
A: I think you can use this solution:
aws elasticbeanstalk list-available-solution-stacks
aws elasticbeanstalk update-environment --solution-stack-name "64bit Amazon Linux 2017.09 v4.4.4 running Node.js" --environment-name "example-env" --region "eu-west-1"
|
stackoverflow
|
{
"language": "en",
"length": 80,
"provenance": "stackexchange_0000F.jsonl.gz:862412",
"question_score": "13",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44534440"
}
|
75be7108c5c47a6492e626ee3606d869220a5a19
|
Stackoverflow Stackexchange
Q: How to release (get-item c:\temp\a.log).OpenRead() from lock status? I use the PowerShell command (get-item c:\temp\a.log).OpenRead() to test what happens with the file.
After file is opened to read, if I issue (get-item c:\temp\a.log).OpenWrite(), it will return the following error
Exception calling "OpenWrite" with "0" argument(s): "The process cannot access the file
'C:\temp\a.log' because it is being used by another process."
+ (get-item c:\temp\a.log).OpenWrite()
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : NotSpecified: (:) [], MethodInvocationException
+ FullyQualifiedErrorId : IOException
How can I release the OpenRead() status?
A: I found a way to release the lock status
I just invoke another command:
$s = (get-item c:\temp\a.log).OpenRead()
, then use
$s.close()
The file is not locked anymore
|
Q: How to release (get-item c:\temp\a.log).OpenRead() from lock status? I use the PowerShell command (get-item c:\temp\a.log).OpenRead() to test what happens with the file.
After file is opened to read, if I issue (get-item c:\temp\a.log).OpenWrite(), it will return the following error
Exception calling "OpenWrite" with "0" argument(s): "The process cannot access the file
'C:\temp\a.log' because it is being used by another process."
+ (get-item c:\temp\a.log).OpenWrite()
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : NotSpecified: (:) [], MethodInvocationException
+ FullyQualifiedErrorId : IOException
How can I release the OpenRead() status?
A: I found a way to release the lock status
I just invoke another command:
$s = (get-item c:\temp\a.log).OpenRead()
, then use
$s.close()
The file is not locked anymore
A: Just to explain why you're seeing this behavior when you open a file with .OpenRead() and then again with .OpenWrite(), it's caused by sharing (or lack thereof), not locking. Sharing dictates what kind of access is allowed for other streams opened from the same file while the current stream is still open.
OpenRead and OpenWrite are convenience methods that wrap the FileStream constructor; OpenRead creates a read-only stream with read sharing allowed, and OpenWrite creates a write-only stream with no sharing allowed. You may notice that there is another method called simply Open with overloads that allow you to specify the access (second parameter) and sharing (third parameter) yourself. We can translate OpenRead and OpenWrite to Open, thus...
$read = (get-item c:\temp\a.log).OpenRead()
# The following line throws an exception
$write = (get-item c:\temp\a.log).OpenWrite()
...becomes...
$read = (get-item c:\temp\a.log).Open('Open', 'Read', 'Read') # Same as .OpenRead()
# The following line throws an exception
$write = (get-item c:\temp\a.log).Open('OpenOrCreate', 'Write', 'None') # Same as .OpenWrite()
Either way you write it, the third line will fail to create a write-only stream because $read will only allow other streams to read as well. One way to prevent this conflict is to close the first stream before opening the second:
$read = (get-item c:\temp\a.log).Open('Open', 'Read', 'Read') # Same as .OpenRead()
try
{
# Use $read...
}
finally
{
$read.Close()
}
# The following line succeeds
$write = (get-item c:\temp\a.log).Open('OpenOrCreate', 'Write', 'None') # Same as .OpenWrite()
try
{
# Use $write...
}
finally
{
$write.Close()
}
If you really do need a read-only and a write-only stream to be open on the same file simultaneously, you can always pass your own values to Open to allow this:
$read = (get-item c:\temp\a.log).Open('Open', 'Read', 'ReadWrite')
# The following line succeeds
$write = (get-item c:\temp\a.log).Open('OpenOrCreate', 'Write', 'Read')
Note that the sharing goes both ways: $read needs to include Write in its sharing value so that $write can be opened with Write access, and $write needs to include Read in its sharing value because $read is already open with Read access.
In any case, it is always good practice to call Close() on any Stream when you are done using it.
A: I found the previous post works because the variable $s is just what I assigned earlier with $s = (get-item c:\temp\a.log).OpenRead().
So $s is just the object that needs to be closed.
I try the following test to make it more clearly.
case1:
$a = (get-item a.txt).OpenRead() #the a.txt is locked
$a.Close() #a.txt is unlocked
case2:
$a = (get-item a.txt).OpenRead() #the a.txt is locked
$b = (get-item a.txt).OpenRead()
$a.Close() #a.txt is locked
$b.close() #a.txt is unlocked
case3:
$a = (get-item a.txt).OpenRead() #the a.txt is locked
$a = "bbb" #the a.txt is locked for a while, finally it will unlock
For case 3, it seems the system eventually cleans up the dangling object and then releases the lock.
|
stackoverflow
|
{
"language": "en",
"length": 587,
"provenance": "stackexchange_0000F.jsonl.gz:862418",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44534455"
}
|
13bad958877f342eb886f590a74702c9e97acfe0
|
Stackoverflow Stackexchange
Q: Get current loaded language translation in Odoo I want to know how to get the current loaded language translation in Odoo using python code.
For example, I'd like to determine if the loaded translation language is in Japanese.
A: I think you wanted to get the language of the loaded translation in odoo.
I have also encountered this.
Use this
request.env.lang
Example:current_lang = request.env.lang
If the loaded language is in Japanese, the output is ja_JP.
Hope this helps!
|
Q: Get current loaded language translation in Odoo I want to know how to get the current loaded language translation in Odoo using python code.
For example, I'd like to determine if the loaded translation language is in Japanese.
A: I think you wanted to get the language of the loaded translation in odoo.
I have also encountered this.
Use this
request.env.lang
Example:current_lang = request.env.lang
If the loaded language is in Japanese, the output is ja_JP.
Hope this helps!
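A minimal sketch of the check described in the question, assuming this runs where request is available (e.g. inside a controller):
from odoo.http import request

def is_japanese_ui():
    # request.env.lang returns the locale code of the loaded translation, e.g. 'ja_JP'
    return request.env.lang == 'ja_JP'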
|
stackoverflow
|
{
"language": "en",
"length": 79,
"provenance": "stackexchange_0000F.jsonl.gz:862471",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44534642"
}
|
5a4a5ff4c912cb7716262bea9b40797e510b31dc
|
Stackoverflow Stackexchange
Q: How to flat query result? With a sample make it easy understand, with https://developer.github.com/v4/explorer/
query the viewer info:
query {
viewer {
followers {
totalCount
}
following {
totalCount
}
}
}
the result is:
{
"data": {
"viewer": {
"followers": {
"totalCount": 131
},
"following": {
"totalCount": 28
}
}
}
}
what I want is:
{
"data": {
"viewer": {
"followersCount" 131,
"followingCount": 28
}
}
}
So does GraphQL support this, and how can it be done?
A: GraphQL doesn't support this type of data flattening.
You must change the data structure in your code or work with the returned data structure.
EDIT: I just came across this repository (graphql-lodash) that could help you achieve what you want.
|
Q: How to flat query result? With a sample make it easy understand, with https://developer.github.com/v4/explorer/
query the viewer info:
query {
viewer {
followers {
totalCount
}
following {
totalCount
}
}
}
the result is:
{
"data": {
"viewer": {
"followers": {
"totalCount": 131
},
"following": {
"totalCount": 28
}
}
}
}
what I want is:
{
"data": {
"viewer": {
"followersCount" 131,
"followingCount": 28
}
}
}
So does GraphQL support this, and how can it be done?
A: GraphQL doesn't support this type of data flattening.
You must change the data structure in your code or work with the returned data structure.
EDIT: I just came across this repository (graphql-lodash) that could help you achieve what you want.
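Since GraphQL itself won't flatten the result, a small client-side sketch in JavaScript that reshapes the returned data:
// `result` is the JSON object returned by the GraphQL endpoint.
function flattenViewer(result) {
  const viewer = result.data.viewer;
  return {
    followersCount: viewer.followers.totalCount,
    followingCount: viewer.following.totalCount,
  };
}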
|
stackoverflow
|
{
"language": "en",
"length": 122,
"provenance": "stackexchange_0000F.jsonl.gz:862473",
"question_score": "25",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44534644"
}
|
2ee4e118e4efa05c65049bfcef5974724531d983
|
Stackoverflow Stackexchange
Q: debugging node 8 with visual studio code? Using Visual Studio Code Version 1.13.0, when started a node debug test2.js, the node is version 0.12 with following config, I can debug and response from vscode was:
Debugging with legacy protocol because it was detected.
but when the node is V8.0 and 'node debug test2.js' is issued, debugging VSCODE got:
Debugging with legacy protocol because Node.js version could not be determined (Error: read ECONNRESET)
Any idea why? I'm using 'attach', the config as follow:
"version": "0.2.0",
"configurations": [
{
"type": "node",
"request": "attach",
"name": "Attach",
"port": 5858
}
{
"type": "node",
"request": "launch",
"name": "Launch Program",
"program": "${file}"
}
]
A: You need to use the new "inspector" protocol as the documentation says:
{
"type": "node",
"request": "attach",
"name": "Attach (Inspector Protocol)",
"port": 9229,
"protocol": "inspector"
}
|
Q: debugging node 8 with visual studio code? Using Visual Studio Code Version 1.13.0, when started a node debug test2.js, the node is version 0.12 with following config, I can debug and response from vscode was:
Debugging with legacy protocol because it was detected.
but when the node is V8.0 and 'node debug test2.js' is issued, debugging VSCODE got:
Debugging with legacy protocol because Node.js version could not be determined (Error: read ECONNRESET)
Any idea why? I'm using 'attach', the config as follow:
"version": "0.2.0",
"configurations": [
{
"type": "node",
"request": "attach",
"name": "Attach",
"port": 5858
}
{
"type": "node",
"request": "launch",
"name": "Launch Program",
"program": "${file}"
}
]
A: You need to use the new "inspector" protocol as the documentation says:
{
"type": "node",
"request": "attach",
"name": "Attach (Inspector Protocol)",
"port": 9229,
"protocol": "inspector"
}
A: If you still get the error:
debugging with legacy protocol because node.js version could not be determined
Use the following steps:
*
*brew uninstall node.
*restart the computer.
*brew install node.
It works in Visual Studio Code Version 1.15.1; node Version 8.4.0
|
stackoverflow
|
{
"language": "en",
"length": 178,
"provenance": "stackexchange_0000F.jsonl.gz:862478",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44534656"
}
|
cf472c45e4040c6e7158266a4c2f86b12f4edd12
|
Stackoverflow Stackexchange
Q: CRASHED UIKit : _prepareForCAFlush I have an issue with my app, a "NSInternalInconsistencyException" is thrown. According to Crashlytics, the function that triggers this is _prepareForCAFlush.
Unfortunately, we are unable to detect what could be the problem (the bug can't be reproduced). Has anybody encountered this kind of crash? Here's the stack trace.
Fatal Exception: NSInternalInconsistencyException
0 CoreFoundation 0x1e140df7 __exceptionPreprocess
1 libobjc.A.dylib 0x1d3a3077 objc_exception_throw
2 CoreFoundation 0x1e140cd1 +[NSException raise:format:]
3 Foundation 0x1ea3b987 -[NSAssertionHandler handleFailureInFunction:file:lineNumber:description:]
4 UIKit 0x23465ea1 _prepareForCAFlush
5 UIKit 0x2348347f _beforeCACommitHandler
6 CoreFoundation 0x1e0fbf15 __CFRUNLOOP_IS_CALLING_OUT_TO_AN_OBSERVER_CALLBACK_FUNCTION__
7 CoreFoundation 0x1e0fa191 __CFRunLoopDoObservers
8 CoreFoundation 0x1e0fa5a7 __CFRunLoopRun
9 CoreFoundation 0x1e049533 CFRunLoopRunSpecific
10 CoreFoundation 0x1e049341 CFRunLoopRunInMode
11 GraphicsServices 0x1f820bfd GSEventRunModal
12 UIKit 0x23251e67 -[UIApplication _run]
13 UIKit 0x2324c591 UIApplicationMain
14 App 0xc4d9a4 main (AppDelegate.swift:22)
15 libdispatch.dylib 0x1d81350b (Missing)
|
Q: CRASHED UIKit : _prepareForCAFlush I have an issue with my app, a "NSInternalInconsistencyException" is thrown. According to Crashlytics, the function that triggers this is _prepareForCAFlush.
Unfortunately, we are unable to detect what could be the problem (the bug can't be reproduced). Has anybody encountered this kind of crash? Here's the stack trace.
Fatal Exception: NSInternalInconsistencyException
0 CoreFoundation 0x1e140df7 __exceptionPreprocess
1 libobjc.A.dylib 0x1d3a3077 objc_exception_throw
2 CoreFoundation 0x1e140cd1 +[NSException raise:format:]
3 Foundation 0x1ea3b987 -[NSAssertionHandler handleFailureInFunction:file:lineNumber:description:]
4 UIKit 0x23465ea1 _prepareForCAFlush
5 UIKit 0x2348347f _beforeCACommitHandler
6 CoreFoundation 0x1e0fbf15 __CFRUNLOOP_IS_CALLING_OUT_TO_AN_OBSERVER_CALLBACK_FUNCTION__
7 CoreFoundation 0x1e0fa191 __CFRunLoopDoObservers
8 CoreFoundation 0x1e0fa5a7 __CFRunLoopRun
9 CoreFoundation 0x1e049533 CFRunLoopRunSpecific
10 CoreFoundation 0x1e049341 CFRunLoopRunInMode
11 GraphicsServices 0x1f820bfd GSEventRunModal
12 UIKit 0x23251e67 -[UIApplication _run]
13 UIKit 0x2324c591 UIApplicationMain
14 App 0xc4d9a4 main (AppDelegate.swift:22)
15 libdispatch.dylib 0x1d81350b (Missing)
|
stackoverflow
|
{
"language": "en",
"length": 124,
"provenance": "stackexchange_0000F.jsonl.gz:862486",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44534684"
}
|
6c9bc6cc2750bc46005f445a27323d9d11aa9c1c
|
Stackoverflow Stackexchange
Q: R print UTF-8 code in data.frames on Windows platform Rstudio When there are UTF-8 characters in the data frame, it won't be displayed properly.
For example, the following is correct:
> "\U6731"
[1] "朱"
But when I put that in a data frame and have it printed, here it is:
> data.frame(x="\U6731")
x
1 <U+6731>
Hence I believe this has nothing to do with encoding issues.
Is there any direct way to print 朱 instead of <U+6731>.
I have to use Windows in company so using Linux might not be feasible for me.
A: The corpus library has a work-around for this bug. Either do this:
library(corpus)
df <- data.frame(x = "\U6731")
print.corpus_frame(df)
Or else do this:
class(df) <- c("corpus_frame", "data.frame")
df
|
Q: R print UTF-8 code in data.frames on Windows platform Rstudio When there are UTF-8 characters in the data frame, it won't be displayed properly.
For example, the following is correct:
> "\U6731"
[1] "朱"
But when I put that in a data frame and have it printed, here it is:
> data.frame(x="\U6731")
x
1 <U+6731>
Hence I believe this has nothing to do with encoding issues.
Is there any direct way to print 朱 instead of <U+6731>.
I have to use Windows in company so using Linux might not be feasible for me.
A: The corpus library has a work-around for this bug. Either do this:
library(corpus)
df <- data.frame(x = "\U6731")
print.corpus_frame(df)
Or else do this:
class(df) <- c("corpus_frame", "data.frame")
df
A: You are right, while calling the whole dataframe it will give codes for UTF-8 characters:
> data.frame(x="\U6731")
x
1 <U+6731>
But if you call for columns or rows, it would print nicely:
# through the column name
> data.frame(x="\U6731")$x
[1] 朱
Levels: 朱
# through the column index
> data.frame(x="\U6731")[,1]
[1] 朱
Levels: 朱
# through the row index
> data.frame(x="\U6731")[1,]
[1] 朱
Levels: 朱
Not sure if this helps. Could you be more specific why and how exactly you need to output these characters?
|
stackoverflow
|
{
"language": "en",
"length": 209,
"provenance": "stackexchange_0000F.jsonl.gz:862488",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44534690"
}
|
8c5120d611f28cc0577193c7e133731340e6f14c
|
Stackoverflow Stackexchange
Q: How to display column headers returned by PostgreSQL function? I have the following PostgreSQL function which returns multiple columns from a table:
CREATE OR REPLACE FUNCTION userbyid(id integer)
RETURNS TABLE(id int, username character varying(30), email character varying(254), last_login timestamp with time zone) AS
$$
SELECT
id,
username,
email,
last_login
FROM
auth_user
WHERE
id = $1;
$$
LANGUAGE 'sql';
The result it returns looks like this:
userbyid
--------------------------------------------------------------
(2, smith, [email protected], "2017-06-04 19:47:49.472259+00")
Is it possible to display the output with the correct column headers, something like this:
id username email last_login
--------------------------------------------------------------
2 smith [email protected] 2017-06-04
I'm looking at the CREATE FUNCTION document page and it isn't clear how to do this. I've also searched online and haven't seen an article that discusses this.
A: Use your set returning function in the FROM clause
SELECT * FROM userbyid(1);
as opposed to
SELECT userbyid(1);
Here is dbfiddle demo
Sample output:
id | username | email | last_login
----+----------+-------------------+------------------------
1 | user1 | [email protected] | 2017-06-13 12:00:00-04
|
Q: How to display column headers returned by PostgreSQL function? I have the following PostgreSQL function which returns multiple columns from a table:
CREATE OR REPLACE FUNCTION userbyid(id integer)
RETURNS TABLE(id int, username character varying(30), email character varying(254), last_login timestamp with time zone) AS
$$
SELECT
id,
username,
email,
last_login
FROM
auth_user
WHERE
id = $1;
$$
LANGUAGE 'sql';
The result it returns looks like this:
userbyid
--------------------------------------------------------------
(2, smith, [email protected], "2017-06-04 19:47:49.472259+00")
Is it possible to display the output with the correct column headers, something like this:
id username email last_login
--------------------------------------------------------------
2 smith [email protected] 2017-06-04
I'm looking at the CREATE FUNCTION document page and it isn't clear how to do this. I've also searched online and haven't seen an article that discusses this.
A: Use your set returning function in the FROM clause
SELECT * FROM userbyid(1);
as opposed to
SELECT userbyid(1);
Here is dbfiddle demo
Sample output:
id | username | email | last_login
----+----------+-------------------+------------------------
1 | user1 | [email protected] | 2017-06-13 12:00:00-04
A: You need to configure the psql output to show the column headers;
Set it with:
\t
this can be seen in PSQL help
\?
A: You can use "as" in your query.
Select Id as id, username as username....
|
stackoverflow
|
{
"language": "en",
"length": 204,
"provenance": "stackexchange_0000F.jsonl.gz:862500",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44534728"
}
|
e0c7665c169d9736e781eff7a01962d06e4553c5
|
Stackoverflow Stackexchange
Q: How to get card brand using Stripe and Swift STPAPIClient.shared().createToken(withCard: cardParams) { (token, error) in
if error != nil {
//fail
} else if let token = token {
print(token.card?.brand) //Optional(__C.STPCardBrand)
print(token.card?.brand.hashValue) //Optional(0)
print(token.card?.brand.rawValue) //Optional(0)
}
}
Does anyone know why Stripe isn't returning the card brand? I'm using a Stripe test card and the rest of the info is getting returned.
A: @OlegDanu's answer with unwrapping
As he said, use STPCard.stringFromBrand(from: token.card?.brand), but card is an Optional of type STPCard; I didn't realize that and spent some time trying to unwrap it. Anyway, it's best to unwrap it first
if let card = token.card { }
Here's the code below
STPAPIClient.shared().createToken(withCard: card, completion: {
[weak self] (token, error) in
if let error = error {
print(error.localizedDescription)
return
}
guard let token = token else { return }
// card is an Optional of type STPCard
if let card = token.card {
let brand = STPCard.string(from: card.brand)
print(brand)
}
})
|
Q: How to get card brand using Stripe and Swift STPAPIClient.shared().createToken(withCard: cardParams) { (token, error) in
if error != nil {
//fail
} else if let token = token {
print(token.card?.brand) //Optional(__C.STPCardBrand)
print(token.card?.brand.hashValue) //Optional(0)
print(token.card?.brand.rawValue) //Optional(0)
}
}
Does anyone know why Stripe isn't returning the card brand? I'm using a Stripe test card and the rest of the info is getting returned.
A: @OlegDanu's answer with unwrapping
As he said, use STPCard.stringFromBrand(from: token.card?.brand), but card is an Optional of type STPCard; I didn't realize that and spent some time trying to unwrap it. Anyway, it's best to unwrap it first
if let card = token.card { }
Here's the code below
STPAPIClient.shared().createToken(withCard: card, completion: {
[weak self] (token, error) in
if let error = error {
print(error.localizedDescription)
return
}
guard let token = token else { return }
// card is an Optional of type STPCard
if let card = token.card {
let brand = STPCard.string(from: card.brand)
print(brand)
}
})
A: So checking the API documentation I found that brand is en enum:
var brand: STPCardBrand { get }
having these values:
typedef NS_ENUM(NSInteger, STPCardBrand) {
STPCardBrandVisa,
STPCardBrandAmex,
STPCardBrandMasterCard,
STPCardBrandDiscover,
STPCardBrandJCB,
STPCardBrandDinersClub,
STPCardBrandUnknown,
};
You could also consider using the static stringFromBrand function:
Returns a string representation for the provided card brand; i.e.
[NSString stringFromBrand:STPCardBrandVisa] == @"Visa".
Declaration:
+ (nonnull NSString *)stringFromBrand:(STPCardBrand)brand;
class func string(from brand: STPCardBrand) -> String
Example:
print(STPCard.stringFromBrand(from: token.card?.brand))
Swift 4:
print(STPCard.string(from: token.card!.brand))
|
stackoverflow
|
{
"language": "en",
"length": 238,
"provenance": "stackexchange_0000F.jsonl.gz:862541",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44534829"
}
|
d23daa3656d987eae844eac22c6f9234c6870638
|
Stackoverflow Stackexchange
Q: Github Search: how to search in multiple languages Github search supports:
<keyword> language:javascript
But I want something like:
<keyword> language:javascript OR language:typescript
So that I can sort them by stars or do other filters in a single search.
The reason is: as TypeScript becomes more and more popular each day, a single filter with language:javascript is not enough anymore.
A: You can do it on the command line by using the GitHub API with a + symbol:
curl "https://api.github.com/search/repositories?q=guitar-scales+language:javascript+language:typescript&per_page=100&page=$i" | jq ".items[] | {name, language}"
And here is a sample from the search result:
{
"name": "react-boilerplate",
"language": "JavaScript"
}
{
"name": "electrode",
"language": "JavaScript"
}
{
"name": "claygl",
"language": "JavaScript"
}
{
"name": "mean",
"language": "TypeScript"
}
{
"name": "rapidpro",
"language": "JavaScript"
}
{
"name": "react-native-scaling-drawer",
"language": "JavaScript"
}
|
Q: Github Search: how to search in multiple languages Github search supports:
<keyword> language:javascript
But I want something like:
<keyword> language:javascript OR language:typescript
So that I can sort them by stars or do other filters in a single search.
The reason is: as TypeScript becomes more and more popular each day, a single filter with language:javascript is not enough anymore.
A: You can do it on the command line by using the GitHub API with a + symbol:
curl "https://api.github.com/search/repositories?q=guitar-scales+language:javascript+language:typescript&per_page=100&page=$i" | jq ".items[] | {name, language}"
And here is a sample from the search result:
{
"name": "react-boilerplate",
"language": "JavaScript"
}
{
"name": "electrode",
"language": "JavaScript"
}
{
"name": "claygl",
"language": "JavaScript"
}
{
"name": "mean",
"language": "TypeScript"
}
{
"name": "rapidpro",
"language": "JavaScript"
}
{
"name": "react-native-scaling-drawer",
"language": "JavaScript"
}
|
stackoverflow
|
{
"language": "en",
"length": 129,
"provenance": "stackexchange_0000F.jsonl.gz:862542",
"question_score": "8",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44534830"
}
|
9a4f2c16820e579398ea6ca4f5e780ec7d0ae289
|
Stackoverflow Stackexchange
Q: Limit google places autocomplete results to only display state or country or both The api for this is the 'types' parameter.
Documented here https://developers.google.com/places/supported_types#table3
Ideally I'd pass ['administrative_area_level_1','country'] but it doesn't show any results.
Passing only one of these types doesn't show any results either.
The best match is ['(regions)'], but besides countries and states, it finds city names as well, which are irrelevant in my use case.
How can this be solved?
Thanks
A: Unfortunately, for autocomplete, only the geocode, address, establishment, (regions), and (cities) types can be used.
The only way this problem can really be solved is by using the HTTP API and creating your own UI for the autocomplete box. This link can be used for the api: https://maps.googleapis.com/maps/api/place/autocomplete/json?input=QUERY&key=API_KEY&types=(regions). The result is returned in a JSON object, and each item has a types field consisting of an array of the types applicable to the location. The type at index 0 is the one you're interested in. It's usually something like country, or administrative_area_level_1.
|
Q: Limit google places autocomplete results to only display state or country or both The api for this is the 'types' parameter.
Documented here https://developers.google.com/places/supported_types#table3
Ideally I'd pass ['administrative_area_level_1','country'] but it doesn't show any results.
Passing only one of these types doesn't show any results either.
The best match is ['(regions)'], but besides countries and states, it finds city names as well, which are irrelevant in my use case.
How can this be solved?
Thanks
A: Unfortunately, for autocomplete, only the geocode, address, establishment, (regions), and (cities) types can be used.
The only way this problem can really be solved is by using the HTTP API and creating your own UI for the autocomplete box. This link can be used for the api: https://maps.googleapis.com/maps/api/place/autocomplete/json?input=QUERY&key=API_KEY&types=(regions). The result is returned in a JSON object, and each item has a types field consisting of an array of the types applicable to the location. The type at index 0 is the one you're interested in. It's usually something like country, or administrative_area_level_1.
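A sketch of that approach in JavaScript; the endpoint is the one linked above, while the list of accepted types is an assumption about what counts as a state or country:
async function fetchRegionSuggestions(query, apiKey) {
  const url = 'https://maps.googleapis.com/maps/api/place/autocomplete/json'
    + '?input=' + encodeURIComponent(query)
    + '&types=(regions)&key=' + apiKey;
  const res = await fetch(url);
  const json = await res.json();
  // Keep only predictions whose primary type is a country or a first-level admin area.
  return json.predictions.filter(p =>
    ['country', 'administrative_area_level_1'].includes(p.types[0]));
}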
|
stackoverflow
|
{
"language": "en",
"length": 168,
"provenance": "stackexchange_0000F.jsonl.gz:862574",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44534927"
}
|
51e9663fa17e6692bb8b5cdfafe3267d6d37c6cb
|
Stackoverflow Stackexchange
Q: C# select random numbers within a period of time I have a list of employee numbers (about 300 employees), each employee will place his fingerprint into a scanner and I have to randomly select 20 employees between 7am and 7:15am and within a random interval of time.
All employees' entrance shift is between 7:00am and 7:15am. Human Resources needs to receive an alert right after a randomly selected employee places his finger into the door scanner, so the employee is immediately moved into a room for a special test.
For example, 300 employees will go in within a frame of 15 minutes and I have to select only 20, but I don't want it to be sequential; so maybe the first employee is selected, the second and third are not, maybe the fourth is selected again, and so on.
And there is another complex rule: once an employee is randomly selected, the algorithm has to wait between 30 and 60 seconds (this also has to be random) before the random selection logic is executed again.
Here is an example in Excel of how it should work:
Any clue?
|
Q: C# select random numbers within a period of time I have a list of employee numbers (about 300 employees), each employee will place his fingerprint into a scanner and I have to randomly select 20 employees between 7am and 7:15am and within a random interval of time.
All employees' entrance shift is between 7:00am and 7:15am. Human Resources needs to receive an alert right after a randomly selected employee places his finger into the door scanner, so the employee is immediately moved into a room for a special test.
For example, 300 employees will go in within a frame of 15 minutes and I have to select only 20, but I don't want it to be sequential; so maybe the first employee is selected, the second and third are not, maybe the fourth is selected again, and so on.
And there is another complex rule: once an employee is randomly selected, the algorithm has to wait between 30 and 60 seconds (this also has to be random) before the random selection logic is executed again.
Here is an example in Excel of how it should work:
Any clue?
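A rough sketch of one way the described selection could look in C# (the selection probability, cooldown handling, and names are all assumptions, not from the question):
private static readonly Random Rng = new Random();
private static DateTime _nextEligible = DateTime.MinValue;

// Called every time an employee scans a fingerprint between 7:00 and 7:15.
public static bool ShouldSelect(int alreadySelected, int targetCount = 20)
{
    var now = DateTime.Now;
    if (alreadySelected >= targetCount) return false;
    if (now < _nextEligible) return false; // still inside the 30-60s cooldown

    // Roughly 20 out of 300 employees -> about a 1 in 15 chance per scan.
    if (Rng.Next(15) != 0) return false;

    _nextEligible = now.AddSeconds(Rng.Next(30, 61)); // random 30-60s pause
    return true;
}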
|
stackoverflow
|
{
"language": "en",
"length": 194,
"provenance": "stackexchange_0000F.jsonl.gz:862585",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44534964"
}
|
677ae3bb9462e9208daf3864bc35b5ac0ddc2a4c
|
Stackoverflow Stackexchange
Q: OpenCV & Python: Cover a colored mask over a image I want to cover a image with a transparent solid color overlay in the shape of a black-white mask
Currently I'm using the following java code to implement this.
redImg = new Mat(image.size(), image.type(), new Scalar(255, 0, 0));
redImg.copyTo(image, mask);
I'm not familiar with the python api.
So I want to know if there any alternative api in python.
Is there any better implementation?
image:
mask:
what i want:
A: The idea is to convert the mask to a binary format where pixels are either 0 (black) or 255 (white). White pixels represent sections that are kept while black sections are thrown away. Then set all white pixels on the mask to your desired BGR color.
Input image and mask
Result
Code
import cv2
image = cv2.imread('1.jpg')
mask = cv2.imread('mask.jpg', 0)
mask = cv2.threshold(mask, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]
image[mask==255] = (36,255,12)
cv2.imshow('image', image)
cv2.imshow('mask', mask)
cv2.waitKey()
|
Q: OpenCV & Python: Cover a colored mask over a image I want to cover a image with a transparent solid color overlay in the shape of a black-white mask
Currently I'm using the following java code to implement this.
redImg = new Mat(image.size(), image.type(), new Scalar(255, 0, 0));
redImg.copyTo(image, mask);
I'm not familiar with the python api.
So I want to know if there any alternative api in python.
Is there any better implementation?
image:
mask:
what i want:
A: The idea is to convert the mask to a binary format where pixels are either 0 (black) or 255 (white). White pixels represent sections that are kept while black sections are thrown away. Then set all white pixels on the mask to your desired BGR color.
Input image and mask
Result
Code
import cv2
image = cv2.imread('1.jpg')
mask = cv2.imread('mask.jpg', 0)
mask = cv2.threshold(mask, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]
image[mask==255] = (36,255,12)
cv2.imshow('image', image)
cv2.imshow('mask', mask)
cv2.waitKey()
A: Now, after dealing with Python, OpenCV and NumPy for a while, I find it's quite simple to implement this with code:
image[mask] = (0, 0, 255)
-------------- the original answer --------------
I solved this by the following code:
redImg = np.zeros(image.shape, image.dtype)
redImg[:,:] = (0, 0, 255)
redMask = cv2.bitwise_and(redImg, redImg, mask=mask)
cv2.addWeighted(redMask, 1, image, 1, 0, image)
A: this is what worked for me:
red = np.ones(mask.shape)
red = red*255
img[:,:,0][mask>0] = red[mask>0]
so I made a 2d array with solid 255 values and replaced it with my image's red band in pixels where the mask is not zero.
redmask
|
stackoverflow
|
{
"language": "en",
"length": 264,
"provenance": "stackexchange_0000F.jsonl.gz:862618",
"question_score": "18",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44535068"
}
|
8f65e8699c7b9af92921eeadaf6cf6038f41ef86
|
Stackoverflow Stackexchange
Q: How to install ElasticSearch on Windows? I am new to Elasticsearch. I searched in Google about Elasticsearch and how to install it. Is the JDK required to install and configure Elasticsearch on Windows OS? Please suggest a solution.
A: If you search in google for "elasticsearch install windows" you can find this link
https://www.elastic.co/guide/en/elasticsearch/reference/current/windows.html
Which shows you note "Elasticsearch requires Java 8 or later. Use the official Oracle distribution or an open-source distribution such as OpenJDK."
|
Q: How to install ElasticSearch on Windows? I am new to Elasticsearch. I searched in Google about Elasticsearch and how to install it. Is the JDK required to install and configure Elasticsearch on Windows OS? Please suggest a solution.
A: If you search in google for "elasticsearch install windows" you can find this link
https://www.elastic.co/guide/en/elasticsearch/reference/current/windows.html
Which shows you note "Elasticsearch requires Java 8 or later. Use the official Oracle distribution or an open-source distribution such as OpenJDK."
A: Recent versions of Elasticsearch have an MSI package with a GUI installer. That's the easiest way to install.
If you prefer a manual install, you'll need to:
*
*get the ZIP file
*download and install Java, then set JAVA_HOME under System Properties -> Advanced -> Environment Variables
*finally, you can start Elasticsearch (via elasticsearch.bat) or install it as a service via elasticsearch-service.bat install
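As a rough sketch of those manual steps from an elevated Command Prompt (paths and version numbers below are placeholders, not from the answer):
:: point JAVA_HOME at your installed JDK (machine-wide), then start Elasticsearch
setx JAVA_HOME "C:\Program Files\Java\jdk1.8.0_131" /M
cd C:\elasticsearch-5.4.1\bin
elasticsearch.bat
:: or register it as a Windows service instead:
elasticsearch-service.bat install
elasticsearch-service.bat start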
|
stackoverflow
|
{
"language": "en",
"length": 138,
"provenance": "stackexchange_0000F.jsonl.gz:862636",
"question_score": "6",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44535136"
}
|
da724f382c6cab5ae997ac2d4273a4f1a5b97ce6
|
Stackoverflow Stackexchange
Q: Kotlin kotlinClass.class.getName() cannot return package name but only simple class name AClass.class.getName();
If AClass is a Java class, this method will return the package name and the class name,
but when I convert the AClass Java file to a Kotlin file, it will only return the simple class name, so the system cannot find the class path.
the code above
A: Try below solution::-
var name = MainActivity::class.java.canonicalName as String
|
Q: Kotlin kotlinClass.class.getName() cannot return package name but only simple class name AClass.class.getName();
If AClass is a Java class, this method will return the package name and the class name,
but when I convert the AClass Java file to a Kotlin file, it will only return the simple class name, so the system cannot find the class path.
the code above
A: Try below solution::-
var name = MainActivity::class.java.canonicalName as String
A: If it is a java fragment
var fragmentSimpleName = FragmentName::class.java.simpleName as String
A: This is what I use to get class-name.
val TAG = javaClass.simpleName
For Android developer's, it's very useful to declare as a field, and call to print logs.
A: There are many ways to get the fully qualified name of a Java class in Kotlin:
get name via the property KClass.qualifiedName:
val name = AClass::class.qualifiedName;
OR get name via the property Class.name:
val name = AClass::class.java.name;
OR get name via the method Class#getName:
val name = AClass::class.java.getName();
the table of the qualified name of a class as below:
|-----------------------|-----------------------|-----------------------|
| | Class | Anonymous Class |
|-----------------------|-----------------------|-----------------------|
| KClass.qualifiedName | foo.bar.AClass | null |
|-----------------------|-----------------------|-----------------------|
| Class.name | foo.bar.AClass | foo.bar.AClass$1 |
|-----------------------|-----------------------|-----------------------|
| Class.getName() | foo.bar.AClass | foo.bar.AClass$1 |
|-----------------------|-----------------------|-----------------------|
A: Maybe I am a little bit late to the party, but I do it using the hash code of the new instance of the fragment. It is an Int, so it allows all kinds of tests.
private val areaFragment by lazy { Area_Fragment.newInstance() }
var fragmentHashCode = fragment.hashCode()
when (fragmentHashCode) {
areaFragment.hashCode() -> {
myNavigationView.setCheckedItem(R.id.nav_area)
}
|
stackoverflow
|
{
"language": "en",
"length": 255,
"provenance": "stackexchange_0000F.jsonl.gz:862651",
"question_score": "34",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44535168"
}
|
8a5ab6e80b67f8d546a9844edd354f10fd209c66
|
Stackoverflow Stackexchange
Q: i18n Internationalization and Localization in React Django I am working on a Project where I am using React js for Front-End and Django for backend. I need to implement i18n Internationalization and Localization
I saw Django documentation and came across django I18n javascript_catalog.
How can I use the same with getText() in React JS? Is there any other way to implement this?
Thanks in Advance
A: Update for: django > 2.0:
from django.views.i18n import JavaScriptCatalog
urlpatterns = [
path('jsi18n/', JavaScriptCatalog.as_view(), name='javascript-catalog'),
]
Reference
Old:
Use below code in urls.py of project
from django.views.i18n import javascript_catalog
js_info_dict = {
'domain': 'djangojs',
'packages': ('name',)
}
urlpatterns += i18n_patterns(
url(r'^jsi18n/$', javascript_catalog, js_info_dict),
Add below line to your base html file
<script type="text/javascript" src="/jsi18n/"></script>
|
Q: i18n Internationalization and Localization in React Django I am working on a Project where I am using React js for Front-End and Django for backend. I need to implement i18n Internationalization and Localization
I saw Django documentation and came across django I18n javascript_catalog.
How can I use the same with getText() in React JS? Is there any other way to implement this?
Thanks in Advance
A: Update for: django > 2.0:
from django.views.i18n import JavaScriptCatalog
urlpatterns = [
path('jsi18n/', JavaScriptCatalog.as_view(), name='javascript-catalog'),
]
Reference
Old:
Use below code in urls.py of project
from django.views.i18n import javascript_catalog
js_info_dict = {
'domain': 'djangojs',
'packages': ('name',)
}
urlpatterns += i18n_patterns(
url(r'^jsi18n/$', javascript_catalog, js_info_dict),
Add below line to your base html file
<script type="text/javascript" src="/jsi18n/"></script>
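Once that script is loaded it defines gettext() globally, so a hedged sketch of using it inside a React component could look like this (the component and strings are made up):
// The catalog script defines a global gettext once /jsi18n/ has loaded.
function Greeting() {
  const label = typeof window.gettext === 'function'
    ? window.gettext('Hello, world')
    : 'Hello, world'; // fallback if the catalog has not loaded yet
  return <p>{label}</p>;
}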
|
stackoverflow
|
{
"language": "en",
"length": 119,
"provenance": "stackexchange_0000F.jsonl.gz:862659",
"question_score": "6",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44535194"
}
|
37ad790df88aa031c53ac9ee17aff240aaba1406
|
Stackoverflow Stackexchange
Q: Determine when Subject has no subscribers I want to create a broadcast system using PublishSubject, a background task will poll some endpoint and broadcast the result periodically using this Subject. I would like to start the polling when the first subscriber subscribes to the Subject, and stop the polling when there are no more subscribers. If a new subscriber subscribes, polling should resume.
The only function I see that is somewhat related is hasObservers() but it doesn't quite fit my needs, I would like to have callbacks for subscription and unsubscription - on the former I would start polling if not stated, and on the latter I would stop polling if there are no more subscribers; how could this be achieved?
A: You could create a wrapper around a subject that would keep count, but sounds like your problem could be solved with a ConnectableObservable.
Consider this:
Observable<PollData> pollData = Observable.interval(1, TimeUnit.SECONDS)
.flatMap(i -> api.pollData())
.share();
Using the share() operator makes that observable become a ConnectableObservable that will start when the first observer subscribes to it, share all emissions with subsequent subscriptions, and automatically stop when its last observer unsubscribes.
Read more about it here.
|
Q: Determine when Subject has no subscribers I want to create a broadcast system using PublishSubject, a background task will poll some endpoint and broadcast the result periodically using this Subject. I would like to start the polling when the first subscriber subscribes to the Subject, and stop the polling when there are no more subscribers. If a new subscriber subscribes, polling should resume.
The only function I see that is somewhat related is hasObservers() but it doesn't quite fit my needs, I would like to have callbacks for subscription and unsubscription - on the former I would start polling if not stated, and on the latter I would stop polling if there are no more subscribers; how could this be achieved?
A: You could create a wrapper around a subject that would keep count, but sounds like your problem could be solved with a ConnectableObservable.
Consider this:
Observable<PollData> pollData = Observable.interval(1, TimeUnit.SECONDS)
.flatMap(i -> api.pollData())
.share();
Using the share() operator makes that observable become a ConnectableObservable that will start when the first observer subscribes to it, share all emissions with subsequent subscriptions, and automatically stop when its last observer unsubscribes.
Read more about it here.
|
stackoverflow
|
{
"language": "en",
"length": 196,
"provenance": "stackexchange_0000F.jsonl.gz:862667",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44535213"
}
|
93ec730ad4a0be3d871f5cc61036859a05575e6c
|
Stackoverflow Stackexchange
Q: Curl Server Sent Event no output when pipe I'm trying to use SSE to tail a log from the server. curl works fine with SSE, but when piped, curl will not output anything.
# This is ok
curl localhost:8080/logging/tail
OUTPUT
data:2017-06-14 11:57:21.345 WARN 56999 --- [nio-8080-exec-4] a.b.c : ABC
data:2017-06-14 11:57:21.353 INFO 56999 --- [nio-8080-exec-4] a.b.c : DEF
With pipe
# Nothing show up, with -s or not
# Without -s, the progress output shows curl does receive data
curl -s localhost:8080/logging/tail | sed '/^\s*$/d'
curl -s localhost:8080/logging/tail | cat
How to make curl sse work with pipe ?
I want to remove the data: prefix and the empty lines for sse output.
$ curl --version
curl 7.51.0 (x86_64-apple-darwin16.0) libcurl/7.51.0 SecureTransport zlib/1.2.8
Protocols: dict file ftp ftps gopher http https imap imaps ldap ldaps pop3 pop3s rtsp smb smbs smtp smtps telnet tftp
Features: AsynchDNS IPv6 Largefile GSS-API Kerberos SPNEGO NTLM NTLM_WB SSL libz UnixSockets
Server is spring SseEmitter.
A: As per this answer,
you can add either --no-buffer or -N to the curl command
-N / --no-buffer
|
Q: Curl Server Sent Event no output when pipe I am trying to use SSE to tail a log from the server. curl works fine with SSE, but when piped, curl will not output anything
# This is ok
curl localhost:8080/logging/tail
OUTPUT
data:2017-06-14 11:57:21.345 WARN 56999 --- [nio-8080-exec-4] a.b.c : ABC
data:2017-06-14 11:57:21.353 INFO 56999 --- [nio-8080-exec-4] a.b.c : DEF
With pipe
# Nothing shows up, with -s or not
# Without -s, the progress output shows curl does receive data
curl -s localhost:8080/logging/tail | sed '/^\s*$/d'
curl -s localhost:8080/logging/tail | cat
How can I make curl SSE work with a pipe?
I want to remove the data: prefix and the empty lines from the SSE output.
$ curl --version
curl 7.51.0 (x86_64-apple-darwin16.0) libcurl/7.51.0 SecureTransport zlib/1.2.8
Protocols: dict file ftp ftps gopher http https imap imaps ldap ldaps pop3 pop3s rtsp smb smbs smtp smtps telnet tftp
Features: AsynchDNS IPv6 Largefile GSS-API Kerberos SPNEGO NTLM NTLM_WB SSL libz UnixSockets
Server is spring SseEmitter.
A: As per this answer,
you can add either --no-buffer or -N to the curl command
-N / --no-buffer
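Putting it together, a sketch using the endpoint from the question: with buffering disabled, each event reaches the pipe as soon as it arrives, and sed strips the data: prefix and the blank lines:
curl -sN localhost:8080/logging/tail | sed -e 's/^data://' -e '/^\s*$/d'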
|
stackoverflow
|
{
"language": "en",
"length": 176,
"provenance": "stackexchange_0000F.jsonl.gz:862693",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44535283"
}
|
9bae68fe8a32524fe4d67a7fe77beb4a3d3d48ee
|
Stackoverflow Stackexchange
Q: ERROR: The requested URL could not be retrieved I have a CodeIgniter project hosted on a subdomain, and it shows an error:
The requested URL could not be retrieved
Here is a picture of it:
Error
A: I had the same error on a new installation of CodeIgniter. I solved this error by turning off my VPN browser plugin.
You might have a proxy or VPN app running; turn it off and see.
|
Q: ERROR: The requested URL could not be retrieved I have a CodeIgniter project hosted on a subdomain, and it shows an error:
The requested URL could not be retrieved
Here is a picture of it:
Error
A: I had the same error on a new installation of CodeIgniter. I solved this error by turning off my VPN browser plugin.
You might have a proxy or VPN app running; turn it off and see.
A: Please check these mandatory steps before moving your code from localhost to a live server.
*
*Change base_url in the config file.
*Make the necessary changes in the .htaccess file.
*Change the database connection credentials.
This will help you.
A: The steps below worked for me:
*
*Search for Proxy Settings on windows machine
*Disable all the proxy settings
|
stackoverflow
|
{
"language": "en",
"length": 130,
"provenance": "stackexchange_0000F.jsonl.gz:862710",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44535339"
}
|
0ac90784498afb188755265599dc20b6e0cc0ff4
|
Stackoverflow Stackexchange
Q: Mariadb unique constraint error (duplicate entry) for two records with one space difference MariaDB version is 10.0.23.
I use the script below for testing:
create table test(
username varchar(30)
,constraint UK_TEST unique (username)
);
insert into test values('name1');
1 row inserted.
insert into test values ('name1 ');
The second insert got an error; the error message is:
duplicate entry 'name1 ' for key 'UK_TEST'
To highlight: the second value is not the same as the first one, it has one extra trailing space.
Can anyone help me with this issue?
A: According to the documentation, for VARCHAR and several other data types, trailing spaces are ignored in comparisons, including those used for unique constraints:
Currently, all MariaDB collations are of type PADSPACE, meaning that
VARCHAR (as well as CHAR and TEXT values) are compared without regard
for trailing spaces. This does not apply to the LIKE pattern-matching
operator, which takes into account trailing spaces.
If a unique index consists of a column where trailing pad characters
are stripped or ignored, inserts into that column where values differ
only by the number of trailing pad characters will result in a
duplicate-key error.
|
Q: Mariadb unique constraint error (duplicate entry) for two records with one space difference MariaDB version is 10.0.23.
I use the script below for testing:
create table test(
username varchar(30)
,constraint UK_TEST unique (username)
);
insert into test values('name1');
1 row inserted.
insert into test values ('name1 ');
The second insert got an error; the error message is:
duplicate entry 'name1 ' for key 'UK_TEST'
To highlight: the second value is not the same as the first one, it has one extra trailing space.
Can anyone help me with this issue?
A: According to the documentation, for VARCHAR and several other data types, trailing spaces are ignored in comparisons, including those used for unique constraints:
Currently, all MariaDB collations are of type PADSPACE, meaning that
VARCHAR (as well as CHAR and TEXT values) are compared without regard
for trailing spaces. This does not apply to the LIKE pattern-matching
operator, which takes into account trailing spaces.
If a unique index consists of a column where trailing pad characters
are stripped or ignored, inserts into that column where values differ
only by the number of trailing pad characters will result in a
duplicate-key error.
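You can verify the PADSPACE behaviour directly with a quick check on the same server:
-- returns 1: trailing spaces are ignored by the = comparison, so the unique index sees a duplicate
SELECT 'name1' = 'name1 ' AS equal_ignoring_trailing_spaces;
-- returns 0: LIKE does take trailing spaces into account
SELECT 'name1' LIKE 'name1 ' AS like_is_exact;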
A: You have marked the "username" field as "UNIQUE", so it will accept only unique values in that column. You are trying to insert duplicate values.
Read more at https://dev.mysql.com/doc/refman/5.7/en/constraint-primary-key.html
A: You run into this kind of problem when the username column in the database is either set to "UNIQUE" or set as the "PRIMARY KEY". In your case it is unique, hence you cannot have two rows that share the same username, which is logical and correct.
I advise you to read more about MySQL before going any further.
Please read more here about primary key constraints: https://dev.mysql.com/doc/refman/5.7/en/constraint-primary-key.html
|
stackoverflow
|
{
"language": "en",
"length": 290,
"provenance": "stackexchange_0000F.jsonl.gz:862720",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44535380"
}
|
b982881a205cc13f6634bd3d107b0a8118a896b5
|
Stackoverflow Stackexchange
Q: Unmarshall DynamoDB JSON Given some DynamoDB JSON via a DynamoDB NewImage stream event, how do I unmarshall it to regular JSON?
{"updated_at":{"N":"146548182"},"uuid":{"S":"foo"},"status":{"S":"new"}}
Normally I would use AWS.DynamoDB.DocumentClient, however I can't seem to find a generic Marshall/Unmarshall function.
Sidenote: Do I lose anything unmarshalling DynamoDB JSON to JSON and back again?
A: You can use the AWS.DynamoDB.Converter.unmarshall function. Calling the following will return { updated_at: 146548182, uuid: 'foo', status: 'new' }:
AWS.DynamoDB.Converter.unmarshall({
"updated_at":{"N":"146548182"},
"uuid":{"S":"foo"},
"status":{"S":"new"}
})
Everything that can be modeled in DynamoDB's marshalled JSON format can be safely translated to and from JS objects.
|
Q: Unmarshall DynamoDB JSON Given some DynamoDB JSON via a DynamoDB NewImage stream event, how do I unmarshall it to regular JSON?
{"updated_at":{"N":"146548182"},"uuid":{"S":"foo"},"status":{"S":"new"}}
Normally I would use AWS.DynamoDB.DocumentClient, however I can't seem to find a generic Marshall/Unmarshall function.
Sidenote: Do I lose anything unmarshalling DynamoDB JSON to JSON and back again?
A: You can use the AWS.DynamoDB.Converter.unmarshall function. Calling the following will return { updated_at: 146548182, uuid: 'foo', status: 'new' }:
AWS.DynamoDB.Converter.unmarshall({
"updated_at":{"N":"146548182"},
"uuid":{"S":"foo"},
"status":{"S":"new"}
})
Everything that can be modeled in DynamoDB's marshalled JSON format can be safely translated to and from JS objects.
A: AWS SDK for JavaScript version 3 (V3) provides nice methods for marshalling and unmarshalling DynamoDB records reliably.
const { marshall, unmarshall } = require("@aws-sdk/util-dynamodb");
const dynamo_json = { "updated_at": { "N": "146548182" }, "uuid": { "S": "foo" }, "status": { "S": "new" } };
const to_regular_json = unmarshall(dynamo_json);
const back_to_dynamo_json = marshall(to_regular_json);
Output:
// dynamo_json
{
updated_at: { N: '146548182' },
uuid: { S: 'foo' },
status: { S: 'new' }
}
// to_regular_json
{ updated_at: 146548182, uuid: 'foo', status: 'new' }
// back_to_dynamo_json
{
updated_at: { N: '146548182' },
uuid: { S: 'foo' },
status: { S: 'new' }
}
|
stackoverflow
|
{
"language": "en",
"length": 197,
"provenance": "stackexchange_0000F.jsonl.gz:862745",
"question_score": "36",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44535445"
}
|
010852b64a1ef9c4463aa1de29bd2b8f3cdf7387
|
Stackoverflow Stackexchange
Q: Internet stopped working on Android Emulator (Mac OS) I am using Android Studio 2.3 (latest). Until yesterday it was all good and working; today the emulator is not connecting to the data network.
I couldn't find any working solution so far. My Mac is running macOS Sierra, connected to WiFi with no proxy.
A: It's a bug with IPv6 name resolution; removing any IPv6 nameserver in /etc/resolv.conf fixes the issue, see https://issuetracker.google.com/issues/155686508#comment3
|
Q: Internet stopped working on Android Emulator (Mac OS) I am using Android Studio 2.3 (latest). Until yesterday it was all good and working; today the emulator is not connecting to the data network.
I couldn't find any working solution so far. My Mac is running macOS Sierra, connected to WiFi with no proxy.
A: It's a bug with IPv6 name resolution; removing any IPv6 nameserver in /etc/resolv.conf fixes the issue, see https://issuetracker.google.com/issues/155686508#comment3
A: For me the issue appears to stem from the DNS settings my company enforces.
In order to be able to get network access for my emulator I needed to launch the emulator with the same corporate dns-server specified.
I'm on a Mac, so first I checked my network settings to find what my DNS was set to:
System Preferences -> Network -> Wi-Fi -> Advanced -> DNS
Then navigated to the sdk emulator location (for convenience):
cd ~/Library/Android/sdk/emulator
Then listed the available emulators:
./emulator -list-avds
Then ran the desired emulator with dns server override:
./emulator @<emulator_name> -dns-server <dns.server.ip.address>
It would be nice if I could set this DNS to be used by emulators launched through Android Studio, but hopefully these steps help someone else in a similar position.
A: You can go to: System Preferences -> Network -> Wi-Fi -> Advanced -> DNS
So you add a new DNS 8.8.8.8. It might solve your problem.
A: I couldn't find any solution by tweaking network settings, so I added a new virtual device from Tools -> Android -> AVD Manager by downloading a new system image (Android O, API 26), and it's working now.
If you want to use the same API level then make sure to delete the existing system image and download it again.
A: In Mac OS go to:
System Preferences -> Network -> select Wi-Fi on the left panel -> Advanced on the right panel -> DNS -> add a new DNS server; for example 8.8.8.8 and 8.8.4.4 (Google Public DNS) or 1.1.1.1 and 1.0.0.1 (Cloudflare and APNIC DNS) or another public DNS provider. Then restart the emulator so the changes take effect.
Edited jun/2020
Another option is to pass dns-server params when start Android emulator.
According to this solution https://stackoverflow.com/a/51858653/3328566, I changed the emulator executable name and created a bash script to load the AVD with the param -dns-server 8.8.8.8.
In your Android SDK default folder /Users/[MY_USER_ACCOUNT]/Library/Android/sdk/emulator/emulator
*
*Rename the binary emulator to emulator_original
*Create a bash script named emulator that contains:
#!/bin/bash
/Users/[MY_USER_ACCOUNT]/Library/Android/sdk/emulator/emulator_original -dns-server 8.8.8.8 $@
*Change the script permissions with chmod +x emulator
Now, you can start AVD from Android Studio normally
In this case, you don't need to set DNS server in System Preferences. You are setting the DNS server only for the emulator, avoiding other problems
A: I'm new to Android Studio and just ran into this issue. Network in the sim was working fine and stopped working for some reason. Didn't like any of the solutions above, so I poked around the AVD Manager and found an option to wipe the data on the sim.
*
*quit the sim
*open AVD Manager
*Actions > open down arrow for more options
*select Wipe Data
*restart sim
A: I tried purging all Android Studio files and reinstalling,
starting the emulator with -dns-server,
and setting the Wi-Fi DNS to 8.8.8.8;
none of them worked for me.
I found the emulator could only reach hosts by IP address.
But this post saved me:
https://www.bswen.com/2021/08/others-how-to-enable-android-emulator-internet-access.html.
1\ turn off your macOS Wi-Fi;
2\ cold boot the emulator;
3\ wait for the emulator's Wi-Fi to connect (limited connection, but it's OK);
4\ turn on your macOS Wi-Fi;
it's working now.
A: If you have Blue Coat Unified Agent, the internet won't work. Kindly uninstall it.
It can be uninstalled by going to the folder below:
/Library/Application Support/bcua
A: Open the AVD Manager and click Wipe Data.
That's it; now the Internet will work. This is how I solved my issue.
A: None of the answers worked for me on a m1 mac, I was not even able to connect to localhost for the react-native development server.
The trick for me was to turn off the cellular data "T-mobile," then it would use AndroidWiFi for internet and everything worked fine.
Here's a screenshot of my working settings:
A: I was facing the same issue, but when I followed the steps below it was resolved.
System Preferences -> Network -> Ethernet/WiFi -> Advanced -> DNS -> Add -> 8.8.8.8 ->
Add -> 8.8.4.4, then click OK, apply the changes and cold boot the Android emulator from the AVD Manager/Device Manager.
A: If it's an Android project, you can change the baseUrl to 10.0.2.2. Note this is only applicable from the Android emulator; it will not work on a phone.
e.g Api endpoint will now look like this:
val baseUri : String = "http://10.0.2.2/restapi/"
val loginEndpoint = "${baseUri}login"
A: There was an update available for my Android Studio; I updated it and it worked!
|
stackoverflow
|
{
"language": "en",
"length": 797,
"provenance": "stackexchange_0000F.jsonl.gz:862758",
"question_score": "94",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44535500"
}
|
4e2f50c951fcdb94b43a383c4c5348576e56a14b
|
Stackoverflow Stackexchange
Q: Angular ngClass and click event for toggling class In Angular, I would like to use ngClass and a click event to toggle a class. I looked around online, but some results are for Angular 1 and there isn't any clear instruction or example. Any help will be much appreciated!
In HTML, I have the following:
<div class="my_class" (click)="clickEvent($event)" ngClass="{'active': toggle}">
Some content
</div>
In .ts:
clickEvent(event) {
// Haven't really got far
var targetEle = event.srcElement.attributes.class;
}
A: If you're looking for an HTML only way of doing this in angular...
<div #myDiv class="my_class" (click)="myDiv.classList.toggle('active')">
Some content
</div>
The important bit is the #myDiv part.
It's a HTML Node reference, so you can use that variable as if it was assigned to document.querySelector('.my_class')
NOTE: this variable is scope specific, so you can use it in *ngFor statements
|
Q: Angular ngClass and click event for toggling class In Angular, I would like to use ngClass and a click event to toggle a class. I looked around online, but some results are for Angular 1 and there isn't any clear instruction or example. Any help will be much appreciated!
In HTML, I have the following:
<div class="my_class" (click)="clickEvent($event)" ngClass="{'active': toggle}">
Some content
</div>
In .ts:
clickEvent(event) {
// Haven't really got far
var targetEle = event.srcElement.attributes.class;
}
A: If you're looking for an HTML only way of doing this in angular...
<div #myDiv class="my_class" (click)="myDiv.classList.toggle('active')">
Some content
</div>
The important bit is the #myDiv part.
It's a HTML Node reference, so you can use that variable as if it was assigned to document.querySelector('.my_class')
NOTE: this variable is scope specific, so you can use it in *ngFor statements
A: Instead of having to create a function in the ts file you can toggle a variable from the template itself. You can then use the variable to apply a specific class to the element. Like so-
component.html -
<div (click)="status=!status"
[ngClass]="status ? 'success' : 'danger'">
Some content
</div>
So when status is true the success class is applied; when it is false the danger class is applied.
This will work without any additional code in the ts file.
EDIT: Recent versions of angular require the variable to be declared in the controller -
component.ts -
status: boolean = false;
A: Angular6 using the renderer2 without any variables and a clean template:
template:
<div (click)="toggleClass($event,'testClass')"></div>
in ts:
toggleClass(event: any, className: string) {
const hasClass = event.target.classList.contains(className);
if (hasClass) {
this.renderer.removeClass(event.target, className);
} else {
this.renderer.addClass(event.target, className);
}
}
One could put this in a directive too ;)
A: We can also use ngClass to assign multiple CSS classes based on multiple conditions as below:
<div
[ngClass]="{
'class-name': trueCondition,
'other-class': !trueCondition
}"
></div>
A: This should work for you.
In .html:
<div class="my_class" (click)="clickEvent()"
[ngClass]="status ? 'success' : 'danger'">
Some content
</div>
In .ts:
status: boolean = false;
clickEvent(){
this.status = !this.status;
}
A: ngClass should be wrapped in square brackets as this is a property binding. Try this:
<div class="my_class" (click)="clickEvent($event)" [ngClass]="{'active': toggle}">
Some content
</div>
In your component:
//define the toogle property
private toggle : boolean = false;
//define your method
clickEvent(event){
//if you just want to toggle the class; change toggle variable.
this.toggle = !this.toggle;
}
Hope that helps.
A: If you want to toggle text with a toggle button.
HTMLfile which is using bootstrap:
<input class="btn" (click)="muteStream()" type="button"
[ngClass]="status ? 'btn-success' : 'btn-danger'"
[value]="status ? 'unmute' : 'mute'"/>
TS file:
muteStream() {
this.status = !this.status;
}
A: So normally you would create a backing variable in the class and toggle it on click and tie a class binding to the variable. Something like:
@Component(
selector:'foo',
template:`<a (click)="onClick()"
[class.selected]="wasClicked">Link</a>
`)
export class MyComponent {
wasClicked = false;
onClick() {
this.wasClicked= !this.wasClicked;
}
}
A: You can try this.
Html.
<button *ngFor="let color of colors; let index=index" class="example1"
(click)="selectColor(index)" [class.choose__color]="viewMode == index">
<mat-icon>fiber_manual_record</mat-icon>
</button>
css.
.example1:hover {
border-bottom: 2px solid black;
}
.choose__color {
border-bottom: 2px solid black;
}
ts.
selectColor(index: any) {
this.viewMode = index;
}
|
stackoverflow
|
{
"language": "en",
"length": 520,
"provenance": "stackexchange_0000F.jsonl.gz:862762",
"question_score": "81",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44535515"
}
|
2d9587d211e5018487394872e1a73a7146cd8e7f
|
Stackoverflow Stackexchange
Q: Laravel how merge two query results into a single object I'm currently stuck on how to merge two query results into a single object. Below is my code.
EDITED
Model methods
public static function getTeamStats($competitionId, $teamId) {
return TeamCompetitionStatistics::where('competitionId', $competitionId)
->where('teamid', $teamId)
->where('periodNumber', 0)
->get();
}
public static function getTeamPosition($competitionId, $teamId){
return self::where('latest', 1)
->where('competitionId',$competitionId)
->where('competitorId', $teamId)
->get(['position', 'streak'])
->map(function($item, $key){
$item->position = $item->position . date("S", mktime(0, 0, 0, 0, $item->position, 0));
if(strpos($item->streak, '-') !== FALSE) {
$item->streak = str_replace('-', 'L', $item->streak);
}
else {
$item->streak = 'W'.$item->streak;
}
return $item;
});
}
Getting values in controller
$teamStanding = Ladder::getTeamPosition($request->competitionId, $request->id);
$teamStatistics = TeamCompetitionStatistics::getTeamStats($request->competitionId, $request->id);
$result = $teamStatistics->merge($teamStanding);
Returned result: [{'teamstanding': 'data'}, {'teamstatictics': 'data'}]
Expected output: [{'teamstanding': 'data', 'teamstatictics': 'data'}]
A: Try merge()
The merge method merges the given array or collection with the original collection.
$first = ModelName::where('<fieldName>','<searchText>')
->get();
$second = Album::where('<fieldName>','<searchText>')
->get();
$finalResult = $first->merge($second);
$finalResult->each(function($record)
{
echo $record-><fieldName>.'<br />';
});
|
Q: Laravel how merge two query results into a single object I'm currently stuck on how to merge two query results into a single object. Below is my code.
EDITED
Model methods
public static function getTeamStats($competitionId, $teamId) {
return TeamCompetitionStatistics::where('competitionId', $competitionId)
->where('teamid', $teamId)
->where('periodNumber', 0)
->get();
}
public static function getTeamPosition($competitionId, $teamId){
return self::where('latest', 1)
->where('competitionId',$competitionId)
->where('competitorId', $teamId)
->get(['position', 'streak'])
->map(function($item, $key){
$item->position = $item->position . date("S", mktime(0, 0, 0, 0, $item->position, 0));
if(strpos($item->streak, '-') !== FALSE) {
$item->streak = str_replace('-', 'L', $item->streak);
}
else {
$item->streak = 'W'.$item->streak;
}
return $item;
});
}
Getting values in controller
$teamStanding = Ladder::getTeamPosition($request->competitionId, $request->id);
$teamStatistics = TeamCompetitionStatistics::getTeamStats($request->competitionId, $request->id);
$result = $teamStatistics->merge($teamStanding);
Returned result: [{'teamstanding': 'data'}, {'teamstatictics': 'data'}]
Expected output: [{'teamstanding': 'data', 'teamstatictics': 'data'}]
A: Try merge()
The merge method merges the given array or collection with the original collection.
$first = ModelName::where('<fieldName>','<searchText>')
->get();
$second = Album::where('<fieldName>','<searchText>')
->get();
$finalResult = $first->merge($second);
$finalResult->each(function($record)
{
echo $record-><fieldName>.'<br />';
});
A: Adding my answer as the above solutions didn't quite work for me, both just added two separate objects to one array: {"Name":"A Name"},{"Surname":"A Surname"}. I had to collect my array and use first.
https://laravel.com/docs/5.4/collections#method-merge
$first = $modelone->where('Id', '1')->first(['Name']);
$second = $modeltwo->where('Thing', '1')->first(['Surname']);
$collection = collect($first);
$merged = $collection->merge($second);
$result[] = $merged->all();
return $result;
//output: [{"Name":"A Name","Surname":"A Surname"}]
A: Personally I am not a fan of transforming two possibly big queries into Collections and merging them; it seems like a lot of processing.
I normally use union(), maybe this can help others.
Laravel documentation for unions
$first = DB::table('users')
->whereNull('first_name');
$users = DB::table('users')
->whereNull('last_name')
->union($first)
->get();
A: You can use all() function.
$teamStanding = Ladder::getTeamPosition($request->competitionId, $request->id)->get();
$teamStatistics = TeamCompetitionStatistics::getTeamStats($request->competitionId, $request->id)->get();
$merged = $teamStatistics->merge($teamStanding);
$result = $merged->all();
// return [{'teamstanding': 'data', 'teamstatictics': 'data'}]
|
stackoverflow
|
{
"language": "en",
"length": 288,
"provenance": "stackexchange_0000F.jsonl.gz:862770",
"question_score": "9",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44535539"
}
|
874d9ec83fac662158ffe9b870846ec6dd7a416c
|
Stackoverflow Stackexchange
Q: ImportError: No module named 'SimpleXMLRPCServer' I was trying to get hands-on with frozen astropy. But when I try to install it, it gives
ImportError: No module named 'SimpleXMLRPCServer'
I also tried to install using pip, but it shows:
Could not find a version that satisfies the requirement xmlrpclib (from versions: )
No matching distribution found for xmlrpclib
A: The SimpleXMLRPCServer module has been merged into xmlrpc.server standard module in Python3. (https://docs.python.org/3/library/xmlrpc.server.html)
Just do "from xmlrpc.server import SimpleXMLRPCServer"
|
Q: ImportError: No module named 'SimpleXMLRPCServer' I was trying to get hands-on with frozen astropy. But when I try to install it, it gives
ImportError: No module named 'SimpleXMLRPCServer'
I also tried to install using pip, but it shows:
Could not find a version that satisfies the requirement xmlrpclib (from versions: )
No matching distribution found for xmlrpclib
A: The SimpleXMLRPCServer module has been merged into xmlrpc.server standard module in Python3. (https://docs.python.org/3/library/xmlrpc.server.html)
Just do "from xmlrpc.server import SimpleXMLRPCServer"
|
stackoverflow
|
{
"language": "en",
"length": 79,
"provenance": "stackexchange_0000F.jsonl.gz:862790",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44535589"
}
|
9bff8f5e77fff6921118d93f54a376de7251a0cb
|
Stackoverflow Stackexchange
Q: Why GLM_CONSTEXPR_SIMD? The OpenGL mathematics library defines a macro GLM_CONSTEXPR_SIMD which causes expressions like vec3(1.0f, 0.0f, 0.0f, 1.0f) to be a constexpr only when generating platform-independent code, i.e., only when GLM_ARCH is GLM_ARCH_PURE.
I assume this is done for performance reasons, but why would making something non-constexpr increase performance? And how does SIMD play a role in the decision?
A: This is most probably related to the fact that SIMD intrinsics are not defined constexpr. When you generate platform-independent code, it does not use intrinsics and thus, it can be declared constexpr. However, as soon as you pinpoint a platform e.g. with SSE/AVX, in order to benefit from these SIMD functions, constexpr must be stripped away.
Additional info available at Constexpr and SSE intrinsics
|
Q: Why GLM_CONSTEXPR_SIMD? The OpenGL mathematics library defines a macro GLM_CONSTEXPR_SIMD which causes expressions like vec3(1.0f, 0.0f, 0.0f, 1.0f) to be a constexpr only when generating platform-independent code, i.e., only when GLM_ARCH is GLM_ARCH_PURE.
I assume this is done for performance reasons, but why would making something non-constexpr increase performance? And how does SIMD play a role in the decision?
A: This is most probably related to the fact that SIMD intrinsics are not defined constexpr. When you generate platform-independent code, it does not use intrinsics and thus, it can be declared constexpr. However, as soon as you pinpoint a platform e.g. with SSE/AVX, in order to benefit from these SIMD functions, constexpr must be stripped away.
Additional info available at Constexpr and SSE intrinsics
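A small sketch of the underlying restriction (not GLM's actual code): a plain aggregate can be initialized at compile time, but an SSE intrinsic is not declared constexpr, so the equivalent SIMD-backed initialization typically fails to compile:
#include <xmmintrin.h>

struct pure_vec4 { float x, y, z, w; };              // platform-independent storage

int main() {
    constexpr pure_vec4 a{1.0f, 0.0f, 0.0f, 1.0f};   // OK: evaluated at compile time

    // constexpr __m128 b = _mm_set_ps(1.0f, 0.0f, 0.0f, 1.0f);
    // ^ rejected by most compilers: _mm_set_ps is not a constexpr function

    __m128 b = _mm_set_ps(1.0f, 0.0f, 0.0f, 1.0f);   // fine at run time
    (void)a; (void)b;
    return 0;
}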
|
stackoverflow
|
{
"language": "en",
"length": 124,
"provenance": "stackexchange_0000F.jsonl.gz:862795",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44535608"
}
|
2a4ef1415dd744f368a1327e5b54a97e92eaa1a6
|
Stackoverflow Stackexchange
Q: installing progressbar Python package I get this error:
E:\opensource_codes\semi-auto-anno\src>pip install progressbar
Collecting progressbar
Downloading progressbar-2.3.tar.gz
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\Users\mona6\AppData\Local\Temp\pip-build-0_37al8d\progressbar\setup.py", line 5, in <module>
import progressbar
File "C:\Users\mona6\AppData\Local\Temp\pip-build-0_37al8d\progressbar\progressbar\__init__.py", line 59, in <module>
from progressbar.widgets import *
File "C:\Users\mona6\AppData\Local\Temp\pip-build-0_37al8d\progressbar\progressbar\widgets.py", line 121, in <module>
class FileTransferSpeed(Widget):
File "C:\ProgramData\Anaconda3\lib\abc.py", line 133, in __new__
cls = super().__new__(mcls, name, bases, namespace)
ValueError: 'format' in __slots__ conflicts with class variable
----------------------------------------
Command "python setup.py egg_info" failed with error code 1 in C:\Users\mona6\AppData\Local\Temp\pip-build-0_37al8d\progressbar\
and I have:
E:\opensource_codes\semi-auto-anno\src>python
Python 3.6.0 |Anaconda 4.3.1 (64-bit)| (default, Dec 23 2016, 11:57:41) [MSC v.1900 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
how should I install progressbar for Python 3.6.0?
A: conda install progressbar2
or maybe
pip install progressbar2
|
Q: installing progressbar Python package I get this error:
E:\opensource_codes\semi-auto-anno\src>pip install progressbar
Collecting progressbar
Downloading progressbar-2.3.tar.gz
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\Users\mona6\AppData\Local\Temp\pip-build-0_37al8d\progressbar\setup.py", line 5, in <module>
import progressbar
File "C:\Users\mona6\AppData\Local\Temp\pip-build-0_37al8d\progressbar\progressbar\__init__.py", line 59, in <module>
from progressbar.widgets import *
File "C:\Users\mona6\AppData\Local\Temp\pip-build-0_37al8d\progressbar\progressbar\widgets.py", line 121, in <module>
class FileTransferSpeed(Widget):
File "C:\ProgramData\Anaconda3\lib\abc.py", line 133, in __new__
cls = super().__new__(mcls, name, bases, namespace)
ValueError: 'format' in __slots__ conflicts with class variable
----------------------------------------
Command "python setup.py egg_info" failed with error code 1 in C:\Users\mona6\AppData\Local\Temp\pip-build-0_37al8d\progressbar\
and I have:
E:\opensource_codes\semi-auto-anno\src>python
Python 3.6.0 |Anaconda 4.3.1 (64-bit)| (default, Dec 23 2016, 11:57:41) [MSC v.1900 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
how should I install progressbar for Python 3.6.0?
A: conda install progressbar2
or maybe
pip install progressbar2
A: Not sure if it is the best method, but it works for me:
$ git clone https://github.com/coagulant/progressbar-python3.git
Cloning into 'progressbar-python3'...
remote: Counting objects: 30, done.
remote: Total 30 (delta 0), reused 0 (delta 0), pack-reused 30
Unpacking objects: 100% (30/30), done.
mona6@DESKTOP-0JQ770H MINGW64 /e/opensource_codes
$ ls
cnpy/ depth-masking-src/ gesture_recognition/ semi-auto-anno/
cnpy_cmake/ depth-masking-src.zip progressbar-python3/
mona6@DESKTOP-0JQ770H MINGW64 /e/opensource_codes
$ cd progressbar-python3/
mona6@DESKTOP-0JQ770H MINGW64 /e/opensource_codes/progressbar-python3 (master)
$ ls
ChangeLog.yaml LICENSE.txt progressbar/ README.txt tox.ini
examples.py* MANIFEST.in README.md setup.py*
E:\opensource_codes\progressbar-python3>python setup.py install
C:\ProgramData\Anaconda3\lib\site-packages\setuptools-27.2.0-py3.6.egg\setuptools\dist.py:331: UserWarning: Normalizing '2.3dev' to '2.3.dev0'
running install
running bdist_egg
running egg_info
creating progressbar.egg-info
writing progressbar.egg-info\PKG-INFO
writing dependency_links to progressbar.egg-info\dependency_links.txt
writing top-level names to progressbar.egg-info\top_level.txt
writing manifest file 'progressbar.egg-info\SOURCES.txt'
reading manifest file 'progressbar.egg-info\SOURCES.txt'
reading manifest template 'MANIFEST.in'
writing manifest file 'progressbar.egg-info\SOURCES.txt'
installing library code to build\bdist.win-amd64\egg
running install_lib
running build_py
creating build
creating build\lib
creating build\lib\progressbar
copying progressbar\compat.py -> build\lib\progressbar
copying progressbar\progressbar.py -> build\lib\progressbar
copying progressbar\widgets.py -> build\lib\progressbar
copying progressbar\__init__.py -> build\lib\progressbar
creating build\bdist.win-amd64
creating build\bdist.win-amd64\egg
creating build\bdist.win-amd64\egg\progressbar
copying build\lib\progressbar\compat.py -> build\bdist.win-amd64\egg\progressbar
copying build\lib\progressbar\progressbar.py -> build\bdist.win-amd64\egg\progressbar
copying build\lib\progressbar\widgets.py -> build\bdist.win-amd64\egg\progressbar
copying build\lib\progressbar\__init__.py -> build\bdist.win-amd64\egg\progressbar
byte-compiling build\bdist.win-amd64\egg\progressbar\compat.py to compat.cpython-36.pyc
byte-compiling build\bdist.win-amd64\egg\progressbar\progressbar.py to progressbar.cpython-36.pyc
byte-compiling build\bdist.win-amd64\egg\progressbar\widgets.py to widgets.cpython-36.pyc
byte-compiling build\bdist.win-amd64\egg\progressbar\__init__.py to __init__.cpython-36.pyc
creating build\bdist.win-amd64\egg\EGG-INFO
copying progressbar.egg-info\PKG-INFO -> build\bdist.win-amd64\egg\EGG-INFO
copying progressbar.egg-info\SOURCES.txt -> build\bdist.win-amd64\egg\EGG-INFO
copying progressbar.egg-info\dependency_links.txt -> build\bdist.win-amd64\egg\EGG-INFO
copying progressbar.egg-info\top_level.txt -> build\bdist.win-amd64\egg\EGG-INFO
zip_safe flag not set; analyzing archive contents...
creating dist
creating 'dist\progressbar-2.3.dev0-py3.6.egg' and adding 'build\bdist.win-amd64\egg' to it
removing 'build\bdist.win-amd64\egg' (and everything under it)
Processing progressbar-2.3.dev0-py3.6.egg
Copying progressbar-2.3.dev0-py3.6.egg to c:\programdata\anaconda3\lib\site-packages
Adding progressbar 2.3.dev0 to easy-install.pth file
Installed c:\programdata\anaconda3\lib\site-packages\progressbar-2.3.dev0-py3.6.egg
Processing dependencies for progressbar==2.3.dev0
Finished processing dependencies for progressbar==2.3.dev0
E:\opensource_codes\progressbar-python3>python
Python 3.6.0 |Anaconda 4.3.1 (64-bit)| (default, Dec 23 2016, 11:57:41) [MSC v.1900 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import progressbar
>>>
|
stackoverflow
|
{
"language": "en",
"length": 421,
"provenance": "stackexchange_0000F.jsonl.gz:862798",
"question_score": "7",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44535616"
}
|
d8a20859f15445263352e80d76ea1d09e1b1f3e9
|
Stackoverflow Stackexchange
Q: Convert Scss to Sass mixin I'm looking for some documentation on Mixins specifically for .Sass
Not .Scss
but due to the naming conventions they keep coming up with the same search results.
@function calculateRem($size) {
$remSize: $size / 16px;
@return $remSize * 1rem;
}
@mixin font-size($size) {
font-size: $size;
font-size: calculateRem($size);
}
@mixin becomes =
but beyond this I don't know what @function becomes, etc.
A: You can use sass-convert to convert your file from scss to sass
your code in sass is
@function calculateRem($size)
$remSize: $size / 16px
@return $remSize * 1rem
=font-size($size)
font-size: $size
font-size: calculateRem($size)
Reference
http://sass-lang.com/documentation/file.SASS_REFERENCE.html#syntax
|
Q: Convert Scss to Sass mixin I'm looking for some documentation on Mixins specifically for .Sass
Not .Scss
but due to the naming conventions they keep coming up with the same search results.
@function calculateRem($size) {
$remSize: $size / 16px;
@return $remSize * 1rem;
}
@mixin font-size($size) {
font-size: $size;
font-size: calculateRem($size);
}
@mixin becomes =
but beyond this I don't know what @function becomes, etc.
A: You can use sass-convert to convert your file from scss to sass
your code in sass is
@function calculateRem($size)
$remSize: $size / 16px
@return $remSize * 1rem
=font-size($size)
font-size: $size
font-size: calculateRem($size)
Reference
http://sass-lang.com/documentation/file.SASS_REFERENCE.html#syntax
|
stackoverflow
|
{
"language": "en",
"length": 102,
"provenance": "stackexchange_0000F.jsonl.gz:862828",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44535717"
}
|
72b31e9517986ace1f790193a86b435cd14fb326
|
Stackoverflow Stackexchange
Q: C++ - Parsing number from std::string I need to iterate through a shopping list which I have put into a vector and further separate each line by the quantity and item name. How can I get a pair with the number as the first item and the item name as the second?
Example:
vector<string> shopping_list = {"3 Apples", "5 Mandarin Oranges", "24 Eggs", "152 Chickens"}
I'm not sure how big the number will be so I can't use a constant index.
Ideally I would like a vector of pairs.
A: You can write a function to split quantity and item like the following:
#include <sstream>
#include <string>
#include <utility>
auto split( const std::string &p ) {
    int num;
    std::string item;
    std::istringstream ss(p);
    ss >> num; // assuming format is integer followed by space then item
    std::getline(ss, item); // remaining string (note: keeps the leading space)
    return std::make_pair(num, item);
}
Then use std::transform to get vector of pairs :
std::transform( shopping_list.cbegin(),
shopping_list.cend(),
std::back_inserter(items),
split );
See Here
|
Q: C++ - Parsing number from std::string I need to iterate through a shopping list which I have put into a vector and further separate each line by the quantity and item name. How can I get a pair with the number as the first item and the item name as the second?
Example:
vector<string> shopping_list = {"3 Apples", "5 Mandarin Oranges", "24 Eggs", "152 Chickens"}
I'm not sure how big the number will be so I can't use a constant index.
Ideally I would like a vector of pairs.
A: You can write a function to split quantity and item like the following:
#include <sstream>
#include <string>
#include <utility>
auto split( const std::string &p ) {
    int num;
    std::string item;
    std::istringstream ss(p);
    ss >> num; // assuming format is integer followed by space then item
    std::getline(ss, item); // remaining string (note: keeps the leading space)
    return std::make_pair(num, item);
}
Then use std::transform to get vector of pairs :
std::transform( shopping_list.cbegin(),
shopping_list.cend(),
std::back_inserter(items),
split );
See Here
A: You can use std::stringstream as follows.
vector< pair<int,string> > myList;
for(int i=0;i<shopping_list.size();i++) {
int num;
string item;
std::stringstream ss;
ss<<shopping_list[i];
ss>>num;
ss>>item;
myList.push_back(make_pair(num,item));
...
}
num is your required number.
A: I suggest the following solution without stringstream, just as an alternative:
#include <iostream>
#include <string>
#include <vector>
using namespace std;
int main() {
vector<string> shopping_list = { "3 Apples", "5 Mandarin Oranges", "24 Eggs", "152 Chickens" };
vector< pair<int, string> > pairs_list;
for (string s : shopping_list)
{
int num;
string name;
int space_pos = s.find_first_of(" ");
if (space_pos == std::string::npos)
continue; // format is broken : no spaces
try{
name = s.substr(space_pos + 1);
num = std::stoi(s.substr(0, space_pos));
}
catch (...)
{
continue; // format is broken : any problem
}
pairs_list.push_back(make_pair(num, name));
}
for (auto p : pairs_list)
{
cout << p.first << " : " << p.second << endl;
}
return 0;
}
|
stackoverflow
|
{
"language": "en",
"length": 308,
"provenance": "stackexchange_0000F.jsonl.gz:862852",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44535788"
}
|
a8cffcf292cdbd305c11f21b4365cc3354c7544c
|
Stackoverflow Stackexchange
Q: Add cancel button on progress dialog I am trying to enable the Cancel button on the ProgressDialog in Xamarin Android, but it doesn't appear.
This is what I did until now:
ProgressDialog progressDialog = new ProgressDialog(Context);
progressDialog.SetProgressStyle(ProgressDialogStyle.Horizontal);
progressDialog.SetCancelable(true);
progressDialog.CancelEvent += (o, e) =>
{
// Cancel download
};
progressDialog.Show();
Related questions: How to set cancel button in Progress Dialog? or Android ProgressDialog can't add Cancel button
A: Note: ProgressDialog is now deprecated in API-26
var progress = new ProgressDialog(this);
progress.SetTitle("Syncing Events");
progress.Indeterminate = false;
progress.SetProgressStyle(ProgressDialogStyle.Horizontal);
progress.Max = totalEvents;
progress.Progress = currentEvent;
progress.SetButton(-3, "CancelLeft", (sender, e) => {
Log.Debug("SO", "Cancel");
});
progress.SetButton(-2, "CancelMiddle", (sender, e) =>
{
Log.Debug("SO", "Cancel");
});
progress.SetButton(-1, "CancelRight", (sender, e) =>
{
Log.Debug("SO", "Cancel");
});
progress.Show();
|
Q: Add cancel button on progress dialog I am trying to enable the Cancel button on the ProgressDialog in Xamarin Android, but it doesn't appear.
This is what I did until now:
ProgressDialog progressDialog = new ProgressDialog(Context);
progressDialog.SetProgressStyle(ProgressDialogStyle.Horizontal);
progressDialog.SetCancelable(true);
progressDialog.CancelEvent += (o, e) =>
{
// Cancel download
};
progressDialog.Show();
Related questions: How to set cancel button in Progress Dialog? or Android ProgressDialog can't add Cancel button
A: Note: ProgressDialog is now deprecated in API-26
var progress = new ProgressDialog(this);
progress.SetTitle("Syncing Events");
progress.Indeterminate = false;
progress.SetProgressStyle(ProgressDialogStyle.Horizontal);
progress.Max = totalEvents;
progress.Progress = currentEvent;
progress.SetButton(-3, "CancelLeft", (sender, e) => {
Log.Debug("SO", "Cancel");
});
progress.SetButton(-2, "CancelMiddle", (sender, e) =>
{
Log.Debug("SO", "Cancel");
});
progress.SetButton(-1, "CancelRight", (sender, e) =>
{
Log.Debug("SO", "Cancel");
});
progress.Show();
A: I managed to do it the following way:
progressDialog.SetButton("Cancel", new EventHandler<DialogClickEventArgs>(
(s, args) => {
// Cancel download
}
));
A: ProgressDialog myDialog = new ProgressDialog(YourActivity.this);
myDialog.setMessage("Loading...");
myDialog.setCancelable(false);
myDialog.setButton(DialogInterface.BUTTON_NEGATIVE, "Cancel", new DialogInterface.OnClickListener() {
@Override
public void onClick(DialogInterface dialog, int which) {
dialog.dismiss();
}
});
myDialog.show();
|
stackoverflow
|
{
"language": "en",
"length": 166,
"provenance": "stackexchange_0000F.jsonl.gz:862858",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44535825"
}
|
a5b216d5c7b5590b79de4469347b57d2ed6c1a6c
|
Stackoverflow Stackexchange
Q: Unable to make the Checkbox work with redux-form and react-semantic-ui I'm trying to use redux-form with react-semantic-ui and am having trouble with the Checkbox component. The Checkbox is not being checked. I've followed the example from the redux-form documentation, but no luck. Here's the code snippet:
renderCheckBox = ({ input, label }) => {
console.log(input.value);
return (
<Form.Field>
<Checkbox
label={label}
checked={input.value ? true : false}
onChange={input.onChange}
/>
</Form.Field>
);
};
<Field
name="activated"
label="Activate?"
component={this.renderCheckBox}
/>
The output of console.log(input.value) is empty.
A: Reusable redux form checkbox with semantic ui
import React from 'react';
import { object } from 'prop-types';
import { Field } from 'redux-form/immutable';
import { Checkbox as CheckboxUI } from 'semantic-ui-react';
const Checkbox = ({
input: { value, onChange, ...input },
meta: { touched, error },
...rest
}) => (
<div>
<CheckboxUI
{...input}
{...rest}
defaultChecked={!!value}
onChange={(e, data) => onChange(data.checked)}
type="checkbox"
/>
{touched && error && <span>{error}</span>}
</div>
);
Checkbox.propTypes = {
input: object.isRequired,
meta: object.isRequired
};
Checkbox.defaultProps = {
input: null,
meta: null
};
export default props => <Field {...props} component={Checkbox} />;
How to use?
import Checkbox from './Checkbox';
<form>
...
<Checkbox name="example" />
...
</form>
|
Q: Unable to make the Checkbox work with redux-form and react-semantic-ui I'm trying to use redux-form with react-semantic-ui and am having trouble with the Checkbox component. The Checkbox is not being checked. I've followed the example from the redux-form documentation, but no luck. Here's the code snippet:
renderCheckBox = ({ input, label }) => {
console.log(input.value);
return (
<Form.Field>
<Checkbox
label={label}
checked={input.value ? true : false}
onChange={input.onChange}
/>
</Form.Field>
);
};
<Field
name="activated"
label="Activate?"
component={this.renderCheckBox}
/>
The output of console.log(input.value) is empty.
A: Reusable redux form checkbox with semantic ui
import React from 'react';
import { object } from 'prop-types';
import { Field } from 'redux-form/immutable';
import { Checkbox as CheckboxUI } from 'semantic-ui-react';
const Checkbox = ({
input: { value, onChange, ...input },
meta: { touched, error },
...rest
}) => (
<div>
<CheckboxUI
{...input}
{...rest}
defaultChecked={!!value}
onChange={(e, data) => onChange(data.checked)}
type="checkbox"
/>
{touched && error && <span>{error}</span>}
</div>
);
Checkbox.propTypes = {
input: object.isRequired,
meta: object.isRequired
};
Checkbox.defaultProps = {
input: null,
meta: null
};
export default props => <Field {...props} component={Checkbox} />;
How to use?
import Checkbox from './Checkbox';
<form>
...
<Checkbox name="example" />
...
</form>
A: If you want to know whether the checkbox is checked or not, you have to use
onChange={(e, { checked }) => input.onChange(checked)}
instead of
onChange={input.onChange}
Here's a working example
|
stackoverflow
|
{
"language": "en",
"length": 220,
"provenance": "stackexchange_0000F.jsonl.gz:862863",
"question_score": "7",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44535840"
}
|
ee6fdb77d4ed0234375ceb97407ce5d3425fe7bc
|
Stackoverflow Stackexchange
Q: how to send email directly in React Native App I am programming a function to send an email to a Gmail address directly from a React Native app.
I searched the Internet and tried these libraries: https://github.com/anarchicknight/react-native-communications, https://github.com/chirag04/react-native-mail.
However, they only show me the view of the Gmail app which I have installed on my device.
I want the React Native app to send the email directly to the address.
The device I tested on runs on the Android platform.
Thank you so much
A: I have tried, and so far succeeded in testing with iOS, with react-native-email ("npm install react-native-email").
There is a bit of fluffing around when sending the first email as you have to "login" to your email account. But otherwise, test emails are going through fine.
Also, SendPulse is a bulk newsletter service, not for individual emails.
One annoying caveat: it won't work in your emulator. It will return a URL error when you click the send button. But it works fine on a real device. I'm using Expo (and who wouldn't) and it works fine on my iPhone.
Complete code for testing purposes here: https://github.com/tiaanduplessis/react-native-email
|
Q: how to send email directly in React Native App I am programming a function to send an email to a Gmail address directly from a React Native app.
I searched the Internet and tried these libraries: https://github.com/anarchicknight/react-native-communications, https://github.com/chirag04/react-native-mail.
However, they only show me the view of the Gmail app which I have installed on my device.
I want the React Native app to send the email directly to the address.
The device I tested on runs on the Android platform.
Thank you so much
A: I have tried, and so far succeeded in testing with iOS, with react-native-email ("npm install react-native-email").
There is a bit of fluffing around when sending the first email as you have to "login" to your email account. But otherwise, test emails are going through fine.
Also, SendPulse is a bulk newsletter service, not for individual emails.
One annoying caveat: it won't work in your emulator. It will return a URL error when you click the send button. But it works fine on a real device. I'm using Expo (and who wouldn't) and it works fine on my iPhone.
Complete code for testing purposes here: https://github.com/tiaanduplessis/react-native-email
A: You need an email server or an email service to send an email; there is no way you can send an email directly from the client side.
There are several of them on the internet; you can try MailGun or SendPulse, they have some good free tiers.
Your job is just calling a simple POST method from your app to their APIs.
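As a rough sketch of such a call (the domain, key and addresses are placeholders, and the exact endpoint and fields should be checked against the Mailgun docs; also note that shipping an API key inside the app is insecure, so in practice you would proxy this through your own backend):
// 'base-64' is an assumed helper package, since btoa is not always available in React Native
import base64 from 'base-64';

const form = [
  'from=' + encodeURIComponent('My App <app@YOUR_DOMAIN>'),
  'to=' + encodeURIComponent('someone@gmail.com'),
  'subject=' + encodeURIComponent('Hello'),
  'text=' + encodeURIComponent('Sent through a mail service, not from the device itself.'),
].join('&');

fetch('https://api.mailgun.net/v3/YOUR_DOMAIN/messages', {
  method: 'POST',
  headers: {
    Authorization: 'Basic ' + base64.encode('api:YOUR_API_KEY'),
    'Content-Type': 'application/x-www-form-urlencoded',
  },
  body: form,
})
  .then(res => res.json())
  .then(json => console.log('queued:', json))
  .catch(err => console.warn('send failed', err));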
|
stackoverflow
|
{
"language": "en",
"length": 241,
"provenance": "stackexchange_0000F.jsonl.gz:862872",
"question_score": "10",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44535860"
}
|
86e5b2b525f4769a37e9f8f071961a8dde37d824
|
Stackoverflow Stackexchange
Q: 400 (Bad Request) in image url angular js ImageUrl/{{product.thumb_vImage}} 400 (Bad Request)
I am getting the above error in my console; I want to know the reason for it.
<div class="img" ng-repeat="product in products| orderBy : sort_by |limitTo:limit track by $index " >
<a ng-click="displayProduct(product)" data-toggle="modal" data-target="#modal_product">
<img class="image_{{product.iProductID}}" src="{{product.thumb_vImage}}" alt="">
<div class="overlay"></div>
</a>
</div>
A: It should be like this: use {{}} instead of [[]], and ng-src instead of src,
<img class="image_{{product.iProductID}}" ng-src="{{product.thumb_vImage}}" alt="">
|
Q: 400 (Bad Request) in image url angular js ImageUrl/{{product.thumb_vImage}} 400 (Bad Request)
I am getting the above error in my console; I want to know the reason for it.
<div class="img" ng-repeat="product in products| orderBy : sort_by |limitTo:limit track by $index " >
<a ng-click="displayProduct(product)" data-toggle="modal" data-target="#modal_product">
<img class="image_{{product.iProductID}}" src="{{product.thumb_vImage}}" alt="">
<div class="overlay"></div>
</a>
</div>
A: It should be like this: use {{}} instead of [[]], and ng-src instead of src,
<img class="image_{{product.iProductID}}" ng-src="{{product.thumb_vImage}}" alt="">
|
stackoverflow
|
{
"language": "en",
"length": 74,
"provenance": "stackexchange_0000F.jsonl.gz:862874",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44535862"
}
|
5f43480764a13cd634d3ac6e824b14170b7679c2
|
Stackoverflow Stackexchange
Q: How to implement Random sleep in golang I am trying to implement random time sleep (in Golang)
r := rand.Intn(10)
time.Sleep(100 * time.Millisecond) //working
time.Sleep(r * time.Microsecond) // Not working (mismatched types int and time.Duration)
A: Match the types of argument to time.Sleep:
r := rand.Intn(10)
time.Sleep(time.Duration(r) * time.Microsecond)
This works because time.Duration has int64 as its underlying type:
type Duration int64
Docs: https://golang.org/pkg/time/#Duration
|
Q: How to implement Random sleep in golang I am trying to implement random time sleep (in Golang)
r := rand.Intn(10)
time.Sleep(100 * time.Millisecond) //working
time.Sleep(r * time.Microsecond) // Not working (mismatched types int and time.Duration)
A: Match the types of argument to time.Sleep:
r := rand.Intn(10)
time.Sleep(time.Duration(r) * time.Microsecond)
This works because time.Duration has int64 as its underlying type:
type Duration int64
Docs: https://golang.org/pkg/time/#Duration
A: If you run the same rand.Intn call several times, you will always see the same number in the output,
just like it's written in the official documentation https://golang.org/pkg/math/rand/
Top-level functions, such as Float64 and Int, use a default shared Source that produces a deterministic sequence of values each time a program is run. Use the Seed function to initialize the default Source if different behavior is required for each run.
It should rather look like
rand.Seed(time.Now().UnixNano())
r := rand.Intn(100)
time.Sleep(time.Duration(r) * time.Millisecond)
|
stackoverflow
|
{
"language": "en",
"length": 147,
"provenance": "stackexchange_0000F.jsonl.gz:862931",
"question_score": "31",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44536045"
}
|
5da10387ea7000e342eb80c5405ec4dee83bb3a9
|
Stackoverflow Stackexchange
Q: What's the difference between !! and ? in Kotlin? I am new to Kotlin. I want to know the difference between these two, !! and ?, in the code below.
Below, there are two snippets: the first uses !! for mCurrentDataset and the second uses ? for the same variable.
if(!mCurrentDataset!!.load(mDataSetString.get(mCurrentDataSelectionIndex), STORAGE_TYPE.STORAGE_APPRESOURCE))
{
Log.d("MyActivity","Failed to load data.")
return false
}
if(!mCurrentDataset?.load(mDataSetString.get(mCurrentDataSelectionIndex), STORAGE_TYPE.STORAGE_APPRESOURCE)!!)
{
Log.d("MyActivity","Failed to load data.")
return false
}
A: In Addition to what Alexander said and as shown in the docs too,
the ?. safe call operator is very useful in chaining, something like this
student?.department?.hod?.name
If there is no student, it returns null; otherwise it looks for his department. If the department doesn't exist it returns null; otherwise it looks for the hod (head of department), and so on.
If any one of student, department or hod is null then the result will be null.
|
Q: What's the difference between !! and ? in Kotlin? I am new to Kotlin. I want to know the difference between these two, !! and ?, in the code below.
Below, there are two snippets: the first uses !! for mCurrentDataset and the second uses ? for the same variable.
if(!mCurrentDataset!!.load(mDataSetString.get(mCurrentDataSelectionIndex), STORAGE_TYPE.STORAGE_APPRESOURCE))
{
Log.d("MyActivity","Failed to load data.")
return false
}
if(!mCurrentDataset?.load(mDataSetString.get(mCurrentDataSelectionIndex), STORAGE_TYPE.STORAGE_APPRESOURCE)!!)
{
Log.d("MyActivity","Failed to load data.")
return false
}
A: In Addition to what Alexander said and as shown in the docs too,
the ?. safe call operator is very useful in chaining, something like this
student?.department?.hod?.name
If there is no student, it returns null; otherwise it looks for his department. If the department doesn't exist it returns null; otherwise it looks for the hod (head of department), and so on.
If any one of student, department or hod is null then the result will be null.
A: As it said in Kotlin reference, !! is an option for NPE-lovers :)
a!!.length
will return a non-null value of a.length or throw a NullPointerException if a is null:
val a: String? = null
print(a!!.length) // >>> NPE: trying to get length of null
a?.length
returns a.length if a is not null, and null otherwise:
val a: String? = null
print(a?.length) // >>> null is printed in the console
To sum up:
+------------+--------------------+---------------------+----------------------+
| a: String? | a.length | a?.length | a!!.length |
+------------+--------------------+---------------------+----------------------+
| "cat" | Compile time error | 3 | 3 |
| null | Compile time error | null | NullPointerException |
+------------+--------------------+---------------------+----------------------+
Might be useful: What is a NullPointerException?
A: the precedence of operators !, ?., !! is ?. > !! > !.
the !! operator will raising KotlinNullPointerException when operates on a null reference, for example:
null!!;// raise NullPointerException
The safe call operator ?. will return null when it operates on a null reference, for example:
(null as? String)?.length; // return null;
The !! operator in your second approach may raise a NullPointerException if the left side is null, for example:
mCurrentDataset?.load(..)!!
^-------------^
|
when mCurrentDataset == null || load() == null, a NullPointerException is raised.
You can use the elvis operator ?: instead of the !! operator in your case, for example:
!(mCurrentDataset?.load(..)?:false)
A: Safe Calls operator
In Kotlin
var a = x?.length;
Equivalent code in Java
int a = valueOfInt();
int valueOfInt() {
if (x != null) {
return x;
} else {
return null;
}
}
Side chain rule
bob?.department?.head?.name
it can be read as->
If bob is not null give me the department ,
if department is not null give me the head,
if head is not null give me the name.
If any of it is null, then it returns null
? before Data type
If ? used before data type like:
val b: String? = null
it means you can assign null value to it otherwise null value can't be assigned to it.
The !! Operator
For those who like to have Null Pointer Exception (NPE) in their program.
val l = b!!.length
this will return a non-null value of b if b is not null OR throw an NPE if b is null
A: The '!!' double-bang operator always returns a non-null value (or throws), while the '?' safe call operator returns the value if it is not null, and null otherwise.
!! is an unsafe conversion of a nullable type (T?) to a non-nullable type (T). It will throw a NullPointerException if the value is null.
It is documented here along with Kotlin means of null-safety.
ref - hotkey
A: SafeCall Operator(?):
var a: String = "abc"
a = null //compile time error
val b: String? = null
val result = b?.length//returns null
Assertion Operator(!!):
val b: String? = "dd" //any value or null
val l = b!!.length
//this throws a NullPointerException if b is null, otherwise returns the actual length
|
stackoverflow
|
{
"language": "en",
"length": 626,
"provenance": "stackexchange_0000F.jsonl.gz:862954",
"question_score": "89",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44536114"
}
|
e511a4b0e7adbcc70e3bca249e22141036ed27e4
|
Stackoverflow Stackexchange
Q: Alarm Functionality In ReactNative For Android And iOS I want to make an app that plays music every day at a specific time (for example: 07:00), even if the app is in the background or turned off.
I have done this in Android with Java & the Android SDK using AlarmManager; I'm looking for something like that for React Native.
I have searched for libraries but did not find useful resources.
A: check this out https://github.com/wmcmahan/react-native-calendar-events , The alarms associated with the calendar event, as an array of alarm objects.
|
Q: Alarm Functionality In ReactNative For Android And iOS I want to make an app that plays music every day at a specific time (for example: 07:00), even if the app is in the background or turned off.
I have done this in Android with Java & the Android SDK using AlarmManager; I'm looking for something like that for React Native.
I have searched for libraries but did not find useful resources.
A: check this out https://github.com/wmcmahan/react-native-calendar-events , The alarms associated with the calendar event, as an array of alarm objects.
|
stackoverflow
|
{
"language": "en",
"length": 89,
"provenance": "stackexchange_0000F.jsonl.gz:862956",
"question_score": "13",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44536122"
}
|
4133aa1480718e1eb84141a8c02123ccff027c60
|
Stackoverflow Stackexchange
Q: Call by value in C I'm new to programming and I am currently working on C.
I learned that C does not have call by reference. The programs that we write to pass the addresses of actual parameters to the formal parameters are also call by value in C.
Correct me if I'm wrong. However, I ran this program:
//Swapping of two numbers using functions.
#include <stdio.h>
void swap(int *,int *);
void main()
{
int x,y;
printf ("Enter the values of x and y : ");
scanf("%d %d",&x,&y);
swap(x,y);
printf("The value of x = %d and y = %d",x,y);
}
void swap(int *a,int *b)
{
int temp;
temp=*b;
*b=*a;
*a=temp;
}
It compiles just fine; however, I'm getting a segmentation fault at runtime.
It asks me to enter the values of X and Y and then gives a segmentation fault.
Please help!!
A: you are sending an int to a function that expects int*, thus when you are dereferencing - temp=*b; you are trying to access memory you don't own -> segfault. Call swap like this: swap(&x,&y);
|
Q: Call by value in C I'm new to programming and I am currently working on C.
I learned that C does not have call by reference. The programs that we write to pass the addresses of actual parameters to the formal parameters are also call by value in C.
Correct me if I'm wrong. However, I ran this program:
//Swapping of two numbers using functions.
#include <stdio.h>
void swap(int *,int *);
void main()
{
int x,y;
printf ("Enter the values of x and y : ");
scanf("%d %d",&x,&y);
swap(x,y);
printf("The value of x = %d and y = %d",x,y);
}
void swap(int *a,int *b)
{
int temp;
temp=*b;
*b=*a;
*a=temp;
}
It compiles just fine; however, I'm getting a segmentation fault at runtime.
It asks me to enter the values of X and Y and then gives a segmentation fault.
Please help!!
A: you are sending an int to a function that expects int*, thus when you are dereferencing - temp=*b; you are trying to access memory you don't own -> segfault. Call swap like this: swap(&x,&y);
A: So close
swap(&x,&y);
You were not passing references (pointers)
A: To avoid similar segfaults at runtime, always compile with -Wall.
Actually, there is a way to pass-by-reference in C, just change two lines in your code like this:
#define swap(x,y) swap_impl__(&(x), &(y))
static void swap_impl__(int *a, int *b)
{
int temp;
temp=*b;
*b=*a;
*a=temp;
}
void main()
{
int x,y;
printf ("Enter the values of x and y : ");
scanf("%d %d",&x,&y);
swap(x,y);
printf("The value of x = %d and y = %d",x,y);
}
A: The call-by-value method of passing arguments to a function copies the actual value of an argument into the function's formal parameter. In this case, changes made to the parameter inside the function have no effect on the argument.
But here you are passing plain values
swap(x,y)
while the function takes them as addresses
void swap(int *a,int *b)
so the function interprets the values passed by your variables as addresses.
For example, if you pass something like
swap(x,y)
and
we have x=100 and y=200, then the function assumes 100 and 200 to be addresses,
and trying to access them will definitely give you an error, as they may not exist or may point to garbage.
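For completeness, here is a minimal corrected sketch of the program from the question (a sketch only, assuming the usual stdio.h header and an int main signature):
#include <stdio.h>

void swap(int *a, int *b)
{
    int temp = *b;
    *b = *a;
    *a = temp;
}

int main(void)
{
    int x, y;
    printf("Enter the values of x and y : ");
    if (scanf("%d %d", &x, &y) == 2) {
        swap(&x, &y); /* pass the addresses, not the values */
        printf("The value of x = %d and y = %d\n", x, y);
    }
    return 0;
}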
|
stackoverflow
|
{
"language": "en",
"length": 379,
"provenance": "stackexchange_0000F.jsonl.gz:862966",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44536143"
}
|
da2e2c02826a71aa0c116a5eb4b7f989c2a94f34
|
Stackoverflow Stackexchange
Q: Module compiled with swift 4.0 cannot be imported in swift 3.0.2 I just used the Xcode 9 beta version and did not even compile my app. Now I am not able to run it in Xcode 8. The framework import is giving an error. I have to give the archive. Help please. In the snapshot attached, I am importing an external framework, MMCardView, written in Swift. Now that is throwing an error in Xcode 8. What should I do?
A: Try cleaning the project from Product -> Clean in Xcode 8 and build again.
|
Q: Module compiled with swift 4.0 cannot be imported in swift 3.0.2 I just used the Xcode 9 beta version and did not even compile my app. Now I am not able to run it in Xcode 8. The framework import is giving an error. I have to give the archive. Help please. In the snapshot attached, I am importing an external framework, MMCardView, written in Swift. Now that is throwing an error in Xcode 8. What should I do?
A: Try cleaning the project from Product -> Clean in Xcode 8 and build again.
|
stackoverflow
|
{
"language": "en",
"length": 93,
"provenance": "stackexchange_0000F.jsonl.gz:862969",
"question_score": "6",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44536154"
}
|
436b5d3a17655d636b1a5a7b95c0642bcea0361f
|
Stackoverflow Stackexchange
Q: Spark: saveAsTextFile() only creating SUCCESS file and no part file when writing to local filesystem I am writing an RDD to a file using below command:
rdd.coalesce(1).saveAsTextFile(FilePath)
When the FilePath is HDFS path (hdfs://node:9000/folder/) everything works fine.
When the FilePath is local path (file:///home/user/folder/) everything seems to work. The output folder is created and SUCCESS file is also present.
However I do not see any part-00000 file containing the output. There is no other file. There is no error in the spark console output either.
I also tried calling collect on the RDD before calling saveAsTextFile(), giving 777 permission to output folder but nothing is working.
Please help.
A: Saving to a local path only takes effect when using a local master
|
Q: Spark: saveAsTextFile() only creating SUCCESS file and no part file when writing to local filesystem I am writing an RDD to a file using below command:
rdd.coalesce(1).saveAsTextFile(FilePath)
When the FilePath is HDFS path (hdfs://node:9000/folder/) everything works fine.
When the FilePath is local path (file:///home/user/folder/) everything seems to work. The output folder is created and SUCCESS file is also present.
However I do not see any part-00000 file containing the output. There is no other file. There is no error in the spark console output either.
I also tried calling collect on the RDD before calling saveAsTextFile(), giving 777 permission to output folder but nothing is working.
Please help.
A: Saving to a local path only takes effect when using a local master
A: In order to save a Spark object to the local driver filesystem, you'll need to use collect(), then open a file yourself to write that collection into.
Otherwise, if you ran as part of a YARN job, for example, you should go look at the local filesystems of the nodemanagers where the Spark job ran
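As a rough Scala sketch of that collect-then-write approach (the output path is illustrative, and it assumes the RDD's elements have a useful toString):
val lines = rdd.collect() // bring the data to the driver
val writer = new java.io.PrintWriter("/home/user/folder/output.txt")
try {
  lines.foreach(line => writer.println(line))
} finally {
  writer.close()
}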
|
stackoverflow
|
{
"language": "en",
"length": 176,
"provenance": "stackexchange_0000F.jsonl.gz:863006",
"question_score": "9",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44536279"
}
|
e13f972c2768a859a1ba8a9704dd5fbe9741f23b
|
Stackoverflow Stackexchange
Q: Curl --data-binary equivalent in python-requests library I'm trying to post testing data to a server by using python-requests library in python. I am able to post data successfully with the following command using Curl in the terminal:
curl -i -XPOST 'http://myServerAddress/write?db=some_data' --data-binary 'param1,state=test,param2=1 param3=2.932,param4=3250 1497064544944 '
I'm trying to do the same thing with requests or maybe even pycurl python library. I am having a hard time translating the "--data-binary" part with pycurl or requests. Doing something like this with requests library for example:
import requests
p = requests.post('http://myServerAddress/write?db=some_data', data={'param1,state=test,param2=1 param3=2.932,param4=3250 1497064544944 '})
print(p)
print(p.status_code)
print(p.text)
Getting "TypeError: a bytes-like object is required, not 'set'" in the shell when I run the code. What am I missing? Any help is appreciated. Thanks.
A: Try something like this
import requests
data='param1,state=test,param2=1 param3=2.932,param4=3250 1497064544944 '
p = requests.post('http://myServerAddress/write?db=some_data', data.encode())
|
Q: Curl --data-binary equivalent in python-requests library I'm trying to post testing data to a server by using python-requests library in python. I am able to post data successfully with the following command using Curl in the terminal:
curl -i -XPOST 'http://myServerAddress/write?db=some_data' --data-binary 'param1,state=test,param2=1 param3=2.932,param4=3250 1497064544944 '
I'm trying to do the same thing with requests or maybe even pycurl python library. I am having a hard time translating the "--data-binary" part with pycurl or requests. Doing something like this with requests library for example:
import requests
p = requests.post('http://myServerAddress/write?db=some_data', data={'param1,state=test,param2=1 param3=2.932,param4=3250 1497064544944 '})
print(p)
print(p.status_code)
print(p.text)
Getting "TypeError: a bytes-like object is required, not 'set'" in the shell when I run the code. What am I missing? Any help is appreciated. Thanks.
A: Try something like this
import requests
data='param1,state=test,param2=1 param3=2.932,param4=3250 1497064544944 '
p = requests.post('http://myServerAddress/write?db=some_data', data.encode())
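If you prefer pycurl, as mentioned in the question, a rough equivalent sketch is below (for a literal single-line string like this, POSTFIELDS should behave the same as curl's --data-binary; the URL and payload are the ones from the question):
import pycurl
from io import BytesIO

buffer = BytesIO()
payload = 'param1,state=test,param2=1 param3=2.932,param4=3250 1497064544944 '

c = pycurl.Curl()
c.setopt(c.URL, 'http://myServerAddress/write?db=some_data')
c.setopt(c.POSTFIELDS, payload)  # string sent as the POST body
c.setopt(c.WRITEDATA, buffer)    # capture the response body
c.perform()
print(c.getinfo(c.RESPONSE_CODE))
c.close()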
|
stackoverflow
|
{
"language": "en",
"length": 138,
"provenance": "stackexchange_0000F.jsonl.gz:863007",
"question_score": "7",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44536281"
}
|
06d46c544dabdb20ac1141816ae377d572223edd
|
Stackoverflow Stackexchange
Q: In NotePad++ how can I copy and paste everything inside a "+" section? In notepad++ you can collapse code blocks with the "+" button.... how can I collapse a large code block, copy it, then in a new file, collapse a large code block, and then replace the entire code block with the copied one?
Thanks!
A: You cannot do this in one step, but here is a workaround. While in the collapsed state (with the + in place), right-click immediately after the + and in the popup menu click Begin/End Select. Then right-click at the beginning of the next line and click Begin/End Select again. This will select the text between the two positions. Now copy the content by pressing Ctrl-C, and repeat the same procedure to select the text to be overwritten. Then press Ctrl-V to paste the text.
|
Q: In NotePad++ how can I copy and paste everything inside a "+" section? In notepad++ you can collapse code blocks with the "+" button.... how can I collapse a large code block, copy it, then in a new file, collapse a large code block, and then replace the entire code block with the copied one?
Thanks!
A: You cannot do this in one step, but here is a workaround. While in the collapsed state (with the + in place), right-click immediately after the + and in the popup menu click Begin/End Select. Then right-click at the beginning of the next line and click Begin/End Select again. This will select the text between the two positions. Now copy the content by pressing Ctrl-C, and repeat the same procedure to select the text to be overwritten. Then press Ctrl-V to paste the text.
|
stackoverflow
|
{
"language": "en",
"length": 147,
"provenance": "stackexchange_0000F.jsonl.gz:863016",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44536320"
}
|
888df79e5a9f7d563f50c2e4b3a00d2eb62f51cc
|
Stackoverflow Stackexchange
Q: Show white status bar with black icons below Marshmallow I want to show the status bar as white and the icons as black. For this I checked solutions on SO for Marshmallow.
How can I make the status bar white with black icons?
How to change the status bar notification icons' color/tint in android (marshmallow and above 23+)?
I used <item name="android:windowLightStatusBar">true</item> in the theme, but this only works on Marshmallow and above.
Also tried this:
How to set Status bar to white background and black text (black icon) in my app
How can I achieve this below Marshmallow?
Please help thank you..
|
Q: Show white status bar with black icons below Marshmallow I want to show the status bar as white and the icons as black. For this I checked solutions on SO for Marshmallow.
How can I make the status bar white with black icons?
How to change the status bar notification icons' color/tint in android (marshmallow and above 23+)?
I used <item name="android:windowLightStatusBar">true</item> in the theme, but this only works on Marshmallow and above.
Also tried this:
How to set Status bar to white background and black text (black icon) in my app
How can I achieve this below Marshmallow?
Please help thank you..
|
stackoverflow
|
{
"language": "en",
"length": 101,
"provenance": "stackexchange_0000F.jsonl.gz:863018",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44536324"
}
|
42d2112085632b36097287a592775e8a3a55fe51
|
Stackoverflow Stackexchange
Q: Laravel application shared hosting, storage folder symbolic Link issue I've created a symbolic link on my local PC, where it's working fine, but I've uploaded the same project to shared hosting and it is not working there.
Basically I have images in the storage folder root/storage/public/images/
I want to display them by getting
$path=asset('storage/images/'.$item->image);
The problem is that on the shared hosting this call
$path=asset('storage/images/'.$item->image);
resolves from the domain directory, not from the parent directory, and there is no way to create a symbolic link on shared hosting, so what should I do to get the images from the parent directory?
I am a beginner in Laravel; can anyone help me solve this problem?
Thanks
A: First delete the storage folder from the public folder, and then use this code in web.php:
Route::get('foo', function(){
$targetFolder = $_SERVER['DOCUMENT_ROOT'].'/project_foder/laravel/storage/app/public';
$linkFolder = $_SERVER['DOCUMENT_ROOT'].'/project_foder/public/storage';
symlink($targetFolder, $linkFolder);
return 'success';
});
or
Route::get('foo', function(){
Artisan::call('storage:link', []);
return 'success';
})
|
Q: Laravel application shared hosting, storage folder symbolic Link issue I've created a symbolic link on my local PC, where it's working fine, but I've uploaded the same project to shared hosting and it is not working there.
Basically I have images in the storage folder root/storage/public/images/
I want to display them by getting
$path=asset('storage/images/'.$item->image);
The problem is that on the shared hosting this call
$path=asset('storage/images/'.$item->image);
resolves from the domain directory, not from the parent directory, and there is no way to create a symbolic link on shared hosting, so what should I do to get the images from the parent directory?
I am a beginner in Laravel; can anyone help me solve this problem?
Thanks
A: First delete the storage folder from the public folder, and then use this code in web.php:
Route::get('foo', function(){
$targetFolder = $_SERVER['DOCUMENT_ROOT'].'/project_foder/laravel/storage/app/public';
$linkFolder = $_SERVER['DOCUMENT_ROOT'].'/project_foder/public/storage';
symlink($targetFolder, $linkFolder);
return 'success';
});
or
Route::get('foo', function(){
Artisan::call('storage:link', []);
return 'success';
})
A: I got a solution for this here.
First, delete the public/storage folder. Second, put this code at the top of the web.php file:
Artisan::call('storage:link');
this code runs the php artisan storage:link command manually
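Note that code placed at the top of web.php runs on every request; a small illustrative guard (purely a sketch) avoids recreating the link each time:
if (! file_exists(public_path('storage'))) {
    Artisan::call('storage:link');
}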
A: Create a link folder pointing to the storage folder; that will create the same folder inside the project's public directory. For further information, see:
Symbolic links using PHP
|
stackoverflow
|
{
"language": "en",
"length": 205,
"provenance": "stackexchange_0000F.jsonl.gz:863020",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44536329"
}
|
2ae59303d408f30f424181a158a55d26c6859532
|
Stackoverflow Stackexchange
Q: Typescript - What is the difference between null and undefined? I want to know what is the difference between null and undefined in typescript. I know in javascript it is possible to use both of them in order to check a variable has no value. But in typescript I want to know the difference exactly and when it is better to use each one of them.
Thanks.
A: This post explains the differences very well. They are the same in TypeScript as in JavaScript.
As for what you should use: You may define that on your own. You may use either, just be aware of the differences and it might make sense to be consistent.
The TypeScript coding style guide for the TypeScript source code (not an official "how to use TypeScript" guide) states that you should always use undefined and not null: Typescript Project Styleguide.
|
Q: Typescript - What is the difference between null and undefined? I want to know what is the difference between null and undefined in typescript. I know in javascript it is possible to use both of them in order to check a variable has no value. But in typescript I want to know the difference exactly and when it is better to use each one of them.
Thanks.
A: This post explains the differences very well. They are the same in TypeScript as in JavaScript.
As for what you should use: You may define that on your own. You may use either, just be aware of the differences and it might make sense to be consistent.
The TypeScript coding style guide for the TypeScript source code (not an official "how to use TypeScript" guide) states that you should always use undefined and not null: Typescript Project Styleguide.
A: The value 'undefined' denotes that a variable has been declared, but hasn't been assigned any value. So, the value of the variable is 'undefined'.
On the other hand, 'null' refers to a non-existent object, which basically means 'empty' or 'nothing'.
You can manually assign the value 'undefined' to a variable, but that isn't recommended. So, 'null' is assigned to a variable to specify that the variable doesn't contain any value or is empty. But 'undefined' is used to check whether the variable has been assigned any value after declaration.
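A minimal TypeScript sketch of the distinction (variable names are arbitrary):
let a: number | undefined;    // declared but never assigned
console.log(a);               // undefined
let b: string | null = null;  // explicitly marked as holding no value
console.log(b);               // null
console.log(typeof a);        // "undefined"
console.log(typeof b);        // "object" (a well-known JavaScript quirk)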
|
stackoverflow
|
{
"language": "en",
"length": 237,
"provenance": "stackexchange_0000F.jsonl.gz:863023",
"question_score": "51",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44536340"
}
|
eb35c651a32da2164a8fadadf61d2e6f22efb341
|
Stackoverflow Stackexchange
Q: How to handle Paste(Ctrl+v or with mouse) event in vue.js? I need to call a function when something is pasted into a textarea in my vue.js application. Which event should I use to call my function in this case?
A: You can simply use the paste event:
<textarea @paste="onPaste"></textarea>
...
methods: {
onPaste (evt) {
console.log('on paste', evt)
}
}
...
It's not a vue-specific event. See https://developer.mozilla.org/en-US/docs/Web/Events/paste
|
Q: How to handle Paste(Ctrl+v or with mouse) event in vue.js? I need to call a function when something is pasted into a textarea in my vue.js application. Which event should I use to call my function in this case?
A: You can simply use the paste event:
<textarea @paste="onPaste"></textarea>
...
methods: {
onPaste (evt) {
console.log('on paste', evt)
}
}
...
It's not a vue-specific event. See https://developer.mozilla.org/en-US/docs/Web/Events/paste
A: Using the onPaste method in Vue 2.6, the evt.target.value is empty. To get the text value, use:
methods: {
onPaste (evt) {
console.log('on paste', evt.clipboardData.getData('text'))
}
}
A: Additionally, you can disable the paste event (Ctrl+V) for an input with the .prevent modifier.
<input v-model="input" @paste.prevent class="input" type="text">
The paste action will be automatically disabled for this input.
A: This is already handled by the watch functionality, which also covers the "cut" event (with the mouse) and keyboard input.
All you need is to set a watcher on your property like so:
data: {
coupon_code: '',
},
watch: {
coupon_code: function(){
console.log('watch-'+this.coupon_code);
},
},
and HTML
<input type="text" autocomplete='off' v-model="coupon_code" >
documentation
A: The onPaste method needs to return true for text to be actually pasted.
Using the example above from @CodinCat, and updating it.
<textarea @paste="onPaste"></textarea>
...
methods: {
onPaste (evt) {
console.log('on paste', evt)
return true;
}
}
...
A: I'm seeing a lot of varied answers on here, many of which I would flag during a peer code review.
The shortest amount of code to compensate for pasting (including keyboard shortcuts) would be:
<textarea @input="doSomething" />.
You shouldn't be using @keyup, keydown, etc. for handling text input.
See - https://developer.mozilla.org/en-US/docs/Web/API/KeyboardEvent
Note: KeyboardEvent events just indicate what interaction the user had with a key on the keyboard at a low level, providing no contextual meaning to that interaction. When you need to handle text input, use the input event instead. Keyboard events may not be fired if the user is using an alternate means of entering text, such as a handwriting system on a tablet or graphics tablet.
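For completeness, a minimal sketch of that @input approach, using the doSomething handler mentioned above and the same style as the earlier snippets:
<textarea @input="doSomething"></textarea>
...
methods: {
  doSomething (evt) {
    // fires for typing, pasting and cutting alike
    console.log('current value:', evt.target.value)
  }
}
...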
|
stackoverflow
|
{
"language": "en",
"length": 339,
"provenance": "stackexchange_0000F.jsonl.gz:863029",
"question_score": "31",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44536362"
}
|
923a1d864ca82c46b1d639a2c04baba8ffcc1c11
|
Stackoverflow Stackexchange
Q: How to configure the default shell in msys2/mintty? I updated msys2 recently and found that mintty always shows a 'Shells (bash)' dialog before it starts.
It's a little bit annoying to click the button every time; how can I suppress this dialog by fixing the default shell?
Mintty version is mintty 2.7.7 (x86_64-pc-msys).
A: Try installing the msys2-launcher package with pacman -S msys2-launcher. Then you should have three executables in the MSYS2 installation directory, and you should run the shell using those executables. You can then pin the shell to your Windows taskbar for future launching.
|
Q: How to configure the default shell in msys2/mintty? I updated msys2 recently and found that mintty always shows a 'Shells (bash)' dialog before it starts.
It's a little bit annoying to click the button every time; how can I suppress this dialog by fixing the default shell?
Mintty version is mintty 2.7.7 (x86_64-pc-msys).
A: Try installing the msys2-launcher package with pacman -S msys2-launcher. Then you should have three executables in the MSYS2 installation directory, and you should run the shell using those executables. You can then pin the shell to your Windows taskbar for future launching.
A: I tried installing msys2-launcher, but could not find the package.
Instead, I updated the Target field in my Windows shortcut to point to the msys2 bash directly:
C:\msys64\usr\bin\mintty.exe /usr/bin/bash
A: Try this one:
D:\msys64\usr\bin\bash.exe -c 'MSYSTEM=MSYS exec /bin/fish -l -i'
The 'MSYSTEM' variable could be MSYS, MINGW32, or MINGW64. The command can also be integrated into a terminal emulator like ConsoleZ or cmder; mintty.exe won't allow you to do that since it's not a console application.
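If you want to keep mintty itself, a shortcut target along these lines should also work (a sketch only — the install path and MSYSTEM value are illustrative, and it assumes /usr/bin/env is present, as in a standard MSYS2 install):
C:\msys64\usr\bin\mintty.exe /usr/bin/env MSYSTEM=MINGW64 /usr/bin/bash --login -i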
|
stackoverflow
|
{
"language": "en",
"length": 166,
"provenance": "stackexchange_0000F.jsonl.gz:863035",
"question_score": "6",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44536373"
}
|
0fc2170405910ff92d6e154c0a0f68dd67ee9729
|
Stackoverflow Stackexchange
Q: Owl Carousel 2 server side rendering in Reactjs I am working on Owl Carousel 2. I want to add support for server-side rendering. Is it possible in Reactjs using yarn? Please give me an example of server-side rendering with Reactjs for Owl Carousel 2.
A: I used loadable to be able to import the library and it worked fine with SSR.
First install loadable:
npm install @loadable/component
Import the component in your JS file
import loadable from '@loadable/component';
Then I imported ReactOwlCarousel as below:
const ReactOwlCarousel = loadable(() => import('react-owl-carousel'), { ssr: false });
return <ReactOwlCarousel... />
Hope this helps someone!
|
Q: Owl Carousel 2 server side rendering in Reactjs I am working on Owl Carousel 2. I want to add support for server-side rendering. Is it possible in Reactjs using yarn? Please give me an example of server-side rendering with Reactjs for Owl Carousel 2.
A: I used loadable to be able to import the library and it worked fine with SSR.
First install loadable:
npm install @loadable/component
Import the component in your JS file
import loadable from '@loadable/component';
Then I imported ReactOwlCarousel as below:
const ReactOwlCarousel = loadable(() => import('react-owl-carousel'), { ssr: false });
return <ReactOwlCarousel... />
Hope this helps someone!
|
stackoverflow
|
{
"language": "en",
"length": 104,
"provenance": "stackexchange_0000F.jsonl.gz:863043",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44536399"
}
|
e3071f58a88b795dac625eb50dcb64fe2881d38b
|
Stackoverflow Stackexchange
Q: JavaScript dispatch CustomEvent: new or reuse I want to dispatch a CustomEvent on different occasions. Most examples on the net create and dispatch CustomEvents only once. I want to know how to do it correctly.
In this example the "A-Button" dispatches the same event over and over again.
The "B-Button" creates a new CustomEvent each time, the event should be dispatched.
var targetA = document.getElementById('targetA');
var targetB = document.getElementById('targetB');
var evtA = new CustomEvent("myEvent");
function a(){
targetA.dispatchEvent(evtA);
}
function b(){
targetB.dispatchEvent(new CustomEvent("myOtherEvent"));
}
targetA.addEventListener("myEvent", function(e){
console.log(e);
console.log(e.timeStamp);
});
targetB.addEventListener("myOtherEvent", function(e){
console.log(e);
console.log(e.timeStamp);
});
<button onclick="a()">A</button>
<button onclick="b()">B</button>
<div id="targetA"></div>
<div id="targetB"></div>
A side effect of the A approach could be that the timestamp is not updated. This could lead to unexpected behaviour in handlers that depend on the timestamp.
The B approach could be less performant, since the CustomEvent object is instantiated over and over again. (Memory should not be a problem.)
Is there a "correct" way, or are there any best practices?
|
Q: JavaScript dispatch CustomEvent: new or reuse I want to dispatch a CustomEvent on different occasions. Most examples on the net create and dispatch CustomEvents only once. I want to know how to do it correctly.
In this example the "A-Button" dispatches the same event over and over again.
The "B-Button" creates a new CustomEvent each time, the event should be dispatched.
var targetA = document.getElementById('targetA');
var targetB = document.getElementById('targetB');
var evtA = new CustomEvent("myEvent");
function a(){
targetA.dispatchEvent(evtA);
}
function b(){
targetB.dispatchEvent(new CustomEvent("myOtherEvent"));
}
targetA.addEventListener("myEvent", function(e){
console.log(e);
console.log(e.timeStamp);
});
targetB.addEventListener("myOtherEvent", function(e){
console.log(e);
console.log(e.timeStamp);
});
<button onclick="a()">A</button>
<button onclick="b()">B</button>
<div id="targetA"></div>
<div id="targetB"></div>
A side effect of the A approach could be that the timestamp is not updated. This could lead to unexpected behaviour in handlers that depend on the timestamp.
The B approach could be less performant, since the CustomEvent object is instantiated over and over again. (Memory should not be a problem.)
Is there a "correct" way, or are there any best practices?
|
stackoverflow
|
{
"language": "en",
"length": 162,
"provenance": "stackexchange_0000F.jsonl.gz:863051",
"question_score": "6",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44536422"
}
|
1ab4967422d6dc551f9b84e86771ee0e347885b2
|
Stackoverflow Stackexchange
Q: Extract decimal Number from string in C# I am trying to extract numbers from my string using the following code:
var mat = Regex.Match(stringValue, @"\d+").Value;
But when stringValue contains a decimal like "($23.01)", it only extracts 23 instead of 23.01. How can I get the decimal value 23.01?
A: Have you tried this Example:
string inputStr = "($23.01)";
Console.WriteLine(Regex.Match(inputStr, @"\d+.+\d").Value);
Or else you can try this LinqSolution:
Console.WriteLine(String.Concat(inputStr.Where(x=> x=='.'||Char.IsDigit(x))));
|
Q: Extract decimal Number from string in C# I am trying to extract numbers from my string using the following code:
var mat = Regex.Match(stringValue, @"\d+").Value;
But when stringValue contains a decimal like "($23.01)", it only extracts 23 instead of 23.01. How can I get the decimal value 23.01?
A: Have you tried this Example:
string inputStr = "($23.01)";
Console.WriteLine(Regex.Match(inputStr, @"\d+.+\d").Value);
Or else you can try this LinqSolution:
Console.WriteLine(String.Concat(inputStr.Where(x=> x=='.'||Char.IsDigit(x))));
A: Try to approach the problem this way. A decimal number has the following features:
*
*start with one or more digits (\d+)
*after that, there can be one or 0 dots (\.?)
*if a dot is present, one or more digits should also follow (\d+)
Since the last two features are kind of related, we can put it in a group and add a ? quantifier: (\.\d+)?.
So now we have the whole regex: \d+(\.\d+)?
If you want to match decimal numbers like .01 (without the 0 at the front), you can just use | to mean "or" and add another case (\.\d+). Basically: (\d+(\.\d+)?)|(\.\d+)
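A minimal sketch of the basic pattern in use (output shown for the sample string from the question):
using System;
using System.Text.RegularExpressions;

class Program
{
    static void Main()
    {
        string stringValue = "($23.01)";
        // \d+(\.\d+)? : one or more digits, optionally followed by a dot and more digits
        var mat = Regex.Match(stringValue, @"\d+(\.\d+)?").Value;
        Console.WriteLine(mat); // prints 23.01
    }
}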
A: Try this
var mat= Regex.Split(stringValue, @"[^0-9.]+")
.Where(c => c != "." && c.Trim() != "");
|
stackoverflow
|
{
"language": "en",
"length": 192,
"provenance": "stackexchange_0000F.jsonl.gz:863053",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44536431"
}
|