id | text | subdomain | metadata
---|---|---|---
eee61e570e18ce56b585df5e65a8e6a0515a47be
|
Q: Refused to apply inline style because it violates the following Content Security Policy directive: "style-src 'self'" modernizr I have created a new ASP.NET MVC 5 project in Visual Studio 2015 Professional
and I have added a meta tag in my layout for the Content Security Policy:
<meta http-equiv="content-security-policy"
content="default-src 'none'; script-src 'self';
connect-src 'self'; img-src 'self'; style-src 'self';" />
Now when I run my application I get the following error in the Chrome browser console:
Refused to apply inline style because it violates the following Content Security Policy directive: "style-src 'self'". Either the 'unsafe-inline' keyword, a hash ('sha256-CwE3Bg0VYQOIdNAkbB/Btdkhul49qZuwgNCMPgNY5zw='), or a nonce ('nonce-...') is required to enable inline execution.
modernizr-2.6.2.js:157
There are 6 errors for modernizr-2.6.2.js:157 and one is related to a script, i.e. refused to load the script from localhost.
I don't think I have any inline styles in my project, so why does CSP report this error?
A: Apparently Modernizr either injects a style element with some CSS properties, or else injects some style attributes; you can deal with it by changing your CSP policy to this:
<meta http-equiv="content-security-policy"
content="default-src 'none'; script-src 'self';
connect-src 'self'; img-src 'self';
style-src 'self' 'sha256-CwE3Bg0VYQOIdNAkbB/Btdkhul49qZuwgNCMPgNY5zw=';" />
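If you prefer not to copy the hash from the console message, the sha256-... value is just the Base64-encoded SHA-256 digest of the exact inline style text. A minimal Python sketch of that computation (the style string below is a placeholder, not Modernizr's actual injected CSS):
import base64
import hashlib

# Placeholder: hash the exact inline style text the browser reports, byte for byte.
inline_style = ".example { color: red; }"

digest = hashlib.sha256(inline_style.encode("utf-8")).digest()
print("'sha256-" + base64.b64encode(digest).decode("ascii") + "'")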
|
stackoverflow
|
{
"language": "en",
"length": 190,
"provenance": "stackexchange_0000F.jsonl.gz:849948",
"question_score": "6",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44495929"
}
|
7ffdba34d337c66afd766c9119e098dbaed570dc
|
Q: How to find css unit for this number I have an input of type text
<input type="text">
Basically I am using javascript ClientRect to get caret details. ClientRect looks like this
[object ClientRect]
{
[functions]: ,
__proto__: { },
bottom: 540.7999877929687,
constructor: { },
height: 24,
left: 1034.5399169921875,
right: 1034.5399169921875,
top: 516.7999877929687,
width: 0
}
This is generated on every text input.
left: 1034.5399169921875,
left: 1065.5399169921875,
left: 1078.5399169921875,
I want to convert this number to CSS units like px/%/rem/vh so that I can apply dynamic CSS. How do I do it?
A: Try taking the left position of your caret and subtracting the left position of your input. This should give you an approximate width of the text in the input, if that's what you are looking for. You'll need to add an id or create a selector for your text input.
var inputElementRect = document.getElementById('YOURINPUTID').getBoundingClientRect()
var width = caretRect.left - inputElementRect.left
A: Those values are in px by default, so just append the px suffix to the value and use it.
<input type="text">
to get that value
let text = document.querySelector('input');
let values = text.getBoundingClientRect();
let top_value = values.top + 'px';
let bottom_value = values.bottom + 'px';
let width_value = values.width + 'px';
let height_value = values.height + 'px';
console.log('top: '+ top_value);
console.log('bottom: '+ bottom_value);
console.log('width: '+ width_value);
console.log('height: '+ height_value);
Here, properties other than width and height (top, bottom, left, right) are relative to the viewport, so these values change when you scroll.
To get correct values even after scrolling, add window.scrollX / window.scrollY (or window.pageXOffset / window.pageYOffset) to them.
A: So if I understand the question correctly, you have position values for the cursor inside of the input and you want to convert it into different types of CSS units, presumably so you can do something to the input or related things
The first thing to understand is that ClientRect positions are relative to the viewport. So as vhutchinson pointed out, if you want the width of text you need to compare to the input's "left" value as defined by getBoundingClientRects. That's a good start, but if you're not just influencing left but also care about top, you need to account for scrolling. If your window/page is the only scrolling container, you should be able to do this simply by adding window.scrollY to top, and window.scrollX to left to understand your offset relative to the window.
All of these units are pixels by default... if you want to convert to rem it's pretty straightforward: 1 rem = the font-size of your root element, so to convert to rem you can do something like
var remBase = parseFloat(window.getComputedStyle(document.documentElement).getPropertyValue('font-size'));
var remValue = (myComputedPixelValue / remBase) + "rem";
Doing vw is similar; using the answer in Get the browser viewport dimensions with JavaScript for cross-browser window dimensions, you'd end up with something that looks like
var viewportWidth = Math.max(document.documentElement.clientWidth, window.innerWidth || 0);
var vwValue = (myComputedPixelValue / viewportWidth * 100) + "vw";
Percentages are trickier, because you'd need to compute it based on the parent of the element you're applying the css value to, but the general idea follows the same principle.
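For reference, the same conversion arithmetic in a small Python sketch (not part of the original answer; the pixel value, root font size, viewport width and parent width are assumed inputs):
def px_to_rem(px, rem_base=16.0):
    # rem is the pixel value divided by the root element's font size
    return f"{px / rem_base}rem"

def px_to_vw(px, viewport_width):
    # 1vw is 1% of the viewport width
    return f"{px / viewport_width * 100}vw"

def px_to_percent(px, parent_width):
    # % here is taken relative to the parent element's width
    return f"{px / parent_width * 100}%"

print(px_to_rem(1034.54))         # with an assumed 16px root font size
print(px_to_vw(1034.54, 1920.0))  # for an assumed 1920px-wide viewport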
|
stackoverflow
|
{
"language": "en",
"length": 528,
"provenance": "stackexchange_0000F.jsonl.gz:849957",
"question_score": "9",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44495946"
}
|
9775e5a61af344814da485af35fc29ab45e6e0d5
|
Q: Android Library Design/Configuration I have the following situation and I don't really know how to solve it. We have a library module that we use in our app but we also release this library to end customers. The problem is that the library shouldn't include all files and all methods in both cases.
So imagine the following case:
* class A with methods A::a1(), A::a2()
* class B with methods B::b1(), B::b2()
* class C with methods C::c1(), C::c2()
I would now like to have the following result:
* in the case of our app - the library should include all classes with all methods
* in the case of the end customers - I'd like e.g. to remove class C and the method B::b2() from the library
How would I achieve this on Android? On iOS this is a rather easy problem since you can just define different header files for different configurations. In that case the implementation file can include all methods etc. and you just configure everything with the corresponding header files.
I hope somebody can help me! :)
|
stackoverflow
|
{
"language": "en",
"length": 178,
"provenance": "stackexchange_0000F.jsonl.gz:849961",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44495966"
}
|
21897fcac600f3cd10707903dc3a70023d996937
|
Q: Error:Failed to resolve: com.google.firebase:firebase-core:11.0.0 I am trying to implement Authenticate with Firebase on Android using a Phone Number, and its second step is to add the dependency for Firebase Authentication to the app-level build.gradle file:
compile 'com.google.firebase:firebase-auth:11.0.0'
After adding it, I try to sync the project with Gradle files and it shows these errors:
Error: Failed to resolve: com.google.firebase:firebase-core:11.0.0
Error:(26, 13) Failed to resolve: com.google.firebase:firebase-auth:11.0.0
A: Just go to SDK manager and update your Google Repository.
It will work. :)
A: Go to Tools > Android > SDK Manager, click on SDK Tools and update the following to at least these revisions:
* Google Repository (revision: 53)
* Android SDK Platform-Tools (revision: 26.0.0)
|
stackoverflow
|
{
"language": "en",
"length": 126,
"provenance": "stackexchange_0000F.jsonl.gz:849978",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44496010"
}
|
f61f2b9b8c38daf6574f5f64907bca9d0e875ddb
|
Q: How to convert string labels to numeric values I have a CSV file (delimiter=,) containing the following fields
filename labels
xyz.png cat
pqz.png dog
abc.png mouse
There is a list containing all the classes:
data_classes = ["cat", "dog", "mouse"]
Question: How do I replace the string labels in the CSV with the index of the label in data_classes (i.e. if label == cat then the label should change to 0) and save it to a CSV file?
A: Assuming that all classes are present in your list you can do this using apply and call index on the list to return the ordinal position of the class in the list:
In[5]:
df['labels'].apply(data_classes.index)
Out[5]:
0 0
1 1
2 2
Name: labels, dtype: int64
However, IMO it will be faster to define a dict of your mapping and pass this to map, as this is cythonised:
In[7]:
d = dict(zip(data_classes, range(0,3)))
d
Out[7]: {'cat': 0, 'dog': 1, 'mouse': 2}
In[8]:
df['labels'].map(d, na_action='ignore')
Out[8]:
0 0
1 1
2 2
Name: labels, dtype: int64
If there are classes not present then NaN is returned
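An end-to-end Python sketch of the same idea, covering the "save it in a csv file" part of the question (the file names labels.csv and labels_numeric.csv are assumptions):
import pandas as pd

data_classes = ["cat", "dog", "mouse"]

# read the original comma-separated file (name assumed)
df = pd.read_csv("labels.csv")

# map each class name to its position in data_classes
mapping = {name: i for i, name in enumerate(data_classes)}
df["labels"] = df["labels"].map(mapping)

# write the result back out without the index column
df.to_csv("labels_numeric.csv", index=False)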
|
stackoverflow
|
{
"language": "en",
"length": 183,
"provenance": "stackexchange_0000F.jsonl.gz:849998",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44496057"
}
|
0c8f70cd6d77ff6aa09b170ca79ba04bf977b0ea
|
Q: JSON stringify objects with json strings already as values Might be a duplicate question, but couldn't find the answer. I want to stringify a javascript object that contains some JSON strings as values.
For example:
var obj = {id:1, options:"{\"code\":3,\"type\":\"AES\"}"};
As you see, the value for key 'options' is a JSON string. I want to stringify the object 'obj', without double stringifying the inner JSON string.
Is there any clean and neat solution for this, except parsing each value with JSON string and stringifying the object?
A: Assuming you don't know which properties are JSON, you could use the replacer function parameter on JSON.stringify to check if a value is a JSON string. The below example tries to parse each string inside a try..catch , so is not the most efficient, but should do the trick (on nested properties as well)
var obj = {id:1, options:"{\"code\":3,\"type\":\"AES\"}"};
function checkVal(key,val){
if(typeof val === 'string'){
try{return JSON.parse(val);}catch(e){}
}
return val;
}
var res = JSON.stringify(obj,checkVal);
console.log('normal output', JSON.stringify(obj))
console.log('with replacer', res);
A: No, you can't do that.
If you did not encode that string, JSON.parse will not return a correct string.
The cleanest solution is to keep obj.options as an object (not a JSON string) and stringify it only when you need to use it.
A: In this case, you need to parse options into an object first.
You can do this using the following two approaches:
Approach 1:
var obj = {id:1, options:"{\"code\":3,\"type\":\"AES\"}"};
obj.options = JSON.parse(obj.options);
console.log(JSON.stringify(obj));
Approach 2:
var obj = {id:1, options:"{\"code\":3,\"type\":\"AES\"}"};
var result = JSON.stringify(obj, function(key, val) {
if (key === "options"){
return JSON.parse(val);
}else{
return val;
}
});
console.log(result);
Now this will stringify options only once.
A: You could do it like this:
var obj = {id:1, options:"{\"code\":3,\"type\":\"AES\"}"};
var options =JSON.parse(obj.options);
obj.options = options;
console.log(obj);
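For comparison, the same idea in a small Python sketch (not part of the original JavaScript answers): decode the inner JSON string first so it is not double-encoded when the whole object is serialized.
import json

obj = {"id": 1, "options": '{"code":3,"type":"AES"}'}

# decode the inner JSON string before serializing the outer object
obj["options"] = json.loads(obj["options"])

print(json.dumps(obj))
# {"id": 1, "options": {"code": 3, "type": "AES"}}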
|
stackoverflow
|
{
"language": "en",
"length": 291,
"provenance": "stackexchange_0000F.jsonl.gz:850007",
"question_score": "6",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44496075"
}
|
a31d69a5e076f936ef92016fc1594edd9613cf72
|
Q: How to insert emoji into MYSQL 5.5 and higher using Django ORM I am trying to insert emojis into a certain field in my MySQL table.
I ran an ALTER command and changed the collation to "utf8mb4_general_ci":
ALTER TABLE XYZ MODIFY description VARCHAR(250) CHARACTER SET utf8mb4
COLLATE utf8mb4_general_ci;
Table details after above query:
+-------------+--------------+---------------+--------------------+
| Column | Type | Character Set | Collation |
+-------------+--------------+---------------+--------------------+
| description | varchar(250) | utf8mb4 | utf8mb4_general_ci |
+-------------+--------------+---------------+--------------------+
After this I ran a query to update the description column with emojis; every time I ran the query below, the emoji was replaced by '?'.
update XYZ set description='a test with : ' where id = 1;
But when I print the result from a SELECT query for the same id, it displays '?' in place of the emoji. The result was:
"a test with : ??"
I carried out the necessary changes in the model file.
Apologies if anything is unclear; I would appreciate any lead on this matter.
A: DATABASES = {
'default': {
'ENGINE': 'django.db.backends.mysql',
...
'OPTIONS': {
'charset': 'utf8mb4',
'use_unicode': True, },
},
}
my.cnf:
[mysqld]
character-set-server=utf8mb4
default-collation=utf8mb4_unicode_ci
[client]
default-character-set=utf8mb4
A: This saved me on MySQL 8.0.1:
my.cnf
[client]
default-character-set = utf8mb4
[mysql]
default-character-set = utf8mb4
[mysqld]
character-set-client-handshake = FALSE
character-set-server = utf8mb4
collation-server = utf8mb4_unicode_ci
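A quick way to verify the settings above is to round-trip an emoji through the ORM from the Django shell. This is only a sketch: the app and model names (myapp, Xyz) and the description field are assumptions based on the question.
# run inside `python manage.py shell` of a project configured with the
# utf8mb4 charset options shown above
from myapp.models import Xyz  # hypothetical app/model names

row = Xyz.objects.create(description="a test with : \U0001F600")

# with utf8mb4 in place this prints the emoji instead of '?'
print(Xyz.objects.get(pk=row.pk).description)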
|
stackoverflow
|
{
"language": "en",
"length": 213,
"provenance": "stackexchange_0000F.jsonl.gz:850013",
"question_score": "10",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44496101"
}
|
1cc2e458fd4ec014b7cf586bd9c7b0c9f6f1d455
|
Q: Get the height of a child element Hello, I want to set the height of my div into state.
I tried getting the whole height with window.innerHeight and this worked fine, but I can't access the children of my div. I also tried document.getElementById('masonryParent') and got the right result, but how can I access the div below it? Any suggestions?
this.setState({
width: window.innerWidth, height: window.innerHeight
});
A: Try this
document.getElementById('masonryParent').children[0].style.height
A: If you are using jQuery you can use this:
$('.masonryParent div').css('height')
|
stackoverflow
|
{
"language": "en",
"length": 82,
"provenance": "stackexchange_0000F.jsonl.gz:850015",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44496105"
}
|
8a9390b15b5b087654e49795a19a93de7381dfc6
|
Q: How to make the status bar translucent when using react-navigation on Android? I'm using the latest react-native (0.45.1) and react-navigation (1.0.0-beta.11).
Because I am using react-navigation, I can't use React Native's StatusBar, so I don't know how to make the status bar translucent.
Ideally it would work on both Android and iOS. Thanks!
A: <StatusBar
translucent
backgroundColor="#5E8D48"
barStyle="light-content"
/>
|
stackoverflow
|
{
"language": "en",
"length": 51,
"provenance": "stackexchange_0000F.jsonl.gz:850032",
"question_score": "7",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44496165"
}
|
e52f5166192dfecde476adb07715f0826a6013ef
|
Q: VueJS mustache undefined constant I was trying to make VueJS work with Laravel and something odd is happening.
My template:
<div id="tuto">
<p>{{ texte }}</p>
</div>
My VueJS script:
var vm = new Vue({
el: '#tuto',
data: {
texte: '<span>Mon texte</span>',
}
});
I am getting this error:
Use of undefined constant texte - assumed 'texte' (View: /var/www/vhosts/xxxx/resources/views/admin/xxx/index.blade.php)
Full error here.
Does someone know where it's going wrong?
Thanks!
A: If you are using a .blade.php file then you need to do:
<div id="tuto">
<p>@{{ texte }}</p>
</div>
That's because blade also uses mustaches, so they get processed by blade before vue even sees them, which is why you are receiving an error from Laravel and not from Vue.
See the Blade & JavaScript Frameworks section of the Laravel docs for more details.
A: There are two ways of going about this.
1: Use the @ before the mustaches.
Example:
@{{ texte }}
2: The other one is to use the @verbatim directive over a code block.
Example:
@verbatim
{{ texte }}
@endverbatim
https://laravel.com/docs/5.4/blade#blade-and-javascript-frameworks
A: data must be a function:
var vm = new Vue({
el: '#tuto',
data: function(){
return {
texte: '<span>Mon texte</span>'
}
}
});
|
stackoverflow
|
{
"language": "en",
"length": 197,
"provenance": "stackexchange_0000F.jsonl.gz:850057",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44496229"
}
|
fdace73cafdeff595e363b34f071fbfdf5e0bad5
|
Q: Android Voice Interaction I have referred to this for providing voice interaction in my app.
I followed the same steps but am still not able to open my app using a voice command, for example: Find pizza in my app.
As I am not able to open the app by voice command, isVoiceInteraction() is always false.
Please help if anybody has successfully implemented voice interaction.
|
stackoverflow
|
{
"language": "en",
"length": 63,
"provenance": "stackexchange_0000F.jsonl.gz:850081",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44496292"
}
|
777c3a1a78abc543c224cee2c432d0e10a90e2b9
|
Q: How to convert data into little endian format? var val = 1240;
Convert it into little-endian format in Swift 3.
Ex: 1500 (0x5DC) to 0xDC050000
A: let value = UInt16(bigEndian: 1500)
print(String(format:"%04X", value.bigEndian)) //05DC
print(String(format:"%04X", value.littleEndian)) //DC05
Make sure you are actually using the bigEndian initializer.
With 32-bit integers:
let value = UInt32(bigEndian: 1500)
print(String(format:"%08X", value.bigEndian)) //000005DC
print(String(format:"%08X", value.littleEndian)) //DC050000
A: If you want 1500 as an array of bytes in little-endian order:
var value = UInt32(littleEndian: 1500)
let array = withUnsafeBytes(of: &value) { Array($0) }
If you want that as a Data:
let data = Data(array)
Or, if you really wanted that as a hex string:
let string = array.map { String(format: "%02x", $0) }.joined()
A: let timeDevide = self.setmiliSecond/100
var newTime = UInt32(littleEndian: timeDevide)
let arrayTime = withUnsafeBytes(of: &newTime)
{Array($0)}
let timeDelayValue = [0x0B] + arrayTime
A: You can do something like
//: Playground - noun: a place where people can play
import UIKit
extension String {
func hexadecimal() -> Data? {
var data = Data(capacity: count / 2)
let regex = try! NSRegularExpression(pattern: "[0-9a-f]{1,2}", options: .caseInsensitive)
regex.enumerateMatches(in: self, range: NSRange(location: 0, length: utf16.count)) { match, _, _ in
let byteString = (self as NSString).substring(with: match!.range)
var num = UInt8(byteString, radix: 16)!
data.append(&num, count: 1)
}
guard !data.isEmpty else { return nil }
return data
}
}
func convertInputValue<T: FixedWidthInteger>(_ inputValue: Data) -> T where T: CVarArg {
let stride = MemoryLayout<T>.stride
assert(inputValue.count % (stride / 2) == 0, "invalid pack size")
let fwInt = T.init(littleEndian: inputValue.withUnsafeBytes { $0.pointee })
let valuefwInt = String(format: "%0\(stride)x", fwInt).capitalized
print(valuefwInt)
return fwInt
}
var inputString = "479F"
var inputValue: Data! = inputString.hexadecimal()
let val: UInt16 = convertInputValue(inputValue) //9F47
inputString = "479F8253"
inputValue = inputString.hexadecimal()
let val2: UInt32 = convertInputValue(inputValue) //53829F47
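For comparison only (not part of the original Swift answers), the same 1500 -> DC050000 little-endian conversion in a small Python sketch using the standard struct module:
import struct

value = 1500  # 0x5DC

little = struct.pack("<I", value)  # little-endian, unsigned 32-bit
big = struct.pack(">I", value)     # big-endian, unsigned 32-bit

print(little.hex().upper())  # DC050000
print(big.hex().upper())     # 000005DC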
|
stackoverflow
|
{
"language": "en",
"length": 289,
"provenance": "stackexchange_0000F.jsonl.gz:850090",
"question_score": "10",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44496320"
}
|
c5ff3fc6be1764749584776389e739038642e857
|
Q: Migrate hibernate 3 to 5 guide needed? I want to do an impact analysis on the migration from Hibernate 3 (3.2.6.ga) to Hibernate 5 (5.2), especially the integration with Spring. But I can't find any documentation on the subject, so any help would be appreciated!
A: You can refer to hibernate's migration guide and spring-boot 1.4 upgrade guide.
|
stackoverflow
|
{
"language": "en",
"length": 60,
"provenance": "stackexchange_0000F.jsonl.gz:850093",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44496327"
}
|
17fd50ca2b00bc9c0e7314db92aba16b4dc0896f
|
Q: Xcode 9 Beta Build System fails with no errors So, this is strange. Trying to build my fairly large project with the new Xcode Beta Build System and it fails with 0 errors. The old build system works fine.
The status bar at the top of the IDE displays the following:
Planning build...
Scanning build tasks...
It has got further than this before, but now seems to be failing really quickly. No idea how I can debug this. Any ideas?
A: I would recommend trying these steps:
* Quit Xcode
* Delete the folder ~/Library/Developer/Xcode/DerivedData
* Reopen Xcode and try again
Hope it helps
EDIT: Quitting and reopening Xcode after deleting derived data is essential, because Xcode can hold derived data in its cache, so deleting derived data while Xcode is running often does not help.
A: Remove 'Derived Data' before running the build.
To remove it, go to
File > Workspace Settings > go to the directory path > delete.
Do not forget to clean the project:
Cmd + Shift + K
|
stackoverflow
|
{
"language": "en",
"length": 176,
"provenance": "stackexchange_0000F.jsonl.gz:850106",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44496360"
}
|
0f2ec0811875b460fb97779128fe7d218471b5ea
|
Q: Php Embedded tableau report is not showing in safari I embedded a Tableau report in my web page using PHP. It works fine in Chrome and Firefox, but the page does not load in Safari.
My code is like this:
<embed src="{{ $tableauLink }}" width="100%" height="600px" type="application/pdf">
A: You may be using an older version of Safari. Try it on an updated version; this solved my problem.
|
stackoverflow
|
{
"language": "en",
"length": 67,
"provenance": "stackexchange_0000F.jsonl.gz:850110",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44496377"
}
|
a6ef6fefdae434547d1982b3cd86ba0aab56e409
|
Q: error: 'hash' is not a class template #include <unordered_map>
#include <memory>
#include <vector>
template<> // Voxel has voxel.position which is a IVec2 containing 2 values, it also has a bool value
struct hash<Voxel> {
size_t operator()(const Voxel & k) const
{
return Math::hashFunc(k.position);
}
};
template<typename T> // This was already given
inline size_t hashFunc(const Vector<T, 2>& _key)
{
std::hash<T> hashfunc;
size_t h = 0xbd73a0fb;
h += hashfunc(_key[0]) * 0xf445f0a9;
h += hashfunc(_key[1]) * 0x5c23b2e1;
return h;
}
My main
int main()
{
Voxel t{ 16,0,true };
std::hash(t);
}
Right now I am writing a specialisation for std::hash, but the online submission page always returns the following errors for my code. I don't know why or what I did wrong.
error: 'hash' is not a class template struct hash<>
and
error: no match for call to '(const std::hash<Math::Vector<int, 2ul> >) (const Math::Vector<int, 2ul>&)' noexcept(declval<const_Hash((declval<const_Key&>()))>.
My own compiler only throws
error: The argument list for "class template" std :: hash "" is missing.
A: For posterity, I got the same error message when I had forgotten to #include <functional>.
A: You are specializing std::hash<> in the global namespace and this is ill-formed.
The specialization must be declared in the same namespace, std. See the example for std::hash:
// custom specialization of std::hash can be injected in namespace std
namespace std
{
template<> struct hash<S>
{
typedef S argument_type;
typedef std::size_t result_type;
result_type operator()(argument_type const& s) const
...
|
stackoverflow
|
{
"language": "en",
"length": 238,
"provenance": "stackexchange_0000F.jsonl.gz:850152",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44496509"
}
|
25ba971d8bf490e47bef6d9fa11852d95f24c28d
|
Q: How to check if a string contains another string (substring) in Robot Framework? How do I check if a string contains another string in Robot Framework?
Something like
${bool} | String Contains | Hello World | World
Get Substring doesn't help, because it needs a start index.
A: I have found another solution:
${match} | ${value} | Run Keyword And Ignore Error | Should Contain | full string | substring
${RETURNVALUE} | Set Variable If | '${match}' == 'PASS' | ${True} | ${False}
A: From the String library, use Get Lines Containing String (doc here), then check the result.
A: ${source}= Set Variable this is a string
# ${contains} will be True if "is a" is a part of the ${source} value
${contains}= Evaluate "is a" in """${source}"""
# will fail if "is a" is not a part of the ${source} value
Should Be True "is a" in """${source}"""
# using a robotframework keyword from the String library
# it is actually a wrapper of python's "var_a in var_b" - the previous approaches
Should Contain ${source} is a
# as last alternative - an approach that will store
# the result in a boolean, with RF standard keywords
# ${contains} will be True if "is a" is a part of the ${source} value
${contains}= Run Keyword And Return Status Should Contain ${source} is a
Hope the example is self-explanatory
A: A direct if condition can be used to check if a string is part of another
IF    '${var1}' in '${var2}'
    Log    ${var1}
END
This works with Robot 5+
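The Evaluate, Should Be True and IF approaches above all come down to Python's membership operator; a minimal Python sketch of that same check (illustrative only):
source = "this is a string"
substring = "is a"

# the expression Robot Framework evaluates under the hood
contains = substring in source
print(contains)  # True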
|
stackoverflow
|
{
"language": "en",
"length": 261,
"provenance": "stackexchange_0000F.jsonl.gz:850168",
"question_score": "12",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44496558"
}
|
00743568169d03679c94647a715e4cc3bd31c748
|
Q: Where are php's extensions .so files located? I opened some ini files like mysqli.ini, mysql.ini, pdo_mysql.ini. Inside those files a .so file is referenced for each extension. I want to know where these .so files are stored.
Inside mysqli.ini file
; configuration for php MySQL module
; priority=20
extension=mysqli.so
Inside mysql.ini file
; configuration for php MySQL module
; priority=20
extension=mysql.so
Inside pdo_mysql.ini file
; configuration for php MySQL module
; priority=20
extension=pdo_mysql.so
A: The .so files should be located inside your php extension directory.
You can use phpinfo() function to find the location of your php extension directory or you can use php -i from the command line.
Example
root@73fa7795de48:/# php -i | grep extension_dir
extension_dir => /usr/lib/php/20160303 => /usr/lib/php/20160303
A: On a Linux system,
php -i | grep extension_dir | cut -d " " -f 5
will display the current php extension dir.
|
stackoverflow
|
{
"language": "en",
"length": 150,
"provenance": "stackexchange_0000F.jsonl.gz:850175",
"question_score": "23",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44496577"
}
|
3f8c957aec5c09f0ae29a536ccf87bff7acb6228
|
Q: input type="file" - accept files without any extension how do I accept files without any extension?
I need to accept .json and no-extension files
I tried
<input type="file" accept=".json, . "/>
and some variations of it with a dot and an empty space, but none of them work.
A: There is no such feature in the HTML file input yet. If you want it, you can implement it with either server-side programming or JavaScript validation: accept files of all formats, then validate them, keep the desired formats and reject all others.
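A minimal Python sketch of the server-side check the answer suggests, assuming you only have the uploaded file's name (the function and names are illustrative):
import os

ALLOWED_EXTENSIONS = {"", ".json"}  # no extension, or .json

def is_accepted(filename):
    # splitext returns ('name', '.ext'); the extension is '' when there is none
    ext = os.path.splitext(filename)[1].lower()
    return ext in ALLOWED_EXTENSIONS

print(is_accepted("data.json"))  # True
print(is_accepted("LICENSE"))    # True (no extension)
print(is_accepted("photo.png"))  # False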
|
stackoverflow
|
{
"language": "en",
"length": 89,
"provenance": "stackexchange_0000F.jsonl.gz:850183",
"question_score": "6",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44496604"
}
|
5bd3ea9e0724dcd80ca30b00278c5e5e2e4d2f31
|
Q: What is the difference between yarn, grunt, npm, bower and nuget package manager? I'm a .NET developer with exposure to the NuGet package manager console only. I was reading about Node.js and React, both of which involve the npm and Yarn package managers.
Can anyone explain the difference between these products, and why they were introduced?
A: npm is the Node Package Manager.
Basically it is used to install dependencies.
In your case you will need this for React.
The Yarn package manager is also used to install dependencies, i.e. to install JavaScript packages.
The difference between npm and yarn is:
Yarn
Takes 10-12sec to install packages.
Yarn installs all dependencies in parallel.
Does not always require an internet connection to install dependencies.
NPM
Takes 20-25sec to install packages.
NPM always installs each dependency one after the other which may end up taking a lot of time.
Installing dependencies always requires an internet connection.
A: From Wikipedia: A package manager [...] is a collection of software tools that automates the process of installing, upgrading, configuring, and removing computer programs.
Instead of a complete computer program, you could also think of smaller parts like libraries, frameworks or just a bunch of files packaged together.
While NuGet focuses mainly on .NET (there are a lot of non-.NET packages on NuGet, however), NPM (Node Package Manager), Yarn and Bower are JavaScript package managers.
Yarn was created by Facebook and open-sourced. Speed comparisons found online show that Yarn is faster than NPM. Yarn is also able to install packages from a cache and does not require a connection to the Internet (as long as the package was downloaded before).
Grunt is a JavaScript Task Runner, not a package manager. You can use it to automate repetitive tasks like minification, compilation, unit testing, linting, etc.
|
stackoverflow
|
{
"language": "en",
"length": 299,
"provenance": "stackexchange_0000F.jsonl.gz:850207",
"question_score": "20",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44496676"
}
|
6730a243a944786de9bec5ef9450c9836cf99e27
|
Q: Qt OLEAUT32.DLL, COMDLG32.DLL not registered? I'm developing a new software in Qt but since a few days I get some confusing error messages when I run the project:
mincore\com\oleaut32\dispatch\ups.cpp(2128)\OLEAUT32.dll!75FEEF12: (caller: 75FEE58F) ReturnHr(1) tid(10a0) 8002801D Bibliothek nicht registriert.
mincore\com\oleaut32\dispatch\ups.cpp(2128)\OLEAUT32.dll!75FEEF12: (caller: 75FEE58F) ReturnHr(2) tid(10a0) 8002801D Bibliothek nicht registriert.
After I try to open an XML file with a QFileDialog a new error message appears:
shell\comdlg32\fileopensave.cpp(14267)\COMDLG32.DLL!76FC7BED: (caller: 76FF686C) ReturnHr(1) tid(10a0) 80004005 Unbekannter Fehler
CallContext:[\PickerModalLoop]
I really don't know how to fix the problem; maybe it appeared because of a new Windows 10 update?
Neither of the messages crashes the program. The first message appears every time I run the project; the second only appears when opening an XML file, which also means I cannot work with the program, because I need to open that XML file. I'm pretty sure the problem is not caused by my code.
Can anyone help me? I reinstalled Qt, but the problem still exists.
|
stackoverflow
|
{
"language": "en",
"length": 161,
"provenance": "stackexchange_0000F.jsonl.gz:850217",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44496701"
}
|
a9a1a612ae96a19022f6fd074f5f71623c2dba2a
|
Q: Chrome eats javascript keydown event handler on F11 key press, when browser is already in full screen mode Chrome is eating F11 key press event when the browser is already in full screen mode.
$(document).on('keydown', function(e) {
console.log(e.keyCode);
});
The above code prints the key code when F11 is pressed for the first time and Chrome switches to full-screen mode; however, if the F11 key is pressed again, Chrome switches back to normal mode but eats the F11 key press event.
Is there any way to handle the F11 event in Chrome in full-screen mode?
PLUNKER
A: Chrome prevents this key detection, and not by accident. This is to prevent developers' code from forcing the user to stay in full screen. When Chrome is in full-screen mode, there is no way to intercept the F11 key press via JavaScript.
|
stackoverflow
|
{
"language": "en",
"length": 138,
"provenance": "stackexchange_0000F.jsonl.gz:850219",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44496707"
}
|
e38d8e7f544425ffa366c907d20365c7c2d2d17a
|
Q: Display complex json in Angularjs I would like to display complex JSON with boolean values, strings and even arrays. How can I do that?
$scope.options = {
"option_1" : "some string",
"option_2" : true,
"option_3" : [1.123456789, 0.123548912, -7.156248965],
"option_4" : null,
"option_5" : [1234.45678, 75.142012]
}
I use something like this, but I have a problem with arrays:
<ul ng-repeat="(key, value) in options">
<li>{{key}}</li>
<span>{{value}}</span>
</ul>
I would like to display something like a table, with the keys as headings and the values under the appropriate keys, like this:
option_1 option_2 option_3 ...
some string true 1.123456789
0.123548912
-7.156248965
A: It should be like this.
app = angular.module("myApp", []);
app.controller("myCtrl", function($scope) {
$scope.isArray = angular.isArray;
$scope.options = {
"option_1": "some string",
"option_2": true,
"option_3": [1.123456789, 0.123548912, -7.156248965],
"option_4": null,
"option_5": [1234.45678, 75.142012]
}
})
<script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.2.23/angular.min.js"></script>
<div ng-app="myApp">
<div ng-controller="myCtrl">
<table class="table table-striped table-bordered table-hover">
<thead>
<tr>
<th ng-repeat="(key, value) in options">
{{ key }}
</th>
</tr>
</thead>
<tbody>
<tr>
<td ng-repeat="(key, value) in options">
{{isArray(value) ? '': value}}
<table ng-if="isArray(value)">
<tr ng-repeat="v in value">
<td>
{{v}}
</td>
</tr>
</table>
</td>
</tr>
</tbody>
</table>
</div>
</div>
|
stackoverflow
|
{
"language": "en",
"length": 186,
"provenance": "stackexchange_0000F.jsonl.gz:850246",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44496821"
}
|
e6608a1ff3bbeb1b50b874afbd33c312967fe997
|
Q: How to use Realm with feature-module structure? I want to rewrite my app as an instant app, but I get some problems with importing Realm into a feature module. If I write
apply plugin: 'com.android.feature'
apply plugin:'realm-android'
in the feature module Gradle can't build project and the error is:
Error:(2, 0) The android or android-library plugin must be applied to the project
But if I put this plugin in the application module, classes from the base module can't use Realm.
apply plugin: 'com.android.application'
apply plugin:'realm-android'
The error will be next:
Error:(23, 16) error: package io.realm does not exist
How to use realm in a feature module?
A: Realm explicitly checks for the existence of com.android.application or com.android.library plugins. Since it is not aware of com.android.feature plugin, you receive an exception.
https://github.com/realm/realm-java/blob/7dbacb438f8f1130155eacf06347fce703c8f1a8/gradle-plugin/src/main/groovy/io/realm/gradle/Realm.groovy#L34
void apply(Project project) {
// Make sure the project is either an Android application or library
def isAndroidApp = project.plugins.withType(AppPlugin)
def isAndroidLib = project.plugins.withType(LibraryPlugin)
if (!isAndroidApp && !isAndroidLib) {
throw new GradleException("'com.android.application' or 'com.android.library' plugin required.")
}
|
stackoverflow
|
{
"language": "en",
"length": 164,
"provenance": "stackexchange_0000F.jsonl.gz:850251",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44496837"
}
|
7dc3c46eafc6939acbb1d76ab890799ea5a34cf9
|
Stackoverflow Stackexchange
Q: IntelliJ and Git: How to see diffs between a commit and two older commits? You probably know the window where you can see the diffs between a commit and the ONE commit before it.
Do you know how I can see exactly the same comparison, but between a commit and a previous commit which is not necessarily the ONE immediately before it?
I know that I can do it per one file, but I want to do it for the whole project.
A: In IntelliJ, there is no command or action to compare exact revisions, unfortunately.
Here are a couple of related requests:
https://youtrack.jetbrains.com/issue/IDEA-125616 and https://youtrack.jetbrains.com/issue/IDEA-100431
However, there is a way to see what has changed between two commits. To do so you need to go to the Version control - Log tab and select the entire range between wanted commits (e.g select the later commit, then scroll down to the older commit and click on it with Shift). In the right pane showing changed files you will see all the changes.
|
Q: IntelliJ and Git: How to see diffs between a commit and two older commits? You probably know the window where you can see the diffs between a commit and the ONE commit before it.
Do you know how I can see exactly the same comparison, but between a commit and a previous commit which is not necessarily the ONE immediately before it?
I know that I can do it per one file, but I want to do it for the whole project.
A: In IntelliJ, there is no command or action to compare exact revisions, unfortunately.
Here are a couple of related requests:
https://youtrack.jetbrains.com/issue/IDEA-125616 and https://youtrack.jetbrains.com/issue/IDEA-100431
However, there is a way to see what has changed between two commits. To do so you need to go to the Version control - Log tab and select the entire range between wanted commits (e.g select the later commit, then scroll down to the older commit and click on it with Shift). In the right pane showing changed files you will see all the changes.
A: You can select any number of commits in the git log window (using shift/ctrl and click or cursor keys) and the right-hand pane will show the cumulated differences.
A: Another way to do it:
*
*Open the 1: Project panel
*Right-click your project's root folder
*Select Git → Show History from the menu
This opens up a completely different view of the git log, where you can do exactly what you'd expect to be able to from the main (9: Version Control) git log... namely:
*
*Select (only!) two commits
*Click Compare
From the pop-up dialog that appears, you can select any file and press Ctrl-/Cmd-D (or right-click and select the only menu item) to see the changes.
Unfortunately, there doesn't seem to be any way to "pin" that view to your workspace, though it hovers on top as long as you need it.
Hopefully one day JetBrains will create a "best of both worlds" merged version of these UIs, so we can just compare stuff from the main Version Control log. To add to the list of JetBrains tickets for this issue listed in another answer... the oldest one appears to be https://youtrack.jetbrains.com/issue/IDEA-86480
A: Also in CLion (I think it's the same in other JetBrains IDEs):
*
*open the VCS log
*filter the VCS log by the other branch (e.g., personal/sherstennikov/krt-23941)
*the top n commits must be the range on the other branch that we want to diff against the current branch
*select the other branch's HEAD with a left click
*right-click on it
*in the menu, select the 'Branch <branch name>' entry (e.g.: Branch 'personal/sherstennikov/krt-23941')
*expand it via the arrow on the right and click 'Compare with Current'
*you get a window (see pic) titled 'Comparing <other branch> with <current branch> in root <root>':
the top left pane contains the range of commits from the other branch;
the bottom left pane contains the log of the current branch;
the right pane contains the list of files which differ between the selected commit(s) in the other branch (if a range is selected, the list of files is cumulative) and HEAD (or maybe the selection) in the current branch
*now you can click on a file in the right pane to get its diff between the selected versions in a separate window (let's call it the file-diff window)
*also, in that file-diff window, just to the left of the unified/side-by-side viewer selector, there is a control to switch between the files in the aforementioned list of files which differ
|
stackoverflow
|
{
"language": "en",
"length": 532,
"provenance": "stackexchange_0000F.jsonl.gz:850285",
"question_score": "10",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44496928"
}
|
564ed4c2fca4fc172e7c402c361f83a8bd5846c4
|
Stackoverflow Stackexchange
Q: How to handle multiple Firebase FCM tokens per user? From the official documentation I understand that the way it works is something like this:
*
*User installs app, FCM token is generated
*Sending token to app server
*Server uses token to send push-notifications to this device.
What if at the same time this user installs app on the other device - should I store multiple tokens per user on the app server? If yes - that means there should be something like checking for which ones are expired?
A:
What if at the same time this user installs app on the other device - should I store multiple tokens per user on the app server?
Yes. A user could have multiple devices, a case where Device Groups are commonly used.
If yes - that means there should be something like checking for which ones are expired?
If a token expires, a callback is triggered (onTokenRefresh() for Android), from where you'll have to send the new token to your App Server and delete the old one corresponding to the user/device.
|
Q: How to handle multiple Firebase FCM tokens per user? From the official documentation I understand that the way it works is something like this:
*
*User installs app, FCM token is generated
*Sending token to app server
*Server uses token to send push-notifications to this device.
What if at the same time this user installs app on the other device - should I store multiple tokens per user on the app server? If yes - that means there should be something like checking for which ones are expired?
A:
What if at the same time this user installs app on the other device - should I store multiple tokens per user on the app server?
Yes. A user could have multiple devices, a case where Device Groups are commonly used.
If yes - that means there should be something like checking for which ones are expired?
If a token expires, a callback is triggered (onTokenRefresh() for Android), from where you'll have to send the new token to your App Server and delete the old one corresponding to the user/device.
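To make that refresh flow concrete, here is a minimal Android-side sketch (the sendRegistrationToServer() helper is hypothetical and stands in for whatever call uploads the token, ideally together with a stable device id, to your App Server); it follows the FirebaseInstanceIdService pattern the Firebase SDK used at the time of this question:
import com.google.firebase.iid.FirebaseInstanceId;
import com.google.firebase.iid.FirebaseInstanceIdService;

public class MyFirebaseInstanceIdService extends FirebaseInstanceIdService {
    @Override
    public void onTokenRefresh() {
        // Called whenever FCM rotates or invalidates the registration token.
        String refreshedToken = FirebaseInstanceId.getInstance().getToken();
        // Hypothetical helper: upload the new token so the server can replace the stale entry.
        sendRegistrationToServer(refreshedToken);
    }

    private void sendRegistrationToServer(String token) {
        // App-specific: POST the token (plus a device identifier) to your App Server here.
    }
}
The service also has to be declared in the manifest with the com.google.firebase.INSTANCE_ID_EVENT intent filter, as described in the FCM setup documentation.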
A: Graykos, you can do something like this:
Each time a user logs in, get a new token from Google, and when they log out delete that token (Edit note: closing the app and running it again is not counted as a new login and does not create a new token).
So if a user logs in from multiple devices/browsers OR multiple users log in from one device/browser one after the other, you can handle all of that nicely.
In the "multiple users log in from one device/browser one after the other" case, all of them share the same token (so delete and renew it for each login).
As Andres SK mentioned in the first comment of your question, you can also delete a token when a send failure happens (e.g. for a lost device that the user cannot log out from).
A: I also came across the exact same challenge and settled on a solution:
storing each token for the user against the device id.
It's interesting to know that such a function in fact exists in the Firebase Instance ID API, but it's surprising that there's no documentation for handling this scenario:
https://firebase.google.com/docs/reference/android/com/google/firebase/iid/FirebaseInstanceId.html#getId()
In summary, while sending the new token to the server, also send along the device id returned by the getId() method and use it to enforce uniqueness of the token per device.
And to apply this solution while taking advantage of the device grouping feature of FCM, you can make a server request on the FCM group to delete the old token for that device id before replacing it with the new one.
A: I came across the same problem. The way I tried to solve it is to maintain a data structure like this, and save this data to my notifier server's database.
type UserToken struct {
UserId string
Tokens []DeviceToken
}
type DeviceToken struct {
DeviceName string
Token string
}
Whenever you get a new device token from firebase just replace the existing one with new one for that device.
And for the web part I think storing the last one is enough.
A: I have a similar situation and found out the following error response code when trying to send to an expired token:
{
"error": {
"code": 404,
"message": "Requested entity was not found.",
"status": "NOT_FOUND",
"details": [
{
"@type": "type.googleapis.com/google.firebase.fcm.v1.FcmError",
"errorCode": "UNREGISTERED"
}
]
}}
My app server accepts multiple tokens per user assuming they use multiple devices. When sending a new message, I will try to send it to all tokens related to the user. Those that return this error will be deleted so future message will only be sent to active tokens.
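As a rough sketch of that server-side pruning with the firebase-admin Node SDK (the in-memory tokensByUser map is a stand-in for your database, and the exact error code depends on which FCM API you call; the legacy sendToDevice() API used here reports messaging/registration-token-not-registered):
import * as admin from "firebase-admin";

admin.initializeApp();

// Hypothetical store: userId -> set of device tokens (a real database in practice).
const tokensByUser = new Map<string, Set<string>>();

async function notifyUser(userId: string, payload: admin.messaging.MessagingPayload) {
  const tokens = Array.from(tokensByUser.get(userId) ?? []);
  if (tokens.length === 0) return;

  const response = await admin.messaging().sendToDevice(tokens, payload);

  // Drop any token that FCM reports as no longer registered.
  response.results.forEach((result, index) => {
    const code = result.error?.code;
    if (code === "messaging/registration-token-not-registered" ||
        code === "messaging/invalid-registration-token") {
      tokensByUser.get(userId)?.delete(tokens[index]);
    }
  });
}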
|
stackoverflow
|
{
"language": "en",
"length": 610,
"provenance": "stackexchange_0000F.jsonl.gz:850294",
"question_score": "22",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44496966"
}
|
40c8920750262f115e26343e028ec2e8eb96895f
|
Stackoverflow Stackexchange
Q: Mongod Error: 98 Unable to lock file: /data/db/mongod.lock Resource temporarily unavailable. Is a mongod instance already running?
2017-06-12T13:06:18.407+0300 I STORAGE [initandlisten]
exception in initAndListen: 98 Unable to lock file: /data/db/mongod.lock Resource temporarily unavailable. Is a mongod instance already running?, terminating
2017-06-12T13:06:18.407+0300 I NETWORK [initandlisten]
shutdown: going to close listening sockets...
2017-06-12T13:06:18.407+0300 I NETWORK [initandlisten]
shutdown: going to flush diaglog...
2017-06-12T13:06:18.407+0300 I CONTROL [initandlisten]
now exiting
2017-06-12T13:06:18.407+0300 I CONTROL [initandlisten]
shutting down with code:100
A: The error clearly says
exception in initAndListen: 98 Unable to lock file:
/data/db/mongod.lock Resource temporarily unavailable. Is a mongod
instance already running?, terminating
An instance of mongod is already running and it held a lock on mongod.lock file. Run ps -eaf | grep mongod to find the running instance. If running, kill the process sudo kill <pID> obtained from above grep command.
Then delete the mongod.lock file as mongod wasn't shutdown gracefully. Post deleting the lock file start the mongod process sudo mongod.
Hope this helps!
|
Q: Mongod Error: 98 Unable to lock file: /data/db/mongod.lock Resource temporarily unavailable. Is a mongod instance already running?
2017-06-12T13:06:18.407+0300 I STORAGE [initandlisten]
exception in initAndListen: 98 Unable to lock file: /data/db/mongod.lock Resource temporarily unavailable. Is a mongod instance already running?, terminating
2017-06-12T13:06:18.407+0300 I NETWORK [initandlisten]
shutdown: going to close listening sockets...
2017-06-12T13:06:18.407+0300 I NETWORK [initandlisten]
shutdown: going to flush diaglog...
2017-06-12T13:06:18.407+0300 I CONTROL [initandlisten]
now exiting
2017-06-12T13:06:18.407+0300 I CONTROL [initandlisten]
shutting down with code:100
A: The error clearly says
exception in initAndListen: 98 Unable to lock file:
/data/db/mongod.lock Resource temporarily unavailable. Is a mongod
instance already running?, terminating
An instance of mongod is already running and it held a lock on mongod.lock file. Run ps -eaf | grep mongod to find the running instance. If running, kill the process sudo kill <pID> obtained from above grep command.
Then delete the mongod.lock file as mongod wasn't shutdown gracefully. Post deleting the lock file start the mongod process sudo mongod.
Hope this helps!
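Putting those steps together as shell commands (the process id is a placeholder taken from the grep output, and the lock file path comes from the error message above):
# find any running mongod instance
ps -eaf | grep mongod
# stop it if one is running (replace <pID> with the id from the output above)
sudo kill <pID>
# remove the stale lock file left behind by an unclean shutdown
sudo rm /data/db/mongod.lock
# start mongod again
sudo mongod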
|
stackoverflow
|
{
"language": "en",
"length": 161,
"provenance": "stackexchange_0000F.jsonl.gz:850305",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44497009"
}
|
8feead3b3cdc6cccfad262558da51ca53984253a
|
Stackoverflow Stackexchange
Q: “StandardIn has not been redirected” C# Below is my code. We get exception when we try to write to command line.
Process ourProc = Process.GetProcessById(id);
ourProc.StandardInput.WriteLine("echo %PATH%");
I added the code below to set RedirectStandardInput to true, but it still does not work.
ourProc.StartInfo.RedirectStandardInput = true;
Any help on this will be appreciated.
A: According to the spec, you must also set UseShellExecute = false. Also it might not work with already running processes -- it is start information that should be set before process is started.
|
Q: “StandardIn has not been redirected” C# Below is my code. We get exception when we try to write to command line.
Process ourProc = Process.GetProcessById(id);
ourProc.StandardInput.WriteLine("echo %PATH%");
I added the code below to set RedirectStandardInput to true, but it still does not work.
ourProc.StartInfo.RedirectStandardInput = true;
Any help on this will be appreciated.
A: According to the spec, you must also set UseShellExecute = false. Also it might not work with already running processes -- it is start information that should be set before process is started.
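A minimal C# sketch of that, assuming you can start the process yourself rather than attach to one that is already running:
using System.Diagnostics;

class Program
{
    static void Main()
    {
        var startInfo = new ProcessStartInfo("cmd.exe")
        {
            RedirectStandardInput = true,
            UseShellExecute = false
        };

        using (var process = Process.Start(startInfo))
        {
            // StandardInput is only usable because redirection was configured
            // on the ProcessStartInfo before the process was started.
            process.StandardInput.WriteLine("echo %PATH%");
            process.StandardInput.WriteLine("exit");
            process.WaitForExit();
        }
    }
}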
|
stackoverflow
|
{
"language": "en",
"length": 89,
"provenance": "stackexchange_0000F.jsonl.gz:850306",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44497010"
}
|
25179672e3ed572b6b1e81b76cb75d9373846aba
|
Stackoverflow Stackexchange
Q: Generate The Raw MySQL Query From Laravel Query Builder How can I get the MySQL query of a Laravel query?
Convert:
App\User::where('balance','>',0)->where(...)->get();
To:
SELECT * FROM users WHERE `balance`>0 and ...
A: you can add this function to your helpers
function getRealQuery($query, $dumpIt = false)
{
$params = array_map(function ($item) {
return "'{$item}'";
}, $query->getBindings());
$result = str_replace_array('\?', $params, $query->toSql());
if ($dumpIt) {
dd($result);
}
return $result;
}
and use like this:
getRealQuery(App\User::where('balance','>',0)->where(...),true)
|
Q: Generate The Raw MySQL Query From Laravel Query Builder How can I get the MySQL query of a Laravel query?
Convert:
App\User::where('balance','>',0)->where(...)->get();
To:
SELECT * FROM users WHERE `balance`>0 and ...
A: you can add this function to your helpers
function getRealQuery($query, $dumpIt = false)
{
$params = array_map(function ($item) {
return "'{$item}'";
}, $query->getBindings());
$result = str_replace_array('\?', $params, $query->toSql());
if ($dumpIt) {
dd($result);
}
return $result;
}
and use like this:
getRealQuery(App\User::where('balance','>',0)->where(...),true)
A: use toSql() method of laravel to get the query to be executed like
App\User::where('balance','>',0)->where(...)->toSql();
But Laravel will not show you parameters in your query, because they are bound after preparation of the query. To get the bind parameters, use this
$query=App\User::where('balance','>',0)->where(...);
print_r($query->getBindings() );
Alternatively, enable the query log with DB::enableQueryLog() and then, to output the last queries that were run to the screen, use this:
dd(DB::getQueryLog());
A: Method 1
To print a single query, use toSql() method of laravel to get the query to be executed like
App\User::where('balance','>',0)->where(...)->toSql();
Method 2
Laravel can optionally log in memory all queries that have been run for the current request. But in some cases, such as when inserting a large number of rows, this can cause the application to use excess memory, so you should avoid this.
To enable the log, you may use the enableQueryLog method as
DB::connection()->enableQueryLog();
To get an array of the executed queries, you may use the getQueryLog method as
$queries = DB::getQueryLog();
you can get more details here Laravel Enable Query Log
Method 3
Another approach to display all queries used in Laravel without enabling the query log is to install the LaravelDebugBar from here: Laravel Debug Bar.
It is a package that allows you to quickly and easily keep tabs on your application during development.
A: To print the raw sql query, try:
DB::enableQueryLog();
// Your query here
$queries = DB::getQueryLog();
print_r($queries);
Reference
A: Here is a helper function who tells you the last SQL executed.
use DB;
public static function getLastSQL()
{
$queries = DB::getQueryLog();
$last_query = end($queries);
// last_query is the SQL with with data binding like
// {
// select ? from sometable where field = ? and field2 = ? ;
// param1,
// param2,
// param3,
// }
// which is hard to read.
$last_query = bindDataToQuery($last_query);
// here, last_query is the last SQL you have executed as normal SQL
// select param1 from sometable where field=param2 and field2 = param3;
return $last_query
}
Here is the bindDataToQuery function, which fills in the '?' blanks with the real params.
protected static function bindDataToQuery($queryItem){
$query = $queryItem['query'];
$bindings = $queryItem['bindings'];
$arr = explode('?',$query);
$res = '';
foreach($arr as $idx => $ele){
if($idx < count($arr) - 1){
$res = $res.$ele."'".$bindings[$idx]."'";
}
}
$res = $res.$arr[count($arr) -1];
return $res;
}
A: It is so strange that Laravel doesn't support any easy way to get the raw SQL; it is now version 6 after all...
Here's a workaround I used by myself to quickly get the raw sql with parameters without installing any extension...
Just deliberately make your original sql WRONG
Like change
DB::table('user')
to
DB::table('user1')
where the table "user1" does not exist at all!
Then run it again.
Sure there will be an exception reported by laravel.
SQLSTATE[42S02]: Base table or view not found: 1146 Table 'user1' doesn't exist (SQL: ...)
And now you can see the raw sql with parameters is right after the string "(SQL:"
Change back from the wrong table name to the right one and there you go!
A: In Laravel 5.4 (I didn't check this in other versions), add this function into the
"App"=>"Providers"=>"AppServiceProvider.php" .
public function boot()
{
if (App::isLocal()) {
DB::listen(
function ($sql) {
// $sql is an object with the properties:
// sql: The query
// bindings: the sql query variables
// time: The execution time for the query
// connectionName: The name of the connection
// To save the executed queries to file:
// Process the sql and the bindings:
foreach ($sql->bindings as $i => $binding) {
if ($binding instanceof \DateTime) {
$sql->bindings[$i] = $binding->format('\'Y-m-d H:i:s\'');
} else {
if (is_string($binding)) {
$sql->bindings[$i] = "'$binding'";
}
}
}
// Insert bindings into query
$query = str_replace(array('%', '?'), array('%%', '%s'), $sql->sql);
$query = vsprintf($query, $sql->bindings);
// Save the query to file
/*$logFile = fopen(
storage_path('logs' . DIRECTORY_SEPARATOR . date('Y-m-d') . '_query.log'),
'a+'
);*/
Log::notice("[USER] $query");
}
);
}
}
After that install,
https://github.com/ARCANEDEV/LogViewer
and then you can see every executed SQL queries without editing the code.
A: To get mysql query in laravel you need to log your query as
DB::enableQueryLog();
App\User::where('balance','>',0)->where(...)->get();
print_r(DB::getQueryLog());
Check reference : https://laravel.com/docs/5.0/database#query-logging
A: Instead of interfering with the application with print statements or "dds", I do the following when I want to see the generated SQL:
DB::listen(function ($query) {
Log::info($query->sql, $query->bindings);
});
// (DB and Log are the facades in Illuminate\Support\Facades namespace)
This will output the sql to the Laravel log (located at storage/logs/laravel.log). A useful command for following writes to this file is
tail -n0 -f storage/logs/laravel.log
A: A simple way to display all queries used in Laravel without any code changes at all is to install the LaravelDebugBar (https://laravel-news.com/laravel-debugbar).
As part of the functionality you get a tab which will show you all of the queries that a page has used.
A: Try this:
$results = App\User::where('balance','>',0)->where(...)->toSql();
dd($results);
Note: get() has been replaced with toSql() to display the raw SQL query.
A: A very simple and shortcut way is below
Write a column name incorrectly, e.g. 'balancedd' instead of 'balance', and when you execute the code the query will be displayed on the error screen with all the parameters, along with a "column not found" error.
A: DB::enableQueryLog();
(Query)
$d= DB::getQueryLog(); print"<pre>"; print_r ($d); print"</pre>";
you will get the mysql query that is just run.
A: There is actually no such thing in Laravel, or even in PHP, since PHP internally sends the parameters along with the query string to the database, where they are (possibly) parsed into the raw query string.
The accepted answer is really an optimistic solution that only "optionally works".
|
stackoverflow
|
{
"language": "en",
"length": 997,
"provenance": "stackexchange_0000F.jsonl.gz:850342",
"question_score": "26",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44497115"
}
|
aec040a82647cff82027e017bcb98849cc873ce8
|
Stackoverflow Stackexchange
Q: Importing just DefaultUrlSerializer class into an Angular project without entire router module Following import statement pulls entire router module into the final webpack bundle.
import { DefaultUrlSerializer } from '@angular/router';
Is there a way to just import the DefaultUrlSerializer without other irrelevant module ?
I'm using Webpack module builder and Angular Cli for AOT/production builds.
A: No, you cannot do that unless you build Angular yourself. The npm package doesn't ship modules separately, but as one bundle in the UMD format:
node_modules
@angular
router
bundles
router.umd.js
No matter how you import DefaultUrlSerializer, webpack will include the contents of the entire router.umd.js in the final build as it can't extract code from a file.
|
Q: Importing just DefaultUrlSerializer class into an Angular project without entire router module Following import statement pulls entire router module into the final webpack bundle.
import { DefaultUrlSerializer } from '@angular/router';
Is there a way to just import the DefaultUrlSerializer without other irrelevant module ?
I'm using Webpack module builder and Angular Cli for AOT/production builds.
A: No, you cannot do that unless you build Angular yourself. The npm package doesn't ship modules separately, but as one bundle in the UMD format:
node_modules
@angular
router
bundles
router.umd.js
No matter how you import DefaultUrlSerializer, webpack will include the contents of the entire router.umd.js in the final build as it can't extract code from a file.
|
stackoverflow
|
{
"language": "en",
"length": 116,
"provenance": "stackexchange_0000F.jsonl.gz:850352",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44497164"
}
|
351ca6d9691a8160c731cd7ee37b18df00258eb7
|
Stackoverflow Stackexchange
Q: Log requests to a flask server using gunicorn as wsgi server ... to AWS cloudwatch I am using a flask server with gunicorn as the wsgi server.
I want to log all requests details to cloudwatch.
from flask import Flask, jsonify, request
app = Flask(__name__)
@app.route('/')
def index():
return jsonify({
'logging': "I want to log this request to cloudwatch",
"request": request
})
if __name__=='__main__':
app.run()
A: One way is to set up a logger and use watchtower:
https://watchtower.readthedocs.io/en/latest/#example-flask-logging-with-watchtower
import watchtower, flask, logging
logging.basicConfig(level=logging.INFO)
app = flask.Flask("loggable")
handler = watchtower.CloudWatchLogHandler()
app.logger.addHandler(handler)
logging.getLogger("werkzeug").addHandler(handler)
@app.route('/')
def index():
logging.info("I want to log this request to cloudwatch")
return jsonify({
"request": request
})
if __name__ == '__main__':
app.run()
|
Q: Log requests to a flask server using gunicorn as wsgi server ... to AWS cloudwatch I am using a flask server with gunicorn as the wsgi server.
I want to log all requests details to cloudwatch.
from flask import Flask, jsonify, request
app = Flask(__name__)
@app.route('/')
def index():
return jsonify({
'logging': "I want to log this request to cloudwatch",
"request": request
})
if __name__=='__main__':
app.run()
A: One way is to set up a logger and use watchtower:
https://watchtower.readthedocs.io/en/latest/#example-flask-logging-with-watchtower
import watchtower, flask, logging
logging.basicConfig(level=logging.INFO)
app = flask.Flask("loggable")
handler = watchtower.CloudWatchLogHandler()
app.logger.addHandler(handler)
logging.getLogger("werkzeug").addHandler(handler)
@app.route('/')
def index():
logging.info("I want to log this request to cloudwatch")
return jsonify({
"request": request
})
if __name__ == '__main__':
app.run()
|
stackoverflow
|
{
"language": "en",
"length": 112,
"provenance": "stackexchange_0000F.jsonl.gz:850364",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44497214"
}
|
f1c12fc89544e152fee3e1c7d5489899a76c39be
|
Stackoverflow Stackexchange
Q: How to automatically update charts linked to Google Sheets? I have a Google Slides presentation with charts that are linked to a specific Google Sheets Spreadsheet.
As there are many charts in the presentation, I'm looking for a way to update all these linked charts automatically, or at least all of them at once.
What is the best way to do this?
A: You can add a custom function to a dropdown menu in the Slides UI with the following script. This gets the slides from the current presentation, loops through them, gets any charts in each slide and refreshes (updates) them.
function onOpen() {
var ui = SlidesApp.getUi();
ui.createMenu('Custom Menu')
.addItem('Batch Update Charts', 'batchUpdate')
.addToUi();
}
function batchUpdate(){
var gotSlides = SlidesApp.getActivePresentation().getSlides();
for (var i = 0; i < gotSlides.length; i++) {
var slide = gotSlides[i];
var sheetsCharts = slide.getSheetsCharts();
for (var k = 0; k < sheetsCharts.length; k++) {
var shChart = sheetsCharts[k];
shChart.refresh();
}
}
}
Note: The functionality to update/refresh linked Slides doesn't appear to exist at the time of this response.
|
Q: How to automatically update charts linked to Google Sheets? I have a Google Slides presentation with charts that are linked to a specific Google Sheets Spreadsheet.
As there are many charts in the presentation, I'm looking for a way to update all these linked charts automatically, or at least all of them at once.
What is the best way to do this?
A: You can add a custom function to a dropdown menu in the Slides UI with the following script. This gets the slides from the current presentation, loops through them, gets any charts in each slide and refreshes (updates) them.
function onOpen() {
var ui = SlidesApp.getUi();
ui.createMenu('Custom Menu')
.addItem('Batch Update Charts', 'batchUpdate')
.addToUi();
}
function batchUpdate(){
var gotSlides = SlidesApp.getActivePresentation().getSlides();
for (var i = 0; i < gotSlides.length; i++) {
var slide = gotSlides[i];
var sheetsCharts = slide.getSheetsCharts();
for (var k = 0; k < sheetsCharts.length; k++) {
var shChart = sheetsCharts[k];
shChart.refresh();
}
}
}
Note: The functionality to update/refresh linked Slides doesn't appear to exist at the time of this response.
A: You can find this in the official API documentation (for different languages):
https://developers.google.com/slides/how-tos/add-chart#refreshing_a_chart
You need to write a script for this and run it on a schedule or manually.
Here is my own code that worked great for me.
from __future__ import print_function
import httplib2
import os
from apiclient import discovery
from oauth2client import client
from oauth2client import tools
from oauth2client.file import Storage
try:
import argparse
flags = argparse.ArgumentParser(parents=[tools.argparser]).parse_args()
except ImportError:
flags = None
# If modifying these scopes, delete your previously saved credentials
# at ~/.credentials/slides.googleapis.com-python-quickstart.json
SCOPES = 'https://www.googleapis.com/auth/drive'
CLIENT_SECRET_FILE = 'client_secret.json'
APPLICATION_NAME = 'Google Slides API Python Quickstart'
def get_credentials():
"""Gets valid user credentials from storage.
If nothing has been stored, or if the stored credentials are invalid,
the OAuth2 flow is completed to obtain the new credentials.
Returns:
Credentials, the obtained credential.
"""
home_dir = os.path.expanduser('~')
credential_dir = os.path.join(home_dir, '.credentials')
if not os.path.exists(credential_dir):
os.makedirs(credential_dir)
credential_path = os.path.join(credential_dir,
'slides.googleapis.com-python-quickstart.json')
store = Storage(credential_path)
credentials = store.get()
if not credentials or credentials.invalid:
flow = client.flow_from_clientsecrets(CLIENT_SECRET_FILE, SCOPES)
flow.user_agent = APPLICATION_NAME
if flags:
credentials = tools.run_flow(flow, store, flags)
else: # Needed only for compatibility with Python 2.6
credentials = tools.run(flow, store)
print('Storing credentials to ' + credential_path)
return credentials
def main():
"""Shows basic usage of the Slides API.
Creates a Slides API service object and prints the number of slides and
elements in a sample presentation:
"""
credentials = get_credentials()
http = credentials.authorize(httplib2.Http())
service = discovery.build('slides', 'v1', http=http)
# Here past your presentation id
presentationId = '1Owma9l9Z0Xjm1OPp-fcchdcxc1ImBPY2j9QH1LBDxtk'
presentation = service.presentations().get(
presentationId=presentationId).execute()
slides = presentation.get('slides')
print ('The presentation contains {} slides:'.format(len(slides)))
for slide in slides:
for element in slide['pageElements']:
presentation_chart_id = element['objectId']
# Execute the request.
try:
requests = [{'refreshSheetsChart': {'objectId': presentation_chart_id}}]
body = {'requests': requests}
#print(element)
requests = service.presentations().batchUpdate(
presentationId=presentationId, body=body).execute()
print('Refreshed a linked Sheets chart with ID: {0}'.format(presentation_chart_id))
except Exception:
pass
if __name__ == '__main__':
main()
A: Latest update: There is now an option in Slides's Tools drop-down menu to see all Linked Objects; the menu that appears has the option at the bottom to "Update all".
|
stackoverflow
|
{
"language": "en",
"length": 512,
"provenance": "stackexchange_0000F.jsonl.gz:850376",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44497247"
}
|
431eab30e683c187646fdadb4fa2d833ab38395d
|
Stackoverflow Stackexchange
Q: ECharts generated label overlaps with dataZoom Using ECharts, I am giving it a data series that consists of a number of data points against real values (e.g. plotting stock prices).
I also have data zoom enabled.
My issue is that the generated X axis' labels overlap with the dataZoom. I can't understand from the documentation how to fix this.
A: You need to set the value of grid.bottom. This will move the whole grid further from the bottom of the canvas and pull the whole X Axis with it.
Example: grid: { bottom: 60 }
// usage
this._displayedChart.setOption({ grid: { bottom: 10 } })
Not a great solution but works.
https://ecomfe.github.io/echarts-doc/public/en/option.html#grid.bottom
|
Q: ECharts generated label overlaps with dataZoom Using ECharts, I am giving it a data series that consists of a number of data points against real values (e.g. plotting stock prices).
I also have data zoom enabled.
My issue is that the generated X axis' labels overlap with the dataZoom. I can't understand from the documentation how to fix this.
A: You need to set the value of grid.bottom. This will move the whole grid further from the bottom of the canvas and pull the whole X Axis with it.
Example: grid: { bottom: 60 }
// usage
this._displayedChart.setOption({ grid: { bottom: 10 } })
Not a great solution but works.
https://ecomfe.github.io/echarts-doc/public/en/option.html#grid.bottom
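For context, a minimal option sketch showing the two pieces together (the values are arbitrary, and myChart is assumed to be your existing ECharts instance; tune grid.bottom and the slider's bottom offset to your layout):
var option = {
    // reserve extra space at the bottom so the x-axis labels
    // no longer collide with the slider-type dataZoom
    grid: { bottom: 80 },
    xAxis: { type: 'category', data: ['2017-01', '2017-02', '2017-03'] },
    yAxis: { type: 'value' },
    dataZoom: [{ type: 'slider', bottom: 10 }],
    series: [{ type: 'line', data: [120, 200, 150] }]
};
myChart.setOption(option);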
|
stackoverflow
|
{
"language": "en",
"length": 111,
"provenance": "stackexchange_0000F.jsonl.gz:850390",
"question_score": "6",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44497298"
}
|
c13cbef7a8e7b5ff993e03d929b838c8c23a7338
|
Stackoverflow Stackexchange
Q: Partial file name Search of Azure blob storage without file extension I have image files on Azure in a Blob container. All files have unique names. I need to search these image files by name without the extensions. For example I have these files:
123.PNG
345.jpg
122.JPG
Present code can search if i give complete name of the file such as 123.PNG.
How to make it work with just passing 123.
Code: ID is being passed as a parameter, which is the file name in the blob:
var blobClient = storageAccount.CreateCloudBlobClient();
var container = blobClient.GetContainerReference("images");
container.CreateIfNotExists();
var blockBlob = container.GetBlockBlobReference(id);
blockBlob.FetchAttributes();
byte[] downloadedImage = new byte[blockBlob.Properties.Length];
blockBlob.DownloadToByteArray(downloadedImage, 0);
var imageBase64 = Convert.ToBase64String(downloadedImage);
A: What you could do is use the ListBlobs method that accepts a string prefix parameter like this:
var blobClient = storageAccount.CreateCloudBlobClient();
var container = blobClient.GetContainerReference("images");
container.CreateIfNotExists();
var blockBlobs = container.ListBlobs(prefix: "123.").OfType<CloudBlockBlob>();
var blockBlob = blockBlobs.First();
blockBlob.FetchAttributes();
byte[] downloadedImage = new byte[blockBlob.Properties.Length];
blockBlob.DownloadToByteArray(downloadedImage, 0);
var imageBase64 = Convert.ToBase64String(downloadedImage);
The above example will find 123.JPG or 123.PNG (or both)
You will get a list of all blobs that have a name starting with the value of prefix.
|
Q: Partial file name Search of Azure blob storage without file extension I have image files on Azure in a Blob container. All files have unique names. I need to search these image files by name without the extensions. For example I have these files:
123.PNG
345.jpg
122.JPG
Present code can search if i give complete name of the file such as 123.PNG.
How to make it work with just passing 123.
Code: ID is being passed as a parameter, which is the file name in the blob:
var blobClient = storageAccount.CreateCloudBlobClient();
var container = blobClient.GetContainerReference("images");
container.CreateIfNotExists();
var blockBlob = container.GetBlockBlobReference(id);
blockBlob.FetchAttributes();
byte[] downloadedImage = new byte[blockBlob.Properties.Length];
blockBlob.DownloadToByteArray(downloadedImage, 0);
var imageBase64 = Convert.ToBase64String(downloadedImage);
A: What you could do is use the ListBlobs method that accepts a string prefix parameter like this:
var blobClient = storageAccount.CreateCloudBlobClient();
var container = blobClient.GetContainerReference("images");
container.CreateIfNotExists();
var blockBlobs = container.ListBlobs(prefix: "123.").OfType<CloudBlockBlob>();
var blockBlob = blockBlobs.First();
blockBlob.FetchAttributes();
byte[] downloadedImage = new byte[blockBlob.Properties.Length];
blockBlob.DownloadToByteArray(downloadedImage, 0);
var imageBase64 = Convert.ToBase64String(downloadedImage);
The above example will find 123.JPG or 123.PNG (or both)
You will get a list of all blobs that have a name starting with the value of prefix.
A: For newcomers, you should use it like this:
var pagesize = 10;
var resultSegment = blobContainerClient.GetBlobsAsync(prefix: "BlobName")
.AsPages(default, pagesize);
// Enumerate the blobs returned for each page.
await foreach (Azure.Page<BlobItem> blobPage in resultSegment)
{
foreach (BlobItem blobItem in blobPage.Values)
{
Console.WriteLine("Blob name: {0}", blobItem.Name);
}
Console.WriteLine();
}
Ref: MSDN(List blobs with Azure Storage client libraries)
|
stackoverflow
|
{
"language": "en",
"length": 241,
"provenance": "stackexchange_0000F.jsonl.gz:850400",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44497328"
}
|
19062bc29fa14bfd3dc1efb9a5d45ec0fa308c67
|
Stackoverflow Stackexchange
Q: How to reset values in ng-select in angular I am using ng-select on a modal.
<div class="modal-body">
<div class="form-group has-feedback">
<ng-select #select name="currenies" [allowClear]="true" [items]="currencyList" [disabled]="disabled" (selected)="selected($event)"
placeholder="Currency description + ISO code">
</ng-select>
</div>
</div>
After I close the modal, selected value attached to html remains constant.
How to reset the ng-select?
A: You can reset it using the clear method when the modal is closed:
<div class="modal-body" (onClose)="select.clear()">
|
Q: How to reset values in ng-select in angular I am using ng-select on a modal.
<div class="modal-body">
<div class="form-group has-feedback">
<ng-select #select name="currenies" [allowClear]="true" [items]="currencyList" [disabled]="disabled" (selected)="selected($event)"
placeholder="Currency description + ISO code">
</ng-select>
</div>
</div>
After I close the modal, selected value attached to html remains constant.
How to reset the ng-select?
A: You can reset it using the clear method when the modal is closed:
<div class="modal-body" (onClose)="select.clear()">
A: You can use this
$('#modal_id').on('hidden.bs.modal', function (e) {
$(this).find("select").val('').end();
})
|
stackoverflow
|
{
"language": "en",
"length": 82,
"provenance": "stackexchange_0000F.jsonl.gz:850416",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44497377"
}
|
6127edc39241b916aa97da68876364b39116a86f
|
Stackoverflow Stackexchange
Q: TypeScript array to string literal type I currently have both an array of strings and a string literal union type containing the same strings:
const furniture = ['chair', 'table', 'lamp'];
type Furniture = 'chair' | 'table' | 'lamp';
I need both in my application, but I am trying to keep my code DRY. So is there any way to infer one from the other?
I basically want to say something like type Furniture = [any string in furniture array], so there are no duplicate strings.
A: easiest in typescript 3.4:
(note TypeScript 3.4 added const assertions)
const furniture = ["chair", "table", "lamp"] as const;
type Furniture = typeof furniture[number]; // "chair" | "table" | "lamp"
also see https://stackoverflow.com/a/55505556/4481226
or if you have these as keys in an object, you can also convert it to a union:
const furniture = {chair:{}, table:{}, lamp:{}} as const;
type Furniture = keyof typeof furniture; // "chair" | "table" | "lamp"
|
Q: TypeScript array to string literal type I currently have both an array of strings and a string literal union type containing the same strings:
const furniture = ['chair', 'table', 'lamp'];
type Furniture = 'chair' | 'table' | 'lamp';
I need both in my application, but I am trying to keep my code DRY. So is there any way to infer one from the other?
I basically want to say something like type Furniture = [any string in furniture array], so there are no duplicate strings.
A: easiest in typescript 3.4:
(note TypeScript 3.4 added const assertions)
const furniture = ["chair", "table", "lamp"] as const;
type Furniture = typeof furniture[number]; // "chair" | "table" | "lamp"
also see https://stackoverflow.com/a/55505556/4481226
or if you have these as keys in an object, you can also convert it to a union:
const furniture = {chair:{}, table:{}, lamp:{}} as const;
type Furniture = keyof typeof furniture; // "chair" | "table" | "lamp"
A: TypeScript 3.4+
TypeScript version 3.4 has introduced so-called "const contexts", which is a way to declare a tuple type as immutable and get the narrow literal type directly (without the need to call a function like shown below in the 3.0 solution).
With this new syntax, we get this nice concise solution:
const furniture = ['chair', 'table', 'lamp'] as const;
type Furniture = typeof furniture[number];
More about the new const contexts is found in this PR as well as in the release notes.
TypeScript 3.0+
With the use of generic rest parameters, there is a way to correctly infer string[] as a literal tuple type and then get the union type of the literals.
It goes like this:
const tuple = <T extends string[]>(...args: T) => args;
const furniture = tuple('chair', 'table', 'lamp');
type Furniture = typeof furniture[number];
More about generic rest parameters
A: This answer is out of date; see @ggradnig's answer.
The best available workaround:
const furnitureObj = { chair: 1, table: 1, lamp: 1 };
type Furniture = keyof typeof furnitureObj;
const furniture = Object.keys(furnitureObj) as Furniture[];
Ideally we could do this:
const furniture = ['chair', 'table', 'lamp'];
type Furniture = typeof furniture[number];
Unfortunately, today furniture is inferred as string[], which means Furniture is now also a string.
We can enforce the typing as a literal with a manual annotation, but it brings back the duplication:
const furniture = ["chair", "table", "lamp"] as ["chair", "table", "lamp"];
type Furniture = typeof furniture[number];
TypeScript issue #10195 tracks the ability to hint to TypeScript that the list should be inferred as a static tuple and not string[], so maybe in the future this will be possible.
A: The only adjustement I would suggest is to make the const guaranteed compatible with the type, like this:
type Furniture = 'chair' | 'table' | 'lamp';
const furniture: Furniture[] = ['chair', 'table', 'lamp'];
This will give you a warning should you make a spelling error in the array, or add an unknown item:
// Warning: Type 'unknown' is not assignable to furniture
const furniture: Furniture[] = ['chair', 'table', 'lamp', 'unknown'];
The only case this wouldn't help you with is where the array didn't contain one of the values.
|
stackoverflow
|
{
"language": "en",
"length": 523,
"provenance": "stackexchange_0000F.jsonl.gz:850420",
"question_score": "173",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44497388"
}
|
a1f4fe8212af73fcc55dc56857bbc614a2a0e1d5
|
Stackoverflow Stackexchange
Q: iOS - Shortcut for jumping to definition in Xcode 9? In previous Xcode versions, I could jump to definition with a simple
Cmd + click on the method/variable.
But in Xcode 9, jumping to definition feels uncomfortable.
Does anyone have a better solution for jumping to definition in Xcode 9?
I am tired of selecting options from the dropdown list.
A: Standard hot key for jump to definition is ctrl+cmd+j. Set cursor to the class/method you are interested in and press this buttons to switch to declaration. Also you can try to press ctrl+opt+cmd+j. In this case definition will be opened in assistant editor
|
Q: iOS - Shortcut for jumping to definition in Xcode 9? In previous Xcode versions, I could jump to definition with a simple
Cmd + click on the method/variable.
But in Xcode 9, jumping to definition feels uncomfortable.
Does anyone have a better solution for jumping to definition in Xcode 9?
I am tired of selecting options from the dropdown list.
A: Standard hot key for jump to definition is ctrl+cmd+j. Set cursor to the class/method you are interested in and press this buttons to switch to declaration. Also you can try to press ctrl+opt+cmd+j. In this case definition will be opened in assistant editor
A: If dont like to use mouse click(I certainly don't like) you could use
Command + Ctrl + J
A: In Xcode 9 both of these work:
⌘ + Right Click
OR
⌘⌃ + Click
A: When I ⌘-click on a symbol in Xcode 9 I see
That means you have to ⌃⌘-click on the symbol to skip the popup.
Nevertheless there is even a keyboard shortcut:
A: I don't know how Cmd + Option + Left Click worked for you guys, but the shortcut (at least for me) was Cmd + Ctrl + Left Click.
I've tried on both Apple keyboard and MacBook keyboard and this is the one that did it.
A: Solution 1:
*
*Go to Xcode menu
*Click on Preferences
*Select Navigation Tab from Top
*Select Command-click on Code
*Change to "Jumps to Definition"
Solution 2:
Use
Ctrl + ⌘ + Left click
A: There is short cut displayed on drop down menu, just use-
1. Control, Command and left mouse button
OR
2. Command plus Right Mouse Click
instead of command left mouse button.
A: In Xcode 9 Beta, you can go definition by Cmd + Right Click
A: In Xcode 9 Beta, it has been changed to Cmd + Ctrl + Left Click.
A: Solution to your question: Ctrl + ⌘ + Left click
Xcode >> Preference >> Key Bindings >> Here is list of all short cuts
of Xcode.
A: Ashish and Ghulam's answers were great but it still kinda bugged me that things had changed and I couldn't jump to definition as before. Then I found this...
Xcode9Beta2-Preferences->Navigation->Command-click on Code:->Jump To Definition:
A: Deleting everything under the Derived data and re-opening Xcode fixed everything for me.
|
stackoverflow
|
{
"language": "en",
"length": 393,
"provenance": "stackexchange_0000F.jsonl.gz:850432",
"question_score": "69",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44497436"
}
|
53e453d931a16110cb3720476cfe2584b58c9c37
|
Stackoverflow Stackexchange
Q: How to add floating action button on the right-bottom side of the screen using material ui in react I am trying to add FloatingActionButton on right bottom side of the screen. I am using this library http://www.material-ui.com/#/components/floating-action-button
import React, { Component } from "react";
import logo from "./logo.svg";
import "./App.css";
import AppBar from "material-ui/AppBar";
import FloatingActionButton from "material-ui/FloatingActionButton";
import MuiThemeProvider from "material-ui/styles/MuiThemeProvider";
import * as strings from "./Strings";
import styles from "./Styles";
import ContentAdd from "material-ui/svg-icons/content/add";
const AppBarTest = () =>
<AppBar
title={strings.app_name}
iconClassNameRight="muidocs-icon-navigation-expand-more"
/>;
class App extends Component {
render() {
return (
<MuiThemeProvider>
<div>
<AppBarTest />
<FloatingActionButton style={styles.fab}>
<ContentAdd />
</FloatingActionButton>
</div>
</MuiThemeProvider>
);
}
}
export default App;
Styles.js
const style = {
fab: {
backgroundColor: '#000000'
},
};
export default style;
Question 1
It is showing the FloatingActionButton on the top-left side; I want it on the right-bottom side. What is the way to do this?
Question 2
Why is the style not being applied to the FloatingActionButton?
A: Try this style:
const fabStyle = {
right: 20,
position: 'fixed'
};
and later you can use margin, top, etc., but don't use auto with position: fixed
|
Q: How to add floating action button on the right-bottom side of the screen using material ui in react I am trying to add FloatingActionButton on right bottom side of the screen. I am using this library http://www.material-ui.com/#/components/floating-action-button
import React, { Component } from "react";
import logo from "./logo.svg";
import "./App.css";
import AppBar from "material-ui/AppBar";
import FloatingActionButton from "material-ui/FloatingActionButton";
import MuiThemeProvider from "material-ui/styles/MuiThemeProvider";
import * as strings from "./Strings";
import styles from "./Styles";
import ContentAdd from "material-ui/svg-icons/content/add";
const AppBarTest = () =>
<AppBar
title={strings.app_name}
iconClassNameRight="muidocs-icon-navigation-expand-more"
/>;
class App extends Component {
render() {
return (
<MuiThemeProvider>
<div>
<AppBarTest />
<FloatingActionButton style={styles.fab}>
<ContentAdd />
</FloatingActionButton>
</div>
</MuiThemeProvider>
);
}
}
export default App;
Styles.js
const style = {
fab: {
backgroundColor: '#000000'
},
};
export default style;
Question 1
It is showing the FloatingActionButton on the top-left side; I want it on the right-bottom side. What is the way to do this?
Question 2
Why is the style not being applied to the FloatingActionButton?
A: Try this style:
const fabStyle = {
right: 20,
position: 'fixed'
};
and later you can use margin, top, etc., but don't use auto with position: fixed
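For reference, a minimal sketch of a Styles.js that pins the button to the bottom-right corner (the 20px offsets are arbitrary):
const style = {
    fab: {
        position: 'fixed',
        bottom: 20,
        right: 20,
        zIndex: 100
    },
};

export default style;
The usage stays <FloatingActionButton style={styles.fab}>. As for question 2, the v0.x FloatingActionButton also exposes a backgroundColor prop, which is usually a more reliable way to override the theme colour than style.backgroundColor.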
|
stackoverflow
|
{
"language": "en",
"length": 187,
"provenance": "stackexchange_0000F.jsonl.gz:850494",
"question_score": "7",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44497631"
}
|
b82b9a477c77635067d7bdce90024a84290f9950
|
Stackoverflow Stackexchange
Q: Windows restore points - The meaning of RestorePointType The API function CreateRestorePoint creates a restore point. The method takes one of the following values as the RestorePointType:
*
*APPLICATION_INSTALL
*APPLICATION_UNINSTALL
*DEVICE_DRIVER_INSTALL
*MODIFY_SETTINGS
What is the difference and does it affect the list of files that are saved for the checkpoint?
I noticed when I manually created it using Checkpoint-Computer that PowerShell function uses APPLICATION_INSTALL by default; it didn't save all the files on Windows 10 Pro: some ~\Documents weren't reverted later when I restored the checkpoint.
*
*Checkpoint-Computer
*RestorePointType
A: As BenH mentioned in the comments, these are only informational, for description.
When an application calls the SRSetRestorePoint function, it can provide any text as a description for the restore point; however, the following table shows the recommended description text.
Installers, such as Windows Installer and InstallShield, use these conventions for the description text:
*
*The product name follows the verb; for example, Installed AppName.
*The product name can be used alone (AppName) or the product name and
*the company name may both be used (MyCompany AppName).
Source
|
Q: Windows restore points - The meaning of RestorePointType The API function CreateRestorePoint creates a restore point. The method takes one of the following values as the RestorePointType:
*
*APPLICATION_INSTALL
*APPLICATION_UNINSTALL
*DEVICE_DRIVER_INSTALL
*MODIFY_SETTINGS
What is the difference and does it affect the list of files that are saved for the checkpoint?
I noticed when I manually created it using Checkpoint-Computer that PowerShell function uses APPLICATION_INSTALL by default; it didn't save all the files on Windows 10 Pro: some ~\Documents weren't reverted later when I restored the checkpoint.
*
*Checkpoint-Computer
*RestorePointType
A: As BenH mentioned in the comments, these are only informational, for description.
When an application calls the SRSetRestorePoint function, it can provide any text as a description for the restore point; however, the following table shows the recommended description text.
Installers, such as Windows Installer and InstallShield, use these conventions for the description text:
*
*The product name follows the verb; for example, Installed AppName.
*The product name can be used alone (AppName) or the product name and
*the company name may both be used (MyCompany AppName).
Source
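For example, an installer-style checkpoint created from PowerShell would pair the recommended description text with the matching (purely informational) type:
Checkpoint-Computer -Description "Installed MyCompany AppName" -RestorePointType "APPLICATION_INSTALL"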
|
stackoverflow
|
{
"language": "en",
"length": 179,
"provenance": "stackexchange_0000F.jsonl.gz:850593",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44497921"
}
|
c0ea0f5ac5af433dd81fb2e93ee30553141c539b
|
Stackoverflow Stackexchange
Q: Not able to navigate from one Particular screen issue is only in iOS 11 The app navigates from VC1 to VC2. In VC2 there are "Back", "Menu" & "Submit" buttons; on click of "Submit" an alert with a message and an "Ok" button is displayed, and on click of "Ok" I try to pop back to VC1. The code executes but the navigation does not happen.
The same happens for the "Back" & "Menu" buttons: the code executes but does not navigate to any other page. Using Xcode 9 beta 6.
The below piece of code I'm using in my project
NSArray *controllersArray = [[self navigationController] viewControllers];
for(UIViewController *controller in controllersArray)
{
if ([controller isKindOfClass:[ViewController1 class]])
{
ViewController1 *accDetailVC = (ViewController1*)controller;
[[self navigationController] popToViewController:accDetailVC animated:YES];
break;
}
}
It works in other iOS versions but not in the iOS 11 beta. Please help me with this if anyone is facing the same issue.
A: The issue got fixed. When pushing from VC1 to VC2 I was previously using [[self navigationController] pushViewController:ctrl animated:YES]; I changed it to [[self navigationController] pushViewController:ctrl animated:NO].
|
Q: Not able to navigate from one Particular screen issue is only in iOS 11 The app navigates from VC1 to VC2. In VC2 there are "Back", "Menu" & "Submit" buttons; on click of "Submit" an alert with a message and an "Ok" button is displayed, and on click of "Ok" I try to pop back to VC1. The code executes but the navigation does not happen.
The same happens for the "Back" & "Menu" buttons: the code executes but does not navigate to any other page. Using Xcode 9 beta 6.
The below piece of code I'm using in my project
NSArray *controllersArray = [[self navigationController] viewControllers];
for(UIViewController *controller in controllersArray)
{
if ([controller isKindOfClass:[ViewController1 class]])
{
ViewController1 *accDetailVC = (ViewController1*)controller;
[[self navigationController] popToViewController:accDetailVC animated:YES];
break;
}
}
It works in other iOS versions but not in the iOS 11 beta. Please help me with this if anyone is facing the same issue.
A: The issue got fixed. When pushing from VC1 to VC2 I was previously using [[self navigationController] pushViewController:ctrl animated:YES]; I changed it to [[self navigationController] pushViewController:ctrl animated:NO].
|
stackoverflow
|
{
"language": "en",
"length": 171,
"provenance": "stackexchange_0000F.jsonl.gz:850633",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44498071"
}
|
4efead308be0dd577c04841ba2a4ad758ef09d72
|
Stackoverflow Stackexchange
Q: How to use Token number in all controller and view pages in laravel 5? Actually, I am trying to set a global variable. My scenario is that when I log in with the Laravel 5 framework, I don't want to use the auth email in any controller or view pages. I want to use the token number which is created by the registration page instead of the auth email. So please give some tips or ideas for this scenario.
A: Providing you store the generated token on the user after generating it something like the below in your AppServiceProvider boot method should work:
<?php
// app/Providers/AppServiceProvider.php
//...
class AppServiceProvider extends ServiceProvider
{
//...
public function boot()
{
view()->composer('*', function($view) {
if(auth()->user()) {
$user = auth()->user();
$token = $user->token;
view()->share("user_token", $token);
$this->app->singleton('user_token', function () {
return $token;
});
}
});
}
}
To retrieve the data in a view:
{{ $user_token }}
in a controller:
app('user_token');
|
Q: How to use Token number in all controller and view pages in laravel 5? Actually, I am trying to set a global variable. My scenario is that when I log in with the Laravel 5 framework, I don't want to use the auth email in any controller or view pages. I want to use the token number which is created by the registration page instead of the auth email. So please give some tips or ideas for this scenario.
A: Providing you store the generated token on the user after generating it something like the below in your AppServiceProvider boot method should work:
<?php
// app/Providers/AppServiceProvider.php
//...
class AppServiceProvider extends ServiceProvider
{
//...
public function boot()
{
view()->composer('*', function($view) {
if(auth()->user()) {
$user = auth()->user();
$token = $user->token;
view()->share("user_token", $token);
$this->app->singleton('user_token', function () use ($token) {
return $token;
});
}
});
}
}
To retrieve the data in a view:
{{ $user_token }}
in a controller:
app('user_token');
|
stackoverflow
|
{
"language": "en",
"length": 147,
"provenance": "stackexchange_0000F.jsonl.gz:850690",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44498262"
}
|
67058125ddb85c5cd5a9917ae6c6dd0c3bd08388
|
Stackoverflow Stackexchange
Q: Electron with Pouchdb - unable to import leveldown A dynamic link library (DLL) initialization routine failed.
unable to import leveldown
Error: A dynamic link library (DLL) initialization routine failed.
\\?\(Project Folder)\node_modules\leveldown\build\Release\leveldown.node: unable to import leveldown
at requireLeveldown ((Project Folder)\node_modules\pouchdb\lib\index.js:6206:12)
at PouchDB$5.LevelDownPouch ((Project Folder)\node_modules\pouchdb\lib\index.js:6406:17)
at new PouchDB$5 ((Project Folder)\node_modules\pouchdb\lib\index.js:2732:36)
at database_init ((Project Folder)\index.js:102:12)
at App.app.on ((Project Folder)\index.js:58:2)
at emitTwo (events.js:111:20)
at App.emit (events.js:191:7)
|
Q: Electron with Pouchdb - unable to import leveldown A dynamic link library (DLL) initialization routine failed.
unable to import leveldown
Error: A dynamic link library (DLL) initialization routine failed.
\\?\(Project Folder)\node_modules\leveldown\build\Release\leveldown.node: unable to import leveldown
at requireLeveldown ((Project Folder)\node_modules\pouchdb\lib\index.js:6206:12)
at PouchDB$5.LevelDownPouch ((Project Folder)\node_modules\pouchdb\lib\index.js:6406:17)
at new PouchDB$5 ((Project Folder)\node_modules\pouchdb\lib\index.js:2732:36)
at database_init ((Project Folder)\index.js:102:12)
at App.app.on ((Project Folder)\index.js:58:2)
at emitTwo (events.js:111:20)
at App.emit (events.js:191:7)
|
stackoverflow
|
{
"language": "en",
"length": 63,
"provenance": "stackexchange_0000F.jsonl.gz:850700",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44498298"
}
|
4989c5cd07584fcf65bb90be129cf62a2ff55e5f
|
Stackoverflow Stackexchange
Q: How to understand Bazel's output time? Every time a build is done, I see something like:
Elapsed time: 1034.748s, Critical Path: 257.54s
I'm wondering: what's the difference between Elapsed Time and Critical Path? What could be causing the time difference?
Forwarded from: https://github.com/bazelbuild/bazel/issues/3164
A: "Elapsed time" shows the wall time of the build, since Bazel started running the first build action until the last action finished.
"Critical path" shows the wall time spent building the longest chain of actions, where each subsequent action depends on the output(s) of the previous one, so they must be run sequentially. The critical path is a lower limit on the clean build time of this build; even if the CPU had more cores than the number of actions Bazel ever runs in parallel, the build could still not complete any faster.
The time difference is caused by Bazel executing other actions too. There were presumably more actions to run than just those on the critical path.
|
Q: How to understand Bazel's output time? Every time a build is done, I see something like:
Elapsed time: 1034.748s, Critical Path: 257.54s
I'm wondering: what's the difference between Elapsed Time and Critical Path? What could be causing the time difference?
Forwarded from: https://github.com/bazelbuild/bazel/issues/3164
A: "Elapsed time" shows the wall time of the build, since Bazel started running the first build action until the last action finished.
"Critical path" shows the wall time spent building the longest chain of actions, where each subsequent action depends on the output(s) of the previous one, so they must be run sequentially. The critical path is a lower limit on the clean build time of this build; even if the CPU had more cores than the number of actions Bazel ever runs in parallel, the build could still not complete any faster.
The time difference is caused by Bazel executing other actions too. There were presumably more actions to run than just those on the critical path.
|
stackoverflow
|
{
"language": "en",
"length": 162,
"provenance": "stackexchange_0000F.jsonl.gz:850708",
"question_score": "12",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44498316"
}
|
3aa6a3a0c1609198fa6aa45a8275918a4a442f61
|
Stackoverflow Stackexchange
Q: Convert Clojure #inst instant in time to Joda time with clj-time What is the right way to parse Clojure time instants like #inst "2017-01-01T12:00:00" to Joda time using the clj-time library?
A: If you have a java.util.Date object:
(type #inst "2017-01-01T12:00:00+02:00")
;;=> java.util.Date
clj-time has a clj-time.coerce namespace with a from-date function that takes java.util.Date objects as input. Example:
(require '[clj-time.coerce :as c])
((juxt type str) (c/from-date #inst "2017-01-01T12:00:00"))
;;=> [org.joda.time.DateTime "2017-01-01T12:00:00.000Z"]
|
Q: Convert Clojure #inst instant in time to Joda time with clj-time What is the right way to parse Clojure time instants like #inst "2017-01-01T12:00:00" to Joda time using the clj-time library?
A: If you have a java.util.Date object:
(type #inst "2017-01-01T12:00:00+02:00")
;;=> java.util.Date
clj-time has a clj-time.coerce namespace with a from-date function that takes java.util.Date objects as input. Example:
(require '[clj-time.coerce :as c])
((juxt type str) (c/from-date #inst "2017-01-01T12:00:00"))
;;=> [org.joda.time.DateTime "2017-01-01T12:00:00.000Z"]
A: Also clj-time now provides a data-reader to automatically coerce dates in EDN to/from joda datetimes: clj-time EDN support
(clojure.edn/read-string {:readers clj-time.coerce/data-readers}
"#clj-time/date-time \"2019-07-10T06:00:00.000Z\"")
==>
#clj-time/date-time "2019-07-10T06:00:00.000Z"
|
stackoverflow
|
{
"language": "en",
"length": 100,
"provenance": "stackexchange_0000F.jsonl.gz:850735",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44498410"
}
|
8dd08ee817562a8717348de7fe19bf6742cae118
|
Stackoverflow Stackexchange
Q: Dropping / recreating type in SQL Server IF OBJECT_ID('[dbo].[Order]') IS NOT NULL
DROP TYPE [dbo].[Order]
GO
CREATE TYPE [dbo].[Order] AS TABLE
(
[Id] INT NULL,
[Order] INT NULL
);
This code doesn't drop the type. I get this error:
The type 'dbo.Order' already exists, or you do not have permission to create it.
How can I drop/recreate a type in SQL Server?
A: One of your stored procedures may be using this table type. That's why SQL Server doesn't allow you to drop it. You may be getting the following error.
Msg 3732, Level 16, State 1, Line 4
Cannot drop type 'tabletype' because it is being referenced by object 'storedprocedurename'. There may be other objects that reference this type.
Msg 219, Level 16, State 1, Line 3
The type 'tabletype' already exists, or you do not have permission to create it.
Check that stored procedure or function name, and comment out the usage for a while. Once the reference is removed, you can drop and recreate the table type. After that, uncomment the reference in the stored procedure / function.
This will surely work!
|
Q: Dropping / recreating type in SQL Server IF OBJECT_ID('[dbo].[Order]') IS NOT NULL
DROP TYPE [dbo].[Order]
GO
CREATE TYPE [dbo].[Order] AS TABLE
(
[Id] INT NULL,
[Order] INT NULL
);
This code doesn't drop the type. I get this error:
The type 'dbo.Order' already exists, or you do not have permission to create it.
How can I drop/recreate a type in SQL Server?
A: One of your stored procedures may be using this table type. That's why SQL Server doesn't allow you to drop it. You may be getting the following error.
Msg 3732, Level 16, State 1, Line 4
Cannot drop type 'tabletype' because it is being referenced by object 'storedprocedurename'. There may be other objects that reference this type.
Msg 219, Level 16, State 1, Line 3
The type 'tabletype' already exists, or you do not have permission to create it.
Check that stored procedure or function name, and comment out the usage for a while. Once the reference is removed, you can drop and recreate the table type. After that, uncomment the reference in the stored procedure / function.
This will surely work!
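To locate the referencing procedures or functions before commenting them out, a query along these lines can help (a sketch: it lists objects that take the 'Order' table type as a parameter):
SELECT OBJECT_NAME(p.object_id) AS referencing_object
FROM sys.parameters p
JOIN sys.table_types t ON p.user_type_id = t.user_type_id
WHERE t.name = 'Order';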
A: Try to replace the check on Object_ID by a SELECT to INFORMATION_SCHEMA.DOMAINS (for SQL Server 2012):
IF EXISTS( SELECT * FROM INFORMATION_SCHEMA.DOMAINS WHERE Domain_Name = 'Order' )
DROP TYPE [dbo].[Order]
CREATE TYPE [dbo].[Order] AS TABLE
(
[Id] INT NULL,
[Order] INT NULL
);
A:
SQL Server 2016 and above
DROP TYPE IF EXISTS [Tablename];
GO
CREATE TYPE [DBO].[Tablename] AS TABLE
(Test VARCHAR(50))
GO
Lower versions of SQL Server
IF EXISTS(SELECT * FROM sys.types WHERE is_table_type = 1 AND NAME ='Tablename')
BEGIN
DROP TYPE [dbo].[Tablename]
END
GO
CREATE TYPE [DBO].[Tablename] AS TABLE
(Test VARCHAR(50))
GO
A: IF EXISTS(
Select 1 from sys.table_types where user_type_id = TYPE_ID(N'dbo.Order')
)
Begin
DROP TYPE [dbo].[Order]
CREATE TYPE [dbo].[Order] AS TABLE
(
[Id] INT NULL,
[Order] INT NULL
);
END
A: Commenting out every place where the type is used in stored procedures can be a daunting task, and uncommenting them again can be error prone. One trick that I use is to right-click on the type that I want to change and choose "Script Create UDT type to new query window", then just change the name (for example by adding 2 after the type name). Then I go to the stored procedure(s) using this type, switch them to the newly created type and alter them. Then I can freely change the original type. After I do that, I switch the stored procedures back to the original type and drop type 2. This way is much faster and more reliable.
|
stackoverflow
|
{
"language": "en",
"length": 417,
"provenance": "stackexchange_0000F.jsonl.gz:850827",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44498704"
}
|
598f1191ae139163b1b584821f92d945b9e4daab
|
Stackoverflow Stackexchange
Q: Handsontable: how to get instance by container? I need to create a function that performs some data updates and re-rendering on a rendered instance of handsontable (var ht = new Handsontable(el,options)). What I can easily get in my case is el (the element which is used as the container of the instance). Is it possible to get ht by only knowing el? Or do I have to "remember" ht somewhere to access it later?
(I've tried Handsontable(el) and it creates a new table, it doesn't return an already created instance)
A: If you use jQuery, you should be able to do so:
// Instead of creating a new Handsontable instance
// with the container element passed as an argument,
// you can simply call .handsontable method on a jQuery DOM object.
var $container = $("#example1");
$container.handsontable({
data: getData(),
rowHeaders: true,
colHeaders: true,
contextMenu: true
});
// This way, you can access Handsontable api methods by passing their names as an argument, e.g.:
var hotInstance = $("#example1").handsontable('getInstance');
Here is the documentation link: https://docs.handsontable.com/pro/1.13.0/demo-jquery.html
|
Q: Handsontable: how to get instance by container? I need to create a function that performs some data updates and re-rendering on a rendered instance of handsontable (var ht = new Handsontable(el,options)). What I can easily get in my case is el (the element which is used as the container of the instance). Is it possible to get ht by only knowing el? Or do I have to "remember" ht somewhere to access it later?
(I've tried Handsontable(el) and it creates a new table, it doesn't return an already created instance)
A: If you use jQuery, you should be able to do so:
// Instead of creating a new Handsontable instance
// with the container element passed as an argument,
// you can simply call .handsontable method on a jQuery DOM object.
var $container = $("#example1");
$container.handsontable({
data: getData(),
rowHeaders: true,
colHeaders: true,
contextMenu: true
});
// This way, you can access Handsontable api methods by passing their names as an argument, e.g.:
var hotInstance = $("#example1").handsontable('getInstance');
Here is the documentation link: https://docs.handsontable.com/pro/1.13.0/demo-jquery.html
A: You can't get the Handsontable object from the DOM element from which it has been constructed. The Handsontable instance is just a wrapper which controls the DOM element for viewing, and that element does not hold a reference to its wrapper.
That means you indeed need to store your reference to ht somewhere, just like you would another variable.
If your problem is the scope, make the table a property of the window object, and it will be accessible from everywhere in your page. This can be done simply using:
window.ht = new Handsontable(el,options)
However, if possible avoid making such global variables and keep it in the proper scope.
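If you need the lookup-by-container behaviour from the question, a small sketch of keeping your own registry (plain JavaScript bookkeeping, not part of the Handsontable API):
// remember each instance against its container element when you construct it
var tables = new Map();

function createTable(el, options) {
  var ht = new Handsontable(el, options);
  tables.set(el, ht);
  return ht;
}

function getTableByContainer(el) {
  return tables.get(el); // undefined if no table was created for this element
}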
|
stackoverflow
|
{
"language": "en",
"length": 279,
"provenance": "stackexchange_0000F.jsonl.gz:850834",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44498726"
}
|
b4e34be713c84735ad7f40224b14576a4d55eb9f
|
Stackoverflow Stackexchange
Q: retrieving datetime from mysql in time.Time in golang I have stored this datetime field in my database (MySQL): last_activity: 2017-06-12 11:07:09
I'm using the param parseTime=True in my OpenDB.
The problem is that the output is: last activity: {63632862429 0 <nil>}
instead of 2017-06-12 11:07:09.
What am I doing wrong?
Thanks
type DateType time.Time
type User struct {
LastActivity DateType
}
func (stUser *User) GetUserDataByLogin(login string) {
db := OpenDB()
defer db.Close()
// Test the connection to the database
err := db.Ping()
checkErr(err)
err = db.QueryRow("SELECT last_activity FROM users WHERE login = ?", login).Scan(&stUser.LastActivity)
if err != nil {
if err == sql.ErrNoRows {
// there were no rows, but otherwise no error occurred
} else {
log.Fatal(err)
}
}
fmt.Println("last activity:", stUser.LastActivity)
}
A: You must declare a DateType.String() method like this:
func (t DateType) String() string {
return time.Time(t).String()
}
From Language Specification:
The declared type does not inherit any methods bound to the existing type
|
Q: retrieving datetime from mysql in time.Time in golang I have stored this datetime field in my database (MySQL): last_activity: 2017-06-12 11:07:09
I'm using the param parseTime=True in my OpenDB.
The problem is that the output is: last activity: {63632862429 0 <nil>}
instead of 2017-06-12 11:07:09.
What am I doing wrong?
Thanks
type DateType time.Time
type User struct {
LastActivity DateType
}
func (stUser *User) GetUserDataByLogin(login string) {
db := OpenDB()
defer db.Close()
// Test the connection to the database
err := db.Ping()
checkErr(err)
err = db.QueryRow("SELECT last_activity FROM users WHERE login = ?", login).Scan(&stUser.LastActivity)
if err != nil {
if err == sql.ErrNoRows {
// there were no rows, but otherwise no error occurred
} else {
log.Fatal(err)
}
}
fmt.Println("last activity:", stUser.LastActivity)
}
A: You must declare a DateType.String() method like this:
func (t DateType) String() string {
return time.Time(t).String()
}
From Language Specification:
The declared type does not inherit any methods bound to the existing type
A: The datetime column can be NULL, and a plain time.Time cannot represent that. You have to anticipate this and use something like the pq package's NullTime type instead of time.Time.
cf : https://godoc.org/github.com/lib/pq#NullTime
pq.NullTime is a structure :
type NullTime struct {
Time time.Time
Valid bool // Valid is true if Time is not NULL
}
If Valid is true then you'll get the result in Time.
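A minimal sketch of scanning a possibly NULL last_activity with that NullTime pattern, reusing the query from the question (the struct shape is the same for other NullTime implementations):
// inside GetUserDataByLogin, after OpenDB()
var lastActivity pq.NullTime
err := db.QueryRow("SELECT last_activity FROM users WHERE login = ?", login).Scan(&lastActivity)
if err == nil && lastActivity.Valid {
    fmt.Println("last activity:", lastActivity.Time)
} else {
    fmt.Println("last activity is NULL or the query failed")
}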
|
stackoverflow
|
{
"language": "en",
"length": 220,
"provenance": "stackexchange_0000F.jsonl.gz:850839",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44498738"
}
|
ddffebee6a825eedeff04e254a432a1ce846909f
|
Stackoverflow Stackexchange
Q: How to put a link to another vignette in the same R package in a vignette I have a package on Bioconductor and I'm in the process of adding a second vignette to it.
I want to link the second vignette to the first vignette, as one vignette is on the general workflow of the package and the second is on fine parameter tuning, for more advanced users.
Is there a clean way to do it ?
The only related topic that I found is this one :
best way to link to a vignette from manual in an R package
But it did not really help me.
Thanks for your help,
Alexis
A: If you're writing the vignettes in markdown, just link to the second vignette as you would link to any HTML file. Since all the vignettes are in the same directory, you don't need any file path info, just a relative link to the other vignette's HTML file, for example with link text like "see here for more info".
|
Q: How to put a link to another vignette in the same R package in a vignette I have a package on Bioconductor and I'm in the process of adding a second vignette to it.
I want to link the second vignette to the first vignette, as one vignette is on the general workflow of the package and the second is on fine parameter tuning, for more advanced users.
Is there a clean way to do it ?
The only related topic that I found is this one :
best way to link to a vignette from manual in an R package
But it did not really help me.
Thanks for your help,
Alexis
A: If you're writing the vignettes in markdown, just link to the second vignette as you would link to any HTML file. Since all the vignettes are in the same directory, you don't need any file path info, just a relative link to the other vignette's HTML file, for example with link text like "see here for more info".
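For example, if the second vignette's source file were named parameter-tuning.Rmd (a hypothetical name) and rendered to HTML, the link in the first vignette's R Markdown could simply be:
For advanced options, [see the parameter tuning vignette](parameter-tuning.html).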
|
stackoverflow
|
{
"language": "en",
"length": 158,
"provenance": "stackexchange_0000F.jsonl.gz:850846",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44498760"
}
|
d13a78d5a35fc52f0f2cb3a445e55933b4f4d0ad
|
Stackoverflow Stackexchange
Q: ServiceNow renaming attachment not getting SysId In ServiceNow, I have written a script in business logic - script actions.
While adding and deleting attachments I am getting the sysId, but when renaming an attachment I am not able to get the sys_id.
sendnotification();
function sendnotification()
{
try
{
var r = new sn_ws.RESTMessageV2('IqtrackTest', 'AttachmentPost');
r.setStringParameterNoEscape('sys_id',current.sys_id);
r.setStringParameterNoEscape('sysparm_TableName',current.getTableName());
r.setStringParameterNoEscape('Action',"Attachment_Renamed");
var response = r.execute();
var responseBody = response.getBody();
var httpStatus = response.getStatusCode();
}
catch(ex)
{
var message = ex.getMessage();
}
}
A: try this
var record = new GlideRecord('sys_attachment');
record.addQuery('user_name',gs.getUserName());
record.orderByDesc('sys_updated_on');
record.setLimit(1);
record.query();
if (record.next())
{
gs.print(record.getValue("sys_id"));
gs.print(record.getDisplayValue("file_name"));
gs.error("file name"+record.getDisplayValue("file_name"));
}
|
Q: ServiceNow renaming attachment not getting SysId In ServiceNow, I have written a script in business logic - script actions.
While adding and deleting attachments I am getting the sysId, but when renaming an attachment I am not able to get the sys_id.
sendnotification();
function sendnotification()
{
try
{
var r = new sn_ws.RESTMessageV2('IqtrackTest', 'AttachmentPost');
r.setStringParameterNoEscape('sys_id',current.sys_id);
r.setStringParameterNoEscape('sysparm_TableName',current.getTableName());
r.setStringParameterNoEscape('Action',"Attachment_Renamed");
var response = r.execute();
var responseBody = response.getBody();
var httpStatus = response.getStatusCode();
}
catch(ex)
{
var message = ex.getMessage();
}
}
A: try this
var record = new GlideRecord('sys_attachment');
record.addQuery('user_name',gs.getUserName());
record.orderByDesc('sys_updated_on');
record.setLimit(1);
record.query();
if (record.next())
{
gs.print(record.getValue("sys_id"));
gs.print(record.getDisplayValue("file_name"));
gs.error("file name"+record.getDisplayValue("file_name"));
}
A: Just run this code in your Background Script
var temp= new GlideRecord('sys_attachment');
temp.addQuery('user_name',gs.getUserName());
temp.orderByDesc('sys_updated_on');
temp.setLimit(1);
temp.query();
if (temp.next()){
gs.print(temp.getValue("sys_id"));
}
|
stackoverflow
|
{
"language": "en",
"length": 116,
"provenance": "stackexchange_0000F.jsonl.gz:850876",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44498845"
}
|
5421d7d54ec4d634b0613f20d035d6e04544d01e
|
Stackoverflow Stackexchange
Q: Angular 2 EventEmitter calls for all not nested components I have 2 components, each with the same nested component inside.
The nested component has an EventEmitter which triggers a method of the parent component:
<parent1>
<nested (changed)="onChanged1($event)"></nested>
</parent1>
<parent2>
<nested (changed)="onChanged2($event)"></nested>
</parent2>
Both onChanged1 and onChanged2 are called after every 'changed' emit.
In onChanged2 I get the event data from the 'changed' EventEmitter that belongs to parent1!
Please give advice on how to avoid this problem.
|
Q: Angular 2 EventEmitter calls for all not nested components I have 2 components, each with the same nested component inside.
The nested component has an EventEmitter which triggers a method of the parent component:
<parent1>
<nested (changed)="onChanged1($event)"></nested>
</parent1>
<parent2>
<nested (changed)="onChanged2($event)"></nested>
</parent2>
Both onChanged1 and onChanged2 are called after every 'changed' emit.
In onChanged2 I get the event data from the 'changed' EventEmitter that belongs to parent1!
Please give advice on how to avoid this problem.
|
stackoverflow
|
{
"language": "en",
"length": 67,
"provenance": "stackexchange_0000F.jsonl.gz:850905",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44498935"
}
|
c6bfc124dae84a710ec17a6d806990d0e05519c8
|
Stackoverflow Stackexchange
Q: Prevent script from closing by sys.exit() I need to call one script (test.py) every 5 minutes, so I have used another script, timer.py, with the following code:
import time
while(1==1):
execfile("test.py")
time.sleep(300)
This works correctly.
But it stopped working after a few iterations. After debugging I found that there is a flow in test.py which uses the following code:
sys.exit()
So, this is causing both test.py and timer.py to stop.
What changes should be made so that timer.py keeps running, since I want sys.exit() to exit only test.py and not timer.py?
A: sys.exit() doesn't do more than raise SystemExit (a BaseException subclass), which can be caught like any exception, e.g.:
import time
while True:
try:
execfile("test.py")
except SystemExit:
print("ignoring SystemExit")
finally:
time.sleep(300)
|
Q: Prevent script from closing by sys.exit() I need to call one script (test.py) every 5 minutes, so I have used another script, timer.py, with the following code:
import time
while(1==1):
execfile("test.py")
time.sleep(300)
This works correctly.
But it stopped working after a few iterations. After debugging I found that there is a flow in test.py which uses the following code:
sys.exit()
So, this is causing both test.py and timer.py to stop.
What changes should be made so that timer.py keeps running, since I want sys.exit() to exit only test.py and not timer.py?
A: sys.exit() doesn't do more than raise SystemExit (a BaseException subclass), which can be caught like any exception, e.g.:
import time
while True:
try:
execfile("test.py")
except SystemExit:
print("ignoring SystemExit")
finally:
time.sleep(300)
A: Use subprocess
import subprocess
import time
while(1==1):
subprocess.call(['python', './test.py'])
time.sleep(300)
You could even remove the python word if the test.py file has a shebang comment on the first line:
#!/usr/bin/env python
This is not exactly the same, as it will start a new interpreter, but the results will be similar.
A: Try this:
import time
import os
while True:
os.system("python test.py") # if you are not running script from same directory then mention complete path to the file
time.sleep(300)
A: You should be able to use:
try:
# Your call here
except BaseException as ex:
print("This should be a possible sys.exit()")
Check out the documentation for more information.
|
stackoverflow
|
{
"language": "en",
"length": 228,
"provenance": "stackexchange_0000F.jsonl.gz:850918",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44498977"
}
|
8229ca0157538498e4fbfa8cd4ab336e4ee59c1f
|
Stackoverflow Stackexchange
Q: How to read tensorflow checkpoints of old version? I used tensorflow==0.11.0 to save my model. This uses the old format. My files are:
checkpoint_old/
checkpoint
DCGAN.model-9002
DCGAN.model-9002.meta
How do I read the model using the tensorflow==1.0.0? The format for checkpoints has changed. This doesn't seem to work:
from tensorflow.core.protobuf import saver_pb2
saver = tf.train.Saver(write_version = saver_pb2.SaverDef.V1)
ckpt = tf.train.get_checkpoint_state('checkpoint_old')
if ckpt and ckpt.model_checkpoint_path:
print ckpt.model_checkpoint_path #checkpoint_old/DCGAN.model-9002
saver.restore(sess, ckpt.model_checkpoint_path)
|
Q: How to read tensorflow checkpoints of old version? I used tensorflow==0.11.0 to save my model. This uses the old format. My files are:
checkpoint_old/
checkpoint
DCGAN.model-9002
DCGAN.model-9002.meta
How do I read the model using the tensorflow==1.0.0? The format for checkpoints has changed. This doesn't seem to work:
from tensorflow.core.protobuf import saver_pb2
saver = tf.train.Saver(write_version = saver_pb2.SaverDef.V1)
ckpt = tf.train.get_checkpoint_state('checkpoint_old')
if ckpt and ckpt.model_checkpoint_path:
print ckpt.model_checkpoint_path #checkpoint_old/DCGAN.model-9002
saver.restore(sess, ckpt.model_checkpoint_path)
|
stackoverflow
|
{
"language": "en",
"length": 69,
"provenance": "stackexchange_0000F.jsonl.gz:850926",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44498996"
}
|
7049c6cd33e9f8cfb99204b932f5baf31d5f5560
|
Stackoverflow Stackexchange
Q: Web API: Configure JSON serializer settings on action or controller level Overriding the default JSON serializer settings for web API on application level has been covered in a lot of SO threads. But how can I configure its settings on action level? For example, I might want to serialize using camelcase properties in one of my actions, but not in the others.
A: Option 1 (quickest)
At action level you may always use a custom JsonSerializerSettings instance while using Json method:
public class MyController : ApiController
{
public IHttpActionResult Get()
{
var settings = new JsonSerializerSettings
{
ContractResolver = new CamelCasePropertyNamesContractResolver()
};
var model = new MyModel();
return Json(model, settings);
}
}
Option 2 (controller level)
You may create a new IControllerConfiguration attribute which customizes the JsonFormatter:
public class CustomJsonAttribute : Attribute, IControllerConfiguration
{
public void Initialize(HttpControllerSettings controllerSettings, HttpControllerDescriptor controllerDescriptor)
{
var formatter = controllerSettings.Formatters.JsonFormatter;
controllerSettings.Formatters.Remove(formatter);
formatter = new JsonMediaTypeFormatter
{
SerializerSettings =
{
ContractResolver = new CamelCasePropertyNamesContractResolver()
}
};
controllerSettings.Formatters.Insert(0, formatter);
}
}
[CustomJson]
public class MyController : ApiController
{
public IHttpActionResult Get()
{
var model = new MyModel();
return Ok(model);
}
}
|
Q: Web API: Configure JSON serializer settings on action or controller level Overriding the default JSON serializer settings for web API on application level has been covered in a lot of SO threads. But how can I configure its settings on action level? For example, I might want to serialize using camelcase properties in one of my actions, but not in the others.
A: Option 1 (quickest)
At action level you may always use a custom JsonSerializerSettings instance while using Json method:
public class MyController : ApiController
{
public IHttpActionResult Get()
{
var settings = new JsonSerializerSettings
{
ContractResolver = new CamelCasePropertyNamesContractResolver()
};
var model = new MyModel();
return Json(model, settings);
}
}
Option 2 (controller level)
You may create a new IControllerConfiguration attribute which customizes the JsonFormatter:
public class CustomJsonAttribute : Attribute, IControllerConfiguration
{
public void Initialize(HttpControllerSettings controllerSettings, HttpControllerDescriptor controllerDescriptor)
{
var formatter = controllerSettings.Formatters.JsonFormatter;
controllerSettings.Formatters.Remove(formatter);
formatter = new JsonMediaTypeFormatter
{
SerializerSettings =
{
ContractResolver = new CamelCasePropertyNamesContractResolver()
}
};
controllerSettings.Formatters.Insert(0, formatter);
}
}
[CustomJson]
public class MyController : ApiController
{
public IHttpActionResult Get()
{
var model = new MyModel();
return Ok(model);
}
}
A: Here's an implementation of the above as Action Attribute:
public class CustomActionJsonFormatAttribute : ActionFilterAttribute
{
public override void OnActionExecuted(HttpActionExecutedContext actionExecutedContext)
{
if (actionExecutedContext?.Response == null) return;
var content = actionExecutedContext.Response.Content as ObjectContent;
if (content?.Formatter is JsonMediaTypeFormatter)
{
var formatter = new JsonMediaTypeFormatter
{
SerializerSettings =
{
ContractResolver = new CamelCasePropertyNamesContractResolver()
}
};
actionExecutedContext.Response.Content = new ObjectContent(content.ObjectType, content.Value, formatter);
}
}
}
public class MyController : ApiController
{
[CustomActionJsonFormat]
public IHttpActionResult Get()
{
var model = new MyModel();
return Ok(model);
}
}
A: I needed to return a 404 status code alongside a JSON object with error details. I solved it using Web API's Content method with a new JsonMediaTypeFormatter.
public class MyController : ApiController
{
public IHttpActionResult Get()
{
// Configure new Json formatter
var formatter = new JsonMediaTypeFormatter
{
SerializerSettings =
{
TypeNameHandling = TypeNameHandling.None,
PreserveReferencesHandling = PreserveReferencesHandling.None,
Culture = CultureInfo.InvariantCulture,
Formatting = Formatting.Indented,
NullValueHandling = NullValueHandling.Ignore
}
};
try
{
var model = new MyModel();
return Content(HttpStatusCode.OK, model, formatter);
}
catch (Exception err)
{
var errorDto = GetErrorDto(HttpStatusCode.NotFound, $"{err.Message}");
return Content(HttpStatusCode.NotFound, errorDto, formatter);
}
}
}
|
stackoverflow
|
{
"language": "en",
"length": 363,
"provenance": "stackexchange_0000F.jsonl.gz:850944",
"question_score": "40",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44499041"
}
|
5465de7dce891d2177e011cb1d097318614df64d
|
Stackoverflow Stackexchange
Q: openfire add user to group I've read many Posts about adding a user to a group programmatically on openfire dev forum.
Unfortunately, none of them worked for me.
I can programmatically create users via UserManager.createUser(). After I successfully created a new User, I want to add him to a group 'users'.
The group already exists so I was trying to use following code:
public void addUserToGroup(User user) {
Group group = GroupManager.getInstance().getGroup("users");
JID jid = new JID(user.getUsername());
group.getMembers().add(jid);
}
I've found that code snippet in an answer HERE.
Then I tried to use this example, Java Code Example org.jivesoftware.openfire.group.Group (Example Number 9), even though I don't understand how this event would add a new member to a group.
Like I said, none of them worked.
Can you please give information about "how can I add a group member to a group?"
|
Q: openfire add user to group I've read many Posts about adding a user to a group programmatically on openfire dev forum.
Unfortunately, none of them worked for me.
I can programmatically create users via UserManager.createUser(). After I successfully created a new User, I want to add him to a group 'users'.
The group already exists so I was trying to use following code:
public void addUserToGroup(User user) {
Group group = GroupManager.getInstance().getGroup("users");
JID jid = new JID(user.getUsername());
group.getMembers().add(jid);
}
I've found that code snippet in an answer HERE.
Then I tried to use this example, Java Code Example org.jivesoftware.openfire.group.Group (Example Number 9), even though I don't understand how this event would add a new member to a group.
Like I said, none of them worked.
Can you please give information about "how can I add a group member to a group?"
|
stackoverflow
|
{
"language": "en",
"length": 142,
"provenance": "stackexchange_0000F.jsonl.gz:850955",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44499060"
}
|
9e0bf826aa8dde96c4b291c8ea1ca456979e3838
|
Stackoverflow Stackexchange
Q: Spring @ControllerAdvice doesn't work I want to handle all exceptions thrown by any controller with the help of my GlobalExceptionHandler class. When I add the following code to my controller, it works fine. But in this case I must add it to all my controllers, and I don't want to repeat it in each controller.
@ExceptionHandler({ FiberValidationException.class })
public String handleValidationException(HttpServletRequest req, Exception ex)
{
return ex.getMessage();
}
When I remove them and use my GlobalExceptionHandler class, it doesn't handle exceptions.
What is the reason? How can I fix it?
@ControllerAdvice
@EnableWebMvc
public class GlobalExceptionHandler {
private static final Logger LOG = Logger.getLogger(GlobalExceptionHandler.class);
@ExceptionHandler({ FiberValidationException.class })
public String handleValidationException(HttpServletRequest req, Exception ex) {
LOG.error("FiberValidationException handler executed");
return ex.getMessage();
}
@ExceptionHandler({ ChannelOverflowException.class })
public String handleOverflowException(HttpServletRequest req, Exception ex) {
LOG.error("ChannelOverflowException handler executed");
return ex.getMessage();
}
}
A: Extend your global exception class with ResponseEntityExceptionHandler. e.g. public class GlobalExceptionHandler extends ResponseEntityExceptionHandler {
|
Q: Spring @ControllerAdvice doesn't work I want to handle all exceptions thrown by any controller with the help of my GlobalExceptionHandler class. When I add the following code to my controller, it works fine. But in this case I must add it to all my controllers, and I don't want to repeat it in each controller.
@ExceptionHandler({ FiberValidationException.class })
public String handleValidationException(HttpServletRequest req, Exception ex)
{
return ex.getMessage();
}
When I remove them and use my GlobalExceptionHandler class, it doesn't handle exceptions.
What is the reason? How can I fix it?
@ControllerAdvice
@EnableWebMvc
public class GlobalExceptionHandler {
private static final Logger LOG = Logger.getLogger(GlobalExceptionHandler.class);
@ExceptionHandler({ FiberValidationException.class })
public String handleValidationException(HttpServletRequest req, Exception ex) {
LOG.error("FiberValidationException handler executed");
return ex.getMessage();
}
@ExceptionHandler({ ChannelOverflowException.class })
public String handleOverflowException(HttpServletRequest req, Exception ex) {
LOG.error("ChannelOverflowException handler executed");
return ex.getMessage();
}
}
A: Extend your global exception class with ResponseEntityExceptionHandler. e.g. public class GlobalExceptionHandler extends ResponseEntityExceptionHandler {
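A minimal sketch of that suggestion, reusing one handler from the question; @ResponseBody is added here on the assumption that the returned String should be written to the response body rather than resolved as a view name:
import javax.servlet.http.HttpServletRequest;
import org.springframework.web.bind.annotation.ControllerAdvice;
import org.springframework.web.bind.annotation.ExceptionHandler;
import org.springframework.web.bind.annotation.ResponseBody;
import org.springframework.web.servlet.mvc.method.annotation.ResponseEntityExceptionHandler;

@ControllerAdvice
public class GlobalExceptionHandler extends ResponseEntityExceptionHandler {

    // handler copied from the question; the message is returned as the response body
    @ExceptionHandler(FiberValidationException.class)
    @ResponseBody
    public String handleValidationException(HttpServletRequest req, Exception ex) {
        return ex.getMessage();
    }
}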
A: You might define the base package of the ControllerAdvice
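For example (the package name here is just a placeholder):
@ControllerAdvice(basePackages = "com.example.web")
public class GlobalExceptionHandler {
    // exception handlers as in the question
}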
A: In my case I had specified the wrong exception type in my handler method's arguments; for example, I handled EntityNotFoundException while the actual exception was MissingFormatArgumentException, so the response was always 500 and the exception was never caught in my ControllerAdvice.
|
stackoverflow
|
{
"language": "en",
"length": 204,
"provenance": "stackexchange_0000F.jsonl.gz:850980",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44499141"
}
|
f5503adba800f04d001a46d0ac9ccfa7b696d361
|
Stackoverflow Stackexchange
Q: Convert Django QuerySet to Pandas Dataframe and Maintain Column Order Given a Django queryset like the following:
qs = A.objects.all().values_list('A', 'B', 'C', 'D', 'E', 'F')
I can convert my qs to a pandas dataframe easily:
df = pd.DataFrame.from_records(qs.values('A', 'B', 'C', 'D', 'E', 'F'))
However, the column order is not maintained. Immediately after conversion I need to specify the new order of columns and I'm not clear why:
df = df[['B', 'F', 'C', 'E', 'D', 'A']]
Why is this happening and what can I do differently to avoid having to set the dataframe columns explicitly?
A: qs.values() returns the rows as dictionaries, which are unordered. You are OK with qs.values_list(), which returns a list of tuples.
Try:
df = pd.DataFrame.from_records(
A.objects.all().values_list('A', 'B', 'C', 'D', 'E', 'F')
)
check the docs about Django's QuerySets
|
Q: Convert Django QuerySet to Pandas Dataframe and Maintain Column Order Given a Django queryset like the following:
qs = A.objects.all().values_list('A', 'B', 'C', 'D', 'E', 'F')
I can convert my qs to a pandas dataframe easily:
df = pd.DataFrame.from_records(qs.values('A', 'B', 'C', 'D', 'E', 'F'))
However, the column order is not maintained. Immediately after conversion I need to specify the new order of columns and I'm not clear why:
df = df[['B', 'F', 'C', 'E', 'D', 'A']]
Why is this happening and what can I do differently to avoid having to set the dataframe columns explicitly?
A: qs.values() returns the rows as dictionaries, which are unordered. You are OK with qs.values_list(), which returns a list of tuples.
Try:
df = pd.DataFrame.from_records(
A.objects.all().values_list('A', 'B', 'C', 'D', 'E', 'F')
)
check the docs about Django's QuerySets
A: try:
df = pd.DataFrame.from_records("DATA_GOES_HERE", columns=['A','B','C'.. etc.)
I'm using the columns= parameter found here.
I believe you could also construct the DataFrame by just using pd.DataFrame and put your lists in there with the corresponding column names. This may be more manual work up-front, but if this is for an automated job it could work as well. (may have the ordering issue here again, but can easily be solved by rearranging the columns.. Again, may be more work upfront)
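For instance, a sketch combining values_list with an explicit column list, using the field names from the question:
qs = A.objects.all().values_list('A', 'B', 'C', 'D', 'E', 'F')
df = pd.DataFrame.from_records(qs, columns=['A', 'B', 'C', 'D', 'E', 'F'])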
A: The abovementioned answers require adding columns manually. However, this can be circumvented. I wrote a simpler version that does not require column names:
def django_recordset_to_data_frame(django_recordset):
mydf = pd.DataFrame.from_records(django_recordset.values_list())
mydf.columns = [col for col in django_recordset[0].__dict__.keys()][1:]
return mydf
You can use it like below, for instance with your News table:
django_recordset = News.objects.all()
panda_data_frame = django_recordset_to_data_frame(django_recordset)
|
stackoverflow
|
{
"language": "en",
"length": 270,
"provenance": "stackexchange_0000F.jsonl.gz:851001",
"question_score": "7",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44499196"
}
|
5b989cc41ff57bced4b6047b8e055557835eae9e
|
Stackoverflow Stackexchange
Q: Native implementation of an abstract method Is it permitted in Java to have an abstract method within a class and then have its implementation in another class as a native method using JNI?
example:
abstract class Mommy {
abstract protected void call();
}
class Son extends Mommy {
native protected void call(); /*
'native code'
*/
}
What is the expected behaviour: is it a runtime error that may occur, or is everything fine with this "workaround"?
A:
What is the expected behaviour: is it a runtime error that may occur, or is everything fine with this "workaround"?
Provided that you implement the native method (correctly) and load the native library containing the implementation, then everything works.
I wonder if it is bug prone or against any good/best practices?
Nope, and nope.
Or at least, it is not more bug prone or more against "best practice" than any use of native code.
By the way, you really ought to read James Bach's "No Best Practices" article before you bandy around dodgy terms like "best practice".
|
Q: Native implementation of an abstract method Is it permitted in Java to have an abstract method within a class and then have its implementation in another class as a native method using JNI?
example:
abstract class Mommy {
abstract protected void call();
}
class Son extends Mommy {
native protected void call(); /*
'native code'
*/
}
What is the expected behaviour: is it a runtime error that may occur, or is everything fine with this "workaround"?
A:
What is the expected behaviour: is it a runtime error that may occur, or is everything fine with this "workaround"?
Provided that you implement the native method (correctly) and load the native library containing the implementation, then everything works.
I wonder if it is bug prone or against any good/best practices?
Nope, and nope.
Or at least, it is not more bug prone or more against "best practice" than any use of native code.
By the way, you really ought to read James Bach's "No Best Practices" article before you bandy around dodgy terms like "best practice".
A: Seems to be working just fine. This is quite a cool solution in case you want to easily switch between native implementations.
https://github.com/mkowsiak/jnicookbook/tree/master/recipes/recipeNo030
and the description inside the JNI Cookbook can give you a quick overview of the solution:
http://jnicookbook.owsiak.org/recipe-no-030/
|
stackoverflow
|
{
"language": "en",
"length": 212,
"provenance": "stackexchange_0000F.jsonl.gz:851010",
"question_score": "6",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44499221"
}
|
7b6d87dac0623b97c6c6c730f0e76b153309f98a
|
Stackoverflow Stackexchange
Q: Python - Whitespace after Why is there a whitespace in the output when I print something like this in Python 3?
(Is it in the '\n' character itself?)
print (my_var1, '\n', my_var_2)
Output :
1
2
I know how to fix it. It is not that complicated, but I was just wondering why...
A: print puts a single space (or the value of the sep keyword argument) between every pair of arguments, including around '\n'. You might want to combine the three strings into a single argument yourself.
print(my_var1 + '\n' + my_var2)
or
print('\n'.join([my_var1, my_var2]))
Better than either of these would be to use the string format method:
print('{}\n{}'.format(my_var1, my_var2))
which both handles conversion to str if necessary and eliminates any temporary objects.
I would prefer, though, setting sep to \n as in @billy's answer.
|
Q: Python - Whitespace after Why is there a whitespace in the output when I print something like this in Python 3?
(Is it in the '\n' character itself?)
print (my_var1, '\n', my_var_2)
Output :
1
2
I know how to fix it. It is not that complicated, but I was just wondering why...
A: print puts a single space (or the value of the sep keyword argument) between every pair of arguments, including around '\n'. You might want to combine the three strings into a single argument yourself.
print(my_var1 + '\n' + my_var2)
or
print('\n'.join([my_var1, my_var2]))
Better than either of these would be to use the string format method:
print('{}\n{}'.format(my_var1, my_var2))
which both handles conversion to str if necessary and eliminates any temporary objects.
I would prefer, though, setting sep to \n as in @billy's answer.
A: When you have multiple positional arguments in the print function, it writes the str form of those arguments with the sep string between them - which in the default case is one space (' '). There are many ways to print two variables on two separate lines.
print(my_var1)
print(my_var2)
or
for var in (my_var1, my_var2):
print(var)
or
print(my_var1, my_var2, sep='\n')
or the myriad of other examples here.
A: Using a multiline (triple-quoted) f-string
my_var1, my_var_2 = 1,2
print (f"""{my_var1}
{my_var_2}""")
gives
1
2
[Program finished]
Although my personal favourite is
my_var1, my_var_2 = 1,2
varlist = [my_var1, my_var_2]
for var in (varlist):
print(var)
simply because you can have n variables which you can put in a list and then print using a loop.
|
stackoverflow
|
{
"language": "en",
"length": 255,
"provenance": "stackexchange_0000F.jsonl.gz:851016",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44499240"
}
|
2530eee5a014df8c703cfcfe1af8eacbe3b029c5
|
Stackoverflow Stackexchange
Q: Differences between query with SQL and without SQL in SparkSQL While learning Spark 2 in Scala, I found that we can use two ways to query data in SparkSQL:
*
*spark.sql(SQL_STATEMENT) // variable "spark" is an instance of SparkSession
*DataSet/DataFrame.select/.where/.groupBy....
My question is: what are the differences (functional, performance, etc.) between them?
I tried to find the answer on the internet and in their documentation, but failed, so I would like to hear your opinions.
A: I think the query with SQL and the query without SQL are equivalent. Both use the same internals and the same engine. But I would prefer the version without SQL queries, which is easier to write and provides some level of type safety.
among these
1. spark.sql(SQL_STATEMENT) // variable "spark" is a SparkSession
2. DataSet/DataFrame.select/.where/.groupBy....
I would choose number 2 for most cases since it provides some level of type safety.
|
Q: Differences between query with SQL and without SQL in SparkSQL While learning Spark 2 in Scala, I found that we can use two ways to query data in SparkSQL:
*
*spark.sql(SQL_STATEMENT) // variable "spark" is an instance of SparkSession
*DataSet/DataFrame.select/.where/.groupBy....
My question is: what are the differences (functional, performance, etc.) between them?
I tried to find the answer on the internet and in their documentation, but failed, so I would like to hear your opinions.
A: I think the query with SQL and the query without SQL are equivalent. Both use the same internals and the same engine. But I would prefer the version without SQL queries, which is easier to write and provides some level of type safety.
among these
1. spark.sql(SQL_STATEMENT) // variable "spark" is a SparkSession
2. DataSet/DataFrame.select/.where/.groupBy....
I would choose number 2 for most cases since it provides some level of type safety.
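A minimal sketch of the two equivalent styles, assuming a DataFrame df with hypothetical columns name and age:
import spark.implicits._

df.createOrReplaceTempView("people")

// 1. SQL string
val viaSql = spark.sql(
  "SELECT name, COUNT(*) AS cnt FROM people WHERE age > 21 GROUP BY name")

// 2. DataFrame/Dataset API
val viaApi = df.filter($"age" > 21).groupBy($"name").count()
Both go through the same Catalyst optimizer, which is why the two styles end up equivalent in performance.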
A: By using the DataFrame API one can debug SQL statements by breaking them down into simpler steps, which helps with understanding.
The only thing that makes a difference is what kind of underlying algorithm is used for grouping.
HashAggregation vs SortAggregation
HashAggregation would be more efficient than SortAggregation.
SortAggregation will sort the rows and then gather together the matching rows: O(n*log n). HashAggregation creates a HashMap using the grouping columns as the key and the rest of the columns as the values in the map. Spark SQL uses HashAggregation where possible (if the data for the values is mutable): O(n).
|
stackoverflow
|
{
"language": "en",
"length": 256,
"provenance": "stackexchange_0000F.jsonl.gz:851018",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44499245"
}
|
836416abba61764ac3587a0ae6760b42272ab8f2
|
Stackoverflow Stackexchange
Q: How to enforce Ember.Component to rerender? Is there any way to force an Ember.Component to rerender?
There is a .rerender() method, but it doesn't help.
I also tried using .notifyPropertyChange for template and layout; the result was the same.
Right now for such cases I need to wrap a piece of the template in an if wrapper and toggle a flag's value, but that approach is ugly and tedious.
Any ideas?
A: I had the same issue with one of my components. As the community says, you should not have to do this if you use the Ember patterns correctly.
However, as it was a specific case, I found a way around it.
You have to create an action in your routes/somefile.js to refresh, like so:
actions: {
refresh() {
this.refresh();
}
}
and in your component view, add a hidden button wired to the route's refresh action, like so:
<button id="refresh_invoice" class="hidden" {{action 'refresh'}}></button>
and then in your component, using jQuery, you can trigger a click on the hidden button, which will refresh the component.
It's not a great fix, but it works.
Hope it will help.
|
Q: How to enforce Ember.Component to rerender? Is there any way to force an Ember.Component to rerender?
There is a .rerender() method, but it doesn't help.
I also tried using .notifyPropertyChange for template and layout; the result was the same.
Right now for such cases I need to wrap a piece of the template in an if wrapper and toggle a flag's value, but that approach is ugly and tedious.
Any ideas?
A: I had the same issue with one of my components. As the community says, you should not have to do this if you use the Ember patterns correctly.
However, as it was a specific case, I found a way around it.
You have to create an action in your routes/somefile.js to refresh, like so:
actions: {
refresh() {
this.refresh();
}
}
and in your component view, add a hidden button wired to the route's refresh action, like so:
<button id="refresh_invoice" class="hidden" {{action 'refresh'}}></button>
and then in your component, using jQuery, you can trigger a click on the hidden button, which will refresh the component.
It's not a great fix, but it works.
Hope it will help.
A: Take a look at the Ember Run Loop
To override this 'background' process you can use something like this:
showElement: false,
actions: {
buttonClick() {
Ember.run(()=> {
this.toggleProperty('showElement');
})
}
}
This will force the run loop to restart, rather than Ember handling any property changes and deciding on how your code should be executed, much like the JavaScript event loop.
A good explanation of this can be found here
|
stackoverflow
|
{
"language": "en",
"length": 257,
"provenance": "stackexchange_0000F.jsonl.gz:851034",
"question_score": "10",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44499286"
}
|
a1f2069bf5de69cb170936bc43e48edd128e4fb5
|
Stackoverflow Stackexchange
Q: How to read application.properties file without Environment Please can you help me to read the properties from application.properties file in Spring Boot, without autowiring the Environment and without using the Environment?
There's no need to use ${propname} either. I can create a Properties object, but then I have to pass my properties file path; I want to get my properties file from another location.
A: Try to use plain old Properties.
final Properties properties = new Properties();
properties.load(new FileInputStream("/path/config.properties"));
System.out.println(properties.getProperty("server.port"));
In case you need to use that external properties file in your configuration it can be accomplished with @PropertySource("/path/config.properties")
|
Q: How to read application.properties file without Environment Please can you help me to read the properties from application.properties file in Spring Boot, without autowiring the Environment and without using the Environment?
There's no need to use ${propname} either. I can create a Properties object, but then I have to pass my properties file path; I want to get my properties file from another location.
A: Try to use plain old Properties.
final Properties properties = new Properties();
properties.load(new FileInputStream("/path/config.properties"));
System.out.println(properties.getProperty("server.port"));
In case you need to use that external properties file in your configuration it can be accomplished with @PropertySource("/path/config.properties")
A: The following code extracts the environment value from an existing application.properties file which is located in the Deployed Resources under WEB-INF/classes :
// Define classes path from application.properties :
String environment;
InputStream inputStream;
try {
// Class path is found under WEB-INF/classes
Properties prop = new Properties();
String propFileName = "com/example/project/application.properties";
inputStream = getClass().getClassLoader().getResourceAsStream(propFileName);
// read the file
if (inputStream != null) {
prop.load(inputStream);
} else {
throw new FileNotFoundException("property file '" + propFileName + "' not found in the classpath");
}
// get the property value and print it out
environment = prop.getProperty("environment");
System.out.println("The environment is " + environment);
} catch (Exception e) {
System.out.println("Exception: " + e);
}
Here is example, running the above code with the following input from the application.properties (Text file):
# Application settings file
environment=Test
release_date=DATE
session_timeout_minutes=25
## Allowable image types
img_file_extensions="jpeg;pjpeg;jpg;png;gif"
## Images are saved with this extension
img_default_extension=jpg
# Mail Settings / Addresses
mail_debug=false
Output:
The environment is Test
A: This is a core Java feature. You don't have to use any Spring or Spring Boot features if you don't want to.
Properties properties = new Properties();
try (InputStream is = getClass().getResourceAsStream("application.properties")) {
properties.load(is);
}
JavaDoc: http://docs.oracle.com/javase/8/docs/api/java/util/Properties.html
A: OrangeDog's solution didn't work for me; it generated a NullPointerException.
I've found another solution:
ClassLoader loader = Thread.currentThread().getContextClassLoader();
Properties properties = new Properties();
try (InputStream resourceStream = loader.getResourceAsStream("application.properties")) {
properties.load(resourceStream);
} catch (IOException e) {
e.printStackTrace();
}
A: To read application.properties just add this annotation to your class:
@ConfigurationProperties
public class Foo {
}
If you want to change the default file
@PropertySource("your properties path here")
public class Foo {
}
A: If everything else is properly set, you can use the annotation @Value. Spring Boot will take care of loading the value from the property file.
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.PropertySource;
import org.springframework.beans.factory.annotation.Value;
@Configuration
@PropertySource("classpath:/other.properties")
public class ClassName {
@Value("${key.name}")
private String name;
}
A: Adding to Vladislav Kysliy's elegant solution, the code below can be plugged in directly as a REST API call to get all the key/value pairs of the application.properties file in Spring Boot without knowing any key. Additionally, if you know the key you can always use the @Value annotation to get the value.
@GetMapping
@RequestMapping("/env")
public java.util.Set<Map.Entry<Object,Object>> getAppPropFileContent(){
ClassLoader loader = Thread.currentThread().getContextClassLoader();
java.util.Properties properties = new java.util.Properties();
try(InputStream resourceStream = loader.getResourceAsStream("application.properties")){
properties.load(resourceStream);
}catch(IOException e){
e.printStackTrace();
}
return properties.entrySet();
}
|
stackoverflow
|
{
"language": "en",
"length": 477,
"provenance": "stackexchange_0000F.jsonl.gz:851042",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44499306"
}
|
cb3f0ad1df7a9ba90769e9f79474c2bd9dbba432
|
Stackoverflow Stackexchange
Q: To play Youtube Video in iOS app I want to play YouTube videos in my iOS app. I searched for this, but the only solution I found is to embed YouTube videos in the iOS app, where the video plays in a webview, so the user can scroll and also play other suggested videos.
I don't want to play the video in a webview; I want to play it just like it plays in a player, where the user cannot scroll.
Is there any solution for that in Swift? Also, I don't want to use libraries which are against YouTube's terms and conditions.
A: There also are solutions for playing Youtube Videos in an app on Github.
Like this one: https://github.com/rinov/YoutubeKit
Or this one: https://github.com/gilesvangruisen/Swift-YouTube-Player
Just simply add the pod for the project that you want to use, install the pod in terminal, and you can use the functionality in that project.
Hope that this is helpful.
|
Q: To play Youtube Video in iOS app I want to play YouTube videos in my iOS app. I searched for this, but the only solution I found is to embed YouTube videos in the app, where the video plays in a web view; there the user can scroll and also play other videos that appear in the suggestions.
I don't want to play the video in a web view. I want the video to play as it would in a dedicated player, without the user being able to scroll.
Is there a solution for this in Swift? I also don't want to use libraries that are against the terms and conditions of YouTube.
A: There also are solutions for playing Youtube Videos in an app on Github.
Like this one: https://github.com/rinov/YoutubeKit
Or this one: https://github.com/gilesvangruisen/Swift-YouTube-Player
Just simply add the pod for the project that you want to use, install the pod in terminal, and you can use the functionality in that project.
Hope that this is helpful.
A: Here's another solution if you don't want to use the API provided by YouTube and instead continue using a UIWebView.
YouTube has functionality to load any video in fullscreen in a webview without any of the scrolling features using a URL in the format https://www.youtube.com/embed/<videoId>.
For example, to load Gangnam Style using this method, simply direct the UIWebView to the URL https://www.youtube.com/embed/9bZkp7q19f0.
A: The API that YouTube provides to embed videos in iOS apps is indeed written in Objective-C, but it works just as well in Swift.
To install the library via CocoaPods, follow the CocoaPods setup instructions and add the following line to your Podfile:
pod 'youtube-ios-player-helper', '~> 0.1'
Once you have run pod install, be sure to use the .xcworkspace file from now on in Xcode.
To import the pod, simply use the following import statement at the top of your Swift files:
import youtube_ios_player_helper
You can then create youtube player views as follows:
let playerView = YTPlayerView()
You can include this view in your layouts as you would any other UIView. In addition, it includes all of the functions listed in the YouTube documentation. For instance, to load and play a video, use the following function:
playerView.load(withVideoId: videoId);
Where videoId is the string id found in the URL of the video, such as "9bZkp7q19f0".
A: Play youtube video in Swift 4.0
if let range = strUrl.range(of: "=") {
let strIdentifier = strUrl.substring(from: range.upperBound)
let playerViewController = AVPlayerViewController()
self.present(playerViewController, animated: true, completion: nil)
XCDYouTubeClient.default().getVideoWithIdentifier(strIdentifier) {
[weak playerViewController] (video: XCDYouTubeVideo?, error: Error?) in
if let streamURLs = video?.streamURLs, let streamURL =
(streamURLs[XCDYouTubeVideoQualityHTTPLiveStreaming] ??
streamURLs[YouTubeVideoQuality.hd720] ??
streamURLs[YouTubeVideoQuality.medium360] ??
streamURLs[YouTubeVideoQuality.small240]) {
playerViewController?.player = AVPlayer(url: streamURL)
} else {
self.dismiss(animated: true, completion: nil)
}
}
}
|
stackoverflow
|
{
"language": "en",
"length": 445,
"provenance": "stackexchange_0000F.jsonl.gz:851051",
"question_score": "13",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44499332"
}
|
6adbc0b3a26ce583feec1b49f0031af8ac8fc0aa
|
Stackoverflow Stackexchange
Q: How to trigger Parse.Cloud.afterSave on registers I am looking for a way to trigger a Parse Cloud job when a user registers on my platform. This job will set their role. Is it possible? I have tried this code, but it is never triggered:
Parse.Cloud.afterSave(Parse.User, function(request) {
Parse.Cloud.useMasterKey();
console.log('launch cloud request');
if (request.master === false) {
console.log('not mastered');
var query = new Parse.Query(Parse.Role);
query.equalTo('name', 'default');
query.first({
success: (default) => {
var defaultRelation = default.relation('users');
defaultRelation.add(request.object);
default.save();
},
error: (err) => console.error(err)
});
}
});
A: I think Parse.Cloud.useMasterKey() is deprecated at this time.
You can do this:
Parse.Cloud.afterSave(Parse.User, function(request) {
console.log("Parse.Cloud.afterSave: ");
request.log.info("Parse.Cloud.afterSave: "); // For back4app user
});
|
Q: How to trigger Parse.Cloud.afterSave on registers I am looking for a way to trigger a Parse Cloud job when a user registers on my platform. This job will set their role. Is it possible? I have tried this code, but it is never triggered:
Parse.Cloud.afterSave(Parse.User, function(request) {
Parse.Cloud.useMasterKey();
console.log('launch cloud request');
if (request.master === false) {
console.log('not mastered');
var query = new Parse.Query(Parse.Role);
query.equalTo('name', 'default');
query.first({
success: (default) => {
var defaultRelation = default.relation('users');
defaultRelation.add(request.object);
default.save();
},
error: (err) => console.error(err)
});
}
});
A: I think Parse.Cloud.useMasterKey() is deprecated at this time.
You can do this:
Parse.Cloud.afterSave(Parse.User, function(request) {
console.log("Parse.Cloud.afterSave: ");
request.log.info("Parse.Cloud.afterSave: "); // For back4app user
});
|
stackoverflow
|
{
"language": "en",
"length": 111,
"provenance": "stackexchange_0000F.jsonl.gz:851083",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44499431"
}
|
862083ef5e95d42383bb95a0eba773d755321fd0
|
Stackoverflow Stackexchange
Q: Jekyll - How to use a markdown file in a parent folder I have a GitHub repository with a GitHub Pages website enabled, structured as follows:
/
|__README.md
|__docs/
|__ _config.yml
|__ stylesheets
|__ javascripts
|__ images
|__ index.md
The files docs/index.md and README.md are strictly identical. How can I set the README.md file as the "index" file with Jekyll? Or at least make index.md include README.md?
I tried the following and it does not work:
# First try
{% include ../README.md %}
# Second attempt
{% include_relative ../README.md %}
For the record, I use the theme: jekyll-theme-architect
A: You may set permalink by front matter in README.md to generate as index.html:
---
permalink: index.html
---
|
Q: Jekyll - How to use a markdown file in a parent folder I have a GitHub repository with a GitHub Pages website enabled, structured as follows:
/
|__README.md
|__docs/
|__ _config.yml
|__ stylesheets
|__ javascripts
|__ images
|__ index.md
The files docs/index.md and README.md are strictly identical. How can I set the README.md file as the "index" file with Jekyll? Or at least make index.md include README.md?
I tried the following and it does not work:
# First try
{% include ../README.md %}
# Second attempt
{% include_relative ../README.md %}
For the record, I use the theme: jekyll-theme-architect
A: You may set permalink by front matter in README.md to generate as index.html:
---
permalink: index.html
---
A: On my side, I added {% include_relative README.md %} to the index.markdown file and it worked perfectly. By the way, the file contains only this:
---
# Feel free to add content and custom Front Matter to this file.
# To modify the layout, see https://jekyllrb.com/docs/themes/#overriding-theme-defaults
layout: home
---
{% include_relative README.md %}
|
stackoverflow
|
{
"language": "en",
"length": 174,
"provenance": "stackexchange_0000F.jsonl.gz:851105",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44499497"
}
|
a21d00229397e2a91ca4abe77004a39fe90ee713
|
Stackoverflow Stackexchange
Q: Ansible win_unzip Module takes far too long At our customer the Ansible module win_unzip takes far too long when executed. Our code is:
- name: unzip zip package into C:\server\dlls
win_unzip:
src: "{{app_path}}\\app_dll.zip"
dest: "{{app_path}}\\dlls"
rm: true
This step takes more than 10 minutes. The zip file is copied with win_copy in the direct step before, code is here:
- name: copy zip package to C:\server
win_copy:
src: "path2zip.zip"
dest: "{{app_path}}\\app_dll.zip"
The extraction finishes successfully, but it blocks our pipeline for more than 10 minutes, which isn't acceptable.
A: We reduced the time needed to unzip the package with the help of the Powershell Module Expand-Archive to nearly zero. Here is the code:
- name: unzip zip package into C:\server\dlls
win_shell: "Expand-Archive {{app_path}}\\app_dll.zip -DestinationPath {{app_path}}\\dlls"
Our pipeline is now fast again, but it would be nice to have a fast Ansible win_unzip Module!
|
Q: Ansible win_unzip Module takes far too long At our customer the Ansible module win_unzip takes far too long when executed. Our code is:
- name: unzip zip package into C:\server\dlls
win_unzip:
src: "{{app_path}}\\app_dll.zip"
dest: "{{app_path}}\\dlls"
rm: true
This step takes more than 10 minutes. The zip file is copied with win_copy in the direct step before, code is here:
- name: copy zip package to C:\server
win_copy:
src: "path2zip.zip"
dest: "{{app_path}}\\app_dll.zip"
The extraction finishes successfully, but it blocks our pipeline for more than 10 minutes, which isn't acceptable.
A: We reduced the time needed to unzip the package with the help of the Powershell Module Expand-Archive to nearly zero. Here is the code:
- name: unzip zip package into C:\server\dlls
win_shell: "Expand-Archive {{app_path}}\\app_dll.zip -DestinationPath {{app_path}}\\dlls"
Our pipeline is now fast again, but it would be nice to have a fast Ansible win_unzip Module!
|
stackoverflow
|
{
"language": "en",
"length": 145,
"provenance": "stackexchange_0000F.jsonl.gz:851133",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44499592"
}
|
1a81e3c9c251791fe0e0a1d2b80b422be3cbc752
|
Stackoverflow Stackexchange
Q: Keras: What is the difference between layers.Input and layers.InputLayer? When should I use Input and when should I use InputLayer? In the source code there is a description, but I am not sure what it means.
InputLayer:
Layer to be used as an entry point into a graph.
It can either wrap an existing tensor (pass an input_tensor argument)
or create a placeholder tensor (pass arguments input_shape
or batch_input_shape as well as dtype).
Input:
Input() is used to instantiate a Keras tensor.
A Keras tensor is a tensor object from the underlying backend
(Theano or TensorFlow), which we augment with certain
attributes that allow us to build a Keras model
just by knowing the inputs and outputs of the model.
A: I think InputLayer has been deprecated together with the Graph models. I would suggest you use Input, as all the examples on the Keras documentations show.
|
Q: Keras: What is the difference between layers.Input and layers.InputLayer? When should I use Input and when should I use InputLayer? In the source code there is a description, but I am not sure what it means.
InputLayer:
Layer to be used as an entry point into a graph.
It can either wrap an existing tensor (pass an input_tensor argument)
or create a placeholder tensor (pass arguments input_shape
or batch_input_shape as well as dtype).
Input:
Input() is used to instantiate a Keras tensor.
A Keras tensor is a tensor object from the underlying backend
(Theano or TensorFlow), which we augment with certain
attributes that allow us to build a Keras model
just by knowing the inputs and outputs of the model.
A: I think InputLayer has been deprecated together with the Graph models. I would suggest you use Input, as all the examples on the Keras documentations show.
A: InputLayer is a callable, just like other keras layers, while Input is not callable, it is simply a Tensor object.
You can use InputLayer when you need to connect it like layers to the following layers:
inp = keras.layers.InputLayer(input_shape=(32,))(prev_layer)
and following is the usage of Input layer:
x = Input(shape=(32,))
y = Dense(16, activation='softmax')(x)
model = Model(x, y)
|
stackoverflow
|
{
"language": "en",
"length": 208,
"provenance": "stackexchange_0000F.jsonl.gz:851193",
"question_score": "6",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44499755"
}
|
aaf02c3dddd58e0d928a49dcff164590daecf10e
|
Stackoverflow Stackexchange
Q: Is there any faster alternative to POSIXct in R? I am reading a CSV with fread (as it is quicker than read_csv method), timestamp column is taken as character type.
I want to convert it to POSIXct with:
as.POSIXct(strptime(rawTime, "%Y-%m-%d %H:%M:%OS"))
But this POSIXct call is very slow.
Is there any quicker alternative to this?
A: We can use fastPOSIXct from fasttime
library(fasttime)
str1 <- rep("2015-01-01", 1e6)
system.time(fastPOSIXct(str1))
# user system elapsed
# 0.08 0.00 0.08
system.time(as.POSIXct(str1))
# user system elapsed
# 24.80 0.26 25.33
|
Q: Is there any faster alternative to POSIXct in R? I am reading a CSV with fread (as it is quicker than read_csv method), timestamp column is taken as character type.
I want to convert it to POSIXct with:
as.POSIXct(strptime(rawTime, "%Y-%m-%d %H:%M:%OS"))
But this POSIXct call is very slow.
Is there any quicker alternative to this?
A: We can use fastPOSIXct from fasttime
library(fasttime)
str1 <- rep("2015-01-01", 1e6)
system.time(fastPOSIXct(str1))
# user system elapsed
# 0.08 0.00 0.08
system.time(as.POSIXct(str1))
# user system elapsed
# 24.80 0.26 25.33
|
stackoverflow
|
{
"language": "en",
"length": 86,
"provenance": "stackexchange_0000F.jsonl.gz:851217",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44499824"
}
|
5c1fd17379a55e7a988e7f7f1f33a5f9b9812b63
|
Stackoverflow Stackexchange
Q: Java 8 collections streaming - Convert list to Set, transforming result Imagine an object with the following method:
class A { List<B> getIds(){...} }
Now I have a Collection of A as input,
and I want to get a set of unique ids out of it. Normally you would go for:
Set<B> ids = new HashSet<>();
for(A a : input){
ids.addAll(a.getIds());
}
Is there a way to do the same in one line using stream API, like following
Set<List<B>> set = input.stream().map((a) -> a.getIds()).collect(Collectors.toSet());
but making flat set of B
A: You have to use flatMap
input.stream()
.map(a -> a.getIds())
.flatMap(ids -> ids.stream())
.collect(Collectors.toSet());
This will produce flat Set.
|
Q: Java 8 collections streaming - Convert list to Set, transforming result Imagine an object with the following method:
class A { List<B> getIds(){...} }
Now I have a Collection of A as input,
and I want to get a set of unique ids out of it. Normally you would go for:
Set<B> ids = new HashSet<>();
for(A a : input){
ids.addAll(a.getIds());
}
Is there a way to do the same in one line using stream API, like following
Set<List<B>> set = input.stream().map((a) -> a.getIds()).collect(Collectors.toSet());
but making flat set of B
A: You have to use flatMap
input.stream()
.map(a -> a.getIds())
.flatMap(ids -> ids.stream())
.collect(Collectors.toSet());
This will produce flat Set.
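For reference, here is the same pipeline as a self-contained sketch using method references (illustrative only; B is replaced by String here so the snippet runs on its own):
import java.util.Arrays;
import java.util.Collection;
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

class A {
    private final List<String> ids;
    A(String... ids) { this.ids = Arrays.asList(ids); }
    List<String> getIds() { return ids; }
}

public class FlatSetDemo {
    public static void main(String[] args) {
        List<A> input = Arrays.asList(new A("x", "y"), new A("y", "z"));
        Set<String> ids = input.stream()
                .map(A::getIds)              // Stream<List<String>>
                .flatMap(Collection::stream) // flatten to Stream<String>
                .collect(Collectors.toSet());
        System.out.println(ids);             // e.g. [x, y, z] (order not guaranteed)
    }
}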
|
stackoverflow
|
{
"language": "en",
"length": 107,
"provenance": "stackexchange_0000F.jsonl.gz:851240",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44499914"
}
|
4ffd585ac2917b7d708afb2841a323edee5848fa
|
Stackoverflow Stackexchange
Q: ImportError: cannot import name mpl (from matplotlib import mpl) I am trying to run a code that I wrote a couple of years ago that uses mpl from matplotlib. It used to run fine, but now suddently it's throwing an error:
from matplotlib import mpl
ImportError: cannot import name mpl
I am using Python 2.7 and matplotlib 1.5.2.
A: You need to use:
import matplotlib as mpl
It really did work in earlier versions but it was first deprecated (in version 1.3):
The mpl module is now deprecated. Those who relied on this module should transition to simply using import matplotlib as mpl.
and then removed (in version 1.5.0):
Remove the module matplotlib.mpl. Deprecated in 1.3 by PR #1670 and commit 78ce67d161625833cacff23cfe5d74920248c5b2
|
Q: ImportError: cannot import name mpl (from matplotlib import mpl) I am trying to run a code that I wrote a couple of years ago that uses mpl from matplotlib. It used to run fine, but now suddently it's throwing an error:
from matplotlib import mpl
ImportError: cannot import name mpl
I am using Python 2.7 and matplotlib 1.5.2.
A: You need to use:
import matplotlib as mpl
It really did work in earlier versions but it was first deprecated (in version 1.3):
The mpl module is now deprecated. Those who relied on this module should transition to simply using import matplotlib as mpl.
and then removed (in version 1.5.0):
Remove the module matplotlib.mpl. Deprecated in 1.3 by PR #1670 and commit 78ce67d161625833cacff23cfe5d74920248c5b2
A: I think you cannot import "mpl" from matplotlib because it doesn't exist.
I guess you want to import matplotlib as mpl instead. You should try this:
import matplotlib as mpl
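For illustration, a small migration sketch (not from the original answers): submodules that old code reached through the removed matplotlib.mpl shim are simply the normal matplotlib submodules.
import matplotlib as mpl
import matplotlib.colors   # submodules old code used via "mpl.colors", "mpl.cm", ...
import matplotlib.cm

print(mpl.__version__)
norm = mpl.colors.Normalize(vmin=0.0, vmax=1.0)   # arbitrary example usage
cmap = mpl.cm.get_cmap('jet')
print(cmap(norm(0.5)))                            # RGBA tuple for the colormap midpoint
mpl.rcParams['lines.linewidth'] = 2.0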
|
stackoverflow
|
{
"language": "en",
"length": 157,
"provenance": "stackexchange_0000F.jsonl.gz:851259",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44499971"
}
|
b1d991d06f104dccc91fb6db5e0509db9bf22511
|
Stackoverflow Stackexchange
Q: Same rest endpoint with different PathVariable I'm trying to make two REST endpoints with the same URI but different types.
The first would search by EAN (Int), the second by id (String). Could I somehow overload the endpoint? I use Spring Boot with Kotlin.
@GetMapping("/book/{ean}")
fun getABookByEan(@PathVariable ean: Int) : ResponseEntity<*> {
repository.getByEan(ean)?.let {
return ResponseEntity.status(HttpStatus.OK).body(it)
}
throw ItemNotFoundException()
}
@GetMapping("/book/{id}")
fun getABookById(@PathVariable id: String) : ResponseEntity<*> {
repository.getById(id)?.let {
return ResponseEntity.status(HttpStatus.OK).body(it)
}
throw ItemNotFoundException()
}
After this I got an exception that multiple methods are mapped for the same endpoint.
...NestedServletException: Request processing failed; nested exception is java.lang.IllegalStateException: Ambiguous handler methods mapped for HTTP path...
A: I found out that the only way to do it, if I want to stick to my API, is a regex.
@GetMapping("/book/{ean:[\\d]+}")
@GetMapping("/book/{id:^[0-9a-fA-F]{24}$}")
With this, the 24-character hexadecimal id generated by MongoDB can be differentiated from plain numbers. If anyone finds a better way, let me know in the comments.
|
Q: Same rest endpoint with different PathVariable I'm trying to make two REST endpoints with the same URI but different types.
The first would search by EAN (Int), the second by id (String). Could I somehow overload the endpoint? I use Spring Boot with Kotlin.
@GetMapping("/book/{ean}")
fun getABookByEan(@PathVariable ean: Int) : ResponseEntity<*> {
repository.getByEan(ean)?.let {
return ResponseEntity.status(HttpStatus.OK).body(it)
}
throw ItemNotFoundException()
}
@GetMapping("/book/{id}")
fun getABookById(@PathVariable id: String) : ResponseEntity<*> {
repository.getById(id)?.let {
return ResponseEntity.status(HttpStatus.OK).body(it)
}
throw ItemNotFoundException()
}
After this I got an exception that multiple methods are mapped for the same endpoint.
...NestedServletException: Request processing failed; nested exception is java.lang.IllegalStateException: Ambiguous handler methods mapped for HTTP path...
A: I found out that the only way to do it, if I want to stick to my API, is a regex.
@GetMapping("/book/{ean:[\\d]+}")
@GetMapping("/book/{id:^[0-9a-fA-F]{24}$}")
With this, the 24-character hexadecimal id generated by MongoDB can be differentiated from plain numbers. If anyone finds a better way, let me know in the comments.
A: It's not possible to do it on mapping level. Probably you should try paths like:
/book/ean/{ean}
/book/id/{id}
Alternatively just
/book/id/{someUniversalId}
then distinguish between different kinds of ids in your executable code.
A: From the HTTP point of view it is the same endpoint: HTTP is a text-based protocol and a path parameter is always a string. Thus, Spring throws an exception.
To deal with the issue you can either identify argument type inside method body:
@GetMapping("/book/{identifier}")
fun getABookById(@PathVariable identifier: String) : ResponseEntity<*> {
try {
val id = identifier.toInt()
// id case
repository.getById(id)?.let {
return ResponseEntity.status(HttpStatus.OK).body(it)
}
} catch (e: NumberFormatException) {
// ean case
repository.getByEan(identifier)?.let {
return ResponseEntity.status(HttpStatus.OK).body(it)
}
}
throw ItemNotFoundException()
}
or pass ean or id as @RequestParam like /book?ean=abcdefg, /book?id=5.
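A sketch of that request-parameter variant, reusing the repository and exception from the question (EAN as Int, id as String; illustrative only, not part of the original answer):
@GetMapping("/book")
fun getABook(
    @RequestParam(required = false) ean: Int?,
    @RequestParam(required = false) id: String?
): ResponseEntity<*> {
    val book = when {
        ean != null -> repository.getByEan(ean)   // e.g. /book?ean=4006381333931
        id != null -> repository.getById(id)      // e.g. /book?id=507f1f77bcf86cd799439011
        else -> null
    }
    return book?.let { ResponseEntity.status(HttpStatus.OK).body(it) }
        ?: throw ItemNotFoundException()
}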
A: It would be beneficial to develop a query filter / query criteria, to process something like:
/book?q=ean+eq+abcdefg (meaning ean=abcdefg)
/book?q=id+gt+1000 (meaning id>1000)
and so on.
A: How about using matrix parameters?
So for id, you can use path parameter - /books/{id}
and for ean, matrix parameter - /books;ean={ean}
In fact, for id too you can use matrix parameter - /books;{id} or /books;id={id}
The general url for matrix parameters - /{resource-name}[;{selector}]/
source - Apigee REST API design best practices
|
stackoverflow
|
{
"language": "en",
"length": 362,
"provenance": "stackexchange_0000F.jsonl.gz:851264",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44499988"
}
|
cdcd986d95b34e3c34882bc876eb2b46a7f3635c
|
Stackoverflow Stackexchange
Q: Google Tag Manager Preview Mode for Mobile I just set up Google Tag Manager in my project and would like to test iOS/Android using the Preview Mode that's available for browser. Starting with Android I followed the instructions here and was able to use the link successfully but I don't see any kind of Preview Mode. Where are the log messages that I would normally see if I used it on browser and how do I get to them? Thanks!
A: Preview mode on mobile doesn't work the same way as on the web.
You need to verify your changes in Logcat/Android Monitor or the Xcode console, for which you have to enable verbose logging.
Instructions to open container and see verbose logs;
*
*Setup Google Tag Manager for app
*Generate a preview url
*Run your app
*Open the preview url in emulator or device browser
*Browser will redirect you to your app and open it
*Check your Logcat/Android Monitor or Xcode Console for GoogleTagManager verbose logs
|
Q: Google Tag Manager Preview Mode for Mobile I just set up Google Tag Manager in my project and would like to test iOS/Android using the Preview Mode that's available for browser. Starting with Android I followed the instructions here and was able to use the link successfully but I don't see any kind of Preview Mode. Where are the log messages that I would normally see if I used it on browser and how do I get to them? Thanks!
A: Preview mode on mobile doesn't work the same way as on the web.
You need to verify your changes in Logcat/Android Monitor or the Xcode console, for which you have to enable verbose logging.
Instructions to open container and see verbose logs;
*
*Setup Google Tag Manager for app
*Generate a preview url
*Run your app
*Open the preview url in emulator or device browser
*Browser will redirect you to your app and open it
*Check your Logcat/Android Monitor or Xcode Console for GoogleTagManager verbose logs
|
stackoverflow
|
{
"language": "en",
"length": 163,
"provenance": "stackexchange_0000F.jsonl.gz:851292",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44500068"
}
|
b0ceabc00d9ae31fc332e61e68d8d484ae0edc4b
|
Stackoverflow Stackexchange
Q: Symfony log deprecated warning on new file I want to log only the deprecated warnings to a new file. But I can't see how to achieve that with monolog. Is there a custom configuration?
Thanks in advance!
A: All deprecation messages are logged at the INFO level on the php channel, so with this configuration all deprecation messages will be logged in one file:
monolog:
handlers:
security:
level: INFO
type: stream
path: '%kernel.logs_dir%/deprecated.log'
channels: [php]
|
Q: Symfony log deprecated warning on new file I want to log only the deprecated warnings to a new file. But I can't see how to achieve that with monolog. Is there a custom configuration?
Thanks in advance!
A: All deprecation messages are logged at the INFO level on the php channel, so with this configuration all deprecation messages will be logged in one file:
monolog:
handlers:
security:
level: INFO
type: stream
path: '%kernel.logs_dir%/deprecated.log'
channels: [php]
|
stackoverflow
|
{
"language": "en",
"length": 75,
"provenance": "stackexchange_0000F.jsonl.gz:851297",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44500078"
}
|
460b4e4a48b340c6038bad75f5b045748a63b85e
|
Stackoverflow Stackexchange
Q: Round time string in gap of 30 minutes I have a time string in UNIX format. I need to round that string to the closest 30-minute interval.
For example, if I have a time of 9:20 AM then it should round to 9:30 AM.
If the minutes are greater than 30, like 9:45 AM, it should round to 10:00 AM.
I have tried this so far:
$hour = date('H', $ltdaytmfstr);
$minute = (date('i', $ltdaytmfstr)>30)?'00':'30';
echo "$hour:$minute";
$ltdaytmfstr is time string in unix format.
Any suggestions? It would be better if I can get the value returned in UNIX format .
A: You should try this; it will round the time up to the next half hour.
Use ceil function.
<?php
$rounded = date('H:i:s', ceil(strtotime('16:20:34')/1800)*1800);
echo $rounded;
?>
Output: 16:30:00
http://codepad.org/4WwNO5Rt
|
Q: Round time string in gap of 30 minutes I have a time string in UNIX format. I need to round that string to the closest 30-minute interval.
For example, if I have a time of 9:20 AM then it should round to 9:30 AM.
If the minutes are greater than 30, like 9:45 AM, it should round to 10:00 AM.
I have tried this so far:
$hour = date('H', $ltdaytmfstr);
$minute = (date('i', $ltdaytmfstr)>30)?'00':'30';
echo "$hour:$minute";
$ltdaytmfstr is time string in unix format.
Any suggestions? It would be better if I can get the value returned in UNIX format .
A: You should try this; it will round the time up to the next half hour.
Use ceil function.
<?php
$rounded = date('H:i:s', ceil(strtotime('16:20:34')/1800)*1800);
echo $rounded;
?>
Output: 16:30:00
http://codepad.org/4WwNO5Rt
A: If you use DateTime:
$dt = new \DateTime;
$diff = $dt
->add(
//This just calculates number of seconds from the next 30 minute interval
new \DateInterval("PT".((30 - $dt->format("i"))*60-$dt->format("s"))."S")
);
echo $dt->getTimestamp();
A: I guess this is what you are looking for
function round_timestamp($timestamp){
$hour = date("H", strtotime($timestamp));
$minute = date("i", strtotime($timestamp));
if ($minute<15) {
return date('H:i', strtotime("$hour:00") );
} elseif($minute>=15 and $minute<45){
return date('H:i', strtotime("$hour:30") );
} elseif($minute>=45) {
$hour = $hour + 1;
return date('H:i', strtotime("$hour:00") );
}
}
echo round_timestamp("11:59");
// 12:00
echo round_timestamp("10:59");
// 11:00
A: Since UNIX time is in seconds, you can just transform it to 30 minute units, round, and convert back to seconds.
$timestamp = time();
$rounded = round($timestamp / (30 * 60)) * 30 * 60;
You can also use floor() or ceil() to round up or down if needed.
|
stackoverflow
|
{
"language": "en",
"length": 269,
"provenance": "stackexchange_0000F.jsonl.gz:851317",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44500123"
}
|
f4f582fe1bf24d054081ea900321cf6cf23a81cd
|
Stackoverflow Stackexchange
Q: How can I know if C++ compiler make thread-safe static object code? In GCC, local static variables are thread-safe (via the special function __cxa_guard_acquire) unless the -fno-threadsafe-statics compiler option is given.
Similarly, MSVC 2015 and later versions support the same feature, which can be disabled with /Zc:threadSafeInit-.
Is there any macro or other features, like __EXCEPTIONS or __GXX_RTTI to check on compilation stage if such features are enabled or not? I think checking __cplusplus or _MSC_VER won't help.
A: Looks like there is one define __cpp_threadsafe_static_init.
SD-6: SG10 Feature Test Recommendations:
C++11 features
Significant features of C++11
Doc. No.: N2660
Title: Dynamic Initialization and Destruction with Concurrency
Primary Section: 3.6
Macro name: __cpp_threadsafe_static_init
Value: 200806
Header: predefined
CLang - http://clang.llvm.org/cxx_status.html#ts (github.com)
GCC - https://gcc.gnu.org/projects/cxx-status.html
MSVC - Feature request under investigation https://developercommunity.visualstudio.com/content/problem/96337/feature-request-cpp-threadsafe-static-init.html
Useful on cppreference.com:
*
*Feature Test Recommendations
*C++ compiler support
|
Q: How can I know if C++ compiler make thread-safe static object code? In GCC, local static variables are thread-safe (via the special function __cxa_guard_acquire) unless the -fno-threadsafe-statics compiler option is given.
Similarly, MSVC 2015 and later versions support the same feature, which can be disabled with /Zc:threadSafeInit-.
Is there any macro or other features, like __EXCEPTIONS or __GXX_RTTI to check on compilation stage if such features are enabled or not? I think checking __cplusplus or _MSC_VER won't help.
A: Looks like there is one define __cpp_threadsafe_static_init.
SD-6: SG10 Feature Test Recommendations:
C++11 features
Significant features of C++11
Doc. No.: N2660
Title: Dynamic Initialization and Destruction with Concurrency
Primary Section: 3.6
Macro name: __cpp_threadsafe_static_init
Value: 200806
Header: predefined
CLang - http://clang.llvm.org/cxx_status.html#ts (github.com)
GCC - https://gcc.gnu.org/projects/cxx-status.html
MSVC - Feature request under investigation https://developercommunity.visualstudio.com/content/problem/96337/feature-request-cpp-threadsafe-static-init.html
Useful on cppreference.com:
*
*Feature Test Recommendations
*C++ compiler support
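A minimal check of that macro might look like this (illustrative only; as the MSVC link above shows, a compiler may implement the feature without defining the macro):
#include <iostream>

int main() {
#if defined(__cpp_threadsafe_static_init) && __cpp_threadsafe_static_init >= 200806
    std::cout << "thread-safe statics advertised: "
              << __cpp_threadsafe_static_init << '\n';
#else
    std::cout << "__cpp_threadsafe_static_init not defined "
                 "(the feature may still be enabled)\n";
#endif
    return 0;
}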
|
stackoverflow
|
{
"language": "en",
"length": 139,
"provenance": "stackexchange_0000F.jsonl.gz:851324",
"question_score": "6",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44500144"
}
|
696ea4a77e02dc3f0be04542bbdc0c22f6032cba
|
Stackoverflow Stackexchange
Q: Force branch to be rebased before it is merged and pushed I want to add a hook on my GitLab server to prevent merged branches from being pushed to master if they were not rebased first.
For example :
A---B---C---D ← master
\
E---F---G ← new-feature
I want the user to rebase his feature before merging/pushing.
A---B---C---D-------------H ← master
\ /
E'---F'---G'
I don't want this to be pushed
A---B---C---D---H ← master
\ /
E---F---G
This is a good start, but I can't find a way to refuse only non-empty merge commits:
Force Feature Branch to be Rebased Before it is Merged or Pushed
A: If you are still looking for this: GitLab is the only Git server that implements it; they call it semi-linear history.
Look at the 2nd option.
This will enforce this kind of history (look at the right) seamlessly inside your merge request:
|
Q: Force branch to be rebased before it is merged and pushed I want to add a hook on my GitLab server to prevent merged branches from being pushed to master if they were not rebased first.
For example :
A---B---C---D ← master
\
E---F---G ← new-feature
I want the user to rebase his feature before merging/pushing.
A---B---C---D-------------H ← master
\ /
E'---F'---G'
I don't want this to be pushed
A---B---C---D---H ← master
\ /
E---F---G
This is a good start, but I can't find a way to refuse only non-empty merge commits:
Force Feature Branch to be Rebased Before it is Merged or Pushed
A: If you are still looking for this: GitLab is the only Git server that implements it; they call it semi-linear history.
Look at the 2nd option.
This will enforce this kind of history (look at the right) seamlessly inside your merge request:
A: It is definitely possible, but you need to write some code. You must also decide what precisely defines a "good" commit-graph update. Your example says that a request to go from this:
o--o--o--* <-- master
to this:
o--o--o--*---o <-- master
\ /
o--o--o
is to be rejected, while this:
o--o--o--*---------o <-- master
\ /
o--o--o
is to be accepted. But what about this third alternative:
o--o--o--*------o-----o <-- master
\ / /
o--o--o--o
This adds two merges rather than just one; but no new commit's merge has any parent that is an ancestor of the prior value of master.
And, what about this?
o--o--o <-- master
(Here the push has removed the commit that used to be the tip of master.)
If the second push, that adds two merges but none of them reach back to any earlier commits, is not to be accepted, and the last push is also not to be accepted, part of your task is pretty easy: you want to allow at most one merge, perhaps restricting it to exactly two parents with one of its two parents—perhaps this must even be "the first parent"—being the prior value of master (the commit marked *). The rest of your task is probably to allow no merges at all, as long as the proposed new master is not an ancestor of the old master (no commits are to be removed).
If the second (two-merge) push is to be accepted, the coding will be trickier. Note that if it's not to be accepted, someone can still push such a merge, they just have to do it in multiple steps (one push per merge).
A: The usual reason to force rebasing is a pathological hatred of merge commits because the person setting the policy doesn't see their value. It's not clear why you'd want to take the disadvantages of rebase (odds are the intermediate commits will not have been tested) but still have the merge; this seems to me like the least valuable merge commit possible. But if you must...
You imply that the merge you want to accept would be "empty"; that's not exactly true. It applies changes to its first parent (though not to its 2nd parent, since it would be a fast-forward if allowed to be).
What I think you're really saying is that you would accept a merge if the first parent is reachable (via parent pointers) from the second parent. So you could take the output of
git rev-list --merges $oldrev..$newrev
and feed each resulting commit ID as the commit-ID arguments in
git merge-base --is-ancestor commit-ID^ commit-ID^2
rejecting if the merge-base command ever returns non-zero.
(Technically I guess you might also want to make sure the commit didn't have 3 or more parents.)
That still allows something like this
(origin/master)
|
x -- x -- x -------------------- M <--(master)
\ /
x -- x -------x -- x
\ /
x -- x
If avoiding that is a rule, it's significantly harder; you basically would want every merge to be reachable via first-parent pointers from the head commit. (But you can't just say that merges should be reachable from the head commit's first parent, because then you'd still allow
(origin/master)
|
x -- x -- x -------------------- M -- x<--(master)
\ /
x -- x -------x -- x
\ /
x -- x
which is the same thing.) So you could maybe, as a "first step" before you start looking for merges, do a
git rev-list --first-parent $oldrev..$newrev
and hang onto a list of all the commit ID values that returns, so as you find each merge you can confirm that it's in that list.
If this all sounds like no fun at all, I couldn't agree more; which is why I'm not going to the trouble of trying to assemble a working script from this advice, and why I recommend you either allow merges or don't instead of trying to take such an unusual middle ground.
|
stackoverflow
|
{
"language": "en",
"length": 809,
"provenance": "stackexchange_0000F.jsonl.gz:851337",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44500174"
}
|
c8af742e1ec1387b046a088a43a21e820da99b17
|
Stackoverflow Stackexchange
Q: How do I retrieve iOS Status Bar height in React-Native app? For Android I know I can use StatusBar.currentHeight but I'm not sure how to do so for iOS.
The answer to how to retrieve the size in Swift(native) has already been answered but I need this in a react native app.
A: You can use React Navigation, which already has support for the iPhone X.
Even if you don't want to use this library for some reason, you can still read its source code and copy the implementation into your own code.
|
Q: How do I retrieve iOS Status Bar height in React-Native app? For Android I know I can use StatusBar.currentHeight but I'm not sure how to do so for iOS.
The answer to how to retrieve the size in Swift(native) has already been answered but I need this in a react native app.
A: You can use React Navigation, which already has support for the iPhone X.
Even if you don't want to use this library for some reason, you can still read its source code and copy the implementation into your own code.
A: You can use this package, it has very good documentation.
react-native-status-bar-height
A: If you're using Expo you can use Constants.statusBarHeight.
import Constants from 'expo-constants';
const statusBarHeight = Constants.statusBarHeight;
If you're using Vanilla React Native with React Navigation you can use the following:
import { useSafeAreaInsets } from 'react-native-safe-area-context';
const insets = useSafeAreaInsets();
const statusBarHeight = insets.top;
See: https://reactnavigation.org/docs/handling-safe-area/#use-the-hook-for-more-control
Sample Code:
import * as React from 'react';
import { Text, View, StatusBar } from 'react-native';
import Constants from 'expo-constants';
import { useSafeAreaInsets, SafeAreaProvider } from 'react-native-safe-area-context';
export default function App() {
return (
<SafeAreaProvider>
<ChildScreen />
</SafeAreaProvider>
);
}
function ChildScreen() {
const insets = useSafeAreaInsets();
return (
<View style={{ flex: 1, justifyContent: 'center'}}>
<Text>
{insets.top}
</Text>
<Text>
{Constants.statusBarHeight}
</Text>
<Text>
{StatusBar.currentHeight ?? 'N/A'}
</Text>
</View>
);
}
Output:
                                   Samsung Galaxy S10 5G   iPhone 8 Plus   iPhone 11 Pro Max   Web
insets.top                         39.71428680419922       20              44                  0
Constants.statusBarHeight          39                      20              44                  0
StatusBar.currentHeight ?? 'N/A'   39.42856979370117       N/A             N/A                 N/A
Live code: https://snack.expo.dev/@dcangulo/statusbarheight
|
stackoverflow
|
{
"language": "en",
"length": 252,
"provenance": "stackexchange_0000F.jsonl.gz:851344",
"question_score": "24",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44500190"
}
|
edb0630177a0c47c935fab2fce077c95c5521946
|
Stackoverflow Stackexchange
Q: Hiding zero values in Excel chart or diagram, legend and labeling As you can see in the attached image, my diagram reads from the table on the left. However, the values change and sometimes a value is 0. In this case the entire row of the table should be shown neither in the diagram nor in the legend or labeling.
How can I implement such a diagram? Example: Table with diagram (In the example, the gas costs should not be displayed)
A: *
*Right click at one of the data labels, and select Format Data Labels
from the context menu
*In the Format Data Labels dialog, Click Number in left pane, then select Custom from the Category list box, and type #"" into the Format Code text box, and click Add button to add it to Type list box.
*Click Close button to close the dialog. Then you can see all zero data labels are hidden.
|
Q: Hiding zero values in Excel chart or diagram, legend and labeling As you can see in the attached image, my diagram reads from the table on the left. However, the values change and sometimes a value is 0. In this case the entire row of the table should be shown neither in the diagram nor in the legend or labeling.
How can I implement such a diagram? Example: Table with diagram (In the example, the gas costs should not be displayed)
A: *
*Right click at one of the data labels, and select Format Data Labels
from the context menu
*In the Format Data Labels dialog, Click Number in left pane, then select Custom from the Category list box, and type #"" into the Format Code text box, and click Add button to add it to Type list box.
*Click Close button to close the dialog. Then you can see all zero data labels are hidden.
|
stackoverflow
|
{
"language": "en",
"length": 160,
"provenance": "stackexchange_0000F.jsonl.gz:851348",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44500196"
}
|
6e21cd4b330591d8586c6705bcd95d944d00f28d
|
Stackoverflow Stackexchange
Q: Decimal module and complex numbers in Python Is there a way for manipulating complex numbers in more than floating point precision using python?
For example to get a better precision on real numbers I can easily use the Decimal module. However it doesn't appear to work with complex numbers.
A: Disclaimer: I maintain gmpy2.
gmpy2 supports extended precision integer, rational, real, and complex numbers. It also supports a variety of scientific functions.
Another alternative is mpmath. mpmath is written in pure Python so it may be easier to install. If gmpy2 is available, it will be used automatically to improve performance. mpmath supports a wider variety of functions.
Note that both gmpy2 and mpmath support binary (radix-2) floating point arithmetic while the Decimal module supports decimal (radix-10) arithmetic.
|
Q: Decimal module and complex numbers in Python Is there a way for manipulating complex numbers in more than floating point precision using python?
For example to get a better precision on real numbers I can easily use the Decimal module. However it doesn't appear to work with complex numbers.
A: Disclaimer: I maintain gmpy2.
gmpy2 supports extended precision integer, rational, real, and complex numbers. It also supports a variety of scientific functions.
Another alternative is mpmath. mpmath is written in pure Python so it may be easier to install. If gmpy2 is available, it will be used automatically to improve performance. mpmath supports a wider variety of functions.
Note that both gmpy2 and mpmath support binary (radix-2) floating point arithmetic while the Decimal module supports decimal (radix-10) arithmetic.
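For illustration, a short mpmath sketch (not part of the original answer) showing complex arithmetic at 50 significant digits:
from mpmath import mp, mpc, sqrt, exp

mp.dps = 50                      # 50 significant decimal digits of working precision
z = mpc('1.5', '2.25')           # arbitrary-precision complex number
print(sqrt(z))                   # principal square root at full working precision
print(exp(z) * exp(-z))          # stays at 1.0 to roughly 50 digits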
A: There isn't anything built-in. You'd have to implement it yourself, or use a third-party library.
|
stackoverflow
|
{
"language": "en",
"length": 145,
"provenance": "stackexchange_0000F.jsonl.gz:851369",
"question_score": "7",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44500275"
}
|
d7c5d04ab56f26bd0f2535255131af4e1fb796e9
|
Stackoverflow Stackexchange
Q: How to center an image in the middle of a div? I need to center an image in the middle of a div.
<div class="main">
<img...>
</div>
In the example below the image is centered horizontally, but not vertically.
https://jsfiddle.net/web_garaux/tng7db0k/
A: Simple and easy method to do this,
.test {
background-color: orange;
width: 700px;
height: 700px;
display:flex;
align-items:center;
justify-content:center;
}
<div class="test">
<img src="http://via.placeholder.com/350x150">
</div>
|
Q: How to center an image in the middle of a div? I need to center an image in the middle of a div.
<div class="main">
<img...>
</div>
In the example below the image is centered horizontally, but not vertically.
https://jsfiddle.net/web_garaux/tng7db0k/
A: Simple and easy method to do this,
.test {
background-color: orange;
width: 700px;
height: 700px;
display:flex;
align-items:center;
justify-content:center;
}
<div class="test">
<img src="http://via.placeholder.com/350x150">
</div>
A: To vertically center your div, you can use positioning. Just apply
position: relative;
top: 50%;
transform: translateY(-50%);
to your image, and it will be vertically centered.
.test {
background-color: orange;
width: 700px;
height: 700px;
text-align: center;
}
.test>img {
position: relative;
top: 50%;
transform: translateY(-50%);
}
<div class="test">
<img src="http://via.placeholder.com/350x150">
</div>
A: You can use the simplest way, display: table-cell;, which allows you to use the vertical-align attribute:
.test {
background-color: orange;
width: 500px;
height: 300px;
text-align: center;
display: table-cell;
vertical-align: middle;
}
<div class="test">
<img src="http://via.placeholder.com/350x150">
</div>
A: You can use display: flex;
DEMO
.test {
display: flex;
justify-content: center;
background-color: orange;
width: 700px;
height: 700px;
}
.test img {
align-self: center;
}
A: Cleanest solution would be to make your div display:flex and align/justify content to center.
.test {
background-color: orange;
width: 700px;
height: 700px;
display: flex;
align-items: center;
justify-content: center;
}
Your updated Fiddle: https://jsfiddle.net/y9j21ocr/1/
More reads on flexbox (recommended)
A: It is really easy if you can set the image as the div's background:
.test {
background-color: orange;
width: 700px;
height: 700px;
text-align: center;
background-repeat: no-repeat;
background-position: center center;
}
<div class="test" style="background-image:url(http://via.placeholder.com/350x150);">
</div>
If you don't want to use an inline style, you can do:
<div class="test">
<img src="http://via.placeholder.com/350x150">
</div>
.test > img{
position:absolute;
top:50%;
left:50%;
transform:translate(-50%,-50%);
}
.test{position: relative}
|
stackoverflow
|
{
"language": "en",
"length": 281,
"provenance": "stackexchange_0000F.jsonl.gz:851406",
"question_score": "6",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44500404"
}
|
79b5b855c78bbc344600b605c56461817ec4cf67
|
Stackoverflow Stackexchange
Q: How to get VSCode to add a comma after the `}` it has just autocompleted? If I try to write a method inside an object initializer, for example by typing:
myFunction() {
then vscode adds a }, leaving me to manually add the ,.
Is there a way to get it to always add },?
I should note that in my coding standards, all object properties should end with a comma (ie including the final one).
I'm running vscode 1.13.0 on Windows 10 (outside WSL).
A: You can use ESLint with the ESLint extension.
ESLint is able to "Fix" some of the rules automatically. For this one — comma-dangle.
.eslintrc or .eslintrc.json or some other eslint config file:
{
//...
"rules": {
"comma-dangle": [1, {
"objects": "always",
"arrays": "ignore",
"imports": "ignore",
"exports": "ignore",
"functions": "ignore"
}]
}
}
settings.json:
"eslint.autoFixOnSave": true
P.S. ESLint can auto fix some other things like indentation, spacing, semicolons, parentheses, curly braces, ...
|
Q: How to get VSCode to add a comma after the `}` it has just autocompleted? If I try to write a method inside an object initializer, for example by typing:
myFunction() {
then vscode adds a }, leaving me to manually add the ,.
Is there a way to get it to always add },?
I should note that in my coding standards, all object properties should end with a comma (ie including the final one).
I'm running vscode 1.13.0 on Windows 10 (outside WSL).
A: You can use ESLint with the ESLint extension.
ESLint is able to "Fix" some of the rules automatically. For this one — comma-dangle.
.eslintrc or .eslintrc.json or some other eslint config file:
{
//...
"rules": {
"comma-dangle": [1, {
"objects": "always",
"arrays": "ignore",
"imports": "ignore",
"exports": "ignore",
"functions": "ignore"
}]
}
}
settings.json:
"eslint.autoFixOnSave": true
P.S. ESLint can auto fix some other things like indentation, spacing, semicolons, parentheses, curly braces, ...
A: Automatically add JSON/JavaScript Object comma.
https://marketplace.visualstudio.com/items?itemName=LeonQin.auto-insert-comma
|
stackoverflow
|
{
"language": "en",
"length": 165,
"provenance": "stackexchange_0000F.jsonl.gz:851433",
"question_score": "9",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44500470"
}
|
b2a2d0df43893f2d1cc248b1fde02d36008d8696
|
Stackoverflow Stackexchange
Q: Python bytes(...) on custom class I have a custom class in python which I need to pass to an external API. The API only requires to be able to invoke bytes(...) on my class.
My question is, how can I decide the behavior of invoking bytes() on my custom python class?
A: You can give your custom class a __bytes__ method:
Called by bytes to compute a byte-string representation of an object. This should return a bytes object.
Demo:
>>> class Foo:
... def __bytes__(self):
... return b'This is a bytes result for this instance'
...
>>> bytes(Foo())
b'This is a bytes result for this instance'
|
Q: Python bytes(...) on custom class I have a custom class in python which I need to pass to an external API. The API only requires to be able to invoke bytes(...) on my class.
My question is, how can I decide the behavior of invoking bytes() on my custom python class?
A: You can give your custom class a __bytes__ method:
Called by bytes to compute a byte-string representation of an object. This should return a bytes object.
Demo:
>>> class Foo:
... def __bytes__(self):
... return b'This is a bytes result for this instance'
...
>>> bytes(Foo())
b'This is a bytes result for this instance'
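A slightly more realistic sketch (my own illustration, not from the answer): an object that serializes its fields with struct when the external API calls bytes() on it.
import struct

class Point:
    """Toy value object; bytes(Point(...)) packs it as two little-endian 32-bit ints."""
    def __init__(self, x, y):
        self.x = x
        self.y = y

    def __bytes__(self):
        return struct.pack('<ii', self.x, self.y)

payload = bytes(Point(3, 7))
print(payload)                        # b'\x03\x00\x00\x00\x07\x00\x00\x00'
print(struct.unpack('<ii', payload))  # (3, 7)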
|
stackoverflow
|
{
"language": "en",
"length": 107,
"provenance": "stackexchange_0000F.jsonl.gz:851440",
"question_score": "7",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44500484"
}
|
e1e1b3aa48250b2b593177afa55e6a289e11edfc
|
Stackoverflow Stackexchange
Q: Rest Assured: JSON path body doesn't match doubles I'm trying to test an API with Rest Assured. I get an AssertionError when checking a double value.
The code for checking the double:
given().body(getTest()).contentType("application/json\r\n").
when()
.port(port)
.basePath("/fff/test")
.post("insert")
.then()
.assertThat()
.statusCode(200)
.body("versie", equalTo(11.0));
This is the output:
java.lang.AssertionError: 1 expectation failed.
JSON path versie doesn't match.
Expected: <11.0>
Actual: 11.0
When I change the line with .body to:
.body("versie", equalTo(""+11.0));
The output is:
java.lang.AssertionError: 1 expectation failed.
JSON path versie doesn't match.
Expected: 11.0
Actual: 11.0
Does anyone know how I can fix this? Because I really don't know how to solve this.
EDIT
The JSON:
{
"id": 1,
"naam": "Test X",
"versie": 11.0
}
A: .body("versie", equalTo(11.0f));
This did work for me.
The answer is based on a comment from @StanislavL.
|
Q: Rest Assured: JSON path body doesn't match doubles I'm trying to test an API with Rest Assured. I get an AssertionError when checking a double value.
The code for checking the double:
given().body(getTest()).contentType("application/json\r\n").
when()
.port(port)
.basePath("/fff/test")
.post("insert")
.then()
.assertThat()
.statusCode(200)
.body("versie", equalTo(11.0));
This is the output:
java.lang.AssertionError: 1 expectation failed.
JSON path versie doesn't match.
Expected: <11.0>
Actual: 11.0
When I change the line with .body to:
.body("versie", equalTo(""+11.0));
The output is:
java.lang.AssertionError: 1 expectation failed.
JSON path versie doesn't match.
Expected: 11.0
Actual: 11.0
Does anyone know how I can fix this? Because I really don't know how to solve this.
EDIT
The JSON:
{
"id": 1,
"naam": "Test X",
"versie": 11.0
}
A: .body("versie", equalTo(11.0f));
This did work for me.
The answer is based on a comment from @StanislavL.
A: Try a cast to (float) inside equalTo: .body("value", equalTo((float) 12.9))
|
stackoverflow
|
{
"language": "en",
"length": 144,
"provenance": "stackexchange_0000F.jsonl.gz:851476",
"question_score": "17",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44500643"
}
|
1c11f0af7da21a9b587740b3f44757a057cc927c
|
Stackoverflow Stackexchange
Q: Using JetBrains DataGrip - where do I find Functions? I just created a function, but I cannot find it in the schema. To make sure that I was reading the latest schema I even restarted DataGrip.
Where are Functions found in DataGrip?
A: In Routines section of the database tree.
You can also turn on 'Separate Procedures and Functions' to see them in different folders.
|
Q: Using JetBrains DataGrip - where do I find Functions? I just created a function, but I cannot find it in the schema. To make sure that I was reading the latest schema I even restarted DataGrip.
Where are Functions found in DataGrip?
A: In Routines section of the database tree.
You can also turn on 'Separate Procedures and Functions' to see them in different folders.
|
stackoverflow
|
{
"language": "en",
"length": 66,
"provenance": "stackexchange_0000F.jsonl.gz:851477",
"question_score": "9",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44500647"
}
|
1bffba86a2693a5fd87defbff47f3e69bf807b09
|
Stackoverflow Stackexchange
Q: Dynamic JMS Serializer types Is it possible to have one class, with 2 types (1 for serialization and 1 for deserialization) on the same property ?
For instance, I use an API that allow me to send an address as a string, and I receive the same address as an object. Like this:
Request:
{
"address": "12 rue rivoli, 75001 Paris"
}
Response
{
"address": {
"street": "12 Rue de Rivoli",
"postcode": "75004",
"city": "Paris",
"country": "France"
}
}
A: Yes, you can achieve this using the @Accessor Annotation Tag of the JmsSerializer:
This annotation can be defined on a property to specify
which public method should be called to retrieve, or set the value of
the given property:
in your entity this could look like this:
# AppBundle/Entity/User
<?php
use JMS\Serializer\Annotation\Accessor;
class User
{
/**
* @var AppBundle\Entity\Address
*
* @Accessor(getter="getAddressAsString",setter="setAddress")
*/
private $address;
// ...
public function getAddressAsString()
{
return sprintf('%s, %s %s', $this->address->getStreet(), $this->address->getPostcode(), $this->address->getCity());
}
public function setAddress(Address $address)
{
$this->address = $address;
}
}
|
Q: Dynamic JMS Serializer types Is it possible to have one class, with 2 types (1 for serialization and 1 for deserialization) on the same property ?
For instance, I use an API that allow me to send an address as a string, and I receive the same address as an object. Like this:
Request:
{
"address": "12 rue rivoli, 75001 Paris"
}
Response
{
"address": {
"street": "12 Rue de Rivoli",
"postcode": "75004",
"city": "Paris",
"country": "France"
}
}
A: Yes, you can achieve this using the @Accessor Annotation Tag of the JmsSerializer:
This annotation can be defined on a property to specify
which public method should be called to retrieve, or set the value of
the given property:
in your entity this could look like this:
# AppBundle/Entity/User
<?php
use JMS\Serializer\Annotation\Accessor;
class User
{
/**
* @var AppBundle\Entity\Address
*
* @Accessor(getter="getAddressAsString",setter="setAddress")
*/
private $address;
// ...
public function getAddressAsString()
{
return sprintf('%s, %s %s', $this->address->getStreet(), $this->address->getPostcode(), $this->address->getCity());
}
public function setAddress(Address $address)
{
$this->address = $address;
}
}
A: So, here is a solution which worked for me:
*
*Define a type which you will serialize field into, i.e.
/**
*
*@var string
*
*
*@Type("string")
*/
private $address;
*Implement subscriber/listener on serializer.pre_deserialize event, i.e.
class MyPreDeserializationSubscriber implements EventSubscriberInterface
{
public static function getSubscribedEvents()
{
return [
[
'event' => 'serializer.pre_deserialize',
'method' => 'onPreDeserialize',
'class' => Video::class,
'format' => 'json',
]
];
}
public function onPreDeserialize(PreDeserializeEvent $event)
{
$data = $event->getData();
$data['address'] = implode(', ', $data['address']);
$event->setData($data);
}
}
|
stackoverflow
|
{
"language": "en",
"length": 251,
"provenance": "stackexchange_0000F.jsonl.gz:851484",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44500666"
}
|
1f215e620d750bee09b21d1c8f529c0bc12f30e7
|
Stackoverflow Stackexchange
Q: Pivotal Cloud Foundry - Connecting to external oracle database I am creating a Spring Boot application that connects to an Oracle database which is not managed by (or resides outside) the PCF. In my local development environment I configured the database connection details in the application.properties file. Could someone share how to achieve this in PCF without hard-coding the details in application.properties?
A: Cloud Foundry provides you with something called a User Provided Service, which allows you to connect to any other service (such as an Oracle database or a legacy ERP system) that is not running on CF.
So in your CF environment you can create an Oracle User Provided Service like
cf create-user-provided-service oracle-database-service -p '{"uri":"oracle://username:password@hostname:1521/mydatabase"}'
Then you can bind it to your existing application on CF using
cf bind-service <app name> <service name>
eg : cf bind-service my-application oracle-database-service
and then just restart the app using cf restart
PS: you will still need the appropriate JDBC driver in your application; you can use Maven or Gradle for it, or download one from the official site and include it in your project.
Link to the Oracle site for the JDBC driver:
http://www.oracle.com/technetwork/database/enterprise-edition/jdbc-112010-090769.html
|
Q: Pivotal Cloud Foundry - Connecting to external oracle database I am creating a Spring Boot application that connects to an Oracle database which is not managed by (or resides outside) the PCF. In my local development environment I configured the database connection details in the application.properties file. Could someone share how to achieve this in PCF without hard-coding the details in application.properties?
A: Cloud Foundry provides you with something called a User Provided Service, which allows you to connect to any other service (such as an Oracle database or a legacy ERP system) that is not running on CF.
So in your CF environment you can create an Oracle User Provided Service like
cf create-user-provided-service oracle-database-service -p '{"uri":"oracle://username:password@hostname:1521/mydatabase"}'
Then you can bind it to your existing application on CF using
cf bind-service <app name> <service name>
eg : cf bind-service my-application oracle-database-service
and then just restart the app using cf restart
PS: you will still need the appropriate JDBC driver in your application; you can use Maven or Gradle for it, or download one from the official site and include it in your project.
Link to the Oracle site for the JDBC driver:
http://www.oracle.com/technetwork/database/enterprise-edition/jdbc-112010-090769.html
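As a further sketch beyond the original answer: on Cloud Foundry, Spring Boot flattens VCAP_SERVICES into vcap.services.* properties, so the bound credentials can be referenced from application.properties. This assumes the user-provided service is created with separate jdbcUrl, username and password keys (rather than the single uri shown above); the values below are placeholders:
# cf create-user-provided-service oracle-database-service \
#   -p '{"jdbcUrl":"jdbc:oracle:thin:@hostname:1521/mydatabase","username":"dbuser","password":"dbpassword"}'
spring.datasource.url=${vcap.services.oracle-database-service.credentials.jdbcUrl}
spring.datasource.username=${vcap.services.oracle-database-service.credentials.username}
spring.datasource.password=${vcap.services.oracle-database-service.credentials.password}
spring.datasource.driver-class-name=oracle.jdbc.OracleDriver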
|
stackoverflow
|
{
"language": "en",
"length": 194,
"provenance": "stackexchange_0000F.jsonl.gz:851489",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44500674"
}
|
05bb6a97803ed943daf68c257fbfbd5b3d5508b8
|
Stackoverflow Stackexchange
Q: How to use HttpWebResponse and WebResponse in .NET Core? I'm porting an old C# shared library to a .NET Standard library. However, I'm facing a lot of System.Net.HttpWebResponse and System.Net.WebResponse references. They used to exist in .NET Framework 4.5, but I'm not able to find anything similar in .NET Standard.
What can I do to be able to use those?
A: You are actually using System.Net.Requests, which is not available for .NET Standard 1.6 nor for .NET Core.
Try to use the classes in System.Net.Http instead.
Or you can install System.Net.Requests NuGet package through Package Manager Console:
Install-Package System.Net.Requests -Version 4.3.0
This package contains classes compatible with System.Net.Requests and can be used as a replacement for it.
|
Q: How to use HttpWebResponse and WebResponse in .NET Core? I'm porting an old C# shared library to a .NET Standard library. However, I'm facing a lot of System.Net.HttpWebResponse and System.Net.WebResponse references. They used to exist in .NET Framework 4.5, but I'm not able to find anything similar in .NET Standard.
What can I do to be able to use those?
A: You are actually using System.Net.Requests, which is not available for .NET Standard 1.6 nor for .NET Core.
Try to use the classes in System.Net.Http instead.
Or you can install System.Net.Requests NuGet package through Package Manager Console:
Install-Package System.Net.Requests -Version 4.3.0
This package contains classes compatible with System.Net.Requests and can be used as a replacement for it.
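For illustration, a minimal sketch of the System.Net.Http route - the URL is a placeholder, everything else is the standard HttpClient API:
using System;
using System.Net.Http;
using System.Threading.Tasks;
class Example
{
    // Reuse a single HttpClient instance instead of creating one per request.
    private static readonly HttpClient client = new HttpClient();
    static async Task Main()
    {
        // Roughly the HttpClient equivalent of WebRequest.Create(...).GetResponse()
        // followed by reading the response stream.
        HttpResponseMessage response = await client.GetAsync("https://example.com/api/values");
        response.EnsureSuccessStatusCode();
        string body = await response.Content.ReadAsStringAsync();
        Console.WriteLine(body);
    }
}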
|
stackoverflow
|
{
"language": "en",
"length": 114,
"provenance": "stackexchange_0000F.jsonl.gz:851499",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44500708"
}
|
f80d5e1bf45651de6c85e9cb5a84388ff9724ef4
|
Stackoverflow Stackexchange
Q: create time steps array in ruby on rails in 30 or 60 minute increments I am creating a simple appointment keeper in Rails 4, and I need to create an array of times for the user to choose between a min/max. The user can also choose if they are creating a 30 or 60 min appointment.
I tried to create the array like this
def time_range
return [] if time_start.blank? || time_end.blank?
(time_start.to_time .. time_end.to_time).to_a
end
but I keep getting the error
can't iterate from Time
I don't know how I could break this out into increments, either.
I'm just showing them as a list
ul.list-unstyled
- @meeting.time_range.each do |calendar_time|
li
= calendar_time
A: Make use of step
(Time.now.to_i..1.day.from_now.to_i).step(30.minutes).each do |time|
puts Time.at(time)
end
So the method will be
def time_range
return [] if time_start.blank? || time_end.blank?
(time_start.to_time.to_i..time_end.to_time.to_i)
end
And you can use it as
ul.list-unstyled
- @meeting.time_range.step(30.minutes).each do |calendar_time|
li
= Time.at(calendar_time)
|
Q: create time steps array in ruby on rails in 30 or 60 minute increments I am creating a simple appointment keeper in Rails 4, and I need to create an array of times for the user to choose between a min/max. The user can also choose if they are creating a 30 or 60 min appointment.
I tried to create the array like this
def time_range
return [] if time_start.blank? || time_end.blank?
(time_start.to_time .. time_end.to_time).to_a
end
but I keep getting the error
can't iterate from Time
I don't know how I could break this out into increments, either.
I'm just showing them as a list
ul.list-unstyled
- @meeting.time_range.each do |calendar_time|
li
= calendar_time
A: Make use of step
(Time.now.to_i..1.day.from_now.to_i).step(30.minutes).each do |time|
puts Time.at(time)
end
So the method will be
def time_range
return [] if time_start.blank? || time_end.blank?
(time_start.to_time.to_i..time_end.to_time.to_i)
end
And you can use it as
ul.list-unstyled
- @meeting.time_range.step(30.minutes).each do |calendar_time|
li
= Time.at(calendar_time)
|
stackoverflow
|
{
"language": "en",
"length": 154,
"provenance": "stackexchange_0000F.jsonl.gz:851504",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44500719"
}
|
f1babb5c8ad9cb2bad11d81760b0217a9187fce0
|
Stackoverflow Stackexchange
Q: Project depends on com.google.android.support:wearable:2.0.2 I've created a demo Wear project with Android Studio without touching anything. In build.gradle this error occurs, although I find it curious that the app still compiles.
Project depends on com.google.android.support:wearable:2.0.2, so it
must also depend (as a provided dependency) on
com.google.android.wearable:wearable:2.0.2
dependencies {
compile fileTree(dir: 'libs', include: ['*.jar'])
compile 'com.google.android.support:wearable:1.4.0'
compile 'com.google.android.gms:play-services-wearable:9.4.0'
}
A: Just add
provided 'com.google.android.wearable:wearable:2.0.2'
to your dependencies
EDIT:
provided is deprecated now, use compileOnly like so:
compileOnly 'com.google.android.wearable:wearable:2.0.2'
|
Q: Project depends on com.google.android.support:wearable:2.0.2 I've created a demo Wear project with Android Studio without touching anything. In build.gradle this error occurs, although I find it curious that the app still compiles.
Project depends on com.google.android.support:wearable:2.0.2, so it
must also depend (as a provided dependency) on
com.google.android.wearable:wearable:2.0.2
dependencies {
compile fileTree(dir: 'libs', include: ['*.jar'])
compile 'com.google.android.support:wearable:1.4.0'
compile 'com.google.android.gms:play-services-wearable:9.4.0'
}
A: Just add
provided 'com.google.android.wearable:wearable:2.0.2'
to your dependencies
EDIT:
provided is deprecated now, use compileOnly like so:
compileOnly 'com.google.android.wearable:wearable:2.0.2'
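Put together, the dependencies block would look roughly like this (a sketch - version numbers follow the error message above, so adjust them to your project):
dependencies {
    compile fileTree(dir: 'libs', include: ['*.jar'])
    compile 'com.google.android.support:wearable:2.0.2'
    compile 'com.google.android.gms:play-services-wearable:9.4.0'
    // Needed at compile time only; the watch platform provides it at runtime.
    compileOnly 'com.google.android.wearable:wearable:2.0.2'
}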
|
stackoverflow
|
{
"language": "en",
"length": 78,
"provenance": "stackexchange_0000F.jsonl.gz:851518",
"question_score": "7",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44500752"
}
|
a101f7020cc476eb8cea93fc426dbcdc8eed2e47
|
Stackoverflow Stackexchange
Q: Is there a way to list the attributes of a class without instantiating an object? In Python 3.5, say I have:
class Foo:
def __init__(self, bar, barbar):
self.bar = bar
self.barbar = barbar
I want to get the list ["bar", "barbar"] from the class.
I know I can do:
foo = Foo(1, 2)
foo.__dict__.keys()
Is there a way to get ["bar", "barbar"] without instantiating an object?
A: No, because the attributes are dynamic (so-called instance attributes). Consider the following:
class Foo:
def __init__( self ):
self.bar = 1
def twice( self ):
self.barbar = 2
f = Foo()
print( list(f.__dict__.keys() ) )
f.twice()
print( list(f.__dict__.keys() ) )
In the first print, only f.bar was set, so that's the only attribute shown when printing the attribute keys. But after calling f.twice(), you create a new attribute on f, and printing the keys now shows both bar and barbar.
|
Q: Is there a way to list the attributes of a class without instantiating an object? In Python 3.5, say I have:
class Foo:
def __init__(self, bar, barbar):
self.bar = bar
self.barbar = barbar
I want to get the list ["bar", "barbar"] from the class.
I know I can do:
foo = Foo(1, 2)
foo.__dict__.keys()
Is there a way to get ["bar", "barbar"] without instantiating an object?
A: No, because the attributes are dynamic (so-called instance attributes). Consider the following:
class Foo:
def __init__( self ):
self.bar = 1
def twice( self ):
self.barbar = 2
f = Foo()
print( list(f.__dict__.keys() ) )
f.twice()
print( list(f.__dict__.keys() ) )
In the first print, only f.bar was set, so that's the only attribute shown when printing the attribute keys. But after calling f.twice(), you create a new attribute on f, and printing the keys now shows both bar and barbar.
A: Warning -
The following isn't foolproof in always providing 100% correct data. If you end up having something like self.y = int(1) in your __init__, you will end up including the int in your collection of attributes, which is not a wanted result for your goals. Furthermore, if you happen to add a dynamic attribute somewhere in your code like Foo.some_attr = 'pork', then you will never see that either. Be aware of what it is that you are inspecting at what point of your code, and understand why you have and don't have those inclusions in your result. There are probably other "breakages" that will not give you the full 100% expectation of what are all the attributes associated with this class, but nonetheless, the following should give you something that you might be looking for.
However, I strongly suggest you take the advice of the other answers here and the duplicate that was flagged that explains why you can't/should not do this.
The following is a form of solution you can try to mess around with:
I will expand on the inspect answer.
However, I do question (and probably would advice against) the validity of doing something like this in production-ready code. For investigative purposes, sure, knock yourself out.
By using the inspect module as indicated already in one of the other answers, you can use the getmembers method which you can then iterate through the attributes and inspect the appropriate data you wish to investigate.
For example, you are questioning the dynamic attributes in the __init__
Therefore, we can take this example to illustrate:
from inspect import getmembers
class Foo:
def __init__(self, x):
self.x = x
self.y = 1
self.z = 'chicken'
members = getmembers(Foo)
for member in members:
if '__init__' in member:
print(member[1].__code__.co_names)
Your output will be a tuple:
('x', 'y', 'z')
Ultimately, as you inspect the class Foo to get its members, there are attributes you can further investigate as you iterate each member. Each member has attributes to further inspect, and so on. For this particular example, we focus on __init__ and inspect the __code__ (per documentation: The __code__ object representing the compiled function body) attribute which has an attribute called co_names which provides a tuple of members as indicated above with the output of running the code.
A: Try classname.__annotations__.keys()
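A minimal sketch of that approach - note it only works when the class declares its attributes with class-level type annotations, which the Foo in the question does not, so this assumes you can add them:
class Foo:
    bar: int       # class-level annotations; no instance needed
    barbar: int
    def __init__(self, bar, barbar):
        self.bar = bar
        self.barbar = barbar
print(list(Foo.__annotations__.keys()))  # ['bar', 'barbar']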
A: As Lærne mentioned, attributes declared inside functions (like __init__) are dynamic. They effectively don't exist until the __init__ function is called.
However, there is a way to do what you want.
You can create class attributes, like so:
class Foo:
bar = None
barbar = None
def __init__(self, bar, barbar):
self.bar = bar
self.barbar = barbar
And you can access those attributes like this:
[var for var in vars(Foo).keys() if not var.startswith('__')]
Which gives this result:
['bar', 'barbar']
|
stackoverflow
|
{
"language": "en",
"length": 617,
"provenance": "stackexchange_0000F.jsonl.gz:851552",
"question_score": "30",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44500842"
}
|
5a7ddfd500008b2c15e74aa7c84bf4d05780e8e8
|
Stackoverflow Stackexchange
Q: Jestjs tests fail in Gitlab CI I'm trying to run Jest tests in Gitlab CI.
The tests succeed locally but when I run them on Gitlab CI I get the following error:
Test suite failed to run
ProcessTerminatedError: cancel after 2 retries!
at Farm.<anonymous> (node_modules/worker-farm/lib/farm.js:81:25)
at Array.forEach (native)
at Farm.<anonymous> (node_modules/worker-farm/lib/farm.js:75:36)
at ontimeout (timers.js:386:14)
at tryOnTimeout (timers.js:250:5)
at Timer.listOnTimeout (timers.js:214:5)
A worker process has quit unexpectedly! Most likely this is an
initialization error.
error Command failed with exit code 1.
I tried to add the --runInBand option but this results in a segmentation fault. And I also tried the --maxWorkers option but this results in the same error. Any ideas how to resolve this issue?
A: I resolved this issue. The problem in my case was not Jest itself but canvas-prebuilt. I'm still not sure what exactly the problem was, but I replaced canvas-prebuilt with the canvas npm package and installed the required packages. Now everything works fine.
|
Q: Jestjs tests fail in Gitlab CI I'm trying to run Jest tests in Gitlab CI.
The tests succeed locally but when I run them on Gitlab CI I get the following error:
Test suite failed to run
ProcessTerminatedError: cancel after 2 retries!
at Farm.<anonymous> (node_modules/worker-farm/lib/farm.js:81:25)
at Array.forEach (native)
at Farm.<anonymous> (node_modules/worker-farm/lib/farm.js:75:36)
at ontimeout (timers.js:386:14)
at tryOnTimeout (timers.js:250:5)
at Timer.listOnTimeout (timers.js:214:5)
A worker process has quit unexpectedly! Most likely this is an
initialization error.
error Command failed with exit code 1.
I tried to add the --runInBand option but this results in a segmentation fault. And I also tried the --maxWorkers option but this results in the same error. Any ideas how to resolve this issue?
A: I resolved this issue. The problem in my case was not Jest itself but canvas-prebuilt. I'm still not sure what exactly the problem was, but I replaced canvas-prebuilt with the canvas npm package and installed the required packages. Now everything works fine.
|
stackoverflow
|
{
"language": "en",
"length": 159,
"provenance": "stackexchange_0000F.jsonl.gz:851569",
"question_score": "5",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44500905"
}
|
07c9a4ac868d5243b248b7d3d4a1fabfa688df4d
|
Stackoverflow Stackexchange
Q: Why does SqlParametersCollection require "@" at the start of the parameter name every time? Well, the title of the question is pretty much the question itself: why is it that I need to write "@" every time in front of the SQL parameter name, like that?
command.Parameters.AddWithValue("@CustomerID", this.CustomerId);
I know that this is how parameters are declared in SQL, but shouldn't that be handled by the framework, since it is always like that?
A: I assume that you use Microsoft SQL Server. MS SQL names its variables this way, and therefore you need to refer to them this way, too.
|
Q: Why does SqlParametersCollection require "@" at the start of the parameter name every time? Well, the title of the question is pretty much the question itself: why is it that I need to write "@" every time in front of the SQL parameter name, like that?
command.Parameters.AddWithValue("@CustomerID", this.CustomerId);
I know that this is how parameters are declared in SQL, but shouldn't that be handled by the framework, since it is always like that?
A: I assume that you use Microsoft SQL Server. MS SQL names its variables this way, and therefore you need to refer to them this way, too.
A: Variable names in MS SQL Server must begin with an at (@) sign. Check how to declare variables in SQL for more info.
|
stackoverflow
|
{
"language": "en",
"length": 123,
"provenance": "stackexchange_0000F.jsonl.gz:851657",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44501151"
}
|
4647cb4a8a5c1783a2e3041785a941bed73a59da
|
Stackoverflow Stackexchange
Q: Sonarlint not processing files due to analysis errors I am getting the following error when trying to process files in SonarLint
"File won't be refreshed because there were errors during analysis:"
Unfortunately SonarLint does not report which errors it is
encountering. I set sonar.log.level to DEBUG and sonar.verbose to
true.
My project builds just fine and unit tests run. I am running version 2.10.0.1922 of SonarLint and version 2017.1.4 of IntelliJ. I have tried this on JDK 1.8.121 and JDK 1.8.131 but I get the same results.
Is there any way to retrieve what the errors are? Am I missing something with the logging parameters?
A: Possible solution:
https://community.sonarsource.com/t/sonarlint-error-during-analysis-with-latest-intellij-and-remote-sonarqube-server/13440/7
*
*Install the Choose Runtime plugin (https://plugins.jetbrains.com/plugin/12836-choose-runtime) and select a Java 8 runtime; this appears to work fine
Another solution is to upgrade SonarJava on the server to a version greater than 5.8
|
Q: Sonarlint not processing files due to analysis errors I am getting the following error when trying to process files in SonarLint
"File won't be refreshed because there were errors during analysis:"
Unfortunately SonarLint does not report which errors it is
encountering. I set sonar.log.level to DEBUG and sonar.verbose to
true.
My project builds just fine and unit tests run. I am running version 2.10.0.1922 of SonarLint and version 2017.1.4 of IntelliJ. I have tried this on JDK 1.8.121 and JDK 1.8.131 but I get the same results.
Is there any way to retrieve what the errors are? Am I missing something with the logging parameters?
A: Possible solution:
https://community.sonarsource.com/t/sonarlint-error-during-analysis-with-latest-intellij-and-remote-sonarqube-server/13440/7
*
*Install the Choose Runtime plugin (https://plugins.jetbrains.com/plugin/12836-choose-runtime) and select a Java 8 runtime; this appears to work fine
Another solution is to upgrade SonarJava on the server to a version greater than 5.8
A: Now I have an update in place and am finally getting some indication of what the problem was. I should be able to take it from here.
So the issue is resolved by updating SonarLint
|
stackoverflow
|
{
"language": "en",
"length": 174,
"provenance": "stackexchange_0000F.jsonl.gz:851678",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44501228"
}
|
f1f16bfb67e45a2de1b04affdeacb47564d1e1d9
|
Stackoverflow Stackexchange
Q: Pandas open_excel() fails with xlrd.biffh.XLRDError: Can't find workbook in OLE2 compound document I'm trying to use pandas to parse an .xlsm document. My code worked perfectly with the example file I was given, but once I got the rest of the documents, it failed with the above error. Here's the offending stack trace:
Traceback (most recent call last):
File "@@@@@@@@/UnsupervisedCAM.py", line 9, in <module>
info_dict = read_excel_to_dict('files/' + filename)
File "@@@@@@@@\readCAM.py", line 7, in read_excel_to_dict
df = pandas.read_excel(filename, parse_cols='E,G,I,K,Q,O')
File "@@@@@@@@\Anaconda3\envs\tensorflow\lib\site-packages\pandas\io\excel.py", line 191, in read_excel
io = ExcelFile(io, engine=engine)
File "@@@@@@@@\Anaconda3\envs\tensorflow\lib\site-packages\pandas\io\excel.py", line 249, in __init__
self.book = xlrd.open_workbook(io)
File "@@@@@@@@\Anaconda3\envs\tensorflow\lib\site-packages\xlrd\__init__.py", line 441, in open_workbook
ragged_rows=ragged_rows,
File "@@@@@@@@\Anaconda3\envs\tensorflow\lib\site-packages\xlrd\book.py", line 87, in open_workbook_xls
ragged_rows=ragged_rows,
File "@@@@@@@@\Anaconda3\envs\tensorflow\lib\site-packages\xlrd\book.py", line 595, in biff2_8_load
raise XLRDError("Can't find workbook in OLE2 compound document")
xlrd.biffh.XLRDError: Can't find workbook in OLE2 compound document
I'm not even sure where to start... Haven't found anything of use online.
A: I got the same error message and could solve it by removing the password protection of the xlsx-file.
(not saying that it's the only reason for the error, but worth checking!)
|
Q: Pandas open_excel() fails with xlrd.biffh.XLRDError: Can't find workbook in OLE2 compound document I'm trying to use pandas to parse an .xlsm document. My code worked perfectly with the example file I was given, but once I got the rest of the documents, it failed with the above error. Here's the offending stack trace:
Traceback (most recent call last):
File "@@@@@@@@/UnsupervisedCAM.py", line 9, in <module>
info_dict = read_excel_to_dict('files/' + filename)
File "@@@@@@@@\readCAM.py", line 7, in read_excel_to_dict
df = pandas.read_excel(filename, parse_cols='E,G,I,K,Q,O')
File "@@@@@@@@\Anaconda3\envs\tensorflow\lib\site-packages\pandas\io\excel.py", line 191, in read_excel
io = ExcelFile(io, engine=engine)
File "@@@@@@@@\Anaconda3\envs\tensorflow\lib\site-packages\pandas\io\excel.py", line 249, in __init__
self.book = xlrd.open_workbook(io)
File "@@@@@@@@\Anaconda3\envs\tensorflow\lib\site-packages\xlrd\__init__.py", line 441, in open_workbook
ragged_rows=ragged_rows,
File "@@@@@@@@\Anaconda3\envs\tensorflow\lib\site-packages\xlrd\book.py", line 87, in open_workbook_xls
ragged_rows=ragged_rows,
File "@@@@@@@@\Anaconda3\envs\tensorflow\lib\site-packages\xlrd\book.py", line 595, in biff2_8_load
raise XLRDError("Can't find workbook in OLE2 compound document")
xlrd.biffh.XLRDError: Can't find workbook in OLE2 compound document
I'm not even sure where to start... Haven't found anything of use online.
A: I got the same error message and could solve it by removing the password protection of the xlsx-file.
(not saying that it's the only reason for the error, but worth checking!)
A: After a lot of searching, the only way I've found to do this is to open and save all the excel documents, which seems to 'strip' them of their OLE2 format. I automated the process with the following vbs script:
Dim objFSO, objFolder, objFile
Dim objExcel, objWB
Set objExcel = CreateObject("Excel.Application")
Set objFSO = CreateObject("scripting.filesystemobject")
MyFolder = "<PATH/TO/FILES>"
Set objFolder = objfso.getfolder(myfolder)
For Each objFile In objfolder.Files
If Right(objFile.Name,4) = "<EXTENSION>" Then
Set objWB = objExcel.Workbooks.Open(objFile)
objWB.save
objWB.close
End If
Next
objExcel.Quit
Set objExcel = Nothing
Set objFSO = Nothing
Wscript.Echo "Done"
Make sure to change the path to the folder and extension.
A: In case you face this issue in a Jupyter notebook, as I did when searching for the error, you can simply restart the kernel and the issue gets resolved.
|
stackoverflow
|
{
"language": "en",
"length": 313,
"provenance": "stackexchange_0000F.jsonl.gz:851726",
"question_score": "8",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44501376"
}
|
7bfa9981b90e0235be4c15de41ec0f3bc2842c20
|
Stackoverflow Stackexchange
Q: Serilog RollingFile sinks Create folder named with date I am in process of moving from NLog to serilog. In NLog I could do the following:
logs/20170101/log-Debug.log
logs/20170101/log-Error.log
logs/20170101/log-Info.log
i.e. date-based folder names for my logs. Is there a way to achieve the same in Serilog?
Thanks
|
Q: Serilog RollingFile sinks Create folder named with date I am in process of moving from NLog to serilog. In NLog I could do the following:
logs/20170101/log-Debug.log
logs/20170101/log-Error.log
logs/20170101/log-Info.log
i.e. date-based folder names for my logs. Is there a way to achieve the same in Serilog?
Thanks
|
stackoverflow
|
{
"language": "en",
"length": 47,
"provenance": "stackexchange_0000F.jsonl.gz:851727",
"question_score": "3",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44501377"
}
|
8df99b657ad3ad0d33b050a29ce579b29b709440
|
Stackoverflow Stackexchange
Q: Apply Policy to Resource Controller I have a CRUD Resource defined via Route::resource('User', 'UserController').
Since it is possible to generate CRUD Gates and Policies, is there a way to apply such a Gate / Policy, so that the corresponding gate / policy is applied to a specific route?
I think that would be an elegant way, since my policies would match my routes. I'm looking for a method like applyPolicy or something similar:
Route::resource('User', 'UserController')->applyPolicy()
Otherwise I would have to add each policy to each action, which doesn't seem so elegant.
A: Take a look at the authorizeResource(Model::class) method.
An example would be in your controller's constructor:
public function __construct()
{
$this->authorizeResource(Task::class);
}
|
Q: Apply Policy to Resource Controller I have a CRUD Resource defined via Route::resource('User', 'UserController').
Since it is possible to generate CRUD Gates and Policies, is there a way to apply such a Gate / Policy, so that the corresponding gate / policy is applied to a specific route?
I think that would be an elegant way, since my policies would match my routes. I'm looking for a method like applyPolicy or something similar:
Route::resource('User', 'UserController')->applyPolicy()
Otherwise I would have to add each policy to each action, which doesn't seem so elegant.
A: Take a look at the authorizeResource(Model::class) method.
An example would be in your controller's constructor:
public function __construct()
{
$this->authorizeResource(Task::class);
}
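Applied to the question's UserController this would look roughly like the sketch below, assuming a UserPolicy is registered for the User model; the second argument is the route parameter name used by Route::resource:
public function __construct()
{
    // Maps the resource actions (index, show, store, update, destroy, ...)
    // to the corresponding abilities on the registered policy.
    $this->authorizeResource(User::class, 'user');
}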
|
stackoverflow
|
{
"language": "en",
"length": 114,
"provenance": "stackexchange_0000F.jsonl.gz:851743",
"question_score": "8",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44501430"
}
|
e3d29c93f252a34ab39f05347e193c796245c69a
|
Stackoverflow Stackexchange
Q: How can I pull all modules in IntelliJ IDEA ULTIMATE project at once? I have a multi-module project in IntelliJ. Each of the modules is stored in a separate git repository.
I have already set them up, so I marked each of them as VCS root in IntelliJ.
Am I able to pull all of them at once using IDE, or should I use command line tool? Currently I'm pulling them one by one:
A: To pull from all repositories at once, use VCS - Update project (Ctrl/Cmd+T)
The screenshot shows the checkout command, though.
If branch names in all repositories are the same, you should enable Synchronous branch control in Settings - Version Control - Git, and you will be able to check out all branches at once from the bottom part of the Branches pop-up.
|
Q: How can I pull all modules in IntelliJ IDEA ULTIMATE project at once? I have a multi-module project in IntelliJ. Each of the modules is stored in a separate git repository.
I have already set them up, so I marked each of them as VCS root in IntelliJ.
Am I able to pull all of them at once using IDE, or should I use command line tool? Currently I'm pulling them one by one:
A: To pull from all repositories at once, use VCS - Update project (Ctrl/Cmd+T)
The screenshot shows the checkout command, though.
If branch names in all repositories are the same, you should enable Synchronous branch control in Settings - Version Control - Git, and you will be able to check out all branches at once from the bottom part of the Branches pop-up.
|
stackoverflow
|
{
"language": "en",
"length": 136,
"provenance": "stackexchange_0000F.jsonl.gz:851773",
"question_score": "13",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44501504"
}
|
c2b5e7cb48315a179499d969ba5d1f9edbb554ad
|
Stackoverflow Stackexchange
Q: Best way to shift a list in Python? I have a list of numbers, let's say:
my_list = [2, 4, 3, 8, 1, 1]
From this list, I want to obtain a new list. This new list would start at the maximum value and run to the end, with the first part (from the beginning up to just before the maximum) appended after it, like this:
my_new_list = [8, 1, 1, 2, 4, 3]
(basically it corresponds to a horizontal graph shift...)
Is there a simple way to do so? :)
A: How about something like this:
max_idx = my_list.index(max(my_list))
my_new_list = my_list[max_idx:] + my_list[0:max_idx]
|
Q: Best way to shift a list in Python? I have a list of numbers, let's say:
my_list = [2, 4, 3, 8, 1, 1]
From this list, I want to obtain a new list. This new list would start at the maximum value and run to the end, with the first part (from the beginning up to just before the maximum) appended after it, like this:
my_new_list = [8, 1, 1, 2, 4, 3]
(basically it corresponds to a horizontal graph shift...)
Is there a simple way to do so? :)
A: How about something like this:
max_idx = my_list.index(max(my_list))
my_new_list = my_list[max_idx:] + my_list[0:max_idx]
A: Apply as many as you want,
To the left:
my_list.append(my_list.pop(0))
To the right:
my_list.insert(0, my_list.pop())
A: Alternatively you can do something like the following:
import itertools
def shift(l, n):
    return itertools.islice(itertools.cycle(l), n, n + len(l))
my_list = [2, 4, 3, 8, 1, 1]
list(shift(my_list, 3))
A: Elaborating on Yasc's approach to rotating the list values, here's a way to shift the list so it starts with the maximum value:
# Find the max value:
max_value = max(my_list)
# Move the last value from the end to the beginning,
# until the max value is the first value:
while my_list[0] != max_value:
my_list.insert(0, my_list.pop())
|
stackoverflow
|
{
"language": "en",
"length": 206,
"provenance": "stackexchange_0000F.jsonl.gz:851800",
"question_score": "4",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44501591"
}
|
83abf97bd5f61f567623ffde13dba06a30b16a65
|
Stackoverflow Stackexchange
Q: Jenkins Disable CLI over Remoting via a Groovy Script Is it possible to disable the 'Jenkins CLI over Remoting' option via a Groovy script? I want to put the script into init.groovy.d so that the option is disabled upon startup and I am not prompted to disable it.
Thanks
A: Create the file $JENKINS_HOME/jenkins.CLI.xml with the following content:
<?xml version='1.0' encoding='UTF-8'?>
<jenkins.CLI>
<enabled>false</enabled>
</jenkins.CLI>
It will behave as if you pressed the "Disable Jenkins CLI over Remoting" button in the Jenkins GUI once the server restarts.
juhnz's answer covers disabling the CLI completely. However, I believe the intent of the question was to disable only the Jenkins CLI over Remoting, while otherwise leaving the CLI enabled.
|
Q: Jenkins Disable CLI over Remoting via a Groovy Script Is it possible to disable the 'Jenkins CLI over Remoting' option via a Groovy script? I want to put the script into init.groovy.d so that the option is disabled upon startup and I am not prompted to disable it.
Thanks
A: Create the file $JENKINS_HOME/jenkins.CLI.xml with the following content:
<?xml version='1.0' encoding='UTF-8'?>
<jenkins.CLI>
<enabled>false</enabled>
</jenkins.CLI>
It will behave as if you pressed the "Disable Jenkins CLI over Remoting" button in the Jenkins GUI once the server restarts.
juhnz's answer covers disabling the CLI completely. However, I believe the intent of the question was to disable only the Jenkins CLI over Remoting, while otherwise leaving the CLI enabled.
A: You can do it like this (Jenkins 2.60.2):
import jenkins.model.Jenkins
jenkins.model.Jenkins.instance.getDescriptor("jenkins.CLI").get().setEnabled(false)
Regards
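To apply this on startup as the question intends, a hedged sketch of dropping the same call into an init script (the file name is arbitrary; Jenkins runs every *.groovy file in this directory at boot):
// $JENKINS_HOME/init.groovy.d/disable-cli-remoting.groovy
import jenkins.model.Jenkins
// Same call as in the answer above, applied on every start-up.
Jenkins.instance.getDescriptor("jenkins.CLI").get().setEnabled(false)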
|
stackoverflow
|
{
"language": "en",
"length": 127,
"provenance": "stackexchange_0000F.jsonl.gz:851801",
"question_score": "9",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44501596"
}
|
ef85943024a9b33c19722593619068304fd9bcf7
|
Stackoverflow Stackexchange
Q: DNS record not found after testing I am checking my website on mxtoolbox.com and getting some DNS errors. Two of those errors say "DNS Record not found". One has dmarc as category and the other is category spf.
My questions:
*
*Does this hurt my website?
*How do I go about fixing this?
My website is http://www.zilvertron.com
Thanks for your time!
A: No, neither of those things will directly hurt your website, though they may cause you problems with sending email if the recipients score harshly in any spam management application.
There is some info on DMARC here and some info about SPF here. They are both used to help validate that messages are 'allowed' to be sent from your domain & hosts, and that your domain is who it says it is.
If you want to fix/add the records, you need to have a look in your DNS provider's control panel to see what options they support and how to implement them. SPF is easy - it's just a TXT record; DMARC/DKIM requires a bit more setup.
|
Q: DNS record not found after testing I am checking my website on mxtoolbox.com and getting some DNS errors. Two of those errors say "DNS Record not found". One has dmarc as category and the other is category spf.
My questions:
*
*Does this hurt my website?
*How do I go about fixing this?
My website is http://www.zilvertron.com
Thanks for your time!
A: No, neither of those things will directly hurt your website, though they may cause you problems with sending email if the recipients score harshly in any spam management application.
There is some info on DMARC here and some info about SPF here. They are both used to help validate that messages are 'allowed' to be sent from your domain & hosts, and that your domain is who it says it is.
If you want to fix/add the records, you need to have a look in your DNS provider's control panel to see what options they support and how to implement them. SPF is easy - it's just a TXT record; DMARC/DKIM requires a bit more setup.
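For illustration, hedged examples of what the two missing records typically look like as TXT entries in a zone file - the mail provider include and the reporting address are placeholders to replace with your own values:
; SPF: published as a TXT record on the domain itself
zilvertron.com.        IN TXT "v=spf1 include:_spf.your-mail-provider.com ~all"
; DMARC: published as a TXT record on the _dmarc subdomain
_dmarc.zilvertron.com. IN TXT "v=DMARC1; p=none; rua=mailto:dmarc-reports@zilvertron.com"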
|
stackoverflow
|
{
"language": "en",
"length": 179,
"provenance": "stackexchange_0000F.jsonl.gz:851803",
"question_score": "11",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44501600"
}
|
772c42f70052586a67197dcede315d586b7020e2
|
Stackoverflow Stackexchange
Q: How can I debounce using async/await? I have an input box. After the user has stopped typing, I want to perform an HTTP request and await the results.
Here's a jsbin
Since network requests aren't allowed on jsbin, I've used setTimeout() instead.
var log = console.log.bind(console)
var delayedResults = new Promise(function(resolve) {
setTimeout(function(){
resolve('Wooo I am the result!')
}, 3000);
});
document.querySelector('input').addEventListener('input', _.debounce(async function(){
log('Doing search')
var result = await delayedResults
log('Result is', result)
}), 500);
However, when I type in the box, 'Doing search' appears immediately on every character - I want it to appear only after the 500ms has expired.
How can I use debounce and await?
A: The problem was at the last line:
}), 500);
You should close debounce function call after time argument was specified:
}, 500));
var log = console.log.bind(console);
var delayedResults = new Promise(
function(resolve) {
setTimeout(function() {
resolve('Wooo I am the result!');
}, 3000);
}
);
document.querySelector('input')
.addEventListener('keydown', _.debounce(async function() {
log('Doing search');
var result = await delayedResults;
log('Result is', result);
}, 500));
<script src="https://cdn.jsdelivr.net/lodash/4/lodash.min.js"></script>
<input>
|
Q: How can I debounce using async/await? I have an input box. After the user has stopped typing, I want to perform an HTTP request and await the results.
Here's a jsbin
Since network requests aren't allowed on jsbin, I've used setTimeout() instead.
var log = console.log.bind(console)
var delayedResults = new Promise(function(resolve) {
setTimeout(function(){
resolve('Wooo I am the result!')
}, 3000);
});
document.querySelector('input').addEventListener('input', _.debounce(async function(){
log('Doing search')
var result = await delayedResults
log('Result is', result)
}), 500);
However, when I type in the box, 'Doing search' appears immediately on every character - I want it to appear only after the 500ms has expired.
How can I use debounce and await?
A: The problem was at the last line:
}), 500);
You should close debounce function call after time argument was specified:
}, 500));
var log = console.log.bind(console);
var delayedResults = new Promise(
function(resolve) {
setTimeout(function() {
resolve('Wooo I am the result!');
}, 3000);
}
);
document.querySelector('input')
.addEventListener('keydown', _.debounce(async function() {
log('Doing search');
var result = await delayedResults;
log('Result is', result);
}, 500));
<script src="https://cdn.jsdelivr.net/lodash/4/lodash.min.js"></script>
<input>
|
stackoverflow
|
{
"language": "en",
"length": 174,
"provenance": "stackexchange_0000F.jsonl.gz:851824",
"question_score": "7",
"source": "stackexchange",
"timestamp": "2023-03-29T00:00:00",
"url": "https://stackoverflow.com/questions/44501653"
}
|