d17801
That's how SAS works; base SAS has only a CHAR-equivalent datatype (DS2 is different) and no VARCHAR concept. Whatever the length of the column is (20 here), the value will be padded with trailing spaces to 20 total characters.
Most of the time, it doesn't matter; when SAS inserts into another RDBMS for example it will typically treat trailing spaces as nonexistent (so they won't be inserted). You can use TRIM and similar to deal with the spaces if you're using regular expressions or concatenation to work with these values; CATS and similar functions perform concatenation-with-trimming.
If trailing spaces are part of your data, you are mostly out of luck in SAS; it considers trailing spaces irrelevant (equivalent to null characters). You can append a non-space character in SQL, translate the spaces to NBSPs ('A0'x) or something else while still in SQL, or put quotes or something around your actual values - but whatever you do will be complicated.
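Outside SAS, the NBSP idea is easy to illustrate. Here is a hypothetical Python helper (protect_trailing_spaces is an invented name, not from SAS) that swaps trailing spaces for non-breaking spaces ('A0'x) so they survive trimming:

```python
def protect_trailing_spaces(value):
    """Replace trailing ASCII spaces with NBSPs ('A0'x) so trimming keeps them."""
    stripped = value.rstrip(" ")
    return stripped + "\u00a0" * (len(value) - len(stripped))

print(repr(protect_trailing_spaces("abc  ")))  # 'abc\xa0\xa0'
```

The same substitution could be done in SQL with a TRANSLATE-style function, as the answer suggests.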
d17802
The picture (in the comment) can be fetched with the attachment field.
By default, the attachment field is not included in the result, so you have to request it explicitly, like this:
me/posts?fields=comments.message,comments.id,comments.attachment
Demo
Ref: Comments
A: This is the same for page comments as I just discovered:
/{page-post-id}/comments?fields=from,message,id,attachment,created_time,comments.fields(from,message,id,attachment,created_time)
This will return all replies (and replies to those replies) for a particular page post. If there is an image on a reply, it will be under 'attachment'.
The result looks like this:
Array
(
[data] => Array
(
[0] => Array
(
[from] => Array
(
[name] => ***********
[id] => ***********
)
[message] => test reply with a picture
[id] => ***********
[attachment] => Array
(
[type] => photo
[target] => Array
(
[id] => ***********
[url] => ***********
)
[url] => ***********
[media] => Array
(
[image] => Array
(
[height] => 540
[src] => ***********
[width] => 720
)
)
)
[created_time] => 2014-03-29T11:59:53+0000
)
[1] => Array
(
[from] => Array
(
[name] => ***********
[id] => ***********
)
[message] => ***********
[id] => ***********
[created_time] => 2014-03-29T11:55:09+0000
)
[2] => Array
(
[from] => Array
(
[name] => ***********
[id] => ***********
)
[message] => ***********
[id] => ***********
[created_time] => 2014-03-29T11:16:45+0000
[comments] => Array
(
[data] => Array
(
[0] => Array
(
[from] => Array
(
[name] => ***********
[id] => ***********
)
[message] => ***********
[id] => ***********
[created_time] => 2014-03-29T11:18:07+0000
)
[1] => Array
(
[from] => Array
(
[name] => ***********
[id] => ***********
)
[message] => ************
[id] => ***********
[created_time] => 2014-03-29T11:18:48+0000
)
d17803
Here is my recommended strategy for what I understand of your task:
*
*Do not mutate the haystack string. Often the string to be searched is much longer than the needle(s) used in the search. This potentially heavy lifting should be avoided when possible.
*Your search terms appear to be dynamic (and likely to be coming from user input), so the characters must be escaped to prevent regex pattern breakage. Use preg_quote() for this process.
*Insert \s* between all non-whitespace characters in the escaped search terms (ignoring escaping slashes).
*Then convert all sequences of one or more whitespaces to \s+ in the search terms.
*Now that the terms are prepared, glue them together using pipes. Wrap the piped expression in parentheses, then wrap that capture group in word-boundary markers (\b).
*Though not mentioned in your question, I recommend using case-insensitive matching. If multibyte/unicode characters may be involved, add the u pattern modifier as well.
Recommended Code: (Demo)
function searchSomeText(array $searchTerms, string $stringToBeSearched): bool
{
foreach ($searchTerms as &$searchTerm) {
$searchTerm = preg_replace(
['/\\\\?\S\K(?=\S)/', '/\s+/'],
['\\s*', '\\s+'],
preg_quote($searchTerm, '/')
);
}
$pattern = '/\b(' . implode("|", $searchTerms) . ')\b/i';
echo $pattern . "\n";
return (bool)preg_match($pattern, $stringToBeSearched);
}
var_export(
searchSomeText(
['at', 'cat ', 'the'],
'The catheter in the hat'
)
);
Output: (dynamic regex pattern & return value)
/\b(a\s*t|c\s*a\s*t\s+|t\s*h\s*e)\b/i
true
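The same preparation steps can be sketched in Python for comparison (term_to_regex and search_some_text are illustrative names, not from the original answer):

```python
import re

def term_to_regex(term):
    # Tokenize into single non-space characters and whitespace runs
    tokens = re.findall(r"\s+|\S", term)
    parts = []
    for i, tok in enumerate(tokens):
        if tok.isspace():
            parts.append(r"\s+")          # whitespace runs -> \s+
        else:
            parts.append(re.escape(tok))  # escape each literal character
            if i + 1 < len(tokens) and not tokens[i + 1].isspace():
                parts.append(r"\s*")      # \s* between adjacent non-space chars
    return "".join(parts)

def search_some_text(terms, haystack):
    pattern = r"\b(" + "|".join(term_to_regex(t) for t in terms) + r")\b"
    return re.search(pattern, haystack, re.IGNORECASE) is not None

print(search_some_text(["at", "cat ", "the"], "The catheter in the hat"))  # True
```

This produces the same kind of pattern as the PHP version above (e.g. `c\s*a\s*t\s+` for `'cat '`).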
d17804
It sounds like you're doing a lot of string manipulation that's causing you bugs. I think it would be easier to solve this with a regex.
Try this code:
Private Sub Button1_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button1.Click
' Copy string to format from clipboard
Dim strClipText As String = Clipboard.GetText
Dim regex = New Regex("-{0,1}\d+(\.\d+|)")
Dim latlon = String.Join(" ", regex.Matches(strClipText).OfType(Of Match)().Select(Function(m) m.Value))
Clipboard.Clear()
My.Computer.Clipboard.SetText(latlon)
End Sub
A: Please notice that you're never replacing the carriage return.
Don't forget that these constants turn into actual ASCII characters or combinations of ASCII characters
*
*vbCr == Chr(13)
*vbLf == Chr(10)
*vbCrLf == Chr(13) + Chr(10)
*vbNewLine == Chr(13) + Chr(10)
Now, in your code, you're doing this:
strClipText = Replace(strClipText, vbLf, "")
strClipText = Replace(strClipText, vbCrLf, "")
strClipText = Replace(strClipText, vbNewLine, "")
Which does these three things:
*
*Replace Chr(10) with an empty string
*Replace Chr(13)+Chr(10) with an empty string
*Replace Chr(13)+Chr(10) with an empty string
Thus, you're never getting rid of the Chr(13), which will sometimes show as a new line. Even if the lines begin life as Chr(13) + Chr(10) (vbCrLf), replacing vbLf with an empty string breaks up the Chr(13) + Chr(10) pair.
Do something like this instead:
strClipText = Replace(strClipText, vbCrLf, "")
strClipText = Replace(strClipText, vbNewLine, "")
strClipText = Replace(strClipText, vbCr, "")
strClipText = Replace(strClipText, vbLf, "")
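The same ordering pitfall can be demonstrated in any language; a quick Python sketch where the string literals stand in for vbCrLf/vbCr/vbLf:

```python
s = "line one\r\nline two"

# Replacing the Lf first splits each CrLf pair, stranding the Cr:
bad = s.replace("\n", "").replace("\r\n", "")
assert "\r" in bad

# Replacing the longer CrLf sequence first (then Cr, then Lf) removes everything:
good = s.replace("\r\n", "").replace("\r", "").replace("\n", "")
assert "\r" not in good and "\n" not in good
print(repr(bad), repr(good))
```

The rule of thumb: always replace the longest newline sequence first.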
d17805
I know this is an oldie, but I had been searching for something similar until I found the Active Choices Plugin. It doesn't hide the parameters, but it's possible to write a Groovy script (either directly in the Active Choices Parameter or in Scriptler) to return different values. For example:
if (MY_PARAM.equals("Foo")) {return ['NOT APPLICABLE']}
else if (MY_PARAM.equals("Bar")) {return ['This is the only choice']}
else if (MY_PARAM.equals("Baz")) {return ['Bazoo', 'Bazar', 'Bazinga']}
In this example MY_PARAM is a parameter in the Jenkins job. As long as you put 'MY_PARAM' in the Active Choices 'Referenced Parameters' field the script will re-evaluate the parameter any time it is changed and display the return value (or list of values) which match.
In this way, you can return a different list of choices (including a list of one or even zero choices) depending on the previous selections, but I haven't found a way to prevent the Parameter from appearing on the parameters page. It's possible for multiple Active Choice Parameters to reference the same Parameter, so the instant someone selects "App" or "Svc" all the irrelevant parameters will switch to 'Not Applicable' or whatever suits you. I have played with some HTML text color as well, but don't have code samples at hand to share.
Dirk
A: According to the description you may do this with Dynamic-Jenkins-Parameter plugin:
A Jenkins parameter plugin that allows for two select elements. The second select populates values depending upon the selection made for the first select.
Example provided on the wiki does exactly what you need (at least for one conditional case). I didn't try it by myself.
A: @derik It worked for me!
The second list populates based on the choice of the first element.
I used the Active Choices Reactive Parameter plugin. The requirement was that the first parameter lists my servers; based on the first selection, the second parameter connects to the selected server and lists the backups, so the available backups are shown for restoring.
*
*Enable "This project is parameterized".
*Add a Choice Parameter:
Name: Server
Choices : Choose..
qa
staging
master
Description : Select the server from the list
*Add a new parameter "Active Choices Reactive Parameter"
Name: Backup
Script: Groovy Script
def getbackupsqa = ("sshpass -f /opt/installer/pass.txt /usr/bin/ssh -p 22 -o StrictHostKeyChecking=no [email protected] ls /opt/jenkins/backup").execute()
if (Server.equals("Choose..")) {return ['Choose..'] }
else if (Server.equals("qa")) {return getbackupsqa.text.readLines()}
else if (Server.equals("staging")) {return ['Staging server not yet configured']}
else if (Server.equals("master")) {return ['Master server not yet configured']}
Description : Select the backup from the list
Referenced parameters : Server
The result is as shown here.
d17806
String.prototype.includes might help:
products_name = ["product one", "product two"];
$("#mytxt").keyup(function() {
var txt = $("#mytxt").val();
var results = start_search(txt);
console.log(results);
});
function start_search(text) {
return products_name.filter(pr => pr.includes(text))
}
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
<input type="text" id="mytxt">
--Edit
const products_name = [{
8: "product ninety two"
}, {
21: "product two"
}, {
35: "product nine"
}]
$("#mytxt").keyup(function() {
var txt = $("#mytxt").val();
var results = start_search(txt);
console.log(results);
});
function start_search(text) {
return products_name.filter(pr => Object.values(pr)[0].includes(text)).map(pr => Number(Object.keys(pr)[0]))
}
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
<input type="text" id="mytxt">
A: How about using the array.indexOf function?
products_name = ["product one", "product two"];
function start_search(text){
if(products_name.indexOf(text) > -1){
return true;
}else{
return false;
}
};
start_search('product'); // returns false
start_search('product one'); // returns true
start_search('product two'); // returns true
start_search('product three'); // returns false
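The difference between the two answers is substring matching versus exact element matching; the same distinction in Python terms (the list name mirrors the snippets above):

```python
products_name = ["product one", "product two"]

# Like String.includes on each element: substring search
matches = [p for p in products_name if "product" in p]
print(matches)  # both names contain "product"

# Like Array.indexOf: the whole string must equal an element
print("product" in products_name)      # False
print("product one" in products_name)  # True
```

So indexOf only answers "is this exact string in the list?", while the filter/includes approach finds partial matches as the user types.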
d17807
Just to put this in answer form so the question can be closed (please click the check mark next to this answer if it answers your question): at its simplest, you need to change your code like this:
__asm__ __volatile__ (
"movl %1, %%eax;"
"movl %2, %%ebx;"
"CONTD%=: cmpl $0, %%ebx;"
"je DONE%=;"
"xorl %%edx, %%edx;"
"idivl %%ebx;"
"movl %%ebx, %%eax;"
"movl %%edx, %%ebx;"
"jmp CONTD%=;"
"DONE%=: movl %%eax, %0;"
:"=r"(result)
:"r"(var1), "r"(var2)
: "eax", "ebx", "edx", "cc"
);
Using %= adds a unique number to the identifiers to avoid conflicts. And since the contents of registers and flags are being modified, you need to inform the compiler of that fact by 'clobbering' them.
But there are other things you can do that make this a bit faster, and a bit cleaner. For example, instead of doing movl %%eax, %0 at the end, you can just tell gcc that result will be in eax when the block exits:
__asm__ __volatile__ (
"movl %1, %%eax;"
"movl %2, %%ebx;"
"CONTD%=: cmpl $0, %%ebx;"
"je DONE%=;"
"xorl %%edx, %%edx;"
"idivl %%ebx;"
"movl %%ebx, %%eax;"
"movl %%edx, %%ebx;"
"jmp CONTD%=;"
"DONE%=:"
:"=a"(result)
:"r"(var1), "r"(var2)
: "ebx", "edx", "cc"
);
Likewise, you can tell gcc to put var1 and var2 into eax and ebx for you before calling the block instead of you doing it manually inside the block:
__asm__ (
"CONTD%=: cmpl $0, %%ebx;"
"je DONE%=;"
"xorl %%edx, %%edx;"
"idivl %%ebx;"
"movl %%ebx, %%eax;"
"movl %%edx, %%ebx;"
"jmp CONTD%=;"
"DONE%=:"
:"=a"(result), "+b"(var2)
: "a"(var1)
: "edx", "cc"
);
Also, since you will (presumably) always be using result when calling gcd, volatile is unnecessary. If you won't be using result, then there's no point forcing the calculation to be done anyway.
As written, the -S output for this statement will be one very long line, making debugging difficult. That brings us to:
__asm__ (
"CONTD%=: \n\t"
"cmpl $0, %%ebx \n\t"
"je DONE%= \n\t"
"xorl %%edx, %%edx \n\t"
"idivl %%ebx \n\t"
"movl %%ebx, %%eax \n\t"
"movl %%edx, %%ebx \n\t"
"jmp CONTD%= \n"
"DONE%=:"
: "=a"(result), "+b"(var2)
: "a"(var1)
: "edx", "cc"
);
And I see no particular reason to force gcc to use ebx. If we let gcc pick its own register (usually gives best performance), that gives us:
__asm__ (
"CONTD%=: \n\t"
"cmpl $0, %1 \n\t"
"je DONE%= \n\t"
"xorl %%edx, %%edx \n\t"
"idivl %1 \n\t"
"movl %1, %%eax \n\t"
"movl %%edx, %1 \n\t"
"jmp CONTD%= \n"
"DONE%=:"
: "=a"(result), "+r"(var2)
: "a"(var1)
: "edx", "cc"
);
And lastly, avoiding the extra jump when the loop is complete gives us:
__asm__ (
"cmpl $0, %1 \n\t"
"je DONE%= \n"
"CONTD%=: \n\t"
"xorl %%edx, %%edx \n\t"
"idivl %1 \n\t"
"movl %1, %%eax \n\t"
"movl %%edx, %1 \n\t"
"cmpl $0, %1 \n\t"
"jne CONTD%= \n"
"DONE%=:"
: "=a"(result), "+r"(var2)
: "a"(var1)
: "edx", "cc"
);
Looking at the -S output from gcc, this gives us:
/APP
cmpl $0, %ecx
je DONE31
CONTD31:
xorl %edx, %edx
idivl %ecx
movl %ecx, %eax
movl %edx, %ecx
cmpl $0, %ecx
jne CONTD31
DONE31:
/NO_APP
This code uses fewer registers, performs fewer jumps and has fewer asm instructions than the original code. FWIW.
For details about %=, clobbers, etc, check out the official gcc docs for inline asm.
I suppose I should ask why you feel the need to write this in asm rather than just doing it in C, but I'll just assume you have a good reason.
d17808
Inner join your questions, categories, and levels together, then left join to the questions already offered. Filter where any field in the questions-offered table is null, and you will have a list of unanswered questions, perhaps.
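A minimal runnable sketch of that anti-join pattern, using sqlite3 and invented table names (questions / offered), since the original schema isn't shown:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE questions (id INTEGER PRIMARY KEY, text TEXT);
    CREATE TABLE offered (question_id INTEGER);
    INSERT INTO questions VALUES (1, 'Q1'), (2, 'Q2'), (3, 'Q3');
    INSERT INTO offered VALUES (1), (3);
""")
# LEFT JOIN + IS NULL keeps only questions with no match in offered
rows = con.execute("""
    SELECT q.id FROM questions q
    LEFT JOIN offered o ON o.question_id = q.id
    WHERE o.question_id IS NULL
""").fetchall()
print(rows)  # [(2,)]
```

Only question 2 has never been offered, so it is the only row the left join leaves with a null match.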
d17809
When you use the file methods from Context, Android saves that file into the app's private directory (unlike creating it directly through the File API). So you can just clear the app's data and that file will be removed.
d17810
When you add the date to your ListView, just use something like this:
Dim NewItem As New ListViewItem
NewItem.Text = "My Item"
NewItem.SubItems.Add(mydate.tostring("yyyy-MM-dd"))
ListView1.Items.Add(NewItem)
Just keep in mind that in mydate.ToString("yyyy-MM-dd"), mydate is a DateTime.
Considering your comments, here is the code:
.SubItems.Add(ds.Tables("studnum").Rows(i).Item(4).ToString)
Dim MyDate As DateTime = CDate(ds.Tables("studnum").Rows(i).Item(5).ToString)
.SubItems.Add(MyDate.ToString("yyyy-MM-dd"))
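For comparison, the same kind of format string exists in Python's datetime, where the .NET pattern "yyyy-MM-dd" corresponds to "%Y-%m-%d":

```python
from datetime import datetime

mydate = datetime(2024, 3, 5)  # an arbitrary example date
print(mydate.strftime("%Y-%m-%d"))  # 2024-03-05
```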
d17811
Two errors in your thinking (although your R code runs, so it's not a programming error).
First and foremost, you violated your own statement: you have not dummy-coded schooling. It does not contain only zeroes and ones; it has 0, 1, and 2.
Second, you forgot the interaction effect in your lm modeling.
Try this...
library(tidyverse)
set.seed(123)
ds <- data.frame(
depression=rnorm(90,10,2),
schooling_dummy=c(0,1,2),
sex_dummy=c(0,1)
)
# if you explicitly make these variables factors not integers R will do the right thing with them
ds$schooling_dummy<-factor(ds$schooling_dummy)
ds$sex_dummy<-factor(ds$sex_dummy)
ds %>% group_by(schooling_dummy,sex_dummy) %>%
summarise(formatC(mean(depression),format="f", digits=5))
# you need an asterisk in your lm model to include the interaction term
lm(depression ~ schooling_dummy * sex_dummy, data = ds)
The results give you the mean(s) you were expecting...
Call:
lm(formula = depression ~ schooling_dummy * sex_dummy, data = ds)
Coefficients:
(Intercept) schooling_dummy1 schooling_dummy2
10.325482 -0.732433 -0.113305
sex_dummy1 schooling_dummy1:sex_dummy1 schooling_dummy2:sex_dummy1
0.228561 0.009778 -0.334254
and FWIW you can avoid this sort of accidental misuse of categorical variables if your data is coded as characters to begin with... so if your data is coded this way:
ds <- data.frame(
depression=rnorm(90,10,2),
schooling=c("A","B","C"),
sex=c("Male","Female")
)
You're less likely to make the same mistake, plus the results are easier to read...
d17812
I know lambda is a very cool feature, and because of its coolness it is overused.
Forcing a lambda here creates a problem.
Just define a function and the problem is resolved.
void myNiceFunction(My_special_t *instance) {
instance->doStuff();
… … …
if (instance->next) {
myNiceFunction(instance->next);
}
}
It is better since it is self-documenting (if a good name is provided) and testable (tests can reach this function directly).
A: You can do it with std::function, like this:
#include <functional>
#include <iostream>
using std::cout;
using std::function;
struct My_special_t
{
};
int main()
{
function<void(My_special_t*)> Callback;
auto otherCallback = [](My_special_t* instance)
{
cout << "otherCallback " << static_cast<void*>(instance) << "\n";
};
Callback = [&Callback, &otherCallback](My_special_t* instance)
{
cout << "first callback " << static_cast<void*>(instance) << "\n";
Callback = otherCallback;
};
My_special_t special;
Callback(&special);
Callback(&special);
}
d17813
I guess you need to do this with JavaScript and add another route to your backend which then updates the database.
Maybe something like this, if it should happen automatically:
<input type="checkbox" onchange="updateLike('productId', this.checked)">
<script>
async function updateLike(productId, doesLike) {
let response = await fetch(`http://localhost/products/${productId}/like`, {
method:"POST",
headers: {"Content-Type":"application/json"},
body: JSON.stringify({
productId: productId,
like: doesLike
})
});
}
</script>
Or you could add a button which sends the request to the server:
<input type="checkbox" name="like"/>
<button onclick="updateLike('productId', document.querySelector('input[name=like]').checked)">confirm</button>
d17814
Since you want the line to go through the majority of points, it sounds quite like a line-fitting problem even though you say it isn't. Have you looked at the Theil-Sen estimator (for example this one on fex), which is a linear regression ignoring up to some 30% of the outliers?
If you simply want a line through the extrema you might do something like this:
% Setup data
e = [161 162 193 195 155 40 106 102 125 155 189 192 186 188 185 186 147 148 180 183];
f = [138 92 92 115 258 124 218 114 125 232 431 252 539 463 643 571 582 726 726 676];
% Create scatterplot
figure(1);
scatter(f, e, 5, 'red');
axis ij;
% Fit extrema
[min_e, min_idx_e] = min(e);
[max_e, max_idx_e] = max(e);
[min_f, min_idx_f] = min(f);
[max_f, max_idx_f] = max(f);
% Determine largest range and draw line accordingly
if (max_e-min_e)>(max_f-min_f)
line(f([min_idx_e, max_idx_e]), e([min_idx_e, max_idx_e]), 'color', 'blue')
text(f(max_idx_e), e(max_idx_e), ' Extrema')
else
line(f([min_idx_f, max_idx_f]), e([min_idx_f, max_idx_f]), 'color', 'blue')
text(f(max_idx_f), e(max_idx_f), ' Extrema')
end
% Fit using Theil-Sen estimator
[m, e0] = Theil_Sen_Regress(f', e');
line([min_f, max_f], m*[min_f, max_f]+e0, 'color', 'black')
text(max_f, m*max_f+e0, ' Theil-Sen')
However, as you'll notice, neither solution fits the points automatically, simply because there are too many outliers, unless you filter those beforehand. Therefore you are probably better off using the RANSAC algorithm as proposed by Shai and McMa.
A: That's a textbook example for the RANSAC algorithm. This free toolbox for Matlab actually has some very nice examples of line fitting.
A: An easy but not very efficient solution would be to compute the slope between each pair of points; if a set of points lies on a straight line, all pairs within that set have the same slope. So one algorithm could pick all the pairs with the same slope and connect them if they have one point in common; finally, you choose the largest set. The time complexity of this algorithm is O(N^2 log N), where N is the number of points.
As I see in your figure, there is not a real perfect line going through all the points; rather there is a tolerance, which in this algorithm could be defined as the criterion by which you connect two pairs - say, if two slopes differ by less than 2 percent, we connect the pairs.
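A rough Python sketch of that idea (exact slopes only, no tolerance; the function name is invented): bucket the other points by their slope to each anchor point, and the biggest bucket gives the largest collinear set. Done per anchor this is O(N^2) overall rather than sorting all pairs.

```python
from collections import defaultdict
from fractions import Fraction

def largest_collinear(points):
    best = 1
    for i, (x1, y1) in enumerate(points):
        slopes = defaultdict(int)
        for x2, y2 in points[i + 1:]:
            # Fractions avoid float comparison issues; None marks vertical lines
            slope = None if x2 == x1 else Fraction(y2 - y1, x2 - x1)
            slopes[slope] += 1
        if slopes:
            best = max(best, 1 + max(slopes.values()))
    return best

pts = [(0, 0), (1, 1), (2, 2), (3, 3), (5, 1)]
print(largest_collinear(pts))  # 4 points lie on y = x
```

Adding the 2-percent slope tolerance the answer mentions would mean rounding or clustering the slope keys instead of comparing them exactly.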
d17815
Like this:
item_test.xml
<?xml version="1.0" encoding="utf-8"?>
<LinearLayout
xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:orientation="vertical">
<TextView
android:id="@+id/name"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
tools:text="name" />
</LinearLayout>
adapter
@Override
public View getView(int position, View convertView, ViewGroup parent) {
Holder holder; // use holder
if (convertView == null) {
convertView = LayoutInflater.from(parent.getContext()).inflate(R.layout.item_test, parent, false);
holder = new Holder(convertView);
convertView.setTag(holder);
} else {
holder = (Holder) convertView.getTag();
}
holder.name.setText("name");
return convertView;
}
public class Holder {
private TextView name;
public Holder(View view) {
name = view.findViewById(R.id.name);
}
}
d17816
When you use compareTo(), it returns 1 if the value it is called on is higher than the argument (not the other way around).
So you should change this piece of code:
else if (index.compareTo(BigInteger.valueOf(1)) == 1)
for this:
else if (index.compareTo(BigInteger.valueOf(1)) == 0)
A: Java doesn't deal too well with deep recursion. You should convert to using a loop instead.
Also see this thread on tail recursion: https://softwareengineering.stackexchange.com/questions/272061/why-doesnt-java-have-optimization-for-tail-recursion-at-all
A: I think you have a standard problem with recursion. The problem is in the fibonacci method: there is no case in which the method returns a final result, so please check your condition and read more about compareTo in BigInteger. I also recommend reading about tail recursion.
A: You could try to use dynamic programming to reduce space complexity. Something like this should work:
public static BigInteger fibonacci(BigInteger n) {
if (n.compareTo(BigInteger.valueOf(3L)) < 0) {
return BigInteger.ONE;
}
//Map to store the previous results
Map<BigInteger, BigInteger> computedValues = new HashMap<BigInteger, BigInteger>();
//The two edge cases
computedValues.put(BigInteger.ONE, BigInteger.ONE);
computedValues.put(BigInteger.valueOf(2L), BigInteger.ONE);
return fibonacci(n, computedValues);
}
private static BigInteger fibonacci(BigInteger n, Map<BigInteger, BigInteger> computedValues) {
if (computedValues.containsKey(n))
return computedValues.get(n);
BigInteger n1 = n.subtract(BigInteger.ONE);
BigInteger n2 = n.subtract(BigInteger.ONE).subtract(BigInteger.ONE);
computedValues.put(n1, fibonacci(n1, computedValues));
computedValues.put(n2, fibonacci(n2, computedValues));
BigInteger newValue = computedValues.get(n1).add(computedValues.get(n2));
computedValues.put(n, newValue);
return newValue;
}
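As the first answer suggests, converting to a loop removes the recursion-depth problem entirely; a sketch in Python, whose ints are arbitrary-precision like BigInteger:

```python
def fibonacci(n):
    # Iterative Fibonacci: no recursion depth limit, O(n) time, O(1) space
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print(fibonacci(10))  # 55
```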
d17817
{this.state.data.username} or {this.state.data['username']}
But on the first render, this.state.data will probably be empty and username won't be accessible yet, so you need to validate before using this.state.data['username']!
d17818
You need to convert 0.5 into the number of minutes:
var newhour = 20.5;
var hour = new Date();
var newhours = Math.floor(newhour),
newmins = 60 * (newhour - newhours);
hour.setHours(newhours);
hour.setMinutes(newmins);
console.log(hour.toTimeString());
A: You could set the hours, minutes and seconds by getting only the parts for the units.
function setTime(date, time) {
['setHours', 'setMinutes', 'setSeconds']
.reduce((t, k) => (date[k](Math.floor(t)), t % 1 * 60), time);
}
var hour = new Date,
newhour = 8.5;
setTime(hour, newhour);
console.log(hour);
A:
I would like to get 20:30
To set the time to exactly 20:30:00.000 from newhour = 20.5, you would take advantage of setHours() having four (optional) arguments (see MDN). Simply convert newhour to milliseconds (multiply by 3600000), and pass as fourth argument:
hour.setHours(0, 0, 0, 20.5 * 36e5);
Demo code here:
var someDate = new Date;
var newHour = 20.5;
someDate.setHours(0, 0, 0, newHour * 36e5);
console.log(someDate.toString());
A: You could use momentjs duration for that:
const duration = moment.duration(20.5, 'hours')
console.log(duration.hours() + ':' + duration.minutes());
<script src="https://cdnjs.cloudflare.com/ajax/libs/moment.js/2.24.0/moment.js"></script>
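The hour/minute split generalizes to any language; in Python, for instance, divmod does the same job as the floor-and-multiply above:

```python
newhour = 20.5
hours, frac = divmod(newhour, 1)
print(f"{int(hours):02d}:{int(frac * 60):02d}")  # 20:30
```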
d17819
I am having a similar problem to yours, but with the Otto library. My problem is that I have a jar in my libs folder and I have added another version (branch) of the same library from a Maven repository. If I remove one of them, the problem is solved, but I need both of them because I want to use AndroidAnnotations.
But I can't figure out how to do that.
Returning to your problem, from what I can see you can solve it like this:
You have to find which of the libraries you have added has a dependency conflict with the nineoldandroids library. Remove them one by one to find which one it is. After you find it, try to solve the conflict between these two libraries.
I hope this helps you.
A: The SpinnerWheel library you are using has an outdated version of nineoldandroids added as a jar. You will need to remove it and add the updated gradle dependency or update the jar to the version you specified in your primary gradle file.
In the most recent version (as of this writing) I was doing the same thing and running into the same problem. It took a while to find because Android Studio was hiding the libs directory, so I had to navigate to that folder to remove the jar.
If you are no longer using the SpinnerWheel library, it will be the same thing happening in one of your other dependencies.
d17820
You'll have to write the parameters (orderBy and such) into every link, because HTTP is a stateless protocol. This is very tedious, so I suggest looking for a framework that does it for you.
Sessions should work too, and for a simple application they might be an easier solution. How exactly did you store the session variables?
A: I suggest something like this:
// Get our defaults for these variables.
$orderBy = isset($_SESSION['orderBy']) ? $_SESSION['orderBy'] : 'Name';
$orderSort = isset($_SESSION['orderSort']) ? $_SESSION['orderSort'] : 'ASC';
// When options are selected/submitted
if (isset($_POST['submit'])) {
if ($_POST['select'] == "EventType") {
$orderby = $_POST['select'];
if ($_POST['otherType'] != "Select an Event") {
$EventType = 'WHERE `' . $orderby . '`="' . $_POST["otherType"] . '"';
}
} else {
$orderby = $_POST['select'];
}
$orderSort = $_POST['agree'];
// Set our session variables for next time when we set our defaults.
$_SESSION['orderBy'] = $orderBy;
$_SESSION['orderSort'] = $orderSort;
}
d17821
Found what was wrong; the correct code is:
addListener((Property.ValueChangeListener) app);
and not
addListener((Property.ValueChangeListener), app);
Damn comma!
d17822
OK, I figured it out after some head-butting. The solution, for me, was provided by the pivot point of the spherical globe's ocean. I found a question answered by Joker Martini about designing a lookAt so that the pivot of every mesh looks at a target (the centered pivot of the spherical world water), and then flipped the pivot's rotation after learning about objectOffsetRot.
Here it is as it may be useful for someone.
for all in selection do (
one = all
target = $'Globe Sea'
pivotLookAt one target
RotatePivotOnly one ((eulerangles 0 180 0) as quat)
)
fn pivotLookAt obj target =
(
ResetXForm obj
old_tm = obj.transform
obj.dir = normalize (target.pos - obj.pos)
obj.objectOffsetRot = old_tm * (inverse obj.transform)
)
fn RotatePivotOnly obj rotation =
(
local rotValInv = inverse (rotation as quat)
animate off in coordsys local obj.rotation *= RotValInv
obj.objectoffsetrot*=RotValInv
obj.objectoffsetpos*=RotValInv
)
Happy to hear any opinions on optimising or constructively improving this.
Thank you.
d17823
Don't bother. The compiler optimizes better than you could.
You might perhaps try
len = ((len - 1) & 0x3f) + 1;
(but when len is 0 -or 65, etc...- this might not give what you want)
If that is so important for you, benchmark!
A: I created a program
#include <stdio.h>
int main(void) {
unsigned int len;
scanf("%u", &len);
len = len > 64 ? 64 : len;
printf("%u\n", len);
}
and compiled with gcc -O3 and it generated this assembly:
cmpl $64, 4(%rsp)
movl $64, %edx
leaq .LC1(%rip), %rsi
cmovbe 4(%rsp), %edx
The leaq there loads the address of the "%u\n" string in between - I presume it is because of instruction scheduling. The generated code seems pretty efficient: there are no jumps, just a conditional move, so no branch prediction failures.
So the best way to optimize your executable is to get a good compiler.
d17824
The Visual Studio Task Runner can run any arbitrary CMD command when a project/solution is opened.
Prerequisites: the Command Task Runner extension.
*
*Add Foo.cmd with the target command to a project that has the dotnet watch package installed. It could contain one line of code:
dotnet watch run
Make sure the file is properly encoded to UTF-8 without BOM.
*After installing the Command Task Runner extension, an Add to Task Runner option should be accessible from the context menu of *.cmd files. Click it and choose the per-project level. As a result, commands.json should appear in the project.
*Go to VS View -> Other Windows -> Task Runner Explorer. Set up the binding for the Foo command in the context menu: Bindings -> Project Open (the window refresh could help to see a recently added command).
*Re-open the solution and check a command execution result in Task Runner Explorer.
How it could look:
d17825
How about the following jQuery code?
$('.check-diff').click(function() {
if($(this).prop('checked')){
checkDiff();
} else{
$(".row").each(function(){
$(this).css("background-color","#fff");
});
}
});
function checkDiff(){
$(".row").each(function(){
var diff = false;
var source = $(this).find(".diff").first().text();
$(this).find(".diff").each(function(){
var compare = $(this).text();
if(source != compare){
diff = true;
}
});
if(diff == true){
$(this).css("background-color","red");
}
});
}
Hope I got you right and that you get an idea of how to move on! :)
A: You can use two each loops and a class name for the rows you want to check. Only rows that have the class name checkDiff will be validated, and only cells that have the class name diff.
JSnippet DEMO - validate differences in rows based on the cell text
JS:
$(function(){
$('.check-diff').click(function() {
if($(this).prop('checked')){
$('.row.checkDiff').each(function(i,ele){
var values = $(ele).find('.diff');
var first = values.eq(0).text();
var diff = false;
values.each(function(j,e){
if ($(e).text() !== first) diff = true;
});
if (diff) $(ele).addClass('highlight');
});
} else{
$('.row.checkDiff').removeClass('highlight');
}
});
});
HTML:
<label for="">Click to see differences</label>
<input type="checkbox" class="check-diff">
<div class="compare-diff">
<div class="row">
<div class="col-sm-3 title">Name</div>
<div class="col-sm-3">John</div>
<div class="col-sm-3">Henry</div>
<div class="col-sm-3">Kim</div>
</div>
<div class="row checkDiff">
<div class="col-sm-3 title">Status</div>
<div class="col-sm-3 diff">Single</div>
<div class="col-sm-3 diff">Married</div>
<div class="col-sm-3 diff">Single</div>
</div>
<div class="row checkDiff">
<div class="col-sm-3 title">Car</div>
<div class="col-sm-3 diff">Yes</div>
<div class="col-sm-3 diff">Yes</div>
<div class="col-sm-3 diff">Yes</div>
</div>
<div class="row checkDiff">
<div class="col-sm-3 title">Kids</div>
<div class="col-sm-3 diff">Yes</div>
<div class="col-sm-3 diff">Yes</div>
<div class="col-sm-3 diff">No</div>
</div>
<div class="row checkDiff">
<div class="col-sm-3 title">Home</div>
<div class="col-sm-3 diff">Yes</div>
<div class="col-sm-3 diff">Yes</div>
<div class="col-sm-3 diff">Yes</div>
</div>
</div> | unknown | |
d17826 | val | $_REQUEST
An associative array that by default
contains the contents of $_GET, $_POST
and $_COOKIE.
So if you have $_POST['redirect'], $_GET['redirect'] or $_COOKIE['redirect'], $_REQUEST['redirect'] will be defined. Try to put:
var_dump($_POST['redirect']);
var_dump($_GET['redirect']);
var_dump($_COOKIE['redirect']);
To find out where it's coming from.
A: I don't think this is possible to answer for certain without seeing the actual code but $_REQUEST holds all the variables in $_GET, $_POST and $_COOKIE.
A form can actually populate both $_GET and $_POST if its method is set to 'post' and its action is a url with url encoded variables. Thus the form might be posting all of its data to a url and then adding get variables to the end of that url. For example:
<form method='post' action='example.php?var=test'>
<input name='var2' id='var2' />
</form>
If that form were submitted, the following would be defined: $_POST['var2'], $_GET['var'], $_REQUEST['var2'], $_REQUEST['var'].
$_COOKIE could also be putting hidden variables in $_REQUEST.
A: It is quite possible that the redirect variable is a cookie, if you cannot find it in the form.
var_dump($_REQUEST);
That will list all your input variables associated with POST, GET and COOKIES.
A: If it's not empty, what's the content of it?
I think it should be something like this...
$redirect = base64_decode($_GET['redirect']);
if(!empty($redirect)){
header("Location: $redirect");
exit;
}
It doesn't matter that it's not in the script, you can set it via GET,
eg /yourform.php?redirect=index.php
Is it causing unwanted redirection? | unknown | |
d17827 | val | You can use 'inline-block' instead of a table:
<div style="display:inline-block;width:10%;"><!-- #INCLUDE FILE="scripts\Logo2_C.aspx" --></div>
<div style="display:inline-block;width:90%;"><img src="/images/logos/logo.png" /></div> | unknown | |
d17828 | val | You probably already know that there are 2 different OLAP approaches:
*
*MOLAP, which requires a data-load step to process possible aggregations (previously defined as 'cubes'). Internally a MOLAP-based solution pre-calculates measures for the possible aggregations, and as a result it is able to execute OLAP queries very fast. The most important drawbacks of this approach come from the fact that MOLAP acts as a cache: you need to re-load input data to refresh a cube (this can take a lot of time - say, hours), and full reprocessing is needed if you decide to add new dimensions/measures to your cubes. Also, there is a natural limit on the dataset size + cube configuration.
*ROLAP, which doesn't try to pre-process input data; instead it translates an OLAP query into a database aggregate query to calculate values on-the-fly. "R" means relational, but the approach can be used even with NoSQL databases that support aggregate queries (say, MongoDb). Since there is no data cache, users always get actual data (in contrast with MOLAP), but the DB should be able to execute aggregate queries rather fast. For relatively small datasets the usual OLTP databases can work fine (SQL Server, PostgreSql, MySql etc); for large datasets specialized DB engines (like Amazon Redshift) are used - they support an efficient distributed usage scenario and are able to process many TB in seconds.
Nowadays there is little sense in developing a MOLAP solution; this approach was relevant >10 years ago, when servers were limited to small amounts of RAM and a SQL database on HDD wasn't able to process GROUP BY queries fast enough - MOLAP was the only way to get truly 'online' analytical processing. Currently we have very fast NVMe SSDs, and servers can have hundreds of gigabytes of RAM and tens of CPU cores, so for a relatively small database (up to a TB or a bit more) the usual OLTP databases can work as a ROLAP backend fast enough (executing queries in seconds); for really big data MOLAP is almost unusable anyway, and a specialized distributed database should be used instead.
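To make the ROLAP idea described above concrete, here is a minimal sketch in Python using the stdlib sqlite3 module as the relational backend; the table and column names are invented for illustration. The OLAP-style question "total amount per region" is answered on the fly by an aggregate query, with no pre-built cube:

```python
import sqlite3

# An in-memory relational store standing in for the ROLAP backend.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, product TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?, ?)",
    [("EU", "widget", 10.0), ("EU", "gadget", 5.0), ("US", "widget", 7.5)],
)

# ROLAP: the 'cube' is never materialized; each OLAP question becomes
# an aggregate SQL query evaluated against the current data.
rows = conn.execute(
    "SELECT region, SUM(amount) FROM sales GROUP BY region ORDER BY region"
).fetchall()
print(rows)  # [('EU', 15.0), ('US', 7.5)]
```

Because nothing is pre-aggregated, a refresh of the underlying table is immediately visible to the next query - which is exactly the MOLAP-vs-ROLAP trade-off described above.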
A: The general wisdom is that cubes work best when they are based on a 'dimensional model' AKA a star schema that is often (but not always) implemented in an RDBMS. This would make sense as these models are designed to be fast for querying and aggregating.
Most cubes do the aggregations themselves in advance of the user interacting with them, so from the user perspective the aggregation/query time of the cube itself is more interesting than the structure of the source tables. However, some cube technologies are nothing more than a 'semantic layer' that passes through queries to the underlying database, and these are known as ROLAP. In those cases, the underlying data structure becomes more important.
The data interface presented to the user of the cube should be simple from their perspective, which would often rule out non-dimensional models such as basing a cube directly on an OLTP system's database structure. | unknown | |
d17829 | val | Did you clean your cache?
With the command
bin/console cache:clear --env=prod (or env=dev)
or the more "hard" way
rm -rf var/cache/* | unknown | |
d17830 | val | wwwrun doesn't have permissions to read /home and hence can't directly verify that /home/pdfs in fact even exists, much less that it is a directory. | unknown | |
d17831 | val | When you write val = f()(3,4)(5,6), you want f to return a function that also returns a function; compare with the simpler multi-line call:
t1 = f()
t2 = t1(3,4)
val = t2(5,6)
The function f defines and returns also has to define and return a function that can be called with 2 arguments. So, as @jonrsharpe said, you need more nesting:
def f():
def x(a, b):
def y(c, d):
return c + d
return y
return x
Now, f() produces the function named x, and f()(3,4) produces the function named y (ignoring its arguments 3 and 4 in the process), and f()(3,4)(5,6) evaluates (ultimately) to 5 + 6. | unknown | |
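The same nesting can also be written compactly with lambdas when the intermediate functions don't need names (equivalent to the def version above, including ignoring the middle pair of arguments):

```python
# Each lambda returns the next, more deeply nested function;
# only the innermost one actually computes anything.
f = lambda: lambda a, b: lambda c, d: c + d

val = f()(3, 4)(5, 6)
print(val)  # 11
```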
d17832 | val | This is a fairly common issue when dealing with small, fast-moving objects. Typically, the best solution is to make the "walls" thicker, if that is possible within your game. Also, you may increase the velocity and position iterations (links below)... just remember that both of these (along with .isBullet=true) may result in a slight performance penalty, so the first approach is the best.
http://docs.coronalabs.com/api/library/physics/setVelocityIterations.html
http://docs.coronalabs.com/api/library/physics/setPositionIterations.html
Brent Sorrentino | unknown | |
d17833 | val | Have you included the SessionsHelper module in your controller?
From your example code, I'm assuming you're using the RailsTutorial by Michael Hartl. You have to include the SessionsHelper in your ApplicationController to be able to use it in all controllers. Check out Listing 8.14 in the book:
class ApplicationController < ActionController::Base
protect_from_forgery
include SessionsHelper
# Force signout to prevent CSRF attacks
def handle_unverified_request
sign_out
super
end
end | unknown | |
d17834 | val | The "Canonical" format for connecting to a cluster is couchbase://host where host in your case is localhost.
Depending on the version you're using, newer enhancements may have been added to allow for the common host:8091 antipattern, but would still be incorrect.
Chances are your code can still work if you upgrade to a newer version - but you should still use the couchbase:// variant (without the port). | unknown | |
d17835 | val | To use emoji icons with angularjs, you could use angular-emoji-filter module: https://github.com/globaldev/angular-emoji-filter. | unknown | |
d17836 | val | Well,
*
*Adriaan Moors has reimplemented Jeremy Gibbons' Origami programming: The paper. The source.
*Bruno Oliveira and Jeremy Gibbons have re-implemented Hinze's Generics for the masses, Lämmel & Peyton-Jones' Scrap your Boilerplate with Class, and Origami Programming, and written a detailed comparison about it.
Source here.
*Naturally, the Scala Collections library itself can easily be seen as an instance of generic programming, as Martin Odersky explains, if only because of its reliance on implicits, Scala's flavor of Type Classes.
A: Christian Hofer, Klaus Ostermann, Tillmann Rendel and Adriaan Moors's Polymorphic Embedding of DSLs has some accompanying code which is 'very generic'. They cite Finally Tagless, Partially Evaluated as an 'important influence', which endears this paper to me for some reason... | unknown | |
d17837 | val | JaCoCo requires the exact same class files for report generation that were used at execution time, so
*
*if report is completely empty, then classes were not provided
*if report contains classes but their coverage is 0%, then they don't match classes that were used at runtime - this is described along with other related information in JaCoCo documentation on page http://www.jacoco.org/jacoco/trunk/doc/classids.html
and in either case, check for warnings in the log.
Update for updated question
Here is what I did:
*
*downloaded and unpacked JaCoCo 0.7.9 into /tmp/jacoco/jacoco-0.7.9
*downloaded and unpacked Wildfly 9.0.0.CR2 into /tmp/jacoco/wildfly-9.0.0.CR2
*cloned https://github.com/mkyong/spring4-mvc-ajax-example into /tmp/jacoco/spring4-mvc-ajax-example and built as mvn verify
*copied /tmp/jacoco/spring4-mvc-ajax-example/spring4-mvc-maven-ajax-example-1.0-SNAPSHOT.war into /tmp/jacoco/wildfly-9.0.0.CR2/standalone/deployments
*Wildfly started as JAVA_OPTS=-javaagent:/tmp/jacoco/jacoco-0.7.9/lib/jacocoagent.jar=output=tcpserver ./standalone.sh and got enough time to deploy application
*in directory /tmp/jacoco/spring4-mvc-ajax-example executed mvn org.jacoco:jacoco-maven-plugin:0.7.9:dump org.jacoco:jacoco-maven-plugin:0.7.9:report (note that version of used agent matches version of jacoco-maven-plugin) so that it created /tmp/jacoco/spring4-mvc-ajax-example/jacoco.exec and report /tmp/jacoco/spring4-mvc-ajax-example/site/jacoco:
*opened http://localhost:8080/spring4-mvc-maven-ajax-example-1.0-SNAPSHOT/ and did some actions
*executed mvn org.jacoco:jacoco-maven-plugin:0.7.9:dump org.jacoco:jacoco-maven-plugin:0.7.9:report again to get an updated report: | unknown | |
d17838 | val | This is a general question, so I'll try to provide a general answer
In a nutshell, Spring itself does not require an internet connection at runtime in a sense that it is not supposed to contain code that goes "somewhere on the internet" and queries for something.
However, Spring has a lot of dependencies (actually just like your own project probably has dependencies) so that Maven will have to bring them from somewhere upon the first run.
So Maven (that you've mentioned as a build tool) by default will require an internet connection. Of course, there are many options to overcome this "difficulty" all of them boil down to making all these dependencies available so that you'll be able to compile the project without going to the internet.
The actual solution can vary:
*
*Install Nexus/Artifactory, which will act as a proxy and will download dependencies for you. It makes sense if your network infrastructure has an option to connect to the internet from some servers, leaving your "development machine" connected only to the internal network.
*Download the whole Maven repository with some crawler (it exposes web interface) to your machine and use it there (if you work for organization that doesn't have any kind of internet connection)
*Just come to the place that has an internet connection with your PC, compile everything once, Maven will download all the dependencies and cache them in your local m2 repository. So next time you'll be able to build your project even without internet connection.
I know the last option sounds more like a joke, but it also technically works if you, say a student that doesn't have any connection at home but wants to try this "Spring thing" out :)
A: You can find some more information about mavens offline flags in this post f.e.: Is there a maven command line option for offline mode? | unknown | |
d17839 | val | Create a fetch request for the entity you wish to retrieve. Don't give it a predicate, set whatever sort descriptor you want.
Execute the fetch request in a managed object context and it will return an array of all the objects of that entity.
This is purposely just a descriptive answer, you can find the specifics of how to do this from the Core Data introductory documentation; you are new in Core Data and this is a good way to learn it.
Also - don't think of Core Data in terms of rows of data that you turn into objects. It's an Object-Relationship graph. It stores the objects of entities and their relationships between them. You don't turn the "rows" into objects, you get the objects back directly.
A: The response of @Abizern, with code:
NSManagedObjectContext *moc = // your managed object context;
NSEntityDescription *entityDescription = [NSEntityDescription
entityForName:@"Message" inManagedObjectContext:moc];
NSFetchRequest *request = [[NSFetchRequest alloc] init];
[request setEntity:entityDescription];
// You can also add a predicate or sort descriptor to your request
NSError *error;
NSArray *array = [moc executeFetchRequest:request error:&error];
if (array == nil)
{
// Deal with error...
} | unknown | |
d17840 | val | The prism documentation has a whole section on navigation. The problem with this question is that there are a number of different ways to go when loading modules on demand. I have posted a link that I hope leads you in the right direction. If it does, please mark this as answered. thank you
http://msdn.microsoft.com/en-us/library/gg430861(v=pandp.40).aspx
A: As has been said, there are a number of ways of accomplishing this. For my case, I have a similar shell that has a nav region and a main region, and my functionality is broken into a number of modules. My modules all add their own navigation view to that nav region (in the initialise of the module add their own nav-view to the nav-region). In this way my shell has no knowledge of the individual modules and the commands they may expose.
When a command is clicked the nav view model it belongs to does something like this:
/// <summary>
/// switch to the view given as string parameter
/// </summary>
/// <param name="screenUri"></param>
private void NavigateToView(string screenUri)
{
// if there is no MainRegion then something is very wrong
if (this.regionManager.Regions.ContainsRegionWithName(RegionName.MainRegion))
{
// see if this view is already loaded into the region
var view = this.regionManager.Regions[RegionName.MainRegion].GetView(screenUri);
if (view == null)
{
// if not then load it now
switch (screenUri)
{
case "DriverStatsView":
this.regionManager.Regions[RegionName.MainRegion].Add(this.container.Resolve<IDriverStatsViewModel>().View, screenUri);
break;
case "TeamStatsView":
this.regionManager.Regions[RegionName.MainRegion].Add(this.container.Resolve<ITeamStatsViewModel>().View, screenUri);
break;
case "EngineStatsView":
this.regionManager.Regions[RegionName.MainRegion].Add(this.container.Resolve<IEngineStatsViewModel>().View, screenUri);
break;
default:
throw new Exception(string.Format("Unknown screenUri: {0}", screenUri));
}
// and retrieve it into our view variable
view = this.regionManager.Regions[RegionName.MainRegion].GetView(screenUri);
}
// make the view the active view
this.regionManager.Regions[RegionName.MainRegion].Activate(view);
}
}
So basically that module has 3 possible views it could place into the MainView, and the key steps are to add it to the region and to make it active. | unknown | |
d17841 | val | Is something like this what you want?
# create the data
var1 <- list('2003' = 1:3, '2004' = c(4:3), '2005' = c(6,4,1), '2006' = 1:4 )
var2 <- list('2003' = 1:3, '2004' = c(4:5), '2005' = c(2,3,6), '2006' = 2:3 )
# A couple of nested lapply statements
lapply(setNames(seq_along(var1), names(var1)),
function(i,l1,l2) length(intersect(l1[[i]], Reduce(union,l2[1:i]))),
l1 = var1,l2=var2)
$`2003`
[1] 3
$`2004`
[1] 2
$`2005`
[1] 3
$`2006`
[1] 4
note that Reduce(union,var2)reduces the list var2 by successively combining the elements using union (see ?Reduce)
Reduce(union,var2)
[1] 1 2 3 4 5 6
EDIT elegant alternative
use the accumulate = T argument in Reduce
lapply(mapply(intersect,var1, Reduce(union, var2, accumulate=T)),length)
Because --
Reduce(union, var2, accumulate = T)
## [[1]]
## [1] 1 2 3
##
## [[2]]
## [1] 1 2 3 4 5
##
## [[3]]
## [1] 1 2 3 4 5 6
##
## [[4]]
## [1] 1 2 3 4 5 6 | unknown | |
d17842 | val | The trick here is that getDownloadURL is an async function that happens to return a promise (as per the docs):
var storage = firebase.storage();
var storageRef = storage.ref();
var imgRef = storageRef.child('profile-pictures/1.jpg');
// call .then() on the promise returned to get the value
imgRef.getDownloadURL().then(function(url) {
var pulledProfileImage = url;
dataArray.push(pulledProfileImage);
});
That will work locally, but that list of URLs won't be synchronized across all browsers. Instead, what you want is to use the database to sync the URLs, like how we did in Zero To App (video, code). | unknown | |
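Setting Firebase aside, the underlying pattern here is generic promise handling: the value produced by an async call is only available inside the .then() callback, never as the call's direct return value. A minimal plain-JavaScript sketch (the URL and function name are invented stand-ins):

```javascript
// Stand-in for an async API such as getDownloadURL(): returns a promise.
function getValueAsync() {
  return Promise.resolve("https://example.com/profile-pictures/1.jpg");
}

var dataArray = [];
getValueAsync().then(function (url) {
  // Only here, inside the callback, does the value exist.
  dataArray.push(url);
});
// At this point dataArray is still empty - the callback runs later,
// which is why the push must happen inside .then() as shown above.
```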
d17843 | val | Check your scipy version:
import scipy
print(scipy.__version__)
find_peaks is new in version 1.1.0.
If you want to update:
pip install scipy --upgrade
A: I just had to reinstall scipy and it worked on Mac OS with M1 and Python 3.9
pip uninstall scipy
and then
pip install scipy | unknown | |
d17844 | val | SKSpriteNode inherits from SKNode. You can use childNodeWithName.
SKSpriteNode *someSprite = [SKSpriteNode node];
[someSprite childNodeWithName:@"someChildOfSprite"];
Code for comment below asking how to cast SKNode as an SKSpriteNode:
SKSpriteNode *theChildYouWant = (SKSpriteNode*)[someSprite childNodeWithName:@"someChildOfSprite"]; | unknown | |
d17845 | val | If the array is declared as a character array, like char arr[], you can use sizeof(arr) to get the size of the array. But if the memory is allocated on the heap using malloc or calloc, you cannot get the size of the allocation from the pointer; strlen() only gives the length of the string stored there, not the size of the allocated memory. So, either declare your string as an array of characters, or store the size of the memory when you allocate it dynamically and update it every time you extend/shrink the allocation.
In your case, I think it would be simple if you allocate some storage for your output and iterate through input and insert data into output. This way, you know how much data to allocate to output and you don't need to extend it. Required space for your output would be (1 + 2 + 3 + 4 + 5 +... strlen(input) times) + (strlen(input)-1) | unknown | |
d17846 | val | Please check this guide Export log data to Amazon S3 using the AWS CLI
The policy looks like the document that you shared, but slightly different.
Assuming that you are doing this in the same account and same region, please check that you are using the right region (in this example it is us-east-2)
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "s3:GetBucketAcl",
"Effect": "Allow",
"Resource": "arn:aws:s3:::my-exported-logs",
"Principal": { "Service": "logs.us-east-2.amazonaws.com" }
},
{
"Action": "s3:PutObject" ,
"Effect": "Allow",
"Resource": "arn:aws:s3:::my-exported-logs/*",
"Condition": { "StringEquals": { "s3:x-amz-acl": "bucket-owner-full-control" } },
"Principal": { "Service": "logs.us-east-2.amazonaws.com" }
}
]
}
I think that bucket-owner-full-control is not the problem here; the most likely culprit is the region.
Anyway, take a look at the other two examples in case you are using different accounts / a role instead of a user.
This solved my issue, which was the same one that you mention.
A: Ensure that when exporting the data you configure the following correctly:
S3 bucket prefix (optional) - this would be the object prefix you want to use to store the logs.
While creating the policy for PutObject, you must ensure the object/prefix is captured adequately. See the diff for the PutObject statement Resource:
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "s3:GetBucketAcl",
"Effect": "Allow",
"Resource": "arn:aws:s3:::my-exported-logs",
"Principal": { "Service": "logs.us-east-2.amazonaws.com" }
},
{
"Action": "s3:PutObject" ,
"Effect": "Allow",
- "Resource": "arn:aws:s3:::my-exported-logs/*",
+ "Resource": "arn:aws:s3:::my-exported-logs/**_where_i_want_to_store_my_logs_***",
"Condition": { "StringEquals": { "s3:x-amz-acl": "bucket-owner-full-control" } },
"Principal": { "Service": "logs.us-east-2.amazonaws.com" }
}
]
}
A: One thing to check is your encryption settings. According to https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/S3ExportTasksConsole.html
Exporting log data to Amazon S3 buckets that are encrypted by AWS KMS is not supported.
Amazon S3-managed keys (SSE-S3) bucket encryption might solve your problem. If you use SSE-KMS, Cloudwatch can't access your encryption key in order to properly encrypt the objects as they are put into the bucket.
A: I had the same situation and what worked for me is to add the bucket name itself as a resource in the Allow PutObject Sid, like:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowLogsExportGetBucketAcl",
"Effect": "Allow",
"Principal": {
"Service": "logs.eu-west-1.amazonaws.com"
},
"Action": "s3:GetBucketAcl",
"Resource": "arn:aws:s3:::my-bucket"
},
{
"Sid": "AllowLogsExportPutObject",
"Effect": "Allow",
"Principal": {
"Service": "logs.eu-west-1.amazonaws.com"
},
"Action": "s3:PutObject",
"Resource": [
                "arn:aws:s3:::my-bucket",
                "arn:aws:s3:::my-bucket/*"
]
}
]
}
I also believe that all the other answers are relevant, especially using the time in milliseconds. | unknown | |
d17847 | val | http://demos.telerik.com/kendo-ui/grid/editing-inline
This should get you where you need to go.
A: I have actually found what I needed.
The event I was looking for is save
It is fired after the item is edited, when the item is being saved. This is where I need to plug in my code. | unknown | |
d17848 | val | As far as I'm aware, your only officially supported option for any accessory using the dock connector would be to use the External Accessory Framework, which requires enrollment in the Made for iPod Program (overkill for your purposes). However, the iPhone and iPod both support line-in via the headphone jack, and there are plenty of apps that make use of audio input. At least it provides some possibilities.
A: Have you looked at the pinouts for the dock connector floating about? It appears that certain resistances connected to pin 21 indicate to the iDevice what is connected.
68kΩ seems to be the magic number for audio in/out. | unknown | |
d17849 | val | I'm also searching for the answer to the same question... but I haven't found it.
By the way, I finally wrote a library to get the display width of a character (it generates range information showing character widths using Console.Write() and Console.CursorLeft, converts that to C# code, and then uses binary search for speed when looking up a character's width)
nuget: NullLib.ConsoleEx
Project: https://github.com/SlimeNull/NullLib.ConsoleEx
A: According to the .NET documentation, if you know the font used[1], you could use the GlyphTypeface API to get the "AdvanceWidths" of each glyph of your font.
You still have to map the character with glyph index in your font. You can use CharacterToGlyphMap to do that.
var character = 'x';
GlyphTypeface glyphTypeface = new GlyphTypeface(new Uri("file:///C:\\WINDOWS\\Fonts\\Kooten.ttf"));
var index = glyphTypeface.CharacterToGlyphMap[character];
var width = glyphTypeface.AdvanceWidths[index];
[1] To get current console's font details I would recommend to read following question: Get current console font info
Related question: How to find exact size for an arbitrary glyph in WPF? | unknown | |
d17850 | val | There are a few issues with your code:
*
*In the first example, the "GoogleJsonResponseException: API call to classroom.courses.courseWork.patch failed with error: updateMask: Update mask is required" error you are receiving is due to the fact that you are not specifying the updateMask field before making the request.
So even though you added the patchDraft.updateMask = newState; line, this won't help as it will be executed after the API call.
*The second example ends up executing properly because you supplied the updateMask field at the time of making the request.
However, since the class was created manually from the Classroom UI, this is the expected behavior, as the class is essentially not associated with any Developer Console Project.
According to the documentation:
ProjectPermissionDenied indicates that the request attempted to modify a resource associated with a different Developer Console project.
Possible Action: Indicate that your application cannot make the desired request. It can only be made by the Developer Console project of the OAuth client ID that created the resource.
What you can do in this situation is to create the class from the API and afterwards execute the patch call.
Reference
*
*Classroom API Access Errors. | unknown | |
d17851 | val | The validation plugin from https://jqueryvalidation.org/ is triggered when you submit the form. The issue is that you only clear the username field without resubmitting it. You can instead validate on the keyup event of that field. Here's an example; you just need to define the border color or other CSS properties in the red class.
$("input").keyup(function() {
    var value = $(this).val();
    $(this).next('.err-msg').remove(); // drop any previous message so it is not duplicated
    if (value.length == 0) {
        $(this).addClass('red');
        $(this).after('<div class="err-msg">Please fill this field</div>');
    } else {
        $(this).removeClass('red');
    }
});
d17852 | val | You will want to make sure c is big enough, or grows:
std::merge(a.begin(),a.end(),b.begin(),b.end(),std::back_inserter(c));
Alternatively:
c.resize(a.size() + b.size());
std::merge(a.begin(),a.end(),b.begin(),b.end(),c.begin());
See it Live On Coliru
#include <algorithm>
#include <vector>
#include <iterator>
struct t
{
t(int x):a(x){}
int a;
};
bool operator<(const t& p,const t&b)
{
return p.a<b.a;
}
int main()
{
std::vector<t> a,b,c;
a.push_back(t(10));
a.push_back(t(20));
a.push_back(t(30));
b.push_back(t(1));
b.push_back(t(50));
std::merge(a.begin(),a.end(),b.begin(),b.end(),std::back_inserter(c));
return 0;
} | unknown | |
d17853 | val | You're using a condition as the first argument of your link_to call; just remove it, like this :).
link_to("Sign out", destroy_user_session_path) | unknown | |
d17854 | val | Thank you @Lee_Dailey for your help in figuring out how to properly ask a question...
It ended up being additional whitespace (3 tabs) after the characters in the reference file asy_files.txt.
It was an artifact from where I copied from, and PowerShell was seeing "as2.art" and "as2.art " as different values. I am not 100% sure why that matters, but I found that searching for any whitespace (\s) that appears after a word character (\w) and removing it made the comparison logic work. The Compare-Object | Where-Object approach worked as well after removing the whitespace.
d17855 | val | …Aaaaand I just solved my problem by using another tool entirely.
I found ImageProcessor. Documentation is a royal b**ch to get at because it only comes in a Windows *.chm help file (it’s not online… cue one epic Whisky. Tango. Foxtrot.), but after looking at a few examples it did solve my issue quite nicely:
public static async Task<byte[]> TinyPng(Stream input, int aspect) {
using(var output = new MemoryStream())
using(var png = new TinyPngClient("kxR5d49mYik37CISWkJlC6YQjFMcUZI0")) {
using(var imageFactory = new ImageFactory()) {
imageFactory.Load(input).Resize(new Size(aspect, 0)).Save(output);
}
var result = await png.Compress(output);
using(var reader = new BinaryReader(await (await png.Download(result)).GetImageStreamData())) {
return reader.ReadBytes(result.Output.Size);
}
}
}
and everything is working fine now. Uploads are much faster now as I am not piping a full-sized image straight through to TinyPNG, and since I am storing both final-“full”-sized images as well as thumbnails straight into the database, I am now not piping the whole bloody image twice.
Posted so that other wheel-reinventing chuckleheads like me will actually have something to go on. | unknown | |
d17856 | val | No, you can't scale Flex instances to 0. That's the main problem with Flex instances.
You have to replace the current service with an App Engine standard environment version that can scale to 0, so you stop paying.
If your application doesn't run background processes, and the request handling doesn't take more than 60 minutes, I strongly recommend you have a look at Cloud Run
d17857 | val | You can create a proxy for the ConnectionPool and return the proxy in the bean creation method
@Bean
@Scope("singleton")
public ConnectionPool connectionPool(...) throws Exception {
ConnectionPoolImpl delegate = new ConnectionPoolImpl(...);
ConnectionPoolCallHandler callHandler = new ConnectionPoolCallHandler(delegate);
 ConnectionPool proxy = (ConnectionPool) Proxy.newProxyInstance(
 ConnectionPool.class.getClassLoader(),
 new Class[]{ConnectionPool.class},
 callHandler);
// return new ConnectionPoolImpl(...);
return proxy;
}
and
public class ConnectionPoolCallHandler implements InvocationHandler {
private ConnectionPoolImpl delegate;
public ConnectionPoolCallHandler(ConnectionPoolImpl delegate) {
this.delegate=delegate;
}
 public Object invoke(Object proxy, Method method, Object[] args)
 throws Throwable {
 //your additional tracking logic here (before the call)
 //all invoked methods call the appropriate method of delegate, passing all parameters
 Object result = method.invoke(delegate, args);
 //your additional tracking logic here (after the call)
 return result;
 }
}
A: @Pointcut("execution(* java.sql.Connection.close(..))")
public void closeAspect() {}
@Around("closeAspect()")
public void logAround(ProceedingJoinPoint joinPoint) throws Throwable
{
joinPoint.getThis();//Will return the object on which it(close function) is called
//Do whatever you want to do here
joinPoint.proceed();
//Do whatever you want to do here
} | unknown | |
d17858 | val | Try the following methods:
*
*Zoom In : Ctrl+Shift++
*Zoom Out: Ctrl+-
*Zoom 100%: Ctrl+0
Hope this helps!
A: To zoom in do
ctrl Shift +
To zoom out do
ctrl -
A: Try the following keystrokes
Ctrl + -
A: Ctrl + - doesn't work for me. But a workaround is to change the shortcut for the Zoom Out action.
Right click on the terminal and select Preferences. Then Shortcuts. Then change the Shortcut Key for Zoom Out to whatever you want. I chose Ctrl + Backspace. | unknown | |
d17859 | val | Something like this should get you going...
with dates as (select * from unnest(generate_date_array('2018-01-01','2019-12-31', interval 1 day)) as cal_date),
cal as (select cal_date, cast(format_date('%Y', cal_date) as int64) as year, cast(format_date('%V', cal_date) as int64) as week_num, format_date('%A', cal_date) as weekday_name from dates)
select c1.cal_date, c1.week_num, c1.weekday_name, c2.cal_date as previous_year_same_weekday
from cal c1
inner join cal c2
on c1.year = c2.year+1 and c1.week_num = c2.week_num and c1.weekday_name = c2.weekday_name
The above query uses a week starting on a Monday, you may need to play around with the format_date() arguments as seen here to modify it for your needs.
A: This query returns no results, implying that SHIFT works. The function returns NULL if a year does not have the same number of weeks as its predecessor.
CREATE TEMP FUNCTION P_YEAR(y INT64) AS (
MOD(CAST(y + FLOOR(y / 4.0) - FLOOR(y / 100.0) + FLOOR(y / 400.0) AS INT64), 7)
);
CREATE TEMP FUNCTION WEEKS_YEAR(y INT64) AS (
52 + IF(P_YEAR(y) = 4 OR P_YEAR(y - 1) = 3, 1, 0)
);
CREATE TEMP FUNCTION SHIFT(d DATE) RETURNS DATE AS (
CASE
WHEN WEEKS_YEAR(EXTRACT(ISOYEAR FROM d)) != WEEKS_YEAR(EXTRACT(ISOYEAR FROM d) - 1)
THEN null
WHEN WEEKS_YEAR(EXTRACT(ISOYEAR FROM d)) = 52
THEN DATE_SUB(d, INTERVAL 52 WEEK)
ELSE d
END
);
WITH dates AS (
SELECT d
FROM UNNEST(GENERATE_DATE_ARRAY('2000-12-31', '2020-12-31', INTERVAL 1 DAY)) AS d
)
SELECT
d,
EXTRACT(ISOWEEK FROM d) AS orig_iso_week,
EXTRACT(ISOWEEK FROM SHIFT(d)) AS new_iso_week,
SHIFT(d) AS new_d
FROM dates
WHERE EXTRACT(ISOWEEK FROM d) != EXTRACT(ISOWEEK FROM SHIFT(d))
AND SHIFT(d) IS NOT NULL | unknown | |
d17860 | val | There is an on-select attribute, from there you can call a function on your scope.
<ui-select ng-model="person.selected" theme="select2" on-select="someFunction($item, $model)" ... | unknown | |
d17861 | val | Try this (a lot of guessing involved):
function procesForm_mm() {
var e1 = document.mmForm.element1.value;
var e2 = document.mmForm.element2.value;
result_mm = parseInt(eval(e1).A) + parseInt(eval(e2).A);
document.getElementById("resultfield_mm").innerHTML += result_mm;
}
var Fe = new Object();
Fe.denumire = "Fier";
Fe.A = 56;
Fe.Z = 26;
Fe.grupa = "VIIIB";
Fe.perioada = 4;
var Co = new Object();
Co.denumire = "Cobalt";
Co.A = 59;
Co.Z = 27;
Co.grupa = "IXB";
Fe.perioada = 4;
See it working here: http://jsfiddle.net/KJdMQ/.
It's important to keep in mind that use of the JS eval function has some disadvantages: https://stackoverflow.com/a/86580/674700.
A better approach would be to keep your JS objects in an array and avoid the use of the eval function:
function procesForm_mm() {
var e1 = document.mmForm.element1.value;
var e2 = document.mmForm.element2.value;
result_mm = parseInt(tabelPeriodic[e1].A) + parseInt(tabelPeriodic[e2].A);
document.getElementById("resultfield_mm").innerHTML += result_mm;
}
var tabelPeriodic = [];
tabelPeriodic["Fe"] = new Object();
tabelPeriodic["Co"] = new Object();
var el = tabelPeriodic["Fe"];
el.denumire = "Fier";
el.A = 56;
el.Z = 26;
el.grupa = "VIIIB";
el.perioada = 4;
el = tabelPeriodic["Co"];
el.denumire = "Cobalt";
el.A = 59;
el.Z = 27;
el.grupa = "IXB";
el.perioada = 4;
(See it working here)
Note: This looks like a chemistry application, I assumed that the form is supposed to add some chemical property values for the chemical elements (i.e. A possibly being the standard atomic weight). The form would take as input the names of the JS objects (Fe and Co). | unknown | |
d17862 | val | remove this red marked line and it should be fine. | unknown | |
d17863 | val | Apache Ignite's SQL does not have syntax for reading or writing arrays. You can store arrays in text form if you like (for example, you can store JSON snippets in VARCHAR columns), or you can store arrays as fields in POJO objects using Ignite's Java APIs (they will not be accessible as SQL table columns in this case).
You can create an ARRAY column, but there is currently no way to populate it with an array literal.
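Since the SQL layer cannot express arrays, the text-form workaround amounts to serialize-on-write, parse-on-read in the client. A minimal sketch of the idea (shown in Python; the same applies from Java):

```python
import json

def to_varchar(values):
    """Serialize an array into text suitable for a VARCHAR column."""
    return json.dumps(values)

def from_varchar(text):
    """Parse the stored text back into an array on read."""
    return json.loads(text)

stored = to_varchar([1, 2, 3])   # what you would INSERT into the VARCHAR column
print(stored)                    # the JSON text form
print(from_varchar(stored))      # the array recovered on read
```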
d17864 | val | The pages you are trying to frame forbid being framed and throw a "Refused to display document because display forbidden by X-Frame-Options." error in Chrome.
If they're your pages, then remove the frame limiter. Otherwise, respect the page's author's wishes and DON'T FRAME THEM.
Working StackBlitz
Reference | unknown | |
d17865 | val | ppp.communicate() will not be called unless the process terminates in less than 0.01 seconds, which evidently is not happening. Note that ppp.communicate() will wait for the process to finish, so I'm not sure why you do time.sleep() and ppp.poll(). The following consistently works:
import subprocess
stdout, stderr = subprocess.Popen(
["vim", "-g", "--nofork", "--nonexistent_option", "blah"],
stderr=subprocess.PIPE
).communicate()
print '---stdout---\n%s\n---stderr---\n%s\n---expected---' % (stdout, stderr)
subprocess.Popen(
["vim", "-g", "--nofork", "--nonexistent_option", "blah"]
).wait() | unknown | |
d17866 | val | As my comments have suggested, you should break this down into more specific problems and ask them as separate questions. But here is some information to help you out:
*
*Check out JQuery Sortable
*Check out the Connect Lists option [EDIT] In your JSFiddle, your problem doesn't exist for me (in Chrome)
*This should be a separate question (although you can probably find a duplicate of this already)
*If not solved by your answer to 3, then should be an additional question
For the purposes of a more meaningful answer, here is some sample code showing the use of JQuery Sortable:
<div class="container">
<div class="header">Selected DVDs</div>
<ul id="gallery" class="dvdlist">
<li>DVD 1</li>
<li>DVD 2</li>
<li>DVD 3</li>
</ul>
</div>
<div class="container">
<div class="header">Un-selected DVDs</div>
<ul id="trash" class="dvdlist">
<li>DVD 4</li>
<li>DVD 5</li>
<li>DVD 6</li>
</ul>
</div>
$("#gallery").sortable({
connectWith: "#trash"
});
$("#trash").sortable({
connectWith: "#gallery"
});
You can see this in action here. | unknown | |
d17867 | val | You can use .one(); note the JS in the question is missing a closing parenthesis ) at click()
$(document).ready(function() {
$("#add_app").one("click", function() {
$("#box").append("<p>jQuery is Amazing...</p>");
this.removeAttribute('href');this.className='disabled';
})
});
A: Use this jQuery:
$(document).ready(function() {
$("#add_app").one("click", function() {
$("#box").append("<p>jQuery is Amazing...</p>");
this.removeAttribute('href');
});
});
I'm using the .one() event handler to indicate that you only want the click event to take place once. The rest of the code was more or less correct with a few small changes.
Here's a fiddle: https://jsfiddle.net/0mobshpr/1/
A: you can try this
HTML
<a style="cursor: pointer;" id="add_app" ><strong>Summon new element and disable me</strong></a>
<div id="box"></div>
and in your JS code you can write this i have tested it and it worked fine
$(function(){
$("#add_app").one("click",function(){
$("#box").append("<p>jQuery is Amazing...</p>");
});
});
A: You need to return false when the link is clicked.
try this one JQuery code..
$("#add_app").click(function() {
$("#box").append("<p>jQuery is Amazing...</p>");
return false;
}); | unknown | |
d17868 | val | You can use strtok to tokenize the string at the '&' character, then split the "tokens" at '=' to get the parameter names and values.
The splitting at '=' can either be done with strtok as well (or rather strtok_r) or using strchr and strncpy/strcpy or strndup/strdup.
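The same tokenize-then-split flow, sketched in Python for clarity (this mirrors the strtok/strchr approach and, like it, does not handle escaped separators):

```python
def parse_pairs(query):
    """Split 'k1=v1&k2=v2' into (key, value) pairs.

    str.split('&') plays the role of strtok on '&';
    str.partition('=') plays the role of strchr on '='.
    """
    pairs = []
    for token in query.split("&"):
        key, sep, value = token.partition("=")
        if sep:  # skip tokens that contain no '='
            pairs.append((key, value))
    return pairs

print(parse_pairs("ab=123&cd=456"))  # -> [('ab', '123'), ('cd', '456')]
```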
A: If you are guaranteed that pattern you could use a simple parse function.
If you are guaranteed a max length of key/value then fixed buffer + copy would be the simplest. Else you could first find location of separator, then malloc that size, etc.
As a simple example/concept with fixed size of max 100 i.e.:
#include <stdio.h>
int get_pair(char **p, char *key, char *val)
{
int esc = 0; /* escape level */
char *cp = key; /* current target */
*key = '\0'; /* if either is blank */
*val = '\0';
if (!*p || !**p)
return 0;
/* this could be done more elegant */
while (**p) {
if (**p == '=' && (esc & 1) == 0) {
*cp = '\0'; /* terminate */
cp = val; /* change target */
++(*p);
continue;
} else if (**p == '&' && (esc & 1) == 0) {
++(*p); /* skip & and break */
break;
}
if (**p == '\\') {
if((++esc & 1) == 0) /* if 2, 4, 6 ... \'s */
*cp++ = **p;
} else {
esc = 0;
*cp++ = **p;
}
++(*p);
}
*cp = '\0';
return 1;
}
int main(void)
{
char *data = "ab=123&a\\=42&m\\\\ed\\=\\&do\\\\\\\\=mix";
char key[100];
char val[100];
printf("Parse: %s\n", data);
while (get_pair(&data, key, val))
printf("key: %s\nval: %s\n\n", key, val);
return 0;
}
Output:
Parse: ab=123&a\=42&m\\ed\=\&do\\\\=mix
key: ab
val: 123
key: a=42
val:
key: m\ed=&do\\
val: mix
A: Yes, I have fixed them: while passer give me the parameters, they should use \ to escape the = and &, but the \ itself do not need to escape. While I extracted these parameters, I just replace the \& with &, and \= with '='. If the real value is \\=, just encoded it with \\\=. I do not need to analyse the \ character, just leave them where they are. | unknown | |
d17869 | val | You need to set the root route for the entire application to be served with your blade view, in your case index_extjs.blade.php.
Why? Because when anyone opens up your site, you are loading up that page and hence, loading extjs too. After that page is loaded, you can handle page changes through extjs.
So to achieve this, you need to declare your root route to server this index file:
Route::get('/', function() {
return view('index_extjs');
});
Also you need to revert all extjs config changes back to default, because everything will be relative to your extjs app inside public folder and not relative to the project itself. I hope this makes sense | unknown | |
d17870 | val | The error message does not match your query (there is no userId column in the query) - and it is not related to the size of the table.
Regardless, I would filter with exists:
select w.*
from workers w
where exists (
select 1
from workers w1
where
w1.name = w.name
and w1.jobTitle = w.jobTitle
and w1.description = w.description
and w1.id < w.id
)
For performance, consider an index on (name, jobTitle, description, id).
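The EXISTS pattern is easy to sanity-check against an in-memory SQLite table, using hypothetical sample rows with the same table and column names as above:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE workers (id INTEGER, name TEXT, jobTitle TEXT, description TEXT)")
con.executemany(
    "INSERT INTO workers VALUES (?, ?, ?, ?)",
    [
        (1, "Ann", "dev", "x"),
        (2, "Ann", "dev", "x"),  # duplicate of id 1 -> should be reported
        (3, "Bob", "qa", "y"),   # unique -> should not be reported
    ],
)

# same query shape as above: report every row that has an earlier identical row
dupes = con.execute("""
    SELECT w.id
    FROM workers w
    WHERE EXISTS (
        SELECT 1
        FROM workers w1
        WHERE w1.name = w.name
          AND w1.jobTitle = w.jobTitle
          AND w1.description = w.description
          AND w1.id < w.id
    )
""").fetchall()
print(dupes)  # -> [(2,)]
```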
A: You can do it with an INNER JOIN:
SELECT DISTINCT t1.*
From workers t1
INNER JOIN workers t2 ON t1.name = t2.name and t1.jobTitle = t2.jobTitle and t1.description = t2.description
Where t1.id > t2.Id ;
But I can't figure out how you got your error message; there is no userId in sight.
d17871 | val | The problem has been resolved.
By setting the encoding option on the requests response, we were able to obtain the required value.
Thank you very much.
sub_result = requests.get(sub_url)
sub_result.encoding = 'utf-8'
sub_soup = BeautifulSoup(sub_result.text, 'lxml') | unknown | |
d17872 | val | Try the following
SELECT ID
FROM Bids
WHERE auction = 150028
AND Bid < (SELECT MAX(Bid) FROM Bids WHERE auction = 150028)
ORDER By bid DESC
LIMIT 0,1
With this query you select the ID for a specific auction and get only the ID for the second highest bid.
EDIT:
For getting all auctions try the following query:
SELECT DISTINCT (SELECT b1.ID
FROM Bids b1
WHERE b1.auction = b2.auction
AND b1.Bid < (SELECT MAX(Bid) FROM Bids b3 WHERE b3.auction = b1.auction)
ORDER By b1.bid DESC
LIMIT 0,1) as ID, b2.auction
FROM Bids b2
A: To get last bidder before the winner
SELECT *
FROM
(
SELECT b.auction, MAX(b.bid) bid
FROM bids b JOIN winners w
ON b.auction = w.auction
AND b.bidder <> w.winner
GROUP BY b.auction
) q JOIN bids b
ON q.auction = b.auction
AND q.bid = b.bid
Here is SQLFiddle demo
To get all bidders before the winner
SELECT *
FROM bids b JOIN winners w
ON b.auction = w.auction
AND b.bidder <> w.winner
-- WHERE b.auction = 150028 -- use if you need to fetch for particular auction
ORDER BY b.auction, b.bid DESC
Here is SQLFiddle demo | unknown | |
d17873 | val | Both Scala and Java compile into Java bytecode (.class files) packed as .jar files. It's the same bytecode and can be used from any other JVM language on the same classpath.
So your app can mix Java bytecode produced from any other JVM language, including Java, Groovy, Scala, Clojure, Kotlin, etc. You can put such jars into the /lib dir (if your Grails version supports this; it's the less preferred way) or pull them in as a dependency from a Maven repository (local or remote). Then you can use such classes from your code
See:
*
*https://en.wikipedia.org/wiki/Java_bytecode
*http://grails.github.io/grails-doc/2.4.x/guide/conf.html#dependencyResolution for grails 2.4.x
*http://grails.github.io/grails-doc/latest/guide/conf.html#dependencyResolution for latest grails
PS: there could be some incompatibilities with some classes because of the different nature of the source languages, but in general it should work.
A: Did some simple tests. In hello-world example, just use
groovy -cp the-assembly-SNAPSHOT.jar main-file.groovy
And in the main-file.groovy, should have an import, like in Java/Scala | unknown | |
d17874 | val | Something like this should do it...
$('#ulMenu').children('li').each(function(cat) {
$(this).attr('id', 'cat_' + cat).children('ul').children('li').each(function(sCat) {
$(this).attr('id', 'cat_' + cat + '_' + sCat).children('ul').children('li').each(function(ssCat) {
$(this).attr('id', 'cat_' + cat + '_' + sCat + '_' + ssCat);
});
});
});
Example: http://jsfiddle.net/6YR5p/2/
A: Here's a recursive solution that'll work to any depth you want:
function menuID(el, base) {
base = base || 'cat';
$(el).filter('li').each(function(i) {
this.id = base + '_' + i;
menuID($(this).children('ul').children('li'), this.id);
});
};
menuID('.main');
See http://jsfiddle.net/alnitak/XhnYa/
Alternatively, here's a version as a jQuery plugin:
(function($) {
$.fn.menuID = function(base) {
base = base || 'cat';
this.filter('li').each(function(i) {
this.id = base + '_' + i;
$(this).children('ul').children('li').menuID(this.id);
});
};
})(jQuery);
$('.main').menuID();
See http://jsfiddle.net/alnitak/5hkQU/ | unknown | |
d17875 | val | You have a single list sqd that you are appending scalar values to, so it will always just be a 1-dimensional list. If you want a list of lists (i.e. 2-dimensional matrix), you need to append lists to sqd, not scalar values:
matrix = [[2,0,2],[0,2,0],[2,0,2]]
sqd = []
for i in matrix:
row = [] # create a new list for each row
for e in i:
row.append(e*e) # append scalar to the row list
sqd.append(row) # append row to matrix list
print(sqd)
A: Because you append numbers to sqd inside the inner for e in i loop. Instead, you need to append those numbers to a temp list, then append that list to sqd.
matrix = [[2,0,2],[0,2,0],[2,0,2]]
sqd = []
for i in matrix:
row = []
for e in i:
row.append(e*e)
sqd.append(row)
print(sqd)
Or, as a list-comprehension:
matrix = [[2,0,2],[0,2,0],[2,0,2]]
sqd = [[e * e for e in row] for row in matrix]
print(sqd)
A: You have two for loops here. Your outer loop goes through the rows of the matrix.
Your inner loop goes through the elements of each row.
The inner loop runs through an entire row before the outer loop moves on to the next row.
Now that you understand that flow, notice that your list "sqd" performs only one operation. That append happens on every iteration of the inner loop, so each iteration grows the single flat list by the latest result.
To create the matrix you wish to see, you are going to want some more work between your inner and outer loop.
I would recommend making a new list for every iteration of your outer loop. This new list will be appended by the inner loop, and once the inner loop completes, you can add this new temp list to "sqd". | unknown | |
d17876 | val | Here is the way you can read your CSV file:
func filterMenuCsvData()
{
do {
// This solution assumes you've got the file in your bundle
if let path = Bundle.main.path(forResource: "products_category", ofType: "csv"){
// STORE CONTENT OF FILE IN VARIABLE
let data = try String(contentsOfFile:path, encoding: String.Encoding.utf8)
var rows : [String] = []
var readData = [String]()
rows = data.components(separatedBy: "\n")
for data in 0..<rows.count - 1{
if data == 0 || rows[data].contains(""){
continue
}
readData = rows[data].components(separatedBy: ",")
Category.append(readData[0])
if readData[2] != ""{
Occassions.append(readData[2])
}
selectedOccassionsRadioButtonIndex = Array(repeating: false, count: Occassions.count)
selectedCategoryRadioButtonIndex = Array(repeating: false, count: Category.count)
}
}
} catch let err as NSError {
// do something with Error}
print(err)
}
} | unknown | |
d17877 | val | Use generic/very light views, pass a queryset to the template, and gather any remaining necessary information using custom template tags.
i.e. pass the queryset containing the categories, and for each category use a template tag to fetch the entries for that category
or B: Use custom/heavy views, pass one or more querysets + extra necessary information through the view, and use less template tags to fetch information.
i.e. pass a list of dictionaries that contains the categories + their entries.
The way I see it is that the view is there to take in HTTP requests, gather the required information (specific to what's been requested) and pass the HTTP request and Context to be rendered. Template tags should be used to fetch superflous information that isn't particularly related to the current template, (i.e. get the latest entries in a blog, or the most popular entries, but they can really do whatever you like.)
This lack of definition (or ignorance on my part) is starting to get to me, and I'd like to be consistent in my design and implementation, so any input is welcome!
A: I'd say that your understanding is quite right. The main method of gathering information and rendering it via a template is always the view. Template tags are there for any extra information and processing you might need to do, perhaps across multiple views, that is not directly related to the view you're rendering.
You shouldn't worry about making your views generic. That's what the built-in generic views are for, after all. Once you need to start stepping outside what they provide, then you should definitely make them specific to your use cases. You might of course find some common functionality that is used in multiple views, in which case you can factor that out into a separate function or even a context processor, but on the whole a view is a standalone bit of code for a particular specific use. | unknown | |
d17878 | val | http://jsfiddle.net/vSmjb/2/
you can trigger click(), to make it work
$("#r_private").click()
A: You need to use .prop() instead of .attr() to set the checked status
$("#r_link").prop("checked", true);
Demo: Fiddle
Read: Attributes vs Properties | unknown | |
d17879 | val | Let's assume the answer to my questions was 'yes', you want the three numbers that occur most often and how often they occur.
I've tried a couple of ways of doing this. One way is to sort the numbers and use FREQUENCY to get the frequencies. Then you could use a query like this to get the top 3
=query(A1:B20,"select A,B order by B desc limit 3")
Another way is to get the mode, then the mode excluding the most frequent, then the mode excluding both of the previous ones etc.
=ArrayFormula(mode(if(iserror(match(A$2:A$20,F$1:F1,0)),A$2:A$20)))
starting in F2 - you don't need to sort them.
Then you can just use COUNTIF to get how many times they occur.
Then just put them into a chart. | unknown | |
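The "three most frequent values with their counts" logic is straightforward to verify outside Sheets; a Python sketch with made-up numbers:

```python
from collections import Counter

def top_modes(values, k=3):
    """Return the k most frequent values together with how often they occur."""
    # most_common orders equal counts by first encounter, so ties are stable
    return Counter(values).most_common(k)

print(top_modes([3, 1, 3, 2, 3, 2, 5, 2, 2]))  # -> [(2, 4), (3, 3), (1, 1)]
```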
d17880 | val | SBT currently supports this for debug purposes. You can enable it by adding the below property to your endpoint.
<managed-property>
<property-name>httpProxy</property-name>
<value>IpOfProxy:PortNumberOfProxy</value>
</managed-property>
If you need to enable this for all endpoints, just add this to your sbt.properties directly
sbt.httpProxy=127.0.0.1:8888
We do not support the credentials for now as this is not required by most of the proxies used for debugging like Fiddler or Wireshark.
Can you provide me more details of your environment and I can check if we can enhance the code to work in your environment.
A: Try Ports -> Proxies in Server Document. | unknown | |
d17881 | val | This function
void addnode(node *head, node *tail, int d)
deals with copies of the values of the original pointers head and tail used as argument expressions. Changing the copies does not influence the original pointers.
This function
void addnode(node *head, node *tail, int d)
{
//...
return head;
}
has the return type void. So the compiler should issue an error message because a return statement shall not return a value if the return type of a function is void.
But if you will declare the function the following way
node * addnode(node *head, node *tail, int d)
{
//...
return head;
}
nevertheless it will have the same problem as the first function relative to the pointer tail because again the function will deal with a copy of the original pointer tail and the new value of the pointer will not be returned to the caller.
The definition of this function
void addnode(node **head, node **tail, int d) // Here head and tail is called with & eg: addnode(&head1,&tail1,data);
{
node *newnode = (node *)malloc(sizeof(node));
newnode->data = d;
newnode->next = NULL;
if(head==NULL)
{
*head = *tail = newnode;
}
else
{
(*tail)->next = newnode;
*tail = newnode;
} //No return
}
has a bug. Instead of this if statement
if(head==NULL)
you have to write
if ( *head == NULL )
The function works because the pointers head and tail are passed to the function by reference through pointers to them. So dereferencing the pointers, like for example in this statement
*head = *tail = newnode;
you have a direct access to the original pointers (instead of dealing with copies of the values of the original pointers) and can change them.
But in any case your approach is not good.
You should declare one more structure that will incorporate the pointers head and tail as for example
typedef struct list
{
node *head;
node *tail;
} list;
then in main you can declare an object of the structure type like
list list1 = { .head = NULL, .tail = NULL };
In this case the function addnode will look the following way
int addnode( list *lst, int data )
{
node *newnode = malloc( sizeof( node ) );
int success = newnode != NULL;
if ( success )
{
newnode->data = data;
newnode->next = NULL;
if ( lst->head == NULL )
{
lst->head = newnode;
}
else
{
lst->tail->next = newnode;
}
lst->tail = newnode;
}
return success;
}
and the function can be called like for example
addnode( &list1, data );
or
if ( !addnode( &list1, data ) )
{
puts( "Error. Not enough memory" );
} | unknown | |
d17882 | val | If you are looking to avoid writing separate components or copying your raw SVG file, consider react-inlinesvg;
https://github.com/gilbarbara/react-inlinesvg
import React from "react";
import styled from "styled-components";
import SVG from "react-inlinesvg";
import radio from "./radio.svg";
interface SVGProps {
color: string;
}
const StyledSVG = styled(SVG)<SVGProps>`
width: 24px;
height: 24px;
& path {
fill: ${({ color }) => color};
}
`;
export default function App() {
const color = "#007bff";
return <StyledSVG color={color} src={radio} />;
}
Code Sandbox: https://codesandbox.io/s/stack-56692784-styling-svgs-iz3dc?file=/src/App.tsx:0-414
A: So I looked into this. Turns out you cannot CSS style an SVG image you're loading using the <img> tag.
What I've done is this:
I inlined my SVG like this:
<BurgerImageStyle x="0px" y="0px" viewBox="0 0 38 28.4">
<line x1="0" y1="1" x2="38" y2="1"/>
<line x1="0" y1="14.2" x2="38" y2="14.2"/>
<line x1="0" y1="27.4" x2="38" y2="27.4"/>
</BurgerImageStyle>
Then I used Styled Components to style BurgerImageStyle:
const BurgerImageStyle = styled.svg`
line {
stroke: black;
}
&:hover {
line {
stroke: purple;
}
}
`;
This worked.
A: If you want to have some styling shared across multiple SVGs and you don't want to have an extra dependency on react-inlinesvg you can use this thing instead:
The src prop accepts an SVG React component.
import styled from 'styled-components';
import React, { FC, memo } from 'react';
type StyledIconProps = {
checked?: boolean;
};
const StyledIconWrapper = styled.div<StyledIconProps>`
& svg {
color: ${(p) => p.checked ? '#8761DB' : '#A1AAB9'};
transition: 0.1s color ease-out;
}
`;
export const StyledIcon = memo((props: StyledIconProps & { src: FC }) => {
const { src, ...rest } = props;
const Icon = src;
return (
<StyledIconWrapper {...rest}>
<Icon/>
</StyledIconWrapper>
);
});
And then you can use it like:
import { StyledIcon } from 'src/StyledIcon';
import { ReactComponent as Icon } from 'assets/icon.svg';
const A = () => (<StyledIcon src={Icon} checked={false} />)
A: In addition to what JasonGenX proposed, here is the case where you're using an SVG component (like one generated using SVGR). This is even covered in the styled-components documentation, and in combination with its API it solves the problem seamlessly.
First import your icon
import React from 'react';
import styled from 'styled-components';
import YourIcon from '../../icons/YourIcon';
In my case I added a styled button like so:
const StyledButton = styled.button`
...
`;
// Provide a styled component from YourIcon
// You can also change the line for path and stroke for fill for instance
const StyledIcon = styled(YourIcon)`
${StyledButton}:hover & line {
stroke: #db632e;
}
`;
const YourButton = () => {
return (
<StyledButton>
<StyledIcon /> Click me
</StyledButton>
);
};
export default YourButton;
After that you'll see your icon changes its color. | unknown | |
d17883 | val | My "guidance" on constructing the object would be to avoid this style of inserting each string separately:
menuItems.product[0].product = "prod1";
menuItems.product[0].item[0] = "prod1Item1";
because this involves a lot of writing the same thing over and over, which is more error-prone and less readable/maintainable. I would prefer inserting more coarse-grained objects:
menuItems.product[0] = {
product: "prod1",
item: ["prod1Item1"];
}
Edit: Your edit is asking a completely different question, but it sounds like what you want to do is sort the elements of prodItems based on their "product" properties, then do the same thing for the "items" array inside the elements.
I think the simplest way to do this would be to use Array.sort() with a custom comparison function that returns -1 on the element you want to see at the top. Something like this (hastily written and untested):
function returnDict(product, item) {
prodItems = prodItems.sort(function(a, b) {
if(a.product === product) {
return -1;
} else if(b.product === product) {
return 1;
} else {
return 0;
}
});
prodItems[0].items = prodItems[0].items.sort(function(a, b) {
if(a === item) {
return -1;
} else if(b === item) {
return 1;
} else {
return 0;
}
});
return prodItems;
}
A: var menuItems = [];
menuItems.push({product:"prod1",item:[]})
menuItems[menuItems.length-1].item.push("product1Item1")
menuItems[menuItems.length-1].item.push("product1Item2")
menuItems.push({product:"prod2",item:[]})
menuItems[menuItems.length-1].item.push("product2Item1")
menuItems[menuItems.length-1].item.push("product2Item2")
menuItems[menuItems.length-1].item.push("product2Item3")
menuItems[menuItems.length-1].item.push("product2Item4")
A: var prodItems = [
{
"product": "prod1",
"item": ["prod1Item1", "prod2Item2"]
},
{
"product": "prod2",
"item": ["prod2Item1", "prod2Item2", "prod2Item3", "prod2Item4"]
}
];
alert(prodItems[0].product);
for(var i = 0; i < prodItems.length; i++) {
alert(prodItems[i].product);
}
A: I see your question has been answered, but I want to suggest a small improvement to keep your code more DRY (Do not Repeat Yourself). You can use a simple function that will save you some extra typing when adding new objects.
var prodItems=[];
function addProduct(productName, items){
var product={
product: productName,
items: items
};
prodItems.push(product);
};
//sample use
addProduct("prod2",["prod2Item3", "prod2Item1", "prod2Item2", "prod2Item4"])
This will definitely also become more flexible if you need to change the object structure at some later point. | unknown | |
d17884 | val | Your attribute selector was missing quotes;
$("input:radio[name='cm-fo-ozlkr']").change( function(){
alert('Handler for .change() called.');
});
A: Is the radio button HTML getting generated dynamically e.g. on an ajax refresh? If so, you want to use jQuery live:
$("input:radio[name=cm-fo-ozlkr]").live('change', function () {
alert('Handler for .change() called.');
});
A: Use the click event instead of change.
Also, the correct selector is input[name=cm-fo-ozlkr]:radio.
A: Try this....
$(document).ready(function(){
$("input:radio[name='cm-fo-ozlkr']").change( function(){
alert('Handler for .change() called.');
});
});
A: if you haven't already done so...
$(document).ready(function() {
$("input:radio[name=cm-fo-ozlkr]").change( function(){
alert('Handler for .change() called.');
});
}); | unknown | |
d17885 | val | Here's a way which depends on the id generation strategy used. If Identity is used then this won't do (NH discourages the use of Identity for various reasons anyway), but it would work with every strategy that inserts the id itself:
class JobMap : ClassMap<Job>
{
public JobMap()
{
Id(x => x.Id);
HasMany(x => x.Tasks)
.KeyColumn("JobId");
}
}
class TaskMap : ClassMap<Task>
{
public TaskMap()
{
Table("JobTask");
Id(x => x.Id, "TaskId");
Component(x => x.CompletionInfo, c =>
{
c.Map(x => x.CompletionDate);
c.References(x => x.User, "CompletedByUserId");
});
Join("Task", join =>
{
join.Map(x => x.Name, "Name");
});
}
} | unknown | |
d17886 | val | You can create a relative div and insert your grid and loader wrapper into it:
<div class="grid-wrapper">
<div class="loading-wrapper">
<div class='k-loading-image loading'></div>
<!-- k-loading-image is a standard Kendo class that shows the loading image -->
</div>
@(Html.Kendo().Grid()....)
</div>
css:
.grid-wrapper {
position: relative;
}
.loading-wrapper {
display: none;
position: absolute;
width: 100%;
height: 100%;
z-index: 1000;
}
.loading {
position: absolute;
height: 4em;
top: 50%;
}
Add a class named "edit" (for example) to the imageActionLink's htmlAttributes object, and write a click event handler:
$(document).on('click', '.edit', function (e) {
$('.loading-wrapper').show();
$.ajax({
// ajax opts
success: function(response) {
// insert your edit view received by ajax in right place
$('.loading-wrapper').hide();
}
})
});
A: You can do it like this:
c.Bound(p => p.sID).Template(@<a href=\"YourLink\@item.sID\">Edit</a>).Title("Edit").Encoded(false);
//encoded false = Html.Raw | unknown | |
d17887 | val | I searched around a bit and supposedly git doesn't have any way to ignore single file lines.
Good news: you can do it.
How?
You will use something called hunk in git.
Hunk what?
Hunks allow you to choose which changes you want to add to the staging area before committing them. You can choose any part of the file to add or not (as long as it's a separate change).
Once you have chosen your changes to commit, you "leave" the changes you don't wish to commit in your working directory.
You can then choose whether you want this file to be tracked as modified or not with the help of the assume-unchanged flag.
Here is a sample code for you.
# make any changes to any given file,
# then add the file with the `-p` flag:
git add -p
# now you can choose from the prompted options what you want to do.
# usually you will use `s` to split up your changes.
Use git add -p to add only the parts of the changes you choose to commit; you can pick the changes you wish to stage instead of committing them all.
# once you done editing you will have 2 copies of the file
# (assuming you did not add all the changes)
# one file with the "private" changes in your working dir
# and the "public" changes waiting for commit in the staging area.
Add the file to .gitignore file
This will ignore the file and any changes made to it.
--assume-unchanged
Raise the --assume-unchanged flag on this file so git will stop tracking changes to it.
Using method (2) will tell git to ignore this file even when it's already committed.
It will allow you to modify the file without having to commit it to the repository.
git-update-index
--[no-]assume-unchanged
When this flag is specified, the object names recorded for the paths are not updated. Instead, this option sets/unsets the "assume unchanged" bit for the paths. When the "assume unchanged" bit is on, the user promises not to change the file and allows Git to assume that the working tree file matches what is recorded in the index. If you want to change the working tree file, you need to unset the bit to tell Git. This is sometimes helpful when working with a big project on a filesystem that has very slow lstat(2) system call (e.g. cifs).
Git will fail (gracefully) in case it needs to modify this file in the index e.g. when merging in a commit; thus, in case the assumed-untracked file is changed upstream, you will need to handle the situation manually. | unknown | |
d17888 | val | If I understand your question correctly, your item has three attributes: id,
referenceId and referenceType. You've also defined a global secondary index with a composite primary key of referenceId and referenceType.
Assuming all these attributes are part of the same item, you shouldn't need to read the secondary index before deciding to write to the table. Rather, you could perform a PUT operation on the condition that referenceId and reference type don't yet exist on that item.
ddbClient.putItem({
"TableName": "YOUR TABLE NAME",
"Item": { "PK": "id" },
    "ConditionExpression": "attribute_not_exists(referenceId) AND attribute_not_exists(referenceType)"
})
You may also want to check out this fantastic article on DynamoDB condition expressions.
A: As I understand the question, your table PK is generated "at call time" so it would be different for two different requests with the same reference ID and reference type.
That being the case, your answer is no.
If you've got a requirement for uniqueness of a key set by a foreign system, you should consider using the same key or some mashup of it for your DDB table.
Alternately, modify the foreign system to set the UUID for a given reference ID & type. | unknown | |
d17889 | val | I took a wild guess that maybe the output of "sha1()" in the documention psuedo-code was a hex-string (like sha1() in PHP, etc), and that seems to output the expected password.
Updated code:
string testDateString = "2015-07-08T11:31:53+01:00";
string testNonce = "186269";
string testSecret = "Ok4IWYLBHbKn8juM1gFPvQxadieZmS2";
SHA1CryptoServiceProvider sha1Hasher = new SHA1CryptoServiceProvider();
byte[] hashedDataBytes = sha1Hasher.ComputeHash(Encoding.UTF8.GetBytes(testNonce + testDateString + testSecret));
var hexString = BitConverter.ToString(hashedDataBytes).Replace("-", string.Empty).ToLower();
string Sha1Password = Convert.ToBase64String(Encoding.UTF8.GetBytes(hexString)); | unknown | |
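For comparison, the same construction can be cross-checked in a few lines of Python: concatenate nonce + date + secret, take the SHA1 digest as a lowercase hex string, then Base64-encode that hex string. This is just a sketch using the test values from the snippet above, to confirm the hex-string interpretation.

```python
import hashlib
import base64

nonce = "186269"
date_string = "2015-07-08T11:31:53+01:00"
secret = "Ok4IWYLBHbKn8juM1gFPvQxadieZmS2"

# sha1() as a lowercase hex string, matching PHP's sha1() behavior
hex_digest = hashlib.sha1((nonce + date_string + secret).encode("utf-8")).hexdigest()

# the final password is the Base64 encoding of that hex string
sha1_password = base64.b64encode(hex_digest.encode("utf-8")).decode("ascii")
print(sha1_password)
```

The printed value should match the C# `Sha1Password` above for the same inputs.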
d17890 | val | You can specify the proxyHost/port directly using JVM args https.proxyHost, https.proxyPort
mvn clean install -Dhttps.proxyHost=localhost -Dhttps.proxyPort=3128 exec:java
then just directly create a client of your choice
TopicAdminSettings topicAdminSettings = TopicAdminSettings.newBuilder().build();
TopicAdminClient topicAdminClient = TopicAdminClient.create(topicAdminSettings);
FYI - setting ManagedChannelBuilder.forAddress() here overrides the final target for pubsub (which should be pubsub.googleapis.com:443, not the proxy).
Here is a medium post I put together, as well as a gist specifically for pubsub and pubsub+proxy that requires basic auth headers
finally, just note, it's https.proxyHost even if you're using an HTTP proxy, ref grpc#9561
A: Proxy authentication via HTTP is not supported by Google Pub/Sub, but it can be configured by using GRPC_PROXY_EXP environment variable.
I found the same error that you got here (that's why I assume you are using HTTP) and it got fixed by using what I said.
A: You need to set JVM args: https.proxyHost, https.proxyPort
for proxy authentication an additional configuration is needed before any client creation:
Authenticator.setDefault(new Authenticator() {
    protected PasswordAuthentication getPasswordAuthentication() {
        return new PasswordAuthentication(proxyUsername, proxyPassword.toCharArray());
    }
});
d17891 | val | Unfortunately it is VERY hard to actually guarantee consistent running times even on a dedicated machine versus a VM. If you do want to implement something like this as was mentioned you probably want a VM to keep all the code that will run sandboxed. Usually you don't want to service more than a couple of requests per core so I would say for algorithms that are memory and cpu bound use at most 2 VMs per physical core of the machine.
Although I can only speculate why not try different numbers of VMs per core and see how it performs. Try to aim for about a 90% or higher rate of SLO compliance (or 98-99 if you really need to) and you should be just fine. Again its hard to tell you exactly what to do as a lot of these things require just testing it out and seeing how it does.
A: May be overly simplistic depending on your other requirements which aren't in the question, but;
If the algorithms are CPU bound, simply running it in an isolated VM (or FreeBSD jail, or...) and using the built-in operating system instrumentation would be the simplest.
(Could be as simple as using the 'time' command in unix and setting memory limits with "limit") | unknown | |
d17892 | val | You can try something like this to make divs responsive and their position relative to the size of screen:
<body>
<div class="container">
<div class="row row-centered pos">
<div class="col-lg-8 col-xs-12 col-centered">
<div class="well"></div>
</div>
<div class="col-lg-8 col-xs-12 col-centered">
<div class="well"></div>
</div>
<div class="col-lg-8 col-xs-12 col-centered">
<div class="well"></div>
</div>
</div>
</div>
</body>
A: Problem: The magnifying-glass-image on your website uses the attribute max-width:none which causes the image to be displayed in full size even on smaller screens.
Solution: When making images responsive we usually use the CSS attribute max-width:100% to ensure that big images scale down to fit on the screen (or better: fit into the parent container) | unknown | |
d17893 | val | The directions service will either use the browser's configured language or you can specify the language to use when loading the API.
From the API docs:
Textual directions will be provided using the browser's preferred
language setting, or the language specified when loading the API
JavaScript using the language parameter. (For more information, see
Localization.)
A: Just specify your language using the language parameter when loading the API:
<script type="text/javascript" src="http://maps.google.com/maps/api/js?sensor=false&language=en-US"></script>
See full list of supported languages here : https://developers.google.com/+/web/api/supported-languages | unknown | |
d17894 | val | Here is a sample to get you started, without knowing your schema:
Select
LocationName,
MaxTaxRate
FROM
(select
Max(tax_rate) as MaxTaxRate,
LocationName
from
MyLocations
group by
LocationName
) as MaxTable
You will have to join up with other information, but this is as far as I can go efficiently without more schema info.
A: select location, max(tax_rate) from tax_brackets where tax_rate > x group by location
the tax_rate > x predicate filters out all tax rates of x or lower, so a location with no rate above x has no remaining rows and therefore won't appear in the grouped result.
The group by organizes the remaining rows by location.
The select returns each location together with its maximum qualifying rate.
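A minimal runnable check of that query, using an in-memory sqlite3 database with made-up table and column names (tax_brackets, location, tax_rate) and x = 10:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tax_brackets (location TEXT, tax_rate REAL)")
con.executemany(
    "INSERT INTO tax_brackets (location, tax_rate) VALUES (?, ?)",
    [("A", 5.0), ("A", 12.0), ("B", 3.0), ("C", 11.0), ("C", 15.0)],
)

x = 10
rows = con.execute(
    "SELECT location, MAX(tax_rate) FROM tax_brackets "
    "WHERE tax_rate > ? GROUP BY location ORDER BY location",
    (x,),
).fetchall()
print(rows)  # → [('A', 12.0), ('C', 15.0)]
```

Location B never exceeds x, so it has no rows left after the WHERE filter and drops out of the result entirely, which is exactly the behavior described above.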
d17895 | val | I do not know what resource you're using, but this does not say anything about a -l flag. It suggests
cython -a helloCopy.pyx
This creates a helloCopy.c file, and the -a switch produces an annotated html file of the source code. Pass the -h flag for a complete list of supported flags.
gcc -shared -pthread -fPIC -fwrapv -O2 -Wall -fno-strict-aliasing -I/usr/include/python2.7 -o helloCopy.so helloCopy.c
(Linux)
On macOS I would try to compile with
gcc -shared -undefined dynamic_lookup -fPIC $(python-config --includes) -o helloCopy.so helloCopy.c
to pick up the headers of the standard version of Python (/usr/bin/python is the interpreter binary, not an include directory, so -I/usr/bin/python won't work).
d17896 | val | is there any way to force docker to check if the cached image has been updated on dockerhub?
No.
is there any other workaround i can do to keep a specific tag on my compose.yml file and update the image when needed without needing to edit the file 1000 times a day?
Just use latest and use docker-compose pull to pull the images. | unknown | |
d17897 | val | It sounds like the root of the problem here is that you are misunderstanding the design of Json.Net's LINQ-to-JSON API. A JObject can never directly contain another JObject. A JObject only ever contains JProperty objects. Each JProperty has a name and a value. The value of a JProperty in turn can be another JObject (or it can be a JArray or JValue). Please see this answer to JContainer, JObject, JToken and Linq confusion for more information on the relationships in the JToken hierarchy.
After understanding the hierarchy, the key to getting the output you want is the Descendants() extension method. This will allow you to do a recursive traversal with a simple loop. To get your output you are basically looking for the Path and Value for each leaf JProperty in the entire JSON. You can identify a leaf by checking whether the Value is a JValue.
So, putting it all together, here is how you would do it (I'm assuming C# here since you did not specify a language in your question):
var root = JObject.Parse(json);
var leafProperties = root.Descendants()
.OfType<JProperty>()
.Where(p => p.Value is JValue);
foreach (var prop in leafProperties)
{
Console.WriteLine($"{prop.Path}\t{prop.Value}");
}
Fiddle: https://dotnetfiddle.net/l5Oqfb | unknown | |
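If it helps to see the expected output, the same leaf-path traversal is easy to sketch in plain Python with the standard json module — the path syntax here is only an approximation of Json.NET's Path property:

```python
import json

def leaf_paths(node, path=""):
    """Recursively yield (path, value) for every leaf in parsed JSON."""
    if isinstance(node, dict):
        for key, value in node.items():
            yield from leaf_paths(value, f"{path}.{key}" if path else key)
    elif isinstance(node, list):
        for i, value in enumerate(node):
            yield from leaf_paths(value, f"{path}[{i}]")
    else:
        # scalar leaf (string, number, bool, null) - analogous to JValue
        yield path, node

doc = json.loads('{"a": {"b": 1, "c": [true, {"d": "x"}]}}')
for p, v in leaf_paths(doc):
    print(p, v)
```

Each printed line pairs a dotted/indexed path with its scalar value, mirroring the `prop.Path` / `prop.Value` output of the C# loop above.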
d17898 | val | You can call UserInformation.GetDomainNameAsync to determine if the user is part of a domain. The app must declare the Enterprise Authentication app capability.
To determine if you are on Pro, you might be able to call GetNativeSystemInfo and figure it out from the processor architecture. | unknown | |
d17899 | val | You can still call relationships inside your blade files, so if you have a products relationship setup correctly, you only need to change your index blade to this
<td>{{ $item->products()->count() }}</td>
If you have categories that don't have any products put this in your blade to check before showing the count (Its an if else statement just inline)
<td>{{$item->products ? $item->products()->count() : 'N/A'}}</td>
A: in your blade file add this line before your foreach
@if( $category->posts->count() )
//your @foreach code here
@endif | unknown | |
d17900 | val | As I wrote in the comments, there's two ways of doing this.
The first way is to add a hidden field in your subform to set the current user:
= simple_nested_form_for(@issue) do |f|
= f.input :title
= f.fields_for(:comments) do |cf|
= cf.input(:content)
    = cf.hidden_field :user_id, value: current_user.id
= f.submit
If you do not trust this approach in fear of your users fiddling with the fields in the browser, you can also do this in your controller.
class IssuesController < ApplicationController
before_action :authenticate_user! #devise authentication
def new
@issue = Issue.new
@issue.comments.build
end
def create
@issue = Issue.new(issue_params)
@issue.comments.first.user = current_user
@issue.save
end
private
def issue_params
params.require(:issue).permit(:title, comments_attributes: [:content])
end
end
This way you take the first comment that is created through the form and manually assign the user to it. Then you know for sure that the first created comment belongs to your current user.
A: You could also add user_id as current_user.id when you use params
class IssuesController < ApplicationController
before_action :authenticate_user! #devise authentication
def new
@issue = Issue.new
@issue.comments.build
end
def create
@issue = Issue.new(issue_params)
@issue.save
end
private
def issue_params
    params[:issue][:comments_attributes]["0"][:user_id] = current_user.id
params.require(:issue).permit(:title, comments_attributes: [:content, :user_id])
end
end | unknown |