_id | partition | text | language | title |
---|---|---|---|---|
d18701 | test | You can if you use a static import, like
import static java.lang.System.out;
then you can do
public static void main(String[] args) {
out.println("Hello, World");
}
A: "Universal", as you're using it, means that you don't need to import System. You still need to qualify references to a field in a different class. What if you (as often happens) want a local field named out?
(And Groovy lets you simply use println.)
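For illustration, a small made-up example of that naming clash:
import static java.lang.System.out;

public class Example {
    public static void main(String[] args) {
        String out = "a local variable named out";
        // out.println("Hello");   // no longer compiles: 'out' now refers to the String above
        System.out.println(out);   // fully qualified access still works
    }
}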
A: Using a bare out.println() would then prevent you from using out as an instance name of your own. By referencing System we know that out is not, for example, a File named out. | unknown | |
d18702 | test | I'm working on a similar problem.
Currently, my UICollectionViewController has two instance variables of UICollectionViewFlowLayout, each with the appropriate insets for portrait or landscape.
On rotation, I do this:
-(void)willRotateToInterfaceOrientation:(UIInterfaceOrientation)toInterfaceOrientation
duration:(NSTimeInterval)duration{
if (UIDeviceOrientationIsPortrait(toInterfaceOrientation)) {
[_itemCollection setCollectionViewLayout:_portraitLayout];
[_itemCollection reloadData];
} else {
[_itemCollection setCollectionViewLayout:_landscapeLayout];
[_itemCollection reloadData];
}
}
The only problem I'm having is that it randomly crashes with EXC_BAD_ACCESS on setCollectionViewLayout.
Something like the above might work for you. I'm not sure if this is the right way to do things. I have only recently started using UICollectionViews. | unknown | |
d18703 | test | As a bare minimum this should work for zooming:
mediaPlayer.video().setScale(float factor);
Where factor is like 2.0 for double, 0.5 for half and so on.
In my experience, it can be a bit glitchy, and you probably do need to use it in conjunction with crop - and by the way, cropping does work.
But if you want an interactive zoom, then you build that yourself invoking setCrop and setScale depending on some UI interactions you control.
For the picture-in-picture type of zoom, if you're using VLC itself you do something like this:
vlc --video-filter=magnify --avcodec-hw=none your-filename.mp4
It shows a small overlay where you can drag a rectangle and change the zoom setting.
In theory, that would have been possible to use in your vlcj application by passing arguments to the MediaPlayerFactory:
List<String> vlcArgs = new ArrayList<String>();
vlcArgs.add("--avcodec-hw=none");
vlcArgs.add("--video-filter=magnify");
MediaPlayerFactory factory = new MediaPlayerFactory(vlcArgs);
The problem is that it seems like you need "--avcodec-hw=none" (to disable hardware decoding) for the magnify filter to work - BUT that option is not supported (and does not work) in a LibVLC application.
So unfortunately you can't get that native "magnify" working with a vlcj application.
A final point - you can actually enable the magnify filter if you use LibVLC's callback rendering API (in vlcj this is the CallbackMediaPlayer) as this does not use hardware decoding. However, what you would see is the video with the magnify overlays painted on top but they are not interactive and your clicks will have no effect.
So in short, there's no satisfactory solution for this really.
In theory you could build something yourself, but I suspect it would not be easy. | unknown | |
d18704 | test | It's the job of the broadcast/emit event to send arguments to the listeners, so:
scope.$broadcast('location', request);
scope.$emit('location', request);
Or if you want to call updateMap with a parameter you just need to call it within the listener function:
scope.$on('location', function(event, request) {
updateMap(request);
}); | unknown | |
d18705 | test | When using rotate3d(x, y, z, a) the first three numbers are coordinates that define the axis (vector) of the rotation, and a is the angle of rotation. They are not multipliers of the rotation.
rotate3d(1, 0, 0, 90deg) is the same as rotate3d(0.25, 0, 0, 90deg) and also the same as rotate3d(X, 0, 0, 90deg) for any positive X, because we will have the same vector in all the cases. This is also the same as rotateX(90deg).
.box {
margin:30px;
padding:20px;
background:red;
display:inline-block;
}
<div class="box" style="transform:rotate3d(1,0,0,60deg)"></div>
<div class="box" style="transform:rotate3d(99,0,0,60deg)"></div>
<div class="box" style="transform:rotate3d(0.25,0,0,60deg)"></div>
<div class="box" style="transform:rotate3d(100,0,0,60deg)"></div>
<div class="box" style="transform:rotate3d(-5,0,0,60deg)"></div>
<div class="box" style="transform:rotateX(60deg)"></div>
From this we can also conclude that rotate3d(0, Y, 0, a) is the same as rotateY(a) and rotate3d(0, 0, Z, a) is the same as rotate(a). Note the use of 0 in two of the coordinates, which keeps our vector on a single axis (X or Y or Z).
rotate3d(1,1,0, 45deg) is not the same as rotateX(45deg) rotateY(45deg). The first one performs one rotation around the vector defined by (1,1,0) and the second one performs two consecutive rotations around the X and Y axes.
In other words, rotate3d() is not a combination of the other rotations but a rotation of its own. The other rotations are particular cases of rotate3d() with predefined axes.
The multiplier trick applies to the coordinates if you keep the same angle. rotate3d(x, y, z, a) is equivalent to rotate3d(p*x, p*y, p*z, a) because if you multiply all the coordinates by the same value, you keep the same vector direction and change only the vector's length, which is irrelevant when defining the rotation. Only the direction is relevant.
More details here: https://developer.mozilla.org/en-US/docs/Web/CSS/transform-function/rotate3d
You can clearly notice that using values in the range [-1,1] for x, y, z is enough to define all the combinations. On the other hand, any combination of x, y, z can be reduced to values inside the range [-1,1].
Examples:
.box {
margin:30px;
padding:20px;
background:red;
display:inline-block;
}
<div class="box" style="transform:rotate3d(10,5,-9,60deg)"></div>
<div class="box" style="transform:rotate3d(1,0.5,-0.9,60deg)"></div>
<div class="box" style="transform:rotate3d(25,-5,-8,60deg)"></div>
<div class="box" style="transform:rotate3d(1,-0.2,-0.32,60deg)"></div>
We simply divide by the biggest number. | unknown | |
d18706 | test | OK, finally I found the solution!
To avoid any error during ./mvnw clean package I added -DskipTests and changed the Spring datasource from
spring.datasource.url = jdbc:postgresql://localhost:5432/postgres
to
spring.datasource.url = jdbc:postgresql://db:5432/postgres
and problem solved! | unknown | |
d18707 | test | As per the documentation, a service runs as a separate thread.
Not True
Because a Service always runs in the same process as the application; it is IntentService that uses a worker thread to process the received Intents.
I'm getting a ANR dialog
Probably the Service is doing some network-related work or API calls on the main thread.
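For example, one way to keep such work off the main thread is an IntentService; a rough sketch (the class name and the work inside are made up, not from your code):
import android.app.IntentService;
import android.content.Intent;

// Everything in onHandleIntent runs on the IntentService's own worker thread,
// so long network calls here will not freeze the UI or trigger the ANR dialog.
public class UploadService extends IntentService {

    public UploadService() {
        super("UploadService");
    }

    @Override
    protected void onHandleIntent(Intent intent) {
        // do the network request / long-running work here
    }
}
You would start it with startService(new Intent(context, UploadService.class)) from an Activity or another component.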
In general, to prevent the ANR dialog, launch a separate Thread, or use AsyncTask or an IntentService, for intensive or blocking operations in a Service. | unknown | |
d18708 | test | The following example selects the first color for use as a comparison color. It first adds that color to a new array, then iterates through the rest of the colors comparing the colors looking for the most similar color.
For each color it iterates, it subtracts the red from the second color in the comparison from that of the first, then the green, then the blue. It then finds the absolute values (no negative numbers). After that it adds those values together and divides by three. This number is the average difference between the two colors.
Once it finds the closest color, it selects that color as the new comparison color, removes it from the original array of colors, and pushes it into the sorted array. It does this until there are no colors left.
It definitely needs some work as can be seen when supplied with a larger data set, but it is all I had time for last night. I will continue to work on this until I have something better.
const sort = data => {
data = Object.assign([], data);
const sorted = [data.shift()];
while(data.length) {
const [a] = sorted, c = { d: Infinity };
for(let [i, b] of Object.entries(data)) {
const average = Math.floor((
Math.abs(a.r - b.r) +
Math.abs(a.g - b.g) +
Math.abs(a.b - b.b)
) / 3);
if(average < c.d) {
Object.assign(c, { d: average, i: i });
}
}
sorted.unshift(data.splice(c.i, 1)[0]);
}
return sorted.reverse();
};
const test = (title, data) => {
document.body.insertAdjacentHTML('beforeend', `<h2>${title}</h2>`);
for(let c of data) {
document.body.insertAdjacentHTML('beforeend', `<swatch style="background: rgb(${c.r},${c.g},${c.b})"></swatch>`);
}
return test;
}
const data = [
{"hex": "#fe4670"},{"hex": "#5641bc"},{"hex": "#d53fc3"},{"hex": "#6b5e09"},
{"hex": "#4dd685"},{"hex": "#88d63f"},{"hex": "#eb93f3"},{"hex": "#f44847"},
{"hex": "#32d159"},{"hex": "#6e9bde"},{"hex": "#c3ec64"},{"hex": "#81cce5"},
{"hex": "#7233b6"},{"hex": "#bb90c3"},{"hex": "#728fde"},{"hex": "#7ef46a"},
{"hex": "#f7cfff"},{"hex": "#c8b708"},{"hex": "#b45a35"},{"hex": "#589279"},
{"hex": "#51f1e1"},{"hex": "#b1d770"},{"hex": "#db463d"},{"hex": "#5b02a2"},
{"hex": "#909440"},{"hex": "#6f53fe"},{"hex": "#4c29bd"},{"hex": "#3b24f8"},
{"hex": "#465271"},{"hex": "#6243"}, {"hex": "#dbcc4"}, {"hex": "#187c6"},
{"hex": "#1085e2"},{"hex": "#b521e9"},{"hex": "#4bd36d"},{"hex": "#11bc34"},
{"hex": "#455c47"},{"hex": "#a71bbf"},{"hex": "#988fc2"},{"hex": "#226cfe"}
].reduce((m, e) => (m.push(Object.assign(e, {
r: parseInt(e.hex.substring(1, 3), 16) || 0,
g: parseInt(e.hex.substring(3, 5), 16) || 0,
b: parseInt(e.hex.substring(5, 7), 16) || 0
})), m), []);
const bigdata = (() => {
const data = [];
const rand = () => Math.floor(Math.random() * 256);
for(let i = 0; i < 1000; ++i) {
data.push({r: rand(), g: rand(), b: rand()});
}
return data;
})();
test('Unsorted', data)('Sorted', sort(data))('A Larger Dataset', sort(bigdata));
swatch { display: inline-block; border: 1px solid; margin-left: 1px; margin-top: 1px; width: 20px; height: 20px; }
h2 { margin: 0; font-family: Verdana, Tahoma, "Sans Serif"}
The following snippet does mostly the same thing, except that it searches the sorted array to find the closest match in that array, then inserts the color from the unsorted array next to its closest match.
It doesn't seem to do as good a job of producing a smooth gradient across the swatches, but it does seem to group the colors together better.
const sort = data => {
data = Object.assign([], data);
const sorted = [data.shift()];
while(data.length) {
const a = data.shift(), c = { d: Infinity };
for(let [i, b] of Object.entries(sorted)) {
const average = Math.floor((
Math.abs(a.r - b.r) +
Math.abs(a.g - b.g) +
Math.abs(a.b - b.b)
) / 3);
if(average < c.d) {
Object.assign(c, { d: average, i: i });
}
}
sorted.splice(c.i, 0, a);
}
return sorted.reverse();
};
const test = (title, data) => {
document.body.insertAdjacentHTML('beforeend', `<h2>${title}</h2>`);
for(let c of data) {
document.body.insertAdjacentHTML('beforeend', `<swatch style="background: rgb(${c.r},${c.g},${c.b})"></swatch>`);
}
return test;
}
const data = [
{"hex": "#fe4670"},{"hex": "#5641bc"},{"hex": "#d53fc3"},{"hex": "#6b5e09"},
{"hex": "#4dd685"},{"hex": "#88d63f"},{"hex": "#eb93f3"},{"hex": "#f44847"},
{"hex": "#32d159"},{"hex": "#6e9bde"},{"hex": "#c3ec64"},{"hex": "#81cce5"},
{"hex": "#7233b6"},{"hex": "#bb90c3"},{"hex": "#728fde"},{"hex": "#7ef46a"},
{"hex": "#f7cfff"},{"hex": "#c8b708"},{"hex": "#b45a35"},{"hex": "#589279"},
{"hex": "#51f1e1"},{"hex": "#b1d770"},{"hex": "#db463d"},{"hex": "#5b02a2"},
{"hex": "#909440"},{"hex": "#6f53fe"},{"hex": "#4c29bd"},{"hex": "#3b24f8"},
{"hex": "#465271"},{"hex": "#6243"}, {"hex": "#dbcc4"}, {"hex": "#187c6"},
{"hex": "#1085e2"},{"hex": "#b521e9"},{"hex": "#4bd36d"},{"hex": "#11bc34"},
{"hex": "#455c47"},{"hex": "#a71bbf"},{"hex": "#988fc2"},{"hex": "#226cfe"}
].reduce((m, e) => (m.push(Object.assign(e, {
r: parseInt(e.hex.substring(1, 3), 16) || 0,
g: parseInt(e.hex.substring(3, 5), 16) || 0,
b: parseInt(e.hex.substring(5, 7), 16) || 0
})), m), []);
const bigdata = (() => {
const data = [];
const rand = () => Math.floor(Math.random() * 256);
for(let i = 0; i < 1000; ++i) {
data.push({r: rand(), g: rand(), b: rand()});
}
return data;
})();
test('Unsorted', data)('Sorted', sort(data))('A Larger Dataset', sort(bigdata));
swatch { display: inline-block; border: 1px solid; margin-left: 1px; margin-top: 1px; width: 20px; height: 20px; }
h2 { margin: 0; font-family: Verdana, Tahoma, "Sans Serif"} | unknown | |
d18709 | test | When all else fails read the instructions:
"The approach you are to implement is to store each integer in an array of digits, with one digit per array element. We will be using arrays of length 50, so we will be able to store integers up to 50 digits long."
Tells me that this line:
String[] myInts = new String[50];
Has some significant problems.
Tip 1: Don't call it myInts when it's an array of String objects. Things are hard enough already.
Tip 2: Understand that new String[50] is not going to give you a string sized to 50 characters. It's going to give you space to store references to 50 string objects.
Tip 3: Understand that each line of your input can be solved separately so there is no need to remember anything from the lines you've solved before.
Tip 4: Read one line at a time into String line;
Tip 5: After reading a line solve the display problem in two parts: left side and right side of ='s.
Tip 6: Left side: display the line with spaces replaced with space + space. line.replace(" "," + ");
Tip 7: Right side: use line.split(" ") to split line on space, loop the split array of strings, each of these strings is what you'll be converting to int arrays.
Tip 8: "convert a String of digits into an array of 50 digits" <- Life will be easier if you write a method that does this. Take a String. Return an int[]. private int[] makeIntArray(String num) Take care of the "right shifting/leading zero" problem here.
Tip 9: int and long aren't big enough to hold the bigger numbers so break the number String down to Strings of digits before converting to int[].
Tip 10: Read Splitting words into letters in Java
Tip 11: Read Split string into array of character strings
Tip 12: Once you have single characters you can use Integer.parseInt(singleCharString[index--]) if you broke it down to an array of strings or Character.digit( chr[index--], 10); if you broke it down to an array of characters.
Tip 13: "write some code that allows you to add together two of these numbers or to add one of them to another." Read that carefully and it tells you that you really need to declare two vars: int[] sum = new int[SIZE]; and int[] next = new int[SIZE]; where SIZE is private final static int SIZE = 50;
Tip 14: adding two of these int[] numbers to produce a new int[] would be another good time to make a method. int[] sum(int[] op1, int[] op2)
Tip 15: Since all our int[]'s are right shifted already and always 50 long, start a loop with i at 49 and count down to 0. result[i] = (op1[i] + op2[i] + carry) % 10; and carry = (op1[i] + op2[i] + carry) / 10 will come in handy. (A short sketch of this sum helper follows these tips.)
Tip 16: Test, Test, and Test again. Make small changes then test. Small change, test. Don't just type and pray. Use the debugger if you like but personally I prefer to check values like this System.out.println("line: " + line);//TODO remove debugging code
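Putting Tips 13-15 together, a minimal illustrative sketch of that sum helper could look like the following (it assumes the SIZE constant from Tip 13 and is only a starting point, not the full assignment):
// Both operands are right-aligned arrays of SIZE decimal digits (Tip 13: SIZE = 50).
private static int[] sum(int[] op1, int[] op2) {
    int[] result = new int[SIZE];
    int carry = 0;
    for (int i = SIZE - 1; i >= 0; i--) {
        int digits = op1[i] + op2[i] + carry;
        result[i] = digits % 10;  // keep the low digit in this position
        carry = digits / 10;      // carry the rest one position to the left
    }
    return result; // a leftover non-zero carry would mean the sum overflowed 50 digits
}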
A: Tip #1: Initialize the array with 0. That way, when you process the file, all you have to worry about is to replace the index locations with the digits obtained from your file.
Tip #2: You have to do some repeated division by 10 and modulus operations to extract the digits from the number (or binary shift if you prefer). For example, to split the digits from '27', you can do 27 % 10 (7) and 27 / 10 (2). The key here is to store the result as an int. After all, each digit is a whole number (not a floating point number). For numbers of greater magnitude, you will need to discard the processed digit position so that the number gets smaller. You will know you are done when the quotient of the division is equal to zero. Therefore, you can say in pseudo-code: DIVIDE number by 10 WHILE number > 0 (something like that; see the short example after these tips).
Tip #3: you will have to iterate in reverse to store the digits in the array. If the array has a length of 50, you will start with LENGTH-1 and count down to ZERO.
Tip #4: Use an array of ints not an array of Strings if the problem allows you to. Use Integer.parseInt(String s) to convert the numeric String to a primitive int.
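To make Tips #2 and #3 concrete, here is a tiny illustrative snippet (plain Java; the variable names are made up):
int[] digits = new int[50];      // Tip #1: already holds all zeros
int number = 27;                 // works the same way for any non-negative int
int pos = digits.length - 1;     // Tip #3: fill from the right, counting down
while (number > 0) {
    digits[pos--] = number % 10; // Tip #2: modulus extracts the last digit
    number /= 10;                // division by 10 discards that digit
}
// digits now ends with ..., 0, 2, 7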
A: I think the question says to read each digit into a different array of standard size while you are reading all the words into the same array. And it will also be good to process this line by line,
something like this
Scanner scanner = new Scanner(file);
int[][] myInts = new int[wordSize][];
int mySpot = 0;
while (scanner.hasNextLine()) {
Scanner scanner1 = new Scanner(scanner.nextLine());
while (scanner1.hasNext()) {
String s = scanner1.next();
int i;
for ( i= 0; i < wordSize - s.length(); i++) {
myInts[i][mySpot] = 0;
}
for (int j = 0; j < s.length(); i++, j++) {
myInts[i][mySpot] = Character.digit(s.charAt(j), 10);
}
mySpot++;
}
// do the additions here and add this line to output file
} | unknown | |
d18710 | test | The issue is how you are defining the input_shape. A single element wrapped in parentheses in Python is not a tuple but a scalar value, as you can see below -
input_shape0 = 32
input_shape1 = (32)
input_shape2 = (32,)
print(input_shape0, input_shape1, input_shape2)
32 32 (32,)
Since the Keras functional API's Input needs the input shape as a tuple, you will have to pass it in the form (n,) instead of n.
It's weird that you get a square bracket because when I run the exact same code, I get an error.
TypeError Traceback (most recent call last)
<ipython-input-828-b564be68c80d> in <module>
33
34 if __name__ == '__main__':
---> 35 mlp = MLP((16))
36 mlp.summary()
<ipython-input-828-b564be68c80d> in __init__(self, input_shape, **kwargs)
6 super(MLP, self).__init__(**kwargs)
7 # Add input layer
----> 8 self.input_layer = klayers.Input(input_shape)
9
10 self.dense_1 = klayers.Dense(64, activation='relu')
~/anaconda3/lib/python3.7/site-packages/tensorflow/python/keras/engine/input_layer.py in Input(shape, batch_size, name, dtype, sparse, tensor, **kwargs)
229 dtype=dtype,
230 sparse=sparse,
--> 231 input_tensor=tensor)
232 # Return tensor including `_keras_history`.
233 # Note that in this case train_output and test_output are the same pointer.
~/anaconda3/lib/python3.7/site-packages/tensorflow/python/keras/engine/input_layer.py in __init__(self, input_shape, batch_size, dtype, input_tensor, sparse, name, **kwargs)
89 if input_tensor is None:
90 if input_shape is not None:
---> 91 batch_input_shape = (batch_size,) + tuple(input_shape)
92 else:
93 batch_input_shape = None
TypeError: 'int' object is not iterable
Therefore, the right way to do it (which should fix your model summary as well) is as below -
from tensorflow import keras
from tensorflow.keras import layers as klayers
class MLP(keras.Model):
def __init__(self, input_shape=(32,), **kwargs):
super(MLP, self).__init__(**kwargs)
# Add input layer
self.input_layer = klayers.Input(input_shape)
self.dense_1 = klayers.Dense(64, activation='relu')
self.dense_2 = klayers.Dense(10)
# Get output layer with `call` method
self.out = self.call(self.input_layer)
# Reinitial
super(MLP, self).__init__(
inputs=self.input_layer,
outputs=self.out,
**kwargs)
def build(self):
# Initialize the graph
self._is_graph_network = True
self._init_graph_network(
inputs=self.input_layer,
outputs=self.out
)
def call(self, inputs):
x = self.dense_1(inputs)
return self.dense_2(x)
if __name__ == '__main__':
mlp = MLP((16,))
mlp.summary()
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_19 (InputLayer) (None, 16) 0
_________________________________________________________________
dense_8 (Dense) (None, 64) 1088
_________________________________________________________________
dense_9 (Dense) (None, 10) 650
=================================================================
Total params: 1,738
Trainable params: 1,738
Non-trainable params: 0
_________________________________________________________________ | unknown | |
d18711 | test | If you use MVC it is recommended to encapsulate the setter (make it private). This is because MVC describes that your view does NOT change the model; the controller should do this.
You can use ${model.property = 100}, which requires a public setter.
Although in MVC it is recommended to keep the setter private. | unknown | |
d18712 | test | You can have a separate column in your db table, e.g. is_logged, and when a user logs in for the first time it will be updated to true, so every other attempt will fail.
SELECT *
FROM my_table
WHERE username = 'username' AND password = 'password' AND is_logged != 1;
UPDATE (based on your update)
You can have another column such as last_action (timestamp) and set, in your application logic, a particular time such that if a user hasn't interacted with the website for that long (e.g. 3600 seconds), he will be considered automatically logged out during his next login attempt.
SELECT *
FROM my_table
WHERE username = 'username' AND password = 'password'
AND TIME_TO_SEC(TIMEDIFF(NOW(), last_action)) > 3600;
A: You can add a column in your login table (besides where you have stored the password and username) and set it to one whenever a user logs in. Check that column every time together with the password and username: if it is not one, grant the login; else show an error message. Set it to zero when a user logs out. | unknown | |
d18713 | test | You could for example use HandBrake to encode the video in a smaller format. I'd suggest using H.264 (x264) as it can produce good quality at low bitrates (so the quality/size ratio is good) and is widely supported. If you're completely inexperienced with this you'll probably need to try around a bit with the options, but HandBrake makes it fairly easy. However, remember the smaller the file the worse the quality in the end. Also it's better if you have good quality in the source material as the compression can usually work more efficiently.
A: I'm going to say no. You're limited by the server's net connection and your own.
Also, a fast internet connection can have different meanings to different people. I have 50mb broadband but someone else might consider 8mb as fast. Plus remember, your upload rate is usually around 10x slower than your download rate on your connection. Sometimes more, sometimes less.
The alternative to waiting is to transcode your video to a smaller size before uploading | unknown | |
d18714 | test | The first one is actually two statements, causing you to make two round trips to the database.
The second one will most likely be faster as it is just one statement.
A: Is this really what you are trying to determine? Are you asking if it is faster to make one trip returning two rows or two trips each returning one row? If that is the question, then I agree with the comments -- try it, measure it, and compare.
If you are trying to make this kind of thing efficient, then you should probably look at using bind variables instead. If your question really means what it says, then probably any answer here will do.
A: Any question with "faster" is always going to be dependent on the specifics of your database. I don't really have anything to add over plhmhck and MJB about the fact that you're talking about 2 queries vs. 1 query.
But be aware that the optimizer will usually (always?) rewrite WHERE id IN (1,2) to WHERE (id = 1 OR id = 2) | unknown | |
d18715 | test | The model configuration is accessible through an attribute called "model_config" on the top group that seems to contain the full model configuration JSON that is produced by model.to_json().
import json
import h5py
model_info = h5py.File('model.h5', 'r')
model_config_json = json.loads(model_info.attrs['model_config'])
A: If you save the full model with model.save, you can access each layer and its activation function.
from tensorflow.keras.models import load_model
model = load_model('model.h5')
for l in model.layers:
try:
print(l.activation)
except: # some layers don't have any activation
pass
<function tanh at 0x7fa513b4a8c8>
<function softmax at 0x7fa513b4a510>
Here, for example, softmax is used in the last layer.
If you don't want to import tensorflow, you can also read from h5py.
import h5py
import json
model_info = h5py.File('model.h5', 'r')
model_config = json.loads(model_info.attrs.get('model_config').decode('utf-8'))
for k in model_config['config']['layers']:
if 'activation' in k['config']:
print(f"{k['class_name']}: {k['config']['activation']}")
LSTM: tanh
Dense: softmax
Here, last layer is a dense layer which has softmax activation. | unknown | |
d18716 | test | Good option: use a delegate to dismiss the popover from the content view.
Another option: in iOS 8, you can dismiss the popover by using dismissViewControllerAnimated:completion: from within the popover. Note it doesn't work in iOS 7. | unknown | |
d18717 | test | Your XPath has the wrong syntax; just get rid of the "." and you'll get all the div elements with the attribute class="record":
'//div[@id="records"]//div[@class="record"]'
If (as you said in the comments) you want to get all anchor elements, then try this XPath:
'//div[@id="records"]/div[@class="record"]/a[contains(@href,"firma")]' | unknown | |
d18718 | test | Prefer implicit_cast if it is sufficient in your situation. implicit_cast is less powerful and safer than static_cast.
For example, downcasting from a base pointer to a derived pointer is possible with static_cast but not with implicit_cast. The other way around is possible with both casts. So, when casting from a derived class to a base class, use implicit_cast, because it keeps you safe if you confuse the two classes.
Also keep in mind that implicit_cast is often not needed. Using no cast at all works most of the time when implicit_cast does, that's where 'implicit' comes from. implicit_cast is only needed in special circumstances in which the type of an expression must be exactly controlled, to avoid an overload, for example.
A: I'm copying this over from a comment I made to answer this question in another place.
You can down-cast with static_cast. Not so with implicit_cast. static_cast basically allows you to do any implicit conversion, and in addition the reverse of any implicit conversion (up to some limits; you can't downcast if there is a virtual base class involved). But implicit_cast will only accept implicit conversions: no down-cast, no void*->T*, no U->T if T has only explicit constructors for U.
It is important to note the difference between a cast and a conversion. In the following, no cast is going on:
int a = 3.4;
But an implicit conversion happens from double to int. Things like an "implicit cast" don't exist, since a cast is always an explicit conversion request. The name boost::implicit_cast is a lovely combination of "cast" and "implicit conversion". Now the whole implementation of boost::implicit_cast is this (explained here):
template<typename T> struct identity { typedef T type; };
template<typename Dst> Dst implicit_cast(typename identity<Dst>::type t)
{ return t; }
The idea is to use a non-deduced context for the parameter t. That will avoid pitfalls like the following:
call_const_version(implicit_cast(this)); // oops, wrong!
What was desired is to write it out like this
call_const_version(implicit_cast<MyClass const*>(this)); // right!
The compiler can't deduce what type the template parameter Dst should name, because it first must know what identity<Dst> is, since it is part of the parameter used for deduction. But it in turn depends on the parameter Dst (identity could be explicitly specialized for some types). Now, we got a circular dependency, for which the Standard just says such a parameter is a non-deduced context, and an explicit template-argument must be provided.
A: implicit_cast transforms one type to another, and can be extended by writing implicit cast functions, to cast from one type to another.
e.g.
int i = 100;
long l = i;
and
int i = 100;
long l = implicit_cast<long>(i);
are exactly the same code
however you can provide your own implicit casts for your own types, by overloading implicit_cast like the following
template <typename T>
inline T implicit_cast (typename mpl::identity<T>::type x)
{
return x;
}
See here boost/implicit_cast.hpp for more
Hope this helps
EDIT
This page also talks about implicit_cast New C++
Also, the primary function of static_cast is to perform a non-changing, semantic transformation from one type to another: the type changes but the value remains identical, e.g.
void *voidPtr = . . .
int* intPtr = static_cast<int*>(voidPtr);
I want to look at this void pointer as if it were an int pointer; the pointer doesn't change, and under the covers voidPtr has exactly the same value as intPtr.
With an implicit_cast, the type changes but the value after the transformation can be different too.
A: Implicit conversions, explicit conversions and static_cast are all different things. However, if you can convert implicitly, you can convert explicitly, and if you can convert explicitly, you can cast statically. The same in the other direction is not true, however. There is a perfectly reasonable relationship between implicit casts and static casts: the former is a subset of the latter.
See section 5.2.9.3 of the C++ Standard for details
Otherwise, an expression e can be explicitly converted to a type T using a static_cast of the form static_cast<T>(e) if the declaration T t(e); is well-formed, for some invented temporary variable t (8.5).
C++ encourages use of static_casts because it makes the conversion 'visible' in the program. Usage of a cast itself indicates some programmer-enforced rule which is worth a look, so it is better to use static_cast. | unknown | |
d18719 | test | This is a known issue with this named parameter https://github.com/dart-lang/sdk/issues/24637
A: The solution is to use postFormData() instead of send(). For example:
final req = await HttpRequest
.postFormData(url, {'action': 'delete', 'id': id});
return req.responseText;
A: Future<String> deleteItem(String id) async {
final req = new HttpRequest()
..open('POST', 'server/controller.php')
..send({'action': 'delete', 'id': id});
// wait until the request have been completed
await req.onLoadEnd.first;
// oh yes
return req.responseText;
}
This one is the approach to use when you sometimes need "PUT" or "DELETE". | unknown | |
d18720 | test | Use recover, it behaves as you request.
https://github.com/mxcl/PromiseKit/blob/master/Sources/Promise.swift#L254-L278 | unknown | |
d18721 | test | If you are using PhoneGap then you can certainly work with the file system.
The solution is to encode your array into JSON using the serializeArray() method in jQuery.
Once you encode your array you will get a JSON string which you have to store in a file using PhoneGap's FileWriter() function. For more detail on that visit this link.
I hope it helped you :-).
A: JavaScript cannot tamper with the file system directly. You can do one of two things:
*
*Save the changes onto a cookie and read it the next time
*Send the changes (via AJAX) to a PHP file which would generate a downloadable file on the server, and serve it to the client.
There are probably more solutions, but these are the most reasonable two I can think of.
A: Phonegap (at http://phonegap.com/tools/) is suggesting Lawnchair: http://westcoastlogic.com/lawnchair/
so you'd read that file into data.js instead of storing the data literally there
A: You could also save your array (or better yet its members) using localStorage, a key/value storage that stores your data locally, even when the user quits your app. Check out the guide in the Safari Developer Library.
A: Use Lawnchair to save the array as a JSON object. The JSON object will be there in memory until you clear the data for the application.
If you want to save it permanently to a file on the local filesystem then i guess you can write a phonegap plugin to sent the data across to the native code of plugin which will create/open a file and save it. | unknown | |
d18722 | test | You can try using logarithms, i.e. instead of
P(r, n) = n! / ((n-r)! * r! * r**n)
compute just
log(P(r, n)) = log(n!) - log((n-r)!) - log(r!) - r*log(n)
All factorials are easily computable as logarithms:
log(n!) = log(n) + log(n - 1) + ... + log(2) + log(1)
Once you obtain log(P(r, n)), all you have to do is exponentiate. As a further improvement you can use Stirling's approximation for the factorials in case n is large:
n! ~ (n / e)**n * sqrt(2 * PI * n)
so (ln stands for the natural logarithm)
ln(n!) ~ n * ln(n) - n + ln(n)/2 + ln(2 * PI)/2
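As a small illustration of the log-space evaluation (plain Java; the straightforward summation is shown, and Stirling's formula above could replace it for large n):
// log(P(r, n)) = log(n!) - log((n-r)!) - log(r!) - r*log(n), evaluated in log space
static double logFactorial(int n) {
    double sum = 0.0;
    for (int k = 2; k <= n; k++) {
        sum += Math.log(k);
    }
    return sum;
}

static double probability(int r, int n) {
    double logP = logFactorial(n) - logFactorial(n - r) - logFactorial(r) - r * Math.log(n);
    return Math.exp(logP); // exponentiate only once, at the very end
}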
Edit: If you are looking for the CDF (Cumulative Distribution Function, the probability that the random value is less than or equal to a given x), it can be represented as the regularized incomplete beta function:
https://en.wikipedia.org/wiki/Binomial_distribution
P(X <= r) = I(1 - p, n - r, r + 1)
p = 1/2 in your case
In the case of C++, an implementation can be found in Boost. | unknown | |
d18723 | test | You have to use union types:
export type Objs = Array<Obj | string>;
A: If the entries in the array can be either strings or objects in the form {id: string; labels: string[]}, you can use a union type:
export type Obj = string | {id: string; labels: string[]};
const array: Obj[] = [
"",
{id: "", labels: [""]}
];
Playground Example | unknown | |
d18724 | test | You should switch the current class on the ".quiz-container" element, not on the ".quiz-step" element.
JSFiddle | unknown | |
d18725 | test | I think this code would produce a similar output to what you're looking for, but without the loop you wanted to create, because I'm sure that would require a fair few if statements.
I was curious what this line was doing: currentTime; it doesn't look to me like it would do anything at all.
#include <iostream>
#include <conio.h>
using namespace std;
int main()
{
int hr, min;
char period;
cout << "Enter Hour" << endl;
cin >> hr;
cout << "Enter Minute" << endl;
cin >> min;
min++;
cout << "Enter Period (A or P)" << endl;
cin >> period;
cout << "Current Time: " << hr << ":" << min << " " << period << "M" << endl;
_getch();
}
A: Here is an idea of how to accomplish this in C#.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
namespace ConsoleApplication
{
class Program
{
static void Main(string[] args)
{
double hrs=0;
double mins=0;
DateTime dt = new DateTime(2013, 10, 20, 3, 59, 00);
//dt = DateTime.Now;
Console.WriteLine(dt.ToShortTimeString());
Console.WriteLine("Enter hours:");
hrs = Convert.ToDouble(Console.ReadLine());
dt = dt.AddHours(hrs);
Console.WriteLine("Enter minutes:");
mins = Convert.ToDouble(Console.ReadLine());
dt = dt.AddMinutes(mins);
if (dt.Minute == 59)
dt = dt.AddMinutes(1);
Console.WriteLine(dt.ToShortTimeString());
Console.ReadLine();
}
}
} | unknown | |
d18726 | test | According to the React documentation (https://reactjs.org/docs/error-boundaries.html#component-stack-traces), detailed stack traces can be added with the babel-plugin-transform-react-jsx-source plugin (https://www.npmjs.com/package/babel-plugin-transform-react-jsx-source) | unknown | |
d18727 | test | rangeSeries.columns.template.propertyFields.fill = "colour";
rangeSeries.columns.template.propertyFields.stroke = "colour"; | unknown | |
d18728 | test | what this done function is applied
$.ajax returns a jqXHR object (see first section after the configuration parameter description) wich implements the promise interface and allows you to add callbacks and get notified of changes of the Ajax call.
whats being used as $(this) in example
Inside the callbacks for $.ajax, this refers to the object that the context option points to in the configuration or, if context was not set, to an object representing the ajax settings used in the call. In this case it refers to document.body.
context: This object will be made the context of all Ajax-related callbacks. By default, the context is an object that represents the ajax settings used in the call ($.ajaxSettings merged with the settings passed to $.ajax).
This and more is all explained in the documentation: http://api.jquery.com/jQuery.ajax/
as the ajax function can't set global variables
That is not correct, any function can set global variables. The problem with asynchronous functions is that you are likely accessing the variable before it was set.
can't the be set in this done too
See above
cant I return value out of done function either
You can return a value (as in putting a return statement inside the callback), but you cannot return it to your code, since jQuery is calling the callback internally and just ignoring the return value. | unknown | |
d18729 | test | Add a wrapper around your code:
document.addEventListener("DOMContentLoaded", function(event) {
// scroll reveal
const ScrollReveal = require('scrollreveal');
// scroll reveal profile listings
if (!/(?:^|\s)ie\-[6-9](?:$|\s)/.test(document.body.className)) {
window.sr = new ScrollReveal({reset: false});
sr.reveal('[data-reveal="true"]', {duration: 1000});
}
}); | unknown | |
d18730 | test | I found that ":hover" is unpredictable in iPhone/iPad Safari. Sometimes tapping an element makes that element ":hover", while sometimes it drifts to other elements.
For the time being, I just have a "no-touch" class at body.
<body class="yui3-skin-sam no-touch">
...
</body>
And have all CSS rules with ":hover" below ".no-touch":
.no-touch my:hover{
color: red;
}
Somewhere in the page, I have javascript to remove no-touch class from body.
if ('ontouchstart' in document) {
Y.one('body').removeClass('no-touch');
}
This doesn't look perfect, but it works anyway.
A: :hover isn't the issue here. Safari for iOS follows a very odd rule. It fires mouseover and mousemove first; if anything is changed during these events, 'click' and related events don't get fired:
mouseenter and mouseleave appear to be included, though they're not specified in the chart.
If you modify anything as a result of these events, click events won't get fired. That includes something higher up in the DOM tree. For example, this will prevent single clicks from working on your website with jQuery:
$(window).on('mousemove', function() {
$('body').attr('rel', Math.random());
});
Edit: For clarification, jQuery's hover event includes mouseenter and mouseleave. These will both prevent click if content is changed.
A: There are basically three scenarios:
*
*User only has a mouse/pointer device and can activate :hover
*User only has a touchscreen, and can not activate :hover elements
*User has both a touchscreen and a pointer device
The originally accepted answer works great if only the first two scenarios are possible, where a user has either pointer or touchscreen. This was common when the OP asked the question 4 years ago. Several users have pointed out that Windows 8 and Surface devices are making the third scenario more likely.
The iOS solution to the problem of not being able to hover on touchscreen devices (as detailed by @Zenexer) is clever, but can cause straightforward code to misbehave (as noted by the OP). Disabling hover only for touchscreen devices means that you will still need to code a touchscreen friendly alternative. Detecting when a user has both pointer and touchscreen further muddies the waters (as explained by @Simon_Weaver).
At this point, the safest solution is to avoid using :hover as the only way a user can interact with your website. Hover effects are a good way of indicating that a link or button is actionable, but a user should not be required to hover an element to perform an action on your website.
Re-thinking “hover” functionality with touchscreens in mind has a good discussion about alternative UX approaches. The solutions provided by the answer there include:
*
*Replacing hover menus with direct actions (always visible links)
*Replacing on-hover menus with on-tap menus
*Moving large amounts of on-hover content into a separate page
Moving forward, this will probably be the best solution for all new projects. The accepted answer is probably the second best solution, but be sure to account for devices that also have pointer devices. Be careful not to eliminate functionality when a device has a touchscreen just to work around iOS's :hover hack.
A: A better solution, without any JS, css class and viewport check: you can use Interaction Media Features (Media Queries Level 4)
Like this:
@media (hover) {
/* properties */
my:hover {
color: red;
}
}
iOS Safari supports it
More about:
https://www.jonathanfielding.com/an-introduction-to-interaction-media-features/
A: The browser feature detection library Modernizer includes a check for touch events.
Its default behavior is to apply classes to your html element for each feature being detected. You can then use these classes to style your document.
If touch events are not enabled Modernizr can add a class of no-touch:
<html class="no-touch">
And then scope your hover styles with this class:
.no-touch a:hover { /* hover styles here */ }
You can download a custom Modernizr build to include as few or as many feature detections as you need.
Here's an example of some classes that may be applied:
<html class="js no-touch postmessage history multiplebgs
boxshadow opacity cssanimations csscolumns cssgradients
csstransforms csstransitions fontface localstorage sessionstorage
svg inlinesvg no-blobbuilder blob bloburls download formdata">
A: Some devices (as others have said) have both touch and mouse events. The Microsoft Surface for example has a touch screen, a trackpad AND a stylus which actually raises hover events when it is hovered above the screen.
Any solution that disables :hover based on the presence of 'touch' events will also affect Surface users (and many other similar devices). Many new laptops are touch and will respond to touch events - so disabling hovering is a really bad practice.
This is a bug in Safari, there's absolutely no justification for this terrible behavior. I refuse to sabotage non iOS browsers because of a bug in iOS Safari which has apparently been there for years. I really hope they fix this for iOS8 next week but in the meantime....
My solution:
Some have suggested using Modernizr already, well Modernizr allows you to create your own tests. What I'm basically doing here is 'abstracting' the idea of a browser that supports :hover into a Modernizr test that I can use throughout my code without hardcoding if (iOS) throughout.
Modernizr.addTest('workinghover', function ()
{
// Safari doesn't 'announce' to the world that it behaves badly with :hover
// so we have to check the userAgent
return navigator.userAgent.match(/(iPad|iPhone|iPod)/g) ? false : true;
});
Then the css becomes something like this
html.workinghover .rollover:hover
{
// rollover css
}
Only on iOS will this test fail and disable rollover.
The best part of such abstraction is that if I find it breaks on a certain android or if it's fixed in iOS9 then I can just modify the test.
A: Adding the FastClick library to your page will cause all taps on a mobile device to be turned into click events (regardless of where the user clicks), so it should also fix the hover issue on mobile devices. I edited your fiddle as an example: http://jsfiddle.net/FvACN/8/.
Just include the fastclick.min.js lib on your page, and activate via:
FastClick.attach(document.body);
As a side benefit, it will also remove the annoying 300ms onClick delay that mobile devices suffer from.
There are a couple of minor consequences to using FastClick that may or may not matter for your site:
*
*If you tap somewhere on the page, scroll up, scroll back down, and then release your finger on the exact same position that you initially placed it, FastClick will interpret that as a "click", even though it's obviously not. At least that's how it works in the version of FastClick that I'm currently using (1.0.0). Someone may have fixed the issue since that version.
*FastClick removes the ability for someone to "double click".
A: The JQuery version
in your .css use
.no-touch .my-element:hover
for all your hover rules
include JQuery and the following script
function removeHoverState(){
$("body").removeClass("no-touch");
}
Then in body tag add
class="no-touch" ontouchstart="removeHoverState()"
as soon as the ontouchstart fires the class for all hover states is removed
A: I agree disabling hover for touch is the way to go.
However, to save yourself the trouble of re-writing your css, just wrap any :hover items in @supports not (-webkit-overflow-scrolling: touch) {}
.hover, .hover-iOS {
display:inline-block;
font-family:arial;
background:red;
color:white;
padding:5px;
}
.hover:hover {
cursor:pointer;
background:green;
}
.hover-iOS {
background:grey;
}
@supports not (-webkit-overflow-scrolling: touch) {
.hover-iOS:hover {
cursor:pointer;
background:blue;
}
}
<input type="text" class="hover" placeholder="Hover over me" />
<input type="text" class="hover-iOS" placeholder="Hover over me (iOS)" />
A: Instead of only having hover effects when touch is not available I created a system for handling touch events and that has solved the problem for me. First, I defined an object for testing for "tap" (equivalent to "click") events.
touchTester =
{
touchStarted: false
,moveLimit: 5
,moveCount: null
,isSupported: 'ontouchend' in document
,isTap: function(event)
{
if (!this.isSupported) {
return true;
}
switch (event.originalEvent.type) {
case 'touchstart':
this.touchStarted = true;
this.moveCount = 0;
return false;
case 'touchmove':
this.moveCount++;
this.touchStarted = (this.moveCount <= this.moveLimit);
return false;
case 'touchend':
var isTap = this.touchStarted;
this.touchStarted = false;
return isTap;
default:
return true;
}
}
};
Then, in my event handler I do something like the following:
$('#nav').on('click touchstart touchmove touchend', 'ul > li > a'
,function handleClick(event) {
if (!touchTester.isTap(event)) {
return true;
}
// touch was click or touch equivalent
// nromal handling goes here.
});
A: Thanks @Morgan Cheng for the answer, however I've slightly modified the JS function for getting the "touchstart" (code taken from @Timothy Perez answer), though, you need jQuery 1.7+ for this
$(document).on({ 'touchstart' : function(){
//do whatever you want here
} });
A: Given the response provided by Zenexer, a pattern that requires no additional HTML tags is:
jQuery('a').on('mouseover', function(event) {
event.preventDefault();
// Show and hide your drop down nav or other elem
});
jQuery('a').on('click', function(event) {
if (jQuery(event.target).children('.dropdown').is(':visible')) {
// Hide your dropdown nav here to unstick
}
});
This method fires off the mouseover first, the click second.
A: For those with the common use case of disabling :hover events on iOS Safari, the simplest way is to use a min-width media query for your :hover rules that stays above the screen width of the devices you are avoiding. Example:
@media only screen and (min-width: 1024px) {
.my-div:hover { // will only work on devices larger than iOS touch-enabled devices. Will still work on touch-enabled PCs etc.
background-color: red;
}
}
A: For someone still looking for a solution because none of the above worked, try this:
@media (hover: hover)
{
.Link:hover
{
color:#00d8fe;
}
}
This hover pseudo-class will only be applied for devices with pointers, and it works normally on touch devices with just .active classes.
A: Just look at the screen size....
@media (min-width: 550px) {
.menu ul li:hover > ul {
display: block;
}
}
A: heres the code you'll want to place it in
// a function to parse the user agent string; useful for
// detecting lots of browsers, not just the iPad.
function checkUserAgent(vs) {
var pattern = new RegExp(vs, 'i');
return !!pattern.test(navigator.userAgent);
}
if ( checkUserAgent('iPad') ) {
// iPad specific stuff here
} | unknown | |
d18731 | test | UPDATED: if I understand right, you want something like this
var result =
from company in db.Companies
from notice in company.Notices
join request in db.Requests.Where(z => z.IsApproved &&
z.Status.Status == "Active") on notice.SubcategoryId equals request.Subcategoryid
group new {notice, request } by company into gr
select new {gr.Key, Value = gr.ToList() }
UPDATE2
For your SQL it seems like this:
from company in db.Companies
join notice in db.Notices on company.CompanyId equals notice.CompanyId
join account in db.Account on company.Uid equals account.UserId
join request in db.Requests on notice.SubcategoryId equals request.SubcategoryId
group new {notice, request} by new {company, account} into g
select new {g.Key, value = g} | unknown | |
d18732 | test | Well since all the comments turned out to be equally useless (no offence), I've decided to take a different approach.
Instead of sigaction I've used the signal system call. I've also modified a few other things. Since printf is not a signal-safe function I've introduced a global variable named shouldStop which is by default set to false and then changed to true with SIGTERM handler. The printing is then done inside the main function.
Here's the code:
#include <signal.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>
/* iAssert is an assertion helper from the original program (not shown here). */
int sigUsr1Count = 0;
int sigUsr2Count = 0;
bool shouldStop = false;
static void sighandler(int signum){
switch(signum){
case SIGUSR1:
sigUsr1Count++;
break;
case SIGUSR2:
sigUsr2Count++;
break;
}
}
static void termhandler(int signum){
shouldStop = true;
}
int main(int argc, char ** argv){
pid_t mypid = getpid();
fprintf(stderr, "My PID is %d\n", mypid);
iAssert(SIG_ERR != signal(SIGUSR1, sighandler), "signal1 failed");
iAssert(SIG_ERR != signal(SIGUSR2, sighandler), "signal2 failed");
iAssert(SIG_ERR != signal(SIGTERM, termhandler), "signal3 failed");
do{
}while(!shouldStop);
printf("%d %d\n", sigUsr1Count, sigUsr2Count);
return 0;
}
And it works fine the way I wanted it.
Thanks anyway. | unknown | |
d18733 | test | In my experience, the requirements on a software solution tend to evolve over time well beyond the initial requirement set.
By following architectural best practices now, you will be much better able to accommodate changes to the solution over its entire lifetime.
The Respository pattern and ViewModels are both powerful, and not very difficult or time consuming to implement. I would suggest using them even for small projects.
A: Yes, you still want to use a repository and view models. Both of these tools allow you to place code in one place instead of all over the place and will save you time. More than likely, it will save you copy paste errors too.
Moreover, having these tools in place will allow you to make expansions to the system easier in the future, instead of having to pour through all of the code which will have poor readability.
Separating your concerns will lead to less code overall, a more efficient system, and smaller controllers / code sections. View models and a repository are not heavily intrusive to implement. It is not like you are going to implement a controller factory or dependency injection.
A: ViewModels: Yes
I only see bad points when passing an EF Entities directly to a view:
*
*You need to do manual whitelisting or blacklisting to prevent over-posting and mass assignment
*It becomes very easy to accidentally lazy load extra data from your view, resulting in select N+1 problems
*In my personal opinion, a model should closely resembly the information displayed on the view and in most cases (except for basic CRUD stuff), a view contains information from more than one Entity
Repositories: No
The Entity Framework DbContext already is an implementation of the Repository and Unit of Work patterns. If you want everything to be testable, just test against a separate database. If you want to make things loosely coupled, there are ways to do that with EF without using repositories too. To be honest, I really don't understand the popularity of custom repositories. | unknown | |
d18734 | test | Remember that the services run under a different user profile (can be a LOCAL_SERVICE, NETWORK_SERVICE, etc.) If you'd like them to be the same, run the service under your user profile (You can specify this ServiceProcessInstaller.Account property when you create the installer, or in the Services manager of windows). | unknown | |
d18735 | test | After learning TCPDF more, this is the conclusion:
Cell() and MultiCell() are not intended to be used for just outputting a string and fitting its length. Instead, Write() and WriteHtml() should be used. Cells exist for the case where you actually want to control the dimensions of the field manually.
Nevertheless, in some cases one may want to compute the width of the cell such that it takes into account the size of the text inside. For this purpose exists GetStringWidth(). (Unfortunately for me it errs from time to time. Maybe I'm not aware of something.)
A: Have an internal "Legacy" application that uses TCPDF to generate a PDF of a checklist. We recently moved from creating a giant string of HTML that described a table, created using the $pdf->writeHTML($myHTMLString); method, to using the MultiCell() methods.
However, we ran into an issue where some text in a description cell would need to run on to a second line, which threw off our layout. As a fix, we created an if block based on 2 variables, one for the string width, the other for the actual cell width. (We had 2 instances where the cell width might vary.)
If block example:
// Get width of string
$lWidth = $pdf->GetStringWidth(strip_tags($clItem['description']));
// Determine width of cell
$oadWidth = (.01*$width[0])*186;
if ($lWidth < $oadWidth) {
$cHeight = 3.5;
} else {
$cHeight = 7;
}
We then used the variable created by the if block in the MultiCell() like this
$pdf->MultiCell((.01*$width[0])*186, $cHeight, strip_tags($clItem['description']), 1, 'L', 1, 0, '', '', true);
We reused the $cHeight variable for the height params in the other sibling cells so each row of cells had a uniform height. You could most likely reuse this method with any of the other right functions that have a height parameter in TCPDF. Thanks to @shealtiel for the original reference to GetStringWidth() | unknown | |
d18736 | test | (*objp)(x, y, z); would be the obvious alternative. I'm not sure if you consider that nicer or not though.
A: You can use (*objp)(x, y, z); as an alternative.
A: Do it in two lines;
MyType& functorRef = *objp; // Use the appropriate type name.
functorRef(x, y, z);
Or in C++11 you can use auto.
auto& functorRef = *objp;
functorRef(x, y, z); | unknown | |
d18737 | test | You should use a bootstrap action to change Hadoop configuration.
The following AWS doc can be referenced for the Hadoop configuration bootstrap action.
http://docs.aws.amazon.com/ElasticMapReduce/latest/DeveloperGuide/emr-plan-bootstrap.html#PredefinedbootstrapActions_ConfigureHadoop
This blog article that I bookmarked also has some info.
http://sujee.net/tech/articles/hadoop/amazon-emr-beyond-basics/
For changing the cluster size dynamically, one option is to use the AWS SDK.
http://docs.aws.amazon.com/ElasticMapReduce/latest/DeveloperGuide/calling-emr-with-java-sdk.html
Using the following interface you can modify the instance count of the instance group.
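For example, a rough sketch with the AWS SDK for Java (v1-style class names; the instance group id and the target count below are placeholders to replace with your own values):
import com.amazonaws.services.elasticmapreduce.AmazonElasticMapReduce;
import com.amazonaws.services.elasticmapreduce.AmazonElasticMapReduceClientBuilder;
import com.amazonaws.services.elasticmapreduce.model.InstanceGroupModifyConfig;
import com.amazonaws.services.elasticmapreduce.model.ModifyInstanceGroupsRequest;

public class ResizeEmrCluster {
    public static void main(String[] args) {
        AmazonElasticMapReduce emr = AmazonElasticMapReduceClientBuilder.defaultClient();

        // Grow or shrink one instance group (e.g. the task group) to a new size.
        ModifyInstanceGroupsRequest request = new ModifyInstanceGroupsRequest()
                .withInstanceGroups(new InstanceGroupModifyConfig()
                        .withInstanceGroupId("ig-XXXXXXXXXXXX")   // placeholder id
                        .withInstanceCount(4));                   // new target size

        emr.modifyInstanceGroups(request);
    }
}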
http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/elasticmapreduce/AmazonElasticMapReduce.html | unknown | |
d18738 | test | We can use dplyr. After grouping by 'ID', we slice the rows based on the even index returned by seq
library(dplyr)
Input %>%
group_by(ID) %>%
slice(seq(2, n(), by =2))
# Sample X ID
# <int> <dbl> <chr>
#1 2 -1.294728 EABE_D4
#2 4 -1.287245 EABE_D4
#3 2 -1.315783 EABE_D5
#4 4 -1.304670 EABE_D5
Or we can use data.table for efficiency
library(data.table)
setDT(Input)[Input[, .I[seq(2, .N, by = 2)], by = ID]$V1]
Or with ave from base R, we group by 'ID', apply the modulo operator %% with y as 2, convert to logical by negating (!) and with this logical vector, we subset the rows.
Input[with(Input, !ave(Sample, ID, FUN = function(x) x %%2)),]
# Sample X ID
#15919 2 -1.315783 EABE_D5
#15921 4 -1.304670 EABE_D5
#15924 2 -1.294728 EABE_D4
#15926 4 -1.287245 EABE_D4
A: This might be inefficient. However, you can do this in one more way using lapply
do.call(rbind, lapply(split(df, df$ID), function(x) x[seq(2, nrow(x), by=2),]))
# Sample X ID
#EABE_D4.15924 2 -1.294728 EABE_D4
#EABE_D4.15926 4 -1.287245 EABE_D4
#EABE_D5.15919 2 -1.315783 EABE_D5
#EABE_D5.15921 4 -1.304670 EABE_D5
splitting the dataframe based on ID, then selecting every 2nd row in each group, and finally rbinding them using do.call to convert the returned list into a dataframe.
If you do not want the row names, you can take the dataframe in one variable (say a) and then
rownames(a) <- NULL | unknown | |
d18739 | test | Currently, the purge API is the recommended way to invalidate cached content on-demand.
Another approach for your scenario could be to look at Workers and Workers KV, and combine it with the Cloudflare API. You could have:
*
*A Worker reading the JSON from the KV and returning it to the user.
*When you have a new version of the JSON, you could use the API to create/update the JSON stored in the KV.
This setup can perform very well, since the Worker code in (1) runs in every Cloudflare datacenter and returns quickly to users. It is also important to note that KV is "eventually consistent" storage, so feasibility depends on your specific application. A minimal sketch of the Worker described in (1) follows. | unknown | 
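A rough sketch of such a Worker, assuming a KV namespace bound to the Worker as CONFIG_KV and a key called "data.json" (both names are placeholders):
addEventListener('fetch', event => {
  event.respondWith(handleRequest())
})

async function handleRequest() {
  // Read the latest JSON that was written to KV via the API.
  const json = await CONFIG_KV.get('data.json')
  if (json === null) {
    return new Response('Not found', { status: 404 })
  }
  return new Response(json, {
    headers: { 'Content-Type': 'application/json' }
  })
}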
d18740 | test | Within your project could the toolkit be in a subfolder which is managed by svn?
A simplistic approach (which misses out on the opportunity to import the svn version history via git-svn, which basicxman mentions) would be to manage the whole project using git, including the contents of the folders updated by svn. You may wish to exclude the .svn directories though.
Try adding a line to .git/info/exclude for your project to ignore files or folders called .svn
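For example, a single unanchored pattern in .git/info/exclude is enough to keep every .svn directory (at any depth) out of git status and git add:
.svn/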
A: You are making changes to the codeplex code. Make those changes on a branch. Keep master up to date with the codeplex svn repo using git-svn. After an update, merge master into your changes branch.
Then share your codeplex repo with your main repo as a submodule as suggested before. | unknown | |
d18741 | test | Create an empty __init__.py file in the app folder so Python treats the directory as a package. Then do:
from app.models import Result
optionResult = someTestsThatRuns
reverseResult = someOtherTestThatRuns
c = Result()
c.options = optionResult
c.reverse = reverseResult
c.save()
That will save 'c' to the database.
Note that Django's test suite can create its own test database, which runs tests on a separate database. You can read more about Django testing here.
https://docs.djangoproject.com/en/dev/topics/testing/?from=olddocs
A: FIXED As David mentioned in the comments, the environment variable was indeed not set. Since I was in Windows, what I had to do was Start -> Computer -> Properties -> advanced System Settings -> Environment Variables -> add Environment Variable.
There I added 'DJANGO_SETTINGS_MODULE' and its location as 'C:\path\to\your\settings.py'. Afterwards, in the command prompt, I had to do the following:
enter python
>import sys
>import os
>sys.path.append(r"C:\location\to\settings.py")
>from django.core.management import setup_environ
>import settings
>setup_environ(settings)
>sys.path.append(os.getcwd() + '\\from\\current\\to\\models.py')
>from models import Result
This is all explained at http://www.b-list.org/weblog/2007/sep/22/standalone-django-scripts/ , though I did find it somewhat difficult to understand. Another problem I had with importing my models is that there were TWO folders named exactly the same (djangoSite), so when importing, the computer had some issues trying to figure out which one. I had to rename, remove, reset environment variable and recheck all of the paths I have throughout my files =/
I am sorry if my explanations aren't the best, I barely understood what I did, but I do hope this will help others in the future | unknown | 
d18742 | test | Use the following code:
private void button2_GotFocus(object sender, RoutedEventArgs e)
{
button1.RaiseEvent(new RoutedEventArgs(LostFocusEvent, button1));
}
private void button1_LostFocus(object sender, RoutedEventArgs e)
{
}
If this doesn't solve your problem, post your code and state your problem and purpose clearly so that you can get a better solution.
A: You can just call the event handler method of button1's LostFocus event in button2_GotFocus:
private void button2_GotFocus(object sender, RoutedEventArgs e)
{
button1_LostFocus(this.button1, null);
}
A: Try this
private void button2_GotFocus(object sender, RoutedEventArgs e)
{
button1_LostFocus(sender, e);
} | unknown | |
d18743 | test | Yes there seems to be float vs double confusion. You pass in a double array, but pretty much all of the asm code expects floats: you use the ss instructions and you assume size 4 and you return a float too.
– Jester
There was an issue with floats and doubles! I really appreciate both of your responses. I was confused because the instructor had told us to use floats in our assembly program he had used doubles in an example driver. I spoke with the instructor and he had fixed his instructions. I thank you again! – Tyler Weaver
A: Here is the algorithm, written out below as a small Fortran program.
My suggestion is to write this program in C, have the compiler output the corresponding asm,
and then use that asm output as a guide when writing your own program.
! ----------------------------------------------------------
! This program reads a series of input data values and
! computes their arithmetic, geometric and harmonic means.
! Since geometric mean requires taking n-th root, all input
! data item must be all positive (a special requirement of
! this program , although it is not absolutely necessary).
! If an input item is not positive, it should be ignored.
! Since some data items may be ignored, this program also
! checks to see if no data items remain!
! ----------------------------------------------------------
PROGRAM ComputingMeans
IMPLICIT NONE
REAL :: X
REAL :: Sum, Product, InverseSum
REAL :: Arithmetic, Geometric, Harmonic
INTEGER :: Count, TotalNumber, TotalValid
Sum = 0.0 ! for the sum
Product = 1.0 ! for the product
InverseSum = 0.0 ! for the sum of 1/x
TotalValid = 0 ! # of valid items
READ(*,*) TotalNumber ! read in # of items
DO Count = 1, TotalNumber ! for each item ...
READ(*,*) X ! read it in
WRITE(*,*) 'Input item ', Count, ' --> ', X
IF (X <= 0.0) THEN ! if it is non-positive
WRITE(*,*) 'Input <= 0. Ignored' ! ignore it
ELSE ! otherwise,
TotalValid = TotalValid + 1 ! count it in
Sum = Sum + X ! compute the sum,
Product = Product * X ! the product
InverseSum = InverseSum + 1.0/X ! and the sum of 1/x
END IF
END DO
IF (TotalValid > 0) THEN ! are there valid items?
Arithmetic = Sum / TotalValid ! yes, compute means
Geometric = Product**(1.0/TotalValid)
Harmonic = TotalValid / InverseSum
WRITE(*,*) 'No. of valid items --> ', TotalValid
WRITE(*,*) 'Arithmetic mean --> ', Arithmetic
WRITE(*,*) 'Geometric mean --> ', Geometric
WRITE(*,*) 'Harmonic mean --> ', Harmonic
ELSE ! no, display a message
WRITE(*,*) 'ERROR: none of the input is positive'
END IF
END PROGRAM ComputingMeans | unknown | |
d18744 | test | Start "Developer Command Prompt for VS 2019" and run th following command:
wsdl.exe C:\file.wsdl C:\file1.xsd C:\file2.xsd c:\file3.xsd c:\file4.xsd | unknown | |
d18745 | test | what you are trying to do can never work.
The line find(@task.id) will look for comments that have the same id as the task, which is normally not how you set up relations.
Normally you would have a tasks table and a comments table, and the comments table would have a column called task_id. If that is the case, you could write your models as follows:
class Task
has_many :comments
end
class Comment
belongs_to :task
end
and then you can simply write:
@all_comments = @task.comments
A: I don't use AR, but I believe just:
@all_comments = Comment.find(:first, @task.id)
will return nil if there is no record found, unlike #find without any modifiers.
EDIT | There's a shortcut too:
@all_comments = Comment.first(@task.id)
A: I think that your failure is a different one than you expect. Your query asks for a Comment with the ID @task.id (which is the ID of the Task).
Your query should go like that:
@all_comments = Comment.where(:task_id => @task.id)
or even better
@task.comments
This should work if you have declared your relations accordingly, and allows some more options (adding comments, ...).
Have a look at the "Rails Guides", and there the "Active Record Query Interface".
A: Rails throws a RecordNotFound exception for find calls. Use find_by calls to avoid this.
If you are trying to get a list of comments by task_id, then use the find_all_by method:
# returns an empty array when no tasks are found
@comments = Comment.find_all_by_task_id(@task.id)
Otherwise use find_by
# returns nil when no task is found
@comment = Comment.find_by_task_id(@task.id) | unknown | |
d18746 | test | TL;DR
After the bug fix, you'll want what you put in your "kinda solved" section, or something similar. I think you'll want what I put in my "bottom line" section, really:
/*
!/.*
Long
As jthill noted in a comment, there is a bug in the .gitignore wildcard handling in Git 2.34.0, which will be fixed in 2.34.1. In this case I think the bug is making your wildcarding work better than it would otherwise, though.
The first lines:
# Ignore everything
*
do just what they claim: ignore everything. All files and folders (directories) are ignored. Subsequent lines insert exceptions. But hang on a moment, what does ignored really mean? To get there, we must note what Git's index (or staging area) is and how Git makes new commits from the index / staging-area.
The index, or staging area, in Git, is a central and crucial concept. Trying to use Git without understanding what the index is doing is a bit like trying to pilot an airplane without understanding what the wings and engine are for.1 So: the index is all about the next commit you plan to make. If you never make any new commits, you don't really need to know about it, but if you do want to make new commits, you need to know this.2
When you first extract some commit, in order to use and work on it, Git fills in its index from that commit, so that the index contains all the files from that commit. From this point onward, everything you are doing in your working tree, in the pursuit of making a new commit, is irrelevant to Git. That is, it's irrelevant up until you tell Git that you'd like Git to copy updated and/or new files into Git's index.
The git add command is about updating Git's index. The files you name to git add, with git add file1 file2 for instance, are to be copied into Git's index. If there's already a copy of those two files, those copies get booted out of the index, replaced with the updated ones. If not, those files are newly added to the index.
Once a file is in the index, you can replace it at any time: any .gitignore entry is irrelevant at this point. You can also remove it from the index, with git rm, or by using git add after removing the working tree copy: either one will remove the index copy. Now it's no longer in the index and the .gitignore entries are back in play.
You can use an en-masse git add, as in git add . or git add *,3 to have Git scan directories and files and add them for you. When you do this, Git will skip certain directories and/or files if it can, and this is an area where .gitignore really comes into play.
1"Why should I care about those? I only care about getting my passengers and cargo from point A to point B, and those are inside the plane, not out on the wings."
2To extend the plane analogy a bit more: if you're just planning to use the fuselage as a house, then indeed, you don't need to care about the engines and wings.
3Note that in Unix-like shells, git add * is quite different from git add . because the shell will expand * for Git: Git never sees the literal asterisk. When the shell expands *, it does so with dot-files excluded, by default at least (bash in particular has a control knob to change this behavior). In some CLIs, the literal asterisk * gets through to Git, and then Git will expand *, and now it can act like git add . if Git wants it to. But it's easier to type in git add . (no SHIFT key required) so that's what I always do anyway, which removes the difference in the first place.
How Git scans the working tree
If you run git add . or equivalent (see footnote 3 again), Git will:
*
*Open the directory ..
*Open and read any .gitignore file at this level, adding (appending) these rules to the ignore rules. (These rules then get dropped when we finish this directory.)
*Read this directory: it contains the names of files and sub-directories ("folders", if you prefer that term).
*Check each file and folder name as we read them, against all the ignore rules that are in effect right now. Note that some rules apply only to directories / folders, and others apply to both folders and files. The folder-only rules are those that end with a slash. Also, some rules are "positive" (do ignore) and some are "negative" (do not ignore). The negative rules are the ones starting with !.
Git finds the last applicable rule, whatever that is, in the current set of rules, and then obeys that rule. So first, let's define which rules apply to which directory-scan results, and then what the various rules do.
A rule in a .gitignore can be:
*
*a simple text string with no slashes, such as generated.file;
*a text string with a trailing slash, but no other slashes: somedir/;
*a text string with a leading or embedded slash, with or without a trailing slash: /foo, a/b, /foo/, a/b/, and so on; or
*any of the above with various glob-style wildcard characters.
These can all be negated: if a rule starts with ! it's negated, and we strip off the ! and then use the remaining tests. The two keys tests are these:
*
*Does the entry end with a literal /? If so, it applies only to directories / folders. Ignore that slash while answering the remaining question.
*Does the entry begin with or contain a slash / character? (The one at the end does not count here.) If so, this entry is anchored or rooted (I like the term anchored myself, but I've seen both terms used).
An anchored entry matches only a file or folder name found at this level. That is, /foo or foo/bar won't match sub/foo or sub/foo/bar, only ./foo and ./foo/bar, where . is the directory (folder) that Git is scanning right now. This means that if the entry has several levels—foo/bar or one/two/three for instance—Git will have to remember to apply this entry when it gets around to scanning bar in foo, or two in one and three in one/two. So we do have to consider "higher level" rules. But since lower level rules get appended, a lower level .gitignore can cancel out the higher level one if it wants to.
An un-anchored entry applies here and—unless overridden—in every sub-directory as well. That is, if we do have ./one/two/three, Git will presumably open and read one to find two, and then open and read two to find three, all while still working on the current directory. Meanwhile any un-anchored entry from this .gitignore will apply within the one and one/two directories, and within one/two/three if that's a directory, and so on.
So, there's already a lot to think about. Now we throw in glob matches.
The usual glob is *: people write foo*bar or *.pyc or whatever. Git allows ** as well, with meaning similar to that in bash: zero or more directories. (I've found ** in Git to be weird and in my opinion slightly buggy, where it sometimes seems to mean "one or more" instead of "zero or more", so I recommend avoiding ** if possible. It's hard to reason about, so it's generally not a great idea in the first place, and Git's ignore rules mostly eliminate any need for **. So if you are going to use it, test it carefully and be prepared to have it shift on you in some future Git, in case the one-or-more ?bug? gets fixed, or affects your use case, or whatever.)
Let's suppose, then, that we have these two entries:
*
!.*
Git opens and reads . and finds the following names:
dir
file
.dir
.file
where dir and .dir are directories (folders) and file and .file are non-directories (files).
The * rule matches all four names. The !.* rule matches the last two names. The !.* rule is later in the .gitignore file, so it overrides the * rule. Git therefore "sees" .dir and .file.
Since .file is a file, this means that git add . "sees" it. It will check whether .file needs to be git add-ed to displace the existing .file file, or added to the index.
Since dir and file are excluded, this scanning pass doesn't see them, and does not try to git add either one. Since dir itself is a directory (not a file), it's never in the index itself. There may be a file in the index named dir/thing, and Git will check to see if that should be updated by this git add ., but Git won't scan dir to see if there are other files in dir.
Since file is an excluded file, the scanning pass does not see it. But if file already exists in the index, Git will check to see if it should be updated by this git add ., even though it didn't get scanned here. In other words, these "existing files already in the index" checks happen outside (either before or after) the "scan the directories" pass.
Meanwhile, since .dir isn't excluded, Git now opens and reads .dir, recursively:
*
*Git checks for a .dir/.gitignore (the .gitignore that applies to entries found in .dir). If that exists, Git appends those rules.
*Git scans .dir recursively, using all the same methods. Then it's done scanning .dir so Git removes the appended rules.
Let's look now at the rules Git has in effect as it scans .dir.
The appended-to rules
If there is a .dir/.gitignore, Git opens and reads it and appends to the existing rules. If not, we still have the same set of rules in effect:
* (positive wildcard: ignore every name)
!.* (negative wildcard: don't ignore dot-names)
What's in .dir? Let's say we have:
file1
dir1
.file2
.dir2
The name file1 matches * so it gets ignored. Git won't git add it to the index if it's not already there. Similarly, dir1 matches *, so it gets ignored. Git won't even scan it to see if there are any files there.
The name .file2 matches *, but also matches .*, so the override negative entry is the rule that applies: Git will git add .dir/.file2. The name .dir2 has the same features, so the override applies and Git will open and read .dir/.dir2. This goes through the same recursion as before: Git looks for .dir/.dir2/.gitignore to append rules, and will use the appended-to rules while scanning .dir/.dir2, and then drop back to our own .dir/.gitignore-appended rule set while continuing to scan .dir, and then return from this recursion level and drop the .dir/.gitignore rules.
The bottom line
In the end, the trick here is that we want the * rule to apply only at the top level. Once we get into, say, .foo/, we don't want to ignore .foo/main_config and .foo/secondary_config. So we want * to apply only at the top level.
Using:
# Ignore everything
*
# Except these files and folders
!.*
!.*/*
gets us closer: we ignore everything, but then—via the negative rules !.* and !.*/*—we carefully don't ignore .foo and the like. Once we get into .foo, we carefully don't ignore .foo/main_config.
The bug, or possible bug, depending on what you really do want, here is ... well, suppose we have .foo/thing1/config and .foo/thing2/config. The .*/* pattern contains an embedded slash, which means it is anchored. It matches .foo/thing1, so that directory gets scanned. But it doesn't match .foo/thing1/config.
We could try something like:
!.*/*
!.*/**/
I particularly hate this one because ** is so tough to reason about. We could also write:
!.*/*
!.*/*/
!.*/**/
in case the ** "one or more" bug bites us (I don't think it will, but it's a consideration). But it's simplest to anchor the original globs, by writing:
/*
!/.*
This makes the top level .gitignore rules apply only to top-level work-tree entries. Sub-level .gitignore files, if they exist, can establish sub-level rules and do not need to override any top-level rules, because the top-level rules already don't apply at any sub-level, thanks to anchoring. | unknown | |
d18747 | test | No need to use an input mask. Use a TextBox and apply the Format() function when storing the text:
Dim MyStr As String = Format(Val(txtNumber.Text), "000000000000") | unknown | 
d18748 | test | Similar to akrun's answer, but using {{ instead of !!:
foo = function(data, col) {
data %>%
group_by({{col}}) %>%
summarize(count = n()) %>%
ungroup %>%
mutate(
"{{col}}_pct" := count / sum(count)
)
}
foo(mtcars, cyl)
# `summarise()` ungrouping output (override with `.groups` argument)
# # A tibble: 3 x 3
# cyl count cyl_pct
# <dbl> <int> <dbl>
# 1 4 11 0.344
# 2 6 7 0.219
# 3 8 14 0.438
A: Assuming that the input is unquoted, convert to symbol with ensym, evaluate (!!) within group_by while converting the symbol into a string (as_string) and paste the prefix '_pct' for the new column name. In mutate we can use := along with !! to assign the column name from the object created ('colnm')
library(stringr)
library(dplyr)
f1 <- function(dat, grp) {
grp <- ensym(grp)
colnm <- str_c(rlang::as_string(grp), '_pct')
dat %>%
group_by(!!grp) %>%
summarise(count = n(), .groups = 'drop') %>%
mutate(!! colnm := count/sum(count))
}
-testing
f1(mtcars, cyl)
# A tibble: 3 x 3
# cyl count cyl_pct
# <dbl> <int> <dbl>
#1 4 11 0.344
#2 6 7 0.219
#3 8 14 0.438
A: This is probably no different than the one posted by my dear friend @akrun. However, in my version I used enquo function instead of ensym.
There is actually a subtle difference between the two and I thought you might be interested to know:
*
*As per documentation of nse-defuse, ensym returns a raw expression whereas enquo returns a "quosure" which is in fact a "wrapper containing an expression and an environment". So we need one extra step to access the expression of quosure made by enquo.
*In this case we use get_expr for our purpose. So here is just another version of writing this function that I thought might be of interest to whomever read this post in the future.
library(dplyr)
library(rlang)
fn <- function(data, Var) {
Var <- enquo(Var)
colnm <- paste(get_expr(Var), "pct", sep = "_")
data %>%
group_by(!!Var) %>%
summarise(count = n()) %>%
ungroup() %>%
mutate(!! colnm := count/sum(count))
}
fn(mtcars, cyl)
# A tibble: 3 x 3
cyl count cyl_pct
<dbl> <int> <dbl>
1 4 11 0.344
2 6 7 0.219
3 8 14 0.438 | unknown | |
d18749 | test | you can use this...
USE [dbName]
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE PROCEDURE [dbo].[abc]
@SearchText nvarchar(100)
AS
BEGIN
DECLARE @sql nvarchar(4000)
SET @sql = 'SELECT a,b,c,d,e FROM myTable ' + @SearchText
-- what should be the criteria here.
EXEC sp_executesql @sql
END
GO | unknown | |
d18750 | test | Hope the 'Application.SetUnhandledExceptionMode' method helps you. It instructs the application how to respond to unhandled exceptions.
static void Main(string[] args)
{
Application.ThreadException += new ThreadExceptionEventHandler(OnThreadException);
AppDomain.CurrentDomain.UnhandledException += new UnhandledExceptionEventHandler(OnUnhandledException);
Application.SetUnhandledExceptionMode(UnhandledExceptionMode.CatchException);
Application.EnableVisualStyles();
Application.Run(form);
}
A: If it's a remoting server and the exception is happening as part of client interaction, then the exception will be sent to the client without causing the server to crash. | unknown | |
d18751 | test | Perhaps try:
$_SESSION['user'] = $row['username'];
header("Location: ../php/home.php");
die();
You usually need to issue a die() command after the header() statement. | unknown | |
d18752 | test | It seems I have to set the encoding for this to work properly:
RTopic topic = redissonClient.getTopic(channel, StringCodec.INSTANCE); | unknown | |
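For example, a rough sketch of publish/subscribe where both sides use the same codec (the channel name is just a placeholder):
RTopic topic = redissonClient.getTopic("myChannel", StringCodec.INSTANCE);

// Subscriber: messages arrive as plain strings because of StringCodec.
topic.addListener(String.class, (channel, msg) -> System.out.println("Received: " + msg));

// Publisher: must use the same codec so the payload is written as a plain string.
topic.publish("hello");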
d18753 | test | Using the MembershipProvider essentially boils down to the "build or buy" decision any developer (or dev mgr) has to make: build it from scratch, or buy off-the-shelf (or in this case, use a pre-existing tool).
With that in mind... admittedly, the MembershipProvider isn't perfect - it's a bit clunky, and probably has too much (or too little) of what you'll need - but it's 85% of the way there for most implementations. And as alluded to by others, building your own authentication system from scratch just isn't worth the time or effort. This is a solved problem; use your development energy to solve more urgent and relevant business problems, not re-inventing the wheel!
Remember this axiom: unless you can gain a direct competitive advantage from developing something from scratch, you are (usually) better off using an existing tool for the job (buy, don't build).
A: Some advantages of the ASP.NET membership provider API:
*
*You don't have to reinvent the wheel. New comers on your project will be familiar with a well-known API.
*There are already implementations (SQL Server, Active Directory mostly) available you can re-use or start from.
*It's visually integrated with ASP.NET (Login Controls, etc.)
*You can use the built-in ASP.NET administration tool to create users & roles (it's in fact a good way to check your provider works fine, as it should work with the tool)
*It can be integrated with the .NET (not only ASP.NET) Identity / Principal classes, and can be used to support the PermissionAttribute system (with the associated Role Provider). Although it technically lives in System.Web.dll, you can in fact use it in non-web systems.
*One last thing but quite interesting: you can also use ASP.NET membership providers in WCF services
A: Well MembershipProvider is indeed useful. The complete ASP.Net security infrastructure is build around it. Lot of the control can directly interact with this infrastructure such as Login, LoginStatus. So it does have it's advantage.
But it also has it's fair share of problem due to its fat interface. It's breaking the interface segregation principle and hence is a little cumbersome to use. I believe the advantages outweigh the penalty we pay here. So as long there are simple workarounds there are no harms using it. Building your own security infrastructure is not a trivial task either.
A: It sounds like the default MembershipProvider does everything you need it to do.
Therefore I would definitely recommend creating a UserBusiness class that wraps the MembershipProvider and only exposes the features you use.
This makes it easy to use the great features of the MembershipProvider but also simplifies the interface for your needs.
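A minimal sketch of what such a wrapper might look like (class and method names are just examples), delegating to the built-in Membership facade:
using System.Web.Security;

public class UserService
{
    public bool SignIn(string userName, string password)
    {
        // Delegate validation to whichever MembershipProvider is configured.
        return Membership.ValidateUser(userName, password);
    }

    public MembershipUser Register(string userName, string password, string email)
    {
        return Membership.CreateUser(userName, password, email);
    }
}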
A: I used a custom provider first but have now switched away completely.
I kept just the principles: things like when to lock a user, how to hash passwords, etc.
I was fine with a custom provider until I needed multi-tenant support with different databases. The last straw was my attempt to unit test a WCF service that used the membership provider. Now I have my own MembershipService and life is good. I still have a data structure almost identical to the original, but in .NET I use my own code | unknown | 
d18754 | test | The withCredentials option is for the browser version of axios, and relies on browser for storing the cookies for your current site.
Since you are using it in Node, you will have to handle the storage yourself.
TL;DR
After the login request, save the cookie somewhere. Before sending other requests, make sure you include that cookie.
To read the cookie, check response.headers object, which should have a set-cookie header (which is all cookies really are - headers with a bit of special convention that has evolved into some sort of standard).
To include the cookie in your HTTP request, set a cookie header.
General example
You could also look for some "cookie-handling" libraries if you need something better than "save this one simple cookie I know I'll be getting".
// 1. Get your axios instance ready
function createAxios() {
const axios = require('axios');
return axios.create({withCredentials: true});
}
const axiosInstance = createAxios();
// 2. Make sure you save the cookie after login.
// I'm using an object so that the reference to the cookie is always the same.
const cookieJar = {
myCookies: undefined,
};
async function login() {
const response = await axiosInstance.post('http://localhost:3003/auth', {});
cookieJar.myCookies = response.headers['set-cookie'];
}
// 3. Add the saved cookie to the request.
async function request() {
// read the cookie and set it in the headers
const response = await axiosInstance.get('http://localhost:3003',
{
headers: {
cookie: cookieJar.myCookies,
},
});
console.log(response.status);
}
login()
.then(() => request());
You could also use axios.defaults to enforce the cookie on all requests once you get it:
async function login() {
const response = await axios.post('http://localhost:3003/auth', {});
axios.defaults.headers.cookie = response.headers['set-cookie']
}
async function request() {
const response = await axios.get('http://localhost:3003');
}
As long as you can guarantee that you call login before request, you will be fine.
You can also explore other axios features, such as interceptors. This may help with keeping all "axios config"-related code in one place (instead of fiddling with defaults in your login function or tweaking cookies in both login and request).
Lambda
AWS Lambda can potentially spawn a new instance for every request it gets, so you might need to pay attention to some instance lifecycle details.
Your options are:
*
*Do Nothing: You don't care about sending a "login request" for every lambda run. It doesn't affect your response time much, and the other api doesn't mind you sending multiple login requests. Also, the other api has no problem with you having potentially multiple simultaneous cookies (e.g. if 10 lambda instances login at the same time).
*Cache within lambda instance: You have a single lambda instance that gets used every once in a while, but generally you don't have more than one instance running at any time. You only want to cache the cookie for performance reasons. If multiple lambda instances are running, they will each get a cookie. Beware the other api not allowing multiple logins.
If this is what you need, make sure you put the axios config into a separate module and export a configured instance. It will be cached between runs of that one lambda instance. This option goes well with interceptors usage.
const instance = axios.create({...});
instance.interceptors.response.use(() => {}); /* persist the cookie */
instance.interceptors.request.use(() => {}); /* set the cookie if you have one */
export default instance;
*Cache between lambda instances: This is slightly more complicated. You will want to cache the cookie externally. You could store it in a database (key-value store, relational, document-oriented - doesn't matter) or you could try using shared disk space (I believe lambda instances share some directories like /tmp, but not 100% sure).
You might have to handle the case where your lambda gets hit by multiple requests at the same time and they all think they don't have the cookie, so they all attempt to login at the same time. Basically, the usual distributed systems / caching problems. | unknown | |
d18755 | test | It seems as if the active network info will stay in the state it had when the Context of the Service/Activity/Receiver was started. Hence if you start it on one network and later disconnect from it (e.g. the phone moves from 3G to Wifi and drops the 3G connection), it will stay on the first active connection, making the app believe the phone is offline even though it is not.
It seems to me that the best solution is to use getApplicationContext instead, as that will not be tied to when you started the particular "task".
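A rough sketch of checking connectivity that way, using the standard ConnectivityManager APIs of that era:
ConnectivityManager cm = (ConnectivityManager) getApplicationContext()
        .getSystemService(Context.CONNECTIVITY_SERVICE);
NetworkInfo info = cm.getActiveNetworkInfo();
boolean online = (info != null && info.isConnected());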
Update: Related to this, if you run applications on Android (in particular the Nexus One) for a long period of time while connected to Wifi, make sure you do not let the Wifi sleep when the screen sleeps. You can set that under the Advanced options in Wireless Networks. | unknown | 
d18756 | test | Change the foreach loop to for:
for ($i = 1; $i <= $numbersPerTickets; $i++) {
$query = '
INSERT INTO ticketNumbers(
ticketID,
number
)
VALUES(
"'.$ticketID.'",
"'.$randomTicketNumbers[$i].'"
)
';
Guaranteed to only give you $numbersPerTickets iterations and removes the complexity of the iterator++/break logic.
Sometimes simple is better. | unknown | 
d18757 | test | Employee.java
@Entity
public class Employee {
@Id
@Column(name = "emp_id", length = 8)
@GeneratedValue(strategy = GenerationType.IDENTITY)
private Integer emp_id;
@Column(name = "emp_name", length = 20, nullable = false)
private String emp_name;
@ManyToOne
private Position position;
Position.java
@Entity
public class Position {
@Id
@GeneratedValue(strategy = GenerationType.AUTO)
private int pos_id;
@Column(name = "pos_name", length = 16)
private String pos_name;
@OneToMany(mappedBy = "position")
private List<Employee> employees;
Thank you. I can join two tables! | unknown | 
d18758 | test | You can use appcfg.py update app.yaml from AppEngine Python SDK:
https://cloud.google.com/appengine/docs/standard/python/tools/appcfg-arguments#update
Use the files argument to upload one or more YAML files that define
modules. No other types of YAML files can appear in the command line.
Only the specified modules will be updated.
A: You can try using gcloud app deploy inside the directory where your application is located in order to upload the file you need.
Specifying no files with the command deploys only the app.yaml file of a given service.
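For example, from the directory that contains the service's app.yaml:
gcloud app deploy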
This command will only upload to the cloud the files where there are changes, so if you have only modified the app.yaml file, it should not take too much time for the upload. However, as that is the configuration file of your application, it might need to be re-deployed completely, as the changes made in that file might affect the behaviour of the whole app. That is the reason why it might be taking longer than expected.
On the other side, you may want to know that if you are using App Engine Flexible environment, the deployment will always be slower than in a Standard environment, as resources have to be deployed before launching the application itself. | unknown | |
d18759 | test | sys.argv[x] is a string. Multiplying a string by a number causes that string to be repeated.
>>> '2' * 5 # str * int
'22222'
>>> int('2') * 5 # int * int
10
To get the multiplied number, first convert sys.argv[1] to a numeric object using int or float, ....
import sys
st_run_time_1 = int(sys.argv[1]) * 60 # <---
print ("Station 1 : %s" % st_run_time_1)
A: You are multiplying a string with an integer, and that always means repetition. Python won't ever auto-coerce a string to an integer, and sys.argv is always a list of strings.
If you wanted integer arithmetic, convert the sys.argv[1] string to an integer first:
st_run_time_1 = int(sys.argv[1]) * 60 | unknown | |
d18760 | test | Don't use document.write(). Just don't. (See Why is document.write considered a "bad practice"?)
Try this:
var text = 'alert(1);',
script = document.createElement('script');
script.appendChild(document.createTextNode(text));
document.head.appendChild(script);
A: document.write only works before the DOM is loaded; document.body.innerHTML only works after.
Try using document.body.appendChild to append a new text node instead. | unknown | |
d18761 | test | It's good practice to use both, for example
<nav>
<div>
<ul>
<!-- etc -->
</ul>
</div>
</nav>
If you need to support those obsolete browsers, I wouldn't do anything more than that. The benefits, such as they are, are not worth the extra effort.
A:
I do plan on supporting IE7 and IE8, but I know that these versions don't support the above semantic elements. I've read up about 'plugins' like Modernizr, Initializr and HTML5shiv, and I know that older browsers will then support the new elements IF JavaScript is enabled, but what am I supposed to do if it's not?
If JavaScript is not enabled, then while the content of the new elements will be shown, CSS will not be correctly applied to them. While in theory you could use a noscript element to trigger a redirect to a version of the page not using the new elements (via a meta refresh tag within the noscript), then you'd be maintaining two versions of your site.
For example, given this page: Live Copy
<!DOCTYPE html>
<html>
<head>
<meta charset=utf-8 />
<title>HTML5 Elements</title>
<style>
nav {
color: green;
}
</style>
</head>
<body>
<nav><ul><li>This text should be green</li></ul></nav>
</body>
</html>
...early versions of IE will show the text in the default color. Adding the HTML5 shiv prior to the style element:
<script src="http://cdnjs.cloudflare.com/ajax/libs/html5shiv/3.6.2/html5shiv.js"></script>
...which as you know requires JavaScript, makes the text green: Live Copy
A: It's not necessary to follow the new semantics. The new semantics were developed mostly for search engines, not for site functionality. If you really want to support IE, do it for IE.
If you really care about no-script cases and CSS is not enough for you, then all you can do is PHP/ASP magic.
One of my friends works exclusively in Flash: no js, totally client side, no worries about browsers... Who knows...
A:
Should I use the new HTML5 semantic elements?
Yes.
IF JavaScript is disabled, what am I supposed to do about IE8 and below?
The no-js class is added to the tag, so what exactly can I do with that?
You can do something like this,
HTML
<div id='wrapper'>
// Whole website coding here
</div>
<div id='old-browsers'>
Use an upgraded browser
</div>
CSS
.no-js #wrapper {
display: none;
}
#old-browsers {
display: none;
}
.no-js #old-browsers {
display:block;
}
How can I use to my advantage here? I don't want to make pages too large with coding.
IE Consideration:
IE7 is 7 years old and most developers today do not support it. The share of IE7/IE8 users with js disabled is pretty low and you shouldn't develop for those exceptions. Instead you should give them a suggestion to upgrade with the above method. You can use the noscript tag for the same use case.
A: What you could do (and what a lot of big websites do) is display a notice when javascript is disabled (optionally when the browser is IE7 or IE8, but you'll need a serverside check for that), saying that the website will not be displayed the way it is supposed to. See "How to detect if JavaScript is disabled?" on how to do so.
Only 5.3% (source) of internet users are using Internet Explorer < 8 and 0.25% to 2% (source) of the total users have Javascript disabled. You could spend a lot of time making a smooth solution for at most 5% x 2% = 0.1% of your visitors, or you could spend 5 minutes building the notice system I described. | unknown | 
d18762 | test | What you'll want to do is build a table that contains every minute for the day. There are a number of ways you can do this, just search for "tally table", etc.
Once you have a table containing all of your minutes (in datetime format), it should be straightforward.
Join your login table to the minutes table on minute between login/logout and do a count(*) for each minute.
A: Derek's solution is the way to go.
http://www.ridgway.co.za/archive/2007/11/23/using-a-common-table-expression-cte-to-generate-a-date.aspx explains a way to generate the timetable on the fly
SET DATEFORMAT DMY
GO
DECLARE @STARTDATE DATETIME
DECLARE @ENDDATE DATETIME
SELECT @STARTDATE = '02/01/2011 01:00', @ENDDATE = '03/01/2011 01:00'
;
WITH DateRange(MyDateTime) AS
(
SELECT
@STARTDATE AS MyDateTime
UNION ALL
SELECT
DATEADD(minute, 1, MyDateTime) AS MyDateTime
FROM
DateRange
WHERE
MyDateTime < @ENDDATE
)
SELECT MyDateTime, ConcurrentConnections = COUNT(*)
FROM DateRange INNER JOIN [LOGIN] ON MyDateTime >= [LogIn] AND MyDateTime <= [Log Out]
OPTION (MaxRecursion 10000);
A: To solve this problem I would first get the min and the max datetime values from the dataset. This would provide the time range from which we will need to determine the count of concurrent logins. After getting the time range, I would do a loop to populate a table with each minute in the range and the count of concurrent logins for that minute. After populating the table I would select from it where concurrent login count > 0, this would be my result set. I use SQL Server, you may need to convert some of the syntax to another DBMS.
-- To get the min and max of the time range
DECLARE @min datetime, @max datetime
SELECT @min = MIN(l.[Login]), @max = MAX(l.[Log out])
FROM [LOGIN] l
-- now make a table to how the minutes and the counts
CREATE TABLE #Result
(
[Time] datetime,
[count] int
)
-- now do a loop to fill each minute between @min and @max
DECLARE @currentTime datetime, @count int
SELECT @currentTime = @min, @count = 0
-- go from @min to @max
WHILE @currentTime < @max
BEGIN
-- get the count of concurrent logins for @currentTime
SELECT @count = COUNT(*)
FROM [LOGIN] l
WHERE @currentTime between l.[Login] and l.[Log out]
-- insert into our results table
INSERT #Result ([Time], [count]) VALUES (@currentTime, @count)
-- increment @currentTime for next pass
SELECT @currentTime = DATEADD(minute, 1, @currentTime)
END
-- select final result (where count > 0)
SELECT *
FROM #Result
WHERE [count] > 0
-- clean up our temp table
DROP TABLE #Result | unknown | |
d18763 | test | Clearing the emulator logs by running adb logcat -c did the trick for me!!! | unknown | 
d18764 | test | Try,
=if(isnumber(B39), if(b39>0, "crashed", "no crash"), if(iserror(b39), "hasn't crashed yet", "how'd I get here?")) | unknown | |
d18765 | test | You have to use ClientID
var change = document.getElementById("<%= lblPercentageDifferenceToFillReqCurrentVsPreviousMonth.ClientID %>").value;
Assuming that it is an actual Control like
<asp:TextBox ID="lblPercentageDifferenceToFillReqCurrentVsPreviousMonth" runat="server"></asp:TextBox> | unknown | |
d18766 | test | My guess is that the problem is in your input. Are you entering capital letters or lowercase letters? Their ASCII codes are different. So, you probably want to change the code from
num= static_cast<int>(letters)-static_cast<int>('A');
to something like
if (letters >= 'a')
num = letters - 'a';
else
num = letters - 'A';
Also, as mentioned by @jtbandes, use the curly braces { and }. Whitespace does not determine scope in C++. Even if it's for only one line of code after your if-statement, it'll save you headaches in the future.
A: Is the static cast necessary? I recommend using a string stream or just traversing the string character by character using .at() and relying on the ascii values for conversion. http://web.cs.mun.ca/~michael/c/ascii-table.html. | unknown | |
d18767 | test | Use the flag package. For example:
func main() {
encrypt := flag.Bool("encrypt", false, "encrypt file")
decrypt := flag.Bool("decrypt", false, "decrypt file")
flag.Parse()
srcFile, destFile := flag.Arg(0), flag.Arg(1)
if *encrypt {
encryptFileData(srcFile, destFile)
}
if *decrypt {
decryptFileData(srcFile, destFile)
}
} | unknown | |
d18768 | test | If your while loop is skipping the first line while reading from the file, then use this command inside the while loop:
file_pointer.seekg(0,ios::beg);
This will set the file pointer to the beginning of the file. | unknown | 
d18769 | test | you can:
g++ name.cpp && ./a.out && gnuplot -e "plot 'name2.dat'; pause -1"
gnuplot exits when you hit return (see help pause for more options)
if you want to start an interactive gnuplot session there is a dirty way I implemented.
g++ name.cpp && ./a.out && gnuplot -e "plot 'name2.dat' -
(pay attention to the final minus sign) | unknown | |
d18770 | test | Seems you are using the latest version of cucumber and it doesn't recognize the old way of mentioning multiple tags.
For the mentioned example, we can run both scenarios by mentioning tags = {"@Regression or @NegativeTest"} in the runner class
(i.e. tags = {"@Regression,@NegativeTest"} should be given as tags = {"@Regression or @NegativeTest"})
Also, in the above query I could see that tags are used above the Examples keyword. Normally, tags can be used at the feature file level or before the Scenario/Scenario Outline keywords | unknown | 
d18771 | test | #1 and #2 are simple but probably won't scale well to more complicated behaviors. Behavior trees (#3) seem to be the preferred system in many games nowadays. You can find some presentations and notes on AiGameDev.com, e.g. http://aigamedev.com/open/coverage/paris09-report/#session3 and http://aigamedev.com/open/coverage/gdc10-slides-highlights/#session2 (the first one from Crytek is quite good)
You probably don't need to worry about "efficiency" here in the sense of CPU usage, since this is very unlikely be a major bottleneck in your game. Reducing the amount of programmer/designer time needed to tweak the behavior is much more important.
A: If you're going to go with options 1 or 2, look into buckets of random results - see Randomness without Replacement. If you go with a tree action, you can still use some of the same concepts to determine which branch of the tree to go down; however, you'd have a bit more determinism built in, as your options will restrict as you traverse down the tree.
If I recall correctly, Neverwinter Nights, which had a fairly nice scripting engine for its NPCs, would use a random probability to determine some of the actions their NPCs would take in certain states, but the states themselves were more driven by the script for that NPC (more of a state machine than a tree, but the concepts are similar). For example, if the NPC was walking, there was a chance they would stop, maybe laugh, etc. - but if they were attacked, they would switch to combat and stay there.
In other words, it all depends on what your actions are, and how you want the character to behave.
A: Markov chain, influenced by real user actions.
From wikipedia:
A Markov chain is a discrete random
process with the property that the
next state depends only on the current
state. | unknown | |
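A tiny sketch of that idea in Python (the states and probabilities here are made up; in practice you could estimate them from observed actions):
import random

transitions = {
    "walk":  [("walk", 0.6), ("stop", 0.3), ("laugh", 0.1)],
    "stop":  [("walk", 0.7), ("laugh", 0.3)],
    "laugh": [("walk", 1.0)],
}

def next_state(current):
    # The next state depends only on the current one.
    states, weights = zip(*transitions[current])
    return random.choices(states, weights=weights)[0]

state = "walk"
for _ in range(5):
    state = next_state(state)
    print(state)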
d18772 | test | This is a bit tricky to get working, and since MDN does not describe it very well, that makes it harder.
What is the problem exactly?
The wrapping element you provided does not meet the requirements for this behavior. So what does that mean?
To ensure scroll-snap-type works correctly, we should make sure the only available scrollbar in our window is the wrapping element's scrollbar, which in your case is the div with the class name screen.
How to fix it?
So all you have to do is make sure the scrolling element is your parent wrapping element, which in your case is identified by the screen class name. To make sure the scrollbar you are seeing is the right one, set overflow to hidden on body and html to prevent them from scrolling, just like this:
html,
body {
overflow: hidden;
}
Then you need to enable the right scrollbar, which belongs to the screen division.
.screen {
height: 100vh;
overflow-y: scroll;
scroll-snap-type: y mandatory;
}
So it will work like a charm.
Here is the live working example: codepen.io
Also, there is a similar issue in SO, that you can find here. | unknown | |
d18773 | test | You can use a common table expression with the row_number() function for that
with cte as (
select
*,
row_number() over(
partition by Date, Name
order by case when status = 'Canceled' then 1 else 0 end
) as rn
from Table1
)
select
ID, Date, Name, Status, [Attribute A], [Attribute B]
from cte
where rn = 1
But, if there's more than one record with the same Date and Name and status <> 'Canceled', the query will return only one arbitrary row.
=> sql fiddle demo
A: This assumes that other status values are not in a lower alpha order than 'Canceled'.
select max([date]) as [date], [name], max([Status]), [Attribute A], [Attribute B]
From [YourTableName]
group by [Name],[Attribute A], [Attribute B] | unknown | |
d18774 | test | You can use the read.xlsx function from the openxlsx package. Setting fillMergedCells = TRUE fills both columns with the same values.
library(openxlsx)
read.xlsx(data, fillMergedCells=T)
If you only care about the table looking the same as it does in Excel, then try flextable, which can show exactly the same merged output as in Excel. | unknown | 
d18775 | test | You should be getting an IPN when the profile is created, each time the recurring profile bills, and when the profile is cancelled. Check the IPN history in your account to make sure the IPNs are being sent out, and check to see if there is any type of error being returned to PayPal. Check your server access logs to see if PayPal is calling your script, and check your error logs to see if anything is being triggered. Try adding www. to your URL, and the extension to the end of your URL for the type of file it is. Also, there are some IPN troubleshooting tips I posted in this forum post.
A: The IPN will only be sent to the account on which the profile was created.
Here's how I would do it.
*
*Store the Paypal ProfileID and e-mail address in a database.
*Receive all of the IPN notifications yourself, and in the case of a failure, send an e-mail to the specific user based on their ProfileID, or perform other actions. | unknown | |
d18776 | test | You just need to capture param in your capture list:
transform(a.begin(), a.end(), b.begin(), std::back_inserter(c),
[param](double x1, double x2) {return(x1 - x2)/param; });
Capturing it by reference also works - and would be the right choice if param were a big class. But for a double, capturing by value is fine.
A: This is what the lambda capture is for. You need to specify & or = or param in the capture block ([]) of the lambda.
std::vector<double> a{ 10.0, 11.0, 12.0 };
std::vector<double> b{ 20.0, 30.0, 40.0 };
std::vector<double> c;
double param = 1.5;
//The desired function is c = (a-b)/param
transform(a.begin(), a.end(), b.begin(), std::back_inserter(c),
[=](double x1, double x2) {return(x1 - x2)/param; });
// ^ capture all external variables used in the lambda by value
In the above code we just capture by value since copying a double and having a reference is pretty much the same thing performance wise and we don't need reference semantics. | unknown | |
d18777 | test | You create "classes" in Javascript as functions. Using this.x inside the function is like creating a member variable named x:
var Blood = function() {
this.x = 0;
this.y = 0;
}
var blood = new Blood()
console.log(blood.x);
These aren't classes or types in the sense of OO languages like Java, just a way of mimicking them by using Javascript's scoping rules.
So far there isn't really much useful here -- a simple object map would work just as well. But this approach may be useful if you need more logic in the Blood "class", like member functions, etc. You create those by modifying the object's prototype:
Blood.prototype.createSplatter = function() {
return [this.x-1, this.y+1]; // (idk, however you create a splatter)
};
blood.createSplatter();
(Fiddle)
For more details on Javascript classes, I suggest taking a look at CoffeeScript syntax (scroll down to "Classes, Inheritance, and Super"). They have some side-by-side examples of classes in the simplified CS syntax, and the JS translation. | unknown | |
d18778 | test | The Json contains a list of ValuesPos
The ApiInterface.java call expects a single ValuesPos
EDIT
Change the ApiInterface.java to call for a list of ValuesPos
@FormUrlEncoded
@POST("pos_distributed.php")
Call<List<ValuesPos>> POS_MODEL_CALL(@Field("user_id") String user_id);
A: Here is the entire code which works for me... (Thanks for the help)
I modified two classes:
ApiInterface.java
@FormUrlEncoded
@POST("pos_distributed.php")
Call<List<ValuesPos>> POS_MODEL_CALL(@Field("user_id") String user_id);
Backend.java
public void pos_func(String user_id) {
dataArrayList1 = new ArrayList<>();
ApiInterface apiService = ApiClient.getClient().create(ApiInterface.class);
Call <List<ValuesPos>> call = apiService.POS_MODEL_CALL(user_id);
call.enqueue(new Callback<List<ValuesPos>>() {
@Override
public void onResponse(Call<List<ValuesPos>> call, Response<List<ValuesPos>> response) {
Log.d("sk_log", "Status POS Code = successsss");
dataArrayList1 = response.body();
//ValuesPos valuesPos = (ValuesPos) response.body();
Log.d("sk_log", "name==="+dataArrayList1.get(0).getName());
Log.d("sk_log", "name==="+dataArrayList1.get(0).getSubCategory().get(0).getName());
CustomerCollection.spinner_pos(dataArrayList1);
}
@Override
public void onFailure(Call<List<ValuesPos>> call, Throwable t) {
Log.d("sk_log", "Failed! Error = " + t.getMessage());
}
});
}
:) | unknown | |
d18779 | test | Quoting Wikipedia:
HTML Components (HTCs) are a
nonstandard mechanism to implement
components in script as Dynamic HTML
(DHTML) "behaviors"[1] in the
Microsoft Internet Explorer web
browser. Such files typically use an
.htc extension.
An HTC is typically an HTML file (with
JScript / VBScript) and a set of
elements that define the component.
This helps to organize behavior
encapsulated script modules that can
be attached to parts of a Webpage DOM.
In two paragraphs, the following are mentioned:
*
*Internet Explorer
*JScript
*VBScript
*nonstandard
I think it's obvious why not everybody is using this technology.
A: How to use border-radius.htc with IE to make rounded corners
The server has to serve the HTC with the correct MIME type (text/x-component).
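Concretely, that means something like this (an Apache directive plus the IE-only behavior property; the class name is just an example):
# httpd.conf / .htaccess
AddType text/x-component .htc

/* stylesheet */
.rounded {
    behavior: url(border-radius.htc);
}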
That alone is enough to stop JavaScript frameworks such as jQuery or MooTools from being able to use them. The dependency on configuring anything on a server in order to get client-side functionality working is beyond unacceptable.
It's a real pity though, htc files really are capable of a lot of interesting things. | unknown | |
d18780 | test | Use the other available properties of
.Height
.Top
There is also
.Left '<==For indent
As per @Comintern's point you do need to adjust .Top and .Height together!
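For example, to move the top edge down without letting the bottom edge shift (the shape name is a placeholder):
With ActiveSheet.Shapes("MyShape")
    .Top = .Top + 10        ' move the top edge down...
    .Height = .Height - 10  ' ...and shrink the height by the same amount
End With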
See full property list of shape object here:
https://msdn.microsoft.com/en-us/vba/excel-vba/articles/shape-object-excel | unknown | |
d18781 | test | Your requirement isn't valid. The only way to get the text nil to appear in the text field is to assign the actual text @"nil" to the text field.
It makes no sense to expect to see nil without using @"nil" to do so.
Perhaps setting the placeholder to @"nil" is a compromise.
A: I think you can only use myTextField.text = @"nil"; or, instead of showing nil, you can show a proper message inside the UITextField, or below the text field, saying that the input is not valid. Showing nil doesn't help the user. You can use an alert to show that there is no input.
A: As others have said, this seems like a strange requirement, but you could create a subclass of UITextField that overrides the getter of the text property so that it returns @"nil" if text was actually nil.
A: Try This self.textFieldName.placeholder = @"nil"; | unknown | |
d18782 | test | If your goal is to just import the project on another PC, don't rely on the iml files. Some even consider it bad practice to commit IDE specific files in maven projects, as not everyone on a project might use the same version or even a different IDE. If you take a look at popular .gitignore files (e.g. this one), you'll most often find that any IDE specific files get excluded.
Consider importing the projects pom.xml:
Import Project -> from external model -> Maven
EDIT
JetBrains recommends to NOT include the iml file with Maven or Gradle projects, see here | unknown | |
d18783 | test | You need to use a delegated event handler. Try this:
$(document).on('click', '#myElement', function() {
alert('hello world');
});
I used document as the primary selector as an example. You should use the closest element to those dynamically appended.
A: Demo
You can use:
$( ".class_name" ).bind( "click", function() {
// your code here
}); | unknown | |
d18784 | test | I just coded a bit. As @Ted Lyngmo pointed out, you have to return a value, so you can't use a void function.
Here is the code:
#include <iostream>
struct P {
int ID;
int some_info;
P* next_node;
};
P* newNode(int ID, int some_info) {
P* temp = new (std::nothrow) P;
if (temp == nullptr){
return nullptr;
}
else {
temp->ID = ID;
temp->some_info = some_info;
temp->next_node = nullptr;
return temp;
}
}
P* search(int targeted_ID, P* current_node) {
// check if current node is valid
if (current_node == nullptr) {
return nullptr;
}
// check for ID of current node
else if (current_node->ID == targeted_ID) {
return current_node;
}
// this node isn't the desired one --> go to nex node
else {
return search(targeted_ID, current_node->next_node);
}
}
void freeMem(P* list) {
P* tmp = nullptr;
while (list != nullptr) {
tmp = list->next_node;
delete list;
list = tmp;
}
}
int main()
{
P* list = nullptr;
P* tmp = nullptr;
P* lastNode = nullptr;
// populate list
for (int i = 0; i < 10; ++i) {
tmp = newNode(i, i * 10);
if (tmp == nullptr) {
// error occured
// free memory
freeMem(list);
list = nullptr;
return -1;
}
if (list == nullptr) {
// first node
list = tmp;
lastNode = list;
}
else {
lastNode->next_node = tmp;
lastNode = tmp;
}
}
tmp = nullptr;
lastNode = nullptr;
int ID_to_find = 1;
P* node = search(ID_to_find, list);
if (node == nullptr) {
// node not found ...
std::cout << "node not found" << std::endl;
}
else {
// node found
std::cout << "node found at: " << node << std::endl;
}
freeMem(list);
list = nullptr;
return 0;
}
This code works, but it is not really C++, it is more like C. If you want to write C++ code, use STL containers and don't write your own list, because it will be much more readable and the chance of having a bug is much smaller. Also consider using exceptions instead of (std::nothrow).
A: Recursive:
void search(int targeted_ID, P*& current_node) {
if(current_node && current_node->ID != targeted_ID) {
current_node = current_node->next_node;
search(targeted_ID, current_node);
}
}
But I suggest a non-recursive version to avoid a stack overflow when searching a long linked list:
void search(int targeted_ID, P*& current_node) {
while(current_node && current_node->ID != targeted_ID)
current_node = current_node->next_node;
}
Demo | unknown | |
d18785 | test | You can use a construction called "MEMBER OF".
class MembreFamilleRepository extends EntityRepository
{
public function getMembres($emp)
{
return $this->createQueryBuilder('a')
->where(':employee MEMBER OF a.employees')
->setParameter('employee', $emp)
->getQuery()
->getResult()
;
}
}
A: You need to add a JoinTable for your ManyToMany association and set the owning and inverse sides:
/**
* @ORM\ManyToMany(targetEntity="PFE\EmployeesBundle\Entity\MembreFamille",
 * cascade={"persist"}, mappedBy="employees")
*/
private $membreFamilles;
.................................
/**
* @ORM\ManyToMany(targetEntity="PFE\UserBundle\Entity\Employee", cascade={"persist"}, inversedBy="membreFamilles")
* @ORM\JoinTable(name="membre_familles_employees")
*/
private $employees; | unknown | |
d18786 | test | The object-fit property normally works together with width, height, max-width and max-height. Example:
.wrapper {
height: 100px;
width: 300px;
border: 1px solid black;
}
.flex {
display: flex;
}
img {
object-fit: contain;
height: 100%;
width: auto;
}
<div class="flex wrapper">
<img src="https://unsplash.it/240/240" />
</div>
In fact, it works fine even without object-fit; see this jsFiddle.
A:
.wrapper {
height: auto;
width: 100%;
border: 1px solid black;
background-image: url(https://unsplash.it/240/240);
background-size: contain;
background-repeat: no-repeat;
background-position: center;
}
.flex {
display: flex;
}
img {
object-fit: contain;
}
<div class="flex wrapper">
<img src="https://unsplash.it/240/240" />
</div> | unknown | |
d18787 | test | JFreeChart is not overcomplicated. You should try a Time Series chart in your case. See here for an example of how they use it.
Although they leave behind the scenes the main thing, which is creating the data source for the chart; see the line
final TimeSeries eur = DemoDatasetFactory.createEURTimeSeries();
I have used these TimeSeries charts before, and as I remember you have to create this object and fill it with the series of your values (probably iterating over your own values and inserting them one by one in a loop).
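A minimal, hypothetical sketch of that step (the chart title, axis labels and the Map-based data source are my own placeholders, not taken from the linked demo):
import java.util.Date;
import java.util.Map;
import org.jfree.chart.ChartFactory;
import org.jfree.chart.JFreeChart;
import org.jfree.data.time.Day;
import org.jfree.data.time.TimeSeries;
import org.jfree.data.time.TimeSeriesCollection;
public class TimeSeriesChartSketch {
    // Build a chart from your own values, one (date, value) pair per day.
    public static JFreeChart buildChart(Map<Date, Double> values) {
        TimeSeries series = new TimeSeries("My series");
        for (Map.Entry<Date, Double> entry : values.entrySet()) {
            series.add(new Day(entry.getKey()), entry.getValue());
        }
        TimeSeriesCollection dataset = new TimeSeriesCollection(series);
        return ChartFactory.createTimeSeriesChart(
                "My chart", "Date", "Value", dataset, true, true, false);
    }
}
Swap Day for another RegularTimePeriod subclass (Hour, Minute, Millisecond, ...) to match the granularity of your data. | unknown |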
d18788 | test | Take a look at this example. Hope this helps.
View
<u:FileUploader change="onChange" fileType="pdf" mimeType="pdf" buttonText="Upload" />
Controller
convertBinaryToHex: function(buffer) {
return Array.prototype.map.call(new Uint8Array(buffer), function(x) {
return ("00" + x.toString(16)).slice(-2);
}).join("");
},
onChange: function(oEvent){
var that = this;
var reader = new FileReader();
var file = oEvent.getParameter("files")[0];
reader.onload = function(e) {
var raw = e.target.result;
var hexString = that.convertBinaryToHex(raw).toUpperCase();
// DO YOUR THING HERE
};
reader.onerror = function() {
sap.m.MessageToast.show("Error occured when uploading file");
};
reader.readAsArrayBuffer(file);
},
A: I figured it out by filling an array every time a file was uploaded through the control,
change: function(oEvent) {
//Get file content
file = oEvent.getParameter("files")[0];
//Prepare data for slug
fixname = file.name;
filename = fixname.substring(0, fixname.indexOf("."));
extension = fixname.substring(fixname.indexOf(".") + 1);
//fill array with uploaded file
var fileData = {
file: file,
filename: filename,
extension: extension
}
fileArray.push(fileData);
},
and then I looped over that array to post every single file I kept there, using the Ajax POST method.
$.each(fileArray, function(j, valor) {
//get file
file = fileArray[j].file;
//get file lenght
var numfiles = fileArray.length;
//Convert file to binary
var reader = new FileReader();
reader.readAsArrayBuffer(file);
reader.onload = function(evt) {
fileString = evt.target.result;
//get and make slug
filename = fileArray[j].filename;
extension = fileArray[j].extension;
slug = documento + '/' + filename + '/' + extension;
//User url service
var sUrlUpload = "sap url";
runs++;
//Post files
jQuery.ajax({});
}
}); | unknown | |
d18789 | test | There are a lot of bugs in your code:
* <p> not closed!
* Using tables for layout.
* Even worse: using nested tables.
Going ahead with your layout, I was able to make these corrections to make it work in CSS.
tr{
display:table;
width:100%;
}
.lfttbl .td1, .td2{
display:table-row;
border-bottom: solid 1px #2b9ed5 !important;
}
.td1 {
display: block;
background: #f00;
}
<table class="out">
<tr><td>
<table class="outtbl">
<tr>
<td>
<table class="lfttbl">
<tr>
<td class= td1>
<p> hihi hii</p>
</td>
<td class= td2>
<table class = rttbl"">
</table>
</td>
</tr>
</table>
</td>
</tr>
</table>
A: You must close your <p>hihi hii</p> tag and use display: block; on the td.
Check this:
tr{
display:table;
width:100%;
}
.lfttbl .td1, .td2{
display:block;
border-bottom: solid 1px #2b9ed5 !important;
}
<table class="out">
<tr><td>
<table class="outtbl">
<tr>
<td>
<table class="lfttbl">
<tr>
<td class= td1>
<p> hihi hii</p>
</td>
<td class= td2>
<table class = rttbl"">
<p>Lorem ipsum</p>
</table>
</td>
</tr>
</table>
</td>
</tr>
</table>
A: So, here is what I was struggling with. After several trials and errors, I found that it now works in the desktop version but not on mobile devices. I have attached the code and CSS I was working on (initially "edit" and "delete" were in different td elements). Now I don't know why it's not working on mobile devices. I know I can't use an if/else statement in CSS (either for desktop or for mobile).
#shipIMtd, #shipIMrighttd1, #shipIMrighttd2{display:block;}
<meta name="viewport" content="width=device-width, initial-scale=1, maximum-scale=1, user-scalable=0;">
<link href="mno.css" rel="stylesheet" type="text/css" />
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<META Name="Robots" Content="noindex">
</head>
<table id="shipIMmaintbl" width="100%" cellpadding="0" cellspacing="0">
<tr><td align="center">
<table id= "shipIMsubtbl" border="0" width="940">
<tr>
<td valign="top" align="center" width="80%">
<div id="shipIMmidacdiv" class="account-mid"><br />
<table width="95%" border="0" cellspacing="0" cellpadding="0" id="table12">
<tr>
<td>
<table id="shipIMshipadtbl" width="735px" style="width:735px;border:solid 1px #D4D6D8;margin-bottom:15px;">
<tr>
<td id="shipIMsubtitle" valign="top" style="border-right: solid 1px #E7E4E4;padding:5px;"><span style="font:600 18px verdana;#2972B6"><strong> Addresses</strong></span></td>
<td height="35" align="left" bgcolor="#F6F4F4" class="padd-top-lt"> </td>
<div id="shipIMtitlediv" class="floatright newAddrBtn"><a href="uvx.asp"><input id="shipIMtitlebtn" type="button" value="+ Add"></a></div>
</div>
</tr>
<tr id="shipIMshipaddtr">
<td id="shipIMtd" valign="top" style="border-right: solid 1px #E7E4E4;width:200px;padding:15px;">
<p>Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi.</p>
</td>
<td id="shipIMrighttd1" align="left" class="padd-15 border-tbt1">
<table id="shipIMeditship" cellpadding=3 cellspacing=0 border=0 width=80><tr><td>
<form name="editship" action="abc.asp" method="post" />
<input type="hidden" name="shipID" value="shpid">
<input id="shipIMeditbtn" type="submit" value="Edit">
</form>
</td><td id="shipIMrighttd2">
<form name="deleteship" action="xyz.asp" method="post">
<input type="hidden" name="shipID" value="shpid" />
<input type="hidden" name="asktype" value="firstask">
<input id="shipIMdelbtn" type="submit" value="Delete">
</form>
</td></tr></table>
</td>
</tr>
</table>
</td>
</tr>
</table>
</div>
</td>
</tr>
</table>
</body>
</html>
. | unknown | |
d18790 | test | try this simple method
public void verticalAlert (final String item01, final String item02, final String item03){
String[] array = {item01,item02,item03};
AlertDialog.Builder builder = new AlertDialog.Builder(MyActivity.this);
builder.setTitle("Test")
.setItems(array, new DialogInterface.OnClickListener() {
public void onClick(DialogInterface dialog, int which) {
// The 'which' argument contains the index position
// of the selected item
switch (which) {
case 0:
// case item 1 do...
break;
case 1:
// case item 2 do...
break;
case 2:
// case item 3 do...
break;
}
}
});
builder.show();
}
A: You can pass a custom View to the AlertDialog's setView method.
To build that View, get the LAYOUT_INFLATER_SERVICE, inflate a customized XML layout, and use it as per your requirement.
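For example, a minimal sketch of that approach might look like this (R.layout.custom_dialog and MyActivity are placeholder names, not from the original question):
// Inflate a custom layout and hand it to the AlertDialog builder.
LayoutInflater inflater = (LayoutInflater) MyActivity.this
        .getSystemService(Context.LAYOUT_INFLATER_SERVICE);
View dialogView = inflater.inflate(R.layout.custom_dialog, null);
AlertDialog.Builder builder = new AlertDialog.Builder(MyActivity.this);
builder.setTitle("Custom dialog")
       .setView(dialogView); // the inflated View becomes the dialog's content
builder.show();
You can then look up the child widgets of dialogView with findViewById and wire them up before calling show(). | unknown |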
d18791 | test | AFAIK both are the same. But I feel there are some differences between the Sprint toolkit and the Sun toolkit. These are:
The Sprint toolkit has Nokia, Samsung, and LG emulators, while the Sun Java toolkit only has its own emulators.
The Sprint toolkit supports touch emulators; the Sun toolkit doesn't have touch emulators.
Zooming the emulator screen is supported by Sprint; the Sun toolkit doesn't support it. | unknown |
d18792 | test | Decompiling the inner class with javap shows the following for the run method:
public void run();
descriptor: ()V
flags: ACC_PUBLIC
Code:
stack=1, locals=1, args_size=1
0: aload_0
1: getfield #12 // Field this$0:Ltest/ThreadCreationTest;
4: invokestatic #22 // Method test/ThreadCreationTest.access$0:(Ltest/ThreadCreationTest;)V
7: return
LineNumberTable:
line 31: 0
line 32: 7
LocalVariableTable:
Start Length Slot Name Signature
0 8 0 this Ltest/ThreadCreationTest$1;
Notice that there is a static synthetic method access$0 which in turn calls the private method call. The synthetic method is created because call is private and as far as the JVM is concerned, the inner class is just a different class (compiled as ThreadCreationTest$1), which cannot access call.
static void access$0(test.ThreadCreationTest);
descriptor: (Ltest/ThreadCreationTest;)V
flags: ACC_STATIC, ACC_SYNTHETIC
Code:
stack=1, locals=1, args_size=1
0: aload_0
1: invokespecial #68 // Method call:()V
4: return
LineNumberTable:
line 51: 0
LocalVariableTable:
Start Length Slot Name Signature
Since the synthetic method is static, invoking it from the new thread has to wait for the class's static initializer to finish. However, the static initializer is waiting for the thread to finish, hence causing a deadlock.
On the other hand, the lambda version does not rely on an inner class. The bytecode of the constructor relies on an invokedynamic instruction (instruction #9) using MethodHandles:
public test.ThreadCreationTest();
descriptor: ()V
flags: ACC_PUBLIC
Code:
stack=3, locals=3, args_size=1
0: aload_0
1: invokespecial #13 // Method java/lang/Object."<init>":()V
4: new #14 // class java/lang/Thread
7: dup
8: aload_0
9: invokedynamic #19, 0 // InvokeDynamic #0:run:(Ltest/ThreadCreationTest;)Ljava/lang/Runnable;
14: invokespecial #20 // Method java/lang/Thread."<init>":(Ljava/lang/Runnable;)V
17: astore_1
18: aload_1
19: invokevirtual #23 // Method java/lang/Thread.start:()V
22: aload_1
23: invokevirtual #26 // Method java/lang/Thread.join:()V
26: goto 36
29: astore_2
30: invokestatic #29 // Method java/lang/Thread.currentThread:()Ljava/lang/Thread;
33: invokevirtual #33 // Method java/lang/Thread.interrupt:()V
36: return | unknown | |
d18793 | test | Secure is a bit of a moving target: secure against what, and for how long? If you are encrypting transaction data that has no value an hour later, almost anything will do. If you need to keep something secure for a long time, you want a long key for your PK systems, the longer the better. But you really pay the price on key generation and some types of stream encryption/decryption.
The number one failure of encryption systems is not the algorithm itself, but the implementation of the system, usually how the keys are either generated or stored. That said, Blowfish and AES are both well regarded and when properly implemented should be everything you need. I can't recommend http://www.schneier.com/ highly enough. Applied Cryptography is a bit dated, 10 years or so, but is a cogent explanation of the field specifically geared to programmers. And his blog is a wealth of information. Go there and search if you need more details on algorithms. Won't be a ton of help in java implementation, but you can get that here on SO.
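If you do end up writing the Java side yourself, a minimal AES sketch with the standard JCE classes might look like the following (illustrative only: AES-GCM needs Java 7+, and a real system still has to solve key storage, which is exactly the implementation problem mentioned above):
import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
public class AesSketch {
    public static byte[] encrypt(byte[] plaintext) throws Exception {
        // Generate a fresh 256-bit AES key (in practice, load it from a key store).
        KeyGenerator keyGen = KeyGenerator.getInstance("AES");
        keyGen.init(256);
        SecretKey key = keyGen.generateKey();
        // 96-bit random IV, the usual choice for GCM; store it alongside the ciphertext.
        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv);
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        return cipher.doFinal(plaintext);
    }
}
The algorithm choice is the easy part; how you generate, store and rotate the key is what usually decides whether the system is actually secure. | unknown |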
d18794 | test | Make sure to compare apples to apples by running GET index/_count on your index on both sides.
You might see more or fewer documents depending on where you look (Elasticsearch HEAD plugin, Kibana, Cerebro, etc.) and on whether replicas are taken into account in the count or not.
In your case you had more replicas in your local environment than in your AWS Elasticsearch service, hence the different count. | unknown | |
d18795 | test | The null makes this tricky. I'm not sure if it should be considered "high" or "low". Let me assume "low":
select t.*
from t
where coalesce(t.position, -1) = (select min(coalesce(t2.position, -1))
from t t2
where t2.name = t.name
);
A: SELECT
f.*
FROM
(
SELECT
name,
MIN(IFNULL(position,0)) as min_position
FROM
fruits
GROUP BY
name
) tmp
LEFT JOIN
fruits f ON
f.name = tmp.name AND
IFNULL(f.position,0) = min_position
-- GROUP BY name
-- optional if multiple (name, position) are possible for example
-- [apple,fruit,5], [apple,red,5] | unknown | |
d18796 | test | import javax.swing.*;
import java.awt.Color;
import java.awt.FlowLayout;
import java.awt.event.ActionListener;
import java.awt.event.ActionEvent;
public class ButtonDemo_Extended3 implements ActionListener{
// Definition of global values and items that are part of the GUI.
int redScoreAmount = 0;
int blueScoreAmount = 0;
int greenScoreAmount = 0;
JPanel titlePanel, scorePanel, buttonPanel;
JLabel redLabel, blueLabel,greenLabel, redScore, blueScore, greenScore;
JButton redButton, blueButton, greenButton,resetButton;
public JPanel createContentPane (){
// We create a bottom JPanel to place everything on.
JPanel totalGUI = new JPanel();
totalGUI.setLayout(null);
// Creation of a Panel to contain the title labels
titlePanel = new JPanel();
titlePanel.setLayout(new FlowLayout());
titlePanel.setLocation(0, 0);
titlePanel.setSize(500, 500);
redLabel = new JLabel("Red Team");
redLabel.setLocation(300, 0);
redLabel.setSize(100, 30);
redLabel.setHorizontalAlignment(0);
redLabel.setForeground(Color.red);
titlePanel.add(redLabel, 0 );
blueLabel = new JLabel("Blue Team");
blueLabel.setLocation(900, 0);
blueLabel.setSize(100, 30);
blueLabel.setHorizontalAlignment(0);
blueLabel.setForeground(Color.blue);
titlePanel.add(blueLabel, 1);
greenLabel = new JLabel("Green Team");
greenLabel.setLocation(600, 0);
greenLabel.setSize(100, 30);
greenLabel.setHorizontalAlignment(0);
greenLabel.setForeground(Color.green);
titlePanel.add(greenLabel);
// Creation of a Panel to contain the score labels.
scorePanel = new JPanel();
scorePanel.setLayout(null);
scorePanel.setLocation(10, 40);
scorePanel.setSize(500, 30);
redScore = new JLabel(""+redScoreAmount);
redScore.setLocation(0, 0);
redScore.setSize(40, 30);
redScore.setHorizontalAlignment(0);
scorePanel.add(redScore);
greenScore = new JLabel(""+greenScoreAmount);
greenScore.setLocation(60, 0);
greenScore.setSize(40, 30);
greenScore.setHorizontalAlignment(0);
scorePanel.add(greenScore);
blueScore = new JLabel(""+blueScoreAmount);
blueScore.setLocation(130, 0);
blueScore.setSize(40, 30);
blueScore.setHorizontalAlignment(0);
scorePanel.add(blueScore);
// Creation of a Panel to contain all the JButtons.
buttonPanel = new JPanel();
buttonPanel.setLayout(null);
buttonPanel.setLocation(10, 80);
buttonPanel.setSize(2600, 70);
// We create a button and manipulate it using the syntax we have
// used before. Now each button has an ActionListener which posts
// its action out when the button is pressed.
redButton = new JButton("Red Score!");
redButton.setLocation(0, 0);
redButton.setSize(30, 30);
redButton.addActionListener(this);
buttonPanel.add(redButton);
blueButton = new JButton("Blue Score!");
blueButton.setLocation(150, 0);
blueButton.setSize(30, 30);
blueButton.addActionListener(this);
buttonPanel.add(blueButton);
greenButton = new JButton("Green Score!");
greenButton.setLocation(250, 0);
greenButton.setSize(30, 30);
greenButton.addActionListener(this);
buttonPanel.add(greenButton);
resetButton = new JButton("Reset Score");
resetButton.setLocation(0, 100);
resetButton.setSize(50, 30);
resetButton.addActionListener(this);
buttonPanel.add(resetButton);
totalGUI.setOpaque(true);
totalGUI.add(buttonPanel);
totalGUI.add(scorePanel);
totalGUI.add(titlePanel);
return totalGUI;
}
// This is the new ActionPerformed Method.
// It catches any events with an ActionListener attached.
// Using an if statement, we can determine which button was pressed
// and change the appropriate values in our GUI.
public void actionPerformed(ActionEvent e) {
if(e.getSource() == redButton)
{
redScoreAmount = redScoreAmount + 1;
redScore.setText(""+redScoreAmount);
}
else if(e.getSource() == blueButton)
{
blueScoreAmount = blueScoreAmount + 1;
blueScore.setText(""+blueScoreAmount);
}
else if(e.getSource() == greenButton)
{
greenScoreAmount = greenScoreAmount + 1;
greenScore.setText(""+greenScoreAmount);
}
else if(e.getSource() == resetButton)
{
redScoreAmount = 0;
blueScoreAmount = 0;
greenScoreAmount = 0;
redScore.setText(""+redScoreAmount);
blueScore.setText(""+blueScoreAmount);
greenScore.setText(""+greenScoreAmount);
}
}
private static void createAndShowGUI() {
JFrame.setDefaultLookAndFeelDecorated(true);
JFrame frame = new JFrame("[=] JButton Scores! [=]");
//Create and set up the content pane.
ButtonDemo_Extended3 demo = new ButtonDemo_Extended3();
frame.setContentPane(demo.createContentPane());
frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
frame.setSize(1024, 768);
frame.setVisible(true);
}
public static void main(String[] args) {
//Schedule a job for the event-dispatching thread:
//creating and showing this application's GUI.
SwingUtilities.invokeLater(new Runnable() {
public void run() {
createAndShowGUI();
}
});
}
}
You have to use a layout manager in order to display your widgets. In this case I used a FlowLayout(). Also, make sure that you add the elements to the panel first and then add the panel to its parent panel.
Now, the code works as you probably want, but again you should use a particular layout in order to arrange the panels inside the frame.
A: If I run your code unmodified, I see just the "Red Team" label, and some buttons which are too small to read the text (and some of them only appear when moused over).
If you comment out all the null layouts:
//buttonPanel.setLayout(null);
then all three buttons and labels appear properly.
Doing without a layout manager, and using absolute positioning, is possible (see the Java Swing tutorial page on this exact topic) but not usually recommended. There is a lot of information on using layout managers in the Laying Out Components Within a Container lesson of the Swing tutorials. | unknown | |
d18797 | test | This is an antipattern; you want to send an action to the controller and do your work with the store in the controller.
However, if you have to inject the store into the view, you would do this:
Ember.onLoad('Ember.Application', function(Application) {
    Application.initializer({
        name: "store",
        initialize: function(container, application) {
            application.register('store:main', application.Store);
            // ...
            container.lookup('store:main');
        }
    });

    Application.initializer({
        name: "injectingTheStore",
        initialize: function(container, application) {
            application.inject('view', 'store', 'store:main');
        }
    });
}); | unknown |
d18798 | test | Give your logo a width and height.
<img src="https://www.svgrepo.com/show/128647/download.svg" height="150" width="150" alt="header-logo" />
A: Just add this code to your CSS:
.header-logo {
width: 2rem;
}
If you face any problem then let me know. | unknown | |
d18799 | test | Assuming that:
class Product < ApplicationRecord
has_one :product_detail
end
and
class ProductDetail < ApplicationRecord
belongs_to :product
end
You can use a simple single-line query to fetch expired products or products without details:
Product.select { |p| p.product_detail.nil? || p.product_detail.expires_on <= Date.today } | unknown | |
d18800 | test | I was spending a lot of time on this as well, but you will need to encode the + symbol to its URL encoding %2B for it to work... super annoying that it is not in the documentation...
For encoding you can use the website below
https://www.url-encode-decode.com/
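Most languages can also do this encoding for you. For instance, if you happened to be building the request in Java (the email address below is made up for illustration):
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;
public class EncodeEmailSketch {
    public static void main(String[] args) throws UnsupportedEncodingException {
        String email = "[email protected]";
        // "+" becomes "%2B" and "@" becomes "%40", so the address survives the URL intact.
        String encoded = URLEncoder.encode(email, "UTF-8");
        System.out.println(encoded); // test.12%2Bdemo%40example.com
    }
}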
A: You can encode the whole email portion and send it in the API call. Pardot on its backend decodes the special characters and saves the email in its original state.
for example:
Urlencode(test.12+{}[email protected]) and pass it to the API -> it works.
sample API call after encoding the email /4/do/create/email/test.12%21%23%24%25%26%27*%2F%3D%3F%5E_%2B-%60%7B%7C%7D%7E3%40gmail.com?format=json | unknown |