_id (string, lengths 2-6) | partition (string, 3 classes) | text (string, lengths 4-46k) | language (string, 1 class) | title (string, 1 class) |
---|---|---|---|---|
d2201 | train | Try to replace return new Promise((resolve, reject) => {}) in function f with return new Promise(async(resolve, reject) => {}). I hope it will solve your problem
async function f(filename) {
return new Promise(async (resolve, reject) => {
await sleep(1000);
/*
rest of function
*/
});
}
A:
However when I want to use it in a promise
You should never create another promise inside the new Promise executor. Instead, call the sleep function inside the surrounding function f (which you already marked as async, presumably to use the await keyword):
async function f(filename) {
await sleep(1000);
return new Promise((resolve, reject) => {
/* rest of function */
});
}
Your problem was also that the (resolve, reject) => {…} function is not async, so trying to use await inside there was a syntax error (in strict mode) and might also have caused the error message about the unexpected sleep token after the await.
A: Hi, try using the code below. Add async to the executor of the returned Promise, because you are using await inside it.
function sleep(ms) {
return new Promise((resolve) => {
console.log('inside sleep');
setTimeout(resolve, ms);
});
}
function f(filename) {
return new Promise( async(resolve, reject) => {
await sleep(7000);
/*
rest of function
*/
});
} | unknown | |
d2202 | train | You can use negated character class instead:
[^\w\s]
This will match a character that is not a word character and not a white-space.
RegEx Demo
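For a quick sanity check, here is what that character class matches in Python's re module (just an illustration; the question's own language is not shown in this excerpt):
import re

text = "Hello, world! 42"
print(re.findall(r"[^\w\s]", text))
# [',', '!']  -- only the characters that are neither word characters nor whitespace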
A: You could simply use [^\s\w], which will match all characters that are neither whitespace nor word characters
Regex101 | unknown | |
d2203 | train | Firstly, there's no need to use trigonometry to solve this. Instead you can use the negative reciprocal of the slope (from the slope-intercept form of the line segment equation), then calculate points on a perpendicular line passing through a given point.
See Equation from 2 points using Slope Intercept Form
Also your mid points appear incorrect and there are only 2 mid points as 3 points = 2 line segments.
This code appears to work fine
# Function to calculate mid points
mid_point <- function(p1,p2) {
return(c(p1[1] + (p2[1] - p1[1]) / 2,p1[2] + (p2[2] - p1[2]) / 2))
}
# Function to calculate slope of line between 2 points
slope <- function(p1,p2) {
return((p2[2] - p1[2]) / (p2[1] - p1[1]))
}
# Function to calculate intercept of line passing through given point with slope m
calc_intercept <- function(p,m) {
return(p[2] - m * p[1])
}
# Function to calculate y for a given x, slope m and intercept b
calc_y <- function(x,m,b) {
return(c(x, m * x + b))
}
# X and Y for 3 points along a line
road_node <- matrix(
c(
381103, 381112, 381117,
370373, 370301, 370290
),
ncol = 2,
)
road_node <- as.data.frame(road_node)
perp_segments <- c()
for (i in 2:nrow(road_node) - 1) {
n1 <- road_node[i, ]
n2 <- road_node[i + 1, ]
# Calculate mid point
mp <- mid_point(n1,n2)
# Calculate slope
m <- slope(n1,n2)
# Calculate intercept substituting n1
b <- calc_intercept(n1,m)
# Calculate negative reciprocal of slope
new_m <- -1.0 / m
# Calculate intercept of perpendicular line through mid point
new_b <- calc_intercept(mp,new_m)
# Calculate points 10 units away in x direction at mid_point
p1 <- rbind(calc_y(as.numeric(mp[1])-10,new_m,new_b))
p2 <- rbind(calc_y(as.numeric(mp[1])+10,new_m,new_b))
# Add point pair to output vector
pair <- rbind(p1,p2)
perp_segments <- rbind(perp_segments,pair)
}
This is how it looks geometrically (image)
I hope this helps.
Edit 1:
I thought about this more and came up with this simplified function. If you think of the problem as a right isosceles triangle (45,45,90), then all you need to do is find the point which is the required distance from the reference point interpolated along the line segment, then invert its x and y distances from the reference points, then add and subtract these from the reference point.
Function calc_perp
Arguments:
p1, p2 - two point vectors defining the end points of the line segment
n - the distance from the line segment
interval - the interval along the line segment of the reference point from the start (default 0.5)
proportion - Boolean defining whether the interval is a proportion of the length or a constant (default TRUE)
# Function to calculate Euclidean distance between 2 points
euclidean_distance <-function(p1,p2) {
return(sqrt((p2[1] - p1[1])**2 + (p2[2] - p1[2])**2))
}
# Function to calculate 2 points on a line perpendicular to another defined by 2 points p,p2
# For point at interval, which can be a proportion of the segment length, or a constant
# At distance n from the source line
calc_perp <-function(p1,p2,n,interval=0.5,proportion=TRUE) {
# Calculate x and y distances
x_len <- p2[1] - p1[1]
y_len <- p2[2] - p1[2]
# If proportion calculate reference point from tot_length
if (proportion) {
point <- c(p1[1]+x_len*interval,p1[2]+y_len*interval)
}
# Else use the constant value
else {
tot_len <- euclidean_distance(p1,p2)
point <- c(p1[1]+x_len/tot_len*interval,p1[2]+y_len/tot_len*interval)
}
# Calculate the x and y distances from reference point to point on line n distance away
ref_len <- euclidean_distance(point,p2)
xn_len <- (n / ref_len) * (p2[1] - point[1])
yn_len <- (n / ref_len) * (p2[2] - point[2])
# Invert the x and y lengths and add/subtract from the reference point
ref_points <- rbind(point,c(point[1] + yn_len,point[2] - xn_len),c(point[1] - yn_len,point[2] + xn_len))
# Return the reference points
return(ref_points)
}
Examples
> calc_perp(c(0,0),c(1,1),1)
[,1] [,2]
point 0.5000000 0.5000000
1.2071068 -0.2071068
-0.2071068 1.2071068
> calc_perp(c(0,0),c(1,1),sqrt(2)/2,0,proportion=FALSE)
[,1] [,2]
point 0.0 0.0
0.5 -0.5
-0.5 0.5
This is how the revised function looks geometrically with your example and n = 10 for distance from line: | unknown | |
d2204 | train | how to iterate through the map to return the most recent Datetime
Java 8+ using Streams:
// To get latest key (or entry)
String latestKey = myMap.entrySet().stream()
.max(Entry.comparingByValue())
.map(Entry::getKey) // skip this to get latest entry
.orElse(null);
// To get latest value
DateTime latestValue = myMap.values().stream()
.max(Comparator.naturalOrder())
.orElse(null);
Any Java version using for loop:
Entry<String, DateTime> latestEntry = null;
for (Entry<String, DateTime> entry : myMap.entrySet()) {
if (latestEntry == null || entry.getValue().isAfter(latestEntry.getValue()))
latestEntry = entry;
}
String latestKey = (latestEntry != null ? latestEntry.getKey() : null);
In the above, adjust as needed depending on whether you need latest key, value, or entry (key+value).
in the end, only one pair of key/value with the most recent date is left
Best way is to replace the map, or at least replace the content, after finding the latest entry.
Java 9+ using Streams (replacing map; Optional.stream() requires Java 9):
myMap = myMap.entrySet().stream()
.max(Comparator.comparing(Entry::getValue))
.stream().collect(Collectors.toMap(Entry::getKey, Entry::getValue));
Any Java version using for loop (replacing content):
Entry<String, DateTime> latestEntry = null;
for (Entry<String, DateTime> entry : myMap.entrySet()) {
if (latestEntry == null || entry.getValue().isAfter(latestEntry.getValue()))
latestEntry = entry;
}
myMap.clear();
if (latestEntry != null)
myMap.put(latestEntry.getKey(), latestEntry.getValue());
A: You can use the .isAfter() method on the DateTime objects in the map values to check if one is after the other.
Create a String mostRecentKey variable or something similar and set it to the first key value in the map.
Then iterate through myMap.keySet(), comparing each date object value to the most recent one with .isAfter(). At the end, you will have the most recent date.
e.g
String mostRecentKey = null;
for (String dateKey : myMap.keySet()){
if (mostRecentKey == null) {
mostRecentKey = dateKey;
}
// 1 - check if date is after the next date in map
if (myMap.get(dateKey).isAfter(myMap.get(mostRecentKey))) {
mostRecentKey = dateKey;
}
}
Then you have the key of the most recent one, and you can choose to delete all entries except that one, save the value or whatever you want.
To delete all but the entry you found, refer to this question here: Remove all entries from HashMap where value is NOT what I'm looking for
Basically, you can do something like this:
myMap.entrySet().removeIf(entry -> !entry.getKey().equals(mostRecentKey));
Edit - Forgot you can't modify a collection you are iterating through, changed method slightly.
A: Java 7 solution
I'm using a minimum sdk in Android that doesn't let me use Java 8. I
have to use Java 7 features.
Map<String, DateTime> myMap = new HashMap<>();
myMap.put("a", new DateTime("2020-01-31T23:34:56Z"));
myMap.put("b", new DateTime("2020-03-01T01:23:45Z"));
myMap.put("m", new DateTime("2020-03-01T01:23:45Z"));
myMap.put("c", new DateTime("2020-02-14T07:14:21Z"));
if (myMap.isEmpty()) {
System.out.println("No data");
} else {
Collection<DateTime> dateTimes = myMap.values();
DateTime latest = Collections.max(dateTimes);
System.out.println("Latest date-time is " + latest);
}
Output from this snippet is in my time zone (tested on jdk.1.7.0_67):
Latest date-time is 2020-03-01T02:23:45.000+01:00
We need to check first whether the map is empty because Collections.max() would throw an exception if it is.
If you need to delete all entries from the map except that or those holding the latest date:
dateTimes.retainAll(Collections.singleton(latest));
System.out.println(myMap);
{b=2020-03-01T02:23:45.000+01:00, m=2020-03-01T02:23:45.000+01:00}
Is it a little bit tricky? The retainAll method deletes from a collection all elements that are not in the collection passed as argument. We pass a set of just one element, the latest date-time, so all other elements are deleted. And deleting elements from the collection we got from myMap.values() is reflected back in the map from which we got the collection, so entries where the value is not the latest date are removed. So this call accomplishes it.
Side note: consider ThreeTenABP
If you are not already using Joda-Time a lot, you may consider using java.time, the modern Java date and time API and the successor of Joda-Time, instead. It has been backported and works on low Android API levels too.
java.time links
*
*Java Specification Request (JSR) 310, where java.time was first described.
*ThreeTen Backport project, the backport of java.time to Java 6 and 7 (ThreeTen for JSR-310).
*ThreeTenABP, Android edition of ThreeTen Backport
*Question: How to use ThreeTenABP in Android Project, with a very thorough explanation.
*Oracle tutorial: Date Time explaining how to use java.time.
A: You could iterate through entrySet as well, save the latest entry then remove everything and add that one back in.
Map<String, DateTime> myMap = new HashMap<>();
....
Entry<String, DateTime> latest = myMap.entrySet().iterator().next();
for(Entry<String, DateTime> date:myMap.entrySet()){
// 1 - use isAfter method to check whether date.getValue() is after latest.getValue()
// 2 - if it is, save it to the latest
}
myMap.clear();
myMap.put(latest.getKey(), latest.getValue()); | unknown | |
d2205 | train | Since A is invariant, this would be a good fit for a function, not a field.
type room struct {
L int
W int
}
func (r *room) area() int {
return r.L * r.W
}
A: If you would like to keep A as a Field, you can optionally perform the computation in a constructor.
type room struct {
L int
W int
A int
}
func newRoom(length, width int) room {
return room{
L: length,
W: width,
A: length * width,
}
}
A: If you think about what you're after, you'll see that basically your desire to "not add unnecessary code" is really about not writing any code by hand, rather than not executing any code: sure, if the type definition
type room struct {
L int
W int
A int = room.L*room.W
}
could be possible in Go, that would mean the Go compiler would have to make arrangements so that any code like this
var r room
r.L = 42
is compiled in a way to implicitly mutate r.A.
In other words, the compiler must make sure that any modification of either the L or W fields of any variable of type room in a program would also perform a calculation and update the field A of each such variable.
This poses several problems:
*
*What if your formula is trickier—like, say, A int = room.L/room.W?
First, given the casual Go rules for zero values of type int,
an innocent declaration var r room would immediately crash the program because of the integer division by zero performed by the code inserted by the compiler to force the invariant being discussed.
Second, even if we would invent a questionable rule of not calculating a formula on mere declarations (which, in Go, are also initializations), the problem would remain: what would happen in the following scenario?
var r room
r.L = 42
As you can see, even if the compiler would not make the program crash on the first line, it would have to arrange for that on the second.
Sure, we could add another questionable rule to sidestep the problem: either somehow "mark" each field as "explicitly set" or require the user to provide an explicit "constructor" for such types "armed" with a "formula".
Either solution stinks in its own way: tracing write field access incurs performance costs (some fields now have a hidden flag which takes up space, and each access of such fields spends extra CPU counts), and having constructors goes against one of the cornerstone principles of the Go design: to have as little magic as possible.
*The formula creates a hidden write.
This may not be obvious until you start writing "harder-core" Go programs for tasks it shines at—highly concurrent code with lots of simultaneously working goroutines,—but when you do you're forced to think about shared state and the ways it's mutated and—consequently—on the ways such mutations are synchronized to keep the program correct.
So, let's suppose we protect access to either W or L with a mutex; how would the compiler make sure mutation of A is also protected given that mutex operations are explicit (that is, a programmer explicitly codes locking/unlocking operations)?
*(A problem somewhat related to the previous one.)
What if "the formula" does "interesting things"—such as accessing/mutating external state?
This could be anything from accessing global variables to querying databases to working with a filesystems to exchanges over IPC or via networking protocols.
And this all could be very innocently-looking, like A int = room.L * room.W * getCoefficient() where all the nifty details are hidden in that getCoefficient() call.
Sure, we, again, could work-around this by imposing an arbitrary limit on the compiler to only allow explicit access to the fields of the same enclosing type and only allow them to participate in simple expressions with no function calls or some "whitelisted" subset of them such as math.Abs or whatever.
This clearly reduces the usefulness of the feature while greatly complicating the language.
*What if "the formula" has non-linear complexity?
Suppose, the formula is O(N³) with regard to the value of W.
Then setting W on a value to 0 would be processed almost instantly but setting it to 10000 would slow the program down quite noticeably, and both of these outcomes would result from seemingly not too different statements: r.W = 0 vs r.W = 10000.
This, again, goes against the principle of having as little magic as possible.
*Why would we only allow such things on struct types and not on arbitrary variables—provided they are all in the same lexical scope?
This looks like another arbitrary restriction.
And another—supposedly—the most obvious problem is what should happen when the programmer goes like
var r room
r.L = 2 // r.A is now 2×0=0
r.W = 5 // r.A is now 2×5=10
r.A = 42 // The invariant r.A = r.L×r.W is now broken
?
Now you can see that all the problems above may be solved by merely coding what you need, say, with the following approach:
// use "unexported" fields
type room struct {
l int
w int
a int
}
func (r *room) SetL(v int) {
r.l = v
r.updateArea()
}
func (r *room) SetW(v int) {
r.w = v
r.updateArea()
}
func (r *room) GetA() int {
return r.a
}
func (r *room) updateArea() {
r.a = r.l * r.w
}
With this approach, you may be crystal-clear about all the issues above.
Remember that the programs are written for humans to read and only then for machines to execute; it's paramount for proper software engineering to keep the code as free as possible of magic or intricate hidden dependencies between its various parts. Please remember that
Software engineering is what happens to programming
when you add time and other programmers.
© Russ Cox
See more. | unknown | |
d2206 | train | You can also now use this plugin : CamerAwesome
The official plugin has been quite abandoned. This plugin includes flash, zoom, auto focus... and no initialisation is required.
A: I also received this error when using Flutter camera plugin example when I changed it from CameraController.startVideoRecording() to CameraController.startImageStream(Function<CameraImage>).
To resolve this issue I commented/removed imageFormatGroup: ImageFormatGroup.jpeg at CameraController instantiation:
controller = CameraController(
cameraDescription,
ResolutionPreset.medium,
enableAudio: enableAudio,
//imageFormatGroup: ImageFormatGroup.jpeg, //remove or comment this line
);
This may happen because CameraImage received at startImageStream uses YUV image encoding. | unknown | |
d2207 | train | I couldn't get how and why it's being used
Because you've not closed the stream that's writing to it:
using (var fs = new FileStream(Path.Combine(uploadPath, name), ...)
I would suggest you write the file, close the using statement so the handle can be released, then read it:
string fullName = Path.Combine(uploadPath, name);
using (var fs = ...)
{
// Code as before, but ideally taking note of the return value
// of Stream.Read, that you're currently ignoring. Consider
// using Stream.CopyTo
}
// Now the file will be closed
using (var reader = File.OpenText(fullName))
{
// Read here
} | unknown | |
d2208 | train | I don't know if this will help, but if it doesn't, please write me to delete the answer.
Instead of these options you may want to consider storing, in the "Availability" table, only the id (surrogate) of the room and the date on which it is reserved. So when you select the data and join both tables you will get only the reserved rooms. I personally think that there is no point in storing all of the room-date relations with a status.
Moreover, to improve the performance you can create a non-clustered index on the City column, for instance.
A: Please don't fill your database with lots of rows which are default values.
So you don't store the availability of a meeting room, you store bookings of a meeting room.
The primary key of the booking table is (date, room id); other fields are who booked the room, when the booking was requested, a booking id...
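As a minimal sketch of that design (using Python's sqlite3 purely for illustration; the table and column names are my assumptions, not from the original answer):
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE booking (
        booking_date TEXT    NOT NULL,
        room_id      INTEGER NOT NULL,
        booked_by    TEXT,
        booked_at    TEXT,
        booking_id   INTEGER,
        PRIMARY KEY (booking_date, room_id)
    )
""")
con.execute("INSERT INTO booking VALUES ('2024-05-01', 7, 'alice', '2024-04-28', 1)")
# A free room/date pair simply has no row; only actual bookings are stored.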
If it is possible to book a meeting room for part of the day, then the primary key should be (start_date, room id), and the end date is stored in another field of the table. | unknown | |
d2209 | train | As of 4.2.2 it's not something that can be done.
EDIT: This was added in 4.3:
// Toggle for the content view for our button. This will swap between our red view controller and the fpv view controller.
@IBAction func switchContent(_ sender: UIButton) {
if (isContentViewSwitched) {
isContentViewSwitched = false
self.contentViewController = self.oldContentViewController
} else {
isContentViewSwitched = true
let newContentViewController = UIViewController()
newContentViewController.view.backgroundColor = UIColor.red
self.oldContentViewController = self.contentViewController as? DULFPVViewController
self.contentViewController = newContentViewController
}
}
For the full version, see this sample code. | unknown | |
d2210 | train | Leave your compile SDK at 23, set your TARGET SDK to 19. | unknown | |
d2211 | train | I did try to make something for you. This will move everything from the level2 folder to the level1 folder. Please try it, and let me know how it works.
$master = "C:\Temp\Level0\"
Get-ChildItem $master | ForEach-Object {
$dest = ($_.fullname)+"\"
$Loc = (Get-ChildItem $_.FullName | Select-Object -ExpandProperty fullname)+"\*"
Move-item -Path $Loc -Destination $dest
}
A: This should do what you want. It will however not move any files with the same filename to the parent folder if the filename has already been used.
$FilePath = Read-Host "Enter FilePath"
$FileSource = Get-ChildItem -Path $FilePath -Directory
foreach ($Directory in $FileSource)
{
$Files = Get-ChildItem -Path $Directory -File
foreach ($File in $Files){
Move-item -Path $File.Fullname -Destination $FilePath
}
}
Hope that is of some use. | unknown | |
d2212 | train | Nope, this is not the way to do it!
When the mouse enters the slides, you show the controls, and when the mouse leaves the slides, you hide the controls, except if the mouse enters the controls.
To do this you'll use a small timeout and check if the mouse entered the controls before they are hidden away.
var timer;
$('#slide').on({
mouseenter: function() {
clearTimeout(timer);
$('.controls').show();
},
mouseleave: function() {
timer = setTimeout(function() { //don't hide them right away
$('.controls').hide();
}, 400); // set a timeout
}
});
$('.controls').on({
mouseenter: function() {
clearTimeout(timer); // if the mouse entered the controls,
}, // clear the timeout, keeping the controls visible,
mouseleave: function() {
timer = setTimeout(function() {
$('.controls').hide();
}, 400);
}
});
FIDDLE
A much easier way to do the same thing, would be to place the controls inside the #slide element in the HTML, that way they won't trigger the mouseleave event, but that's not always possible, or convenient. | unknown | |
d2213 | train | Long story short: you can't. The heap size is fixed once the JVM is running, and there's no way to modify it from the code.
A: I do not think this is possible, but you could of course control the heap with -Xmx or -Xms.
You can also play with -XX:MaxHeapFreeRatio: this is the maximum percentage (70 by default, if I am not mistaken) of the heap that is free before the GC will shrink it.
Giving a small -Xms will make the heap grow (if needed and will involve a full GC) and shrink back is possible also.
Generally people try to avoid this shrinking and growing as much as possible because it involves a full GC, aka a stop-the-world event, that will slow you down. | unknown | |
d2214 | train | Since the lines in your file are delimited by '\r\n', the pattern you search for should account for that.
For convenience, you can still use triple quotes to initialize the string you want to search for, but then use the str.replace() method to replace all occurrences of '\n' with '\r\n':
pattern='''Line1
Line2
Line3'''.replace('\n', '\r\n')
Furthermore, if all you need is a substring match, you can use the in operator instead of the more costly regex match:
if pattern in file_contents:
print pattern
else:
print "No match!!"
A: The new line character in a file can be '\n', '\r' or '\r\n'. It depends on the OS. To be on the safe side, try to match all new line characters.
pattern='''Line1(\n|\r|\r\n)Line2(\n|\r|\r\n)Line3''' | unknown | |
d2215 | train | Ok it now has a JWT that contains information about the user, but when
the user wants to send a request to the client to do whatever he wants
to do, he should attach a token with his request, right?
Should say "but when the client wants to send a request to the server ..."
if a server uses HTTP as its protocol, it can't send data to the user
if the user didn't issue a request, so it shouldn't be able to send
that token without a request from the user.
The token will have been provided to the client during sign-on process.
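A rough client-side sketch of that flow, using Python with the requests library (the URLs, credential fields and response shape here are assumptions for illustration, not taken from the question):
import requests

# 1. Sign on once and cache the JWT returned by the login endpoint.
resp = requests.post("https://api.example.com/login",
                     json={"username": "alice", "password": "secret"})
token = resp.json()["token"]

# 2. Attach the cached token to every subsequent request.
orders = requests.get("https://api.example.com/orders",
                      headers={"Authorization": "Bearer " + token})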
To summarise the process:
*
*Client enters credentials (e.g. username and password) and sends those to a login endpoint.
*The login server will generate a JWT and return to client.
*Client receives a JWT and caches it locally at the client end ready to be sent to the server on subsequent requests.
*On all subsequent requests to the server the client will attach the cached JWT in the authorization headers of the http request.
*The server will validate the token to ensure client is authenticated. | unknown | |
d2216 | train | Assuming that output is visible but input is not:
git clone https://${repo_username}:${repo_password}@internalgit.com/scm/project/repo.git -b ${branch_name} $tmp | sed "s/${repo_password}/<redacted>/g"
should do what you want.
I misread the question; for this answer to work you'd have to run it on each push (i.e. git push 2>&1 | sed "s/${repo_password}/<redacted>/g"). I also missed that git prints this to stderr, so unless you want to use process substitution, it will be difficult to redirect output.
You should escape the password in case it contains any special regex characters (as otherwise sed may match more or less than you mean too). This answer has some ready solutions for escaping strings to use with sed. | unknown | |
d2217 | train | First of all when you use android:layout_weight you should set the android:layout_width="0dp"
Besides that, I would suggest having a separate layout for xlarge screens. The way to do that is to create a separate folder that will contain a layout with the same name as your original layout (eg. main.xml). The folder name for 10" screens should be something like layout-xlarge. For more details you should check here.
A: For different size screens. you can also make another layout for bigger screen.
res/layout-small
res/layout-large
res/layout-xlarge
Create a new folder named "layout-xlarge", copy your xml layout in there, and adjust the sizes.
Android will automatically select the appropriate layout for the current device. Remember that the layout name should be the same in each folder. | unknown | |
d2218 | train | Inflating a drawable from an XML file instead of from resources is actually impossible, because the drawable will try to cast the XmlPullParser to XmlResourceParser, which is only implemented by the private class XmlBlock.Parser. Even that parser is only used for parsing binary XML files. I tried every possible way of doing this without reflection, it's impossible.
So I found documentation on binary XML files and learned how they were made, helped with some compiled binary XML vector drawable files I had. The documentation dates back to 2011 and is still valid, I guess it will most likely remain this way, so future compatibility isn't an issue.
A previous version was tested for more than a thousand paths, without problem. The new version posted here should work just as well. (Previous versions are available in the answer history) Compared with loading a drawable directly from resources, I found that there's an average of 14 microseconds or so of extra loading, not noticeable.
Here's the code:
public class VectorDrawableCreator {
private static final byte[][] BIN_XML_STRINGS = {
"width".getBytes(),
"height".getBytes(),
"viewportWidth".getBytes(),
"viewportHeight".getBytes(),
"fillColor".getBytes(),
"pathData".getBytes(),
"path".getBytes(),
"vector".getBytes(),
"http://schemas.android.com/apk/res/android".getBytes()
};
private static final int[] BIN_XML_ATTRS = {
android.R.attr.height,
android.R.attr.width,
android.R.attr.viewportWidth,
android.R.attr.viewportHeight,
android.R.attr.fillColor,
android.R.attr.pathData
};
private static final short CHUNK_TYPE_XML = 0x0003;
private static final short CHUNK_TYPE_STR_POOL = 0x0001;
private static final short CHUNK_TYPE_START_TAG = 0x0102;
private static final short CHUNK_TYPE_END_TAG = 0x0103;
private static final short CHUNK_TYPE_RES_MAP = 0x0180;
private static final short VALUE_TYPE_DIMENSION = 0x0500;
private static final short VALUE_TYPE_STRING = 0x0300;
private static final short VALUE_TYPE_COLOR = 0x1D00;
private static final short VALUE_TYPE_FLOAT = 0x0400;
/**
* Create a vector drawable from a list of paths and colors
* @param width drawable width
* @param height drawable height
* @param viewportWidth vector image width
* @param viewportHeight vector image height
* @param paths list of path data and colors
* @return the vector drawable or null it couldn't be created.
*/
public static Drawable getVectorDrawable(@NonNull Context context,
int width, int height,
float viewportWidth, float viewportHeight,
List<PathData> paths) {
byte[] binXml = createBinaryDrawableXml(width, height, viewportWidth, viewportHeight, paths);
try {
// Get the binary XML parser (XmlBlock.Parser) and use it to create the drawable
// This is the equivalent of what AssetManager#getXml() does
@SuppressLint("PrivateApi")
Class<?> xmlBlock = Class.forName("android.content.res.XmlBlock");
Constructor xmlBlockConstr = xmlBlock.getConstructor(byte[].class);
Method xmlParserNew = xmlBlock.getDeclaredMethod("newParser");
xmlBlockConstr.setAccessible(true);
xmlParserNew.setAccessible(true);
XmlPullParser parser = (XmlPullParser) xmlParserNew.invoke(
xmlBlockConstr.newInstance((Object) binXml));
if (Build.VERSION.SDK_INT >= 24) {
return Drawable.createFromXml(context.getResources(), parser);
} else {
// Before API 24, vector drawables aren't rendered correctly without compat lib
final AttributeSet attrs = Xml.asAttributeSet(parser);
int type = parser.next();
while (type != XmlPullParser.START_TAG) {
type = parser.next();
}
return VectorDrawableCompat.createFromXmlInner(context.getResources(), parser, attrs, null);
}
} catch (Exception e) {
Log.e(VectorDrawableCreator.class.getSimpleName(), "Vector creation failed", e);
}
return null;
}
private static byte[] createBinaryDrawableXml(int width, int height,
float viewportWidth, float viewportHeight,
List<PathData> paths) {
List<byte[]> stringPool = new ArrayList<>(Arrays.asList(BIN_XML_STRINGS));
for (PathData path : paths) {
stringPool.add(path.data);
}
ByteBuffer bb = ByteBuffer.allocate(8192); // Capacity might have to be greater.
bb.order(ByteOrder.LITTLE_ENDIAN);
int posBefore;
// ==== XML chunk ====
// https://justanapplication.wordpress.com/2011/09/22/android-internals-binary-xml-part-two-the-xml-chunk/
bb.putShort(CHUNK_TYPE_XML); // Type
bb.putShort((short) 8); // Header size
int xmlSizePos = bb.position();
bb.position(bb.position() + 4);
// ==== String pool chunk ====
// https://justanapplication.wordpress.com/2011/09/15/android-internals-resources-part-four-the-stringpool-chunk/
int spStartPos = bb.position();
bb.putShort(CHUNK_TYPE_STR_POOL); // Type
bb.putShort((short) 28); // Header size
int spSizePos = bb.position();
bb.position(bb.position() + 4);
bb.putInt(stringPool.size()); // String count
bb.putInt(0); // Style count
bb.putInt(1 << 8); // Flags set: encoding is UTF-8
int spStringsStartPos = bb.position();
bb.position(bb.position() + 4);
bb.putInt(0); // Styles start
// String offsets
int offset = 0;
for (byte[] str : stringPool) {
bb.putInt(offset);
offset += str.length + (str.length > 127 ? 5 : 3);
}
posBefore = bb.position();
bb.putInt(spStringsStartPos, bb.position() - spStartPos);
bb.position(posBefore);
// String pool
for (byte[] str : stringPool) {
if (str.length > 127) {
byte high = (byte) ((str.length & 0xFF00 | 0x8000) >>> 8);
byte low = (byte) (str.length & 0xFF);
bb.put(high);
bb.put(low);
bb.put(high);
bb.put(low);
} else {
byte len = (byte) str.length;
bb.put(len);
bb.put(len);
}
bb.put(str);
bb.put((byte) 0);
}
if (bb.position() % 4 != 0) {
// Padding to align on 32-bit
bb.put(new byte[4 - (bb.position() % 4)]);
}
// Write string pool chunk size
posBefore = bb.position();
bb.putInt(spSizePos, bb.position() - spStartPos);
bb.position(posBefore);
// ==== Resource map chunk ====
// https://justanapplication.wordpress.com/2011/09/23/android-internals-binary-xml-part-four-the-xml-resource-map-chunk/
bb.putShort(CHUNK_TYPE_RES_MAP); // Type
bb.putShort((short) 8); // Header size
bb.putInt(8 + BIN_XML_ATTRS.length * 4); // Chunk size
for (int attr : BIN_XML_ATTRS) {
bb.putInt(attr);
}
// ==== Vector start tag ====
int vstStartPos = bb.position();
int vstSizePos = putStartTag(bb, 7, 4);
// Attributes
// android:width="24dp", value type: dimension (dp)
putAttribute(bb, 0, -1, VALUE_TYPE_DIMENSION, (width << 8) + 1);
// android:height="24dp", value type: dimension (dp)
putAttribute(bb, 1, -1, VALUE_TYPE_DIMENSION, (height << 8) + 1);
// android:viewportWidth="24", value type: float
putAttribute(bb, 2, -1, VALUE_TYPE_FLOAT, Float.floatToRawIntBits(viewportWidth));
// android:viewportHeight="24", value type: float
putAttribute(bb, 3, -1, VALUE_TYPE_FLOAT, Float.floatToRawIntBits(viewportHeight));
// Write vector start tag chunk size
posBefore = bb.position();
bb.putInt(vstSizePos, bb.position() - vstStartPos);
bb.position(posBefore);
for (int i = 0; i < paths.size(); i++) {
// ==== Path start tag ====
int pstStartPos = bb.position();
int pstSizePos = putStartTag(bb, 6, 2);
// android:fillColor="#aarrggbb", value type: #rgb.
putAttribute(bb, 4, -1, VALUE_TYPE_COLOR, paths.get(i).color);
// android:pathData="...", value type: string
putAttribute(bb, 5, 9 + i, VALUE_TYPE_STRING, 9 + i);
// Write path start tag chunk size
posBefore = bb.position();
bb.putInt(pstSizePos, bb.position() - pstStartPos);
bb.position(posBefore);
// ==== Path end tag ====
putEndTag(bb, 6);
}
// ==== Vector end tag ====
putEndTag(bb, 7);
// Write XML chunk size
posBefore = bb.position();
bb.putInt(xmlSizePos, bb.position());
bb.position(posBefore);
// Return binary XML byte array
byte[] binXml = new byte[bb.position()];
bb.rewind();
bb.get(binXml);
return binXml;
}
private static int putStartTag(ByteBuffer bb, int name, int attributeCount) {
// https://justanapplication.wordpress.com/2011/09/25/android-internals-binary-xml-part-six-the-xml-start-element-chunk/
bb.putShort(CHUNK_TYPE_START_TAG);
bb.putShort((short) 16); // Header size
int sizePos = bb.position();
bb.putInt(0); // Size, to be set later
bb.putInt(0); // Line number: None
bb.putInt(-1); // Comment: None
bb.putInt(-1); // Namespace: None
bb.putInt(name);
bb.putShort((short) 0x14); // Attributes start offset
bb.putShort((short) 0x14); // Attributes size
bb.putShort((short) attributeCount); // Attribute count
bb.putShort((short) 0); // ID attr: none
bb.putShort((short) 0); // Class attr: none
bb.putShort((short) 0); // Style attr: none
return sizePos;
}
private static void putEndTag(ByteBuffer bb, int name) {
// https://justanapplication.wordpress.com/2011/09/26/android-internals-binary-xml-part-seven-the-xml-end-element-chunk/
bb.putShort(CHUNK_TYPE_END_TAG);
bb.putShort((short) 16); // Header size
bb.putInt(24); // Chunk size
bb.putInt(0); // Line number: none
bb.putInt(-1); // Comment: none
bb.putInt(-1); // Namespace: none
bb.putInt(name); // Name: vector
}
private static void putAttribute(ByteBuffer bb, int name,
int rawValue, short valueType, int valueData) {
// https://justanapplication.wordpress.com/2011/09/19/android-internals-resources-part-eight-resource-entries-and-values/#struct_Res_value
bb.putInt(8); // Namespace index in string pool (always the android namespace)
bb.putInt(name);
bb.putInt(rawValue);
bb.putShort((short) 0x08); // Value size
bb.putShort(valueType);
bb.putInt(valueData);
}
public static class PathData {
public byte[] data;
public int color;
public PathData(byte[] data, int color) {
this.data = data;
this.color = color;
}
public PathData(String data, int color) {
this(data.getBytes(StandardCharsets.UTF_8), color);
}
}
}
A call to getVectorDrawable returns a VectorDrawable from a list of paths. The drawable can contain multiple paths with different colors. There are also parameters for the drawable and viewport size.
Here's an example:
List<PathData> pathList = Arrays.asList(new PathData("M128.09 5.02a110.08 110.08 0 0 0-110 110h220a109.89 109.89 0 0 0-110-110z", Color.parseColor("#7cb342")),
new PathData("M128.09 115.02h-110a110.08 110.08 0 0 0 110 110 110.08 110.08 0 0 0 110-110z", Color.parseColor("#8bc34a")),
new PathData("M207.4 115.2v-.18h-5.1l-61.43-61.43h-25.48v20.6h-6.5a11.57 11.57 0 0 0-11.53 11.53v26.09h.11c-.11.9.5 2 1.7 3.32.12.08.12.08.12.2l3.96 4-46.11 79.91c5.33 4.5 11.04 8.4 17 11.8a109.81 109.81 0 0 0 108.04 0 110.04 110.04 0 0 0 51.52-64.65c.38-1.28.68-2.57 1.1-3.78z", Color.parseColor("#30000000")),
new PathData("M216.28 230.24a6.27 6.27 0 0 0-.9-2.8l-31.99-55.57-10.58-18.48-19.85-34.21-15.08 15.12 18.6 32.28 10.2 17.73 30.92 53.37a5.6 5.6 0 0 0 1.97 2.12l15.42 10.5c.6.39 1.29.39 1.9.08.6-.37.9-.98.9-1.7z", Color.parseColor("#e1e1e1")),
new PathData("M186.98 115.02a58.9 58.9 0 0 1-30.5 51.6 58.4 58.4 0 0 1-56.7 0l18.6-32.28-15.13-15.12-62.48 108.22c-.5.9-.8 1.78-.9 2.8l-1.4 18.6c-.12.71.3 1.28.9 1.7.6.37 1.29.3 1.9-.12l15.41-10.4a7.87 7.87 0 0 0 1.97-2.07l30.92-53.53a78.74 78.74 0 0 0 77.23 0 76.65 76.65 0 0 0 16.6-12.4 79.3 79.3 0 0 0 24.07-56.89z", Color.parseColor("#f1f1f1")),
new PathData("M147.3 74.12h-6.43v-20.6h-25.48v20.6h-6.5a11.57 11.57 0 0 0-11.53 11.5v26.07h.11c-.11 1.02.5 2.12 1.82 3.4l23.05 23.14a8.3 8.3 0 0 0 5.75 2.38v-.07l.07.07c2.12 0 4.2-.75 5.71-2.38l23.1-23.1c1.32-1.32 1.81-2.53 1.81-3.4h.12V85.7a11.68 11.68 0 0 0-11.6-11.6zm-19.14 40.9h-.07a15.4 15.4 0 0 1 0-30.8v-.2l.07.2a15.46 15.46 0 0 1 15.31 15.38 15.46 15.46 0 0 1-15.3 15.42z", Color.parseColor("#646464")));
A: @Andre.Anzi To extend the class to support strokeColor and strokeWidth, you can try to arrange the attributes in an alphabetical order. | unknown | |
d2219 | train | There's no handlerMessage(Message message) method on android.os.Handler class, you should override handleMessage(Message message) method (without the 'r')
A: In both cases:
Handler handler = new Handler(new Handler.Callback () {
@Override
public boolean handleMessage(Message msg) {
TextView JeroensText = (TextView) findViewById(R.id.JText);
JeroensText.setText("Lekker bezig!");
return false;
}
});
and
Handler handler = new Handler () {
@Override
public void handleMessage(Message msg) {
TextView JeroensText = (TextView) findViewById(R.id.JText);
JeroensText.setText("Lekker bezig!");
}
};
as you can see there is handleMessage() method, not handlerMessage().
Hope it helps
A: It's just a simple miss - remove that extra 'r' from your handlerMessage method.
Since there is no 'handlerMessage(Message)' method for Handler, use either
with Callback
Handler handler = new Handler(new Handler.Callback () {
@Override
public boolean handleMessage(Message msg) {
// handle message here
return true;
}
});
or
Handler handler = new Handler () {
@Override
public void handleMessage(Message msg) {
// handle message here
}
}; | unknown | |
d2220 | train | My best guess is that the file that you want to download is not a .rar but some text (html or json) that contains information for the real download.
Can you open the downloaded file with notepad to see what it contains?
According to https://docs.github.com/en/rest/releases/releases#get-the-latest-release, it should contain a json file with some assets that contain the real download link.
A: You need to keep the WebClient instance alive during the download. What does that mean?
If the WebClient instance is only held in the wc variable, and wc is local to whatever method contains the code in your question, then as soon as this method returns, the wc variable goes out of scope.
If there is no other field/property/whatever holding a reference to the WebClient instance, the WebClient instance becomes subject to the next garbage collection.
Normally, you cannot predict when a garbage collection happens (well, you can, but it's a complicated topic for another day...). If it so happens that the garbage collection happens while the download is still in progress, well, the WebClient instance will be finalized and destroyed anyway, effectively aborting download and leaving you with a partially downloaded file.
Another possibility is that your program just exits without actually waiting for the download to complete before exiting, again leaving you with a partially downloaded file.
Are there other possibilities of what might have gone wrong there with your download? Sure, there are. But for me, it's enough tea leaves reading for a day, so i leave the troubleshooting and debugging of your program and inspection of the downloaded data to you... | unknown | |
d2221 | train | I think your logic is equivalent to counting the group sizes by column a after dropping rows that are duplicated across the combined columns a, b and c, since (assuming your data frame contains only columns a, b and c) duplicated tuples within each group must also be duplicated records in the data frame, and vice versa:
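For concreteness, here is a small hypothetical frame with columns a, b and c (made-up values, chosen so that the one-liner below reproduces the output shown):
import pandas as pd

df = pd.DataFrame({
    'a': ['A', 'A', 'A', 'A', 'A', 'B', 'B', 'B'],
    'b': [1, 1, 2, 3, 4, 5, 5, 6],
    'c': ['x', 'x', 'y', 'z', 'w', 'p', 'p', 'q'],
})
# One fully duplicated row per group; dropping duplicates leaves 4 unique rows for A and 2 for B.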
df.drop_duplicates().groupby('a').size()
# a
# A 4
# B 2
# dtype: int64 | unknown | |
d2222 | train | You have a couple of options; the simplest is just to disable the button when you call your API and re-enable it when the call resolves. You could do it like this:
<button id="register-btn" name="register-btn" class="btn btn-primary" ng-disabled="isRegistering" single-click="createClient()">{{ running ? 'Please wait...' : 'Register' }}</button>
and in your controller
$scope.isRegistering = false;
$scope.createClient = function() {
$scope.isRegistering = true;
return $timeout(function() {
ClientService.createClient($scope.theClient).then(function (aCreateClientResponse) {
$scope.isRegistering = false;
if(aCreateClientResponse[0] == $scope.RESPONSE_CODE.CM_SUCCESS) {
alert('success');
}
else if(aCreateClientResponse[0] == $scope.RESPONSE_CODE.CM_DOMAINNAME_ERROR) {
alert('Check your domain name');
}
else if(aCreateClientResponse[0] == $scope.RESPONSE_CODE.CM_INVALID_INPUT) {
alert('Invalid request');
}
else {
alert('Service is not available');
}
}, function(){
$scope.isRegistering = false;
});
}, 1000);
}; | unknown | |
d2223 | train | MODERATOR ATTENTION: This question seems to belong more to dsp.stackexchange than this forum.
There's nothing wrong with either your sound or PortAudio. The sound you're hearing at the end is just the result of the audio being abruptly stopped. Take a look at the following image of a sound that has a constant amplitude through the entire duration. This sound will have an audible pop at the end.
Conversely, if we attenuate the amplitude by modifying the waveform's envelope (the same sound as in image #1) so that it resembles the sound in image #2, we won't hear any abrupt change(s) in the sound at the end.
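A minimal sketch of such a fade-out envelope, applied here to a NumPy array of samples (the buffer format and fade length are assumptions, not from the question):
import numpy as np

def fade_out(samples, sample_rate, fade_ms=20.0):
    # Linearly ramp the last fade_ms milliseconds down towards zero amplitude,
    # giving the signal an envelope like the one in image #2.
    n = min(len(samples), max(1, int(sample_rate * fade_ms / 1000)))
    env = np.ones(len(samples))
    env[-n:] = np.linspace(1.0, 0.0, n)
    return samples * env

tone = np.sin(2 * np.pi * 440 * np.arange(44100) / 44100)  # one second of 440 Hz
faded = fade_out(tone, 44100)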
In conclusion, if your goal is to completely eliminate the pops that you're hearing, fade out (or fade in) your sound(s). | unknown | |
d2224 | train | I had the same problem in a firebase-functions project. I fixed it by giving the tsconfig.json the property "skipLibCheck" with value true.
See more at https://lifesaver.codes/answer/node-modules-tapable-tapable-has-no-exported-member-tapable-12185 | unknown | |
d2225 | train | Try this,
STEP 1 :
Put the following scripts in your html file.
<script type="text/javascript" src="http://code.jquery.com/jquery-1.9.0.min.js"></script>
<script type="text/javascript">
$(function() {
$('.signout-btn').click(function() {
$('#signout').submit();
});
})
</script>
STEP 2 :
Add a class="signout-btn" attribute to the anchor tags that you want to trigger the form submission. | unknown | |
d2226 | train | I'm not sure if the change from 60 days is automatic, you may have to change it manually.
Unfortunately, you can't export old data from GA4. Once you are out of the sandbox and have changed the data limit, you will start to get more days stored. | unknown | |
d2227 | train | Wow, it's nice to know I'm not the only one lost in the void with the V2 API ...
I'm implementing a similar library and ran across the same issue. From my understanding of the documentation there are two Monolithic uploads: The single exchange POST variant (mentioned at the bottom of the docs), and the two exchange POST + PUT variant (mentioned at the top of the docs).
I wasn't able to get the POST-only method working. In my case, I was using it to upload the image manifest after the layer blobs, and before the registry manifest. While the POST appears successful and returns 202, the debug logging on the registry shows that it's never replicated from the staging location into the data store (as happens after chunked uploads). The subsequent attempt to upload the manifest then fails with 400, and debug logging "blob unknown to registry".
I was, however, able to work around this issue by using the POST+PUT method.
The key sections in the docs for me were:
Though the URI format (/v2//blobs/uploads/) for the Location header is specified, clients should treat it as an opaque url and should never try to assemble it.
and
A monolithic upload is simply a chunked upload with a single chunk ...
Following those two instructions I created a new Location header (and UUID) using POST, appended the digest value, and completed the upload by PUTting the blob to the modified location.
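For reference, the POST + PUT sequence looks roughly like this (a sketch in Python with requests; the registry URL, repository name and lack of authentication are assumptions):
import hashlib, requests

registry = "http://localhost:5000"      # assumed local, unauthenticated registry
repo = "myorg/myimage"                  # assumed repository name
blob = open("layer.tar.gz", "rb").read()
digest = "sha256:" + hashlib.sha256(blob).hexdigest()

# 1. POST to start the upload and treat the returned Location as opaque.
start = requests.post(registry + "/v2/" + repo + "/blobs/uploads/")
location = start.headers["Location"]
if location.startswith("/"):            # some registries return a relative Location
    location = registry + location

# 2. PUT the whole blob to that Location with the digest appended as a query parameter.
sep = "&" if "?" in location else "?"
done = requests.put(location + sep + "digest=" + digest,
                    data=blob,
                    headers={"Content-Type": "application/octet-stream"})
assert done.status_code == 201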
Side note: Looking at the registry debug logs, the docker CLI checks the existence of the blobs before starting a new upload (and after completing the uploads too -- assuming as a double check on the status code).
Update: Found myself working on this again, and figured I'd update you on what I found ...
The registry only supports handling the response body during PATCH and PUT operations; the copyFullPayload helper is not invoked for POST. Additionally, all uploads appear to be treated as monolithic uploads (in the sense that they stream the blob from a single request body) as handling of the Content-Range header does not appear to be implemented.
Side Note: I conducted this analysis under the scope of increasing test coverage of the V2 API during an overhaul; here is a working example of the POST+PUT method. In general I've found the official documentation to be out-of-sync with the current implementation with regards to headers and status codes. I have tested this against a local V2 registry and DockerHub, but not against other registries such as DTR, quay, or MCR. | unknown | |
d2228 | train | This is the answer I got on GitHub:
The reason that the export is greyed out is because you are using the
bluemix staged Playground that is in the 'web-connector' mode. In
order to meaningfully export a business network card, you will need to
create a connection to Hyperledger Fabric. The steps you outline above
will not create a business network in Hyperledger Fabric, but within a
web connector.
If you follow the developer tutorial
(https://hyperledger.github.io/composer/tutorials/developer-tutorial)
you will be taken through the process of creating a (local) Fabric,
from which you can connect Playground and export business network
cards.
A: You can create the admin card via composer-cli.
composer card create
-p connection.json
-u PeerAdmin -c [email protected]
-k 114aab0e76bf0c78308f89efc4b8c9423e31568da0c340ca187a9b17aa9a4457_sk
-r PeerAdmin -r ChannelAdmin
Add connection.json file as following:
{
"name": "fabric-network",
"type": "hlfv1",
"mspID": "Org1MSP",
"peers": [
{
"requestURL": "grpc://localhost:7051",
"eventURL": "grpc://localhost:7053"
}
],
"ca": {
"url": "http://localhost:7054",
"name": "ca.org1.example.com"
},
"orderers": [
{
"url" : "grpc://localhost:7050"
}
],
"channel": "composerchannel",
"timeout": 300
}
The certificate file can be found in the signcerts subdirectory (fabric-tools/fabric-scripts/hlfv1/composer/crypto-config/peerOrganizations/org1.example.com/users/[email protected]/msp) and is named [email protected].
The private key file can be found in the keystore subdirectory. The name of the private key file is a long hexadecimal string, with a suffix of _sk, for example 114aab0e76bf0c78308f89efc4b8c9423e31568da0c340ca187a9b17aa9a4457_sk.
Step by step tutorial is available in Hyperledger Composer
Tutorials @ https://hyperledger.github.io/composer/tutorials/deploy-to-fabric-single-org. | unknown | |
d2229 | train | Doing some more debugging, I found out that Silk4J for Eclipse (Java) actually uses a WPF user interface (.NET).
While preinstalled by Windows, I never needed .NET on my machine, so I never installed any updates for it.
Installing the latest .NET updates, the problem was gone. In my case I updated to .NET 4.5.2. | unknown | |
d2230 | train | Let me know if I am not understanding your question.
If you are using a Facebook application and the other page is also located in your project then you can do a simple window.location = "myLocation" or another equivalent call.
If you do another kind of redirect inside of a Facebook iFrame, then the page will just appear inside that iFrame as if it was part of the app. You should not need to do anything special to navigate between pages (as long as they are part of your application)
If it is not part of the application then you can still redirect, and it will appear inside of that iFrame (though it may not look as nice), though you will have to edit the redirected page so you can go back if you don't want to click the back button every time. There may be some settings that you need to edit if you are redirecting from one facebook page to another one though.
If you have a separate webpage that you want to link to (without showing facebook), then you are going to have to find a different way to do so. You would have to do something like:
window.open('www.yoursite.com','blank','fullscreen, yes');
A: Because a canvas app is just a website, facebook built in native navigation which means that I can redirect the canvas URL to the actual url of the website
For example, since I want to redirect people to the donation page after they take a social action, I can just redirect to apps.facebook.com/myappcanvas/donation and it will go there.
I guess I was a bit hasty when I asked this question. Should have tried it first.
In summary:
top.location.href = "apps.facebook.com/myappcanvas/secondarypage/tertiarypage/etc";
This works especially if you want to send a user back to a specific page after they invite friends or send a facebook message, or write on their wall.
Hope this helps other developers out there. | unknown | |
d2231 | train | Assuming data_stuff is an Object, you can try this:
var i = 0;
// We need an array (not object) to loop through all the posts.
// This saves all keys that are in the object:
var keys = Object.keys(data_stuff);
function postNext() {
// For every POST request, we want to get the key and the value:
var key = keys[i];
var value = data_stuff[key];
$.post( window.location.href, {
stuff: key,
data: value
}).done(function( data ) {
if (data == 1) {
$('div#counter').text(i);
}
// Increment i for the next post
i++;
// If it's the last post, run postsDone
if(i == keys.length) {
postsDone();
}
// Else, run the function again:
else {
postNext();
}
});
}
postNext();
function postsDone() {
// All data was posted
}
This will only post the next bit of data_stuff after the previous $.post was done. | unknown | |
d2232 | train | What you want to do is different from the intent of the template. The template was constructed so that the contents of the <h1> would be your site logo or site name. That is why they hard-coded it into the Site.Master as:
<div class="title">
<h1>
My ASP.NET Application
</h1>
</div>
It wasn't meant to be changed per page.
If you want to change it per page, then you have a couple of options. Here is one.
Since you referenced the header section of the site master: lets say you want the text of the title set to the actual page title. You could do it like this:
<div class="title">
<h1>
<asp:Label ID="_pageTitle" runat="server"></asp:Label>
</h1>
</div>
So you replace My ASP.NET Application with a label so you can easily change it in the code behind.
Then, in your code behind, you have something like:
protected void Page_Load(object sender, EventArgs e)
{
_pageTitle.Text = Page.Title;
//rest of your code
}
This will set the text of the label to the page title of the page.
A: Inside the Site.Master page, change the following block
<h1>
My ASP.NET Application
</h1>
to
<h1><%= Page.Title%></h1>
Then, in each content page, set the page title in the page directive:
<%@ Page Title="Home Page" Language="vb" MasterPageFile="~/Site.Master" AutoEventWireup="false"
CodeBehind="Default.aspx.vb" Inherits="WebApplicationTest._Default" %>
*
*Note the Title="Home Page" attribute.
Or from code-behind:
Page.Title = "Home Page"
A: All you need to do is update the header in the master page and then every single web page that uses that master page will automatically have the header updated
A: If you are looking to pass some text from each .aspx page to the master page, try the following. I had a similar situation and I resolved it as follows:
Place a label control with ID="lblOnMasterPage" on the master page.
<asp:Label ID="lblOnMasterPage" runat="server"></asp:Label>
On code-behind file of the master page create a public property as
public string LabelValue
{
get { return this.lblOnMasterPage.Text; }
set { this.lblOnMasterPage.Text = value; }
}
and use following code to pass the title/text you want for every page of your application..
((yourMasterPage)this.Master).LabelValue = "text you want to pass from each page"; | unknown | |
d2233 | train | You could use SelectSingleNode or SelectNodes with an XPath expression. There are several options to achieve what you want, depending on your intention, but this would be one way to do it:
# find the nodes
$nodes = $xml.SelectNodes("//*[local-name()='ATTRIBUTE'][@NAME='News- offers_OPT_EMAIL']")
# get value
$nodes.InnerText
Or if the value of the attribute doesn't matter, simply do:
$xml.customers.customer.attribute.'#text' | unknown | |
d2234 | train | Looks like you need to extract vowels; does this work:
> vowels <- c('A','E','I','O','U')
> LETTERS[sapply(vowels, function(ch) grep(ch, LETTERS))]
[1] "A" "E" "I" "O" "U"
> | unknown | |
d2235 | train | Use OR and filter only the rows having the same number of instances as the number of filters specified in the WHERE clause.
SELECT stu.First_Name,
stu.Last_Name,
stu.Phone
FROM Student stu
JOIN Enrollment e
ON stu.Student_Id = e.Student_Id
JOIN Section sec
ON e.Section_Id = sec.Section_Id
JOIN Course c
ON sec.Course_No = c.Course_No
WHERE c.Description IN ('Systems Analysis', 'Project Managment') -- two values
GROUP BY stu.First_Name, stu.Last_Name, stu.Phone
HAVING COUNT(*) = 2 -- rows must have 2 instances | unknown | |
d2236 | train | Your GIF file on disk is already binary and already what a browser will expect if you send a Content-Type: image/gif so you just need to read its contents like this:
with open('image.gif', 'rb') as f:
corpo = f.read()
Your variable corpo will then contain a GIF-encoded image, with a header with the width and height, the palette and the LZW-compressed pixels.
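Since you are assembling the HTTP answer by hand, a minimal sketch of a complete response might look like this (the status line and headers are my assumptions about your server, not part of the original answer):
with open('image.gif', 'rb') as f:
    corpo = f.read()

headers = ("HTTP/1.1 200 OK\r\n"
           "Content-Type: image/gif\r\n"
           "Content-Length: " + str(len(corpo)) + "\r\n"
           "\r\n")
answer = headers.encode('ascii') + corpo   # the GIF bytes follow the blank line unchanged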
Just by way of clarification, when you use:
im = Image.open('image.gif')
PIL will create a memory buffer according to the width and height in the GIF, then uncompress all the LZW-compressed pixels into that buffer. If you then convert that to bytes, you would be sending the browser the first pixel, then the second, then the third but without any header telling it how big the image is and also you'd be sending it uncompressed. That wouldn't work - as you discovered.
A: You'll need to use the .tobytes() method to get what you want (brief writeup here). Try this modification of your code snippet:
elif format == 'gif':
corpo = PIL.Image.open(path)
corpo_bytes = corpo.tobytes()
answer = corpo_bytes + ("\n").encode('UTF-8') #I need this line to end with a \n because it's the end of a HTTP answer
This might not be exactly what you're looking for, but it should be enough to get you started in the right direction. | unknown | |
d2237 | train | In osCommerce the payment modules have a method called process_button().
This method draws a form with the hidden fields the payment method needs. In the case of Google Checkout it will draw the fields needed by Google to show the information.
You can check that function in catalog/includes/modules/payment/<your Google Checkout module>.php to see which fields are sent to Google Checkout to show the information. | unknown | |
d2238 | train | Check conditional tags in your function
https://codex.wordpress.org/Conditional_Tags
function admin_redirect() {
if( is_page('about_us') )
return;
if ( !is_user_logged_in()) {
wp_redirect( home_url('/login') );
exit;
}
} | unknown | |
d2239 | train | You can define an id for the shape item (<item android:id="@+id/shape_bacground" ../>), then at runtime you have to get the background of your view, cast it to LayerDrawable, and use findDrawableByLayerId() to find your shape and set its color using setColor(). Here is sample code:
drawable xml
<layer-list xmlns:android="http://schemas.android.com/apk/res/android" >
<item android:drawable="@drawable/float_button_shadow1">
</item>
<item
android:id="@+id/shape_bacground"
android:bottom="2dp"
android:left="1dp"
android:top="1dp"
android:right="1dp">
<shape android:shape="oval" >
<solid android:color="#1E88E5" />
</shape>
</item>
</layer-list>
Changing color
try {
LayerDrawable layer = (LayerDrawable) view.getBackground();
GradientDrawable shape = (GradientDrawable) layer
.findDrawableByLayerId(R.id.shape_bacground);
shape.setColor(backgroundColor);// set new background color here
rippleColor = makePressColor();
} catch (Exception ex) {
// Without bacground
}
A:
You can do the same thing programmatically, where you can change any color at runtime by making a function and passing the Color as a parameter.
ShapeDrawable sd1 = new ShapeDrawable(new RectShape());
sd1.getPaint().setColor(CommonUtilities.color);
sd1.getPaint().setStyle(Style.STROKE);
sd1.getPaint().setStrokeWidth(CommonUtilities.stroke);
sd1.setPadding(15, 10, 15, 10);
sd1.getPaint().setPathEffect(
new CornerPathEffect(CommonUtilities.corner));
ln_back.setBackgroundDrawable(sd1);
Hope it will help you ! | unknown | |
d2240 | train | They are both red when you first see them. After you click on one of them and come back, that one becomes blue since it's marked as visited.
If you want it to still be red then you need to add this to the css rules:
a:visited {
color: red;
}
A: Short answer: you need to color the visited links:
a:visited {
color: red;
}
Long answer: links have four states (unvisited, visited, hover and active). There are four pseudo selectors that enable you to style the state of the links:
a:link {
color: red;
}
a:visited {
color: red;
}
a:hover {
color: red;
}
a:active {
color: red;
} | unknown | |
d2241 | train | Note that in the latest version of Eclipse you won't see the line width option in the JSP Files editor; instead this is covered by the line width setting in the HTML Files - Editor menu.
A: Window - Preferences - Web - JSP Files - Editor. Click on the link for your kind of JSP (HTML or XML content), and adjust the line width.
A: Window -> Preferences -> type HTML... You'll see "Editor"; in Line width, enter the value you need. Works for me on the Eclipse 2018-12 version. | unknown | |
d2242 | train | In my opinion the best solution is to use the standard C++17 std::variant. MSVC comes with natvis for this type so that you have a pretty view of the value that is stored.
Here is some natvis code that I just wrote and tested:
<Type Name="boost::variant<*>">
<DisplayString Condition="which_==0">{*($T1*)storage_.data_.buf}</DisplayString>
<DisplayString Condition="which_==1" Optional="true">{*($T2*)storage_.data_.buf}</DisplayString>
<DisplayString Condition="which_==2" Optional="true">{*($T3*)storage_.data_.buf}</DisplayString>
<DisplayString Condition="which_==3" Optional="true">{*($T4*)storage_.data_.buf}</DisplayString>
<DisplayString Condition="which_==4" Optional="true">{*($T5*)storage_.data_.buf}</DisplayString>
<DisplayString Condition="which_==5" Optional="true">{*($T6*)storage_.data_.buf}</DisplayString>
<DisplayString Condition="which_==6" Optional="true">{*($T7*)storage_.data_.buf}</DisplayString>
<Expand>
<Item Name="which">which_</Item>
<Item Name="value0" Condition="which_==0">*($T1*)storage_.data_.buf</Item>
<Item Name="value1" Condition="which_==1" Optional="true">*($T2*)storage_.data_.buf</Item>
<Item Name="value2" Condition="which_==2" Optional="true">*($T3*)storage_.data_.buf</Item>
<Item Name="value3" Condition="which_==3" Optional="true">*($T4*)storage_.data_.buf</Item>
<Item Name="value4" Condition="which_==4" Optional="true">*($T5*)storage_.data_.buf</Item>
<Item Name="value5" Condition="which_==5" Optional="true">*($T6*)storage_.data_.buf</Item>
<Item Name="value6" Condition="which_==6" Optional="true">*($T7*)storage_.data_.buf</Item>
</Expand>
</Type>
It works for any boost::variant<type_or_types>.
It has a DisplayString that takes the variant's member storage_ and extracts the buffer buf. The address of the buffer is then cast to a pointer to the type that was provided to std::variant. As you can see in my code which_ is zero based, whereas the template parameters are 1 based. I am not interested in the address but in the value, so I am adding a * in front of the value.
I also added an Expand section so that you can expand a variant. This allows me to show which_ and to show the value again - this time the column Type will show the correct type as you can see in my screen capture (for the variant itself the type is displayed as boost::variant<…> and I do not know how to add the type name into the DisplayString).
Please note that the Optional="true" are required because otherwise we would get a parsing error in cases where less than 7 type parameters are passed (as in boost::variant<int,bool>and natvis does not have a $T7.
If you need more template parameters, you can easily extend the code.
If you want the DisplayString to also show the index (as an explicit value or coded into the name value…), you can easily change it accordingly, as in
<DisplayString Condition="which_==0">{{which={which_} value0={*($T1*)storage_.data_.buf}}}</DisplayString>
Last but not least please note that I did not test very much and that I did not look into boost::variant into detail. I saw that storage_ has members suggesting that there is some alignment in place. So it might not be sufficient to just use storage_.data_.buf. It might be necessary to adjust the pointer depending on the alignment being used.
A: If you want to show $T1 as a string, wrap it with ". For example, for
<DisplayString>{*($T1*)storage_.data_.buf} {"$T1"}</DisplayString>
in your case you will see 1 "int" | unknown | |
d2243 | train | I made a mistake in the code, which caused the issue. Change the instances like this and then it will work :). A small mistake caused a big issue :-(
//(mode == GL.RenderMode(RenderingMode.Select))
(mode == RenderingMode.Select) // Removed GL.RenderMode | unknown | |
d2244 | train | It will always assume that the first string is the location, so just use the second overload:
public static void L(string location, params string[] message)
{
Write(LogType.Log, message, false, location);
}
You can simply pass null or an empty string when the location is not available and deal with it in the method.
A: You can create two classes and use them to differentiate between the two overloads. You can even go so far as having one class inherit from the other if you want.
public class LoginWithMessages {
public string[] Messages {get; set;}
}
public class LoginWithLocation : LoginWithMessages {
public string Location {get; set;}
}
Then your method signatures will be:
public static void L(LoginWithMessages loginMessage)
public static void L(LoginWithLocation loginLocation) | unknown | |
d2245 | train | If you are not restricted to determining the UI mode within JavaScript, here are other ways:
*
*If you have a model class for your component, check for this
condition:
AuthoringUIMode.TOUCH.equals(AuthoringUIMode.fromRequest(getRequest()))
*To check from JSP, use this code:
Placeholder.isAuthoringUIModeTouch(slingRequest)
A: You can simply read the cookie value of cq-authoring-mode. It can either be CLASSIC or TOUCH.
var isTouch = $.cookie('cq-authoring-mode') === 'TOUCH'
The other way would be to look for distinctive JS objects like Granite.UI. This might be painful in the future if the clientlib that created the object gets attached to the other mode (e.g. via an AEM hotfix or inadvertently during development).
var isTouch = Granite.UI != null | unknown | |
d2246 | train | Try this: this code adds a text or string ON the video, and after saving the video you can play it in any player.
The main advantage of this code is that it provides the video with sound, and everything is in one piece of code (both text and image).
#import <AVFoundation/AVFoundation.h>
-(void)MixVideoWithText
{
AVURLAsset* videoAsset = [[AVURLAsset alloc]initWithURL:url options:nil];
AVMutableComposition* mixComposition = [AVMutableComposition composition];
AVMutableCompositionTrack *compositionVideoTrack = [mixComposition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
AVAssetTrack *clipVideoTrack = [[videoAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
AVMutableCompositionTrack *compositionAudioTrack = [mixComposition addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid];
AVAssetTrack *clipAudioTrack = [[videoAsset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0];
//If you need audio as well add the Asset Track for audio here
[compositionVideoTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, videoAsset.duration) ofTrack:clipVideoTrack atTime:kCMTimeZero error:nil];
[compositionAudioTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, videoAsset.duration) ofTrack:clipAudioTrack atTime:kCMTimeZero error:nil];
[compositionVideoTrack setPreferredTransform:[[[videoAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0] preferredTransform]];
CGSize sizeOfVideo=[videoAsset naturalSize];
//TextLayer defines the text they want to add in Video
//Text of watermark
CATextLayer *textOfvideo=[[CATextLayer alloc] init];
textOfvideo.string=[NSString stringWithFormat:@"%@",text];//text is shows the text that you want add in video.
[textOfvideo setFont:(__bridge CFTypeRef)([UIFont fontWithName:[NSString stringWithFormat:@"%@",fontUsed] size:13])];//fontUsed is the name of font
[textOfvideo setFrame:CGRectMake(0, 0, sizeOfVideo.width, sizeOfVideo.height/6)];
[textOfvideo setAlignmentMode:kCAAlignmentCenter];
[textOfvideo setForegroundColor:[selectedColour CGColor]];
//Image of watermark
UIImage *myImage=[UIImage imageNamed:@"one.png"];
CALayer *layerCa = [CALayer layer];
layerCa.contents = (id)myImage.CGImage;
layerCa.frame = CGRectMake(0, 0, sizeOfVideo.width, sizeOfVideo.height);
layerCa.opacity = 1.0;
CALayer *optionalLayer=[CALayer layer];
[optionalLayer addSublayer:textOfvideo];
optionalLayer.frame=CGRectMake(0, 0, sizeOfVideo.width, sizeOfVideo.height);
[optionalLayer setMasksToBounds:YES];
CALayer *parentLayer=[CALayer layer];
CALayer *videoLayer=[CALayer layer];
parentLayer.frame=CGRectMake(0, 0, sizeOfVideo.width, sizeOfVideo.height);
videoLayer.frame=CGRectMake(0, 0, sizeOfVideo.width, sizeOfVideo.height);
[parentLayer addSublayer:videoLayer];
[parentLayer addSublayer:optionalLayer];
[parentLayer addSublayer:layerCa];
AVMutableVideoComposition *videoComposition=[AVMutableVideoComposition videoComposition] ;
videoComposition.frameDuration=CMTimeMake(1, 30);
videoComposition.renderSize=sizeOfVideo;
videoComposition.animationTool=[AVVideoCompositionCoreAnimationTool videoCompositionCoreAnimationToolWithPostProcessingAsVideoLayer:videoLayer inLayer:parentLayer];
AVMutableVideoCompositionInstruction *instruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
instruction.timeRange = CMTimeRangeMake(kCMTimeZero, [mixComposition duration]);
AVAssetTrack *videoTrack = [[mixComposition tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
AVMutableVideoCompositionLayerInstruction* layerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:videoTrack];
instruction.layerInstructions = [NSArray arrayWithObject:layerInstruction];
videoComposition.instructions = [NSArray arrayWithObject: instruction];
NSString *documentsDirectory = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES)objectAtIndex:0];
NSDateFormatter *dateFormatter = [[NSDateFormatter alloc] init];
[dateFormatter setDateFormat:@"yyyy-MM-dd_HH-mm-ss"];
NSString *destinationPath = [documentsDirectory stringByAppendingFormat:@"/output_%@.mov", [dateFormatter stringFromDate:[NSDate date]]];
AVAssetExportSession *exportSession = [[AVAssetExportSession alloc] initWithAsset:mixComposition presetName:AVAssetExportPresetMediumQuality];
exportSession.videoComposition=videoComposition;
exportSession.outputURL = [NSURL fileURLWithPath:destinationPath];
exportSession.outputFileType = AVFileTypeQuickTimeMovie;
[exportSession exportAsynchronouslyWithCompletionHandler:^{
switch (exportSession.status)
{
case AVAssetExportSessionStatusCompleted:
NSLog(@"Export OK");
if (UIVideoAtPathIsCompatibleWithSavedPhotosAlbum(destinationPath)) {
UISaveVideoAtPathToSavedPhotosAlbum(destinationPath, self, @selector(video:didFinishSavingWithError:contextInfo:), nil);
}
break;
case AVAssetExportSessionStatusFailed:
NSLog (@"AVAssetExportSessionStatusFailed: %@", exportSession.error);
break;
case AVAssetExportSessionStatusCancelled:
NSLog(@"Export Cancelled");
break;
}
}];
}
This shows any error that comes up after saving the video.
-(void) video: (NSString *) videoPath didFinishSavingWithError: (NSError *) error contextInfo: (void *) contextInfo
{
if(error)
NSLog(@"Finished saving video with error: %@", error);
}
From :https://stackoverflow.com/a/22016800/3901620 | unknown | |
d2247 | train | I'm not sure I fully understand your question. Maybe if the following doesn't clarify things you can edit your post to include the name of the MATLAB function you are using and a snippet of code?
The convhull function in MATLAB does return the index of coordinates in the convex hull.
In the following example, (x(k), y(k)) are the coordinates. (taken straight from convhull doc)
xx = -1:.05:1; yy = abs(sqrt(xx));
[x,y] = pol2cart(xx,yy);
k = convhull(x,y);
plot(x(k),y(k),'r-',x,y,'b+')
It's the same thing if you are using convexhull instead (convexhull doc).
x = rand(10,1);
y = rand(10,1);
dt = DelaunayTri(x,y);
k = convexHull(dt);
plot(x,y, '.', 'markersize',10);
hold on;
plot(x(k), y(k), 'r');
hold off; | unknown | |
d2248 | train | I found this here:
http://enholm.net/index.php/blog/vba-code-to-transfer-excel-2007-xlsx-books-to-2003-xls-format/
It searches through a directory looking for xlsx files and changes them to xls files.
I think though it can be changed to look for xlsm files and change them to xls files as well.
When I run it I get:
Run-Time error '9' Subscript out of range
Debug
Sheets("List").Cells(r, 1) = Coll_Docs(i)
is highlighted in yellow
I do not know enough about VBA to figure out what is not working.
Thanks
Sub SearchAndChange()
Dim Coll_Docs As New Collection
Dim Search_path, Search_Filter, Search_Fullname As String
Dim DocName As String
Application.DisplayAlerts = False
Application.ScreenUpdating = False
Application.Calculation = xlCalculationManual
Dim i As Long
Search_path = ThisWorkbook.Path & "\360 Compiled Repository\May_2013"
Search_Filter = "*.xlsx"
Set Coll_Docs = Nothing
DocName = dir(Search_path & "\" & Search_Filter)
Do Until DocName = ""
Coll_Docs.Add Item:=DocName
DocName = dir
Loop
r = 1
For i = Coll_Docs.Count To 1 Step -1
Search_Fullname = Search_path & "\" & Coll_Docs(i)
Sheets("List").Cells(r, 1) = Coll_Docs(i)
Call changeFormats(Search_path, Coll_Docs(i))
r = r + 1
Next
Application.DisplayAlerts = True
Application.ScreenUpdating = True
Application.Calculation = xlCalculationAutomatic
End Sub
'**************************************************************
'* Changes format from excel 2007 to 2003
'***************************************************************
Sub changeFormats(ByVal dir As String, ByVal fileName As String)
Workbooks.Open fileName:=dir & fileName
ActiveWorkbook.SaveAs fileName:=dir & Replace(fileName, "xlsx", "xls"), FileFormat:=xlExcel8
ActiveWindow.Close
End Sub | unknown | |
d2249 | train | try using:
'%' + @perberesi + '%'
instead of:
%@perberesi%
Some Examples
A: Ok, I just realized that you are creating a function, which means that you can't use INSERT. You should also really take Gordon's advice and use explicit joins and table aliases.
CREATE FUNCTION perberesit7(@perberesi varchar(100))
RETURNS @menu_rest TABLE ( emri_hotelit varchar(50),
emri_menuse varchar(50),
perberesit varchar(255))
AS
Begin
return(
Select R.Emri_Rest, M.Emri_Pjatës, M.Pershkrimi
From RESTORANTET R
INNER JOIN MENU M
ON M.Rest_ID = R.ID_Rest
Where M.Pershkrimi LIKE '%' + @perberesi + '%')
End
A: Why do you have to define the return table?
The following is an inline table-valued function, which performs better than a multi-statement table-valued function. I wrote one to return columns that contain the two letters 'id'. Just modify it for your own case.
See article from Wayne Sheffield.
http://blog.waynesheffield.com/wayne/archive/2012/02/comparing-inline-and-multistatement-table-valued-functions/
-- Use tempdb
use tempdb;
go
-- Simple ITVF
create function search_columns (@name varchar(128))
returns TABLE
return
(
select * from sys.columns where name like '%' + @name + '%'
)
go
-- Call the function
select * from search_columns('id');
go
However, since you have a '%' in the like clause at the front of the expression, a full table or index scan is likely. You might want to look at full text indexing if your data is large.
http://craftydba.com/?p=1629 | unknown | |
d2250 | train | There is no benefit to putting an interface on a DataContract, as they simply represent data and no logic. You typically put those DataContracts inside the same assembly with the ServiceContracts, or in a separate assembly altogether. This will prevent exposing the business logic to your clients. | unknown | |
d2251 | train | I just removed the concat because it has performance issues according to MDN but obviously the real problem is the fetch and there's not much we can do about that unless you can get your external api to dump a bigger batch.
You could initiate each function from a webapp, have it return via withSuccessHandler, then start the next script in the series and daisy chain your way through all of the subfunctions until you're done. Each sub function will take its 3 minutes or so, but you only have to worry about keeping each sub function under 6 minutes that way.
function data1()
{
var addresses = SpreadsheetApp.getActiveSpreadsheet().getSheetByName("Import");
var baseUrl = 'https://myapiurl';
var address = addresses.getRange(2, 1, 500).getValues();
for(var i=0;i<address.length;i++){
var responseAPI = UrlFetchApp.fetch(baseUrl + address[i][0]);
var json = JSON.parse(responseAPI.getContentText());
var data = [[json.result]];
var dataRange = addresses.getRange(i+2, 2).setValue(data);
}
} | unknown | |
d2252 | train | What about 2 blocks of code for each case?
Student student = studentRepository.findById(dt.getStudentId());
if(student == null){
Student newStudent = new Student();
//add data
newStudent.save();
} else {
student.setFirstName(dt.getFirstName());
student.setLastName(dt.getLastName ());
student.setPhone(dt.getPhone());
student.setAddress(dt.getAddress());
student.update();
}
A: You can follow what @fladdimir mentioned. Or, here is my suggestion [I will not provide any code, leaving that up to you]:
*
*filter objects which are old ones [they contain an id, as I can see from your code] and new ones into two separate lists
*List all ids
*use id <--> object as keyValue in a map
*Query all the objects with idList you got using IN clause
*Now, set value as you are doing in your 2nd code block from the map, add them in a list
*merge this list with new objectList
*save them altogether with saveAll(your_merged_list)
I am not saying this is the most optimized way, but at least you can reduce the load on your db. | unknown | |
d2253 | train | The Promise aggregation function is called Promise.all() not promises.all(). | unknown | |
d2254 | train | You need to apply style to the div, not to the SnackbarContent | unknown | |
d2255 | train | mysqldump has an option to turn on or off using multi-value inserts. You can do either of the following according to which you prefer:
Separate Insert statements per value:
mysqldump -t -h192.168.212.128 -P3306 --default-character-set=utf8 --skip-extended-insert -uroot -proot database_name table_name > test.sql
Multi-value insert statements:
mysqldump -t -h192.168.212.128 -P3306 --default-character-set=utf8 --extended-insert -uroot -proot database_name table_name > test.sql
So what you can do is dump the schema first with the following:
mysqldump -t -h192.168.212.128 -P3306 --default-character-set=utf8 --no-data -uroot -proot database_name > dbschema.sql
Then dump the data as individual insert statements by themselves:
mysqldump -t -h192.168.212.128 -P3306 --default-character-set=utf8 --skip-extended-insert --no-create-info -uroot -proot database_name table_name > test.sql
You can then split the INSERT file into as many pieces as possible. If you're on UNIX use the split command, for example.
And if you're worried about how long the import takes, you might also want to add the --disable-keys option to speed up inserts as well.
BUT my recommendation is not to worry about this so much. mysqldump should not exceed MySQL's ability to import in a single statement and it should run faster than individual inserts. As to file size, one nice thing about SQL is that it compresses beautifully. That multi-gigabyte SQL dump will turn into a nicely compact gzip or bzip or zip file.
EDIT: If you really want to adjust the number of values per insert in a multi-value insert dump, you can add the --max_allowed_packet option, e.g. --max_allowed_packet=24M. Packet size determines the size of a single data packet (e.g. an insert), so if you set it low enough it should reduce the number of values per insert. Still, I'd try it as is before you start messing with that.
A: clickhouse-client --host="localhost" --port="9000" --max_threads="1" --query="INSERT INTO database_name.table_name FORMAT Native" < clickhouse_dump.sql | unknown | |
d2256 | train | You are setting the value on this.v.offerName. The UI element is not bound to this JavaScript variable and you need to set the value of the UI input element to restrict the value. | unknown | |
d2257 | train | I resolved it by telling Leaflet to render as canvas and not as SVG:
jQuery("#print").on("click", function() {
myCapture();
});
function myCapture() {
html2canvas(document.body, {
allowTaint: true,
useCORS: true,
onrendered: function(canvas) {
document.body.appendChild(canvas);
}
});
}
var map = L.map('map', {
renderer: L.canvas()
}); | unknown | |
d2258 | train | Why not just split on \n-?
$(document).ready(function() {
$("#textarea").keyup(function() {
const entered = $('#textarea').val()
const lines = entered.split(/\n-/);
let spans = "";
lines.forEach((l,i)=>{
// remove the first -
if(i===0 && l[0]==="-") l = l.slice(1)
spans += "<span style='color:red;'>- " + l + "</span><br/>";
})
$(".results").html(spans);
});
});
.row {
background: #f8f9fa;
margin-top: 20px;
padding: 10px;
}
.col {
border: solid 1px #6c757d;
}
<script src="https://code.jquery.com/jquery-3.5.1.slim.min.js" integrity="sha384-DfXdz2htPH0lsSSs5nCTpuj/zy4C+OGpamoFVy38MVBnE+IbbVYUew+OrCXaRkfj" crossorigin="anonymous"></script>
<div class="container">
<div class="row">
<div class="col-12">
<form>
<textarea id="textarea" rows="5" cols="60" placeholder="Type something here..."></textarea>
</form>
</div>
<div class="col-12 results"></div>
</div>
</div> | unknown | |
d2259 | train | The problem is that your original data is Base64 encoded UTF16-BE. If you look at a after your first line, you'll see that it has those zero bytes that you see in the final buffer:
let a = Buffer.from("AEEAQgBDAGEAYgBj", "base64").toString("utf-8");
console.log(a.length);
// 12
console.log([...a].map(ch => ch.charCodeAt(0).toString(16).padStart(2, "0")).join(" "));
// 00 41 00 42 00 43 00 61 00 62 00 63
So the question becomes: How to read the UTF16-BE text you have in the buffer from Buffer.from("AEEAQgBDAGEAYgBj", "base64"). Node.js's Buffer doesn't support UTF16-BE directly (there is no "utf16be" encoding in its standard library), but you can get there via swap16 and then reading the buffer as UTF16-LE ("utf16le", which is in Node.js's standard library):
let a = Buffer.from("AEEAQgBDAGEAYgBj", "base64").swap16().toString("utf16le");
console.log(a.length);
// 6
console.log(a);
// ABCabc
Now a is a normal string. If you want a buffer containing its contents in UTF-8, you can use Buffer.from(a) (UTF-8 is the default encoding):
let a = Buffer.from("AEEAQgBDAGEAYgBj", "base64").swap16().toString("utf16le");
console.log(a.length);
// 6
console.log(a);
// ABCabc
let b = Buffer.from(a); // (Default is `"utf8"` but you could supply that explicitly)
console.log(b);
// <Buffer 41 42 43 61 62 63> | unknown | |
d2260 | train | The issue was a network connection. When I added sleep (or a modified solution from here: How to check internet access using bash script in linux?) at the beginning of the script, it works perfectly. | unknown | |
d2261 | train | Your CSS selectors are slightly wrong, try:
.box .todo-list > li > .tools > a
And
.box .todo-list > li > .tools > a:hover
The selector parts need to go in the same order as the elements that they select are nested in the HTML.
Check out the W3C Selectors documentation for more details.
A: The > selector means immediate descendent.
In your markup, the immediate descendant of the li is the span.tools.
Therefore, li > a does not select anything.
But, li > .tools > a does select the a element.
An excellent write-up can be found here: Child and Sibling Selectors
A: The > selector above is a child combinator selector. This means it will only select elements that are direct children of a parent. In otherwords, it only looks one level down the markup structure, no deeper.
So
.box .todo-list > li > .tools > a | unknown | |
d2262 | train | Your error means that rtree, a dependency of osmnx, cannot find spatialindex.
First, make sure that spatialindex is installed:
brew install spatialindex
The next problem is that rtree only checks in very specific locations for spatialindex but brew installs to /opt/homebrew/Cellar.
You can set the ENV variable and check whether that's the issue with:
export SPATIALINDEX_C_LIBRARY='/opt/homebrew/Cellar/spatialindex/1.9.3/lib'
python -c 'import geopandas'
and it should work.
However, depending on your spatialindex and brew version, this path might change.
A: The problem has been solved, and I want to share in case anyone later could try if encountering the same problem.
After I installed the latest version for rtree, which you can see below, it still gave me back the same problem
conda install -c ioos rtree=0.9.7
I had to install brew package for my Mac.
https://brew.sh
Then I ran these 2 lines of code that my friend Philippe found from another page
Installing Rtree from libspacialindex to use .clip() in geopandas
https://github.com/gboeing/osmnx/issues/3
brew install spatialindex
pip install rtree
The problem has been solved completely.
A: This isn't an OSMnx issue - it's an rtree issue. You might want to open an issue directly with the rtree GitHub repo. That said, you're on OS X so you may want to try to install rtree with
conda install -c ioos rtree=0.8.2
Some of my students that have Macs struggled to install rtree with pip, but succeeded with that conda command. If you're on Windows, you can follow these instructions to install geopandas and rtree.
reference | unknown | |
d2263 | train | You're trying to access a file on the server side; the server doesn't know about your disk, so use the aliases option for Tomcat:
<Context crossContext="true" docBase="here_the_path_in_disc" path="project_name/resource_name" reloadable="true"/>
A: I haven't worked with ZK for a while, but I'm pretty sure you need to give it a URL relative to your webapp root directory. So something like "/images/testimg.jpg" or a canonical URL like "https://www.google.dk/images/srpr/logo11w.png", like you would if you were writing HTML.
A: The best approach would be to use alternativedocroot on Glassfish or the aliases option on Tomcat - aliases | unknown | |
d2264 | train | When you create a table, you specify the provisioned capacity for read and write. This will limit the number of records you can read per second and number of records you can write per second. Your use-case will determine your actual needs. You can modify the provisioned capacity of a table after you have created, while it is being used and it will not affect access. This hot upgrade is a powerful feature of DynamoDB.
If you are exceeding your provisioned capacity, then DynamoDB will throttle your API calls. The aws-sdk gem will automatically retry throttled DynamoDB calls up to 10 times, using an exponential backoff strategy, sleeping between attempts.
To configure the retry limit for DynamoDB:
Aws.config[:dynamodb] = { retry_limit: 5 }
You can tell if your request is getting retried by inspecting the response object:
ddb = Aws::DynamoDB::Client.new
resp = ddb.get_item(table_name: 'aws-sdk', key: { id: '123' })
resp.context.retries
#=> 0
Also, you can enable logging:
require 'logger'
ddb = Aws::DynamoDB::Client.new(logger: Rails.logger)
ddb.get_item(table_name: 'aws-sdk', key: { id: '123' })
# sent to the rails logger
[Aws::DynamoDB::Client 200 0.008879 0 retries] get_item(table_name:"aws-sdk",key:{"id"=>{s:"123"}})
The log message contains the service client call, the HTTP status code (200), the time spent waiting on the call, the number of retries, and the operation name and params called. You can of course configure the :log_level and a :log_formatter to modify when and what things are logged. | unknown | |
d2265 | train | Whilst it's possible to do this with curl (including the login), I would recommend using a browser extension. Flashgot for Firefox is excellent; you can tell it to download all files of a certain extension, or do things like pattern matching.
http://flashgot.net/ | unknown | |
d2266 | train | You can use the following code to convert the right button click into left button click.
Here, when you click inside the div, the right button click is converted into a left button click, so that when you press the left or right button, both behave the same way.
$(document).ready(function() {
$("#rightclickDemo").bind('contextmenu', function(event) {
if (event.which == 3)
{
// prevent right click from being interpreted by the browser:
event.preventDefault();
$(this).click(); // simulate left click
}
});
$('#rightclickDemo').click(function() {
$(this).html("Left clicked!");
});
});
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<html>
<head>
<script src="http://ajax.googleapis.com/ajax/libs/jquery/1.6.2/jquery.min.js"></script>
</head>
<body>
<p>This is a web page Demo.</p>
<div id="rightclickDemo" style="width:200px; height:100px; background-color: orange">
Click me with the right or left mouse button. Both will work as left button clicked!
</div>
</body>
</html>
A: I found my answer after MUCH research. This code works exactly the way I want it to! I added jquery.min.js to my file structure and it works perfectly for standalone use!
<script src="scripts/jquery.min.js"></script>
<script type="text/javascript">
$(document).on("contextmenu", function(e){
e.preventDefault();
e.target.click();
});
</script> | unknown | |
d2267 | train | The LSTM layer and the TimeDistributed wrapper are two different ways to get the "many to many" relationship that you want.
*
*LSTM will eat the words of your sentence one by one; you can choose via "return_sequences" to output something (the state) at each step (after each word is processed) or to only output something after the last word has been eaten. So with return_sequences=True the output will be a sequence of the same length, and with return_sequences=False the output will be just one vector.
*TimeDistributed. This wrapper allows you to apply one layer (say Dense, for example) to every element of your sequence independently. That layer will have exactly the same weights for every element; it's the same layer that will be applied to each word, and it will, of course, return the sequence of words processed independently.
As you can see, the difference between the two is that the LSTM propagates the information through the sequence: it will eat one word, update its state and return it or not, then it will go on with the next word while still carrying information from the previous ones. Whereas with TimeDistributed, the words are processed in the same way on their own, as if they were in silos, and the same layer applies to every one of them.
So you don't have to use LSTM and TimeDistributed in a row; you can do whatever you want, just keep in mind what each of them does.
I hope it's clearer?
EDIT:
The time distributed, in your case, applies a dense layer to every element that was output by the LSTM.
Let's take an example:
You have a sequence of n_words words that are embedded in emb_size dimensions. So your input is a 2D tensor of shape (n_words, emb_size)
First you apply an LSTM with output dimension = lstm_output and return_sequences = True. The output will still be a sequence, so it will be a 2D tensor of shape (n_words, lstm_output).
So you have n_words vectors of length lstm_output.
Now you apply a TimeDistributed dense layer with say 3 dimensions output as parameter of the Dense. So TimeDistributed(Dense(3)).
This will apply Dense(3) n_words times, to every vector of size lstm_output in your sequence independently... they will all become vectors of length 3. Your output will still be a sequence, so a 2D tensor, now of shape (n_words, 3).
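As a rough illustration of the shapes described above, a minimal Keras sketch could look like this (the sizes n_words, emb_size and lstm_output are assumptions picked for the example, not values from the question):
from keras.models import Sequential
from keras.layers import LSTM, Dense, TimeDistributed
n_words, emb_size, lstm_output = 20, 128, 64   # assumed sizes, for illustration only
model = Sequential()
# output shape (n_words, lstm_output) because return_sequences=True
model.add(LSTM(lstm_output, return_sequences=True, input_shape=(n_words, emb_size)))
# Dense(3) is applied to each of the n_words vectors independently -> (n_words, 3)
model.add(TimeDistributed(Dense(3)))
model.summary()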
Is it clearer? :-)
A: return_sequences=True parameter:
If we want to have a sequence as the output, not just a single vector as with normal neural networks, it's necessary that we set return_sequences to True. Concretely, let's say we have an input with shape (num_seq, seq_len, num_feature). If we don't set return_sequences=True, our output will have the shape (num_seq, num_feature), but if we do, we will obtain an output with shape (num_seq, seq_len, num_feature).
TimeDistributed wrapper layer:
Since we set return_sequences=True in the LSTM layers, the output is now a three-dimensional tensor. If we feed that into the Dense layer, it will raise an error because the Dense layer only accepts two-dimensional input. In order to feed in a three-dimensional tensor, we need to use a wrapper layer called TimeDistributed. This layer will help us maintain the output's shape, so that we can achieve a sequence as output in the end. | unknown | |
d2268 | train | If you're going to be using a Naive Bayes classifier, you don't really need a whole ton of NL processing. All you'll need is an algorithm to stem the words in the tweets and if you want, remove stop words.
Stemming algorithms abound and aren't difficult to code. Removing stop words is just a matter of searching a hash map or something similar. I don't see a justification to switch your development platform to accommodate the NLTK, although it is a very nice tool.
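For illustration only, here is a minimal Python sketch of that stop-word filtering idea (the stop-word list is an assumption, and the same approach ports to any platform):
STOP_WORDS = {"the", "a", "an", "and", "or", "to", "is"}  # assumed list, extend as needed
def remove_stop_words(tweet):
    # keep only the tokens that are not in the stop-word set
    return [w for w in tweet.lower().split() if w not in STOP_WORDS]
print(remove_stop_words("The weather is great and the food is amazing"))
# ['weather', 'great', 'food', 'amazing']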
A: I did a very similar project a while ago - only classifying RSS news items instead of twitter - also using PHP for the front-end and WEKA for the back-end. I used PHP/Java Bridge which was relatively simple to use - a couple of lines added to your Java (WEKA) code and it allows your PHP to call its methods. Here's an example of the PHP-side code from their website:
<?php
require_once("http://localhost:8087/JavaBridge/java/Java.inc");
$world = new java("HelloWorld");
echo $world->hello(array("from PHP"));
?>
Then (as someone has already mentioned), you just need to filter out the stop words. Keeping a txt file for this is pretty handy for adding new words (they tend to pile up when you start filtering out irrelevant words and account for typos).
The naive-bayes model has strong independent-feature assumptions, i.e. it doesn't account for words that are commonly paired (such as an idiom or phrase) - just taking each word as an independent occurrence. However, it can outperform some of the more complex methods (such as word-stemming, IIRC) and should be perfect for a college class without making it needlessly complex.
A: You can also use the uClassify API to do something similar to Naive Bayes. You basically train a classifier as you would with any algorithm (except here you're doing it via the web interface or by sending xml documents to the API). Then whenever you get a new tweet (or batch of tweets), you call the API to have it classify them. It's fast and you don't have to worry about tuning it. Of course, that means you lose the flexibility you get by controlling the classifier yourself, but that also means less work for you if that in itself is not the goal of the class project.
A: Try open calais - http://viewer.opencalais.com/ . It has api, PHP classes and many more. Also, LingPipe for this task - http://alias-i.com/lingpipe/index.html
A: you can check this library https://github.com/Dachande663/PHP-Classifier very straight forward
A: you can also use thrift or gearman to deal with nltk | unknown | |
d2269 | train | After hours of code digging I managed to find the answer. I think it is worth sharing as it may help you if you have a similar issue.
In my case I had some unused libraries imported. One of them was a class that was instantiated when Robot Framework imported the library file. This object had some logger settings that messed up the defaults, that is why I got no result in the robot log.
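For illustration only, a hypothetical Python sketch of the kind of import-time logger setup that can cause this (the class and settings are made up, not the actual library involved):
import logging
class SomeHelper:  # hypothetical class that the library instantiates at import time
    def __init__(self):
        # global logging tweaks like these override the defaults Robot Framework relies on
        logging.disable(logging.CRITICAL)               # silences everything below CRITICAL
        logging.getLogger("some.library").propagate = False
helper = SomeHelper()  # module-level instance, so this runs as soon as the file is imported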
Without it I got the expected results and automatic propagation worked fine. | unknown | |
d2270 | train | The java2py3 option of AgileUML should be able to translate this. Correctly-formatted Python3 is produced.
https://github.com/eclipse/agileuml/blob/master/translators.zip | unknown | |
d2271 | train | The key chrome_options was deprecated sometime back. Instead you have to use options and your effective code block will be:
from selenium import webdriver
driver_path = 'C:/python/Python38/chromedriver.exe'
brave_path = 'C:/Program Files (x86)/BraveSoftware/Brave-Browser/Application/brave.exe'
option = webdriver.ChromeOptions()
option.binary_location = brave_path
browser = webdriver.Chrome(executable_path=driver_path, options=option)
browser.get("https://www.google.es")
References
You can find a couple of relevant detailed discussion in:
*
*DeprecationWarning: use options instead of chrome_options error using ChromeDriver and Chrome through Selenium on Windows 10 system
*How to initiate Brave browser using Selenium and Python on Windows | unknown | |
d2272 | train | So, I was able to solve the problem, and the problem was the trailing /. If you put / at the end of the URL, the CSS doesn't load. But if you don't put / at the end, the CSS gets loaded.
A: Replace your if condition, it doesn't make sense to perform actions on a nil webview.
if (webView != nil) {
} else {
self.loadView()
self.webView.navigationDelegate = self
}
if (self.finishedUrl != myUrl.absoluteString) {
webView.load(request as URLRequest)
}
should be:
if (webView != nil) {
self.loadView()
self.webView.navigationDelegate = self
}
if (self.finishedUrl != myUrl.absoluteString) {
webView.load(request as URLRequest) // also make sure this is being executed.
}
A: I had the same problem. My styles.css file was being referenced in the .HTML files just fine, and it worked great when loading directly in a browser.
The problem that I had was that my styles.css wasn't included in any of my Targets. After I checked the box, it loaded up great.
A: I had the same problem but nothing here helped me. The solution was to remove the folder path in the link, even though the files were inside those folders in my bundle.
Instead of having this:
<link rel="stylesheet" href="css/style.css" />
Removed the folder part and left this:
<link rel="stylesheet" href="style.css" />
And it worked. The same happened with the .js file
A: I had the same issue and resolved it by adding the resources as "Create folder instance" instead of "Create groups".
Create groups indexes the file but strips it of its path. Calling the file is done by calling the filename (unique filename is important in the group).
Create folder instance maintains the folder structure
Cfr. Create groups vs Create folder reference in Xcode | unknown | |
d2273 | train | Disregarding the simplicity and elegance of being able to write event.subscribe(this._event$);, this is not a good idea. What I noticed when doing this is that whenever your _event$ completes, the event will also complete, which I don't think is the behavior you want. You're better off emitting the value manually like this:
event.subscribe(e => this._event$.next(e));
And even though you said you omitted the subscription management details, I feel it's important to mention again that you need to manage this subscription in order to avoid memory leaks. | unknown | |
d2274 | train | The problem is a comma and an alias. This query works:
#standardSQL
WITH `projectID.com_dev_sambhav_ANDROID.app_events_2017` AS(
SELECT ARRAY< STRUCT<date STRING, name STRING, params ARRAY< STRUCT<key STRING, value STRUCT<string_value STRING> > > > > [STRUCT('20170814' AS date, 'notification_received' AS name, [STRUCT('notification_title' AS key, STRUCT('Amazing Offers two' AS string_value) AS value ),
STRUCT('firebase_screen_class' AS key, STRUCT('RetailerHomeActivity' AS string_value) AS value),
STRUCT('notification_id' AS key, STRUCT('12345' AS string_value) AS value),
STRUCT('firebase_screen_id' AS key, STRUCT('app' AS string_value) AS value),
STRUCT('item_id' AS key, STRUCT('DEMO-02' AS string_value) AS value),
STRUCT('firebase_screen' AS key, STRUCT('My Order' AS string_value) AS value)] AS params)] event_dim
)
SELECT
event.name,
(SELECT param.value.string_value FROM UNNEST(event.params) AS param WHERE param.key="notification_title") as notification_title,
(SELECT param.value.string_value FROM UNNEST(event.params) AS param WHERE param.key="item_id") as item_id
FROM `projectID.com_dev_sambhav_ANDROID.app_events_20*`, UNNEST(event_dim) as event
WHERE event.name = "notification_received"
If you UNNEST the field event_dim and call it event, then you should use this alias as reference in your query.
As a complement, here's another way of solving your problem as well (it's just another possibility so you have more techniques in your belt when working with BigQuery):
#standardSQL
SELECT
(SELECT date FROM UNNEST(event_dim)) date,
(SELECT params.value.string_value FROM UNNEST(event_dim) event, UNNEST(event.params) params WHERE event.name = 'notification_received' AND params.key = 'notification_title') AS notification_title,
(SELECT params.value.string_value FROM UNNEST(event_dim) event, UNNEST(event.params) params WHERE event.name = 'notification_received' AND params.key = 'item_id') AS item_id
FROM `projectID.com_dev_sambhav_ANDROID.app_events_2017`
WHERE EXISTS(SELECT 1 FROM UNNEST(event_dim) WHERE name = 'notification_received')
When processing up to terabytes, you may find this query still performs quite well. | unknown | |
d2275 | train | Not a problem.
I am going to write the response here because it may be quite long.
Unity has the default ISocialPlatform set to Apple. By applying the "PlayGamesClientConfiguration" you change the default ISocialPlatform to Google+.
My comment was talking about your Awake() function. I recommended that you put it in Start() as follows:
void Start()
{
//added code
PlayGamesClientConfiguration config = new PlayGamesClientConfiguration.Builder ().Build ();
PlayGamesPlatform.InitializeInstance(config);
GooglePlayGames.PlayGamesPlatform.DebugLogEnabled = false;
//GooglePlayGames.PlayGamesPlatform.Activate (); //this is wrong
//WHAT YOU ARE MISSING
PlayGamesPlatform.Activate();
//END OF WHAT YOU ARE MISSING
Advertisement.Initialize ("CORRECT_NUMBER", true);
if (PlayerPrefs.HasKey("record"))
{
record.text = "Record actual: " + PlayerPrefs.GetInt("record");
Analytics.CustomEvent ("Start Play", new Dictionary<string, object>{ { "Record", PlayerPrefs.GetInt("record")} });
//you should do this once the user is authenticated and logged in
if (Social.localUser.authenticated)
{
Social.ReportScore (PlayerPrefs.GetInt ("record"), "CORRECT_CODE", (bool success) => { });
}
} else {
record.text = "Consigue un record!!";
}
if (Social.localUser.authenticated)
{
signIn.SetActive (false);
}
test.SetActive (false);
}
Then, you can create a button in the inspector inside your scene and call this:
public void SignIn()
{
Social.localUser.Authenticate(ProcessAuthentication);
}
The CallBackFunction is a void function that takes in a bool. (You were doing this correctly) I like making my code readable and easy to understand.
void ProcessAuthentication (bool success)
{
if (true == success)
{
Debug.Log ("AUTHENTICATED!");
if (Social.localUser.authenticated)
{
Social.ReportScore (PlayerPrefs.GetInt ("record"), "CORRECT_CODE", (bool reported) => { });
}
else
{
record.text = "Consigue un record!!";
}
if (Social.localUser.authenticated)
{
signIn.SetActive (false);
//EDIT: you need to load the achievements to use them
Social.LoadAchievements (ProcessLoadedAchievements);
Social.LoadScores(leaderboardID, CALLBACK); // Callback has to be an Action<IScore[]> callback
}
test.SetActive (false);
}
else
{
Debug.Log("Failed to authenticate");
signIn.SetActive (true);
}
}
void ProcessLoadedAchievements (IAchievement[] achievements) {
if (achievements.Length == 0)
{
Debug.Log ("Error: no achievements found");
}
else
{
Debug.Log ("Got " + achievements.Length + " achievements");
}
}
Let me know.
(I can explain it to you in Spanish too if you want)
Saludos!
A: I followed this tutorial: google tutorial in GitHub (README), and the tutorial doesn't say I must implement that method; in fact, when I read it, I understood these methods are part of the engine.
Here is my new code:
using UnityEngine;
using UnityEngine.SceneManagement;
using UnityEngine.UI;
using UnityEngine.Advertisements;
using System.Collections;
using System.Collections.Generic;
using GooglePlayGames;
using UnityEngine.SocialPlatforms;
using GooglePlayGames.BasicApi;
using UnityEngine.Analytics;
public class Menu : MonoBehaviour {
public Text record;
public GameObject test;
/**
Indicador del jugador de que no ha iniciado sesión, y no podrá acceder al ranking ni a los logros.
*/
public GameObject signIn;
void Start()
{
PlayGamesClientConfiguration config = new PlayGamesClientConfiguration.Builder ().Build ();
PlayGamesPlatform.InitializeInstance(config);
GooglePlayGames.PlayGamesPlatform.DebugLogEnabled = true;
PlayGamesPlatform.Activate();
Advertisement.Initialize ("NUMBER", true);
test.SetActive (false);
//EDIT:
SignIn ();
if (PlayerPrefs.HasKey("record"))
{
record.text = "Record actual: " + PlayerPrefs.GetInt("record");
Analytics.CustomEvent ("Start Play", new Dictionary<string, object>{ { "Record", PlayerPrefs.GetInt("record")} });
} else {
record.text = "Consigue un record!!";
}
}
public void OnPlay()
{
ShowAd ();
SceneManager.LoadScene("Jugar");
Time.timeScale = 0;
}
public void OnArchievements()
{
if (Social.Active.localUser.authenticated) {
Social.Active.ShowAchievementsUI ();
} else {
Social.Active.localUser.Authenticate ((bool success) => {
if (success) {
Social.Active.ShowAchievementsUI ();
}
});
}
}
public void OnMatching()
{
if (Social.Active.localUser.authenticated) {
Social.Active.ShowLeaderboardUI ();
} else {
Social.Active.localUser.Authenticate ((bool success) => {
if (success) {
Social.Active.ShowLeaderboardUI ();
}
});
}
}
public void OnExit()
{
Analytics.CustomEvent ("Stop Play", new Dictionary<string, object>{ { "Record", PlayerPrefs.GetInt("record")} });
Application.Quit();
}
public void SignIn()
{
Social.Active.localUser.Authenticate (ProcessAuthentication);
}
public void Later()
{
signIn.SetActive (false);
}
public void ShowAd()
{
if (Advertisement.IsReady())
{
Advertisement.Show();
}
}
//EDIT:
void ProcessAuthentication (bool success)
{
if (true == success)
{
Debug.Log ("AUTHENTICATED!");
if (Social.Active.localUser.authenticated)
{
Social.Active.LoadAchievements (ProcessLoadedAchievements);
Social.Active.LoadScores (Fireball.GPGSIds.leaderboard_ranking, ProcessLoadedScores );
if (PlayerPrefs.HasKey ("record"))
{
Social.Active.ReportScore (PlayerPrefs.GetInt ("record"),Fireball.GPGSIds.leaderboard_ranking, (bool report) => { });
}
signIn.SetActive (false);
test.SetActive (true);
}
} else {
Debug.Log("Failed to authenticate");
signIn.SetActive (true);
}
}
void ProcessLoadedAchievements (IAchievement[] achievements)
{
if (achievements.Length == 0)
{
Debug.Log ("Error: no achievements found");
}
else
{
Debug.Log ("Got " + achievements.Length + " achievements");
}
}
void ProcessLoadedScores (IScore[] scores)
{
if (scores.Length == 0)
{
Debug.Log ("Error: no scores found");
}
else
{
Debug.Log ("Got " + scores.Length + " scores");
}
}
}
EDIT:
Again, in the editor I can't authenticate (on my device it fails too); I really don't understand the cause... Here is the debug log (I activated it, it may help):
DEBUG: Activating PlayGamesPlatform.
DEBUG: PlayGamesPlatform activated: GooglePlayGames.PlayGamesPlatform.
DEBUG: Creating platform-specific Play Games client.
DEBUG: Creating IPlayGamesClient in editor, using DummyClient.
DEBUG: Received method call on DummyClient - using stub implementation.
The last message is from when I try to authenticate; maybe you can spot the problem from the debug log.
PS: I read the documentation you sent me, but it is oriented to GameCenter (iOS); still, I now understand more about it, really thanks.
Again thanks a lot for your time. | unknown | |
d2276 | train | Maybe I am misinterpreting what you are trying to accomplish with the CASE statement, but based on my understanding you can use the WHERE clause to conditionally remove data from a table:
DELETE
FROM MyDB.MyTable
WHERE Col1 = 31
AND "Desc" = 'xxxxxx';
EDIT:
Based on your comment, you need to apply the CASE logic to each column returned in the SELECT statement that you wish to obscure.
SELECT CASE WHEN Col1 = 31 and "DESC" = 'yyyyy'
THEN NULL
ELSE ColA
END AS ColA_,
/* Repeat for each column you wish to "delete" */
FROM MyDB.MyTable; | unknown | |
d2277 | train | You can pass multiple items to the same context. A dictionary allows to add multiple key-value pairs (as long as the keys are hashable, and unique):
def list_todo_items(request):
context = {
'todo_list': Todo.objects.all(),
'count': Todo.objects.count()
}
return render(request, 'index.html', context) | unknown | |
d2278 | train | if (!testUser.authorities.contains(adminRole)) {
new SpringUserSpringRole(user: testUser, role: adminRole).save(flush: true,failOnError: true)
}
if (!testUser.authorities.contains(userRole)) {
new SpringUserSpringRole(user: testUser, role: userRole).save(flush: true,failOnError: true)
}
A: Just a suggestion, maybe you should try creating a hierarchy for your roles instead of adding two roles for a single user:
see doc. | unknown | |
d2279 | train | Try surrounding the {{ with single quotes, like:
docker inspect --format='{{.State.Health.Status}}' test-db
and executing in the if condition like:
if [[ $(docker inspect --format='{{.State.Health.Status}}' test-db) == "healthy" ]] | unknown | |
d2280 | train | Here are some ideas:
*
*Convert the supplied ID to a hash or encrypt it. This will result in meaningless strings
*Create a dictionary of words you don't want used, and when the supplied ID contains one of those words, reject it... a PHP example can be found at https://scvinodkumar.wordpress.com/2009/06/17/bad-word-filter-and-replace/
*Require that the IDs not contain sequences where two (or however many) alpha characters are next to each other
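A minimal sketch of the first two ideas (in Python purely for illustration; the hash choice and word list are assumptions):
import hashlib
BAD_WORDS = {"badword1", "badword2"}  # assumed dictionary of disallowed words
def clean_id(supplied_id):
    lowered = supplied_id.lower()
    if any(word in lowered for word in BAD_WORDS):
        raise ValueError("ID contains a disallowed word")
    # hashing turns the supplied ID into a meaningless, fixed-length string
    return hashlib.sha256(supplied_id.encode("utf-8")).hexdigest()[:12]
print(clean_id("user1234"))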
If you have any additional info/preference/requirements, let me know. | unknown | |
d2281 | train | I'd go with a simpler nested SQL statement:
Delete tbl_to_import.*
From tbl_to_import
Where "XYZ." & tbl_to_import.Account In
(Select master_table.Account From master_table);
This should be fairly fast, especially if your Account fields are indexed.
A: I think you can simplify the query; delete based on the ID, where the IDs are in the query:
DELETE * FROM tbl_to_import
WHERE tbl_to_import.ID IN (
SELECT DISTINCT [DELETED] FROM qry_find_existing_accounts
) | unknown | |
d2282 | train | The code snippet provided in the previous answer is an elegant way of doing it, but a typo or a shell incompatibility may cause it not to function properly.
Please try the code below instead. It does the same thing, but every shortcut has been written out explicitly, with debugging echo commands in the loop.
counter=1
cd /my/image/directory
for f in $(ls -1)
do
new_filename=$(printf "%04d" ${counter})
echo "renaming ${f} ..to.. ${new_filename}"
mv ${f} ${new_filename}
(( counter=${counter}+1 ))
done
The screen output will be a little chatty. If you have too many files, you might want to add | tee screen.out to the end of the line with the done command, so that you can go back and see what happened to each file in screen.out.
A: I created my own tool to do this. It also maintains file extensions, which I did not mention, but should probably be included. Here is the code:
#!/bin/sh
dir=$1
cd $dir
echo "Renaming all files in $dir."
COUNTER=1
for i in `ls -1`
do
extension=${i##*.}
mv "$i" "$COUNTER.$extension"
echo "$i ==> $COUNTER.$extension"
COUNTER=$(expr $COUNTER + 1 )
done
It does not (at the time of writing) include the leading zeroes, but it gets the job done.
A: As long as you don't care which file is renamed to what, it's easy :)
counter=1
for f in *; do
mv "$f" "$( printf "%04d" $((counter++)) )"
done
A: Trying to rename all files with suffix .bash to suffix .sh in a folder is easily done with
rename .bash .sh *.bash | unknown | |
d2283 | train | Try this:
function list_all_files_inside_one_folder_without_subfolders(){
var sh = SpreadsheetApp.getActiveSheet();
var folder = DriveApp.getFolderById('1HPv9-umg0XQ8Fa9UV8lDr6O2Y4kAIAJe');
var list = [];
list.push(['Name','ID','Size']);
var files = folder.getFiles();
while (files.hasNext()){
file = files.next();
list.push([file.getName(),file.getId(),file.getSize()]);
}
sh.getRange(1,1,list.length,list[0].length).setValues(list);
} | unknown | |
d2284 | train | Check the library mentioned below to show the date picker with different colour themes.
Add to your styles.xml
<style name="MyDatePickerDialogTheme" parent="android:Theme.Material.Light.Dialog">
<item name="android:datePickerStyle">@style/MyDatePickerStyle</item>
<item name="android:colorAccent">@color/beautiful_color</item>
</style>
<style name="MyDatePickerStyle" parent="@android:style/Widget.Material.Light.DatePicker">
<item name="android:headerBackground">@color/beautiful_color</item>
<item name="android:datePickerMode">calendar</item>
</style>
Moreover, if you want to change the entire theme of the date picker, use the following dependency
dependencies {
compile 'com.wdullaer:materialdatetimepicker:3.6.3'
}
You can use app:mcv_dateTextAppearance (or the Java setter) to do this. I don't have the full explanation, but for now, you can copy the default implementation and modify it as you need. Essentially you need to supply an android:textColor that is a color state list with your desired colors.
Add a style:
<style name="TextAppearance.MyCustomDay" parent="android:TextAppearance.DeviceDefault.Small">
<item name="android:textSize">12sp</item>
<item name="android:textColor">@color/my_custom_day_color</item>
</style>
and create color/my_custom_day_color.xml
<?xml version="1.0" encoding="utf-8"?>
<selector xmlns:android="http://schemas.android.com/apk/res/android"
android:enterFadeDuration="@android:integer/config_shortAnimTime"
android:exitFadeDuration="@android:integer/config_shortAnimTime">
<item android:state_checked="true" android:color="SELECTION_COLOR" />
<item android:state_pressed="true" android:color="SELECTION_COLOR" />
<item android:state_enabled="false" android:color="#808080" />
<item android:color="@android:color/black" />
</selector>
then set app:mcv_dateTextAppearance="@style/TextAppearance.MyCustomDay" | unknown | |
d2285 | train | You can bind the Enter key to a function with .bind('<Return>', function). | unknown | |
d2286 | train | from: https://groups.google.com/d/msg/nightwatchjs/n-B4HnnzYg8/rmaipXiTsuwJ
Replying to my own post before - please don't confuse the username and
access_key vars as 'basic auth' ones. They are selenium based
authenticators which can optionally be used for authenticating against
cloud solutions. Best solution for basic auth is the good old URL way.
Try: browser.url('https://' + userName + ':' + password + '@' + yourUrl)
A: You can do this in globals.js. It will preserve credentials and skip basic auth prompt.
beforeEach: function (browser, cb) {
if (browser.globals.basic_auth_url) {
browser.url(browser.globals.basic_auth_url, cb);
} else {
cb();
}
},
basic_auth_url - add full URL with credentials.
Do not add credentials to launch_url | unknown | |
d2287 | train | If you don't implement willContinueUserActivityWithType or if it returns false, it means that iOS should handle the activity, and in this case it can show a UIAlertController. So to get rid of this warning, return true for your activity in this delegate call:
func application(application: UIApplication,
willContinueUserActivityWithType userActivityType: String) -> Bool {
return true
} | unknown | |
d2288 | train | In BookStore class, you are calling
Collection<Book> books = getCollectionOfItems();
which returns a collection of Item. Note that a Book can be cast to an Item but not the other way round. So you need to change the above to:
Collection<Item> books = getCollectionOfItems();
If you then want to display all books, iterate over all items and check if the current item is a book before displaying it,
public void displayAllBooksWrittenByAuthorsOverThisAge(int ageInYears) {
    Iterator<Item> it = getCollectionOfItems().iterator();
    while (it.hasNext()) {
        Item item = it.next();
        if (item instanceof Book) { // only Books qualify; apply the author-age check here as well
            displayBook((Book) item);
}
}
} | unknown | |
d2289 | train | I was about to advise you to use an Intent to share data between the two activities, when I noticed that your "SpinnerActivity" is not really an Activity, since it does not extend an Android Activity class (AppCompatActivity or similar).
Your SpinnerActivity is a listener. You can use it to implement whatever should happen when a selection is made on your Spinner view; that code goes inside the overridden onItemSelected method.
If you would rather not use that approach, you can handle the selection directly in your MainActivity by setting the listener on the Spinner view, like this:
public class MainActivity extends AppCompatActivity {
SpinnerActivity spinnerActivity = new SpinnerActivity();
Spinner spinnerProvince;
String selectedSpinnerProvince = spinnerActivity.inSpinnerSelectedProvince;
    // String selectedSpinnerProvince = SpinnerActivity.inSpinnerSelectedProvince;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
spinnerProvince = findViewById(R.id.spinnerProvince);
populateSpinnerProvinces();
//spinnerProvince.setOnItemSelectedListener(spinnerActivity);
spinnerProvince.setOnItemSelectedListener(new android.widget.AdapterView.OnItemSelectedListener() {
    @Override
    public void onItemSelected(AdapterView<?> parent, View view, int position, long id) {
        if (parent.getId() == R.id.spinnerProvince) {
            // store the selection in the field declared above
            selectedSpinnerProvince = parent.getItemAtPosition(position).toString();
            Toast.makeText(parent.getContext(), parent.getItemAtPosition(position).toString(), Toast.LENGTH_SHORT).show();
        } else { /* other spinners: handle them here */ }
}
@Override
public void onNothingSelected(AdapterView<?> parent) {
}
});
}
public void populateSpinnerProvinces() {
ArrayAdapter<String> provincesAdapter = new ArrayAdapter<>(this,
android.R.layout.simple_spinner_item,
getResources().getStringArray(R.array.province));
provincesAdapter.setDropDownViewResource(android.R.layout.simple_spinner_dropdown_item);
spinnerProvince.setAdapter(provincesAdapter);
}
} | unknown | |
d2290 | train | Using this may help someone: [[UIApplication sharedApplication].keyWindow.rootViewController presentViewController:picker animated:NO completion:nil];
A: I'm not sure if you have solved this issue. The error message means the view controller you use to present another modal view controller is not visible on the window. This can happen e.g. like this:
[VC1 presentModalViewController:VC2];
// Error here, since VC1 is no longer visible on the window
[VC1 presentModalViewController:VC3];
If your issue is like above, you can fix it like:
if (self.modalViewController != nil) {
[self.modalViewController presentModalViewController:VC3 animated:YES];
} else {
[self.tabBarController presentModalViewController:VC3 animated:YES];
}
If that doesn't fix your issue, maybe you can try to present using self.tabBarController instead of self. Again, just a suggestion; I'm not sure if it works though.
A: Since modalViewController and presentModalViewController are deprecated, the following is what works for me:
UIViewController *presentingVC = [[UIApplication sharedApplication] keyWindow].rootViewController;
if (presentingVC.presentedViewController) {
[presentingVC.presentedViewController presentViewController:VC3 animated:YES completion:nil];
} else {
[presentingVC presentViewController:VC3 animated:YES completion:nil];
}
A: You can follow this pattern
[VC1 presentModalViewController:VC2];
// then, from VC2:
[VC2 presentModalViewController:VC3]; | unknown | 
d2291 | train | Since you have two kinds of cells in the table view, you have to set the height of both cells programmatically inside heightForRowAtIndexPath.
Currently you have only one cell height, and I think it defaults to 44.0.
A: Maybe you didn't set the image view's constraints properly.
Set the top, bottom, left and right constraints of the image view against the table view cell's content view, and give the image view an aspect-ratio constraint.
Then return UITableViewAutomaticDimension from the heightForRowAtIndexPath and estimatedHeightForRowAtIndexPath delegate functions.
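A minimal sketch of those two delegate methods (Swift, assuming a UITableViewController subclass; on newer SDKs the constant is spelled UITableView.automaticDimension, and the estimate value is just a guess):
override func tableView(_ tableView: UITableView, heightForRowAt indexPath: IndexPath) -> CGFloat {
    return UITableViewAutomaticDimension
}

override func tableView(_ tableView: UITableView, estimatedHeightForRowAt indexPath: IndexPath) -> CGFloat {
    return 100 // any reasonable estimate keeps scrolling smooth
}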
This will work !! | unknown | |
d2292 | train | I believe you can use the sys.dm_exec_query_stats dynamic management view. There are two columns in this view called execution_count, and total_worker_time that will help you.
execution_count gives the total number of times the stored procedure in question was executed since the last time it was recompiled.
total_worker_time gives the total CPU time, reported in microseconds, that was spent executing this stored procedure since the last time it was recompiled.
Here is an MSDN link:
http://msdn.microsoft.com/en-us/library/ms189741.aspx
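A sketch of a query over that view (note that dm_exec_query_stats is per statement, so a procedure with several statements shows up once per statement; sys.dm_exec_procedure_stats, where available, aggregates per procedure):
select object_name(st.objectid, st.dbid) as procedure_name
     , qs.execution_count
     , qs.total_worker_time          -- CPU time in microseconds
from sys.dm_exec_query_stats qs
cross apply sys.dm_exec_sql_text(qs.sql_handle) st
where st.objectid is not null        -- keep only statements that belong to an object
order by qs.execution_count desc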
A: You can use dm_exec_cached_plans to look for the stored procedures that have been compiled into query plans. The function dm_exec_query_plan can be used to retrieve the object id for a plan, which in turn can be translated into the procedure's name:
select object_name(qp.objectid)
, cp.usecounts
from sys.dm_exec_cached_plans cp
cross apply
sys.dm_exec_query_plan(cp.plan_handle) qp
where cp.objtype = 'Proc'
order by
cp.usecounts desc
A: I think you want to check SQL Server Profiler for this.
You can check the details in MSDN and other places as well.
But before using it on a production server, you need to keep in mind that:
Profiler adds significant overhead to a production server. So run it first when your site is getting fewer hits, then go ahead.
A: This is what the SQL Server Profiler is for. With it you can keep track of query run count, execution time, etc. | unknown | |
d2293 | train | import re
import pandas as pd

# `text` holds the raw contract text to parse (taken from the question)
data = []
df = pd.DataFrame()
regex_contract_number =r"(?:CONTRACT NUMBER\s+(?P<contract_number>\S+?)\s)"
regex_location = r"(?:LOCATION\s+(?P<location>\S+))"
regex_contract_items = r"(?:(?P<contract_items>\d+)\sCONTRACT ITEMS)"
regex_federal_aid =r"(?:FEDERAL AID\s+(?P<federal_aid>\S+?)\s)"
regex_contract_code =r"(?:CONTRACT CODE\s+\'(?P<contract_code>\S+?)\s)"
regexes = [regex_contract_number,regex_location,regex_contract_items,regex_federal_aid,regex_contract_code]
for regex in regexes:
for match in re.finditer(regex, text):
data.append(match.groupdict())
df = pd.concat([df, pd.DataFrame(data)], axis=1)
data = []
df | unknown | |
d2294 | train | As the prefix is set in Nginx, the web server that hosts the Django app has no way of knowing the URL prefix. As orzel said, if you used apache+mod_wsgi or even nginx+gunicorn/uwsgi (with some additional configuration), you could use the WSGIScriptAlias value, which is automatically read by Django.
When I need to use a URL prefix, I generally put it myself in my root urls.py, where I have only one line, prefixed by the prefix and including another urls.py:
(r'^myapp/', include('myapp.urls')),
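In context, the root urls.py would look something like this (a sketch; the import style depends on your Django version):
from django.conf.urls import include, url

urlpatterns = [
    url(r'^myapp/', include('myapp.urls')),
]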
But I guess this has the same drawback as setting a prefix in settings.py: you have redundant configuration in nginx and Django.
You need to do something in the server that hosts your Django app at :12345. You could set the prefix there and pass it to Django using WSGIScriptAlias or its equivalent outside mod_wsgi. I cannot give more information as I don't know how your Django application is run. Also, maybe you should consider serving your Django app directly with uWSGI or gunicorn.
To pass the prefix to Django from the webserver, you can use this:
proxy_set_header SCRIPT_NAME /myapp;
More information here
A: Here is part of my config for nginx which admittedly doesn't set FORCE_SCRIPT_NAME, but then, I'm not using a subdirectory. Maybe it will be useful for setting options related to USE_X_FORWARDED_HOST in nginx rather than Django.
upstream app_server_djangoapp {
server localhost:8001 fail_timeout=0;
}
server {
listen xxx.xxx.xx.xx:80;
server_name mydomain.com www.mydomain.com;
if ($host = mydomain.com) {
rewrite ^/(.*)$ http://www.mydomain.com/$1 permanent;
}
...
location / {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
if (!-f $request_filename) {
proxy_pass http://app_server_djangoapp;
break;
}
}
...
}
A: You'll need to update your setting:
USE_X_FORWARDED_HOST = True
FORCE_SCRIPT_NAME = "/myapp"
And update your MEDIA_URL and STATIC_URL accordingly.
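For example, something along these lines in settings.py (a sketch; adjust the paths to your project):
STATIC_URL = FORCE_SCRIPT_NAME + "/static/"
MEDIA_URL = FORCE_SCRIPT_NAME + "/media/"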
I haven't had the experience of deploying under nginx, but under apache, it works fine.
refer to: https://docs.djangoproject.com/en/dev/ref/settings/#use-x-forwarded-host | unknown | |
d2295 | train | Try this:
#include <string>
#include <iostream>
int main() {
std::string digits;
bool error = false;
do {
error = false;
std::cout << "Type 3 digits. (0 to 9)\n";
std::cin >> digits;
if (digits.size() != 3) {
std::cout << "\nError, you must type 3 digits.\n\n";
error = true;
}
else {
for (int i = 0; i < 3; i++) {
if (!isdigit(digits[i])) {
std::cout << "\nError, you must type only digits. (0 to 9)\n\n";
error = true;
break;
}
}
}
} while (error == true);
}
This uses the function isdigit() to check if the current char is a digit.
The reason your original code wasn't working is that the char '0' (the digit zero in this example) actually has a character code different from 0. Quoting from https://www.tutorialspoint.com/ascii-nul-ascii-0-0-and-numeric-literal-0:
When programmer used '0' (character 0) it is treated as 0x30. This is a hexadecimal number. The decimal equivalent is 48.
Table of ascii chars:
https://www.techonthenet.com/ascii/chart.php
A: if (digits[i] < 0 || digits[i] > 9) is checking if digits[i] has a character value less than 0 or greater than 9. The character '0' is character 48 in most encodings Today (most notably, ASCII), so that check will not work. You should be checking if (digits[i] < '0' || digits[i] > '9') - and there is already a function that does this. std::isdigit.
You could also use the standard algorithm std::all_of to check if all chars in the string are in fact digits.
Example:
#include <algorithm> // std::all_of
#include <cctype> // std::isdigit
#include <iostream>
#include <string>
int main() {
std::string digits;
// A char to unsigned char casting lambda (to make isdigit safe)
auto isdigit_lambda = [](char ch) {
return std::isdigit(static_cast<unsigned char>(ch)) != 0; };
while(true) {
std::cout << "Type 3 digits. (0 to 9)\n";
if(not (std::cin >> digits)) {
// input failure - deal with that somehow
}
// if it has the correct size and all are digits, break out of the loop
if(digits.size() == 3 &&
std::all_of(digits.begin(), digits.end(), isdigit_lambda)) break;
std::cout << "\nError, you must type 3 digits.\n\n";
}
}
A: I tried my best to keep the code minimal:
#include <string>
#include <iostream>
#include <algorithm> // std::any_of
#include <cctype>    // std::isdigit
int main() {
std::string digits;
bool error = true;
while (error){
std::cout << "Type 3 digits. (0 to 9)\n";
std::cin >> digits;
if (digits.size() not_eq 3 and (error = true))
std::cout << "\nError, you must type 3 digits.\n";
else if(std::any_of(digits.begin(), digits.end(), [](const auto& digit){ return not std::isdigit(digit);}) and (error = true))
std::cout << "\nError, you must type only digits. (0 to 9)\n\n";
else error = false;
}
} | unknown | |
d2296 | train | I couldn't find a way to make the NavBar itself visible for editing. But a workaround is to double-click the NavItem component and type the text you want; that changes the NavItem.
A: Click on "Show Toolbar" then "Open", this will show the items. | unknown | |
d2297 | train | Change
if (lblSupplierEmailAddress.Content.ToString() == "")
To
if (String.IsNullOrEmpty((string) lblSupplierEmailAddress.Content))
When lblSupplierEmailAddress.Content is actually null, you of course cannot call ToString on it, as that would cause a NullReferenceException. However, the static IsNullOrEmpty method takes this into account and returns true if Content is null.
A: In C#6.0 This will do
if(lblSupplierEmailAddress?.Content?.ToString() == "")
Else if the lblSupplierEmailAddress always exists, you could simply do:
if(lblSupplierEmailAddress.Content?.ToString() == "")
The equivalent code would be:
if(lblSupplierEmailAddress.Content != null)
if (lblSupplierEmailAddress.Content.ToString() == ""){
//do something
}
A: if( null != lblSupplierEmailAddress.Content
&& string.IsNullOrEmpty(lblSupplierEmailAddress.Content.ToString()) ) | unknown | 
d2298 | train | What you want to do is already covered by the standard threading synchronization utilities, so you don't have to implement it yourself.
There's a class similar to Semaphore called CountDownLatch. When you create one and block on it via .await(), it will freeze execution of further code until you issue a .countDown() call that brings the count to zero. Indeed, declaring a CountDownLatch(1) will probably give you the effect you're looking for.
On the CountDownLatch reference page there's a nice example of how to implement this by blocking one thread's execution depending on the execution of another.
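A minimal sketch (java.util.concurrent.CountDownLatch; doBackgroundWork and proceedWithDependentWork are made-up names for illustration):
final CountDownLatch latch = new CountDownLatch(1);

new Thread(new Runnable() {
    @Override
    public void run() {
        doBackgroundWork();   // hypothetical long-running call
        latch.countDown();    // releases the waiting thread
    }
}).start();

try {
    latch.await();            // blocks here until countDown() is called
} catch (InterruptedException e) {
    Thread.currentThread().interrupt();
}
proceedWithDependentWork();   // hypothetical follow-up that needed the work above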
A: To meet my needs, I instead made the synchronize functionality into an AsyncTask, spun that off from my UI thread, and left the outgoing HTTP requests as regular functions (and as such, they block).
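A rough skeleton of that approach (hypothetical method names; the HTTP helpers are ordinary blocking functions):
private class SyncTask extends AsyncTask<Void, Void, Void> {
    @Override
    protected Void doInBackground(Void... params) {
        // each call blocks, so the outgoing requests run strictly one after another
        sendFirstApiCall();      // hypothetical blocking HTTP helper
        sendSecondApiCall();     // hypothetical blocking HTTP helper
        return null;
    }

    @Override
    protected void onPostExecute(Void result) {
        // back on the UI thread - update views here if needed
    }
}

// kicked off from the UI thread:
new SyncTask().execute();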
This means that the synchronization process happens in the background, does not affect the UI thread, and each outgoing API call happens sequentially. | unknown | |
d2299 | train | Right-click your project, select Build Path and Configure Build Path.... In the Source tab, if src/main/resources or src/test/java appear, remove them. This might be a bug with the Maven plugin, I don't know. They appear like they are there, but aren't really.
Then use Add Folder... to add the folders you need. Do this by selecting a folder (to add folders to), like src/main, clicking Create New Folder... and using the folder name resources (or as appropriate).
A: You close the "Existing Folder Selection" dialog (since you don't want an existing folder, but a new one), and you enter src/test/java in "Folder Name". Then you click Finish. You repeat this operation for every new folder you want.
Of course, you could also use your file manager to create the folders, and select them in Eclipse. | unknown | |
d2300 | train | In Laravel's Blade Templating engine, {!! !!} is used to output unescaped content, including (and not limited to) HTML tags. When combined with CKEditor, you typically get things like this:
<span class="descresize">{!! $treatment->description !!}</span>
<!-- <span class="descresize"><p>Something something long description of the associated Treatment</p></span> -->
Since the CSS properties are being assigned to <span class="descresize">, which now equates to <span class="descresize"><p>...</p></span>, the properties may or may not propagate to these nested HTML elements.
If the content of {!! $treatment->description !!} is going to be consistent (i.e. always a <p>...</p> element), you can simply modify the CSS to point at this nested element:
.descresize > p {
display: inline-block;
width: 500px;
overflow: hidden;
text-overflow: ellipsis;
white-space: nowrap;
}
Since the <p> tag only contains text, and no nested elements, this should handle the properties correctly. | unknown |