d18801
test
function linkcurl($targetURL) {
    $linkcurl = curl_init();
    curl_setopt($linkcurl, CURLOPT_COOKIEJAR, dirname(__FILE__) . "/cookie.tmpz");
    curl_setopt($linkcurl, CURLOPT_COOKIEFILE, dirname(__FILE__) . "/cookie.tmpz");
    curl_setopt($linkcurl, CURLOPT_RETURNTRANSFER, TRUE);
    curl_setopt($linkcurl, CURLOPT_CUSTOMREQUEST, 'GET');
    curl_setopt($linkcurl, CURLOPT_URL, $targetURL);
    $datax = curl_exec($linkcurl);
    if ($datax) {
        curl_close($linkcurl);
        return $datax;
    } else {
        $error = curl_error($linkcurl);
        curl_close($linkcurl); // close the handle on the error path too
        return $error;
    }
}

$prdhtml = linkcurl($product_page_url);
unknown
d18802
test
The ALL(Table) function tells Power Pivot to ignore any filters applied over the whole table. Therefore, you're telling Power Pivot to count all the rows of the factsales table regardless of the Category or Year being filtered on the pivot table. However, in your case, what you want is the sum for ALL the categories on each year. Since you want the sum of ALL the categories you must use `ALL(factsales[categories])`. In this way, you're ignoring only the filters for the categories and not the filters for the years. Based on the previous explanation the DAX formula would be:

Measure := count(factSales[salesnr]) / calculate(count(factSales[salesnr]); all(factsales[categories]))
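The effect of `ALL(factsales[categories])` — removing only the category filter while keeping the year filter — can be illustrated with an analogous computation in plain Python. The fact rows below are made up for illustration:

```python
from collections import Counter

# Hypothetical fact table: one (year, category) pair per sales row.
sales = [
    (2020, "A"), (2020, "A"), (2020, "B"),
    (2021, "A"), (2021, "B"), (2021, "B"), (2021, "B"),
]

per_cell = Counter(sales)                      # filtered by year AND category
per_year = Counter(year for year, _ in sales)  # category filter removed, year filter kept

# Share of each category within its year, mirroring
# count(...) / calculate(count(...); all(factsales[categories]))
share = {cell: n / per_year[cell[0]] for cell, n in per_cell.items()}
print(share[(2020, "A")])  # 2 of the 3 rows in 2020
```

Dividing by the per-year total (not the grand total) is exactly what scoping ALL to the categories column achieves.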
unknown
d18803
test
Use the Environ function to retrieve the environment variables that are set in Windows.

Sub TfrSec()
    Dim argh As Double
    argh = Shell(Environ("USERPROFILE") & "\Desktop\Transfer Rates File.bat", vbNormalFocus)
End Sub

See here for a list of available variables in Windows 10.
unknown
d18804
test
You have to triple check your path :)

<span class="arrow" style="width:100px;height:50px;background-image: url(http://lorempixel.com/100/50/cats/1/);display:inline-block;"></span>

For cleaner code, you can write your CSS outside of the style attribute, in a separate file.
unknown
d18805
test
/home/user/dev/project/ needs to be in your PYTHONPATH for the module to be importable. If your testing_utils is meant to be local only, the easiest way to accomplish this is to add:

import sys
sys.path.append('/home/user/dev/project/')

... in dummy_factory.py before importing the module. More proper solutions would be to make your package installable (how exactly you'd do that depends on your system and what version of Python and Pip you need to support), or to use a virtualenv that automatically adds the right directories to PYTHONPATH.
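A self-contained sketch of the mechanism, using a temporary directory as a stand-in for /home/user/dev/project/ (the module contents here are invented purely to demonstrate the import):

```python
import importlib
import os
import sys
import tempfile

# Simulate the layout: a directory containing testing_utils.py
# (stand-in for /home/user/dev/project/).
project_dir = tempfile.mkdtemp()
with open(os.path.join(project_dir, "testing_utils.py"), "w") as f:
    f.write("MAGIC = 42\n")

# The fix from the answer: extend sys.path before importing.
sys.path.append(project_dir)
testing_utils = importlib.import_module("testing_utils")
print(testing_utils.MAGIC)
```

Anything appended to sys.path at runtime is searched by subsequent imports, which is why the append must happen before the import statement.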
unknown
d18806
test
It looks like you are searching for localization for your application. Please follow this link: Localization example link 1 Localization example Link 2 Link 3 Best link: Best Link... Hope it works for you. A: You can look at incorporating the Greenwich framework into your app. I've not used it myself, but saw it demo'd at a conference recently. It will let the translators make the changes to the app that you distribute to them as an ad-hoc build and then see them running in your app.
unknown
d18807
test
This is the error below; the promise does not get handled in time:

F
A Jasmine spec timed out. Resetting the WebDriver Control Flow.
Failures:
1) Protractor Alert steps Open Angular js website Alerts
  Message: Error: Timeout - Async callback was not invoked within timeout specified by jasmine.DEFAULT_TIMEOUT_INTERVAL.
  Stack: Error: Timeout - Async callback was not invoked within timeout specified by jasmine.DEFAULT_TIMEOUT_INTERVAL.
    at listOnTimeout (internal/timers.js:549:17)
    at processTimers (internal/timers.js:492:7)
  Message: Failed: script timeout

A: You're getting a timeout error. Modify your timeout limits in your configuration file:

exports.config = {
    allScriptsTimeout: 90 * 1000, // set to 90 seconds
    getPageTimeout: 15 * 1000,    // set to 15 seconds
    // if you're using Node, you might want to adjust this as well:
    jasmineNodeOpts: {
        showColors: true,
        defaultTimeoutInterval: 60 * 1000, // 60 second timeout
        print: function() {}
    },
};

Just a word of caution: long tests are fragile tests. Adjust your timeouts carefully.
unknown
d18808
test
I solved it by deleting and re-downloading both opencv and opencv_contrib. Then I built everything again:

git clone https://github.com/Itseez/opencv.git
git clone https://github.com/Itseez/opencv_contrib.git
cd opencv
mkdir build
cd build
cmake -D CMAKE_BUILD_TYPE=RELEASE -DOPENCV_EXTRA_MODULES_PATH=/home/myname/Downloads/opencv_contrib/modules -D CMAKE_INSTALL_PREFIX=/usr/local -D WITH_TBB=ON -D BUILD_NEW_PYTHON_SUPPORT=ON -D WITH_V4L=ON -D INSTALL_C_EXAMPLES=ON -D INSTALL_PYTHON_EXAMPLES=ON -D BUILD_EXAMPLES=ON -D WITH_QT=OFF -D WITH_OPENGL=ON -D BUILD_opencv_ximgproc=ON ..
make -j7
sudo make install
unknown
d18809
test
If you're looking to turn Azure Service Bus messages into Storage blobs, that's probably easy to achieve. You need a Service Bus trigger to retrieve the message payload and its ID to use as a blob name, storing the payload (message body) using whatever mechanism you want. Could be using the Storage SDK to write the contents into a blob. Or a blob output binding with a random blob name. Or the message ID as the blob name.

Below is an example of a function that will be triggered by a new message in a Service Bus queue called myqueue and will generate a blob named after the message's ID in the messages container.

In-process SDK:

public static class MessageTriggeredFunction
{
    [FunctionName(nameof(MessageTriggeredFunction))]
    public static async Task Run(
        [ServiceBusTrigger("myqueue", Connection = "ServiceBusConnectionString")] string payload,
        string messageId,
        [Blob("messages/{messageId}.txt", FileAccess.Write, Connection = "StorageAccountConnectionString")] Stream output)
    {
        await output.WriteAsync(Encoding.UTF8.GetBytes(payload));
    }
}

Isolated worker SDK:

public class MessageTriggeredFunctionIsolated
{
    [Function(nameof(MessageTriggeredFunctionIsolated))]
    [BlobOutput("messages/{messageId}.txt", Connection = "StorageAccountConnectionString")]
    public string Run(
        [ServiceBusTrigger("myqueue", Connection = "ServiceBusConnectionString")] string payload,
        string messageId)
    {
        return payload;
    }
}
unknown
d18810
test
How should your regex check something like this?

$a = '';
$b = $a;
$c = preg_replace('/.*/', '', $b);

You should use PHP to check the variables and not a regex:

include 'config.php';

foreach (array('db_host', 'db_pass', ...) as $varname) {
    $value = trim($$varname);
    if (empty($value)) {
        die($varname . ' must not be empty');
    }
}

Also you should use trim() to avoid variables containing just whitespace content. (thanks @SamuelCook)

A: Why use preg_match when you can use trim() and empty() to check if it's empty or not? Take the following example:

<?php
$str[0] = ' ';
$str[1] = "\t";
$str[2] = "\r\n";

foreach ($str as $k => $val) {
    $val = trim($val);
    if (empty($val)) {
        echo $k . ' is empty<br>';
    }
}

They will all be reported as empty.

A: You can use this pattern:

$pattern = '~\$db_host\s*=\s*(["\'])\s*\1\s*;~';

\s* means "any white character zero or more times". White characters are spaces, tabs, carriage returns and newlines, but if you want, you can replace \s with \h, which matches only horizontal white characters (i.e. tabs and spaces). I put the type of quote in a capturing group with (["\']) and then I use a backreference to this group with \1 (1 because it is the first capturing group of the pattern).
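The quote-backreference idea carries over to other regex engines; here is a quick Python check of the same pattern (adapted from the PHP version above; the sample config lines are made up):

```python
import re

# Same idea as the PHP pattern: an empty, possibly whitespace-padded value
# in either quote style, with the closing quote matched via backreference \1.
pattern = re.compile(r"\$db_host\s*=\s*([\"'])\s*\1\s*;")

print(bool(pattern.search("$db_host = '';")))           # empty single quotes
print(bool(pattern.search('$db_host = "  ";')))         # whitespace-only value
print(bool(pattern.search("$db_host = 'localhost';")))  # non-empty: no match
```

The backreference is what keeps `'...'` and `"..."` from being mixed: whatever quote opened the value must also close it.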
unknown
d18811
test
I assume that you did specify your EMAIL_HOST, EMAIL_PORT, EMAIL_HOST_USER and EMAIL_HOST_PASSWORD in your settings.py, right? A detailed explanation of how the default django.core.mail.backends.smtp.EmailBackend works is given here:

https://docs.djangoproject.com/en/dev/topics/email/
https://docs.djangoproject.com/en/dev/topics/email/#smtp-backend

And specifically for your email port: did you open the port for SMTP under your EC2 instance's security group? SMTP usually defaults to port 25 if you left it as default during your postfix configuration, and it is important that you opened up that port when you created your EC2 instance.
unknown
d18812
test
That needs to be: Document doc = (Document) xstream.fromXML(theInput); If you pass in a second parameter, XStream will try to populate that with the values from the XML. Since in your code, you're passing in a class object, XStream will try to populate the class object and return it. The JavaDoc has the details.
unknown
d18813
test
:defaultM); } } When running the above code the list.stream().forEach(A::defaultM); throws the below exception. Why? Why can't the method reference access the methods defined in the package-private interface while the lambda expression can? I'm running this in Eclipse (Version: 2018-12 (4.10.0)) with Java version 1.8.0_191. imp1 imp2 Exception in thread "main" java.lang.BootstrapMethodError: call site initialization exception at java.lang.invoke.CallSite.makeSite(CallSite.java:341) at java.lang.invoke.MethodHandleNatives.linkCallSiteImpl(MethodHandleNatives.java:307) at java.lang.invoke.MethodHandleNatives.linkCallSite(MethodHandleNatives.java:297) at pkg.Test.main(Test.java:14) Caused by: java.lang.IllegalArgumentException: java.lang.IllegalAccessException: class is not public: pkg.a.AA.defaultM()void/invokeInterface, from pkg.Test at java.lang.invoke.MethodHandles$Lookup.revealDirect(MethodHandles.java:1360) at java.lang.invoke.AbstractValidatingLambdaMetafactory.<init>(AbstractValidatingLambdaMetafactory.java:131) at java.lang.invoke.InnerClassLambdaMetafactory.<init>(InnerClassLambdaMetafactory.java:155) at java.lang.invoke.LambdaMetafactory.metafactory(LambdaMetafactory.java:299) at java.lang.invoke.CallSite.makeSite(CallSite.java:302) ... 3 more Caused by: java.lang.IllegalAccessException: class is not public: pkg.a.AA.defaultM()void/invokeInterface, from pkg.Test at java.lang.invoke.MemberName.makeAccessException(MemberName.java:850) at java.lang.invoke.MethodHandles$Lookup.checkAccess(MethodHandles.java:1536) at java.lang.invoke.MethodHandles$Lookup.revealDirect(MethodHandles.java:1357) ... 7 more A: This appears to be a bug in certain Java versions. 
I can replicate it if I compile and run it with JDK 8, specifically: tj$ javac -version javac 1.8.0_74 tj$ java -version java version "1.8.0_74" Java(TM) SE Runtime Environment (build 1.8.0_74-b02) Java HotSpot(TM) 64-Bit Server VM (build 25.74-b02, mixed mode) ...but not with JDK 11 or 12, specifically: tj$ javac -version javac 11.0.1 tj$ java -version openjdk version "11.0.1" 2018-10-16 OpenJDK Runtime Environment 18.9 (build 11.0.1+13) OpenJDK 64-Bit Server VM 18.9 (build 11.0.1+13, mixed mode) and tj$ javac -version javac 12.0.2 tj$ java -version java version "12.0.2" 2019-07-16 Java(TM) SE Runtime Environment (build 12.0.2+10) Java HotSpot(TM) 64-Bit Server VM (build 12.0.2+10, mixed mode, sharing) I can also replicate it if I compile with JDK 8 but run it with JDK 12's runtime, suggesting a compilation problem. A: This is a bug: Method reference uses wrong qualifying type. A reference to a method declared in a package-access class (via a public subtype) compiles to a lambda bridge; the qualifying type in the bridge method is the declaring class, not the referenced class. This leads to an IllegalAccessError. Fixed in Java 9.
unknown
d18814
test
Unfortunately, in the current stage, it seems that break-inside is not reflected in Utilities.newBlob(html, MimeType.HTML, subject).getAs(MimeType.PDF). But I'm not sure whether this is the current specification. So as a workaround, in your case, how about the following flow?

1. Prepare HTML data. (This has already been prepared in your script.)
2. Convert the HTML data to a Google Document. (In this case, the Drive API is used.)
3. Convert the Google Document to PDF data.
4. Create a blob as a file.

When your script is modified, it becomes as follows.

Modified script: Before you use this script, please enable Drive API at Advanced Google services.

From:

var blob = Utilities.newBlob(html, MimeType.HTML, subject).getAs(MimeType.PDF);

To:

// 1. Prepare HTML data.
var blob = Utilities.newBlob(html, MimeType.HTML, subject);

// 2. Convert HTML data to Google Document.
var id = Drive.Files.insert({title: subject, mimeType: MimeType.GOOGLE_DOCS}, blob).id;

// 3. Convert Google Document to PDF data.
var url = "https://docs.google.com/feeds/download/documents/export/Export?exportFormat=pdf&id=" + id;
var pdf = UrlFetchApp.fetch(url, {headers: {authorization: "Bearer " + ScriptApp.getOAuthToken()}}).getBlob().setName(subject);
DriveApp.getFileById(id).setTrashed(true);

// 4. Create blob as a file.
DriveApp.createFile(pdf);

Reference:

* Files: insert

A: By experimentation I found that Utilities.newBlob(html, MimeType.HTML, subject).getAs(MimeType.PDF) will respect page-break-inside: avoid; (rather than break-inside: avoid;) but only if it's applied to an entire table. So I had to treat each table row as a separate table. This version of the HTML template solved the problem:

<table style="border-collapse: collapse; border: 1px solid black; table-layout: fixed; width: 100%;">
  <tbody>
    <tr>
      <th style="border: 1px solid black; width: 30%; padding: 5px;">Question</th>
      <th style="border: 1px solid black; width: 70%; padding: 5px;">Response</th>
    </tr>
  </tbody>
</table>
<? rows.forEach(function(row){ ?>
<table style="page-break-inside: avoid; border-collapse: collapse; border: 1px solid black; table-layout: fixed; width: 100%;">
  <tbody>
    <tr>
      <td style="width: 30%; border: 1px solid black; padding: 5px; line-height: 1.5rem;">
        <?= row.question ?>
      </td>
      <td style="width: 70%; border: 1px solid black; padding: 5px; line-height: 1.5rem;">
        <?!= row.answer ?>
      </td>
    </tr>
  </tbody>
</table>
<? }) ?>

For the record, converting to a Google Doc and then to PDF did not solve the problem; it appears that Google Docs also allows table cells to break across pages. However, writing the data to a Google Sheet and then downloading as PDF does prevent breaking cells across pages; Sheets seems to take care of this automatically, so that's an alternate solution.
unknown
d18815
test
Presumably, you have configured VS Code to run the code through Node.js and not through a remote debug session on a Chrome instance. Node.js isn't a web browser. It doesn't have a window.
unknown
d18816
test
The jQuery.data() API expects a DOM element as its first param, not a jQuery object:

var data = $.data(this, 'testdata');

Also removeData() expects the key to be removed as its first param:

div.removeData('testdata');

Demo: Fiddle

Another way is to use the .data() method of the jQuery object, like:

div.data('testdata', { 'Name': 'Ronaldo' });
var data = div.data('testdata');

Demo: Fiddle

A: jQuery

$(document).ready(function () {
    $(document).on('click', '#btn', function () {
        var thisObject = $(this);
        if (thisObject.hasClass("addplayer")) {
            $.data(thisObject[0], "testdata", { Name: "Ronaldo" });
            var dataName = $.data(thisObject[0], 'testdata').Name;
            console.log(dataName);
            thisObject.val('x').removeClass('addplayer').addClass('removeplayer');
        } else {
            var dataName = $.data(thisObject[0], 'testdata').Name;
            console.log(dataName);
            $.removeData(thisObject[0], 'testdata');
            thisObject.val('+').removeClass('removeplayer').addClass('addplayer');
        }
    });
});

DEMO
unknown
d18817
test
You can do this from the command prompt. First:

git checkout master
git merge page-switcher-fix

At the end:

git branch -d page-switcher-fix

I hope this helps.

A: It seems it's possible to recover a removed branch from .git\refs\heads; in my case I stepped back with Time Machine and the bug stayed fixed. Anyway, the best choice is to step back with Time Machine or something similar, like snapshots in a virtual machine.

A: A distinction is made between remote and local branches. As you describe your case, you deleted your branch remotely. For further context please see the code below for the command prompt:

# delete branch remotely
git push origin --delete page-switcher-fix

And as the solution of @Mohammad Afshar correctly suggests, you can delete your branch locally as follows:

# delete branch locally
git branch -d page-switcher-fix
unknown
d18818
test
If you make tpToString a template, you can allow the caller to choose the accuracy at compile time.

template <typename FloorType = microseconds>
string tpToString(const system_clock::time_point& tp)
{
    string formatStr{ "%Y-%m-%d %H:%M:%S" };
    return date::format(formatStr, date::floor<FloorType>(tp));
}

int main()
{
    std::cout << tpToString(system_clock::now()) << "\n";
    std::cout << tpToString<milliseconds>(system_clock::now()) << "\n";
    std::cout << tpToString<seconds>(system_clock::now()) << "\n";
    std::cout << tpToString<minutes>(system_clock::now()) << "\n";
}
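The same choose-your-precision idea can be sketched in Python (this is an analogy to date::floor, not the C++ date library; floor_to is a hypothetical helper):

```python
from datetime import datetime, timedelta

def floor_to(tp: datetime, step: timedelta) -> datetime:
    """Truncate a datetime down to a multiple of `step` (like date::floor)."""
    epoch = datetime(1970, 1, 1)
    return epoch + (tp - epoch) // step * step

tp = datetime(2023, 5, 1, 9, 39, 26, 123456)
print(floor_to(tp, timedelta(milliseconds=1)))  # millisecond precision
print(floor_to(tp, timedelta(seconds=1)))       # second precision
print(floor_to(tp, timedelta(minutes=1)))       # minute precision
```

As with the C++ template, the caller picks the precision at the call site, and the formatting step then only ever sees an already-truncated time point.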
unknown
d18819
test
Frank's answer is by far the simplest, but here's a swatch of code I've worked on for mid-pipe debugging and such. Caveat emptor:

* this code is under-tested;
* even if well-tested, there is no intention for this to be used in production or unattended use;
* it has not been blessed or even reviewed by any authors or contributors to dplyr and related packages;
* it currently works in R-3.4 and dplyr-0.7.4, but it is not taking advantage of many "goodnesses" that should be used, such as rlang and/or lazyeval;
* it works for my uses, not tested for yours.

Bug reports welcome, if/when you find something amiss.

Mid-pipe message

This can include just about anything you want:

mtcars %>%
  group_by(cyl) %>%
  pipe_message(whichcyl = cyl[1], bestmpg = max(mpg)) %>%
  summarize(mpg = mean(mpg))
# Mid-pipe message (2018-05-01 09:39:26):
# $ :List of 2
#  ..$ whichcyl: num 4
#  ..$ bestmpg : num 33.9
# $ :List of 2
#  ..$ whichcyl: num 6
#  ..$ bestmpg : num 21.4
# $ :List of 2
#  ..$ whichcyl: num 8
#  ..$ bestmpg : num 19.2
# # A tibble: 3 x 2
#     cyl   mpg
#   <dbl> <dbl>
# 1    4.  26.7
# 2    6.  19.7
# 3    8.  15.1

Mid-pipe assert

You can optionally just realize what's going on and look at the data quickly, allowing you to see the moment and then exit out of the pipe:

mtcars %>%
  group_by(cyl) %>%
  pipe_assert(all(mpg > 12), .debug=TRUE) %>%
  summarize(mpg = mean(mpg))
# #
# # all(mpg > 12) is not TRUE ... in Group: cyl:8
# # 'x' is the current data that failed the assertion.
# #
# Called from: pipe_assert(., all(mpg > 12), .debug = TRUE)
# Browse[1]>
# debug at c:/Users/r2/Projects/StackOverflow/pipe_funcs.R#81: if (identical(x, .x[.indices[[.ind]], ])) {
#     stop(.msg, call. = FALSE)
# } else {
#     .x[.indices[[.ind]], ] <- x
#     return(.x)
# }
# Browse[2]> x
# # A tibble: 14 x 11
# # Groups: cyl [1]
#      mpg   cyl  disp    hp  drat    wt  qsec    vs    am  gear  carb
#    <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
#  1  18.7    8.  360.  175.  3.15  3.44  17.0    0.    0.    3.    2.
#  2  14.3    8.  360.  245.  3.21  3.57  15.8    0.    0.    3.    4.
# 3 16.4 8. 276. 180. 3.07 4.07 17.4 0. 0. 3. 3. # 4 17.3 8. 276. 180. 3.07 3.73 17.6 0. 0. 3. 3. # 5 15.2 8. 276. 180. 3.07 3.78 18.0 0. 0. 3. 3. # 6 10.4 8. 472. 205. 2.93 5.25 18.0 0. 0. 3. 4. # 7 10.4 8. 460. 215. 3.00 5.42 17.8 0. 0. 3. 4. # 8 14.7 8. 440. 230. 3.23 5.34 17.4 0. 0. 3. 4. # 9 15.5 8. 318. 150. 2.76 3.52 16.9 0. 0. 3. 2. # 10 15.2 8. 304. 150. 3.15 3.44 17.3 0. 0. 3. 2. # 11 13.3 8. 350. 245. 3.73 3.84 15.4 0. 0. 3. 4. # 12 19.2 8. 400. 175. 3.08 3.84 17.0 0. 0. 3. 2. # 13 15.8 8. 351. 264. 4.22 3.17 14.5 0. 1. 5. 4. # 14 15.0 8. 301. 335. 3.54 3.57 14.6 0. 1. 5. 8. # Browse[2]> c # Error: all(mpg > 12) is not TRUE ... in Group: cyl:8 or you can optionally update/change the data; realize that this modifies the data in the pipe, not the source, so is really only good in dev and/or one-off fixes: mtcars %>% group_by(cyl) %>% pipe_assert(all(mpg > 12), .debug=TRUE) %>% summarize(mpg = mean(mpg)) # # # # all(mpg > 12) is not TRUE ... in Group: cyl:8 # # 'x' is the current data that failed the assertion. # # # Called from: pipe_assert(., all(mpg > 12), .debug = TRUE) # Browse[1]> # debug at c:/Users/r2/Projects/StackOverflow/pipe_funcs.R#81: if (identical(x, .x[.indices[[.ind]], ])) { # stop(.msg, call. = FALSE) # } else { # .x[.indices[[.ind]], ] <- x # return(.x) # } (Ignore the current line of debugged code, if ..., that's my stuff and not beautiful.) I'm in the debugger now, I can look at and alter/fix the data: # Browse[2]> x # ...as before... x$mpg <- x$mpg + 1000 If the data is changed, the pipe continues, otherwise it'll stop. # Browse[2]> c # # A tibble: 3 x 2 # cyl mpg # <dbl> <dbl> # 1 4. 26.7 # 2 6. 19.7 # 3 8. 1015. (The data can be changed but the labels cannot ... so if we had done x$cyl <- 99, it still would have shown 8 in rest of the pipe. This is a consequence of dplyr not allowing you to change grouping variables ... which is a good thing, IMO.) There's also pipe_debug which always debugs, but it is less impressive. 
It also does not (currently) pass on changed data, so use pipe_assert for that (e.g., pipe_assert(FALSE,.debug=TRUE)). Source, also available in my gist: #' Mid-pipe assertions #' #' Test assertions mid-pipe. Each assertion is executed individually #' on each group (if present) of the piped data. Any failures indicate #' the group that caused the fail, terminating on the first failure. #' #' If `.debug`, then the interpreter enters the `browser()`, allowing #' you to look at the specific data, stored as `x` (just the grouped #' data if `is.grouped_df(.x)`, all data otherwise). If the data is #' changed, then the altered data will be sent forward in the pipeline #' (assuming you fixed the failed assertion), otherwise the assertion #' will fail (as an assertion should). #' #' @param .x data.frame, potentially grouped #' @param ... unnamed expression(s), each must evaluate to a single #' 'logical'; similar to [assertthat::assert_that()], rather than #' combining expressions with `&&`, separate them by commas so that #' better error messages can be generated. #' @param .msg a custom error message to be printed if one of the #' conditions is false. #' @param .debug logical, whether to invoke [browser()] if the #' assertion fails; if `TRUE`, then when the debugger begins on a #' fail, the grouped data will be in the variable `x` #' @return data.frame (unchanged) #' @export #' @import assertthat #' @md #' @examples #' \dontrun{ #' #' library(dplyr) #' library(assertthat) #' #' mtcars %>% #' group_by(cyl) %>% #' pipe_assert( #' all(cyl < 9), #' all(mpg > 10) #' ) %>% #' count() #' # # A tibble: 3 x 2 #' # cyl n #' # <dbl> <int> #' # 1 4 11 #' # 2 6 7 #' # 3 8 14 #' #' # note here that the "4" group is processed first and does not fail #' mtcars %>% #' group_by(cyl, vs) %>% #' pipe_assert( all(cyl < 6) ) %>% #' count() #' # Error: all(cyl < 6) is not TRUE ... 
in Group: cyl:6, vs:0 #' #' } pipe_assert <- function(.x, ..., .msg = NULL, .debug = FALSE) { if (is.grouped_df(.x)) { .indices <- lapply(attr(.x, "indices"), `+`, 1L) .labels <- attr(.x, "labels") } else { .indices <- list(seq_len(nrow(.x))) } for (assertion in eval(substitute(alist(...)))) { for (.ind in seq_along(.indices)) { .out <- assertthat::see_if(eval(assertion, .x[.indices[[.ind]],])) if (! .out) { x <- .x[.indices[[.ind]],] if (is.null(.msg)) .msg <- paste(deparse(assertion), "is not TRUE") if (is.grouped_df(.x)) { .msg <- paste(.msg, paste("in Group:", paste(sprintf("%s:%s", names(.labels), sapply(.labels, function(z) as.character(z[.ind]))), collapse = ", ")), sep = " ... ") } if (.debug) { message("#\n", paste("#", .msg), "\n# 'x' is the current data that failed the assertion.\n#\n") browser() } if (identical(x, .x[.indices[[.ind]],])) { stop(.msg, call. = FALSE) } else { .x[.indices[[.ind]],] <- x return(.x) } } } } .x # "unmodified" } #' Mid-pipe debugging #' #' Mid-pipe peek at the data, named `x` within [browser()], but #' *changes are not preserved*. 
#' #' @param .x data.frame, potentially grouped #' @return data.frame (unchanged) #' @export #' @md #' @examples #' \dontrun{ #' #' library(dplyr) #' #' mtcars %>% #' group_by(cyl, vs) %>% #' pipe_debug() %>% #' count() #' #' } pipe_debug <- function(.x) { if (is.grouped_df(.x)) { .indices <- lapply(attr(.x, "indices"), `+`, 1L) .labels <- attr(.x, "labels") } else { .indices <- list(seq_len(nrow(.x))) } # I used 'lapply' here instead of a 'for' loop because # browser-stepping after 'browser()' in a 'for' loop could continue # through all of *this* code, not really meaningful; in pipe_assert # above, since the next call after 'browser()' is 'stop()', there's # little risk of stepping in or out of this not-meaningful code .ign <- lapply(seq_along(.indices), function(.ind, .x) { x <- .x[.indices[[.ind]],] message("#", if (is.grouped_df(.x)) { paste("\n# in Group:", paste(sprintf("%s:%s", names(.labels), sapply(.labels, function(z) as.character(z[.ind]))), collapse = ", "), "\n") }, "# 'x' is the current data (grouped, if appropriate).\n#\n") browser() NULL }, .x = .x) .x # "unmodified" } #' Mid-pipe status messaging. #' #' @param .x data.frame, potentially grouped #' @param ... unnamed or named expression(s) whose outputs will be #' captured, aggregated with [utils::str()], and displayed as a #' [base::message()]; if present, a '.' 
literal is replace with a #' reference to the `data.frame` (in its entirety, not grouped) #' @param .FUN function, typically [message()] or [warning()] (for #' when messages are suppressed); note: if set to `warning`, the #' argument `call.=FALSE` is appended to the arguments #' @param .timestamp logical, if 'TRUE' then a POSIXct timestamp is #' appended to the header of the `str`-like output (default 'TRUE') #' @param .stropts optional list of options to pass to [utils::str()], #' for example `list(max.level=1)` #' @return data.frame (unchanged) #' @export #' @md #' @examples #' \dontrun{ #' #' library(dplyr) #' #' mtcars %>% #' pipe_message( # unnamed #' "starting", #' group_size(.) #' ) %>% #' group_by(cyl) %>% #' pipe_message( # named #' msg = "grouped", #' grps = group_size(.) #' ) %>% #' count() %>% #' ungroup() %>% #' pipe_message( # alternate function, for emphasis! #' msg = "done", #' .FUN = warning #' ) #' #' head(mtcars) %>% #' pipe_message( #' list(a = list(aa=1, bb=2, cc=3)) #' ) #' head(mtcars) %>% #' pipe_message( #' list(a = list(aa=1, bb=2, cc=3)), #' .stropts = list(max.level = 2) #' ) #' #' } pipe_message <- function(.x, ..., .FUN = message, .timestamp = TRUE, .stropts = NULL) { .expressions <- eval(substitute(alist(...))) if (is.grouped_df(.x)) { .indices <- lapply(attr(.x, "indices"), `+`, 1L) .labels <- attr(.x, "labels") } else { .indices <- list(seq_len(nrow(.x))) .labels <- "" } lst <- mapply(function(.ind, .lbl) { .x <- .x[.ind,,drop=FALSE] lapply(.expressions, function(.expr) { if (is.call(.expr)) .expr <- as.call(lapply(.expr, function(a) if (a == ".") as.symbol(".x") else a)) eval(.expr, .x) }) }, .indices, .labels, SIMPLIFY=FALSE) .out <- capture.output( do.call("str", c(list(lst), .stropts)) ) .out[1] <- sprintf("Mid-pipe message%s:", if (.timestamp) paste(" (", Sys.time(), ")", sep = "")) do.call(.FUN, c(list(paste(.out, collapse = "\n")), if (identical(.FUN, warning)) list(call. 
= FALSE))) .x # "unmodified" } A: You can still do the printing thing here: df %>% group_by(ID) %>% do({ the_id = unique(.$ID) cat("Working on...", the_id, "which is...", match(the_id, unique(df$ID)), "/", n_distinct(df$ID), "\n") FUN(.) }) which prints Working on... 1 which is... 1 / 3 [1] "TEST" Working on... 2 which is... 2 / 3 Error in 1:which(!is.na(x$value))[1] : NA/NaN argument I routinely do this (using data.table not dplyr, but the same idea). I realize there are more sophisticated ways to debug, but it's worked well enough for me.
unknown
d18820
test
Call respondToRequest: in the method - (void)peripheralManager:(CBPeripheralManager *)peripheral didReceiveReadRequest:(CBATTRequest *)request:

// CBPeripheralManagerDelegate
- (void)peripheralManager:(CBPeripheralManager *)peripheral didReceiveReadRequest:(CBATTRequest *)request {
    [peripheral respondToRequest:request withResult:CBATTErrorSuccess];
    NSLog(@"didReceiveReadRequest");
}

@Paulw11, thanks
unknown
d18821
test
Technically, every module-level variable is global, and you can mess with them from anywhere. A simple example you might not have realized is sys:

import sys
myfile = open('path/to/file.txt', 'w')
sys.stdout = myfile

sys.stdout is a global variable. Many things in various parts of the program - including parts you don't have direct access to - use it, and you'll notice that changing it here will change the behavior of the entire program. If anything, anywhere, uses print(), it will output to your file instead of standard output.

You can co-opt this behavior by simply making a common sourcefile that's accessible to your entire project:

common.py

var1 = 3
var2 = "Hello, World!"

sourcefile1.py

from . import common
print(common.var2)  # "Hello, World!"
common.newVar = [3, 6, 8]

fldr/sourcefile2.py

from .. import common
print(common.var2)    # "Hello, World!"
print(common.newVar)  # [3, 6, 8]

As you can see, you can even assign new properties that weren't there in the first place (common.newVar). It might be better practice, however, to simply place a dict in common and store your various global values in that - pushing a new key to a dict is an easier-to-maintain operation than adding a new attribute to a module.

If you use this method, you're going to want to be wary of doing from .common import *. This locks you out of ever changing your global variables, because of namespaces - when you assign a new value, you're modifying only your local namespace. In general, you shouldn't be doing import * for this reason, but this is a particular symptom of that.
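The multi-file layout above can be simulated in one file by registering a module object in sys.modules by hand (the module and attribute names below are the illustrative ones from the answer, not a real package):

```python
import sys
import types

# Single-file stand-in for common.py: a real module object registered in
# sys.modules, so `import common` anywhere in the process finds this instance.
common = types.ModuleType("common")
common.var1 = 3
common.var2 = "Hello, World!"
sys.modules["common"] = common

import common as c1   # what sourcefile1.py would see
c1.newVar = [3, 6, 8]

import common as c2   # what sourcefile2.py would see
print(c2.newVar)      # the attribute added via the other name is visible

# The `from common import *` pitfall, in miniature: rebinding a local
# name does not touch the module attribute.
var2 = c2.var2        # like `from common import var2`
var2 = "changed"      # rebinds only the local name
print(c2.var2)        # still "Hello, World!"
```

Both imports return the very same module object, which is why mutations made through one name are seen through the other, and why plain-name rebinding after a star import goes nowhere.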
unknown
d18822
test
If you are using .NET and you need some random bytes, try the GetBytes method of RNGCryptoServiceProvider. Nice and random. You could also use it to help in selecting random positions to update.
unknown
d18823
test
CMake has no other way to set permissions for installed files except the PERMISSIONS option of the install command. But there are many ways to simplify permissions settings. For example, you can define a variable containing the default permissions set:

set(PROGRAM_PERMISSIONS_DEFAULT
    OWNER_WRITE OWNER_READ OWNER_EXECUTE
    GROUP_READ GROUP_EXECUTE
    WORLD_READ WORLD_EXECUTE)

and use it as a base for adding new permissions:

install(TARGETS myexe
    ...
    PERMISSIONS ${PROGRAM_PERMISSIONS_DEFAULT} SETUID)

If some set of permissions is used in many places, you can define a variable containing that set, and use it when needed:

set(PROGRAM_PERMISSIONS_BY_ROOT ${PROGRAM_PERMISSIONS_DEFAULT} SETUID)
...
install(TARGETS myexe
    ...
    PERMISSIONS ${PROGRAM_PERMISSIONS_BY_ROOT})
unknown
d18824
test
There is a lot of context missing from the question to be sure, but setting the encoding in the set-payload operation or in the Content-Type header doesn't actually transform the payload to or from UTF-8. A common mistake, because the configurations or even the XML declaration say UTF-8, is to assume that the data is UTF-8 automatically. It is the data itself that is in UTF-8 or not. Most probably you are using a payload that is in a local encoding (Windows-1252 or something similar) when you test locally, but your server uses UTF-8 by default, so it doesn't appear as you are expecting. "GrubiÅ¡iÄ" looks very similar to how the UTF-8 encoding of "Grubišić" would look when decoded with the wrong character set.
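The give-away can be reproduced directly; the sketch below (assuming the Windows-1252 hypothesis from the answer) encodes "Grubišić" as UTF-8 and mis-decodes the bytes as cp1252, producing exactly that kind of Å/Ä gibberish:

```python
text = "Grubišić"

# UTF-8 bytes read back with the wrong (Windows-1252) decoder:
garbled = text.encode("utf-8").decode("cp1252")
print(garbled)  # mojibake of the form seen in the question

# The damage is reversible as long as the bytes are intact:
print(garbled.encode("cp1252").decode("utf-8"))
```

This round-trip is also a handy diagnostic: if re-encoding the garbled string as cp1252 and decoding as UTF-8 recovers the expected text, the data was UTF-8 all along and only the declared charset was wrong.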
unknown
d18825
test
One way to achieve the desired result: IN = load 'data.txt' using PigStorage(',') as (name:chararray, type:chararray, date:int, region:chararray, op:chararray, value:int); A = order IN by op asc; B = group A by (name, type, date, region); C = foreach B { bs = STRSPLIT(BagToString(A.value, ','),',',3); generate flatten(group) as (name, type, date, region), bs.$2 as OpX:chararray, bs.$0 as OpC:chararray, bs.$1 as OpT:chararray; } describe C; C: {name: chararray,type: chararray,date: int,region: chararray,OpX: chararray,OpC: chararray,OpT: chararray} dump C; (john,ab,20130106,D,20,19,8) (john,ab,20130106,E,98,854,67) Update: If you want to skip order by which adds an additional reduce phase to the computation, you can prefix each value with its corresponding op in tuple v. Then sort the tuple fields by using a custom UDF to have the desired OpX, OpC, OpT order: register 'myjar.jar'; A = load 'data.txt' using PigStorage(',') as (name:chararray, type:chararray, date:int, region:chararray, op:chararray, value:int); B = group A by (name, type, date, region); C = foreach B { v = foreach A generate CONCAT(op, (chararray)value); bs = STRSPLIT(BagToString(v, ','),',',3); generate flatten(group) as (name, type, date, region), flatten(TupleArrange(bs)) as (OpX:chararray, OpC:chararray, OpT:chararray); } where TupleArrange in mjar.jar is something like this: .. 
import java.io.IOException; import java.util.Arrays; import org.apache.pig.EvalFunc; import org.apache.pig.data.Tuple; import org.apache.pig.data.TupleFactory; import org.apache.pig.impl.logicalLayer.schema.Schema; public class TupleArrange extends EvalFunc<Tuple> { private static final TupleFactory tupleFactory = TupleFactory.getInstance(); @Override public Tuple exec(Tuple input) throws IOException { try { Tuple result = tupleFactory.newTuple(3); Tuple inputTuple = (Tuple) input.get(0); String[] tupleArr = new String[] { (String) inputTuple.get(0), (String) inputTuple.get(1), (String) inputTuple.get(2) }; Arrays.sort(tupleArr); //ascending result.set(0, tupleArr[2].substring(1)); result.set(1, tupleArr[0].substring(1)); result.set(2, tupleArr[1].substring(1)); return result; } catch (Exception e) { throw new RuntimeException("TupleArrange error", e); } } @Override public Schema outputSchema(Schema input) { return input; } }
unknown
d18826
test
This appears to be a bug with the IWebBrowser2 interface. It can be fixed with td{overflow-x: auto;} <style> @media print { td{overflow-x: auto;} } </style>
unknown
d18827
test
I've never used Jade, but my first thought was that you are adding behaviors and then assuming that Jade will decide to run them at some point. When you say that you never see your behaviors activate, it strengthened that hypothesis. I looked at the source and sure enough, addBehaviour() and removeBehaviour() simply add and remove from a collection called myScheduler. Looking at the usages, I found a private method called activateAllBehaviours() that looked like it ran the Behaviours. That method is called from the public doWake() on the Agent class. I would guess that you simply have to call doWake() on your Agent. This is not very apparent from the JavaDoc or the examples. The examples assume that you use the jade.Boot class and simply pass the class name of your agent to that Boot class. This results in the Agent getting added to a container that manages the "waking" and running of your Agents. Since you are running Swing for your GUI, I think that you will have to run your Agents manually, rather than the way that the examples show. I got more curious, so I wrote my own code to create and run the Jade container. This worked for me: Properties containerProps = new jade.util.leap.Properties(); containerProps.setProperty(Profile.AGENTS, "annoyer:myTest.MyAgent"); Profile containerProfile = new ProfileImpl(containerProps); Runtime.instance().setCloseVM(false); Runtime.instance().createMainContainer(containerProfile); This automatically creates my agent of type myTest.MyAgent and starts running it. I implemented it similar to your code snippet, and I saw messages every 5 seconds. I think you'll want to use setCloseVM(false) since your UI can handle closing down the JVM, not the Jade container.
unknown
d18828
test
The comments about datatypes, while true, don't do much to help you with your current problem. A combination of to_date and to_char might work. update yourtable set yourfield = to_char(to_date(yourfield, 'yyyymmdd'), 'dd-mm-yyyy') where length(yourfield) = 8 and yourfield not like '%-%' and yourfield not like '%.%' A: If the column contains all those various formats, you'll need to deal with each one. Assuming that your question includes all known formats, then you have a couple of options. You can use to_char/to_date. This is dangerous because you'll get a SQL error if the source data is not a valid date (of course, getting an error might be preferable to presenting bad data). Or you can simply rearrange the characters in the string based on the format. This is a little simpler to implement, and doesn't care what the delimiters are. Method 1: case when substr(tempdt,3,1)='.' then to_char(to_date(tempdt,'dd.mm.yyyy'),'dd-mm-yyyy') when substr(tempdt,3,1)='-' then tempdt when length(tempdt)=8 then to_char(to_date(tempdt,'yyyymmdd'),'dd-mm-yyyy') when substr(tempdt,3,1)='/' then to_char(to_date(tempdt,'dd/mm/yyyy'),'dd-mm-yyyy') end Method 2: case when length(tempdt)=8 then substr(tempdt,7,2) || '-' || substr(tempdt,5,2) || '-' || substr(tempdt,1,4) when length(tempdt)=10 then substr(tempdt,1,2) || '-' || substr(tempdt,4,2) || '-' || substr(tempdt,7,4) end SQLFiddle here A: Convert to date, then format to char: select to_char(to_date('20130713', 'yyyymmdd'), 'dd MON yyyy') from dual; gives 13 JUL 2013.
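For readers who need the same normalization outside the database, here is a hedged Python sketch of Method 1's logic (illustrative only; the question itself is about Oracle SQL, and the function name here is made up):

```python
from datetime import datetime

def normalize(value: str) -> str:
    """Normalize a date string in one of the known source formats to dd-mm-yyyy.

    Raises ValueError for anything else, mirroring the to_date error behaviour.
    """
    if len(value) >= 3 and value[2] == ".":
        fmt = "%d.%m.%Y"                  # dd.mm.yyyy
    elif len(value) >= 3 and value[2] == "-":
        return value                      # already dd-mm-yyyy
    elif len(value) == 8:
        fmt = "%Y%m%d"                    # yyyymmdd
    elif len(value) >= 3 and value[2] == "/":
        fmt = "%d/%m/%Y"                  # dd/mm/yyyy
    else:
        raise ValueError(f"unrecognized date format: {value!r}")
    return datetime.strptime(value, fmt).strftime("%d-%m-%Y")

print(normalize("20130713"))    # 13-07-2013
print(normalize("13.07.2013"))  # 13-07-2013
print(normalize("13/07/2013"))  # 13-07-2013
```

Like the SQL CASE version, it dispatches on the delimiter character and string length, so invalid dates still fail loudly instead of being silently rearranged.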
unknown
d18829
test
This is a really old question, sorry it never got answered, but it's also very broad in some portions. I would recommend asking shorter, more specific questions, and making multiple StackOverflow questions for them. That said, here are some brief answers for people reading this entry: * *Yes, this is possible. Check out the REST connector. *I would probably use multiple parent models that are internal and then a single exposed REST model (not "persisted") that collates that data together. *Sure, you could do that. Writing a connector isn't too difficult, check out our docs on building a connector.
unknown
d18830
test
Following Nick Felker's comment, changing my phone's locale to English (US) enabled me to test the app. Thanks Nick!
unknown
d18831
test
Still, you can use Ctrl+F11 to launch the last Run Configuration, so launch your tests once by clicking, and then hit Ctrl+F11. A: For Ctrl+F11 to work the way you want, you must set (from "Window/Preferences") the "Run/debug > Launching : Launch Operation" setting appropriately; see the answer in this post. https://stackoverflow.com/a/1152039
unknown
d18832
test
Credits to Allan Cameron's comments to run the script; below is a function using exactly that approach (note that objects[funs] must be the last expression so the names are actually returned; the local environment is discarded when the function exits). functions_from_source <- function(source) { myEnv <- new.env() source(source, local = myEnv) objects <- ls(envir = myEnv) funs <- sapply(objects, \(x){ is.function(eval(parse(text = x), envir = myEnv)) }) objects[funs] } functions_from_source("ex.R") # [1] "a" "b1" "b2" "e" "f1" "f2" "j" "k1" "k2" "m" ex.R including m and l1 (note R does not interpret l2 as a function but as a value) a <- function(x) 1 b1 = b2 <- function() { y <- 1 2 -> j j } d <<- function(x) { k <- function(l) 1 k(x) } (function(x) 2) -> e (function() { y <- 1 2 -> j j }) -> f1 -> f2 (function() 1)() g <- 4 5 -> h i <- lapply(1:3, FUN = function(x) x + 1) assign('j', function() 1) k1 <- (function() {1}) -> k2 l1 <- (1 + (l2 <- (function(x) 2 * x)(3))) (m <- function(x) x) A: Tried reducing the function: Though might have some edge cases. Not sure. get_fun <- function(x){ dp <- deparse1(x[[1]]) if( dp %in% c('<-', '=', '<<-')) c(x[[2]], get_fun(x[[3]])) else if(dp == c('(')) get_fun(x[[2]]) else if(dp == 'assign') as.list(x[-1]) else if(dp == 'function') x else if(any(i<-grepl("<<?-",x))) c(NA, get_fun(x[[which(i)]])) } get_name <- function(y){ x <- head(get_fun(y), -1) if (length(x) > 1 & any(i <- is.na(x))) x <- tail(x,-max(which(i))) as.character(x) } get_fns <- function(file){ unlist(lapply(parse(file), get_name)) } get_fns('ex.R') [1] "a" "b2" "b1" "d" "e" "f2" [7] "f1" "j" "k1" "k2" "l2" "m" The usage of get_fun is quite simple. eg: suppose we have: z <- alist( l1 <- (1 + (l2 <- function(x) 2 * x)(3)), (s = (m <- function(x) x))->k, d <- (((function(x)x+2))), i <- lapply(1:3, FUN = function(x) x + 1), j <- lapply(1:3, FUN <- function(x) x + 1)) Notice that we have l2, m, d, FUN.
The FUN is from j and not i since we used <- meaning we assigned the function and not merely a parameter: From l1 we get: get_fun(z[[1]]) [[1]] l1 [[2]] [1] NA [[3]] [1] NA [[4]] l2 [[5]] function(x) 2 * x We only pick everything after all the NAs: get_fun(z[[2]]) [[1]] k [[2]] s [[3]] m [[4]] function(x) x get_fun(z[[3]]) [[1]] d [[2]] function(x) x + 2 get_fun(z[[4]]) [[1]] i For z[[4]] although there is a name, there is no function associated with the name. Thus not a valid function get_fun(z[[5]]) [[1]] j [[2]] [1] NA [[3]] FUN [[4]] function(x) x + 1 A: After a good sleep, I could reduce the (recursive) problem to 5 simple cases: * *If the expression under investigation (expr) is not a call, we stop and return NULL. *If we hit a bracket, simply recurse into the expression and keep the current list of potential identifiers. *If we hit a function, we simply return the vector of potential identifiers collected so far. *If we hit an assignment operator, add the identifier to the list of potential identifiers and recurse into the RHS of the assignment. *In any other case, we loop through all elements of the call, but reset the list of potential identifiers to NULL. 
extract_function <- function(expr, identifiers = NULL) { .OP <- 1L .LHS <- 2L .RHS <- 3L .ASGNM <- c("<-", "<<-", "=", "assign") if (is.call(expr)) { op <- deparse(expr[[.OP]]) if (op == "(") { ## bracket case: simply recurse into the call and keep identifiers res <- Recall(expr[[-.OP]], identifiers) } else if (op == "function") { ## function case: we can stop and return stored identifiers res <- identifiers } else if (op %in% .ASGNM) { ## assignment case: add LHS to potential list of identifiers res <- Recall(expr[[.RHS]], c(as.character(expr[[.LHS]]), identifiers)) } else { ## else case: drop identifiers and recurse into function res <- lapply(expr, extract_function, identifiers = NULL) |> unlist() } } else { res <- NULL } res } unlist(lapply(parse("ex.r"), extract_function)) # [1] "a" "b2" "b1" "d" "e" "f1" "f2" "j" "k2" "k1" "l2" "m"
unknown
d18833
test
The problem is probably that your process spawned with your start/0 function crashes. When a process crashes, any ETS tables it owns are reaped. Try using spawn_monitor and then use the shell's flush() command to get hold of messages that come in. It probably dies. Another way is to use the tooling in the proc_lib module and then use erl -boot start_sasl to get some rudimentary crash error reporting up and running for your process. A "naked" spawn(...) is usually dangerous since if the spawned process crashes, you won't learn anything. At least use spawn_link or spawn_monitor. A: I found my problem: I was testing my code but I didn't have a Pid to test with, so I used whereis('event manager'). Instead I had to use self().
unknown
d18834
test
Set the column names from df1.columns: df_cont.columns = df1.columns Sample: df1 = pd.DataFrame([[1,2,3]], columns=['btc', 'eth', 'ltc']) print (df1) btc eth ltc 0 1 2 3 df_cont = pd.DataFrame([[11,22,33]], columns=['bitcoin', 'ethereum', 'litcoin']) print (df_cont) bitcoin ethereum litcoin 0 11 22 33 df_cont.columns = df1.columns print (df_cont) btc eth ltc 0 11 22 33
unknown
d18835
test
You need covariant return types. The return type from the derived class needs to be a pointer or reference derived from the return type of the base class. This can be accomplished by wrapping HWND and HMENU in a class hierarchy making a parallel of the API's organization. Templates may help if it's all generic.
unknown
d18836
test
Thanks to cournape's tip about Actions versus Generators ( and eclipse pydev debugger), I've finally figured out what I need to do. You want to pass in your function to the 'Builder' class as an 'action' not a 'generator'. This will allow you to actually execute the os.system or os.popen call directly. Here's the updated code: import os def my_action(source, target, env): cmd = r'''echo its a small world after all \ its a small world after all''' print cmd return os.system(cmd) my_cmd_builder = Builder( action=my_action, # <-- CRUCIAL PIECE OF SOLUTION suffix = '.foo') env = Environment() env.Append( BUILDERS = {'MyCmd' : my_cmd_builder } ) my_cmd = env.MyCmd('foo.foo',os.popen('which bash').read().strip()) This SConstruct file will produce the following output: scons: Reading SConscript files ... scons: done reading SConscript files. scons: Building targets ... my_action(["foo.foo"], ["/bin/bash"]) echo its a small world after all \ its a small world after all its a small world after all its a small world after all scons: done building targets. The other crucial piece is to remember that switching from a 'generator' to an 'action' means the target you're building no longer has an implicit dependency on the actual string that you are passing to the sub-process shell. You can re-create this dependency by adding the string into your environment. e.g., the solution that I personally want looks like: import os cmd = r'''echo its a small world after all \ its a small world after all''' def my_action(source, target, env): print cmd return os.system(cmd) my_cmd_builder = Builder( action=my_action, suffix = '.foo') env = Environment() env['_MY_CMD'] = cmd # <-- CREATE IMPLICIT DEPENDENCY ON CMD STRING env.Append( BUILDERS = {'MyCmd' : my_cmd_builder } ) my_cmd = env.MyCmd('foo.foo',os.popen('which bash').read().strip()) A: You are mixing two totally different things: the command to be executed, and its representation in the command line. 
By default, scons prints the command line, but if you split the command line, you are changing the commands executed. Now, scons has a mechanism to change the printed commands. They are registered per Action instances, and many default ones are available: env = Environment() env['CCCOMSTR'] = "CC $SOURCE" env['CXXCOMSTR'] = "CXX $SOURCE" env['LINKCOM'] = "LINK $SOURCE" Will print, assuming only C and CXX sources: CC foo.c CC bla.c CXX yo.cc LINK yo.o bla.o foo.o
unknown
d18837
test
If you only have a handful of result sets, it might be easiest to sort them in Java, using a Comparator. If you have to do it in Oracle you can use a statement like the following: select * // never do that in production from someTable where id in (10121005444, 206700013, 208700013, 30216118005, 30616118005) order by decode(id, 10121005444, 1, 206700013, 2, 208700013, 3, 30216118005, 4, 30616118005, 5) A: You can't specify the order using the IN clause. I think you have two options: * *perform the query using IN, and sort your result set upon receipt *issue a separate query for each specified id in order. This is obviously less efficient but a trivial implementation. A: you can use this query -- SELECT id FROM table WHERE id in (10121005444, 206700013, 208700013, 30216118005, 30616118005) ORDER BY FIND_IN_SET(id, "10121005444, 206700013, 208700013, 30216118005, 30616118005"); the second list defines the order in which you want your result set to be
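The first option, sorting client-side, can be sketched in Python as an illustrative analogue of the Java Comparator idea (the row tuples below are hypothetical stand-ins for a real result set):

```python
# Sort fetched rows client-side into the order of the id list used in the query.
ids = [10121005444, 206700013, 208700013, 30216118005, 30616118005]
rank = {id_: pos for pos, id_ in enumerate(ids)}  # id -> position in the IN list

# hypothetical result set, returned by the database in arbitrary order
rows = [(30616118005, "e"), (206700013, "b"), (10121005444, "a"),
        (30216118005, "d"), (208700013, "c")]

rows.sort(key=lambda row: rank[row[0]])
print([r[0] for r in rows])  # ids back in the requested order
```

Building the rank lookup once keeps the sort at O(n log n) instead of scanning the id list for every comparison.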
unknown
d18838
test
This is not an answer, but I'm facing a similar challenge. I don't know if you've sorted it out. I tried converting the data to JSON using JSON dump; with that, the whole params come out for each of the timers, and I see that in my console.log. The next challenge is to be able to pass all of them to the table through a loop. I was thinking, although it's not conventional, that it might work to create another table close to the main table, to move this timer out of the for loop entirely and work from there with it. This is the link to a solution that worked for me: https://stackoverflow.com/a/65218112/16174649 A: Thanks for the suggestion. I was unable to get it to work within the Django loop itself, so my solution was to separate and load each date object separately within the views file in Django and then just build a counter for each date by itself. It's probably not the most effective solution, but it worked as I only had 11 plus dates to build countdowns for. In the code example below I would simply update/+1 the number for the "countDownDate", "date" and "demo" fields until I created 11 scripts for my 11 items, so each item/section has its own countdown script. Below are the first and second scripts as an example.
<p id="demo"></p> <script> {{ let countDownDate = new Date({{ date1.dateinmodel|date:"U" }}*1000).getTime(); // Update the count down every 1 second let x = setInterval(function() { // Get today's date and time let now = new Date().getTime(); // Find the distance between now and the count down date let distance = countDownDate - now; // Time calculations for days, hours, minutes and seconds let days = Math.floor(distance / (1000 * 60 * 60 * 24)); let hours = Math.floor((distance % (1000 * 60 * 60 * 24)) / (1000 * 60 * 60)); let minutes = Math.floor((distance % (1000 * 60 * 60)) / (1000 * 60)); let seconds = Math.floor((distance % (1000 * 60)) / 1000); // Display the result in the element with id="demo" document.getElementById("demo").innerHTML ="T-" + days + "D " + hours + "H " + minutes + "M " + seconds + "S "; // If the count down is finished, write some text if (distance < 0) { clearInterval(x); document.getElementById("demo").innerHTML = "T- 0D 0H 0M 0S" ; } }, 1000); }} </script> <p id="demo2"></p> <script> {{ let countDownDate2 = new Date({{ date2.dateinmodel|date:"U" }}*1000).getTime(); // Update the count down every 1 second let x = setInterval(function() { // Get today's date and time let now = new Date().getTime(); // Find the distance between now and the count down date let distance = countDownDate2 - now; // Time calculations for days, hours, minutes and seconds let days = Math.floor(distance / (1000 * 60 * 60 * 24)); let hours = Math.floor((distance % (1000 * 60 * 60 * 24)) / (1000 * 60 * 60)); let minutes = Math.floor((distance % (1000 * 60 * 60)) / (1000 * 60)); let seconds = Math.floor((distance % (1000 * 60)) / 1000); // Display the result in the element with id="demo2" document.getElementById("demo2").innerHTML ="T- " + days + "D " + hours + "H " + minutes + "M " + seconds + "S "; // If the count down is finished, write some text if (distance < 0) { clearInterval(x); document.getElementById("demo2").innerHTML = "T- 0D 0H 0M 0S" ; } }, 1000); }} 
</script>
unknown
d18839
test
Use removeAll on the old list, with your sub-sample as the argument: private List<String> selectImages(List<String> images, Random rand, int num) { List<String> copy = new LinkedList<String>(images); Collections.shuffle(copy,rand); List<String> sample = copy.subList(0, num); images.removeAll(sample); return sample; }
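The same idea in Python, as an illustrative sketch only (the Java version above is the actual answer): random.sample draws without replacement, and the sampled items are then removed from the original list. Note that list.remove drops only the first occurrence, so this mirrors removeAll only when entries are unique.

```python
import random

def select_images(images, rng, num):
    """Pick num random images and remove them from images (in place)."""
    sample = rng.sample(images, num)       # draw without replacement
    for item in sample:
        images.remove(item)                # mirrors List.removeAll (unique entries assumed)
    return sample

images = ["a.png", "b.png", "c.png", "d.png"]
picked = select_images(images, random.Random(42), 2)
print(picked, images)  # the two lists are disjoint and together cover the original
```

Seeding the Random instance, as in the Java version, keeps the selection reproducible for testing.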
unknown
d18840
test
Please see fiddle and code below, JSON unmodified and it currently contains disconnection as seen in fiddle. Fiddle https://jsfiddle.net/shL7tjpa/2/ Code google.charts.load('current', {packages:["orgchart"]}); google.charts.setOnLoadCallback(drawChart); var members = [ { "BossId": "3", "DateOfBirth": "1966-09-27T00:00:00", "FamilyName": "Montejano", "Gender": "Unspecified", "GivenName": "Trinh", "Id": "08", "Title": "Tech Manager" }, { "BossId": "0", "DateOfBirth": "1927-01-29T00:00:00", "FamilyName": "Fetzer", "Gender": "Unspecified", "GivenName": "Winfred", "Id": "00", "Title": "CEO" }, { "BossId": "1", "DateOfBirth": "1927-08-20T00:00:00", "FamilyName": "Dandrea", "Gender": "Male", "GivenName": "Erich", "Id": "02", "Title": "VP of Marketing" }, { "BossId": "1", "DateOfBirth": "1929-02-07T00:00:00", "FamilyName": "Nisbet", "Gender": "Male", "GivenName": "Reinaldo", "Id": "03", "Title": "VP of Technology" }, { "BossId": "1", "DateOfBirth": "1932-06-13T00:00:00", "FamilyName": "Bufford", "Gender": "Unspecified", "GivenName": "Alleen", "Id": "04", "Title": "VP of HR" }, { "BossId": "2", "DateOfBirth": "1936-09-26T00:00:00", "FamilyName": "Klopfer", "Gender": "Female", "GivenName": "Kristyn", "Id": "05", "Title": "Director of Marketing" }, { "BossId": "1", "DateOfBirth": "1937-11-23T00:00:00", "FamilyName": "Duhon", "Gender": "Male", "GivenName": "Sophie", "Id": "01", "Title": "Tech Manager" }, { "BossId": "3", "DateOfBirth": "1948-04-05T00:00:00", "FamilyName": "Mirabal", "Gender": "Female", "GivenName": "Suanne", "Id": "07", "Title": "Tech Manager" }, { "BossId": "4", "DateOfBirth": "1966-10-13T00:00:00", "FamilyName": "Maslowski", "Gender": "Unspecified", "GivenName": "Norah", "Id": "09", "Title": "Tech Manager" }, { "BossId": "6", "DateOfBirth": "1967-08-25T00:00:00", "FamilyName": "Redford", "Gender": "Female", "GivenName": "Gertrudis", "Id": "10", "Title": "Tech Lead" }, { "BossId": "6", "DateOfBirth": "1968-12-26T00:00:00", "FamilyName": "Tobey", "Gender": 
"Male", "GivenName": "Donovan", "Id": "11", "Title": "Tech Lead" }, { "BossId": "9", "DateOfBirth": "1969-10-16T00:00:00", "FamilyName": "Vermeulen", "Gender": "Male", "GivenName": "Rich", "Id": "12", "Title": "Trainer Lead" }, { "BossId": "9", "DateOfBirth": "1972-10-16T00:00:00", "FamilyName": "Knupp", "Gender": "Male", "GivenName": "Santo", "Id": "13", "Title": "HR Manager" }, { "BossId": "12", "DateOfBirth": "1974-03-23T00:00:00", "FamilyName": "Grooms", "Gender": "Female", "GivenName": "Jazmin", "Id": "14", "Title": "Trainer" }, { "BossId": "13", "DateOfBirth": "1978-08-25T00:00:00", "FamilyName": "Cheeks", "Gender": "Female", "GivenName": "Annelle", "Id": "15", "Title": "Recruiter" }, { "BossId": "15", "DateOfBirth": "1979-08-21T00:00:00", "FamilyName": "Harshaw", "Gender": "Unspecified", "GivenName": "Eliza", "Id": "16", "Title": "Trainer" }, { "BossId": "8", "DateOfBirth": "1980-02-09T00:00:00", "FamilyName": "Broaddus", "Gender": "Unspecified", "GivenName": "Xiomara", "Id": "17", "Title": "Senior Software Developer" }, { "BossId": "11", "DateOfBirth": "1981-09-08T00:00:00", "FamilyName": "Jungers", "Gender": "Unspecified", "GivenName": "Erminia", "Id": "18", "Title": "Software Developer" }, { "BossId": "10", "DateOfBirth": "1984-03-18T00:00:00", "FamilyName": "Moffatt", "Gender": "Female", "GivenName": "Maria", "Id": "19", "Title": "Software Developer" }, { "BossId": "10", "DateOfBirth": "1990-09-24T00:00:00", "FamilyName": "Grimaldo", "Gender": "Female", "GivenName": "Tammera", "Id": "20", "Title": "Senior Software Developer" }, { "BossId": "10", "DateOfBirth": "1992-06-18T00:00:00", "FamilyName": "Das", "Gender": "Female", "GivenName": "Sharyl", "Id": "21", "Title": "Software Developer" }, { "BossId": "8", "DateOfBirth": "1993-11-15T00:00:00", "FamilyName": "Harlan", "Gender": "Unspecified", "GivenName": "Shan", "Id": "22", "Title": "UI Developer" }, { "BossId": "11", "DateOfBirth": "1997-03-23T00:00:00", "FamilyName": "Almeida", "Gender": "Female", 
"GivenName": "Mariah", "Id": "23", "Title": "QA Tester" }, { "BossId": "11", "DateOfBirth": "1998-11-10T00:00:00", "FamilyName": "Kerfien", "Gender": "Male", "GivenName": "Darnell", "Id": "24", "Title": "QA Tester" }, { "BossId": "11", "DateOfBirth": "2004-04-22T00:00:00", "FamilyName": "Vierra", "Gender": "Female", "GivenName": "Janell", "Id": "25", "Title": "QA Tester" } ]; function drawChart() { var data = new google.visualization.DataTable(); data.addColumn('string', 'Name'); data.addColumn('string', 'Manager'); data.addColumn('string', 'ToolTip'); $.each(members,function(idx, member){ // For each orgchart box, provide the name, manager, and tooltip to show. member = JSON.parse(JSON.stringify(member)); data.addRow( [{v: ""+parseInt(member.Id), f:member.GivenName+ ' ' + member.FamilyName+'<div style="color:red; font-style:italic">'+member.Title+'</div>'}, ""+parseInt(member.BossId), '']); }); // Create the chart. var chart = new google.visualization.OrgChart(document.getElementById('chart_div')); // Draw the chart, setting the allowHtml option to true for the tooltips. chart.draw(data, {allowHtml:true}); }
unknown
d18841
test
"Am I adding the driver correctly?" I don't think so. There is something very odd going on here. According to the official MySQL source code repo for Connector/J on GitHub, there is no com.mysql.jdbc.DocsConnectionPropsHelper class in Connector/J version 8.0.28. But this class does exist in Connector/J 5.1.x. So it looks like you must somehow be using Connector/J 5.x rather than 8.0.x as the IDE screenshot seems to suggest. Check the dependencies on the actual runtime classpath you are using to run this code. (Note that the screenshot shows the project's build classpath, not its runtime classpath.) Certainly you should not be mixing the two generations of driver. Calling RegisterDriver with the (8.0.x) com.mysql.cj.jdbc.Driver class when you have the 5.x driver on the classpath is wrong. A: You can try it like this: Class.forName("com.mysql.cj.jdbc.Driver").newInstance();
unknown
d18842
test
-[CodeTest setCancelThread:]: unrecognized selector. Means that you don't have a setter defined for the cancelThread property. You're missing @synthesize cancelThread; (in your @implementation section) A: What do you mean by "/* Std C99 code */"? If that code is really being compiled as C99 code, then self.cancelThread is problematic because it is an Objective-C expression. First, it is the equivalent of [self cancelThread], a method call, and secondly, it requires self, which wouldn't be present in the body of a C99 function. However, given that the code you showed has it in a method, the comment doesn't make sense.
unknown
d18843
test
You can get the ado.net driver (+ python, java, etc..) from this address: https://tools.hana.ondemand.com/#hanatools A: It is part of the SAP HANA client package. Go to https://service.sap.com/hana, login, then click "Software download", and select your edition. Proceed with "SAP HANA Client 1.00", not with the proposed "Studio".
unknown
d18844
test
I think the difference is negligible in most cases - your case might be an exception. Why don't you put together a simple prototype app to explicitly measure the performance of each solution? My guess is that it would not take more than an hour of work... However, think about clarity and maintainability of the code as well. IMHO it is obvious that the function pointer solution wins hands down in this respect. Update: Note also that even if one solution is, let's say, twice as fast as the other as it is, it still may not necessarily justify rewriting your code. You should profile your app first of all to determine how much of the execution time is actually spent in those switches. If it is 50% of total time, there is a reason to optimize it. If it is a couple of percents only, optimizing it would be a waste of effort. A: Péter Török has the right idea about trying both and timing them. You may not like it, but unfortunately this is the reality of the situation. The "premature optimization" chant happens for a reason. I'm always in favour of using performance best-practices right from the start as long as it doesn't sacrifice clarity. But in this kind of case it's not a clear win for either of the options you mentioned. In most cases, this kind of minor change will have no measurable effect. There will be a few major bottlenecks that completely govern the speed of the whole system. On modern computers a few instructions here and there will be basically invisible because the system is blocked by memory cache misses or by pipeline stalls and those can be hard to predict. For instance in your case, a single extra cache miss would likely decide which method would be slower. The real-world solution is to evaluate the performance of the whole system with a profiler. That will show you where your "hot spots" are in the code. 
The usual next step is to make algorithmic changes to reduce the need for that many calls into the hot code either through better algorithms or through caching. Only if a very tiny section of code is lighting up the profiler is it worth getting down to small micro-optimizations like this. At that point, you have to try different things and test the effect on speed. Without measuring the effect of the changes, you're just as likely to make it worse, even for an expert. All that being said, I would guess that function calls in your case might be very slightly faster if they have very few parameters, especially if the body of each case would be large. If the switch statement doesn't use a jump table, it would likely be slower. But it would probably vary a lot by compiler and machine architecture, so I wouldn't spend much time on it unless you have hard evidence later on that it is a bottleneck to your system. Programming is as much about refactoring as it is about writing fresh code. A: Switch statements are typically implemented with a jump table. I think the assembly can go down to a single instruction, which would make it pretty fast. The only way to be sure is to try it both ways. If you can't modify your existing code, why not just make a test app and try it there? A: Writing interpreters is fun, isn't it? I can guess that the function pointer might be quicker, but when you're optimizing, guesses won't take you very far. If you really want to suck every cycle out of that beast, it's going to take multiple passes, and guesses aren't going to tell you what to fix. Here's an example of what I mean.
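To make the jump-table idea above concrete, here is a small Python sketch of table-based dispatch versus an if/else chain (Python rather than C, purely to show the shape of the pattern; it says nothing about the relative performance of the C versions, which only measurement can settle):

```python
# Dispatch via a lookup table of functions, the same shape as a C array of
# function pointers (or the jump table a compiler builds for a dense switch).
def op_add(a, b): return a + b
def op_sub(a, b): return a - b
def op_mul(a, b): return a * b

DISPATCH = {0: op_add, 1: op_sub, 2: op_mul}

def run(opcode, a, b):
    return DISPATCH[opcode](a, b)   # one indexed lookup, then a call

# Equivalent if/else chain for comparison
def run_chain(opcode, a, b):
    if opcode == 0:
        return a + b
    elif opcode == 1:
        return a - b
    elif opcode == 2:
        return a * b
    raise ValueError(opcode)

print(run(2, 6, 7), run_chain(2, 6, 7))  # 42 42
```

Either form can be timed with a profiler on realistic opcode streams, which is exactly the measurement the answers above recommend before committing to one.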
unknown
d18845
test
Make sure the compiled output file is named example.pyd (or has a symlink of that name pointing to it), and try running python from the same directory. Update: How to build a .pyd in Visual Studio On Windows, compiled Python modules are simply DLL files, but they have a .pyd file extension. You mentioned that your C++ file compiles successfully. Did you compile it as an executable (.exe), or as a .dll? You should compile it as a DLL, but change the file extension to .pyd. The Visual Studio documentation explains how to change your project to create a DLL. Here's what it says: * *Open the project's Property Pages dialog box. For details, see Set C++ compiler and build properties in Visual Studio. *Click the Configuration Properties folder. *Click the General property page. *Modify the Configuration Type property. Also, on that same settings page, you can find an option to change the Target Extension property. Change it to .pyd. (Or simply rename the file yourself after it is built.) Update 2 I think you need to change three settings: * *Target Name * *Change to example *Target Extension * *Change to .pyd *Configuration Type * *Change to Dynamic Library (.dll) Also, I recommend deleting (or commenting out) everything from example.cpp except for the code shown below. (I don't know if the presence of a main() function may cause problems, so just remove it.) After that, building your project should produce the following file: C:\Users\rmili\source\repos\ConsoleApplication5\x64\Debug\example.pyd Then, from the Spyder console, try this: import os d = "C:\\Users\\rmili\\source\\repos\\ConsoleApplication5\\x64\\Debug" os.chdir(d) import example example.add(1,2) I don't have a Windows machine to test with. But in case it's useful, here's how I compiled your example on my Mac. (On Mac and Linux, they use the extension .so instead of .pyd.)
// example.cpp #include <pybind11/pybind11.h> int add(int i, int j) { return i + j; } PYBIND11_MODULE(example, m) { m.doc() = "pybind11 example plugin"; m.def("add", &add, "A function which adds two numbers"); } $ # Compile $ clang++ -I${CONDA_PREFIX}/include -I${CONDA_PREFIX}/include/python3.7m -undefined dynamic_lookup -shared -o example.so example.cpp $ # Test $ python -c "import example; print(example.add(10,20))" 30 A: I have found the answer to my problem: * *Make sure all steps I described previously in my post are done *this is what I missed - It is important to make sure the file type of "example" is Python Extension Module, as shown in the following screenshot. As shown in the screenshots of my Updates, initially the Type of my "example.pyd" file was just "File". I managed to convert it to Python Extension Module by adding "cp35-win_amd64." in the file extension, resulting in the file name "examplelib.cp35-win_amd64.pyd", and then removing the same text that was added.
unknown
d18846
test
Your code creates a new empty EntityManager with each query and save operation. Instead, you should create a single EntityManager in your TestService, and use it for all query and save operations. var services = function (http) { breeze.config.initializeAdapterInstance("modelLibrary", "backingStore"); var dataService = new breeze.DataService({ serviceName: 'breeze/Zza', hasServerMetadata: false }); var manager = new breeze.EntityManager( { dataService: dataService }); this.getBybreeze = function (successed) { var entityQuery = breeze.EntityQuery; return entityQuery.from('Customers').using(manager).execute().then(successed); } this.saveByBreeze = function () { manager.saveChanges().catch(function (error) { alert("Failed save to server: " + error.message); }); } } services.$inject = ["$http"]; app.service("TestService", services);
unknown
d18847
test
I have investigated this problem a bit and I can see two solutions: Easy way Use a private boolean variable instead of using the IsEnabled property. Then when the clicked event is handled, check the switch's status with the variable. Hard way Override the Switch control to get this behaviour. You can follow this guide. A: As seen from the Switch for Xamarin.Forms, there is currently no option for specifying the color. Consequently, you will have to create your own renderer for both iOS and Android. Android Essentially, what you need to do is to override the OnElementChanged and OnCheckChanged events and check the Checked property of the Control. Afterwards, you can set the ThumbDrawable of the Control and execute SetColorFilter with the color you want to apply. Doing so, however, requires Android 5.1+. An example is given here. You could also utilise the StateListDrawable and add new ColorDrawables for StateChecked, StateEnabled and, lastly, a default value as well. This is supported for Android 4.1+. An example of using the StateListDrawable is provided in detail here. iOS iOS is somewhat simpler. Simply create your own SwitchRenderer, override the OnElementChanged event and set the OnTintColor and ThumbTintColor. I hope this helps. A: You need to make a custom renderer for your control, in this case a switch, like this, and set the color when it changes: class CustomSwitchRenderer : SwitchRenderer { protected override void OnElementChanged(ElementChangedEventArgs<Xamarin.Forms.Switch> e) { base.OnElementChanged(e); this.Control.ThumbDrawable.SetColorFilter(this.Control.Checked ? Color.DarkGreen : Color.Red, PorterDuff.Mode.SrcAtop); this.Control.TrackDrawable.SetColorFilter(this.Control.Checked ? Color.Green : Color.Red, PorterDuff.Mode.SrcAtop); this.Control.CheckedChange += this.OnCheckedChange; } private void OnCheckedChange(object sender, CompoundButton.CheckedChangeEventArgs e) { this.Control.ThumbDrawable.SetColorFilter(this.Control.Checked ?
Color.DarkGreen : Color.Red, PorterDuff.Mode.SrcAtop); this.Control.TrackDrawable.SetColorFilter(this.Control.Checked ? Color.Green : Color.Red, PorterDuff.Mode.SrcAtop); } }
unknown
d18848
test
So to view the image before the upload, you need to first use FileReader, which is a JavaScript object that lets web applications asynchronously read the contents of files. This will give you a base64 version of your image that you can add to your image src. So you can do this in your onChange function and it would look something like the following: onChange = (e) => { switch(e.target.name) { case 'imageUrl': const file = e.target.files[0]; const reader = new FileReader(); reader.onload = () => { this.setState({ imageUrl: file, imgBase64: reader.result }) }; reader.readAsDataURL(file); break; default: this.setState({[e.target.name]: e.target.value}) } } In this example I'm adding the imgBase64 key to the state and adding the base64 value to it, but you can use whatever name you like; just be sure to add it to your state object. Then to view it you can use this value as the image src like so: <img src={this.state.imgBase64}/> After that, when you submit your image to multer, you need to return the image's filename so you can access it after you upload it. So in your route, since it seems like you're getting the correct filename back when you log it, you can return it to your axios call and use it after that. So in your route just send back a json object and use it.
So instead of res.status(201).json(new dish added) send something like the following: res.status(201).json({ msg: 'new dish added', imageUrl: req.file.filename }) and then your axios call will receive this json object and you can access in your frontend afterwards like so: createDishHandler = (event) => { event.preventDefault() const fd = new FormData(); fd.append('name', this.state.name) fd.append('imageUrl', this.state.imageUrl, this.state.imageUrl.name) fd.append('description', this.state.description) axios.post('http://localhost:8080/add-new-dish', fd,) .then(res => { //you can set your res.data.imageUrl to your state here to use it console.log('Success Message: ' + res.data.msg + ' - Image File Name: ' + res.data.imageUrl) }) this.props.history.push('/') } But in the above function I see that you push do a different page when uploading. So if you push to a different page then you obviously aren't going to be using this state anymore so you will need probably need to just retrieve it from your database at that point from the new page. Anyway I hope this helps and if you have any questions let me know. P.S. I just wanted to let you know you may want to use Date.now() + path.extname(file.originalname) instead of new Date().toISOString() + '-' + path.extname(file.originalname) in you multer upload it looks a little cleaner without all of the colons and dashes but it's not necessary. Edit: So if you are going serve up a static folder with express then like I said in previous comments below you are going to have to use absolute urls to access your content. React cannot access anything with relative paths outside of the public folder in your client. 
So if in your backends root you are going to have a folder named images then in multer you would set the destination to 'images' with express you would serve up the static folder app.use(express.static('images')) and to access this with your image you would need to use an absolute url <img src={ `http://localhost:8080/${imageUrl}`} alt={dish.name}/>
unknown
d18849
test
Instead of .Text, try .Value; it should insert the currently selected value into the cell after running the macro. You can read some more about this here.
unknown
d18850
test
If you would like to get the smallest value from all files, you will have to sort all their content at once. The command currently sorts file by file, so you get the smallest value in the first sorted file. Check the difference between find "$d" -type f -name 'mod*' -exec sort -k4 -g {} + and find "$d" -type f -name 'mod*' -exec sort -k4 -g {} \; Also, it is recommended to use -n instead of -g unless you really need to. Check the --general-numeric-sort section of info coreutils 'sort invocation' for more details on why. Edit: Just checked the link to your previous question, and I see now that you need to use --general-numeric-sort That said, here's a way to get the corresponding filename into the lines, so that you have it in the output: find "$d" -type f -name 'mod*' -exec awk '{print $0, FILENAME}' {} \;|sort -k4 -g |head -1 >> "$resultfile" Essentially, awk is invoked for each file separately. Awk prints each line of the file, appending the corresponding file name to it. Then all those lines are passed for sorting.
unknown
d18851
test
You can use isinstance(angle, list) to check if it is a list. But it won't help you achieve what you really want to do. The following code will help you with that. question = """Please enter the angle you want to convert. If you wish to convert degrees to radians or vice-versa. Follow this format: 'angle/D or R' """ while 1: angle=input(question).split('/') if not isinstance(angle, list): break # This will never happen # It will never happen because string.split() always returns a list # Instead you should use something like this: if len(angle) != 2 or angle[1] not in ['D', 'R']: break try: angle[0]=float(angle[0]) except ValueError: break if (angle[0]>=0 and angle[0]<=360) and angle[1] == 'D': # You could also improve this by taking modulo 360 of the angle. print((angle[0]*np.pi)/180, 'radians') else: # Just an else is enough because we already checked that angle[1] is either D or R print((angle[0]*180)/np.pi, 'degrees') A: What you want: if not isinstance(angle, list): break What you've done: if angle is not list():break will always evaluate to True, as no object will ever have the same identity as the list list(), since is is a check for identity. Even this: >>> list() is not list() True A: break statements are used to get out of for and while loops. Try using a while loop after the input statement to evaluate the input. Use a possible set as a conditional. You do not need to break from an if statement because it will just be bypassed if the conditional is not met. Sometimes you might see an if statement followed by a break statement. The break statement, however, is not breaking from the if statement. It is breaking from a previous for or while loop.
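To make the identity-versus-equality point in the second answer concrete, here is a small standalone sketch (plain standard-library Python; the sample values are hypothetical):

```python
# `is` checks object identity; `==` checks value equality.
a = [1, 2]
b = [1, 2]

print(a == b)            # True  - same contents
print(a is b)            # False - two distinct list objects
print(a is not list())   # True for ANY object, since list() is a brand-new list

# The right way to ask "is this a list?" is isinstance:
print(isinstance(a, list))        # True
print(isinstance("90/D", list))   # False - a raw input string is not a list
```

This is exactly why the `if angle is not list(): break` check never fires: the freshly created `list()` can never be the same object as `angle`.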
unknown
d18852
test
Rewrite your OR condition as a UNION query instead of using OR: SELECT * FROM current_users JOIN current_users um_location ON current_users.id = um_location.id WHERE um_location.meta_key = 'location' AND um_location.meta_value = 'France' UNION SELECT * FROM current_users JOIN current_users um_location ON current_users.id = um_location.id WHERE current_users.id NOT IN (SELECT current_users.id FROM current_users WHERE current_users.meta_key='location' ) A: This should be the most succinct possible: SELECT * FROM current_users LEFT JOIN current_users AS um_location ON current_users.id = um_location.id AND um_location.meta_key = 'location' WHERE um_location.meta_value = 'France' OR um_location.meta_value IS NULL ; ...but as isaace's answer hints, MySQL does not handle OR conditions ideally. So, this might perform better: SELECT * FROM current_users LEFT JOIN current_users AS um_location ON current_users.id = um_location.id AND um_location.meta_key = 'location' WHERE um_location.meta_value = 'France' UNION SELECT * FROM current_users LEFT JOIN current_users AS um_location ON current_users.id = um_location.id AND um_location.meta_key = 'location' WHERE um_location.meta_value IS NULL ; ...but it will probably only make a difference if you have meta_value indexed, as the issue is that MySQL tends to ignore indexes when presented with OR. A: First off, thank you both for your suggestions.
Unfortunately I couldn't get them to work as the join LEFT JOIN current_users AS um_location ON current_users.id = um_location.id AND um_location.meta_key = 'location' did not return the records of those users who hadn't yet entered their location. However, after a lot of work I managed to solve the issue. It's not elegant but it works: * *Create a temporary table with a subset of the users who fulfil some basic criteria. *Do a simple INSERT IGNORE INTO ... SELECT DISTINCT ... FROM current_users adding in the missing 'location' with dummy information. Then the final query includes this: um_location.meta_value = 'France' or um_location.meta_value = 'dummyinfo' It's not blisteringly fast but since this is a process which acts in the back end for admin users only they can afford to wait... ;) Again, thanks for your help!
unknown
d18853
test
You have a string iden; convert it to an int like this: int devId = Convert.ToInt32(iden);
unknown
d18854
test
Yes, e.g. UPDATE table1 t1 JOIN table2 t2 ON t2.id = t1.id -- Your keys. SET t1.column = '...', t2.column = '...' -- Your Updates WHERE ... -- Your conditional
unknown
d18855
test
You do not have to install WSO2 Private PaaS to have API Manager. From the WSO2 site, you can download the stand-alone API Manager or use the hosted version - WSO2 API Cloud. Once you are familiar with API Manager or API Cloud, if you do decide that you want to spawn it within Private PaaS, you can follow this document to locate the corresponding cartridge and subscribe to it. A: Before getting started with WSO2 Private PaaS, it is worth watching these well-organized, 6-step video tutorials on "Getting Started With WSO2 Private PaaS". After watching these, you will have a basic understanding; from there you can move forward without any issues. Step 0 - Finding the Documentation Step 1 - Provisioning an EC2 Instance from WSO2 Private PaaS AMI Step 2 - Provisioning the WSO2 Private Platform as a Service Environment Step 3 - Configuring Private Platform as a Service with Tenants Step 4 - Provisioning Application Platform Services Step 5 - Deploying a Web Application on a Private Platform as a Service Environment running on Amazon EC2
unknown
d18856
test
How to include Moment.js in an Adobe LiveCycle Form: * *Download the minified script *In LiveCycle Designer open your form and create a Script Object called MOMENTJSMIN *Copy the minified script into that Script Object *In the Script Editor window of LiveCycle Designer, edit the MOMENTJSMIN Script Object in the following manner: *Remove all the script up to but not including the second curly brace { : !function(a,b){"object"==typeof exports&&"undefined"!=typeof module?module.exports=b():"function"==typeof define&&define.amd?define(b):a.moment=b()}(this,function() *Remove the rounded parenthesis and semicolon from the end of the minified script *Add this line to the beginning of the minified script: if (xfa.momentjs == undefined) xfa.momentjs = function() *In the MOMENTJSMIN Script Object add this function after the end of the script: function getMomentJS(){ return xfa.momentjs(); } Now your MOMENTJSMIN Script Object is set up to provide Moment.js to scripts throughout your form. To use Moment.js in any of your scripts, start your script object or event script with this line: var moment = MOMENTJSMIN.getMomentJS(); Now you can use moment() anywhere in the script that starts with that line. E.g.: var moment = MOMENTJSMIN.getMomentJS(); var jan07 = moment([2007, 0, 29]); app.alert(moment().format("dddd, MMMM Do YYYY, h:mm:ss a")); app.alert(jan07.format("dddd, MMMM Do YYYY") + " was " + jan07.fromNow()); app.alert(moment.isDate(new Date())); A: What I would check first: * *Make sure your script is fully loaded before trying to invoke functions from it. (Check the event where you call the function: calculate, form:ready, etc.) *Check the script referencing. Right path? Right name? *Check if the function really exists *Check function parameters.
unknown
d18857
test
I don't know what Supabase does, but the problem seems to be Promise-related. Try to change your app.js as follows: const { supabase } = require('./client') function signOut() { /* sign the user out */ supabase.auth.signOut(); } function signInWithGithub() { /* authenticate with GitHub */ return supabase.auth.signIn({ provider: 'github' }); } function printUser() { const user = supabase.auth.user() console.log(user.email) } signInWithGithub().then(() => { printUser() return signOut() })
unknown
d18858
test
The problem here is that you need to upload the font into /i/: @font-face { font-family: "Pacifico"; src: url("http://localhost:8080/i/Pacifico.ttf"); } body { font-family: "Pacifico", serif; font-weight: 300 !important; line-height: 25px !important; font-size: 14px !important; } I don't know why Apex is not resolving the #WORKSPACE_IMAGES#, but you can upload the font to the web server. In my case I'm using Tomcat.
unknown
d18859
test
Error in this line adapter.setClickListener((PessoaAdapter.ItemClickListener) this); You should do like this * *Let class RecyclerViewActivity implement interface PessoaAdapter.ItemClickListener public class RecyclerViewActivity extends AppCompatActivity implements PessoaAdapter.ItemClickListener{ *Add method to class RecyclerViewActivity void onItemClick(View view, int position){ your code } *You don't need to cast anymore adapter.setClickListener(this); Read this https://docs.oracle.com/javase/tutorial/java/IandI/usinginterface.html A: You need to move out the interface out of the PessoaAdapter class. Create a new class ItemClickListener.java like the following. package com.person.bernardo.myperson; import android.view.View; public interface ItemClickListener { void onItemClick(View view, int position); } Now in your RecyclerViewActivity implement the ItemClickListener like the following and while setting the adapter to the RecyclerView, you set the listener along with the adapter. package com.person.bernardo.myperson; import android.content.Intent; import android.os.Bundle; import android.support.v7.app.AppCompatActivity; import android.support.v7.widget.LinearLayoutManager; import android.support.v7.widget.RecyclerView; import android.view.View; import java.util.ArrayList; import java.util.List; public class RecyclerViewActivity extends AppCompatActivity implements ItemClickListener { PessoaAdapter adapter; Pessoa contato; private List<Pessoa> movieList = new ArrayList<>(); private RecyclerView mRecyclerView; private RecyclerView.Adapter mAdapter; private RecyclerView.LayoutManager mLayoutManager; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.recycler_view_activity); Intent intent = getIntent(); contato = (Pessoa) intent.getSerializableExtra("pessoa2"); ArrayList<String> animalNames = new ArrayList<>(); animalNames.add(contato.getNome()); animalNames.add(contato.getTelefone()); 
animalNames.add(contato.getEmail()); animalNames.add(contato.getEndereco()); animalNames.add(contato.getFacebook()); animalNames.add(contato.getLinkedIn()); animalNames.add(contato.getInstagram()); animalNames.add(contato.getYoutube()); animalNames.add(contato.getSpotify()); // set up the RecyclerView RecyclerView recyclerView = (RecyclerView) findViewById(R.id.listView); recyclerView.setLayoutManager(new LinearLayoutManager(this)); // Error in the following line adapter = new PessoaAdapter(this, animalNames); adapter.setClickListener(this); recyclerView.setAdapter(adapter); } @Override public void onItemClick(View view, int position) { // Do something based on the item click in the RecyclerView } } And the modified adapter will look like the following. package com.person.bernardo.myperson; import android.content.Context; import android.support.v7.widget.RecyclerView; import android.view.LayoutInflater; import android.view.View; import android.view.ViewGroup; import android.widget.TextView; import java.util.List; public class PessoaAdapter extends RecyclerView.Adapter<PessoaAdapter.ViewHolder> { private List<String> mData; private LayoutInflater mInflater; private ItemClickListener mClickListener; // data is passed into the constructor PessoaAdapter(Context context, List<String> data) { this.mInflater = LayoutInflater.from(context); this.mData = data; } // inflates the row layout from xml when needed @Override public ViewHolder onCreateViewHolder(ViewGroup parent, int viewType) { View view = mInflater.inflate(R.layout.recyclerview_row, parent, false); return new ViewHolder(view); } // binds the data to the TextView in each row @Override public void onBindViewHolder(ViewHolder holder, int position) { String animal = mData.get(position); holder.myTextView.setText(animal); } // total number of rows @Override public int getItemCount() { return mData.size(); } // convenience method for getting data at click position String getItem(int id) { return mData.get(id); } // 
allows clicks events to be caught void setClickListener(ItemClickListener itemClickListener) { this.mClickListener = itemClickListener; } // stores and recycles views as they are scrolled off screen public class ViewHolder extends RecyclerView.ViewHolder implements View.OnClickListener { TextView myTextView; ViewHolder(View itemView) { super(itemView); myTextView = itemView.findViewById(R.id.show_nome); itemView.setOnClickListener(this); } @Override public void onClick(View view) { if (mClickListener != null) mClickListener.onItemClick(view, getAdapterPosition()); } } } I have downloaded your code from github and modified the code in a branch of mine which is locally created. Please check the pull request in the fix branch in your github repository. Hope that helps!
unknown
d18860
test
Call the flush() method after you set the cell value to wait. SpreadsheetApp.flush(); // Applies all pending Spreadsheet changes.
unknown
d18861
test
Yes, there are some examples of using pure Java, no J2EE: http://bastide.org/2014/01/28/how-to-develop-a-simple-java-integration-with-the-ibm-social-business-toolkit-sdk/ and https://github.com/OpenNTF/SocialSDK/blob/master/samples/java/sbt.sample.app/src/com/ibm/sbt/sample/app/BlogServiceApp.java Essentially, you'll need the dependent jar files. Once you have the jar files, you need to configure your class so you can get an Endpoint. Once you have the Endpoint, you can use it in one of the top-level services, such as ForumsService. Then you can use the ForumsService to call back to Connections.
unknown
d18862
test
When the size of the buffer becomes too small, then I call glBufferData again increasing the size of the buffer, and copying back existing points to it. Not a bad idea. In fact that's the recommended way of doing these things. But don't make the chunks too small. Ideally, I would prefer to avoid storing the point data in the computer memory and keep everything in the GPU memory. That's not how OpenGL works. The contents of a buffer object can be freely swapped between CPU and GPU memory as needed. But when I would resize the buffer, I would have to copy the data back from the buffer to the CPU, then resize the buffer, and finally copy the data back to the buffer from the CPU. All this, also seems inefficient. Correct. You want to avoid copies between OpenGL and the host program. That's why there is in OpenGL-3.1 and later the function glCopyBufferSubData to copy data between buffers. When you need to resize a buffer you can as well create a new buffer object and copy from the old to the new one^1. [1]: maybe you can also do resizing copies within the same buffer object name, by exploiting name orphaning; but I'd first have to read the specs to see if this is actually defined, and then cross fingers that all implementations get this right. A: I made a program for scientific graphing before, that could add new data points in real-time. What I did was create a fairly large fixed size buffer with flag GL_DYNAMIC_DRAW, and added individual points to it with glBufferSubData. Once it filled, I created a new buffer with flag GL_STATIC_DRAW and moved all the data there, then started filling the GL_DYNAMIC_DRAW buffer again from the beginning. So I ended up with a small number of static buffers, one dynamic buffer, and since they were all equal size (with monotonically increasing x coordinates) calculating which buffers to use to draw any given segment of the data was easy.
And I never had to resize any of them, just keep track of how much of the dynamic buffer was used and only draw that many vertices from it. I don't think I used glCopyBufferSubData as datenwolf suggests, I kept a copy in CPU memory of the data in the dynamic buffer, until I could flush it to a new static buffer. But GPU->GPU copy would be better. I still would allocate more chunk-sized buffers and avoid resizing.
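The chunking scheme from the second answer is independent of OpenGL itself; the following plain-Python model (a sketch with a hypothetical chunk size, and ordinary lists standing in for GL buffer objects) shows only the bookkeeping involved, not any actual GL calls:

```python
CHUNK = 4  # hypothetical chunk capacity, in vertices

class ChunkedBuffer:
    """One mutable 'dynamic' chunk that is sealed into an immutable
    'static' chunk once it fills up, mirroring the GL_DYNAMIC_DRAW /
    GL_STATIC_DRAW scheme described above."""

    def __init__(self):
        self.static_chunks = []  # stand-ins for the GL_STATIC_DRAW buffers
        self.dynamic = []        # stand-in for the GL_DYNAMIC_DRAW buffer

    def append(self, vertex):
        self.dynamic.append(vertex)      # the glBufferSubData step
        if len(self.dynamic) == CHUNK:   # chunk full: seal it as static
            self.static_chunks.append(tuple(self.dynamic))
            self.dynamic = []

    def all_vertices(self):
        # Drawing walks the sealed chunks, then the partial dynamic one.
        for chunk in self.static_chunks:
            yield from chunk
        yield from self.dynamic

buf = ChunkedBuffer()
for v in range(10):
    buf.append(v)

print(len(buf.static_chunks))    # 2 sealed chunks of 4
print(buf.dynamic)               # [8, 9] still in the dynamic chunk
print(list(buf.all_vertices()))  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```

Since every chunk has the same capacity, locating the chunk for vertex i is a simple divmod, which is why the author found segment lookup easy.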
unknown
d18863
test
Look at the URL http://www.noncode.org/show_rna.php?id=NONHSAT000002 The search is just passed as a get parameter. So to access the site, just set the start URL to something like: import requests from bs4 import * id = "NONHSAT146018" page = requests.get("http://www.noncode.org/show_rna.php?id=" + id) soup = BeautifulSoup(page.content, "html.parser") element = soup.findAll('table', class_="table-1")[1] element2 = element.findAll('tr')[1] element3 = element2.findNext('td') your_data = str(element3.renderContents(), "utf-8") print(your_data)
unknown
d18864
test
Try this- from gensim import corpora import gensim from gensim.models.ldamodel import LdaModel from gensim.parsing.preprocessing import STOPWORDS # example docs doc1 = """ Java (Indonesian: Jawa; Javanese: ꦗꦮ; Sundanese: ᮏᮝ) is an island of Indonesia.\ With a population of over 141 million (the island itself) or 145 million (the \ administrative region), Java is home to 56.7 percent of the Indonesian population \ and is the most populous island on Earth.[1] The Indonesian capital city, Jakarta, \ is located on western Java. Much of Indonesian history took place on Java. It was \ the center of powerful Hindu-Buddhist empires, the Islamic sultanates, and the core \ of the colonial Dutch East Indies. Java was also the center of the Indonesian struggle \ for independence during the 1930s and 1940s. Java dominates Indonesia politically, \ economically and culturally. """ doc2 = """ Hydrogen fuel is a zero-emission fuel when burned with oxygen, if one considers water \ not to be an emission. It often uses electrochemical cells, or combustion in internal \ engines, to power vehicles and electric devices. It is also used in the propulsion of \ spacecraft and might potentially be mass-produced and commercialized for passenger vehicles \ and aircraft.Hydrogen lies in the first group and first period in the periodic table, i.e. \ it is the first element on the periodic table, making it the lightest element. Since \ hydrogen gas is so light, it rises in the atmosphere and is therefore rarely found in \ its pure form, H2.""" doc3 = """ The giraffe (Giraffa) is a genus of African even-toed ungulate mammals, the tallest living \ terrestrial animals and the largest ruminants. The genus currently consists of one species, \ Giraffa camelopardalis, the type species. Seven other species are extinct, prehistoric \ species known from fossils. 
Taxonomic classifications of one to eight extant giraffe species\ have been described, based upon research into the mitochondrial and nuclear DNA, as well \ as morphological measurements of Giraffa, but the IUCN currently recognizes only one \ species with nine subspecies. """ documents = [doc1, doc2, doc3] document_wrd_splt = [[word for word in document.lower().split() if word not in STOPWORDS] \ for document in documents] dictionary = corpora.Dictionary(document_wrd_splt) print(dictionary.token2id) corpus = [dictionary.doc2bow(text) for text in document_wrd_splt] lda = LdaModel(corpus, num_topics=3, id2word = dictionary, passes=50) num_topics = 3 topic_words = [] for i in range(num_topics): tt = lda.get_topic_terms(i,20) topic_words.append([dictionary[pair[0]] for pair in tt]) # output >>> topic_words[0] ['indonesian', 'java', 'species', 'island', 'population', 'million', '(the', 'java.', 'center', 'giraffe', 'currently', 'genus', 'city,', 'economically', 'administrative', 'east', 'sundanese:', 'itself)', 'took', '1940s.']
unknown
d18865
test
Try this... import time a = 100 b = 1 while a > b: print("Please select an operation. ") print("1. Multiply") print("2. Divide") print("3. Subtract") print("4. Add") b = 200 x = int(input("Please enter your operation number. ")) if x == 4: y = int(input("Please enter your first number. ")) z = int(input("Please enter your second number. ")) finaladd = (y+z) print(finaladd) c = input("Would you like to do another calculation? ") if c == 'yes' or c == 'Yes': a = 500 elif c == 'no' or c == 'No': b = 1000 elif x == 2: y2 = int(input("Please enter your first number. ")) z2 = int(input("Please enter your second number. ")) finaladd2 = (y2/z2) print(finaladd2) elif x == 3: y3 = int(input("Please enter your first number. ")) z3 = int(input("Please enter your second number. ")) finaladd3 = (y3-z3) print(finaladd3) elif x == 1: y4 = int(input("Please enter your first number. ")) z4 = int(input("Please enter your second number. ")) finaladd4 = (y4*z4) print(finaladd4)
unknown
d18866
test
dates = [] for line in f: dataItem = line.split() # split by whitespace by default, returning a list date = dataItem[0] # index 0 is the first element of the dataItem list dates.append(date) f.close() In summary, you need to split the line string first, then choose the date
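As a quick illustration of what the split does (the sample line below is hypothetical; substitute your file's actual layout):

```python
line = "2014-05-01 09:30:00 42.7\n"  # hypothetical record: date, time, value

data_item = line.split()  # splits on any whitespace, dropping the trailing newline
print(data_item)          # ['2014-05-01', '09:30:00', '42.7']
print(data_item[0])       # '2014-05-01' - just the date field
```

Indexing the resulting list with [0] is what isolates the date before appending it.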
unknown
d18867
test
The smartest thing to do would be to implement a solve function like Stanislav recommended. You can't just iterate over values of x until the equation reaches 0, due to floating point arithmetic. You would have to .floor or .ceil your value to avoid an infinite loop. An example of this would be something like: x = 0 while True: x += 0.1 print(x) if x == 10: break Here you'd think that x eventually reaches 10 when it adds 0.1 to 9.9, but this will continue forever. Now, I don't know if your values are integers or floats, but what I'm getting at is: don't iterate. Use already-built solve libraries.
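To see the floating-point issue and a safe alternative in one place, here is a minimal sketch using only the standard library:

```python
import math

# Accumulating 0.1 a hundred times does NOT give exactly 10.0,
# because 0.1 has no exact binary representation.
x = 0.0
for _ in range(100):
    x += 0.1

print(x == 10.0)              # False - exact comparison fails
print(math.isclose(x, 10.0))  # True  - tolerance-based comparison succeeds
```

So any termination check against a float target should use a tolerance (or iterate over integers and scale), never `==`.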
unknown
d18868
test
Combine the two checks into a single if statement: @if (! empty($variable) && $variable == 'yes') do something here @endif A: If you are looking for a one-line condition, this will work as expected: {{!empty($variable) ? $variable == 'yes' ? 'do something' : 'do something else' : 'variable is empty'}}
unknown
d18869
test
You can do it like this: $(function() { $( "#orangeBox" ).resizable({ handles:'n,e,s,w' //top, right, bottom, left }); }); working example Link in JSFiddle : http://jsfiddle.net/sandenay/hyq86sva/1/ Edit: If you want it to happen on a button click: $(".btn").on("click", function(){ var height = $("#orangeBox").offset().top; // get position from top console.log(height); $("#orangeBox").animate({ height: "+="+height+'px' }, 1000); }); and add this in your HTML: <button class="btn">Click here to resize</button> Working fiddle: http://jsfiddle.net/sandenay/hyq86sva/4/ A: Try this in your script : $(function() { $( "#orangeBox" ).resizable({ handles: 'n, e, s, w' }); }); Hope this helps !!
unknown
d18870
test
To be honest, I don't know the exact answer, but this may help you. First off, if you call IORegisterForSystemPower, you need to make two calls in this order: - Call IODeregisterForSystemPower with the 'notifier' argument returned here. - Then call IONotificationPortDestroy passing the 'thePortRef' argument returned here (please visit Apple's documentation for more detail). In the case of port binding, if I use CFSocketSetAddress, no one else can use this port for binding before this socket is released. But if the app terminates or is closed without releasing the socket, the port becomes available. That means the system automatically releases it after the app is terminated. Does the app get de-registered automatically when it dies in any case? I think it will be automatically de-registered by the system. I also used code similar to yours in one of my projects, but recently replaced it with the code below: [[[NSWorkspace sharedWorkspace] notificationCenter] addObserver: self selector: @selector(receiveWakeNotification:) name: NSWorkspaceDidWakeNotification object: nil]; [[[NSWorkspace sharedWorkspace] notificationCenter] addObserver: self selector: @selector(receiveSleepNotification:) name: NSWorkspaceWillSleepNotification object: nil];
unknown
d18871
test
If we are talking about C++11 - the only way is to use std::chrono. The Google style guide is not some kind of final authority here (sometimes it is highly questionable). std::chrono is proven to be good and stable, and is even used in game engines in AAA games, see for yourself HERE. A good example of exactly what you need is available HERE If you still don't want it, there is no other C++11 way to do it, but you probably want to look at a C-style measurement, like HERE. Just for your information - all methods use the system API, and so <time.h> is included; there is no way to avoid it. A: For embedded systems there is a common practice to change GPIO state in your code and then hook an oscilloscope to the pin and look at the resulting waveform. This has minimal impact on runtime because changing GPIO is a cheap operation. It does not require any libraries or additional code. But it requires additional hardware. Note: embedded is quite a stretchable notion. The GPIO trick is more relevant for microcontrollers. A: Is the time measurement necessary in the final product? If not, I'd suggest using whatever you like and using a switch to not compile these measurement routines into the final product. A: Something like this, you mean? (for the Windows platform) #include <Windows.h> class stopwatch { double PCFreq = 0.0; double CounterStart = 0; //google style compliant, less accurate //__int64 CounterStart = 0; //for accuracy LARGE_INTEGER li; public: void StartCounter() { if (!QueryPerformanceFrequency(&li))ExitProcess(0); PCFreq = double(li.QuadPart) / 1000.0; QueryPerformanceCounter(&li); CounterStart = li.QuadPart; } double GetCounter() { QueryPerformanceCounter(&li); return double(li.QuadPart - CounterStart) / PCFreq; } };
unknown
d18872
test
RewriteRule ^/?([a-z][a-z])/p([0-9]+)/?$ But it doesn't check if the language is a valid one (and I don't know if you expect language codes with more than 2 characters, if such a thing exists) EDIT: this regexp is shorter: RewriteRule ^([a-z]{2})/p\d+/?$ The first slash is useless and \d+ means one or more digits; see @TerryE's comment
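If you want to sanity-check the shortened pattern outside Apache, this simple regex behaves the same in most engines; here is an illustrative check in Python (mod_rewrite's per-directory context and PCRE specifics are not reproduced here):

```python
import re

# The shortened pattern from above: two lowercase letters, "/p",
# one or more digits, optional trailing slash.
pattern = re.compile(r'^([a-z]{2})/p\d+/?$')

match = pattern.match('en/p123/')
lang = match.group(1) if match else None  # captures the language code
```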
unknown
d18873
test
If I understand you correctly, you want to jump out of the if and the enclosing for loop when condition is true. You can achieve this using break: for(....) { if(condition) { printf(_); break; } else printf(); } Note: I added proper indentation and braces to make the code cleaner. Update after OP's comment int isPrime = 0; for (i = 1; i < 10 && !isPrime; i++) { isPrime = (16 / i == i); } if (isPrime) printf("its a prime no"); else printf("not a prime no."); Disclaimer: the condition above is hardly the proper way to detect whether a given number is prime (note the loop starts at 1 to avoid dividing by zero), but still this code snippet illustrates the general idiom of saving the result of an expression from a loop and inspecting it later. A: The else code is never executed if condition is true: if (condition) // If this condition evaluates to true { printf("Hello"); // Then this code is executed } else { printf("World"); // If it is false then this code is executed. } Edit: Wrapping it in a for loop makes no difference (unless you mean you want to actually exit the for loop as well?)
unknown
d18874
test
Please try this way, PFObject * demoObject = [PFObject objectWithClassName:@"Demo"]; demoObject[@"dataColumn"] = @"data value"; [demoObject saveInBackgroundWithBlock:^(BOOL succeeded, NSError *error) { // take any action after saving. }];
unknown
d18875
test
It is highly unlikely that you actually need to patch the OS itself, considering that Windows always provides the ability to hook clipboard events directly and override them however you wish. See How to get a clipboard paste notification and provide my own data?. A: Yes, but you haven't specified the language. Here's how you might do it in VB.NET (put this in a Timer's Tick handler): If Clipboard.ContainsText() Then Dim s As String = Clipboard.GetText() Dim t As String = s.Trim() If s <> t Then Clipboard.SetText(t) 'Trim all text, for example End If
unknown
d18876
test
What happens if you place the line NSLog(@"results %i,%@,%@",[results intForColumn:@"id"], .... after the [results next], e.g. inside the while loop? The documentation of FMDB says: "You must always invoke -[FMResultSet next] before attempting to access the values returned in a query, even if you're only expecting one" A: Make sure you have initialized your db in AppDelegate or in whichever file you are using FMDB. If yes, and you have initialized it somewhere, then try to run the SQL "SELECT * FROM customers" in the database itself, in the SQL tab, and check whether the database contains the values or not. Before that, make sure you are running the SQL against the database which resides in **Application Support** Enjoy Programming!
unknown
d18877
test
After searching for 3 hours total: normally value={this.state.formatToExportTo} should work (I tried it alone without the rest of my app surrounding it). But since I made some quirky things with my this and the order of updates, I just had to replace: value={this.state.formatToExportTo} with defaultValue={this.state.formatToExportTo} That's all! I hope it helps someone who'll come by this question
unknown
d18878
test
don't know if this is it but I had a similar issue that I fixed with this. (setq default-buffer-file-coding-system 'utf-8-unix) there are a few people who have asked how to get tramp working on windows (I actually gave up), so if you felt like documenting how you did it, there would likely be legions of thankful windows users out there. A: (prefer-coding-system 'utf-8) did the trick! Thanks Tom for the clue... Getting Tramp to work on my windows machine was no trouble at all. I'm using this version of Emacs: GNU Emacs 23.1.50.1 (i386-mingw-nt6.1.7600) of 2009-10-15 on LENNART-69DE564 With this in my init.el: (setq tramp-default-method "plink") (prefer-coding-system 'utf-8) The PuTTY directory with the plink app is in my system path. Then: C-X C-F /[email protected]: and Tab brings up the password prompt, then autocompletion on the server's files.
unknown
d18879
test
Is there a way to refresh the cursor? Call restartLoader() to have it reload the data. I don't want to have to create a new Cursor each time a checkbox is checked, which happens a lot Used properly, a ListView maintains the checked state for you. You would do this via android:choiceMode="multipleChoice" and row Views that implement Checkable. The objective is to not modify the database until the user has indicated they are done messing with the checklist by one means or another (e.g., "Save" action bar item). I'm also unsure of how calling restartLoader() several times would work, performance-wise, especially since I use onLoadFinish() to perform some animations. Don't call it several times. Ideally, call it zero times, by not updating the database until the user is done, at which point you are probably transitioning to some other portion of your UI. Or, call it just once per requested database I/O (e.g., at the point of clicking "Save"). Or, switch from using a CursorAdapter to something else, such as an ArrayAdapter on a set of POJOs you populate from the Cursor, so that the in-memory representation is modifiable in addition to being separately persistable.
unknown
d18880
test
Try to remove urlConnection.setDoOutput(true); when you run on ICS, because ICS turns a GET request into a POST when setDoOutput(true) is set. This problem is reported here and here
unknown
d18881
test
You can access the full session history using this code: import com.mathworks.mlservices.MLCommandHistoryServices history=MLCommandHistoryServices.getSessionHistory; To achieve what you want, use this code: import com.mathworks.mlservices.MLCommandHistoryServices; startcounter=numel(MLCommandHistoryServices.getSessionHistory); disp('mydummycommand'); disp('anotherdummycommand'); history=MLCommandHistoryServices.getSessionHistory; commands=cell(history(startcounter-2:end-1)); Be aware that these functions are undocumented. They use the command history, which is typically located at the bottom right in your MATLAB desktop.
unknown
d18882
test
no, it will not affect it; you can remove them from the csv and import A: No, it will not mess up your import. However, be careful when making edits to the csv file, as some programs save the file in the wrong csv format. I'd recommend OpenOffice, and when you hit save, make sure you choose to keep the current format
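As an illustrative sketch (the data below is made up, and any CSV library would do), blank columns can be stripped before import by keeping only the columns that contain at least one non-empty cell:

```python
import csv
import io

# Hypothetical input: a CSV with an entirely blank trailing column.
raw = "name,email,\nAlice,[email protected],\nBob,[email protected],\n"

rows = list(csv.reader(io.StringIO(raw)))
# Keep only column indexes that are non-empty in at least one row.
keep = [i for i in range(len(rows[0])) if any(row[i] for row in rows)]
cleaned = [[row[i] for i in keep] for row in rows]
```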
unknown
d18883
test
I figured it out, in case anyone needs to know this in the future. Simply add the following line under "onValueSelected": @Override public void onValueSelected(Entry e, Highlight h) { String label = dataSets.get(h.getDataSetIndex()).getLabel(); //add this } In this case, dataSets is the "ILineDataSet" list used to populate the line chart
unknown
d18884
test
You can simply call public void Success(T objectSuccess) { success(objectSuccess); } objectSuccess is of type T. That's exactly what Action<T> success expects as a parameter. No conversion is required. You can think of success as being a method declared like this: void success(T arg) { } And as Jorn Vernee says, remove <T> from Success<T>. You don't need a generic type parameter for this method, as it uses the generic type parameter of the class. If for some reason you need different generic type parameters for the class and the method, you must give them different names. But that's not the case here.
unknown
d18885
test
Here's a parser subclass that implements the latest suggestion on https://bugs.python.org/issue9334. Feel free to test it. (The class relies on a few argparse internals, so it needs these imports first: import re as _re; from gettext import gettext as _, ngettext; from argparse import ArgumentParser, ArgumentError, OPTIONAL, ONE_OR_MORE.) class ArgumentParserOpt(ArgumentParser): def _match_argument(self, action, arg_strings_pattern): # match the pattern for this action to the arg strings nargs_pattern = self._get_nargs_pattern(action) match = _re.match(nargs_pattern, arg_strings_pattern) # if no match, see if we can emulate optparse and return the # required number of arguments regardless of their values # if match is None: import numbers nargs = action.nargs if action.nargs is not None else 1 if isinstance(nargs, numbers.Number) and len(arg_strings_pattern) >= nargs: return nargs # raise an exception if we weren't able to find a match if match is None: nargs_errors = { None: _('expected one argument'), OPTIONAL: _('expected at most one argument'), ONE_OR_MORE: _('expected at least one argument'), } default = ngettext('expected %s argument', 'expected %s arguments', action.nargs) % action.nargs msg = nargs_errors.get(action.nargs, default) raise ArgumentError(action, msg) # return the number of arguments matched return len(match.group(1)) It replaces one method, providing a fallback when the regular argument matching fails. If you and your users can live with it, the long flag fix is best: --arg=-a is simplest. This unambiguously specifies -a as an argument to the --arg Action.
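The "long flag fix" mentioned at the end needs no subclass at all - with stock argparse, --arg=-a is parsed unambiguously, since everything after the = is taken as the value:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--arg")

# "--arg -a" would be rejected ("-a" looks like an option),
# but "--arg=-a" passes the literal string "-a" as the value.
ns = parser.parse_args(["--arg=-a"])
```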
unknown
d18886
test
Have you looked at something like an HTML5 loader to load your initial assets when the DOM is loaded? There is a jQuery plugin; I know it is not Angular and is another library, but it may help your order of operations. http://gianlucaguarini.github.io/jquery.html5loader/ A: You can try ngAnimate; there is easy documentation for you to refer to: https://docs.angularjs.org/api/ngAnimate
unknown
d18887
test
It looks like you didn't initialize your reference lstOrderitem. Debug your code; if your reference's value is null, you need to initialize lstOrderitem before using it. A: You should initialize the lstOrderitem property in the constructor, like this: EDIT public MyClass() { lstOrderitem = new List<OrderItem>(); } P.S. Microsoft suggests starting the names of your properties with capital letters, to avoid confusion with member variables, which should be named starting with a lowercase letter. A: It looks like you didn't initialize your reference lstOrderitem. Debug your code; if your reference's value is null, you need to initialize lstOrderitem before using it. public MyClass() { lstOrderitem = new List<OrderItem>(); }
unknown
d18888
test
$('.unstyled custumer_say_<%= @talk.id %>').remove(); You're calling something that doesn't exist, as far as I can see. Change your ul to this: <ul id="custumer_say_<%= @talk.id %>" class="unstyled custumer_say" style="list-style: none;"> And the destroy.js.erb to $("#custumer_say_<%= @talk.id %>").remove(); Notice the double quotes (and note that an id cannot contain spaces, unlike a class list)
unknown
d18889
test
Firstly, if your platform is 64-bit, then why are you casting your pointer values to int? Is int 64-bit wide on your platform? If not, your subtraction is likely to produce a meaningless result. Use intptr_t or ptrdiff_t for that purpose, not int. Secondly, in a typical implementation a 1-byte type will typically be aligned at 1-byte boundary, regardless of whether your platform is 64-bit or not. To see a 8-byte alignment you'd need a 8-byte type. And in order to see how it is aligned you have to inspect the physical value of the address (i.e. whether it is divisible by 1, 2, 4, 8 etc.), not analyze how far apart two variables are spaced. Thirdly, how far apart c1 and c2 are in memory has little to do with alignment requirements of char type. It is determined by how char values are passed (or stored locally) on your platform. In your case they are apparently allocated 4-byte storage cells each. That's perfectly fine. Nobody every promised you that two unrelated objects with 1-byte alignment will be packed next to each other as tightly as possible. If you want to determine alignment by measuring how far from each other two objects are stored, declare an array. Do not try to measure the distance between two independent objects - this is meaningless. A: To determine the greatest fundamental alignment in your C implementation, use: #include <stdio.h> #include <stddef.h> int main(void) { printf("%zd bytes\n", _Alignof(max_align_t)); } To determine the alignment requirement of any particular type, replace max_align_t above with that type. Alignment is not purely a function of the processor or other hardware. Hardware might support aligned or unaligned accesses with different performance effects, and some instructions might support unaligned access while others do not. A particular C implementation might choose to require or not require certain alignment in combination with choosing to use or not use various instructions. 
Additionally, on some hardware, whether unaligned access is supported is configurable by the operating system.
unknown
d18890
test
NSNumber *num1 = [NSNumber numberWithInt:56]; NSNumber *num2 = [NSNumber numberWithInt:57]; NSNumber *num3 = [NSNumber numberWithInt:58]; NSMutableArray *myArray = [NSMutableArray arrayWithObjects:num1,num2,num3,nil]; NSNumber *num=[NSNumber numberWithInteger:58]; NSInteger Aindex=[myArray indexOfObject:num]; NSLog(@" %d",Aindex); It's giving the correct output; maybe you have done something wrong when storing objects in your array. A: Try this: NSArray's indexOfObject: method. Such as the following: NSUInteger fooIndex = [someArray indexOfObject: someObject]; A: Folks, when an object is not found in the array the indexOfObject method does NOT return a 'garbage' value. Many systems return an index of -1 if the item is not found. However, on iOS - because indexOfObject returns an UNSIGNED int (aka NSUInteger) - the returned index must be greater than or equal to zero. Since 'zero' is a valid index there is no way to indicate to the caller that the object was not found -- except by returning an agreed-upon constant value that we can all test against. This agreed-upon constant value is called NSNotFound. The method: - (NSUInteger)indexOfObject:(id)anObject; will return NSNotFound if the object was not in the array. NSNotFound is a very large POSITIVE integer (usually 1 minus the maximum int on the platform). A: The index returned by indexOfObject will be the first index for an occurrence of your object. Equality is tested using the isEqual: method. The garbage value you get is probably equal to NSNotFound. Try testing anIndex against it. The number you are looking for probably isn't in your array: NSNumber *num=[NSNumber numberWithInteger:56]; NSInteger anIndex=[myArray indexOfObject:num]; if(NSNotFound == anIndex) { NSLog(@"not found"); } or log the content of the array to be sure: NSLog(@"%@", myArray); A: If you're using Swift and optionals, make sure they are unwrapped. You cannot search for the index of objects that are optionals. A: I just checked. It's working fine for me. 
Check if your array has the particular number. It will return such garbage values if the element is not present. A: The indexOfObject method will get the index of the corresponding string in that array; but if the stored string is like @"Test" and you search for @"TEST", there is no match, and it will return what looks like a huge number (NSNotFound)
unknown
d18891
test
This seems to be an old question, but I was running into the same thing today. I am rather new to git and npm, but I did find something that may be of help to someone. If the git repo does not have a .gitignore, the .git folder is not downloaded / created. If the git repo does have a .gitignore, the .git folder is downloaded / created. I had two repos, one without a .gitignore (because when I made it I was not aware of what .gitignore was or did), and one with a .gitignore. I included both as npm packages in a project, and noticed that the one without the .gitignore was not giving me the EISGIT error (because of the .git folder). So, after I found this question, I removed the .gitignore from that repo and now it too does not create a .git folder. Later on, I discovered that adding both a .gitignore and a .npmignore to the project now stops the .git folder from appearing. I did not add .git in my .npmignore.
unknown
d18892
test
The reason it doesn't work is because rtspsrc's source pad is a so-called "Sometimes pad". The link here explains it quite well, but basically you cannot know upfront how many pads will become available on the rtspsrc, since this depends on the SDP provided by the RTSP server. As such, you should listen to the "pad-added" signal of the rtspsrc, where you can link the rest of your pipeline to the source pad that just showed up in the callback. So summarised: def main(device): GObject.threads_init() Gst.init(None) pipeline = Gst.Pipeline() source = Gst.ElementFactory.make("rtspsrc", "video-source") source.set_property("location", device) source.set_property("latency", 300) source.connect("pad-added", on_rtspsrc_pad_added, pipeline) pipeline.add(source) # We will add/link the rest of the pipeline later loop = GObject.MainLoop() pipeline.set_state(Gst.State.PLAYING) try: loop.run() except: pass pipeline.set_state(Gst.State.NULL) def on_rtspsrc_pad_added(rtspsrc, pad, pipeline): # Create the rest of your pipeline here and link it to the new pad depay = Gst.ElementFactory.make("rtph264depay", "depay") pipeline.add(depay) depay.sync_state_with_parent() pad.link(depay.get_static_pad("sink")) # and so on ....
unknown
d18893
test
At this time the dictionary feature is not available in the NMT custom translator but it is a feature we are working on and will release in a future update.
unknown
d18894
test
There are many ways to colorize the thresholded image. One simple way is by multiplication: palm = im2double(palm); % it’s easier to work with doubles in MATLAB palm2 = palm * thumbFilled; imshow([palm, palm2]) The multiplication uses implicit Singleton expansion. If you have an older version of MATLAB it won’t work, you’ll have to use bsxfun instead.
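The same masking-by-multiplication trick carries over to other array languages; here is an illustrative NumPy sketch (the array contents are made up), where broadcasting plays the role of MATLAB's implicit singleton expansion:

```python
import numpy as np

palm = np.array([[0.2, 0.8],
                 [0.5, 1.0]])          # grayscale image, values in [0, 1]
thumb_filled = np.array([[1.0, 0.0],
                         [1.0, 1.0]])  # binary mask from thresholding

# Element-wise multiplication zeroes out every pixel outside the mask.
palm2 = palm * thumb_filled
```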
unknown
d18895
test
For me it was, npm cache clean --force rm -rf node_modules npm install I tried deleting manually but it didn't help A: Check permissions of your project root with ls -l /Users/Marc/Desktop/Dev/masterclass/. If the owner is not $USER, delete your node_modules directory, try changing the owner of that directory instead and run npm install again. cd /Users/Marc/Desktop/Dev rm -rf ./masterclass/node_modules/ chown -R $USER ./masterclass/ cd masterclass npm install A: I tried everything in this thread with no luck on Big Sur, but then I tried this: sudo npm install -g yarn And it worked! A: I did this for nodemon and it worked sudo chown -R $USER /usr/local/lib/node_modules then install the packages that you need A: Kindly run the below commands: * *To check the location of the package: npm config get prefix *Then run this: sudo chown -R $(whoami) $(npm config get prefix)/{lib/node_modules,bin,share} *Enter the password and run the installation commands It worked for me. A: For Mac; Run this on the Terminal > sudo chown -R $USER /usr/local/lib/node_modules A: I was having a similar issue, but the accepted answer did not work for me, so I will post my solution in case anyone else comes along needing it. I was running npm install in a project cloned from GitHub and during the clone, for whatever reason, the write permission was not actually set on the project directory. To check if this is your problem, pull up Terminal and enter the following: cd path/to/project/parent/directory ls -l If the directory has user write access, the output will include a w in the first group of permissions: drwxr-xr-x 15 user staff 480 Sep 10 12:21 project-name This assumes that you're trying to access a project in the home directory structure of the current user. To make sure that the current user owns the project directory, follow the instructions in the accepted answer. 
A: I entered the following: cd /Users/Marc/Desktop/Dev rm -rf ./masterclass/node_modules/ chown -R $USER ./masterclass/ cd masterclass npm install once this was completed, the results indicated warnings and one notice instead of the previous "no permission" error. I then entered the following: % sudo npm install --global firebase-tools my result was success upon completion of the last terminal entry. A: I had the same problem because I installed it from a pkg, and I solved the problem with the steps below: 1. sudo rm -rf /usr/local/lib/node_modules/npm/ 2. brew doctor 3. brew cleanup --prune-prefix ( or sudo rm -f /usr/local/include/node) 4. brew install node A: I used this command: sudo npm install -g @angular/cli Gave the password and it worked for me. Took 10 secs to install Angular A: NPM_CONFIG_PREFIX=~/.npm-global Copy this line into your terminal, then hit enter. Then install the necessary packages you need WITHOUT the term "sudo" in front of npm. i.e., npm install -g jshint A: the only thing that worked for me sudo npm i -g clasp --unsafe-perm A: Just do: sudo npm install -g @sanity/cli && sanity init it will ask for the sudo password and you are good to go A: That is because you don't have the node modules. You can install them with this code: npm install -g node-modules then, create your react app with npm init react-app my-app A: For Macs running Big Sur or Monterey: sudo chown -R $USER /usr/local/bin A: Run on macOS Terminal: sudo chown -R $USER /usr/local/bin It will ask for the password then you're good to go! Hope this helps. A: Ok, my problem was that I thought I was installing on the path: /Users/mauro/Documents/dev/react where my project was set up, but instead I was doing it on: Users/mauro/Documents/dev/ one path higher, and that is why it did not perform the installation in my case. I simply did: cd react and voila, I was able to install without problems
unknown
d18896
test
Unless I have completely misunderstood your question, can't you use the SYSTEM() function to execute plot.f (well, its compiled executable really) from solve.f? Documentation is here.
unknown
d18897
test
You are making the correct http request and getting the data too. However, the problem lies in your widget. I checked the API response; it is returning a Map<String,dynamic>. While displaying the data you are accessing it using the playerstats key, which in turn gives you a Map, which is not the String required by the Text widget in order to display it! You can display the data by simply converting it to a String using the toString() method, like this: @override Widget build(BuildContext context) { return Scaffold( appBar: AppBar( title: Text('CSGO STATS'), centerTitle: true, ), body: ListView.builder( itemCount: 20, itemBuilder: ( BuildContext context, int index, ) { return Card( child: Text( data["playerstats"].toString(), ), ); }), ); } Also, I would like to suggest some improvements in your code: * *Use type annotations as much as possible; they make your code safer and more robust. For example, you have declared the data variable as Map data; you should declare it as Map<String,dynamic> data; *I think you forgot to call super.initState() in your initState() method. Make sure to call it before all your other methods. *The getData method doesn't return anything, so make it Future<void> instead of Future<String> Since you are a new contributor to StackOverflow, I welcome you! Happy Fluttering! A: data["playerstats"] is a Map while the Text widget needs a String. A: Your method is ok, but the problem is in initializing the text widget. child: Text( data["playerstats"], ), ['playerstats'] is not a single text; it's a map of lists. You need to specify the exact text field name you want to see. Still, it will show you the full data if you add .toString() to the field name.
unknown
d18898
test
You can check udev. It's possible to write a udev rule to achieve this behavior. That way you don't even have to write a daemon.
unknown
d18899
test
You should not make your factory class generic, but the method GetObject should be generic: public T GetObject<T>() where T: IMyInterface, new() Then: static void Main(string[] args) { var factory = new MyFactory(); var obj = factory.GetObject<MyClass>(); obj.Method1(); Console.ReadKey(); } So all in all you should get rid of your generic code and simply modify your MyFactory class public class MyFactory : IClassFactory { public T GetObject<T>() where T : IMyInterface, new() { //TODO - get object of T type and return it return new T(); } } Note that the implementation repeats the where T : IMyInterface, new() constraint - without new() the compiler will not allow new T(). By the way - I am not sure what the purpose of having this generic implementation is? Does it make any sense from the perspective of the usage of the Factory pattern?
unknown
d18900
test
#createNewFile() can throw an IOException before you get to the close() statement, keeping it from being executed. A: Put out.close() in a finally block: finally { out.close(); } What finally does is ensure that any code within it gets executed even if an exception is thrown. A: Scanner implements AutoCloseable; you may use try-with-resources https://docs.oracle.com/javase/tutorial/essential/exceptions/tryResourceClose.html
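For contrast, the same resource-safety idea exists in other languages; in Python, a with block plays the role of try-with-resources / finally, closing the file even if an exception is thrown (the file name below is arbitrary):

```python
import os
import tempfile

def write_and_read_first_line(path):
    # The with-statement guarantees the file is closed, even on error,
    # just like finally { out.close(); } or try-with-resources.
    with open(path, "w+") as out:
        out.write("hello\n")
        out.seek(0)
        return out.readline()

tmp = os.path.join(tempfile.gettempdir(), "demo_first_line.txt")
line = write_and_read_first_line(tmp)
```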
unknown