Dataset columns: _id (string, 2-6 chars), partition (3 classes), text (string, 4-46k chars), language (1 class), title (1 class)
d13501
train
Using http in the SSOCircle address instead of https has worked for me.
unknown
d13502
train
Use dropWhile() as your filter. val setOfInts = Set(....) val result = LazyList.from(0).dropWhile(setOfInts).head A: I managed to do it like this, although it may not be the ideal approach... I need to do that exercise using operations on collections. Sorry for answering my own question, but I wanted my solution to be more visible. import scala.annotation.tailrec def minNotContained(set: Set[Int]): Int = { val positive = set.filter(_ >= 0) @tailrec def isInSet(collection: Set[Int], number: Int): Int = { if (collection.contains(number)) isInSet(collection, number+1) else number } isInSet(positive, 0) } println(minNotContained(Set(-3,0,1,2,4,5,6)))
unknown
d13503
train
In your code sample you did not call the setValues() function. That's why you could not control the input. Here are some modifications to your code: const inputName = (index, event) => { let tempValues = [...values]; tempValues[index].name = event.target.value; setValues(tempValues); }; I hope this code will work now. I have tested it in your codesandbox example also. here is the link to that: https://codesandbox.io/s/purple-water-d74x5?file=/src/App.jsx
unknown
d13504
train
I have solved the problem. I looked up the log file and in my case the table is an external table referring to a directory located on hdfs. This directory contains more than 300000 files. So while reading the files it was throwing an out of memory exception and maybe for this reason it was getting an empty string and throwing 'Can not create a Path from an empty string' exception. I tried with a smaller subset of files and it worked. Bottom line: one possible cause for this exception is running out of memory. A: In my case, there is a Hive property, set hive.input.format=org.apache.hadoop.hive.ql.io.HiveInputFormat;, set in the .hiverc file, which is throwing the exception. Diagnostic Messages for this Task: Error: java.lang.IllegalArgumentException: Can not create a Path from an empty string at org.apache.hadoop.fs.Path.checkPathArg(Path.java:131) at org.apache.hadoop.fs.Path.(Path.java:139) at org.apache.hadoop.hive.ql.io.HiveInputFormat$HiveInputSplit.getPath(HiveInputFormat.java:110) at org.apache.hadoop.mapred.MapTask.updateJobWithSplit(MapTask.java:463) at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:411) at org.apache.hadoop.mapred.MapTask.run(MapTask.java:347) at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:415) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1566) at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163) After changing it to the following, it worked: set hive.input.format=org.apache.hadoop.hive.ql.io.CombineHiveInputFormat; A: I encountered the same error. My hive.log file showed the cause - see the first line in the snippet below where one of the jar file URIs contains file:// without any path: 2018-05-03 04:37:43,706 INFO [main]: mr.ExecDriver (ExecDriver.java:execute(309)) - adding libjars: file://,file:///opt/cloudera/parcels/CDH/lib/hive/lib/hive-contrib.jar 2018-05-03 04:38:07,568 WARN [main]: mapreduce.JobResourceUploader (JobResourceUploader.java:uploadFiles(64)) - Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this. 2018-05-03 04:38:07,599 ERROR [main]: exec.Task (SessionState.java:printError(937)) - Job Submission failed with exception 'java.lang.IllegalArgumentException(Can not create a Path from an empty string)' In my case, the issue was caused by a badly configured $HIVE_HOME/conf/hive-env.sh file, where HIVE_AUX_JARS_PATH contained a reference to an environment variable that was not set. For example: export HIVE_AUX_JARS_PATH=$EMPTY_ENV_VARIABLE,/opt/cloudera/parcels/CDH/lib/hive/lib/hive-contrib.jar
unknown
d13505
train
Scipy's stats.entropy in its default sense invites inputs as 1D arrays giving us a scalar, which is being done in the listed question. Internally this function also allows broadcasting, which we can abuse in here for a vectorized solution. From the docs - scipy.stats.entropy(pk, qk=None, base=None) If only probabilities pk are given, the entropy is calculated as S = -sum(pk * log(pk), axis=0). If qk is not None, then compute the Kullback-Leibler divergence S = sum(pk * log(pk / qk), axis=0). In our case, we are doing these entropy calculations for each row against all rows, performing sum reductions to have a scalar at each iteration with those two nested loops. Thus, the output array would be of shape (M,M), where M is the number of rows in input array. Now, the catch here is that stats.entropy() would sum along axis=0, so we will feed it two versions of distributions, both of whom would have the rowth-dimension brought to axis=0 for reduction along it and the other two axes interleaved - (M,1) & (1,M) to give us a (M,M) shaped output array using broadcasting. Thus, a vectorized and much more efficient way to solve our case would be - from scipy import stats kld = stats.entropy(distributions.T[:,:,None], distributions.T[:,None,:]) Runtime tests and verify - In [15]: def entropy_loopy(distrib): ...: n = distrib.shape[0] #n is the number of data points ...: kld = np.zeros((n, n)) ...: for i in range(0, n): ...: for j in range(0, n): ...: if(i != j): ...: kld[i, j] = stats.entropy(distrib[i, :], distrib[j, :]) ...: return kld ...: In [16]: distrib = np.random.randint(0,9,(100,100)) # Setup input In [17]: out = stats.entropy(distrib.T[:,:,None], distrib.T[:,None,:]) In [18]: np.allclose(entropy_loopy(distrib),out) # Verify Out[18]: True In [19]: %timeit entropy_loopy(distrib) 1 loops, best of 3: 800 ms per loop In [20]: %timeit stats.entropy(distrib.T[:,:,None], distrib.T[:,None,:]) 10 loops, best of 3: 104 ms per loop
unknown
d13506
train
You cannot use the ALIAS in the WHERE clause when it is created in the SELECT clause. Use the computed column instead, AND (COALESCE(core_customers.business_name, core_entities.name) LIKE "blah") If you want to use the ALIAS, you have to wrap it in a subquery like the query below, SELECT * FROM ( SELECT stock_orders.*, COALESCE(core_customers.business_name, core_entities.name) AS buyer_name FROM `stock_orders` LEFT JOIN core_customers ON stock_orders.buyer_id = core_customers.id AND stock_orders.buyer_type='Core::Customer' LEFT JOIN core_entities ON stock_orders.buyer_id = core_entities.id AND stock_orders.buyer_type='Core::Entity' ) subquery WHERE `type` IN ('Stock::SalesOrder') AND (buyer_name LIKE "blah")
unknown
d13507
train
To use TestBed you have to alter your karma.conf.js to: // list of files / patterns to load in the browser files: [ 'src/tests/setup.ts', 'src/tests/**/*.spec.ts' ], The file src/tests/setup.ts should look like this for jasmine: import "nativescript-angular/zone-js/testing.jasmine"; import {nsTestBedInit} from "nativescript-angular/testing"; nsTestBedInit(); or if using mocha: import "nativescript-angular/zone-js/testing.mocha"; import {nsTestBedInit} from "nativescript-angular/testing"; nsTestBedInit(); You'll find a sample here: https://github.com/hypery2k/tns_testbed_sample A: I was facing the same issue like you. I finally found a way to make Unit Test in Nativescript-Angular works. To fix my issue I add beforeAll(() => nsTestBedInit()); and afterAll(() => { }); Also change from TestBed to nsTestBed... I just follow the idea on https://github.com/NativeScript/nativescript-angular/blob/master/tests/app/tests/detached-loader-tests.ts Also add into tsconfig.tns.json file this line: "include": ["src/tests/*.spec.ts"], My issue now is split all test into multiple file. Like appComponent in a test file and homeCompenent in a second test file. When the app grow the unit test also grow, we need organize our code. Here my code (file name: src/tests/test.spec.ts): import "reflect-metadata"; import { AppComponent } from "../app/app.component"; import { nsTestBedBeforeEach, nsTestBedAfterEach, nsTestBedRender, nsTestBedInit } from "nativescript-angular/testing"; import { HomeComponent } from "@src/app/home/home.component"; describe("AppComponent", () => { beforeAll(() => nsTestBedInit()); afterAll(() => { }); beforeEach(nsTestBedBeforeEach([AppComponent, HomeComponent])); afterEach(nsTestBedAfterEach()); it("should be: app works!", () => { return nsTestBedRender(AppComponent).then((fixture) => { fixture.detectChanges(); const app = fixture.componentInstance; expect(app.title).toBe("app works!"); }); }); describe("HomeComponent", () => { it("should contain: Home works!", () => { return nsTestBedRender(HomeComponent).then((fixture) => { fixture.detectChanges(); const app = fixture.componentInstance; expect(app.title).toBe("Home works!"); }); }); }); }); And here the result: JS: NSUTR: downloading http://192.168.10.169:9876/context.json JS: NSUTR: eval script /base/node_modules/jasmine-core/lib/jasmine-core/jasmine.js?be3ff9a5e2d6d748de5b900ac3c6d9603e2942a7 JS: NSUTR: eval script /base/node_modules/karma-jasmine/lib/boot.js?945a38bf4e45ad2770eb94868231905a04a0bd3e JS: NSUTR: eval script /base/node_modules/karma-jasmine/lib/adapter.js?3098011cfe00faa2a869a8cffce13f3befc1a035 JS: NSUTR: eval script /base/src/tests/test.spec.bundle.js?6e0098824123f3edc2bb093fa874b3fdf268841e JS: NSUTR: beginning test run NativeScript / 28 (9; Android SDK built for x86): Executed 1 of 2 SUCCESS (0 secs / 0.545 secs) NativeScript / 28 (9; Android SDK built for x86): Executed 2 of 2 SUCCESS (0.829 secs / 0.735 secs) TOTAL: 2 SUCCESS JS: NSUTR: completeAck NativeScript / 28 (9; Android SDK built for x86) ERROR DisconnectedClient disconnected from CONNECTED state (transport error) NativeScript / 28 (9; Android SDK built for x86): Executed 2 of 2 SUCCESS (0.829 secs / 0.735 secs) A: Your issue is happening because of these lines. beforeAll(() => nsTestBedInit()); afterAll(() => { }); Each test file tries to initialize test bed. Make sure that you initialize it in your entry component. Please refer entry file test-main.ts specified in karma.config.js
unknown
d13508
train
You are mixing up methods. Value arrays don't have a setBackground() method; that is a spreadsheet Range method. Use the code below to do what you want: function onEdit() { var ss = SpreadsheetApp.getActiveSheet(); var myRangeValues = ss.getRange('D7:E').getValues(); var myRangeColors = ss.getRange('D7:E').getBackgrounds(); // get the colors for (var i = 0; i < myRangeValues.length; i++) { if (myRangeValues[i][1] == 'ok') { myRangeColors[i][0] = '#F00'; } } ss.getRange('D7:E').setBackgrounds(myRangeColors); // set the modified colors }
unknown
d13509
train
ssh was eating up your loop's input. Probably in this case your ssh session exits when it gets EOF from it. That's the likely reason, but some input may also cause it to exit. You have to redirect its input by specifying < /dev/null or use -n: ssh -n "root@$ip" ssh "root@$ip" < /dev/null That may also apply with -tt, since it is somewhat independent; just try it. If you're using Bash or a similar shell that supports read -u, you can also specify a different fd for your file: while read -u 4 ip; do ssh root@$ip exit done 4< <(echo $theIp)
unknown
d13510
train
The simplest way I could find is using https://ngrok.com/ - it opens a tunnel to your local webserver that can be browsed via a public subdomain on ngrok.io. You can then easily test the full circle of domain verification for this subdomain. You can even start multiple tunnels and have multiple subdomains for testing SAN certificates. ngrok provides a local web API from which the current tunnel address can be read, so tests could be automated in continuous integration.
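As a rough illustration of that last point, here is a minimal Python sketch that reads the current public tunnel address from ngrok's local API; it assumes the default local API port 4040 and the requests library, and the exact JSON layout may differ between ngrok versions.

import requests

def get_public_urls(api_base="http://127.0.0.1:4040"):
    """Ask the local ngrok API which tunnels are currently open."""
    resp = requests.get(f"{api_base}/api/tunnels", timeout=5)
    resp.raise_for_status()
    # Each tunnel entry carries the public URL that ngrok assigned to it.
    return [t["public_url"] for t in resp.json().get("tunnels", [])]

if __name__ == "__main__":
    for url in get_public_urls():
        print(url)  # e.g. feed these hostnames into the domain-verification test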
unknown
d13511
train
ToolTip.Show Method (String, IWin32Window) The second argument is the control for which the tool tip is to be shown. toolTip1.Show("Test 123", button1, Int32.MaxValue); Visual Studio tracks the word underneath the mouse and displays tooltips/intellisense accordingly. One way for you to do the same could be to: * *Track the mouse movements *Get the text under mouse *Show tooltip. A: The ToolTip.Show method also has other more appropriate overloads, like this one. You can pass the edit control (that is, your text box) that you want to be associated with the tooltip as the IWin32Window parameter. Then, you can specify the current coordinates of the mouse cursor as the X and Y arguments: * *If you're trying to show this tooltip in one of the mouse event handlers (like MouseMove), the current coordinates of the mouse cursor are passed in as part of the MouseEventArgs—just use the e.X and e.Y properties. *Otherwise, you'll need to use the Control.MousePosition property to get its current location, which will return a Point representing its current location relative to screen coordinates. Another one of the overloads to the ToolTip.Show method accepts a Point parameter that you can use here instead of separate X and Y coordinates
unknown
d13512
train
Based on what you described, it sounds like you want to add a trace and remove the most recent trace added at the same time when the button is pressed. This would still leave the original plot/trace that you started with. I tried simplifying a bit. The first plotlyProxyInvoke will remove the most recently added trace (it is zero-indexed, leaving the first plotly trace in place). The second plotlyProxyInvoke will add the new trace. Note that the (x, y) pair is included twice based on this answer. library(shiny) library(plotly) A <- 1:5 B <- c(115, 406, 1320, 179, 440) data <- data.frame(A, B) ui <- fluidPage(plotlyOutput("fig1"), numericInput("A", label = h5("A"), value = "", width = "100px"), numericInput("B", label = h5("B"), value = "", width = "100px"), actionButton("action3", label = "Add to plot"), ) server <- function(input, output, session) { fig <- plot_ly(data, x = A, y = B, type = 'scatter', mode = 'markers') output$fig1 <- renderPlotly(fig) observeEvent(input$action3, { plotlyProxy("fig1", session) %>% plotlyProxyInvoke("deleteTraces", list(as.integer(1))) plotlyProxy("fig1", session) %>% plotlyProxyInvoke("addTraces", list(x = c(input$A, input$A), y = c(input$B, input$B), type = 'scatter', mode = 'markers') ) }) } shinyApp(ui,server)
unknown
d13513
train
That particular formulation is not supported in .gitignore: An optional prefix "!" which negates the pattern; any matching file excluded by a previous pattern will become included again. It is not possible to re-include a file if a parent directory of that file is excluded. Git doesn’t list excluded directories for performance reasons, so any patterns on contained files have no effect, no matter where they are defined. Put a backslash ("\") in front of the first "!" for patterns that begin with a literal "!", for example, "\!important!.txt". Instead, you can do: foo/* !foo/baz Here's an example session so that you can see it in action: :: tree . `-- foo/ |-- bar `-- baz :: cat .gitignore foo/* !foo/baz :: git status -sb ## Initial commit on master ?? .gitignore ?? foo/ :: git add foo :: git status -sb ## Initial commit on master A foo/baz ?? .gitignore Note that baz was added but bar was ignored. To see what things look like if baz was not in foo, we can reset and then remove baz: :: git reset :: rm foo/baz :: git status -sb ## Initial commit on master ?? .gitignore Note that foo doesn't show up here, even though foo/bar still exists.
unknown
d13514
train
"Otherwise, if the member or constructor is declared private, then access is permitted if and only if it occurs within the body of the top level class (§7.6) that encloses the declaration of the member or constructor." JLS 6.6.1 In this case, TestOutter is the top-level class, so all private fields inside it are visible. Basically, the purpose of declaring a member private is to help ensure correctness by keeping other classes (subclasses or otherwise) from meddling with it. Since a top-level class is the Java compilation unit, the spec assumes that access within the same file has been appropriately managed. A: This is because the inner class, as a member of the outer class, has access to all the private variables of its outer class. And since the other inner class is also a member of the outer class, all of its private variables are accessible as well. Edit: Think of it like you have a couple of couch cushion forts(inner classes) in a house(outer class), one yours the other your siblings. Your forts are both in the house so you have access to all the things in your house. And mom(Java) is being totally lame and saying you have to share with your sibling because everything in the house is everyone elses and if you want your own fort you are going to have to buy it with your own money(make another class?).
unknown
d13515
train
Function names, like arrays, decay into pointers when used. That means you can just: printf("%p", myFunction); On most systems, anyway. To be strictly standard-compliant, check out How to format a function pointer? A: There are a few ways to get at this. The easiest is probably to use a debugger. Using GDB With gdb you can connect to a running program. Say for example I want to connect to a running vim process. First I need to know it's PID number: $ pidof vim 15425 I can now connect to the process using gdb: $ gdb `which vim` 15425 At the prompt, I can now enquire about different symbols: $ info symbol fprintf fprintf in section .text of /lib/x86_64-linux-gnu/libc.so.6 $ info address fprintf Symbol "fprintf" is at 0x7fc9b44314a0 in a file compiled without debugging. Using /proc Another way to get a dump of memory locations of libraries from /proc. Again, you need the PID (see above) and you can dump out the libraries and their locations in the virtual memory of a process. $ cat /proc/15425/maps 7fc9a9427000-7fc9a9432000 r-xp 00000000 fd:02 539295973 /lib/x86_64-linux-gnu/libnss_files-2.19.so 7fc9a9432000-7fc9a9631000 ---p 0000b000 fd:02 539295973 /lib/x86_64-linux-gnu/libnss_files-2.19.so 7fc9a9631000-7fc9a9632000 r--p 0000a000 fd:02 539295973 /lib/x86_64-linux-gnu/libnss_files-2.19.so 7fc9a9632000-7fc9a9633000 rw-p 0000b000 fd:02 539295973 /lib/x86_64-linux-gnu/libnss_files-2.19.so Each library will have multiple sections and this will depend on how it was compiled/linked.
unknown
d13516
train
You can store the event object in a variable that you can then use in another function. Here is the demo: http://jsfiddle.net/cVDbp/
unknown
d13517
train
That function takes the constants from the KeyEvent class. To send a, use sendDownUpKeyEvents(KeyEvent.KEYCODE_A);
unknown
d13518
train
You have to review how to access elements of a 2D array. Also, take a look at what the comma operator does. You have to use [] twice: adjacencyMatrix[0][i] The following: adjacencyMatrix[0, i] is equivalent to: adjacencyMatrix[i] which will still leave you with a 1D array. And, as the error message says: distanceArray[i] = adjacencyMatrix[i]; // ^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^ // unsigned int array of unsigned ints You cannot possibly expect this assignment to work.
unknown
d13519
train
In this specific case, based on your comments, you may be able to sidestep. Create a new class ReqDemPlanMissingForecastFiller_Fix extending ReqDemPlanMissingForecastFiller then copy/paste the erroneous function and correct the mistake. Create an extension class and change the newParameters static funcion. [ExtensionOf(classStr(ReqDemPlanMissingForecastFiller))] class ReqDemPlanMissingForecastFiller_Extention { public static ReqDemPlanMissingForecastFiller newParameters( ReqDemPlanCreateForecastDataContract _dataContract, ReqDemPlanAllocationKeyFilterTmp _allocationKeyFilter, ReqDemPlanTaskLoggerInterface _logger = null) { ReqDemPlanMissingForecastFiller filler = next newParameters(_dataContract, _allocationKeyFilter, _logger); filler = new ReqDemPlanMissingForecastFiller_Fix(); //Throw away previous value filler.parmDataContract(_dataContract); filler.parmAttributeManager(_dataContract.attributeManager()); filler.parmAllocationKeyFilter(_allocationKeyFilter); filler.parmLogger(_logger); filler.init(); return filler; } } Code above was based on AX 2012 code. Stupid solution to a stupid problem. It goes almost without saying that you should report the problem to Microsoft. A: @Jan B. Kjeldsen's answer describes how the specific case can be solved without involving Microsoft. Since overlayering is no longer possible, the solution involves copying a fair bit of standard code. This brings its own risks, because future changes by Microsoft for that code are not reflected in the copied code. Though it cannot always be avoided, other options should be evaluated first: * *As @Jan B. Kjeldsen mentioned, errors in the standard code should be reported to Microsoft (see Get support for Finance and Operations apps or Lifecycle Services (LCS)). This enables them to fix the error. * *Pro: No further work needed. *Con: Microsoft may decline the fix or take a long time to implement it. *If unlike in this specific case the issue is not a downright error, but a lack of extension options, an extensibility request can be created with Microsoft. They will then add an extension option. * *Pro: No further work needed. *Con: Microsoft may decline the extensibility request or take a long time to implement it. *For both errors as well as missing extension options, Microsoft also offers the Community Driven Engineering program (CDE). This enables you to develop changes in the standard code directly via a special Microsoft hosted repository where the standard code is not locked for changes. * *Pro: Most flexible and fastest of all options involving Microsoft. *Con: You have to do the work yourself. Microsoft may decline the change. It can still take some time before the change is available in a GA version. You can also consider a hybrid approach: For a quick solution, copy standard code and customize it as required. But also report an error, create an extensibility request or fix it yourself in the CDE program. When the change is available in standard code, you can then remove the copied code again.
unknown
d13520
train
You forgot to annotate your setup method with @Before, so Mockito does not create and inject the mocks. Try this: @Before public void setup(){ ... }
unknown
d13521
train
A better solution would be to parse out your document libraries so they aren't exceeding the list view threshold. Assuming you're running 2013 since you tagged it in your post, you could have the workflow do a REST API call to the destination library and check the item count. If it returns >5000, alert the document library manager to archive some old files - or save the file to an alternate library using an If/Then block. The SPD Workflow to do this: Build {...} Dictionary (Output to Variable: requestHeaders) then Call [site url]/_api/web/Lists/GetByTitle('[Library Name to Query]') HTTP web service with request (ResponseContent to Variable: responseContent|ResponseHeaders to Variable: responseHeaders|ResponseStatusCode to Variable: responseCode) then Get d/ItemCount from Variable: responseContent (Output to Variable: count) If Variable: count is less than 5000 [Proceed as normal] If Variable: count is greater than or equal to 5000 [Save to secondary library and notify admin to do some cleanup] (Here's some background on REST API if you haven't used it before)
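To make the REST call concrete, here is a rough Python sketch of the same ItemCount check the workflow performs; the site URL, list title, and authentication are placeholders, and the d.ItemCount response shape assumes the odata=verbose JSON format used by SharePoint 2013.

import requests

SITE_URL = "https://contoso.sharepoint.example/sites/team"   # placeholder site
LIST_TITLE = "Documents"                                      # placeholder library
THRESHOLD = 5000

def item_count(session):
    """Read the item count of the destination library via the REST API."""
    url = f"{SITE_URL}/_api/web/Lists/GetByTitle('{LIST_TITLE}')"
    resp = session.get(url, headers={"Accept": "application/json;odata=verbose"})
    resp.raise_for_status()
    return resp.json()["d"]["ItemCount"]

session = requests.Session()          # attach whatever auth your farm requires here
if item_count(session) >= THRESHOLD:
    print("Library is at the list view threshold - archive or save to an alternate library")
else:
    print("Proceed as normal")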
unknown
d13522
train
In C89 (the original "ANSI C"), values used in initialiser lists must be "constant expressions". One type of constant expression is an address constant, and a pointer to an object with static storage duration is an address constant. However, a pointer to an object with automatic storage duration is not an address constant (its value is not known at compile-time), so it cannot be used in an initialiser list. You only need to make the array a have static storage duration - the following will work too: main() { static int a[]={1,2,3}; int *p[]={a,a+1,a+2}; ...... } C99 has relaxed this restriction for initialisers for objects that have automatic storage duration (which is why gcc is OK with it). Why is it so? Initialiser lists are typically used for initialising potentially large objects, like arrays or structures. Objects with automatic storage duration must be initialised anew each time the function is entered, and the C89 rules allow compiler writers to do this initialisation by emitting a simple memcpy() call (using as source a static "template", which corresponds to the initialiser list). Under the C99 rules, the compiler potentially has to emit an arbitrarily complicated block of initialisation code. A: With Visual C, the code works. However, it generates a level 4 warning C4204, meaning they consider this a Microsoft-specific extension to the C standard. As caf and AndreyT said, it is not part of the C standard to allow this. EDIT: By C standard I mean C89.
unknown
d13523
train
Instead of null for the second parameter (URI): TvView view = new TvView(this); view.tune("com.mediatex.tvinput/.hdmi.HDMInputService/HW2", null); You need to make and send a valid Uri: TvView view = new TvView(this) mInitChannelUri = TvContract.buildChannelUriForPassthroughInput("com.mediatex.tvinput/.hdmi.HDMInputService/HW2") view.tune("com.mediatex.tvinput/.hdmi.HDMInputService/HW2", mInitChannelUri) It's a little goofy because you basically send the same input name string twice. But this works for me. Finally, to be TV brand independent you should use the input id parameter instead of static string constants (left as an exercise for the reader haha) A: You don't need to use TvView. Just use an implicit intent with ACTION_VIEW. I tested this code on my Sony TV and it worked well. (I am using Kotlin) // Passthrough inputs are "hardware" inputs like HDMI / Components. Non-passthrough input are // usually internal tv tuners. You can also filter out non-passthrough inputs before this step. val uri = if (inputInfo.isPassthroughInput) TvContract.buildChannelUriForPassthroughInput(inputInfo.id) else TvContract.buildChannelsUriForInput(inputInfo.id) val intent = Intent(Intent.ACTION_VIEW, uri) if (intent.resolveActivity(packageManager) != null) { context.startActivity(intent) }
unknown
d13524
train
Try passing an array as the first arg to form_for, and remove the :url hash. <%= form_for [@high_school, @student], :html => { :multipart => true } %> And be sure that @student is a new record. A: Maybe add delete 'student' => :destroy in routes.rb controller :students do delete 'student' => :destroy end
unknown
d13525
train
Quick-and-dirty solution: select all rows and subtract the non-suspect rows Demo: http://sqlfiddle.com/#!3/f0651/1 Select WORKORDERID, DESCRIPTION, actualstartdate, actualfinishdate FROM [CityWorks].[AZTECA].[WORKORDER] WHERE actualstartdate BETWEEN '2014-05-05 01:00:00.000' AND '2014-06-05 23:00:00.000' EXCEPT Select WORKORDERID, DESCRIPTION, actualstartdate, actualfinishdate FROM [CityWorks].[AZTECA].[WORKORDER] WHERE actualstartdate BETWEEN '2014-05-05 01:00:00.000' AND '2014-06-05 23:00:00.000' AND actualstartdate >= DATEADD(hour, 6,CAST(CAST(actualstartdate AS date) AS datetime)) AND actualfinishdate <= DATEADD(hour,16,CAST(CAST(actualfinishdate AS date) AS datetime)) AND CAST(actualstartdate AS date) = CAST(actualfinishdate AS date) A: SQL Fiddle MS SQL Server 2008 Schema Setup: CREATE TABLE WORKORDER ([WORKORDERID] int, [DESCRIPTION] varchar(3), [actualstartdate] datetime, [actualfinishdate] datetime) ; INSERT INTO WORKORDER ([WORKORDERID], [DESCRIPTION], [actualstartdate], [actualfinishdate]) VALUES (1, 'w1', '2014-05-07 01:00:00', '2014-05-07 05:00:00'), (2, 'w2', '2014-05-07 04:00:00', '2014-05-07 12:00:00'), (3, 'w3', '2014-05-07 05:59:00', '2014-05-07 12:00:00'), (4, 'w4', '2014-05-07 06:00:00', '2014-05-07 12:00:00'), (5, 'w5', '2014-05-07 06:01:00', '2014-05-07 16:00:00'), (6, 'w6', '2014-05-07 06:01:00', '2014-05-07 16:01:00'), (7, 'w7', '2014-05-07 06:01:00', '2014-05-08 12:01:00') ; Query 1: Select WORKORDERID, DESCRIPTION, actualstartdate, actualfinishdate FROM [WORKORDER] WHERE actualstartdate BETWEEN '2014-05-05 01:00:00.000' AND '2014-06-05 23:00:00.000' and (CAST(actualstartdate AS date) = CAST(actualfinishdate AS date) and (((DATEPART(hh, actualstartdate)*3600)+ (DATEPART(mi, actualstartdate)*60)+ DATEPART(ss, actualstartdate)) < 21600 or ((DATEPART(hh, actualfinishdate)*3600)+ (DATEPART(mi, actualfinishdate)*60)+ DATEPART(ss, actualfinishdate)) > 57600) or CAST(actualstartdate AS date) <> CAST(actualfinishdate AS date)) order by actualstartdate desc Results: | WORKORDERID | DESCRIPTION | ACTUALSTARTDATE | ACTUALFINISHDATE | |-------------|-------------|----------------------------|----------------------------| | 6 | w6 | May, 07 2014 06:01:00+0000 | May, 07 2014 16:01:00+0000 | | 7 | w7 | May, 07 2014 06:01:00+0000 | May, 08 2014 12:01:00+0000 | | 3 | w3 | May, 07 2014 05:59:00+0000 | May, 07 2014 12:00:00+0000 | | 2 | w2 | May, 07 2014 04:00:00+0000 | May, 07 2014 12:00:00+0000 | | 1 | w1 | May, 07 2014 01:00:00+0000 | May, 07 2014 05:00:00+0000 | A: You can use the datepart function to look at the hour: SELECT WORKORDERID , DESCRIPTION , actualstartdate , actualfinishdate FROM [CityWorks].[AZTECA].[WORKORDER] WHERE actualstartdate BETWEEN '2014-05-05 01:00:00.000' AND '2014-06-05 23:00:00.000' AND (DATEPART(HOUR, actualstartdate) <= 6 OR DATEPART(HOUR, actualstartdate) => 16) ORDER BY actualstartdate DESC A: Select WORKORDERID, DESCRIPTION, actualstartdate, actualfinishdate FROM [CityWorks].[AZTECA].[WORKORDER] WHERE actualstartdate BETWEEN '2014-05-05 01:00:00.000' AND '2014-06-05 23:00:00.000' and (CONVERT(time,actualstartdate )<'06:00' or CONVERT(time,actualfinishdate )> '16:00') order by actualstartdate desc A: You can easily get the hour portion of the date as follows: DATEPART(hh, actualstartdate); Use it as follows: Select WORKORDERID, DESCRIPTION, actualstartdate, actualfinishdate FROM [CityWorks].[AZTECA].[WORKORDER] WHERE actualstartdate BETWEEN '2014-05-05 01:00:00.000' AND '2014-06-05 23:00:00.000' AND (DATEPART(hh, actualstartdate) < 6 or 
DATEPART(hh, actualfinishdate) >= 16) order by actualstartdate desc This will get all rows between your given dates that started before six in the morning or finished after four in the afternoon. Edit: If you want to allow an actualfinishdate of exactly 16 hundred you can change the second clause to: or actualfinishdate > DATEADD(hh, 16, cast(actualfinishdate as date)) This would check if the actual finish date is after 4.
unknown
d13526
train
Seems like the arr you are sending to the filter function is not an array, which is why the error is saying that arr.filter is not a function. Just tried this and it works, so your function seems ok: function filter(arr, criteria) { return arr.filter(function (obj) { return Object.keys(criteria).every(function (c) { return obj[c] == criteria[c]; }); }); } const arr = [ { name: "David", age: 54 }, { name: "Andrew", age: 32 }, { name: "Mike", age: 54 } ]; const myFilter = {"age": 54}; const result = filter(arr, myFilter); console.log(result);
unknown
d13527
train
Solved - I had missed a step: git add -A # to add the brand new folder structure Then git commit -m 'inicio do projeto' and finally git push -u origin --all A: Try specifying the branch as indicated in the error message. Instead of git push -u origin --all try git push origin master
unknown
d13528
train
Include filter criteria in the DLookup. Concatenate variables, reference to the form field/control is a variable. If there is no match, Null will return. Since in your comment you said you want the message only if there is a match in the query: If Not IsNull(DLookup("ID1", "qry_CheckID", "ID1 = " & Forms!MainForm!ID2)) Then MsgBox "Your ID is bad.", vbOKOnly, "" End If
unknown
d13529
train
There is no built-in way to schedule an AudioWorkletProcessor but it's possible to use the global currentTime variable to build it yourself. The processor would then look a bit like this. class ScheduledProcessor extends AudioWorkletProcessor { constructor() { super(); this.port.onmessage = (event) => this.startTime = event.data; this.startTime = Number.POSITIVE_INFINITY; } process() { if (currentTime < this.startTime) { return true; } // Now it's time to start the processing. } } registerProcessor('scheduled-processor', ScheduledProcessor); It can then be "scheduled" to start when currentTime is 15 like this: const scheduledAudioNode = new AudioWorkletNode( audioContext, 'scheduled-processor' ); scheduledAudioNode.port.postMessage(15);
unknown
d13530
train
Without details I can provide a conceptual solution. Initialize the variable that holds the text to: txt = ''; Then the callback will do: txt = strtrim(sprintf('%s %s',txt, get(handleToTextBox,'String'))); A: letter = get(handles.edit1, 'string'); global txt; txt=[txt letter]; txt=[txt ' ']; set(handles.text1, 'string', txt); That's how I solved it.
unknown
d13531
train
makeApolloClient isnt a function, the file just exports an instance of the apollo client. Just import it as if it's a variable. import client from './app/config/apollo' export default function App() { return ( <ApolloProvider client={client}> <Routes /> </ApolloProvider> ); } A: Storing JWT in local storage and session storage is not secure! Use cookie with http-only enabled! This code automatically reads and sets Authorization header after localStorage.setItem('jwt_token', jwt_token) in SignIn mutation. import {ApolloClient, createHttpLink} from "@apollo/client"; import {setContext} from "@apollo/client/link/context"; import {InMemoryCache} from "@apollo/client"; const apolloHttpLink = createHttpLink({ uri: process.env.REACT_APP_APOLLO_SERVER_URI || 'http://localhost/graphql', }) const apolloAuthContext = setContext(async (_, {headers}) => { const jwt_token = localStorage.getItem('jwt_token') return { headers: { ...headers, Authorization: jwt_token ? `Bearer ${jwt_token}` : '' }, } }) export const apolloClient = new ApolloClient({ link: apolloAuthContext.concat(apolloHttpLink), cache: new InMemoryCache(), }) A: Apollo usequery has a context option that allows you to dynamically change or update the values of the header object. import { ApolloClient, InMemoryCache } from "@apollo/client"; const client = new ApolloClient({ cache: new InMemoryCache(), uri: "/graphql" }); client.query({ query: MY_QUERY, context: { // example of setting the headers with context per operation headers: { special: "Special header value" } } }); The code above was copied from the Apollo docs. To find out more check out https://www.apollographql.com/docs/react/networking/advanced-http-networking/#overriding-options A: Try this const authLink = setContext(async (_, { headers }) => { const token = await AsyncStorage.getItem('token'); return { headers: { ...headers, authorization: token ? `Bearer ${token}` : "", } } }); A: In React Native, AsyncStorage methods are async not like localStorage in Web. And the other methods doesn't work, except this. https://github.com/apollographql/apollo-client/issues/2441#issuecomment-718502308
unknown
d13532
train
To estimate the distribution of that sum, you can repeatedly sample with replacement (and then take the sum of) n variates from sample_data. (sample() places equal probability mass on each element of sample_data, just as the ecdf does, so you don't need to calculate ecdf(sample_data) as an intermediate step.) # Create some example data sample_data <- runif(100) n <- 10 X <- replicate(1000, sum(sample(sample_data, size=n, replace=TRUE))) # Plot the estimated distribution of the sum of n variates. hist(X, breaks=40, col="grey", main=expression(sum(x[i], i==1, n))) box(bty="l") # Plot the ecdf of the sum plot(ecdf(X)) A: First, generalize and simplify: solve for step function CDFs X and Y, independent but not identically distributed. For every step jump x_i and every step jump y_i, there will be a corresponding step jump at x_i + y_i in the CDF of X + Y. So the CDF of X + Y will be characterized by the list: sorted(x + y for x in X for y in Y) That means if there are k points in X's CDF, there will be k^n in (X_1 + ... + X_n). We can cut that down to a manageable number at the end by throwing away all but k again, but clearly the intermediate calculations will be costly in time and space. Also, note that even though the original CDF is an ECDF for X, the result will not be an ECDF for (X_1 + ... + X_n), even if you keep all k^n points. In conclusion, use Josh's solution.
unknown
d13533
train
Instead of using button groups I'd recommend using the Nav component instead styled with the pills modifier class. It's not the same but very close and the Tab panels are built to work with the pills styling. It will solve the problem you have now with the active class remaining on the dropdown options. <div> <ul class="nav nav-pills nav-justified" role="tablist"> <li role="presentation" class="active"><a href="#home" aria-controls="home" role="tab" data-toggle="pill">Home</a></li> <li role="presentation"><a href="#profile" aria-controls="profile" role="tab" data-toggle="pill">Profile</a></li> <li role="presentation" class="dropdown"> <a class="dropdown-toggle" data-toggle="dropdown" href="#" role="button" aria-haspopup="true" aria-expanded="false"> Dropdown <span class="caret"></span> </a> <ul class="dropdown-menu"> <li><a href="#messages" aria-controls="messages" role="tab" data-toggle="pill">Messages</a></li> <li><a href="#settings" aria-controls="settings" role="tab" data-toggle="pill">Settings</a></li> </ul> </li> </ul> <!-- Tab panes --> <div class="tab-content"> <div role="tabpanel" class="tab-pane active" id="home">home</div> <div role="tabpanel" class="tab-pane" id="profile">profile</div> <div role="tabpanel" class="tab-pane" id="messages">messages</div> <div role="tabpanel" class="tab-pane" id="settings">settings</div> </div> </div> https://codepen.io/partypete25/pen/Moooaq?editors=1100 A: I tailored @partypete25 answers, to my needs. Thanks again @partypete25! sample code <style> .nav.nav-pills.nav-justified.nav-group > li:not(:first-child):not(:last-child) > .btn { border-radius: 0; margin-bottom: 0; } @media(max-width:768px){ .nav.nav-pills.nav-justified.nav-group > li:first-child:not(:last-child) > .btn { border-bottom-left-radius: 0; border-bottom-right-radius: 0; margin-bottom: 0; } .nav.nav-pills.nav-justified.nav-group > li:last-child:not(:first-child) > .btn { border-top-left-radius: 0; border-top-right-radius: 0; } .nav.nav-pills.nav-justified.nav-group li + li { margin-left: 0; } } @media(min-width:768px){ .nav.nav-pills.nav-justified.nav-group > li:first-child:not(:last-child) > .btn { border-top-right-radius: 0; border-bottom-right-radius: 0; } .nav.nav-pills.nav-justified.nav-group > li:last-child:not(:first-child) > .btn { border-top-left-radius: 0; border-bottom-left-radius: 0; } } </style> <div class="row"> <div class="col-md-12"> <ul class="nav nav-pills nav-justified nav-group"> <li class="active"><a href="#left" class="btn btn-primary" role="tab" data-toggle="tab">Left</a></li> <li><a href="#middle" class="btn btn-primary" role="tab" data-toggle="tab">Middle</a></li> <li class="dropdown"> <a href="#" class="btn btn-primary dropdown-toggle" data-toggle="dropdown" role="button"> Dropdown <span class="caret"></span> </a> <ul class="dropdown-menu"> <li><a href="#drop1" role="tab" data-toggle="tab">drop 1</a></li> <li><a href="#drop2" role="tab" data-toggle="tab">drop 2</a></li> </ul> </li> </ul> <div class="tab-content"> <div class="tab-pane active" role="tabpanel" id="left">please select 1 of the options in the dropdown</div> <div class="tab-pane" role="tabpanel" id="middle">please try the dropdown on the right</div> <div class="tab-pane" role="tabpanel" id="drop1">please try select drop 2 also, then both are marked under the dropdown</div> <div class="tab-pane" role="tabpanel" id="drop2">please try select drop 1 also, then both are marked under the dropdown</div> </div> </div> </div>
unknown
d13534
train
I would recommend a fresh install of your IDE, which can be done by: * *Find the NetBeans project folder in My Documents *Copy that to another location *Uninstall the IDE from Control Panel *Restart your PC *Download the PHP version here *Install it, start your IDE and just copy-paste the project into the folder created in My Documents for NetBeansProject As far as the preferences are concerned, if they are not very hard to create again, a fresh install is the way to go, as removing the other modules like C, C++, or Java directly from the IDE would still keep some files or cache (NetBeans version dependent) which could still hamper your PC sometimes. A: To install NetBeans you must first have the latest version of the Java SDK installed on your PC. After that, if you want to keep your data, you must have a backup of the www folder which gets created during installation. Then in NetBeans you can start a project and copy-paste your created pages (php, html, css, js) into that project. And you are all set to start from where you left off.
unknown
d13535
train
You can try it like this: # Form class RegistrationForm(UserCreationForm): class Meta: model = CUser fields = ('first_name', 'last_name', 'password1', 'password2') def save(self, **kwargs): email = kwargs.pop('email') user = super(RegistrationForm, self).save(commit=False) user.set_password(self.cleaned_data['password1']) user.email = email user.save() return user # View # your form codes... if form.is_valid(): form.save(email=request.session.get('email')) # rest of the codes What I am doing here is: first I have overridden the save method to catch the email value from the keyword arguments. Then I am passing the email value to the ModelForm's save method. Hope it helps!!
unknown
d13536
train
If I were you, I would use the .split() method to create a list from the text you read. test = re.sub('\ |1|2|3|4|5|6|7|8|9|0|>|s|e|q|:', "", holder) newone = test.split("\n") At this point newone will look like ['', 'ATATAT', '', 'GGGGG', '', 'TTTTT', ''] so to strip out the empty entries: newone = [x for x in newone if x != ''] Now to the error you are receiving, it is because in your list comprehension (line 38 of your code) you are using newone instead of line. Each letter of line is a key of your dictionary base_to_integer, but the KeyError you get is because \n is not a key in the dictionary. Even after making the change I suggest above you will get an error: KeyError: 'ATATAT' So you should change this line to: integer_encoded = [base_to_integer[y] for y in line] Fixing this gives me: [1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0] [0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1] [0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0] Hope this helps.
unknown
d13537
train
It can be a PayPal error - see: https://www.x.com/developers/paypal/forums/instant-payment-notifications-ipn-payment-data-transfer-pdt/ipn-failing-hasn-t-been-changed?page=0%2C0%2C0%2C0%2C0%2C0%2C0%2C0%2C0%2C0%2C0%2C0%2C0%2C0%2C0%2C1 The most important messages from the link above are PayPal responses: July 18: "I have found that the problem you are experiencing with IPN in the Sandbox is being caused by some technical issues with the PayPal system. Our engineers are currently working diligently on a solution to this problem." and July 19: "We are aware of this issue and our engineers are currently working on a solution. Unfortunately, i can't give you an exact timeframe when this will be done, however this should be resolved within the next days." A: Make sure that your IPN URL is actually world-accessible (and not under localhost or a private network) A: If you are trying this on localhost, it won't work. IPN only works on a live, publicly reachable server. And you are saying you are not getting a message from the PayPal sandbox? If you are working in the sandbox, why are you keeping the action URL pointing at the actual PayPal form?
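To illustrate what a world-accessible IPN listener involves, here is a rough Flask sketch of the verification handshake; the sandbox verification URL and the _notify-validate echo are assumptions based on PayPal's documented IPN flow, so check the current PayPal docs before relying on them.

import requests
from flask import Flask, request

app = Flask(__name__)
# Sandbox verification endpoint (assumed); the live endpoint differs.
VERIFY_URL = "https://ipnpb.sandbox.paypal.com/cgi-bin/webscr"

@app.route("/ipn", methods=["POST"])
def ipn_listener():
    # Echo the notification body back to PayPal, prefixed with _notify-validate.
    payload = "cmd=_notify-validate&" + request.get_data(as_text=True)
    verdict = requests.post(
        VERIFY_URL,
        data=payload,
        headers={"Content-Type": "application/x-www-form-urlencoded"},
    ).text

    if verdict == "VERIFIED":
        pass  # process the payment fields from request.form here
    # Always answer 200 so PayPal stops retrying the notification.
    return "", 200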
unknown
d13538
train
Is the el expression evaluated to integer type? A: I have no idea, but you can try a couple of these variations <h:inputSecret id="password" value="#{userBean.user.password}" maxlength="#{myBean.maxSize}"> <f:validateLength minimum="#{myBean.minSize}"/> </h:inputSecret> <h:message for="password" /> <h:inputSecret id="password" value="#{userBean.user.password}" maxlength="#{myBean.maxSize}"> <f:validator validatorId="javax.faces.validator.LengthValidator"> <f:attribute name="minimum" value="#{myBean.minSize}"/> </f:validator> </h:inputSecret> <h:message for="password" />
unknown
d13539
train
So, I've found out how. Once the "new user" installs my app and signs up with his facebook account I can execute this GraphRequest request = GraphRequest.newGraphPathRequest( accessToken, "/me/apprequests", new GraphRequest.Callback() { @Override public void onCompleted(GraphResponse response) { // process the info received } }); request.executeAsync(); The above code will get all/any apprequests from my app only and I only have to check who sent it etc. A: If the App is already installed on the device, are you getting the identity of the "User" who invited the "Friend"? If yes, how are you doing this? Secondly, Applink will only pass the information if it is opened through the notification on Facebook, and not directly through the home screen.
unknown
d13540
train
* *Put your cats into a Map<Integer, Cat> *Get the values of the resulting Map *If you really need a List create a new List from the values of the Map Here is how to merge the lists as you want to be able to overwrite: Map<Integer, Cat> map = new HashMap<>(); for (Cat cat : catsLegs) { map.put(cat.getId(), cat); } for (Cat cat : catsHeads) { map.put(cat.getId(), cat); } Collection<Cat> cats = map.values(); // If you really need a List, you can create a new one as follows List<Cat> catsList = new ArrayList<>(cats); A: The simplest way is to loop through the List of cats and check if (catHead.id == catTail.id) but that's not really cool. A better solution is to store your data inside a HashMap instead of a List. This way you can use the hash key to be the ID and the hash value to be the Cat object. Retrieving the object is much easier this way and it is of course much faster. A: This is what I made. It transfers all the objects' heads, and I have a fresh catList.size() because I don't want old currentCats objects to be in the final list. public static void updateCats(List<Cat> catsFromServer) { setAllCatsLegs(catsFromServer); List<Cat> currentCats = getAllCats(); for (Cat currentCat : currentCats) { for (Cat fromServer : catsFromServer) { if (currentCat.equals(fromServer)) { fromServer.setHeads(currentCat.getHeads()); } } } Cat.saveMultiple(catsFromServer); } upd: List<Cat> currentCats = getAllCats(); for (Cat cat : catsFromServer) { int index = currentCats.indexOf(cat); if (index != -1) { cat.setHeads(currentCats.get(index).getHeads()); } }
unknown
d13541
train
Telegram API documentation is tricky, but once you get the hang of the authors writing style and work through the AuthKey Creation you will be well on your way. The starting point in the documentation is: https://core.telegram.org/mtproto/auth_key and https://core.telegram.org/mtproto/samples-auth_key I had put together a detailed write up for most of this here The procedures and patterns you build along the way will be usable for further exploration of the Telegram API cheers.
unknown
d13542
train
Finally, I tried one solution to solve this issue (this is not an exact solution; I only tried to fix the cached-file redirects). Actually, the problem occurs due to the browser cache, not the htaccess above. (Previously I had tried in both incognito mode and a cache-cleared browser, but these redirects still happened for some files.) So I tried changing the names of the files in the screenshot above and reloading; after that the problem was solved. My guess is that this problem happens due to the cache, because previously we used leverage browser caching in the htaccess, and it was later removed to solve the problem. The htaccess above works fine now.
unknown
d13543
train
The reason your dialog doesn't render correctly is that the main div in dialog.html doesn't have any sizes. Put style="width: 100px; height: 100px;" on the md-dialog and you will see that everything else works correctly.
unknown
d13544
train
An applicative lets you apply a function in a context to a value in a context. So for instance, you can apply some((i: Int) => i + 1) to some(3) and get some(4). Let's forget that for now. I'll come back to that later. List has two representations, it's either Nil or head :: tail. You may be used to fold over it using foldLeft but there is another way to fold over it: def foldr[A, B](l: List[A], acc0: B, f: (A, B) => B): B = l match { case Nil => acc0 case x :: xs => f(x, foldr(xs, acc0, f)) } Given List(1, 2) we fold over the list applying the function starting from the right side - even though we really deconstruct the list from the left side! f(1, f(2, Nil)) This can be used to compute the length of a list. Given List(1, 2): foldr(List(1, 2), 0, (i: Int, acc: Int) => 1 + acc) // returns 2 This can also be used to create another list: foldr[Int, List[Int]](List(1, 2), List[Int](), _ :: _) //List[Int] = List(1, 2) So given an empty list and the :: function we were able to create another list. What if our elements are in some context? If our context is an applicative then we can still apply our elements and :: in that context. Continuing with List(1, 2) and Option as our applicative. We start with some(List[Int]())) we want to apply the :: function in the Option context. This is what the F.map2 does. It takes two values in their Option context, put the provided function of two arguments into the Option context and apply them together. So outside the context we have (2, Nil) => 2 :: Nil In context we have: (Some(2), Some(Nil)) => Some(2 :: Nil) Going back to the original question: // do a foldr DList.fromList(l).foldr(F.point(List[B]())) { // starting with an empty list in its applicative context F.point(List[B]()) (a, fbs) => F.map2(f(a), fbs)(_ :: _) // Apply the `::` function to the two values in the context } I am not sure why the difference DList is used. What I see is that it uses trampolines so hopefully that makes this implementation work without blowing the stack, but I have not tried so I don't know. The interesting part about implementing the right fold like this is that I think it gives you an approach to implement traverse for algebric data types using catamorphisms. For instance given: trait Tree[+A] object Leaf extends Tree[Nothing] case class Node[A](a: A, left: Tree[A], right: Tree[A]) extends Tree[A] Fold would be defined like this (which is really following the same approach as for List): def fold[A, B](tree: Tree[A], valueForLeaf: B, functionForNode: (A, B, B) => B): B = { tree match { case Leaf => valueForLeaf case Node(a, left, right) => functionForNode(a, fold(left, valueForLeaf, functionForNode), fold(right, valueForLeaf, functionForNode) ) } } And traverse would use that fold with F.point(Leaf) and apply it to Node.apply. Though there is no F.map3 so it may be a bit cumbersome. A: This not something so easy to grasp. I recommend reading the article linked at the beginning of my blog post on the subject. I also did a presentation on the subject during the last Functional Programming meeting in Sydney and you can find the slides here. If I can try to explain in a few words, traverse is going to traverse each element of the list one by one, eventually re-constructing the list (_ :: _) but accumulating/executing some kind of "effects" as given by the F Applicative. If F is State it keeps track of some state. If F is the applicative corresponding to a Monoid it aggregates some kind of measure for each element of the list. 
The main interaction of the list and the applicative is with the map2 application where it receives a F[B] element and attach it to the other F[List[B]] elements by definition of F as an Applicative and the use of the List constructor :: as the specific function to apply. From there you see that implementing other instances of Traverse is only about applying the data constructors of the data structure you want to traverse. If you have a look at the linked powerpoint presentation, you'll see some slides with a binary tree traversal. A: List#foldRight blows the stack for large lists. Try this in a REPL: List.range(0, 10000).foldRight(())((a, b) => ()) Typically, you can reverse the list, use foldLeft, then reverse the result to avoid this problem. But with traverse we really have to process the elements in the correct order, to make sure that the effect is treated correctly. DList is a convenient way to do this, by virtue of trampolining. In the end, these tests must pass: https://github.com/scalaz/scalaz/blob/scalaz-seven/tests/src/test/scala/scalaz/TraverseTest.scala#L13 https://github.com/scalaz/scalaz/blob/scalaz-seven/tests/src/test/scala/scalaz/std/ListTest.scala#L11 https://github.com/scalaz/scalaz/blob/scalaz-seven/core/src/main/scala/scalaz/Traverse.scala#L76
unknown
d13545
train
You can fix this by downgrading to v13 of Node. Add: "engines": { "node": "13.x.x" } to your package.json and Heroku will respect this version. The issue is being tracked here
unknown
d13546
train
This is expiration of the session key, which is different from the timestamp. For example, if you turn establishSecurityContext off (or do not use CreateSecureConversationSecurity) you should not get this exception. Otherwise, try to increase additional values such as InactivityTimeout, IssuedCookieLifetime, NegotiationTimeout, SessionKeyRenewalInterval and SessionKeyRolloverInterval. If you turn on WCF tracing on the server and see the exact stack trace of the error, maybe we can drill down to the exact property.
unknown
d13547
train
There is no direct way. But you could publish your own Observable. The main problem is, you need to return a value in the example function. One solution could be to create an Observable in which you pass a TaskCompletionSource. This would allow you to set the result from the Event handler. public class Request { public int Parameter { get; } public Request(int parameter) { Parameter = parameter; } public TaskCompletionSource<IHttpActionResult> Result { get; } = new TaskCompletionSource<IHttpActionResult>(); } public class Handler { public Subject<Request> ExampleObservable { get; } = new Subject<Request>(); [Route("{id}")] [AcceptVerbs("GET")] public async Task<IHttpActionResult> example(int id) { var req = new Request(id); ExampleObservable.OnNext(req); return await req.Result.Task; } } In the example above, we push a Request in the ExampleObservable. You can subscribe to this and use Request.Result.SetResult(...) to return the request. ExampleObservable.Subscribe(req => req.Result.SetResult(Ok(Service.GetExample(id)));
unknown
d13548
train
In Expression Tree, string interpolation is converted to string.Format. Analogue of your sample will be: Func<SomeClass, string> keyFactory = x => string.Format("{0}|{1}", x.PropertyOne, x.PropertoTwo); The following function created such delegate dynamically: private static MethodInfo _fromatMethodInfo = typeof(string).GetMethod(nameof(string.Format), new Type[] { typeof(string), typeof(object[]) }); public static Func<T, string> GenerateKeyFactory<T>(IEnumerable<string> propertyNames) { var entityParam = Expression.Parameter(typeof(T), "e"); var args = propertyNames.Select(p => (Expression)Expression.PropertyOrField(entityParam, p)) .ToList(); var formatStr = string.Join('|', args.Select((_, idx) => $"{{{idx}}}")); var argsParam = Expression.NewArrayInit(typeof(object), args); var body = Expression.Call(_fromatMethodInfo, Expression.Constant(formatStr), argsParam); var lambda = Expression.Lambda<Func<T, string>>(body, entityParam); var compiled = lambda.Compile(); return compiled; } Usage: var keyFactory = GenerateKeyFactory<SomeClass>(new[] { "PropertyOne", "PropertyTwo", "PropertyThree" });
unknown
d13549
train
The working directory should not have to be the directory that contains your DLLs. In fact, you definitely don't want that to be a requirement for running your application. Not only is it a hugely unexpected failure mode, but it could also be a potential security risk. Put the required DLLs in the same directory as your application's executable. That's the first place that the loader will look. If necessary, use a post-build event in your library projects to copy them there. A: Well, since nobody could help me, I decided to change the output directory to the bin folder so VS will start my applications from the correct folder. As for getting rid of all the extra files that don't belong there, I will find a way later.
unknown
d13550
train
As already mentioned by @ForceBru, you need a Python webserver. If this can be useful to you, this is a possible insecure implementation using Flask: from flask import Flask from flask import request app = Flask(__name__) @app.route('/turnOn') def hello_world(): k = request.args.get('key') if k == "superSecretKey": # Do something .. return 'Ok' else: return 'Nope' If you put this in a file named app.py and, after having installed Flask (pip install flask), you run flask run, you should be able to see Ok when visiting the url http://localhost:5000/turnOn?key=superSecretKey . You could write a brief HTML GUI with a button and a key field in a form, but I leave that to you (you need to have fun too!). To avoid potential security issues you could use a POST method and HTTPS. Look at the Flask documentation for more info.
unknown
d13551
train
Found the issue. Because in on_press I was not using global pressed_key, it was creating a local variable. Here is the working code. from pynput import mouse, keyboard from pynput.keyboard import Key, Listener import pickle x_pos = [] y_pos = [] both_pos = [] pressed_key = None def on_press(key): global pressed_key if (key==keyboard.Key.f7): pressed_key = "f7" print(pressed_key) else: pressed_key = None def on_release(key): global pressed_key pressed_key = None def on_click(x, y, button, pressed): if pressed: #print ("{0} {1}".format(x,y)) print(pressed_key) if pressed_key == "f7": x_pos.append("{0}".format(x,y)) y_pos.append("{1}".format(x,y)) #print("test" + x_pos + y_pos) print (x_pos + y_pos) #both_pos = x_pos, y_pos else: pass print (x_pos + y_pos) mouse_listener = mouse.Listener(on_click=on_click) mouse_listener.start() with keyboard.Listener(on_press = on_press, on_release = on_release) as listener: try: #listener.start() listener.join() except Exception as e: print('Done'.format(e.args[0]))
unknown
d13552
train
There are a few issues here. The = operator is the match operator; it is not assignment. To explain the error, syntax-wise, this looks like function invocation on the left hand side of a match, which is not allowed. But this is beside the point of your actual goal. If you want a set of user models that are updated with the new bcrypt information, you need to use a map function: users = Enum.map(users, fn %User{id: id}=user -> %User{user| token: Comeonin.Bcrypt.hashpwsalt("#{id}")} end) You have to remember that everything in Elixir is immutable.
unknown
d13553
train
First, documentation. If the parameter is variadic, the user now needs to check some other source to find out that this really wants something that will takes one template parameter. Second, early checking. If you accidentally pass two arguments to T in S, the compiler won't tell you if it's variadic until a user actually tries to use it. Third, error messages. If the user passes a template that actually needs two parameters, in the variadic version the compiler will give him an error message on the line where S instantiates T, with all the backtrace stuff in-between. In the fixed version, he gets the error where he instantiates S. Fourth, it's not necessary, because template aliases can work around the issue too. S<vector> s; // doesn't work // but this does: template <typename T> using vector1 = vector<T>; S<vector1> s; So my conclusion is, don't make things variadic. You don't actually gain flexibility, you just reduce the amount of code the user has to write a bit, at the cost of poorer readability. A: If you know already that you will need it with high probability, you should add it. Otherwise, You Ain't Gonna Need It (YAGNI) so it would add more stuff to maintain. It's similar to the decision to have a template parameter in the first place. Especially in a TDD type of environment, you would refactor only when the need arises, rather than do premature abstraction. Nor is it a good idea to apply rampant abstraction to every part of the application. Rather, it requires a dedication on the part of the developers to apply abstraction only to those parts of the program that exhibit frequent change. Resisting premature abstraction is as important as abstraction itself Robert C. Martin Page 132, Agile Principles, Patterns and Practices in C# The benefits of variadic templates can be real, as you point out yourself. But as long as the need for them is still speculative, I wouldn't add them.
unknown
d13554
train
Look for existing solutions. Things like Umbraco ( http://umbraco.org) and N2CMS ( http://umbraco.org) and Microsoft Orchard ( http://orchard.codeplex.com) and others are simple open-source options (not complicated) and should all be good starting points for your project; develop any functionality you need that doesn't exist per their existing plugin architecture. This will save you not just from re-inventing the wheel but also a lot of time and effort on functionality that is already out there.
unknown
d13555
train
Turns out you need to use the Plaintext form. A: This error can occur when one or more pre-requisites for creating the secret have not been followed. There are a few pre-requisites when creating the secret. See the AWS document for reference. Listing them below for quick access. * *Choose Other type of secrets (e.g. API key) for the secret type. *Your secret name must have the prefix AmazonMSK_ *Your user and password data must be in the following format to enter key-value pairs using the Plaintext option. { "username": "alice", "password": "alice-secret" } A: In addition to @Sourabh 's answer, a secret created with the default AWS KMS key cannot be used with an Amazon MSK cluster, so what you need to do is: * *Open the Secrets Manager console. *In Secret name, choose your secret. *Choose Actions, then from the encryption key dropdown list select the AWS KMS key, select the check box for Create new version of secret with new encryption key, and then choose Save. That should solve this error: Amazon MSK failed to associate 1 secret for cluster sasl-cluster. Wait for a few minutes and try again. If the problem persists, see AWS Support Center . API Response : The provided secret is encrypted with the default key. You can't use this secret with Amazon MSK. A: This is happening because at the time of secret creation you selected the default AWS KMS option. First you have to create a new KMS key, then use it when creating the secret in Secrets Manager. After following all the steps you will not get this error.
unknown
d13556
train
Your code to fill the DataTable is not correct - please try the example below. private void BindGridview() { string[,] arrlist = { {"Suresh", "B.Tech"}, {"Nagaraju","MCA"}, {"Mahesh","MBA"}, {"Mahendra","B.Tech"} }; DataTable dt = new DataTable(); DataRow dr = null; dt.Columns.Add(new DataColumn("Name", typeof(string))); dt.Columns.Add(new DataColumn("Education", typeof(string))); for (int i = 0; i < arrlist.GetLength(0);i++) { dr = dt.NewRow(); dr["Name"] = arrlist[i,0].ToString(); dr["Education"] = arrlist[i,1].ToString(); dt.Rows.Add(dr); } gvarray.DataSource = dt; gvarray.DataBind(); } A: Private Sub Button3_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button3.Click Dim str_array(,) As String ' 2D array declaration str_array = {{"Suresh", "B.Tech"}, {"Nagaraju", "MCA"}, {"Mahesh", "MBA"}, {"Mahendra", "B.Tech"}} ' array initialization For i As Integer = 0 To (str_array.Length / 2) - 1 'limit is set to this because Length counts every element of the 2D array, so divide by the number of columns gv.Rows.Add(str_array(i, 0), str_array(i, 1)) Next End Sub
unknown
d13557
train
The problem with your query is that AND rd.CandidateID = 9 in the WHERE clause effectively "kills" the full join by requiring that RoundDetails be present. Move this part of the condition into the ON clause of the join, and replace the join with LEFT OUTER, because you do not need a full outer join anyway: select q.QuestionID , q.TotalMarks , q.Question , isnull(rd.MarksObtained, 0) MarksObtained , convert(bit, isnull(rd.QuestionID, 0)) Attended from Questions q left outer join RoundDetails rd ON rd.QuestionID = q.QuestionID AND rd.CandidateID = 9 where q.SubjectID = 2 AND q.IsActive = 1 As a general rule, you should be extremely careful adding conditions on outer-joined tables in the WHERE clause, because any condition that is not null-preserving will convert your outer join to an inner join. A: Different SQL JOINs INNER JOIN: Returns all rows when there is at least one match in BOTH tables LEFT JOIN: Return all rows from the left table, and the matched rows from the right table RIGHT JOIN: Return all rows from the right table, and the matched rows from the left table FULL JOIN: Return all rows when there is a match in ONE of the tables As you are using a Full Join you will only get the questions that have an answer attempted. You need to use a Left Join instead. Take a look at the joins tutorials on W3Schools
unknown
d13558
train
OK - I found something that works. Ugly, but works: Sub EmphesizeSelectedText(color As Long) Dim msg As Outlook.MailItem Dim insp As Outlook.Inspector Set insp = Application.ActiveInspector If insp.CurrentItem.Class = olMail Then Set msg = insp.CurrentItem If insp.EditorType = olEditorWord Then Set document = msg.GetInspector.WordEditor Set rng = document.Application.Selection With rng.font .Bold = True .color = color End With End If End If Set insp = Nothing Set rng = Nothing Set hed = Nothing Set msg = Nothing End Sub Eventually I found a reference that WordEditor returns a Document object. From there it was 2 hrs of going over MSDN's very slow web-help to find out that, to get the selected text, I needed to go up one level to the Application. Important note - changing rng.Style.Font did not do what I wanted it to do; it changed the entire document. When I started using With rng.Font my problem was solved (thanks to Excel's macro recording abilities for showing me the correct syntax). A: Annotations translated from German. Option Explicit 'Sub EmphesizeSelectedText(color As Long) Sub EmphesizeSelectedText() Dim om_msg As Outlook.MailItem Dim oi_insp As Outlook.Inspector Dim ws_selec As Word.Selection Dim wd_Document As Word.Document Dim str_test As String Dim lng_color As Long lng_color = 255 'Access the active e-mail Set oi_insp = Application.ActiveInspector() 'Check whether it really is an e-mail If oi_insp.CurrentItem.Class = olMail Then Set om_msg = oi_insp.CurrentItem If oi_insp.EditorType = olEditorWord Then ' there are also "olEditorHTML", "olEditorRTF", "olEditorText" and "olEditorWord" ' but for me it is always "olEditorWord" (= 4) - no matter what I select in the e-mail editor ' Set wd_Document = om_msg.Getinspector.WordEditor ' does the same as the next line Set wd_Document = oi_insp.WordEditor Set ws_selec = wd_Document.Application.Selection str_test = ws_selec.Text Debug.Print ws_selec.Text ws_selec.Text = "foo bar" If om_msg.BodyFormat <> olFormatPlain Then ' even if om_msg.BodyFormat = olFormatPlain, oi_insp.EditorType can still be olEditorWord ' but then formatting does not work -> Error !!! With ws_selec.Font .Bold = True .color = lng_color ' = 255 = red .color = wdColorBlue End With End If ws_selec.Text = str_test End If End If Set oi_insp = Nothing Set ws_selec = Nothing Set om_msg = Nothing Set wd_Document = Nothing End Sub References ("Verweise" - I do not know what it is called in the English version): * *Visual Basic for Applications *Microsoft Outlook 15.0 Object Library *OLE Automation *Microsoft Office 15.0 Object Library *Microsoft Word 15.0 Object Library Gruz $3v|\| A: Another example: Option Explicit Private Sub Test_It() Dim om_Item As Outlook.MailItem Dim oi_Inspector As Outlook.Inspector Dim wd_Doc As Word.Document Dim wd_Selection As Word.Selection Dim wr_Range As Word.Range Dim b_return As Boolean Dim str_Text As String str_Text = "Hello World" 'Access the active e-mail Set oi_Inspector = Application.ActiveInspector() Set om_Item = oi_Inspector.CurrentItem Set wd_Doc = oi_Inspector.WordEditor 'Access the text selection in the e-mail Set wd_Selection = wd_Doc.Application.Selection wd_Selection.InsertBefore str_Text 'Access the 'virtual' selection 'wr_Range must be set to the whole document ! Set wr_Range = wd_Doc.Content 'Search the e-mail text With wr_Range.Find .Forward = True .ClearFormatting .MatchWholeWord = True .MatchCase = False .Wrap = wdFindStop .MatchWildcards = True .Text = "#%*%#" End With b_return = True Do While b_return b_return = wr_Range.Find.Execute If b_return Then ' a match was found str_Text = wr_Range.Text 'cut off the leading text and the end 'str_TextID = Mid$(str_TextID, 11, Len(str_TextID) - 12) MsgBox ("The following key was also found:" & vbCrLf & str_Text) End If Loop 'change the active Range 'wr_Range must be set to the whole document ! Set wr_Range = wd_Doc.Content wr_Range.Start = wr_Range.Start + 20 wr_Range.End = wr_Range.End - 20 'format the text With wr_Range.Font .ColorIndex = wdBlue .Bold = True .Italic = True .Underline = wdUnderlineDotDashHeavy End With 'release the variables used Set oi_Inspector = Nothing Set om_Item = Nothing Set wd_Doc = Nothing Set wd_Selection = Nothing Set wr_Range = Nothing End Sub Gruz $3v|\|
unknown
d13559
train
Here's the bible for Access corruption issues. http://www.granite.ab.ca/access/corruptmdbs.htm First things first: try to decompile and recompile (check the help files on how to do that). Next, try creating a second database and importing your form from the corrupt one. Lastly, use SaveAsText and LoadFromText to export and reimport the form. A: The lack of an error message makes this extra challenging. OTOH, without an error message, how do you know the form hasn't opened? Could it be open but hidden? Try these two commands in the Immediate Window: DoCmd.OpenForm "YourForm", acNormal,,,,acWindowNormal ? Forms("YourForm").Name Do you get any error messages then? If so, tell us what error messages and at which step they occur.
unknown
d13560
train
Create a color.xml file in the values folder. Code for color.xml: <?xml version="1.0" encoding="utf-8"?> <resources> <color name="dark_blue_Shade1">#000080</color> </resources> If color.xml already exists there, then just put the <color name="dark_blue_Shade1">#000080</color> inside the <resources> </resources> tag A: Create a color.xml file inside the values folder: <?xml version="1.0" encoding="utf-8"?> <resources> <color name="ColorPrimary">#8E67E0</color> <color name="ColorPrimaryDark">#59419B</color> <color name="LightPrimaryColor">@android:color/holo_blue_bright</color> <color name="AccentColor">#ff4081</color> <color name="PrimaryText">#212121</color> <color name="SecondarText">#727272</color> </resources> and then change the style.xml file like this: <resources> <!-- Base application theme. --> <style name="AppTheme" parent="Theme.AppCompat.Light.NoActionBar"> <item name="colorPrimary">@color/ColorPrimaryDark</item> <item name="colorPrimaryDark">@color/ColorPrimaryDark</item> <item name="colorAccent">@color/AccentColor</item> </style> </resources> From this your entire project will use these colors. colorPrimaryDark is your status bar color; you don't want to apply it manually - the system will use colorPrimaryDark as your status bar color.
unknown
d13561
train
With options(scipen=999) you get the full number printed, without e+03 and so on. More generally, options(scipen=...) takes a penalty value: positive values bias printing towards fixed notation, negative values towards scientific notation.
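A quick illustration (the numbers are arbitrary):
options(scipen = 999)
1234567 / 0.0001
# prints 12345670000 instead of 1.234567e+10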
unknown
d13562
train
You cannot bind to a field. Change your Url field in your ImageList class to a property: public class ImageList { public string Url {get; set;} public ImageList(string _url) { Url = _url; } }
unknown
d13563
train
It seems to be a better approach simply to generate the ids (or any other attributes of those links) dynamically, but in a way that lets you map a given generated attribute value (for instance an id of customers1) to the hash key of your object connected to that link (customers1 would lead you to the key 1 in your hash, for instance). You can then get that attribute value from the click event e (by examining the value of e.target - http://api.jquery.com/category/events/event-object/) passed to your .click(function(e) {...}) and then simply read the hash property you want to populate your modal with.
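A rough jQuery sketch of that idea (the customers object, the #list container and the id prefix are all made up):
// ids like "customers1" are generated from the hash keys
var customers = { 1: { name: "Acme" }, 2: { name: "Globex" } };
$.each(customers, function (key, value) {
  $('<a/>', { id: 'customers' + key, href: '#', text: value.name }).appendTo('#list');
});
$('#list').on('click', 'a', function (e) {
  var key = e.target.id.replace('customers', ''); // recover the hash key from the generated id
  var record = customers[key];
  // populate the modal from record here
});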
unknown
d13564
train
try this as a boilerplate function chunker($arr, $l) { return array_chunk($arr, $l); } print_r(chunker($hap, 3)); /* Array ( [0] => Array ( [0] => 14477 [1] => 14478 [2] => 14479 ) [1] => Array ( [0] => 14485 [1] => 14486 [2] => 14487 ) ) */ UPDATE php > $h = [ "14477,14478,14479,14485,14486,14487" ]; php > $hap = explode(",", $h[0]); php > print_r($hap); Array ( [0] => 14477 [1] => 14478 [2] => 14479 [3] => 14485 [4] => 14486 [5] => 14487 ) php > print_r(chunker($hap, 3)); Array ( [0] => Array ( [0] => 14477 [1] => 14478 [2] => 14479 ) [1] => Array ( [0] => 14485 [1] => 14486 [2] => 14487 ) ) php > A: A possible solution: $input = array('14477,14478,14479,14485,14486,14487'); $output = array_map( function (array $a){ return implode(',', $a); }, array_chunk( explode(',', $input[0]), 3 ) ); Read it from inside out: * *explode() splits the string $input[0] using comma (,) as delimiter and returns an array; *array_chunk() splits the array into chunks of size 3; it returns an array of arrays, each inner array contains 3 elements (apart from the last one that can contain less); *array_map() applies the function it receives as its first argument to each value of the array it gets as its second argument (the array of arrays returned by array_chunk()); it returns an array whose values are the values returned by the function; *the anonymous function passed to array_map() gets an array (of size 3 or less) and uses implode() to join its elements into a string, using comma (,) to separate the values and returns the string; *array_map() puts together all the values returned by the anonymous function (one for each chunk of 3 elements of the array) into a new array it returns. The output (print_r($output)) looks like this: Array ( [0] => 14477,14478,14479 [1] => 14485,14486,14487 )
unknown
d13565
train
In order to remove the rows containing the same data, you can order them based on the contained elements, so there is no difference between rows containing the same pair of Client_Reference values, and then delete the duplicates. sensible_matches <- sensible_matches[!duplicated(t(apply(sensible_matches,1,sort))),] View(sensible_matches %>% filter(Client_Reference.x != Client_Reference.y))
unknown
d13566
train
I imagine you are using it to get database-like data or config data; this is normally done in the model, though there is no restriction on where you do it. You could do the extracting and preparing of the data in the model and the logic in the controller. Something like loading the config parameters, putting them in variables, and then using these variables in the controller. Also, you may use CakePHP's XML library to do all this. Since it is a library, you MAY use it in either the controller or the model. Hope this helps you :)
unknown
d13567
train
HttpClient versions 4.x and 5.x wrap the HTTP response entity with a proxy that releases the underlying connection back to the pool upon reaching the end of the message stream. In all other cases HttpClient assumes the message has not been fully consumed and the underlying connection cannot be re-used. https://github.com/apache/httpcomponents-client/blob/master/httpclient5/src/main/java/org/apache/hc/client5/http/impl/classic/ResponseEntityProxy.java
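In practice that means fully consuming (or closing) the response is what lets the connection go back to the pool. A rough 4.x sketch (the URL is just a placeholder):
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.util.EntityUtils;

CloseableHttpClient client = HttpClients.createDefault();
try (CloseableHttpResponse response = client.execute(new HttpGet("http://example.com"))) {
    // EntityUtils.toString reads the stream to its end, which triggers the proxy's release logic
    String body = EntityUtils.toString(response.getEntity());
}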
unknown
d13568
train
Using pandas: import pandas as pd data = {'bin1': {'A': 14545, 'B': 18579, 'C': 5880, 'D': 20771, 'E': 404396}, 'bin2': {'A': 13200, 'D': 16766, 'E': 200344}, } df = pd.DataFrame(data).T df.fillna(0, inplace=True) print(df) prints A B C D E bin1 14545 18579 5880 20771 404396 bin2 13200 0 0 16766 200344 The df.fillna(0) replaces missing values with 0. A: You can use d[j].get(q, '0') instead of d[j][q] to fill in 0 for all missing entries: # print the table header labs = sorted(max(d.values(), key=len)) print "bin" + "\t" + "\t".join(labs) # loop and print the values for j in d: print j + "\t" + "\t".join(str(d[j].get(q, '0')) for q in labs) I also made some slight modifications to the other parts of the code so the columns are ordered.
unknown
d13569
train
It truly is a bug in rails. I created a patch and pull request to fix it.
unknown
d13570
train
This is a cursor object. With the cursor, you would do something like var cursor = collection.find({}); cursor.each(...); See this link for more details: https://mongodb.github.io/node-mongodb-native/markdown-docs/queries.html Note: If you know you have a small result set, you can use find({}).toArray() which will return a list of documents.
unknown
d13571
train
This formula works for your data set. It extracts everything after the last X in the Item and removes the Unit of Measure text as it is specified in the second column. =SUBSTITUTE(RIGHT(A2,LEN(A2)-FIND("@",SUBSTITUTE(A2,"X","@",LEN(A2)-LEN(SUBSTITUTE(A2,"X",""))),1)),B2,"")+0 A: With O365 you have the following approach in cell C1: =LET(x, TEXTAFTER(A2:A5,"X", -1), size, TEXTSPLIT(x, {"ML","G","ML","L"}), unit, SUBSTITUTE(x, size, ""), VSTACK({"Unit of Measure","Pack Size"}, HSTACK(unit, size))) Here is the output, it generates the header, and extract the unit of measure and the pack size, spilling the entire output at once: It assumes the information to find comes after the first X character in reverse order. If you want the pack size as numeric, then replace size inside HSTACK with: 1*size.
unknown
d13572
train
For anyone who's curious I had to add this code to my module.rules array. { test: /\.png$/, loader: 'file-loader' }
unknown
d13573
train
You have three options. using pandas: dfObj.groupby('Type')['q'].value_counts().plot(kind='barh') using pandas stacked bars: dfObj.groupby('Type')['q'].value_counts().unstack(level=0).plot.barh(stacked=True) using seaborn.catplot: import seaborn as sns df2 = dfObj.groupby('Type')['q'].value_counts().rename('count').reset_index() sns.catplot(data=df2, x='q', hue='Type', y='count', kind='bar')
unknown
d13574
train
Why do you want to write functionality that already exists? Excel has it - you can import any web page (just note that Excel uses the IE engine to render the tags). Here are the steps: open Excel; go to the Data tab; click From Web; the New Web Query child window opens. Type the address into the Address Bar and go to the web page you want to import. After the page loads into the window, click the Import button - that's all. (I assumed you are using MS Office 2007; if you have a different version, the steps differ.) Just one more note: MS Office uses the IE rendering engine, so not all tags are supported, so if it looks ugly, do not blame me ;) A: see Create Excel (.XLS and .XLSX) file from C# A: I'm not sure how you want to export HTML to Excel. In Excel we are talking about rows and columns while HTML is a document. You would probably want to export HTML to Word or PDF. In any case, for creating Excel files in C# I've used the following 2 libraries: http://www.codeproject.com/KB/office/biffcsharp.aspx and http://code.google.com/p/excellibrary/
unknown
d13575
train
You have a space in the Incident Date column. If you want Spark to know the column name contains a space, wrap it in backticks (`) at the start and end, the same as for the Incident Number column. SELECT `Incident Number` FROM fireIncidents where `Incident Date`='04/04/2016' If your Incident Date column is a date, you can cast it to Spark's date format: spark.sql("""select `Incident Date`, to_date(`Incident Date`, 'dd/MM/yyyy') FROM fireIncidents""").show() which yields +-------------+----------------------------------+ |Incident Date|to_date(Incident Date, dd/MM/yyyy)| +-------------+----------------------------------+ | 04/04/2016| 2016-04-04| | 04/04/2016| 2016-04-04| | 04/04/2016| 2016-04-04| +-------------+----------------------------------+
unknown
d13576
train
SEO is a wide field and PageRank is one of possibly thousands of signals in Google's ranking algorithms: PageRank works by counting the number and quality of links to a page to determine a rough estimate of how important the website is... Wikipedia A: Also, what sort of time scale are you talking about? As in, if you did all this in the last week, it can take a long time for Google's bots to re-index your site. Proper SEO can take months before reaping the benefits.
unknown
d13577
train
Please try this: use the complete url in @font-face, such as below: @font-face { font-family : 'G....'; src : url('/content/fonts/.....'); .... } A: Worked around it with ../../fonts - it seems I need to go up to the wwwroot level
unknown
d13578
train
Instead of using the module method call mlflow.log_metric to log the metrics, use the client MlflowClient which takes run_id as the parameter. Following code logs the metrics in the same run_id passed as the parameter. from mlflow.tracking import MlflowClient from azureml.core import Run run_id = Run.get_context(allow_offline=True).id MlflowClient().log_metric(run_id, "precision", 0.91)
unknown
d13579
train
You cannot use the same function to extract values for classes with different attributes. You need to assign a default value for each attribute in each class , or change the function to check the type of the movie in your loop. For example : for movie in movies: if isinstance(movie, Movie): # add movie attributes to the content content += movie_tile_content.format( movie_title=movie.title, poster_image_url=movie.poster, trailer_youtube_id=trailer_youtube_id, film_badge=movie.badge, film_description=movie.storyline, serie_seasons='no season' ) elif isinstance(movie, Series): # add series attributes to the content content += movie_tile_content.format( movie_title=movie.title, poster_image_url=movie.poster, trailer_youtube_id=trailer_youtube_id, film_badge=movie.badge, film_description=movie.storyline, serie_seasons=movie.number_of_seasons ) This pattern allow you to change the content depending on the type of the media/video. You may need to import the types in your second file: from media import Movie, Series
unknown
d13580
train
You can use Microsoft Graph Api: https://developer.microsoft.com/en-us/graph/docs/api-reference/beta/api/user_list_events Or Outlook Api: https://msdn.microsoft.com/en-us/office/office365/api/calendar-rest-operations Simple googling will get you the above results...
unknown
d13581
train
Use the Array.slice method on the post array. For example, to retrieve 10 items: $.getJSON("http://tumblr-address/api/read/json?callback=?", function(data) { $.each(data.posts.slice(0, 10), function(i,posts){ // ... A: You can use the num query parameter: $.getJSON("http://tumblr-address/api/read/json?num=20", ... And I don't think you need to have a blank callback parameter. You're not doing JSONP. A: Old post, but updated info can't hurt... yes, the old API allowed the num= parameter to specify a limit on returned items; the new API version 2 uses 'limit=' instead, but defaults to 20 if left out.
unknown
d13582
train
Check this: import numpy as np import cv2 img = np.zeros([300,300,3],dtype=np.uint8) img.fill(255) # or img[:] = 255 imageWithCircle = cv2.circle(img, (150,150), 60, (0, 0, 255), 2) r = 60 startpoint = (int(150+(r/(2**0.5))),int(150-(r/(2**0.5)))) endpoint = (int(150-(r/(2**0.5))),int(150+(r/(2**0.5)))) print(startpoint,print(endpoint)) imageWithInscribingSquare = cv2.rectangle(imageWithCircle, startpoint, endpoint, (255, 0, 0) , 2) cv2.imshow("Circle", imageWithCircle) cv2.waitKey(0) cv2.destroyAllWindows() Output: Calculation: If the radius is 'r', then the side of the square will be √2r, From the center the start point would be √2r/2 less in width and √2r/2 more in height, and vice versa for endpoint.
unknown
d13583
train
The HorizontalFieldManager will grow in height to whatever the height of the child field is (as long as the space is available).
unknown
d13584
train
I suggest bigger chars for smaller screens! A: Just to expand on my comment (not an answer, as it is subjective). Using ems for width can tell us how many font characters wide a containing element is. Consider: <style> body { font-size: 0.8em; } /* roughly about 14 px */ .container { width: 30em; } /* 1em now equals 0.8 */ </style> We now know (or as close as possible) that the <div class="container"> .. </div> can hold text with lines of (close to) 30 characters. The good part here is that if we change the font size: <style> body { font-size: 1.2em; } </style> the container width also changes and we retain our set number of characters per line. Because of this we have a connection between layout and typography and have control over "fixing" paragraph lengths and spacing within our designs. Quite useful when we need to calculate heights, widths and our designed 'page breaks' of layout elements based on layouts that adapt to different sizes and are dealing with dynamic text and font sizes. A: One possible solution is to adjust a div to the size of the viewport of the device and use https://github.com/olegskl/fitText
unknown
d13585
train
Use tib:evaluate instead of dyn:evaluate. Depending on what else your BW process contains, you may need to add the namespace below to the process in order to use the tib:evaluate() function: namespace=http://www.tibco.com/bw/xslt/custom-functions prefix=tib To do that you would select the process, click the "namespace registry" button, and add the namespace above.
unknown
d13586
train
Depending on the source that you give this should work properly : String hashUser = SHA1.Sha1Hash(username); String hashPass = SHA1.Sha1Hash(password); /** * HASH USERNAME * sha1(concat(sha1(substr(concat(sha1('username'),sha1('password')),20,35)),sha1('username'))) */ String userPLUSpass = hashUser+hashPass; String userConcat = ""; String subStringUserHash = userConcat.concat(userPLUSpass); String userHashSubStr = SHA1.Sha1Hash(subStringUserHash.substring(19, 54)); String luser = userHashSubStr+hashUser; String uConcat = ""; lastUser = SHA1.Sha1Hash(uConcat.concat(luser)); /** * HASH PASSWORD * sha1(concat(sha1(substr(concat(sha1('password'),sha1('username')),10,35)),sha1('password'))) */ String passPLUSuser = hashPass+hashUser; String passConcat = ""; String subStringPassHash = passConcat.concat(passPLUSuser); String passHashSubStr = SHA1.Sha1Hash(subStringPassHash.substring(9, 44)); String lpass = passHashSubStr+hashPass; String pConcat = ""; lastPass = SHA1.Sha1Hash(pConcat.concat(lpass));
unknown
d13587
train
Instead of: print $row['FILE_BLOB']; Use something like: file_put_contents( $filename, $row['FILE_BLOB']); //save locally You need to write the blob to a file. If you want to force a download of that file then you need to make use of the correct headers in combination with readfile, like so: $file = '/var/www/html/file-to-download.zip'; header('Content-Description: File Transfer'); header('Content-Type: application/force-download'); header('Content-Length: ' . filesize($file)); header('Content-Disposition: attachment; filename=' . basename($file)); readfile($file);
unknown
d13588
train
You have used the wrong logical operator: if (this.sampleSize > 0 || this.sampleSize <= 1200) It should be if (this.sampleSize > 0 && this.sampleSize <= 1200) With your || (or), the condition is true for every value greater than 0. A: Solved it! Turns out, a certain section in the code added a comma (',') to long numbers to make them easier to read, which in turn made them NaN. I will update the codepen once the project is concluded. To all who took the time, thanks a lot.
unknown
d13589
train
I haven't found it in the Persona Bar, but you can still get to the old site settings, try throwing this on the URL /Admin/Site-Settings
unknown
d13590
train
Markov chains aren't guaranteed to have unique stationary distributions. For example, consider a two-state Markov chain where the transition matrix is the identity matrix. That means that whatever the initial state is, it never changes. So in that case there is no stationary distribution that is independent of the initial state. Where there is a stationary distribution, unless the initial distribution is the stationary distribution, the stationary distribution is only reached in the limit as n tends to infinity. So iteration n+1 will be closer to it than iteration n, but however large n is, it won't ever actually be the stationary distribution. However, for practical purposes (i.e. to the limit of the accuracy of floating point numbers in computers), the stationary state may well be reached after a handful of iterations. A: You need the underlying graph to be strongly connected and aperiodic. If you want to find the stationary distribution of a periodic Markov chain just by running some chain, add "stay put" transitions with some constant probability to each node and scale the other transitions down appropriately.
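To make the convergence point above concrete, here is a small numpy sketch (the transition matrix and starting distribution are made up); repeatedly multiplying the distribution by P quickly settles on the stationary distribution:
import numpy as np
P = np.array([[0.9, 0.1], [0.5, 0.5]])  # rows sum to 1
dist = np.array([1.0, 0.0])             # arbitrary initial distribution
for _ in range(50):
    dist = dist @ P
print(dist)  # ~[0.8333 0.1667], the stationary distribution of this chain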
unknown
d13591
train
I think I figured out your issue. I suspect you need to download SFML GCC 4.7 TDM (SJLJ) - 32-bit from here http://www.sfml-dev.org/download/sfml/2.1/ - you were probably using the wrong version of the libs.
unknown
d13592
train
That article states (under "Accessing the Network") you still use the <domainname>\<machinename>$ aka machine account in the domain. So if both servers are in the "foobar" domain, and the web box is "bicycle", the login used against the SQL Server instance is foobar\bicycle$ If you aren't in a domain, then there is no common directory to authenticate against. Use a SQL login with username and password for simplicity. Edit, after comment If the domains are trusted then you can still use the machine account (use domain local groups for SQL Server, into which you add global groups etc) As for using app pool identities, they are local to your web server only as per the article. They have no meaning to SQL Server. If you need to differentiate sites, then use proper domain accounts for the App Pools. You can't have it both ways...
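For completeness, a minimal T-SQL sketch for granting that machine account access (the database name and role are illustrative; the account name is taken from the example above):
CREATE LOGIN [foobar\bicycle$] FROM WINDOWS;
USE MyAppDb;
CREATE USER [foobar\bicycle$] FOR LOGIN [foobar\bicycle$];
EXEC sp_addrolemember 'db_datareader', 'foobar\bicycle$';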
unknown
d13593
train
When installing Firebase, don't install "cordova-plugin-firebase" if you are using React with Ionic - it will cause this error! It was fixed after I removed it.
unknown
d13594
train
Array.prototype.join() works on an array, and to insert an element into an array you should call .push() instead of +=; read more about += here. Always use var when declaring variables, or you end up declaring global variables. var birthyear = []; for (i = 1800; i < 2018; i++) { birthyear.push(i); } var birth = birthyear.join(", "); document.write(birth); A: In your code you're not appending data to the array; you are adding data to the array variable, which is wrong. 1st Way birthyear=[]; for(i=1800;i<2018;i++) { birthyear.push(i); } birth=birthyear.join(); document.write(birth); 2nd Way birthyear=[]; k=0; for(i=1800;i<2018;i++){ birthyear[k++]=i; } birth=birthyear.join(); document.write(birth); A: You can't apply .push() to a primitive type, only to an array (Object type). You declared var birthyear = []; as an array but in the body of your loop you used it as a primitive: birthyear+=i;. Here's a revision: var birthyear=[]; for(let i=1800;i<2018;i++){ birthyear[i]=i; // careful here: birthyear[i] += i; won't work // since birthyear[i] is NaN } var birth = birthyear.join("\n"); document.write(birth); Happy coding! ^_^
unknown
d13595
train
Replace //add new record getView().findViewById(... with //add new record view.findViewById(... getView() in onCreateView() is too early - you haven't yet returned the view to the framework for getView() to return.
unknown
d13596
train
When using Redis session timeout is configured like this: <bean class="org.springframework.session.data.redis.config.annotation.web.http.RedisHttpSessionConfiguration"> <property name="maxInactiveIntervalInSeconds" value="10"></property> </bean>
unknown
d13597
train
The main difference between the EXPECT_* and ASSERT_* macros is that assertions stop the test immediately if they fail, while expectations allow it to continue. Here's what the GoogleTest Primer says about it: Usually EXPECT_* are preferred, as they allow more than one failures to be reported in a test. However, you should use ASSERT_* if it doesn't make sense to continue when the assertion in question fails. So, to illustrate it: TEST_F(myTest, test1) { uut.doSomething(); ASSERT_EQ(uut.field1, expectedvalue1); ASSERT_EQ(uut.field2, expectedvalue2); } If field1 doesn't match the assertion, the test fails and you have no idea whether field2 is correct or not. This makes debugging much more difficult. The choice between these options is mostly a matter of agreement in your team, but I'd stick to GTest's proposition of using EXPECT_* as the backbone of all checks and ASSERT_* only if you test something crucial for continuing, without which the rest of the test doesn't make any sense (for example, if your unit under test was not created correctly).
unknown
d13598
train
So, this isn't actually a question about bs4, but more about how to handle data structures in python. Your script lacks the part that loads the data you already know. One way to go about this would be to build a dict that has all your hrefs as keys and the count as value. So given a csv with rows like this... href,seen_count https://google.com/1234,4 https://google.com/3241,2 ... you first need to build the dict csv_list = list(open("cms_scrape.csv", "r", encoding="utf-8")) # we skip the first line, since it holds your header and not data csv_list = csv_list[1:] # now we convert this to a dict hrefs_dict = {} for line in csv_list: url, count = line.split(",") # remove linebreak from count and convert to int count = int(count.strip()) hrefs_dict[url] = count That yields a dict like this: { "https://google.com/1234": 4, "https://google.com/3241": 2 } Now you can check whether each href you come across exists as a key in this dict. If yes - increase the count by one. If no, insert the href into the dict and set the count to 1. To apply this to your code I'd suggest you scrape the data first and write to file once all scraping is completed. Like so: for i in tqdm(links): #print("beginning of crawler code") r = requests.get(i) data = r.text soup = BeautifulSoup(data, 'lxml') all_a = soup.select('.carousel-small.seo-category-widget a') for a in all_a: href = a['href'] print(href) # if href is a key in hrefs_dict increase the value by one if href in hrefs_dict: hrefs_dict[href] += 1 # else insert it into the hrefs_dict and set the count to 1 else: hrefs_dict[href] = 1 Now when the scraping is done, go through every entry in the dict and write it to your file. It's generally recommended that you use context managers when you write to files (to avoid problems if you accidentally forget to close the file). So the "with" takes care of both the opening and closing of the file: with open('cms_scrape.csv', 'w') as csv_file: csv_writer = csv.writer(csv_file) csv_writer.writerow(['hrefs', 'Number of times seen:']) # loop through the hrefs_dict for href, count in hrefs_dict.items(): csv_writer.writerow([href, count]) So if you don't actually have to use a csv-file for this I'd suggest using JSON or Pickle. That way you can read and store the dict without needing to convert back and forth to csv. I hope this solves your problems...
unknown
d13599
train
The JSON support in the standard Scala library is probably not the best choice. Unfortunately the situation with JSON libraries for Scala is a bit confusing, there are many alternatives (Lift JSON, Play JSON, Spray JSON, Twitter JSON, Argonaut, ...), basically one library for each day of the week... I suggest you have a look at these at least to see if any of them is easier to use and more performative. Here is an example using Play JSON which I have chosen for particular reasons (being able to generate formats with macros): object JsonTest extends App { import play.api.libs.json._ type MyDict = Map[String, Int] implicit object MyDictFormat extends Format[MyDict] { def reads(json: JsValue): JsResult[MyDict] = json match { case JsObject(fields) => val b = Map.newBuilder[String, Int] fields.foreach { case (k, JsNumber(v)) => b += k -> v.toInt case other => return JsError(s"Not a (string, number) pair: $other") } JsSuccess(b.result()) case _ => JsError(s"Not an object: $json") } def writes(m: MyDict): JsValue = { val fields: Seq[(String, JsValue)] = m.map { case (k, v) => k -> JsNumber(v) } (collection.breakOut) JsObject(fields) } } val m = Map("hallo" -> 12, "gallo" -> 34) val serial = Json.toJson(m) val text = Json.stringify(serial) println(text) val back = Json.fromJson[MyDict](serial) assert(back == JsSuccess(m), s"Failed: $back") } While you can construct and deconstruct JsValues directly, the main idea is to use a Format[A] where A is the type of your data structure. This puts more emphasis on type safety than the standard Scala-Library JSON. It looks more verbose, but in end I think it's the better approach. There are utility methods Json.toJson and Json.fromJson which look for an implicit format of the type you want. On the other hand, it does construct everything in-memory and it does duplicate your data structure (because for each entry in your map you will have another tuple (String, JsValue)), so this isn't necessarily the most memory efficient solution, given that you are operating in the GB magnitude... Jerkson is a Scala wrapper for the Java JSON library Jackson. The latter apparently has the feature to stream data. I found this project which says it adds streaming support. Play JSON in turn is based on Jerkson, so perhaps you can even figure out how to stream your object with that. See also this question.
unknown
d13600
train
Download the package from 'https://github.com/warner/python-ecdsa' and install it using the command python setup.py install. Your problem will be solved. A: You can use easy_install to install the missing module "ecdsa", like this: easy_install ecdsa, but you have to have easy_install ready first! A: this: from ecdsa import SigningKey, VerifyingKey, der, curves ImportError: No module named ecdsa suggests that the python-ecdsa package is missing; you can install it with pip install ecdsa Though in general, you shouldn't need to install paramiko from source. You can install it with pip install paramiko, which has the benefit of automatically resolving the dependencies of a package
unknown