d15401
You should really show at least an attempt of some sort when posting something like this, especially with the JavaScript tag. Anyway, to do this you would want to listen for when a user clicks a radio button and show/hide the corresponding elements. Here's a sample: document.getElementById('colors').addEventListener("click", function(e){ document.getElementById("colors-selections").style.display = "block"; document.getElementById("size-selections").style.display = "none"; }); If you have more than two elements in the future, I'd recommend using a change event instead, so you can hide an element once it's untoggled rather than hiding everything else every time a new radio button is clicked. If you want to use just CSS, you're going to have to use the :checked pseudo-class and hide or display the other element that way. Other users have already asked about this, e.g.: https://stackoverflow.com/a/50921066/4107932
d15402
Enable pre-release nugets and search for: Xamarin.GooglePlayServices.Identity 29.0.0-beta1 packages.config: <packages> <package id="Xamarin.Android.Support.v4" version="23.1.1.0" targetFramework="MonoAndroid44" /> <package id="Xamarin.GooglePlayServices.Auth" version="29.0.0-beta1" targetFramework="MonoAndroid44" /> <package id="Xamarin.GooglePlayServices.Base" version="29.0.0-beta1" targetFramework="MonoAndroid44" /> <package id="Xamarin.GooglePlayServices.Basement" version="29.0.0-beta1" targetFramework="MonoAndroid44" /> <package id="Xamarin.GooglePlayServices.Identity" version="29.0.0-beta1" targetFramework="MonoAndroid44" /> </packages> C# version of Integrating Google Sign-In into Your Android App SignInButton button = FindViewById<SignInButton> (Resource.Id.sign_in_button); gso = new GoogleSignInOptions.Builder (GoogleSignInOptions.DefaultSignIn) .RequestEmail () .Build (); mGoogleApiClient = new GoogleApiClient.Builder (this) .EnableAutoManage(mLoginFragment, failedHandler) .AddApi (Auth.GOOGLE_SIGN_IN_API) .Build (); button.Click += delegate { signIn(); };
d15403
It looks like your x data are not in sorted order. Try this: ind = np.argsort(C) xx = C[ind] yy = dist.pdf(C)[ind] plt.plot(xx, yy, 'r') plot just connects all the (x, y) pairs with straight lines, so you need to make sure you trace your function from left to right (or right to left). Alternatively, you can skip the connecting lines and plot the points only: plt.plot(C, dist.pdf(C), 'ro')
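The same reindexing idea can be shown without numpy or matplotlib at all; this is an illustrative sketch with made-up data, not the OP's `C` array:

```python
# plot() draws straight lines between consecutive (x, y) pairs, so
# unsorted x values make the trace zig-zag back and forth. Sorting the
# pairs by x first (the same idea as np.argsort on C) fixes the trace.
xs = [3.0, 1.0, 2.0]
ys = [9.0, 1.0, 4.0]  # stands in for dist.pdf(C)

pairs = sorted(zip(xs, ys))      # sort by x, keep each y with its x
xx = [x for x, _ in pairs]
yy = [y for _, y in pairs]

print(xx)  # [1.0, 2.0, 3.0]
print(yy)  # [1.0, 4.0, 9.0]
```

Feeding `xx` and `yy` (instead of the raw arrays) to `plot` gives a clean left-to-right trace.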
d15404
Looks like a namespace prefix issue. You're qualifying Envelope with "env:", which is good as long as that prefix is mapped to the right namespace. However, you are missing a suitable qualifier on equipmentType, which is in a different namespace. See the Camel docs on how to configure namespaces. Namespaces ns = new Namespaces("ct", "http://com/fr/test") .add("env", "http://www.w3.org/2003/05/soap-envelope"); from("activemq-ext-in:queue:" + queueName) .id("amq-transfert-downward-route") .filter( ns.xpath("/env:Envelope/env:Header/ct:equipmentType/text()='WCA'") ).to("activemq-ext-out:topic:" + topicName); Updated: changed to use the OP's example instead of the one from the Camel docs. I haven't tested this but it looks about right.
d15405
That much logging is not necessary. There's no reason (in production) to know when each method starts and ends. Maybe you need that on certain methods, but having that much noise in the log files makes them nearly impossible to analyze effectively. You should log when important things happen such as errors, user logins (audit log), transactions started, important data updated... so on and so forth. If you have a problem that you can't figure out from the logs, then you can add more to it if necessary... but only if necessary. Also, just for your information, adding logging in at compile time would be an example of what is called Aspect Oriented Programming. Logging would be the "cross-cutting concern".

A: I think "logs to code ratio" is a misunderstanding of the problem. In my job, once in a while I have a situation where a bug in a Java program cannot be reproduced outside the production environment and where the customer does NOT want it to happen again. Then ALL you have available to you to fix the bug is the information you yourself have put in the log files. No debugging sessions (that is forbidden in production environments anyway) - no poking at input data - nothing! So the logs are your time machine back to when the bug happened, and since you cannot predict ahead of time what information you will need to fix a bug yet unknown - otherwise you could just fix the bug in the first place - you need to log lots of stuff... Exactly WHAT stuff depends on the scenario, but basically enough to ensure that you are never in doubt about what happens where :) Naturally this means that a LOT of logging will happen. You will then create two logs - one with everything, which is only kept around for long enough to ensure that you will not need it, and the other one with non-trivial information which can be kept for a lot longer.
Dismissing logging as excessive is usually done by those who have not had to fix a bug with nothing else to go by :)

A: Since log4net does a great job at not clogging up the resources, I tend to be a little verbose on logging because when you have to change to debug mode, the more info you have, the better. Here's what I typically log:

DEBUG Level
* Any parameters passed into the method
* Any row counts from result sets I retrieve
* Any data rows that may contain suspicious data when being passed down to the method
* Any "generated" file paths, connection strings, or other values that could get munged up when being "pieced together" by the environment

INFO Level
* The start and end of the method
* The start and end of any major loops
* The start of any major case/switch statements

ERROR Level
* Handled exceptions
* Invalid login attempts (if security is an issue)
* Bad data that I have intercepted for reporting

FATAL Level
* Unhandled exceptions

Also, having a lot of logging detail prevents me from asking the user what they were doing when they got the error message. I can easily piece it together.

A: When you come across a bug during the beta release of your application and can't reproduce it, you know that you should have done excessive logging. Same way, if a client reports a bug but you can't reproduce it, an excessive logging feature can save the day.

A: When you have a customer scenario (i.e., someone whose machine you don't get physical access to), the only things that are "too much logging" are repainting functions and nearly anything called by them (which should be nearly nothing). Or other functions that are called hundreds of times per second during operation (program startup is OK, though, to have hundreds of calls to get/set routines logged because, in my experience, that's where most of the problems originate).
Otherwise, you'll just be kicking yourself when you're missing some key log point that would definitively tell you what the problem is on the user's machine. (Note: here I'm referring to the logging that happens when trace mode is enabled for developer-oriented logs, not user-oriented normal operation logs.)

A: I personally believe that first of all there is no hard and fast rule. I have some applications that log a LOT, in and out of methods, and status updates through the middle. Those applications, though, are scheduled processes that run hands-off, and the logs are parsed by another application that stores success/failure. I have found that in all reality, many user applications don't need large amounts of logging, since if issues come up you will usually be debugging to trace the values anyway. Additionally, you typically don't need the expense of logging. However, it really depends on the project.

A: How many of those lines are logging by default? I've worked on a system very much like what you describe - just booting it up would cause over 20MB of logs to be written if logging was cranked way up, but even when debugging we didn't turn it all the way up for all modules. By default it would log when a module of code was entered, plus major system events. It was great for debugging since QA could just attach a log to a ticket, and even if the problem wasn't reproducible you could see what was going on when it happened. If you have serious multithreading going on, then logging is still better than any IDE or debugger I've worked with.

A: In my line of work, I write a lot of Windows services. For me, logging isn't a luxury; it's actually my only UI. When we deploy to production, we lose access to debugging and even the databases to which our services write, and without logging we would have no way of knowing any specifics of issues that arise. Having said that, I do believe that a concise logging style is the best approach.
Log messages tend to be limited to the business logic of the application, such as "received message from account xxx" rather than "entered function yyy". We do log exceptions, thread starts, echoing of environment settings and timings. Beyond that, we look to the debugger to identify logical errors in the development and QA phases.

A: I find that logging is much less necessary since I've started using TDD. It makes it much easier to determine where bugs lie. However, I find that logging statements can help understand what's going on in code. Sure, debuggers help give you a low-level idea of what's happening. But I find it easier when I can match a line of output to a line of code if I want to get a high-level view of what's happening. However, one thing that I should add is this: make sure your log statements include the module that the log statement is in! I can't count the number of times I've had to go back through and find where a log statement actually lies.

A: Complete log files are amazingly useful. Consider a situation where your application is deployed somewhere like a bank. You can't go in there and debug it by hand, and they sure aren't going to send you their data. What you can get is a complete log which can point you to where the problem occurred. Having a number of log levels is very helpful. Normally the application would run in a mode such that it only reports fatal or serious errors. When you need to debug it, a user can switch on the debug or trace output and get far more information. The sort of logging you're seeing does seem excessive, but I can't really say for certain without knowing more about the application and where it might be deployed.

A: Also, in these days of powerful IDEs and remote debugging, is that much logging really necessary? Yes, absolutely, although the mistake that many unskilled developers make is to try to fix bugs using the wrong method, usually tending towards logging when they should be debugging.
There is a place for each, but there are at least a few areas where logging will almost always be necessary:

* For examining problems in realtime code, where pausing with the debugger would affect the result of the calculation (granted, logging will have a slight impact on timing in a realtime process like this, but how much depends greatly on the software)
* For builds sent to beta testers or other colleagues who may not have access to a debugger
* For dumping data to disk that may not be easy to view within a debugger. For instance, certain IDEs that cannot correctly parse STL structures.
* For getting a "feel" of the normal flow of your program
* For making code more readable in addition to commenting, like so: // Now open the data file fp = fopen("data.bin", "rb"); The above comment could just as easily be placed in a logging call: const char *kDataFile = "data.bin"; log("Now opening the data file %s", kDataFile); fp = fopen(kDataFile, "rb");

That said, you are in some ways correct. Using the logging mechanism as a glorified stack-trace logger will generate very poor quality logfiles, as it doesn't provide a useful enough failure point for the developer to examine. So the key here is obviously the correct and prudent use of logging calls, which I think boils down to the developer's discretion. You need to consider that you're essentially making the logfiles for yourself; your users don't care about them and will usually grossly misinterpret their contents anyway, but you can use them to at least determine why your program misbehaved. Also, it's quite rare that a logfile will point you to the direct source of a certain bug. In my experience, it usually provides some insight into how you can replicate a bug, and then either by the process of replicating it or debugging it, you find the cause of the problem.

A: There is actually a nice library for adding in logging after the fact as you say, PostSharp.
It lets you do it via attribute-based programming, among many other very useful things beyond just logging. I agree that what you describe is a little excessive for logging. Some others bring up good points, especially the banking scenario and other mission-critical apps. It may be necessary to have extreme logging, or at least to be able to turn it on and off if needed, or to have various levels set.

A: I must confess that when I started programming I more or less logged all details as described by "Dillie-O". Believe me... it helped a lot during the initial days of production deployment, where we heavily relied on log files to solve hundreds of problems. Once the system became stable, I slowly started removing log entries as their added value started diminishing. (No Log4j at that point in time.) I think the ratio of code-to-log entries depends on the project and environment, and it need not be a constant ratio. Nowadays we have a lot of flexibility in logging with packages like Log4j, dynamic enabling of log levels, etc. But if programmers don't use it appropriately (when to use and when NOT to use INFO, DEBUG, ERROR, etc., as well as the detail in log messages; I've seen log messages like "Hello X, Hello XX, Hello XXX, etc." which only the programmer can understand), the ratio will continue to be high with less ROI.

A: I think another factor is the toolset/platform being used and the conventions that come with it. For example, logging seems to be quite pervasive in the J(2)EE world, whereas I can't remember ever writing a log statement in a Ruby on Rails application.
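The level guidelines discussed above (parameters at DEBUG, milestones at INFO, handled problems at ERROR, module name in every record) can be sketched with Python's stdlib logging module; the logger name, messages, and place_order function here are invented for illustration:

```python
import io
import logging

# Capture the log in a string buffer so the sketch is self-contained.
stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(logging.Formatter("%(levelname)s %(name)s: %(message)s"))
log = logging.getLogger("orders")   # module/component name in every record
log.setLevel(logging.DEBUG)
log.addHandler(handler)
log.propagate = False

def place_order(order_id, rows):
    log.debug("place_order(order_id=%s)", order_id)    # DEBUG: parameters in
    log.info("processing %d rows", len(rows))          # INFO: major step
    if not rows:
        log.error("bad data for order %s", order_id)   # ERROR: handled problem
        return False
    return True

place_order(42, [])
print(stream.getvalue())
```

In production you would set the level to INFO or higher and only crank it down to DEBUG when chasing a problem, which is exactly the "switch on trace output" workflow several answers describe.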
d15406
For Eloquent methods like all and get, which retrieve multiple results, an instance of Illuminate\Database\Eloquent\Collection will be returned. The Collection class provides a variety of helpful methods for working with your Eloquent results. Of course, you may simply loop over this collection like an array (from the docs). So you are right: Eloquent returns a collection. There is a section in the documentation on how to work with it: https://laravel.com/docs/5.2/collections
d15407
Here you pass the values in the query string (a GET request), so you need to read them back with $_GET['id'] and friends, like this: while($row = mysqli_fetch_array($sql)){ $id = $row["room_id"]; $room_name = $row["room_name"]; $room_date = strftime("%b %d, %Y", strtotime($row["room_date"])); $dynamicList .= '<ul class="room"><li><a href="#"> <center><a href="stylesurvey.php?id=' . $id . '&room_name=' . $room_name . '&room_date=' . $room_date . '"><img src="uploads/' . $id . '.jpg" alt="' . $room_name . '" width="170" height="170" /></a> <h4>' . $room_name . '</h4> </center> </a> </li> </ul>'; } } else { $dynamicList = "No room listed"; } mysqli_close($con); ?> <?php if (isset($_GET["id"])) { $id = $_GET["id"]; $room_name = $_GET["room_name"]; $survey_id = $_GET["survey_id"]; $date = $_GET["room_date"]; $sql = mysqli_query($con,"INSERT INTO survey (survey_id , room_id, user_id, room_name, date) VALUES('$survey_id','$id','$_SESSION[usr_name]','$room_name',now())"); if (!$sql) { echo "Unable to insert data: " . mysqli_error($con); exit(); } } ?> Now try this, it should work.
d15408
It sounds like you want to pass a value from JCL PARM= or from SYSIN to make the COBOL program independent of a hard coded value. This web article has a good explanation of how you can accomplish this. JCL looks like this: //* ******************************************************************* //* Step 2 of 4, Execute the COBOL program with a parameter. //* //PARJ1S02 EXEC PGM=CBLPARC1, // PARM='This is a Parameter from the EXEC and PARM= ...' and in the COBOL program linkage section: ***************************************************************** LINKAGE SECTION. 01 PARM-BUFFER. 05 PARM-LENGTH pic S9(4) comp. 05 PARM-DATA pic X(256). In your case you can validate the data passed in the linkage section based on your criteria. So, once validated, you could move the value from the linkage section after converting it to a numeric value for the test.
d15409
Add trace or log statements to your code in the IncrementIgnoreCount, DecrementIgnoreCount and HandleError functions. That will help you see the real call order.
d15410
The way you are applying your CSS text effects is not ideal. It works in Header, but requires a ton of unnecessary logic that is not present in your Footer. And copying all that code would be a big violation of DRY. But even better than abstracting the logic and applying it to both components, React Router has activeClassName and activeStyle that you can use to style active NavLinks. Just use them similarly in both Header and Footer. Using activeClassName and showing only the parts of the code you need here: // Header.js <Col md = {6} style = {{textAlign: "left"}}> <NavLink to="/" onClick = {this.handleClick}> <img id = "logo" src = {logo} alt = "Breaded" /> </NavLink> </Col> <Col md = {6} style = {...}> <NavLink to="/faqs" activeClassName="active-nav-link"> <span ... >FAQS</span> </NavLink>&nbsp;|&nbsp; <NavLink to="/about" activeClassName="active-nav-link"> <span ... >About us</span> </NavLink>&nbsp;|&nbsp; <NavLink to="/login" activeClassName="active-nav-link"> <span ... >Login</span> </NavLink> </Col> You can do the same in Footer: // Footer.js <Col md = {2}> <NavLink to="/login" activeClassName="active-nav-link"> Login </NavLink> </Col> <Col md = {2}> <NavLink to="/contactus" activeClassName="active-nav-link"> Contact us </NavLink> </Col> <Col md = {4}> <NavLink to="/about" activeClassName="active-nav-link"> About us </NavLink> </Col> <Col md = {2}> <NavLink to="/faqs" activeClassName="active-nav-link"> FAQ </NavLink> </Col> <Col md = {2}> <NavLink to="/privacy" activeClassName="active-nav-link"> Privacy Policy </NavLink> </Col> And then in some associated CSS file, you can say: .active-nav-link { text-decoration: underline } I don't remember whether NavLink renders an <a> or an <li> with a nested <a>, so that might affect how you write your CSS. You can also use activeStyle and do inline styles as an object, as you had been doing in your original code. Either way will work.
d15411
Storing the built page in a variable and outputting it at the end will allow you to emit a header any time before then.

A: The other option is to create some form of temporary file wherever you are able (not sure about permissions) and read that before doing any work. Simply list the error types and optionally times in there, perhaps? This is assuming you want to persist this behaviour across runs of your program. This is the database solution without the database, really, so I'm not sure how helpful that is. Whenever I mention database solutions without databases I always have to mention SQLite, which is a file-based, serverless SQL "server".

A: I think you should refactor your program to create all its output prior to sending any HTML to the client; that way you'll be able to know beforehand all existing errors and set a cookie. Now, if this is not viable for any reason, you should have a temporary file identifying IP address, user agent and errors already shown. A simple text file should be quick enough to parse.

A: Using memcached might be a way to keep state across different sessions.
d15412
I found it by clicking the kebab menu (vertical ellipsis) next to the project I want to delete and selecting 'open details'; then there is a 'delete project' button.
d15413
We do this with our games, where we have a bunch of WCF services providing different functionalities to the Flash clients running in Facebook/MySpace, etc. I suggest you first have a look at this CodePlex project: http://wcfflashremoting.codeplex.com/ It allows you to implement an AMF endpoint for communicating with the Flash clients. All your DataContracts need to be mapped exactly, including namespace and property names, on both sides, so if you have a MyProject.Contracts.Requests.HandShakeRequest object in your WCF project the Flash client needs to have a replica defined in the SAME namespace. Another thing which we find very helpful is the request/response pattern, because it allows you to add/remove parameter/output values easily and have a fair amount of backward compatibility - add a new parameter to the Request object on the server for a new feature and the client doesn't HAVE TO send the new parameter right away. For debugging you absolutely need Charles (http://www.charlesproxy.com); the latest version should have the AMF viewer working properly (I think you used to have to download an add-in) so you can see the AMF messages coming back from the server in a nice, readable format. Hope this helps! There are some other caveats around working with a Flash client from WCF but I can't remember them off the top of my head :-P so have a play around with that remoting extension and I'll pop some other bits and bobs down when I can remember them!
d15414
You can use jQuery's .wrapAll() function to wrap the span elements in a parent container. Give that new container a class and set its position to absolute, left offset to 25% and right offset to 25%. Note that .wrapAll() expects an HTML string (or element) describing the wrapper, and it has to be called on a jQuery collection, so wrap after re-selecting the spans: // JS api.on('revolution.slide.onloaded', function() { var totalSlides = api.revmaxslide(), perc = parseFloat((100 / totalSlides).toFixed(2)); for(var i = 0; i < totalSlides; i++) { progressSlots[i] = jQuery( '<span class="rev-progress-perc" ' + 'style="width: ' + perc + '%; left: ' + (perc * i) + '%" ' + 'data-slide="' + i + '" ' + '/>' ); } api.append(progressSlots); progressBar = jQuery('.tp-bannertimer'); progressSlots = jQuery('.rev-progress-perc').on('click', changeSlides); progressSlots.wrapAll('<div class="slots-wrapper"></div>'); }) ... // CSS .slots-wrapper{ position: absolute; z-index:99; bottom: 0; left: 25%; right: 25%; }
d15415
Try this: function stop() { x.stop(); document.getElementById('counter').value = formatTime(x.time()); clearInterval(clocktimer); } On your form: <input type="hidden" value="" id="counter" name="counter" />

A: Use MySQLi instead of the old MySQL extension, because there are some serious security problems with MySQL
d15416
You can test to see if the old value and the new value are the same. I use "new" loosely, meaning Excel thinks the cell was edited so it's a "new" value as far as the Worksheet_Change event is concerned. I also got rid of your For loop as it seemed very unnecessary. If I am mistaken, I apologize. Private Sub Worksheet_Change(ByVal Target As Excel.Range) Dim ThisRow As Long ' make sure to declare all the variables and appropriate types ThisRow = Target.Row 'protect Header row from any changes If (ThisRow = 1) Then Application.EnableEvents = False Application.Undo Application.EnableEvents = True MsgBox "Header Row is Protected." Exit Sub End If If Target.Column >= 1 And Target.Column <= 61 Then Dim sOld As String, sNew As String sNew = Target.Value 'capture new value With Application .EnableEvents = False .Undo End With sOld = Target.Value 'capture old value Target.Value = sNew 'reset new value If sOld <> sNew Then ' time stamp corresponding to cell's last update Range("BK" & ThisRow).Value = Now ' Windows level UserName | Application level UserName Range("BJ" & ThisRow).Value = Environ("username") Range("BJ:BK").EntireColumn.AutoFit End If Application.EnableEvents = True End If End Sub
d15417
Just read http://www.sqlite.org/datatype3.html. SQLite has five type affinities (types preferred by a column of a table) and five storage classes (possible actual value types). There is no CHARACTER type among either of them. SQLite allows you to specify just about anything as a type in a column definition, but it doesn't enforce declared types for anything except INTEGER PRIMARY KEY columns. It recognizes a few substrings of declared type names to determine column type affinity, so CHARACTER, CHARCOAL and CONTEXTUALIZABLE are all just a funny way of writing TEXT.
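A quick way to convince yourself, using Python's stdlib sqlite3 module (the table and column names are made up for the demo):

```python
import sqlite3

# Declared types containing "CHAR" or "TEXT" get TEXT affinity, so even
# invented names like CHARCOAL and CONTEXTUALIZABLE behave like TEXT
# columns; values are converted to the column's affinity on insert.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (a CHARCOAL, b CONTEXTUALIZABLE, c INTEGER)")
con.execute("INSERT INTO t VALUES (1, 2.5, '3')")
row = con.execute("SELECT typeof(a), typeof(b), typeof(c) FROM t").fetchone()
print(row)  # ('text', 'text', 'integer')
```

The integer 1 and the real 2.5 were stored as text, while the text '3' was converted to an integer, exactly as the affinity rules predict.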
d15418
Since you didn't provide a reproducible example, here are some data that hopefully replicate your problem. set.seed(42) dateIntervals<-as.Date(c("2010-08-09", "2020-11-17", "2021-07-04")) possibleDates<-seq(dateIntervals[1]-1000, dateIntervals[3], by = "day") genDF<-function() data.frame(Date = sample(possibleDates, 100), Value = runif(100)) listdf<-replicate(2, genDF(), simplify = FALSE) Now listdf, which should play the role of your wholeDataList_merged, has only two elements and each element just two columns, but that shouldn't make any difference. Next, you can try: lapply(listdf, function(x) split(x, findInterval(x$Date, dateIntervals))) And you will see each element being split into three parts depending on the date.
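For readers more at home in Python, roughly the same findInterval(x$Date, dateIntervals) grouping can be sketched with bisect_right; the boundary dates mirror the example above, while the row dates are invented:

```python
from bisect import bisect_right
from datetime import date

# findInterval(x, vec) in R returns, for each x, how many boundaries in
# vec are <= x; bisect_right computes the same count on a sorted list.
cutoffs = [date(2010, 8, 9), date(2020, 11, 17)]
rows = [date(2009, 1, 1), date(2015, 6, 1), date(2021, 1, 1)]

groups = {}
for d in rows:
    groups.setdefault(bisect_right(cutoffs, d), []).append(d)

print(sorted(groups))  # [0, 1, 2] -> one bucket per interval
```

Bucket 0 holds dates before the first boundary, bucket 1 the dates between the two boundaries, and bucket 2 the dates after the last one, matching R's split(x, findInterval(...)) behaviour.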
d15419
for (int i = 0; i < MyWave.NumSamples - 1; i++) That's the core problem: you start at 0 every time PrintPage gets called. You need to resume where you left off on the previous page. Make the i variable a field of your class instead of a local variable. Implement the BeginPrint event to set it to zero. The else clause inside the loop needs to be deleted.
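The fix is language-neutral; here is a rough Python sketch of the same pattern, with hypothetical names (WavePrinter, begin_print, print_page) standing in for the real PrintDocument events:

```python
# The cursor must be a field that survives across print_page calls,
# reset once per print job in begin_print - not a local starting at 0.
class WavePrinter:
    def __init__(self, samples, per_page):
        self.samples = samples
        self.per_page = per_page
        self.i = 0                            # field, not a local variable

    def begin_print(self):                    # BeginPrint event handler
        self.i = 0

    def print_page(self):                     # PrintPage event handler
        page = self.samples[self.i:self.i + self.per_page]
        self.i += len(page)
        has_more = self.i < len(self.samples) # plays the role of e.HasMorePages
        return page, has_more

p = WavePrinter(list(range(10)), per_page=4)
p.begin_print()
pages, more = [], True
while more:
    page, more = p.print_page()
    pages.append(page)
print(pages)  # [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```

Each call picks up where the previous one stopped, which is exactly what the C# PrintPage handler needs to do.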
d15420
passport_string = '''iyr:2013 hcl:#ceb3a1 hgt:151cm eyr:2030 byr:1943 ecl:grn eyr:1988 iyr:2015 ecl:gry hgt:153in pid:173cm hcl:0c6261 byr:1966 ''' Change the location of the bottom ''' so that, as shown above, it comes after the last line of the data; everything before the closing quotes is part of the string.
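The underlying rule is that everything up to the closing ''' belongs to the literal, so its position decides how many lines the string contains. A tiny sketch with invented field data:

```python
# Moving the closing quotes down pulls more lines into the string.
short = '''iyr:2013 hcl:#ceb3a1'''
full = '''iyr:2013 hcl:#ceb3a1
eyr:1988 iyr:2015 ecl:gry'''

print(short.count('\n'))  # 0 -> one line captured
print(full.count('\n'))   # 1 -> two lines captured
```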
d15421
You will need to downgrade the werkzeug version from 1.0.0 to 0.16.0. This solved the problem for me. Just run the following commands in your project: python3 -m pip uninstall werkzeug and then python3 -m pip install werkzeug==0.16.0

A: Either downgrade the version to 0.16.0 or replace werkzeug.contrib.cache with cachelib. I can highly recommend upgrading the package. The deprecated module werkzeug.contrib is very easy to replace! Install cachelib and change all imports from: from werkzeug.contrib.cache import FileSystemCache to from cachelib import FileSystemCache

A: Werkzeug 1.0.0 has removed deprecated code, including all of werkzeug.contrib. You should use alternative libraries for new projects. werkzeug.contrib.session was extracted to secure-cookie. If an existing project you're using needs something from contrib, you'll need to downgrade to Werkzeug<1: pip3 install "Werkzeug<1" (quoted, so the shell doesn't treat < as a redirect)

A: If you still need deprecated code from werkzeug.contrib, you can downgrade the Werkzeug version to less than 1. pip install "Werkzeug<1"

A: For Python 3.8: python3 -m pip uninstall werkzeug python3 -m pip install werkzeug python3 -m pip install flask-session

A: After downgrading werkzeug: pip install werkzeug==0.16.0 If you get the following: flask 2.0.2 requires Werkzeug>=2.0, but you have werkzeug 0.16.0 which is incompatible Consider doing: pip install flask==1.1.1

A: For my upgrade (0.15.5 -> 2.2.3): everything is moved or removed from contrib to other modules. I recommend going to the documentation for your specific version of werkzeug and searching for the library you are trying to import. I found mine! from werkzeug.middleware.profiler import ProfilerMiddleware app = ProfilerMiddleware(app)
d15422
Moving the current_users.push into the part that cycles through and adds it to redis seemed to fix it.
d15423
You need to clean up the session table in the database, which is used to store session information. This table should be named ci_sessions.
d15424
Try this: <i [ngClass]="{'far': !isFollowing, 'fas': isFollowing}" class="fa-bell"></i>

A: Try with <i *ngIf="!isFollowing; else follow" class="far fa-bell"></i> <ng-template #follow><i class="fas fa-bell"></i></ng-template>

A: Why not do this with ngClass? <i [ngClass]="{'fas fa-bell': isFollowing == true, 'far fa-bell': isFollowing == false}"></i>

A: I ran into this exact same issue; each of the suggested answers did not work. I haven't figured out why this was happening, but I solved it by wrapping the i tags in span tags and moving the *ngIf to the span tags, like so: <span *ngIf="!isFollowing"><i class="far fa-bell"></i></span> <span *ngIf="isFollowing"><i class="fas fa-bell"></i></span>
d15425
Override the onSaveInstanceState and onRestoreInstanceState methods in your Activity. You can then keep track of whatever view has focus by grabbing the ID of the view and saving it to the Bundle in the onSaveInstanceState method. Then in the onRestoreInstanceState method, you just grab the ID and find the view with that ID. You don't even need to cast it or anything. You then just request focus for that view. It will look like this: @Override protected void onSaveInstanceState(Bundle outState) { int viewId = this.getCurrentFocus().getId(); outState.putInt("hasFocus", viewId); super.onSaveInstanceState(outState); } @Override protected void onRestoreInstanceState(Bundle savedInstanceState) { super.onRestoreInstanceState(savedInstanceState); int viewId = savedInstanceState.getInt("hasFocus"); View view = findViewById(viewId); view.requestFocus(); }
d15426
This turned out not to be an issue of whether it was waiting on the JavaScript. My JavaScript was manipulating text, and some of that text had a \n inside of it. It apparently needed a \\\n instead.
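More generally, text containing raw newlines has to be escaped before being embedded in a JavaScript string literal; one hedged sketch in Python, using json.dumps as the escaper (valid JSON strings are also valid JS string literals):

```python
import json

text = "line one\nline two"       # a raw newline breaks a JS string literal
js_literal = json.dumps(text)     # escapes it to "line one\nline two"

print(js_literal)
```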
d15427
What you need to listen for is the transitionend event before doing anything else. You can read up on MDN about the transitionend event. Btw, setTimeout should never be used to guarantee timing.

EDIT: This is for reference after clarification from the OP. Whenever a style change occurs to an element, there is either a reflow and/or a repaint. You can read more about them here. If the second setTimeout is run before the first reflow, then you get the sliding effect. The reason why 10ms will lead to the desired effect is that the .active class is added after the offsetTop property has been adjusted (leading to the transition property being applied after the element has changed its offsetTop). Usually there are 60fps (i.e. ~16ms per frame), which means that you have a 16ms window to do anything before the new styles are applied. This is why a small delay of 5ms will sometimes lead to different results. TL;DR - The browser asks JS and CSS every 16ms for any updates, and calculates what to draw. If you miss the 16ms window, you can have completely different results.

A: You are calling a setTimeout() timing method inside another setTimeout() timing method. My thought is, why not call both setTimeout() methods separately, like so: the first setTimeout() method should execute first, then at the end of its execution it should call the second setTimeout() method.
A: Here is a working script for moving the chaser: function _( id ) { return document.getElementById( id ); } window.addEventListener( 'load', function() { var targets = document.querySelectorAll('.target'); var chaser = document.querySelector('#chaser'); setTopPosition( targets ); function setTopPosition( targets ) { for (var i = 0; i < targets.length; i++) { targets[i].addEventListener('mouseenter', function(event) { chaser.className = ''; _( 'status' ).innerText = chaser.className; // to inspect the active class setTimeout(function() { /* at this point, I'm expecting no transition // to be active on the element */ chaser.style.top = event.target.offsetTop + "px"; }, 0); // check if charser.className == '' if ( chaser.className == '') { setClassName(); } else { alert( 0 ); } }); // addEventListener } // for } //function setTopPosition( targets ) function setClassName() { setTimeout(function() { /* at this point, I'm expecting the element to have finished moving to its new position */ chaser.className = 'active'; _( 'status' ).innerText = chaser.className; }, 0); } // function setClassName() }); HTML: <div id="chaser">o</div> <div class="target">x</div> <div class="target">x</div> <div class="target">x</div> <div class="target">x</div> <div class="target">x</div> <div id="status">status</div> CSS: #chaser { position: absolute; opacity: 0; } #chaser.active { transition: all 1s; opacity: 1; } .target { height: 30px; width: 30px; margin: 10px; background: #ddd; }
unknown
d15428
val
Use the Enrich mediator and store the payload into a property: <enrich> <source type="body"/> <target type="property" property="REQUEST_PAYLOAD"/> </enrich> https://docs.wso2.com/display/ESB481/Enrich+Mediator A: To complete @Jenananthan's answer: * *Store the original payload in a property *Call the webservice *Restore the original payload to the body: <enrich> <source clone="false" type="property" property="ORIGINAL_PAYLOAD"/> <target action="replace" type="body"/> </enrich>
unknown
d15429
val
I think you want this: select * from inside_sales where x = 'equipment' union all select * from outside_sales where x <> 'equipment'; Note: The second condition is slightly more complicated if x can be NULL. A: Something like this. But what to do with the retrieved data? create function sales_report (is_x IN varchar2) return -- what to return? is row_i_s inside_sales%rowtype; row_o_s outside_sales%rowtype; begin case is_x when 'equipment' then select * into row_i_s from inside_sales; else select * into row_o_s from outside_sales; end case; return -- what to return? end;
unknown
d15430
val
The Experience Cloud Visitor ID is not automatically carried over from the native mobile app to a (mobile) web page. The long story short is native apps don't really store data locally in the same way as web browsers, so there's no automatic ability to use the same local storage mechanism/source between the two. In order to do this, you must add some code to the mobile app to append the mid value to the target URL, e.g.: Android String urlString = "http://www.example.com/index.php"; String urlStringWithVisitorData = Visitor.appendToURL(urlString); Intent browserIntent = new Intent(Intent.ACTION_VIEW, Uri.parse(urlStringWithVisitorData)); startActivity(browserIntent); iOS NSURL *url = [NSURL URLWithString:@"http://www.example.com/index.php"]; NSURL *urlWithVisitorData = [ADBMobile visitorAppendToURL:url]; [[UIApplication sharedApplication] openURL:urlWithVisitorData]; If implemented properly, you should now see an adobe_mc= parameter appended to the target URL. Then on page view of the target page, if you have the Adobe Analytics javascript and Experience Cloud Visitor ID libraries implemented, they will automatically look for and use that value instead of generating a new value (should not require any config / coding on this end). Update: @Ramaiyavraghvendra you made a comment: Hi @Crayon, many thanks for your profound answer. I am sorry that I missed to inform you that this app is not a native one but a SPA app, so the implementation of the entire app is also done through Launch. Could you please help in this case then. I'm not entirely sure I understand your issue. If you are NOT moving from a native mobile app to a web page, and your mobile app is really a web-based SPA that outputs Launch as regular javascript code throughout the entire app, then you shouldn't have to do anything; the Experience Cloud ID service should carry over the id from page to page.
So it sounds to me like perhaps your Experience Cloud Visitor ID and/or Adobe Analytics collection server settings are not configured correctly. the cookie domain period variables may be an issue, if logging in involves moving from say www.mysite.com to www.mysite.co.uk or similar, but shouldn't be a problem if the TLD has the same # of periods. Or, the trackingServer and trackingServerSecure variables may not be configured properly. In practice, I usually do not set trackingServerSecure at all. These variables get kind of confusing and IMO buggy in different scenarios vs. what you are using, so I tend to use the "secure" value in the trackingServer field and leave the trackingServerSecure blank, and then Experience Cloud Visitor ID and Adobe Analytics will just use the secure version 100% of the time. Or..it could be a number of other config variables not properly set. It's hard to say if any of this is off, without access to the app and Launch container. Also you may want to check the response headers for your logged in pages. It may be that they are configured to reject certain existing non-https cookies or something else that effectively causes the existing cookies to be unreadable and make the Experience Cloud ID service generate a new ID and cookies. Or.. maybe your app kind of is a native mobile app but using an http wrapper to pull in web pages, so it is basically a web browser but it is effectively like moving from one web browser to another (e.g. starting on www.site.com/pageA on Chrome, and then copy/pasting that URL over to Internet Explorer to view). So effectively, different cookie jar. Launch (or DTM) + Experience Cloud ID (Javascript methods) In cases such as the last 2 paragraphs, you have to decorate your target links the same as my original answer, but using the Launch + Experience Cloud ID Service javascript syntax: _satellite.getVisitorId().appendVisitorIDsTo('[your url here]'); You write some code to get the target URL of the link. 
Then run it through this code to return the url with the parameters added to them, and then you update your link with the new URL. Super generic example that just updates all links on the page. In practice, you should only do this for relevant link(s) the visitor is redirected to. var urls = document.querySelectorAll('a'); for (var i = 0, l = urls.length; i < l; i++) { if (urls[i].href) { urls[i].href = _satellite.getVisitorId().appendVisitorIDsTo(urls[i].href); } }
unknown
d15431
val
This is expected behavior with API version 2022-08-01. A change was made with this API version so that a Payment Intent is not created when a Checkout Session is initially created, but is instead created when the Checkout Session is confirmed. You can read more about this and the other changes introduced with this API version here: https://stripe.com/docs/upgrades#2022-08-01
unknown
d15432
val
You should reduce the video quality to improve the audio quality. By default, easyRTC configures video with a resolution of 1280x720. You could reconfigure the quality based on bandwidth or device status, and set the quality on the client side with: easyrtc.setVideoDims(X, Y); The X and Y params are your intended resolution. You can refer to the details of the setVideoDims function in the easyrtc client, as below: easyrtc.setVideoDims = function(width, height) { if (!width) { width = 1280; height = 720; } easyrtc.videoFeatures = { mandatory: { minWidth: width, minHeight: height, maxWidth: width, maxHeight: height }, optional: [] }; }; A: I was actually exploring this same issue for a recent project! Here's something that worked for us. (1) First, pass an echoCancellation configuration option into MediaTrackConstraints: https://developer.mozilla.org/en-US/docs/Web/API/MediaTrackConstraints/echoCancellation - this is the built-in echo cancellation feature available natively in most browsers as part of their WebRTC support. (2) Mute any video element where sound isn't needed - I don't need to hear my own video projecting sound, only the other, remote, user does. This applies only if you're using some video (or similar-enough audio) html element. (3) EasyRTC supports: https://easyrtc.com/docs/client-api/Easyrtc_Rates.php - there's an example of how that can be used here: https://demo.easyrtc.com/demos/demo_lowbandwidth.html - play around with the options and you'll likely find a use-case specific setting that works for your needs! Edit: (3) provides actual codec settings for video and audio! Hope that helps!
unknown
d15433
val
First you should set the button's target: [exampleButton addTarget:self action:@selector(buttonAction:) forControlEvents:UIControlEventTouchUpInside]; Then, in the button's method: -(void)buttonAction:(id)sender { AboutViewController *aboutViewController = [[AboutViewController alloc] init]; [self.navigationController pushViewController:aboutViewController animated:YES]; } Don't forget to import the about view controller's header file: #import "AboutViewController.h" If you use a storyboard, you should change the button's action method to -(void)buttonAction:(id)sender { [self performSegueWithIdentifier:@"aboutSegueIdentifier" sender:sender]; }
unknown
d15434
val
Here's an example taken from http://www.bastisoft.de/programmierung/pascal/pasinet.html program daytime; { Simple client program } uses sockets, inetaux, myerror; const RemotePort : Word = 13; var Sock : LongInt; sAddr : TInetSockAddr; sin, sout : Text; Line : String; begin if ParamCount = 0 then GenError('Supply IP address as parameter.'); with sAddr do begin Family := af_inet; Port := htons(RemotePort); Addr := StrToAddr(ParamStr(1)); if Addr = 0 then GenError('Not a valid IP address.'); end; Sock := Socket(af_inet, sock_stream, 0); if Sock = -1 then SockError('Socket: '); if not Connect(Sock, sAddr, sizeof(sAddr)) then SockError('Connect: '); Sock2Text(Sock, sin, sout); Reset(sin); Rewrite(sout); while not eof(sin) do begin Readln(sin, Line); Writeln(Line); end; Close(sin); Close(sout); Shutdown(Sock, 2); end. A: If you're using FPC or Lazarus (which is basically a RAD IDE for FPC and a clone of Delphi) you could use the Synapse socket library. It's amazing. A: If you are using Delphi, I highly recommend Indy sockets, a set of classes for easy manipulation of sockets and many other internet protocols (HTTP, FTP, NTP, POP3 etc.) A: You cannot use OpenSSL with the Indy version 10.5 that ships with Delphi 2007. You have to download version 10.6 from http://www.indyproject.org/ and install it into the IDE. Note that other packages might use Indy, like RemObjects, and therefore they have to be re-compiled too, and this can be tricky due to cross-references.
unknown
d15435
val
Instead of using config files you can use a configuration database with a scoped SystemConfig table and add all your settings there. CREATE TABLE [dbo].[SystemConfig] ( [Id] [int] IDENTITY(1, 1) NOT NULL , [AppName] [varchar](128) NULL , [ScopeName] [varchar](128) NOT NULL , [Key] [varchar](256) NOT NULL , [Value] [varchar](MAX) NOT NULL , CONSTRAINT [PK_SystemConfig_ID] PRIMARY KEY NONCLUSTERED ( [Id] ASC ) WITH ( PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON ) ON [PRIMARY] ) ON [PRIMARY] GO SET ANSI_PADDING OFF GO ALTER TABLE [dbo].[SystemConfig] ADD CONSTRAINT [DF_SystemConfig_ScopeName] DEFAULT ('SystemConfig') FOR [ScopeName] GO With such a configuration table you can create rows like this: Then from your application dal(s) wrapping EF you can easily retrieve the scoped configuration. If you are not using dal(s) and are working directly with EF, you can make an Entity from the SystemConfig table and use the value depending on the application you are on. A: Unfortunately, combining multiple entity contexts into a single named connection isn't possible. If you want to use named connection strings from a .config file to define your Entity Framework connections, they will each have to have a different name. 
By convention, that name is typically the name of the context: <add name="ModEntity" connectionString="metadata=res://*/ModEntity.csdl|res://*/ModEntity.ssdl|res://*/ModEntity.msl;provider=System.Data.SqlClient;provider connection string=&quot;Data Source=SomeServer;Initial Catalog=SomeCatalog;Persist Security Info=True;User ID=Entity;Password=SomePassword;MultipleActiveResultSets=True&quot;" providerName="System.Data.EntityClient" /> <add name="Entity" connectionString="metadata=res://*/Entity.csdl|res://*/Entity.ssdl|res://*/Entity.msl;provider=System.Data.SqlClient;provider connection string=&quot;Data Source=SOMESERVER;Initial Catalog=SOMECATALOG;Persist Security Info=True;User ID=Entity;Password=Entity;MultipleActiveResultSets=True&quot;" providerName="System.Data.EntityClient" /> However, if you end up with namespace conflicts, you can use any name you want and simply pass the correct name to the context when it is generated: var context = new Entity("EntityV2"); Obviously, this strategy works best if you are using either a factory or dependency injection to produce your contexts. Another option would be to produce each context's entire connection string programmatically, and then pass the whole string into the constructor (not just the name). // Get "Data Source=SomeServer..." var innerConnectionString = GetInnerConnectionStringFromMachineConfig(); // Build the Entity Framework connection string. 
var connectionString = CreateEntityConnectionString("Entity", innerConnectionString); var context = new EntityContext(connectionString); How about something like this: Type contextType = typeof(test_Entities); string innerConnectionString = ConfigurationManager.ConnectionStrings["Inner"].ConnectionString; string entConnection = string.Format( "metadata=res://*/{0}.csdl|res://*/{0}.ssdl|res://*/{0}.msl;provider=System.Data.SqlClient;provider connection string=\"{1}\"", contextType.Name, innerConnectionString); object objContext = Activator.CreateInstance(contextType, entConnection); return objContext as test_Entities; ... with the following in your machine.config: <add name="Inner" connectionString="Data Source=SomeServer;Initial Catalog=SomeCatalog;Persist Security Info=True;User ID=Entity;Password=SomePassword;MultipleActiveResultSets=True" providerName="System.Data.SqlClient" /> This way, you can use a single connection string for every context in every project on the machine. A: First try to understand how the Entity Framework connection string works, then you will get an idea of what is wrong. * *You have two different models, Entity and ModEntity *This means you have two different contexts; each context has its own Storage Model, Conceptual Model and mapping between both. *You have simply combined strings, but how will Entity's context know that it has to pick up entity.csdl and ModEntity pick up modentity.csdl? Well, someone could write some intelligent code, but I don't think that is the primary role of the EF development team. *Also, machine.config is a bad idea. *If web apps are moved to a different machine, to a shared hosting environment, or for maintenance purposes, it can lead to problems. *Everybody will be able to access it; you are making it insecure. If anyone can deploy a web app or any .NET app on the server, they get full access to your connection string including your sensitive password information. 
Another alternative: you can create your own constructor for your context and pass your own connection string, writing some if conditions etc. to load defaults from web.config. A better thing to do is leave the connection strings as they are, give your application pool an identity that has access to your database server, and do not include the username and password inside the connection string. A: To enable the same edmx to access multiple databases and database providers and vice versa I use the following technique: 1) Define a ConnectionManager: public static class ConnectionManager { public static string GetConnectionString(string modelName) { var resourceAssembly = Assembly.GetCallingAssembly(); var resources = resourceAssembly.GetManifestResourceNames(); if (!resources.Contains(modelName + ".csdl") || !resources.Contains(modelName + ".ssdl") || !resources.Contains(modelName + ".msl")) { throw new ApplicationException( "Could not find connection resources required by assembly: " + System.Reflection.Assembly.GetCallingAssembly().FullName); } var provider = System.Configuration.ConfigurationManager.AppSettings.Get( "MyModelUnitOfWorkProvider"); var providerConnectionString = System.Configuration.ConfigurationManager.AppSettings.Get( "MyModelUnitOfWorkConnectionString"); string ssdlText; using (var ssdlInput = resourceAssembly.GetManifestResourceStream(modelName + ".ssdl")) { using (var textReader = new StreamReader(ssdlInput)) { ssdlText = textReader.ReadToEnd(); } } var token = "Provider=\""; var start = ssdlText.IndexOf(token); var end = ssdlText.IndexOf('"', start + token.Length); var oldProvider = ssdlText.Substring(start, end + 1 - start); ssdlText = ssdlText.Replace(oldProvider, "Provider=\"" + provider + "\""); var tempDir = Environment.GetEnvironmentVariable("TEMP") + '\\' + resourceAssembly.GetName().Name; Directory.CreateDirectory(tempDir); var ssdlOutputPath = tempDir + '\\' + Guid.NewGuid() + ".ssdl"; using (var outputFile = new 
FileStream(ssdlOutputPath, FileMode.Create)) { using (var outputStream = new StreamWriter(outputFile)) { outputStream.Write(ssdlText); } } var eBuilder = new EntityConnectionStringBuilder { Provider = provider, Metadata = "res://*/" + modelName + ".csdl" + "|" + ssdlOutputPath + "|res://*/" + modelName + ".msl", ProviderConnectionString = providerConnectionString }; return eBuilder.ToString(); } } 2) Modify the T4 that creates your ObjectContext so that it will use the ConnectionManager: public partial class MyModelUnitOfWork : ObjectContext { public const string ContainerName = "MyModelUnitOfWork"; public static readonly string ConnectionString = ConnectionManager.GetConnectionString("MyModel"); 3) Add the following lines to App.Config: <?xml version="1.0" encoding="utf-8"?> <configuration> <connectionStrings> <add name="MyModelUnitOfWork" connectionString=... /> </connectionStrings> <appSettings> <add key="MyModelUnitOfWorkConnectionString" value="data source=MyPc\SqlExpress;initial catalog=MyDB;integrated security=True;multipleactiveresultsets=True" /> <add key="MyModelUnitOfWorkProvider" value="System.Data.SqlClient" /> </appSettings> </configuration> The ConnectionManager will replace the ConnectionString and Provider with whatever is in the App.Config. You can use the same ConnectionManager for all ObjectContexts (so they all read the same settings from App.Config), or edit the T4 so it creates one ConnectionManager for each (in its own namespace), so that each reads separate settings. A: What I understand is that you want the same connection string with different metadata in it. So you can use a connection string as given below and replace the <METADATA> part. I have used your given connectionString in the same sequence. 
connectionString="<METADATA>provider=System.Data.SqlClient;provider connection string=&quot;Data Source=SomeServer;Initial Catalog=SomeCatalog;Persist Security Info=True;User ID=Entity;Password=SomePassword;MultipleActiveResultSets=True&quot;" For the first connectionString, replace <METADATA> with "metadata=res://*/ModEntity.csdl|res://*/ModEntity.ssdl|res://*/ModEntity.msl;" For the second connectionString, replace <METADATA> with "metadata=res://*/Entity.csdl|res://*/Entity.ssdl|res://*/Entity.msl;" For the third connectionString, replace <METADATA> with "metadata=res://*/Entity.csdl|res://*/Entity.ssdl|res://*/Entity.msl|res://*/ModEntity.csdl|res://*/ModEntity.ssdl|res://*/ModEntity.msl;" Happy coding! A: Silverlight applications do not have direct access to machine.config.
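Going back to the SystemConfig table suggested in the first answer: retrieving a setting from it is just a scoped key lookup. Here is a minimal Python sketch of that lookup against in-memory rows (the column names mirror the table above; the sample values are invented for illustration):

```python
# In-memory stand-in for SystemConfig rows; the sample values are invented.
config_rows = [
    # (AppName, ScopeName, Key, Value)
    ("Accounting", "ConnectionString", "Main", "Data Source=ServerA;Initial Catalog=Acct"),
    ("Reporting",  "ConnectionString", "Main", "Data Source=ServerB;Initial Catalog=Rpt"),
]

def get_config(app_name, scope_name, key):
    """Return the first value matching the scoped key, or None if absent."""
    for app, scope, k, value in config_rows:
        if (app, scope, k) == (app_name, scope_name, key):
            return value
    return None

print(get_config("Reporting", "ConnectionString", "Main"))
# → Data Source=ServerB;Initial Catalog=Rpt
```

In SQL this is simply a WHERE clause over (AppName, ScopeName, Key); the point is that each application reads only its own scoped rows.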
unknown
d15436
val
I solved it! Simply, the NewsAPI JSON of Article has a field called Source, which I was trying to parse as a string, but it was NOT! In fact, it is a field described by another object! I simply had to create a class called Source with id and name, and it works! Thanks everyone for the effort! Here's the code for the classes: public class Article { private Source source; private String author; private String title; private String url; //getters and setters News, which has a list of articles: public class News { private int totalResults; private List<Article> articles; //getters and setters And Source, which is referenced in Article: public class Source { private String id; private String name; //getters and setters Here it is! The parse code is the same as in the answer. Just change the return type (Article) to News and the Article.class parameter of getForObject into News.class A: A simple (i.e. missing exception handling, etc) way is as follows: First, you need a class to represent the data you are receiving, with fields that match the API response fields, for example: public class Article { private String source; private String title; ... // more fields // getters and setters } The code to fetch the data from the API then looks like this: RestTemplate template = ... // initialized earlier ResponseEntity<Article[]> response = template.exchange( API_URL, // url to the api HttpMethod.GET, // use the Http verb "GET" new HttpEntity<>(headers), // optional headers, e.g. for basic auth Article[].class // the expected response type is Article[] ); Article[] articles = response.getBody(); List<Article> list = Arrays.asList(articles); // if you need to use collections Note, a ResponseEntity being non-null does not imply that the request was successful. You can use responseEntity.getStatusCode() to determine the status code of the response. 
Be careful, however, since by default, RestTemplate throws an exception when a non-200 error code is received (HttpClientErrorException and HttpServerErrorException for 4XX and 5XX codes respectively). If you want your own custom error handling, you should call: template.setErrorHandler(new ResponseErrorHandler() { @Override public boolean hasError(ClientHttpResponse response) throws IOException { // implement here } @Override public void handleError(ClientHttpResponse response) throws IOException { // implement here } }); For persistence into MongoDB, you can use JPA, although JPA is not a perfect fit for MongoDB due to its inherently relational nature clashing with Mongo's non-relational structure. Something like Spring Data can more sensibly map this, and is worth looking into: https://spring.io/projects/spring-data-mongodb EDIT - calling this code Typically, I will create a class/interface with implementation (called ArticleResource for example) that looks like: public class ArticleResource { private final RestTemplate template = new RestTemplate(); public List<Article> getAllArticles() { ResponseEntity<Article[]> response = template.exchange(API_URL, HttpMethod.GET, new HttpEntity<>(headers), Article[].class); // some error checking here return response.getBody() == null ? Collections.emptyList() : Arrays.asList(response.getBody()); } } For methods that expect a single value (e.g. findArticleByTitle(String title)) I typically return an Optional<Article> (it is bad practice to return Optional<List<T>>, as an empty list represents "no values" already). From there in your code you can call: ArticleResource resource = new ArticleResource(); // if you want to print all the names for example: resource.getAllArticles().stream().map(Article::getName).forEach(System.out::println);
unknown
d15437
val
First, you are trying to drive a wire from inside an always block, which is not allowed. If you convert the wires to regs then it will work: module window_averaging( input [16:0]in_noise, //input from noise cancellation input clk, output reg [16:0]window_average // output after window averaging ); integer i; integer k; integer count = 0; reg [16:0] store_elements[0:7][0:128]; // 2-D array for window averaging reg [16:0] temp; ... Also, I believe that to be consistent with your C code, the line count = (count+1)%8; should be outside the for loop, like so: window_average = temp/8; end count = (count+1)%8; end endmodule A: I don't know what you are using to compile, but I think the following should give you errors: In the first loop, for(i=0 ; i < 128 ; i = 1+1), change i = 1+1 to i = i+1. Also, in the line temp = temp + store_elements[i][k]; remember the declaration store_elements[0:7][0:128], so maybe switch i and k? This isn't really an answer. Sorry, I don't have the comment privilege yet.
unknown
d15438
val
* *Log in to the Gateway system and check the logs in transaction /IWFND/ERROR_LOG *Always start transaction SRDEBUG and make sure that the breakpoints are set for the same user you are using for the request.
unknown
d15439
val
Thank you Helder. The IDocumentFilter works. public class GlobalParameterDocumentFilter : IDocumentFilter { public void Apply(OpenApiDocument swaggerDoc, DocumentFilterContext context) { if (swaggerDoc != null && swaggerDoc.Components != null) { swaggerDoc.Components.Parameters.Add(ApiConstants.ApiVersionGlobalParamName, new OpenApiParameter { Name = "api-version", In = ParameterLocation.Query, Required = true, Schema = new OpenApiSchema { Type = "string" }, Description = "The API version" }); } } } The path parameter can then reference this global parameter via an IOperationFilter. public class OperationFilter : IOperationFilter { public void Apply(OpenApiOperation operation, OperationFilterContext context) { _ = operation ?? throw new ArgumentNullException(nameof(operation)); _ = context ?? throw new ArgumentNullException(nameof(context)); if (operation.Parameters == null) { operation.Parameters = new List<OpenApiParameter>(); } operation.Parameters.Add( new OpenApiParameter { Reference = new OpenApiReference { Id = "parameters/api-version", ExternalResource = "" } }); } }
unknown
d15440
val
Change your let nameEl = document.querySelector("#name").val to let nameEl = document.querySelector("#name").value Everything then should work fine.
unknown
d15441
val
You can add additional disks to an Amazon Lightsail instance. (It seems like you cannot extend an existing disk.) The main steps are: * *Select your instance in the Amazon Lightsail console *In the Storage section, click Create new disk and enter details *In Attach to an instance, select your instance *Login to the instance to format and mount the disk See: * *Create and attach additional block storage disks to your Linux-based Lightsail instances | Lightsail Documentation *Creating and attaching a block storage disk to your Windows Server instance in Amazon Lightsail | Lightsail Documentation
unknown
d15442
val
Calling url(../img/icon.png) is correct. Did you try to call the image from somewhere else, for example as a background: body { background-image: url(../img/icon.png); } Also please check the configuration in your .htaccess.
unknown
d15443
val
Try and get the specific error message you receive and what O/S you are running on (SAS O/S and SSIS O/S). It is most likely using the wrong credentials. Check SSIS logs and the Event Viewer. You need to determine which system is rejecting the call. Most likely it is SAS which means you are coming across, to SAS, as a different user than you think you are. Also, provide your connection string to IOM or double-check it at https://www.connectionstrings.com/sas-iom-provider/.
unknown
d15444
val
buffer: .space 255 This fills buffer with zeroes. li $v0, 8 # Read in text string la $a0, buffer li $a1, 255 syscall I don't know what environment you're using, but this typically works just like fgets() in C, so if you enter hello, your buffer will end up as: +-----+-----+-----+-----+-----+-----+-----+-----+ more +-----+ | 'h' | 'e' | 'l' | 'l' | 'o' |'\n' | 0 | 0 |..zeroes...| 0 | +-----+-----+-----+-----+-----+-----+-----+-----+ +-----+ The code just after that doesn't seem to do anything useful: la $t0, buffer # Read character from text string lw $t1, length addu $t0, $t0, $t1 but length has never been written and $t0 isn't used again (until it's overwritten inside QS). The call to QS passes a fixed value in $a2 (25 - why?). Assuming that the QS routine actually works as advertised: if your original string is short, some of the zero bytes in the buffer will be included in the range that gets sorted, and so will end up at the beginning of the buffer - the range that gets sorted will look something like this: +-----+ more +-----+-----+-----+-----+-----+-----+-----+-----+ | 0 |..zeroes...| 0 | 0 |'\n' | 'e' | 'h' | 'l' | 'l' | 'o' | +-----+ +-----+-----+-----+-----+-----+-----+-----+-----+ i.e. if the range that gets sorted includes any zero bytes, the first byte of the result will be zero, and so you will print an empty string. Part 2: once you have a correctly sorted string, this should be straightforward, as all of the occurrences of each character are adjacent. Just walk along the string, and consider whether each character is the same as the previous one, or different. Part 3: just needs an additional counter while doing the work for part 2.
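The walk described in Parts 2 and 3 — compare each character with the previous one and keep a counter — can be sketched in Python before translating it to MIPS (the names here are illustrative, not from the original assembly):

```python
def count_runs(sorted_text):
    """Walk a sorted string once: each character is compared with the
    previous one, so every run of equal characters becomes one entry."""
    runs = []  # list of (character, count) pairs, in first-seen order
    for ch in sorted_text:
        if runs and runs[-1][0] == ch:
            runs[-1] = (ch, runs[-1][1] + 1)  # same as previous: bump count
        else:
            runs.append((ch, 1))              # different: start a new run
    return runs

# "hello" sorted is "ehllo": e, h and o appear once, l twice
print(count_runs("ehllo"))  # → [('e', 1), ('h', 1), ('l', 2), ('o', 1)]
```

In the MIPS version, the "previous character" is just a register holding the last byte seen, and the run counter is reset whenever the current byte differs.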
unknown
d15445
val
I recently spent way too much time trying to do something similar. What you need here, I believe, is a list-column. The code below will do that, but it turns the order number into a character value. library(tidyverse) df <- tibble(order=c(1,1,1,2,2,3,3,3), product=c('a','b','c','b','d','a','c','e')) %>% group_by(product) %>% summarise(order=toString(order)) %>% mutate(order=str_split(order, ', '))
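For clarity, the shape the list-column is aiming for — each product paired with the list of orders it appears in — can be sketched in plain Python with the same sample data:

```python
from collections import defaultdict

# order/product pairs, mirroring tibble(order=..., product=...)
rows = [(1, 'a'), (1, 'b'), (1, 'c'), (2, 'b'),
        (2, 'd'), (3, 'a'), (3, 'c'), (3, 'e')]

orders_by_product = defaultdict(list)
for order, product in rows:
    orders_by_product[product].append(order)  # keeps the real numbers, not strings

print(dict(orders_by_product))
# → {'a': [1, 3], 'b': [1, 2], 'c': [1, 3], 'd': [2], 'e': [3]}
```

Unlike the toString/str_split round-trip above, this keeps the order numbers numeric instead of converting them to characters.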
unknown
d15446
val
I did research this when writing the odbc-api bindings for Rust. It turns out it is (still) well documented here: https://learn.microsoft.com/en-us/sql/odbc/reference/develop-app/driver-manager-connection-pooling Your code for activating and using ODBC connection pooling is correct. Now to your questions: * *Is it correct that I can use the HENV allocated above, in multiple concurrent threads within a single process, as I call SQLAllocHandle to allocate db connections (HDBC)? Yes, you can use the environment in multiple concurrent threads. Not only that, it is actually best practice to do so, and to have only one ODBC environment for each process. *When I want to use a connection from the pool, is it correct that the typical sequence is: [...] Yes, the described sequence is reasonable. Of course it all depends on the application. *Is there a significant latency benefit if I save the allocated HDBC handle, and re-use it across multiple SQLDriverConnect + SQLDisconnect calls? In other words, I'd skip steps 2.1 and 2.5, for each use of the connection. Or are steps 2.1 and 2.5 basically just malloc/free? (in which case, I don't think I care) This depends both on your driver and your definition of 'significant'. Overall any ODBC call incurs some overhead due to it being a function call into a dynamic library (the driver manager). If that specific function call is driver specific (like SqlAllocHandle), it is then forwarded to the driver, which is also a dynamically loaded library. Yet if the driver has any sense it won't send any data around the network, so usually you would not care, and it most likely boils down to a somewhat expensive call to malloc/free. So yeah, it depends on your definition of 'significant'.
unknown
d15447
val
You're adding a listener to search query that returns one document, then you're making changes to that document. When that document is changed, the results of the query change, and your listener is invoked again with the new results, which means it's going to update yet another document, which means that the query changes. Etc, etc. If you just want to update a document from a query, don't use onSnapshot() for that. Just use get() to obtain the search results a single time, then update the documents from the results it gives you. A: I figured it out all I had to do is set the whole thing equal to a variable and then at the end of the closure just call unsubscribe(); and that stopped the loop. So my code is now: let unsubscribe = this.categoriesCollection.ref.where('name', '==', transaction.category.toLowerCase()).limit(1).onSnapshot((querySnapshot) => { if (querySnapshot.size == 1) { querySnapshot.forEach((doc) => { this.categoriesCollection.doc(doc.id).update({totalSpent: doc.data().totalSpent + transaction.amount}); }); } }); unsubscribe();
unknown
d15448
val
Try to set minRange, for example to one day, otherwise Highcharts won't know what kind of label should be displayed. See docs.
unknown
d15449
val
Ah nevermind, although the documentation doesn't say it, I can use the .in_() function on the result of func.substring. So where(func.substring(table.c.number, 1, 5).in_(numbers)) worked.
unknown
d15450
val
No. You can only store keys and certificates in a keystore.
unknown
d15451
val
Your error is: monthly_id cannot be null. You can fix it by setting a default value for it from your migration or phpMyAdmin, or in your store function set it to something: $monthly->monthly_id = 'some-value';
unknown
d15452
val
Wouldn't hurt to check if timerTask is null at the beginning of reScheduleTimer and cancel it if it is not null. At the beginning of reScheduleTimer: if(timerTask != null) { timerTask.cancel(); } A: I still don't know how the variable is increasing by more than 1, but I solved the problem by reading the system clock on every invocation of the function and displaying it.
unknown
d15453
val
The proposed duplicate is a misunderstanding of the question. This question appears to be looking for the third highest value overall, but taking duplicates into account. You can get the third row using offset/fetch in SQL Server:

select t.*
from t
where t.sale_amount = (select t2.sale_amount
                       from t t2
                       group by t2.sale_amount
                       order by t2.sale_amount desc
                       offset 2 fetch first 1 row only
                      );

In MySQL, that would be:

select t.*
from t
where t.sale_amount = (select t2.sale_amount
                       from t t2
                       group by t2.sale_amount
                       order by t2.sale_amount desc
                       limit 1 offset 2
                      );

A: SELECT * FROM `sale_amnt` ORDER BY `sale_Amnt` DESC LIMIT 2,1
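As a rough illustration of how the grouped-subquery approach handles ties, here is the MySQL-flavored query run through Python's sqlite3 (SQLite accepts the same LIMIT ... OFFSET syntax); the table and numbers are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, sale_amount INTEGER)")
# Two rows tie for the top amount; the third-highest DISTINCT amount is 100
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [(1, 300), (2, 300), (3, 200), (4, 100), (5, 100), (6, 50)])

rows = conn.execute("""
    SELECT * FROM t
    WHERE sale_amount = (SELECT sale_amount FROM t
                         GROUP BY sale_amount
                         ORDER BY sale_amount DESC
                         LIMIT 1 OFFSET 2)
""").fetchall()
print(rows)  # the two rows with amount 100
```

The GROUP BY collapses duplicate amounts, so OFFSET 2 lands on the third distinct amount (100 here), and the outer query returns every row that has it.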
unknown
d15454
val
You are passing the string from A->B using a segue, so you now have the string in controller B. Pass the same string from B->C using a segue like below:

let cController = segue.destinationViewController as! CVC
cController.string = string

where string is the variable in controller B to which you assigned a value while segueing from A->B, and CVC is your C view controller class.

A: You can immediately perform the segue from B->C in prepareForSegue:

//VCA
override func prepareForSegue(segue: UIStoryboardSegue, sender: AnyObject?) {
    if segue.identifier == "AtoB" {
        //Cast destination VC as your B VC
        let bVC = segue.destinationViewController as! BVC
        //Set b's .string property to the string property you're going to send to c
        bVC.string = self.string
        //Perform the segue that goes from b to c
        bVC.performSegueWithIdentifier("BtoC", sender: self)
    }
}

//VCB
override func prepareForSegue(segue: UIStoryboardSegue, sender: AnyObject?) {
    if segue.identifier == "BtoC" {
        //Cast destination VC as your C VC
        let cVC = segue.destinationViewController as! CVC
        //Set c's .string property to the string property that you got from a
        cVC.string = self.string
        //Now you will have segued to C, passing the string you got from a
    }
}

Make sure that your segue.identifier values match what you set them to in the storyboard.
unknown
d15455
val
Look at the image you posted. There's a Script Filter object on the Alfred Editor. You just have to double-click on it and replace php vuejs.php "{query}" with /usr/local/bin/php vuejs.php "{query}".
unknown
d15456
val
Docker containers don't use the network outside of Docker by default. For connections between the host (or the outside world) and a container, use port bindings.

Expose the port in the Dockerfile when creating the Docker image.

Expose the Docker container to the host: exposing a container is important so the host knows on which port the container runs. The -p flag of the docker run command is used to publish ports.

Syntax: docker run -p host_ip:host_port:container_port image_name

e.g., docker run -itd -p 192.168.134.122:1234:1500 image_name

This binds port 1500 of the container to port 1234 on 192.168.134.122 of the host machine.

Use iptables to view the NAT rules: iptables -L -n -t nat

Now a request sent to host_ip (192.168.134.122) and port (1234) is redirected to the container with ip (172.17.0.2) and port (1500).
unknown
d15457
val
You had various errors in your calls to cudaMemcpy2D (both of them, in the 3 channel code). This code seems to work for me: $ cat t1521.cu #include <cuda_runtime.h> #include <npp.h> #include <nppi.h> #include <nppdefs.h> #include <iostream> #include <stdint.h> #include <stdio.h> #define CUDA_CALL(call) do { cudaError_t cuda_error = call; if(cuda_error != cudaSuccess) { std::cerr << "CUDA Error: " << cudaGetErrorString(cuda_error) << ", " << __FILE__ << ", line " << __LINE__ << std::endl; return(NULL);} } while(0) using namespace std; float* decimate_cuda(float* readbuff, uint32_t nSrcH, uint32_t nSrcW, uint32_t nDstH, uint32_t nDstW, uint8_t byteperpixel) { if (byteperpixel == 1){ // source : Grayscale, 1 x 32f size_t srcStep; size_t dstStep; NppiSize oSrcSize = {nSrcW, nSrcH}; NppiRect oSrcROI = {0, 0, nSrcW, nSrcH}; float *devSrc; CUDA_CALL(cudaMallocPitch((void**)&devSrc, &srcStep, nSrcW * sizeof(float), nSrcH)); CUDA_CALL(cudaMemcpy2D(devSrc, srcStep,readbuff, nSrcW * sizeof(Npp32f), nSrcW * sizeof(Npp32f), nSrcH, cudaMemcpyHostToDevice)); NppiSize oDstSize = {nDstW, nDstH}; NppiRect oDstROI = {0, 0, nDstW, nDstH}; float *devDst; CUDA_CALL(cudaMallocPitch((void**)&devDst, &dstStep, nDstW * sizeof(float), nDstH)); NppStatus result = nppiResize_32f_C1R(devSrc,srcStep,oSrcSize,oSrcROI,devDst,dstStep,oDstSize,oDstROI,NPPI_INTER_SUPER); if (result != NPP_SUCCESS) { std::cerr << "Unable to run decimate_cuda, error " << result << std::endl; } Npp64s writesize; Npp32f *hostDst; writesize = (Npp64s) nDstW * nDstH; // Y if(NULL == (hostDst = (Npp32f *)malloc(writesize * sizeof(Npp32f)))){ printf("Error : Unable to alloctae hostDst in decimate_cuda, exiting...\n"); exit(1); } CUDA_CALL(cudaMemcpy2D(hostDst, nDstW * sizeof(Npp32f),devDst, dstStep, nDstW * sizeof(Npp32f),nDstH, cudaMemcpyDeviceToHost)); CUDA_CALL(cudaFree(devSrc)); CUDA_CALL(cudaFree(devDst)); return(hostDst); } // source : Grayscale 1 x 32f, YYYY... 
else if (byteperpixel == 3){ // source : 3 x 32f interleaved RGBRGBRGB... size_t srcStep; size_t dstStep; // rows = height; columns = width NppiSize oSrcSize = {nSrcW, nSrcH}; NppiRect oSrcROI = {0, 0, nSrcW, nSrcH}; float *devSrc; CUDA_CALL(cudaMallocPitch((void**)&devSrc, &srcStep, 3 * nSrcW * sizeof(float), nSrcH)); CUDA_CALL(cudaMemcpy2D(devSrc, srcStep,readbuff, 3 * nSrcW * sizeof(Npp32f), 3*nSrcW * sizeof(Npp32f), nSrcH, cudaMemcpyHostToDevice)); NppiSize oDstSize = {nDstW, nDstH}; NppiRect oDstROI = {0, 0, nDstW, nDstH}; float *devDst; CUDA_CALL(cudaMallocPitch((void**)&devDst, &dstStep, 3 * nDstW * sizeof(float), nDstH)); NppStatus result = nppiResize_32f_C3R(devSrc,srcStep,oSrcSize,oSrcROI,devDst,dstStep,oDstSize,oDstROI,NPPI_INTER_SUPER); if (result != NPP_SUCCESS) { std::cerr << "Unable to run decimate_cuda, error " << result << std::endl; } Npp64s writesize; Npp32f *hostDst; writesize = (Npp64s) nDstW * nDstH * 3; // RGB if(NULL == (hostDst = (Npp32f *)malloc(writesize * sizeof(Npp32f)))){ printf("Error : Unable to alloctae hostDst in decimate_cuda, exiting...\n"); exit(1); } CUDA_CALL(cudaMemcpy2D(hostDst, nDstW*3 * sizeof(Npp32f), devDst, dstStep, nDstW*3 * sizeof(Npp32f),nDstH, cudaMemcpyDeviceToHost)); CUDA_CALL(cudaFree(devSrc)); CUDA_CALL(cudaFree(devDst)); return(hostDst); } // source - 3 x 32f, interleaved RGBRGBRGB... 
return(0); } int main(){ uint32_t nSrcH = 480; uint32_t nSrcW = 640; uint8_t byteperpixel = 3; float *readbuff = (float *)malloc(nSrcW*nSrcH*byteperpixel*sizeof(float)); for (int i = 0; i < nSrcH*nSrcW; i++){ readbuff [i*3+0] = 1.0f; readbuff [i*3+1] = 2.0f; readbuff [i*3+2] = 3.0f;} uint32_t nDstW = nSrcW/2; uint32_t nDstH = nSrcH/2; float *res = decimate_cuda(readbuff, nSrcH, nSrcW, nDstH, nDstW, byteperpixel); for (int i = 0; i < nDstH*nDstW*byteperpixel; i++) if (res[i] != ((i%3)+1.0f)) {std::cout << "error at: " << i << std::endl; return 0;} return 0; } $ nvcc -o t1521 t1521.cu -lnppig $ cuda-memcheck ./t1521 ========= CUDA-MEMCHECK ========= ERROR SUMMARY: 0 errors $ In the future, its convenient if you provide a complete code, just as I have done in my answer. In fact SO requires this, see item 1 here. By the way, the use of pitched allocations on the device, here, which introduce complexity that you were not able to work your way through, should really be unnecessary both for correctness and performance, using any modern GPU and CUDA version. Ordinary linear/flat allocations, where pitch==width, should be just fine.
unknown
d15458
val
You need to specify a layerFilter function in the forEachFeatureAtPixel request: var feature = map.forEachFeatureAtPixel(evt.pixel, function(feature) { return feature; }, { layerFilter: function(layer) { return layer === bottlenecklayer; } }); Layers do not have click events.
unknown
d15459
val
Check your account. You should provide a valid IEC export code to accept any payment.

A: As per the latest RBI guidelines, Stripe has switched from the Charges API to the Payment Intents API. Use the API as below:

Stripe::PaymentIntent.create(
  :customer => customer.id,
  :amount => params[:amount],
  :description => 'Rails Stripe transaction',
  :currency => 'usd',
)

It worked for me. Check out the Stripe API documentation here.

A: You need to change "charges" to "paymentIntents". Example:

const payment = await stripe.charges.create( // change "charges" to "paymentIntents" here
  {
    amount: subTotal * 100,
    currency: "inr",
    customer: customer.id,
    receipt_email: token.email,
  },
  {
    idempotencyKey: uuidv4(),
  }
);
unknown
d15460
val
Ah, OBVIOUSLY not. dygraphs is a JavaScript library. If you want to remove JavaScript completely, you need to use a graph library that generates the graph on the server and sends the picture down to the client. Given that dygraphs is a JavaScript library, the obvious answer is no: it cannot be used while at the same time totally disabling JavaScript. Literally on the homepage it says: "dygraphs is a fast, flexible open source JavaScript charting library." A: If you want 100% C# and no JavaScript, MudBlazor has some charting capabilities: https://mudblazor.com/components/barchart#api As another answer states, "Given that dygraphs is a JavaScript library - the obvious answer is no, it can not be used while at the same time totally disabling JavaScript."
unknown
d15461
val
You might want to access the first element of the set as follows:

if let first = setOfStrings.first {
    print(first)
}

Assuming that you are already familiar with: Set is an unordered data structure, i.e. the first value is not guaranteed to be "ONE". You cannot access an element in a set via an integer index (setOfStrings[0]); however, since Set is a Collection, SetIndex is probably what you are looking for. Use the Set.Index of your current set, as follows:

let setOfStrings: Set<String> = ["ONE", "TWO", "THREE"]
// for me, it sorted as: {"THREE", "TWO", "ONE"}
let mySetIndex = setOfStrings.index(setOfStrings.startIndex, offsetBy: 1)
let secondElement = setOfStrings[mySetIndex] // "TWO"

Note that:
* By using subscript(_:), you should be able to get a specific element.
* index(_:offsetBy:): Returns an index that is the specified distance from the given index.
* mySetIndex's data type is SetIndex<String>.
unknown
d15462
val
Looks like it's a USB HID device. As such, you should be able to use the Win32 API to talk to it - similar to other USB HID devices. A: I think the Microsoft eHome Infrared Transceiver is a Human Interface Device (HID), so I'd start with The HID Page. This has a VB.NET sample on it.
unknown
d15463
val
Believe it or not, your problem potentially had nothing to do with parallelization. In the future I'd recommend you first look at the input to the function you are trying to parallelize. It turned out you always tried a single puzzle. Edit - @Noughtmare pointed out that according to the Threadscope results posted in the question there is some parallelization going on. Which is true, and it makes me believe that the file posted in the question doesn't exactly match the one used for creating the results. If that's the case, then you can skip to the Parallelization section for the answer about: "Why does parallelizing this code yield almost no performance improvement on a six core machine?" Parser Long story short, there is a bug in your parser. If you ask my true opinion, it is actually a bug in the trifecta package documentation, because it promises to fully consume the input parseString: Fully parse a String to a Result. but instead it consumes the first line only and successfully returns the result. However, honestly, I've never used it before, so maybe it is the expected behavior. Let's take a look at your parser:

parseSudoku :: Parser Sudoku
parseSudoku = do
  lst <- replicateM 81 field
  (newline *> return ()) <|> eof
  return $ Sudoku $ generate 81 (lst !!)
  where
    field = (char '.' >> return Empty) <|> (Given . read . return <$> digit)

At first glance it looks just fine, until the input is closely examined. Every empty line between the lines with data also contains a newline character, but your parser expects one at most:

.......2143.......6........2.15..........637...........68...4.....23........7....
<this is also a newline>
.......241..8.............3...4..5..7.....1......3.......51.6....2....5..3...7...

So your parser should instead be:

many (newline *> return ()) <|> eof

Side note.
If it were up to me, this is how I would write the parser:

parseSudoku :: Parser Sudoku
parseSudoku = do
  (Sudoku <$> V.replicateM 81 field) <* ((() <$ many newline) <|> eof)
  where
    field = (Empty <$ char '.') <|> (Given . Data.Char.digitToInt <$> digit)

Parallelization When it comes to the implementation of parallelization, it seems to work fine, but the problem is that the workload is really unbalanced. That's why there is only about a x2 speedup when using 6 cores. In other words, not all puzzles are created equally hard. For that reason, solving 6 puzzles using 6 cores in parallel will always get the performance of the longest solution at best. Therefore, to gain more from parallelization you either need more puzzles or fewer CPU cores ;) EDIT: Here are some benchmarks to support my explanation above. These are the results for solving each individual puzzle: And these two are the sequential and parallelized solvers using one core and six cores respectively. As you can see, solving the second puzzle with index 1 took the longest time, which on my computer was a little over 100 seconds. This is also the time it took for the parallelized algorithm to solve all puzzles. Which makes sense, since all other 5 puzzles were solved much quicker and the cores that were freed up had no other work to do. Also, as a sanity check, if you sum up the individual times it took for the puzzles to be solved, it matches up pretty well with the total time it took to solve all of them sequentially.
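The workload-imbalance argument can be sketched with a toy calculation; the per-puzzle times below are made up, with one puzzle dominating just like in the benchmarks above:

```python
# Hypothetical per-puzzle solve times in seconds; puzzle 1 dominates
times = [12.0, 100.0, 8.0, 15.0, 6.0, 9.0]

sequential = sum(times)  # one core solves them back to back
parallel = max(times)    # six cores: wall clock equals the slowest puzzle

speedup = sequential / parallel
print(f"sequential={sequential}s parallel={parallel}s speedup={speedup:.2f}x")
# only ~1.5x despite six cores, because one puzzle dominates the wall clock
```

With a balanced workload (six equal 25-second puzzles), the same arithmetic would give the full 6x speedup, which is why more puzzles (or fewer cores) narrows the gap.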
unknown
d15464
val
I would venture to guess this is a $PYTHONPATH issue. Is it possible that the "thumbnail" directory is on the path and not "sorl"? I suspect this is the issue because you do not want to be able to type "import thumbnail" on the Python interpreter. You should instead have to type "import sorl.thumbnail". Another thing to check is to print the module name after importing: >>> import thumbnail >>> print thumbnail This will display the filesystem location where the module was found, in case it's loading another copy from somewhere you do not expect. You also want to make sure your current working directory is not the root ../sorl/ location (ie. don't run python from the sorl folder). This will allow you to import thumbnail straight-away. You should check your full Python path (it will be more than $PYTHONPATH) from within the python interpreter to verify your package locations: >>> import sys >>> print sys.path It might also be helpful to learn more about Python importing A: Problem solved. When following the django book, it is suggested to create apps within a project directory and to refer to these apps in the INSTALLED APPS statement with a path that begins from the directory containing the project, for example, 'siteproject.books'. I was not able to give django access to apps without appending that directory name to the file path, so, for example, I was not able to simply use 'books', but needed to use 'siteproject.books' in the INSTALLED APPS statement and this was the case with sorl.thumbnail, which needed to be referred to as siteproject.sorl.thumbnail. Other attempts to include 'sorl.thumbnail' would yield a very ugly un-formatted and confusing purple-colored error page (yes, the sorl directory was in $PYTHONPATH, so who knows why these attempts didn't work...). Unfortunately, Django was yielding the 'undefined tag' error, which is a generalized error that it gives in many situations. It doesn't really mean anything and isn't useful for locating problems. 
The problem was solved when I opened the files in the sorl directory and edited the python files. I found import statements that imported objects from the sorl directory and I appended the 'siteproject.*' to those paths and everything began to work. A: Here's another general tip on the unhelpful 'not a valid tag library' message: for tags that you create, it could be as simple as a syntax error. Hat tip: 'Rock' on Django-users: http://groups.google.com/group/django-users/browse_thread/thread/d65db3940acf16c3?tvc=2
unknown
d15465
val
This function is probably more efficient for real-valued signals. It uses rfft and zero pads the inputs to a power of 2 large enough to ensure linear (i.e. non-circular) correlation: def rfft_xcorr(x, y): M = len(x) + len(y) - 1 N = 2 ** int(np.ceil(np.log2(M))) X = np.fft.rfft(x, N) Y = np.fft.rfft(y, N) cxy = np.fft.irfft(X * np.conj(Y)) cxy = np.hstack((cxy[:len(x)], cxy[N-len(y)+1:])) return cxy The return value is length M = len(x) + len(y) - 1 (hacked together with hstack to remove the extra zeros from rounding up to a power of 2). The non-negative lags are cxy[0], cxy[1], ..., cxy[len(x)-1], while the negative lags are cxy[-1], cxy[-2], ..., cxy[-len(y)+1]. To match a reference signal, I'd compute rfft_xcorr(x, ref) and look for the peak. For example: def match(x, ref): cxy = rfft_xcorr(x, ref) index = np.argmax(cxy) if index < len(x): return index else: # negative lag return index - len(cxy) In [1]: ref = np.array([1,2,3,4,5]) In [2]: x = np.hstack(([2,-3,9], 1.5 * ref, [0,3,8])) In [3]: match(x, ref) Out[3]: 3 In [4]: x = np.hstack((1.5 * ref, [0,3,8], [2,-3,-9])) In [5]: match(x, ref) Out[5]: 0 In [6]: x = np.hstack((1.5 * ref[1:], [0,3,8], [2,-3,-9,1])) In [7]: match(x, ref) Out[7]: -1 It's not a robust way to match signals, but it is quick and easy. A: scipy provides a correlation function which will work fine for small input and also if you want non-circular correlation meaning that the signal will not wrap around. note that in mode='full' , the size of the array returned by signal.correlation is sum of the signal sizes minus one (i.e. len(a) + len(b) - 1), so the value from argmax is off by (signal size -1 = 20) from what you seem to expect. 
from scipy import signal, fftpack import numpy a = numpy.array([0, 1, 2, 3, 4, 3, 2, 1, 0, 1, 2, 3, 4, 3, 2, 1, 0, 0, 0, 0, 0]) b = numpy.array([0, 0, 0, 0, 0, 1, 2, 3, 4, 3, 2, 1, 0, 1, 2, 3, 4, 3, 2, 1, 0]) numpy.argmax(signal.correlate(a,b)) -> 16 numpy.argmax(signal.correlate(b,a)) -> 24 The two different values correspond to whether the shift is in a or b. If you want circular correlation and for big signal size, you can use the convolution/Fourier transform theorem with the caveat that correlation is very similar to but not identical to convolution. A = fftpack.fft(a) B = fftpack.fft(b) Ar = -A.conjugate() Br = -B.conjugate() numpy.argmax(numpy.abs(fftpack.ifft(Ar*B))) -> 4 numpy.argmax(numpy.abs(fftpack.ifft(A*Br))) -> 17 again the two values correspond to whether your interpreting a shift in a or a shift in b. The negative conjugation is due to convolution flipping one of the functions, but in correlation there is no flipping. You can undo the flipping by either reversing one of the signals and then taking the FFT, or taking the FFT of the signal and then taking the negative conjugate. i.e. the following is true: Ar = -A.conjugate() = fft(a[::-1]) A: It depends on the kind of signal you have (periodic?…), on whether both signals have the same amplitude, and on what precision you are looking for. The correlation function mentioned by highBandWidth might indeed work for you. It is simple enough that you should give it a try. Another, more precise option is the one I use for high-precision spectral line fitting: you model your "master" signal with a spline and fit the time-shifted signal with it (while possibly scaling the signal, if need be). This yields very precise time shifts. One advantage of this approach is that you do not have to study the correlation function. You can for instance create the spline easily with interpolate.UnivariateSpline() (from SciPy). SciPy returns a function, which is then easily fitted with optimize.leastsq(). 
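As a dependency-free illustration of the mode='full' indexing mentioned above (the result has length len(a) + len(b) - 1, and the recovered lag is argmax minus len(b) - 1), here is a brute-force sketch; the signals are made up:

```python
# Brute-force cross-correlation with the same indexing convention as
# scipy.signal.correlate(a, b, mode='full'):
# c[k] = sum over i of a[i] * b[i - (k - (len(b) - 1))]
def correlate_full(a, b):
    n, m = len(a), len(b)
    out = []
    for k in range(n + m - 1):       # result length: len(a) + len(b) - 1
        lag = k - (m - 1)
        s = 0
        for i in range(n):
            j = i - lag              # index into b for this lag
            if 0 <= j < m:
                s += a[i] * b[j]
        out.append(s)
    return out

a = [0, 0, 0, 1, 2, 3, 2, 1, 0, 0]   # pulse centered at index 5
b = [0, 1, 2, 3, 2, 1, 0, 0, 0, 0]   # same pulse centered at index 3

c = correlate_full(a, b)
lag = c.index(max(c)) - (len(b) - 1)  # argmax is offset by len(b) - 1
print(lag)  # 2: 'a' is 'b' delayed by two samples
```

This is O(n*m) and only meant to make the index bookkeeping concrete; for real signal sizes you would use the FFT-based approaches shown in the answers above.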
A: Here's another option: from scipy import signal, fftpack def get_max_correlation(original, match): z = signal.fftconvolve(original, match[::-1]) lags = np.arange(z.size) - (match.size - 1) return ( lags[np.argmax(np.abs(z))] ) A: If one is time-shifted by the other, you will see a peak in the correlation. Since calculating the correlation is expensive, it is better to use FFT. So, something like this should work: af = scipy.fft(a) bf = scipy.fft(b) c = scipy.ifft(af * scipy.conj(bf)) time_shift = argmax(abs(c)) A: Blockquote (A very late answer) to find the time-shift between two signals: use the time-shift property of FTs, so the shifts can be shorter than the sample spacing, then compute the quadratic difference between a time-shifted waveform and the reference waveform. It can be useful when you have n shifted waveforms with a multiplicity in the shifts, like n receivers equally spaced for a same incoming wave. You can also correct dispersion substituting a static time-shift by a function of frequency. 
The code goes like this: import numpy as np import matplotlib.pyplot as plt from scipy.fftpack import fft, ifft, fftshift, fftfreq from scipy import signal # generating a test signal dt = 0.01 t0 = 0.025 n = 512 freq = fftfreq(n, dt) time = np.linspace(-n * dt / 2, n * dt / 2, n) y = signal.gausspulse(time, fc=10, bw=0.3) + np.random.normal(0, 1, n) / 100 Y = fft(y) # time-shift of 0.235; could be a dispersion curve, so y2 would be dispersive Y2 = Y * np.exp(-1j * 2 * np.pi * freq * 0.235) y2 = ifft(Y2).real # scan possible time-shifts error = [] timeshifts = np.arange(-100, 100) * dt / 2 # could be dispersion curves instead for ts in timeshifts: Y2_shifted = Y2 * np.exp(1j * 2 * np.pi * freq * ts) y2_shifted = ifft(Y2_shifted).real error.append(np.sum((y2_shifted - y) ** 2)) # show the results ts_final = timeshifts[np.argmin(error)] print(ts_final) Y2_shifted = Y2 * np.exp(1j * 2 * np.pi * freq * ts_final) y2_shifted = ifft(Y2_shifted).real plt.subplot(221) plt.plot(time, y, label="y") plt.plot(time, y2, label="y2") plt.xlabel("time") plt.legend() plt.subplot(223) plt.plot(time, y, label="y") plt.plot(time, y2_shifted, label="y_shifted") plt.xlabel("time") plt.legend() plt.subplot(122) plt.plot(timeshifts, error, label="error") plt.xlabel("timeshifts") plt.legend() plt.show() See an example here
unknown
d15466
val
Since you are running two queries, you need to call nextRowset to access the results from the second one. So, do it like this: // code $stmt->execute(); $stmt->nextRowset(); // code When you run two or more queries, you get a multi-rowset result. That means that you get something like this (representation only, not really this): Array( [0] => rowset1, [1] => rowset2, ... ) Since you want the second set -the result from the SELECT-, you can consume the first one by calling nextRowset. That way, you'll be able to fetch the results from the 'important' set. (Even though 'consume' might not be the right word for this, it fits for understanding purposes) A: Executing two queries with one call is only allowed when you are using mysqlnd. Even then, you must have PDO::ATTR_EMULATE_PREPARES set to 1 when using prepared statements. You can set this using: $conn->setAttribute(PDO::ATTR_EMULATE_PREPARES, 1); Alternatively, you can use $conn->exec($sql), which works regardless. However, it will not allow you to bind any data to the executed SQL. All in all, don't execute multiple queries with one call.
unknown
d15467
val
* *question does not include geometry, so have sourced *it's a simple case of plotting a LineString that is the eastern edge. Have generated one for purpose of example

import requests
import geopandas as gpd
import shapely.ops
import shapely.geometry

res = requests.get("http://data.insideairbnb.com/sweden/stockholms-län/stockholm/2021-10-29/visualisations/neighbourhoods.geojson")
# get geometry of stockholm
gdf = gpd.GeoDataFrame.from_features(res.json()).set_crs("epsg:4326")
# plot regions of stockholm
ax = gdf.plot()
# get linestring of exterior of all regions in stockholm
ls = shapely.geometry.LineString(shapely.ops.unary_union(gdf["geometry"]).exterior.coords)
b = ls.bounds
# clip boundary of stockholm to left edge
ls = ls.intersection(shapely.geometry.box(*[x-.2 if i==2 else x for i,x in enumerate(b)]))
# add left edge to plot
gpd.GeoSeries(ls).plot(edgecolor="yellow", lw=5, ax=ax)
unknown
d15468
val
I am not quite sure, but could it be that you're seeing the sub-pixel renderer adjusting the inter-colour border in response to elements on the page moving around? Unfortunately, if this is the case, there's little you can do about it from a web application. At best, you can pick a colour scheme with less button border contrast, which would make the wobble less obvious. A: Have you tried to change the jquery custom theme that you are currently using? A: Doing animations with jQuery is actually quite expensive in terms of performance. I would suggest not using animations that is likely your problem. Also, you could also use jquery ui as they have solved the accordion problem already. No sense in re-inventing the wheel A: Try toggling some transition effects to get your desired result. Use CSS3 where possible; or stick with jQuery if you'd rather. $(this).animate({ 'paddingBottom': 5, }, 300, 'linear')
unknown
d15469
val
It seems like you basically want to control other applications. There are roughly 2 ways to do this on windows 1 - Use the low level windows API to blindly fire keyboard and mouse events at your target application. The basic way this works is using the Win32 SendInput method, but there's a ton of other work you have to do to find window handles, etc, etc 2 - Use a higher level UI automation API to interact with the application in a more structured manner. The best (well, newest anyway) way to do this is using the Microsoft UI Automation API which shipped in windows vista and 7 (it's available on XP as well). Here's the MSDN starter page for it. We use the microsoft UI automation API at my job for automated UI testing of our apps, and it's not too bad. Beware though, that no matter how you chose to solve this problem, it is fraught with peril, and whether or not it works at all depends on the target application. Good luck A: Not quite the same domain as what you're looking for, BUT this series of blog posts will tell you what you need to know (and some other cool stuff). http://www.codingthewheel.com/archives/how-i-built-a-working-poker-bot A: If you really want to learn everything from scratch, then you should use C++ and native WIN32 API functions. If you want to play a bit with C#, then you should look the pinvoke.net site and Managed Windows API project. What you'll surely need is the Spy++ tool. A: Check out Autohotkey. This is the fastest way to do what you want. A: http://pinvoke.net/ seems to be the website you are looking for. The site explains how to use Windows API functions in higher level languages. Search on pinvoke for any of the functions I've listed below and it gives you the code necessary to be able to use these functions in your application. You'll likely want to use the FindWindow function to find the window in which you're interested. You'll need the process ID, so use GetWindowThreadProcessId to grab it. 
Next, you'll need to use OpenProcess to allow for reading of the process's memory. Afterwards, you'll want to use ReadProcessMemory to read the process's memory and see what's happening with it. Lastly, you'll want to use the PostMessage function to send key presses to the window handle. Welcome to the wonderful world of Windows API programming.
unknown
d15470
val
Here is a workaround until Cognito includes this information in the event passed to trigger. 
Configure different rules for advanced security features based on the app client id. For App client id 1, configure adaptive authentication to block users from login on detection of risk. And for App client id 2, configure to always allow login. In the custom auth trigger lambda, decide the challenges based on the app client id. So when app client is 1, use normal login challenges. And when app client id is 2, send extra challenges to client.

The client should log in with app client id 1, and if it fails to log in with the reason Unable to login because of security reasons, then log in using app client id 2. Unfortunately, Cognito does not have separate error codes, so we had to look for the error string in the response. This approach does require that a client whose request is deemed risky make a second Cognito request. This will take a longer time for that client, but at least most users will not see slower logins.
One option that was explored and dropped was to use the Cognito admin API from the lambda. There are two issues with this. First, every login would be slowed down by the additional HTTP request to Cognito. Secondly, you can get the last n events, but there is no way to ensure we request the right event. In the case of simultaneous login attempts, one no-risk and one high-risk, if the admin API returns the no-risk one as the last event, then both logins would pass through as no risk.
unknown
d15471
val
Right click your project and choose SonarQube, then click on Remove the SonarQube server nature.

EDIT Another option is to go to Windows -> Preferences -> SonarQube -> Server and to remove or fix your server there.

A: Another solution: I had two Sonar plug-ins (SonarQube and SonarLint). The first posted solution hasn't worked for me. But the following had worked: in the eclipse installation directory, open the file:

<yourEclipseInstallDirectory>\configuration\customization.ini

And delete what is after the = sign:

org.sonar.ide.eclipse.core/servers/default/url=<deleteWhatIsHere>
unknown
d15472
val
Looks like when you assign contacts = rulelines[i] you're actually assigning the rulelines[i] string. You should do contacts.append(rulelines[i]) to add the contact to the list; otherwise you're constantly overwriting the last assignment.

A: Use this as a template:

findres = [5, 7, 15, 22]
contacts = list('abcdefghijklmnopqrstuvwxyz') # dummy list
result = [ contacts[index] for index in findres ]
print result # ['f', 'h', 'p', 'w']

A: If I understand correctly, you need to extract the first number in every element of findres first. Then, use those extracted numbers as an index for another array:

>>> findres = ['144 154', '145 151', '145 152', '145 153', '145 154', '146 152', '146 153', '146 154', '147 153', '147 154']
>>> first_elements = [c.split()[0] for c in findres]
>>> print first_elements
['144', '145', '145', '145', '145', '146', '146', '146', '147', '147']
>>> contacts = []
>>> for i in first_elements:
        contacts.append(rulelines[i])
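A tiny sketch of the difference between assigning and appending (the names here are invented):

```python
rulelines = ["alpha", "beta", "gamma"]

# Overwriting: 'contacts' ends up holding only the last string
contacts = None
for line in rulelines:
    contacts = line
print(contacts)   # 'gamma'

# Appending: 'contacts' collects every string
contacts = []
for line in rulelines:
    contacts.append(line)
print(contacts)   # ['alpha', 'beta', 'gamma']
```

The first loop is the bug in the question: each iteration replaces the previous value, so only the final element survives.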
unknown
d15473
val
What I found here perfectly works on Ubuntu:

sudo apt-get install libxml2-dev libxslt1-dev imagemagick libmagickwand-dev

and then, bundle install as usual. HTH

A: Installing rmagick is always a pain... If you're having trouble, I'd step back and use Homebrew to reinstall ImageMagick. (This can usually be accomplished with brew install imagemagick.) Make sure to follow any follow-up instructions Homebrew gives you, and then try installing the gem once again.

A: I resolved the same issue by following these steps:
* Downgrade ImageMagick from 7 to 6 by running brew install imagemagick@6.
* Then run PKG_CONFIG_PATH=/usr/local/opt/imagemagick@6/lib/pkgconfig gem install rmagick.

A: Be sure you installed ImageMagick. If you have it, try to reinstall with that script.
unknown
d15474
val
Printing the object in the debugger should give you all the properties defined in the class regardless of whether it is a NSManagedObject subclass held by a context or just a plain vanilla custom class. The debugger printout is not only missing the number property but the image one as well. Really, the only way that could happen was if you didn't have the new version of the class file added to the target but were actually using the old version that lacked the new properties. Check the target for the old files and/or check that the new version has been properly added to the build target. A: Try naming the number attribute something different, such as theNumber. There are several known reserved words in Core Data attributes, and an unknown number (no pun intended) which are not documented - you may have stumbled onto another one. A: When in doubt, delete it out. I ended up deleting my two xcdatamodel files and the xcdatamodeld file. Also deleted them in their respective folders. Then I created a new one. Had some issues at first but it actually works now. I have a little issue with xcode thinking that my Group.h file doesn't have an addPeople method, which it does. So it says that it may not respond to that method, or any of its other methods. It also throws me a Lexical or Preprocessor Issue: 'Group.h' file not found error at build-time, but everything seems to still work. I still have no idea what was going on. Thanks to everyone for their suggestions.
unknown
d15475
val
The answer is yes. Just instead of keeping keys in the nodes, you store pointers to keys:

#include <stdio.h>
#include <stdlib.h>

typedef struct s_ListNode {
    struct s_ListNode *next;
    int *pointer;   /* points at the key rather than storing it */
} ListNode;

int main(void)
{
    int a = 3, b = 5;
    ListNode *root = malloc(sizeof(ListNode));
    ListNode *tail = malloc(sizeof(ListNode));
    ListNode *iter;

    root->next = tail;
    root->pointer = &a;
    tail->next = NULL;
    tail->pointer = &b;

    for (iter = root; iter != NULL; iter = iter->next) {
        printf("%d\n", *iter->pointer);
    }

    free(tail);
    free(root);
    return 0;
}

A: A. Yes, it is possible. B. Why do it? The node is already a pointer by itself.
unknown
d15476
val
Answer in the comments from the original poster: All good I solved the issue. I changed the 'start = 2;' to 'start = 3'. This generated the titles and only one empty row to begin working form
unknown
d15477
val
There are a lot of different approaches to doing this. I'll just show you one that should perfectly fit your current page and needs. <div class="col-lg-12 col-md-12 col-sm-12 col-xs-12 d-flex align-items-end overflow-hidden banner-image-container"> <img class="img-responsive" src="img/top-banner.jpg" width="100%"> </div> (Inside your <style> tag:) .overflow-hidden { overflow: hidden; } .banner-image-container { max-height: 400px; } What I did was limit the container around your image to a maximum height of 400px. Because your image is bigger than that, I added overflow: hidden; to the container, so it will cut the rest of the image off. Now your main focus in the image will be the building I guess. So I moved it all the way to the top with d-flex align-items-end.
unknown
d15478
val
Ok, I got the error, and fixed the issue: the CURRENCY field name is also a restricted word and needs to be enclosed within '[]'
unknown
d15479
val
As it currently is, there are no parent selectors in CSS - yet anyways. You can use the :has selector in jQuery. $('a:has(img)').css("background","red"); jsFiddle example A: jQuery Selector: var anchorThatContainsImage = $('a:has(img)'); Or: $('img').each(function(){ var anchorThatContainsImage = $(this).parent('a'); }); A: With CSS selectors this is currently impossible. It may be possible in Selectors Level 4 with the putative subject identifier, but that is still some way in the future. It is possible with a jQuery selector, however. It's the :has selector: $('a:has(img)') or, more optimally, the has method: $('a').has('img') A: $('a img').parent (); $('a').filter (function () { return $(this).find ('img').length > 0; });
unknown
d15480
val
Your current algorithm is O(n ^ 2) because it requires a nested loop. You can make it O(n) by using a rolling sum instead. Start with the sum of elements 0 to k, then on each iteration, subtract the earliest element that makes up the sum and add the next element not included in the sum yet. For example, with a k of 2: * *start out with the sum of elements [0] and [1] *subtract [0], add [2], compare the new sum *subtract [1], add [3], compare the new sum and so on. function arrayMaxConsecutiveSum(inputArray, k) { let rollingSum = inputArray.slice(0, k).reduce((a, b) => a + b); let max = rollingSum; for(let i = 0; i < inputArray.length - k; i++){ rollingSum += inputArray[i + k] - inputArray[i]; max = Math.max(max, rollingSum); } return max; } console.log(arrayMaxConsecutiveSum([2, 3, 5, 1, 6], 2)); A: function arrayMaxConsecutiveSum(inputArray, k) { var max = inputArray.slice(0,k).reduce((a,b)=>a+b); var cur = max; for(var i = k; i < inputArray.length; i++) { cur = cur + inputArray[i] - inputArray[i-k]; if(cur>max) max = cur } return max } console.log(arrayMaxConsecutiveSum([2, 3, 5, 1, 6], 2));
unknown
d15481
val
You are allocating an NSArray instead of an NSMutableArray? A: Just change NSMutableArray *array = [[NSArray alloc] initWithObjects:@"About", nil]; with NSMutableArray *array = [[NSMutableArray alloc] initWithObjects:@"About", nil]; A: You should instead be creating your array like this: NSMutableArray *array = [NSMutableArray arrayWithObjects:@"About", nil]; Notice we send the message to NSMutableArray's class, not NSArray's, so we get a mutable version of the array created. A: Just replace the convenience constructor for NSArray with NSMutableArray: [[NSMutableArray alloc] initWithObjects:@"About", nil];
unknown
d15482
val
Something has corrupted your .bash_profile script, or changed its ownership to root. Open Terminal.app and do these: * *mv -f ~/.bash_profile ~/.bash_profile.old *cp ~/.bash_profile.old ~/.bash_profile These commands should not produce errors. Now try to re-run your scripts to append to .bash_profile.
unknown
d15483
val
scanf("%d",&age); When the execution of the program reaches the above line, you type an integer and press Enter. The integer is taken up by scanf, and the \n (newline character, i.e. Enter) which you have pressed remains in stdin, where it is taken up by the getchar(). To get rid of it, replace your scanf with scanf("%d%*c",&age); The %*c tells scanf to scan a character and then discard it. In your case, %*c reads the newline character and discards it. Another way would be to flush stdin by using the following after the scanf in your code: while ( (c = getchar()) != '\n' && c != EOF ); Note that c is an int in the above line. A: You're only having trouble seeing the result because you're starting the program from a windowing environment, and the window closes as soon as its internal tasks are completed. If you run the compiled program from a command line in a pre-existing shell window (Linux, Mac, or Windows), the results will stay on the screen after you're returned to the prompt (unless you've ended by executing a clear-screen of some sort). Even better, in that case, you don't need the extraneous getchar() call. For Windows, after opening the command prompt window, you'd issue a "cd" command to change to the directory that contains the compiled program, and then type its name. For Linux (and, I presume, Mac, since Mac is UNIX under the hood), you'd need to type ./ ahead of the program name after changing to the appropriate directory with "cd".
unknown
d15484
val
Somehow it was an issue in the WooCommerce API. I edited class-wc-rest-product-reviews.php: $prepared_args['type'] = 'review'; I changed review to comment and it works
unknown
d15485
val
ir(row_data == "") should be if(row_data == "") A: The ir should be an if, I reckon - it's a typo. To be perfectly frank, you really could have read through and practically immediately noticed the problem while paying attention.
unknown
d15486
val
You can declare your getCustomer() to not support transactions: @TransactionAttribute(NOT_SUPPORTED) public Customer getCustomer() Read more about transactions in the Java EE tutorial: https://docs.oracle.com/javaee/7/tutorial/transactions003.htm A: It is not necessary to mess with transaction management to achieve your requirement. Your code should invoke javax.persistence.EntityManager.clear() before returning from your DAO. This will detach your entity bean from the persistence context, which will no longer track it. From the java doc: Clear the persistence context, causing all managed entities to become detached. Changes made to entities that have not been flushed to the database will not be persisted.
unknown
d15487
val
Yes, I've come across this problem. The most reliable way of copying a master schedule and all it's sub projects without creating the duplicate links is to: * *Select all the files on the share drive *Right click and send them to a zip file *Move this zip file to your local drive *Right click on the zip file and extract all Then do the same in reverse once you've run your macros. This should reliably copy the master/sub project files with the correct links, without creating the erroneous links you've seen.
unknown
d15488
val
This can be implemented using the VACUUM command of Delta Lake, provided the retention period is set. Please refer: https://docs.databricks.com/delta/delta-utility.html#delta-vacuum
unknown
d15489
val
The problem is that your function receives a reference to FileName, but you are trying to pass an rvalue to it. That's incorrect: a temporary value cannot be bound to an lvalue reference. Change the parameter to a const reference, or create a FileName object and pass it.
unknown
d15490
val
You should use a projection with $elemMatch like so: db.collection.find({'ranges.first': {$lt: 29} ,'ranges.last': {$gt: 29} },{ ranges: { $elemMatch: {first: {$lt: 29} ,last: {$gt: 29} } }}).lean();
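Independent of MongoDB's syntax, the check being expressed is just "at least one array element whose first is below the value and whose last is above it". This plain-JavaScript sketch (with made-up sample documents) mirrors what the query plus the $elemMatch projection returns:

```javascript
// Hypothetical documents shaped like the question's collection.
const docs = [
  { name: "a", ranges: [{ first: 10, last: 20 }, { first: 25, last: 40 }] },
  { name: "b", ranges: [{ first: 50, last: 60 }] },
];

const target = 29;
const contains = (r) => r.first < target && target < r.last;

// Keep documents with at least one range containing the target,
// then (like the $elemMatch projection) keep only that range.
const projected = docs
  .filter((d) => d.ranges.some(contains))
  .map((d) => ({ name: d.name, range: d.ranges.find(contains) }));

console.log(projected); // only doc "a", with its { first: 25, last: 40 } range
```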
unknown
d15491
val
Assuming you intend to remove the items in the input, whose "value" field is 0 and then get the totalValue. Here is a quick one I have come up with(could be improved). %dw 2.0 output application/json //filter the items whose value is zero var filteredPayload= ((payload [-1 to 1] map (item1, index1) -> { (if (item1.value as Number != 0) (item1) else null) }) filter ($ != {})) // get the totalValue from the filteredPayload var totalFilteredPayload = filteredPayload reduce ($.value + $$.value) --- // simply add both the arrays filteredPayload ++ [{ "valueTotal": totalFilteredPayload as String }] A: If by "zeroed out" you mean value = 0 it is just a basic filter operation payload filter ($.valueTotal? or ($.value as Number != 0)) The condition $.valueTotal? is to make the object with valueTotal pass the check. And the other one is for the value itself.
unknown
d15492
val
Paint() - this method holds instructions to paint this component. Actually, in Swing, you should override paintComponent() instead of paint(), as paint() calls paintBorder(), paintComponent() and paintChildren(). You shouldn't call this method directly, you should call repaint() instead. repaint() - this method can't be overridden. It controls the update() -> paint() cycle. You should call this method to get a component to repaint itself. If you have done anything to change the look of the component, but not its size (like changing color, animating, etc.) then call this method. validate() - This tells the component to lay itself out again and repaint itself. If you have done anything to change the size of the component or any of its children (adding, removing, resizing children), you should call this method... I think that calling revalidate() is preferred to calling validate() in Swing, though... update() - This method is in charge of clearing the component and calling paint(). Again, you should call repaint() instead of calling this method directly... If you need to do fast updates in animation you should override this method to just call the paint() method... updateUI() - Call this method if you have changed the pluggable look & feel for a component after it has been made visible. Note: The way you have used switch case in your program is not a good implementation; use a variable (counter), increment it as the user clicks, and then use an if/while condition for further implementation.
unknown
d15493
val
just make these changes it should work <div class="c1" style="position:absolute;z-index:2147483647"> //code that makes a div move downwards </div>
unknown
d15494
val
The easiest way to get your internet ip address from code is to use NSURLConnection. For the URL you can use: http://www.whatismyip.com/m/mobile.asp or http://checkip.dyndns.com/ Just parse the return data and you have your external ip address. A: Check Apple's PortMapper, does exactly what you want. As of iOS7 this is irrelevant. A: Have a look at the example in my second Answer here. In a nutshell it uses http://www.dyndns.org/cgi-bin/check_ip.cgi to get the external IP. A: Late to the party, but https://api4.ipify.org or http://api4.ipify.org returns nothing else but the external IPv4 address of your connection. Code: NSURL *ipifyUrl = [NSURL URLWithString:@"https://api4.ipify.org/"]; NSString *externalAddr = [NSString stringWithContentsOfURL:ipifyUrl encoding:NSUTF8StringEncoding error:nil]; https://api6.ipify.org returns the external IPv6 address and https://api64.ipify.org either the IPv4 or the IPv6 address. Simple documentation can be found at https://www.ipify.org
unknown
d15495
val
Your code has quite a few problems: * *You are not including all the appropriate headers. How did you get this to compile? If you are using malloc and realloc, you need to #include <stdlib.h>. If you are using strlen and strcpy, you need to #include <string.h>. *Not really a mistake, but unless you are applying sizeof to a type itself you don't have to use enclosing brackets. *Stop using sizeof str to get the length of a string. The correct and safe approach is strlen(str)+1. If you apply sizeof to a pointer someday you will run into trouble. *Don't use sizeof(type) as argument to malloc, calloc or realloc. Instead, use sizeof *ptr. This will avoid your incorrect numElem * sizeof(char**) and instead replace it with numElem * sizeof *arrayString, which correctly translates to numElem * sizeof(char*). This time, though, you were saved by the pure coincidence that sizeof(char**) == sizeof(char*), at least on GCC. *If you are dynamically allocating memory, you must also deallocate it manually when you no longer need it. Use free for this purpose: free(testString);, free(arrayString);. *Not really a mistake, but if you want to cycle through elements, use a for loop, not a while loop. This way your intention is known by every reader. This code compiles fine on GCC: #include <stdio.h> //NULL, printf #include <stdlib.h> //malloc, realloc, free #include <string.h> //strlen, strcpy int main() { char** arrayString = NULL; char* testString; testString = malloc(strlen("1234567890123456789012345678901234567890123456789") + 1); strcpy(testString, "1234567890123456789012345678901234567890123456789"); for (int numElem = 1; numElem < 50; numElem++) { arrayString = realloc(arrayString, numElem * sizeof *arrayString); arrayString[numElem - 1] = malloc(strlen(testString) + 1); strcpy(arrayString[numElem - 1], testString); } free(arrayString); free(testString); printf("done\n"); return 0; }
unknown
d15496
val
If you can then try to mavenize your web application project to get all the dependencies that are required and to get away from all the non-required ones.
unknown
d15497
val
If I understand this correctly, you want a variable you can use in your templates and the controllers without having to pass it into the templates each time. To do this, first, create a function to get the variable. This could be something like getting a user's setting. Then in the context processor, you pass the result of this function through. To access it in your controllers as well, create an additional variable that holds the value of the getter. Note that if the function returns different values, you may need to call the function instead of using a variable.

# create a getter function, this returns the property's value.
def get_property():
    return "Test Text"

property = get_property()  # this is only to make code pretty later on.

# the context processor passes the property to the templates.
@app.context_processor
def property_processor():
    return dict(property=get_property())

In your controllers, you can access the variable created earlier.

def view():
    if property == True:
        return redirect(url_for('index'))
    else:
        return redirect(url_for('access_denied'))

Note that if your value will change, you may need to use the function instead and recall the getter each time you render a view. This would make the first if statement into if get_property() == True: instead.
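The closing note above (a changing value needs the function call, not the saved variable) can be seen without Flask at all; in this sketch get_property is a hypothetical stand-in for a setting that changes between calls:

```python
_calls = 0

def get_property():
    """Hypothetical stand-in for a setting that can change over time."""
    global _calls
    _calls += 1
    return f"value-{_calls}"

# Captured once at import time: frozen at the first call's result.
prop = get_property()

def view_with_variable():
    return prop               # always "value-1", even if the setting moves on

def view_with_getter():
    return get_property()     # re-evaluated on every request/render

print(view_with_variable())   # value-1
print(view_with_getter())     # value-2
print(view_with_getter())     # value-3
```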
unknown
d15498
val
I use hibernate as ORM/JPA provider, so an Hibernate solution can be provided if no JPA solution exists. Implementing the acceptable solution (i.e. fetching a Date for the latest B) would be possible using a @Formula. @Entity public class A { @Id private Long id; @OneToMany (mappedBy="parentA") private Collection<B> allBs; @Formula("(select max(b.some_date) from B b where b.a_id = id)") private Date latestBDate; } References * *Hibernate Annotations Reference Guide * *2.4.3.1. Formula Resources * *Hibernate Derived Properties - Performance and Portability A: See, http://en.wikibooks.org/wiki/Java_Persistence/Relationships#Filtering.2C_Complex_Joins Basically JPA does not support this, but some JPA providers do. You could also, - Make the variable transient and lazy initialize it from the OneToMany, or just provide a get method that searches the OneToMany. - Define another foreign key to the latest. - Remove the relationship and just query for the latest.
unknown
d15499
val
Syntax errors, due to incorrect escaping. From your generated JS: d.write(" ^---start of string <!DOCTYPE html> <html> <head> <title>https://api.classmarker.com/v1/groups/recent_results.json result</title> <link rel="style[...snip...] ^---end of string A trivial look at your browser's debug console would have told you this. Running around for 2+ hours, as you say you did, means you didn't look at the ONE thing that would immediately have told you about the problem. Since you need your backslashes to get from PHP -> JS, you need to DOUBLE escape at the PHP level: d.write(\" <!DOCTYPE html> <html> <head> <title>{$address} result</title> <link rel=\\"sty ^^---note the doubled backslash. A: Not sure if it is the problem, but you escape your newline character \\n A: Problem solved by using the following (verbose I know) code: echo "\n<script type='text/javascript'> $(document).ready( function () { var w = window.open('{$address} result', '#', 'width=800,height=600'); var d = w.document.open(); d.write('<html>\ <head>\ <title>{$address} result</title>\ <link rel=\"stylesheet\" href=\"css/base.css\" type=\"text/css\" />\ </head>\ <body class=\"result\">\ <code>Request method: {$request_method}\\n{$address}\\n?{$qry_cfg}&amp;{$man_qry}\\n", htmlentities($result), "</code>\ </body>\ </html>'); d.close(); }); </script>\n"; Not escaping the newlines may have been the crux of the problem. Funny because I knew this about JS and managed to overlook it here as it was working before in a different format.
unknown
d15500
val
You probably want to use the Server-Side Authentication flow. By checking the calls in the documentation it is quite clear which of your calls are wrong. First, your call to the oauth/access_token endpoint takes no 'type' => 'client_cred' argument, but it does need the redirect_uri parameter again: $getStr = self::FB_GRAPH_URL . 'oauth/access_token?' . http_build_query(array( 'client_id' => 'APP_ID', 'redirect_uri' => 'REDIRECT_URI', 'client_secret' => 'SECRET_KEY', 'code' => $fbCode) ); Then, you can't just take the answer of this call as your access_token, as there is much more in it: access_token=USER_ACCESS_TOKEN&expires=NUMBER_OF_SECONDS_UNTIL_TOKEN_EXPIRES and you only want the access_token part of it: $response = file_get_contents($getStr); $params = null; parse_str($response, $params); $dbpath = "https://graph.facebook.com/me?access_token=" . $params['access_token'];
unknown