_id (string, 2-6 chars) | partition (3 classes) | text (string, 4-46k chars) | language (1 class) | title (1 class) |
---|---|---|---|---|
d12401 | train | You could try doing
"@echo off>nul" (without quotes)
Which might stop it from displaying anything. | unknown | |
d12402 | train | You can save you current page in the Session and then retrieve it from there:
string previousPage = Session["PreviousPage"] as string;
Session["PreviousPage"] = System.IO.Path.GetFileName(System.Web.HttpContext.Current.Request.FilePath);
This way the previousPage string will always contain the previous page's filename, and the Session variable will contain the current page, ready to be used on the next page.
This way you can also detect if the referrer is an outside link because then the previousPage string will be null.
A: If it's only for this scenario (where you programmatically redirect to B.aspx) then why not put something on the querystring to say where the redirect came from? This would be more likely to work across multiple browser types and devices.
One advantage to this approach is that you'll be able to tell the difference between a redirect to B.aspx and a direct link (either via a link on one of your pages, or from the user entering the URL into the address bar) to page B.aspx.
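For illustration, a minimal sketch of that querystring approach (the parameter name "from" and the page names are assumptions, not taken from the original question):
// In A.aspx, when performing the redirect:
Response.Redirect("B.aspx?from=A.aspx");
// In B.aspx, reading where the request came from:
string from = Request.QueryString["from"]; // null when B.aspx was reached directly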
The referrer is something that the client provides as part of the HTTP request. As such, you can't rely on it.
By the way, this question is related:
Request.UrlReferrer null?
Update
Given your comments, it's not clear that there's an easy solution other than "edit all your files". I suspect that global search/replace might be your best bet.
Some more background: If you use Fiddler (or any other http debugging tool) you should be able to see that the Referrer header isn't being populated when you perform a redirect. For example, this is the result of a redirect (i.e. an HTTP 302 response causing IE to redirect to another page):
GET /webapplication1/WebForm3.aspx HTTP/1.1
Accept: image/gif, image/jpeg, image/pjpeg, application/x-ms-application, application/vnd.ms-xpsdocument, application/xaml+xml, application/x-ms-xbap, application/vnd.ms-excel, application/vnd.ms-powerpoint, application/msword, application/x-shockwave-flash, */*
Accept-Language: en-GB
User-Agent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0; WOW64; Trident/4.0; SLCC1; .NET CLR 2.0.50727; Media Center PC 5.0; InfoPath.2; .NET CLR 3.5.21022; .NET CLR 3.5.30729; .NET CLR 3.0.30618; MS-RTC LM 8; Zune 3.0)
Accept-Encoding: gzip, deflate
Connection: Keep-Alive
Host: (removed)
Here is the HTTP request that is generated by clicking the "Questions" link on StackOverflow.com:
GET /questions HTTP/1.1
Accept: image/gif, image/jpeg, image/pjpeg, application/x-ms-application, application/vnd.ms-xpsdocument, application/xaml+xml, application/x-ms-xbap, application/vnd.ms-excel, application/vnd.ms-powerpoint, application/msword, application/x-shockwave-flash, */*
Referer: https://stackoverflow.com/questions/772780/finding-previous-page-url
Accept-Language: en-GB
User-Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.0; WOW64; Trident/4.0; SLCC1; .NET CLR 2.0.50727; Media Center PC 5.0; InfoPath.2; .NET CLR 3.5.21022; .NET CLR 3.5.30729; .NET CLR 3.0.30618; MS-RTC LM 8; Zune 3.0)
Accept-Encoding: gzip, deflate
Host: stackoverflow.com
Connection: Keep-Alive
You can see that the latter, generated by a link on a page, includes the Referer header.
A: You can also use Server.Transfer("B.aspx") instead of Response.Redirect("B.aspx")
Edit: Searock, if you don't want to change your existing code, Request.ServerVariables["HTTP_REFERER"].ToString() should work fine in that case.
A: Just to note that HTTP_REFERER is not reliable. You can't rely on it because many clients don't send it, for various reasons (paranoid settings, security software, etc.).
Also, some new windows opened by JS might not have a REFERER.
Navigation from SSL to non-SSL pages won't have a REFERER either, so be careful about relying on something like that.
A better idea would be sending the previous page in the querystring.
If it's ASPX you might do it in a more clever way, like adding a new hidden parameter to all forms or processing links just before writing out the buffer.
A: Could you just confirm what methods you are actually using here (ideally by editing the original question)?
HttpServerUtility (i.e. Server.) doesn't have a "Redirect" method, it has Transfer and Execute.
HttpResponse (i.e. Response.) does.
HttpResponse.Redirect will send a 302 response to the client, telling it to issue a new request for the value of the Location field. I'm then able to query Request.UrlReferrer to see the value of the page that performed the Redirect.
If you are using HttpServerUtility.Transfer or HttpServerUtility.Execute then these actions happen entirely at the server within ASP.NET, and so the "referrer" may well be null. The client browser will also think it is still on the originally requested page.
See also How to detect if an aspx page was called from Server.Execute | unknown | |
d12403 | train | Ok I have found two changes that fixed the problem.
Add channelId: live in my workflow yaml
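For reference, a sketch of where that setting might live, assuming the FirebaseExtended/action-hosting-deploy GitHub Action is being used (the actual workflow file is not shown in the question):
- uses: FirebaseExtended/action-hosting-deploy@v0
  with:
    repoToken: ${{ secrets.GITHUB_TOKEN }}
    firebaseServiceAccount: ${{ secrets.FIREBASE_SERVICE_ACCOUNT }}
    channelId: live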
Reload the site with the "Empty chache and Hard reload" developer option. | unknown | |
d12404 | train | You can implement _app.tsx, which will run for all pages, though it has some drawbacks too like disabling automatic static generation.
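A minimal sketch of such a custom _app.tsx (standard Next.js boilerplate, shown here only as an assumption about what yours might contain):
import type { AppProps } from 'next/app'

export default function MyApp({ Component, pageProps }: AppProps) {
  // Anything placed here runs for every page in the app
  return <Component {...pageProps} />
}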
Another option is to implement a custom server using express itself, as shown in this example | unknown | |
d12405 | train | Consider a table or query like:
+----------+------------+---------------+
| CourseID | CourseName | CourseSection |
+----------+------------+---------------+
| 01 | Baking | 101 |
| 01 | Driving | 102 |
| 01 | Writing | 101 |
| 02 | Baking | 102 |
| 02 | Writing | 101 |
+----------+------------+---------------+
And query:
TRANSFORM First(Courses.CourseSection) AS FirstOfCourseSection
SELECT Courses.CourseID
FROM Courses
WHERE (((Courses.CourseName)<>"Writing"))
GROUP BY Courses.CourseID
PIVOT Courses.CourseName;
Output:
+----------+--------+---------+
| CourseID | Baking | Driving |
+----------+--------+---------+
| 01 | 101 | 102 |
| 02 | 102 | |
+----------+--------+---------+
If your course names are less consistent for spelling and the name and section are actually in one field but the section suffix is consistent, then emulate a crosstab like:
SELECT Courses.CourseID, Max(IIf(InStr([CourseName],"Baking")>0,Right([CourseName],3),Null)) AS Baking,
Max(IIf(InStr([CourseName],"Driving")>0,Right([CourseName],3),Null)) AS Driving
FROM Courses
GROUP BY Courses.CourseID; | unknown | |
d12406 | train | I figured out the problem. I noticed the hosting company does not configure PHP to display errors, which makes it difficult to know where the problem is. The problem was actually the database connection to the server. Once I fixed the database connection my website came up fine. | unknown | |
d12407 | train | SELECT XMLdata AS '*'
FROM ActivityTable
FOR XML PATH(''), ROOT('RootNode')
Columns with a Name Specified as a Wildcard Character
If the column name specified is a wildcard character (*), the content
of that column is inserted as if there is no column name specified.
A: using .query('/Node') is a way of querying for a certain node, and you don't get the XMLData tags back. Hope it helps!
SELECT XMLDATA.query('/Activity') FROM ActivityTable
FOR XML PATH(''), ROOT('root')
SQL Fiddle example | unknown | |
d12408 | train | A primitive combinator is one that's built into the DSL, defined in the base language (ie Haskell). DSLs are often built around an abstract type—a type whose implementation is hidden to the end-user. It's completely opaque. The primitive combinators, presented by the language, are the ones that need to know how the abstraction is actually implemented to work.
A derived combinator, on the other hand, can be implemented in terms of other combinators already in the DSL. It does not need to know anything about the abstract types. In other words, a derived combinator is one you could have written yourself.
This is very similar to the idea of primitive types in Haskell itself. For example, you can't implement Int or the Int operations like + yourself. These require things built into the compiler to work because numbers are treated specially. On the other hand, Bool is not primitive; you could write it as a library.
data Bool = True | False -- ... you can't do this for Int!
"Primitive" and "derived" for DSLs is the same idea except the compiler is actually your Haskell library.
In your example, Signal is an abstract type. It's implemented as a function Time -> a, but that information is not exported from the module. In the future, you (as the author of the DSL) are free to change how Signal is implemented. (And, in fact, you'd really want to: this is not an efficient representation and using Double for time is finicky.)
A function like $$ is primitive because it depends on knowing that Signal is Time -> a. When you change the representation of Signal, you'll have to rewrite $$. Moreover, a user of your library wouldn't be able to implement $$ themselves.
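As a sketch of what those primitives might look like on the inside (assuming the Time -> a representation mentioned above; the real module from the question may differ in details):
type Time = Double

newtype Signal a = Signal (Time -> a)   -- hidden from users of the DSL

constS :: a -> Signal a
constS x = Signal (\_ -> x)

($$) :: Signal (a -> b) -> Signal a -> Signal b
(Signal f) $$ (Signal x) = Signal (\t -> f t (x t))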
On the other hand, mapS is a derived operation because it could be written entirely in terms of the other things you're exporting. It does not need to know anything special about Signal and could even be written by one of the users of the library. The implementation could look something like:
mapS f signal = constS f $$ signal
Note how it uses constS and $$, but never unwraps signal. The knowledge of how to unwrap signal is hidden entirely in those two functions. mapS is "derived" because it is written just in your DSL without needing anything below your level of abstraction. When you change the implementation of Signal, mapS will still work as-is: you just need to update constS and $$ properly and you get mapS for free.
So: primitive combinators are ones which are built directly into your language and need to know about its internal implementation details. Derived combinators are written purely in terms of your language and do not depend on any of these internal details. They're just convenience functions which could have just as easily been written by the end-user of your library. | unknown | |
d12409 | train | It turns out, I needed to get back to the basics, and check out some of my fundamentals and functions. The way to do an 'is empty' check on a property is as below:
<exec program="C:\Deploy\MyAwesomeDeployProgram.exe">
<!-- Other args... -->
<arg value="MyNewProperty=${deploy.NewArg}" if="${string::get-length(deploy.NewArg) > 0}" />
</exec>
For someone who hasn't yet done the research, the if attribute only works with a boolean. That being said, booleans can be generated in one of two ways: either an explicit true or false, or by using an expression. Expressions are always contained in ${} brackets. The use of string::get-length() should be obvious.
Bring it all together, and you only include an argument if it's specified. | unknown | |
d12410 | train | You should override onRequestPermissionsResult in your activity in order to get notified whenever permission is granted or not:
@Override
public void onRequestPermissionsResult(int requestCode, String[] permissions, int[] grantResults) {
super.onRequestPermissionsResult(requestCode, permissions, grantResults);
if (requestCode == PERMISSIONS_CODE) {
for (int i = 0; i < permissions.length; i++) {
String permission = permissions[i];
int grantResult = grantResults[i];
if (permission.equals(Manifest.permission.READ_EXTERNAL_STORAGE)) {
if (grantResult == PackageManager.PERMISSION_GRANTED) {
// TODO Open The Crop Activity
} else {
// TODO Tell the user your app can't function properly
}
}
}
}
} | unknown | |
d12411 | train | I think this can help you to solve the problem:
WebElement a = driver.findElement(By.cssSelector("your_selector"));
WebElement b = driver.findElement(By.cssSelector("your_selector"));
int x = b.getLocation().x;
int y = b.getLocation().y;
Actions actions = new Actions(driver);
actions.moveToElement(a)
.pause(Duration.ofSeconds(1))
.clickAndHold(a)
.pause(Duration.ofSeconds(1))
.moveByOffset(x, y)
.moveToElement(b)
.moveByOffset(x,y)
.pause(Duration.ofSeconds(1))
.release().build().perform(); | unknown | |
d12412 | train | the widgets() method returns a list of widgets and not a GQuery object
List<Widget> myPasswordInputs = $(e).filter("input[type='password']").widgets();
If there is only one input of type password you can maybe use the widget() method directly:
PasswordTextBox myPasswordInput = $(e).filter("input[type='password']").widget();
Question: are you sure of your '$(e).filter("input[type='password']")' ?
Because it means : "Create a GQuery object containing my element 'e' and keep it only if 'e' is an input of type password"
If you want to retrieve all password inputs present within an element e, you have to use :
List<Widget> myPasswordInputs = $("input[type='password']",e).widgets();
Julien
A: Try:
GQuery input = GQuery.$(e).filter("input[type='password']").widgets();
You need to do a static import to use $ directly:
import static com.google.gwt.query.client.GQuery.*;
import static com.google.gwt.query.client.css.CSS.*; | unknown | |
d12413 | train | For cases that the field values are vectors (of same size) and you need the result in a matrix form:
posmat = cell2mat({poses.pose}');
That returns each pose vector in a different row of posmat.
A: Use brackets:
timevec=[poses.time];
tricky matlab, I know I know, you'll just have to remember this one if you're working with structs ;) | unknown | |
d12414 | train | change to using a database ...
import sqlite3
db = sqlite3.connect("my.db")
db.execute("CREATE TABLE IF NOT EXISTS my_items (text PRIMARY KEY, id INTEGER);")
my_list_of_items = [("test",1),("test",2),("asdasd",3)]
db.executemany("INSERT OR REPLACE INTO my_items (text,id) VALUES (?,?)",my_list_of_items)
db.commit()
print(db.execute("SELECT * FROM my_items").fetchall())
this may have marginally higher overhead in terms of time ... but you will save in RAM
A: Hashing is a well-researched subject in Computer Science. One of the standard uses is for implementing what Python calls a dictionary. (Perl calls the same thing a hash, for some reason. ;-) )
The idea is that for some key, such as a string, you can compute a simple numeric function - the hash value - and use that number as a quick way to look up the associated value stored in the dictionary.
Python has the built-in function hash() that returns the standard computation of this value. It also supports the __hash__() function, for objects that wish to compute their own hash value.
In a "normal" scenario, one way to determine if you have seen a field value before would be to use the field value as part of a dictionary. For example, you might stored a dictionary that maps the field in question to the entire record, or a list of records that all share the same field value.
In your case, your data is too big (according to you), so that would be a bad idea. Instead, you might try something like this:
seen_before = {} # Empty dictionary to start with.
while ... something :
info = read_next_record() # You figure this out.
fld = info.fields[some_field] # The value you care about
hv = hash(fld) # Compute hash value for field.
if hv in seen_before:
print("This field value has been seen before")
else:
seen_before[hv] = True # Never seen ... until NOW!
A: Could use a dict with the text as key and the newest object for each key as value.
Setting up some demo data:
>>> from collections import namedtuple
>>> Object = namedtuple('Object', 'text ID')
>>> objects = Object('foo', 1), Object('foo', 2), Object('bar', 4), Object('bar', 3)
Solution:
>>> unique = {}
>>> for obj in objects:
if obj.text not in unique or obj.ID > unique[obj.text].ID:
unique[obj.text] = obj
>>> unique.values()
[Object(text='foo', ID=2), Object(text='bar', ID=4)] | unknown | |
d12415 | train | Use an expression that means every day at 11:00am e.g. "0 0 11 * * ?".
Then set the startTime of the trigger to 5th of Sep 2011 10:59 am, and set the endTime of the trigger to 10th of June 2012 11:01 am.
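If the trigger is built with the Quartz 2.x API directly (an assumption, since the question may be going through Camel as the next answer suggests), a sketch of that could look like:
import org.quartz.CronScheduleBuilder;
import org.quartz.Trigger;
import org.quartz.TriggerBuilder;
import java.util.Calendar;
import java.util.Date;
import java.util.GregorianCalendar;

Date start = new GregorianCalendar(2011, Calendar.SEPTEMBER, 5, 10, 59).getTime();
Date end = new GregorianCalendar(2012, Calendar.JUNE, 10, 11, 1).getTime();

Trigger trigger = TriggerBuilder.newTrigger()
        .withSchedule(CronScheduleBuilder.cronSchedule("0 0 11 * * ?")) // every day at 11:00 am
        .startAt(start)
        .endAt(end)
        .build();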
A: Another solution I found is to specify a route policy (SimpleScheduledRoutePolicy) for the scheduled route and set the RouteStartDate and setRouteStopDate for this policy object.
A: A single cron expression doesn't facilitate running different schedules for the same period type, no matter which period is concerned; in your case the schedule that differs is the year period.
However other than your year difference all other periods have the same schedule. So... using these cron expressions:
cron1 = "0 0 23 5/1 SEP-DEC ? 2012"
cron2 = "0 0 23 1/1 JAN-JUN ? 2013"
you can switch the scheduler from using cron1 to cron2 sometime after 11:00.00 PM on 12/31/2012 but before 10:59.99 PM on 1/1/2013, though I wouldn't cut it so close as shown here. If your scheduler is reading its cron expression from the database or a configuration somewhere, then just have it read in a new schedule every day at 11:30 PM. If you are storing your cron expressions in a database you could schedule a job to swap out the cron expression for your particular task using this cron3 below:
cron3 = "0 0 0 1 JAN ? 2013"
Silly me :o) today's date is Mar 13, 2013 so I'm sure this answer is a bit late for you! | unknown | |
d12416 | train | Actually, I figured it out. You add "checkimage.setUrl("mvpwebapp/gwt/clean/images/xmark.png");" to change the images. | unknown | |
d12417 | train | You should declare cars like this:
var cars = {
"1": "transmission.jpg",
"2": "High-tensile-steel-plates.jpg",
"3": "image_306.jpg"
};
I take it that :="variable": is a pre-processor directive that will get replaced with the value of the PLC variable named variable.
Calling cars[:="variable":] will then use the value of variable as a key for the associative array. When variable has the value of 1 then cars[:="variable":] will return transmission.jpg. | unknown | |
d12418 | train | wipe the emulator data from the AVD Manager
edit:
I had the same issue on my Mac; the solution was to go to the AVD Manager in Android Studio and wipe the emulator data, as in the screenshot provided: | unknown | |
d12419 | train | Change you code like this
while (reader.Read())
{
var storedProcCommand = new SqlCommand("EXEC addToNotificationTable @ID", connection);
var paramId = new SqlParameter("@ID", reader.GetInt32(0));
storedProcCommand.Parameters.Add(paramId);
storedProcCommand.ExecuteNonQuery();
}
A: First of all, you forgot to specify the command type. Also, using EXEC in a SqlCommand is not the proper way.
Please try with the below code
while (reader.Read())
{
using(SqlCommand storedProcCommand = new SqlCommand("addToNotificationTable", connection)) //Specify only the SP name
{
storedProcCommand.CommandType = CommandType.StoredProcedure; //Indicates that the command to be executed is a stored procedure, not a query
var paramId = new SqlParameter("@ID", reader.GetInt32(0));
storedProcCommand.Parameters.Add(paramId);
storedProcCommand.ExecuteNonQuery();
}
}
Since you are calling the sp inside a while loop, wrap the code in using() { } to automatically dispose the command object after each iteration | unknown | |
d12420 | train | Monoid](a: A): MonoidOp[A] = new MonoidOp[A]{
val F = implicitly[Monoid[A]]
val value = a
}
}
I have defined a function (just for the sake of it)
def addXY[A: Monoid](x: A, y: A): A = x |+| y
I want to lift it so that it could be used using Containers like Option, List, etc. But when I do this
def addXYOptioned = Functor[Option].lift(addXY)
It says error: could not find implicit value for evidence parameter of type scalaz.Monoid[A]
def addOptioned = Functor[Option].lift(addXY)
How to lift such functions?
A: Your method addXY needs a Monoid[A] but there is no Monoid[A] in scope when used in addXYOptioned, so you also need to add the Monoid constraint to addXYOptioned.
The next problem is that Functor.lift only lifts a function A => B, but we can use Apply.lift2 to lift a function (A, B) => C.
Using the Monoid from Scalaz itself :
import scalaz._, Scalaz._
def addXY[A: Monoid](x: A, y: A): A = x |+| y
def addXYOptioned[A: Monoid] = Apply[Option].lift2(addXY[A] _)
We could generalize addXYOptioned to make it possible to lift addXY into any type constructor with an Apply instance :
def addXYApply[F[_]: Apply, A: Monoid] = Apply[F].lift2(addXY[A] _)
addXYApply[List, Int].apply(List(1,2), List(3,4))
// List[Int] = List(4, 5, 5, 6)
addXYApply[Option, Int].apply(1.some, 2.some)
// Option[Int] = Some(3) | unknown | |
d12421 | train | For some reason it doesn't work when you don't lock the stream with ReadableStream.getReader.
Also, when you pass a ReadableStream to the Response constructor, the Response.text method doesn't process the stream; it returns the toString() of an object instead.
const createStream = () => new ReadableStream({
start(controller) {
controller.enqueue("<h1 style=\"background: yellow;\">Yellow!</h1>")
controller.close()
}
})
const firstStream = createStream().getReader();
const secondStream = createStream().getReader();
new Response(firstStream, {
headers: {
"Content-Type": "text/html"
}
})
.text()
.then(text => {
console.log(text);
});
secondStream.read()
.then(({
value
}) => {
return new Response(value, {
headers: {
"Content-Type": "text/html"
}
});
})
.then(response => response.text())
.then(text => {
console.log(text);
}); | unknown | |
d12422 | train | If you have a MultiIndex, I think you need to first compare the values of aaa against the condition, then filter all values in the first level by boolean indexing, and last filter again by isin with the condition inverted by ~:
print (df)
aaa
date time
2015-12-01 00:00:00 0
00:15:00 0
00:30:00 0
00:45:00 0
2015-12-02 05:00:00 0
05:15:00 200
05:30:00 0
05:45:00 0
2015-12-03 06:00:00 0
06:15:00 0
06:30:00 1000
06:45:00 1000
07:00:00 1000
lvl0 = df.index.get_level_values(0)
idx = lvl0[df['aaa'].gt(100)].unique()
print (idx)
Index(['2015-12-02', '2015-12-03'], dtype='object', name='date')
df = df[~lvl0.isin(idx)]
print (df)
aaa
date time
2015-12-01 00:00:00 0
00:15:00 0
00:30:00 0
00:45:00 0
And if the first column is not an index, only compare the date column:
print (df)
date time aaa
0 2015-12-01 00:00:00 0
1 2015-12-01 00:15:00 0
2 2015-12-01 00:30:00 0
3 2015-12-01 00:45:00 0
4 2015-12-02 05:00:00 0
5 2015-12-02 05:15:00 200
6 2015-12-02 05:30:00 0
7 2015-12-02 05:45:00 0
8 2015-12-03 06:00:00 0
9 2015-12-03 06:15:00 0
10 2015-12-03 06:30:00 1000
11 2015-12-03 06:45:00 1000
12 2015-12-03 07:00:00 1000
idx = df.loc[df['aaa'].gt(100), 'date'].unique()
print (idx)
['2015-12-02' '2015-12-03']
df = df[~df['date'].isin(idx)]
print (df)
date time aaa
0 2015-12-01 00:00:00 0
1 2015-12-01 00:15:00 0
2 2015-12-01 00:30:00 0
3 2015-12-01 00:45:00 0 | unknown | |
d12423 | train | VS hangs many times when we click on its design page (aspx) and it dies.
This issue can be solved by following these steps –
1. Go to control Panel
2. Add or Remove programs
3. Find Microsoft Visual Studio Web Authoring Component.
4. Click on change and then click Repair.
5. Restart your system and hopefully your problem will get solved.
A: It might be the ToolBox refreshing itself. Try this tip:
http://weblogs.asp.net/stevewellens/archive/2009/07/23/speed-up-the-visual-studio-toolbox.aspx
A: There is a good resolution on MSDN blog. Follow the link below:
Troubleshooting "Visual Studio 2008 Design view hangs" issues
A: This link fixed my hang problem:
https://blogs.msdn.microsoft.com/amol/2009/07/23/visual-studio-2008-hangs-on-switching-to-design-view/
Run regedit.exe and look for HKEY_LOCAL_MACHINE\Software\Microsoft\Office\12.0\Common\ProductVersion key. If key is missing, add it and set “LastProduct” value to 12.0.4518.1066. Restart the machine after this step.
Also, you can try repair "Microsoft Visual Studio Web Authoring Component" in programs and features. | unknown | |
d12424 | train | Turns out it works if I fix the stacking of elements. I had nested my navbar component inside the MapContainer, but this was making things wonky mobile-side. I moved my component outside the MapContainer, and things worked.
I still don't understand why it went wonky mobile-side and in iOS only, but this problem at least is solvable.
Fixed example is here.
My map token has been removed, so you can use your own to substitute, but it is not necessary to have the map showing to see the problem/solution. | unknown | |
d12425 | train | It's hard to tell without more details, but my guess is that it's detecting the bash process. The way you're doing it, the remote ssh daemon is running bash -c '<newline>pgrep -f name > p_id<newline>', which then runs pgrep -f name (with output to the file "p_id"). Note that pgrep -f searches the entire command line for matches, so the "name" part of the argument to bash is probably matching itself.
One option is to use the old trick to keep ps | grep something from matching itself: use [n]ame instead of just name. But then you need quotes around it, so the quoting gets even more complicated than it already is. It looks simpler to me to just skip the bash -c part and one layer of quoting:
ssh -l $UNAME $HOST 'pgrep -f name >p_id'
A: You didn't include any code from the script, but first look at user permissions, they may be the reason for process command returning different results. Other potential trouble spots are:
Check script permissions and verify execute options. Make sure script user has the correct permissions for the directory where that process runs.
You ran the command via the command prompt, is that with the same user that is executing the script? If not it may be permissions.
Put the -u option in your pgrep command for the user who owns the process you are looking for. Or try inserting an sudo command into the script and run the pgrep as another user, one with admin/root-like privileges.
And don't forget to read the man pages, perhaps there's an option you need. | unknown | |
d12426 | train | The data variable in the done() is a string. You have to transform it to an object like this
var response = $.parseJSON(data);
in order to access the attributes
A: I am sorry I missed dataType : "json" in your code in my previous answer.
Anyway, I tried your code and it is working. The alert shows the message. I think you have an error somewhere else. I think it has something to do with the array you are encoding to JSON (the PHP part). The response you get is not complete. Try to debug your PHP and test the page separately from AJAX and see what the result is
A: After some tinkering I was able to get things working the way I wanted. Here's the updated code.
jQuery
.done(function (data) {
$("#user_add_dialog").dialog({
autoOpen: false,
modal: true,
close: function (event, ui) {
},
title: "Add User",
resizable: false,
width: 500,
height: "auto"
});
$("#user_add_dialog").html(data.message);
$("#user_add_dialog").dialog("open");
});
return false; // required to block normal submit since you used ajax
PHP
<?php
// All processing code above this line has been truncated for brevity
if($rows == "1"){
$resp = array("message"=>"Account created successfully. Waiting for user activation.");
}else{
$resp = array("message"=>"User account already exists.");
}
echo json_encode($resp);
?> | unknown | |
d12427 | train | Breadcrumb remark: I spent an hour with the debugger, partly because I did not realize that stepping-in would continue to move forward after the error was thrown.
In this case, I was acting on bad or obsolete information that localStorage[foo] would return null if nothing was saved; it was returning undefined which passed my non-null check, and JSON.parse() was throwing an exception at being fed the string "undefined". | unknown | |
d12428 | train | There is no reasonable limit* on the client side (or in GWT) of a <input type=file>. That said, it is quite likely that your server has a limit on the size of an upload, so be sure to configure whatever server (and optionally any reverse proxy in use) to use a high enough limit.
* technically, it appears there is a limit - the file cannot be larger than 2^64 bytes (about 18 million terabytes), at least according to MDN. | unknown | |
d12429 | train | Using Big O definition:
f = O(g) iff exist c, n0 > 0 such that forall n >= n0 then 0 <= f(n) <= cg(n)
g = O(h) iff exist k, n1 > 0 such that forall n >= n1 then 0 <= g(n) <= kh(n)
Now take the first inequality and divide all members by c: 0 <= f(n)/c <= g(n), and we can substitute the bound on g(n) from the second inequality: 0 <= f(n)/c <= kh(n). Finally multiply all members by c and you obtain 0 <= f(n) <= kch(n), which is the definition of f = O(h):
f = O(h) iff exist j, n2 > 0 such that forall n >= n2 then 0 <= f(n) <= jh(n)
In our case it is: n2 = max(n0, n1) and j = ck.
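The whole argument can also be written as a single chain of inequalities (same constants as above):
0 \le f(n) \le c\,g(n) \le c\,k\,h(n) \quad \text{for all } n \ge \max(n_0, n_1)
which is exactly f = O(h) with j = ck and n2 = max(n0, n1).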
A: You can use limits interpretation of Bachmann–Landau notations.
Then you can use the following reasoning:
A: If you substitute the second inequality into the first you should end up with U3 = U1 * U2, but what you called "i" is the crucial point. I think (but my theoretic days are far away in the past, so I could be wrong) that you could end up elegantly with an n >= argmax{ k, j } | unknown | |
d12430 | train | This should work for you:
SQLcmd.CommandText = ("INSERT INTO students([student_ID], [LastName],
[FirstName],[Address],[City]) VALUES({1},'{2}','{3}','{4}','{5}'"),LastName
,firstName,Address,city)
BUT you will be prone to SQL Injection. The correct way to do this is described here and it's name is by using SQL Parameters
A: You need to instantiate SQLcmd before trying to set properties on it. | unknown | |
d12431 | train | You need a correlated subquery:
DELETE t1
FROM t1
WHERE EXISTS (SELECT t1.col1, t1.col2, t1.col3
FROM t1 as tt1
WHERE t1.col1 = tt1.col1 AND t1.col2 = tt1.col2 AND t1.col3 = tt1.col3 AND t1.id <> tt1.id
) AND
t1.col4 Is Null; | unknown | |
d12432 | train | but when i run CreateDB dbhelp = new CreateDB(this); in main
activity then it works and database is created
Because you forgot to initialize your context object in your AsyncTask.
public class manager extends AsyncTask<Void, Void, String> {
public Context context; <-- Declared but not initialized
@Override
protected String doInBackground(Void... params) {
CreateDB dbhelp = new CreateDB(context);
}
You could create a constructor for your AsyncTask :
public manager(Context context) {
this.context = context;
}
And then in your activity :
new manager(this).execute();
Also try to respect name conventions.
A: You need to pass valid context as a parameter:
CreateDB dbhelp = new CreateDB(getContext()); | unknown | |
d12433 | train | If you need some sort of configuration for this setup then there is no silver bullet. You can use json, yaml or maybe a relational database to store the configuration. Some improvement could come from allowing python code to be used for config, but this creates security issues if configuration can be provided externally.
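As a minimal sketch of the factory idea described in the next paragraph (the config keys "module", "class" and "params" are assumed names, not a fixed convention):
import importlib

def build(spec):
    # spec is one configuration fragment, e.g. loaded from JSON or YAML:
    # {"module": "mypackage.models", "class": "MyModel", "params": {"size": 10}}
    module = importlib.import_module(spec["module"])
    cls = getattr(module, spec["class"])
    return cls(**spec.get("params", {}))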
The second step is translating the configuration into actual Python class instances, assigning the correct parameters, etc. importlib serves great for this purpose and there isn't much to be improved upon here. You need some kind of factory for your classes; just try not to abstract too much (that is very Java-like and not pythonic). Maybe one global method that is able to create objects based solely on a configuration fragment? | unknown | |
d12434 | train | For starters, do not use inline handler with jQuery. That separates the handler from the registration for no good reason and leads to maintenance problems. Use classes or IDs to match the elements and use jQuery handlers.
The problem is event propagation. To stop the click propagating use e.stopPropagation() in the handler:
<table>
<tr class="doSomeFunc">
<td>Foo</td>
<td><button class="doAnotherFunc">Bar</button></td>
</tr>
</table>
$('.doSomeFunc').click(function(e){
alert("Foo");
});
$('.doAnotherFunc').click(function(e){
e.stopPropagation();
alert("Bar");
});
If you want to stick with your existing non-jQuery code, just change this:
<button onClick="anotherFunc();return false;">
return false from a mouse handler will do the same as e.stopPropagation() and e.preventDefault().
A: Your click action is being propagated to all parent elements of the button. To stop that, use event.cancelBubble = true (or, if you're using jQuery you can use event.stopPropagation()) in the click event.
A: You need to use e.stopPropagation();
Prevents the event from bubbling up the DOM tree, preventing any parent handlers from being notified of the event https://api.jquery.com/event.stoppropagation/
Here is a demo: https://jsfiddle.net/j81czwky/
$("tr").click(function(e){
alert("Foo");
})
$("button").click(function(e){
e.stopPropagation();
alert("Bar");
});
<table>
<tr>
<td>Foo</td>
<td><button>Bar</button></td>
</tr>
</table> | unknown | |
d12435 | train | You can read in this issue that the functionality is no longer supported, the method is still there, but as a no-op.
due to changes in our player infrastructure, the player will no longer
honor requests to set a manual playback quality via the API. As
documented, the player has always made a "best effort" to respect the
requested quality.
The documentation will be updated in the future to indicate this call
is no longer supported, though it will still be available as a "no-op"
for compatibility purposes.
A: So, finally, answer from Google:
setPlaybackQuality is now considered a "no-op"; calling this function
will not change the player behavior. The player will use a variety of
signals to determine the optimal playback quality.
Users are able to manually request a specific playback quality via the
quality selector in the player controls.
A: It was also reported in this thread. You could file a bug report if you think this is a bug. | unknown | |
d12436 | train | Since your element is generated "on the fly" through JavaScript, your $('#endorse').click(...) event won't work, as that element did not exist in the DOM. In order to add events to elements created on the fly, you need to use event delegation, so
change:
$('#endorse').click(function(){
..
to
$(document).on('click', '#endorse',function(){
...
See: Updated jsFiddle
A: You can try this: Fiddle setup
$(function () {
$('#engr-choice').buttonset();
$('#cancel-action').button().click(function () {
$('#engr-choice').html('');
$('#endorse-edit').hide();
$('#engr-choice').append('<input id="endorse" class="engr-choice" type="radio" name="encoder-pick"/> <label for="endorse">ENDORSEMENT</label>');
$('#engr-choice input').prop('checked', false);
return false;
});
$('#engr-action').on('click', '#endorse', function () {
$('#engr-choice').html('');
$('#engr-choice').wrapInner('<div id="endorse-edit"><a href="">Edit</a></div>');
$('#endorse-edit').button();
});
});
As you are putting html elements via javascript/jQuery so direct binding of events won't be available for them, so you need to do it via event delegation that is to delegate the event to the static closest parent which is in your case is #engr-action or you can do it with $(document) which is always available to delegate the events. | unknown | |
d12437 | train | First you need to save your response in your state to refer to later on, and then use the FlatList component of react-native:
const List = () => {
const[post,setPost] = useState([]);
useEffect(() => {
const url = 'http://api.duckduckgo.com/?q=simpsons+characters&format=json';
fetch(url).then((res) => res.json())
.then((resp) => setPost(resp.RelatedTopics))
.catch((err) => console.warn(err));
},[]);
const renderItem = ({ item }) => (
<Text style={styles.textStyle}>{item.Text}</Text>
);
return(
<View>
<FlatList
data={post}
renderItem={renderItem}
keyExtractor={item => item.FirstURL}
/>
</View>
);
};
Refer this official doc with example for more detail
A: First you'd want to take a look at the docs. I didn't read the docs when I started playing around with FlatList and wrote my own buggy, confusing code to figure out what items were on-screen. While trying to make it less buggy, I found an amazingly simple way of doing that. It will save you time, I promise.
With that said you are going to want a few items
*
*Your list (you have).
*a component that renders a single item of your list. This component will have two relevant props: index and item. Index is as you have probably guessed, and item is data the list item has
So assuming you have a list like this
const data = [
{name:"Harry Potter", phone:1234},
{name:"Ron Weasley", phone:4321}
]
Your renderItem component may look like:
const renderItem=({index,item})=>{
let style = [{fontSize:14}]
// if first item give it spacing at top
if(index == 0)
style.push({paddingTop:5})
return(
<View style={style}>
<Text>Name:{item.name}</Text>
<Text>Phone:{item.phone}</Text>
</View>
)
}
* A keyExtractor function, which should generate a string unique across all components in your app for each item of your list.
Bring this all together and you get:
<FlatList
data={data}
renderItem={renderItem}
keyExtractor={(item, key) => item.id || 'harry-potter-character' + key}
/> | unknown | |
d12438 | train | It is Point["step"]();. Here is the snippet:
var Point = {
step: function () {
alert("hello");
}
};
Point["step"](); | unknown | |
d12439 | train | Think about what the meaning of genres to a book is. Taxonomy is just what you use for this kind of thing. There are several pros to using taxonomy rather than CCK fields:
* Taxonomy is metadata, CCK fields are not. This means that the way the HTML is generated for taxonomy terms will help search engines understand that these genres are important, and it will give you free SEO.
* You can set up how genres should be selected in far more detail than with a CCK field, again since taxonomy is made for exactly this kind of thing. You can set up how users are presented with the genre selection in various ways. You can predefine genres or let users enter their own as they like. You can make child-parent relationships and more.
* It's easier and more lightweight to use taxonomy than CCK fields.
* If there are only 1 or 2 genres inputted you won't have to have empty CCK fields.
* Probably more that I can't think of right now.
Using taxonomy you can pretty easily make a search with Views, where you make it possible for users to select genres using a multiple select list. You can decide if you require all terms or only one of them. Simply put, you should really use taxonomy; it should solve all of your problems. If not, you should still use it and try to solve the problems you run into using taxonomy instead of CCK fields.
A: Jergason has a good point saying that taxonomy would probably be a good fit for your fields. However this wouldn't solve your problem of weighted genres.
A possible (though hacky) solution would be to have a fourth field which combined the values of the other three which is only set when a node is saved. This field could then be used for searching.
The non hacky solution is to write your own views filter but this is very advanced.
There may be a way to do this with Views out of the box since it is flexible; hopefully someone else knows of an easier non-hacky solution. | unknown | |
d12440 | train | You can exclude these Bluetooth serial ports from the detection process.
Check the PNPDeviceID property of each serial port that you detect.
On Windows, the PNPDeviceID property of Bluetooth serial ports will contain the string BTHENUM, which you can use to exclude them from detection process.
System.Management.ManagementObjectSearcher searcher = new System.Management.ManagementObjectSearcher("SELECT * FROM Win32_SerialPort");
foreach (System.Management.ManagementObject port in searcher.Get())
{
string deviceId = port["PNPDeviceID"].ToString();
if (!deviceId.Contains("BTHENUM"))
{
string portName = port["DeviceID"].ToString();
Console.WriteLine(portName);
// add the port name to your list of serial ports to test
}
}
This will exclude any Bluetooth serial ports from the list of ports to test, and only return the names of the remaining serial ports. | unknown | |
d12441 | train | Just force an exit when the Python call fails:
python xxx.py || exit 1
You could use break instead of exit to just leave the loop. More advanced error handling can be achieved by evaluating $?; here's an example how to store the return value and reuse it:
python xxx.py
result=$?
if [ ${result} -ne 0 ]; then
# Error
echo "xxx.py exited with ${result}"
...
else
# Success
...
fi
As a general rule, not only regarding Bash scripting but programming in general, always check return codes of commands for errors and handle them. It makes debugging and your working life in general easier in the long run. | unknown | |
d12442 | train | Set log_bin_trust_function_creators = 1 for Parameter group of the RDS instance.
Note: Default Parameter-Group is not editable. Create a new Parameter-Group and assign it to the RDS instance by modifying it from UI (AWS Admin Console) OR maybe using commands
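If you prefer to do this from the AWS CLI instead of the console, a sketch (the parameter group name, engine family and instance identifier are placeholders):
aws rds create-db-parameter-group --db-parameter-group-name my-params --db-parameter-group-family mysql5.7 --description "custom params"
aws rds modify-db-parameter-group --db-parameter-group-name my-params --parameters "ParameterName=log_bin_trust_function_creators,ParameterValue=1,ApplyMethod=immediate"
aws rds modify-db-instance --db-instance-identifier my-db-instance --db-parameter-group-name my-params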
DELIMITER $$
CREATE DEFINER=`DB_USERNAME_HERE`@`%` FUNCTION `GetDistance`(coordinate1 VARCHAR(120), coordinate2 VARCHAR(120)) RETURNS decimal(12,8)
READS SQL DATA
BEGIN
DECLARE distance DECIMAL(12,8);
/*Business logic goes here for the function*/
RETURN distance;
END $$
DELIMITER ;
Here, you have to replace DB_USERNAME_HERE with you RDS database username and function names according to you need.
Important thing is: DEFINER=`DB_USERNAME_HERE`@`%`
This was the problem I was facing after setting log_bin_trust_function_creators = 1 in parameter group. And it worked like a charm.
A: A better way is to apply your own parameter group, with log_bin_trust_function_creators set to true. (its false by default)
A: This happens when you try to create a procedure/function/view with a DEFINER that is not the current user.
To solve this remove or update the DEFINER clause. | unknown | |
d12443 | train | The cause
You have yourself found the cause for the change of position and dimension of the annotation:
It seems that part of the problem may be in the call to pdfStamper.AddAnnotation(annot,1), because annot's values for the /Rect key change after this call is made.
... code from PdfStamper.AddAnnotation() (link to source), lines 1463-1493, is responsible
Indeed, that code changes the annotation rectangle if the page is rotated.
The rationale behind this is that for rotated pages iText attempts to lift the burden of adding the rotation and translation to page content required to draw upright text and have the coordinate system origin in the lower left of the page of the users' shoulders, so that the users don't have to deal with page rotation at all. Consequently, it also does so for annotations.
For page content there is a PdfStamper property RotateContents defaulting to true which allows to switch this off if one explicitly does not want this rotation and translation. Unfortunately there isn't a similar property for annotations, their positions and sizes always get "corrected", i.e. rotated and translated.
A work-around
As the page rotation is the trigger for iText to rotate and translate the rectangle, one can simply remove the page rotation before manipulating the annotations and add it again later:
PdfDictionary pageDict = pdfReader.GetPageN(1);
// hide the page rotation
PdfNumber rotation = pageDict.GetAsNumber(PdfName.ROTATE);
pageDict.Remove(PdfName.ROTATE);
//get annotation array
PdfArray annotArray = pageDict.GetAsArray(PdfName.ANNOTS);
//iterate through annotation array
int size = annotArray.Size;
for (int i = size - 1; i >= 0; i--)
{
...
}
// add page rotation again if required
if (rotation != null)
pageDict.Put(PdfName.ROTATE, rotation);
pdfStamper.Close();
With this addition the annotations stay in place.
The missing Properties
You also observed:
There is no Properties available when the new Textbox is right-clicked
This is because you have not changed the intent (IT) entry, so they still contained FreeTextTypewriter, so Adobe Reader is not sure what kind of object that is and, therefore, offers no Properties dialog. If you also change the intent:
//change annotation type to Textbox
annot.Put(PdfName.SUBTYPE, PdfName.FREETEXT);
annot.Put(new PdfName("IT"), PdfName.FREETEXT); // <======
annot.Put(new PdfName("Subj"), new PdfString("Textbox"));
you'll get the Properties dialog.
As an aside
Your method getFloat first caused weirdest changes in coordinate systems for me because my locale does not use dots as decimal separator.
I changed it to this to make it locale-independent:
private float getFloat(PdfArray arr, int index)
{
return arr.GetAsNumber(index).FloatValue;
}
An alternative approach
Is there a specific reason why you replace the original annotation instead of simply editing it? E.g.:
public void AlternativeReplaceFreetextByTextbox(string InputPath, string OutputPath)
{
PdfName IT = new PdfName("IT");
PdfName FREETEXTTYPEWRITER = new PdfName("FreeTextTypewriter");
using (PdfReader Reader = new PdfReader(InputPath))
{
PdfDictionary Page = Reader.GetPageN(1);
PdfArray Annotations = Page.GetAsArray(PdfName.ANNOTS);
foreach (PdfObject Object in Annotations)
{
PdfDictionary Annotation = (PdfDictionary)PdfReader.GetPdfObject(Object);
PdfName Intent = Annotation.GetAsName(IT);
if (FREETEXTTYPEWRITER.Equals(Intent))
{
// change annotation type to Textbox
Annotation.Put(IT, PdfName.FREETEXT);
}
}
using (PdfStamper Stamper = new PdfStamper(Reader, new FileStream(OutputPath, FileMode.Create)))
{ }
}
} | unknown | |
d12444 | train | From COPY INTO docs:
If SINGLE = TRUE, then COPY ignores the FILE_EXTENSION file format option and outputs a file simply named data. To specify a file extension, provide a filename and extension in the internal or external location path
COPY INTO @mystage/data.csv ...
and:
COMPRESSION = AUTO | GZIP | BZ2 | BROTLI | ZSTD | DEFLATE | RAW_DEFLATE | NONE
String (constant) that specifies to compresses the unloaded data files using the specified compression algorithm.
Supported Values
AUTO
Unloaded files are automatically compressed using the default, which is gzip.
**Default: AUTO**
Either do not compress the file (compression method NONE) or provide a proper file name with a gz extension:
COPY INTO 'azure://my_account.blob.core.windows.net/test-folder/test_file_8.csv.gz' | unknown | |
d12445 | train | When this type of problem happens, the best way is to make sure you have followed the points below correctly:
* You have the proper SDK installed.
* You have Intel HAXM and the virtualization option enabled in your BIOS.
* Configure the emulator correctly; download the Intel x86 Atom system image for better performance.
* Disable and then enable ADB Integration.
* Use arm instead of x86 if you have an AMD processor when creating the emulator (in most cases). | unknown | |
d12446 | train | for(int i = 0; i < othercontent.length ;i++ )
{
if(i == 0 || othercontent[i] != othercontent[i - 1])
{
storage = storage + othercontent[i];
}
}
A: if othercontent is String array :
TreeSet<String> set = new TreeSet<>(Arrays.asList(othercontent));
othercontent = set.toArray(new String[0]);
for (String string : othercontent) {
System.out.println(string);
}
if othercontent is String :
String othercontent = "ZZZZQQWEDDODRAABBNNNNO";
LinkedList<Character> list = new LinkedList<>();
for (Character character : othercontent.toCharArray()) {
list.add(character);
}
TreeSet<Character> set = new TreeSet<>(list);
StringBuilder builder = new StringBuilder();
for (Character character : set) {
builder.append(character);
}
System.out.println(builder.toString());
Not only sorting, but also removing duplicates is handled by this code.
OUTPUT :
ABDENOQRWZ
A: You can check if you reached the last element:
for(int i = 0;i < othercontent.length -1; i++ ) {
if(othercontent[i] != othercontent[i + 1]) {
storage = storage + othercontent[i];
}
//only gets executed if the last iteration is reached
if(i==othercontent.length-2) {
storage = storage + othercontent[i+1];
}
}
Or, instead of using a condition, just write this after your loop:
storage = storage + othercontent[othercontent.length-1];
A: You can add the last digit to your string outside of for loop as it doesn't require to check any condition
for(int i = 0;i < othercontent.length -1; i++ ) {
if(othercontent[i] != othercontent[i + 1]) {
storage = storage + othercontent[i];
}
}
storage = storage + othercontent[othercontent.length - 1];
A: for(int i = 0; i < othercontent.length -1 ; ++i ) {
if(othercontent[i] != othercontent[i + 1]) {
storage = storage + othercontent[i];
}
}
if(othercontent.length>0){
storage = storage + othercontent[othercontent.length-1];
}
A: If you are checking for the duplicates, you should do something like this outside the loop.
if(othercontent.length>0 && storage[storage.length-1] != othercontent[othercontent.length-1])
{
storage = storage+othercontent[othercontent.length-1];
} | unknown | |
d12447 | train | You need to override the isEmpty method on the respective sub classes, and then it's clear to which collection they refer:
class MyList extends Container {
public MyList() {
list = new Object[ORIGINAL_SIZE];
size = 0;
}
@Override
public boolean isEmpty() {
return size == 0;
}
}
Here you need to maintain the size variable each time you modify (add to or remove from) the list.
(Same for the set implementation, just using set instead of list.)
The whole thing is a bit of a weird class design. The Container base class should not have list and set, these should be in the respective child classes MyList and MySet, so that everything is defined where it's used (only).
A: You don't need to check either container. Override the isEmpty() method in both classes to check the size field. As elements are added or removed to either array, increment or decrement the size accordingly. | unknown | |
d12448 | train | In Bootstrap 4, you need to add the classes table table-striped to your table.
Here's a working example:
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0/css/bootstrap.min.css" integrity="sha384-Gn5384xqQ1aoWXA+058RXPxPg6fy4IWvTNh0E263XmFcJlSAwiGgFAW/dAiS6JXm" crossorigin="anonymous">
<table class="table table-striped">
<thead>
<tr>
<th scope="col">#</th>
<th scope="col">First</th>
<th scope="col">Last</th>
<th scope="col">Handle</th>
</tr>
</thead>
<tbody>
<tr>
<th scope="row">1</th>
<td>Mark</td>
<td>Apple</td>
<td>@youtube</td>
</tr>
<tr>
<th scope="row">2</th>
<td>Jacob</td>
<td>Orange</td>
<td>@facebook</td>
</tr>
<tr>
<th scope="row">3</th>
<td>Larry</td>
<td>Banana</td>
<td>@twitter</td>
</tr>
</tbody>
</table> | unknown | |
d12449 | train | The reason your query doesn't work is because each row has only one category. Instead, you need to do aggregation. I prefer doing the conditions in the having clause, because it is a general approach.
SELECT Name
FROM categorytable
group by Name
having sum(case when category ='Action' then 1 else 0 end) > 0 and
sum(case when category ='Sci-Fi' then 1 else 0 end) > 0;
Each clause in the having is testing for the presence of one category. If, for instance, you changed the question to be "Action films that are not Sci-Fi", then you would change the having clause by making the second condition equal to 0:
having sum(case when category ='Action' then 1 else 0 end) > 0 and
sum(case when category ='Sci-Fi' then 1 else 0 end) = 0;
A: You can use the OR clause, or if you have multiple categories it will probably be easier to use IN
So either
SELECT Name FROM categorytable WHERE category ='Action' OR category ='Sci-Fi'
Or using IN
SELECT Name
FROM categorytable
WHERE category IN ('Action', 'Sci-Fi', 'SomeOtherCategory ')
Using IN should compile to the same thing, but it's easier to read if you start adding more then just two categories.
A: with mustAppear(category) as (
select 'Action'
union all
select 'Sci-Fi'
)
select Name -- select those film names for which
from categorytable as C1
where not exists ( -- there is no category that must appear with that name
select M.category
from mustAppear as M
where not exists ( -- that does not.
select *
from categorytable as C2
where C2.Name = C1.Name
and C2.category = M.category
)
)
You can also be more direct this way:
select Name
from categorytable
where category = 'Sci-Fi'
intersect
select Name
from categorytable
where category = 'Action';
The advantage of the former query is that you can use it without modification if you create a permanent table (mustAppear, or you can use a table variable @mustAppear) to hold the category list that must match. The latter query is simpler, but it must be rewritten when the categories change. | unknown | |
d12450 | train | Replace
cards_file.write(str(cards))
With
for k,v in cards.items():
cards_file.write(f'{k} : {v}\n') | unknown | |
d12451 | train | In Print View, some styles for some elements (if not given in CSS) become the defaults. Because of that, all your form elements (input and textarea) in Print View get a white background covering the borders of the table. Solution: set the background of inputs and textareas to none.
input, textarea {
background: none;
}
And done ;) | unknown | |
d12452 | train | If you want the most recent version, then you can filter using a where clause:
select r.*
from reviews r
where r.id = (select max(r2.id) from reviews r2 where r2.url_id = r.url_id);
You can join in the url itself, if that is necessary.
A: SELECT r.*
FROM reviews r
WHERE r.grade = ( SELECT Max(r2.grade)
FROM reviews r2
WHERE r2.url_id = r.url_id )
ORDER BY r.url_id | unknown | |
d12453 | train | If links is a list, then you can fetch the last 8 links using links[-8:].
If x contains a list of the numbers 0 through 9, then x[-8:] will return the last 8 items in the list:
x = [i for i in range (0, 10)]
print x[-8:]
# [2, 3, 4, 5, 6, 7, 8, 9]
Also known as list slicing. | unknown | |
d12454 | train | Looks like your rewrite rule is wrong. Did you forget to add ".+" after your url attribute?
<rule name="SocketIO" patternSyntax="ECMAScript">
<match url="socket.io.+"/>
<action type="Rewrite" url="server.js"/>
</rule> | unknown | |
d12455 | train | myform.Controls will give you a collection that contains all controls within the container (not only labels). So you have to check the type of each control while iterating the collection in order to avoid throwing an exception. In the additional comment you specified that the label has the default text "Label", so you need to include this in the condition as well. The whole scenario can be implemented like the following:
foreach (Control item in myform.Controls)
{
if (item is Label)
{
var lbl = (Label)item;
bool labelIsEmpty = false;
try
{
lbl = (Label)item;
labelIsEmpty = (lbl != null && (lbl.Text == string.Empty || lbl.Text == "Label"));
}
catch
{
//Throw error message
}
if (labelIsEmpty)
{
lbl.Text = "Not Specified";
}
}
}
Note: You need to reorder the conditions to avoid an exception. The check for string.Empty should come after the check for whether the control is null, because AND will not evaluate the second condition if the first one is false; lbl.Text would throw a NullReferenceException if lbl were null.
A: foreach (Control item in myform.Controls)
{
if (item is Label)
{
var lbl = (Label)item;
bool labelIsEmpty = false;
try
{
lbl = (Label)item;
labelIsEmpty = (lbl != null && (lbl.Text == string.Empty || lbl.Text == "Label"));
}
catch
{
//Throw error message
}
if (labelIsEmpty)
{
lbl.Text = "Not Specified";
}
}
} | unknown | |
d12456 | train | For security reasons, JavaScript running in the browser should not be used to access the filesystem directly. You can definitely access it using Node's fs module, but that's on the server side.
Another way: if you let the user pick files using the <input type="file"> tag, then you can use the File API to fetch the contents. But I think that is not what you are looking for.
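For completeness, a small sketch of that File API route (assuming a hypothetical <input type="file" id="picker"> element on the page):
document.getElementById('picker').addEventListener('change', (event) => {
  const file = event.target.files[0];
  const reader = new FileReader();
  reader.onload = () => console.log(reader.result); // the file contents as text
  reader.readAsText(file);
});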
Recommended reading: https://en.wikipedia.org/wiki/JavaScript#Security | unknown | |
d12457 | train | Add this line in your application tag
<?xml version="1.0" encoding="utf-8"?>
<manifest ...>
<uses-permission android:name="android.permission.INTERNET" />
<application
...
android:usesCleartextTraffic="true"
...>
...
</application>
</manifest> | unknown | |
d12458 | train | The dplyr package provides join functions that can help.
For every row of the production.data, you can bring in the corresponding features for each C1_target, C1_actual to create a large tibble:
library(dplyr)
x <- production.data %>%
inner_join(distinctive.feature.matrix, by = c("C1_target"="Symbol")) %>%
inner_join(distinctive.feature.matrix, by = c("C1_actual"="Symbol"))
Note that there are two calls to inner_join: one to get features corresponding to C1_target, the second for features for C1_actual
The new tibble has column names such as Sonorant.x and Sonorant.y, the first corresponding to C1_target and the second corresponding to C1_actual.
You can create a list of feature names by taking the column names, excluding Symbol:
features <- colnames(distinctive.feature.matrix)[-1]
Now you can do your 'for-loop' to calculate the difference between the x and y values, and then combine the lot into a new data frame:
diffs <- do.call(cbind, sapply(features, function(f) x[paste0(f, '.x')] - x[paste0(f, '.y')]))
colnames(diffs) <- features
The paste0 function concatenates each feature name with .x or .y; the sapply is the equivalent of the for-loop; and the cbind bashes the result of each computation into a new table.
You will end up with a data frame whose column names are the features, and as many rows as your production.data had.
To be fair, the above code is an ugly confection of dplyr-like syntax and base R... | unknown | |
d12459 | train | Explanation
This isn't React's fault. This is defined in the HTML5 Specification to behave this way. Per the link:
4.4.7 The li element
[...]
Content attributes:
Global attributes
If the element is a child of an ol element: value - Ordinal value of the list item
Where value is defined as such:
The value attribute, if present, must be a valid integer giving the ordinal value of the list item.
And a "valid integer" is defined as such:
A string is a valid integer if it consists of one or more ASCII digits, optionally prefixed with a "-" (U+002D) character
[...]
The rules for parsing integers are as given in the following algorithm. When invoked, the steps must be followed in the order given, aborting at the first step that returns a value. This algorithm will return either an integer or an error.
1. Let input be the string being parsed.
2. Let position be a pointer into input, initially pointing at the start of the string.
3. Let sign have the value "positive".
4. Skip whitespace.
5. If position is past the end of input, return an error.
6. If the character indicated by position (the first character) is a "-" (U+002D) character:
   a. Let sign be "negative".
   b. Advance position to the next character.
   c. If position is past the end of input, return an error.
7. Otherwise, if the character indicated by position (the first character) is a "+" (U+002B) character:
   a. Advance position to the next character. (The "+" is ignored, but it is not conforming.)
   b. If position is past the end of input, return an error.
8. If the character indicated by position is not an ASCII digit, then return an error.
9. Collect a sequence of characters that are ASCII digits, and interpret the resulting sequence as a base-ten integer. Let value be that integer.
10. If sign is "positive", return value, otherwise return the result of subtracting value from zero.
In steps 8 and 9, it describes the behavior you see. The following examples:
*
*"3f" returns 3
*"H5" returns 0
*"35" returns 35
The first returns 3 because of step 9. It collects all the ASCII digits present if the first character is an integer, which is just 3, and interprets as an integer. In the second example, it returns 0 because of this:
If the value attribute is present, user agents must parse it as an integer, in order to determine the attribute's value. If the attribute's value cannot be converted to a number, the attribute must be treated as if it was absent. The attribute has no default value.
Parsing fails on H5 because the first character is not +, -, or an ASCII digit. Since the attribute is treated as absent because it's invalid, it's just 0, because value still needs to be a valid integer. If you pass an invalid integer that can't be parsed, the result of accessing the attribute value is just 0, which is per the HTML Living Standard, in the applicable paragraph:
If a reflecting IDL attribute has a signed integer type (long) then, on getting, the content attribute must be parsed according to the rules for parsing signed integers, and if that is successful, and the value is in the range of the IDL attribute's type, the resulting value must be returned. If, on the other hand, it fails or returns an out of range value, or if the attribute is absent, then the default value must be returned instead, or 0 if there is no default value. On setting, the given value must be converted to the shortest possible string representing the number as a valid integer and then that string must be used as the new content attribute value.
Here, the value attribute is not defined to have a default value, and in this case H5 isn't a valid integer so parsing fails and 0 is returned by specification. The last example returns 35 because it's a completely valid integer for value.
Solution
So instead, you can use Element.getAttribute. Per the link:
getAttribute() returns the value of a specified attribute on the element
No conversion happens in the method. It just gets the value, as it does not need to convert it to an integer to figure out ordering the way HTML does to determine where to place the lis. The HTML Living Standard outlines the internal workings of this method. It just accesses a NamedNodeMap containing attributes and does not do any conversion. Thus:
console.log(document.getElementById("test").getAttribute("value"));
<ul>
<li value="Foobar" id="test">Test</li>
</ul>
This can be applied to your situation by doing this:
keyClicked.target.getAttribute("value");
A: It appears that value when used with an li must be a number. I'd try this instead:
<li data-value="123abc">123abc</li>
keyHandler.target.getAttribute('data-value');
A: The HTMLLIElement.value property gets or sets the ordinal of the list element, and is always a number. If you want to get the string value, you have to read the value attribute instead.
keyHandler.target.getAttribute('value') | unknown | |
d12460 | train | Okay, so race conditions in asynchronous circuits happen when inputs change at different times for a gate. Let's say your logical function looks like this
λ = ab + ~b~a
the most straightforward way to implement this function with gates looks like
NOTE: I assume your basic building blocks are AND, OR, and NOT. Obviously in CMOS circuits, NAND, NOR and NOT are how you build circuits, but the general principle stays the same. I also assume AND, OR, and NOT have the same delay, when in reality NAND and NOR have different delays depending on whether the output goes from 0 to 1 or 1 to 0, and NOT is about 20% faster than NAND or NOR.
a ->| AND |-------->| OR | -> λ
b ->| 1 | | |
| |
a ->| NOT |->|AND|->| |
b ->| NOT |->| 2 | | |
Now, assume that AND and NOT both have a delay of 2 ns. That means the OR gate sees the value at its first input change 2 ns before it sees the value at its second input change.
Which means if a and b both go from 1 to 0, you would expect λ to stay the same, since the output of the first AND gate goes from 1 to 0 while the output of the second AND gate goes from 0 to 1, meaning the OR condition stays true.
However, because the output from the second AND gate arrives a little bit after the first AND gate's, your OR gate will momentarily see 0,0 at its inputs while transitioning from 1,0 to 0,1. Which means λ will have a momentary dip and it'll look like
__
a |___________
__
b |___________
____
AND1 |_________
_______
AND2 ______|
______ _____
λ |_|
If you look at the inputs of the OR gate right between when AND1 goes down and AND2 goes up, it propagates a 0 through the OR gate, and sure enough, there's a dip in the output 2ns later.
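A toy TypeScript timeline of the same scenario (not from the original answer; every gate, including the OR, is modelled with the same 2 ns delay) makes the dip visible:
const DELAY = 2; // ns per gate
const step = (changeAt: number, before: number, after: number) =>
  (t: number) => (t < changeAt ? before : after);
const a = step(0, 1, 0), b = step(0, 1, 0);          // a and b both fall from 1 to 0 at t = 0
const notA = (t: number) => 1 - a(t - DELAY);
const notB = (t: number) => 1 - b(t - DELAY);
const and1 = (t: number) => a(t - DELAY) & b(t - DELAY);
const and2 = (t: number) => notA(t - DELAY) & notB(t - DELAY);
const lambda = (t: number) => and1(t - DELAY) | and2(t - DELAY);
for (let t = 0; t <= 8; t++) console.log(t, lambda(t));
// prints 1 up to t = 3, drops to 0 at t = 4 and 5, then recovers to 1 from t = 6 onwards: that dip is the glitch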
That's a general overview for how race conditions arise. Hope that helps you understand your question. | unknown | |
d12461 | train | It sounds like you have seamless checkout with Stripe set as your store's Checkout option - which moves all of the shipping and discount calculations to the separate Checkout page.
Unfortunately the design editor hasn't been updated to reflect these new changes with the seamless checkout, so those variables will still return "true" (as if you're using PayPal's checkout) when previewing your store. | unknown | |
d12462 | train | var parsed = moment(myStringDate, 'DD.MM.YYYY');
for Version >= 1.7.0 use:
parsed.isValid()
for Version < 1.7.0 create your own isValid() function:
function isValid(parsed) {
return (parsed.format() != 'Invalid date');
}
checkout the docs:
http://momentjs.com/docs/#/parsing/is-valid/
A: You can try parsing multiple formats. Updated fiddle: http://jsfiddle.net/timrwood/cHRfg/3/
var formats = ['DDMMYYYY', 'DDMMYY'];
var date1 = moment('30082012', formats);
var date4 = moment('300812', formats);
Here are the relevant docs. http://momentjs.com/docs/#/parsing/string-formats/
There is development on adding moment.fn.isValid which will allow you to do validation like in examples 5-8. It will be added in the 1.7.0 release. https://github.com/timrwood/moment/pull/306 | unknown | |
d12463 | train | Your query can be converted to one query:
select rownum as "row" from globalTable where valid='T' and globalId = 'g123' | unknown |
d12464 | train | To selfhost an application, I have:
*
*deployed to the file-system (to c-drive of my dev machine)
*changed the entry Localhost:5000 (default) in Hosting.ini (C:\[app-name]\approot\src\[app-name]\) to the IP address of my machine
*opened the port 5000 in the windows-firewall of my machine
then, I was able to load the application from another machine over the LAN.
Note you have to start the web.cmd as admin
After doing that, I have done the same on our intranet-server (copied the whole deployed directory to the c:-drive of the intranet-server, changed the IP in hosting.ini to the server-ip and opened the port 5000 on the Windows Firewall on the intranet-server).
Then I have started web.cmd as admin on the intranet-server and the app is reachable in the LAN (Server-IP:5000 or DNS-name:5000).
Note: I have just updated to RC1 and there seems to be various changes to the project-template (therefore, I have re-created my small example-project from scratch).
Especially the Hosting.ini file (where I had changed the Localhost entry to the machine IP) is no longer generated / supported.
I haven't found any description yet of how exactly the IP has to be changed instead (according to MS, now in the hosting.json file), but found out that it's possible to change it in the project.json file.
I have changed:
"commands": {
"web": "Microsoft.AspNet.Server.Kestrel",
"ef": "EntityFramework.Commands"
},
to
"commands": {
"web": "Microsoft.AspNet.Server.Kestrel --server.urls=http://172.16.1.7:5000",
"ef": "EntityFramework.Commands"
},
Now, it works again.
But attention!
I'm sure this should be done another way (according to MS, in the hosting.json file), but - as I wrote - I haven't found any description of the corresponding entry in the hosting.json file; I only found a hint on the Internet, for an older version, of how to do it in the project.json file.
So only take this as a temporary workaround until we know how to do it correctly!
And do that only in the deployed directory (not in the VS project), as otherwise the local dev setup won't run anymore!
A: The problem could be solved by extracting the ZIP file with 7-zip instead of the Windows Explorer, see also: .net local assembly load failed with CAS policy
Thanks to @Kiran Challa for the link.
A: As an alternative to using 7-zip as suggested by @martin klinke, you can use the streams -s -d *.* command from Sysinternals / TechNet.
This will unblock files marked as being 'unsafe' (see the link provided by Martin).
This will unblock files recursively (run it in the root of what was your zip). | unknown | |
d12465 | train | In fact, you do not need to modify the validator settings or first create a complex password and then later change it. Instead you can create a simple password directly bypassing all password validators.
Open the django shell
python manage.py shell
Type:
from django.contrib.auth.models import User
Hit enter and then type (e.g. to use a password consisting only of the letter 'a'):
User.objects.create_superuser('someusername', '[email protected]', 'a')
Hit enter again and you're done.
A: After creating the superuser with a complex password, you can set it to something easier in the shell (./manage.py shell):
from django.contrib.auth.models import User
user = User.objects.get(username='your_user')
user.set_password('simple')
user.save()
A: You can change the AUTH_PASSWORD_VALIDATORS setting in in your dev environment. See the docs: https://docs.djangoproject.com/en/stable/topics/auth/passwords/#s-enabling-password-validation.
It is pretty straightforward: you will recognize the validators that caused your warning messages.
A: mimo's answer is correct but doesn't work if you aren't using the default User model.
Following mimo's answer and this article, I changed the script to this one:
from django.contrib.auth import get_user_model
User = get_user_model()
user = User.objects.get(email='[email protected]')
# or user = User.objects.get(username='your_user')
user.set_password('simple')
user.save() | unknown | |
d12466 | train | The following will return true if there is one or more processes running that have the supplied name.
public bool IsProcessRunning(string processName)
{
return Process.GetProcessesByName(processName).Length != 0;
}
A: If you are attempting to identify processes/applications that are explicit to .NET, you should look for a dependency/module within the process that is specific to the .NET framework.
Below I am using mscorlib, as it's the first that comes to mind, as my hint to identify that the process is dependent on the .NET framework. e.g.
var processes = Process.GetProcesses();
foreach (var process in processes)
{
try
{
foreach (var module in process.Modules)
{
if (module.ToString().Contains("mscorlib"))
{
Console.WriteLine(process);
break;
}
}
}
catch { /* Access violations */ }
}
It's not bullet proof, as some processes cannot have their modules enumerated due to access restrictions, but if you run it, you'll see that it will pull back .NET dependent processes. Perhaps it will give you a good starting point to get you thinking in the right direction.
A: Check out CorPublishLibrary - a library that lets you interrogate all managed-code processes running on a machine. | unknown | |
d12467 | train | That's the error coming out of the tool bc you had used for arithmetic evaluation. I suspect the variable assignment, $var leading to echo "obase=16; $var" | bc has a malformed/empty value which bc did not like.
If you are using the bash shell, you could very well use its own arithmetic evaluation via the $((..)) construct, as in:
for (( counter=0; counter<10; counter++ )); do
if (( counter % 2 == 0 )); then
var=$(xxd -p -l1 -s "$counter" merge.bmp)
printf -v hex_var '%x' "$var"
fi
done
The %x applies the necessary decimal-to-hex conversion. Moreover, you were doing the same action in both the if and else clauses. Also remove the outdated back-tick syntax for command substitution; rather use $(..). | unknown |
d12468 | train | There's no indication at http://line.me that LINE supports Android Wear. So if LINE isn't running on the watch, you'll need to build an interface with the phone, and connecct to the LINE API there.
Get started with the documentation at https://developer.android.com/training/wearables/data-layer/index.html | unknown | |
d12469 | train | Thanks for answering my question.
I have figured it out. The mistake was in the URL used to connect to the DB, as you mentioned. What I found is that the URL should look like this:
Connection con = DriverManager.getConnection("jdbc:oracle:thin:@"+hostname+":1521:orcl", "username", "password");
ORCL is the default DB name for AWS (Amazon cloud).
Thanks
A: Based on your error message ORA-12505, TNS:listener does not currently know of SID given in connect descriptor, I think you have an error in your connection string. Specifically, you are likely using the wrong SID.
oracleinstance.ctnrs4kazdmm.ap-south-1.rds.amazonaws.com:1521:XE
Try the following connection string, oracleinstance.ctnrs4kazdmm.ap-south-1.rds.amazonaws.com:1521:orcl. Normally, when you create an Oracle instance in RDS the default SID is orcl.
You can also use the AWS CLI to determine the SID.
$ aws rds describe-db-instances
{
"DBInstances": [
...
{
"DBInstanceStatus": "available",
"DBInstanceIdentifier": "MYORACLEDB",
"MasterUsername": "USERWITHSYSDBA",
"EngineVersion": "11.2.0.4.v1",
...
"Endpoint": {
"Port": 1521,
"Address": "MYORACLEDB.blahblahblach.us-west-2.rds.amazonaws.c om"
},
"PendingModifiedValues": {},
...
"DBName": "ORCL",
...
Where the DBName is the SID (in this case ORCL). | unknown | |
d12470 | train | There was not much info about this online, so I asked this question in the Intel community, and here is the response:
Generally a .wic image is intended to be installed directly to its final destination, whereas an hddimg is for evaluation and installation elsewhere.
By default meta-intel .wic images only have an EFI bootloader, and will not boot via legacy BIOS.
An hddimg will have both an EFI bootloader and the syslinux binaries that let it boot from legacy BIOS.
On startup with your installer USB image do you get a light gray screen with four options? If so it is booting via legacy BIOS. | unknown | |
d12471 | train | Yes, it does. I'm on Python 3.8.5 and using tensorflow==2.3.0.
Usually when a version is given as "3.5-3.8", it includes the patch versions as well. (Sometimes issues can pop up, but it's intended to include all patch versions of the 'from' & 'to', inclusive.)
A: I would think it works in 3.8.5, but it would be safer to use 3.8.0. | unknown | |
d12472 | train | Two problems. One, the call strlen is allowed to clobber some registers which include rdi and you need that later. So, surround the call strlen with push rdi and pop rdi to save and restore it.
Second, you do not initialize rdi in the print_newline block. You need to set it to 1 for stdout just like you did in the print_string block.
PS: You should learn to use a debugger. | unknown | |
d12473 | train | You just need a plain left join here:
SELECT
CASE WHEN t2.ID IS NOT NULL THEN t1.NAMES END AS ABC
FROM NAME t1
LEFT JOIN XYZ t2
ON t1.ID = t2.ID;
Demo
Note that a CASE expressions else condition, if not explicitly specified, defaults to NULL. This behavior works here because you want to render NULL if a given record in the NAME table does not match to any record in the XYZ table.
A: The problem isn't your logic. It is simply how the code is optimized. The subquery is probably being run for each row in the outer query.
I would recommend switching to exists:
(case when exists (select 1 from xyz where xyz.id = name.id) then name.names
end) as abc
This maintains the semantics of your original query. In particular, there is no danger that duplicates in xyz would return multiple rows (as would happen with a left join).
For performance for this -- or for the left join -- you want an index on xyz(id). | unknown | |
d12474 | train | From what I can see in your table, the condition is never met. Also, it might become a performance issue if the table grows very large, and it should be a self-join. | unknown |
d12475 | train | Define the secret_key like
app = Flask(__name__)
app.secret_key = settings.SECRET_KEY
or
app.config['SECRET_KEY'] = settings.SECRET_KEY
Look for details here https://flask.palletsprojects.com/en/1.1.x/config
You should load environment for flask app manually as specified here https://flask.palletsprojects.com/en/1.1.x/api/#configuration | unknown | |
d12476 | train | There's a lot going on there, I've simplified the HTML and CSS:
CSS:
.leftCol {
float: left;
width: 50%;
background-color: #ccc;
height: 60px;
}
.rightColContainer {
float: left;
width: 50%;
}
.rightCol1 {
background-color: #333;
height: 30px;
}
.rightCol2 {
background-color: #777;
height: 30px;
}
HTML:
<body>
<div class="leftCol">columncontent1</div>
<div class="rightColContainer">
<div class="rightCol1">columncontent2</div>
<div class="rightCol2">columncontent3</div>
</div>
</body>
You only need to 'contain' the right hand column to stop the 'stacked column' flowing incorrectly.
A: CSS3 actually allows you to make several columns automatically without having to have all those classes. Check out this generator: http://www.generatecss.com/css3/multi-column/
This is however only if you are trying to make your content have multiple columns like a newspaper. | unknown | |
d12477 | train | We can use tryCatch. Here is a complete example based on @Fan Li's answer here
library(RSQLite)
con <- dbConnect(SQLite(), dbname="sample.sqlite")
dbWriteTable(con, "test", data.frame(value1 = letters[1:4], value2 = letters[5:8]))
dbDisconnect(con)
library(shiny)
library(RSQLite)
runApp(list(
ui = bootstrapPage(
#select * from te fail
#select * from test work
textAreaInput("query",'Query'),
actionButton("action", label = "Run Query"),
hr(),
tableOutput("table")
),
server = function(input, output){
#Reactive is eager by definition and it will signal unreal/annoying errors, hence I used eventReactive
data <- eventReactive(input$action,{
tryCatch({
con <- dbConnect(SQLite(), dbname="sample.sqlite")
data<-dbGetQuery(con, input$query)
dbDisconnect(con)
return(data)
},
error = function(e){
showModal(
modalDialog(
title = "Error Occurred",
tags$i("Please enter valid query and try again"),br(),br(),
tags$b("Error:"),br(),
tags$code(e$message)
)
)
})
})
output$table <- renderTable(data())
})) | unknown | |
d12478 | train | Take the latest version of android-support-v4.jar (in your SDK environment: sdk/extras/android/support/v4/android-support-v4.jar) and replace it in both your project and the library project so they do not create a conflict.
A: The steps to importing a library are:
*
*Download the library
*Place the library in the libs folder of the project.
*Build the project
*Do Not attempt to import the library using some import wizard.
I suspect either your download was corrupted, and you need to do it again, or you put the file into the wrong directory.
A: I think your problem is that the android.support.v4 versions are different, take the one from your project and replace the one in the Facebook sdk lib folder, it should dismiss the clash.
A: As per your question, I think you are not able to compile once you add the SDK to your project. Do you get an error saying unable to run as library?
If so, that means you have included this as a library and marked your project as a library to be used elsewhere. It is a common mistake I have seen many people make when they try to import a library: they tick the "Is Library" checkbox. This actually means you want this project to be treated as a library for future use. Just add the SDK and do not tick that checkbox, and the project will run fine.
Many forums will just tell you to tick that checkbox and you will be stuck on this error for long time.
For any .jar-file-related issues you have to make the .jar files compatible, as there is a hashing problem and the two jars are not compatible with each other (a version issue). | unknown |
d12479 | train | isspace is also a template in C++ which accepts a templated character and also a locale with which it uses the facet std::ctype<T> to classify the given character (so it can't make up its mind what version to take, and as such ignores the template).
Try specifying that you mean the C compatibility version: static_cast<int(*)(int)>(isspace). The differences between the compilers could come from the inconsistent handling of deduction from an overloaded function name among the compilers - see this clang PR. See the second case in Faisal's first set of testcases for an analogous case.
Someone pointed out on IRC that this code would call isspace using a char - but isspace takes int and requires the value given to be in the range of unsigned char values or EOF. Now, in case char is signed on your PC and stores a negative non-EOF value, this will lead to undefined behavior.
I recommend to do it like @Kirill says in a comment and just use the templated std::isspace - then you can get rid of the function object argument too.
A: Try rtrimws(ws, ::isspace);.
Also, just as a note, you should be using the reverse iterator.
A: That's because isspace is a template function in c++. It cannot deduce F. If you want to use C variant of isspace you could fully qualify its name as follows:
void rtrimws(string& ws){
rtrimws(ws, ::isspace); // this will use isspace from global namespace
// C++ version belongs to the namespace `std`
}
This is one more good sample why you shouldn't use using namespace std. | unknown | |
d12480 | train | You need to persist the state for the items that you have modified with the button press and then restore that state when your adapter's getView() is called for that item.
There are many ways you can do this: in memory, database, etc. You'll have to pick the method that works best for your purposes.
i.e. - item 3 gets clicked and the image changes from grey to green, store something to represent the state of the image (grey vs. green, a boolean would be great for this exact case) and then persist that data somewhere. Then when getView() gets called again for item 3 (it's about to be displayed again) you set the color of the image based on the data you persisted for item 3.
You could just modify the value in the original JSONArray that backs the ListView, btw.
A: The reason for this behaviour is because you do not persist the state of the items (if they were clicked or not). Each time the list is scrolled, the getView() is called and it executes the following code and the state is reset:
if(p.getJSON().equals("NO") ){
imageView.setBackgroundResource(R.drawable.gray);
imageView.setTag(0);
}//end if equals NO
if(p.getJSON().equals("YES")){
imageView.setClickable(false);
imageView.setBackgroundResource(R.drawable.green);
imageView.setTag(1);
}//end if equals yes
What is needed is a way to keep track of the state of each item based on its position. So you could confidently tell: item at position k is "YES" or "NO"!
You need to keep track of the items which have been clicked so that when getView() is called, you can update the state of the based on its current value (not based on JSON value).
1) Maintain a map of items positions which are checked, and corresponding state value ("YES" or "NO").
2) If item is clicked, add its (position, new state) to the map. If it is clicked again, update its state inside the map.
3) Use this map and set the state of the item in the getView(), something like:
Private field:
HashMap<Integer, String> map = new HashMap<Integer, String>();
In your getView():
String state = p.getJSON();
if(map.containsKey(position)) {
state = map.get(position);
}
if(state.equals("NO") ){
imageView.setBackgroundResource(R.drawable.gray);
imageView.setTag(0);
}//end if equals NO
if(state.equals("YES")){
imageView.setClickable(false);
imageView.setBackgroundResource(R.drawable.green);
imageView.setTag(1);
}//end if equals yes
final int pos = position;
imageView.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View imageView) {
final int status = (Integer) imageView.getTag();
if (status == 0){
imageView.setBackgroundResource(R.drawable.green);
imageView.setTag(1);
map.put(pos, "YES")
}
else {
imageView.setBackgroundResource(R.drawable.gray);
imageView.setTag(0);
map.put(pos, "NO")
}
}//end on click
});
Note: this is just one of the many ways of getting what you want, however the basic idea should be the same.. | unknown | |
d12481 | train | Try sending the view in changeView()
changeView(view);
And in changeView()
ImageView imgIcon = (ImageView) v.findViewById(R.id.icon);
TextView txtTitle = (TextView) v.findViewById(R.id.title);
A: Analyzing the code, I can deduce that, when you are changing color programmatically and changeView() is called, your color change code part is not executed :
for (int i = 0; i < adapter.getCount(); i++) {
View v = getViewByPosition(i, listview);
ImageView imgIcon = (ImageView) v.findViewById(R.id.icon);
TextView txtTitle = (TextView) v.findViewById(R.id.title);
if (i == position) {
imgIcon.setColorFilter(ContextCompat.getColor(mActivity, R.color.app_red));
txtTitle.setTextColor(ContextCompat.getColor(mActivity, R.color.app_red));
} else {
imgIcon.setColorFilter(ContextCompat.getColor(mActivity, R.color.colorWhite));
txtTitle.setTextColor(ContextCompat.getColor(mActivity, R.color.colorWhite));
}
}
Can you please check if the for block is executed? | unknown |
d12482 | train | If you have defined a function:
myfunc() {
echo 'hi'
}
then you can invoke that function in a case statement without a capturing expression. You do it just like any other command:
case "$param" in
expr) myfunc;;
*) echo 'nope';;
esac
You need not use a capturing expression unless you mean it. In your case, what you have would attempt to execute the output of the function as a command itself:
$ double_down() {
> echo 'ping google.com'
> }
$ $(double_down)
PING google.com (74.125.226.169): 56 data bytes
it's possible, but seems unlikely, that you really want this. | unknown | |
d12483 | train | From the sound of your problem, I think you are new to Android.
Ok, look at code below.
To create an AlertDialog with a list of selectable items like the one shown to the right, use the setItems() method:
final CharSequence[] items = {"Red", "Green", "Blue"};
AlertDialog.Builder builder = new AlertDialog.Builder(this);
builder.setTitle("Pick a color");
builder.setItems(items, new DialogInterface.OnClickListener() {
public void onClick(DialogInterface dialog, int item) {
Toast.makeText(getApplicationContext(), items[item], Toast.LENGTH_SHORT).show();
}
});
AlertDialog alert = builder.create();
For more info look at Creating an AlertDialog. | unknown | |
d12484 | train | A pipelined function might help you here.
*
*Create a table type that will match the results your function will return.
*Create a function which returns exactly the dates you want. It can run queries to make sure the date isn't already in your table, is within your desired date range, etc.
*Return the values one by one stopping once you hit your criteria.
*Select from the function using the TABLE() function to turn the results into a table you can query. Use the OBJECT_VALUE to access the actual value being returned (since it doesn't really have a column name).
create or replace type date_tbl_t as table of date;
/
create or replace function all_dates ( max_year_in in integer )
return date_tbl_t
pipelined
as
date_l date;
offset_l pls_integer := 0;
year_l integer;
begin
if date_l is null
then
date_l := sysdate;
end if;
year_l := extract ( year from date_l );
while year_l <= max_year_in
loop
pipe row(date_l);
date_l := date_l + 1;
year_l := extract ( year from date_l );
end loop;
return;
end all_dates;
/
select
to_char(x.object_value, 'yyyymmdd') as my_date_id,
x.object_value as datetime_start
from table ( all_dates (2019) ) x;
/ | unknown | |
d12485 | train | Instead of a single isShown state that is a boolean value that all rendered cards are using, store the id of the card that is clicked on and is associated with the fetched data.
Example:
const [isShownId, setIsShownId] = useState(null);
const handleClick = (id: number) => {
getAmountOfTimesPurchased(id);
setIsShownId(id);
};
return (
<div>
{products.map((product: IProduct) => {
return (
<Card key={product.id}>
...
<Card.Footer>
<Button
onClick={() => {
handleClick(product.id);
}}
>
Purchased For:
</Button>
{isShownId === product.id && timesPurchased}
</Card.Footer>
</Card>
);
})}
</div>
); | unknown | |
d12486 | train | The difference is the type of the button, not the id:
In the first example the type of the input is submit, which submits forms; in the second example the type is button, which doesn't submit forms.
After OP edited the question:
See that if you change the id of the submit button to basically anything other than "submit", it works...
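One likely mechanism behind this (a sketch, not taken from the question's markup): a form control whose id or name is "submit" shadows the form element's submit() method, so any script that calls it breaks:
// <form id="f"><button type="button" id="submit">Go</button></form>
const form = document.getElementById('f') as HTMLFormElement;
console.log(typeof form.submit);   // "object": the button element, not the submit() method
// form.submit();                  // would throw "form.submit is not a function"
// rename the button's id to anything else and typeof form.submit is "function" again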
A: This is noted on various sites such as this:
http://www.joezimjs.com/javascript/dont-name-inputs-action-submit/
Having a button with id='submit' is also a known issue in jquery that hasn't been resolved
http://bugs.jquery.com/ticket/1414
Presumably because the issue is outside of jquery's control and it'd require a hack.
I'd be interested to know exactly why. | unknown | |
d12487 | train | If you have Drush installed, you can use drush status and it will print out the Drush version and the Drupal version you have installed.
If not, go into your Drupal root directory and take a look at the CHANGELOG.txt file. It will list the most recent upgrade (the current version) at the top.
Edit
If you do not have the CHANGELOG.txt file, and don't have Drush, you can look in the includes/bootstrap.inc file, as suggested in the third answer to the question you linked to.
This is defined as a global PHP variable in /includes/bootstrap.inc within D7. Example: define('VERSION', '7.14'); So use it like this...
if (VERSION >= 7.1) {
do_something();
} | unknown | |
d12488 | train | there are a couple of ways to achieve it, for example:
*
*you can use server-side events and subscribe your client to be notified once the import is done https://symfony.com/blog/symfony-gets-real-time-push-capabilities
*a simpler way is to store a record somewhere in the DB/Redis once the import is done, and ask the backend whether the operation has finished on each page load / via AJAX | unknown |
d12489 | train | ConfigParam.joins(:default_configs).where(default_config: { role_id: 1001 }) | unknown | |
d12490 | train | Rect objects are usually axis-aligned, and so they only need 4 values: top, left, bottom, right.
If you want to rotate your rectangle, you'll need to convert it to eight values representing the co-ordinate of each vertex.
You can easily calculate the centre value by averaging all the x- and y-values.
Then it's just basic maths. Here's something from StackOverflow:
Rotating a point about another point (2D)
Your eight values, or four corners are (assuming counter-clockwise from the top right):
v0 : (right, top)
v1 : (left, top)
v2 : (left, bottom)
v3 : (right, bottom)
Create your own rectangle object to cope with this, and compute intersections etc.
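A small TypeScript sketch of that rotation (angle in radians; this is just the standard rotate-about-a-point formula, not code taken from the linked answer):
type Pt = { x: number; y: number };
function rotatedCorners(left: number, top: number, right: number, bottom: number, angle: number): Pt[] {
  const cx = (left + right) / 2, cy = (top + bottom) / 2;   // centre = average of the extremes
  const corners: Pt[] = [
    { x: right, y: top }, { x: left, y: top },
    { x: left, y: bottom }, { x: right, y: bottom },
  ];
  return corners.map(p => ({
    x: cx + (p.x - cx) * Math.cos(angle) - (p.y - cy) * Math.sin(angle),
    y: cy + (p.x - cx) * Math.sin(angle) + (p.y - cy) * Math.cos(angle),
  }));
}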
Note that I've talked about how to rotate the rectangle's vertices. If you still want a bounding box, this is normally still considered to be axis-aligned, so you could take the max and min of the rotated vertices and construct a new (larger) rectangle. That might not be what you want though. | unknown | |
d12491 | train | The json functions with string_agg() achieve what you want.
Using WITH ORDINALITY guarantees correct ordering of the text elements.
SELECT t.external_id as cod,
t.title as name,
string_agg(a.block->>'text', ' ' ORDER BY rn) as objectives
FROM "table" t
CROSS JOIN LATERAL jsonb_array_elements(t.objectives->'blocks')
WITH ORDINALITY as a(block, rn)
GROUP BY t.external_id, t.title; | unknown | |
d12492 | train | Lets try-
=FILTER(D3:H15,BYROW(E3:E15,
LAMBDA(x,MAX(--ISNUMBER(XMATCH(TOCOL(TEXTSPLIT(x,",")),
TOCOL(TEXTSPLIT(B4,", ")),0)))))
* IF(B3="ALL",D3:D15<>"",D3:D15=B3))
Explanation of the solution to identify if release value is present:
It uses BYROW function which processes each row by a LAMBDA function you define.
The formula: TOCOL(TEXTSPLIT(B4,", ")
Generates a column array with the values of B4, i.e. {A;B} (the semicolon represents a column array), in our case a 2x1 array. TEXTSPLIT splits a string by a delimiter (", ").
The formula: TOCOL(TEXTSPLIT(x,", "))
Generates an array column for a value represented by x split by the delimiter (", ") . For example if x is: A it will generate: {A} and for A,C the output will be: {A;C}, i.e 2x1 array.
The XMATCH function with signature: XMATCH(lookup_value, lookup_array, 0)
will return the index position in lookup_array when an exact match is found for lookup_value, otherwise N/A. If lookup_value is a column array, the XMATCH function is evaluated for each element of the array and returns the results in a column array.
For lookup_array: {A;B} it will produce the following output, based on the following input values:
Lookup_value    Result
{A}             {1}
{A;C}           {1;N/A}
{C;D}           {N/A;N/A}
{A;B}           {1;2}
{B;A}           {2;1}
{B;A;C}         {2;1;N/A}
{C}             {N/A}
In our case:
XMATCH(TOCOL(TEXTSPLIT(x,", ")),TOCOL(TEXTSPLIT(B4,", ")),0)
will return, for each releases value (x) ({A}, {A;B}, {A;C}, etc.), a column array whose size is the number of elements of x, indicating the row position in {A;B} (if it matches) or N/A (not found) for each element of x.
ISNUMBER converts the result to TRUE (if it matches) or FALSE (for N/A). --ISNUMBER(...) converts the result to 1 (match) or 0 (for N/A). Finally the MAX function returns 1 if there is at least one match, otherwise 0.
Because BYROW processes the LAMBDA function for each row, it returns 1 (at least one match) or 0 (no match) for each row of E3:E15.
=BYROW(E3:E15,LAMBDA(x,
MAX(--ISNUMBER(XMATCH(TOCOL(TEXTSPLIT(x,", ")),
TOCOL(TEXTSPLIT(B4,", ")),0)))))
which is what we need as a filter condition
Note: You can use MATCH function instead of XMATCH, but keep in mind that for the third input argument the default behavior is different. The default value for MATCH is 1 (largest value that is less than or equal to lookup_value) and for XMATCH is 0 (exact match). | unknown | |
d12493 | train | You would need to do some kind of check before printing
$html = '';
foreach ($current_asset as $asset) {
    if ($asset['hasBeenCheckedBefore']) {
        $checked = 'checked';
    } else {
        $checked = '';
    }
    $html .= "{$asset['name']} <input type='checkbox' name='current_asset[]' value='{$asset['id']}' $checked />";
}
A: Here is an example of one way to do it. I have changed your data structure to be easier to use. I was confused initially because you didn't mention any way to store data. So this is only good for the one page view.
<?php
// initialize data
/**
* Data structure can make the job easy or hard...
*
* This is doable with array_search() and array_column(),
* but your indexes might get all wonky.
*/
$current_asset = [
['name'=>'Land','id'=>1],
['name'=>'Building' ,'id'=>2],
['name'=>'Machinery','id'=>3],
];
/**
* Could use the key as the ID. Note that it is being
* assigned as a string to make it associative.
*/
$current_asset = [
'1'=>'Land',
'2'=>'Building',
'3'=>'Machinery',
];
/**
* If you have more information, you could include it
* as an array. I am using this setup.
*/
$current_asset = [
'1'=> ['name' => 'Land', 'checked'=>false],
'2'=> ['name' => 'Building', 'checked'=>false],
'3'=> ['name' => 'Machinery', 'checked'=>false],
];
// test for post submission, set checked as appropriate
if(array_key_exists('current_asset', $_POST)) {
foreach($_POST['current_asset'] as $key => $value) {
if(array_key_exists($key,$current_asset)) {
$current_asset[$key]['checked'] = true;
}
}
}
// begin HTML output
?>
<html>
<head>
<title></title>
</head>
<body>
<!-- content .... -->
<form method="post">
<?php foreach($current_asset as $key=>$value): ?>
<?php $checked = $value['checked'] ? ' checked' : ''; ?>
<label>
<input type="checkbox" name="current_asset[<?= $key ?>]" value="<?= $key ?>"<?= $checked ?>>
<?= htmlentities($value['name']) ?>
</label>
<?php endforeach; ?>
</form>
<body>
</html> | unknown | |
d12494 | train | Something like this?
var groups = list.GroupBy(l => l.Id)
.Select(g => new {
Id = g.Key,
GoodSum = g.Sum(i=>i.Good),
TotalSum= g.Sum(i=>i.Total),
Perc = (double) g.Sum(i=>i.Good) / g.Sum(i=>i.Total)
}
);
var average = groups.Average(g=>g.Perc);
Note that your answer for Avg should be 0.717 not 7.17.
A: Try this :
var avg = list.GroupBy(G => G.Id)
.Select(G => (G.Sum(T => T.Good)/G.Sum(T => T.TotalSum)))
.Average(); | unknown | |
d12495 | train | Setting it in another cookie is the best method, that or simply appending it to the end of the data you're storing with a delimiter such as #.
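A rough sketch of the companion-cookie idea (the cookie names here are made up):
const expires = new Date(Date.now() + 7 * 24 * 60 * 60 * 1000); // one week from now
document.cookie = 'user=alice; expires=' + expires.toUTCString();
document.cookie = 'user_expires=' + expires.getTime() + '; expires=' + expires.toUTCString();
// later, parse document.cookie and read user_expires to know when "user" will lapse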
A: There's no way to get the expiry time because the browser doesn't send it. So the only way is to store the time somewhere, in another cookie or maybe in the session.
A: Check this link How to get cookie's expire time
The cookie expiration date can be set in another cookie. This cookie can then be read later to get the expiration date. | unknown |
d12496 | train | In order to play sound from assets, you need a AudioPlayer and set asset to it.
onPressed: () async {
final player = AudioPlayer();
await player.setAsset('note1.wav'); // make sure to add on pubspec.yaml and provide correct path
player.play();
}
A: Finally found the fix. The problem had to do with the new version of audioplayers having all the functions in one class, without having to import AudioCache separately.
onPressed: () {
final player = AudioPlayer();
player.play(AssetSource("note1.wav"));
} | unknown | |
d12497 | train | If you don't want to use any third party libraries the simplest way to do this is to add the following in compilerOptions of your tsconfig.json file
"paths": {
"moment": [
"../node_modules/moment/min/moment.min.js"
]
}
A: There is another solution in this Angular Issue:
https://github.com/angular/angular-cli/issues/6137#issuecomment-387708972
Add a custom path with the name "moment" so it is by default resolved to the JS file you want:
"compilerOptions": {
"paths": {
"moment": [
"./node_modules/moment/min/moment.min.js"
]
}
}
A: This article describe good solution:
https://medium.jonasbandi.net/angular-cli-and-moment-js-a-recipe-for-disaster-and-how-to-fix-it-163a79180173
Briefly:
*
*ng add ngx-build-plus
*Add a file webpack.extra.js in the root of your project:
const webpack = require('webpack');
module.exports = {
plugins: [
new webpack.IgnorePlugin(/^\.\/locale$/, /moment$/),
]
}
*Run:
npm run build --prod --extra-webpack-config webpack.extra.js
Warning
moment.js has been deprecated officially
https://momentjs.com/docs/#/-project-status/ (try use day.js or luxon)
A: For anyone on angular 12 or latest
This does not work for me
const webpack = require('webpack');
module.exports = {
plugins: [
new webpack.IgnorePlugin(/^\.\/locale$/, /moment$/),
]
}
However this does
const webpack = require('webpack');
module.exports = {
plugins: [
new webpack.IgnorePlugin({
resourceRegExp: /^\.\/locale$/,
contextRegExp: /moment$/
})
]
};
A: In Angular 12, I did the following:
npm i --save-dev @angular-builders/custom-webpack
to allow using a custom webpack configuration.
npm i --save-dev moment-locales-webpack-plugin
npm i --save-dev moment-timezone-data-webpack-plugin
Then modify your angular.json as follows:
...
"architect": {
"build": {
"builder": "@angular-builders/custom-webpack:browser",
"options": {
"customWebpackConfig": {
"path": "./extra-webpack.config.js"
},
...
and in the extra-webpack.config.js file:
const MomentLocalesPlugin = require('moment-locales-webpack-plugin');
const MomentTimezoneDataPlugin = require('moment-timezone-data-webpack-plugin');
module.exports = {
plugins: [
new MomentLocalesPlugin({
localesToKeep: ['en-ie']
}),
new MomentTimezoneDataPlugin({
matchZones: /Europe\/(Belfast|London|Paris|Athens)/,
startYear: 1950,
endYear: 2050,
}),
]
};
Modify the above options as needed, of course. This gives you far better control on which exact locales and timezones to include, as opposed to the regular expression that I see in some other answers.
A: the solutions above didn't work for me because they address the wrong path (don't use ../ ) in the tsconfig.app.json
{
...
"compilerOptions": {
"paths": {
"moment": [
"node_modules/moment/min/moment.min.js"
]
}
}
}
Works for me in Angular 12.2.X. The changes must be done in the tsconfig.app.json, than also the type information of your IDE will work.
Don't change it in the tsconfig.json or your IDE will lose type information.
This fix the usage in the app as in the lib. I used source-map-explorer to verify it.
ng build --sourceMap=true --namedChunks=true --configuration production && source-map-explorer dist/**/*.js
A: I had the same problem with the momentjs library and solved it as below:
The main point of this answer is not to use IgnorePlugin to ignore the library, but to use ContextReplacementPlugin to tell the compiler which locale files I want to keep in this project.
*
*Do all of the configurations mentioned in this answer: https://stackoverflow.com/a/72423671/6666348
*Then in your webpack.config.js file write this:
const webpack = require("webpack");
module.exports = {
plugins: [
new webpack.ContextReplacementPlugin(/moment[\/\\]locale$/, /(en|fr)$/)
]
};
This configuration will add only en and fr locale in your application dist folder.
A: You can try to use moment-mini-ts instead of moment
npm i moment-mini-ts
import * as moment from 'moment-mini-ts'
Don’t forget to uninstall moment
npm uninstall moment
I’m using angular 9 | unknown | |
d12498 | train | 21.2.5.13 RegExp.prototype.test( S )
The following steps are taken:
*
*Let R be the this value.
*If Type(R) is not Object, throw a TypeError exception.
*Let string be ToString(S).
*ReturnIfAbrupt(string).
*Let match be RegExpExec(R, string).
*ReturnIfAbrupt(match).
*If match is not null, return true; else return false.
Look at the ToString(S) step: null converts to the string "null", and that is why you get the result you see.
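A quick illustration (the argument is stringified before matching):
const value: any = null;
/null/.test(value);     // true:  null is converted to the string "null" first
/^abc$/.test(value);    // false
/^null$/.test(value);   // true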
A: Yes, this is as specified.
Per §21.2.5.13 "RegExp.prototype.test( S )" in the 2015 version (6th Edition), one of the first few steps is to apply the "ToString" operation to the argument S, and use its result. The "ToString" operation converts various non-string values to string values; null, in particular, is converted to "null". | unknown | |
d12499 | train | The binary is compiled for a newer version of the OS (you're now 4 major releases behind — upgrade if you can!) | unknown | |
d12500 | train | It's just a matter of iterating over the list and counting as you go. Try something like this:
select( [X|_] , 0 , X ) .
select( [_|Xs] , N , C ) :- N > 0 , N1 is N-1, select(Xs,N1,C).
or
select( Xs , N , C ) :- select(Xs,0,N,C) .
select( [X|_] , N , N , X ) .
select( [_|Xs] , V , N , C ) :- V1 is V+1, select(Xs,V1,N,C).
The latter will work in a more Prolog-like way, bi-directionally. It doesn't care if you specified an index or not:
*
*select( [a,b,c,d] , N , C ) successively succeeds with
*
*N=0, C=a
*N=1, C=b
*N=2, C=c
*N=3, C=d
*select( [a,b,c,d] , 2 , C ) succeeds with just
*
*C=c
*select( [a,b,c,d] , N , d ) succeeds with just
*
*N=3
A: My approach is also "count the number down and match the thing from the front when it reaches zero":
:- use_module(library(clpfd)).
position(0, [Elem|_], Elem).
position(Pos, [_|T], Elem) :-
Pos #= Pos_ + 1,
position(Pos_, T, Elem).
Your request "a program that return an element from a list, using a specified index" is an imperative request like you might use in Python; taking a single index and returning a single element. Changing to a Prolog relational way of thinking brings you to Nicholas Carey's comment:
will work bi-directionally. It doesn't care if you specified an index or not
So you can give an element and get an index. As well as that, they can check whether an element is at an index; this confirms 'dog' is in position 3, and says 'cow' is not in position 3:
?- position(3, [apple,box,cat,dog], dog).
true
?- position(3, [apple,box,cat,dog], cow).
false
On backtracking, find all positions of an element, this finds 'box' in two places:
?- position(Pos, [apple,box,cat,dog,box], box).
Pos = 1 ;
Pos = 4
Which means compared to Python this code overlaps with x = items[i] and i = items.index(x) and enumerate(items) and items[i] == x. | unknown |