d1301 | train | Why do you think you need the cast? If it is a collection of A * you should just be able to say:
(*begin)->doIt();
A: You can use std::for_each for that:
std::for_each( v.begin(), v.end(), std::mem_fun( &A::doIt ) );
std::mem_fun creates a function object that calls the given member function on its operator() argument. for_each will call this object for every element in the range [v.begin(), v.end()).
A: You should provide error messages to get better answers.
In your code, the first thing that comes to mind is to use
elem->doIt();
What is the "serializable" type?
d1302 | train | Add your libcode.c to:
idf_component_register(SRCS "hello_world_main.c" "libcode.c"
INCLUDE_DIRS ".")
And add a reference to libcode.h:
#include "libcode.h"
Libcode is an artificial name; replace it with the correct one. You could also add a directory if needed, like "libcode/libcode.h".
Hope this answers your question; with more code it’s easier to understand your problem.
A: To check what a minimal component looks like, you can create one using idf.py create-component NAME. The component named NAME will be created, and you can check what is missing in your external library.
d1303 | train | It doesn't work that way. The service is only accessible via DNS, not IP address.
Upgrading from the Standard tier to Premium gives you throughput and latency commitments, but not an IP address.
d1304 | train | There is nothing wrong with using <br /> or <hr />. Neither of them are deprecated tags, even in the new HTML 5 draft spec (relevant spec info). In fact, it's hard to state correct usage of the <br /> tag better than the W3C itself:
The following example is correct usage of the br element:
<p>P. Sherman<br>
42 Wallaby Way<br>
Sydney</p>
br elements must not be used for separating thematic groups in a paragraph.
The following examples are non-conforming, as they abuse the br element:
<p><a ...>34 comments.</a><br>
<a ...>Add a comment.</a></p>
<p>Name: <input name="name"><br>
Address: <input name="address"></p>
Here are alternatives to the above, which are correct:
<p><a ...>34 comments.</a></p>
<p><a ...>Add a comment.</a></p>
<p>Name: <input name="name"></p>
<p>Address: <input name="address"></p>
<hr /> can very well be part of the content as well, and not just a display element. Use good judgement when it comes to what is content and what is not, and you'll know when to use these elements. They're both valid, useful elements in the current W3C specs. But with great power comes great responsibility, so use them correctly.
Edit 1:
Another thought I had after I first hit "post" - there's been a lot of anti-<table> sentiment among web developers in recent years, and with good reason. People were abusing the <table> tag, using it for site layout and formatting. That's not what it's for, so you shouldn't use it that way. But does that mean you should never use the <table> tag? What if you actually have an honest-to-goodness table in your code, for example, if you were writing a science article and you wanted to include the periodic table of the elements? In that case, the use of <table> is totally justified, it becomes semantic markup instead of formatting. It's the same situation with <br />. When it's part of your content (ie, text that should break at those points in order to be correct English), use it! When you're just doing it for formatting reasons, it's best to try another way.
A: <hr /> and <br />, much like everything else, can be abused to do design when they shouldn't be. <hr /> is meant to be used to visually divide sections of text, but in a localized sense. <br /> is meant to do the same thing, but without the horizontal line.
It would be a design flaw to use <hr /> across a site as a design, but in this post, for instance, it would be correct to use both <br /> and <hr />, since this section of text would still have to be a section of text, even if the site layout changed.
A: <hr/> and <br/> are presentational elements that have no semantic value to the document, so from a purist perspective yes, they ought to be avoided.
Think about HTML not as a presentational tool but rather as a document that needs to be self-describing. <hr/> and <br/> add no semantic value - rather they represent a very specific presentation in the browser.
That all being said, be pragmatic in your approach. Try to avoid them at all costs, but if you find yourself coding up the walls and across the ceiling to avoid them, then it's better to just go ahead and use them. Semantics are important, but fringe cases like this are not where they matter the most.
A: I believe absolutely avoiding usage of a commonly accepted solution (even if it is outdated) is the same thing as designing a table with <div> tags instead of <table> tags, just so you can use <div>.
When designing your website, you probably won't require the use of <br /> tags, but I can still imagine them being useful where user input is needed, for example.
I don't see anything wrong with using <br />, but I have not come across many situations where I required them. In most cases, there probably are more elegant (and prettier) solutions than <br /> tags if vertically separating content is what you need.
A: No. Why? They're useful constructs.
Adding this addendum (with accompanying HR), in case my brief answer is construed as lacking appropriate consideration. ;)
It can be, and often is, an incredible waste of time -- time someone else is usually paying for -- trying to come up with cross-browser CSS-limited solutions to UI problems that BR and HR tags, and their likes, can solve in two seconds flat. Why some UI folks waste so much time trying to come up with "pure" ways of getting around using tried-and-true HTML constructs like line breaks and horizontal rules is a complete mystery to me; both tags, among many others, are totally legitimate and are indeed there for you to use. "Pure," in that sense, is nonsense.
One designer I worked with simply could not bring himself to do it; he'd waste hours, sometimes days, trying to "code" around it, and still come up with something that didn't work in Opera. I found that totally baffling. Dude, add BR, done. Totally legal, works like a charm, and everyone's happy.
I'm all for abstracting presentation, don't get me wrong; we're all out to do the best work we can. But be sensible. If you've spent five hours trying to figure out some way to achieve, in script, something that BR gives you right now, and the gods won't rain fire down on you for doing it, then do it, and move on. Chances are that if it's that problematic, there might be something wrong with your solution anyway.
A: I put in a <hr style="display:none"> between sections. For example, between columns in a multi-column layout. In browsers with no support for CSS the separation will still be clear.
A: Just so long as you don't use <br/> to form paragraphs, you're probably alright in my book ;) I hate seeing:
<p>
...lengthy first paragraph...
<br/>
<br/>
...lengthy second paragraph...
<br/>
<br/>
...lengthy third paragraph...
</p>
As for an address, I would do it like this:
<address class="address">
<span class="street">1100 N. Wullabee Lane</span><br/>
<span class="city">Pensacola</span>, <span class="state">Florida</span>
<span class="zip">32503</span>
</address>
But that's likely because I love jQuery and would like access to any of those parts at any given moment :)
A: Interesting question. I have always used <br/> as a carriage return (and hence as part of content, really). Not sure if that is the right way of going about it, but it's served me well.
<hr/> on the other hand...
A: I'd not say at all costs, but if you want to be a purist, these tags have nothing to do with structure and everything to do with layout, and HTML is supposed to separate content from presentation. <hr /> can be done through CSS and <br/> through proper use of other tags like <p>.
If you do not want to be a purist, use them :)
A: I think you should seldom need the BR tag in your templates. But at times it can be called for in the content, user generated and system generated. Like if you want to keep a piece of text in the paragraph but need a newline before it.
What are the occasions where you feel you are forced to use BR tags?
A: <br> is the HTML way of expressing a line break, as there is no other way of doing it.
Physical line breaks in the source code are rightfully ignored (more correctly: treated as a single white space), so you need a markup way to express them.
Not every line break is the beginning of a new paragraph, and packing text into <div>s (for example) just to avoid <br>s seems overly paranoid to me. Why do they bother you?
A: BR is fine, since a hard line-break could be part of the content, for example in code blocks (even though you'd probably use the PRE-element for that) or lyrics.
HR on the other hand is purely presentational: a horizontal rule, a horizontal line. Use border-top/bottom for neighbouring elements instead.
A: I personally feel that, in the interest of true separation of content and style, both <br /> and <hr /> should be avoided.
As far as doing away with <br /> tags in my own markup, I use <div> in conjunction with the css margin property to handle anything that needs a line break, and a <span> for anything that would be inline, but of a different "content class" (obviously distinguishing them with the class attribute).
To replace an <hr /> tag with css, I simply create a <div> with the border-top-style css property set to solid and the other three sides set to none. The margin and the padding properties of said <div> then handle the actual placement/positioning of the horizontal line.
Like I said, I'm no expert, my web design experience is mostly a side effect of my work with JSP pages (I am a Java programmer), but I came upon this question during some research and wanted to put my two cents in...
A: You can use <br> but don't use <hr>.
<BR> — LINE BREAK— This is display information. Still in
common use, but use with restraint.
<HR> — HORIZONTAL RULE— Display info with no semantic value –
never use it. “Horizontal”, by definition, is a visual attribute.
Source: Proper Use Guide for HTML Tags
A: That's bad usage if you're going Strict.
<br/> and <hr/> are not part of the content. For instance, <hr/> is commonly used to separate blocks of text, but isn't it possible to do that with border-bottom? And <br/> is seen in many cases as a way to limit text to a certain form, which could be accomplished with CSS instead.
Of course, if you aren't going Strict, don't worry too much.
A: HR has some hacky uses right now in preventing mobile browsers' font inflation algorithms from kicking in.
Also if you have lots of small content separated by HRs, the Googlebot will currently see it as 'N separate entries', which might be useful to you.
BRs should really only be used as a line break within a paragraph, which is a pretty rare typographical usage (except perhaps within a table cell), and I'm not aware of any hacky reason to use them.
A: Here is an excerpt from a course that helps you learn html.
<form>
<p>Choose your pizza toppings:</p>
<label for="cheese">Extra cheese</label>
<input id="cheese" name="topping" type="checkbox" value="cheese">
<br>
<label for="pepperoni">Pepperoni</label>
<input id="pepperoni" name="topping" type="checkbox" value="pepperoni">
<br>
<label for="anchovy">Anchovy</label>
<input id="anchovy" name="topping" type="checkbox" value="anchovy">
d1305 | train | Use this to redirect all traffic to new site using .htaccess of old sites.
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteCond %{HTTP_HOST} .*hemodialysis-krk.com
RewriteRule (.*) http://www.newsite.com/ [R=301,L]
</IfModule>
Or create more rules, if you can map sections of pages from old to new site. If you use:
RewriteRule (.*) http://www.newsite.com/$1 [R=301,QSA,L]
the original path and GET query will be used in the redirection too. This is always handy when working with unique URLs to reach a better position in search engine results.
EDIT:
If you need to redirect only URLs which don't exist anymore, then use (not tested):
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule (.*) / [R=301,L]
</IfModule>
d1306 | train | std::stack::pop doesn't return a value
You need to use std::stack::top to get the top element, and then remove it from the stack, like the following:
vector1[i]=stack1.top();
stack1.pop();
A: std::stack<T>::pop() doesn't return a value (it's a void function). You need to get the top, then pop, i.e.
for(int i=0; i<vector1.size();i++)
{
vector1[i]=stack1.top();
stack1.pop();
}
A: That's because the pop() function does not return a value.
You need to use top(), which returns the element at the top of the stack, and then pop() to eliminate that element...
for (int i = 0; i<vector1.size(); i++)
{
vector1[i] = stack1.top();
stack1.pop();
}
d1307 | train | It seems as though the only additional resources you're provided with have to do with screen density and resolution: http://developer.android.com/guide/webapps/index.html. However, if you are going to display this page in a WebView, you can utilize the Java-Javascript bridge to access any information available to the standard Java API (or non standard if you want to get creative and use reflection ;-) )
A: Edit: Oops, didn't see you meant from the browser javascript >< disregard information below.
android.os.Build provides a lot of information.
Try:
String s="Debug-infos:";
s += "\n OS Version: " + System.getProperty("os.version") + "(" + android.os.Build.VERSION.INCREMENTAL + ")";
s += "\n OS API Level: " + android.os.Build.VERSION.SDK;
s += "\n Device: " + android.os.Build.DEVICE;
s += "\n Model (and Product): " + android.os.Build.MODEL + " ("+ android.os.Build.PRODUCT + ")";
d1308 | train | If you want to retrieve the value of each wkext-meta-attr, You can use the`.findAll() method and then loop through each element. Check whether the below code fulfills your task:
from bs4 import BeautifulSoup
with open("sample.sgm", "r") as f:
    contents = f.read()
soup = BeautifulSoup(contents, 'html.parser')
meta_attrs = soup.findAll('wkext-meta-attr')
for meta_attr in meta_attrs:
    print(meta_attr['value'])
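If BeautifulSoup isn't available, the same extraction can be sketched with the standard library's html.parser instead. The sample markup and attribute names below are assumptions based on the question, not taken from the real file:

```python
from html.parser import HTMLParser

class MetaAttrExtractor(HTMLParser):
    """Collect the value attribute of every <wkext-meta-attr> tag."""
    def __init__(self):
        super().__init__()
        self.values = []

    def handle_starttag(self, tag, attrs):
        # Also invoked for self-closing tags via handle_startendtag.
        if tag == "wkext-meta-attr":
            attrs = dict(attrs)
            if "value" in attrs:
                self.values.append(attrs["value"])

# Hypothetical sample standing in for the contents of sample.sgm:
sample = '<doc><wkext-meta-attr name="a" value="1"/><wkext-meta-attr name="b" value="2"/></doc>'
parser = MetaAttrExtractor()
parser.feed(sample)
print(parser.values)  # ['1', '2']
```

Note that html.parser is more forgiving than an SGML parser, so this is only a rough equivalent for simple documents.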
d1309 | train | You need to activate the wrap on the parent element (the flex container) then make the element full width:
.main{
display:flex;
border: 1px solid black;
flex-wrap:wrap;
}
.item, .item2, .item3, .item4{
padding: 10px;
}
.item {
flex-grow: 1;
}
.item2{
flex-grow: 7;
background-color: pink;
}
.item3 {
flex-grow: 2;
background-color: yellow;
}
.item4 {
flex-basis:100%;
}
<div class="main">
<div class="item">1</div>
<div class="item2">2</div>
<div class="item3">3</div>
<div class="item4">4</div>
</div>
A: Use .item-groups to organize your .items by row. Example:
* {
box-sizing: border-box;
margin: 0;
padding: 0;
}
.main {
display: flex;
border: 1px solid black;
flex-wrap: wrap;
}
.item-group {
flex-basis: 100%;
display: flex;
}
.item1, .item2, .item3, .item4 {
padding: 10px;
}
.item1 {
flex-grow: 1;
}
.item2{
flex-grow: 7;
background-color: pink;
}
.item3 {
flex-grow: 2;
background-color: yellow;
}
.item4 {
background-color: violet;
}
<div class="main">
<div class="item-group">
<div class="item1">1</div>
<div class="item2">2</div>
<div class="item3">3</div>
</div>
<div class="item-group">
<div class="item4">4</div>
</div>
</div>
d1310 | train | Often for anti-spam purposes, "$mail->From" is required to be the same address as you use for login to your SMTP server.
If that is your case, you can use the "$mail->AddReplyTo" field for the sender's address instead. Just a suggestion.
If it is not the solution, some extra debugging information can be enabled by setting
$mail->SMTPDebug = true;
A: You should also be careful about email address aliases, because your SMTP server may be rejecting your mails for that reason (it depends on the server configuration).
d1311 | train | I think need:
out = pd.DataFrame({ 'Date': pd.to_datetime(['2015-01-01','2015-05-01','2015-07-01','2015-10-01','2015-04-01','2015-12-01','2016-01-01','2016-02-01','2015-05-01', '2015-10-01']), 'Churn': ['Yes'] * 8 + ['No'] * 2 })
print (out)
Churn Date
0 Yes 2015-01-01
1 Yes 2015-05-01
2 Yes 2015-07-01
3 Yes 2015-10-01
4 Yes 2015-04-01
5 Yes 2015-12-01
6 Yes 2016-01-01
7 Yes 2016-02-01
8 No 2015-05-01
9 No 2015-10-01
df = (out.loc[out['Churn'] == 'Yes']
.groupby([out["Date"].dt.year,out["Date"].dt.quarter])["Churn"]
.count()
.rename_axis(('year','quarter'))
.reset_index(name='count'))
print(df)
year quarter count
0 2015 1 1
1 2015 2 2
2 2015 3 1
3 2015 4 2
4 2016 1 2
To get separate DataFrames by year, it is possible to create a dictionary of DataFrames:
dfs = dict(tuple(out.groupby(out['Date'].dt.year)))
print (dfs)
{2016: Churn Date
6 Yes 2016-01-01
7 Yes 2016-02-01, 2015: Churn Date
0 Yes 2015-01-01
1 Yes 2015-05-01
2 Yes 2015-07-01
3 Yes 2015-10-01
4 Yes 2015-04-01
5 Yes 2015-12-01
8 No 2015-05-01
9 No 2015-10-01}
print (dfs.keys())
dict_keys([2016, 2015])
print (dfs[2015])
Churn Date
0 Yes 2015-01-01
1 Yes 2015-05-01
2 Yes 2015-07-01
3 Yes 2015-10-01
4 Yes 2015-04-01
5 Yes 2015-12-01
8 No 2015-05-01
9 No 2015-10-01
Tenure column looks like this
out["tenure"].unique()
Out[14]:
array([ 8, 15, 32, 9, 48, 58, 10, 29, 1, 66, 24, 68, 4, 53, 6, 20, 52,
49, 71, 2, 65, 67, 27, 18, 47, 45, 43, 59, 13, 17, 72, 61, 34, 11,
35, 69, 63, 30, 19, 39, 3, 46, 54, 36, 12, 41, 50, 40, 28, 44, 51,
33, 21, 70, 23, 16, 56, 14, 62, 7, 25, 31, 60, 5, 42, 22, 37, 64,
57, 38, 26, 55])
It contains the number of months, from 1 to 72.
I need to split the tenure column into ranges.
For example, since this column contains numbers from 1 to 72, I need four ranges:
like 1 to 18 --> 1st range
19 to 36 --> 2nd range
37 to 54 --> 3rd range, and so on
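One way to build those four tenure ranges is pandas' pd.cut. This is a sketch under the assumption that tenure holds whole months from 1 to 72; the sample values below are made up:

```python
import pandas as pd

# Tenure runs from 1 to 72 months; cut it into four 18-month bins.
tenure = pd.Series([8, 15, 19, 36, 37, 54, 55, 72], name="tenure")

# Bins are right-inclusive by default: (0,18], (18,36], (36,54], (54,72]
tenure_range = pd.cut(tenure, bins=[0, 18, 36, 54, 72], labels=[1, 2, 3, 4])
print(tenure_range.tolist())  # [1, 1, 2, 2, 3, 3, 4, 4]
```

A later groupby on tenure_range (together with Churn) would then give the per-range churn counts.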
Here I found the quarterly churn count, and with that churn count I later computed the churn rate from the churn count and the total count.
quarterly_churn_yes = out.loc[out['Churn'] == 'Yes'].groupby([out["Date"].dt.year,out["Date"].dt.quarter]).count().rename_axis(('year','quarter'))
quarterly_churn_yes["Churn"]
quarterly_churn_rate = out.groupby(out["Date"].dt.quarter).apply(lambda x: quarterly_churn_yes["Churn"] / total_churn).sum()
print(quarterly_churn_rate)
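Since the snippet above divides by an undefined total_churn, here is a hedged, self-contained sketch of one way to get a quarterly churn rate (the Yes-count divided by the total row count in each quarter), reusing the sample frame from earlier in this answer:

```python
import pandas as pd

out = pd.DataFrame({
    "Date": pd.to_datetime([
        "2015-01-01", "2015-05-01", "2015-07-01", "2015-10-01",
        "2015-04-01", "2015-12-01", "2016-01-01", "2016-02-01",
        "2015-05-01", "2015-10-01",
    ]),
    "Churn": ["Yes"] * 8 + ["No"] * 2,
})

grp = [out["Date"].dt.year, out["Date"].dt.quarter]
churned = out["Churn"].eq("Yes").groupby(grp).sum()   # Yes-count per quarter
total = out["Churn"].groupby(grp).count()             # all rows per quarter
rate = (churned / total).rename_axis(["year", "quarter"])
print(rate)
```

With this sample data, 2015 Q2 and Q4 each come out at 2/3, and the other quarters at 1.0.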
In the same way, I need to find the churn count for each of the four tenure ranges.
d1312 | train | Xml deserialization.
Create your class with attributes:
class Foo
{
[XmlAttribute]
public bool valid;
[XmlAttribute]
public DateTime time;
}
Remember - fields must be public.
And then:
FileStream fs = new FileStream(filename, FileMode.Open);
XmlReader reader = XmlReader.Create(fs);
XmlSerializer xs = new XmlSerializer(typeof(Foo));
Foo foo = (Foo)xs.Deserialize(reader);
fs.Close();
A: .NET has an XmlSerializer object that lets you serialize and deserialize an object to and from an XML stream, but it creates its tags in a different way than your XML file.
Maybe you can create a custom serializer that will act according to you rules.
Here you can find an example. (It uses an XSD file to set the rules of serialization.)
d1313 | train | I think this should help
substr(x, str_locate(x, "?")+1, nchar(x))
A: Try this:
sub('.*\\?(.*)','\\1',x)
A: x <- "Name of the Student? Michael Sneider"
sub(pattern = ".+?\\?" , x , replacement = '' )
A: To take advantage of the loose wording of the question, we can go WAY overboard and use natural language processing to extract all names from the string:
library(openNLP)
library(NLP)
# you'll also have to install the models with the next line, if you haven't already
# install.packages('openNLPmodels.en', repos = 'http://datacube.wu.ac.at/', type = 'source')
s <- as.String(x) # convert x to NLP package's String object
# make annotators
sent_token_annotator <- Maxent_Sent_Token_Annotator()
word_token_annotator <- Maxent_Word_Token_Annotator()
entity_annotator <- Maxent_Entity_Annotator()
# call sentence and word annotators
s_annotated <- annotate(s, list(sent_token_annotator, word_token_annotator))
# call entity annotator (which defaults to "person") and subset the string
s[entity_annotator(s, s_annotated)]
## Michael Sneider
Overkill? Probably. But interesting, and not actually all that hard to implement, really.
A: str_match is more helpful in this situation
str_match(x, ".*\\?\\s(.*)")[, 2]
#[1] "Michael Sneider"
d1314 | train | insert into user_permissions (user_id, permission_id)
select
u.user_id,
p.permission_id
from
users u
cross join permissions p
where
not exists (select 0 from user_permissions with (updlock, holdlock)
where user_id = u.user_id and permission_id = p.permission_id)
Reference: Only inserting a row if it's not already there
A: INSERT dbo.user_permissions([user_id], permission_id, [value])
SELECT u.[user_id], p.permission_id, 1
FROM dbo.users AS u
CROSS JOIN dbo.[permissions] AS p
WHERE NOT EXISTS (SELECT 1 FROM dbo.user_permissions
WHERE [user_id] = u.[user_id]
AND permission_id = p.permission_id
)
GROUP BY u.[user_id], p.permission_id;
As an aside, you should avoid names that tend to require delimiters, e.g. user_id and permissions are keywords/reserved words.
A: Mansfield,
If I understand you correctly, you want to seed the user_permissions table with a value when you add a new permission.
I'll also assume that you want it to default to 0. After inserting the new permission, running this code will seed the user_permissions table for all users with a default value of 0 for any permissions currently not in use.
--insert into permissions(permission_name) values('newperm');
--select * from permissions
insert into user_permissions(user_id, permission_id, value)
select
u.user_id, p.permission_id, 0
from
users u
cross join permissions p
where
p.permission_id not in(select permission_id from user_permissions)
;
--select * from user_permissions;
A: The below query will give you the missing userpermission rows to be inserted:
select a.USER_ID,b.permission_id from users a,permissions b,user_permissions c
where c.user_id <> a.user_id and c.permission_id <> b.permission_id
d1315 | train | Did you tried to clone it ?
git clone -b 4.17-arcore-sdk-preview https://github.com/google-ar-unreal/UnrealEngine.git
A: That repository is actually private. So you cannot access/clone it. What you can do is fork the repository on the link you provided and then clone your fork.
Edit
So the guide here at the time of writing is incorrect.
You have to follow the steps to become part of the Epic organisation on GitHub. When you follow these steps you'll see a private repo which you need to fork.
TL;DR: The clone command in the guide doesn't work now because the repository is private. Forking it works after becoming part of the Epic organisation on GitHub!
d1316 | train | A span around your html tag?
You could do this with XLinq, but it would only support well-formed XML. You might want to look at the HTML Agility Pack instead.
Edit - This works for me:
string xml = "...";
var geoPosition = XElement.Parse(xml).Descendants().
Where(e => e.Name.LocalName == "meta" &&
e.Attribute("name") != null &&
e.Attribute("name").Value == "geo.position").
Select(e => e.Attribute("content").Value).
SingleOrDefault();
A: I'd bet that the problem you're having comes from not referencing the namespace correctly with an XmlNamespaceManager. Here are two ways to do it:
string xml =
@"<span>
<!--CTYPE html PUBLIC ""-//W3C//DTD XHTML 1.0 Transitional//EN""
""http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dt -->
<html xmlns=""http://www.w3.org/1999/xhtml"">
<head>
<meta content=""application/xhtml+xml; charset=utf-8"" http-equiv=""Content-Type"" />
<meta content=""text/css"" http-equiv=""Content-Style-Type"" />
<meta name=""geo.position"" content=""41.8;12.23"" />
<meta name=""geo.placename"" content=""RomeFiumicino, Italy"" />
<title>RomeFiumicino, Italy</title>
</head>
<body />
</html>
</span>";
string ns = "http://www.w3.org/1999/xhtml";
XmlNamespaceManager nsm;
// pre-Linq:
XmlDocument d = new XmlDocument();
d.LoadXml(xml);
nsm = new XmlNamespaceManager(d.NameTable);
nsm.AddNamespace("h", ns);
Console.WriteLine(d.SelectSingleNode(
"/span/h:html/h:head/h:meta[@name='geo.position']/@content", nsm).Value);
// Linq - note that you have to create an XmlReader so that you can
// use its NameTable in creating the XmlNamespaceManager:
XmlReader xr = XmlReader.Create(new StringReader(xml));
XDocument xd = XDocument.Load(xr);
nsm = new XmlNamespaceManager(xr.NameTable);
nsm.AddNamespace("h", ns);
Console.WriteLine(
xd.XPathSelectElement("/span/h:html/h:head/h:meta[@name='geo.position']", nsm)
.Attribute("content").Value);
A: I agree with Thorarin - use the HTML Agility pack, it's much more robust.
However, I suspect the problem you are having using LinqToXML is because of the namespace. See MSDN here for how to handle them in your queries.
" If you have XML that is in a default namespace, you still must declare an XNamespace variable, and combine it with the local name to make a qualified name to be used in the query.
One of the most common problems when querying XML trees is that if the XML tree has a default namespace, the developer sometimes writes the query as though the XML were not in a namespace."
d1317 | train | Posted something very similar to this yesterday. As you know, you can only perform such function with dynamic sql.
Now, I don't have your functions, so you will have to supply those.
I've done something very similar in the past to calculate a series of ratios in one pass for numerous income/balance sheets.
Below is one approach. (However, I'm not digging the 2 parameters ... seems a little limited, but I'm sure you can expand as necessary)
Declare @Formula table (ID int,Title varchar(25),Formula varchar(max))
Insert Into @Formula values
(1,'Sum' ,'@a+@b')
,(2,'Multiply','@a*@b')
Declare @Parameter table (a varchar(50),b varchar(50))
Insert Into @Parameter values
(1,2),
(5,3)
Declare @SQL varchar(max)=''
;with cte as (
Select A.ID
,A.Title
,ParameterA = A
,ParameterB = B
,Expression = Replace(Replace(Formula,'@a',a),'@b',b)
From @Formula A
Cross Join @Parameter B
)
Select @SQL = @SQL+concat(',(',ID,',',ParameterA,',',ParameterB,',''',Title,''',(',Expression,'))') From cte
Select @SQL = 'Select * From ('+Stuff(@SQL,1,1,'values')+') N(ID,ParameterA,ParameterB,Title,Value)'
Exec(@SQL)
-- Optional To Trap Results in a Table Variable
--Declare @Results table (ID int,ParameterA varchar(50),ParameterB varchar(50),Title varchar(50),Value float)
--Insert Into @Results Exec(@SQL)
--Select * from @Results
Returns
ID ParameterA ParameterB Title Value
1 1 2 Sum 3
2 1 2 Multiply 2
1 5 3 Sum 8
2 5 3 Multiply 15
d1318 | train | This is caused by your APP_BASE_HREF
{provide: APP_BASE_HREF, useValue: window.document.location.href}
You are telling the app to use window.document.location.href (main?id=1) as your base href. Angular then appends its own routes to the end of the base href. This is why you are getting the duplication:
localhost:4200/main?id=1 (< APP_BASE_HREF) /main?id=1 (< ROUTE)
Here are the docs on the functionality of APP_BASE_HREF (https://angular.io/api/common/PathLocationStrategy#description)
d1319 | train | actually it says that NSArray doesn't respond to removeObjectAtIndex. Which would be true.
Could it be that you define it as NSMutableArray, but initialise it in a wrong way?
Where do you init the array, is it possible that the current array is actually a NSArray?
because:
NSMutableArray *anArray = [[NSArray alloc] initWithObjects:anObject, nil];
is possible, but would result in runtime errors when you select the wrong selectors.
A: You need to use -mutableCopy if you want a mutable copy of an array. Using -copy, even on a mutable array, will give you an immutable array.
d1320 | train | As far as I see the client calls for mediatype text/html. But the objectmapper does not know how to write html for an arraylist.
What kind of format do you expect xml or json?
@Path("chickens")
public class ChickensResource {
@Inject
ChickenService cs;
@GET
@Produces(MediaType.APPLICATION_JSON)
public List<Chicken> chickens() {
return cs.getAllChickens();
}
@POST
@Consumes(MediaType.APPLICATION_JSON)
@Produces(MediaType.APPLICATION_JSON)
public void save(JsonObject chicken) {
String name = chicken.getString("name");
int age = chicken.getInt("age");
cs.save(new Chicken(name, age));
}
}
Another solution would be to set the right requested content type in the request:
GET Header:
Accept: application/json
POST Header:
Accept: application/json
Content-Type: application/json
The Accept header says in which format the response should be.
The Content-Type header says which format the request payload has.
Content Types:
HTML --> text/html
JSON --> application/json
XML --> application/xml
Edit: I think the POST has the same issue. We have now told the methods that they consume JSON as input data and return JSON as output data (produces).
But is that data really set in the request? Can you please post how you construct the POST?
To match those methods there need to be those two headers in the request:
Accept: application/json says which format the client expects.
This should match the @Produces in the service which sets the output format.
Content-Type: application/json is the one I think is missing; it says in which format the POST payload is, and this should match the server input @Consumes
d1321 | train | If the only things your don't want comma-separated are strings that have years in, use:
knit_hooks$set(inline = function(x) {
if(is.numeric(x)){
return(prettyNum(x, big.mark=","))
}else{
return(x)
}
})
That works for your calendar string. But suppose you want to just print a year number on its own? Well, how about using the above hook and converting to character:
What about \Sexpr{2014}? % gets commad
What about \Sexpr{as.character(2014)}? % not commad
or possibly (untested):
What about \Sexpr{paste(2014)}? % not commad
which converts the scalar to character and saves a bit of typing. We're not playing code golf here though...
Alternatively a class-based method:
comma <- function(x){structure(x,class="comma")}
nocomma <- function(x){structure(x,class="nocomma")}
options(scipen=999) # turn off scientific notation for numbers
opts_chunk$set(echo=FALSE, warning=FALSE, message=FALSE)
knit_hooks$set(inline = function(x) {
if(inherits(x,"comma")) return(prettyNum(x, big.mark=","))
if(inherits(x,"nocomma")) return(x)
return(x) # default
})
wantcomma <- 1234*5
nocomma1 <- "September 1, 2014" # note name change here to not clash with function
Then just wrap your Sexpr in either comma or nocomma like:
The hook will separate \Sexpr{comma(wantcomma)} and \Sexpr{nocomma(nocomma1)}, but I don't want to separate years.
If you want the default to commaify then change the line commented "# default" to use prettyNum. Although I'm thinking I've overcomplicated this and the comma and nocomma functions could just compute the string format themselves and then you wouldn't need a hook at all.
Without knowing exactly your cases, I don't think we can write a function that infers the comma-sep scheme. For example, it would have to know that "1342 cases in 2013" needs its first number comma'd and not its second...
d1322 | train | You can write a Simple procedure for the same:
DECLARE
l_count NUMBER;
CURSOR C1 is
-- YOUR DATA FROM SOURCE
BEGIN
FOR each_record IN c1 LOOP
l_count := 0;
SELECT COUNT(*) INTO l_count FROM destination_table WHERE field1 =
each_record.field1 AND .... AND flag = 'Y'; -- find current record in dest table (on ID and flag = Y)
-- if any other fields do not match (Field1, Field2, Field3)
IF l_count > 0 THEN
NULL; -- update the current record: set enddate, set current flag to 'N'
END IF;
NULL; -- insert a new record with startdate = sysdate, current flag = 'Y'
END LOOP;
END;
Mod by OP: That pointed me in the right direction. The following code does the trick, provided there is also an index on TableID and on (TableID, CurrentFlag).
CREATE OR REPLACE PROCEDURE TESTPROC AS
BEGIN
DECLARE
l_count NUMBER;
CURSOR TRN is
SELECT * from sourceTable;
BEGIN
FOR each_record IN TRN
LOOP
-- if a record found but fields differ ...
l_count := 0;
SELECT COUNT(*) INTO l_count
FROM destTable DIM
WHERE each_record.TableID = DIM.TableID
and (each_record.Field1 <> DIM.Field1
or each_record.Field2 <> DIM.Field2
or each_record.Field3 <> DIM.Field3)
AND DIM.CurrentFlag = 'Y';
-- ... then update existing current record, and add with new data
IF l_count > 0 THEN
UPDATE destTable DIM
SET EndDate = sysdate
,CurrentFlag = 'N'
WHERE each_record.TableID = DIM.TableID
AND DIM.CurrentFlag = 'Y';
INSERT INTO destTable
(TableID
, Field1
, Field2
, Field3
, StartDate
, CurrentFlag)
VALUES (each_record.TableID
, each_record.Field1
, each_record.Field2
, each_record.Field3
, sysdate
, 'Y');
COMMIT;
END IF;
-- if no record found with this key...
l_count := 0;
SELECT COUNT(*) INTO l_count
FROM destTable DIM
WHERE each_record.TableID = DIM.TableID;
-- then add a new record
IF l_count = 0 THEN
INSERT INTO destTable
(TableID
, Field1
, Field2
, Field3
, StartDate
, CurrentFlag)
VALUES (each_record.TableID
, each_record.Field1
, each_record.Field2
, each_record.Field3
, sysdate
, 'Y');
END IF;
END LOOP;
COMMIT;
END;
END TESTPROC;
A: Maybe you could use triggers to perform that.
CREATE OR REPLACE TRIGGER insTableID
BEFORE INSERT OR UPDATE
ON tableID
FOR EACH ROW
DECLARE
    v_exists NUMBER := -1;
BEGIN
    SELECT COUNT(1) INTO v_exists FROM tableID t WHERE t.Field1 = :new.Field1 AND ... ;
    IF INSERTING THEN
        IF v_exists > 0 THEN
            NULL; -- your DML update statement
        ELSE
            NULL; -- your DML insert statement
        END IF;
    END IF;
    IF UPDATING THEN
        NULL; -- your DML statement to update the old record and a DML statement to insert the new record
    END IF;
END;
In this way, you can update the record related to the old values and insert a new row with the new values.
I hope that this helps you to solve your problem. | unknown | |
d1323 | train | Stripe has a guide to upgrading or downgrading Subscriptions that covers what you're asking about.
The high-level steps are:
*
*Create a new Price representing the new billing period under the Product you already have
*Update the Subscription using the new Price | unknown | |
d1324 | train | I was able to do this by finding the item that exactly matches the specified string (the ComboBox.FindStringExact method):
cmbfaculty.SelectedValue = table.Rows[0][1].ToString();
needed to be replaced with
cmbfaculty.SelectedIndex = cmbfaculty.FindStringExact(table.Rows[0][1].ToString()) ; | unknown | |
d1325 | train | Response is correct. I've tried requesting the website (the real one) and it works:
print(response.data.base64EncodedString())
If you decode the BASE64 data, it will render valid HTML code.
The issue seems related to encoding. After checking the website's head tag, it states that the charset is windows-1254
String(data: response.data, encoding: .windowsCP1254) // works. latin1, etc.
Your issue is similar to SWIFT: NSURLSession convert data to String | unknown | |
d1326 | train | Your HDFS entry is wrong: fs.default.name has to be set to hdfs://srv-lab:9000. Set this and restart your cluster. That will fix the issue. | unknown |
d1327 | train | From the question, it would seem that the background job does not modify the state of the database.
The simplest way to avoid a performance hit on the main application while the background job is running is to take a database dump and perform the analysis on that dump. | unknown |
d1328 | train | You can adapt any encoding to ensure encoding(Text1, Text2) == encoding(Text2, Text1) by simply enforcing a particular ordering of the arguments. Since you're dealing with text, maybe use a basic lexical order:
encoding_adapter(t1, t2)
{
    if (t1 < t2)
        return encoding(t1, t2)
    else
        return encoding(t2, t1)
}
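For instance, a runnable Python sketch of this adapter (the SHA-256 choice and the null-byte delimiter are my own illustrative assumptions, not part of the question):

```python
import hashlib

def encoding(t1: str, t2: str) -> str:
    # One-way encoding of an ordered pair of texts; the null-byte delimiter
    # guards against boundary collisions such as ("AA", "B") vs ("A", "AB").
    return hashlib.sha256(f"{t1}\x00{t2}".encode("utf-8")).hexdigest()

def encoding_adapter(t1: str, t2: str) -> str:
    # Enforce a particular ordering of the arguments so the result is symmetric.
    if t1 < t2:
        return encoding(t1, t2)
    return encoding(t2, t1)

print(encoding_adapter("Text1", "Text2") == encoding_adapter("Text2", "Text1"))  # prints True
```

Any stable total order on the inputs works here; lexical order is just the most convenient one for text.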
If you use a simple single-input hash function you're probably tempted to write:
encoding(t1, t2)
{
    return hash(t1 + t2)
}
But this can cause collisions: encoding("AA", "B") == encoding("A", "AB"). There are a couple easy solutions:
*
*if you have a character or string that never appears in your input strings then use it as a delimiter:
return hash(t1 + delimiter + t2)
*
*hash the hashes:
return hash(hash(t1) + hash(t2)) | unknown | |
d1329 | train | When inserting, it does not matter whether the parent table has a value or not; as long as the child table has a foreign key, inserting or updating the data will work just fine. | unknown |
d1330 | train | Just a suggestion:
First, try placing 'chb01_01.edf' in the Python working directory, where Python will find it, or refer to it by its full path, c:\temp_edf\chb01_01.edf. That makes it easier to find.
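A quick way to check where Python is actually looking (the file name is the one from the question; the second path will of course depend on your machine):

```python
import os

# Relative file names are resolved against the current working directory.
print("working directory:", os.getcwd())

# Check both candidate locations for the file.
for path in ("chb01_01.edf", r"c:\temp_edf\chb01_01.edf"):
    print(path, "->", "found" if os.path.exists(path) else "not found")
```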
best | unknown | |
d1331 | train | I think there are two things for you to think about:
*
*if you want to use Redis with Django Celery (Celery Beat).
*if you want to use Redis just as a message queue (MQ) with Django.
Below are the references to help you with each case:
For Case 1: check out the link below
*
*https://enlear.academy/hands-on-with-redis-and-django-ed7df9104343
For Case 2: check out the link below
*How can I publish a message in Redis channel asynchronously with python?
(I recommend this one most: https://stackabuse.com/working-with-redis-in-python-with-django/) -- it covers all you need about using Redis for caching | unknown |
d1332 | train | I have personal experience with IIRF from the codeplex site, and I liked it and found it good.
A: I use UrlRewrite from UrlRewriting.Net
Very easy to use for the simple thing I'm doing: send .pdf requests to an aspx page to generate the pdf. | unknown | |
d1333 | train | Do you need to use some methods specific to the dictionaries? If not, here is my suggestion:
*
*Create a class which has a string and a double properties
*Create an ObservableCollection of that class
*Set that collection as the items source of your datagrid.
And that's it! The headers will be the names of the properties specified in your class, so they are easy to change afterward.
Hope it helps
A: You can bind rows to a collection, but not columns.
Create a class with 200 properties, as in Jacques' answer.
For the getter you would return Value[0], Value[1], ...
And it can be a List.
Or you could build up the columns in code-behind, in which case you can bind to Value[0], Value[1], ... | unknown |
d1334 | train | I would rather carry this out server-side, because it cannot be done well purely client-side (generating the images); it becomes heavy lifting for the front end.
Client Side Method:
<video id="video" controls>
<source src="your-video-file.mp4" type="video/mp4">
Your browser does not support the video tag.
</video>
<canvas id="canvas"></canvas>
<script>
const video = document.getElementById("video");
const canvas = document.getElementById("canvas");
const ctx = canvas.getContext("2d");
video.addEventListener("loadedmetadata", function() {
canvas.width = video.videoWidth;
canvas.height = video.videoHeight;
});
</script>
Example:
When a user sees a video on YouTube, the site's servers send these thumbnails, which are saved as JPEG files. The thumbnail of a video will grow and begin playing a preview when a user hovers over it in the search results or on the watch page, giving them a clearer understanding of what the video is about.
<img id="thumbnail" src="your-thumbnail-image.jpg">
<script>
const thumbnail = document.getElementById("thumbnail");
// Modify the thumbnail's properties, such as its width and height
thumbnail.width = 150;
thumbnail.height = 100;
// Add an event listener to handle user interactions
thumbnail.addEventListener("click", function() {
// Perform some action, such as playing the video
});
</script>
A: I found a solution on my own, thanks to my colleague.
This problem seems to occur because the rendering engine's performance is forcibly limited (I'm not sure, though).
To solve this, use the setTimeout() function to make the captures sequential.
For those who are having a similar problem, I hope this code helps.
function initialImg(){
setTimeout(function(){
uploadedVideo.currentTime = timestamp[0];
document.getElementById('img_scene00').setAttribute("src", captureVideo(uploadedVideo));
setTimeout(function(){
uploadedVideo.currentTime = timestamp[1];
document.getElementById('img_scene01').setAttribute("src", captureVideo(uploadedVideo));
setTimeout(function(){
uploadedVideo.currentTime = timestamp[2];
document.getElementById('img_scene02').setAttribute("src", captureVideo(uploadedVideo));
setTimeout(function(){
uploadedVideo.currentTime = timestamp[3];
document.getElementById('img_scene03').setAttribute("src", captureVideo(uploadedVideo));
setTimeout(function(){
uploadedVideo.currentTime = timestamp[4];
document.getElementById('img_scene04').setAttribute("src", captureVideo(uploadedVideo));
}, 0);
}, 0);
}, 0);
}, 0);
}, 0);
} | unknown | |
d1335 | train | You can access the query parameters from the resource reference. Typically, something like this:
@Get
public String foo() {
Form queryParams = getReference().getQueryAsForm();
String f = queryParams.getFirstValue("f");
return f;
}
Generally speaking (and this would work for other methods that GET), you can access whatever is passed to the request (including the entity, when appropriate) using getRequest() within the ServerResource.
A: Hi, the question is about one year old, but I just started with Restlet and stumbled into the "same" problem. I am talking about the server, not the client (as Bruno noted, the original question mixes the server and client parts).
I think the question is not completely answered. If you, for instance, prefer to separate the Restlet resource from the semantic handling of the request (separating business logic from infrastructure), it is quite likely that you need some parameters, like an Observer, or a callback, or sth. else. So, as far as I can see, no parameter can be transmitted into this instantiation process. The resource is instantiated by the Restlet engine per request. Thus I found no way to pass a parameter directly (is there one?)
Fortunately it is possible to access the Application object of the Restlet engine from within the resource class, and thus also to the class that creates the component, the server, etc.
In the resource class I have sth. like this
protected Application initObjLinkage(){
Context cx = this.getContext();
Client cli = cx.getClientDispatcher();
Application app = cli.getApplication() ;
return app;
}
Subsequently you may use reflection and an interface to access a method in the Application class (still within the resource class), check reflection about this...
Method cbMethod = app.getClass().getMethod("getFoo", parameterTypes) ;
CallbackIntf getmethodFoo = ( CallbackIntf )cbMethod.invoke( app, arguments );
String str = getmethodFoo();
In my application I use this mechanism to get access to an observer that supplies the received data to the business logic classes. This approach is a standard for all my resource classes, which renders them quite uniform, standardized and small.
So... I just hope this is helpful and that there is {no/some} way to do it in a much simpler way :) | unknown |
d1336 | train | A solution could be to create a new column in your DataFrame:
df["Friend-Action"] = [f"{friend} -> {action}" for friend, action in zip(df["Friends"], df["Actions"])]
Then, plot this column:
df.plot(kind='bar', x="Friend-Action", y='Funds', color='red', figsize=(5, 5))
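A self-contained version of the same idea, with a tiny DataFrame in the shape implied by the question (the names and values here are invented for illustration):

```python
import pandas as pd

# Tiny DataFrame in the shape implied by the question (values invented).
df = pd.DataFrame({
    "Friends": ["Ana", "Ben"],
    "Actions": ["saved", "spent"],
    "Funds": [100, 40],
})

# Combine the two text columns into a single label column for the x axis.
df["Friend-Action"] = [f"{friend} -> {action}"
                       for friend, action in zip(df["Friends"], df["Actions"])]

print(df["Friend-Action"].tolist())  # prints ['Ana -> saved', 'Ben -> spent']
```

Calling df.plot(kind='bar', x="Friend-Action", y='Funds') on this then draws one bar per combined label.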
A: For the sake of this question it is easier to create an extra column with the values you desire and then pass it to the df.plot():
df['Friends_actions'] = df['Friends'] + " " + df['Actions']
df.plot(kind='bar', x = "Friends_actions" , y = 'Funds', color='red', figsize=(5,5))
Output: | unknown | |
d1337 | train | It sounds like you're talking about the text_pattern_ops operator class and its application for databases that are in locales other than C.
The issue is not one of encoding, but of collation.
A b-tree index requires that everything have a single, stable sort order following some invariants, like the assumption that if a < b then b > a. Comparison operators are used to sort the tree when building and maintaining an index.
For text strings, the collation rules for the language get applied by the comparison operators when determining whether one string is greater than or less than another so that strings sort "correctly" as far as the user is concerned. These rules are locale-dependent, and can do things like ignore punctuation and whitespace.
The LIKE operator isn't interested in locales. It just wants to find a prefix string, and it can't just ignore punctuation. So it cannot use a b-tree index that was created with a collation that might ignore punctuation/whitespace, etc. LIKE walks down the index tree character by character to find a match, and it can't do that if the index might ignore characters.
That's why, if your DB uses a locale other than "C" (POSIX) you must create different indexes for use with LIKE.
Example of localised sorting, compare:
regress=> WITH x(v) AS (VALUES ('10'),('1'),('1.'),('2.'),('.2'),('.1'),('1-1'),('11')
)
SELECT v FROM x ORDER BY v COLLATE "en_AU";
v
-----
1
.1
1.
10
11
1-1
.2
2.
(8 rows)
regress=> WITH x(v) AS (VALUES ('10'),('1'),('1.'),('2.'),('.2'),('.1'),('1-1'),('11')
)
SELECT v FROM x ORDER BY v COLLATE "C";
v
-----
.1
.2
1
1-1
1.
10
11
2.
(8 rows)
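The "C" ordering shown above is plain byte/code-point order, which you can reproduce outside the database, for example in Python with the same values:

```python
vals = ['10', '1', '1.', '2.', '.2', '.1', '1-1', '11']

# Python compares strings by code point, matching the COLLATE "C" output.
print(sorted(vals))  # prints ['.1', '.2', '1', '1-1', '1.', '10', '11', '2.']
```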
The text_pattern_ops opclass serves this need. In newer PostgreSQL releases you can create an index with COLLATE "C" on the target column instead, serving the same need, e.g.:
CREATE INDEX idx_c ON t2(x COLLATE "C");
LIKE will use such an index, and it can also be used for faster sorting where you don't care about locale for a given operation, e.g.
SELECT x FROM t2 ORDER BY x COLLATE "C"; | unknown | |
d1338 | train | I think you want to update the cache, so try using @CachePut. The method will be executed in all cases: if the key is new, a new record will be added to the cache; if the record is old, the old value will be updated by the new value (refreshing the cache entry).
@CachePut(value = "tripDetailsDashboardCache", key = "#userId")
public List<DashBoardObject> getAllDetailsForDashBoard(Integer userId){
List<Master> masters = tripDetailsService.getTripDetailsForCaching(userId);
List<DashBoardObject> dashBoardObject = tripDetailsService.getMasterDetailsForDashBoard(userId, masters);
return dashBoardObject;
}
A: @CacheEvict runs by default after the method invocation. So the method above caches the list with key #userId and then clears the cache completely.
That makes the code unclear; I'd recommend creating separate cacheable and cache-evict methods.
A: Look at the docs. There is a @Caching annotation where you can define your @Cacheable, @CachePut and @CacheEvict | unknown |
d1339 | train | I was able to resolve this by adding the "eureka.instance.hostname=" property. | unknown |
d1340 | train | That isn't possible right now. If all you want is to use the email address as the username, you could write a custom auth backend that checks if the email/password combination is correct instead of the username/password combination (here's an example from djangosnippets.org).
If you want more, you'll have to hack up Django pretty badly, or wait until Django better supports subclassing of the User model (according to this conversation on the django-users mailing list, it could happen as soon as Django 1.2, but don't count on it).
A: The answer above is good, and we use it on several sites successfully. I also want to point out that many times, when people want to change the User model, they are adding more information fields. This can be accommodated with the built-in user profile support in the contrib admin module.
You access the profile by utilizing the get_profile() method of a User object.
Related documentation is available here. | unknown | |
d1341 | train | The major difference between ready and load, I think, is the one below:
*
*ready fires when the DOM is ready; this means that the element hierarchy is ready, even if the content (i.e. an image still loading) has not yet finished completely. It is safe to manipulate the DOM at this stage.
*load fires when even the content has finished loading for a given element ($(something).load). This means that, if attached to the document, this event will fire after all content is loaded (i.e. images have finished downloading, etc.)
EDIT: Also take a look here jQuery - What are differences between $(document).ready and $(window).load? | unknown | |
d1342 | train | First: it is better to give real-world examples than just nonsensical tables filled with random numeric buzz like you did. You may just get wrong answers that way.
If you want to process just the last group, you can use the GROUP INDEX for this:
SELECT matnr AS matno
FROM vbap UP TO 100 ROWS
INTO TABLE @DATA(materials)
ORDER BY matnr.
TYPES: ty_mats LIKE materials.
DATA: bom LIKE materials.
DATA(uniques) = lines( VALUE ty_mats( FOR GROUPS value OF <line> IN materials GROUP BY ( matno = <line>-matno ) WITHOUT MEMBERS ( value ) ) ).
LOOP AT materials REFERENCE INTO DATA(refmat) GROUP BY ( id = refmat->matno size = GROUP SIZE gi = GROUP INDEX ) ASCENDING REFERENCE INTO DATA(group_ref).
CHECK group_ref->*-gi = uniques.
bom = VALUE ty_mats( BASE bom FOR <mat> IN GROUP group_ref ( matno = <mat>-matno ) ).
me->process_boms( bom ). "<--here comes the drums
ENDLOOP.
Note: the table must be sorted for INDEX to show correct values.
In this snippet a very simple approach is utilized: first we calculate the total number of unique groups, and then we check each group index for whether it is the last one.
Note that what is "last" in your dataset depends only on the sort order, so you may easily end up with inaccurate values for your requirement.
A: Did you mean this?
sy-tabix, the current loop index, may help here.
Otherwise you may need to calculate the number of groups beforehand.
constants: processing_size_threshold type i value 10.
types: begin of ty_material,
material_num type c length 3,
end of ty_material.
data: materials type standard table of ty_material,
materials_bom type standard table of ty_material.
materials = value #( for i = 1 then i + 1 while i <= 3
( material_num = conv #( i ) ) ).
loop at materials reference into data(material_grp) group by material_grp->material_num.
loop at group material_grp reference into data(material).
materials_bom = value #( base materials_bom (
lines of value #( for j = 1 then j + 1 while j <= 5
( material_num = |{ material->material_num }{ j }| ) ) ) ).
endloop.
"The 2nd condition of this IF is what I'm not able to figure out, I need material 3 BOM to be processed here
if lines( materials_bom ) >= processing_size_threshold
or lines( materials ) = sy-tabix.
"me->process_boms(materials_bom).
clear materials_bom.
endif.
endloop. | unknown | |
d1343 | train | If you build a regular F# library project
namespace A
open System.Runtime.CompilerServices
[<Extension>]
module Extension =
    [<Extension>]
    let Increment(value : System.Int32) = value + 1
and then refer to this library from VB project
Imports A.Extension
Module Module1
Sub Main()
Console.WriteLine((1).Increment())
End Sub
End Module
then VB treats F# extension method as expected <Extension>Public Function Increment() As Integer and works correctly, giving 2 as output.
A clean experiment does not indicate any VB-specific idiosyncrasy to F#-defined extension methods. | unknown | |
d1344 | train | According to this example, yes, it's possible:
https://forums.oracle.com/thread/696634
A: Why go to the trouble of doing that when you can simply pass the cursor itself?
create or replace procedure someprocedure
(
rc1 in out adv_refcur_pkg.rc) -- this is defined below
as
begin
open rc1 for
select distinct e.pref_class_year
from entity e
where e.pref_class_year between i_start_class and i_end_class;
end;
When you call "someprocedure", you have an open cursor you can then fetch from:
BEGIN
...
someprocedure(TheCursor);
LOOP
fetch TheCursor into v_class_year;
exit when TheCursor%NOTFOUND;
...
END LOOP;
CLOSE TheCursor;
...
END; | unknown | |
d1345 | train | Setting the style might be accomplished by defining an in-page style declaration.
Here is what I mean:
var style = document.createElement('style');
style.type = 'text/css';
style.cssText = '.cssClass { color: #F00; }';
document.getElementsByTagName('head')[0].appendChild(style);
document.getElementById('someElementId').className = 'cssClass';
However, the modifying part can be a lot trickier than you think. Some regex solutions might do a good job, but here is another way I found.
if (!document.styleSheets) return;
var csses = [];
if (document.styleSheets[0].cssRules) { // Standards compliant
    csses = document.styleSheets[0].cssRules;
} else {
    csses = document.styleSheets[0].rules; // IE
}
for (var i = 0; i < csses.length; i++) {
    if ((csses[i].selectorText.toLowerCase() == '.cssclass') || (csses[i].selectorText.toLowerCase() == '.borders')) {
        csses[i].style.cssText = "color:#000";
    }
}
A: If I understand your question properly, it sounds like you're trying to set placeholder text in your css file, and then use javascript to parse out the text with the css value you want to set for that class. You can't do that in the way you're trying to do it. In order to do that, you'd have to grab the content of the CSS file out of the dom, manipulate the text, and then save it back to the DOM. But that's a really overly-complicated way to go about doing something that...
myElement.style.width = "400px";
...can do for you in a couple of seconds. I know it doesn't really address the issue of decoupling css from js, but there's not really a whole lot you can do about that. You're trying to set css dynamically, after all.
Depending on what you're trying to accomplish, you might want to try defining multiple classes and just changing the className property in your js.
A: Could you use jQuery for this? You could use
$(".class").css("property", val); /* or use the .width property */
A: There is a jQuery plugin called jQuery Rule,
http://flesler.blogspot.com/2007/11/jqueryrule.html
I tried it to dynamically set some div sizes of a board game. It works in FireFox, not in Chrome. I didn't try IE9. | unknown | |
d1346 | train | The driver core is the generic code that manages drivers, devices, buses, classes, etc. It is not tied to a specific bus or device. I believe the chapter you refer to provides several examples of the division of labor between the PCI bus driver and the driver core; for example, see Figure 14-3 (Device-creation process).
Of the three kernel modules you mention, two participate in the device core: ib_uverbs registers its character devices to export RDMA functionality to user-space; mlx5_core registers a PCI driver to handle ConnectX NICs; mlx5_ib can also be considered a driver, but the RDMA subsystem doesn't use the device core to register drivers (it has its own API - ib_register_device).
A:
what is driver core ??
Observe following flow of calls in Linux Source.
tps65086_regulator_probe---> devm_regulator_register--> regulator_register-->device_register(/drivers/regulator/tps65086-regulator.c--->/drivers/regulator/core.c---> drivers/base/core.c ).
tps65086 driver calls regulator core which in turn calls driver core.
This driver is following standard driver model.
include/linux/device.h ----> driver model objects are defined here.
drivers/base/ --> all functions that operate on driver model objects are defined here.
We can refer to this as the driver core, and it is the base for any driver framework.
All the registrations come here from the higher layers.
Any driver subsystem, whether PCI/USB/platform, is based on this.
drivers/base/core.c -- this is the core file of the standard driver model.
There is a slight confusion in naming; however, we can refer to drivers/base/ as the driver core.
Any difference between a character driver and other drivers?
/drivers/char/tlclk.c
tlclk_init-->register_chrdev -->register_chrdev --> cdev_add --> kobj_map (fs/char_dev.c ---> /drivers/base/map.c ).
A character device registration is done with char_dev, a file system driver, which again uses the infrastructure of the driver model base.
Whereas a non-character driver, like tps65086-regulator.c, may register with the driver core as below.
tps65086_regulator_probe---> devm_regulator_register--> regulator_register-->device_register-->device_add-->kobject_add
(/drivers/regulator/tps65086-regulator.c--->/drivers/regulator/core.c---> drivers/base/core.c )
Also, it's not just based on the type of driver, but on what kind of device it is and how the device needs to be handled.
Here is a PCI driver which registers a character device.
tw_probe-->register_chrdev --> cdev_add --> kobj_map ( /drivers/scsi/3w-xxxx.c -->fs/char_dev.c ---> /drivers/base/map.c )
There is no standard rule for whether a driver should call the driver core or not.
Any stack/framework can have its own core to manage a device, which is the case with the driver mlx5_ib (drivers/infiniband/core/).
But finally it will mostly use the kobject infrastructure and driver model objects, like struct device.
The driver model infrastructure was introduced to eliminate redundant kernel code and data structures,
so most drivers are based on it, and it is the effective way of writing a Linux driver. | unknown |
d1347 | train | First of all, I'd suggest using the Retrofit library for requests (https://square.github.io/retrofit/)
For creating a JSON string from your object you can use the Gson library:
Gson gson = new Gson();
String jsonInString = gson.toJson(obj);
Update
don't forget to add the Gson dependency to your Gradle file:
dependencies {
implementation 'com.google.code.gson:gson:2.8.6'
}
A: For parsing JSON objects & arrays in Java, we should use the Gson library.
Add it to your Gradle file:
implementation 'com.google.code.gson:gson:2.8.2' | unknown | |
d1348 | train | Assuming you've already ordered your educa in the desired order, you can use fct_relabel from the forcats package together with str_wrap, to change the factor labels in one step without converting it from character to factor again:
ggplot(educa_genhlth,
aes(x = forcats::fct_relabel(educa,
stringr::str_wrap,
width = 10),
fill = genhlth)) +
geom_bar(position = "fill") +
labs(title = "general health vs education background") +
xlab(NULL) +
scale_fill_discrete(name = "general health")
This approach also keeps the educa_genhlth$educa in the data frame in the original form, leaving you the flexibility to wrap it to other lengths in other plots.
A: The use of str_wrap reorders your factors. So you need to wrap first, and then reorder your factors:
educa_genhlth$educa <- stringr::str_wrap(educa_genhlth$educa,10)
educa_genhlth$educa <-factor(educa_genhlth$educa,ordered=TRUE,
stringr::str_wrap(c("Never attended school or only kindergarten",
"Grades 1 through 8 (Elementary)",
"Grades 9 though 11 (Some high school)",
"Grade 12 or GED (High school graduate)",
"College 1 year to 3 years (Some college or technical school)",
"College 4 years or more (College graduate)"),10))
p<-ggplot(educa_genhlth,aes(x=educa,fill=genhlth))+geom_bar(position="fill")
q<-p+aes(educa)+labs(title="general health vs education background")+xlab(NULL)
r<-q+scale_fill_discrete(name="general health")
r | unknown | |
d1349 | train | You need props as an argument for your component.
import React, {useState} from 'react';
function Test(props) {
let [click, setClick] = useState(0);
function funClick(){
setClick(click + 1)
}
return(
<div>
{props.render(click, setClick)}
</div>
)
}
export default Test; | unknown | |
d1350 | train | This may help:
import ssl
import urllib.request
# Ignore SSL certificate errors
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE
url = input('Enter - ')
html = urllib.request.urlopen(url, context=ctx).read()
A: Double check your compilation options, looks like something is wrong with your box.
At least the following code works for me:
from urllib.request import urlopen
resp = urlopen('https://github.com')
print(resp.read())
A: urllib.error.URLError: <urlopen error unknown url type: 'https>
The 'https and not https in the error message indicates that you did not try a http:// request but instead a 'https:// request which of course does not exist. Check how you construct your URL.
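You can reproduce this without any network access, since the failure happens while urllib is parsing the URL (example.com is just a placeholder here):

```python
import urllib.error
import urllib.request

# The stray quote becomes part of the URL scheme ('https), for which
# urllib has no registered handler, so it fails before opening any connection.
try:
    urllib.request.urlopen("'https://example.com'")
    message = None
except (urllib.error.URLError, ValueError) as e:
    message = str(e)

print(message)  # e.g. <urlopen error unknown url type: 'https>
```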
A: I had the same error when I tried to open a url with https, but no errors with http.
>>> from urllib.request import urlopen
>>> urlopen('http://google.com')
<http.client.HTTPResponse object at 0xb770252c>
>>> urlopen('https://google.com')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.7/urllib/request.py", line 222, in urlopen
return opener.open(url, data, timeout)
File "/usr/local/lib/python3.7/urllib/request.py", line 525, in open
response = self._open(req, data)
File "/usr/local/lib/python3.7/urllib/request.py", line 548, in _open
'unknown_open', req)
File "/usr/local/lib/python3.7/urllib/request.py", line 503, in _call_chain
result = func(*args)
File "/usr/local/lib/python3.7/urllib/request.py", line 1387, in unknown_open
raise URLError('unknown url type: %s' % type)
urllib.error.URLError: <urlopen error unknown url type: https>
This was done on Ubuntu 16.04 using Python 3.7. The native Ubuntu defaults to Python 3.5 in /usr/bin and previously I had source downloaded and upgraded to 3.7 in /usr/local/bin. The fact that there was no error for 3.5 pointed to the executable /usr/bin/openssl not being installed correctly in 3.7 which is also evident below:
>>> import ssl
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.7/ssl.py", line 98, in <module>
import _ssl # if we can't import it, let the error propagate
ModuleNotFoundError: No module named '_ssl'
By consulting this link, I changed SSL=/usr/local/ssl to SSL=/usr in the 3.7 source dir's Modules/Setup.dist, copied it into Setup, and then rebuilt Python 3.7.
$ ./configure
$ make
$ make install
Now it is fixed:
>>> import ssl
>>> ssl.OPENSSL_VERSION
'OpenSSL 1.0.2g 1 Mar 2016'
>>> urlopen('https://www.google.com')
<http.client.HTTPResponse object at 0xb74c4ecc>
>>> urlopen('https://www.google.com').read()
b'<!doctype html>...
and 3.7 has been compiled with OpenSSL support successfully. Note that the Ubuntu command "openssl version" is not the complete story until you can load ssl into Python. | unknown |
d1351 | train | Spring STS is only an Eclipse plugin which helps the developer manage Spring beans. It is not mandatory for developing a Spring-based application.
So your second approach, adding the Spring library, was the right one. You can download Spring here: Spring Downloads, and then add it as a library to your project.
But seriously... I would encourage you to do a tutorial first: http://www.springsource.org/tutorials
A: The plugin does not include the Spring libraries.
The plugin only manages the configuration references in your project build in Eclipse.
The libraries are the actual jar files, which are required at compile time as well as at run time.
If you skip the plugin installation, your code will still work, provided your configuration is correct.
But if you skip downloading and installing the Spring libraries, your code will not compile or run.
Please go through a step-by-step tutorial, for example:
Spring Step By Step Tutorial | unknown | |
d1352 | train | Based on your description, you can combine window functions with generate_series():
SELECT c.Customer_ID, mon,
SUM(Order_Amt_Total_USD) as month_total,
SUM(SUM(Order_Amt_Total_USD)) OVER (PARTITION BY c.Customer_ID ORDER BY mon) as running_total
FROM (SELECT DISTINCT Customer_Id FROM tbl) c CROSS JOIN
generate_series('2015-01-01'::date, now(), interval '1 month') mon LEFT JOIN
tbl t
ON t.Customer_Id = c.customer_id and
date_trunc('month', t.Order_Date) = mon
GROUP BY c.Customer_ID, mon
ORDER BY 1, 2;
Here is a SQL Fiddle.
A: The sample below shows how you can use PARTITION BY to achieve the output:
Schema
CREATE TABLE tbl (Order_ID varchar(10), Customer_ID varchar(10), Order_Date date, Order_Amt_Total_USD bigint);
INSERT INTO tbl (Order_ID, Customer_ID, Order_Date, Order_Amt_Total_USD)
VALUES
('100', 'qwe', '2015-08-04', 6),
('101', 'qwe', '2015-05-20', 7),
('102', 'qwe', '2015-04-08', 8),
('103', 'qwe', '2015-04-07', 9),
('109', 'aaa', '2015-04-28', 1),
('110', 'aaa', '2015-04-28', 2),
('111', 'aaa', '2015-05-19', 3),
('112', 'aaa', '2015-08-06', 4),
('113', 'aaa', '2015-08-27', 5),
('114', 'aaa', '2015-08-07', 6)
Query
select Order_Date , Customer_ID, Order_Amt_Total_USD,
sum(Order_Amt_Total_USD) over (partition by Customer_ID order by Order_Date) as cumulative
from tbl; | unknown | |
d1353 | train | I suggest looking at Couchbase Single Server (CouchDb). It holds a bunch of JSON documents in a schema-less structure. Structure is created through the use of 'Views' or indexes. They have a version running on Android too, although this is still in early development. | unknown | |
d1354 | train | You have not set the button type; it is a read-only property, so you cannot assign it directly. Change your code like this:
UIButton* loginButton = [UIButton buttonWithType:UIButtonTypeRoundedRect];
[loginButton setFrame:CGRectMake(5,5,302, 34)];
[loginButton setBackgroundImage:[UIImage imageNamed:@"LoginButton.png"] forState:UIControlStateNormal];
[loginButton setBackgroundImage:[UIImage imageNamed:@"LoginButton_pressed.png"] forState:UIControlStateSelected];
[loginButton setTitle:@"Login" forState:UIControlStateNormal];
[cell setBackgroundColor:[UIColor clearColor]];
[cell addSubview:loginButton]; | unknown | |
d1355 | train | Add these styles
#wrapper {
display: flex;
}
h2 {
margin: 0 !important;
}
#wrapper {
display: flex;
}
h2 {
margin: 0 !important;
}
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<head>
<!-- Latest compiled and minified CSS -->
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css" rel="stylesheet" id="bootstrap-css">
<!-- Latest compiled JavaScript -->
<script src="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/js/bootstrap.min.js"></script>
<!-- jQuery library -->
<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
<!--- FontAwesome -->
<link href="//maxcdn.bootstrapcdn.com/font-awesome/4.6.3/css/font-awesome.min.css" rel="stylesheet" type="text/css">
</head>
<body>
<div id="wrapper">
<!-- Sidebar -->
<div id="sidebar-wrapper">
<ul class="sidebar-nav">
<li class="sidebar-brand">
<a href="#">
Start Bootstrap
</a>
</li>
<li>
<a href="#">Dashboard</a>
</li>
<li>
<a href="#">Shortcuts</a>
</li>
</ul>
</div>
<!-- /#sidebar-wrapper -->
<div id="page-content-wrapper">
<div class="container-fluid">
<h2>Results</h2>
<ul class="nav nav-pills">
<li class="active"><a data-toggle="pill" href="#topic">Topics</a></li>
<li><a data-toggle="pill" href="#result1">Result1</a></li>
<li><a data-toggle="pill" href="#result2">Result2</a></li>
<li><a data-toggle="pill" href="#result3">Result3</a></li>
<li><a data-toggle="pill" href="#result4">Result4</a></li>
</ul>
<div id="result1">
Result 1 Some content Result 1 Some content Result 1 Some content Result 1 Some content Result 1 Some content Result 1 Some content Result 1 Some content Result 1 Some content Result 1 Some content Result 1 Some content Result 1 Some content Result 1
Some content Result 1 Some content Result 1 Some content Result 1 Some content Result 1 Some content Result 1 Some content Result 1 Some content Result 1 Some content Result 1 Some content Result 1 Some content Result 1 Some content Result 1
Some content Result 1 Some content Result 1 Some content Result 1 Some content Result 1 Some content Result 1 Some content Result 1 Some content Result 1 Some content Result 1 Some content Result 1 Some content Result 1 Some content Result 1
Some content Result 1 Some content Result 1 Some content Result 1 Some content Result 1 Some content Result 1 Some content
</div>
<div id="result2">
Result2 Result2 Result2 Result2 Result2 Result2 Result2 Result2 Result2 Result2 Result2 Result2 Result2 Result2 Result2 Result2 Result2 Result2 Result2 Result2 Result2 Result2 Result2 Result2 Result2 Result2 Result2 Result2 Result2 Result2 Result2 Result2
Result2 Result2 Result2 Result2 Result2 Result2 Result2 Result2 Result2 Result2 Result2 Result2 Result2 Result2 Result2 Result2 Result2 Result2 Result2 Result2 Result2 Result2 Result2 Result2 Result2 Result2 Result2 Result2 Result2 Result2 Result2
Result2 Result2 Result2 Result2 Result2
</div>
<div id="result3">
Result3 Result3 Result3 Result3 Result3 Result3 Result3 Result3 Result3 Result3 Result3 Result3 Result3 Result3 Result3 Result3 Result3 Result3 Result3 Result3 Result3 Result3 Result3 Result3 Result3 Result3 Result3 Result3 Result3 Result3 Result3 Result3
Result3 Result3 Result3 Result3 Result3 Result3 Result3 Result3 Result3 Result3 Result3 Result3 Result3 Result3 Result3 Result3 Result3 Result3 Result3 Result3 Result3 Result3 Result3 Result3 Result3 Result3 Result3 Result3 Result3 Result3 Result3
Result3
</div>
<div id="result4">
Result 4 Result 4 Result 4 Result 4 Result 4 Result 4 Result 4 Result 4 Result 4 Result 4 Result 4 Result 4 Result 4 Result 4 Result 4 Result 4 Result 4 Result 4 Result 4 Result 4 Result 4 Result 4 Result 4 Result 4 Result 4 Result 4 Result 4 Result 4
Result 4 Result 4 Result 4 Result 4 Result 4 Result 4 Result 4 Result 4 Result 4 Result 4 Result 4 Result 4 Result 4 Result 4 Result 4 Result 4 Result 4 Result 4 Result 4 Result 4 Result 4 Result 4 Result 4 Result 4 Result 4 Result 4 Result
4 Result 4 Result 4 Result 4 Result 4 Result 4 Result 4 Result 4 Result 4 Result 4 Result 4 Result 4 Result 4 Result 4 Result 4 Result 4 Result 4
</div>
</div>
<!-- /#container-fluid -->
</div>
<!-- /#page-content-wrapper -->
</div>
<!-- /#wrapper -->
<!-- jQuery -->
<script src="{{ url_for('static', filename='js/jquery.js') }}"></script>
<!-- Bootstrap Core JavaScript -->
<script src="{{ url_for('static', filename='js/bootstrap.min.js') }}"></script>
<!-- Menu Toggle Script -->
<script>
$("#menu-toggle").click(function(e) {
e.preventDefault();
$("#wrapper").toggleClass("toggled");
});
$(".nav-pills a").click(function(e) {
e.preventDefault();
var id = this.hash.substr(1);
document.getElementById(id).scrollIntoView();
})
</script> | unknown | |
d1356 | train | Try changing this line, since 10G2 is not a valid Python literal and needs to be quoted as a string:
if group == 10G2:
to:
if group == '10G2': | unknown | |
d1357 | train | SelectedIndex is not the same as SelectedItem.
This is the same as with the default WPF Controls.
SelectedIndex is the index (an integer) of the collection item you have selected, while SelectedItem is the item object itself.
Example:
Let's take this collection: new ObservableCollection<string>(){ "String1", "String2", "String3" }
If the SelectedItem is/should be "String1", the SelectedIndex is 0.
So just replace
<Setter Property="SelectedIndex" Value="{Binding CurrentPlanSet, Mode=TwoWay}"/>
with
<Setter Property="SelectedItem" Value="{Binding CurrentPlanSet, Mode=TwoWay}"/> | unknown | |
d1358 | train | Try here, which has already been asked, and has a solution:
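The idea, sketched in Python since the question's language is not shown here (the date strings and format below are made-up examples): raw date strings compare character by character, while parsed date objects compare chronologically.

```python
from datetime import datetime

# Made-up example dates in day/month/year format
a_raw, b_raw = "03/01/2015", "12/11/2014"

# String comparison is character-by-character: "0" < "1",
# so the later date wrongly sorts before the earlier one.
print(a_raw > b_raw)  # False

# Parsing into datetime objects first gives the right answer.
a = datetime.strptime(a_raw, "%d/%m/%Y")
b = datetime.strptime(b_raw, "%d/%m/%Y")
print(a > b)  # True: 3 Jan 2015 is after 12 Nov 2014
```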
use Datetime() to format your dates before comparison. | unknown | |
d1359 | train | This problem also seems to occur when an exchange is declared only in an outbound endpoint. There is an open bug concerning this in the Mulesoft JIRA, and you can vote for it to help them prioritize it.
I took a look at the source code, and the problem seems to be that there is simply no code to declare exchanges when an outbound endpoint is started. In your case, you'd probably want the code to run at the time the message is sent, or maybe at the time the exchange is deleted. This timing wouldn't be covered by the aforementioned bug, but you might open a new issue describing the use case and the desired functionality. And a pull request would probably be even better! ;) | unknown | |
d1360 | train | This bit of code is never going to work.
An uncaught exception handler is called after a thread has terminated due to an exception that was not caught. If you then attempt to re-throw the exception, there will be nothing else to catch it.
*If you want to have your re-thrown exception handled in the normal way, you have to do the logging further up the stack.
*The only other option you have is to explicitly chain to another UncaughtExceptionHandler.
I suspect that the freeze you are experiencing is a direct consequence of the current thread dying.
FOLLOWUP:
... is there some other way in Android to intercept exceptions at a global level, without using Thread.setDefaultUncaughtExceptionHandler?
AFAIK, no.
Or how would the chaining look like?
Something like this:
public void uncaughtException(Thread thread, final Throwable ex) {
// do report ...
getSomeOtherHandler(...).uncaughtException(thread, ex);
}
Or, since ThreadGroup implements UncaughtExceptionHandler, ...
public void uncaughtException(Thread thread, final Throwable ex) {
// do report ...
thread.getThreadGroup().uncaughtException(thread, ex);
}
Basically, you just call some other handler. It's not rocket science.
But note that you can't catch and recover from an exception in the "normal" way once you've landed in one of these handlers. | unknown | |
d1361 | train | RewriteCond %{HTTP_HOST} ^example-old\.uk$ [NC]
RewriteRule ^(.*)$ http://example-new.com/gr [R=301,L]
You've not actually stated the problem you are having. However, if you want to redirect to the same URL-path, but with a /gr/ path segment prefix (language code) then you are missing a backreference to the captured URL path (otherwise there's no reason to have the capturing group in the RewriteRule pattern to begin with).
For example:
RewriteRule (.*) http://example-new.com/gr/$1 [R=301,L]
The $1 backreference contains the value captured by the preceding (.*) pattern.
A: I assume that is what you are looking for:
RewriteEngine on
RewriteCond %{HTTP_HOST} ^example-old\.uk$ [NC]
RewriteRule ^ http://example-new.com/gr%{REQUEST_URI} [R=301,END]
It is a good idea to start out with a 302 temporary redirection and only change that to a 301 permanent redirection later, once you are certain everything is correctly set up. That prevents caching issues while trying things out...
In case you receive an internal server error (http status 500) using the rule above then chances are that you operate a very old version of the apache http server. You will see a definite hint to an unsupported [END] flag in your http servers error log file in that case. You can either try to upgrade or use the older [L] flag, it probably will work the same in this situation, though that depends a bit on your setup.
This implementation will work likewise in the http servers host configuration or inside a distributed configuration file (".htaccess" file). Obviously the rewriting module needs to be loaded inside the http server and enabled in the http host. In case you use a distributed configuration file you need to take care that it's interpretation is enabled at all in the host configuration and that it is located in the host's DOCUMENT_ROOT folder.
And a general remark: you should always prefer to place such rules in the http servers host configuration instead of using distributed configuration files (".htaccess"). Those distributed configuration files add complexity, are often a cause of unexpected behavior, hard to debug and they really slow down the http server. They are only provided as a last option for situations where you do not have access to the real http servers host configuration (read: really cheap service providers) or for applications insisting on writing their own rules (which is an obvious security nightmare). | unknown | |
d1362 | train | Your Employee class did not have detachable = "true".
You should change
@PersistenceCapable(identityType = IdentityType.APPLICATION)
to
@PersistenceCapable(identityType = IdentityType.APPLICATION, detachable = "true")
A: Is it significant that in addEmployee, you obtain the persistenceManager like this:
PersistenceManager pm = PMF.get().getPersistenceManager();
but in getEmployees you call it like this
PersistenceManager pm = getPersistenceManager();
without using PMF.get().
A: I changed the code a bit and everything is normal now, though I still don't know what caused this problem.
I'm using Lists now instead of the collections (1), I return everything as a simple array through the RPC (2), and I changed the way I did the query (3).
(1) List results = (List) query.execute();
(2) return (Employee[]) employees.toArray(new Employee[0]);
(3) Query query = pm.newQuery(Employee.class); | unknown | |
d1363 | train | Are you looking for something like jqconsole?
A: Not JavaScript tools for emulating the console, but here are some other ways around it:
Chrome for Android has remote debugging through Chrome for Desktop
And I think Safari has a similar feature for iOS devices. | unknown | |
d1364 | train | A good solution would probably be to ditch NHibernate for this task and insert your data into a temporary table, then join on that temporary table.
However, if you want to use NHibernate, you could speed this up by not issuing 10,000 separate queries (which is what's happening now). You could try to break your query into reasonably sized chunks instead:
List<object[]> ProcessChunk(
IStatelessSession session,
int start,
IEnumerable<Registro> currentChunk)
{
var disjunction = Restrictions.Disjunction();
foreach (var item in currentChunk)
{
var restriction = Restrictions.Conjunction()
    .Add(Restrictions.Where<Registro>(t => t.Asunto == item.Asunto));
/* etc, call .Add(..) on the conjunction for every field you want to include */
disjunction.Add(restriction);
}
return session.QueryOver<Registro>()
.Where(disjunction)
.SelectList(list => list
    .Select(t => t.Id)
    /* etc, the rest of the select list */
)
.List<object[]>()
.ToList();
}
Then call that method from your main loop:
const int chunkSize = 500;
for (int i = 0; i < records.Length; i += chunkSize)
{
var currentChunk = records.Skip(i).Take(chunkSize);
resultList.AddRange(ProcessChunk(session, i, currentChunk));
}
What you're doing here is issuing 20 (instead of 10,000) queries that look like:
select
/* select list */
from
[Registro]
where
([Registro].[Asunto] = 'somevalue' and ... and ... and .. ) or
([Registro].[Asunto] = 'someothervalue' and ... and ... and ... )
/* x500, one for each item in the chunk */
and so on. Each query will return up to 500 records if 500 is the size of each chunk.
This is still not going to be blazing fast. My local test about halved the running time.
Depending on your database engine, you might quickly run up against a maximum number of parameters that you can pass. You'll probably have to play with the chunkSize to get it to work.
You could probably get this down to a few seconds if you used a temporary table and ditched NHibernate though. | unknown | |
d1365 | train | Maybe the new class QCommandLineParser can help you. | unknown | |
d1366 | train | You can save the file as arrays in Get-Content and Set-Content:
$file=(Get-Content "C:\BatchPractice\test.txt")
Then you can edit it like arrays:
$file[LINE_NUMBER]="New line"
Where LINE_NUMBER is the line number starting from 0.
And then overwrite to file:
$file|Set-Content "C:\BatchPractice\test.txt"
You can implement this in code. Create a variable $i=0 and increment it at the end of loop. Here $i will be the line number at each iteration.
HTH
A: Based on your code it seems you want to take any line that contains 'dsa' and remove the contents after the first backslash up until the last backslash. If that's the case I'd recommend simplifying your code with regex. First I made a sample file since none was provided.
$tempfile = New-TemporaryFile
@'
abc\def\ghi\jkl
abc\dsa\ghi\jkl
zyx\vut\dsa\srq
zyx\vut\srq\pon
'@ | Set-Content $tempfile -Encoding UTF8
Now we will read in all lines (unless this is a massive file)
$text = Get-Content $tempfile -Encoding UTF8
Next we'll make a regex object with the pattern we want to replace. The double backslash is to escape the backslash since it has meaning to regex.
$regex = [regex]'(?<=.+\\).+\\'
Now we will loop over every line, if it has dsa in it we will run the replace against it, otherwise we will output the line.
$text | ForEach-Object {
if($_.contains('dsa'))
{
$regex.Replace($_,'')
}
else
{
$_
}
} -OutVariable newtext
You'll see the output on the screen but it's also capture in $newtext variable. I recommend ensuring it is the output you are after prior to writing.
abc\def\ghi\jkl
abc\jkl
zyx\srq
zyx\vut\srq\pon
Once confirmed, simply write it back to the file.
$newtext | Set-Content $tempfile -Encoding UTF8
You can obviously combine the steps as well.
$text | ForEach-Object {
if($_.contains('dsa'))
{
$regex.Replace($_,'')
}
else
{
$_
}
} | Set-Content $tempfile -Encoding UTF8 | unknown | |
d1367 | train | A Qt Quick Layout resizes all its child items (e.g. ColumnLayout resizes children's heights, RowLayout resizes children's widths), so you should use the Layout attached property to indicate how to lay them out, rather than setting the sizes. E.g.
ScrollView {
Layout.maximumHeight: 150 // height will be updated according to these layout properties
width: 150
clip: true
ListView {
model: theModel
anchors.fill: parent
delegate: Column {
TextField {
text: display
}
}
}
}
A: A Layout changes the sizes and positions of its children. But as I was specifying the sizes of the children I only wanted to change the positions. A Positioner is used for this (specifically, a Column instead of a ColumnLayout). Additionally I had not set the size of the parent Layout (/Positioner), so I now do this with anchors.fill: parent.
Column {
anchors.fill: parent
ScrollView
{
width: 150
height: 150
clip: true
ListView {
model: theModel
anchors.fill: parent
delegate: Column {
TextField {
text: display
}
}
}
}
Rectangle {
color: "black"
width: 100
height: 30
}
}
Thanks to the other comments and answers for helping me realize this!
d1368 | train | $sum = array();
for ($i = 0; $i < count($array1); $i++) {
    $sum[$i] = $array1[$i] + $array2[$i];
}
print_r($sum);
=====Update======
<?php
$a = array(2, 4, 6, 8);
echo "sum(a) = " . array_sum($a) . "\n";
$b = array("a" => 1.2, "b" => 2.3, "c" => 3.4);
echo "sum(b) = " . array_sum($b) . "\n";
?> | unknown | |
d1369 | train | This is a two part answer; Part 1 addresses the question with a known set of socials (Github, Pinterest, etc). I included that to show how to map a Map to a Codable.
Part 2 is the answer (TL;DR, skip to Part 2) so the social can be mapped to a dictionary for varying socials.
Part 1:
Here's an abbreviated structure that will map the Firestore data to a codable object, including the social map field. It is specific to the 4 social fields listed.
struct SocialsCodable: Codable {
var Github: String
var Pinterest: String
var Soundcloud: String
var TikTok: String
}
struct UserWithMapCodable: Identifiable, Codable {
@DocumentID var id: String?
var socials: SocialsCodable? //socials is a `map` in Firestore
}
and the code to read that data
func readCodableUserWithMap() {
let docRef = self.db.collection("users").document("uid_0")
docRef.getDocument { (document, error) in
if let err = error {
print(err.localizedDescription)
return
}
if let doc = document {
let user = try! doc.data(as: UserWithMapCodable.self)
print(user.socials) //the 4 socials from the SocialsCodable object
}
}
}
Part 2:
This is the answer that treats the socials map field as a dictionary
struct UserWithMapCodable: Identifiable, Codable {
@DocumentID var id: String?
var socials: [String: String]?
}
and then the code to map the Firestore data to the object
func readCodableUserWithMap() {
let docRef = self.db.collection("users").document("uid_0")
docRef.getDocument { (document, error) in
if let err = error {
print(err.localizedDescription)
return
}
if let doc = document {
let user = try! doc.data(as: UserWithMapCodable.self)
if let mappedField = user.socials {
mappedField.forEach { print($0.key, $0.value) }
}
}
}
}
and the output for part 2
TikTok ogotok
Pinterest pintepogo
Github popgit
Soundcloud musssiiiccc
I may also suggest taking the socials out of the user document completely and storing them as a separate collection
socials
some_uid
Github: popgit
Pinterest: pintepogo
another_uid
Github: git-er-done
TikTok: dancezone
That's pretty scaleable and allows for some cool queries: which users have TikTok for example. | unknown | |
d1370 | train | Update:
Based on comments updating answer.
JSFiddle Demo
Does this help you?
Logic:
Check if the checkbox is checked; if it is, mark all of the respective column cells as selected, no matter whether they were already selected or not, and vice versa for unchecked. Thus the end user will be able to select/unselect whole columns using the checkbox!
$(':checkbox').on('change', function(e) {
var row = $(this).closest('tr');
var hmc = row.find(':checkbox:checked').length;
row.find('td.counter').text(hmc);
});
$("td.zero").on("click", function () {
if ( $( this ).hasClass( "zero2" ) ) {
$(this).removeClass("zero2");
var row3 = $(this).closest('tr');
var wal4 = $(this).text();
var wal5 = $(this).closest('tr').children('td.counter2').text();
wal6 = parseFloat(wal5, 10) - parseFloat(wal4, 10);
row3.find('td.counter2').text(wal6.toFixed(2));
} else {
$(this).addClass("zero2");
var row2 = $(this).closest('tr');
var wal = $(this).text();
var wal2 = $(this).closest('tr').children('td.counter2').text();
wal3 = parseFloat(wal, 10) + parseFloat(wal2, 10);
row2.find('td.counter2').text(wal3.toFixed(2));
}
});
$(':checkbox.taker').on('change', function(e) {
var elem = $(this).parent().index();
$('tr').each(function(index){
var td = $(this).children().eq(elem);
if(index > 1){
if($(':checkbox.taker:checked').length > 0){
td.addClass('zero2');
}else{
td.removeClass('zero2');
}
}
});
});
td
{
text-align: center;
padding: 8px 8px 8px 8px;
cursor: default;
}
input.ptaszek {
transform: scale(2);
}
td.zero2{
background-color: red;
}
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<table>
<thead>
<tr>
<th>X</th><th>X</th><th>X</th>
<th>Count1</th><th>Count2</th><th>Count3</th>
<th>Val1</th><th>Val2</th><th>Val3</th><th>Val4</th><th>Val5</th>
</tr>
</thead>
<tbody>
<tr><td></td><td></td><td></td><td></td><td></td><td></td>
<td><input type='checkbox' class='taker'></td>
<td><input type='checkbox' class='taker'></td>
<td><input type='checkbox' class='taker'></td>
<td><input type='checkbox' class='taker'></td>
<td><input type='checkbox' class='taker'></td>
</tr>
<tr>
<td><div><input type='checkbox' name='chkboxarray' class='ptaszek'></div></td>
<td><div><input type='checkbox' name='chkboxarray' class='ptaszek'></div></td>
<td><div><input type='checkbox' name='chkboxarray' class='ptaszek'></div></td>
<td class='counter'>0</td><td class='counter2'>0</td><td class='counter3'>0</td>
<td class='zero'>0.5</td><td class='zero'>5</td><td class='zero'>2.1</td><td class='zero'>0.2</td><td class='zero'>1.7</td>
</tr>
<tr>
<td><div><input type='checkbox' name='chkboxarray' class='ptaszek'></div></td>
<td><div><input type='checkbox' name='chkboxarray' class='ptaszek'></div></td>
<td><div><input type='checkbox' name='chkboxarray' class='ptaszek'></div></td>
<td class='counter'>0</td><td class='counter2'>0</td><td class='counter3'>0</td>
<td class='zero'>1.4</td><td class='zero'>0.5</td><td class='zero'>2</td><td class='zero'>1.1</td><td class='zero'>1.5</td>
</tr>
</tbody>
</table> | unknown | |
d1371 | train | It compiles fine without the private: header. Why do you have this? Is the struct declared inside of a class?
EDIT
You have used Room before you declare it:
const Room * findRoom( int roomNumber );
Also, you can't return a Room object through the public method you have declared, since outside code won't know anything about it.
You need to predeclare it before using it:
class Graph {
public:
struct Room;
const Room * findRoom( int roomNumber );
struct Room
{
bool visted;
int myNumber;
Graph::Room *North;
Graph::Room *East;
Graph::Room *South;
Graph::Room *West;
};
Room room;
};
int main (){
Graph x;
return 0;
}
Or you could just move the second private up, above the public section.
A: If you are using a nested struct as an argument to a method of the containing class, then you must use the fully-qualified name, e.g. void outerclass::mymethod(outerclass::room). Try that. You may need to make the struct public too.
A: *Room cannot be private since you use it in your public member functions.
*Either forward declare it like this:
struct Room;
// destructor
~Graph();
*Or just declare and implement it before you use it at the top of the class.
*void move( Room &*room , String direction ); //this is not valid C++
A: You have to declare any type before using it. A forward declaration suffices because you are defining Graph::Room in the same scope. However, since you have to define it anyway, I would suggest moving it up to some point before first using it.
Making Room private within Graph is perfectly legal (it is questionable if it is reasonable, though, if your public interface fumbles with it).
On a side note: Pointers to references are no valid types (references to pointers are!). Your move function is therefore invalid. | unknown | |
d1372 | train | Another possible solution: do not try to avoid the context manager exit method, just duplicate stdout.
with (os.fdopen(os.dup(sys.stdout.fileno()), 'w')
if target == '-'
else open(target, 'w')) as f:
f.write("Foo")
A: Stick with your current code. It's simple and you can tell exactly what it's doing just by glancing at it.
Another way would be with an inline if:
handle = open(target, 'w') if target else sys.stdout
handle.write(content)
if handle is not sys.stdout:
handle.close()
But that isn't much shorter than what you have and it looks arguably worse.
You could also make sys.stdout unclosable, but that doesn't seem too Pythonic:
sys.stdout.close = lambda: None
with (open(target, 'w') if target else sys.stdout) as handle:
handle.write(content)
A: import contextlib
import sys
with contextlib.ExitStack() as stack:
h = stack.enter_context(open(target, 'w')) if target else sys.stdout
h.write(content)
Just two extra lines if you're using Python 3.3 or higher: one line for the extra import and one line for the stack.enter_context.
A: If it's fine that sys.stdout is closed after the with body, you can also use patterns like this:
# Use stdout when target is "-"
with open(target, "w") if target != "-" else sys.stdout as f:
f.write("hello world")
# Use stdout when target is falsy (None, empty string, ...)
with open(target, "w") if target else sys.stdout as f:
f.write("hello world")
or even more generally:
with target if isinstance(target, io.IOBase) else open(target, "w") as f:
f.write("hello world")
A: As pointed in Conditional with statement in Python, Python 3.7 allows using contextlib.nullcontext for that:
from contextlib import nullcontext
with open(target, "w") if target else nullcontext(sys.stdout) as f:
f.write(content)
A: An improvement of Wolph's answer
import sys
import contextlib
@contextlib.contextmanager
def smart_open(filename: str, mode: str = 'r', *args, **kwargs):
'''Open files and i/o streams transparently.'''
if filename == '-':
if 'r' in mode:
stream = sys.stdin
else:
stream = sys.stdout
if 'b' in mode:
fh = stream.buffer # type: IO
else:
fh = stream
close = False
else:
fh = open(filename, mode, *args, **kwargs)
close = True
try:
yield fh
finally:
if close:
try:
fh.close()
except AttributeError:
pass
This allows binary IO and pass eventual extraneous arguments to open if filename is indeed a file name.
A: Just thinking outside of the box here, how about a custom open() method?
import sys
import contextlib
@contextlib.contextmanager
def smart_open(filename=None):
if filename and filename != '-':
fh = open(filename, 'w')
else:
fh = sys.stdout
try:
yield fh
finally:
if fh is not sys.stdout:
fh.close()
Use it like this:
# For Python 2 you need this line
from __future__ import print_function
# writes to some_file
with smart_open('some_file') as fh:
print('some output', file=fh)
# writes to stdout
with smart_open() as fh:
print('some output', file=fh)
# writes to stdout
with smart_open('-') as fh:
print('some output', file=fh)
A: Why LBYL when you can EAFP?
try:
with open(target, 'w') as h:
h.write(content)
except TypeError:
sys.stdout.write(content)
Why rewrite it to use the with/as block uniformly when you have to make it work in a convoluted way? You'll add more lines and reduce performance.
A: I'd also go for a wrapper function, which can be pretty simple if you can ignore the mode (and consequently stdin vs. stdout), for example:
from contextlib import contextmanager
import sys
@contextmanager
def open_or_stdout(filename):
if filename != '-':
with open(filename, 'w') as f:
yield f
else:
yield sys.stdout
A: Okay, if we are getting into one-liner wars, here's:
(target and open(target, 'w') or sys.stdout).write(content)
I like Jacob's original example as long as content is only written in one place. It would be a problem if you end up re-opening the file for many writes. I think I would just make the decision once at the top of the script and let the system close the file on exit:
output = target and open(target, 'w') or sys.stdout
...
output.write('thing one\n')
...
output.write('thing two\n')
You could include your own exit handler if you think it's more tidy
import atexit
def cleanup_output():
global output
if output is not sys.stdout:
output.close()
atexit.register(cleanup_output)
A: This is a simpler and shorter version of the accepted answer
import contextlib, sys
def writer(fn):
@contextlib.contextmanager
def stdout():
yield sys.stdout
return open(fn, 'w') if fn else stdout()
usage:
with writer('') as w:
w.write('hello\n')
with writer('file.txt') as w:
w.write('hello\n')
A: If you really must insist on something more "elegant", i.e. a one-liner:
>>> import sys
>>> target = "foo.txt"
>>> content = "foo"
>>> (lambda target, content: (lambda target, content: filter(lambda h: not h.write(content), (target,))[0].close())(open(target, 'w'), content) if target else sys.stdout.write(content))(target, content)
foo.txt appears and contains the text foo.
A: How about opening a new fd for sys.stdout? This way you won't have any problems closing it:
if not target:
target = "/dev/stdout"
with open(target, 'w') as f:
f.write(content)
A: if (out != sys.stdout):
with open(out, 'wb') as f:
f.write(data)
else:
out.write(data)
Slight improvement in some cases.
A: The following solution is not a beauty, but from a time long, long ago; just before with ...
handler = open(path, mode = 'a') if path else sys.stdout
try:
print('stuff', file = handler)
... # other stuff or more writes/prints, etc.
except Exception as e:
    if path is not None: handler.close()
    raise e
if path is not None: handler.close()
A: One way to solve it is with polymorphism. pathlib.Path has an open method that functions as you would expect:
from pathlib import Path
output = Path("/path/to/file.csv")
with output.open(mode="w", encoding="utf-8") as f:
print("hello world", file=f)
we can copy this interface for printing
import sys
class Stdout:
def __init__(self, *args):
pass
def open(self, mode=None, encoding=None):
return self
def __enter__(self):
return sys.stdout
def __exit__(self, exc_type, exc_value, traceback):
pass
Now we simply replace Path with Stdout
output = Stdout("/path/to/file.csv")
with output.open(mode="w", encoding="utf-8") as f:
print("hello world", file=f)
This isn't necessarily better than overloading open, but it's a convenient solution if you're using Path objects.
A: With Python 3 you can wrap the stdout file descriptor in an IO object and avoid closing it on context exit by passing closefd=False:
h = open(target, 'w') if target else open(sys.stdout.fileno(), 'w', closefd=False)
with h as h:
h.write(content) | unknown | |
d1373 | train | fflush(stdin) has undefined behavior. If you want to discard characters entered after the scanf() call, you can read and discard them.
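A minimal sketch of such a helper (the function name is my own; getchar() is equivalent to fgetc(stdin), so the helper takes a FILE * and you would call it as discard_line(stdin) right after the scanf()):

```c
#include <stdio.h>

/* Portable replacement for the non-standard fflush(stdin): read and
 * discard characters up to and including the next newline (or EOF). */
void discard_line(FILE *stream)
{
    int c;
    while ((c = fgetc(stream)) != '\n' && c != EOF)
        ;  /* just throw the characters away */
}
```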
You can use getchar() in a loop to clear the leftover characters. | unknown |
d1374 | train | You need to increase the timeOutSeconds of your request.The default time of a request is 60 seconds.You might increase to 10 mins(600 seconds).
[request setTimeOutSeconds:600];
You also need to check the maximum upload file size specified in your PHP configuration. Please go through this link; it might help you.
http://drupal.org/node/97193
A: Turned out to be a problem in my .htpconfig file. I had a hard set limit set on it. Adjusting it worked like a charm! | unknown | |
d1375 | train | You can use except from to compute difference set of two sets:
proc sql;
create table want as
select * from have except select * from to_delete
;
quit; | unknown | |
d1376 | train | Chrome driver has a page on how to identify the exact version that you need, but it's a pain, especially if you regularly update chrome, or want to make it part of a build process.
Not sure about robot framework, but chromedriver-autoinstaller sure helped me out when building a pipeline to get a selenium/chrome project going, as it will detect the version of chrome installed and install the correct driver version automatically. | unknown | |
d1377 | train | Simple, protect the range using Data/Protected Sheets & Ranges. | unknown | |
d1378 | train | you may use Unions ( something like this
SELECT * FROM articles WHERE published = '1' ORDER BY date_time DESC LIMIT 10
UNION
SELECT * FROM articles WHERE WHERE topic="This Week" order by ID desc LIMIT 1
you may use UNION or UNION ALL whichever suits your need
you may wanna check the actual query and format it as per your needs | unknown | |
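If you want to sanity-check the shape of such a query without a MySQL server, a small sqlite3 sketch works; the sample rows below are made up, and SQLite additionally wants each ordered/limited part wrapped as a subquery:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript("""
CREATE TABLE articles (id INTEGER PRIMARY KEY, topic TEXT,
                       published INTEGER, date_time TEXT);
INSERT INTO articles (topic, published, date_time) VALUES
  ('News',      1, '2023-01-01'),
  ('News',      1, '2023-01-02'),
  ('This Week', 1, '2023-01-03');
""")

# Latest published articles, plus the newest 'This Week' article;
# UNION (as opposed to UNION ALL) removes the duplicate row.
rows = conn.execute("""
SELECT * FROM (SELECT * FROM articles WHERE published = 1
               ORDER BY date_time DESC LIMIT 10)
UNION
SELECT * FROM (SELECT * FROM articles WHERE topic = 'This Week'
               ORDER BY id DESC LIMIT 1)
""").fetchall()
```

With the sample data, the 'This Week' row appears only once even though both branches select it.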
d1379 | train | Have a look at the layout system.
That icon does not mean your QWidget is disabled; it just means you have not applied a layout to it.
Try pressing Ctrl+1 to apply a basic layout. If nothing changes, you might need to put a QWidget inside the central widget first and then apply the layout.
d1380 | train | It sounds like you still need to layout the views depending on the orientation. However, you're portrait and landscape frames would have different proportions on the screen.
An Example of how you might layout a view:
int h = 300;
int w = 100;
int space = 50;
CGRect frame;
if( interfaceOrientation == UIInterfaceOrientationPortrait ) {
    frame = CGRectMake( space, space, w, h );
}
else if( interfaceOrientation == UIInterfaceOrientationPortraitUpsideDown ) {
    frame = CGRectMake( self.view.bounds.size.width - w - space, self.view.bounds.size.height - h - space, w, h );
}
else if( interfaceOrientation == UIInterfaceOrientationLandscapeRight ) {
    frame = CGRectMake( space, self.view.bounds.size.height - w - space, h, w );
}
else if( interfaceOrientation == UIInterfaceOrientationLandscapeLeft ) {
    frame = CGRectMake( self.view.bounds.size.width - h - space, space, h, w );
}
self.label.frame = frame;
d1381 | train | According to the Docs, SimpleTest has support for FileUpload testing baked in since version 1.0.1:
File upload testing Can simulate the input type file tag 1.0.1
I've looked over the examples at their site and would assume you'd use something along the lines of
$this->get('http://www.example.com/');
$this->setField('filename', 'local path');
$this->click('Go');
to submit the file and then use the regular assertions to check the upload worked as wanted. But that's really just a wild guess, since I am not familiar with SimpleTest and I couldnt find an example at their homepage. You might want to ask in their support forum though.
But basically, there is not much use testing that a form uploads a file. This is tried and tested browser behavior. Testing the code that handles the upload makes more sense. I don't know how you implemented your FileUpload code, but if I had to implement this, I would get rid of the dependency on the $_FILES array as the first thing. Create a FileRequest class that you can pass the $_FILES array to. Then you can handle the upload from the class. This would allow you to test the functionality without actually uploading a file. Just set up your FileRequest instance accordingly. You could even mock the filesystem with vfsStreamWrapper, so you don't even need actual files.
A: You can generate a file upload in a programmatic manner with e.g. the curl extension.
Since this requires PHP running under a web server, it's not much of a unit test. Consequently, the best way would be to to use PHPT tests and fill the --POST_RAW-- section with the data.
If you don't know what to put in the --POST_RAW--, try to install the TamperData Firefox extension, do a file submission from Firefox, and copy-paste the data from the right side.
A: I've found an alternate solution. I've spoofed the $_FILES array with test data, created dummy test files in the tmp/ folder (the folder is irrelevant, but I tried to stick with the default).
The problem was that is_uploaded_file and move_uploaded_file could not work with these spoofed items, because they are not really uploaded through POST.
The first thing I did was to wrap those functions inside my own moveUploadedFile and isUploadedFile in my plugin so I could mock them and change their return values.
The last thing was to extend the class when testing it, overriding moveUploadedFile to use rename instead of move_uploaded_file and isUploadedFile to use file_exists instead of is_uploaded_file.
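The same wrap-the-primitive seam works in any language; here is a minimal Python sketch of the idea (the class and method names are invented for illustration):

```python
import os

class UploadHandler:
    # Production code calls self.move_uploaded(...) instead of the raw
    # filesystem primitive, so tests can substitute a recording fake.
    def move_uploaded(self, src, dst):
        os.replace(src, dst)

    def handle(self, src, dst):
        self.move_uploaded(src, dst)
        return dst

class FakeUploadHandler(UploadHandler):
    # Test double: records the calls instead of touching the filesystem.
    def __init__(self):
        self.moves = []

    def move_uploaded(self, src, dst):
        self.moves.append((src, dst))
```

A test can now exercise handle() end to end without any real uploaded file.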
A: For unit testing (as opposed to functional testing) try uploading a file (a short text file) to a test page, and var_dump($_FILES) and var_dump($_POST). Then you know what to populate them (or your mocks) with. | unknown | |
d1382 | train | You can use:
SELECT Jan,
CASE WHEN Feb > 0 THEN Feb
ELSE Jan END
AS Feb,
CASE WHEN Mar > 0 THEN Mar
WHEN Feb > 0 THEN Feb
ELSE Jan END
AS Mar,
CASE WHEN Apr > 0 THEN Apr
WHEN Mar > 0 THEN Mar
WHEN Feb > 0 THEN Feb
ELSE Jan END
AS Apr,
CASE WHEN May > 0 THEN May
WHEN Apr > 0 THEN Apr
WHEN Mar > 0 THEN Mar
WHEN Feb > 0 THEN Feb
ELSE Jan END
AS May
FROM table_name
Which, for the sample data:
CREATE TABLE table_name ( Jan, Feb, Mar, Apr, May ) AS
SELECT 899.20, 0, 0, 0, 899.20 FROM DUAL UNION ALL
SELECT 439.38, 485.29, 0, 0, 482.29 FROM DUAL;
Outputs:
JAN | FEB | MAR | APR | MAY
-----: | -----: | -----: | -----: | -----:
899.2 | 899.2 | 899.2 | 899.2 | 899.2
439.38 | 485.29 | 485.29 | 485.29 | 482.29
db<>fiddle here | unknown | |
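The cascading CASE expressions implement a left-to-right carry of the last non-zero value; a small Python sketch makes that rule explicit:

```python
def carry_forward(months):
    # Each month keeps its own value if it is > 0,
    # otherwise it inherits the most recent non-zero month.
    out, last = [], None
    for v in months:
        last = v if v > 0 else last
        out.append(last)
    return out
```

Applied to the second sample row, carry_forward([439.38, 485.29, 0, 0, 482.29]) reproduces the query output above.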
d1383 | train | EDIT: This code is apparently not what's required, but I'm leaving it as it's interesting anyway. It basically treats Key1 as taking priority, then Key2, then Key3 etc. I don't really understand the intended priority system yes, but when I do I'll add an answer for that.
I would suggest a triple layer of Dictionaries - each layer has:
Dictionary<int, NextLevel> matches;
NextLevel nonMatch;
So at the first level you'd look up Key1 - if that matches, that gives you the next level of lookup. Otherwise, use the next level which corresponds with "non-match".
Does that make any sense? Here's some sample code (including the example you gave). I'm not entirely happy with the actual implementation, but the idea behind the data structure is sound, I think:
using System;
using System.Collections;
using System.Collections.Generic;
public class Test
{
static void Main()
{
Config config = new Config
{
{ null, null, null, 1 },
{ 1, null, null, 2 },
{ 1, null, 3, 3 },
{ null, 2, 3, 4 },
{ 1, 2, 3, 5 }
};
Console.WriteLine(config[1, 2, 3]);
Console.WriteLine(config[3, 2, 3]);
Console.WriteLine(config[9, 10, 11]);
Console.WriteLine(config[1, 10, 11]);
}
}
// Only implement IEnumerable to allow the collection initializer
// Not really implemented yet - consider how you might want to implement :)
public class Config : IEnumerable
{
// Aargh - death by generics :)
private readonly DefaultingMap<int,
DefaultingMap<int, DefaultingMap<int, int>>> map
= new DefaultingMap<int, DefaultingMap<int, DefaultingMap<int, int>>>();
public int this[int key1, int key2, int key3]
{
get
{
return map[key1][key2][key3];
}
}
public void Add(int? key1, int? key2, int? key3, int value)
{
map.GetOrAddNew(key1).GetOrAddNew(key2)[key3] = value;
}
public IEnumerator GetEnumerator()
{
throw new NotSupportedException();
}
}
internal class DefaultingMap<TKey, TValue>
where TKey : struct
where TValue : new()
{
private readonly Dictionary<TKey, TValue> mapped = new Dictionary<TKey, TValue>();
private TValue unmapped = new TValue();
public TValue GetOrAddNew(TKey? key)
{
if (key == null)
{
return unmapped;
}
TValue ret;
if (mapped.TryGetValue(key.Value, out ret))
{
return ret;
}
ret = new TValue();
mapped[key.Value] = ret;
return ret;
}
public TValue this[TKey key]
{
get
{
TValue ret;
if (mapped.TryGetValue(key, out ret))
{
return ret;
}
return unmapped;
}
}
public TValue this[TKey? key]
{
set
{
if (key != null)
{
mapped[key.Value] = value;
}
else
{
unmapped = value;
}
}
}
}
A: To answer your question about something which is generic in the number and type of keys - you can't make the number and type of keys dynamic and use generics - generics are all about providing compile-time information. Of course, you could ignore static typing and make it dynamic - let me know if you want me to implement that instead.
How many entries will there be, and how often do you need to look them up? You may well be best just keeping all the entries as a list and iterating through them giving a certain "score" to each match (and keeping the best match and its score as you go). Here's an implementation, including your test data - but this uses the keys having priorities (and then summing the matches), as per a previous comment...
using System;
using System.Collections;
using System.Collections.Generic;
public class Test
{
static void Main()
{
Config config = new Config(10, 7, 5)
{
{ new int?[]{null, null, null}, 1},
{ new int?[]{1, null, null}, 2},
{ new int?[]{9, null, null}, 21},
{ new int?[]{1, null, 3}, 3 },
{ new int?[]{null, 2, 3}, 4 },
{ new int?[]{1, 2, 3}, 5 }
};
Console.WriteLine(config[1, 2, 3]);
Console.WriteLine(config[3, 2, 3]);
Console.WriteLine(config[8, 10, 11]);
Console.WriteLine(config[1, 10, 11]);
Console.WriteLine(config[9, 2, 3]);
Console.WriteLine(config[9, 3, 3]);
}
}
public class Config : IEnumerable
{
private readonly int[] priorities;
private readonly List<KeyValuePair<int?[],int>> entries =
new List<KeyValuePair<int?[], int>>();
public Config(params int[] priorities)
{
// In production code, copy the array to prevent tampering
this.priorities = priorities;
}
public int this[params int[] keys]
{
get
{
if (keys.Length != priorities.Length)
{
throw new ArgumentException("Invalid entry - wrong number of keys");
}
int bestValue = 0;
int bestScore = -1;
foreach (KeyValuePair<int?[], int> pair in entries)
{
int?[] key = pair.Key;
int score = 0;
for (int i=0; i < priorities.Length; i++)
{
if (key[i]==null)
{
continue;
}
if (key[i].Value == keys[i])
{
score += priorities[i];
}
else
{
score = -1;
break;
}
}
if (score > bestScore)
{
bestScore = score;
bestValue = pair.Value;
}
}
return bestValue;
}
}
public void Add(int?[] keys, int value)
{
if (keys.Length != priorities.Length)
{
throw new ArgumentException("Invalid entry - wrong number of keys");
}
// Again, copy the array in production code
entries.Add(new KeyValuePair<int?[],int>(keys, value));
}
public IEnumerator GetEnumerator()
{
throw new NotSupportedException();
}
}
The above allows a variable number of keys, but only ints (or null). To be honest the API is easier to use if you fix the number of keys...
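For comparison, the same best-scoring-rule lookup can be sketched in Python; the rules and priorities below mirror the C# sample data, with None playing the role of null:

```python
def lookup(rules, priorities, key):
    # Each rule is (pattern, value); None in a pattern matches anything.
    # A rule's score is the sum of priorities of the positions it pins
    # down; a pinned position that disagrees disqualifies the rule.
    best_score, best_value = -1, None
    for pattern, value in rules:
        score = 0
        for pat, k, pri in zip(pattern, key, priorities):
            if pat is None:
                continue
            if pat != k:
                score = -1
                break
            score += pri
        if score > best_score:
            best_score, best_value = score, value
    return best_value

RULES = [
    ((None, None, None), 1),
    ((1, None, None), 2),
    ((9, None, None), 21),
    ((1, None, 3), 3),
    ((None, 2, 3), 4),
    ((1, 2, 3), 5),
]
```

With priorities (10, 7, 5), lookup(RULES, (10, 7, 5), (9, 2, 3)) returns 4, because {None, 2, 3} scores 12 against 10 for {9, None, None}.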
A: Yet another solution - imagine that the entries are a bit pattern of null/non-null. You have one dictionary per bit pattern (i.e. { 1, null, null } and { 9, null, null } go in the same dictionary, but { 1, 2, 3 } goes in a different one). Each dictionary effectively has a score as well - the sum of the priorities for the non-null parts of the key. You will end up with 2^n dictionaries, where n is the number of elements in the key.
You order the dictionaries in reverse score order, and then just look up the given key in each of them. Each dictionary needs to ignore the values in the key which aren't in its bit pattern, which is done easily enough with a custom IComparer<int[]>.
Okay, here's the implementation:
------------ Test.cs -----------------
using System;
sealed class Test
{
static void Main()
{
Config config = new Config(10, 7, 5)
{
{ null, null, null, 1 },
{null, null, null, 1},
{1, null, null, 2},
{9, null, null, 21},
{1, null, 3, 3 },
{null, 2, 3, 4 },
{1, 2, 3, 5 }
};
Console.WriteLine(config[1, 2, 3]);
Console.WriteLine(config[3, 2, 3]);
Console.WriteLine(config[8, 10, 11]);
Console.WriteLine(config[1, 10, 11]);
Console.WriteLine(config[9, 2, 3]);
Console.WriteLine(config[9, 3, 3]);
}
}
--------------- Config.cs -------------------
using System;
using System.Collections;
sealed class Config : IEnumerable
{
private readonly PartialMatchDictionary<int, int> dictionary;
public Config(int priority1, int priority2, int priority3)
{
dictionary = new PartialMatchDictionary<int, int>(priority1, priority2, priority3);
}
public void Add(int? key1, int? key2, int? key3, int value)
{
dictionary[new[] { key1, key2, key3 }] = value;
}
public int this[int key1, int key2, int key3]
{
get
{
return dictionary[new[] { key1, key2, key3 }];
}
}
// Just a fake implementation to allow the collection initializer
public IEnumerator GetEnumerator()
{
throw new NotSupportedException();
}
}
-------------- PartialMatchDictionary.cs -------------------
using System;
using System.Collections.Generic;
using System.Linq;
public sealed class PartialMatchDictionary<TKey, TValue> where TKey : struct
{
private readonly List<Dictionary<TKey[], TValue>> dictionaries;
private readonly int keyComponentCount;
public PartialMatchDictionary(params int[] priorities)
{
keyComponentCount = priorities.Length;
dictionaries = new List<Dictionary<TKey[], TValue>>(1 << keyComponentCount);
for (int i = 0; i < 1 << keyComponentCount; i++)
{
PartialComparer comparer = new PartialComparer(keyComponentCount, i);
dictionaries.Add(new Dictionary<TKey[], TValue>(comparer));
}
dictionaries = dictionaries.OrderByDescending(dict => ((PartialComparer)dict.Comparer).Score(priorities))
.ToList();
}
public TValue this[TKey[] key]
{
get
{
if (key.Length != keyComponentCount)
{
throw new ArgumentException("Invalid key component count");
}
foreach (Dictionary<TKey[], TValue> dictionary in dictionaries)
{
TValue value;
if (dictionary.TryGetValue(key, out value))
{
return value;
}
}
throw new KeyNotFoundException("No match for this key");
}
}
public TValue this[TKey?[] key]
{
set
{
if (key.Length != keyComponentCount)
{
throw new ArgumentException("Invalid key component count");
}
// This could be optimised (a dictionary of dictionaries), but there
// won't be many additions to the dictionary compared with accesses
foreach (Dictionary<TKey[], TValue> dictionary in dictionaries)
{
PartialComparer comparer = (PartialComparer)dictionary.Comparer;
if (comparer.IsValidForPartialKey(key))
{
TKey[] maskedKey = key.Select(x => x ?? default(TKey)).ToArray();
dictionary[maskedKey] = value;
return;
}
}
throw new InvalidOperationException("We should never get here");
}
}
private sealed class PartialComparer : IEqualityComparer<TKey[]>
{
private readonly int keyComponentCount;
private readonly bool[] usedKeyComponents;
private static readonly EqualityComparer<TKey> Comparer = EqualityComparer<TKey>.Default;
internal PartialComparer(int keyComponentCount, int usedComponentBits)
{
this.keyComponentCount = keyComponentCount;
usedKeyComponents = new bool[keyComponentCount];
for (int i = 0; i < keyComponentCount; i++)
{
usedKeyComponents[i] = ((usedComponentBits & (1 << i)) != 0);
}
}
internal int Score(int[] priorities)
{
return priorities.Where((value, index) => usedKeyComponents[index]).Sum();
}
internal bool IsValidForPartialKey(TKey?[] key)
{
for (int i = 0; i < keyComponentCount; i++)
{
if ((key[i] != null) != usedKeyComponents[i])
{
return false;
}
}
return true;
}
public bool Equals(TKey[] x, TKey[] y)
{
for (int i = 0; i < keyComponentCount; i++)
{
if (!usedKeyComponents[i])
{
continue;
}
if (!Comparer.Equals(x[i], y[i]))
{
return false;
}
}
return true;
}
public int GetHashCode(TKey[] obj)
{
int hash = 23;
for (int i = 0; i < keyComponentCount; i++)
{
if (!usedKeyComponents[i])
{
continue;
}
hash = hash * 37 + Comparer.GetHashCode(obj[i]);
}
return hash;
}
}
}
It gives the right results for the samples that you gave. I don't know what the performance is like - it should be O(1), but it could probably be optimised a bit further.
A: I'm assuming that there are few rules, and a large number of items that you're going to check against the rules. In this case, it might be worth the expense of memory and up-front time to pre-compute a structure that would help you find the object faster.
The basic idea for this structure would be a tree such that at depth i, you would follow the ith element of the rule, or the null branch if it's not found in the dictionary.
To build the tree, I would build it recursively. Start with the root node containing all possible rules in its pool. The process:
*
*Define the current value of each rule in the pool as the score of the current rule given the path taken to get to the node, or -infinity if it is impossible to take the path. For example, if the current node is at the '1' branch of the root, then the rule {null, null, null, 1} would have a score of 0, and the rule {1, null, null, 2} would have a score 10
*Define the maximal value of each rule in the pool as its current score, plus the remaining keys' score. For example, if the current node is at the '1' branch of the root, then the rule {null, 1, 2, 1} would have a score of 12 (0 + 7 + 5), and the rule {1, null, null, 2} would have a score 10 (10 + 0 + 0).
*Remove the elements from the pool that have a maximal value lower than the highest current value in the pool
*If there is only one rule, then make a leaf with the rule.
*If there are multiple rules left in the pool, and there are no more keys then ??? (this isn't clear from the problem description. I'm assuming taking the highest one)
*For each unique value of the (i+1)th key in the current pool, and null, construct a new tree from the current node using the current pool.
As a final optimization check, I would check if all children of a node are leaves, and if they all contain the same rule, then make the node a leaf with that value.
given the following rules:
null, null, null = 1
1, null, null = 2
9, null, null = 21
1, null, 3 = 3
null, 2, 3 = 4
1, 2, 3 = 5
an example tree:
key1 key2 key3
root:
|----- 1
| |----- 2 = 5
| |-----null
| |----- 3 = 3
| |-----null = 2
|----- 9
| |----- 2
| | |----- 3 = 4
| | |-----null = 21
| |-----null = 21
|-----null
|----- 2 = 4
|-----null = 1
If you build the tree up in this fashion, starting from the highest value key first, then you can possibly prune out a lot of checks against later keys.
Edit to add code:
class Program
{
static void Main(string[] args)
{
Config config = new Config(10, 7, 5)
{
{ new int?[]{null, null, null}, 1},
{ new int?[]{1, null, null}, 2},
{ new int?[]{9, null, null}, 21},
{ new int?[]{1, null, 3}, 3 },
{ new int?[]{null, 2, 3}, 4 },
{ new int?[]{1, 2, 3}, 5 }
};
Console.WriteLine("5 == {0}", config[1, 2, 3]);
Console.WriteLine("4 == {0}", config[3, 2, 3]);
Console.WriteLine("1 == {0}", config[8, 10, 11]);
Console.WriteLine("2 == {0}", config[1, 10, 11]);
Console.WriteLine("4 == {0}", config[9, 2, 3]);
Console.WriteLine("21 == {0}", config[9, 3, 3]);
Console.ReadKey();
}
}
public class Config : IEnumerable
{
private readonly int[] priorities;
private readonly List<KeyValuePair<int?[], int>> rules =
new List<KeyValuePair<int?[], int>>();
private DefaultMapNode rootNode = null;
public Config(params int[] priorities)
{
// In production code, copy the array to prevent tampering
this.priorities = priorities;
}
public int this[params int[] keys]
{
get
{
if (keys.Length != priorities.Length)
{
throw new ArgumentException("Invalid entry - wrong number of keys");
}
if (rootNode == null)
{
rootNode = BuildTree();
//rootNode.PrintTree(0);
}
DefaultMapNode curNode = rootNode;
for (int i = 0; i < keys.Length; i++)
{
// if we're at a leaf, then we're done
if (curNode.value != null)
return (int)curNode.value;
if (curNode.children.ContainsKey(keys[i]))
curNode = curNode.children[keys[i]];
else
curNode = curNode.defaultChild;
}
return (int)curNode.value;
}
}
private DefaultMapNode BuildTree()
{
return new DefaultMapNode(new int?[]{}, rules, priorities);
}
public void Add(int?[] keys, int value)
{
if (keys.Length != priorities.Length)
{
throw new ArgumentException("Invalid entry - wrong number of keys");
}
// Again, copy the array in production code
rules.Add(new KeyValuePair<int?[], int>(keys, value));
// reset the tree to know to regenerate it.
rootNode = null;
}
public IEnumerator GetEnumerator()
{
throw new NotSupportedException();
}
}
public class DefaultMapNode
{
public Dictionary<int, DefaultMapNode> children = new Dictionary<int,DefaultMapNode>();
public DefaultMapNode defaultChild = null; // done this way to workaround dict not handling null
public int? value = null;
public DefaultMapNode(IList<int?> usedValues, IEnumerable<KeyValuePair<int?[], int>> pool, int[] priorities)
{
int bestScore = Int32.MinValue;
// get best current score
foreach (KeyValuePair<int?[], int> rule in pool)
{
int currentScore = GetCurrentScore(usedValues, priorities, rule);
bestScore = Math.Max(bestScore, currentScore);
}
// get pruned pool
List<KeyValuePair<int?[], int>> prunedPool = new List<KeyValuePair<int?[], int>>();
foreach (KeyValuePair<int?[], int> rule in pool)
{
int maxScore = GetCurrentScore(usedValues, priorities, rule);
if (maxScore == Int32.MinValue)
continue;
for (int i = usedValues.Count; i < rule.Key.Length; i++)
if (rule.Key[i] != null)
maxScore += priorities[i];
if (maxScore >= bestScore)
prunedPool.Add(rule);
}
// base optimization case, return leaf node
// base case, always return same answer
if ((prunedPool.Count == 1) || (usedValues.Count == prunedPool[0].Key.Length))
{
value = prunedPool[0].Value;
return;
}
// add null base case
AddChild(usedValues, priorities, prunedPool, null);
foreach (KeyValuePair<int?[], int> rule in pool)
{
int? branch = rule.Key[usedValues.Count];
if (branch != null && !children.ContainsKey((int)branch))
{
AddChild(usedValues, priorities, prunedPool, branch);
}
}
// if all children are the same, then make a leaf
int? maybeOnlyValue = null;
foreach (int v in GetAllValues())
{
if (maybeOnlyValue != null && v != maybeOnlyValue)
return;
maybeOnlyValue = v;
}
if (maybeOnlyValue != null)
value = maybeOnlyValue;
}
private static int GetCurrentScore(IList<int?> usedValues, int[] priorities, KeyValuePair<int?[], int> rule)
{
int currentScore = 0;
for (int i = 0; i < usedValues.Count; i++)
{
if (rule.Key[i] != null)
{
if (rule.Key[i] == usedValues[i])
currentScore += priorities[i];
else
return Int32.MinValue;
}
}
return currentScore;
}
private void AddChild(IList<int?> usedValues, int[] priorities, List<KeyValuePair<int?[], int>> prunedPool, Nullable<int> nextValue)
{
List<int?> chainedValues = new List<int?>();
chainedValues.AddRange(usedValues);
chainedValues.Add(nextValue);
DefaultMapNode node = new DefaultMapNode(chainedValues, prunedPool, priorities);
if (nextValue == null)
defaultChild = node;
else
children[(int)nextValue] = node;
}
public IEnumerable<int> GetAllValues()
{
foreach (DefaultMapNode child in children.Values)
foreach (int v in child.GetAllValues())
yield return v;
if (defaultChild != null)
foreach (int v in defaultChild.GetAllValues())
yield return v;
if (value != null)
yield return (int)value;
}
public void PrintTree(int depth)
{
if (value == null)
Console.WriteLine();
else
{
Console.WriteLine(" = {0}", (int)value);
return;
}
foreach (KeyValuePair<int, DefaultMapNode> child in children)
{
for (int i=0; i<depth; i++)
Console.Write(" ");
Console.Write(" {0} ", child.Key);
child.Value.PrintTree(depth + 1);
}
for (int i = 0; i < depth; i++)
Console.Write(" ");
Console.Write("null");
defaultChild.PrintTree(depth + 1);
}
} | unknown | |
d1384 | train | If you want to ensure that no string in col A is equal to any string in col B, then your existing algorithm is order n^2. You may be able to improve that by the following:
1) Sort col A or a copy of it (order nlogn)
2) Sort col B or a copy of it (order nlogn)
3) Look for duplicates by list traversal, see this previous answer (order n).
That should give you an order nlogn solution and I don't think you can do much better than that. | unknown | |
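The three steps sketch out like this in Python; the merge-style traversal in step 3 is what keeps the final pass linear:

```python
def columns_share_a_string(col_a, col_b):
    # Steps 1-2: sort copies of both columns, O(n log n).
    a, b = sorted(col_a), sorted(col_b)
    # Step 3: advance two pointers, as in the merge step of mergesort.
    i = j = 0
    while i < len(a) and j < len(b):
        if a[i] == b[j]:
            return True          # a duplicate exists across the columns
        if a[i] < b[j]:
            i += 1
        else:
            j += 1
    return False
```

The function returns True as soon as any string appears in both columns.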
d1385 | train | From the manual:
The file will be deleted from the temporary directory at the end of the request if it has not been moved away or renamed.
So, you could omit it, however:
Whatever the logic, you should either delete the file from the temporary directory or move it elsewhere.
... it's always nice to be explicit in your script. In short: you don't have to, but I would. | unknown | |
d1386 | train | The GradientStop.Offset property is a value which ranges from 0.0 to 1.0. From the MSDN documentation:
A value of 0.0 specifies that the stop is positioned at the beginning of the gradient vector, while a value of 1.0 specifies that the stop is positioned at the end of the gradient vector.
Change your second stop's offset to 0.5 and your third's to 1.0 and it should work. | unknown | |
d1387 | train | Ditch the trigger. Replace it with a check constraint and a scalar function.
CREATE FUNCTION dbo.GetManagerSalary(@Emp_id int) RETURNS money
AS
BEGIN
    RETURN
    (
        SELECT es.Salary
        FROM [dbo].[Employee] e
        JOIN [dbo].[Employee] m ON m.Emp_id = e.Manager_id
        JOIN [dbo].[Employee_staff] es ON es.Emp_staff_code = e.Manager_id
        WHERE e.Emp_id = @Emp_id
    );
END
GO
ALTER TABLE Employee_staff
ADD CONSTRAINT CK_Salary
CHECK (ISNULL([Salary], 0) <= ISNULL(dbo.GetManagerSalary([Emp_staff_code]), 1e9))
d1388 | train | You can give a try to Chrome custom tabs. They are far more performant than a webview and they give to users a seamless in-app user experience.
Unfortunately webview lacks a lot of features of the common browsers and it's not the best choice to display complex web pages. | unknown | |
d1389 | train | Your problem is that function is a reserved keyword in Julia. Additionally, note that 'string' for strings is OK in Python but in Julia you have "string".
The easiest workaround in your case is py"" string macro:
julia> py"$interpolate.Rbf($x,$y,$z, function='multiquadric', smoothness=1.0)"
PyObject <scipy.interpolate.rbf.Rbf object at 0x000000006424F0A0>
Now suppose you actually need at some point to pass the function= parameter.
It is not easy because function is a reserved keyword in Julia, but what you can do is pack it into a named tuple using Symbol notation:
myparam = NamedTuple{(:function,)}(("multiquadric",))
Now if you do:
some_f(x,y; myparam...)
then myparam gets unpacked into a keyword argument.
d1390 | train | A little crude but here's one solution
$i = 0; // Number of items made so far in the row
$mode = 0; // Current row type enumerated by $elem
$elem = array(2,4,3); // Enumeration of the desired row sizes
while ( have_posts() ) : the_post();
// Make a new row when there's no items yet
if ($i == 0) echo '<div class="row elem'. $elem[$mode] .'">';
echo '<div class="item"></div>';
$i++;
// Once the items in the current row has reached the row's maximum size
if ($i % $elem[$mode] == 0):
echo '</div>';
$i = 0; // Reset items made for the row back to 0
$mode = ($mode + 1) % 3; // Increment mode and wrap if necessary
endif;
endwhile;
if ($i > 0) echo '</div>'; // Finish the last row if it wasn't finished
This is what the modulo operator was built for!
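The row-size cycling is easy to get subtly wrong, so here is the same bookkeeping as a small, testable Python sketch:

```python
def chunk_rows(items, sizes=(2, 4, 3)):
    # Emit rows of len sizes[0], sizes[1], ..., cycling back to the start,
    # mirroring the $mode / $elem bookkeeping in the PHP loop above.
    rows, row, mode = [], [], 0
    for item in items:
        row.append(item)
        if len(row) == sizes[mode]:
            rows.append(row)
            row = []
            mode = (mode + 1) % len(sizes)
    if row:                      # finish a partially filled last row
        rows.append(row)
    return rows
```

Nine items split cleanly into the 2/4/3 pattern; a tenth item starts a new two-wide row, just as the trailing echo '</div>' guard handles in the PHP.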
d1391 | train | Use the Win32 Registry functions.
http://msdn.microsoft.com/en-us/library/windows/desktop/ms724875(v=vs.85).aspx | unknown | |
d1392 | train | The /tmp directory is where files are temporarily stored when uploaded.
In your controller you need to go about actually storing that file, the docs cover this in depth; https://laravel.com/docs/7.x/requests#storing-uploaded-files
It's worth mentioning that if you leave the files in your tmp directory, they will be garbage collected at some point and so this is not a safe location to store files. | unknown | |
d1393 | train | Since the picture is not clear enough to reproduce the issue you described, I would only just point out aspects I observed to be out of place in the script you posted.
I have not studied the logic of the code enough to establish its correctness or a possible lack of it. But I believe some of the variables at play in the script ought to be evaluated during scroll since a change in the page content can invalidate their initial state:
$(document).ready(function() {
// 1. These variables factor in statically to the feature
// and it's appropriate that they are evaluated once
var $sticky = $('.sidebar');
var $stickyrStopper = $('.sticky-stopper');
if (!$sticky.offset()) {
return;
}
$(window).scroll(function() {
var stickOffset = 0;
// 2. These variables factor in DYNAMICALLY to the feature,
// and they must be re-evaluated in alignment with
// changing content for instance.
// --
// Therefore they deserve to be evaluated inside the scroll handler.
var generalSidebarHeight = $sticky.innerHeight();
var stickyTop = $sticky.offset().top;
var stickyStopperPosition = $stickyrStopper.offset().top;
var stopPoint = stickyStopperPosition - generalSidebarHeight - stickOffset;
var diff = stopPoint + stickOffset;
var windowTop = $(window).scrollTop();
if (stopPoint < windowTop) {
$sticky.css({
position: 'absolute',
top: diff
});
} else if (stickyTop < windowTop + stickOffset) {
$sticky.css({
position: 'fixed',
top: stickOffset
});
} else {
$sticky.css({
position: 'fixed',
top: 'initial'
});
}
});
});
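Since those offsets have to be re-read on every scroll event, it can help to isolate the three-state decision itself in a pure function; here is a Python sketch of that logic, with names mirroring the script's variables:

```python
def sticky_state(window_top, sticky_top, stopper_top, sidebar_height, offset=0):
    # Mirrors the jQuery handler: the stop point is recomputed on each call,
    # so content growth between scroll events is always accounted for.
    stop_point = stopper_top - sidebar_height - offset
    if stop_point < window_top:
        return ('absolute', stop_point + offset)   # parked at the stopper
    if sticky_top < window_top + offset:
        return ('fixed', offset)                   # pinned while scrolling
    return ('static', None)                        # not yet reached
```

Testing the states with a few representative scroll positions is then trivial, which is much harder to do against live DOM measurements.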
See if that provides you a stepping stone. | unknown | |
d1394 | train | for this special case ,you could execute the cmd to run your commands as args, i mean, avoid sendKeys and execute the cmd like this : Process.Start("CMD.EXE", "/c echo hello world");
note that /c will execute the cmd and exit while /k execute the cmd and return to the CMD prompt. | unknown | |
d1395 | train | We could create a pattern by pasting vec into one vector and remove their occurrence using sub.
df$name <- sub(paste0("^", vec, collapse = "|"), "", df$name)
df
# serial name
#1 1 vier
#2 2 Kenneth
#3 3 sey
In stringr we can also use str_remove
stringr::str_remove(df$name, paste0("^", vec, collapse = "|"))
#[1] "vier" "Kenneth" "sey"
A: Since we're using fixed length vec strings in this example, it might even be more efficient to use substr replacements. This will only really pay off in the case when df and/or vec is large though, and comes at the price of some flexibility.
df$name <- as.character(df$name)
sel <- substr(df$name, 1, 2) %in% vec
df$name[sel] <- substr(df$name, 3, nchar(df$name))[sel]
# serial name
#1 1 vier
#2 2 Kenneth
#3 3 sey
A: We can also do this with substring
library(stringr)
library(dplyr)
df$name <- substring(df$name, replace_na(str_locate(df$name,
paste(vec, collapse="|"))[,2] + 1, 1))
df$name
#[1] "vier" "Kenneth" "sey"
Or with str_replace
str_replace(df$name, paste0("^", vec, collapse="|"), "")
#[1] "vier" "Kenneth" "sey"
Or using gsubfn
library(gsubfn)
gsubfn("^.{2}", setNames(rep(list(""), length(vec)), vec), as.character(df$name))
#[1] "vier" "Kenneth" "sey" | unknown | |
d1396 | train | Change
try {
int response_code7 = conn7.getResponseCode();
// Check if successful connection made
if (response_code7 == HttpURLConnection.HTTP_OK) {
// Read data sent from server
InputStream input7 = conn7.getInputStream();
BufferedReader reader7 = new BufferedReader(new InputStreamReader(input7));
result7 = new StringBuilder();
String line7;
while ((line7 = reader7.readLine()) != null) {
result7.append(line7);
}
// Pass data to onPostExecute method
}
} catch (IOException e) {
e.printStackTrace();
} finally {
conn7.disconnect();
}
return result7;
To
try {
int response_code7 = conn7.getResponseCode();
result7 = new StringBuilder();
// Check if successful connection made
if (response_code7 == HttpURLConnection.HTTP_OK) {
// Read data sent from server
InputStream input7 = conn7.getInputStream();
BufferedReader reader7 = new BufferedReader(new InputStreamReader(input7));
String line7;
while ((line7 = reader7.readLine()) != null) {
result7.append(line7);
}
// Pass data to onPostExecute method
}
} catch (IOException e) {
e.printStackTrace();
} finally {
conn7.disconnect();
}
return result7;
A: Try something like this
Log.e("dai",MainActivity.this.result7.toString());
Toast.makeText(MainActivity.this,MainActivity.this.result7.toString(),Toast.LENGTH_LONG).show();
OR
@Override
protected void onPostExecute(StringBuilder result) {
super.onPostExecute(result);
Log.e("dai",result.toString());
Toast.makeText(MainActivity.this,result.toString(),Toast.LENGTH_LONG).show();
pdLoading.dismiss();
/* Intent intnt = new Intent(Checklist_activity.this,Task_main.class);
intnt.putExtra("task",hasmap);
startActivity(intnt);*/
}
} | unknown | |
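The essence of the fix is initializing the accumulator before the status check, so a non-OK response still returns an empty result instead of null. A sketch of the same pattern in Python, using a hypothetical read_body helper and an in-memory stream in place of the HTTP connection:

```python
import io

def read_body(stream, ok=True):
    # Create the accumulator *before* checking the status, so a non-OK
    # response still yields an empty string, never None.
    parts = []
    if ok:
        for line in stream:
            parts.append(line.rstrip("\n"))
    return "".join(parts)

body = io.StringIO("hello\nworld\n")
print(read_body(body))                   # -> helloworld
print(repr(read_body(body, ok=False)))   # -> ''
```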
d1397 | train | Using df.astype(int) should convert the values to integers.
Refer to this question for more information
Change data type of columns in Pandas | unknown | |
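A minimal sketch with assumed sample data: a column that was read as strings is converted with astype.

```python
import pandas as pd

# Hypothetical data read in as strings (dtype "object").
df = pd.DataFrame({"serial": ["1", "2", "3"]})
print(df["serial"].dtype)   # -> object

# Convert the column to an integer dtype.
df["serial"] = df["serial"].astype(int)
print(df["serial"].sum())   # -> 6
```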
d1398 | train | A value set by $this->set() will be lost when redirecting, so you would need to store it in the session. Alternatively, generate the form in the method you redirect to (you could call the code above from that method, but instead of redirecting you would return the generated form).
But I think there are other problems: $this->set() is normally a method from a controller (page-, block- etc. controller). But you are using it in a class from the src folder.
Another problem seems to be the path: packages/community_store/postfinance/src/CommunityStore/Payment/Methods. Should this not be packages/postfinance/src/CommunityStore/Payment/Methods because the package name is postfinance and not community_store?
A: Just in case anyone else wanted to know my solution, it was a dumb error on my part as all I had to do was return or echo $paymentForm; This solved my problem and successfully sent the form to PostFinance.
Although now I have the problem of not being able to get the PostFinance backend generated signature to match, PostFinance Signature not matching | unknown | |
d1399 | train | You could try something like the code below. Effectively, record the TotalProcessorTime for each process each time you call CheckCPU(), subtract the value from the previous run, and divide by the total time that has elapsed between the two checks.
Sub Main()
Dim previousCheckTime As New DateTime
Dim previousProcessList As New List(Of ProcessInformation)
' Kick off an initial check
previousCheckTime = Now
previousProcessList = CheckCPU(previousProcessList, Nothing)
For i As Integer = 0 To 10
Threading.Thread.Sleep(1000)
previousProcessList = CheckCPU(previousProcessList, Now - previousCheckTime)
previousCheckTime = Now
For Each process As ProcessInformation In previousProcessList
Console.WriteLine(process.Id & " - " & Math.Round(process.CpuUsage, 2).ToString & "%")
Next
Console.WriteLine("-- Next check --")
Next
Console.ReadLine()
End Sub
Private Function CheckCPU(previousProcessList As List(Of ProcessInformation), timeSinceLastCheck As TimeSpan) As List(Of ProcessInformation)
Dim currentProcessList As New List(Of ProcessInformation)
For Each process As Process In Process.GetProcesses()
' Id = 0 is the system idle process so we don't check that
If process.Id <> 0 Then
' See if this process existed last time we checked
Dim cpuUsage As Double = -1
Dim previousProcess As ProcessInformation = previousProcessList.SingleOrDefault(Function(p) p.Id = process.Id)
' If it did then we can calculate the % of CPU time it has consumed
If previousProcess IsNot Nothing AndAlso timeSinceLastCheck <> Nothing Then
cpuUsage = ((process.TotalProcessorTime - previousProcess.TotalProcessorTime).Ticks / (Environment.ProcessorCount * timeSinceLastCheck.Ticks)) * 100
End If
' Add to the current process list
currentProcessList.Add(New ProcessInformation With {.Id = process.Id, .CpuUsage = cpuUsage, .TotalProcessorTime = process.TotalProcessorTime})
End If
Next
Return currentProcessList
End Function
Class ProcessInformation
Public Id As Integer
Public TotalProcessorTime As TimeSpan
Public CpuUsage As Double
End Class
In a production environment you should probably add some more checks because it is possible for a process to be killed between you calling GetProcesses() and then processing the list. If the process has gone away then you will get an error when you try to access the TotalProcessorTime property. | unknown | |
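The same delta-based calculation can be sketched for the current process in Python's stdlib: CPU time consumed between two checks, divided by wall time times the core count (this measures only the running process, not all processes as the VB.NET code does):

```python
import os
import time

def cpu_percent(prev_cpu, prev_wall):
    # CPU-time delta over wall-time delta, normalized by the core count.
    cpu, wall = time.process_time(), time.monotonic()
    elapsed = wall - prev_wall
    usage = 100.0 * (cpu - prev_cpu) / (os.cpu_count() * elapsed) if elapsed > 0 else 0.0
    return usage, cpu, wall

prev_cpu, prev_wall = time.process_time(), time.monotonic()
sum(i * i for i in range(200_000))        # burn a little CPU
usage, _, _ = cpu_percent(prev_cpu, prev_wall)
print(usage >= 0.0)  # -> True
```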
d1400 | train | Try the link below:
http://toolswebtop.com/text/process/decode/BIG-5
I tried your link and got 3Dhttp://0.gg/DFt2U | unknown |
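As an offline alternative to the web tool, Python's stdlib ships a Big5 codec; "中文" is a standard round-trip example:

```python
# Encode a string to Big5 bytes and decode it back with the stdlib codec.
raw = "中文".encode("big5")
print(raw)                 # -> b'\xa4\xa4\xa4\xe5'
print(raw.decode("big5"))  # -> 中文
```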