_id | partition | text | language | title
---|---|---|---|---
d1701 | train | I took your first sample and tried it out here. I get no error wrapping it or not.
TextViewInherit.cs:
using Android.Content;
using Android.Util;
using Android.Widget;
namespace InflationShiz
{
public class TextViewInherit : TextView
{
public TextViewInherit(Context context, IAttributeSet attrs) :
this(context, attrs, 0)
{
}
public TextViewInherit(Context context, IAttributeSet attrs, int defStyle) :
base(context, attrs, defStyle)
{
}
}
}
One.axml:
<?xml version="1.0" encoding="utf-8"?>
<inflationshiz.TextViewInherit
xmlns:android="http://schemas.android.com/apk/res/android"
android:layout_width="fill_parent"
android:layout_height="fill_parent" />
Two.axml:
<?xml version="1.0" encoding="utf-8"?>
<LinearLayout
xmlns:android="http://schemas.android.com/apk/res/android"
android:orientation="vertical"
android:layout_width="fill_parent"
android:layout_height="fill_parent">
<inflationshiz.TextViewInherit
android:layout_width="fill_parent"
android:layout_height="fill_parent" />
</LinearLayout>
Both work when I inflate in my Activity like so:
var one = LayoutInflater.Inflate(Resource.Layout.One, null);
var two = LayoutInflater.Inflate(Resource.Layout.Two, null);
I find it hard to reproduce your issue; your code is scattered over 3 different SO questions, and it is even harder to follow because you have posted answers to your own questions to elaborate on them. | unknown | |
d1702 | train | If someone could make this a comment, that would be very helpful. I don't have enough reputation to do so.
@Chique_Code, when you say:
The response I get back might be Forecast Placeholder - 1005 or 1007 etc. I wonder if there is a way in Python to tell the code to only return the exact match
Do you mean getting 1005 or 1007 as a number from the string Forecast Placeholder - 1005? If so, you could use the [:] notation.
>>> int("Forecast Placeholder - 1005"[23:]) # Returns an integer parsed from index 23 to the end of the string
1005
Bear in mind, if you take this route, [23:] returns the substring from index 23 to the end. So if you get "Placeholder - 1234" and do [23:], you will get an empty string, because that string has no character at index 23 or beyond.
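If the prefix length can vary, a small regular expression is a safer way to pull out the trailing number. This is just a sketch; trailing_number is a made-up helper name:
import re

def trailing_number(s):
    # grab the run of digits at the end of the string, wherever it starts
    match = re.search(r"(\d+)\s*$", s)
    return int(match.group(1)) if match else None

print(trailing_number("Forecast Placeholder - 1005"))  # 1005
print(trailing_number("Placeholder - 1234"))           # 1234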
And BTW, if you hate the API you are using, why are you using it then? | unknown | |
d1703 | train | First of all, you cannot directly access a database in a private subnet. You have to deploy a proxy instance in your public subnet and forward the required ports to access your database.
When using the CDK Vpc construct, an Internet Gateway is created by default whenever you create a public subnet, and the default route is also set up for the public subnet.
So you should remove addGatewayEndpoint() from your code, which adds a Gateway VPC Endpoint that you don't need.
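Roughly, the VPC definition can be as simple as this (a CDK v1-style Python sketch; the construct id and subnet names here are made up):
from aws_cdk import aws_ec2 as ec2

vpc = ec2.Vpc(
    self, "AppVpc",  # "self" is your Stack
    max_azs=2,
    subnet_configuration=[
        ec2.SubnetConfiguration(name="public", subnet_type=ec2.SubnetType.PUBLIC),
        ec2.SubnetConfiguration(name="database", subnet_type=ec2.SubnetType.PRIVATE),
    ],
)
# note: no vpc.add_gateway_endpoint(...) call is needed for this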
You may also consider using SubnetType.ISOLATED to create a private subnet without a NAT GW, which may be redundant in your case. SubnetType.PRIVATE creates a NAT Gateway by default. | unknown | |
d1704 | train | My problem was with the AAAA DNS record. The IPv6 record pointed to another host. | unknown | |
d1705 | train | I was able to consistently reproduce the behavior with:
*
*Python 3.7.6 (pc064 (64bit), then also with pc032)
*PyGraphviz 1.5 (that I built - available for download at [GitHub]: CristiFati/Prebuilt-Binaries - Various software built on various platforms. (under PyGraphviz, naturally).
Might also want to check [SO]: Installing pygraphviz on Windows 10 64-bit, Python 3.6 (@CristiFati's answer))
*Graphviz 2.42.2 ((pc032) same as #2.)
I suspected an Undefined Behavior somewhere in the code, even if the behavior was precisely the same:
*
*OK for 169 graphs
*Crash for 170
Did some debugging (added some print(f) statements in agraph.py, and cgraph.dll (write.c)).
PyGraphviz invokes Graphviz's tools (.exes) for many operations. For that, it uses subprocess.Popen and communicates with the child process via its 3 available streams (stdin, stdout, stderr).
From the beginning I noticed that 170 * 3 = 510 (awfully close to 512 (0x200)), but didn't pay as much attention as I should have until later (mostly because the Python process (running the code below) had no more than ~150 open handles in Task Manager (TM) and also Process Explorer (PE)).
However, a bit of Googleing revealed:
*
*[SO]: Is there a limit on number of open files in Windows (@stackprogrammer's answer) (and from here)
*[MS.Learn]: _setmaxstdio (which states (emphasis is mine)):
C run-time I/O now supports up to 8,192 files open simultaneously at the low I/O level. This level includes files opened and accessed using the _open, _read, and _write family of I/O functions. By default, up to 512 files can be open simultaneously at the stream I/O level. This level includes files opened and accessed using the fopen, fgetc, and fputc family of functions. The limit of 512 open files at the stream I/O level can be increased to a maximum of 8,192 by use of the _setmaxstdio function.
*[SO]: Python: Which command increases the number of open files on Windows? (@NorthCat's answer)
Below is your code that I modified for debugging and reproducing the error. It needs (for code shortness' sake, as same thing can be achieved via CTypes) the PyWin32 package (python -m pip install pywin32).
code00.py:
#!/usr/bin/env python
import os
import sys
#import time
import pygraphviz as pgv
import win32file as wfile
def handle_graph(idx, dir_name):
graph_name = "draw_{:03d}".format(idx)
graph_args = {
"name": graph_name,
"strict": False,
"directed": False,
"compound": True,
"ranksep": "0.2",
"nodesep": "0.2",
}
graph = pgv.AGraph(**graph_args)
# Draw Graph
img_base_name = graph_name + ".png"
print(" {:s}".format(img_base_name))
graph.layout(prog="dot")
img_full_name = os.path.join(dir_name, img_base_name)
graph.draw(img_full_name)
graph.close() # !!! Has NO (visible) effect, but I think it should be called anyway !!!
def main(*argv):
print("OLD max open files: {:d}".format(wfile._getmaxstdio()))
# 513 is enough for your original code (170 graphs), but you can set it up to 8192
#wfile._setmaxstdio(513) # !!! COMMENT this line to reproduce the crash !!!
print("NEW max open files: {:d}".format(wfile._getmaxstdio()))
dir_name = "Graph"
# Create Directory
if not os.path.isdir(dir_name):
os.makedirs(dir_name)
#ts_global_start = time.time()
start = 0
count = 170
#count = 1
step_sleep = 0.05
for i in range(start, start + count):
#ts_local_start = time.time()
handle_graph(i, dir_name)
#print(" Time: {:.3f}".format(time.time() - ts_local_start))
#time.sleep(step_sleep)
handle_graph(count, dir_name)
#print("Global time: {:.3f}".format(time.time() - ts_global_start - step_sleep * count))
if __name__ == "__main__":
print("Python {:s} {:03d}bit on {:s}\n".format(" ".join(elem.strip() for elem in sys.version.split("\n")),
64 if sys.maxsize > 0x100000000 else 32, sys.platform))
rc = main(*sys.argv[1:])
print("\nDone.\n")
sys.exit(rc)
Output:
e:\Work\Dev\StackOverflow\q060876623> "e:\Work\Dev\VEnvs\py_pc064_03.07.06_test0\Scripts\python.exe" ./code00.py
Python 3.7.6 (tags/v3.7.6:43364a7ae0, Dec 19 2019, 00:42:30) [MSC v.1916 64 bit (AMD64)] 064bit on win32
OLD max open files: 512
NEW max open files: 513
draw_000.png
draw_001.png
draw_002.png
...
draw_167.png
draw_168.png
draw_169.png
Done.
Conclusions:
*
*Apparently, some file handles (fds) are open, although they are not "seen" by TM or PE (probably they are on a lower level). However I don't know why this happens (is it an MS UCRT bug?), but as far as I am concerned, once a child process ends, its streams should be closed, but I don't know how to force it (this would be a proper fix)
*Also, the behavior (crash) when attempting to write (not open) to a fd (above the limit), seems a bit strange
*As a workaround, the max open fds number can be increased. Based on the following inequality: 3 * (graph_count + 1) <= max_fds, you can get an idea about the numbers. From there, if you set the limit to 8192 (I didn't test this) you should be able to handle 2729 graphs (assuming that there are no additional fds opened by the code)
Side notes:
*
*While investigating, I ran into or noticed several adjacent issues, that I tried to fix:
*
*Graphviz:
*
*[GitLab]: graphviz/graphviz - [Issue #1481]: MSB4018 The NativeCodeAnalysis task failed unexpectedly. (merged on 200406)
*PyGraphviz:
*
*[GitHub]: pygraphviz/pygraphviz - AGraph Graphviz handle close mechanism (merged on 200720)
*There's also an issue open for this behavior (probably the same author): [GitHub]: pygraphviz/pygraphviz - Pygraphviz crashes after drawing 170 graphs
A: I tried your code and it generated 200 graphs with no problem (I also tried with 2000).
My suggestion is to use these versions of the packages; I installed a conda environment on macOS with Python 3.7:
graphviz 2.40.1 hefbbd9a_2
pygraphviz 1.3 py37h1de35cc_1 | unknown | |
d1706 | train | Here is the solution I found:
As per this comment, the problem is with Fresco, where app bundles are not well supported.
The problem was that app bundle builds multiple dexes, but Fresco was
only looking at one to find the so file. There were no problems with
using apk to send to play store.
From that link I found that the following libraries need to be updated.
implementation 'com.facebook.fresco:fresco:1.12.0'
implementation 'com.facebook.fresco:webpsupport:1.12.0'
implementation 'com.facebook.fresco:animated-webp:1.12.0'
Update:
implementation 'com.facebook.fresco:fresco:1.12.1'
implementation 'com.facebook.fresco:webpsupport:1.12.1'
implementation 'com.facebook.fresco:animated-webp:1.12.1'
For me, the following versions worked.
Reference links:
https://github.com/WhatsApp/stickers/issues/410
https://github.com/WhatsApp/stickers/issues/413 | unknown | |
d1707 | train | The problem is that you use the same path for input and output. Spark RDDs are evaluated lazily: nothing actually runs until you call saveAsTextFile. By that point you have already deleted the newFolderPath, so filerdd will complain.
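A rough sketch of the safe pattern (the paths and variable names are just examples):
from pyspark import SparkContext

sc = SparkContext.getOrCreate()
input_path = "hdfs:///data/events"          # existing data
output_path = "hdfs:///data/events_cleaned" # a *different* path

filerdd = sc.textFile(input_path)
cleaned = filerdd.filter(lambda line: line.strip() != "")
cleaned.saveAsTextFile(output_path)  # the action runs here; the input is still intact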
Anyway, you should not use the same path for input and output. | unknown | |
d1708 | train | With Retrofit you can use the @Headers annotation:
For instance:
@Headers("Cache-Control: max-age=640000")
You could then (if you always know the Content Type) set your Interface to be:
@Headers("Content-Type: application/json")
@GET("widget/list")
Call<List<Widget>> widgetList();
https://square.github.io/retrofit/ | unknown | |
d1709 | train | You may be able to use :
[xml]$x=get-content c:\temp\data.xml
($x.portals.portal | ?{ $_.portalID -eq "IPE"}).spicer.banner.removeAll()
$x.save("c:\temp\newdata.xml") | unknown | |
d1710 | train | Using router.url != '/one' || router.url != '/two' means:
the expression is true if router.url != '/one' returns true,
or if router.url != '/two' returns true.
The second condition is never evaluated if the first condition is met, because you are using OR (it short-circuits).
A: That's correct, your second condition is ignored: when router.url != '/one' the expression is already satisfied and the second one is never evaluated. Try it this way:
<div *ngIf="!(router.url == '/one' || router.url == '/two')">
show something
</div>
A: Run this code, maybe it'll explain something:
<p>Current URL: {{ router.url }}</p>
<p>Statement: router.url != '/one' - <strong>{{ router.url != '/one' }}</strong></p>
<p>Statement: router.url != '/two' - <strong>{{ router.url != '/two' }}</strong></p>
<p>Statement: router.url != '/one' || router.url != '/two' - <strong>{{ router.url != '/one' || router.url != '/two' }}</strong></p>
<div *ngIf="router.url != '/one' || router.url != '/two'">
show something
</div>
A: Your condition is always true, because it checks whether the value is different from one value or different from the other. The condition would only be false if router.url were equal to both '/one' and '/two' at the same time, which is logically impossible.
A: Check your values to make sure the problem is not related to your data and replace != by !==.
However, in your case, I think you should be using &&, because with ||, your div will always show up: even if your route is /one or /two, the condition will still be true.
Try this:
<div *ngIf="router.url !== '/one' && router.url !== '/two'">
show something
</div>
Which is the same as:
<div *ngIf="!(router.url === '/one' || router.url === '/two')">
show something
</div> | unknown | |
d1711 | train | <xsl:stylesheet
version="1.0"
xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
>
<xsl:template match="/turnovers">
<val>
<!-- call the sum function (with the relevant nodes) -->
<xsl:call-template name="sum">
<xsl:with-param name="nodes" select="turnover[@repid='5']" />
</xsl:call-template>
</val>
</xsl:template>
<xsl:template name="sum">
<xsl:param name="nodes" />
<xsl:param name="sum" select="0" />
<xsl:variable name="curr" select="$nodes[1]" />
<!-- if we have a node, calculate & recurse -->
<xsl:if test="$curr">
<xsl:variable name="runningsum" select="
$sum + $curr/@amount * $curr/@rate
" />
<xsl:call-template name="sum">
<xsl:with-param name="nodes" select="$nodes[position() > 1]" />
<xsl:with-param name="sum" select="$runningsum" />
</xsl:call-template>
</xsl:if>
<!-- if we don't have a node (last recursive step), return sum -->
<xsl:if test="not($curr)">
<xsl:value-of select="$sum" />
</xsl:if>
</xsl:template>
</xsl:stylesheet>
Gives:
<val>410</val>
The two <xsl:if>s can be replaced by a single <xsl:choose>. This would mean one less check during the recursion, but it also means two additional lines of code.
A: In plain XSLT 1.0 you need a recursive template for this, for example:
<xsl:template match="turnovers">
<xsl:variable name="selectedId" select="5" />
<xsl:call-template name="sum_turnover">
<xsl:with-param name="turnovers" select="turnover[@repid=$selectedId]" />
</xsl:call-template>
</xsl:template>
<xsl:template name="sum_turnover">
<xsl:param name="total" select="0" />
<xsl:param name="turnovers" />
<xsl:variable name="head" select="$turnovers[1]" />
<xsl:variable name="tail" select="$turnovers[position()>1]" />
<xsl:variable name="calc" select="$head/@amount * $head/@rate" />
<xsl:choose>
<xsl:when test="not($tail)">
<xsl:value-of select="$total + $calc" />
</xsl:when>
<xsl:otherwise>
<xsl:call-template name="sum_turnover">
<xsl:with-param name="total" select="$total + $calc" />
<xsl:with-param name="turnovers" select="$tail" />
</xsl:call-template>
</xsl:otherwise>
</xsl:choose>
</xsl:template>
A: This should do the trick, you'll need to do some further work to select the distinct repid's
<xsl:stylesheet version="2.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
<xsl:template match="/">
<xsl:variable name="totals">
<product>
<xsl:for-each select="turnovers/turnover">
<repid repid="{@repid}">
<value><xsl:value-of select="@amount * @rate"/></value>
</repid>
</xsl:for-each>
</product>
</xsl:variable>
<totals>
<total repid="5" value="{sum($totals/product/repid[@repid='5']/value)}"/>
</totals>
</xsl:template>
</xsl:stylesheet>
A: In XSLT 1.0 the use of FXSL makes such problems easy to solve:
<xsl:stylesheet version="1.0"
xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
xmlns:f="http://fxsl.sf.net/"
xmlns:ext="http://exslt.org/common"
exclude-result-prefixes="xsl f ext"
>
<xsl:import href="zipWith.xsl"/>
<xsl:output method="text"/>
<xsl:variable name="vMultFun" select="document('')/*/f:mult-func[1]"/>
<xsl:template match="/">
<xsl:call-template name="profitForId"/>
</xsl:template>
<xsl:template name="profitForId">
<xsl:param name="pId" select="1"/>
<xsl:variable name="vrtfProducts">
<xsl:call-template name="zipWith">
<xsl:with-param name="pFun" select="$vMultFun"/>
<xsl:with-param name="pList1" select="/*/*[@repid = $pId]/@amount"/>
<xsl:with-param name="pList2" select="/*/*[@repid = $pId]/@rate"/>
</xsl:call-template>
</xsl:variable>
<xsl:value-of select="sum(ext:node-set($vrtfProducts)/*)"/>
</xsl:template>
<f:mult-func/>
<xsl:template match="f:mult-func" mode="f:FXSL">
<xsl:param name="pArg1"/>
<xsl:param name="pArg2"/>
<xsl:value-of select="$pArg1 * $pArg2"/>
</xsl:template>
</xsl:stylesheet>
When this transformation is applied on the originally posted source XML document, the correct result is produced:
310
In XSLT 2.0 the same solution using FXSL 2.0 can be expressed by an XPath one-liner:
sum(f:zipWith(f:multiply(),
/*/*[xs:decimal(@repid) eq 1]/@amount/xs:decimal(.),
/*/*[xs:decimal(@repid) eq 1]/@rate/xs:decimal(.)
)
)
The whole transformation:
<xsl:stylesheet version="2.0"
xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
xmlns:xs="http://www.w3.org/2001/XMLSchema"
xmlns:f="http://fxsl.sf.net/"
exclude-result-prefixes="f xs"
>
<xsl:import href="../f/func-zipWithDVC.xsl"/>
<xsl:import href="../f/func-Operators.xsl"/>
<!-- To be applied on testFunc-zipWith4.xml -->
<xsl:output omit-xml-declaration="yes" indent="yes"/>
<xsl:template match="/">
<xsl:value-of select=
"sum(f:zipWith(f:multiply(),
/*/*[xs:decimal(@repid) eq 1]/@amount/xs:decimal(.),
/*/*[xs:decimal(@repid) eq 1]/@rate/xs:decimal(.)
)
)
"/>
</xsl:template>
</xsl:stylesheet>
Again, this transformation produces the correct answer:
310
Note the following:
*
*The f:zipWith() function takes as arguments a function fun() (of two arguments) and two lists of items having the same length. It produces a new list of the same length, whose items are the result of the pair-wise application of fun() on the corresponding k-th items of the two lists.
*f:zipWith(), as used in the expression, takes the function f:multiply() and two sequences of corresponding "amount" and "rate" attributes. The result is a sequence, each item of which is the product of the corresponding "amount" and "rate".
*Finally, the sum of this sequence is produced.
*There is no need to write an explicit recursion and it is also guaranteed that the behind-the-scenes recursion used within f:zipWith() is never going to crash (for all practical cases) with "stack overflow"
A: <?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
xmlns:xs="http://www.w3.org/2001/XMLSchema"
exclude-result-prefixes="xs"
version="2.0">
<xsl:variable name="repid" select="5" />
<xsl:template match="/">
<xsl:value-of select=
"sum(for $x in /turnovers/turnover[@repid=$repid] return $x/@amount * $x/@rate)"/>
</xsl:template>
</xsl:stylesheet>
You can do this if you just need the value and not xml.
A: The easiest way to do it in XSLT is probably to use programming language bindings, so that you can define your own XPath functions. | unknown | |
d1712 | train | It seems you should not run quickstart.py itself, but instead create a function there, import it into your views.py with from quickstart import your_function, and call that your_function from your_custom_view.
Simplified logic like that:
from quickstart import your_function
def your_custom_view(request):
button_was_pressed = request.GET.get("button")
if button_was_pressed:
your_function()
return HttpResponse("Button pressed")
else:
return HttpResponse("No button pressed")
And make your Button work like a link (if you don't need a POST request), something like:
<a href="{% url "your_custom_view_url" %}?button=True">Button</a>
NOTE: This is not working code, just simplified logic, given the limited information you provided.
UPDATE 1:
settings.py:
BASE_DIR = os.path.dirname(os.path.abspath(__file__))
views.py:
from your_project.settings import BASE_DIR
path_to_json = os.path.join(BASE_DIR, r'client_secret.json')
flow = client.flow_from_clientsecrets(path_to_json, SCOPES) | unknown | |
d1713 | train | Your sizeof(node*) does not represent the size you need.
newnode = malloc(sizeof(node*)) // wrong
newnode = malloc(sizeof (node)) // correct
newnode = malloc(sizeof *newNode) // better
Why is sizeof *newNode better?
Because it prevents accidental forgetting to update the code in two places if the type changes
struct node {
char *data;
struct node *next;
struct node *prev;
};
struct nodeEx {
char *data;
size_t len;
struct nodeEx *next;
struct nodeEx *prev;
};
struct nodeEx *newnode = malloc(sizeof (struct node)); // wrong
struct nodeEx *newnode = malloc(sizeof *newnode); // correct
A: The below line does not allocate the required amount of memory, it allocates memory equal to the size of a pointer to node.
if ((newNode = malloc(sizeof(node*))) == NULL)
So your strcpy fails because there is no memory to copy into.
Change the above to:
if ((newNode = malloc(sizeof(node))) == NULL)
What happens after you do the following is undefined behavior because the memory representing inputString can be overwritten, and that is why you get garbage values later on.
newNode->data = inputString;
You can see the top answer to this question for additional information.
A: newNode->data = inputString;
is incorrect, it overwrites the previously malloc'ed memory.
if ((newNode->data = malloc(strlen(inputString) + 1)) == NULL) {
printf("Error: could not allocate memory");
exit(-1);
}
strcpy(newNode->data, inputString);
is enough to allocate memory and copy the string into it. | unknown | |
d1714 | train | A very easy way is to put a public property in the popup form that will return the values you want (say RtnValue).
This is an example for the popup form:
public string RtnValue
{
get { return textBox1.Text; }
}
This is your current code:
frmNumFormatConv form = new frmNumFormatConv();
if (form.ShowDialog() == System.Windows.Forms.DialogResult.OK)
{
dgvalue = form.RtnValue;
// Get data from the form into the selected cell!!!
} | unknown | |
d1715 | train | A"), 2, False) = String
Basically I want to look up a date in a column A, go to the second Column and insert the string.
I've searched and can't find what I'm looking for.
Thanks in advance,
Cory
A: We can use MATCH() to find the row and deposit the string with the correct offset from column A:
Sub Spiral()
Dim s As String, i As Long
s = "whatever"
i = Application.WorksheetFunction.Match(CLng(Date), Range("A:A"))
Cells(i, 2).Value = s
End Sub
A: You can use Range.Find
Dim cell As Range
Set cell = .Range("A:A").Find(date, , , xlWhole)
If Not cell Is Nothing Then cell(, 2) = String ' or cell.Offset(, 1) = String | unknown | |
d1716 | train | The problem is that your get_title method consumes the Element and therefore can only be called once.
You have to accept &self as parameter instead and can do the following:
Either return a &str instead of String:
pub fn get_title(&self) -> &str {
&self.title
}
or clone the String if you really want to return a String struct.
pub fn get_title(&self) -> String {
self.title.clone()
}
Also have a look at these questions for further clarification:
*
*What are the differences between Rust's `String` and `str`?
*What types are valid for the `self` parameter of a method?
A: Here is a solution to the problem; it requires borrowing the self object and lifetime specifications.
Moving from &String to &str is only to follow better practices, thanks @hellow (Playground 2).
struct Element<'a> {
title: &'a str
}
impl <'a>Element<'a> {
pub fn get_title(&self) -> &'a str {
&self.title
}
}
fn main() {
let mut items: Vec<Element> = Vec::new();
items.push(Element { title: "Random" });
items.push(Element { title: "Gregor" });
let mut i = 0;
while i < 10 {
for item in &items {
println!("Loop {} item {}", i, item.get_title());
}
i = i + 1;
}
} | unknown | |
d1717 | train | It turned out I had a before_create filter that returned false, causing the save process to halt. To solve this, I added nil at the end of the method, like this:
# BEFORE:
def set_paid
self.paid = false
end
# AFTER:
def set_paid
self.paid = false
nil
end
Hope this helps others too! | unknown | |
d1718 | train | if 'a' and 'e' and 'i' and 'o' and 'u' in 'eiou'
is equivalent to
if ('a') and ('e') and ('i') and ('o') and ('u' in 'eiou')
in which the first 4 expressions 'a', 'e', 'i' and 'o' all evaluate to True.
The final expression ('u' in 'eiou') is True for the first statement, whereas 'u' in 'aeio' is False
A: emm... It is the way python evaluates conditional expressions. This is how it works:
'a' and 'e' and 'i' and 'o' and 'u' in 'eiou'
'a' -> a non-empty string always evaluates to True, so we got True.
and -> if the expression before 'and' evaluates to True, evaluate the next expression.
'e' -> True
and -> continue evaluation
'i' -> True
and -> continue evaluation
'o' -> True
and -> continue evaluation
'u' in 'eiou' -> True
So you got True.
The next expression:
'a' and 'e' and 'i' and 'o' and 'u' in 'aeio'
'a' -> True
and -> continue evaluation
'e' -> True
and -> continue evaluation
'i' -> True
and -> continue evaluation
'o' -> True
and -> continue evaluation
'u' in 'aeio' -> False
so you got a False.
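You can confirm both evaluations with a quick standalone check (just a sketch):
print('a' and 'e' and 'i' and 'o' and ('u' in 'eiou'))  # True
print('a' and 'e' and 'i' and 'o' and ('u' in 'aeio'))  # False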
If what you want is to check if all letters are in the string, you either 'a' in 'aeio' and 'e' in 'aeio' and ... or more efficiently: if set('aeiou').issuperset(('a', 'e', 'i', 'o', ...)). | unknown | |
d1719 | train | You don't need np.where nor list comprehension:
You can use this:
combined['correct'] = (combined.actual == combined.predict).mul(1)
or
combined['correct'] = (combined.actual == combined.predict).astype(int) | unknown | |
d1720 | train | Is it possible to make the entry so that the text goes down instead ...?
No, it is not. The Entry widget is specifically designed for single-line input. If you need multiple lines you need to use the Text widget.
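A minimal sketch of the Text widget (the variable names are just examples):
import tkinter as tk

root = tk.Tk()
text_box = tk.Text(root, height=4, width=40)  # multi-line input; text wraps and continues downward
text_box.pack()
root.mainloop()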
A: Entry widgets have only a single line. You'll have to use a 'Text' widget and print('\n') for every entered letter. | unknown | |
d1721 | train | What is the correct way to use the key events to trigger an action independent of the focus?
See: How to Use Key Bindings
Or use a JMenuBar with menus and menu items.
A: The focus is important. You may need to click around and experiment, and use component.requestFocusInWindow() to help. | unknown | |
d1722 | train | Step 1: Make sure your version of MAMP is Version 2 because it includes a Universal Binary installer (32-bit & 64-bit)
Step 2: Modify your Makefile and eliminate the other compiler versions, similar to:
CPPFLAGS = -I/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 -DNDEBUG
CFLAGS = -Wc,"-arch i386" -Wc,"-arch x86_64" -Wc
LDFLAGS = -arch i386 -arch x86_64 -F/Library/Frameworks -framework Python -u _PyMac_Error
LDLIBS = -ldl -framework CoreFoundation
Step 3: In httpd.conf: LoadModule wsgi_module modules/mod_wsgi.so | unknown | |
d1723 | train | according to this SO post referencing the jquery docs, your best bet would be var myElement = $('g[inkscape\\:label="myLabel"]', svg.root());.
A: It's been impossible for me to make it work as you request. The solution I arrived at is duplicating the attribute, but using an underscore instead of the colon, like this:
$('path').each(function() {
$(this).attr({
"inkscape_label" : $(this).attr('inkscape:label')
});
});
Then I can select by the label I want like this:
$('[inkscape_label = ...
A: For me the only solution that worked was this one:
const t = document.querySelectorAll('g')
const layer = Array.from(t)
.filter(t => t.getAttribute('inkscape:label') === 'myLabel')[0]
A: The title isn't very descriptive so I ended up here when trying to figure out how to select it via CSS.
In case it helps anyone else:
<style>
[inkscape\:label="MyNamedLayer"] {
/*green*/
filter: invert(42%) sepia(93%) saturate(1352%) hue-rotate(87deg) brightness(119%) contrast(119%);
filter: invert(48%) sepia(79%) saturate(2476%) hue-rotate(86deg) brightness(118%) contrast(119%);
}
</style> | unknown | |
d1724 | train | Your MemberSearchRequest isn't a String so the first parts of your compound if statement will never be true.
request.equals("generatedMemberNo") &&
request.equals("firstname") &&
request.equals("firstname") &&
It appears that you could remove them and the null checks would be met.
if(this.clientRepository.getClientByMemberNo(String.valueOf(request.getGeneratedMemberNo())) == null ||
this.clientRepository.getClientByFirstName(String.valueOf(request.getFirstname())) == null ||
this.clientRepository.getClientByLastName(String.valueOf(request.getLastname())) == null) | unknown | |
d1725 | train | Problem is that nginx tries to resolve any DNS names you define in the configuration at startup time (rather than request time) and it fails if it cannot resolve one of the names.
Check this answer for possible solutions/workarounds: https://stackoverflow.com/a/32846603/1078969
A: "Nginx container exits with code 1".
This error can also be caused by a syntax error in a .conf file, such as inserting a colon between name:value pairs when you shouldn't, due to confusing the syntax rules of different config files.
You stand a good chance of finding a complaint that points you to the line in question in the log files. | unknown | |
d1726 | train | Try adding a call to Put() on the ManagementObject to explicitly persist the change, like this:
foreach (ManagementObject printer in printerCollection)
{
PropertyDataCollection printerProperties = printer.Properties;
foreach (PropertyData property in printerProperties)
{
if (property.Name == "KeepPrintedJobs")
{
printerProperties[property.Name].Value = true;
}
}
printer.Put();
}
Hope that helps. | unknown | |
d1727 | train | It tries to read the next line after the last line. Try this: change the code for the scanner to this:
Scanner:
public static Friend build(InputStream in, int counter){
Friend friend;
Scanner reader = new Scanner(in, StandardCharsets.UTF_8);
if(reader.hasNextLine()){
friend = new Friend(reader.nextLine(), null, null, null);
}else if(counter < 0){
if(true){
friend.setFullname(reader.readLine());
}
if(reader.hasNextLine()){
friend.setIp(reader.nextLine());
}
if(reader.hasNextLine()){
friend.setImg(reader.nextLine());
}
return friend;
} else {
return null;
}
if(reader.hasNextLine()){
friend.setFullname(reader.nextLine());
}
if(reader.hasNextLine()){
friend.setIp(reader.nextLine());
}
if(reader.hasNextLine()){
friend.setImg(reader.nextLine());
}
return friend;
}
Main:
public static void main(String[] args) throws FileNotFoundException {
List<Friend> al = new ArrayList<>();
FileInputStream reader = new FileInputStream("Friends.list");
Friend f;
int counter = 0;
while((f = Friend.build(reader, counter))!= null){
al.add(f);
counter++;
}
Collections.sort(al);
printCollection(al.iterator());
}
A: Do not create a new Scanner for each friend, try this:
public static Friend build(Scanner reader){
Friend friend;
if(reader.hasNextLine()){
friend = new Friend(reader.nextLine(), null, null, null);
}else{
return null;
}
if(reader.hasNextLine()){
friend.setFullname(reader.nextLine());
}
if(reader.hasNextLine()){
friend.setIp(reader.nextLine());
}
if(reader.hasNextLine()){
friend.setImg(reader.nextLine());
}
return friend;
}
and call it like in:
...
FileInputStream in = new FileInputStream("Friends.list");
Scanner reader = new Scanner(in, StandardCharsets.UTF_8.name());
Friend f;
while((f = Friend.build(reader))!= null){
al.add(f);
}
...
Reason: Scanner kind of does read ahead the next token(s), so it is/they are already consumed when the next Scanner is created. From its source code:
// Internal buffer used to hold input
private CharBuffer buf;
// Size of internal character buffer
private static final int BUFFER_SIZE = 1024; // change to 1024;
I would use a BufferedReader instead of Scanner (for reading just lines) | unknown | |
d1728 | train | Try to use setters:
[destViewController setReceiver:[citySpots objectAtIndex:indexPath.row]];
[destViewController setSpot:[citySpots objectAtIndex:indexPath.row]]; | unknown | |
d1729 | train | The data is divided based on the NAMESPACE.
If "employees" belongs to one namespace, it will be stored under that namespace.
If we provide another NAMESPACE, the data will be stored in that namespace only.
I think you are asking about the same thing. | unknown | |
d1730 | train |  is a representation of the character with hex value 18, and this is not a character permitted by the XML specification (it's an ASCII control code of some form). JAXB is quite rightly refusing to parse it.
You need to find out what is writing that data in the first place, and fix it, because it's not writing valid XML.
A:  appears to be the CANCEL character . Is there some way the keyboard signal to stop the program could be ended up being read in to the input on resumption? | unknown | |
d1731 | train | Did you recently add a module name pajas? Based on the error messages you posted, core.php appears to be failing on the init script of a "pajas" module.
Assuming this is the "pajas" module you are using, a quick look at the init script shows it attempts to locate a path from the configuration name 'user_content.dir', and if the path is not writeable it raises an exception with the message 'Directory :dir must be writable' (source).
Check your configs for a file named user_content.php which should have a key name dir. By default, it should be in {MODPATH}/{pajas_directory}/config/user_content.php (default example here), but you should also check if it is overriden somewhere (usually in APPPATH/config/. Next, make sure that the default path APPPATH/user_content (or whatever your overriden path is) exists. | unknown | |
d1732 | train | Hash and salt passwords in C#
https://crackstation.net/hashing-security.htm
https://www.bentasker.co.uk/blog/security/201-why-you-should-be-asking-how-your-passwords-are-stored
As I stated in my comments, hashing passwords is something that you probably shouldn't be doing yourself.
A few things to note:
*
*SHA1 is not recommended for passwords
*Passwords should be salted
*You should use a verified userstore framework rather than attempting to create your own, as you will likely "do it wrong"
*I'm sure there are many more
That being said, to accomplish your specific question, you would want something like this:
Users
----
userId
passwordHashed
passwordHashed stores a hashed version of the user's password (the plain text password is never stored anywhere in persistence.)
for checking for valid password something like this is done:
ALTER procedure [dbo].[proc_UserLogin]
@userid varchar(20),
@password nvarchar(50)
As
declare
@ReturnVal varchar(500)
SET NOCOUNT ON
if exists(select userid,password from LoginManager where userid=@userid and password=HASHBYTES('SHA1', @password))
set @ReturnVal='0|Logged in Successfully'
else
set @ReturnVal='1|Login Failed/Username does not exist'
select @ReturnVal
For inserting/updating user passwords, you need to make sure to store the hashed password not the plain text password, as such;
INSERT INTO users(userId, passwordHashed)
VALUES (@userId, HASHBYTES('SHA1', @rawPassword)
or
UPDATE users
SET passwordHashed = HASHBYTES('SHA1', @rawPassword)
WHERE userId = @userId
EDIT:
just realized you're asking how to accomplish the hash in C#, not SQL. You could perform the following (taken from Hashing with SHA1 Algorithm in C#):
public string Hash(byte [] temp)
{
using (SHA1Managed sha1 = new SHA1Managed())
{
var hash = sha1.ComputeHash(temp);
return Convert.ToBase64String(hash);
}
}
Your code snip could be:
conn.Open();
string query = "EXEC dbo.proc_UserLogin'" + username.Text+ "', '" + this.Hash(System.Text.Encoding.UTF8.GetBytes(password.Text))+"'";
OleDbCommand cmd = new OleDbCommand(query, conn);
You should also note that you should parameterize your parameters to your stored procedure rather than passing them in the manner you are - which it looks like you already have a separate question in regarding that. | unknown | |
d1733 | train | You could use a Servlet and directly print out the answer:
public void service(ServletRequest request, ServletResponse response){
response.setContentType("text/xml;charset=UTF-8");
PrintWriter writer = response.getWriter();
writer.append("<?xml version=\"1.0\" encoding=\"UTF-8\"?>");
writer.append("<result>");
// print your result
writer.append("</result>");
It's not from within a JSP, but it almost looks like you are already inside a Servlet.
If you are using Spring Web MVC, what your referral to modelAndView suggests, you might just want to use a method in your controller with @ResponseBodyannotation on the return type.
@RequestMapping(value = "/xmlresponse", method = RequestMethod.GET)
public @ResponseBody ResultObjectWithJaxbAnnotations gernerateXmlResult() {
Don't forget <mvc:annotation-driven /> in your Spring application-context - but you will have that most likely already. | unknown | |
d1734 | train | Check out File.getAbsolutePath():
String path = new File(fd.getFile()).getAbsolutePath();
A: You can combine FileDialog.getDirectory() with FileDialog.getFile() to get a full path.
String path = fd.getDirectory() + fd.getFile();
File f = new File(path);
I needed to use the above instead of a call to File.getAbsolutePath() since getAbsolutePath() was returning the path of the current working directory and not the path of the file chosen in the FileDialog. | unknown | |
d1735 | train | Resolved: the main problem was in ClienteConverter. It was returning null because of an if that compared one of the attributes of cliente and didn't match any.
In case it helps anyone: the converter runs first. | unknown | |
d1736 | train | The error doesn't happen due to that.
When you don't call keys(), values(), or items(), Python iterates over the keys by default. You need to use items() to tell Python to give you the keys and values.
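A tiny standalone illustration of the difference (just a sketch):
d = {"a": 1, "b": 2}
for k in d:              # iterates over the keys by default
    print(k)
for k, v in d.items():   # yields (key, value) pairs
    print(k, v)
Applied to your loop, that becomes: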
for k, v in {k: v for k, v in self.seq.items() if k > index}.items(): | unknown | |
d1737 | train | Axum is a language structured in such a way as to make safe and performant concurrent programming simpler. The concepts modelled by the language avoid the need to make thread synchronisation explicit via the use of lock (in C#), Monitor, ReaderWriterLockSlim, etc...
It could be argued that many of the ideas within Axum have been in the Erlang programming language since 1986 -- a language designed by researchers working in Sweden for Ericsson to run on telephone switches, and hence support for massive throughput under highly concurrent load was so essential it was designed into the language. Whilst many of the ideas in Axum aren't new, they are certainly new to .NET and the CLR (at least at the language level.)
Existing .NET libraries that contain some of these ideas are:
*
*Retlang
*Concurrency and Coordination Runtime (CCR)
Like Erlang, message passing is a central concept in Axum. Like Erlang, Axum is largely indifferent as to whether the recipient of the message is located in-process or remotely. Axum currently provides integration with WCF.
Axum differs from the libraries mentioned above in that it includes support for these concepts at the language level, not just via use of libraries. The Axum compiler deals not only with the Axum language, but also with some experimental extensions to the C# language itself; namely the isolated and readonly keywords.
Adding new features to a language is not something to be taken lightly. Spec# is another C#-superset language developed at MSR (unrelated to concurrency). As seen with the support for Code Contracts in .NET 4.0, Microsoft has decided to favour adding a new API rather than new language extensions (this benefits users of all languages on the CLR.) However in the case of Axum, there is not enough richness in the C# 3.0 language to express the kinds of immutability constraints required of types and their members for truly safe concurrent programming.
Having dabbled in Erlang and liking what I saw, I'm very excited about where Axum might take us. Some of the extensions to the C# language proposed by the team are useful for regular C# projects too.
Finally I'd like to point out that there's more to Erlang than just a good concurrency model. Erlang is a strict functional programming language. It supports hot swappable code, meaning that a system can be upgraded without it ever being stopped (a desirable feature of a telephone switch or any other 24x7 system). I heard a report from a large British telecommunications organisation running a switch for a year and only failing to route four calls in that time. Erlang has other characteristics such as remote exception handling as well.
A: Looks to me like you hit the nail on the head in your question. Looks like the Microsoft.NET alternative to some of the languages/frameworks you mentioned. Take a look at the Programmer's Guide here:
Axum Programmer's Guide
Looks like it should play nicely with the rest of the .NET Framework. It might open up some interesting C#/F#/Axum interactions...
A: Axum is the new name for Microsoft's "Maestro" language, which originally was a research language for parallel programming but has been "promoted" to a first-class language just recently.
A bit more information on Channel 9 here:
Maestro: A Managed Domain Specific Language For Concurrent Programming
... and on the official Axum team blog.
A: Here's an update on the state of Axum. Apparently some of the concurrency features will no longer be part of C#/VB.NET.
...the concepts around safe parallelism and
agent-based programming were seen by many as too far outside the
mainstream to be adopted now in languages like C# and VB. The idea of
Axum was to not force these concepts on general-purpose languages, so
those of us who have worked on Axum are not surprised. | unknown | |
d1738 | train | First, I'll call those three objects groups instead, since they don't use the list function.
The way you define them could be fine, but it's somewhat more direct to go with, e.g., 65:74 rather than c(65, 74). So, ultimately I put the three groups in the following list:
groups <- list(group65_74 = 65:74, group75_84 = 75:84, group85 = 85:100)
Now the first problem with the usage of sample was your x argument value, which is
either a vector of one or more elements from which to choose, or a
positive integer. See ‘Details.’
Meanwhile, you x was just
c(list65_74, list75_84, list85)
# [1] 65 74 75 84 85 100
Lastly, the value of prob is inappropriate. You supply 3 number to a vector of 6 candidates to sample from. Doesn't sound right. Instead, you need to assign an appropriate probability to each age from each group as in
rep(c(0.56, 0.30, 0.24), times = sapply(groups, length))
So that the result is
sample(unlist(groups), size = 10, replace = TRUE,
prob = rep(c(0.56, 0.30, 0.24), times = sapply(groups, length)))
# [1] 82 72 69 74 72 72 69 70 74 70 | unknown | |
d1739 | train | I'd go with the single table approach, perhaps partitioned by year so that it becomes easy to get rid of old data.
Create an index like
CREATE INDEX ON a (date_trunc('hour', t + INTERVAL '30 minutes'));
Then use your query like you wrote it, but add
AND date_trunc('hour', t + INTERVAL '30 minutes')
= date_trunc('hour', asked_time + INTERVAL '30 minutes')
The additional condition acts as a filter and can use the index.
A: You can use a UNION of two queries to find all timestamps closest to a given one:
(
select t
from a
where t >= timestamp '2019-03-01 17:00:00'
order by t
limit 1
)
union all
(
select t
from a
where t <= timestamp '2019-03-01 17:00:00'
order by t desc
limit 1
)
That will efficiently make use of an index on t. On a table with 10 million rows (~3 years of data), I get the following execution plan:
Append (cost=0.57..1.16 rows=2 width=8) (actual time=0.381..0.407 rows=2 loops=1)
Buffers: shared hit=6 read=4
I/O Timings: read=0.050
-> Limit (cost=0.57..0.58 rows=1 width=8) (actual time=0.380..0.381 rows=1 loops=1)
Output: a.t
Buffers: shared hit=1 read=4
I/O Timings: read=0.050
-> Index Only Scan using a_t_idx on stuff.a (cost=0.57..253023.35 rows=30699415 width=8) (actual time=0.380..0.380 rows=1 loops=1)
Output: a.t
Index Cond: (a.t >= '2019-03-01 17:00:00'::timestamp without time zone)
Heap Fetches: 0
Buffers: shared hit=1 read=4
I/O Timings: read=0.050
-> Limit (cost=0.57..0.58 rows=1 width=8) (actual time=0.024..0.025 rows=1 loops=1)
Output: a_1.t
Buffers: shared hit=5
-> Index Only Scan Backward using a_t_idx on stuff.a a_1 (cost=0.57..649469.88 rows=78800603 width=8) (actual time=0.024..0.024 rows=1 loops=1)
Output: a_1.t
Index Cond: (a_1.t <= '2019-03-01 17:00:00'::timestamp without time zone)
Heap Fetches: 0
Buffers: shared hit=5
Planning Time: 1.823 ms
Execution Time: 0.425 ms
As you can see it only requires very few I/O operations and that is pretty much independent of the table size.
The above can be used for an IN condition:
select *
from a
where t in (
(select t
from a
where t >= timestamp '2019-03-01 17:00:00'
order by t
limit 1)
union all
(select t
from a
where t <= timestamp '2019-03-01 17:00:00'
order by t desc
limit 1)
);
If you know you will never have more than 100 values close to that requested timestamp, you could remove the IN query completely and simply use a limit 100 in both parts of the union. That makes the query a bit more efficient as there is no second step for evaluating the IN condition, but might return more rows than you want.
If you always look for timestamps in the same year, then partitioning by year will indeed help with this.
You can put that into a function if it is too complicated as a query:
create or replace function get_closest(p_tocheck timestamp)
returns timestamp
as
$$
select *
from (
(select t
from a
where t >= p_tocheck
order by t
limit 1)
union all
(select t
from a
where t <= p_tocheck
order by t desc
limit 1)
) x
order by greatest(t - p_tocheck, p_tocheck - t)
limit 1;
$$
language sql stable;
The the query gets as simple as:
select *
from a
where t = get_closest(timestamp '2019-03-01 17:00:00');
Another solution is to use the btree_gist extension which provides a "distance" operator <->
Then you can create a GiST index on the timestamp:
create index on a using gist (t) ;
and use the following query:
select *
from a where t in (select t
from a
order by t <-> timestamp '2019-03-01 17:00:00'
limit 1); | unknown | |
d1740 | train | It's CV_CAP_PROP_POS_FRAMES (note the S) and it should be brought in by highgui.hpp. It's an unnamed enum in the global namespace. | unknown | |
d1741 | train | You could probably store and retrieve it from SharedPreferences | unknown | |
d1742 | train | There are multiple ways to run automated scripts on Azure SQL Database as below:
*
*Using Automation Account Runbooks.
*Using Elastic Database Jobs in Azure
*Using Azure Data factory.
As you are running just one script, I would suggest you take a look at Automation Account Runbooks. As an example, below is a PowerShell Runbook to execute the statement.
$database = @{
'ServerInstance' = 'servername.database.windows.net'
'Database' = 'databasename'
'Username' = 'uname'
'Password' = 'password'
'Query' = 'DELETE FROM Events OUTPUT DELETED.* INTO archieveevents'
}
Invoke-Sqlcmd @database
Then, it can be scheduled as needed.
A: You asked in part for a comparison of Elastic Jobs to Runbooks.
*
*Elastic Jobs will also run a pre-determined SQL script against a target set of servers/databases.
*Elastic Jobs were built internally for Azure SQL by Azure SQL engineers, so the technology is supported at the same level as Azure SQL.
*Elastic Jobs can be defined and managed entirely through PowerShell scripts. However, they also support setup/configuration through TSQL.
*Elastic Jobs are handy if you want to target many databases, as you set up the job one time, set the targets, and it will run everywhere at once. If you have many databases on a given server that would be good targets, you only need to specify the target server, and all of the databases on the server are automatically targeted.
*If you are adding/removing databases from a given server and want the job to adjust dynamically to this change, Elastic Jobs is designed to do this seamlessly. You just have to configure the job to the server, and every time it is run it will target all (non-excluded) databases on the server.
For reference, I am a Microsoft Employee who works in this space.
I have written a walkthrough and fuller explanation of elastic jobs in a blog series. Here is a link to the entry point of the series:https://techcommunity.microsoft.com/t5/azure-sql/elastic-jobs-in-azure-sql-database-what-and-why/ba-p/1177902
A: You can use Azure Data Factory: create a pipeline to execute the SQL query and trigger it to run every day. Azure Data Factory is used to move and transform data from Azure SQL or other storage. | unknown | |
d1743 | train | From ?relevel,
ref: the reference level, typically a string.
I'll key off of "typically". Looking at the code of stats:::relevel.factor, one key part is
if (is.character(ref))
ref <- match(ref, lev)
This means to me that after this expression, ref is now (assumed to be) an integer that corresponds to the index within the levels. In that context, your ref=1 is saying to use the first level by its index (which is already first).
Try using a string.
relevel(df$x,ref=1)
# [1] 1 2 3
# Levels: 2 1 3
relevel(df$x,ref="1")
# [1] 1 2 3
# Levels: 1 2 3 | unknown | |
d1744 | train | Let's assume your PictureBox starts in the top, left corner of the containing control (i.e. the Form, or a Panel, or whatever). This is Point(0,0).
In this event handler...
Private Sub Timer1_Tick(sender As Object, e As EventArgs) Handles Timer1.Tick
PictureBox1.Location = New Point(PictureBox1.Location.X, PictureBox1.Location.Y + 9)
If (PictureBox1.Location = New Point(700, 1100)) Then
Timer1.Enabled = False
End If
End Sub
...you are checking if the top left corner of PictureBox1 is at position 700,1100 instead of checking if it is at 0,1100. Also, since you're adding + 9 each timer tick, it'll never be at a Y position of exactly 1100.
And then in this event...
Private Sub Timer2_Tick(sender As Object, e As EventArgs) Handles Timer2.Tick
PictureBox1.Location = New Point(PictureBox1.Location.X, PictureBox1.Location.Y - 9)
If (PictureBox1.Location = New Point((Me.Width / 700) - (PictureBox1.Width / 700), (Me.Height / 1000) - (PictureBox1.Height / 1000))) Then
Timer2.Enabled = False
End If
End Sub
You want to check if PictureBox1.Location is now 0,0 (the starting position) instead of all of that position math you are doing.
Here is a cleaned-up version of your code. Note that it first checks the position of the PictureBox and only moves it if necessary.
Private Const INCREMENT As Integer = 9
Private Sub Button1_Click(sender As Object, e As EventArgs) Handles Button1.Click
Timer1.Enabled = True
End Sub
Private Sub Timer1_Tick(sender As Object, e As EventArgs) Handles Timer1.Tick
If PictureBox1.Location.Y >= 1100 Then
Timer1.Enabled = False
Else
PictureBox1.Location = New Point(PictureBox1.Location.X, PictureBox1.Location.Y + INCREMENT)
End If
End Sub
Private Sub Button2_Click(sender As Object, e As EventArgs) Handles Button2.Click
Timer2.Enabled = True
End Sub
Private Sub Timer2_Tick(sender As Object, e As EventArgs) Handles Timer2.Tick
If PictureBox1.Location.Y <= 0 Then
Timer2.Enabled = False
Else
PictureBox1.Location = New Point(PictureBox1.Location.X, PictureBox1.Location.Y - INCREMENT)
End If
End Sub | unknown | |
d1745 | train | An elegant way is to use kwargs:
credential = './credentials.json'
key = "credentials" if os.path.exists(credentials) else "serialize"
auth_kwargs = {"client_config": secrets, key: credential}
account = authenticate(**auth_kwargs)
A: I think your way is fine too, but you can do this
credential = './credentials.json'
params = {'serialize': credential}
if os.path.exists(credentials):
params['credentials'] = params.pop('serialize')
account = authenticate(client_config=secrets, **params)
A: You can pass an (unpacked) dictionary to a function:
credential = './credentials.json'
arguments = {'client_config': secrets, 'serialize': credential} # default
if os.path.exists(credentials):
arguments.pop('serialize')
arguments['credentials'] = credential
account = authenticate(**arguments)
A: There is functools.partial for this:
from functools import partial
credential = './credentials.json'
auth = partial(authenticate, client_config=secrets)
if os.path.exists(credential):
account = auth(credentials=credential)
else:
account = auth(serialize=credential) | unknown | |
d1746 | train | This conversion does not change anything. On request it will be converted back. This conversion happens because the set of characters allowed in a URL is limited. For more information look at http://www.blooberry.com/indexdot/html/topics/urlencoding.htm | unknown | |
d1747 | train | You can use computed property for data2
export default {
data() {
return {
data1: 1
}
},
mounted() {
this.data1 = 2
console.log(this.data2)
},
computed: {
data2() {
return this.data1 * 3
}
}
}
A: The watcher will not trigger in the middle of your mounted() handler, you need to wait until the handler has finished. This is how Javascript works, it is not a limitation in Vue.
A: I am not sure what the purpose of your code is, but the below will work
mounted() {
this.data1 = 2
this.$nextTick(() => {
console.log(this.data2)
})
},
A:
Current behavior: the browser writes the number 2 to the console
That is because Vue performs DOM updates asynchronously. When mounted first runs, you make the change to data1; at that point in time this.data2 is still 2, which is why console.log logs 2. The change also triggers the watch of data1, and if you log data2 there you will see the value 6.
In order to wait until Vue.js has finished updating the DOM after a data change, you can use Vue.nextTick(callback) immediately after the data is changed
mounted() {
this.data1 = 2
this.$nextTick(() => {
console.log(this.data2); // 6
})
}
or use Vue.$watch - Watch an expression or a computed function on the Vue instance for changes
data() {
return {
data1: 1,
data2: 2
}
},
watch: {
data1: function(val) {
this.data2 = val * 3
}
},
mounted() {
this.data1 = 2
this.$watch('data2', (newVal) => {
console.log(newVal) // 6
})
}
A: As another answer explains, this is the expected default behaviour and can be addressed by using nextTick; most of the time that is not needed, because a computed is used instead, as suggested in yet another answer.
In Vue 3, this can be changed by using the flush option:
mounted() {
this.data1 = 2
console.log(this.data2) // 6
},
watch: {
data1: {
handler(newData1) {
this.data2 = newData1 * 3
},
flush: 'sync'
}
} | unknown | |
d1748 | train | Try this
$('input').on('change', function () {
var x = $('#x').val();
if (x > 3 && x < 7) {
//code to show image
}
//elseif for other statements and so on...
});
A: Something like this will sort you out:
if( result >= 3 && result <= 7 ){
... show image 1 ...
}else if( result >= 8 && result <= 12){
... show image 2 ...
}else{
... show default image . . .
} | unknown | |
d1749 | train | There's no way to know exactly, but there are rules of thumb that will allow you to obtain a very rough approximation of the running time.
Insertion sort is an O(n^2) algorithm. What that means in the real world is that, all other things being equal, multiplying the size of n by some factor, x, will increase the running time by approximately x^2. So if you double the size of n, running time will increase by approximately 4 times. If you multiply n by 6, running time will increase by approximately 36 times.
Unfortunately, you can't always keep other things equal. You have to think about the effects of cache misses, virtual memory, other things running on the computer, etc. Asymptotic analysis isn't really a tool for computing real-world running times, but it can serve as a very rough approximation: a "ball park estimate," if you will.
Other sorting algorithms have different orders of complexity. Merge sort, for example, has computational complexity of O(n log n), meaning that running time increases with the logarithm of the number of items. That increases much more slowly than does n^2. For example, sorting 1,024 items (2^10) requires on the order of (2^10 * 10) comparisons. Sorting 2^20 items will require on the order of (2^20 * 20) comparisons.
I can't stress strongly enough that those calculations are very rough approximations. They'll get you in the area--probably within an order of magnitude--but you'll never get an exact number that way.
What you can't do with any degree of certainty is say that if insertion sort takes x time, then merge sort will take y time. Asymptotic analysis ignores constant factors. So you can't even approximate the running time of merge sort based on the running time of insertion sort. Nor, for that matter, can you approximate bubble sort (another O(n^2) algorithm) based on the running time of insertion sort.
The whole idea of using asymptotic analysis to estimate the real world running time of an algorithm is fraught with error. It often works when estimating the running time of one implementation of one algorithm on specific hardware, but beyond that it's useless.
Update
You asked:
for n = 2^20, M seconds for merge sort, and B seconds for bubble sort, If I have a different size that takes 4B for bubble sort, how do i know the run time of merge sort?
As I pointed out above, you cannot estimate the time for merge sort based on the time for bubble sort. What you can do, is estimate the running time of merge sort for the new size based on the running time for merge sort at size n=2^20.
For example, at n=2^20, merge sort requires on the order of (2^20 * 20) comparisons. At n=2^21, it requires on the order of (2^21 * 21) comparisons. At n=2^32, (2^32 * 32) comparisons.
What you can do then, is compute the expected number of comparisons for your new n and divide that by (2^20 * 20). So, for example, when n=2^22, the expected number of comparisons is approximately 92,274,688. Divide that by 20,971,520 (2^20 * 20), and you get 4.4. So, if sorting 2^20 items with merge sort takes time x, then sorting 2^22 items will take approximately 4.4*x.
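If you want to let a script do that arithmetic, a few lines of Python reproduce the estimate above (this is only the scaling estimate described here, not a benchmark):
import math

def merge_sort_cost(n):
    return n * math.log2(n)   # comparisons grow roughly like n * log2(n)

baseline_n = 2 ** 20          # size where the real running time was measured
new_n = 2 ** 22               # size you want an estimate for
print(merge_sort_cost(new_n) / merge_sort_cost(baseline_n))   # ~4.4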
Again, let me point out that these are very rough approximations that assume everything else remains equal. | unknown | |
d1750 | train | I don't have 50 reputation to comment on your post, so here is an "attempt" to answer your question. Feel free to comment and mark if this solves it.
Take a look at the code:
import csv
import re
per_user = {}
file = open("syslog.log")
for line in file:
#Set regular expression to find lines containing INFO or Error followed by colon the log message as an option and the username in parentheses at the end of the line... Contains 3 groups)
info = re.findall(r"ticky: (?P<logtype>INFO|ERROR): (?P<logmessage>[\w].*)? \((?P<username>[\w]*)\)$", line, re.MULTILINE)
for logtype, logmessage, username in info:
if username not in per_user:
per_user[username] = {
"username": username,
"INFO": 0,
"ERROR": 0
} # Creates a new dict for that user
per_user[username][logtype] += 1 # Sum one to INFO or ERROR counters
file.close()
for user_data in per_user.values():
print(user_data) # TODO: This is only for debugging
Notes:
*
*if username does not exist in the current dictionary, it instantiates a new dictionary for that user and initializes it with 3 keys: username, INFO, ERROR (the last 2 are the counters)
*then it adds 1 to the INFO or ERROR counter for the current logtype
*per_user.values() returns the values of each user (which are dictionaries)
I've made a log file with 3 entries, with the same format as your question posts:
Jan 31 00:21:30 ubuntu.local ticky: ERROR: The ticket was modified while updating (breee)
Jan 31 00:21:30 ubuntu.local ticky: ERROR: The ticket was modified while updating (sam)
Jan 31 00:21:30 ubuntu.local ticky: INFO: The ticket was successful (breee)
And the output of the script is:
{'username': 'breee', 'INFO': 1, 'ERROR': 1}
{'username': 'sam', 'INFO': 0, 'ERROR': 1}
This will come in handy when using the DictWriter method of the csv module.
Note aside. You can use the context manager called "with" to open the log file.
...
with open("syslog.log", 'r') as file:
for line in file:
#Set...
info = re.findall(...
...
With this, you can safely omit closing the file.
CSV Module: https://docs.python.org/3/library/csv.html
DictWriter Method: https://docs.python.org/3/library/csv.html#csv.DictWriter
See the example code of the python documentation:
import csv
with open('names.csv', 'w', newline='') as csvfile:
fieldnames = ['first_name', 'last_name']
writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
writer.writeheader()
writer.writerow({'first_name': 'Baked', 'last_name': 'Beans'})
writer.writerow({'first_name': 'Lovely', 'last_name': 'Spam'})
writer.writerow({'first_name': 'Wonderful', 'last_name': 'Spam'})
For your case:
with open('output.csv', 'w') as csvfile:
fieldnames = ['username', 'INFO', 'ERROR']
writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
writer.writeheader() # Writes the field names
for user_data in per_user.values():
writer.writerow(user_data)
Output in the file 'output.csv':
username,INFO,ERROR
breee,1,1
sam,0,1
You can adapt this code to your needs.
Best regards,
Goran
A: You could use a nested dictionary to create the structure
{
"bob": {"error": ..., "info": ...},
"breee": {"error": ..., "info": ...},
}
And you can do it with nested defaultdict
def errors():
return defaultdict(int)
per_user = defaultdict(errors)
or using lambda
per_user = defaultdict(lambda:defaultdict(int))
(defaultdict needs a function name (without ()), so it can't be defaultdict(defaultdict(int)))
And now you can count it
for logtype, logmessage, username in info:
per_user[username][logtype] += 1
After counting you can display it.
Because some users may not have certain message types in the log (and therefore not in per_user), you have to use logtype.get('ERROR', 0) instead of logtype['ERROR']
for user, logtype in per_user.items():
error = logtype.get('ERROR', 0)
info = logtype.get('INFO', 0)
print(user, error, 'errors', info, 'infos')
And the same way you can save in CSV
with open('output.csv', 'w') as f:
csvwriter = csv.writer(f)
# write header
csvwriter.writerow(['name', 'error', 'info'])
# write rows
for user, logtype in per_user.items():
error = logtype.get('ERROR', 0)
info = logtype.get('INFO', 0)
print(user, error, 'errors', info, 'infos')
csvwriter.writerow([user, error, info])
Minimal working example. I used list file instead of real data from file so everyone can run it without problems. But you can use open(), close()
import csv
import re
from collections import defaultdict
per_user = defaultdict(lambda:defaultdict(int))
#file = open("syslog.log")
file = [
'Jan 31 00:21:30 ubuntu.local ticky: ERROR: The ticket was modified while updating (breee)',
'Jan 31 00:21:30 ubuntu.local ticky: INFO: hello world (bob)',
]
for line in file:
#Set regular expression to find lines containing INFO or Error followed by colon the log message as an option and the username in parentheses at the end of the line... Contains 3 groups)
info = re.findall(r"ticky: (?P<logtype>INFO|ERROR): (?P<logmessage>[\w].*)? \((?P<username>[\w]*)\)$", line, re.MULTILINE)
for logtype, logmessage, username in info:
per_user[username][logtype] += 1
#file.close()
print(per_user)
with open('output.csv', 'w') as f:
csvwriter = csv.writer(f)
# write header
csvwriter.writerow(['name', 'error', 'info'])
# write rows
for user, logtype in per_user.items():
error = logtype.get('ERROR', 0)
info = logtype.get('INFO', 0)
print(user, error, 'errors', info, 'infos')
csvwriter.writerow([user, error, info]) | unknown | |
d1751 | train | With Varnish you can proactively cache page content and use grace to display stale, cached content if a response doesn't come back in time.
Enable grace period (varnish serves stale (but cacheable) objects while retrieving the object from the backend)
You may need to tweak the dials to determine the best settings for how long to serve the stale content and how long it takes something to be considered stale, but it should work for you. More on the Varnish performance wiki page.
A: I recommend caching at the webserver level rather than in the application
A: I have done just this recently for a couple of different things, in each case, the basics are the same - in this instance the info can be pre-generated before use.
A PHP job is run regularly (maybe from CRON) which generates information into Memcached, which is then used potentially hundreds of times till it's rebuilt again.
Although they are cached for well-defined periods (be it 60 mins, or 1 minute), they are regenerated more often than that. Therefore, unless something goes wrong, they will never expire from Memcache, because a newer version is cached before they can expire. Of course, you could just arrange for them to never expire.
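As a rough illustration of that cron-driven approach (the key name, TTL and build_expensive_page() below are made up for the sketch, not taken from a real project):
<?php
// rebuild_cache.php - run from cron every minute, well before the cached copy expires
$cache = new Memcached();
$cache->addServer('127.0.0.1', 11211);

$html = build_expensive_page();          // the slow 10-15 second generation step
$cache->set('page:home', $html, 3600);   // TTL far longer than the rebuild interval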
I've also done similar things via a queue - you can see previous questions I've answered regarding 'BeanstalkD'.
A: Depending on the content the jQuery.load() might be an option.
(I used it for a twitter feed)
Step 1
Show the cached version of the feed.
Step 2
Update the content on the page via jQuery.load() and cache the results.
This way the page loads fast and displays up2date content (after x secs offcourse)
But if rebuilding/loading a full page this wouldn't give a nice user experience.
A: You describe a few problems, perhaps some general ideas would be helpful.
One problem is that your generated content is too large to store entirely so you can only cache a subset of that total content, you will need: a method for uniquely identifying each content object that can be generated, a method for identifying if a content object is already in the cache, a policy for marking data in the cache stale to indicate that background regeneration should be run, and a policy for expiring and replacing data in the cache. Ultimately keeping the unique content identification simple should help with performance while your policy for expiring objects and marking stale objects should be used to define the priority for background regeneration of content objects. These may be simple updates to your existing caching scheme, on the other hand it may be more effective for you to use a software package specifically made to address this need as it is not an uncommon problem.
Another problem is that you don't want to duplicate the work to regenerate content. If you have multiple parallel generation engines with differing capabilities this may not be so bad of a thing and it may be best to queue a task to each and remove the task from all other queues when the first generator completes the job. Consider tracking the object state when a regeneration is in progress so that multiple background regeneration tasks can be active without duplicating work unintentionally. Once again, this can be supplanted into your existing caching system or handled by a dedicated caching software package.
A third problem is concerned with what to do when a client requests data that is not cached and needs to be regenerated. If the data needs to be fully regenerated you will be stuck making the client wait for regeneration to complete, to help with long content generation times you could identify a policy for predictive prefetching content objects into cache but requires a method to identify relationships between content objects. Whether you want to serve the client a "regenerating" page until the requested content is available really depends on your client's expectations. Consider multi-level caches with compressed data archives if content regeneration cannot be improved from 10-15 seconds.
Making good use of a mature web caching software package will likely address all of these issues. Nick Gerakines mentioned Varnish which appears to be well suited to your needs. | unknown | |
d1752 | train | Why not use symlinks for releases? Below is an example of a deployment process that I've used on Laravel applications with Envoy. Aside from the PHP variable notation, it would be straightforward to substitute a purely bash/shell script if you are not using Envoy. Essentially, having a script automates the deployment, and using symlinks can make the update nearly instantaneous. Additional benefits include previous releases existing for the unfortunate time when a rollback is necessary.
Note: The below script makes some basic assumptions:
*
*Your .env file is in the $root_dir (ex: /var/www/my-website/.env).
*Your vhost points to the site/public directory within the $root_dir (ex: /var/www/my-website/site/public). However, if you can not update the vhost, you can simply add the following to number 4 below in the empty line:
ln -nfs {{ $app_dir }}/public {{ $root_dir }}/public ;
sudo chgrp -h www-data {{ $root_dir }}/public;
*You have added SSH keys to pull from Git repo
*(optional) nodejs is installed
Here are the relevant example variables for the script:
$repo = '[email protected]:myusername/my-repo.git';
$root_dir = '/var/www/my-website';
$release_dir = '/var/www/my-website/releases';
$app_dir = '/var/www/my-website/site';
$release = 'release_' . date('YmdHis');
$branch = 'master';
Here is the gist of the deployment process with code:
*
*Fetch the updated code into a new release directory:
@task('fetch_repo')
[ -d {{ $release_dir }} ] || mkdir {{ $release_dir }};
cd {{ $release_dir }};
git clone {{ $repo }} -b {{ $branch }} {{ $release }};
@endtask
*Install the dependencies by running composer:
@task('run_composer')
cd {{ $release_dir }}/{{ $release }};
composer install;
@endtask
*(optional) If we are using asset precompiler like Elixir, we will want to fetch npm dependencies, reset permissions, and run gulp:
@task('npm_install')
cd {{ $release_dir }}/{{ $release }};
sudo npm install;
@endtask
@task('update_permissions')
cd {{ $release_dir }};
sudo chgrp -R www-data {{ $release }};
sudo chmod -R ug+rwx {{ $release }};
@endtask
@task('compile_assets')
cd {{ $release_dir }}/{{ $release }};
gulp --production;
@endtask
*Update symlinks
@task('update_symlinks')
ln -nfs {{ $root_dir }}/.env {{ $release_dir }}/{{ $release }}/.env;
ln -nfs {{ $release_dir }}/{{ $release }} {{ $app_dir }};
sudo chgrp -h www-data {{ $app_dir }};
sudo service php5-fpm restart;
@endtask
*(Optional) Prune old release folders (30+ days old) so we don't fill up the server.
@task('prune_old')
sudo find {{ $release_dir }} -maxdepth 1 -type d -mtime +30 -exec rm -rf {} \;
@endtask
Note: Restarting the php5-fpm service clears the cache that ensures the new symlink is followed.
I found it somewhat difficult to find deployment script examples (like the aforementioned) when I initially began developing with Laravel, so hopefully this will help alleviate some searching. | unknown | |
d1753 | train | Couple of things:
*
*You're not actually referring to the sheet in the loop, because you aren't qualifying Range() with your ws variable
*You don't need to activate the worksheet to append a value to it
Try the below code instead:
Dim ws As Worksheet
Dim i As Long
Dim Coverage_ID As String
For Each ws In ActiveWorkbook.Worksheets
If ws.Range("C3").Value = "" Then
Coverage_ID = ws.Range("C2").Value
MsgBox Coverage_ID
ThisWorkbook.Worksheets(2).Range("A1").Offset(i, 0) = Coverage_ID
i = i + 1
Else
Coverage_ID = ws.Range("C3").Value
MsgBox Coverage_ID
ThisWorkbook.Worksheets(2).Range("A1").Offset(i, 0) = Coverage_ID
i = i + 1
End If
MsgBox ws.Name
Next ws | unknown | |
d1754 | train | Set the background color of the item's layout? Look here: https://github.com/cplain/custom-list - the concepts should be the same, just ignore my runnable
A: A quick way to do this is to create a couple custom styles; in the drawable folder you can create styles for normal and hover or pressed states:
So in ../drawable/ you want to make a couple elements:
1) The list_bg.xml file:
<?xml version="1.0" encoding="utf-8"?>
<shape xmlns:android="http://schemas.android.com/apk/res/android"
android:shape="rectangle">
<gradient
android:startColor="#db0000"
android:centerColor="#c50300"
android:endColor="#b30500"
android:angle="270" />
</shape>
2) The list_bg_hover.xml file:
<?xml version="1.0" encoding="utf-8"?>
<shape xmlns:android="http://schemas.android.com/apk/res/android"
android:shape="rectangle">
<gradient
android:startColor="#db0000"
android:centerColor="#c50300"
android:endColor="#b30500"
android:angle="270" />
</shape>
3) The list_selector.xml file:
<?xml version="1.0" encoding="utf-8"?>
<selector xmlns:android="http://schemas.android.com/apk/res/android">
<item android:drawable="@drawable/list_bg" android:state_pressed="false" android:state_selected="false"/>
<item android:drawable="@drawable/list_bg_hover" android:state_pressed="true"/>
<item android:drawable="@drawable/list_bg_hover" android:state_pressed="false" android:state_selected="true"/>
</selector>
Now, in order to use this all you have to do is attach the style to your layout for the ListView row item like this, android:listSelector="@drawable/list_selector" and that should do the trick. | unknown | |
d1755 | train | A combination of keys would help you:
*
*Home
*Shift + End
*Ctrl + Alt + C
But since you want to do it with just Ctrl + Alt + C, you can install extension called macros to make a macro, recorded multiple key combinations.
Create your own custom macros by adding them to your settings.json:
"macros": {
"copyWithoutNewLine": [
"cursorHome",
"cursorEndSelect",
"editor.action.clipboardCopyAction",
"cancelSelection",
"cursorUndo",
"cursorUndo",
"cursorUndo"
]
}
The created macro can have a custom name; in this example it's copyWithoutNewLine. The macro executes all of the above commands to copy the line.
After creating the macro, you need to add it to keybindings.json to run it:
{
"key": "ctrl+alt+c",
"command": "macros.copyWithoutNewLine",
"when": "editorTextFocus && !editorHasSelection"
}
When the key combination Ctrl + Alt + C is pressed, it will copy the line without a trailing new line, and you can paste it wherever you want.
A: Having struggled with this for a long time myself, I finally stumbled across the solution. Add these lines to keybindings.json:
{
"key": "cmd+alt+ctrl+v", // insert your desired shortcut here
"command": "editor.action.insertSnippet",
"args": {
"snippet": "$CLIPBOARD"},
"when": "inputFocus"
},
Now, pressing cmd+option+ctrl+v (or whatever shortcut you define) should paste without newline, regardless of how it was copied.
For an explanation and more cool things you can do with snippets, see https://code.visualstudio.com/docs/editor/userdefinedsnippets#:~:text=In%20Visual%20Studio%20Code%2C%20snippets,%3A%20Enable%20it%20with%20%22editor. | unknown | |
d1756 | train | Yes:
var arg = Expression.Parameter(typeof(object));
var expr = Expression.Property(Expression.Convert(arg, type), propertyName);
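If you then need something you can actually invoke, you can wrap that expression in a lambda and compile it. A sketch continuing from the two lines above (someInstance is a made-up variable holding an object whose runtime type is type):
var body = Expression.Convert(expr, typeof(object)); // box/convert the property value to object
var getter = Expression.Lambda<Func<object, object>>(body, arg).Compile();
object value = getter(someInstance);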
Note: the return type (object) means that many types will need to be boxed. Since you mention you are doing this for filtering: if possible, try to avoid this box by creating instead a Func<object,bool> that does any comparisons etc internally without boxing. | unknown | |
d1757 | train | Can you use one of the other configuration methods (SQL Server or XML)? Since you have file access, using XML seems appropriate.
See the section for "Types of Configurations":
http://msdn.microsoft.com/en-us/library/cc895212.aspx | unknown | |
d1758 | train | The only risk of underpopulated facets are when they misrepresent the search. I'm sure you've used a search site where the metadata you want to facet on is underpopulated so that when you apply the facet you also eliminate from your result set a number of records that should have been included. The thing to watch is that the facet values are populated consistently where they are appropriate. That means that your "tea" records don't need to have a number of cores listed, and it won't impact anything, but all of your "processor" records should, and (to whatever extent possible) they should be populated consistently. This means that if one processor lists its number of cores as "4", and another says "quadcore", these are two different values and a user applying either facet value will eliminate the other processor from their result. If a third quadcore processor is entirely missing the "number of cores" stat from the no_cores facet field (field name is arbitrary), then your facet could be become counterproductive.
So, we can throw all of these records into the same Solr, and as long as the facets are populated consistently where appropriate, it's not really necessary that they be populated for all records, especially when not applicable.
Applying facets dynamically
Most of what you need to know is in the faceting documentation of Solr. The important thing is to specify the appropriate arguments in your query to tell Solr which facets you want to use. (Until you actually facet on a field, it's not a facet but just a field that's both stored="true" and indexed="true".) For a very dynamic effect, you can specify all of these arguments as part of the query to Solr.
&facet=true
This may seem obvious, but you need to turn on faceting. This argument is convenient because it also allows you to turn off faceting with facet=false even if there are lots of other arguments in your query detailing how to facet. None of it does anything if faceting is off.
&facet.field=no_cores
You can include this field over and over again for as many fields as you're interested in faceting on.
&facet.limit=7
&f.no_cores.facet.limit=4
The first line here limits the number of values for returned by Solr for each facet field to 7. The 7 most frequent values for the facet (within the search results) will be returned, with their record counts. The second line overrides this limit for the no_cores field specifically.
&facet.sort=count
You can either list the facet field's values in order by how many appear in how many records (count), or in index order (index). Index order generally means alphabetically, but depends on how the field is indexed. This field is used together with facet.limit, so if the number of facet values returned is limited by facet.limit they will either be the most numerous values in the result set or the earliest in the index, depending on how this value is set.
&facet.mincount=1
There are very few circumstances that you will want to see facet values that appear zero times in your search results, and this can fix the problem if it pops up.
The end result is a very long query:
http://localhost/solr/collecion1/search?facet=true&facet.field=no_cores&
facet.field=socket_type&facet.field=processor_type&facet.field=speed&
facet.limit=7&f.no_cores.facet.limit=4&facet.mincount=1&defType=dismax&
qf=name,+manufacturer,+no_cores,+description&
fl=id,name,no_cores,description,price,shipment_mode&q="Intel"
This is definitely effective, and allows for the greatest amount of on-the-fly decision-making about how the search should work, but isn't very readable for debugging.
Applying facets less dynamically
So these features allow you to specify which fields you want to facet on, and do it dynamically. But, it can lead to a lot of very long and complex queries, especially if you have a lot of facets you use in each of several different search modes.
One option is to formalize each set of commonly used options in a request handler within your solrconfig.xml. This way, you apply the exact same arguments but instead of listing all of the arguments in each query, you just specify which request handler you want.
<requestHandler name="/processors" class="solr.SearchHandler">
<lst name="defaults">
<str name="defType">dismax</str>
<str name="echoParams">explicit</str>
<str name="fl">id,name,no_cores,description,price,shipment_mode</str>
<str name="qf">name, manufacturer, no_cores, description</str>
<str name="sort">score desc</str>
<str name="rows">30</str>
<str name="wt">xml</str>
<str name="q.alt">*</str>
<str name="facet.mincount">1</str>
<str name="facet.field">no_cores</str>
<str name="facet.field">socket_type</str>
<str name="facet.field">processor_type</str>
<str name="facet.field">speed</str>
<str name="facet.limit">10</str>
<str name="facet.sort">count</str>
</lst>
<lst name="appends">
<str name="fq">category:processor</str>
</lst>
</requestHandler>
If you set up a request hander in solrconfig.xml, all it does is serve as a shorthand for a set of query arguments. You can have as many request handlers as you want for a single solr index, and you can alter them without rebuilding the index (reload the Solr core or restart the server application (JBoss or Tomcat, e.g.), to put changes into effect).
There are a number of things going on with this request handler that I didn't get into, but it's all just a way of representing default Solr request arguments so that your live queries can be simpler. This way, you might make a query like:
http://localhost/solr/collection1/processors?q="Intel"
to return a result set with all of your processor-specific facets populated, and filtered so that only processor records are returned. (This is the category:processor filter, which assumes a field called category where all the processor records have a value processor. This is entirely optional and up to you.) You will probably want to retain the default search request handler that doesn't filter by record category, and which may not choose to apply any of the available (stored="true" and indexed="true") fields as active facets. | unknown | |
d1759 | train | $Content = hot("britney") ? "britney found" : "<br>";
A: Just store stuff in a variable then write variable.
// put stuff in $content instead of printing
$content = '';
$content .= hot("britney") ? "britney found" : "<br>";
$content .= hot("gaga") ? "gaga found" : "<br>";
$content .= hot("carol") ? "carol found" : "<br>";
// write to file
$handle = fopen($filename, 'x+');
fwrite($handle, $content);
fclose($handle);
A: As others have already stated, put the contents of your echos into a variable then write that into a file. There are two ways to do this. You can use file handlers:
<?php
// "w" will create the file if it does not exist and overwrite if it does
// "a" will create the file if it does not exist and append to the end if it does
$file = fopen('/path/to/file', 'w');
fwrite($file, $content);
fclose($file);
A slightly simpler way is to use file_put_contents():
<?php
file_put_contents('/path/to/file', $contents);
And if you want to append to the file:
<?php
file_put_contents('/path/to/file', $contents, FILE_APPEND);
As for the parenthesis you have around the conditional for your echos, I prefer something more like the following:
<?php
$contents = '';
$contents .= (hot('britney') ? 'britney found' : '<br />');
If you want to be able to easily read the file outside of a web browser, you should use a new line instead of a <br /> to separate your output. For example:
<?php
$contents = '';
$contents .= (hot('britney') ? 'britney found'."\n" : "\n");
A: Replace all of the echo statements with a variable, then write that variable to a file.
$Content.= ( hot("britney") )?"britney found":"<br>";
etc...
A: First off, your parentheses are in the wrong place. Should be like this:
echo ( hot("britney") ? "britney found" : "" );
If you want to capture your echo and other output, use the ob_start and ob_flush methods.
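A minimal sketch of that (the output file name is arbitrary):
<?php
ob_start();                                   // start capturing all echoed output
echo hot("britney") ? "britney found" : "<br>";
echo hot("gaga") ? "gaga found" : "<br>";
$content = ob_get_clean();                    // grab the buffer and stop capturing
file_put_contents("results.html", $content);  // write what would have been echoed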
But if you're making HTTP requests to that file it won't echo on the screen.
If that's what you meant, then that's your answer. | unknown | |
d1760 | train | As per the docs
When you create a new Handler, it is bound to the thread / message queue of the thread that is creating it -- from that point on, it will deliver messages and runnables to that message queue and execute them as they come out of the message queue.
And, in your code, you are trying to manipulate the UI elements, so the Handler should be created on the UI Thread
Handler handler = new Handler();
mainThing.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View view) {
randomTimeDelay = randomNumber * 1000;
if (previousTapDetected){
mainThing.setText("You taped too fast");
mainThing.setBackgroundColor(ContextCompat
.getColor(getApplicationContext(), R.color.red));
handler.postDelayed(new Runnable() {
@Override
public void run() {
mainThing.setText("Try again");
}
}, 750); | unknown | |
d1761 | train | That looks like an EEGLAB dataset file, which is simply a regular MAT-file with a structure variable stored inside. This structure contains various info about the biosignals.
You could manually load the data using LOAD function, or use the provided GUI to open the dataset.
>> load Target_1.set -mat
>> EEG
EEG =
setname: 'Target_1'
filename: 'Target_1.set'
filepath: '/home/julie/FiveBox_JO'
subject: ''
group: ''
condition: ''
session: []
comments: [1x803 char]
nbchan: 238
trials: 1
pnts: 99129
srate: 256
xmin: 0
xmax: 387.22
times: []
data: [238x99129 single]
icaact: []
icawinv: [238x238 double]
icasphere: [238x238 double]
icaweights: [238x238 double]
icachansind: [1x238 double]
chanlocs: [1x238 struct]
urchanlocs: []
chaninfo: [1x1 struct]
ref: 'common'
event: [1x616 struct]
urevent: [1x18879 struct]
eventdescription: {'' '' '' ''}
epoch: []
epochdescription: {}
reject: [1x1 struct]
stats: [1x1 struct]
specdata: []
specicaact: []
splinefile: ''
icasplinefile: ''
dipfit: [1x1 struct]
history: [1x1022 char]
saved: 'yes'
etc: []
A: You can add an import plug-in (for BDF) in EEGLAB, then load your data into EEGLAB and export it as a .bdf file. After that you will have a .bdf file, but it cannot be used yet: you still need to give it the '*.bdf' file extension. Finally, the .bdf file can be loaded into other software for analysis. | unknown | 
d1762 | train | The solution turned out to be a case of manipulating the underlying draggable of the dialog box:
$("#dialog2").dialog("widget").draggable(
{
containment : [ 0, 0, 10000, 10000 ],
scroll: true,
scrollSensitivity : 100
});
Obviously, these values can be played with to achieve different results. I hope this helps anyone else in the same position!
jsFiddle
A: I looked over the documentation and apparently you are able to achieve this with using CSS and changing the overflow value.
http://jsfiddle.net/vnVhE/1/embedded/result/
As you can see the CSS applied is:
// disable scrolling in the other panes
.ui-layout-pane-north ,
.ui-layout-pane-west,
.ui-layout-pane-south {
overflow: hidden !important;
}
.ui-layout-layout-center {
overflow: auto
}
NOTE: Please keep in mind while this allows horizontal scrolling it is a bit tricky and hackish at best in my opinion. Under Chrome I could scroll just fine if I held the mouse near the edge of the vertical scroll bar and it moved properly. | unknown | |
d1763 | train | Use document.getElementById("innerPart").offsetWidth
See this answer
If you wanted the width property it's off the style property. E.g. document.getElementById("innerPart").style.width
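To see the difference side by side (assuming the element with that id exists):
var el = document.getElementById("innerPart");
console.log(el.style.width);  // "" unless a width was set inline on the element
console.log(el.offsetWidth);  // rendered width in pixels, including padding and borders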
The difference is that offsetWidth is a computed property: it takes into account borders and padding, whereas width doesn't. You may find that width is empty (null), and I'm assuming that is the case if the width hasn't been specifically set by you. | unknown | 
d1764 | train | After making those changes quit and rerun manage.py runserver
try changing this <a href="{{ MEDIA_URL }}{{project.thirdquestiondetail.third_seven.url}}">Click here to see the file</a>
to <a href="{% get_media_prefix %}{{project.thirdquestiondetail.third_seven.url}}">Click here to see the file</a>
A: I noticed that the upload was also not actually working.
I solved as follow:
1) In my views.py I have to change the request.POST['chosen-name'] to request.FILES['chosen-name']
2) Add the attribute enctype="multipart/form-data" to my hard-coded html form in my template.html
So from:
<form method="post">..</form>
to:
<form method="post" enctype="multipart/form-data">..</form>
Check: Django FileField upload is not working for me | unknown | |
d1765 | train | You can optionally secure the content in your Amazon S3 bucket so users can access it through CloudFront but cannot access it directly by using Amazon S3 URLs. This prevents anyone from bypassing CloudFront and using the Amazon S3 URL to get content that you want to restrict access to. This step isn't required to use signed URLs, but we recommend it.
To require that users access your content through CloudFront URLs, you perform the following tasks:
*
*Create a special CloudFront user called an origin access identity.
*Give the origin access identity permission to read the objects in your bucket (a sample bucket policy is sketched after this list).
*Remove permission for anyone else to use Amazon S3 URLs to read the objects.
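For the second task, the bucket policy usually ends up looking roughly like this (the OAI ID and bucket name are placeholders, so treat it as a sketch rather than a copy-paste policy):
{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "AllowCloudFrontOAIReadOnly",
    "Effect": "Allow",
    "Principal": {
      "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity E1EXAMPLE23456"
    },
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::your-bucket-name/*"
  }]
}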
Please see documentation here | unknown | |
d1766 | train | This happens because the ul has a default padding-inline-start of 40px.
Adding a padding: 0px to the nav ul selector would fix the issue.
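For example (assuming the list really is a ul inside a nav element, as that selector suggests):
nav ul {
    padding: 0; /* removes the default padding-inline-start (40px in Chrome) */
}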
You can see that default value in Chrome Dev Tools, in the bottom right of this screenshot | unknown | 
d1767 | train | Without looking at any CSS and HTML code it is pretty difficult to tell.
But since it looks like you are using the Google Chrome browser, hover over the grey stripe with your mouse, right-click and select Inspect Element. You can then review the html and css code related to what you are looking at. You can also open the Chrome dev console at any time by hitting the CTRL-SHIFT-i keys. | unknown | |
d1768 | train | The javac compiler is expected to be a JVM application (if only because javax.lang.model is a Java-based API). So it can naturally use runtime Reflection during its execution.
What the documentation tries to say, a bit clumsily perhaps, is that the compiler isn't expected to load the classes it builds from source code (or their binary compilation dependencies). But when you use Element#getAnnotation(Class<A> annotationType), it might have to.
The documentation you cited actually lists several Exception classes, that may be thrown due to this:
*
*MirroredTypeException. As you already realized, it is thrown when you try to load a type that hasn't been built yet, used as an argument within an annotation (see the sketch after this list).
*AnnotationTypeMismatchException, EnumConstantNotPresentException and IncompleteAnnotationException. Those exceptions can be thrown when the version of annotation loaded by javac does not match the version, referenced by build classpath
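For the first case, the usual workaround is to trigger the exception on purpose and use the TypeMirror it carries instead of the Class. A sketch (MyAnnotation and its value() member are made up):
private TypeMirror annotationClassValue(Element element) {
    try {
        element.getAnnotation(MyAnnotation.class).value(); // may force loading the class
        return null; // only reached if the class could actually be loaded; handle as needed
    } catch (MirroredTypeException mte) {
        return mte.getTypeMirror(); // compile-time view of the not-yet-built type
    }
}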
The mismatched versions can be hard to avoid, because some build systems, such as Gradle, require you to specify annotation processor separately from the rest of dependencies. | unknown | |
d1769 | train | Restart the TiDB service, add the -skip-grant-table=true parameter in the configuration file. Log into the cluster without password and recreate the user, or recreate the mysql.user table using the following statement:
DROP TABLE IF EXISTS mysql.user;
CREATE TABLE if not exists mysql.user (
Host CHAR(64),
User CHAR(16),
Password CHAR(41),
Select_priv ENUM('N','Y') NOT NULL DEFAULT 'N',
Insert_priv ENUM('N','Y') NOT NULL DEFAULT 'N',
Update_priv ENUM('N','Y') NOT NULL DEFAULT 'N',
Delete_priv ENUM('N','Y') NOT NULL DEFAULT 'N',
Create_priv ENUM('N','Y') NOT NULL DEFAULT 'N',
Drop_priv ENUM('N','Y') NOT NULL DEFAULT 'N',
Process_priv ENUM('N','Y') NOT NULL DEFAULT 'N',
Grant_priv ENUM('N','Y') NOT NULL DEFAULT 'N',
References_priv ENUM('N','Y') NOT NULL DEFAULT 'N',
Alter_priv ENUM('N','Y') NOT NULL DEFAULT 'N',
Show_db_priv ENUM('N','Y') NOT NULL DEFAULT 'N',
Super_priv ENUM('N','Y') NOT NULL DEFAULT 'N',
Create_tmp_table_priv ENUM('N','Y') NOT NULL DEFAULT 'N',
Lock_tables_priv ENUM('N','Y') NOT NULL DEFAULT 'N',
Execute_priv ENUM('N','Y') NOT NULL DEFAULT 'N',
Create_view_priv ENUM('N','Y') NOT NULL DEFAULT 'N',
Show_view_priv ENUM('N','Y') NOT NULL DEFAULT 'N',
Create_routine_priv ENUM('N','Y') NOT NULL DEFAULT 'N',
Alter_routine_priv ENUM('N','Y') NOT NULL DEFAULT 'N',
Index_priv ENUM('N','Y') NOT NULL DEFAULT 'N',
Create_user_priv ENUM('N','Y') NOT NULL DEFAULT 'N',
Event_priv ENUM('N','Y') NOT NULL DEFAULT 'N',
Trigger_priv ENUM('N','Y') NOT NULL DEFAULT 'N',
PRIMARY KEY (Host, User));
INSERT INTO mysql.user VALUES ("%", "root", "", "Y", "Y", "Y", "Y", "Y", "Y", "Y", "Y", "Y", "Y", "Y", "Y", "Y", "Y", "Y", "Y", "Y", "Y", "Y", "Y", "Y", "Y", "Y"); | unknown | |
d1770 | train | When you have a table partitioned by Day, you can directly reference the partition day you want to query.
In order to demonstrate your case, I have used the following table schema:
Field name Type Mode Policy tags Description
date_formatted DATE NULLABLE
fullvisitorId STRING NULLABLE
Other table's details,
Table type Partitioned
Partitioned by Day
Partitioned on field date_formatted
Partition filter Not required
And some sample data,
Row date_formatted fullvisitorId
1 2016-12-30 6449885916997461186
2 2016-12-30 3401232735815769402
3 2016-12-30 2100622457042859506
4 2016-12-30 4434434796889840043
5 2016-12-31 9382207991125014696
6 2017-12-30 4226029488400478200
7 2017-12-31 4304624161918005939
8 2017-12-31 4239590118714521081
9 2018-12-30 0030006068136142781
10 2018-12-30 7849866399135936504
You can use the syntax below to query the above sample data,
DECLARE dt DATE DEFAULT Date(2016,12,30);
SELECT * FROM `project.dataset.table_name` WHERE date_formatted = dt
The output,
Row date_formatted fullvisitorId
1 2016-12-30 6449885916997461186
2 2016-12-30 3401232735815769402
3 2016-12-30 2100622457042859506
4 2016-12-30 4434434796889840043
As you can see it only retrieved the data for the specific date I declared.
Notice that I have used the DECLARE clause because it facilitates modifying the date filter. Also, if your field is formatted as a TIMESTAMP, you can replace DATE() with TIMESTAMP() to define your filter within your variable.
As an additional information, if you want to use a range, consider using the BETWEEN clause such as WHERE partition_field BETWEEN date_1 and date_2.
UPDATE:
I have used your sample data this time, I have used the below syntax to create a table exactly like you described. Below is the code:
create table dataset.table_name(_time timestamp, dummy_column string) partition by date(_time)
as select timestamp '2020-06-15 23:57:00 UTC' as _time, "a" as dummy_column union all
select timestamp '2020-06-15 23:58:00 UTC' as _time, "b" as dummy_column union all
select timestamp '2020-06-15 23:59:00 UTC' as _time, "c" as dummy_column union all
select timestamp '2020-06-16 00:00:00 UTC' as _time, "d" as dummy_column union all
select timestamp '2020-06-16 00:00:01 UTC' as _time, "e" as dummy_column union all
select timestamp '2020-06-16 00:00:02 UTC' as _time, "f" as dummy_column
The table:
The schema:
The details:
In order to select only one date from your timestamp field (_time), you can do as follows:
SELECT * FROM `project.dataset.table` WHERE DATE(_time) = "2020-06-15"
And the output,
As it is shown above the output is as you desired.
Moreover, as an extra information I would like to encourage you to have a look at this documentation about partition by. | unknown | |
d1771 | train | It is irrelevant for me now because from now on I will use .on, since I upgraded jQuery. | unknown | |
d1772 | train | In addition to setting a replication factor of 2 or 3 on your topics to ensure backup replica copies are eventually created, you should also publish messages with acks=all to ensure that acknowledgements indicate a guarantee that the data has been written to all the replicas. Otherwise with acks=1 you get an ack after only 1 copy is committed, and with acks=0 you get no acks at all so you would never know if your published messages ever made it into the Kafka commit log or not.
Also set unclean leader election parameter to false to ensure that only insync replicas can ever become the leader.
A: In Kafka you can define the replication factor for a topic and in this way each partition is replicated on more broker. One of them is a leader where producer and consumer connect for exchanging messages. The other will be followers which get copies of messages from the leader to be in sync. If the leader goes down, a new leader election starts between all in sync replicas. Kafka will support N-1 failed brokers where N is the replication factor.
A: yes , the replication factor defines this. | unknown | |
d1773 | train | I would recommend Microsoft's Detours (C++ x86 only) or EasyHook (C++ & C#, x86/x64).
http://easyhook.codeplex.com/
I've used it before, works pretty well. You have to pass a function or address and where you want it redirected to, and you can have all calls (for all processes or a specific one) sent into your function. The tutorials cover most of the basics, but I can edit code into this answer if you'd like.
A bit of trivia is that it also works the other way. Pass a pointer to your function and you can redirect calls into external code. Makes for some interesting integration with old apps or closed-source ones.
A: You can use Deviare API Hook, use DeviareCSharpConsole that is a tool that is in the package that let you hook any API and see parameter values in a treeview-like control.
The only trick that it needs in Windows7 is to be load as admin, I reported.
A: How I Built a Working Poker Bot has samples of injecting code and hooking gdi events. | unknown | |
d1774 | train | If I'm interpreting your intent correctly, then you can just do this:
String dynamoDBTypeName = getDynamoDBClassName(someInterface);
Class<?> clazz = Class.forName(dynamoDBTypeName);
Object loaded = mapper.load(clazz, hashKey);
Whether you get the class from Someclass.class or Class.forName(...), it's the same class object. The values of type parameters are only defined at compile time, so there is absolutely no difference in the information passed to mapper.load, the code that implements mapper.load, or the value that it returns.
The only difference is that if you pass a Class<X> to mapper.load() then the compiler knows that it will return an X and you can assign it to a variable of type X without casting. If you pass a Class<?> the compiler only knows that it will return an Object. It can be cast to an instance of the class by some code that knows what type it's supposed to be. | unknown | |
d1775 | train | You can only save primitive types in SharedPreferences. If you need to save an ArrayList then you can save it as a comma-separated String. When fetching it back, use split(",") on that String and you will get a String[] out of it.
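A quick sketch of that idea (the preference file name, key and namesList variable are made up):
SharedPreferences prefs = context.getSharedPreferences("app_prefs", Context.MODE_PRIVATE);
// save: join the list into a single comma-separated string
prefs.edit().putString("names", TextUtils.join(",", namesList)).apply();
// load: split the stored string back into its values
String[] names = prefs.getString("names", "").split(",");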
And if you want to save a list of objects then I suggest using a singleton class for it. Here is an example of a singleton class, try it if you want.
public class ReferenceWrapper {
private Context context;
private static ReferenceWrapper wrapper;
private ArrayList<Object> list;
private ReferenceWrapper(Context context) {
this.context = context;
}
public static ReferenceWrapper getInstance(Activity activity) {
if (wrapper == null) {
wrapper = new ReferenceWrapper(activity);
}
return wrapper;
}
public ArrayList<Object> getList() {
return list;
}
public void setList(ArrayList<Object> list) {
this.list = list;
}
}
And use it like this
ReferenceWrapper wrapper=ReferenceWrapper.getInstance(MainActivity.this);
 wrapper.setList(yourArrayList);
And get it in any activity
ReferenceWrapper wrapper=ReferenceWrapper.getInstance(MainActivity.this);
 ArrayList<Object> list = wrapper.getList();
It will return the same ArrayList you saved, because only one instance of the wrapper is ever created.
Let me know if it helps.
A: This answer is about writing the object to a File instead of SharedPreferences. Hope this may help.
try {
ArrayList<List_addr> addrList = new ArrayList<>();
addrList.add(new List_addr("Bangalore", "Its a City"));
addrList.add(new List_addr("Delhi", "Its also a City"));
//write object into a file
FileOutputStream fos = openFileOutput("addrList", Context.MODE_PRIVATE);
ObjectOutputStream oos = new ObjectOutputStream(fos);
oos.writeObject(addrList);
oos.close();
//read object from the file
FileInputStream fis = openFileInput("addrList");
ObjectInputStream ois = new ObjectInputStream(fis);
ArrayList<List_addr> readAddrList = (ArrayList<List_addr>) ois.readObject();
ois.close();
for (List_addr address : readAddrList) {
Log.i("TAG", "Name " + address.getTitle() + " City " + address.getDetail());
}
} catch (FileNotFoundException e) {
e.printStackTrace();
} catch (IOException e) {
e.printStackTrace();
} catch (ClassNotFoundException e) {
e.printStackTrace();
}
Important: please see @MeetTitan's comments on this answer.
A: Step 1:
Put string array set into the shared preference.
pref.putStringSet(String key, Set<String> values);
Step 2:
Convert array list of pojo into set.
Set<String> set = new HashSet<String>(list);
Step 3:
Retrieve it using getStringSet.
A: To save arraylist in shared preferences do the following
// Create List of address that you want to save
ArrayList addressList = new ArrayList();
addressList.add(new list_addr());
addressList.add(new list_addr());
SharedPreferences prefs = getSharedPreferences("address", Context.MODE_PRIVATE);
//save the user list to preference
Editor editor = prefs.edit();
try {
editor.putString("addressList", ObjectSerializer.serialize(addressList));
} catch (IOException e) {
e.printStackTrace();
}
editor.commit();
and then to retrieve arraylist use this
ArrayList addressList = new ArrayList();
// Load address List from preferences
SharedPreferences prefs = getSharedPreferences("address", Context.MODE_PRIVATE);
try {
addressList = (ArrayList) ObjectSerializer.deserialize(prefs.getString("addressList", ObjectSerializer.serialize(new ArrayList())));
} catch (IOException e) {
e.printStackTrace();
}
And this is the link for objectSerializer class
Object Serializer | unknown | |
d1776 | train | First of all, you cannot recognize whether a browser is closed with HTML and PHP alone. You would need ajax and constant polling or some kind of thing to know the browser is still there. Possible, but a bit complicated, mainly because you might run into trouble if a browser is still there (session is valid) but has no internet connection for a few minutes (laptop, crappy wlan, whatever).
You cannot have a sessionHandler which does this for you in PHP because PHP is executed when a script is retrieved from your server. After the last line is executed, it stops. If no one ever retrieves the script again, how should it do something? There is no magic that restarts the script to check if the session is still there.
So, what to do? First of all you want to make the session visible by using database session storage or something like that. Then you need a cronjob starting a script, looking up all sessions and deciding which one is invalid by now and then does something with it (like deleting the folder). Symfony can help as it allows you to configure session management in a way that it stores sessions in the database (see here) as well as creating a task which can be executed via crontab (see here).
The logical part, which contains deciding which session is invalid and what to do with this sessions) is your part. But it shouldn't be very hard as you got the session time and value in the database. | unknown | |
d1777 | train | Partial Mode is relatively rare. According to the content you provide, I think it may be caused by the following reasons:
The project is currently loading. Once loading completes, you will start getting project-wide IntelliSense for it. In these cases, VS Code's IntelliSense will operate in partial mode. Partial mode tries its best to provide IntelliSense for any Python files you have open, but is limited and is not able to offer any cross-file IntelliSense features.
I think you could spend more time waiting for vscode to load. Of course, if it still doesn't work, you could reinstall the python extension. | unknown | |
d1778 | train | I can figure out how to do a 3 color scale in XLSX writer, but there doesnt seem to be an option (I can see) for midpoint being a number:
You can use the min_type, mid_type and max_type parameters to set the following types:
min (for min_type only)
num
percent
percentile
formula
max (for max_type only)
See Conditional Format Options
So in your case it should be something like.
worksheet1.conditional_format('D2:D12', {'type': '3_color_scale',
'min_color': "red",
'mid_color': "yellow",
'max_color': "green",
'mid_type': "num"})
However, I'm not sure if that will fix your overall problem. Maybe add that to your example and if it doesn't work then open a second question.
One thing that you will have to figure out is how to do what you want in Excel first. After that it is generally easier to figure out what is required in XlsxWriter.
A: I know this is an old question but I just ran into this problem and figured out how to solve it.
Below is a copy of a utility function I wrote for my work. The main thing is that the min, mid and max types ALL need to be 'num' and they need to specify values for these points.
If you only set the mid type to 'num' and value to 0 then the 3 color scale will still use min and max for the end points. This means that if the contents of the column are all on one side of the pivot point the coloring will in effect disregard the pivot.
from xlsxwriter.utility import xl_col_to_name as index_to_col
MIN_MIN_FORMAT_VALUE = -500
MAX_MAX_FORMAT_VALUE = 500
def conditional_color_column(
worksheet, df, column_name, min_format_value=None, pivot_value=0, max_format_value=None):
"""
Do a 3 color conditional format on the column.
The default behavior for the min and max values is to take the min and max values of each column, unless said value
is greater than or less than the pivot value respectively at which point the values MIN_MIN_FORMAT_VALUE and
MAX_MAX_FORMAT_VALUE are used. Also, if the min and max vales are less than or greater than respectively of
MIN_MIN_FORMAT_VALUE and MAX_MAX_FORMAT_VALUE then the latter will be used
:param worksheet: The worksheet on which to do the conditional formatting
:param df: The DataFrame that was used to create the worksheet
:param column_name: The column to format
:param min_format_value: The value below which all cells will have the same red color
:param pivot_value: The pivot point, values less than this number will gradient to red, values greater will gradient to green
:param max_format_value: The value above which all cells will have the same green color
:return: Nothing
"""
column = df[column_name]
min_value = min(column)
max_value = max(column)
last_column = len(df.index)+1
column_index = df.columns.get_loc(column_name)
excel_column = index_to_col(column_index)
column_to_format = f'{excel_column}2:{excel_column}{last_column}'
if min_format_value is None:
min_format_value = max(min_value, MIN_MIN_FORMAT_VALUE)\
if min_value < pivot_value else MIN_MIN_FORMAT_VALUE
if max_format_value is None:
max_format_value = min(max_value, MAX_MAX_FORMAT_VALUE)\
if max_value > pivot_value else MAX_MAX_FORMAT_VALUE
color_format = {
'type': '3_color_scale',
'min_type': 'num',
'min_value': min_format_value,
'mid_type': 'num',
'mid_value': pivot_value,
'max_type': 'num',
'max_value': max_format_value
}
worksheet.conditional_format(column_to_format, color_format) | unknown | |
d1779 | train | You need to assign a linkage name to the textField in the library first. Let's say you give it a linkage name of title. Now you can do
title.width = title.parent.parent.width/2; | unknown | |
d1780 | train | It seems you are calling .call passing in the current scope which would be window.
(function(send) {
XMLHttpRequest.prototype.send = function(data) {
var self = this;
setTimeout( function () {
send.call(self, data); //Updated `this` context
},3000);
};
})(XMLHttpRequest.prototype.send); | unknown | |
d1781 | train | After searching through the XML response from the server, it looks like Netsuite was responding with only the columns declared in my saved search as I wanted. The other null values I was receiving were initialized as default values when the response object was initialized. | unknown | |
d1782 | train | If you have FileZilla, you can use this trick:
*
*click on the folder(s) whose size you want to calculate
*click on Add files to queue
This will scan all folders and files and add them to the queue. Then look at the queue pane and below it (on the status bar) you should see a message indicating the queue size.
A: WinSCP (free GUI on Microsoft Windows):
A: If you just need the work done, then SmartFTP might help you, it also has a PHP and ASP script to get the total folder size by recursively going through all the files.
A: You can use the du command in lftp for this purpose, like this:
echo "du -hs ." | lftp example.com 2>&1
This will print the current directory's disk size incl. all subdirectories, in human-readable format (-h) and omitting output lines for subdirectories (-s). stderr output is rerouted to stdout with 2>&1 so that it is included in the output.
However, lftp is a Linux-only software, so to use it from C# under Windows you would need to install it in the integrated Windows Subsystem for Linux (WSL) or using Cygwin or MSYS2. (Thanks to the commenters for the hints!)
The lftp du command documentation is missing from its manpage, but available within the lftp shell with the help du command. For reference, I copy its output here:
lftp :~> help du
Usage: du [options] <dirs>
Summarize disk usage.
-a, --all write counts for all files, not just directories
--block-size=SIZ use SIZ-byte blocks
-b, --bytes print size in bytes
-c, --total produce a grand total
-d, --max-depth=N print the total for a directory (or file, with --all)
only if it is N or fewer levels below the command
line argument; --max-depth=0 is the same as
--summarize
-F, --files print number of files instead of sizes
-h, --human-readable print sizes in human readable format (e.g., 1K 234M 2G)
-H, --si likewise, but use powers of 1000 not 1024
-k, --kilobytes like --block-size=1024
-m, --megabytes like --block-size=1048576
-S, --separate-dirs do not include size of subdirectories
-s, --summarize display only a total for each argument
--exclude=PAT exclude files that match PAT
A: You could send the LIST command which should give you a list of files in the directory and some info about them (fairly certain the size is included), which you could then parse out and add up.
Depends on how you connect to the server, but if you're useing the WebRequest.Ftp class there's the ListDirectoryDetails method to do this. See here for details and here for some sample code.
Just be aware, if you want to have the total size including all subdirectories, I think you'll have to enter each subdirectory and call it recursively so it could be quite slow. It can be quite slow thought so normally I'd recommended, if possible, to have a script on the server calculate the size and return the result in some way (possibly storing it in a file you could download and read).
Edit: Or if you just mean that you'd be happy with a tool that does it for you, I think FlashFXP does it and probably other advanced FTP clients will as well. Or if it's a unix server I have a vague memory that you could just login and type ls -laR or something to get a recursive directory listing.
A: I use the FTPS library from Alex Pilotti with C# to execute some FTP commands in a few production environments. The library works well, but you have to recursively get a list of files in the directory and add their sizes together to get the result. This can be a bit time consuming on some of our larger servers (sometimes 1-2 min) with complex file structures.
Anyway, this is the method I use with his library:
/// <summary>
/// <para>This will get the size for a directory</para>
/// <para>Can be lengthy to complete on complex folder structures</para>
/// </summary>
/// <param name="pathToDirectory">The path to the remote directory</param>
public ulong GetDirectorySize(string pathToDirectory)
{
try
{
var client = Settings.Variables.FtpClient;
ulong size = 0;
if (!IsConnected)
return 0;
var dirList = client.GetDirectoryList(pathToDirectory);
foreach (var item in dirList)
{
if (item.IsDirectory)
size += GetDirectorySize(string.Format("{0}/{1}", pathToDirectory, item.Name));
else
size += item.Size;
}
return size;
}
catch (Exception ex)
{
Console.WriteLine(ex.Message);
}
return 0;
}
A: Simplest and Efficient way to Get FTP Directory Size with it's all Contents recursively.
var size = FtpHelper.GetFtpDirectorySize("ftpURL", "userName",
"password");
using System;
using System.Collections.Generic;
using System.IO;
using System.Net;
using System.Threading;
using System.Threading.Tasks;
public static class FtpHelper
{
public static long GetFtpDirectorySize(Uri requestUri, NetworkCredential networkCredential, bool recursive = true)
{
//Get files/directories contained in CURRENT directory.
var directoryContents = GetFtpDirectoryContents(requestUri, networkCredential);
long ftpDirectorySize = default(long); //Set initial value of the size to default: 0
var subDirectoriesList = new List<Uri>(); //Create empty list to fill it later with new founded directories.
//Loop on every file/directory founded in CURRENT directory.
foreach (var item in directoryContents)
{
//Combine item path with CURRENT directory path.
var itemUri = new Uri(Path.Combine(requestUri.AbsoluteUri + "\\", item));
var fileSize = GetFtpFileSize(itemUri, networkCredential); //Get item file size.
if (fileSize == default(long)) //This means it has no size so it's a directory and NOT a file.
subDirectoriesList.Add(itemUri); //Add this item Uri to subDirectories to get it's size later.
else //This means it has size so it's a file.
Interlocked.Add(ref ftpDirectorySize, fileSize); //Add file size to overall directory size.
}
if (recursive) //If recursive true: it'll get size of subDirectories files.
//Get size of selected directory and add it to overall directory size.
Parallel.ForEach(subDirectoriesList, (subDirectory) => //Loop on every directory
Interlocked.Add(ref ftpDirectorySize, GetFtpDirectorySize(subDirectory, networkCredential, recursive)));
return ftpDirectorySize; //returns overall directory size.
}
public static long GetFtpDirectorySize(string requestUriString, string userName, string password, bool recursive = true)
{
//Initialize Uri/NetworkCredential objects and call the other method to centralize the code
return GetFtpDirectorySize(new Uri(requestUriString), GetNetworkCredential(userName, password), recursive);
}
public static long GetFtpFileSize(Uri requestUri, NetworkCredential networkCredential)
{
//Create ftpWebRequest object with given options to get the File Size.
var ftpWebRequest = GetFtpWebRequest(requestUri, networkCredential, WebRequestMethods.Ftp.GetFileSize);
        try { return ((FtpWebResponse)ftpWebRequest.GetResponse()).ContentLength; } //In case of success it'll return the file size.
        catch (Exception) { return default(long); } //In case of failure it'll return the default value so it can be checked later.
}
public static List<string> GetFtpDirectoryContents(Uri requestUri, NetworkCredential networkCredential)
{
var directoryContents = new List<string>(); //Create empty list to fill it later.
//Create ftpWebRequest object with given options to get the Directory Contents.
var ftpWebRequest = GetFtpWebRequest(requestUri, networkCredential, WebRequestMethods.Ftp.ListDirectory);
try
{
            using (var ftpWebResponse = (FtpWebResponse)ftpWebRequest.GetResponse()) //Execute the ftpWebRequest and get its response.
            using (var streamReader = new StreamReader(ftpWebResponse.GetResponseStream())) //Read the directory contents as a stream.
            {
                var line = string.Empty; //Initial default value for line
                while (!string.IsNullOrEmpty(line = streamReader.ReadLine())) //Read the current line of the stream.
                    directoryContents.Add(line); //Add the current line to the directory contents list.
            }
        }
        catch (Exception) { throw; } //Rethrow so the caller can handle the exception.
        return directoryContents; //Return the full list of directory contents: files/sub-directories.
}
public static FtpWebRequest GetFtpWebRequest(Uri requestUri, NetworkCredential networkCredential, string method = null)
{
var ftpWebRequest = (FtpWebRequest)WebRequest.Create(requestUri); //Create FtpWebRequest with given Request Uri.
ftpWebRequest.Credentials = networkCredential; //Set the Credentials of current FtpWebRequest.
if (!string.IsNullOrEmpty(method))
            ftpWebRequest.Method = method; //Set the method of the FtpWebRequest in case it has a value.
return ftpWebRequest; //Return the configured FtpWebRequest.
}
public static NetworkCredential GetNetworkCredential(string userName, string password)
{
//Create and Return NetworkCredential object with given UserName and Password.
return new NetworkCredential(userName, password);
}
}
A: As the answer by @FranckDernoncourt shows, if you want a GUI tool, you can use WinSCP GUI. Particularly its folder properties dialog.
If you need code, you can use WinSCP too. Particularly with the WinSCP .NET assembly and its Session.EnumerateRemoteFiles method it is easy to implement in many languages, including C#.
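For example, a rough C# sketch (the host, credentials and remote path are placeholders, and it assumes a reference to the WinSCP .NET assembly):
using System;
using System.Linq;
using WinSCP;

class RemoteDirectorySize
{
    static void Main()
    {
        var sessionOptions = new SessionOptions
        {
            Protocol = Protocol.Ftp,
            HostName = "ftp.example.com", // placeholder
            UserName = "user",            // placeholder
            Password = "password"         // placeholder
        };

        using (var session = new Session())
        {
            session.Open(sessionOptions);

            // Enumerate every file below the directory and add up the sizes.
            long totalBytes = session
                .EnumerateRemoteFiles("/remote/path", null, EnumerationOptions.AllDirectories)
                .Sum(fileInfo => fileInfo.Length);

            Console.WriteLine("Total size: {0} bytes", totalBytes);
        }
    }
}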
It is also doable with the .NET built-in FtpWebRequest, but that's a lot more work.
Both are covered in How to get a directories file size from an FTP protocol in a .NET application.
A: Just use FTP "SIZE" command...
A: You can use the FileZilla client. Download here: https://filezilla-project.org/download.php?type=client
If you want a more readable size, go to:
Edit -> Settings -> Interface -> filesize format -> size formatting -> select binary prefixes using SI symbols.
When you select a directory you can see its size. | unknown | |
d1783 | train | You can set the user inputs to the chart by either using chart options or by using the set() methods of the CanvasJS API.
I have modified your jsfiddle, and it's working now.
function addDataPointsAndRender(){
  chart.options.title.text = document.getElementById("chartTitle").value;
chart.options.data[0].dataPoints.push({
y: parseFloat(document.getElementById("yValue1").value),
indexLabel: document.getElementById("indexLabel1").value
});
chart.render();
}
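If you prefer the set() style mentioned at the top, a one-liner along these lines should also work (this assumes the element-level set(propertyName, value) method; please check the CanvasJS docs for your version and whether you still need to call chart.render() afterwards):
// Assumed API: chart.title.set(propertyName, value); verify against your CanvasJS version.
chart.title.set("text", document.getElementById("chartTitle").value);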
Also have a look at:
*
*Tutorial on Rendering chart from user Input
*Updating Chart Options
*CanvasJS Methods & Properties Documentation | unknown | |
d1784 | train | You just need simple concatenation with the . (dot) operator.
E.g.
echo '<button type="button" class="button-7" data-toggle="modal" data-target="#update_modal'.$fetch['id'].'"><span class="glyphicon glyphicon-plus"></span>edit</button>';
...etc.
This is used to join any two string values (whether hard-coded literals or variables) together. What's happening here is your code is building a string from several components and then echoing it.
Documentation: https://www.php.net/manual/en/language.operators.string.php
A: You don't need to run PHP inside PHP.
You may use a comma (or a dot):
<?php
if ($status_code == "1") {
    echo '<button type="button" class="button-7" data-toggle="modal" data-target="#update_modal'
, $fetch['id']
, '"><span class="glyphicon glyphicon-plus"></span>edit</button>';
} else {
echo '<button type="button" class="button-7" data-toggle="modal" data-target="#checkout_modal'
, $fetch['id']
, '">Check-Out</button>';
}
Or using short tags:
<?php if ($status_code === '1') : ?>
<button type="button" class="button-7" data-toggle="modal" data-target="#update_modal<?= $fetch['id'] ?>">
    <span class="glyphicon glyphicon-plus"></span>edit</button>
<?php else: ?>
    <button type="button" class="button-7" data-toggle="modal" data-target="#checkout_modal<?= $fetch['id'] ?>">Check-Out</button>
<?php endif; ?>
You can have conditions (and other expressions) in short tags:
<button
type="button"
class="button-7"
data-toggle="modal"
data-target="#<?=
$status_code === '1'
? 'update_modal'
: 'checkout_modal'
?><?= $fetch['id'] ?>">
<?php if ($status_code === '1') : ?>
<span class="glyphicon glyphicon-plus"></span>edit
<?php else: ?>
Check-Out
<?php endif; ?>
</button>
And concatenate strings with dots:
<?php
$dataTargetPrefix =
$status_code === '1'
? 'update_modal'
: 'checkout_modal';
?>
<button
type="button"
class="button-7"
data-toggle="modal"
data-target="#<?= $dataTargetPrefix . $fetch['id'] ?>">
<?php if ($status_code === '1') : ?>
<span class="glyphicon glyphicon-plus"></span>edit
<?php else: ?>
Check-Out
<?php endif; ?>
</button> | unknown | |
d1785 | train | The Blade output tags changed between Laravel 4 and Laravel 5. You're looking for:
{!! $a->content !!}
In Laravel 4, {{ $data }} would echo data as is, whereas {{{ $data }}} would echo data after running it through htmlentities.
However, Laravel 5 has changed it so that {{ $data }} will echo data after running it through htmlentities, and the new syntax {!! $data !!} will echo data as is.
Documentation here.
A: In Laravel 5, by default {{ ... }} will escape the output using htmlentities. To output raw HTML that get's interpreted use {!! ... !!}:
@foreach ($json as $a)
{!! $a->content !!}
@endforeach
Here's a comparison between the different echo brackets and how to change them | unknown | |
d1786 | train | If your Rails cache is configured with a namespace, that namespace will be prepended to the cache key automatically. So, when you call Rails.cache.write("FOO", "BAR"), the key will actually be $NAMESPACE:FOO. Keys are just strings and can't be navigated like a file system or anything fancy (AFAIK).
I think your best option is to instantiate a separate Dalli client for your alternative namespace and use it to delete the key.
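A rough Ruby sketch (the server address and namespace below are placeholders; use whatever your other cache store is configured with):
require 'dalli'

# Placeholder server and namespace - match them to the cache you want to touch.
other_cache = Dalli::Client.new('127.0.0.1:11211', namespace: 'other_app')

# Dalli prepends the namespace, so this removes "other_app:FOO".
other_cache.delete('FOO')
Note that this deletes just that one key; it does not clear everything under the namespace. | unknown | 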
d1787 | train | You can define your date with another Date object passed as a parameter.
var params = new Date;
var date = new Date(params);
console.log(date) | unknown | |
d1788 | train | I think this should work:
<Grid>
<ListBox AllowDrop="True" DragOver="lbx1_DragOver"
Drop="lbx1_Drop"></ListBox>
</Grid>
Let's assume you want to allow only C# files:
private void lbx1_DragOver(object sender, DragEventArgs e)
{
bool dropEnabled = true;
if (e.Data.GetDataPresent(DataFormats.FileDrop, true))
{
string[] filenames =
e.Data.GetData(DataFormats.FileDrop, true) as string[];
foreach (string filename in filenames)
{
if(System.IO.Path.GetExtension(filename).ToUpperInvariant() != ".CS")
{
dropEnabled = false;
break;
}
}
}
else
{
dropEnabled = false;
}
if (!dropEnabled)
{
e.Effects = DragDropEffects.None;
e.Handled = true;
}
}
private void lbx1_Drop(object sender, DragEventArgs e)
{
string[] droppedFilenames =
e.Data.GetData(DataFormats.FileDrop, true) as string[];
} | unknown | |
d1789 | train | I assume you want to map several codes arrays against the same table. Suppose you first read the table into an array:
a = [["1", "Animal", "Dog", "1"],
["1", "Animal", "Cat", "2"],
["1", "Animal", "Bird", "3"],
["2", "Place", "USA", "1"],
["2", "Place", "Other", "2"],
["3", "Color", "Red", "a"],
["3", "Color", "Blue", "b"],
["3", "Color", "Orange", "c"],
["4", "Age", "Young", "a"],
["4", "Age", "Middle", "b"],
["4", "Age", "Old", "c"],
["5", "Alive", "Yes", "y"],
["5", "Alive", "No", "n"]]
and
codes = ["1","1","a","b","y"]
Then you could do this:
codes.zip(a.chunk { |a| a.first }).map { |l,(_,b)|
b.find { |c| c.last == l}[1...-1] }.to_h
#=> {"Animal"=>"Dog", "Place"=>"USA", "Color"=>"Red",
# "Age"=>"Middle", "Alive"=>"Yes"}
The steps:
enum0 = a.chunk { |a| a.first }
#=> #<Enumerator:
# #<Enumerator::Generator:0x007f8d6a0269b8>:each>
To see the contents of the enumerator,
enum0.to_a
#=> [["1", [["1", "Animal", "Dog", "1"], ["1", "Animal", "Cat", "2"],
# ["1", "Animal", "Bird", "3"]]],
# ["2", [["2", "Place", "USA", "1"], ["2", "Place", "Other", "2"]]],
# ["3", [["3", "Color", "Red", "a"], ["3", "Color", "Blue", "b"],
["3", "Color", "Orange", "c"]]],
# ["4", [["4", "Age", "Young", "a"], ["4", "Age", "Middle", "b"],
# ["4", "Age", "Old", "c"]]],
# ["5", [["5", "Alive", "Yes", "y"], ["5", "Alive", "No", "n"]]]]
p = codes.zip(enum0)
#=> [["1", ["1", [["1", "Animal", "Dog", "1"],
# ["1", "Animal", "Cat", "2"],
# ["1", "Animal", "Bird", "3"]]]],
# ["1", ["2", [["2", "Place", "USA", "1"],
# ["2", "Place", "Other", "2"]]]],
# ["a", ["3", [["3", "Color", "Red", "a"],
# ["3", "Color", "Blue", "b"],
# ["3", "Color", "Orange", "c"]]]],
# ["b", ["4", [["4", "Age", "Young", "a"],
# ["4", "Age", "Middle", "b"],
# ["4", "Age", "Old", "c"]]]],
# ["y", ["5", [["5", "Alive", "Yes", "y"],
# ["5", "Alive", "No", "n"]]]]]
l,(_,b) = p.first
l #=> "1"
b #=> [["1", "Animal", "Dog", "1"], ["1", "Animal", "Cat", "2"],
# ["1", "Animal", "Bird", "3"]]
enum1 = b.find
#=> #<Enumerator: [["1", "Animal", "Dog", "1"],
# ["1", "Animal", "Cat", "2"],
# ["1", "Animal", "Bird", "3"]]:find>
c = enum1.next
#=> ["1", "Animal", "Dog", "1"]
c.last == l
#=> true
so enum1 returns
d = ["1", "Animal", "Dog", "1"]
e = d[1...-1]
#=> ["Animal", "Dog"]
So the first element of x.zip(y) is mapped to ["Animal", "Dog"].
After performing the same operations for each of the other elements of enum1, x.zip(y) equals:
f = [["Animal", "Dog"], ["Place", "USA"], ["Color","Red"],
["Age", "Middle"], ["Alive", "Yes"]]
The final steps is
f.to_h
#=> {"Animal"=>"Dog", "Place"=>"USA", "Color"=>"Red",
# "Age"=>"Middle", "Alive"=>"Yes"}
or for < v2.0
Hash[f] | unknown | |
d1790 | train | I am trying to reproduce the issue, however failed. Here is the script which works well for your reference:
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8" />
<base href="/">
<title></title>
<script src="node_modules\angular\angular.js"></script>
<script src="node_modules\adal-angular\lib\adal-angular.js"></script>
<script src="node_modules\adal-angular\lib\adal.js"></script>
<script src="node_modules\angular-ui-router\release\angular-ui-router.js"></script>
<script src="node_modules\angular-route\angular-route.js"></script>
</head>
<body>
<div ng-app="myApp">
<div ng-controller="LoginController">
<ul class="nav navbar-nav navbar-right">
<li ng-show="userInfo.isAuthenticated"><a class="btn btn-link" ng-click="logout()">Logout</a></li>
<li ng-hide="userInfo.isAuthenticated" ><a class="btn btn-link" ng-click="login()">Login</a></li>
<a ui-sref="home">Home</a> | <a ui-sref="about">About</a>
<div ui-view></div>
</ul>
</div>
</div>
<script>
var myApp = angular.module('myApp', ['AdalAngular', 'ui.router', 'ngRoute'])
.config(['$httpProvider', 'adalAuthenticationServiceProvider', '$stateProvider', '$routeProvider','$locationProvider','$urlRouterProvider', function ($httpProvider, adalProvider, $stateProvider, $routeProvider,$locationProvider,$urlRouterProvider) {
$locationProvider.html5Mode(true,false).hashPrefix('!');
$stateProvider.state("home",{
template: "<h1>HELLO!</h1>"
}).state("about",{
templateUrl: '/app/about.html',
requireADLogin: true
})
adalProvider.init(
{
instance: 'https://login.microsoftonline.com/',
tenant: '',
clientId: '',
extraQueryParameter: 'nux=1',
popUp:true
},
$httpProvider
);
}])
myApp.controller('LoginController', ['$rootScope','$scope', '$http', 'adalAuthenticationService', '$location','$stateParams','$state',
function ($rootScope,$scope, $http, adalService, $location, $stateParams,$state) {
$scope.login = function () {
adalService.login();
};
$scope.logout = function () {
adalService.logOut();
};
$rootScope.$on('adal:loginSuccess', function (event, token) {
console.log('loggedin');
$state.go('about');
});
}]);
</script>
</body>
</html>
I used the latest version of the adal-angular library (v1.0.14). This code sample will go to the about state after login with the popup page. Please let me know if the code works. And if you still have the problem, would you mind sharing a demo that we can run? | unknown | 
d1791 | train | As noted in the comments, the Git Community Book is a great resource for learning all about git, from basic usage right up to really advanced stuff.
If for some reason that's not to your liking, then this question links to a large number of reference guides for using git. You may want to look at the "beginner's references" section. Also some answers to that question point to GUI tools for Git that you may find useful.
The basic commands you're going to need to learn to use git effectively include add, status, diff, commit, pull, push, checkout, merge, and branch. You can get basic help for each command by using git help followed by the command you need help for (e.g. git help add).
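Just to show the shape of a typical session (the file and branch names are made up):
git status                   # see what has changed
git add notes.txt            # stage a file
git commit -m "Add notes"    # record the staged changes
git diff                     # show unstaged changes
git checkout -b feature-x    # create and switch to a new branch
git merge feature-x          # merge that branch (run from the branch you are merging into)
git pull                     # fetch and integrate changes from the remote
git push                     # publish your commits to the remote
git log then shows the history those commits built up. | unknown | 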
d1792 | train | Give it a try: change the inDirection: argument to UITextWritingDirection.rightToLeft.rawValue.
This worked for me (even though it seems logically backwards to me). Hope it helps:
guard let wordRange = textView.tokenizer.rangeEnclosingPosition(tapPos, with: .word, inDirection: UITextDirection(rawValue: UITextWritingDirection.rightToLeft.rawValue) ) else {
return nil
}
return textView.text(in: wordRange) | unknown | |
d1793 | train | Change all your int type variables to double or float. I would personally use double because they have more precision than float types.
A: int datatype stands for integer (i.e. positive and negative whole numbers, including 0)
If you want to represent decimal numbers, you will need to use float.
A: Use the float or double type, like the others already said.
But it ain't as simple as that. You need to understand what floating-point numbers actually are, and why (0.1 + 0.1 + 0.1) != (0.3). This is a complicated subject, so I won't even try to explain it here - just remember that a float is not a decimal, even if the computer is showing it to you in the form of a decimal.
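A quick way to see it for yourself (assuming ordinary IEEE-754 doubles, which is what practically every desktop compiler uses):
#include <iostream>
#include <iomanip>

int main()
{
    double a = 0.1 + 0.1 + 0.1;
    std::cout << std::boolalpha << (a == 0.3) << '\n'; // prints false
    std::cout << std::setprecision(17) << a << '\n';   // prints 0.30000000000000004
    return 0;
}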
A: Use floats, not ints. An integer (int) is a whole number; floats allow decimal places (as do doubles).
float length; // declares variable for length
float width; // declares variable for width
float area; // declares variable for area
float perimeter;  // declares variable for perimeter
A: You've defined your variables as integers. Use double instead.
Also, you can look up some formatting for cout to define the number of decimal places you want to show.
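One common approach is std::fixed together with std::setprecision (the numbers below are only examples):
#include <iostream>
#include <iomanip>

int main()
{
    double length = 3.5;
    double width  = 2.25;
    double area   = length * width;

    // Show exactly two digits after the decimal point.
    std::cout << std::fixed << std::setprecision(2)
              << "Area: " << area << '\n';   // prints "Area: 7.88"
    return 0;
}
These manipulators stay in effect on the stream until you change them, so setting them once before printing your results is enough. | unknown | 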
d1794 | train | A much better approach would be to move your code (including XAML) to one or several User Control libraries. Then your main EXE would just load these controls.
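For example, here is a minimal sketch of the EXE side once the controls live in a separate User Control library ("MyControls" is a placeholder namespace/assembly name, and the EXE project needs a reference to that library):
<!-- MainWindow.xaml in the EXE project -->
<Window x:Class="MainApp.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        xmlns:lib="clr-namespace:MyControls;assembly=MyControls"
        Title="Host" Height="300" Width="400">
    <Grid>
        <!-- The user control defined in the library -->
        <lib:MyUserControl />
    </Grid>
</Window>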
A: Using another application object will most likely break many things, as you have no direct control over the process - you should avoid that and use simple function/method calls instead; these work flawlessly, and that's what DLLs are for.
In fact, I don't think this will be possible in the way you imagined it.
Using DLLs the way everybody does has plenty of advantages: you can move 99% of your code into the DLL and keep the primary entry point (the EXE file) absolutely static, so there's no need to fiddle around with the application object. | unknown | 
d1795 | train | If you haven't already, I highly recommend you familiarize yourself with the Memory Management Programming Guide from Apple.
In there you will find a section specifically on retain counts.
A: The best explanation I ever heard was from Aaron Hillegass:
Think of the object as a dog. You need a leash for a dog to keep it from running away and disappearing, right?
Now, think of a retain as a leash. Every time you call retain, you add a leash to the dog's collar. You are saying, "I want this dog to stick around." Your hold on the leash insures that the dog will stay until you are done with it.
Think of a release as removing one leash from the dog's collar. When all the leashes are removed, the dog can run away. There's no guarantee that the dog will be around any longer.
Now, say you call retain and put a leash on the dog. I need the dog, too, so I walk along with you and start training him. When you are done with the dog, you call release and remove your leash. There are no more leashes and the dog runs away, even though I was still training him!
If, instead, I call retain on the dog before I start training him, I have a second leash on the collar. When you call release and remove your leash, I still have one and the dog can't go away just yet.
Different objects can "own" the dog by calling retain and putting another leash on its collar. Each object is making sure that the dog doesn't go away until it is done with it. The dog can't go away until all of the leashes have been removed.
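In (pre-ARC) Objective-C code the leashes are literal retain/release calls; here is a tiny illustrative fragment (under ARC the compiler inserts these calls for you):
NSObject *dog = [[NSObject alloc] init];   // retain count 1: your leash
NSMutableArray *kennel = [NSMutableArray array];
[kennel addObject:dog];                    // the array retains the object: a second leash
[dog release];                             // you drop your leash; the array's retain keeps the object alive
[kennel removeLastObject];                 // the array releases it: the last leash is gone and the object can be deallocated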
Autorelease pools get more complicated, but simplistically you can think of calling autorelease as handing your leash to a trainer. You don't need the dog anymore, but you haven't removed your leash right away. The trainer will take the leash off later; there is still no guarantee that the dog will be around when you need him. | unknown | |
d1796 | train | formpanel.setValues(formpanel.getValues());
After you call this, the form no longer has the dirty status. | unknown | 
d1797 | train | pool.query(
"select * from article where upper(content) LIKE upper('%' || $1 || '%')",
[temp]
).then( res => {console.log(res)}, err => {console.error(err)})
This works for me. I just looked at this Postgres doc page to try and understand what concat was doing to the parameter notation. Can't say that I understand the difference between using || operators and using concat string function at this time.
A: The easiest way I found to do this is like the following:
// You can remove [0] from character[0] if you want the complete value of character.
database.query(`
SELECT * FROM users
WHERE LOWER(users.name) LIKE LOWER($1)
ORDER BY users.id ASC`,
["%" + character[0] + "%"]
);
// [%${character}%] string literal alternative to the last line in the function call.
There are several things going on here, so let me break it down line by line.
*
*SELECT * FROM users
*
*This is selecting all the columns associated with table users
*WHERE LOWER(users.name) LIKE $1
*
*This is filtering out all the results from the first line so that where the name(lowercased) column of the users table is like the parameter $1.
*ORDER BY users.id ASC
*
*This is optional, but I like to include it because I want the data returned to me to be in ascending order (that is from 0 to infinity, or starting low and going high) based on the users.id or the id column of the users table. A popular alternative for client-side data presentation is users.created_at DESC which shows the latest user (or more than likely an article/post/comment) by its creation date in reverse order so you get the newest content at the top of the array to loop through and display on the client-side.
*["%" + character + "%"]
*
*This part is the second argument in the .query method call from the database object (or client if you kept with that name, you can name it what you want, and database to me makes for more a sensical read than "client", but that is just my personal opinion, and it's highly possible that "client" may be the more technically correct term to use).
The second argument needs to be an array of values. It takes the place of the parameters inserted in the query string, for example, $1 or ? are examples of parameter placeholders which are filled in with a value in the 2nd argument's array of values. In this case, I used JavaScript's built-in string concatenation to provide a "includes" like pattern, or in plain-broken English, "find me columns that contain a 'this' value" where name(lowercased) is the column and character is the parameter variable value. I am pulling in the parameter value for the character variable from req.params (the URL, so http://localhost:3000/users/startsWith/t), so combining that with % on both ends of the parameter, it returns me all the values that contain the letter t since is the first (and only) character here in the URL.
I know this is a VERY late response, but I wanted to respond with a more thorough answer in case anyone else needed it broken down further.
A: In my case :
My variable was $1, instead of ?1 ...
I was customizing my query with @Query | unknown | |
d1798 | train | Unfortunately, when trying to use the XPATHs on this page, they ended up breaking after the first try; I'm not sure why that happened.
However, their corresponding CSS_SELECTOR counterparts did manage to fulfill my expectations pretty well.
Here's the solution:
###improvements were applied from this part
if driver.current_url == 'https://opensea.io/asset/create':
button_plus_properties = driver.find_element(By.XPATH, '//*[@id="main"]/div/div/section/div[2]/form/section/div[1]/div/div[2]/button').click() #click on the "+" button of Properties
wait_xpath('/html/body/div[5]/div/div/div') #wait for "Add properties" dialog to be loaded and located
type_array = list(current_dictionary.keys()) #get the keys which will be send as types
name_array = list(current_dictionary.values()) #get the values which will be send as values
i = 1
while i <= len(current_dictionary): #iterate over the types and values lulz
css_type = f'body > div:nth-child(25) > div > div > div > section > table > tbody > tr:nth-child({i}) > td:nth-child(1) > div > div > input' #selector of the i type
css_name = f'body > div:nth-child(25) > div > div > div > section > table > tbody > tr:nth-child({i}) > td:nth-child(2) > div > div > input' #selector of the i value
button_css_type = driver.find_element(By.CSS_SELECTOR, css_type).send_keys(type_array[i-1]) #find the ith textbox type and paste the ith-1 element from the type_array variable
button_css_name = driver.find_element(By.CSS_SELECTOR, css_name).send_keys(name_array[i-1]) #find the ith textbox value and paste the ith-1 element from the name_array variable
        if i != len(current_dictionary): #as long as i is not equal to the length of the current_dictionary
button_add_more = driver.find_element(By.XPATH, '/html/body/div[5]/div/div/div/section/button').click() #add a new textbox type and textbox value
i +=1
button_save_metadata = driver.find_element(By.XPATH, '/html/body/div[5]/div/div/div/footer/button') #find the save button in this dialog
button_save_metadata.click() #save the metadata
The improvement above first creates an array for both Type and Name elements to store the Keys and Values of the current_dictionary respectively.
Then it starts a while loop that builds the general CSS_SELECTOR for each new textbox created after clicking the Add more button, until the counter i equals the length of the current_dictionary; it then sends the corresponding Key and Value to their respective textboxes, and finally it saves everything by clicking the Save button.
Output: | unknown | |
d1799 | train | Don't always trust textbooks...
From the errata:
p. 772, Solution to Practice Problem 8.3. The sequence bcac is not
possible. Strike the second to last sentence. The last sentence should
be “There are three possible sequences: acbc, abcc, and bacc.” Please
see Web Aside ECF:GRAPHS on the Web Aside page for an example of the
process graph. | unknown | |
d1800 | train | The issue is that the dimensions are incorrect. If you do
data1 = permute(data, [2 3 1 4]); implay(data1)
It should work. | unknown |