_id (string, 2-6 chars) | partition (3 classes) | text (string, 4-46k chars) | language (1 class) | title (1 class)
---|---|---|---|---|
d18901 | test | If flag is YES and the receiver can’t be converted without losing some information, some characters may be removed or altered in conversion. For example, in converting a character from NSUnicodeStringEncoding to NSASCIIStringEncoding, the character ‘Á’ becomes ‘A’, losing the accent. | unknown | |
d18902 | test | Yes, but you'll need code that instantiates the GtkBuilder, gets the application object from it, and runs it.
It's more usual to subclass GtkApplication in your code and override its virtual functions, and then inside your GtkApplication instantiate your GtkBuilder. | unknown | |
d18903 | test | There is no way to change the typeface of TextView in RemoteView.
To check properties that can be changed in RemoteView, in your case just go to Button and TextView classes and check all methods with annotation @android.view.RemotableViewMethod. As you can see, setTypeface don't have an annotation, so it can not be changed in RemoteView anyhow | unknown | |
d18904 | test | I had the problem of not having the app on my device, so I couldn't manually launch it to accept the prompt. For me, I got this to work after deleting all expired provisioning profiles from my device, which forced Xcode to install a new one.
After this, I was able to get my app to run.
A: I just got this issue running on an iOS 8 device for the first time as it required me to launch manually on the device (it copies it fine but doesn't launch it) and then state that I trust the developer.
A: I had the same issue solved like this:
It can happen because your developer profile is not assigned as TRUSTED in your phone or watchOS settings.
You can set your profile as TRUSTED as below:
*
*Go to Settings,
*Profile
*Assign your profile as Trusted there.
A: If you sign the app with Enterprise provisioning you will get this error. It will still install the app on your phone, but apparently you cannot debug an app signed this way. You must either sign the app with Developer provisioning or manually launch the app in the phone.
A: *
*Choose Window->Devices.
*Right click on the device in left column, choose "Show Provisioning Profiles".
*Click on the provisioning profile in question.
*Press the "-" button Continue to removing all affected profiles.
*Re-install the app.
A: To fix the process launch failed: Security issue, tap the app icon on your iOS device after running the app via Xcode.
Be sure to tap the app icon while the Xcode alert is still shown. Otherwise the app will not run.
*
*Run the app via Xcode. You will see the security alert below. Do not press OK.
*On your iOS device, tap the newly installed app icon:
*After tapping the icon, you should now see an alert asking you to "Trust" the Untrusted App Developer. After doing so the app will immediately run, unconnected to the Xcode debugger.
*
*If you do not see this "Trust" alert, you likely pressed "OK" in Xcode too soon. Do not press "OK" on the Xcode alert until after trusting the developer.
*Finally, go back and press "OK" on the Xcode alert. You will have to re-run the app to connect the running app on your iOS device to the Xcode debugger.
A: Happened to me when my iPhone was in offline mode. Giving it access to the Internet fixed the problem.
A: Using xcode 7 with an iOS device running version 9.2, I had to:
*
*Open 'Settings'
*Tap 'General'
*Tap 'Device Management'
*Tap 'Developer App' that's in the list
*Tap 'Trust (developer name)'
*Tap 'Trust' in the popup
The app should load and launch when you run xcode.
A: Apparently after upgrading the OS and such you must manually launch the app on the device and say that you trust the developer of the software.
That error message disappeared now.
A: I had the same problem as above and resolved it by changing the code signing identity to iOS Developer
(I had tried all of the other steps above first)
I can now run the app in xcode and see debug output
A: My solution was to give the phone Internet access, because the app must verify the email, then build again; this time a popup will show in which you can press Trust, and now everything works fine.
Side note: I'm developing with Flutter | unknown | |
d18905 | test | I think the problem might be that you were trying to register GF 3.1.2 to use Java SE 5. Java EE 6 requires a Java SE 6 JDK to run successfully.
A: Okay, I don't know what was wrong with it. Just tried it one more time and it worked! Sorry for taking your time. | unknown | |
d18906 | test | This should work.
<rules>
<rule name="myproduct" stopProcessing="true">
<match url="^([^/]{2,3}/)?myproduct(/$|$)" />
<action type="Redirect" url="{R:1}products/myproduct" />
</rule>
</rules> | unknown | |
d18907 | test | I think the problem is:
$result = implode(",", $data);
$nr = randWithout(500, 550, array($result));
What you should do is remove the implode and send the $data array directly.
$nr = randWithout(500, 550, $data);
A: Tim is right, do not implode results in a string.
But that is the quite odd function for getting random number with exceptions. Try something like:
function getRndWithExceptions($from, $to, array $ex = array()) {
$i = 0;
do {
$result = rand($from, $to);
} while( in_array($result, $ex) && ++$i < ($to - $from) );
if($i == ($to - $from)) return null;
return $result;
}
<...>
$nr = getRndWithExceptions(500, 550, $data); // $data is array | unknown | |
d18908 | test | Can tell which distribution url you are using ? (can find in ../android/gradle/wrapper/gradle-wrapper.properties)
many time it give error
and if you are using physical device then please check api level of your device if it is old then try in new one | unknown | |
d18909 | test | Check http://api.jquery.com/on/
You could do something like this:
$("body").on({
click: function() {
//...
},
mouseleave: function() {
//...
},
//other event, etc
}, "#yourthing");
A: You can try this and can use any other mouse events according to your need:
$("#mainContainer").on('hover', function(){
$(selector).slideDown("slow");
}), function(){
$(selector).slideUp("slow");
});
A: Maybe something like this? You might not need any JavaScript for the mouseover effect.
#mainContainer {
width: 300px;
height: 30px;
border: 1px solid #000;
}
ul {
opacity: 0;
transition: opacity 250ms ease;
}
#mainContainer:hover ul {
opacity: 1;
}
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<div id="mainContainer">
Hover me!
<ul>
<li>User 1</li>
<li>User 2</li>
</ul>
</div>
Let me know if it helps! Good luck.
A: JQuery has mouse based events. See here.
See this fiddle where I have adapted the w3schools example to work on mouseenter and mouseleave
A: You could use it like this, by calling the same function for multiple events:
$("mainContainer").on('click mouseenter',function (event) {
//This gives you what event happened, might be 'click' or 'mouseenter'
var type = event.type;
}); | unknown | |
d18910 | test | Go to the ‘Tools’ tab inside the WooCommerce > System Status of your WordPress administration panel. Here you first use the ‘Recount terms’ button and after that use the ‘Clear transients’ button. This will force the system to recount all the products the next time a category is loaded.
A: File Manager >> public_html >> wp-admin >> includes >> nav-menu.php
Search for "paginate"; there are two occurrences; it will be the one around line 692 (it was for me).
// Paginate browsing for large numbers of objects.
$per_page = 50;
Change 50 to your needs.
Also add it to your child theme's functions.php | unknown | |
d18911 | test | If you have already added SSH key then try setting URL
get the SSH URL from bit-bucket then,
git remote set-url origin "SSHURL"
paste URL without quotes.
A: Make sure that the ~/.ssh folder and the keys have the correct permissions set.
$ chmod 700 ~/.ssh
$ chmod 400 ~/.ssh/id_rsa
$ chmod 400 ~/.ssh/id_rsa.pub
Remember that you can specify which key to use, in case you got more than one key-pair. Specify the private key, not the public key:
$ ssh -i ~/.ssh/id_rsa user@host
When dealing with several key-pairs, the ssh client needs to know which key to use. Add the following lines in ~/.ssh/config:
Host bitbucket.org
PreferredAuthentications publickey
IdentityFile ~/.ssh/another_private_key
A: You can fix it by adding these two lines to the end of your /etc/ssh/ssh_config file:
HostkeyAlgorithms +ssh-rsa
PubkeyAcceptedAlgorithms +ssh-rsa
Alternatively you can add them to your ~/.ssh/config file either for all hosts or only to a specific one (change * to desired host):
Host *
HostkeyAlgorithms +ssh-rsa
PubkeyAcceptedAlgorithms +ssh-rsa | unknown | |
d18912 | test | This is common with angular when you are using jquery events to update a $scope value. You will have to manually triger a $scope apply:
$scope.$apply(function(){
$scope.show = true;
});
Another solution would be to use Angular's $timeout
$timeout(function () {
$scope.show = true;
});
See the documentation for $scope.$apply() and more scope information. | unknown | |
d18913 | test | Is this what you're looking for? (sorry had to change up the classes a bit). What I did was added a display:grid; and align-items:center; on all parents of the p tags.
HTML:
<div class="flex">
<div class="flex-item">
<section>
<p class="text-center">Section</p>
</section>
<article>
<p class="text-center">Article</p>
</article>
</div>
<aside class="center flex-item">
<p>Aside</p>
</aside>
</div>
CSS
body {
background-color: silver;
}
div {
background-color: white;
}
header, nav, section, article, aside, footer {
background-color: white;
padding: 10px;
border: solid;
}
.flex {
display: flex;
}
.flex-item {
flex: 1;
}
.flex-item section,
.flex-item article,
.center {
display: grid;
align-items: center;
text-align: center;
}
.flex-item section,
.flex-item article {
height: 200px; /* this is just to see the vertical layout */
}
You should now be able to freely align, left, center and right, but there are more ways to do this. If you're worried about using grid, it has good support and it works fine on most major browsers :)
A: This is my first answer ever, I hope this helps!
This happens because you put flex: 1; on the parent of the "aside" text, changing the size of the parent but not the child. Try changing flex: initial; on the "aside" text parent and you'll see what I mean. You can fix this in a ton of ways, one way is to put width: 100%; on the "aside" text <p></p>
A: you can put either p {flex-grow: 1} or p {min-width: 100%} | unknown | |
d18914 | test | I think you want the defaults command:
defaults write "myPlist.plist" TestKey "TestStringForKey"
A: I use this to write to a plist file with iPhone terminal. Just make sure you have ericautilities installed from Cydia.
plutil -key ShowedAlert -value nope /dir/ect/ory/to/playlist.plist
A: /myPlist.plist means that myPlist.plist is in your root directory. I think it's in your current working directory, so just use
defaults write "myPlist.plist" TestKey "TestStringForKey" | unknown | |
d18915 | test | You don't write anything to terminal because there's no terminal. You pass name of a program to run and its arguments as arguments of the QProcess::start method. If you only need to know if ping was successful or not it's enough to check the exit code of the process which you started earlier using QProcess::start; you don't have to read its output.
from ping(8) - Linux man page
If ping does not receive any reply packets at all it will exit with code 1. If a packet count and deadline are both specified, and fewer than count packets are received by the time the deadline has arrived, it will also exit with code 1. On other error it exits with code 2. Otherwise it exits with code 0. This makes it possible to use the exit code to see if a host is alive or not.
By default ping under Linux runs until you stop it. You can however use -c X option to send only X packets and -w X option to set timeout of the whole process to X seconds. This way you can limit the time ping will take to run.
Below is a working example of using QProcess to run ping program on Windows. For Linux you have to change ping options accordingly (for example -n to -c). In the example, ping is run up to X times, where X is the option you give to Ping class constructor. As soon as any of these executions returns with exit code 0 (meaning success) the result signal is emitted with value true. If no execution is successful the result signal is emitted with value false.
#include <QCoreApplication>
#include <QObject>
#include <QProcess>
#include <QTimer>
#include <QDebug>
class Ping : public QObject {
Q_OBJECT
public:
Ping(int count)
: QObject(), count_(count) {
arguments_ << "-n" << "1" << "example.com";
QObject::connect(&process_,
SIGNAL(finished(int, QProcess::ExitStatus)),
this,
SLOT(handlePingOutput(int, QProcess::ExitStatus)));
};
public slots:
void handlePingOutput(int exitCode, QProcess::ExitStatus exitStatus) {
qDebug() << exitCode;
qDebug() << exitStatus;
qDebug() << static_cast<QIODevice*>(QObject::sender())->readAll();
if (!exitCode) {
emit result(true);
} else {
if (--count_) {
QTimer::singleShot(1000, this, SLOT(ping()));
} else {
emit result(false);
}
}
}
void ping() {
process_.start("ping", arguments_);
}
signals:
void result(bool res);
private:
QProcess process_;
QStringList arguments_;
int count_;
};
class Test : public QObject {
Q_OBJECT
public:
Test() : QObject() {};
public slots:
void handle(bool result) {
if (result)
qDebug() << "Ping suceeded";
else
qDebug() << "Ping failed";
}
};
int main(int argc, char *argv[])
{
QCoreApplication app(argc, argv);
Test test;
Ping ping(3);
QObject::connect(&ping,
SIGNAL(result(bool)),
&test,
SLOT(handle(bool)));
ping.ping();
app.exec();
}
#include "main.moc" | unknown | |
d18916 | test | Your crontab * 23 * * * /home/obe/env/crawl/cron_set.sh means :
The command /home/obe/env/crawl/cron_set.sh will execute every minute of 11pm every day.
If you want it to run once in a day , it should be : 0 23 * * * /home/obe/env/crawl/cron_set.sh which means
The command /home/obe/env/crawl/cron_set.sh will execute at 11:00pm every day.
Next time refer to : http://www.cronchecker.net/
Happy crons | unknown | |
d18917 | test | Aha! We found the smoking gun. Here is what the message actually says:
SmtpException: Mailbox unavailable. The server response was: 5.7.1
Invalid credentials for relay [ffff:fff:ffff:ffff:ffff:ffff:ffff:ffff]
I've obfuscated the last part, but note that the IP address this appears to come from is an IPV6 address. Our relay whitelist only includes IPV4 addresses. So I turned IPV6 off on that machine - because honestly, what is that good for anyway? | unknown | |
d18918 | test | Yes, if you are connecting to the third-party server over TCP port 25, there is a limit imposed by the EC2 infrastructure, as an anti-spam measure.
You can request that this restriction be lifted, or, the simplest and arguably most correct solution, connect to the server on port 587 (SMTP-MSA) instead of 25 (SMTP-MTA). (The third party mail server should support it unless they really haven't been paying attention for several years.)
See http://en.m.wikipedia.org/wiki/Simple_Mail_Transfer_Protocol
Or, using SSL would be even better.
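As a minimal illustration of sending over port 587 (the submission port) instead of 25, here is a sketch using Python's standard smtplib; the host, addresses and credentials below are placeholders, not real values:
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "app@example.com"
msg["To"] = "user@example.com"
msg["Subject"] = "Test"
msg.set_content("Hello from EC2 over port 587")

# Port 587 (message submission) is not subject to the EC2 port 25 throttle
with smtplib.SMTP("mail.example.com", 587) as smtp:
    smtp.starttls()                                  # upgrade the connection to TLS
    smtp.login("app@example.com", "app-password")
    smtp.send_message(msg)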
If you aren't connecting to the 3rd party server on port 25, then there's absolutely no limit.
https://aws-portal.amazon.com/gp/aws/html-forms-controller/contactus/ec2-email-limit-rdns-request ... is the form you can use if you want to request removal of the port 25 block, but this also requires you to establish reverse dns to take additional responsibility for the removed restriction on port 25, if you want to take that route, instead. | unknown | |
d18919 | test | If you want to overwrite based on the last modified date, then the File object has the property you want: DateLastModified. (You can check all properties of the File object here.)
You already have access to the source file objects (your code's Photo variable) so you just need to get the target's file object.
Something like this should work:
Dim Photo
Dim targetFile, bmpTargetFilename, jpgTargetFilename
SourceFolder = "C:\Photo1"
DistinationFolder = "C:\Photo2"
Set ObjPhoto = CreateObject("Scripting.FileSystemObject")
For Each Photo In ObjPhoto.GetFolder(SourceFolder).Files
bmpTargetFilename = ObjPhoto.BuildPath(DistinationFolder, Replace(Photo.Name, ".jpg", ".bmp"))
jpgTargetFilename = ObjPhoto.BuildPath(DistinationFolder, Photo.Name)
If ObjPhoto.FileExists(bmpTargetFilename) Then
' Get the target file object
Set targetFile = ObjPhoto.GetFile(jpgTargetFilename)
' Now compare the last modified dates of both files
If Photo.DateLastModified > targetFile.DateLastModified Then
Photo.Copy jpgTargetFilename, True
End If
Else
Photo.Copy jpgTargetFilename, True
End If
Next
A couple of notes:
*
*It seems you are checking for the existence of a .BMP file yet copying a .JPG file, so I made it explicit by using two variables.
*I am also assuming you want to compare the JPG files, since those are the ones being copied. | unknown | |
d18920 | test | Do you have graphics on the report? Even a small one on the page header? If so don't use the format event to fill the graphic. Or change the grapic to a BMP. | unknown | |
d18921 | test | A dependency convergence error means that
*
*the dependency is not in dependencyManagement
*there are different versions of the dependency in the dependency tree
The typical resolution is to define an entry in dependencyManagement that resolves the issue or to import an appropriate BOM into the dependencyManagement.
This is best done in the main POM of a multi module project, but also possible in modules.
Note that it is better to leave out the <version> tag in the <dependencies> section so that dependencyManagement will be used everywhere. | unknown | |
d18922 | test | You should do your copy operation in another thread.
label.text = "Ready";
var tasks = Task[files.length];
for (var i=0 ; i<files.length; i++) {
tasks[i] = Task.Run(()=>{
File.Copy(firstDest, secondDest);
});
}
label.text = "Working..";
await Task.WhenAll(tasks);
label.text = "Ready";
In case you want to run it all in one task and not each copy in parallel
label.text = "Ready";
var task =Task.Run(()=>{
foreach (file in files){
File.Copy(firstDest, secondDest);
}
});
label.text = "Working..";
await task;
label.text = "Ready"; | unknown | |
d18923 | test | Nevermind folks -- problem solved, but haven't quite figured out why. File encoding is my guess. | unknown | |
d18924 | test | How about the angle parameter in styleColorBar function?
Try this:
dft <- dft %>% formatStyle('WGT',
background = styleColorBar(df[,'WGT'], 'yellow', angle = -90),
backgroundSize = '100% 80%',
backgroundRepeat = 'no-repeat',
backgroundPosition = 'center')
Output : | unknown | |
d18925 | test | you can use jquery with something like this:
$("input[name=fields\\[first-name\\]]").val() | unknown | |
d18926 | test | I'm a bit uncertain what you need to do. Would cloning help ? Replacing
#set( $new_arr = $arr )
by
#set( $new_arr = $arr.clone() )
will keep your $arrayOfArray untouched, while the $new_arrOfArray will be [[1, [true], [5, 6]]] at the end.
But maybe I'm missing some point here ...
A: By #set( $new_arr = $arr ) you are setting $new_arr to be a reference to $arr.
$arr, in turn, is a reference to $arrayOfArray at a certain index.
When calling new_arr.add(), you're thereby calling $arrayOfArray[$someIndex].add() by reference. | unknown | |
d18927 | test | If "yourImageView" is the ImageView and you want to set background of it and the name of image is "imageName"
yourImageView.setImageResource(context.getResources().
getIdentifier("drawable/" + imageName, null,context.getPackageName()));
But I should say sorry, as it doesn't really give you a Drawable; it gives you a resource ID. Still, I hope this will help you use the string name of the drawable file. | unknown | |
d18928 | test | I am not sure why you are looking to set up a maintenance plan.But, the alternate approach would be to set up a SQL server agent job to execute your T-SQL statements (which can be put together as procedures) and schedule it accordingly.
At the same time, you can execute SQL jobs through maintenance plans as well. This page will also help you : https://learn.microsoft.com/en-us/sql/relational-databases/maintenance-plans/use-the-maintenance-plan-wizard?view=sql-server-2017
A: First I connected to my SQL server using SQL Server Management Studio.
I went to the node Management, right-clicked the subnode Maintenance Plans and created a new maintenance plan called Test. My maintenance plan automatically got a subplan called Subplan_1. I just kept it and saved the maintenance plan.
Next, I went to the node SQL Server Agent, opened the subnode Jobs and double-clicked node Test.Subplan_1. It had a job step called Subplan_1. Double-clicking that job step opened the job step's properties. There I could choose the type Transact-SQL script (T-SQL) and enter my SQL code.
I did not encounter any problems. I used SQL Server 2017, but I am pretty sure it works about the same way in earlier versions of SQL Server...
Edit:
Like sabhari karthik commented and answered, it is very well possible to just create a new job with SQL Server Agent and schedule that job. So perhaps you do not need a maintenance plan at all. But if you do use maintenance plans (or are required to use and/or edit existing maintenance plans), it might be just the case that a maintenance plan's subplan automatically gets a related SQL Server Agent job. But I am not sure. I have never configured and used any maintenance plans before. I'm just a software developer, not a DBA.
Edit 2:
I see in the Maintenance Plan Wizard that there is an option to execute a SQL Server Agent Job as a maintenance task as well. But it seems you need to create that SQL Server Agent Job first. | unknown | |
d18929 | test | Secondary index builds are part of the normal operation of Cassandra when you have secondary indexes on tables. Any new mutation that a node receives will get indexed.
It runs as a compaction thread within the same JVM as the Cassandra process so you won't see a separate process running on a machine's process table.
There's no operation to "force" them to finish. They will finish when the required indexing of data has completed.
Repairs are also part of the normal operation of Cassandra. When new data is streamed to a node during a repair, that data will also be indexed by the receiving node. What I'm getting at is that those operations go hand-in-hand and one does not prevent the other from working. Cheers! | unknown | |
d18930 | test | You can make a payout system "add to each user a field which holds the total gained money and when this user collect a specific amount you can send money from stripe to his bank account" because it's not right to connect each user with Stripe as it or any other payment gateways allow to connect the app with one account and it requires some information to be able to receive money .. etc , you see that you put private key and other key to connect with stripe and those keys and you should hide'em with env properties so no one can see'em
I hope you got what i want to say
A: Stripe Connect has a 3 different charge types to select from : https://stripe.com/docs/connect/charges
In general, you would either create the charge on your platform then transfer the funds to the connected account, or create a charge on the connected account. You would want to follow one of the two guides below depending on which charge type [0] you choose :
*
*https://stripe.com/docs/connect/collect-then-transfer-guide
*https://stripe.com/docs/connect/enable-payment-acceptance-guide
Following the flows mentioned above, funds will accumulate in the connected account's balance and will be available to be paid out. You can read about connected account payouts in more detail here : https://stripe.com/docs/connect/bank-debit-card-payouts
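As a rough, illustrative sketch of the "collect then transfer" flow above using Stripe's Python library (the API key, amounts and connected account id below are placeholders):
import stripe

stripe.api_key = "sk_test_placeholder"      # hypothetical secret key

# Collect the payment on the platform account...
intent = stripe.PaymentIntent.create(
    amount=1000,                            # smallest currency unit, e.g. cents
    currency="usd",
    payment_method_types=["card"],
)

# ...then move (part of) the funds to the connected account
transfer = stripe.Transfer.create(
    amount=800,
    currency="usd",
    destination="acct_placeholder",         # hypothetical connected account id
)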
You would probably want to write in to Stripe support if you need further guidance on how Stripe Connect works as a product.
A: *
*This documentation provides detailed steps on how you can
process payments with Firebase using a stripe account.
*It walks you through customizing and deploying your own version of
the open-source cloud-functions-stripe-sample.web.app example
app using stripe payments whose source code is available at this
GitHub link.
*Also have a look at this stackoverflow thread where a
stackoverflow user has shown how he has integrated Firebase with his
stripe account using Firebase functions.
*For a detailed description on how you can create a stripe account for
Firebase Flutter, go through this article. | unknown | |
d18931 | test | You refer to an Objective-C example, but you have not done what it says to do! Your second method is the wrong method. You want to say this:
override func tableView(tableView: UITableView, canPerformAction action: Selector,
forRowAtIndexPath indexPath: NSIndexPath, withSender sender: AnyObject?)
-> Bool {
return action == #selector(copy(_:))
}
You will also need a third override:
override func tableView(tableView: UITableView, performAction action: Selector,
forRowAtIndexPath indexPath: NSIndexPath, withSender sender: AnyObject?) {
// ...
} | unknown | |
d18932 | test | You might try setting the CATALINA_OPTS environment variable, e.g.:
set CATALINA_OPTS=-XX:+UnlockCommercialFeatures -XX:+FlightRecorder -Dcom.sun.management.jmxremote=true -Dcom.sun.management.jmxremote.rmi.port=7091 -Dcom.sun.management.jmxremote.port=7091 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false
Then start Tomcat as usual. | unknown | |
d18933 | test | Use a temporary string in strcat instead of strcat(out[0],*pa);.
Also, make sure that you allocate enough memory for out.
int main()
{
char a[10]="abcdefg123";
char temp[2] = {0};
char *pa=a;
// This is not good for `strcat`.
// char *out[2]={"",""};
// Use this instead.
char out[2][20]={"",""};
int counter=0;
while(*pa != '\0'){
temp[0] = *pa;
if (counter%2==0){
strcat(out[0], temp);
}
else{
strcat(out[1], temp);
}
counter++;
pa++;
}
printf("%s,%s\n",out[0],out[1]);
return 0;
}
A: Point 1. As per the man page of strcat()
. .. the dest string must have enough space for the result.
Point 2. The second argument for strcat() is const char *, so the call cannot be strcat(out[0],*pa);. You need to use a temporary array for that, which will hold only the value *pa. Otherwise, you can make use of strncat() like
strncat(out[0], pa, 1); // copy only 1 char
In later case, you don't need to have any temporary array.
Reference: From man page, again,
The strncat() function is similar, except that it will use at most n bytes from src and src does not need to be null-terminated if it contains n or more bytes.
A: strcat() requires a pointer to a null-terminated string. You are dereferencing a pointer (with the * operator) which gives a char. You cannot simply cast a char to a pointer to sting. You might want to be using memcpy() instead of strcat(), but for copying single bytes a simple assignment using * operators on both the left and right sides would be fine. But as others have pointed out your code isn't allocating space for you to copy the chars into, so you're going to need to make additional changes to fix that. Also, you'll have to remember to copy a final null byte to the end of both your output strings.
A: In C, a "string" is really just a pointer to a character; the string runs from that character to the terminating 0 byte (the NUL).
If you dereference a string pointer, you get the character at that position. If the pointer is pointing to the start of the string, you get the first character of the string.
Your program has some problems. For one, you need to allocate space for the new strings. strcat() will try to copy characters wherever you tell it to, but it's your job to make sure that there is room there and that it's okay to write there. The declaration of out just declares two pointers, and initializes them to point to a zero-length constant string. Instead, you need to allocate storage, something like:
char out0[64], out1[64];
char *out[]={out0, out1};
This makes two output buffers of 64 characters each, then sets up out with pointers to them.
Another problem: you declared a length of 10 for your char array, but then you initialize it with a length-10 string. This means there is no room for a terminating NUL byte and C won't put one. Then strcpy() or strcat() will copy extra garbage from the string, until there happens to be a NUL byte. If you are lucky there will be one right away and you won't spot the error, but if you aren't lucky you will get weird garbage.
Just let the compiler count how many bytes in your string and do the right thing. Leave out the length:
char a[]="abcdefg123"; | unknown | |
d18934 | test | Well, I've fixed the issue but not exactly sure why/how yet. The project had imported a jar that contained a class that extended WebMvcConfigurationSupport like the following:
@Configuration
public class EnableUriMatrixVariableSupport extends WebMvcConfigurationSupport {
@Override
@Bean
public RequestMappingHandlerMapping requestMappingHandlerMapping() {
RequestMappingHandlerMapping hm = super.requestMappingHandlerMapping();
hm.setRemoveSemicolonContent(false);
return hm;
}
}
Additionally, I also had the @EnableWebMvc annotation, which imports DelegatingWebMvcConfiguration. I think this ends up creating two instances of WebMvcConfigurationSupport and this causes havoc in the Spring container. Unfortunately I upgraded to Spring 4.x in the process of fixing this issue, so I'm not sure yet if this helped in some way. | unknown | |
d18935 | test | Sets would appear to be the obvious solution. The following approach reads each column into its own set(). It then simply uses the difference() function to give you entries that are in col1 but not in col2 (which is the same as simply using the - operator):
import csv
col1 = set()
col2 = set()
with open('input.csv') as f_input:
for row in csv.reader(f_input):
if len(row) == 2:
col1.add(row[0])
col2.add(row[1])
elif len(row) == 1:
col1.add(row[0])
print col1
print col2
print sorted(col2 - col1)
So if your CSV file had the following entries:
aaa,aaa
bbb,111
ccc,bbb
ddd,222
eee
fff
The required output would be:
['111', '222']
The data in your CSV file might need sanitizing before being added to the set, for example EXAMPLE.COM and example.com would currently be considered different. | unknown | |
d18936 | test | That's a bug. For now the easiest would be to just copy them manually over to the fonts folder.
A: The bug Sindre mentioned has now been fixed. You can either start a new project with generator-webapp >= 0.4.2 or apply this patch manually, which only involves one new line to the copy task:
copy: {
dist: {
files: [{
expand: true,
dot: true,
cwd: '<%%= yeoman.app %>',
dest: '<%%= yeoman.dist %>',
src: [
'*.{ico,png,txt}',
'.htaccess',
'images/{,*/}*.{webp,gif}',
'styles/fonts/{,*/}*.*',
'bower_components/sass-bootstrap/fonts/*.*' // <-- New line
]
}]
}
}
A: yeoman 1.1.2 does not seem to work with the answer above.
Change your Gruntfile.js and add:
copy: {
dist: {
files: [{
expand: true,
dot: true,
cwd: '<%= yeoman.app %>',
dest: '<%= yeoman.dist %>',
src: [
'*.{ico,png,txt}',
'.htaccess',
'*.html',
'views/{,*/}*.html',
'bower_components/**/*',
'images/{,*/}*.{webp}',
'fonts/*',
]
}, {
expand: true,
cwd: '.tmp/images',
dest: '<%= yeoman.dist %>/images',
src: ['generated/*']
}, { <--- add this start
expand: true,
cwd: '<%= yeoman.app %>/bower_components/bootstrap/fonts',
dest: '<%= yeoman.dist %>/fonts',
src: '*.*'
}] <--- end add
},
styles: {
Add a new block that copies the fonts out of the bower components into the dist directory.
Replace bootstrap with sass-bootstrap if you use the sass distribution.
A: Copy fonts from app/bower_components/bootstrap-sass-official/vendor/assets/fonts/bootstrap
To app/fonts
In application.scss change $icon-font-path
From
$icon-font-path: "/bower_components/bootstrap-sass-official/vendor/assets/fonts/bootstrap/"
To
$icon-font-path: "/fonts/"
A: cssmin with root option replaces all relative paths.
you can deactivate the root option of cssmin in your Gruntfile.js
cssmin: {
options: {
//root: '<%= yeoman.app %>'
}
},
A: It worked for me ;)
copy: {
dist: {
files: [{
expand: true,
dot: true,
cwd: '<%= yeoman.app %>',
dest: '<%= yeoman.dist %>',
src: [
'*.{ico,png,txt}',
'.htaccess',
'**/*.html',
'views/**/*.html',
'images/{,*/}*.{webp}',
'styles/fonts/{,*/}*.*'
]
}, {
expand: true,
cwd: '.tmp/images',
dest: '<%= yeoman.dist %>/images',
src: ['generated/*']
},
{
expand: true,
cwd: '<%= yeoman.app %>/bower_components/bootstrap/fonts',
dest: '<%= yeoman.dist %>/fonts',
src: '*.*'
},
{
expand: true,
cwd: '<%= yeoman.app %>/bower_components/font-awesome/fonts',
dest: '<%= yeoman.dist %>/fonts',
src: '*.*'
}
/*{
expand: true,
cwd: 'bower_components/bootstrap/dist',
src: 'fonts*//*',
dest: '<%= yeoman.dist %>'
}*/]
},
styles: {
expand: true,
cwd: '<%= yeoman.app %>/styles',
dest: '.tmp/styles/',
src: '{,*/}*.css'
}
}, | unknown | |
d18937 | test | Play Protect Appeals Submission Form can solve your problem. Just send your apk details to Google and wait for appeal process. When you enter your apk's URL, Google will control your apk. Just enter your URL to URL to download your APK file section. You do not need publish your app. | unknown | |
d18938 | test | As mentioned in the nano documentation:
In nano the callback function receives always three arguments:
*
*err - The error, if any.
*body - The HTTP response body from CouchDB, if no error. JSON parsed body, binary for non JSON responses.
*header - The HTTP response header from CouchDB, if no error.
Therefore, in the case of db.find you will have:
db.find(query, (err, body, header) => {
if (err) {
console.log('Error thrown: ', err.message);
return;
}
console.log('HTTP header received: ', header)
console.log('HTTP body received: ', body)
});
I haven't worked with TypeScript; however, I think you can do the same with TypeScript. | unknown | |
d18939 | test | IIUC, you have a pandas DataFrame and want to drop all rows that contain at least one string that ends with the letter 'A'. One fast way to accomplish this is by creating a mask via numpy:
import pandas as pd
import numpy as np
Suppose our df looks like this:
0 1 2 3 4 5
0 ADFC FDGA HECH AFAB BHDH 0
1 AHBD BABG CBCA AHDF BCAG 1
2 HEFH GEHH CBEF DGEC DGFE 2
3 HEDE BBHE CCCB DDGB DCAG 3
4 BGEC HACB ACHH GEBC GEEG 4
5 HFCC CHCD FCBC DEDF AECB 5
6 DEFE AHCH CHFB BBAA BAGC 6
7 HFEC DACC FEDA CBAG GEDD 7
Goal: we want to get rid of rows with index 0, 1, 6, 7.
Try:
mask = np.char.endswith(df.to_numpy(dtype=str),'A') # create ndarray with booleans
indices_true = df[mask].index.unique() # Int64Index([0, 1, 6, 7], dtype='int64')
df.drop(indices_true, inplace=True) # drop indices_true
print(df)
out:
0 1 2 3 4 5
2 HEFH GEHH CBEF DGEC DGFE 2
3 HEDE BBHE CCCB DDGB DCAG 3
4 BGEC HACB ACHH GEBC GEEG 4
5 HFCC CHCD FCBC DEDF AECB 5
A: A bit unclear on your requirements but maybe this fits. Generate some words in columns for which end in 'A'. If any string in the designated columns ends with 'A' then delete the row.
import random
import string
import pandas as pd

nb_cols = 9
nb_vals = 6
def wgen():
return ''.join(random.choices(string.ascii_lowercase, k=5)) + random.choice('ABCDEFGH')
df = pd.DataFrame({'Col'+str(c): [wgen() for c in range(1,nb_vals)] for c in range(1,nb_cols+1)})
print(df)
Col1 Col2 Col3 Col4 Col5 Col6 Col7 Col8 Col9
0 aawivA qorjeA qfjuoD nkwgzF auablC aehnqE cwuvzF diqwaF qlnpzG
1 aidjuH ljalaB ldhgsC zaangH mdtgkF lypfnB kynrxG qlnygH zzqyrC
2 pzqibD jdumcF ddufmG xstdcH vqpbkG rjnqxD ugscrA kmvyaE cykutE
3 gqpycH ynaeeA onirjE mnbtyH swjuzF dyvmvC tpxgsH ssnhbD spsojD
4 isptdF qzpitH akzwgE klgqpH pqpcqH psryiD tjaurC daaieC piduzE
Say that we are looking for the "ending A" in Col4-Col7. Then row with index 2 needs to be deleted:
df[~df[['Col'+str(c) for c in range(4,7+1)]]
.apply(lambda x: x.str.match('.*A$').any(), axis=1)]
Col1 Col2 Col3 Col4 Col5 Col6 Col7 Col8 Col9
0 aawivA qorjeA qfjuoD nkwgzF auablC aehnqE cwuvzF diqwaF qlnpzG
1 aidjuH ljalaB ldhgsC zaangH mdtgkF lypfnB kynrxG qlnygH zzqyrC
3 gqpycH ynaeeA onirjE mnbtyH swjuzF dyvmvC tpxgsH ssnhbD spsojD
4 isptdF qzpitH akzwgE klgqpH pqpcqH psryiD tjaurC daaieC piduzE | unknown | |
d18940 | test | I do not understand what you do with do while loop in your code. So i just propose another loop for your case. I'm not sure but hope that code is what you want.
int main() {
char str[22];
int alp, digit, splch, i;
printf("\n\nCount total number of alphabets, digits and special characters :\n");
printf("--------------------------------------------------------------------\n");
printf("Input the string : ");
while (fgets(str, sizeof str, stdin)){
alp = digit = splch = 0;
for (i = 0; i < strlen(str); i++ ) {
if ((str[i] >= 'a' && str[i] <= 'z') || (str[i] >= 'A' && str[i] <= 'Z'))
{
alp++;
}
else if (str[i] >= '0' && str[i] <= '9')
{
digit++;
}
else if(str[i] != '\n')
{
splch++;
}
}
printf("alp = %d, digit = %d, splch = %d\n", alp, digit, splch);
printf("Input the string : ");
}
return 0;
}
Off topic: to determine whether a character is a letter or a digit, you can use the isalpha() and isdigit() functions. They are simpler than the comparisons you used in your code.
A: It seems the program can look the following way
#include <stdio.h>
#include <ctype.h>
#include <string.h>
int main(void)
{
enum { str_size = 22 };
char str[str_size];
puts( "\nCount total number of alphabets, digits and special characters");
puts( "--------------------------------------------------------------");
while ( 1 )
{
printf( "\nInput a string less than or equal to %d characters (Enter - exit): ",
str_size - 2 );
if ( fgets( str, str_size, stdin ) == NULL || str[0] == '\n' ) break;
unsigned int alpha = 0, digit = 0, special = 0;
// removing the appended new line character by fgets
str[ strcspn( str, "\n" ) ] = '\0';
for ( const char *p = str; *p != '\0'; ++p )
{
unsigned char c = *p;
if ( isalpha( c ) ) ++alpha;
else if ( isdigit( c ) ) ++digit;
else ++special;
}
printf( "\nThere are %u letters, %u digits and %u special characters in the string\n",
alpha, digit, special );
}
return 0;
}
The program output might look like
Count total number of alphabets, digits and special characters
--------------------------------------------------------------
Input a string less than or equal to 20 characters (Enter - exit): April, 22, 2020
There are 5 letters, 6 digits and 4 special characters in the string
Input a string less than or equal to 20 characters (Enter - exit):
If the user will just press the Enter key the loop will finish.
Pay attention to that the program considers white space and punctuation characters as special characters. | unknown | |
d18941 | test | I think ArgumentOutOfRangeException occurred because you're not setting DataKeyNames attribute property on the grid, hence the row index is still out of bounds when calling e.RowIndex. You should set it to ID/primary key column name like this:
DataKeyNames="[ID or PK column name]"
Here is an example usage:
<asp:GridView ID="GridView1" runat="server" DataKeyNames="id"
AutoGenerateColumns="False" CellPadding="4" ForeColor="#333333" GridLines="None"
Width="1650px" AutoGenerateDeleteButton="True" OnRowDeleting="GridView1_RowDeleting">
</asp:GridView>
Update 1
Additionally, I found parameter name mismatch on this query:
string query = "DELETE FROM mainpage WHERE id=@ID";
NpgsqlCommand cmd = new NpgsqlCommand(query, cn);
cmd.Parameters.Add("@ID2", NpgsqlDbType.Integer).Value = ID2;
The correct one should be like example below:
string query = "DELETE FROM mainpage WHERE id=@ID";
NpgsqlCommand cmd = new NpgsqlCommand(query, cn);
cmd.Parameters.Add("@ID", NpgsqlDbType.Integer).Value = ID2;
Reference: GridView.DataKeyNames Property | unknown | |
d18942 | test | SELECT t1.*
FROM src_table t1
JOIN ( SELECT ID_PERSON
FROM src_table t2
GROUP BY ID_PERSON
HAVING COUNT(DISTINCT NAME_PERSON) > 1 ) t3 USING (ID_PERSON)
SELECT *
FROM src_table t1
WHERE EXISTS ( SELECT NULL
FROM src_table t2
WHERE t1.ID_PERSON = t2.ID_PERSON
AND t1.NAME_PERSON <> t2.NAME_PERSON )
There are more variants. | unknown | |
d18943 | test | You can not cast functions. In the current Xcode version 6.2 you will get the following run time exception: Swift dynamic cast failure
There is however a workaround for this problem which I implemented in my connect function of https://github.com/evermeer/EVCloudKitDao The solution is to wrap the function instead of casting it. The code will look like this :
public var insertedHandlers = Dictionary<String, (item: EVCloudKitDataObject) -> Void>()
public var updateHandlers = Dictionary<String, (item: EVCloudKitDataObject, dataIndex:Int) -> Void>()
public func connect<T:EVCloudKitDataObject>(
type:T,
completionHandler: (results: [T]) -> Void,
insertedHandler:(item: T) -> Void,
updatedHandler:(item: T, dataIndex:Int) -> Void,
deletedHandler:(recordId: String, dataIndex:Int) -> Void,
errorHandler:(error: NSError) -> Void
) -> Void {
func insertedHandlerWrapper(item:EVCloudKitDataObject) -> Void {
insertedHandler(item: item as T)
}
func updatedHandlerWrapper(item:EVCloudKitDataObject, dataIndex:Int) -> Void {
updatedHandler(item: item as T, dataIndex: dataIndex)
}
self.insertedHandlers[filterId] = insertedHandlerWrapper
self.updateHandlers[filterId] = updatedHandlerWrapper
...
Now the updateHandler still uses the T instead of the EVCloudKitDataObject and in the handler itself you can use the original type and does not need to cast it. | unknown | |
d18944 | test | The XAML designer in Studio takes information about types not from projects, but from their assemblies.
Therefore, if you make a change to the project, the Designer will not see it until you rebuild the project.
You do not need to close / open the Studio for this.
Go to the "Project" menu, select "Build" there.
Or the same can be done in the "Solution Explorer" in the context menu of the Project.
Also, if you made changes to the project, a new build will be performed automatically when the Solution is launched for execution.
In very rare cases (usually after some bugs, incorrect closing of the Studio), before building, you still need to perform a "Cleanup" of the Project or Solution.
P.S. Due to this peculiarity of the XAML Designer, in order not to constantly stumble upon such errors (and in some cases they can be very confusing and even lead to compilation errors), it is recommended to create all types used in XAML in other projects.
This makes it much easier to understand the source of the XAML Designer and Compiler errors and warnings.
A: I fixed it by myself!
I closed VS, rebuilt the project and the error did not pop up again (WPF designer issues : XDG0008 The name "NumericTextBoxConvertor" does not exist in the namespace "clr-namespace:PulserTester.Convertors").
I hate VS. | unknown | |
d18945 | test | You get these nodes by xpath starts-with
//tr[starts-with(@id,'__TOC')]
Then do a foreach over these results to process each block with hard-coded positions:
*
*div array order to get district name, address,...
*div id AUTOGENBOOKMARK_4, AUTOGENBOOKMARK_5 to get Apartment, Number,... | unknown | |
d18946 | test | In a generic way, in CouchDB it's only possible to traverse a graph one level deep. If you need more levels, using a specialized graph database might be the better approach.
There are several ways to achieve what you want in CouchDB, but you must model your documents according to the use case.
*
*If your "C" type is mostly static, you can embed the name in the document itself. Whenever you modify a C document, just batch-update all documents referring to this C.
*In many cases it's not even necessary to have a C type document or a reference from B to C. If C is a tags document, for example, you could just store an array of strings in the B document.
*If you need C from A, you can also store a reference to C in A, best accompanied with the name of C cached in A, so you can use the cached value if C has been deleted.
*If there are only a few instances of one of the document types, you can also embed them directly. Depending on the use case, you can embed B in A, you can embed all As in an array inside of B, or you can even put everything into one document.
With CouchDB, it makes most sense to think of the frequency and distribution of document updates, instead of normalizing data.
This way of thinking is quite different from what you do with SQL databases, but in the typical read-mostly scenarios we have on the web, it's a better trade-off than expensive read queries to model documents like independent entities.
When I model a CouchDB document, I always think of it as a passport or a business letter. It's a single entity that holds valid, correct and complete information, but it's not strictly guaranteed that I am still as tall as in the passport, that I look exactly as in the picture, that I haven't changed my name, or that I have a different address than the one stated on the business letter.
If you provide more information on what you actually want to do with some examples, I will happily elaborate further! | unknown | |
d18947 | test | I guess what you presented is what is given. If you came up with the design it is ok, but I believe it could be improved. Anyway, I try to respond to what I believe was your question straight away.
Vehiculo is the super type of Moto (which can have a side car and becomes 3 wheeler).
Vehiculo has a method esDe2Ruedas, which returns false.
Moto inherits that method <-- this is wrong, it should override it and, depending on side car, return the expected boolean value.
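A minimal sketch of that idea in Python (the original is presumably in another language; the sidecar attribute below is just illustrative):
class Vehiculo:
    def esDe2Ruedas(self):
        return False

class Moto(Vehiculo):
    def __init__(self, has_sidecar=False):
        self.has_sidecar = has_sidecar

    def esDe2Ruedas(self):               # override instead of inheriting the default
        return not self.has_sidecar      # with a sidecar it effectively becomes a 3-wheeler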
In the check method you can now distinguish between Moto and "Moto with sidecar" by using that method. | unknown | |
d18948 | test | There is nothing wrong in principle with doing things in header file. Indeed, header only libraries are quite popular in C++ nowadays. In some cases (such as templates) doing things in header file is the only way to go.
The art of splitting definitions between header file and .cpp file is often a judgment call. Generally, when you define functions in header file, you might hope for better performance (since inlining would be more easily achieved), but you might end up with larger codebase (depending on linker behavior), and you are likely to increase your compilation time.
Instead of asking for best practices, I wholeheartedly suggest you understand what are the mechanics at play there, and make a conscious choice yourself.
A: IMHO ...
Use of std::cout in any file in a library is a symptom of poor design. If you need to output something, provide the interface for the client code to pass a ostream or an ostream-like object that supports inserting data to it.
Use of std::cout in an application-specific file, be it a header file or a .cpp file, is perfectly fine.
A:
Is std::cout in a header file bad practice?
Not necessarily. For example, you might have a function which outputs character sequence into a stream. It would be useful to let the client of the function to choose which stream to use, so the function accepts the stream as an argument. It might make a lot of sense for the function to have the default behaviour of streaming to the standard output stream. Therefore you might have a function declaration in a header such as:
void stream_fancy_stuffs(std::ostream& output_stream = std::cout); | unknown | |
d18949 | test | having the [{ngModel}] in there was bad.
the code below allowed me to make it so that: 'clientsClone' is an array of objects returned from the server and the [value]="clnt.id" with the formControlName="clientId" lets me say "hey this id int is what you need for that form's value!"
code here:
<select formControlName="clientId" (change)="onOptionsChanged(2)">
<option *ngFor="let clnt of clientsClone" [value]="clnt.id">{{clnt.id}} - {{clnt.pointOfContact}} </option>
</select> | unknown | |
d18950 | test | I think what you are looking for is the SWITCH function:
You can use the following formula in cell D6:
=SWITCH(F2; I2; E1/E2; I3; E1*12/E2; I4; E1*52/E2; I5; E1*365/E2)
The logic is:
*
*check the cell F2 (where you have the dropdown)
*if the value of F2 equals I2 (Year) then, just divide the cost by the number of years
*if the value of F2 equals I3 (Month), then make E1*12 and divide it by the the E2 (same as (E1/E2)/12
*if the value of F2 equals I4 (Week), then calculate E1*52/E2 (same as above but with 52 weeks)
*if the value of F2 equals I5 (Day), then calculate E1*365/E2 (same as above but with 365 days)
And so on on the other cells, just change the differences between the formulas, between day, week, month and year. | unknown | |
d18951 | test | your title says "index" but your example shows you wanting to return a string. If, in fact, you are wanting to return the string, try this:
if(initString.includes('/digital/collection/')) {
var components = initString.split('/');
return components[3];
}
A: If the path is always the same, and the field you want is the one after the third /, then you can use split.
var initString = '/digital/collection/music/bunch/of/other/stuff';
var collection = initString.split("/")[3]; // index 3, because the leading "/" makes the first element an empty string
In the real world, you will want to check if the index exists first before using it.
var collections = initString.split("/");
var collection = "";
if (collections.length > 3) {
collection = collections[3];
}
A: You can use const desiredString = initString.slice(19, 24); if its always music you are looking for.
A: If you need to find the next path param that comes after '/digital/collection/' regardless where '/digital/collection/' lies in the path
*
*first use split to get an path array
*then use find to return the element whose 2 prior elements are digital and collection respectively
const initString = '/digital/collection/music/bunch/of/other/stuff'
const pathArray = initString.split('/')
const path = pathArray.length >= 3
? pathArray.find((elm, index)=> pathArray[index-2] === 'digital' && pathArray[index-1] === 'collection')
: 'path is too short'
console.log(path)
A: Think about this logically: the "end index" is just the "start index" plus the length of the substring, right? So... do that :)
const sub = '/digital/collection/';
const startIndex = initString.indexOf(sub);
if (startIndex >= 0) {
let desiredString = initString.substring(startIndex + sub.length);
}
That'll give you from the end of the substring to the end of the full string; you can always split at / and take index 0 to get just the first directory name form what remains.
A: You can also use regular expression for the purpose.
const initString = '/digital/collection/music/bunch/of/other/stuff';
const result = initString.match(/\/digital\/collection\/([a-zA-Z]+)\//)[1];
console.log(result);
The console output is:
music
A: If you know the initial string, and you have the part before the string you seek, then the following snippet returns you the string you seek. You need not calculate indices, or anything like that.
// getting the last index of searchString
// we should get: music
const initString = '/digital/collection/music/bunch/of/other/stuff'
const firstPart = '/digital/collection/'
const lastIndexOf = (s1, s2) => {
return s1.replace(s2, '').split('/')[0]
}
console.log(lastIndexOf(initString, firstPart)) | unknown | |
d18952 | test | Our solution was to create a custom ANT task which gets all the classes annotated with @Entity (using reflections). This will generate the persistence.xml for us, with and nodes. So every class you want to map into the PersistenceContext, needs to be listed in the persistence.xml. This persistence.xml is placed inside a folder META-INF which is wrapped into my-persistence.jar. This jar is placed in the lib folder of my EAR. | unknown | |
d18953 | test | A possible solution is to override the drawForeground() method to paint the vertical line, to calculate the positions you must use the mapToPosition() method:
import sys
from PyQt5.QtCore import Qt, QPointF
from PyQt5.QtGui import QColor, QPainter, QPen
from PyQt5.QtWidgets import QApplication, QMainWindow
from PyQt5.QtChart import (
QBarCategoryAxis,
QBarSet,
QChart,
QHorizontalBarSeries,
QChartView,
QValueAxis,
)
class ChartView(QChartView):
_x = None
@property
def x(self):
return self._x
@x.setter
def x(self, x):
self._x = x
self.update()
def drawForeground(self, painter, rect):
if self.x is None:
return
painter.save()
pen = QPen(QColor("indigo"))
pen.setWidth(3)
painter.setPen(pen)
p = self.chart().mapToPosition(QPointF(self.x, 0))
r = self.chart().plotArea()
p1 = QPointF(p.x(), r.top())
p2 = QPointF(p.x(), r.bottom())
painter.drawLine(p1, p2)
painter.restore()
def main():
app = QApplication(sys.argv)
set0 = QBarSet("Jane")
set1 = QBarSet("John")
set2 = QBarSet("Axel")
set3 = QBarSet("Mary")
set4 = QBarSet("Samantha")
set0 << 1 << 2 << 3 << 4 << 5 << 6
set1 << 5 << 0 << 0 << 4 << 0 << 7
set2 << 3 << 5 << 8 << 13 << 8 << 5
set3 << 5 << 6 << 7 << 3 << 4 << 5
set4 << 9 << 7 << 5 << 3 << 1 << 2
series = QHorizontalBarSeries()
series.append(set0)
series.append(set1)
series.append(set2)
series.append(set3)
series.append(set4)
chart = QChart()
chart.addSeries(series)
chart.setTitle("Simple horizontal barchart example")
chart.setAnimationOptions(QChart.SeriesAnimations)
categories = ["Jan", "Feb", "Mar", "Apr", "May", "Jun"]
axisY = QBarCategoryAxis()
axisY.append(categories)
chart.addAxis(axisY, Qt.AlignLeft)
series.attachAxis(axisY)
axisX = QValueAxis()
chart.addAxis(axisX, Qt.AlignBottom)
series.attachAxis(axisX)
axisX.applyNiceNumbers()
chart.legend().setVisible(True)
chart.legend().setAlignment(Qt.AlignBottom)
chartView = ChartView(chart)
chartView.setRenderHint(QPainter.Antialiasing)
chartView.x = 11.5
window = QMainWindow()
window.setCentralWidget(chartView)
window.resize(420, 300)
window.show()
app.exec()
if __name__ == "__main__":
main() | unknown | |
d18954 | test | As indicated in the comments on the question, what we were looking for was to provide a Middleware. The far simplest way to do this is by adding this piece of code into the Configure method, and that is what we decided to go with.
app.Map("/HealthCheck", a =>
{
a.Run(async context =>
{
await context.Response.WriteAsync($"{env.ApplicationName} is alive in {env.EnvironmentName}");
});
});
Since that specifically does NOT answer the question of creating a way to do this in a NuGet package, here is how you would do it in an extensions class.
public static class ApplicationBuilderExtensions
{
/// <summary>
/// Gives a happy little response when someone makes a request to healthCheckUrl
/// </summary>
public static IApplicationBuilder UseHealthCheck(this IApplicationBuilder app, string environmentName, string applicationName, string healthCheckUrl)
{
app.Map(healthCheckUrl, a =>
{
a.Run(async context =>
{
await context.Response.WriteAsync($"{applicationName} is alive in environmentName");
});
});
return app;
}
}
And then call it with
app.UseHealthCheck(env.EnvironmentName, env.ApplicationName, "/myHealthCheckUrl");
in the Startup.Configure method. | unknown | |
d18955 | test | Happened to me after I updated one of the packages in node_modules. Probably integrity/checksum-related issue. The cure was to flush node_modules and run
$ npm install
again. | unknown | |
d18956 | test | I use a TemplateFieldand direct render the link, here is how:
On the GridView aspx page I use:
<asp:TemplateField >
<ItemTemplate >
<%#LinkToGoto(Container.DataItem)%>
</ItemTemplate>
</asp:TemplateField>
and on code behind I make the link as:
protected string LinkToGoto(object oItem)
{
// read the data from database
var cOrderId = (int)DataBinder.Eval(oItem, "OrderId");
// format and render back the link
return String.Format("<a href=\"http://domain.com/Details.aspx?OrderId={0}\">go to order</a>", OrderId);
} | unknown | |
d18957 | test | You don't really need to do that: .NET BCL already has everything you need.
A: Take a look at App.Config and the ConfigurationManager class.
A: If you expand the Properties folder in the SolutionExplorer you should find a Settings.Settings item. Double clicking on this will open the settings editor. This enables you to declare and provide initial values for settings that can either be scoped to the application or the current user. Since the values are persisted in Isolated storage you do not need to worry about what privileges the user is executing under.
For a wee example:
I created a new string setting with the name Drink and a TextBox named drinkTextBox. The code to assign the current value to the text box is:
drinkTextBox.Text = Properties.Settings.Default.Drink;
and to update the value persisted:
Properties.Settings.Default.Drink = drinkTextBox.Text;
Properties.Settings.Default.Save();
A: Depending on how flexible you want it to be, you can use the build in Settings designer (go to Project Properties > Settings) and you can add settings there.
These are strongly typed and accessible through code.
It has built in features like Save, Load and Reload
A: We'll often create a sealed class that has a number of properties that wrap calls to the System.Configuration.ConfigurationManager class. This allows us to use the standard configuration management capabilities offered by the class and the app/web.config file but makes the data very easy to access by other classes.
For example we might create a property to expose the connection string to a database as
public static string NorthwindConnectionString
{
get { return ConfigurationManager.ConnectionStrings["Northwind"].ConnectionString; }
}
While it creates a wrapper around one line of code, which we usually try to avoid, it does make certain confiuration properties accessible via intellisense and provides some insullation around changes to the location of underlying configuration data. If we wanted to move the connection string to the registry, we could do so without major impact to the application.
We find this most helpful when we have larger teams or when we need to hand off code from one team to another. It keeps people from needing to remember what the various settings were named in the config files and even where configuration information is stored (config file, database, registry, ini file, etc.)
A: For noddy apps I use appSettings. For enterprise apps I usually create some custom config sections. CodeProject has some excellent articles on this.
For your scenario of key/value pairs I'd probably use something like this.
A: Building a dictionary in the standard settings
Using the standard Settings, it isn't possible to store dictionary style settings.
To emulate the System.Collections.Specialized.StringDictionary,
what I've done in the past is to use two of the System.Collections.Specialized.StringCollection typed settings (this is one of your options for the setting type).
I created one called Keys, and another called Values. In a class that needs these settings I've created a static constructor and looped through the two StringCollections and built the StringDictionary into a public static property. The dictionary is then available when needed.
public static StringDictionary NamedValues = new StringDictionary();
static ClassName() // static constructor
{
StringCollection keys = Properties.Settings.Default.Keys;
StringCollection vals = Properties.Settings.Default.Values;
for(int i = 0; i < keys.Count; i++)
{
NamedValues.Add(keys[i], vals[i]);
}
} | unknown | |
d18958 | test | Unfortunately there is not a way to capture a returned value from a cell magic. With a line magic you can do:
a = %prun -r ...
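For instance, a hedged illustration (only valid inside IPython; the profiled expression is arbitrary and just for demonstration):
stats = %prun -r sum(i * i for i in range(10**6))
stats.sort_stats('cumulative').print_stats(5)  # stats is a pstats.Stats object returned by -r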
But cell magics have to start at the beginning of the cell, with nothing before them. | unknown | |
d18959 | test | It's probably because the rec.dedication = tot_late / 8 line is outside the for rec in self loop, which means the value is only set on the last record it computes.
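A minimal sketch of the fix (the model and field names below are assumptions for illustration, not taken from your module):
from odoo import api, fields, models

class AttendanceReport(models.Model):  # hypothetical model, for illustration only
    _name = 'x.attendance.report'

    tot_late = fields.Float()
    dedication = fields.Float(compute='_compute_dedication')

    @api.depends('tot_late')
    def _compute_dedication(self):
        for rec in self:
            rec.dedication = rec.tot_late / 8  # the assignment now runs once per record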
Also, the pass value seems unnecessary here. | unknown | |
d18960 | test | You are trying to get a named instance, but from what I can see of the code you have provided, you don't name your instances. The line of code that names your instances is commented out.
But even if you just used ObjectFactory.GetInstance<IPropertyType>(); here, you would have got an error because StructureMap doesn't know which constructor to use. There are several solutions to this problem.
*
*Change your design so you only have one constructor
*Mark your default constructor with the [DefaultConstructor] attribute, then it will work.
*You can register it with objectFactory manually with something like this:
x.For<IPropertyType>().Use<YourConcreteType>().Ctor<string>("propertyName").Is("someValue").Ctor<string>("displayName").Is("someValue"); // YourConcreteType stands in for your concrete implementation of IPropertyType
*You can write a custom registration convention as described here | unknown |
d18961 | test | RxJS has a timeout operator. Probably you can use that to increase the timeout
getBookingInfo(dateType: string) {
...
return this.ServiceHandler.getTxnInfo([], params).pipe(
timeout(10*60*1000) // 10 minutes
);
}
And then you can update the calling function to
getBookingDetails() {
this.getBookingInfo('BOOKING').subscribe(
bookings => {
console.log(bookings);
});
}
A: You can use the timeout operator of rxjs, including it in a pipe with timeout:
import { timeout, catchError } from 'rxjs/operators';
import { of } from 'rxjs/observable/of';
...
getTxnInfo(headers: any[], params: any[]) {
this.apiService.get(environment.rm_url + 'rm-analytics-api/dashboard/txn-info', headers, params)
.pipe(
timeout(20000),
catchError(e => {
return of(null);
})
);
}
Using it:
this.ServiceHandler.getTxnInfo([], params).subscribe(
txnInfos => {
console.log(txnInfos );
}); | unknown | |
d18962 | test | map is a function that takes request and produces a response:
HttpRequest => HttpResponse
The challenge is that response is a type of Future. Therefore, you need a function that deals with it. The function that takes HttpRequest and returns Future of HttpResponse.
HttpRequest => Future[HttpResponse]
And voila, mapAsync is exactly what you need:
val requestHandler: Flow[HttpRequest, HttpResponse, _] = Flow[HttpRequest].mapAsync(2) {
case HttpRequest(HttpMethods.GET, Uri.Path("/api"), _, _, _) =>
Http().singleRequest(HttpRequest(uri = "http://www.google.com")).map (resp => {
resp.discardEntityBytes()
println(s"The request was successful")
HttpResponse(StatusCodes.OK)
})
} | unknown | |
d18963 | test | IIUC:
you need value_counts()+reset_index()
out=df.value_counts(subset=['c2','c1']).reset_index(name='count')
output of out:
c2 c1 count
0 p1 q1 2
1 p1 q2 1
2 p1 q3 1
3 p2 q1 1
4 p2 q2 1
If you need a pie chart (decorate it according to your needs):
df.value_counts(subset=['c2','c1']).plot(kind='pie',autopct='%.2f%%')
output: (pie chart rendered by the line above) | unknown |
d18964 | test | My answer: don't do this via SSIS if it is a hassle. Add a default of GETDATE() on the new column in the destination table. No need to change the SSIS package this way, guaranteed data in the column each time.
A: I can't think of any reason a derived column would not work. That being said, a way to test it could be to add a script component in between that writes to another column in the DB or out to an Excel file, to see if it is getting triggered with every record flowing through it.
The script component would be a simple:
Row.ColumnName = DateTime.Now;
This would do the same thing as the derived column albeit with slightly more overhead. | unknown | |
d18965 | test | To solve this problem:
*
*We can use the method overloading to capture all data
*Each method will use a different data type but will have the same name - data()
*The number of null values of each array should be found out.
*The variable n will hold the largest size among all 3 arrays.
*n will be the test expression limit when printing the table
public class overLoadEg {
//array that will store integers
static int[] intArray = new int[10];
//array that will store doubles
static double[] doubleArray = new double[10];
//array that will store strings
static String[] stringArray = new String[10];
static int i = 0, j = 0, k = 0, m, n;
public static void main(String[] args) {
//input values
data(23);
data(23.4554);
data("Hello");
data("world");
data("help");
data(2355);
data(52.56);
data("val");
data("kkj");
data(34);
data(3);
data(2);
data(4);
data(5);
data(6);
data(7);
data(8);
display();
}
public static void data(int val){
//add int value to int array
intArray[i] = val;
System.out.println("Int " + intArray[i] + " added to IntArray");
i++;
}
public static void data(Double val){
//add double value to double array
doubleArray[j] = val;
System.out.println("Double " + doubleArray[j] + " added to doubleArray");
j++;
}
public static void data(String val){
//add string value to stringarray
stringArray[k] = val;
System.out.println("String " + stringArray[k] + " added to stringArray");
k++;
}
public static void max(){
//To get the maximum number of values in each array
int x, y, z;
x = y = z = 0;
//counting all the null values in each array and storing in x, y and z
for(m=0;m<10;m++){
if(intArray[m] == 0){
++x;
}
if(doubleArray[m] == 0){
++y;
}
if(stringArray[m] == null){
++z;
}
}
//subtracting the null/0 count from the array size
//this gives the active number of values in each array
x = 10 - x;
y = 10 - y;
z = 10 - z;
//comparing all 3 arrays and check which has the max number of values
//the max numbe is stored in n
if(x > y){
if(x > z){
n = x;
}
else{
n = z;
}
}
else{
if(y > z){
n = y;
}
else{
n = z;
}
}
}
public static void display(){
//printing the arrays in table
//All the null/0 values are excluded
System.out.println("\n\nInt\tDouble\t\tString");
max();
for(m = 0; m < n; m++){
System.out.println(intArray[m] + "\t" + doubleArray[m] + "\t\t" + stringArray[m]);
}
System.out.println("Count : " + m);
}
} | unknown | |
d18966 | test | I created a custom validator to solve this issue.
The validator:
export function oneValueHasToBeChangedValidator(values: { controlName: string, initialValue: string | number | boolean }[]): ValidatorFn {
return (form: FormControl): { [key: string]: any } => {
let sameValues = true;
for (let comparingValues of values) {
if (form.value[comparingValues.controlName] != comparingValues.initialValue) {
sameValues = false;
break;
}
}
return sameValues ? {'sameValues': {values: values}} : null;
};
}
How I took use of it:
this.userForm = this.formBuilder.group({
status: this.selectedUser.status == 1,
username: [this.selectedUser.username, [Validators.required, Validators.minLength(LlqaConstants.USERNAME_MIN_LENGTH)]],
realname: [this.selectedUser.realname, [Validators.required, Validators.minLength(LlqaConstants.REALNAME_MIN_LENGTH)]],
password: ['', [Validators.minLength(LlqaConstants.PASSWORD_MIN_LENGTH)]],
usercomment: this.selectedUser.comment == null ? "" : this.selectedUser.comment
});
this.userForm.setValidators(oneValueHasToBeChangedValidator([{
controlName: "status",
initialValue: this.selectedUser.status == 1
}, {
controlName: "username",
initialValue: this.selectedUser.username
}, {
controlName: "realname",
initialValue: this.selectedUser.realname
}, {
controlName: "password",
initialValue: ""
}, {
controlName: "usercomment",
initialValue: this.selectedUser.comment == null ? "" : this.selectedUser.comment
}]));
this.userForm.updateValueAndValidity();
A: You could cache the initial value as soon as you set the form object. Then change your disableSaveButton method to check the equality of the two values.
For instance:
export class MyComponent {
initialValue: any;
constructor(private fb: FormBuilder) {
this.form = fb.group({...});
this.initialValue = this.form.value;
}
disableSaveButton() {
return JSON.stringify(this.initialValue) === JSON.stringify(this.form.value);
}
}
The disable method will check if the current form value is the same as the initial value. | unknown | |
d18967 | test | This extra space is because of margin-right applied to links in Bootstrap's default styles.
You can fix this by overriding that styles or remove width and use left: 0 and right: 2px to stretch line.
jQuery(function () {
jQuery('#myTab a:last').tab('show')
})
@import url('http://netdna.bootstrapcdn.com/bootstrap/3.0.0/css/bootstrap.min.css');
li.active:after {
position: absolute;
padding: 1px;
top: -1px;
content: '';
background: #000;
height: 4px;
right: 2px;
left: 0;
}
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.9.1/jquery.min.js"></script>
<script src="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/js/bootstrap.min.js"></script>
<ul class="nav nav-tabs" id="myTab">
<li class="active"><a data-target="#home" data-toggle="tab">Home</a></li>
<li><a data-target="#profile" data-toggle="tab">Profile</a></li>
<li><a data-target="#messages" data-toggle="tab">Messages</a></li>
<li><a data-target="#settings" data-toggle="tab">Settings</a></li>
</ul>
<div class="tab-content">
<div class="tab-pane active" id="home">Home</div>
<div class="tab-pane" id="profile">Profile</div>
<div class="tab-pane" id="messages">Message</div>
<div class="tab-pane" id="settings">Settings</div>
</div> | unknown | |
d18968 | test | 1. Don't try to mix JSTL tags and JSF tags; they're chalk and cheese.
2. JSF is an MVP framework, so you're going against the grain by trying to define your data sources in the view.
3. To emit data via an outputText control, bind its value attribute to the model (e.g. a managed bean).
It is probably possible to do something like this:
<!-- other code elided -->
<x:set var="x" select="$simple/child" />
<h:outputText value="#{x}" />
...but, in general, see points 1 and 2.
Just a suggestion: ensure you've added the http://java.sun.com/jsp/jstl/core namespace to the page to use JSTL core. | unknown | |
d18969 | test | Tell the static part of the graph that the shape is unknown from the start as well.
a = tf.Variable([3,3,3], validate_shape=False)
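For reference, the surrounding setup for a self-contained run would be roughly as follows (assuming the TF 1.x graph-mode API):
import tensorflow as tf

a = tf.Variable([3, 3, 3], validate_shape=False)
sess = tf.Session()
sess.run(a.initializer)  # the variable must be initialized before its dynamic shape can be read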
Now, to get the shape, you cannot know statically, so you have to ask the session, which makes perfect sense:
print(sess.run(tf.shape(a))) | unknown | |
d18970 | test | Yeah- random benchmark variability, not to mention the fact that the whole program is slower might have nothing at all to do with this specific class.
A: Using templates in your container class may lead to the known issue of template code bloat. Roughly speaking, it can lead to more page faults in your program, decreasing performance.
So why does this matter? Because the compiler generates a separate class for each instantiation of your template instead of just one, leading to more pages in your binary (more code pages, if you prefer), which can statistically lead to more page faults, depending on your run-time execution.
Have a look at the size of your binary with one template instantiation, and then with two, which should be the heavier case. It will give you a grasp of the extra code size introduced by each new instantiation.
Here is the Wikipedia article on that topic: Code bloat. The issue would be similar if you forced the compiler to inline every function and method in your program, if your compiler even allowed that. The standard tries to prevent this by making the inline keyword a "request" that the compiler does not have to honor every time. For instance, GCC compiles your code to an intermediate language in order to evaluate whether the resulting binary would suffer code bloat, and may discard the inline request as a result. | unknown |
d18971 | test | It doesn't look like the coremltools Keras converter lets you specify which inputs are optional.
However, the proto files that contain the MLModel definition say that a Model object has a ModelDescription, which has an array of FeatureDescription object for the inputs, which has a FeatureType object, which has an isOptional boolean.
So something like this should work:
mlmodel = keras.convert(...)
spec = mlmodel._spec
spec.description.input[1].type.isOptional = True
mlmodel.save(...)
I didn't actually try this, so the exact syntax may be different, but this is the general idea. | unknown | |
d18972 | test | Some ideas:
You're using stosb but you don't set up ES. Are you sure it's already OK?
Does line.Substring use 0-based or 1-based indexing? | unknown | |
d18973 | test | UserController is session-scoped, but the producer is not. I.e. the producer has @Dependent scope, so the User bean gets injected once when the servlet is initialized.
Try adding @SessionScoped to your producer method. | unknown | |
d18974 | test | I'm not sure if this speaks to your exact problem, or whether you really need to create this yourself, but if you're open to additional dependencies I use the exception_notifier gem for this. | unknown | |
d18975 | test | You could change the \w to \B to verify that there is not a word boundary.
console.log('entities '.replace(/\Bies\b/g, 'y'));
A: Just capture the character before the "ies":
'entities '.replace(/(\w)(ies)(?:[\W|$|_])+/g, '$1y');
Now your question asked about using a function; you can do that too:
'entities '.replace(/(\w)(ies)(?:[\W|$|_])+/g, function(_, before, repl) {
return before + "y";
});
I don't know what you want to do with the subsequent stuff after "ies"; you can either capture it and glue it back into the replacement, or else use positive look-ahead. Portions of the input text matched by look-ahead are not part of the match involved with the replacement operation. In other words, the look-ahead does succeed or fail based on the pattern, but the characters matched are not made part of the "to be replaced" grouping. | unknown | |
d18976 | test | No. All elements are rectangles by definition. Even the <area> tag wouldn't get past that. | unknown |
d18977 | test | I read the article that the OP's code is originally from, and I believe it's overkill. What should be done to avoid so much work is to set up the elements' angles initially so you know what to start from, or reset the elements to 0.
Example A features a <form> that allows the user to rotate an element by adding positive and/or negative numbers (min -360, max 360).
Example B features a function that operates the same as the event handler (spin(e)) in Example A.
Details are commented in both examples
Example A
<form> as User Interface
// Bind <form> to the submit event
document.forms.spin.onsubmit = spin;
function spin(e) {
// Stop normal behavior when submit is triggered
e.preventDefault();
// Reference all form controls
const IO = this.elements;
// Reference <output>
const comp = IO.compass;
// Reference <input>
const turn = IO.turn;
// Get <input> value and convert it into a number
let deg = +turn.value;
// Add comp value with turn value and assign to comp value
comp.value = +comp.value +(deg);
// If comp value is ever over 360, reset it
if (+comp.value > 360) {
comp.value = +comp.value - 360;
}
// .cssText is like .textContent for the style property
comp.style.cssText = `transform: rotate(${comp.value}deg)`;
}
fieldset {
display: flex;
justify-content: center;
align-items: center;
}
#turn {
width: 3rem;
text-align: center;
}
#compass {
position: relative;
display: flex;
justify-content: center;
align-items: center;
width: 100px;
height: 100px;
border-radius: 50%;
background: rgba(0, 0, 255, 0.3);
}
#compass::before {
content: '➤';
position: absolute;
z-index: 1;
transform: rotate(-90deg) translate(55%, -5%);
transform-origin: center center;
font-size: 3rem;
}
<form id='spin'>
<fieldset>
<input id='turn' type='number' min='-360' max='360' step='any'><input id='add' type='submit' value='Add'>
</fieldset>
<fieldset>
<output id='compass' value='0'></output>
</fieldset>
</form>
Example B
No <form>, Only a Function
// Declare variable to track angle
let degree;
/**
* @desc - Rotates a given element by a given number of
* degrees.
* @param {object<DOM>} node - The element to rotate
* @param {number} deg - The number of degrees to rotate
* @param {boolean} init - If true the element's rotate value
* will be 0 and degree = 0 @default is false
*/
function turn(node, deg, init = false) {
// If true reset node rotate and degree to 0
if (init) {
node.style.cssText = `transform: rotate(0deg)`;
degree = 0;
}
/*
Simple arithmetic
Reset degrees when more than 360
*/
degree = degree + deg;
if (degree > 360) {
degree = degree - 360;
}
// .cssText is like .textContent for the style property
node.style.cssText = `transform: rotate(${degree}deg)`;
console.log(node.id + ': ' + degree);
}
const c = document.getElementById('compass');
turn(c, 320, true);
fieldset {
display: flex;
justify-content: center;
align-items: center;
}
#compass {
position: relative;
display: flex;
justify-content: center;
align-items: center;
width: 100px;
height: 100px;
border-radius: 50%;
background: rgba(0, 0, 255, 0.3);
}
#compass::before {
content: '➤';
position: absolute;
z-index: 1;
transform: rotate(-90deg) translate(55%, -5%);
transform-origin: center center;
font-size: 3rem;
}
<fieldset>
<output id='compass' value='0'></output>
</fieldset> | unknown | |
d18978 | test | I do know that express is a free version. If you are talking about registration keys as in free to premium, then you do not have to worry as all your codes save via cloud and you don't have to get a new one. | unknown | |
d18979 | test | This is a pretty standard sorting problem.
Start with a test for prime on both elements and end with a comparison of the Date value of the created date.
You cannot compare Createdate directly, as this would result in an alphabetical comparison of two strings, not a mathematical comparison of timestamps.
var x = {
"accts": [{
"Id": "Acc1",
"Person": true,
"Name": "Hello Roy",
"ExternalID": "123456",
"AddressTotal": [{
"Account": "Acc1",
"Id": "Ad3",
"Name": "1 camac Street",
"City": "Chennai",
"State": "KN",
"Zip": "23451",
"AddType": "office",
"Prime": false,
"RecTypeId": "R3",
"Createdate": "5th Feb 2018"
}, {
"Account_vod__c": "Acc2",
"Id": "Ad2",
"Name": "1 strand Road",
"City": "Mumbai",
"State": "JK",
"Zip": "12345",
"AddType": "College",
"Prime": false,
"RecTypeId": "R2",
"Createdate": "2nd Feb 2018"
}, {
"Account": "Acc1",
"Id": "Ad1",
"Name": "1 Park Street",
"City": "Bangalore",
"State": "TN",
"Zip": "74324",
"AddType": "School",
"Prime": true,
"RecTypeId": "R1",
"Createdate": "1st Feb 2018"
}],
"Rectype": {
"Name": "ABC",
"Id": "Id1"
}
}],
"hasAccess": ["A1"]
};
//TEST
var arr = x.accts[0].AddressTotal.sort(function sorter(a, b) {
if (a.Prime) {
return -1;
}
if (b.Prime) {
return 1;
}
return new Date(a.Createdate).getTime() - new Date(b.Createdate).getTime();
});
console.log(arr);
By the way, is there any reason accts is an array and not just an object? | unknown | |
d18980 | test | You should probably filter out the 'null' emails, like this.
AND (
(tenants.email != '' AND tenants.email = reports.email) OR
(tenants.alt_email != '' AND tenants.alt_email = reports.alt_email)
)
In reality, this seems like it ought to be a left join, i.e.:
SELECT
reports.person_reporting, reports.request_type, reports.details, tenants.office,
tenants.email, tenants.alt_email, tenants.office_phone, tenants.personal_cell,
tenants.emergency_phone, tenants.address, tenants.building_name
FROM reports
LEFT JOIN tenants ON (
(tenants.email != '' AND tenants.email = reports.email)
OR (tenants.alt_email != '' AND tenants.alt_email = reports.alt_email)
)
WHERE reports.id = '{$id}'
This way, you will get the reports with no tenants at least once.
A: I'd assume that your problem is that there are a bunch of tenants with alt_email = NULL, and a bunch of reports with alt_email = NULL, and your OR clause will match each report with alt_email = NULL with all the tenants records with alt_email = NULL.
You should probably catch the NULL case:
WHERE reports.id = '{$id}'
AND (
(tenants.email IS NOT NULL AND tenants.email = reports.email)
OR (tenants.alt_email IS NOT NULL AND tenants.alt_email = reports.alt_email)
) | unknown | |
d18981 | test | Uninstall the MySql.Data NuGet package and install MySqlConnector instead; it has better cross-platform compatibility with Xamarin.
FWIW, initiating a database connection from an Android device is a bad idea, because the credentials are easily extracted from the application and could be used by anyone to log into your database. The better approach is for the Xamarin app to authenticate against a web service (that you author), e.g., with username and password, and for the web service to connect to the database. | unknown | |
d18982 | test | Take the transpose which also converts it to a matrix, and then convert to vector:
as.vector(t(a))
[1] 1 2 3 4 2 44 66 77 9 0 0 4
A: Use James' answer.
Here is another alternative: unlist and sort.
unlist(a)[order(rep(seq_len(nrow(a)),ncol(a)))]
#qq1 ee1 rr1 tt1 qq2 ee2 rr2 tt2 qq3 ee3 rr3 tt3
# 1 2 3 4 2 44 66 77 9 0 0 4
That way you retain information in names, which could be useful. If you don't want the names, use unlist with use.names=FALSE.
A: For fun, here's another alternative:
> scan(textConnection(do.call(paste, a)))
Read 12 items
[1] 1 2 3 4 2 44 66 77 9 0 0 4
Where "a" is:
a <- read.table(textConnection("qq ee rr tt
1 2 3 4
2 44 66 77
9 0 0 4"), header=T) | unknown | |
d18983 | test | If you are using Kubernetes, here are the high level steps:
*
*Create your micro-service Deployments/Workloads using your docker images
*Create Services pointing to these deployments
*Create Ingress using Path Based rules pointing to the services
Here are sample manifest/yaml files (change docker images, ports etc. as needed):
apiVersion: v1
kind: Service
metadata:
name: svc-gateway
spec:
ports:
- port: 80
selector:
app: gateway
---
apiVersion: v1
kind: Service
metadata:
name: svc-messaging
spec:
ports:
- port: 80
selector:
app: messaging
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: deployment-gateway
spec:
replicas: 1
template:
metadata:
labels:
app: gateway
spec:
containers:
- name: gateway
image: gateway/image:v1.0
ports:
- containerPort: 80
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: deployment-messaging
spec:
replicas: 1
template:
metadata:
labels:
app: messaging
spec:
containers:
- name: messaging
image: messaging/image:v1.0
ports:
- containerPort: 80
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: ingress-for-chat-application
spec:
rules:
- host: chat.example.com
http:
paths:
- backend:
serviceName: svc-gateway
servicePort: 80
path: /api/v1/users
- backend:
serviceName: svc-messaging
servicePort: 80
path: /api/v1/messages
If you have other containers running in the same namespace and would like to communicate with these services you can directly use their service names.
For example:
curl http://svc-messaging or curl http://svc-gateway
You don't need to run your own service discovery, that's taken care of by Kubernetes!
Some visuals: the original answer illustrates Step 1, Step 2 and Step 3 with screenshots. | unknown |
d18984 | test | You may have a method that only takes an instance of Bird. Since Swan is a Bird, you can use an instance of Swan and treat it as Bird.
That's the beauty of polymorphism. It allows you to change out the implementation of the class' internals without breaking the rest of your code.
A: Where it is calling new Swan(), it is creating a new Swan object in memory.
When the Swan object is assigned to the Bird variable (possible because a Swan is a subtype of Bird), the Bird variable simply has a pointer to the Swan in memory. Because titi is declared as a Bird, the object can now be accessed / treated like a Bird for compile-time type checking and functionality... though it is always a Swan, and can be cast back to a Swan to access extended Swan functionality.
A: Bird titi = new Swan();
is actually known as programming to a superclass (or, more generally, programming to an interface)
means:
titi is a Bird, pointing to a Swan reference... note that this is only valid because of inheritance... which means in this case that Swan is a class that extends the Bird class
why: it allows modifications in the code... you can later decide not to use a Swan but an Owl, so
Bird titi = new Swan(); will be replaced with Bird titi = new Owl(); and everything will work fine
programming to interfaces is more elegant because you can do:
IFly titi = new Swan();
which is valid if the Swan implements the interface IFly, so it is:
titi is something that can fly, pointing to a Swan ref.
A: Bird titi = new Swan();
It's all about Polymorphism.
A Parent class/Interface type can hold its Child class's Object.
Here Bird is the Parent class/Interface and Swan is the Child.
The same example is below:
List list = new ArrayList();
Here List is the Parent Interface and ArrayList is its Child class. | unknown |
d18985 | test | You can see the w3schools documentation. It gives you a sample popup at the top of a div.
This code opens a popup:
// When the user clicks on <div>, open the popup
function myFunction() {
var popup = document.getElementById("myPopup");
popup.classList.toggle("show");
}
/* Popup container */
.popup {
position: relative;
display: inline-block;
cursor: pointer;
}
/* The actual popup (appears on top) */
.popup .popuptext {
visibility: hidden;
width: 160px;
background-color: #555;
color: #fff;
text-align: center;
border-radius: 6px;
padding: 8px 0;
position: absolute;
z-index: 1;
bottom: 125%;
left: 50%;
margin-left: -80px;
}
/* Popup arrow */
.popup .popuptext::after {
content: "";
position: absolute;
top: 100%;
left: 50%;
margin-left: -5px;
border-width: 5px;
border-style: solid;
border-color: #555 transparent transparent transparent;
}
/* Toggle this class when clicking on the popup container (hide and show the popup) */
.popup .show {
visibility: visible;
-webkit-animation: fadeIn 1s;
animation: fadeIn 1s
}
/* Add animation (fade in the popup) */
@-webkit-keyframes fadeIn {
from {opacity: 0;}
to {opacity: 1;}
}
@keyframes fadeIn {
from {opacity: 0;}
to {opacity:1 ;}
}
<div class="popup" onclick="myFunction()" style="left: 35px; top: 60px">Click me!
<span class="popuptext" id="myPopup">Popup text...</span>
</div>
You also can use microtip, it is a pretty library witch give you the opportunity to create simply popup. This is the only declaration: <button aria-label="Hey tooltip!" data-microtip-position="top-left" role="tooltip">. However, you have to download a package (just 1kb) with rpm in your server. | unknown | |
d18986 | test | Yes, the DB name is usually the system name; though it doesn't have to be.
Originally, the AS/400 support only a single DB.
With the introduction of independent storage pools (iASP), today's IBM i machines can have multiple DBs.
From a 5250 session, try:
WRKRDBDIRE
Look for the *LOCAL entry, may be the only one.
You can also see the DB names using IBM i Navigator for Windows or the web-based IBM Navigator. The DB names are shown under the "Databases" node;
in this example there are three DBs on the system: Rchasma1, Iasp320, Ima1db1. | unknown |
d18987 | test | Did you check this: File Docs
As per this doc you can do it as follows:
$request->file('photo')->move($destinationPath, $fileName);
where $fileName is an optional parameter that renames the file.
so you can use this like:
$fileName = str_random(30); // any random string
then pass this as above. | unknown | |
d18988 | test | This is a horrible data layout. You should have an association table, with one row per customer and option.
But, you can do it:
select c.customer, sum(o.cost) as cost
from customers c left outer join
options o
on (c.sunroof = true and o.option = 'sunroof' or
c.mag_wheels = true and o.option = 'mag_wheels' or
c.spoiler = true and o.option = 'spoiler'
)
group by c.customer;
EDIT:
You do not want all options in a single record. Instead, you need an association table:
create table customer_options (
customer_optionid unsigned auto_increment,
customer varchar(255) references customer(name),
option varchar(255) references option(option)
);
Actually you should really have integer primary keys for all the tables, and use them for the foreign key references. If you need data in the output in the question, then just write a query to return it in that format.
A: Looking at the table structure, I too think it will not be possible to write joins because, as you mentioned, the tables don't have a relation between them.
I assume you have just started the project, so it's time to first revisit your DB structure and correct it.
Ideally, you should have a customer table with a customer id.
Then you should have products table with product id.
One table will hold data on what products customers have purchased - something like customer_products. This is a one-to-many relation, so customer 1 can have products 1, 3 and 5, which means there will be three entries in customer_products.
And then when you want a total, you can join the customer and product tables via customer_products and sum the prices to get the total amount for each customer.
A: Bad design. You must have a customers table like this:
custumer_id
customer_name
other fileds...
On the other hand you should have an accesories table, where you usually describe each item-
accesory_id
accesory_name
supplier_id
country_of_origin
other stuff
Also an accesory_price table, where prices are added due to the fact that prices change.
accesory_id
price
active_price
date_price_added
And finally you should relate them all in a customer_accesory table:
customer_id
accesory_id
By having this, you can join tables and select both customer basket size and customer preferences of accessories. Basket size, or the amount purchased by each customer, can be summarized with SUM, AVG, or COUNT, or you can pivot the data using GROUP_CONCAT in order to generate high-quality reports. | unknown |
d18989 | test | Your main problem appears to be related to the concept of how a semaphore works. Semaphores are best viewed as a signal between a producer and a consumer. When the producer has done something, it posts a signal on the semaphore, and the consumer waits on the semaphore until the producer posts a signal.
So in your case, there should only be one semaphore between the consumer and the producer -- they should share this semaphore for their synchronization. Also, the semaphore should start at the value zero since nothing has been produced yet. Every time the producer posts to the semaphore the value increases by one; when the consumer waits on the semaphore it will sleep if the value is zero, until the producer posts and the value becomes one. If the producer is much faster than the consumer, the value of the semaphore can climb above one, which is fine, as long as the consumer consumes the output in the same units as the producer produces them.
So a working example here, but without any error handling -- adding error handling is beyond the scope of this -- I have used threads but you can do the same with processes as long as you can share the semaphore between them
#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>
sem_t thereBeData;
void* readFromFifoSendToFile(void* arg) {
FILE *fp = stdin;
char buffer[100];
FILE *file;
file = fopen("file", "a+");
while(1) {
fscanf(fp, "%s", buffer);
fprintf(file,"%s\n",buffer);
fflush(file);
sem_post(&thereBeData); // signal the consumer
}
}
void* readFromFileAndPrint(void* arg) {
FILE *fp = 0;
char buffer[100];
int counter = 0;
while(1) {
sem_wait(&thereBeData); // Waiting for the producer
if (!fp) fp = fopen("file", "r");
fscanf(fp, "%s", buffer);
printf("%s\n", buffer);
}
}
int main(void)
{
pthread_attr_t attr;
pthread_t thread1;
pthread_t thread2;
sem_init(&thereBeData, 0,0);
pthread_attr_init(&attr);
pthread_create(&thread1, &attr, readFromFifoSendToFile, (void*)0);
pthread_create(&thread2, &attr, readFromFileAndPrint, (void*)0);
sleep(10);
} | unknown | |
d18990 | test | If the URL and the controller name are not equal, the best way is the following method.
match "/sharer/:id/share" => redirect{ |params, request| "/posts/#{params[:id]}/share?#{request.query_string}" }
If the URL name and the action name are the same, you can use something like this.
resources :sharer do
member do
get :share
end
end
You can use collection instead of member routing, according to your URL. I used member because my example takes an id in the URL path; therefore it becomes a member route. | unknown |
d18991 | test | The documentation for NEST 2.18 and 2.20 is misleading in this respect. The binary option has no effect (it sets the ios::binary flag when opening the file, but that has no significant consequences).
If you want to write spikes in binary format, you need to switch to NEST 3.0 and use the sionlib recording backend by setting the recorder's record_to property:
neurons = nest.Create('iaf_psc_alpha', 5)
sr = nest.Create('spike_recorder')
nest.Connect(neurons, sr)
sr.SetStatus({'record_to': 'sionlib'})
A guide for recording from simulations is available in the docs. | unknown | |
d18992 | test | Removing just the server folder will not work, because the webpack dev configuration uses it for hot reload, and your npm start command starts the Express server from this folder.
If you want to remove the server folder completely and still have the application work as it did (hot reloading etc.), follow the steps below. We will need webpack-dev-server in that case:
*
*Remove ./server folder manually.
*Install webpack-dev-server and react-hot-loader as dev dependencies.
*In your ./internals/webpack/webpack.dev.babel.js, do the following modifications:
*
*Change entry to this:
entry: [
require.resolve('react-app-polyfill/ie11'),
'react-hot-loader/patch',
`webpack-dev-server/client?http://localhost:3000/`,
'webpack/hot/only-dev-server',
path.join(process.cwd(), 'app/app.js'), // Start with js/app.js
],
*Add publicPath in output:
output: {
filename: '[name].js',
chunkFilename: '[name].chunk.js',
publicPath: `http://localhost:3000/`,
},
*Add webpack dev server config property in the same file:
devServer: {
port: 3000,
publicPath: `http://localhost:3000/`,
compress: true,
noInfo: false,
stats: 'errors-only',
inline: true,
lazy: false,
hot: true,
open: true,
overlay: true,
headers: { 'Access-Control-Allow-Origin': '*' },
contentBase: path.join(__dirname, '..', '..', 'app', 'build'),
watchOptions: {
aggregateTimeout: 300,
ignored: /node_modules/,
poll: 100,
},
historyApiFallback: {
verbose: true,
disableDotRule: false,
},
},
*In ./internals/webpack/webpack.base.babel.js, add the line:
devServer: options.devServer,
And finally, modify your start script in package.json as below:
"start": "cross-env NODE_ENV=development node --trace-warnings ./node_modules/webpack-dev-server/bin/webpack-dev-server --color --config internals/webpack/webpack.dev.babel.js",
And you are good to go!!
A: Just remove with rm -rf ./server if you feel harassed :) | unknown | |
d18993 | test | I think rgeos::gIntersection would be the method of choice, if your lines perfectly overlap. Consider the following simple example:
l1 <- SpatialLines(list(Lines(list(Line(rbind(c(1, 1), c(5, 1)))), 1)))
l2 <- SpatialLines(list(Lines(list(Line(rbind(c(3, 1), c(10, 1)))), 1)))
plot(0, 0, ylim = c(0, 2), xlim = c(0, 10), type = "n")
lines(l1, lwd = 2, lty = 2)
lines(l2, lwd = 2, lty = 3)
lines(gIntersection(l1, l2), col = "red", lwd = 2)
One solution to your problem, although not perfect and maybe someone else has a better solution, would be to add a tiny buffer.
xx <- as(sobj, "SpatialLines")
xx <- gBuffer(xx, width = 1e-5, byid = TRUE)
xx <- gIntersection(xx[1, ], xx[2, ])
plot(sobj)
plot(xx, border = "red", add = TRUE, lwd = 2) | unknown | |
d18994 | test | You can change your regexp a little:
.split(/[\r\n]+/)
+ character in regexp
matches the preceding character 1 or more times. Equivalent to {1,}.
Demo: http://jsfiddle.net/ahRHC/1/
UPD
Improved solution would use another regexp using negative lookahead:
`/[\r\n]+(?!\s*$)/`
This means: match new lines and carriage returns only if they are not followed by any number of white space characters and the end of line.
Demo 2: http://jsfiddle.net/ahRHC/2/
UPD 2 Final
To prevent regexp from becoming too complicated and solve the problem of leading new lines, there is another solution using $.trim before splitting a value:
function counter(field) {
var val = $.trim($(field).val());
var lineCount = val ? val.split(/[\r\n]+/).length : 0;
jQuery('.paraCount').text(lineCount);
}
Demo 3: http://jsfiddle.net/ahRHC/3/
A: Exclude blank lines explicitly:
function nonblank(line) {
return ! /^\s*$/.test(line);
}
.. the_result_of_the_split.filter(nonblank).length ..
Modified fiddle: http://jsfiddle.net/Qq38a/ | unknown | |
d18995 | test | Well...here is how you would do it. It looks like the data for some of the things in wmi needs to be converted to be readable.
$Monitors = Get-WmiObject -Namespace root\wmi -Class wmiMonitorID
$obj = Foreach ($Monitor in $Monitors)
{
[pscustomobject] @{
'MonitorMFG' = [char[]]$Monitor.ManufacturerName -join ''
'MonitorSerial' = [char[]]$monitor.SerialNumberID -join ''
'MonitorMFGDate' = $Monitor.YearOfManufacture
}
}
$obj
$obj | export-csv
Edit...alternative that more closely matches the formatting you are wanting...I think the above is better though personally.
$Monitors = Get-WmiObject -Namespace root\wmi -Class wmiMonitorID
$i = 1
$obj = new-object -type psobject
Foreach ($Monitor in $Monitors)
{
$obj | add-member -Name ("Monitor$i" +"MFG") -Value ([char[]]$Monitor.ManufacturerName -join '') -MemberType NoteProperty -Force
$obj | add-member -Name ("Monitor$i" + "Serial") -Value ([char[]]$monitor.SerialNumberID -join '') -MemberType NoteProperty -Force
$obj | add-member -Name ("Monitor$i" + "MFGDate") -Value ($Monitor.YearOfManufacture) -MemberType NoteProperty -Force
$i++
}
$obj
$obj | export-csv | unknown | |
d18996 | test | I know it's a bit late, but I had the same problem. Rick's answer is right: you need to inherit from Freezable.
The following Code gave me the same error as you got
Not working resource:
public class PrintBarcodesDocumentHelper : DependencyObject
{
public IEnumerable<BarcodeResult> Barcodes
{
get { return (IEnumerable<BarcodeResult>)GetValue(BarcodesProperty); }
set { SetValue(BarcodesProperty, value); }
}
public static readonly DependencyProperty BarcodesProperty =
DependencyProperty.Register("Barcodes", typeof(IEnumerable<BarcodeResult>), typeof(PrintBarcodesDocumentHelper), new PropertyMetadata(null, HandleBarcodesChanged));
private static void HandleBarcodesChanged(DependencyObject d, DependencyPropertyChangedEventArgs e)
{
// Do stuff
}
}
Xaml:
<UserControl.Resources>
<Barcodes:PrintBarcodesDocumentHelper x:Key="docHelper" Barcodes="{Binding BarcodeResults}"/>
</UserControl.Resources>
My viewmodel is bound to the DataContext of the UserControl.
Error:
System.Windows.Data Error: 2 : Cannot find governing FrameworkElement or FrameworkContentElement for target element. BindingExpression:Path=BarcodeResults; DataItem=null; target element is 'PrintBarcodesDocumentHelper' (HashCode=55335902); target property is 'Barcodes' (type 'IEnumerable`1')
Working resource class:
public class PrintBarcodesDocumentHelper : Freezable
{
// Same properties
protected override Freezable CreateInstanceCore()
{
return new PrintBarcodesDocumentHelper();
}
}
Unfortunately I don't know why it has to be a Freezable.
A: In order to enable binding, GroupingProvider needs to be derived from Freezable or FrameworkElement or FrameworkContentElement and GroupValue needs to be a DependencyProperty. | unknown | |
d18997 | test | Try the steps below to see if that could help:
1) From Outlook, click File from the top left > Options > Advanced
2) Scroll down until you see "International Options"
3) Check "Automatically Select Encoding for Outgoing..."
4) Select UTF-8 encoding from the drop down menu.
A: Try changing (or setting) the encoding in the html template. If it doesn't help, convert characters to html entities - that works in all email clients. | unknown | |
d18998 | test | Probably the Symfony .htaccess tries to change some settings that are not allowed by your configuration. First, I suggest changing the line AllowOverride FileInfo AuthConfig Limit Indexes into AllowOverride all. Or, if you can't do this for security reasons, look into the Symfony .htaccess and try to change this AllowOverride directives list to work with the ones used in the Symfony .htaccess. | unknown |
d18999 | test | Here are some overviews on the topic:
*
*https://sweetcode.io/using-html5-server-sent-events/
*https://juxt.pro/blog/posts/course-notes.html
*https://www.lucagrulla.com/posts/server-sent-events-with-ring-and-compojure/
*Server push of data from Clojure to ClojureScript
*https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events
Here is a highly voted comparison on StackOverflow between Server Sent Events and WebSockets (my favorite):
*
*WebSockets vs. Server-Sent events/EventSource
And here is a nice comparison from IBM (2017):
*
*https://www.ibm.com/developerworks/library/wa-http-server-push-with-websocket-sse/index.html
A: immutant.web has support for SSE built in: http://immutant.org/documentation/current/apidoc/guide-web.html#h3155
There is also this middleware for other web servers: https://github.com/kumarshantanu/ring-sse-middleware, although I have not tried it myself. | unknown | |
d19000 | test | Easy way (for simple testing):
curl -X POST -H "Content-Type: application/json" -d '{ \"field\": \"value\"}'
A: Pipe the data into curl.exe, instead of trying to escape it.
$data = @{
fields = @{
project = @{
key = "key"
}
summary = "summary"
description = "description - here"
type = @{
name = "Task"
}
}
}
$data | ConvertTo-Json -Compress | curl.exe -X POST -u username:password -H "Content-Type: application/json" -d "@-"
curl.exe reads stdin if you use @- as your data parameter.
P.S.: I strongly suggest you use a proper data structure and ConvertTo-Json, as shown, instead of building the JSON string manually. | unknown |