_id | partition | text | language | title
---|---|---|---|---|
d5801 | train | The above error occurs when the Stripe Secret API key is passed incorrectly.
Just to help, here is an example:
const Stripe = require('stripe');
const stripe = Stripe(env.STRIPE_TEST_SECRET_KEY);
Make sure to switch the key according to your production/development environment.
Cheers. | unknown | |
d5802 | train | I think you're looking for this:-
@Temporal(TemporalType.TIMESTAMP)
@Column(name = "curr", length = 10)
public Date getCurrrent() {
return this.currrent;
}
A: What Jelies mentioned is the correct approach. You simply instantiate the date object with
private Date currrent = new Date();
during object creation and then persist it.
To add to that, it's not safe to use @Temporal(TemporalType.TIMESTAMP) for storing the date and time if there is a chance of comparing the field current with any java.util.Date objects, as the saved field will be loaded as java.sql.Timestamp. The reason is mentioned here.
Note: This type is a composite of a java.util.Date and a separate nanoseconds value. Only integral seconds are stored in the java.util.Date component. The fractional seconds - the nanos - are separate. The Timestamp.equals(Object) method never returns true when passed a value of type java.util.Date because the nanos component of a date is unknown. As a result, the Timestamp.equals(Object) method is not symmetric with respect to the java.util.Date.equals(Object) method. Also, the hashcode method uses the underlying java.util.Date implementation and therefore does not include nanos in its computation.
Due to the differences between the Timestamp class and the java.util.Date class mentioned above, it is recommended that code not view Timestamp values generically as an instance of java.util.Date. The inheritance relationship between Timestamp and java.util.Date really denotes implementation inheritance, and not type inheritance.
The workaround is to use TypeDef. This post discusses the issue and the workaround. The code used there will be good enough for you.
A: I think the easiest way to achieve that is to initialize your field with the new Date():
private Date currrent = new Date();
IMHO, I don't recommend changing the behavior of the setter method, other developers will not expect that.
If you want this field to be filled immediately before the entity is persisted/updated, you can define a method with @PrePersist or @PreUpdate JPA annotations to populate current property then. | unknown | |
d5803 | train | There's a syntax error at the end of sifr-config.js. | unknown | |
d5804 | train | It's a function taking XElement as argument and returning an XElement, so for instance:
public XElement someFunction(XElement argument)
{
XElement someNewElement = new XElement();
... // do something with someNewElement, taking into account argument
return someNewElement;
}
Func<XElement, XElement> variableForFunction = someFunction;
.... .Select(variableForFunction);
I'm not entirely sure if you have to assign it to a variable first; you could probably just pass the method directly:
... .Select(someFunction);
give it a try (and let me know if it works :) )
Oh, and for more information, here's the MSDN article; it also explains how to use delegates:
Func<XElement, XElement> variableForFunction = delegate(XElement argument)
{
....//create a new XElement
return newXElement;
};
and how to use lambdas, for instance:
Func<XElement, XElement> variableForFunction = s => {
....;//create an XElement to return
return newXElement;
};
or, in this instance, use the lambda directly:
.... .Select( s => {
....;//create an XElement to return
return newXElement;
})
Edited following Pavel's comment. | unknown | |
d5805 | train | Try to change your Insert into into this:
$sql = "INSERT INTO datepicker(ngno, date) VALUES('$ngno', '$dateValue')";
Let me know if it works.
A: You can use this. I hope it will work fine for you.
<?php
$host="localhost";
$username="root";
$password="";
$db_name="test";
$con=mysql_connect("$host", "$username", "$password")or die("cannot connect");
mysql_select_db($db_name, $con)or die("cannot select DB");
$ngno = '112';
$myArray = array("date"=> "Mon Apr 11 00:00:00 GMT+05:30 2016", "Thu Mar 31 00:00:00 GMT+05:30 2016");
foreach($myArray as $dateSelected => $dateValue){
$sql = "INSERT INTO datepicker (`ngno`, `date`) VALUES('$ngno', '$dateValue')";
$result = mysql_query($sql);
}
?> | unknown | |
d5806 | train | It looks like you're not linking to yaml-cpp; you need to add the argument -lyaml-cpp (to the command that begins /usr/bin/g++ -o ./Debug/MyProject).
A: If you are considering a CMakeLists.txt project ...
cmake_minimum_required(VERSION 3.10)
project(Test_yaml_cpp)
set(CMAKE_CXX_STANDARD 14)
# In case of third party library
#find_package(yaml-cpp PATHS ./thirparty/yaml-cpp/build)
# In case of installed library
find_package(yaml-cpp)
add_executable(yaml_exec main.cpp)
target_link_libraries(yaml_exec yaml-cpp)
In the third-party case, the yaml-cpp library should already be built inside the build folder, as recommended by its documentation.
A: Try adding the following to your CMakeLists.txt:
find_package(PkgConfig REQUIRED) | unknown | |
d5807 | train | Did you copy and paste your code, or retype it? It seems as though what your log is outputting might be because the line you've shown above:
Log.e("AllEvents Reporter", "Torneo numero: "+ j + " Nome: " + tornei.get(j).getName());
is actually
Log.e("AllEvents Reporter", "Torneo numero: "+ j + " Nome: " + tornei.get(i).getName());
where tornei.get(i).getName() would just output the last one in the dataset, which is what you're seeing, but this is just a guess. As Bill has said, your code seems fine. | unknown | |
d5808 | train | It is strange. What are you using, the installer, the virtual machine or the cloud image? If the sidekiq server is not running it is possible that the repository was not created properly. Could you check if there is any error in the sidekiq log file?
/opt/bitnami/apps/gitlab/htdocs/logs/sidekiq.log
Did you modify any configuration file for GitLab?
EDITED:
The problem seems to be a wrong configuration in gitlab.yml. Whitespace is also important. Could you check your change in that file?
/opt/bitnami/ruby/lib/ruby/1.9.1/psych.rb:203:in `parse': (): found character that cannot start any token while scanning for the next token at line 73 column 1 (Psych::SyntaxError)
GitLab CI ships the latest stable Ruby 1.9.3. The folder name uses 1.9.1 for backward compatibility: Why is my gem "INSTALLATION DIRECTORY:" ...1.9.1 when the "RUBY VERSION:" is 1.9.3
Please post the gitlab.yml file if you do not find the exact error.
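The Psych error shown above ("found character that cannot start any token") is typically triggered by a tab in the YAML indentation. A small stdlib-only Python sketch (a hypothetical helper, not part of GitLab) can locate such lines:

```python
# Hypothetical helper: YAML forbids tabs in indentation, and a tab is a
# common cause of "found character that cannot start any token" errors.
def find_tab_indented_lines(text):
    bad = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        # The indentation is everything before the first non-whitespace char.
        indent = line[:len(line) - len(line.lstrip())]
        if "\t" in indent:
            bad.append(lineno)
    return bad
```

Run it over the contents of gitlab.yml and fix the reported line numbers (the error message above points at line 73, column 1).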
A: In my case I had <tab> in my yml file.
It's quite a strange error though! | unknown | |
d5809 | train | The reason why you get null is because you are trying to get the id of Test before you add it to the DOM.
Where you have:
if(result.Verified === 'false'){
document.getElementById("Test").html("not verified")
}
$(context).html($('#Accounttmpl').render(result));
Change the order round:
$(context).html($('#Accounttmpl').render(result));
if(result.Verified === 'false'){
document.getElementById("Test").html("not verified")
}
Also, you are mixing JavaScript with jQuery here:
document.getElementById("Test").html("not verified")
Either do:
document.getElementById("Test").textContent = "not verified";
Or
$("#Test").html("not verified") | unknown | |
d5810 | train | The await keyword is only valid on places where asynchronous code is accepted.
The easiest here is to make the onSubmit funcion async.
const onSubmit = async (e) => {
let [res1, res2] = await Promise.all([
fetch(
`https://fastapi-ihub-n7b7u.ondigitalocean.app/predict_two?${params}`,
{
method: "GET"
}
),
fetch(
`https://fastapi-ihub-n7b7u.ondigitalocean.app/predict_solubility_json`,
{
method: "GET"
}
)
]);
// etc
}
A: React components cannot be made async like this. What you probably want is to execute an async callback once the button is clicked.
const onSubmit = async (e) => {
e.preventDefault();
let [res1, res2] = await Promise.all([ /* ... */ ]);
setFetchData(res1);
setjsonData(res2);
}
If you don't want your handler function itself to be async, you can also just call a new async function:
const onSubmit = (e) => {
e.preventDefault();
const fetchData = async () => {
let [res1, res2] = await Promise.all([ /* ... */ ]);
setFetchData(res1);
setjsonData(res2);
}
fetchData();
} | unknown | |
d5811 | train | Made working plunker for this:
Update:
I have updated the plunker and made it work with addHTML() function:
var pdf = new jsPDF('p','pt','a4');
//var source = document.getElementById('table-container').innerHTML;
console.log(document.getElementById('table-container'));
var margins = {
top: 25,
bottom: 60,
left: 20,
width: 522
};
// all coords and widths are in jsPDF instance's declared units
// 'points' in this case, since the document was created with 'pt'
pdf.text(20, 20, 'Hello world.');
pdf.addHTML(document.body, margins.top, margins.left, {}, function() {
pdf.save('test.pdf');
});
You can see plunker for more details.
A: Change this in app.js:
pdf.addHTML(document.getElementById('pdfTable'),function() {
});
pdf.save('pdfTable.pdf');
for this one:
pdf.addHTML(document.getElementById('pdfTable'),function() {
pdf.save('pdfTable.pdf');
});
A: I took out everything but these:
<script src="//mrrio.github.io/jsPDF/dist/jspdf.debug.js"></script>
<script src="//html2canvas.hertzen.com/build/html2canvas.js"></script>
then I used this:
$(document).ready(function() {
$("#pdfDiv").click(function() {
var pdf = new jsPDF('p','pt','letter');
var specialElementHandlers = {
'#rentalListCan': function (element, renderer) {
return true;
}
};
pdf.addHTML($('#rentalListCan').first(), function() {
pdf.save("rentals.pdf");
});
});
});
I can print the table... it looks great in Chrome, Safari, or Firefox; however, I only get the first page if my table is very large.
A: In app.js change:
pdf.addHTML(document.getElementById('pdfTable'),function() {
});
pdf.save('pdfTable.pdf');
to
pdf.addHTML(document.getElementById('pdfTable'),function() {
pdf.save('pdfTable.pdf');
});
If you see a black background, you can add "background-color: white;" to style.css.
A: Instead of using the getElementById() you should use querySelector()
var pdf = new jsPDF('p','pt','a4');
pdf.addHTML(document.querySelector('#pdfTable'), function () {
console.log('pdf generated');
});
pdf.save('pdfTable.pdf');
A: I have test this in your plunker.
Just update your app.js
pdf.addHTML(document.getElementById('pdfTable'), function() {
pdf.save('pdfTable.pdf');
});
update your css file:
#pdfTable{
background-color:#fff;
} | unknown | |
d5812 | train | I've tried to do the best I can with your code, the following will work for you:
<div class="container" style="overflow:hidden; text-align:center;">
<div style="display:inline-block; margin: 0px 80px;">
<div class="overlay">
<img class="img1" height="225" src="NYC/wtc1.JPG" width="225">
</div>
</div>
<div style="display:inline-block; margin: 0px 80px;">
<div class="overlay">
<img class="img2" height="225" src="NYC/wtcmem.jpg" width="225">
</div>
</div>
<div style="display:inline-block; margin: 0px 80px;">
<div class="overlay">
<img class="img3" height="225" src="NYC/sky.jpg" width="225">
</div>
</div>
</div>
Note that <overlay> is not a valid HTML element. I've also seen that the page uses something like <margin>. It's not good practice to invent HTML elements; you can get all the functionality you need using regular <div>s (although I don't think this will break your page, except maybe in older browsers).
What I basically did:
*Wrapped the three <div>s with a container with text-align:center. This will make the three divs inside it aligned to the center.
*Added display:inline-block; to make all the divs follow the text-align.
*Added margins to the divs to space them.
Note that I strongly recommend to replace your <overlay> with something like <div class="overlay">
A: If you have some markup like this:
<div class="wrapper">
<div><img class="img1" height="225" src="http://rwzimage.com/albums/NYC/wtc1.JPG" width="225" /></div>
<div><img class="img2" height="225" src="http://rwzimage.com/albums/NYC/wtcmem.jpg" width="225" /></div>
<div><img class="img3" height="225" src="http://rwzimage.com/albums/NYC/sky.jpg" width="225" /></div>
</div>
Then I think this CSS will have approximately the effect you're after:
.wrapper {
display: table;
width: 960px;
}
.wrapper > div {
display: table-cell;
width: 33%;
text-align: center;
}
.wrapper > div:hover img {
opacity: 0.5;
}
Demo. I set width: 960px; so that it would force things to be wider than the JSFiddle window, but you could set width: 100%; for your page.
A: div tags naturally stack vertically, so you will need to add an id to each div, or you could just put all the img elements in one div.
The block CSS display value is affecting the layout. It is pushing the next img to the next line. | unknown | |
d5813 | train | Here's a function I created in Java a while back that returns a String of the file contents. Hope it helps.
There might be some issues with \n and \r but it should get you started at least.
// Converts a file to a string
private String fileToString(String filename) throws IOException
{
BufferedReader reader = new BufferedReader(new FileReader(filename));
StringBuilder builder = new StringBuilder();
String line;
// For every line in the file, append it to the string builder
while((line = reader.readLine()) != null)
{
builder.append(line);
}
reader.close();
return builder.toString();
}
A: This will read a file from an URL and write it to a local file. Just add try/catch and imports as needed.
byte buf[] = new byte[4096];
URL url = new URL("http://path.to.file");
BufferedInputStream bis = new BufferedInputStream(url.openStream());
FileOutputStream fos = new FileOutputStream(target_filename);
int bytesRead = 0;
while((bytesRead = bis.read(buf)) != -1) {
fos.write(buf, 0, bytesRead);
}
fos.flush();
fos.close();
bis.close(); | unknown | |
d5814 | train | Dispose calls Flush, which writes the internal bytes stored in a buffer to disk
Without closing or disposing a file, you are leaving unmanaged resources around and will potentially lock the file, not to mention memory leaks. Instead always use a using statement
using (TextWriter writer = File.CreateText(@"...txt"))
{
writer.Write("Hello World");
}
However if you want to continually write to the file, you will have to flush it
FileStream.Flush Method
Clears buffers for this stream and causes any buffered data to be
written to the file.
TextWriter writer = File.CreateText(@"...txt");
...
writer.Write("Hello World");
...
writer.Flush(); // at this point the bytes are flushed to disk
...
...
writer.Dispose();
In short, most streams are backed by an internal array (buffer) so you don't thrash writes. The default size is 4K; when the buffer fills, it automatically flushes. If you want to see immediate writes, then you have to flush every time
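The same buffering behavior exists in most languages. Here is a Python sketch, used only to illustrate the concept (not the .NET API): bytes reach the file only when the buffer is flushed.

```python
import os
import tempfile

# Writes first land in an in-memory buffer, just like StreamWriter's.
path = os.path.join(tempfile.mkdtemp(), "demo.txt")
f = open(path, "w")
f.write("Hello World")
size_before_flush = os.path.getsize(path)  # 0: still only in the buffer
f.flush()
size_after_flush = os.path.getsize(path)   # 11: buffer written to disk
f.close()
```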
Lastly some streams have an auto flush feature, which can do this job for you
AutoFlush
Gets or sets a value indicating whether the StreamWriter will flush
its buffer to the underlying stream after every call to Write(Char)
When you are finished with the stream, always dispose of it. | unknown | |
d5815 | train | Change the R flag to 301:
RewriteEngine on
RewriteCond %{HTTPS} off [OR]
RewriteCond %{HTTP_HOST} !^www
RewriteCond %{HTTP_HOST} ^(?:www\.)?(.+)$
RewriteRule ^ https://www.%1%{REQUEST_URI} [NE,L,R=301]
because by default it is a 302 (temporary) redirect; a 301 (permanent) lets browsers cache the redirect, so you avoid many unnecessary redirects.
http://httpd.apache.org/docs/current/rewrite/flags.html | unknown | |
d5816 | train | The Google Assistant does not provide user recognition as a core part of the platform or as part of the Assistant SDK. | unknown | |
d5817 | train | The ADT will throw that error if its finds more than one instance of the same package but with different version number to them. Looking at your pom one of your dependencies is adding the LineTokenizer which is not up-to-date in comparison to the one supplied by the compiler. I would suggest you go to the Dependency Hierarchy using the maven POM Editor. This will allow you to select the extraneous dependency, which you can then exclude by right-clicking and selecting "Exclude Maven Artifact..." which will automatically add an <exclusions> element to your POM. This will remove the duplicate JAR from your Eclipse classpath and allow you to build you project. Do a mvn clean install after you done editing the POM though. | unknown | |
d5818 | train | You can search a specific field or fields by specifying it (them) with the Ntk parameter.
Or if you wish to search a specific group of fields frequently you can set up an interface (also specified with the Ntk parameter), that includes that group of fields.
A: This is how you can do it using presentation API.
final ENEQuery query = new ENEQuery();
final DimValIdList dimValIdList = new DimValIdList("0");
query.setNavDescriptors(dimValIdList);
final ERecSearchList searches = new ERecSearchList();
final StringBuilder builder = new StringBuilder();
for(final String productId : productIds){
builder.append(productId);
builder.append(" ");
}
final ERecSearch eRecSearch = new ERecSearch("product.id", builder.toString().trim(), "mode matchany");
searches.add(eRecSearch);
query.setNavERecSearches(searches);
Please see this post for a complete example.
A: Use Search Interfaces in Developer Studio.
Refer - http://docs.oracle.com/cd/E28912_01/DeveloperStudio.612/pdf/DevStudioHelp.pdf#page=209 | unknown | |
d5819 | train | I have Added two new column to determine the recharge information:
I think you have recharge information in another table, if you can put that information like i did this query will work.
DECLARE @tbl table(
Userid int,
Date datetime,
Balance int,
Voice int,
Data int,
Recharge int,
RechargeSN int
)
INSERT INTO @tbl( Userid, [Date], Balance, Voice, Data,Recharge,RechargeSN) VALUES(1,'4/5/2018' , 100 , 10 , 15,100,3)
INSERT INTO @tbl( Userid, [Date], Balance, Voice, Data,Recharge,RechargeSN) VALUES(1,'4/6/2018' , 75 , 5 , 10,0,3)
INSERT INTO @tbl( Userid, [Date], Balance, Voice, Data,Recharge,RechargeSN) VALUES(1,'4/7/2018' , 60 , 10 , 10,0,3)
INSERT INTO @tbl( Userid, [Date], Balance, Voice, Data,Recharge,RechargeSN) VALUES(1,'4/8/2018' , 90 , 10 , 20,50,2)
INSERT INTO @tbl( Userid, [Date], Balance, Voice, Data,Recharge,RechargeSN) VALUES(1,'4/9/2018' , 60 , 10 , 20,0,2)
INSERT INTO @tbl( Userid, [Date], Balance, Voice, Data,Recharge,RechargeSN) VALUES(1,'4/10/2018' , 50 , 20 , 30,20,1)
INSERT INTO @tbl( Userid, [Date], Balance, Voice, Data,Recharge,RechargeSN) VALUES(2,'4/1/2018' , 200 , 50 , 40,200,2)
INSERT INTO @tbl( Userid, [Date], Balance, Voice, Data,Recharge,RechargeSN) VALUES(2,'4/2/2018' , 110 , 20 , 20,0,2)
INSERT INTO @tbl( Userid, [Date], Balance, Voice, Data,Recharge,RechargeSN) VALUES(2,'4/3/2018' , 70 , 20 , 10,0,2)
INSERT INTO @tbl( Userid, [Date], Balance, Voice, Data,Recharge,RechargeSN) VALUES(2,'4/4/2018' , 55 , 10 , 40,15,1)
INSERT INTO @tbl( Userid, [Date], Balance, Voice, Data,Recharge,RechargeSN) VALUES(2,'4/5/2018' , 5 , 2 , 2,0,1)
--SELECT * FROM @tbl t
SELECT userid, RechargeDate = max(date), Avg_Voice = (cast(SUM(voice) AS numeric) / Count(voice)) , Avg_Data = cast(SUM(Data) AS numeric) / Count(Data) *1.00
--, t.RechargeSN
FROM @tbl t
GROUP BY t.Userid, t.RechargeSN
ORDER BY t.Userid
A: Since you haven't posted any attempt to solve this, but have shown an understanding of the data, I assume you just need to be given a broad strategy to begin following, and you can handle the coding from there.
Using LAG() partitioned by UserID, you can join each row to the previous row. You have already stated that you understand the relationship (Balance - SUM(Voice + Data)), so in any case where that relationship does not hold, you know you have found a row where a recharge was done.
You can create an artificial column (eg HasRecharge) in a CTE that uses a CASE expression to test this and return 1 for the rows that have a recharge, and 0 for the rows that don't.
Then you can do a 2nd CTE where you SELECT from the first CTE WHERE HasRecharge=1 and WHERE there EXISTS() a previous row that also HasRecharge=1. And calculate two additional columns:
A SUM of Voice + Data between this recharge and the last recharge (again using LAG() but this time WHERE HasRecharge=1)
A COUNT of rows between this recharge and the last.
Your final SELECT from the 2nd CTE would get the average simply by dividing the SUM column by the COUNT column.
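This grouping logic can be prototyped outside SQL. The Python sketch below (hypothetical, with a few rows modeled on the question's data) marks recharge rows using the same balance test and averages Voice/Data between recharges:

```python
# Hypothetical rows modeled on the question's data (Balance, Voice, Data per day).
rows = [
    {"balance": 100, "voice": 10, "data": 15},  # recharge day
    {"balance": 75,  "voice": 5,  "data": 10},
    {"balance": 60,  "voice": 10, "data": 10},
    {"balance": 90,  "voice": 10, "data": 20},  # recharge day (balance went up)
]

groups, current = [], []
prev = None
for row in rows:
    # A recharge happened when the balance did not drop by voice + data usage.
    has_recharge = prev is not None and \
        row["balance"] != prev["balance"] - (prev["voice"] + prev["data"])
    if has_recharge and current:
        groups.append(current)   # close the previous recharge period
        current = []
    current.append(row)
    prev = row
if current:
    groups.append(current)

# Average Voice and Data within each recharge period.
averages = [
    (sum(r["voice"] for r in g) / len(g), sum(r["data"] for r in g) / len(g))
    for g in groups
]
```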
A: This problem becomes simpler when you have a flat table containing the day's charge, and the previous day's charge. Note that this will only work if your data set has a contiguous range of dates.
The following queries will need to be edited of course:
Select
currentday.Date,
currentday.user,
currentday.balance - prevday.balance - (currentday.voice+currentday.data) as charge,
currentDay.voice,
currentDay.data,
--The above column will tell you what the difference
--is between the expected balance, and should
--resolve to the charge for the day
GETDATE() as nextCharge
--leave this empty, use GETDATE() to force it to be a datetime
into ##ChargeTable
from [RechargeTable] currentday
left join [RechargeTable] prevday
on DATEADD(d,1,prevday.Date) = currentday.Date
AND prevday.user = currentday.user
Now, we know when the user has charged. The charge column will be positive.
Select
user,
date
into ##uniquedates
from ##ChargeTable
where charge>0
Now that we have the dates, we need to go back to the first temp table and update it with the next dates (not sure about the syntax for aliasing updates).
update ##ChargeTable up
set
nextCharge = (
select
min(date)
from ##uniquedates
where user = up.user AND date > up.date
)
Now we can do some sub selects back on the table to get the data we need.
select
user,
(select
avg(voice)
from ##ChargeTable
where user = ct.user
and date>=ct.date
and date<=ct.nextCharge) as AvgVoice,
(select
avg(data)
from ##ChargeTable
where user = ct.user
and date>=ct.date
and date<=ct.nextCharge) as AvgData
from ##ChargeTable ct
where nextCharge is not null and charge>0 | unknown | |
d5820 | train | You consider to use a staging environment?
A staging environment (stage) is a nearly exact replica of a production environment for software testing. Staging environments are made to test codes, builds, and updates to ensure quality under a production-like environment before application deployment. The staging environment requires a copy of the same configurations of hardware, servers, databases, and caches. Everything in a staging environment should be as close a copy to the production environment as possible to ensure the software works correctly.
See the source
To set one up, I recommend a platform like Heroku; after configuring it, you can deploy your app by committing to a branch (it's not real time, but it works for your case).
If you have a VM, I recommend this tutorial: https://emaxime.com/2014/adding-a-staging-environment-to-rails.html
A: Open questions like this are not really best placed on StackOverflow, which is geared more toward solving specific issues, with provided code examples and errors etc.
However, in answer to your question:
I see you mention GitHub in your question, but do you fully understand the underlying concept of Git version control, or is there a specific reason why it doesn't meet your needs? As far as I believe, its main purpose is to solve your exact scenario.
https://guides.github.com/introduction/git-handbook/ | unknown | |
d5821 | train | Try:
public IList<T> List<T>() where T : class, IAdminDecimal | unknown | |
d5822 | train | You could take an array of ids and update the wanted value with a single loop
ids = ['DVD', 'Furniture', 'Book'];
// update
ids.forEach(id => document.querySelector(id).classList[id === value
? 'add'
: 'remove'
]('visible')); | unknown | |
d5823 | train | It's difficult to debug inside a Python generator expression. Try debugging by rewriting as loops, like the following:
obj = 0
for node, node_var in nodes.items():
# print(f"node={node}, prize={G.G.nodes[node]['prize']}, node_var={node_var}")
obj += G.G.nodes[node]['prize'] * node_var
for mod, mod_var in modules.items():
# print(f"mod_var={mod_var}, mod={mod[1][1]}")
obj -= mod_var * mod[1][1]
m.setObjective(obj, GRB.MAXIMIZE)
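The payoff of the loop rewrite can be shown with plain Python (hypothetical data, no Gurobi needed): a generator expression hides which element failed, while the loop form lets you record it.

```python
# Hypothetical node data: node "b" is missing its 'prize' attribute,
# mimicking the kind of KeyError that can hide inside setObjective().
nodes = {"a": {"prize": 3}, "b": {}, "c": {"prize": 5}}

failing_node = None
obj = 0
try:
    for node, attrs in nodes.items():
        failing_node = node       # remembered for the error report
        obj += attrs["prize"]     # raises KeyError for node "b"
except KeyError:
    print("objective term failed at node %r" % failing_node)
```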
Since I can't test this code, it's possible there may be errors, but this should give you some ideas how to debug your issue. | unknown | |
d5824 | train | http://developers.facebook.com/blog/post/2011/01/14/platform-updates--new-user-object-fields--edge-remove-event-and-more/
says:
Update: The user_address and user_mobile_phone permissions have been removed. Please see this post for more info. | unknown | |
d5825 | train | The following blog entry will help you
http://conceptdev.blogspot.com/2009/04/mdf-cannot-be-opened-because-it-is.html
A: As soon as you attached it to SQL Server 2012, the database was upgraded to version 706. As the error message suggests, there is no way to downgrade the file back to version 662 (SQL Server 2008 R2).
You can run the script found in your Visual Studio folder -
[drive:]\%windir%\Microsoft.NET\Framework\version\aspnet_regsql.
It'll display a UI for you to select the server to install a new copy on. Here's a MSDN article about it. | unknown | |
d5826 | train | You are misinterpreting the data element in your curl command line; that is the already encoded POST body, while you are wrapping it in another data key and encoding again.
Either use just the value (and not encode it again), or put the individual elements in a dictionary and urlencode that:
value = "ajax=1&htd=20131111&pn=p1&htv=l"
req = urllib2.Request("https://www.google.co.in/trends/hottrends/hotItems", value)
or
param = {'ajax': '1', 'htd': '20131111', 'pn': 'p1', 'htv': 'l'}
value = urllib.urlencode(param)
req = urllib2.Request("https://www.google.co.in/trends/hottrends/hotItems", value)
Demo:
>>> import json
>>> import urllib, urllib2
>>> value = "ajax=1&htd=20131111&pn=p1&htv=l"
>>> req = urllib2.Request("https://www.google.co.in/trends/hottrends/hotItems", value)
>>> response = urllib2.urlopen(req)
>>> json.load(response).keys()
[u'trendsByDateList', u'lastPage', u'summaryMessage', u'oldestVisibleDate', u'dataUpdateTime']
>>> param = {'ajax': '1', 'htd': '20131111', 'pn': 'p1', 'htv': 'l'}
>>> value = urllib.urlencode(param)
>>> value
'htv=l&ajax=1&htd=20131111&pn=p1'
>>> req = urllib2.Request("https://www.google.co.in/trends/hottrends/hotItems", value)
>>> response = urllib2.urlopen(req)
>>> json.load(response).keys()
[u'trendsByDateList', u'lastPage', u'summaryMessage', u'oldestVisibleDate', u'dataUpdateTime']
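For reference, here is the dictionary-to-query-string step on its own, sketched in Python 3 (where urlencode lives in urllib.parse; the demo above uses Python 2's urllib):

```python
from urllib.parse import urlencode, parse_qs

param = {"ajax": "1", "htd": "20131111", "pn": "p1", "htv": "l"}
body = urlencode(param)  # a POST body like 'ajax=1&htd=20131111&pn=p1&htv=l'

# parse_qs reverses the encoding, so a round-trip recovers the dict
decoded = {k: v[0] for k, v in parse_qs(body).items()}
```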
A: This is easiest using the requests library in Python. Here's an example using Python 2.7:
import requests
import json
payload = {'ajax': 1, 'htd': '20131111', 'pn':'p1', 'htv':'l'}
req = requests.post('http://www.google.com/trends/hottrends/hotItems', data=payload)
print req.status_code # Prints out status code
print json.loads(req.text) # Prints out json data | unknown | |
d5827 | train | As stated in the Monaca Backend API Reference Guide, Monaca backend requires a phonegap plugin thus the browser does not have these installed and therefore you will not be able to access those systems.
Ultimately, there is nothing you can do other than to test in the app, although if you use the Monaca Cloud IDE, you do gain a web interface to manage the backend. It is very useful.
Hope this helps! | unknown | |
d5828 | train | I will answer as if this is for real work, as you did not indicate explicitly schoolwork.
ThreadLocalRandom
Use ThreadLocalRandom to avoid any possible concurrency issues. There is no downside to using this class over Math.random. And this class has convenient methods for generating various types of numbers rather than just double.
java.time
Never use Calendar or Date. Those terrible date-time classes were supplanted years ago by the modern java.time classes defined in JSR 310.
Get today's date.
ZoneId z = ZoneId.of( "America/Montreal" ) ;
LocalDate today = LocalDate.now( z ) ;
Add random number of days within next 30 days.
int days = ThreadLocalRandom.current().nextInt( 1 , 31 ) ;
LocalDate localDate = today.plusDays( days ) ;
Days vary in length, such as 23, 24, 25, or other number of hours. So for your date in your zone, calculate maximum number of seconds.
ZonedDateTime start = localDate.atStartOfDay( z ) ;
ZonedDateTime stop = localDate.plusDays( 1 ).atStartOfDay( z ) ;
Duration d = Duration.between( start.toInstant() , stop.toInstant() ) ;
long seconds = d.toSeconds() ; // In Java 9 and later. For Java 8, call `Duration::getSeconds`.
That count of seconds becomes the maximum for our length of day. From this we pick a random number of seconds.
long secondsIntoDay = ThreadLocalRandom.current().nextLong( 0 , seconds ) ;
ZonedDateTime zdt = start.plusSeconds( secondsIntoDay ) ;
Determine a random duration from 1 to 60 minutes for the elapsed time of each event.
int minutes = ThreadLocalRandom.current().nextInt( 1 , 61 ) ; // At least one minute, and less than 61 minutes.
Duration duration = Duration.ofMinutes( minutes ) ;
Define your public class InterviewSlot with two member fields: a ZonedDateTime and a Duration.
A: A simple solution with Date for your specific case, including an example:
import java.util.Date;
import java.util.Random;
import org.apache.commons.lang3.time.DateUtils;
public class SimpleDateGenerator {
private static Random random = new Random();
public static Date getRandomDate(Date start, long timerangeSeconds) {
int randomTime = (int) Math.ceil(random.nextDouble() * timerangeSeconds);
return DateUtils.addSeconds((Date) start.clone(), randomTime);
}
public static void main(String[] args) {
Date now = new Date();
System.out.println(now);
for (int i = 0; i < 10; i++) {
System.out.println(getRandomDate(now, 30 * 24 * 60 * 60));
}
}
}
main gives following (example) output:
Tue Mar 31 08:26:14 CEST 2020
Sun Apr 19 16:06:48 CEST 2020
Fri Apr 03 20:49:58 CEST 2020
Wed Apr 22 22:27:00 CEST 2020
Mon Apr 06 03:39:48 CEST 2020
Wed Apr 22 19:13:28 CEST 2020
Fri Apr 03 12:36:16 CEST 2020
Wed Apr 22 20:27:35 CEST 2020
Mon Apr 06 13:58:37 CEST 2020
Fri Apr 03 03:57:17 CEST 2020
Wed Apr 15 09:05:47 CEST 2020
A: As per @Basil Bourque's suggestion, the following code should do the trick:
import java.time.*;
import java.util.ArrayList;
import java.util.Collections;
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.TimeUnit;

public class InterviewSlotDemo { // class name assumed; not given in the original
public static void main(String[] args) {
int numberOfRandomDates = 10;
ArrayList<InterviewSlot> interviewSlotArrayList= new ArrayList<>();
for (int i = 0; i< numberOfRandomDates ; i++) {
InterviewSlot interviewSlot = calculateInterviewSlot();
interviewSlotArrayList.add(interviewSlot);
System.out.println(interviewSlot);
}
Collections.sort(interviewSlotArrayList);
System.out.println("After sorting: \n");
//Using a lambda
interviewSlotArrayList.forEach(element -> {
System.out.println(element);
});
//or using a method reference
interviewSlotArrayList.forEach(System.out::println);
}
public static InterviewSlot calculateInterviewSlot() {
//Getting time zone id and setting local time according to the time zone
ZoneId z = ZoneId.of("America/Montreal");
LocalDate today = LocalDate.now(z);
//Getting a random day between 1 and 31 and adding it to the current date
int days = ThreadLocalRandom.current().nextInt(1, 31);
LocalDate localDate = today.plusDays(days);
//Getting start and end time of the day as per time zone.End time is taken as next day start time(24 hr)
ZonedDateTime start = localDate.atStartOfDay(z);
ZonedDateTime stop = localDate.plusDays(1).atStartOfDay(z);
//Duration is taken which is the max duration for that time zone.
Duration duration = Duration.between(start.toInstant(), stop.toInstant());
long seconds = TimeUnit.SECONDS.convert(duration.toNanos(), TimeUnit.NANOSECONDS);
//Calculating random no of seconds keeping the computed seconds as max seconds in the day
long secondsIntoDay = ThreadLocalRandom.current().nextInt(0, Math.toIntExact(seconds));
ZonedDateTime zonedDateTime = start.plusSeconds(secondsIntoDay);
//Calculating random no of minutes for duration
int minutes = ThreadLocalRandom.current().nextInt(1, 61);
Duration durationMinutes = Duration.ofMinutes(minutes);
return new InterviewSlot(zonedDateTime, durationMinutes);
}
}
InterviewSlot.java:
import java.time.Duration;
import java.time.ZonedDateTime;

public class InterviewSlot implements Comparable<InterviewSlot> {
private ZonedDateTime startTime;
private Duration duration;
public InterviewSlot() {
}
public InterviewSlot(ZonedDateTime startTime, Duration duration) {
this.startTime = startTime;
this.duration = duration;
}
public ZonedDateTime getStartTime() {
return startTime;
}
public void setStartTime(ZonedDateTime startTime) {
this.startTime = startTime;
}
public Duration getDuration() {
return duration;
}
public void setDuration(Duration duration) {
this.duration = duration;
}
@Override
public String toString() {
return "Interview Start " + getStartTime() + " Duration : " + getDuration();
}
@Override
public int compareTo(InterviewSlot s) {
return this.getStartTime().compareTo(s.getStartTime());
}
}
Generated sample output:
Interview Start 2020-04-24T02:16:09-04:00[America/Montreal] Duration : PT12M
Interview Start 2020-04-04T20:58:43-04:00[America/Montreal] Duration : PT38M
Interview Start 2020-04-25T00:09:12-04:00[America/Montreal] Duration : PT31M
Interview Start 2020-04-03T20:26:01-04:00[America/Montreal] Duration : PT22M
Interview Start 2020-04-06T03:48:29-04:00[America/Montreal] Duration : PT45M
Interview Start 2020-04-15T07:56:32-04:00[America/Montreal] Duration : PT34M
Interview Start 2020-04-21T09:25:15-04:00[America/Montreal] Duration : PT44M
Interview Start 2020-04-30T18:33:40-04:00[America/Montreal] Duration : PT52M
Interview Start 2020-04-16T07:12:54-04:00[America/Montreal] Duration : PT14M
Interview Start 2020-04-24T17:02:48-04:00[America/Montreal] Duration : PT50M | unknown | |
d5829 | train | I think for now the safest option is to add appendonly yes in your Redis config,
which is available if you are using version 1.1 or greater.
appendfsync always is the slowest among them; if you are okay with that then sure, you can use it, but if you care about your DB's performance use appendfsync everysec.
The append-only file is a fully-durable strategy for Redis: every time Redis receives a command that changes the dataset (e.g. SET) it will append it to the AOF. When you restart Redis it will re-play the AOF to rebuild the state.
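For reference, a minimal redis.conf fragment for the setup recommended above:

```conf
appendonly yes        # enable the append-only file
appendfsync everysec  # fsync once per second: durability vs. speed trade-off
```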
See the Redis persistence documentation for details. | unknown |
d5830 | train | The WP codex page http://codex.wordpress.org/Function_Reference/register_uninstall_hook has 2 important pieces of information, only 1 of which you list in your question. You do need to make sure that you register the hook.
That aside, if you want to remove all custom post data (regardless of whether it is upon uninstallation or, as another person commented, via a separate button to remove data as many plugins do) you need to be sure to remove the postmeta records as well.
global $wpdb;
$cptName = 'messages';
$tablePostMeta = $wpdb->prefix . 'postmeta';
$tablePosts = $wpdb->prefix . 'posts';
$postMetaDeleteQuery = "DELETE FROM $tablePostMeta".
" WHERE post_id IN".
" (SELECT id FROM $tablePosts WHERE post_type='$cptName')";
$postDeleteQuery = "DELETE FROM $tablePosts WHERE post_type='$cptName'";
$wpdb->query($postMetaDeleteQuery);
$wpdb->query($postDeleteQuery); | unknown | |
d5831 | train | I can't explain why this fails as I am just inserting the content of the successful case into a container that simply performs the default vertical stacking (flex-direction: column).
The difference is that this new primary container has align-items: flex-start.
By experimentation I have discovered that removing the align-items property from the outer level container (#container) fixes the problem but I can't explain that either.
When you nest the .labelinput flex containers in the larger container (#container), then the .labelinput elements become flex items, in addition to flex containers.
Since the #container flex container is set to flex-direction: column, the main axis is vertical and the cross axis is horizontal1.
The align-items property works only along the cross axis. Its default setting is align-items: stretch2, which causes flex items to expand the full width of the container.
But when you override the default with align-items: flex-start, like in your code, you pack the two labelinput items to the start of the container, as illustrated in your problem image:
Because stretch is the default value for align-items, when you omit this property altogether, you get the behavior you want:
I understand that the outermost container (#container) is asked to layout two other containers (of class labelinput) so whatever property I set for align-items in the outermost container should apply to the inner containers as a whole, not change the layout of their internal items.
The outermost container is not changing the layout of the inner container's children. At least not directly.
The align-items: flex-start rule on the outermost container is applying directly to the inner containers. The internal items of the inner containers are just responding to the sizing adjustment of their parent.
Here's an illustration of align-items: flex-start impacting .labelinput (red borders added).
#container {
display: flex;
flex-direction: column;
justify-content: flex-start;
align-items: flex-start;
margin: 5px;
border: 1px solid grey;
}
.labelinput {
display: flex;
flex-flow: row;
margin: 1px;
border: 2px dashed red;
}
.labelinput > *:first-child {
flex-basis: 7em;
flex-grow: 0;
flex-shrink: 0;
}
.labelinput > *:nth-child(2) {
flex-grow: 1;
flex-shrink: 1;
border: 3px solid purple;
}
<div id='container'>
<div class='labelinput'>
<div>1st input</div>
<div>this is just a div</div>
</div>
<div class='labelinput'>
<div>2nd:</div>
<input type="text" name="foo" value="this is the input box" />
</div>
</div>
Moreover I can't explain why the layout is changed based on the element type when there's nothing in my CSS that differentiates between items of element div versus items of element input.
There may be no difference between div and input in your code, but there are intrinsic differences.
Unlike a div, an input element has a minimum width set by the browser (maybe to always allow for character entry).
You may be able to reduce the input width by applying min-width: 0 or overflow: hidden3.
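For example, either of these (selector written to match the markup above) may let the input shrink with its flex item:

```css
.labelinput > input {
  min-width: 0;      /* allow shrinking below the browser's intrinsic minimum */
  overflow: hidden;  /* alternative: clip content instead of forcing a minimum width */
}
```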
Footnotes
1. Learn more about flex layout's main axis and cross axis here: In CSS Flexbox, why are there no "justify-items" and "justify-self" properties?
2. Learn more about the align-items property in the spec: 8.3. Cross-axis Alignment: the align-items and align-self properties
3. Why doesn't flex item shrink past content size? | unknown | |
d5832 | train | If you want to determine if a user account is locked, then you can't use the user account information that you're checking for to determine this fact - because the user account is locked, you will be denied access.
You will not be told that the reason for being unable to log on is due to the account being locked, that would be considered excessive information disclosure.
If you want to determine if the reason for not being permitted to log on is due to the account being locked then you will need an already logged on account that can check the account lock state instead of trying from the failed connection.
A: You can use the lockoutTime attribute for this too: it is 0 if the user is not locked. (Connect to AD using administrator credentials.)
DirectoryEntry _de = new DirectoryEntry (_path, domainAdministratorName, pwd); // get the user as a DirectoryEntry object
object largeInteger = _de.Properties["lockoutTime"].Value; // it's a large integer, so we need to extract its value in a slightly roundabout way
long highPart =
(Int32)
largeInteger.GetType()
.InvokeMember("HighPart", BindingFlags.GetProperty, null, largeInteger, null);
long lowPart =
(Int32)
largeInteger.GetType()
.InvokeMember("LowPart", BindingFlags.GetProperty, null, largeInteger, null);
long result = (long) ((uint) lowPart + (((long) highPart) << 32));
if (result == 0)
{
// account is not locked
}
else
{
// account is locked
} | unknown | |
d5833 | train | As of version 1.1.4, test sessions execute sequentially, within one test session. The reason for that is to be deterministic about what happens when, so testers can make reliable assumptions about the execution flow. This is important because tests can have dependencies between them and must execute in a specific order for them to succeed. To be sure, this is a bad practice, but it's sometimes necessary for practical reasons.
To execute tests in parallel, you must create two (or more) separate test sessions, so you must split your current session template in two. In the future, OpenTest will introduce an option that will allow one single test session to execute against multiple actors, but the default will still be executing the tests sequentially. | unknown | |
d5834 | train | You could use ElementName (I'm assuming you mean members on the user control itself).
class UserControl1 : UserControl
{
public UserControl1()
{
InitializeComponent();
}
public int Value { get; set; }
}
<UserControl xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
mc:Ignorable="d"
Name="myUserControl"
d:DesignHeight="300" d:DesignWidth="300">
<Grid>
<Slider Value="{Binding Value, ElementName=myUserControl}" />
</Grid>
</UserControl>
A: You could set the class instance as the DataContext of your userControl and use the default bind mechanism with a relative Path.
To support two way binding (class updates UI, UI updates class) the class should implement INotifyPropertyChanged (or have the specified properties defined as DependencyProperty).
If you can't alter the class code, you'll need to expose the required properties in the UserControl, invoke its PropertyChanged event (to allow the UI to update), and update the instance with the new value. | unknown |
d5835 | train | FYI, starting in version 4.0, React Router no longer uses the IndexRoute.
Also for your path, change "example" to "/example" | unknown | |
d5836 | train | The first instance of the application should create a named pipe, subsequent instances of the application would fail to create the same named pipe and should instead attempt to open the named pipe for use. Once opened, the string (or really any data) can be transferred to the already running instance of the app. The named pipe can then be closed and the app could exit.
Alternatively, you could use .NET Remoting and register a well-known type that other instances of the application could activate, with behavior similar to what is described above.
Ultimately, a quick search on "IPC" or "Inter-Process Communication" may open up other alternatives. But I believe the named pipe approach is the cleanest and would be the easiest to implement/extend.
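The answer above is .NET-specific, but the core trick — fail to acquire a well-known named resource if another instance already holds it — is language-agnostic. Here is a minimal, hedged sketch of the same idea in Java, using a loopback port in place of a named mutex (the port number 45987 is an arbitrary assumption):

```java
import java.io.IOException;
import java.net.InetAddress;
import java.net.ServerSocket;

public class SingleInstance {
    // Try to become the "first" instance by binding a well-known loopback port.
    // Like a named mutex or pipe, only one process on the machine can hold the
    // binding at a time; every later instance fails with an IOException.
    public static ServerSocket tryAcquire(int port) {
        try {
            return new ServerSocket(port, 1, InetAddress.getLoopbackAddress());
        } catch (IOException alreadyHeld) {
            return null; // another instance already owns the port
        }
    }

    public static void main(String[] args) {
        ServerSocket lock = tryAcquire(45987);
        if (lock == null) {
            System.out.println("Already running - hand off work and exit.");
        } else {
            System.out.println("First instance - keep the lock for the app's lifetime.");
        }
    }
}
```

A real app would keep the socket open for its lifetime and could also accept() on it to receive the handed-off string, which mirrors the named-pipe data transfer described above.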
For "first-run" detection you can also create a named mutex, in case you opt for an IPC mechanism that doesn't exhibit a failure condition on multiple runs (for example, using a database as a middle-man for data share). No two processes can "Create" a mutex with the same name; this is the same constraint as with named pipes, as these names are managed by the kernel (and thus span process, session, desktop and window station boundaries). | unknown |
d5837 | train | Use MAX and GROUP BY
SELECT ProductName, MAX(Price) [Price]
FROM [Products]
GROUP BY ProductName
ORDER BY MAX(Price) DESC
LIMIT 1;
A: I've always done it with the following
SELECT top 1 Name
FROM tableName
ORDER BY Price DESC
A: select top 1 * from [Products] order by Price desc
A: You can use TOP 1 but you always have to consider the possibility of having a tie, so:
SELECT TOP 1 WITH TIES ProductName, Price
FROM [Products]
ORDER BY Price DESC | unknown | |
d5838 | train | Changing the place of forall in the main lemma makes it much easier to prove. I wrote it as follows:
Lemma strong_induct_is_correct : forall (nP : nat->Prop),
strong_induct nP -> (forall n k, k <= n -> nP k).
(Also note that in the definition of strong_induct you used <= so it's better to use the same relation in the lemma as I did.)
So I could use the following lemma:
Lemma leq_implies_le_or_eq: forall m n : nat,
m <= S n -> m <= n \/ m = S n.
to prove the main lemma like this:
Proof.
intros nP [Hl Hr] n.
induction n as [|n' IHn].
- intros k Hk. inversion Hk. apply Hl.
- intros k Hk. apply leq_implies_le_or_eq in Hk.
destruct Hk as [Hkle | Hkeq].
+ apply IHn. apply Hkle.
+ rewrite Hkeq. apply Hr in IHn. apply IHn.
Qed.
This is a much simpler proof, and you can also prove a prettier lemma using the lemma above.
Lemma strong_induct_is_correct_prettier : forall (nP : nat->Prop),
strong_induct nP -> (forall n, nP n).
Proof.
intros nP H n.
apply (strong_induct_is_correct nP H n n).
auto.
Qed.
Note: Usually after using a destruct or induction tactic once, it is not very helpful to use one of them again. So I think using destruct n' after induction n would not bring you any further. | unknown | |
d5839 | train | It could be that you are using two different versions of MySQL: one < 5.7 (localhost) and one >= 5.7 (server).
Due to the fact that you have an improper use of GROUP BY (allowed in MySQL < 5.7 but not allowed, by default, in MySQL >= 5.7), this could produce an error.
You should not use GROUP BY without an aggregation function; to obtain distinct rows without an aggregation function, use ->distinct():
$variant_ids1=VariantProducts::find()
->select(['variant_id1'])
->where(['shop_id'=>$searchModel->user_id])
->orWhere(['category_id'=>$searchModel->category_id,'position'=>null])
->distinct();
$variant_ids2=VariantProducts::find()
->select(['variant_id2'])
->where(['shop_id'=>$searchModel->user_id])
->orWhere(['category_id'=>$searchModel->category_id,'position'=>null])
->distinct();
$variant_ids3=VariantProducts::find()
->select(['variant_id3'])
->where(['shop_id'=>$searchModel->user_id])
->orWhere(['category_id'=>$searchModel->category_id,'position'=>null])
->distinct();
Another issue could be related to the IN clause: the length of the IN content is limited by the max_allowed_packet param, so you should check this param on both of your databases and evaluate whether these values are compatible with your union result.
https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html#sysvar_max_allowed_packet | unknown | |
d5840 | train | I created an example of how to make it simple (in my opinion).
I merged the three states into one; this way I can get each one in a more dynamic way. Then I created an onChange handler that handles the changes and does all the if statements (with less code).
Each input fires the change handler on change, and it sets the state according to the if statements.
This way we can create as many inputs as we want; we just need to pass the right arguments to the change handler and it will take care of the rest for us (we just need to make sure to include one more key-value pair in the state).
This is the dummy data for Data:
const Data = { field1: "abc", field2: "efg" };
The state:
const [generalState, setGeneralState] = useState({
unsavedChanges: [],
customField1: "",
customField2: ""
});
The handler:
const changeTheState = (type, value, field, customField) => {
//setting the values to the state. so we can fetch the values for the inputs and for later use
setGeneralState((prev) => ({ ...prev, [type]: value }));
// I narrowed down the if statements; now there are only two.
if (
value === Data[field] &&
generalState.unsavedChanges?.includes(customField)
) {
return setGeneralState((prev) => {
let filtered = prev.unsavedChanges.filter(
(item) => item !== customField
);
return { ...prev, unsavedChanges: filtered };
});
}
if (!generalState.unsavedChanges?.includes(customField)) {
setGeneralState((prev) => ({
...prev,
unsavedChanges: [...prev.unsavedChanges, customField]
}));
}
};
And the JSX:
<div className="App">
<input
value={generalState.customField1}
onChange={(e) => {
changeTheState(
"customField1",
e.target.value,
"field1",
"customField1"
);
}}
/>
<input
value={generalState.customField2}
onChange={(e) => {
changeTheState(
"customField2",
e.target.value,
"field2",
"customField2"
);
}}
/>
<h1>{generalState.unsavedChanges.length} # of unsaved changes.</h1>
<button disabled={generalState.unsavedChanges.length === 0}>
Submit
</button>
</div>
Here is the example: codesandbox example
One more thing you can do is to create a reusable component for the input: create an array of objects to represent each input, loop through the array, and generate as many inputs as you want.
If you need extra explanation, let me know. | unknown |
d5841 | train | Change
protected void two()
{
Console.WriteLine("this is two method");
}
into
public void two()
{
Console.WriteLine("this is two method");
}
A: You yourself answered the question:
'PublicDemo.DemoPublic.two()' cannot implement an interface member because it is not public.
The answer is: interface members have to be public.
A: Change protected to public. You have defined a public interface. Once you define a public interface, the contracts in the interface will also be public by default.
public void two()
{
Console.WriteLine("this is two method");
}
A: It means what it says. Your two method is protected. It needs to be public if implemented from an interface.
protected void two(){
Console.WriteLine("this is two method");
}
change it to public | unknown | |
d5842 | train | You can use the KeyDown event and check e.KeyCode == Keys.Enter. | unknown | |
d5843 | train | Create the strings in a resource file. You can then localise by adding additional resource files.
Check out http://geekswithblogs.net/dotNETPlayground/archive/2007/11/09/116726.aspx
A: Use string resources.
A: I've always defined constants wherever they make the most sense based on your language (a static class? application-wide controller? resource file?) and just call them where/whenever needed. Sure they're still "hard-coded" in a way at that point, but they're also nicely centralized, with naming conventions that make sense.
A: Create a Resource (.resx) file and add your strings there. VS will generate a class for you for easy access to these resources with full IntelliSense. You can then add localised resources in the same manner.
A: .NET has pretty good support for so-called resource files where you can store all the strings for one language. | unknown |
d5844 | train | Waiting for the player to enter their name is an asynchronous process, therefore you have to wait for an event dispatched by the popup. Since the popup closes itself (gets removed from the stage) once the OK button is clicked, you can listen on that popup for the Event.REMOVED_FROM_STAGE event, and only then gather the data from the popup. Don't forget to remove the event listener from the popup so that you'll not leak the instance.
private function NewHighScore():void{
highScorePopup = PopUpManager.createPopUp(this, Popup, true) as Popup;
highScorePopup.SetScore(playerscore);
PopUpManager.centerPopUp(highScorePopup);
highScorePopup.addEventListener(Event.REMOVED_FROM_STAGE,getPlayerName);
}
function getPlayerName(event:Event):void {
event.target.removeEventListener(Event.REMOVED_FROM_STAGE,getPlayerName);
var popup:Popup=event.target as Popup;
if (!popup) return;
var playerName:String=popup.getName(); // now the name will be populated
// do stuff with the name
} | unknown | |
d5845 | train | Most AB testing you hear about is referring to client-side tests powered by injecting JS in the browser. Testing in a Java app requires a different approach.
You can use a free, open-source tool such as Planout. This serves as a basic traffic splitter and uses a deterministic hashing algorithm so that you get consistent variations as long as you keep using the same user ID.
If you need more robust functionality, you may need to go for a commercial package. For in-app testing (mobile, web, backend), these usually take 1 of 2 forms:
*
*API approach: your app can send a request to the vendor-managed server asking for a variation for the current user. The downside here is the performance hit of waiting for a response every time you need the user's variation.
*SDK: vendor-provided SDK you implement in your Java code base that uses a deterministic hash to determine the variation for the user. The SDK will need to retrieve some kind of data file with experiment status, traffic allocation, etc from a server prior to running the experiment (and on some update frequency), but then lets you determine variations in memory (no blocking network calls). Tracking calls can be sent async.
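To make the "deterministic hashing" idea concrete, here is a hedged sketch (not any vendor's actual algorithm) of how such an SDK can assign a consistent variation in memory; the CRC32 choice and the key format are illustrative assumptions:

```java
import java.nio.charset.StandardCharsets;
import java.util.zip.CRC32;

public class Bucketer {
    // Deterministically map (userId, experimentKey) to a variation index.
    // The same inputs always hash to the same bucket, so a user sees a
    // consistent variation across requests with no server round-trip.
    public static int variation(String userId, String experimentKey, int numVariations) {
        CRC32 crc = new CRC32();
        crc.update((experimentKey + ":" + userId).getBytes(StandardCharsets.UTF_8));
        return (int) (crc.getValue() % numVariations);
    }

    public static void main(String[] args) {
        // Same user, same experiment -> same arm every time
        System.out.println(Bucketer.variation("user-42", "new-checkout", 2));
        System.out.println(Bucketer.variation("user-42", "new-checkout", 2));
    }
}
```

Salting the hash with the experiment key (rather than hashing the user ID alone) keeps a given user's assignments independent across experiments.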
A commercial package will give you a few benefits over a free solution, like a user-friendly web portal to manage experiments, a statistical calculator, QA tools, feature flags, user permissions, ongoing updates and support, etc.
Disclaimer: I work for Optimizely who develop an SDK-driven platform for AB testing & feature flags. | unknown | |
d5846 | train | FTP credentials do not refer to your login details, it refers to credentials for File Transfer Protocol, it is given to you when you purchase a web hosting service or setup one yourself on your machine.
An alternative to this would be to download the plugin or theme you want and paste it to your /{website folder}/wp-content/themes or /{website folder}/wp-content/plugins
A: Try to
*
*Right click the htdocs folder and choose Get Info
*Click the lock icon and type your macOS account password to unlock the options below.
*Allow everyone Read & Write permission, then click the cog icon and choose Apply to enclosed items...; this should apply the r+w permission to all sub-folders.
*Done | unknown | |
d5847 | train | $targetsvr.Roles.Members is a legal expression that results in a collection of all members of all roles (it's equivalent to $targetsvr.Roles | Foreach { $_.Members }). But this collection is synthesized by PowerShell, not an actual member of something, so you can't modify it. You want $targetsvr.Roles["Administrators"].Members instead. And you explicitly need to update the role before the server will see the changes. And you want to give the cmdlet a better name as well. If I may:
Function Add-ASAdministrator {
param(
[Parameter(Mandatory=$True, ValueFromPipeline)]
[string] $User,
[string] $Instance = "."
)
$server = New-Object Microsoft.AnalysisServices.Server
$server.Connect($Instance)
$administrators = $server.Roles["Administrators"]
if ($administrators.Members.Name -notcontains $User) {
$administrators.Members.Add($User) | Out-Null
$administrators.Update()
}
$server.Disconnect()
} | unknown | |
d5848 | train | First as stated in the comments, there is no cost to using full columns:
=SUMIF(D:D,"Restaurant",C:C)
Now it does not matter how large the range gets.
But if one wants to limit it using other cells, I would use INDEX instead of INDIRECT, as INDIRECT is volatile (this only works in Excel):
=SUMIF(INDEX(D:D,T1):INDEX(D:D,T2),"Restaurant",INDEX(C:C,T1):INDEX(C:C,T2))
Where Cell T1 holds the start row and T2 holds the end row.
A: You could use a simple QUERY like:
=QUERY({C13:D}, "select Col2, sum(Col1)
where Col2 matches 'Restaurant'
group by Col2
label sum(Col1)''", 0)
or for the whole group:
=QUERY({C13:D}, "select Col2, sum(Col1)
where Col2 is not null
group by Col2
label sum(Col1)''", 0) | unknown | |
d5849 | train | Try simplifying your PrincipalContext line:
PrincipalContext oPrincipalContext = new PrincipalContext(ContextType.Domain, "XXXXXX.org", AUserThatWorks, PasswordThatWorks);
This assumes your domain is XXXXXXX.org. You can also try putting your domain in front of your username: "XXXXXX.org\username". | unknown | |
d5850 | train | Declaring Class
public class MyServlet extends HttpServlet
instead of
public MyServlet extends HttpServlet
A: You forgot the class keyword when defining the class; just put class before the class name. | unknown |
d5851 | train | Type inference is a feature of some statically-typed languages. It is done by the compiler to assign types to entities that otherwise lack any type annotations. The compiler effectively just 'fills in' the static type information on behalf of the programmer.
Type inference tends to work more poorly in languages with many implicit coercions and ambiguities, so most type inferenced languages are functional languages with little in the way of coercions, overloading, etc.
Type inference is part of the language specification; for example, the F# spec goes into great detail about the type inference algorithm and rules, as this effectively determines 'what is a legal program'.
Though some (most?) languages support some limited forms of type inference (e.g. 'var' in C#), for the most part people use 'type inference' to refer to languages where the vast majority of types are inferred rather than explicit (e.g. in F#, function and method signatures, in addition to local variables, are typically inferred; contrast to C# where 'var' allows inference of local variables but method declarations require full type information).
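As a concrete illustration of inferred-but-still-static types, here is a small sketch using Java's var (Java 10+, analogous to the C# var mentioned above; the assumption here is a modern JDK):

```java
import java.util.ArrayList;

public class InferenceDemo {
    public static String demo() {
        // The compiler infers the static types from the initializers;
        // both variables are still fully statically typed.
        var message = "hello";                  // inferred as String
        var numbers = new ArrayList<Integer>(); // inferred as ArrayList<Integer>
        numbers.add(41 + 1);
        // message.add(...) would be a compile-time error: the type is fixed
        return message.getClass().getSimpleName() + ":" + numbers.get(0);
    }

    public static void main(String[] args) {
        System.out.println(demo()); // prints "String:42"
    }
}
```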
A: A type inferencer determines what type a variable is from the context. It relies on strong typing to do so. For example, functional languages are very strongly, statically typed but completely rely on type inference.
C# and VB.Net are other examples of statically typed languages with type inference (they provide it to make generics usable, and it is required for queries in LINQ, specifically to support projections).
Dynamic languages do not infer types; the type is discovered at runtime.
A: Type inferencing is a bit of a compromise found in some static languages. You can declare variables without specifying the type, provided that the type can be inferred at compile time. It doesn't offer the flexibility of latent typing, but you do get type safety and you don't have to write as much.
See the Wikipedia article.
A: A type inferencer is anything which deduces types statically, using a type inference algorithm. As such, it is not just a feature of static languages.
You may build a static analysis tool for dynamic languages, or those with unsafe or implicit type conversions, and type inference will be a major part of its job. However, type inference for languages with unsafe or dynamic type systems, or which include implicit conversions, can not be used to prove the type safety of a program in the general case.
As such type inference is used:
*
*to avoid type annotations in static languages,
*in optimizing compilers for dynamic languages (ie for Scheme, Self and Python),
*In bug checking tools, compilers and security analysis for dynamic languages. | unknown | |
d5852 | train | You have to use Adapter code to handle the click within the list items.
@Override
public View getView(int position, View convertView,
ViewGroup parent) {
LayoutInflater inflater = (LayoutInflater) con
.getSystemService(Context.LAYOUT_INFLATER_SERVICE);
View rowView = inflater.inflate(R.layout.yourlayout, parent, false);
Button row = (Button) rowView.findViewById(R.id.btnSample);
final int n = position;
row.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
Intent next = new Intent(context, NextActivity.class);
startActivity(next);
}
});
return rowView;
}
A: You can use image.setOnClickListener(),
but I suggest using ImageButton: it extends ImageView and will intercept the click event, so you can click it.
A: public View getView(int position, View convertView, ViewGroup parent) {
View row = convertView;
RecordHolder holder = null;
if (row == null) {
LayoutInflater inflater = ((Activity)mcontext).getLayoutInflater();
row = inflater.inflate(R.layout.custom_layout, parent, false);
holder = new RecordHolder();
holder.tv_name=(TextView) row.findViewById(R.id.tv_name);
holder.btn_custom_delete=(Button) row.findViewById(R.id.btn_custom_delete);
holder.btn_custom_insert=(Button) row.findViewById(R.id.btn_custom_insert);
row.setTag(holder);
} else {
holder = (RecordHolder) row.getTag();
}
holder.tv_name.setTag(position);
holder.btn_custom_delete.setTag(position);
holder.tv_name.setText(getItem(position));
holder.tv_name.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View view) {
int pos = (Integer) view.getTag();
Intent intent = new Intent(mcontext , ActionActivity.class);
// Here You can also get you position of arraylist or hashmap by pos i.e integer variable value.
startActivity(intent);
}
});
return row;
} | unknown | |
d5853 | train | What you'll probably want to consider, is using Paperclip (or another similar gem) to store the data on S3. Within your application, you can then locate/compute the data you'll need to display through your Paperclip-based model, then have Paperclip retrieve and load the data you need.
The nice thing about such a solution is that it's completely transparent for you (as a user AND as a developer): you're just working with your objects, and they get stored/retrieved form S3 as needed.
A: Both Heroku and Engineyard (probably the two largest cloud providers for rails) use S3 as the backend, so using these providers could be great.
Heroku tends to be more 'out-of-the-box' so I think EngineYard with CLI(Command Line) access would be better suited to your needs.
More info on the two at
Heroku vs EngineYard: which one is more worth the money? and
http://mikemainguy.blogspot.com/2011/08/heroku-is-bus-engineyard-is-car.html and
http://www.cuberick.com/2010/04/engine-yard-vs-heroku-getting-started.html has more on setup / cost.
A: This is very possible. Use the fog gem.
I have to not recommend you use carrierwave or paperclip for your specific needs here. They're good when you are uploading the data initially, but in your case you're accessing existing data. Just use fog to connect to s3 directly. | unknown | |
d5854 | train | I try to avoid right join because it's needlessly confusing.
Here's an example left join approach:
select e.employee_id
, m.month
, a.project
, sum(a.worked)
from (
select distinct employee_id
from TableA
) e
, MonthsTable m
left join
TableA a
on a.employee_id = m.employee_id
and month(a.date_wrk) = m.month
group by
e.employee_id
, m.month
, a.project
The cross join creates a matrix of each employee and each month. For each employee-month combination in the matrix, it optionally looks up the project hours.
A: Ok I did it basing on Andomar suggestion - but as Access is not fond of sub-queries and cross join I created additional table based on :
select distinct employee_id,months.month
from TableA,months
which give me a cross join table which then I linked to tableA.
Cheers! | unknown | |
d5855 | train | I use build project instead of build application, but I think you need to add the PBDs like pbwsclient125.pbd to your set liblist command.
A: SET liblist "a1.pbl;a2.pbl;a3.pbl;pbwsclient125.pbd;pbdom125.pbd"
BUILD executable "performmed.exe" "pbshell.ico" "performmed.pbr" "yyynn" | unknown | |
d5856 | train | The Gang of Four's main contribution to Design Patterns is really giving names to some commonly-used patterns to assist communication of design intent. It's so much easier to write
// this is an observer
than a big ol' block of comments that no one will read. And if people shared the jargon, developers can communicate more effectively.
The Observer pattern has been around long before OO programming. Most often it was referred to using the term "callback", often implemented with function pointers in various languages, or perhaps even a flag that was used to indicate which function/procedure/subroutine should be called. This represented one of the earliest forms of abstract communication between modules. I've even seen similar approaches taken in assembler languages - storing a callback address and using it to indirectly notify that "something happened".
A big thing to remember... the implementations that the Gang of Four show in the Design Patterns book are not "absolute" - they're there to demonstrate an approach. You can just as easily implement the Observer pattern with a function pointer as you can with an abstract class, interface, or C# delegate.
(I teach a Design Patterns course at Johns Hopkins, btw ;) )
A: What the Gang of Four did wasn't invent patterns; they observed and researched the software field at the time in order to catalog the solutions to the common problems faced by developers.
As for who initially invented it, your guess is as good as mine, I suppose. Although I'll be interested if anyone does know who invented it. In my opinion it's like asking who invented fire...
ConcreteSubject refers to the implementation of the Subject interface. And it's not a variation; it's simply necessary to have an interface to facilitate the pattern (or a superclass, but an interface is much better).
d5857 | train | you need to inject first $httpbackend
describe('MyController', function() {
var $httpBackend, $rootScope, createController, authRequestHandler;
// Set up the module
beforeEach(module('MyApp'));
beforeEach(inject(function($injector) {
// Set up the mock http service responses
$httpBackend = $injector.get('$httpBackend');
// backend definition common for all tests
authRequestHandler = $httpBackend.when('GET', '/auth.py')
.respond({userId: 'userX'}, {'A-Token': 'xxx'});
// Get hold of a scope (i.e. the root scope)
$rootScope = $injector.get('$rootScope');
// The $controller service is used to create instances of controllers
var $controller = $injector.get('$controller');
createController = function() {
return $controller('MyController', {'$scope' : $rootScope });
};
$httpBackend.when('GET', "/api/rest/").respond(data_to_respond);
}));
Next, write the test cases:
it('getTypes - should return 3 car manufacturers', function () {
service.getTypes().then(function(response) {
//expect will be here
});
$httpBackend.flush();
}); | unknown | |
d5858 | train | Maybe you can use Expectation-maximization algorithm. Your points would be (value, position). In your example, this would be something like:
With the E-M algorithm, the result would be something like (by hand):
This is the desired output, so you can consider using this and check whether it really works in all your scenarios. One note: you must assign the number of clusters beforehand, but I think that's not a problem for you, as you have set it out in your question.
Let me know if this worked ;)
Edit:
See this picture; it is what you talked about. With k-means you should control the delta value, that is, how much the position increments, to bring it to the same scale as the value. But with E-M this doesn't matter.
Edit 2:
OK, I was not correct: you need to control the delta value. It is not the same if you increment the position by 1 or by 3 (two clusters):
Thus, as you said, this algorithm could decide to cluster points that are not neighbours if their positions are far apart but their values are close. You need to guarantee this doesn't happen, with a high delta increment. I think that with an increment of 2 * (max - min) of your sequence's values this wouldn't happen.
Now, your points would have the form (value, delta * position). | unknown | |
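To make the delta discussion concrete, here is a small, self-contained sketch (the sequence and delta values are made up for illustration; it only shows how delta rescales positions before any clustering algorithm sees the points):

```python
import math

# Each sample i with value v becomes the 2-D point (v, delta * i);
# delta controls how much position counts relative to value.
def embed(values, delta):
    return [(v, delta * i) for i, v in enumerate(values)]

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

values = [9, 10, 9, 10, 1, 10]                 # hypothetical sequence
small = embed(values, delta=0.1)               # positions barely matter
large = embed(values, delta=2 * (max(values) - min(values)))

# With a tiny delta, samples 1 and 5 (both value 10) look almost identical
# even though they are far apart in the sequence; a large delta keeps
# non-neighbours apart:
print(dist(small[1], small[5]), dist(large[1], large[5]))
```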
d5859 | train | Not sure what caused the error, but running Docpad in an elevated command prompt solved the problem for me. I'm working on Windows 10. (I bet this issue is Windows related.)
To open a cmd with admin privileges, hit menu start and type cmd. Then hit ctrl+shift+enter or hit the right mouse button and select 'run as administrator'. The solution works in a Git Bash terminal with extra privileges, as well.
Edit: Today it broke again :/ (to the point that I question whether it did work yesterday...) The quest for an answer has started again...
Edit 2: The error is caused by docpad-plugin-gulp (or its dependencies) for sure; running the Gulp tasks directly works fine.
However, I found a setting in docpad-plugin-gulp called background. Setting this to true will run Gulp in the background. More importantly, it removes the experienced errors altogether. This leads me to suspect the issue is caused by lines 64-67 of the plugin (out/gulp.plugin.js) or the functions it calls there:
this.safeps.spawn(command, {
cwd: rootPath,
output: true
}, next);
I hope people more knowledgeable of this topic can confirm my suspicions and/or fix the plugin. For now, updating your docpad.coffee like below should bypass the errors.
docpadConfig = {
# Other docpad settings
plugins:
gulp:
background: true
}
module.exports = docpadConfig | unknown | |
d5860 | train | If loading your configuration returns a promise, simply put the creation of the Vue instance in the .then() {} code. | unknown | |
d5861 | train | I'll just take a stab at rewriting the code.
It appears that you have 2 separate things going on here: the assignment of a, which is based off the clock, and the assignment of x, which is based off in_i. Note that posedge/negedge can only appear in the event list, not inside an if condition, so the blocks end up like this:
always @(posedge clk)
  a <= b + 1;

always @(in_i) begin
  if (in_i)        // in_i just rose
    x <= x + 1;
  else             // in_i just fell
    x <= c + 2;
end
d5862 | train | I think what you are asking is if you can separate parts of an HTML page into smaller pages, so you can separate concerns.
In PHP this can be accomplished by referencing other files with require() or include(). But I still don't believe this really answers your question. ASP.NET MVC allows you to render partial views within a webpage through RenderPartial(), but you didn't mention anything about using this.
You can find more at http://www.asp.net/mvc/videos/mvc-2/how-do-i/how-do-i-work-with-data-in-aspnet-mvc-partial-views
A: If you want to divide a single webpage into multiple hmtl files you can do it by inserting frames. This is an old way of programming and you don't really see it these days but its efficient at doing what you are asking.
A: Yes, this is highly recommended. You are trying to apply the DRY principle (see: http://en.wikipedia.org/wiki/Don't_repeat_yourself). It's an excellent idea to apply this to your HTML. You can achieve this using require, require_once, include, and include_once in PHP. If you want to get a bit fancier, take a look at templating systems like Smarty (see: http://www.smarty.net/) | unknown | |
d5863 | train | Use -d option to set the delimtier to space
$ echo 00:00 down server | cut -d" " -f3-
server
Note: use field number 3, as the count starts from 1 and not 0.
From man page
-d, --delimiter=DELIM
use DELIM instead of TAB for field delimiter
N- from N'th byte, character or field, to end of line
More Tests
$ echo 00:00 down server hello world| cut -d" " -f3-
server hello world
The for loop is capable of iterating through the files using globbing. So I would write something like
for servers in /data/field/*
do
    string=`cut -d" " -f3- $servers/time`
    ...
    ...
done
A: You can use sed as well, deleting the first two space-separated fields:
sed 's/^[^ ]* *[^ ]* *//'
A: Please, do the things properly :
for servers in /data/field/*; do
string=$(cut -d" " -f3- /data/field/$servers/time)
echo "$string"
done
*
*backticks are deprecated in 2014 in favor of the form $( )
*don't parse ls output, use glob instead like I do with data/field/*
Check http://mywiki.wooledge.org/BashFAQ for various subjects
A: For the examples given, I prefer cut. But for the general problem expressed by the question, the answers above have minor short-comings. For instance, when you don't know how many spaces are between the words (cut), or whether they start with a space or not (cut,sed), or cannot be easily used in a pipeline (shell for-loop). Here's a perl example that is fast, efficient, and not too hard to remember:
| perl -pe 's/^\s*(\S+\s+){2}//'
Perl's -p operates like sed's. That is, it gobbles input one line at a time, like -n, and after doing its work, prints the line again. The -e starts the command-line-based script. The script is simply a one-line substitute s/// expression: substitute matching regular expressions on the left-hand side with the string on the right-hand side. In this case, the right-hand side is empty, so we're just cutting out the expression found on the left-hand side.
The regular expression, particular to Perl (and all PCRE derivatives, like those in Python and Ruby and Javascript), uses \s to match whitespace, and \S to match non-whitespace. So the combination of \S+\s+ matches a word followed by its whitespace. We group that sub-expression together with (...) and then tell perl to match exactly 2 of those in a row with the {m,n} expression, where n is optional and m is 2. The leading \s* means trim leading whitespace.
d5864 | train | There are essentially two ways to approach this (that I can think of ATM):
Note: I would rename cFunctor and bFunctor to simply Functor in both cases. They are nested inside their respective classes, and thus such a prefix makes little sense.
Type erased
Example of type erasure is std::function.
class A {
public:
int x;
std::vector<std::function<void(void)>> functors;
A() : functors { B::bFunctor(), C::cFunctor() }
{ }
};
If you need the functors to have more advanced behaviour, Boost.TypeErasure any might help.
Polymorphic
*
*Create an abstract functor type.
*Make B::bFunctor and C::cFunctor inherit from it.
*Store vector of that abstract functor type smart pointers.
struct AbstractFunctor {
virtual void operator()() const = 0;
};
class B {
public:
struct Functor : public AbstractFunctor {
void operator()() const {
//some code
}
};
};
class A {
public:
int x;
std::vector<std::unique_ptr<AbstractFunctor>> functors;
A() {
// this could most probably be shortened with make_unique
functors.emplace_back(std::unique_ptr<AbstractFunctor>(new B::Functor()));
functors.emplace_back(std::unique_ptr<AbstractFunctor>(new C::Functor()));
}
}; | unknown | |
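For completeness, a sketch of the type-erased variant in action (B and C below are stand-ins for the asker's classes; the functors append to a log so the calls are observable):

```cpp
#include <functional>
#include <string>
#include <vector>

namespace demo {

std::string log;

struct B { struct Functor { void operator()() const { log += "B"; } }; };
struct C { struct Functor { void operator()() const { log += "C"; } }; };

// Store both functors behind std::function and invoke them all.
std::string run_all() {
    log.clear();
    std::vector<std::function<void()>> functors{ B::Functor{}, C::Functor{} };
    for (const auto& f : functors) f();
    return log;   // "BC"
}

} // namespace demo
```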
d5865 | train | I only added two lines to your minimal demo.
// Onload Show green ones:
$('a[class="green"]').click();
And it shows the green items onload.
(function($) {
'use strict';
var $filters = $('.filter [data-filter]'),
$boxes = $('.boxes [data-category]');
$filters.on('click', function(e) {
e.preventDefault();
var $this = $(this);
$filters.removeClass('active');
$this.addClass('active');
var $filterColor = $this.attr('data-filter');
if ($filterColor == 'all') {
$boxes.removeClass('is-animated')
.fadeOut().finish().promise().done(function() {
$boxes.each(function(i) {
$(this).addClass('is-animated').delay((i++) * 200).fadeIn();
});
});
} else {
$boxes.removeClass('is-animated')
.fadeOut().finish().promise().done(function() {
$boxes.filter('[data-category = "' + $filterColor + '"]').each(function(i) {
$(this).addClass('is-animated').delay((i++) * 200).fadeIn();
});
});
}
});
// Onload Show green ones:
$('a[class="green"]').click();
})(jQuery);
* {
margin: 0;
padding: 0;
box-sizing: border-box;
}
html {
font: 18px/1.65 sans-serif;
text-align: center;
}
li {
list-style-type: none;
}
a {
text-decoration: none;
display: block;
color: #333;
}
h2 {
color: #333;
padding: 10px 0;
}
.filter {
margin: 30px 0 10px;
}
.filter a {
display: inline-block;
padding: 10px;
border: 2px solid #333;
position: relative;
margin-right: 20px;
margin-bottom: 20px;
}
.boxes {
display: flex;
flex-wrap: wrap;
}
.boxes a {
width: 23%;
border: 2px solid #333;
margin: 0 1% 20px 1%;
line-height: 60px;
}
.all {
background: khaki;
}
.green {
background: lightgreen;
}
.blue {
background: lightblue;
}
.red {
background: lightcoral;
}
.filter a.active:before {
content: '';
position: absolute;
left: 0;
top: 0;
display: inline-block;
width: 0;
height: 0;
border-style: solid;
border-width: 15px 15px 0 0;
border-color: #333 transparent transparent transparent;
}
.is-animated {
animation: .6s zoom-in;
}
@keyframes zoom-in {
0% {
transform: scale(.1);
}
100% {
transform: none;
}
}
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<div class="cta filter">
<a class="green" data-filter="green" href="#" role="button">Show Green Boxes</a>
<a class="blue" data-filter="blue" href="#" role="button">Show Blue Boxes</a>
<a class="red" data-filter="red" href="#" role="button">Show Red Boxes</a>
</div>
<div class="boxes">
<a class="red" data-category="red" href="#">Box1</a>
<a class="green" data-category="green" href="#">Box2</a>
<a class="blue" data-category="blue" href="#">Box3</a>
<a class="green" data-category="green" href="#">Box4</a>
<a class="red" data-category="red" href="#">Box5</a>
<a class="green" data-category="green" href="#">Box6</a>
<a class="blue" data-category="blue" href="#">Box7</a>
<a class="red" data-category="red" href="#">Box8</a>
<a class="green" data-category="green" href="#">Box9</a>
<a class="blue" data-category="blue" href="#">Box10</a>
<a class="red" data-category="red" href="#">Box11</a>
<a class="green" data-category="green" href="#">Box12</a>
<a class="blue" data-category="blue" href="#">Box13</a>
<a class="green" data-category="green" href="#">Box14</a>
<a class="red" data-category="red" href="#">Box15</a>
<a class="blue" data-category="blue" href="#">Box16</a>
</div> | unknown | |
d5866 | train | Looks like you have a process running on the 443 port so when apache tries to get on that, it fails.
netstat -tlpn | grep 443
use that to find out which process is using it. It should give you process id as well.
service <process> stop
or
kill <processID> to kill the process that is using your 443 port. Clear your apache logs and restart apache. | unknown | |
d5867 | train | It doesn't print reversed, nl is built reversed:
while not l.IsEmpty:
nl = Node(l.value,nl)
l = l.tail
each new nl is the tail of the next one. So 1 is the tail for 2, and so on.
A: Think about what is happening in the first while loop. It builds a new list in reverse. We start with Empty. Then each iteration:
new_list = Node(l.current_value, Node(l.previous_value))
Where previous_value is the value encountered first while iterating through the original list. So, this could be expanded to:
Node(l.last_value, Node(l.second_to_last, Node(...Node(l.first_value, Empty)...)))
A: It builds a forwards list
l = Node(1,Node(2,Node(3,Node(4,Empty))))
Then builds a backwards list from that forwards list
nl = Empty
while not l.IsEmpty:
nl = Node(l.value,nl)
l = l.tail
Then prints that list out in order, which is the reverse of the first list.
This is a pretty convoluted way of doing things. Better would be something recursive, like:
def printer(curr):
if curr.isEmpty:
return
printer(curr.tail)
print(curr.value) | unknown | |
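Putting the idea together as a runnable sketch (this Node/Empty pair is a minimal stand-in for the classes in the question):

```python
class Node:
    def __init__(self, value, tail):
        self.value = value
        self.tail = tail
        self.is_empty = False

class _Empty:
    is_empty = True

EMPTY = _Empty()

def reverse(node):
    rev = EMPTY
    while not node.is_empty:
        rev = Node(node.value, rev)   # each new node becomes the new head
        node = node.tail
    return rev

def values(node):
    out = []
    while not node.is_empty:
        out.append(node.value)
        node = node.tail
    return out

l = Node(1, Node(2, Node(3, Node(4, EMPTY))))
print(values(reverse(l)))   # [4, 3, 2, 1]
```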
d5868 | train | Bumpup yup to latest and use mixed().test() instead of string().test()
example :
passwordConfirm: Yup.mixed().test('is-same', 'Passwords not match.', value => value === values.newPassword)
A: The issue is the custom validation for matching the e-mail fields. I made a fork here which I fixed using the method from this Github issue to add a custom validation method to Yup for comparing equality of fields, a feature which is apparently not well-supported. | unknown | |
d5869 | train | In the event you have invalid form data, you should check if the $_POST['month_select'] variable is set and not empty and create your dropdown passing in it's value like so:
$selected = (!empty($_POST['month_select'])) ? $_POST['month_select'] : null;
createMonths('month_select', $selected);
function createMonths($id='month_select', $selected = null)
{
/*** array of months ***/
$months = array(
'01'=>'Jan',
'02'=>'Feb',
'03'=>'Mar',
'04'=>'Apr',
'05'=>'May',
'06'=>'Jun',
'07'=>'Jul',
'08'=>'Aug',
'09'=>'Sep',
'10'=>'Oct',
'11'=>'Nov',
'12'=>'Dec');
/*** current month ***/
$selected = is_null($selected) ? date('n') : $selected;
$select = '<select name="'.$id.'" id="'.$id.'">'."\n";
$select .= '<option value=""></option>' . "\n";
foreach($months as $key => $mon)
{
$select .= '<option value="'.$key.'"';
$select .= ($key == $selected) ? ' selected="selected"' : '';
$select .= ">$mon</option>\n";
}
$select .= '</select>';
return $select;
}
I have also taken the liberty of fixing your createMonths() function by the recommendation regarding date('n') and changing your array keys to strings as this will avoid having to pad your months. | unknown | |
d5870 | train | Try this formula:
=SUMPRODUCT(NOT(ISERROR(MATCH($C:$C;J:J;0)))*SUBTOTAL(103;OFFSET(C1;ROW(C:C)-MIN(ROW(C:C));0));$D:$D)
Excel structure:
After applying a filter:
You can also include date criteria already in the formula:
=SUMPRODUCT(NOT(ISERROR(MATCH($C:$C;J:J;0)))*($B:$B<$L$1);$D:$D)
Where L1 is date criteria.
But, of course, if you need filter usage, use the first solution. | unknown | |
d5871 | train | This should get you the contents of the CkEditor textarea:
function words(content)
{
var f = CKEDITOR.instances.blah.getData();
$('#othman').load('wordcount.php?content='+ encodeURIComponent(f));
}
But, I don't think the onkeyup will work, because CKEditor replaces the textarea. You would need to create a plugin for CKEditor to catch the keyup trigger within the editor instance.
A: I have used nlbr and everything works. | unknown | |
d5872 | train | bool Imagick::setSize ( int $columns , int $rows )
Sets the size of the Imagick object. Set it before you read a raw image format such as RGB, GRAY, or CMYK.
--php.net | unknown | |
d5873 | train | You can use aggregate
df_short = df.groupby(df.index.floor('D')).agg({'Distance': min, 'Value': max})
If you want the kept Value column is the same with minimum of Distance column:
df_short = df.loc[df.groupby(df.index.floor('D'))['Distance'].idxmin(), :]
A: Make a datetime Index:
df.DATE = pd.to_datetime(df.DATE) # If not already datetime.
df.set_index('DATE', inplace=True)
Resample and find the min Distance's location:
df.loc[df.resample('D')['Distance'].idxmin()]
Output:
Value Distance
DATE
1979-01-02 22:00:00 38.664642 23.251 | unknown | |
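A small end-to-end sketch of the idxmin approach on made-up data (two days, three rows):

```python
import pandas as pd

idx = pd.to_datetime(['1979-01-02 10:00', '1979-01-02 22:00',
                      '1979-01-03 05:00'])
df = pd.DataFrame({'Value': [30.0, 38.66, 40.0],
                   'Distance': [50.0, 23.25, 10.0]}, index=idx)

# One row per day: the row holding that day's minimum Distance.
picked = df.loc[df.groupby(df.index.floor('D'))['Distance'].idxmin()]
print(picked)
```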
d5874 | train | OK,It is clear from HBase Api Pagination doc that the pagination filter does not guarantee to give rows <= pagination factor since the filter is applied for each region server | unknown | |
d5875 | train | Quite a few things are wrong in your trigger function. Here it is revised w/o changing your business logic.
However this will affect the second user, not the first. Probably you shall compare the count to 0. Then the condition shall be if not exists (select from public.user) then
CREATE OR REPLACE FUNCTION public.first_user()
RETURNS trigger LANGUAGE plpgsql AS
$function$
begin
if ((select count(*) from public.user) = 1) then
-- probably if not exists (select from public.user) then
new.is_verified := true;
new.roles := array['ROLE_USER', 'ROLE_ADMIN'];
end if;
return new;
end;
$function$; | unknown | |
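For the function to fire at all, it still has to be attached to the table. A sketch of the wiring (the trigger name is arbitrary; it must be BEFORE INSERT so that modifying NEW actually affects the inserted row, and user is a reserved word, hence the quoting):

```sql
CREATE TRIGGER first_user_trigger
BEFORE INSERT ON public."user"
FOR EACH ROW
EXECUTE FUNCTION public.first_user();  -- EXECUTE PROCEDURE on PostgreSQL < 11
```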
d5876 | train | I think you miss a return when calling your recursive function:
i = i + 1;
return recurseThroughTree(randomCategories, outputString, i); | unknown | |
d5877 | train | From your post it seems you want to load it once and then just toggle.
$(document).on("click", ".more", function() {
var $wait = $("#wait");
if ($wait.html().length==0) $wait.load("about.html");
$wait.show();
$(this).toggleClass("more less");
});
$(document).on("click",".less",function(){
$("#wait").hide();
$(this).toggleClass("less more");
});
To add and delete each time you click the SAME button try this which seems to be very much what you already tried
$(document).on("click", ".more", function() {
$("#wait").load("about.html");
$(this).toggleClass("more less");
});
$(document).on("click",".less",function(){
$("#wait").empty();
$(this).toggleClass("less more");
});
One event handler:
$(document).on("click", ".moreorless", function() {
if ($(this).hasClass("more")) {
$("#wait").load("about.html");
$(this).toggleClass("more less");
}
else {
$("#wait").empty();
$(this).toggleClass("less more");
}
});
A: Why not just load the data on page load and then toggle() the display when button is clicked?
$(function(){
// add the content right when page loads
$("#wait").load("about.html");
$('#more').click(function(){
// toggle display
$("#wait").toggle();
// toggle class
$(this).toggleClass("less more");
});
}); | unknown | |
d5878 | train | Not an answer (yet?) to the real problem, only to "is it normal", but also much too long for comments.
It is possible for one SSL/TLS server to have more than one certificate, and provide different ones on connection requests, although it would be odd to so for the same domainname; this is more common on servers that support multiple domains -- either for different parts or services of one entity, or a CDN or frontend like Cloudflare or Fastly(!) that handles connections for multiple entities. However, in the specific case of www.google.com that is not a single server but hundreds or thousands, all over the world, and it is quite possible that different servers have different certificates, either intentionally (perhaps for testing) or because they are making a change which does not happen instantly everywhere.
According to my notes recently www.google.com as accessed from my location was using certs under GTS CA 1O1 with this intermediate cert under GlobalSign Root CA - R2 as shown in your first image. Note that intermediate cert expires in 6 months, as does the root it is under, which is a good reason to stop using it. (https://crt.sh/?caid=10 does show two 'cross' certs for R2 under GlobalSign Root CA (see below) with longer expiration, but they are both revoked.)
As of today I get a cert under GTS CA 1C3 under GTS Root R1 -- so far like your second figure -- but then crossed under GlobalSign Root CA WITHOUT R1. Specifically I get this intermediate cert for GTS CA 1C3 and this cross cert for GTS Root R1, which are valid until 2027 and 2028 and appear here on the GTS website as GTS CA 1C3 and GTS Root R1 Cross respectively. And this cert for that GlobalSign root -- which in effect is R1, since the other currently known GlobalSign roots are R2-6, but is not named R1 -- has been in at least Java and Mozilla/Firefox truststores for long time; I can't easily check history on others. crt.sh doesn't know of ANY cert named with R1. I'd note that Firefox does have in its truststore a root cert for GTS Root R1 (and also R2-R4) which means it could shorten the chain, but doesn't; AFAICS no Java has any of these GTS roots.
Moreover the leaf certs I get have both CommonName and SubjectAltName www.google.com NOT *.google.com. This pretty much confirms you are getting a different server than I am.
It might help if you can extract the "GlobalSign R1" cert you apparently have in some truststore, and post it to be looked at or searched for, and the rest of the chain which you can get with keytool -printcert -rfc -sslserver www.google.com and/or if you can identify which google address(es?) you are getting this cert chain from so others (like me) can try it -- although even one address might anycast to multiple physical servers.
A: You can refer to Google's PKI Repository which lists 5 CAs.
As per the FAQ, they actually list 36 different CAs in the roots.pem file (as of 28th April, 2022).
Ideally you will need to add all those CAs to your trust store to cover all bases. | unknown | |
d5879 | train | I figured it out. There were two things that I had to change.
*
*Add the vehicle ID to the current subquery to make sure that I only wanted those vehicles in the temp table
*Change the subquery table to the depreciation schedule
Select b.*
--this is month we want to compare (For example month 45)
From #changes As a
--this has all the months (for example month 1-50)
Inner Join work.dbo.DepreciationSchedule As b
On b.VehicleID = a.VehicleID
--If the previous months value exists in table b (Ex month 44), then take take that months value otherwise take the current value of table a (Ex month 45)
Where b.Month = Case When Exists (Select * From work.dbo.DepreciationSchedule As innerA Where innerA.month = a.month - 1 and innerA.VehicleID = a.vehicleID) Then a.Month -1 Else a.Month end | unknown | |
d5880 | train | Git is a distributed version control system. More simpler description: it is tool that helps to manage repo with sources.
Wiht purpose to share your repo with other project participants you need a public server where will be hosted your git repo.
GitHub it is web service that provide to you an opportunity to host your repo. You can host it like public or private repo. Also GitHub provide a lot of other helpful features (convenient code-review tool, edit files, manage team, graphs, wiki, gist, etc...).
A: "Git" is version control system that can use different hosts as server. Many companies use local "Git" servers.
Github is one of many public "Git" servers, but it is most popular one. | unknown | |
d5881 | train | Amazon CloudWatch has a RequestCount metric that measures "The number of requests received by the load balancer".
The Load Balancer can also generate Access Logs that provide detailed information about each request.
See:
*
*CloudWatch Metrics for Your Classic Load Balancer
*CloudWatch Metrics for Your Application Load Balancer | unknown | |
d5882 | train | A better title would be "WooCommerce up-sells as checkboxes".
A lot of research and several strategies to tackle this problem lead me to a solution which I thought was not even possible in the beginning.
The solution is now exactly what I wanted. A non-JavaScript, no-template-override, but a simple and pure addition to functions.php. It works for simple and variable products (and probably with grouped and external products too).
It misses some nice features still. It won't work yet if an up-sell is a variable product. Quantity selection and limiting up-sells per item or order would be nice additions too. Based on the code below adding those features should not be a big deal anymore.
// create the checkbox form fields and add them before the cart button
add_action( 'woocommerce_before_add_to_cart_button', 'action_woocommerce_before_add_to_cart_form', 10, 0 );
function action_woocommerce_before_add_to_cart_form(){
global $woocommerce, $product;
// get the product up-sells
$upsells = $product->get_upsells();
// store the number of up-sells and pass it on to the add-to-cart hook
?>
<input type="hidden" name="upsells_size" value="<?php echo(sizeof($upsells)); ?>">
<div id="wb-upsell-div">
<?php
// iterate through all upsells and add an input field for each
$i = 1;
foreach( $upsells as $value ){
$product_id = $value;
?>
<input id="wb-upsell-checkboxes" type="checkbox" name="upsell_<?php echo($i) ?>" value="<?php echo($product_id); ?>"><?php echo( '<a href="' . esc_url( get_permalink( $product_id )) . '" target="_blank">' . get_the_title( $product_id ) . "</a>". " ($" . get_post_meta( $product_id, '_regular_price', true) . ")"); ?><br>
<?php
$i++;
}
?>
</div>
<?php
}
// function to add all up-sells, where the checkbox have been checked, to the cart
add_action('woocommerce_add_to_cart', 'custom_add_to_cart', 10, 3);
function custom_add_to_cart() {
global $woocommerce;
// get the number of up-sells to iterate through
$upsell_size = $_POST['upsells_size'];
// iterate through up-sell fields
for ($i=1; $i<=$upsell_size; $i++){
// get the product id of the up-sell
$product_id = $_POST['upsell_' . $i];
$found = false;
//check if product already in cart
if ( sizeof( WC()->cart->get_cart() ) > 0 ) {
foreach ( WC()->cart->get_cart() as $cart_item_key => $values ) {
$_product = $values['data'];
if ( $_product->id == $product_id )
$found = true;
}
// if product not found, add it
if ( ! $found )
WC()->cart->add_to_cart( $product_id );
} else {
// if no products in cart, add it
WC()->cart->add_to_cart( $product_id );
}
}
}
And here is the CSS for formatting the <div>and the checkboxes. It goes into the style.css file:
#wb-upsell-div {
margin-top: 10px;
margin-bottom: 20px;
}
#wb-upsell-checkboxes{
}
A: So there's an actual answer to this question, you can add whatever you want inside the add to cart <form> using hooks. For example:
add_action( 'woocommerce_before_add_to_cart_button', 'so_34115452_add_input' );
function so_34115452_add_input(){
echo '<input type="checkbox" name="something"/>' . __( 'Some Checkbox', 'text-domain' );
} | unknown | |
d5883 | train | For getting element by name,
document.getElementsByName('BOE_NO')[0].disabled = true;
For getting element by id
document.getElementById('idOfBOE_NO').disabled = true;
d5884 | train | An equivalent in R tidyverse is dplyr::lag. Create the column in mutate and update the object by assigning (<-) back to the same object 'df'
library(dplyr)
df <- df %>%
mutate(shifted_x = lag(x))
or if we need to use the shift, there is shift in data.table
library(data.table)
setDT(df)[, shifted_x := shift(x)]
Also, if we need to create more than one column, shift can take a vector of values in n
setDT(df)[, paste0('shifted', 1:3) := shift(x, n = 1:3)]
A: We can also use the flag function from the collapse package.
library(collapse)
df %>%
ftransform(shifted_x = flag(x))
# # A tibble: 3 x 3
# x y shifted_x
# * <dbl> <dbl> <dbl>
# 1 1 4 NA
# 2 2 5 1
# 3 3 6 2 | unknown | |
d5885 | train | Unless you have huge amount of your cards your solution will work. Otherwise you can consider 2 dictionaries to make searches constants and keep O(N) complexity:
namespace ConsoleApplication
{
public class Dominoe
{
public Dominoe(int left, int right)
{
LeftSide = left;
RightSide = right;
}
public int LeftSide;
public int RightSide;
}
class Program
{
static void Main(string[] args)
{
var input = new List<Dominoe>()
{
new Dominoe(2, 3),
new Dominoe(1, 2),
new Dominoe(4, 5),
new Dominoe(3, 4)
};
var dicLeft = new Dictionary<int, Dominoe>();
            var dicRight = new Dictionary<int, Dominoe>();
            foreach (var item in input)
            {
                dicLeft.Add(item.LeftSide, item);
                dicRight.Add(item.RightSide, item);
            }
            Dominoe first = null;
            foreach (var item in input)
            {
                if (!dicRight.ContainsKey(item.LeftSide))
{
first = item;
break;
}
}
Console.WriteLine(string.Format("{0} - {1}", first.LeftSide, first.RightSide));
for(int i = 0; i < input.Count - 1; i++)
{
first = dicLeft[first.RightSide];
Console.WriteLine(string.Format("{0} - {1}", first.LeftSide, first.RightSide));
}
Console.ReadLine();
}
}
} | unknown | |
d5886 | train | Document pdfDoc = new Document(PageSize.A4, 25f, 20f, 20f, 10f);
using (MemoryStream memoryStream = new MemoryStream())
{
PdfWriter writer = PdfWriter.GetInstance(pdfDoc, memoryStream);
Phrase phrase = null;
PdfPCell cell = null;
Color color = null;
pdfDoc.Open();
int columns = grdGridPrint.Columns.Count;
// Table and PdfTable classes removed in version 5.XXX
PdfPTable table = new PdfPTable(columns);
// table.TotalWidth = 500f;
phrase = new Phrase();
phrase.Add(new Chunk("Heading in PDF \n", FontFactory.GetFont("Arial", 14, Font.BOLD, Color.BLACK)));
phrase.Add(new Chunk("\n", FontFactory.GetFont("Arial", 6, Font.NORMAL, Color.BLACK)));
cell = ClsPDF.PhraseCell(phrase, PdfPCell.ALIGN_CENTER);
cell.VerticalAlignment = PdfCell.CHUNK;
// table.AddCell(cell);
cell.Colspan = 7;
// Colspan for Giving Spaces to Heading if you have to show data in grid of 10 columns that time Colspan =10
table.AddCell(cell);
//table.HeaderRows = 1;
iTextSharp.text.Font fontTable = FontFactory.GetFont("Arial", 8, iTextSharp.text.Font.NORMAL, Color.BLACK);
// padding can only be set for cells, __NOT__ PdfPTable object
int padding = 2;
float[] widths = new float[columns];
for (int x = 0; x < columns; x++)
{
widths[x] = (float)grdGridPrint.Columns[x].ItemStyle.Width.Value;
string cellText = Server.HtmlDecode(grdGridPrint.HeaderRow.Cells[x].Text);
// Cell and Color classes are gone too
cell = new PdfPCell(new Phrase(cellText, fontTable))
{
};
table.AddCell(cell);
}
// next two lines set the table's __ABSOLUTE__ width
table.SetTotalWidth(widths);
table.LockedWidth = true;
for (int i = 0; i < grdGridPrint.Rows.Count; i++)
{
if (grdGridPrint.Rows[i].RowType == DataControlRowType.DataRow)
{
for (int j = 0; j < columns; j++)
{
string cellText = Server.HtmlDecode(grdGridPrint.Rows[i].Cells[j].Text);
cell = new PdfPCell(new Phrase(cellText, fontTable))
{
Padding = padding
};
//if (i % 2 != 0)
//{
// //cell.BackgroundColor = new System.Drawing.ColorTranslator.FromHtml("#C2D69B");
//}
table.AddCell(cell);
}
}
}
Response.ContentType = "application/pdf";
pdfDoc.Add(table);
// NOTE: do not call PdfWriter.GetInstance/pdfDoc.Open() a second time here - the
// document is already open. After pdfDoc.Close() below, send the buffered bytes with
// Response.BinaryWrite(memoryStream.ToArray()); followed by Response.End();
pdfDoc.Close(); | unknown | |
d5887 | train | Starting with Flake8 3.7.0, you can ignore specific warnings for entire files using the --per-file-ignores option.
Command-line usage:
flake8 --per-file-ignores='project/__init__.py:F401,F403 setup.py:E121'
This can also be specified in a config file:
[flake8]
per-file-ignores =
    __init__.py: F401,F403
    setup.py: E121
    other/*: W9
A: There is no way of specifying that in the file itself, as far as I know - but you can ignore these errors when invoking flake8:
flake8 --ignore=E403,E405 lucy/settings/staging.py | unknown | |
d5888 | train | Section 9.7.1 of the Java Language Specification states:
If the element type is an array type and the corresponding ElementValue is not an ElementValueArrayInitializer, then an array value whose sole element is the value represented by the ElementValue is associated with the element. Otherwise, if the corresponding ElementValue is an ElementValueArrayInitializer, then the array value represented by the ElementValueArrayInitializer is associated with the element.
With a comment clarifying the above stating:
In other words, it is permissible to omit the curly braces when a single-element array is to be associated with an array-valued annotation type element.
Because Scala has no equivalent array initializer syntax, you must use Array(elems).
A: This is part of the language specification for annotations that use the default value element.
See JLS 9.7.3 for examples, including one with the comment "Note that the curly braces are omitted." | unknown | |
d5889 | train | Assuming the language you use has a round function that rounds to the nearest integer, and calling v the value (normalized to the range [0, 1]) and n the number of grid points:
round(v * (n-1)) / (n-1) | unknown | |
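To make the formula concrete, here is a small sketch (Python; it assumes v is already normalized to [0, 1] and n is the number of grid points, matching the formula above):

```python
def snap_to_grid(v, n):
    # Round v to the nearest of n evenly spaced grid points in [0, 1].
    return round(v * (n - 1)) / (n - 1)

# With n = 5 the grid points are 0, 0.25, 0.5, 0.75 and 1:
print(snap_to_grid(0.32, 5))  # -> 0.25
print(snap_to_grid(0.9, 5))   # -> 1.0
```

For values outside [0, 1], rescale to that range first, snap, then scale back.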
d5890 | train | You imported React from "react-native";
import React, {
AppRegistry,
StyleSheet,
Text,
View,
TouchableHighlight,
AlertIOS,
Dimensions,
BackHandler,
PropTypes,
Component,
} from 'react-native';
instead of this, you need to import React from "react" (and keep the component imports above coming from "react-native"):
import React from 'react';
When we use JSX in our render functions, JSX is compiled to React.createElement(...) calls behind the scenes. In your code, React is not defined (it is not exported by "react-native"), which is why you get that error. | unknown | 
d5891 | train | There are two steps to make this work:
*you need to reference the Teams Javascript SDK in your web page
*When your user clicks the button, you would call microsoftTeams.tasks.submitTask in your 'click' event handler. There are a few parameter options for this method, depending on whether you want it to send anything back to your bot. To simply close the window, call microsoftTeams.tasks.submitTask(null);, or if you want to send an object back, call microsoftTeams.tasks.submitTask(whateverObjectYouWantToSendBack); | unknown | |
d5892 | train | The function that you are assigning to xhr.onreadystatechange is called an event handler. This event handler function gets executed when the actual event 'readystatechange' gets fired in your case.
A: That is an event handler, which differs a little bit from a callback. | unknown | 
d5893 | train | The contains-selector:
var value = $("td:contains('astrore')").next().text();
A: This allows for a repeating check for the value:
function scanForValue(value) {
    $("td").each(function() {
        if ($(this).text() == value) {
            console.log($(this).next().text());
        }
    });
    window.setTimeout(function() { scanForValue(value); }, 120000);
}
scanForValue('astrore');
http://jsfiddle.net/hLKRd/3/ | unknown | |
d5894 | train | Please try the following in your view:
@model IEnumerable<DysonADPTest.UserViewModel.USerViewModelADP>
Your problem lies in using the .Select() method which changes the type your controller action is returning from
IEnumerable<DysonADPTest.Models.tblEmployeeADP>
which your view is also expecting to something entirely different. For this to work, the type your controller action returns and your view uses should match. It's either not use the .Select() method in your controller action or change the type your view uses.
A: You are creating a list with a new object type (USerViewModelADP).
You can filter, but just keep the same type (the entity objects):
public ActionResult Index()
{
var tblEmployeeADPs = db.tblEmployeeADPs
    .Where(p => p.Status == "Active");
return View(tblEmployeeADPs.ToList());
} | unknown | |
d5895 | train | The real "unexpected" behavior is that setting the flag makes the heap executable as well as the stack. The flag is intended for use with executables that generate stack-based thunks (such as gcc when you take the address of a nested function) and shouldn't really affect the heap. But Linux implements this by globally making ALL readable pages executable.
If you want finer-grained control, you could instead use the mprotect system call to control executable permissions on a per-page basis -- Add code like:
uintptr_t pagesize = sysconf(_SC_PAGE_SIZE);
#define PAGE_START(P) ((uintptr_t)(P) & ~(pagesize-1))
#define PAGE_END(P) (((uintptr_t)(P) + pagesize - 1) & ~(pagesize-1))
mprotect((void *)PAGE_START(shellcode), PAGE_END(shellcode+67) - PAGE_START(shellcode),
PROT_READ|PROT_WRITE|PROT_EXEC);
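As a side note, the page-rounding masks in PAGE_START/PAGE_END can be checked with a quick sketch (Python; the fixed 4096-byte page size here is an assumption for illustration - real code should query sysconf as above):

```python
PAGESIZE = 4096  # assumed page size, for illustration only

def page_start(addr):
    # Clear the low bits: round addr down to the start of its page.
    return addr & ~(PAGESIZE - 1)

def page_end(addr):
    # Round addr up to the next page boundary.
    return (addr + PAGESIZE - 1) & ~(PAGESIZE - 1)

# A 67-byte buffer starting at 0x1ff0 straddles a page boundary, so the
# mprotect range has to cover both pages: 0x1000 up to 0x3000.
print(hex(page_start(0x1ff0)))     # -> 0x1000
print(hex(page_end(0x1ff0 + 67)))  # -> 0x3000
```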
A:
Is this expected behavior?
Looking at the Linux kernel code, I think that the kernel-internal name for this flag is "read implies exec". So yes, I think that it's expected.
It seems like it is a bad idea to require the stack to be executable in order to make the heap executable, since there are many more legitimate use cases of the latter than the former.
Why would you need the complete heap to be executable? If you really need to dynamically generate machine code and run it or so, you can explicitly allocate executable memory using the mmap syscall.
what is the rationale?
I think that the idea is that this flag can be used for legacy programs that expect that everything that's readable is also executable. Those programs might try to run stuff on the stack and they might try to run stuff on the heap, so it's all permitted. | unknown | |
d5896 | train | Check this link for the solution. It checks for the pattern to occur 5 times. You can modify the count in times(number_of_times):
[https://stackoverflow.com/questions/45033109/flink-complex-event-processing/45048866]
A: You can use .times(5) followed by the same pattern with the quantifier .oneOrMore().optional(). The times(5) requires exactly 5 matches, and the optional oneOrMore gives you the "at least 5" behavior. | unknown | 
d5897 | train | It means that if n, the argument passed to the function, is falsey, 2000 will be assigned to it.
Here, it's probably to allow callers to have the option of either passing an argument, or to not pass any at all and use 2000 as a default:
function delay(n){
n = n || 2000
console.log(n);
}
delay();
delay(500);
But it would be more suitable to use default parameter assignments in this case.
function delay(n = 2000){
console.log(n);
}
delay();
delay(500); | unknown | |
d5898 | train | input() will always return a string. If you want to see if it is possible to be converted to an integer, you should do:
try:
    int_user_var = int(user_var)
except ValueError:
    pass  # this is not an integer
You could write a function like this:
def try_convert(s):
    try:
        return int(s)
    except ValueError:
        pass
    try:
        return float(s)
    except ValueError:
        pass
    # bool(s) never raises ValueError, so test the two boolean literals explicitly
    if s in ("True", "False"):
        return s == "True"
    return s
However, as mentioned in the other answers, using ast.literal_eval would be a more concise solution.
A: from ast import literal_eval
def get_type(input_data):
    try:
        return type(literal_eval(input_data))
    except (ValueError, SyntaxError):
        # A string, so return str
        return str
print(get_type("1")) # <class 'int'>
print(get_type("1.2354")) # <class 'float'>
print(get_type("True")) # <class 'bool'>
print(get_type("abcd")) # <class 'str'>
A: Input will always return a string. You need to evaluate the string to get some Python value:
>>> type(eval(raw_input()))
23423
<type 'int'>
>>> type(eval(raw_input()))
"asdas"
<type 'str'>
>>> type(eval(raw_input()))
1.09
<type 'float'>
>>> type(eval(raw_input()))
True
<type 'bool'>
If you want safety (here user can execute arbitrary code), you should use ast.literal_eval:
>>> import ast
>>> type(ast.literal_eval(raw_input()))
342
<type 'int'>
>>> type(ast.literal_eval(raw_input()))
"asd"
<type 'str'>
A: The problem here is that any input is taken as a 'string'. So we need to treat 'string' as a special case, and keep it separate from everything else.
x = input("Enter something: ")
try:
    if type(eval(x)) == float:
        print(x, " is a floating point number")
    elif type(eval(x)) == int:
        print(x, " is an integer number")
    elif type(eval(x)) == bool:
        print(x, " is a boolean")
except:
    print("That is a string")
Here the input is first evaluated. If it's anything other than a string, the eval function shall show the type. If it's a string, it is considered as an "error", and the error message is given as "That is a string". | unknown | |
d5899 | train | Maybe you can try removing the alpha from the background-color, so the background does not become transparent:
.menu{
background-color: rgb(255, 189, 109);
}
If you must keep the transparent background-color, you may create one more layer behind the .menu div:
body{
font-family: sans-serif;
background-image: url("https://dl.fujifilm-x.com/global/products/cameras/x-t3/sample-images/ff_x_t3_002.JPG");
background-repeat: no-repeat;
background-attachment: fixed;
background-size: cover;
}
.menub{
position: relative;
border: 2px solid red;
max-width: 613px;
width: 100%;
height:100px;
margin: auto;
background-color: white;
}
.menu{
width: 100%;
height:100px;
background-color: rgba(255, 189, 109,0.382);
}
<html>
<body>
<div class="menub">
<div class="menu"></div>
</div>
</body>
</html> | unknown | |
d5900 | train | try this
[Route("api/messages")]
[HttpGet]
public HttpResponseMessage getMessage(DateTime? date = null, int? page = null) | unknown |