_id | partition | text | language | title |
---|---|---|---|---|
d3801 | train | Are you sure that the data import configuration matches your Solr document schema? | unknown | |
d3802 | train | The problem is that the second argument to indexOf is the index at which the search starts.
Returns the index within this string of the first occurrence of the specified character, starting the search at the specified index.
Once it finds the first "C", it will keep finding that same "C", because the search keeps starting at its index. You need to change your code to this:
index = dna.indexOf("C", index + 1);
To start from the first character after the "C" that you already found. You should also change the initial index to -1 so that it starts from the first character.
A: There are a few problems with your code:
*
*It uses counter as the number of 'c's in your dna string when it is really equal to dna.length(). This causes the ratio to be 1 if there are any 'c's at all, or 0 if there are none.
*There should be a variable that keeps track of how many 'c's there are in the string and how many 'g's there are (as seen in the above bullet, counter cannot be used for this).
*As resueman said, this line:
index = dna.indexOf("C", index); should be changed to: index = dna.indexOf("C", index + 1);, or else the index will remain the same (it will always be equal to the index of the first 'c' in the string).
*A while loop really is not all that well suited to this kind of thing; instead, a for loop should be used.
*The ratio should be a double, not an int, since doubles have more precision (they can hold decimals).
Here's code I came up with that works:
public static void cratio(String dna) {
int c = 0;
int g = 0;
for(int i =0; i < dna.length(); i++) {
if((dna.charAt(i) == 'c') || (dna.charAt(i) == 'C')) c++;
if((dna.charAt(i) == 'g') || (dna.charAt(i) == 'G')) g++;
}
System.out.println("Number of 'C's in DNA: " + c + " and number of 'G's in DNA: " + g);
int length = dna.length();
double ratio = (double) (c+g)/length * 100;
System.out.println("The ratio of 'C's and 'G's to the length of the DNA chain is: " + ratio + "%.");
}
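For a quick sanity check, calling it on a short string behaves as expected:
cratio("GATTACA");
// prints c = 1, g = 1 and a ratio of roughly 28.57%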
If you have any questions, just let me know! | unknown | |
d3803 | train | Can you try this:
<p:progressBar
value="#{data.financingDataModel.mortgagePercentage}"
styleClass="animated ui-soba-progress-bar " global="false"
style="overflow:hidden"
labelTemplate="#{data.financingDataModel.mortgagePercentage}%">
</p:progressBar>
I only added the `labelTemplate`. It works for me in PrimeFaces 7. | unknown | |
d3804 | train | The API is not documented; however, we can track it with tools...
You can add SSH public keys by calling the REST API below:
Write a script to create the SSH keys with the ssh-keygen command for the users; please see Use SSH key authentication for details.
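For example, a typical invocation (the key type, comment and output path here are placeholders to adjust):
ssh-keygen -t rsa -b 2048 -C "user@contoso.com" -f ~/.ssh/azure_key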
Then call the REST API to add the public keys:
POST https://{Account}.visualstudio.com/_details/security/keys/Edit
Content-Type: application/json
Request body:
{"Description":"Test1001","__RequestVerificationToken":"","AuthorizationId":"","Data":"ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDGZyIoG6eH9nTm/Cu2nVDa7hTNfaMWkwayghFmYTvqCkOwao2YJesGVih1fA3oR4tPsVv4+Vr8wxPCfJCboUrL9NDoH1tAMsIlkQZHqgaJwnGNWnPrnp0r2+wjLQJFPq/pPd8xKwr6QU0BxzZ4RuLDfMFz/MR1cQ2iWWKJuO/TXYrSPtY9XqsmMC8Zo4zJln40PGZt+ecOyQCNHCXsEJ3C+QIUXSqAkb8yknZ4apLf1oqfFRngtV4w84Ua/ZLpNduPZrBcm/mCU5Jq6H37jxhx4kluheJrfpAXbvbQlPTKa2zaOHp7wb3B2E2HvESJmx5ExNuAHoygcq/QGjsRsiUR andy@xxx@ws0068"} | unknown | |
d3805 | train | import java.awt.*;
import javax.swing.*;
import javax.swing.border.EmptyBorder;
import java.net.URL;
import javax.imageio.ImageIO;
class ImagePanel extends JPanel {
Image image;
ImagePanel(Image image) {
this.image = image;
}
@Override
public void paintComponent(Graphics g) {
super.paintComponent(g);
g.drawImage(image,0,0,getWidth(),getHeight(),this);
}
public static void main(String[] args) throws Exception {
URL url = new URL("http://pscode.org/media/stromlo2.jpg");
final Image image = ImageIO.read(url);
SwingUtilities.invokeLater(new Runnable() {
public void run() {
JFrame f = new JFrame("Image");
f.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
f.setLocationByPlatform(true);
ImagePanel imagePanel = new ImagePanel(image);
imagePanel.setLayout(new GridLayout(5,10,10,10));
imagePanel.setBorder(new EmptyBorder(20,20,20,20));
for (int ii=1; ii<51; ii++) {
imagePanel.add(new JButton("" + ii));
}
f.setContentPane(imagePanel);
f.pack();
f.setVisible(true);
}
});
}
}
Raw image used
A: I have faced this problem several times while I was adding an image to the JFrame. Either the image size would be small or the JFrame size was small.
Here is a great site wherein you can resize your images according to the JFrame without distorting the image.
You can visit this website Go to the site
I have explained and shown how to make use of the photo editor that the website provides to resize the image.
Go to my video tutorial Go to the Video Session | unknown | |
d3806 | train | Agree with the comments on the Q. Either:
1.) Use Client Credentials grant type in OAuth 2 - with an embedded secret in your App. Understand that this isn't super secure and someone will reverse engineer it eventually. Ideally each client would get a unique secret - so you could revoke a client if they're abusing its use.
2.) Live with that API being open - thereby not requiring an OAuth 2 access token at all. Maybe that API would be known only to your app - but again, it would only be a matter of time before someone reverse engineers it.
A: My group is having a similar discussion. Users can get the app and browse a catalog without having to sign in. The catalog and other data is accessed via an API and we would like to force users to have an access_token for all calls.
Our current thinking is to
*
*Always force the App to exchange a common clientId/secret for an access_token. So the app would get an access_token even for anonymous users. This would be via the client_credentials oAuth flow.
*If the user signs in, use the oAuth password flow. They would pass in clientId, secret, username, and password. We would additionally allow them to pass in their anonymous token so that we could transfer any history from their anonymous session.
So for example...
access_token = api.oAuth.client_credentials(clientId, secret)
catalog = api.getCatalog(access_token)
authenticated_access_token = api.oAuth.password(clientId, secret, username, password, access_token) | unknown | |
d3807 | train | Visual Studio is just calling the git clone command to clone the repo.
I suggest you use the Git command directly, such as the following:
git clone https://dev.azure.com/fabrikam/DefaultCollection/_git/Fabrikam C:\Repos\FabrikamFiber
If you still get the same result, I'm afraid these files .xxx are all ignored by Git by default. You need to check the .gitignore file.
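To see whether (and why) a particular file is being ignored, you can ask Git directly (the path is a placeholder):
git check-ignore -v path/to/file.xxx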
Either manually add them or override this for particular folders in your .gitignore file. | unknown | |
d3808 | train | Changing the "names" of the grid areas from numbers to strings fixed it.
@import url("https://fonts.googleapis.com/css?family=Roboto:400,400i,700");
.grid {
display: grid;
grid-gap: 1rem;
grid-template-rows: 1fr 1fr 1fr;
grid-template-columns: repeat(7, 1fr);
grid-template-areas:
"p1 p1 p1 p1 p4 p4 p4"
"p2 p2 p3 p3 p4 p4 p4"
"p2 p2 p3 p3 p4 p4 p4";
max-width: 1000px;
margin: 0 auto;
}
.pro-features {
grid-area: p1;
}
.feature-privacy {
grid-area: p2;
}
.feature-collab {
grid-area: p3;
}
.feature-assets {
grid-area: p4;
}
a {
text-decoration-color: orange;
text-decoration-style: double;
text-decoration-skip: none;
color: inherit;
font-weight: bold;
display: inline-block;
}
.grid > div {
background: #444;
color: white;
border-radius: 1rem;
padding: 1rem;
border-top: 1px solid #666;
box-shadow: 0 2px 4px rgba(0, 0, 0, 0.75);
}
h1, h2 {
margin: 0;
line-height: 1;
}
body {
background: #222;
margin: 0;
padding: 1rem;
line-height: 1.3;
font-family: Roboto, sans-serif;
}
<div class="grid">
<div class="pro-features">
<h1>CodePen PRO Features</h1>
<p>CodePen has many PRO features including these four!</p>
</div>
<div class="feature-privacy">
<h2>Privacy</h2>
<p>You can make as many <a href="https://codepen.io/pro/privacy/">Private</a> Pens, Private Posts, and Private Collections as you wish! Private <a href="https://codepen.io/pro/projects">Projects</a> are only limited by how many total Projects your plan has.</p>
</div>
<div class="feature-collab">
<h2>Collab Mode</h2>
<p><a href="https://blog.codepen.io/documentation/pro-features/collab-mode/">Collab Mode</a> allows more than one person to edit a Pen <em>at the same time</em>.</p>
</div>
<div class="feature-assets">
<h2>Asset Hosting</h2>
<p>You'll be able to <a href="https://blog.codepen.io/documentation/pro-features/asset-hosting/">upload files</a> directly to CodePen to use in anything you build. Drag and drop to the Asset Manager and you'll get a URL to use. Edit your text assets at any time.</p>
</div>
</div> | unknown | |
d3809 | train | You need a template.reload after your second save. Otherwise, the template record will contain the values from when it was first loaded.
A: I GOT IT! I just had to turn transactional fixtures to false inside spec_helper.rb:
RSpec.configure do |config|
...
config.use_transactional_fixtures = false # true by default
...
end
Now everything runs fine. Oh my... 4 hours on this, I was just getting mad. | unknown | |
d3810 | train | You could use the code from the SO posts below for uploading images to a server:
How can I upload a photo to a server with the iPhone?
upload image from iphone to the server folder | unknown | |
d3811 | train | To display multiple lines, add:
android:ellipsize="none" //the text is not cut on textview width
android:scrollHorizontally="false" //the text wraps on as many lines as necessary
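Put together, a minimal sketch of the whole element (the layout attributes are assumptions):
<TextView
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:ellipsize="none"
    android:scrollHorizontally="false" />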
A: I recommend using:
<TextView
android:singleLine="false"
/>
or
android:ellipsize="end"
android:singleLine="true"
with
android:layout_width="wrap_content"
or
android:inputType="textMultiLine" | unknown | |
d3812 | train | This is obviously a permission-related issue, so you need to check the permissions on the folder for the user's application pool, network service and aspnet accounts.
Check the read-only attribute of the files that are throwing access denied; many times, the user has uploaded images from a CD.
When you are saving a file, try to remove the read-only flag first and then save it, so that overwriting does not throw errors.
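In C# that is a one-liner (a minimal System.IO sketch, assuming path points at the file being overwritten):
File.SetAttributes(path, File.GetAttributes(path) & ~FileAttributes.ReadOnly); | unknown | |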
d3813 | train | This is because you can't use a column alias in the WHERE clause. Instead you need to SELECT that expression under an alias and use the HAVING clause to filter on it:
SELECT friends.*,
isNeighbour(lat,lon,friends.latitude,friends.longitude,rad) AS dist
FROM account friends
LEFT JOIN account_friendTest me
ON (friends.id=me.second_account)
OR (friends.id=me.first_account)
WHERE (me.first_account=id OR me.second_account=id) AND
friends.id <> id
HAVING dist < rad
ORDER BY dist; | unknown | |
d3814 | train | This is your query:
SELECT TOP 2 [Flight_Date], [No_Launches]
FROM Flights
WHERE [Claimed_By_ID] = ?
ORDER BY [Flight_Date] DESC
LIMIT 1,1;
You need to decide which database you are using. Some support TOP; some support LIMIT. Based on your error and the use of the square brackets, I would guess that you are using SQL Server/Sybase and should remove the LIMIT clause:
SELECT TOP 2 [Flight_Date], [No_Launches]
FROM Flights
WHERE [Claimed_By_ID] = ?
ORDER BY [Flight_Date] DESC;
If this is true, you should change the tag on the question from "mysql" to "sql-server".
EDIT:
To get the second entry, I think you can use a subquery:
SELECT TOP 1 *
FROM (SELECT TOP 2 [Flight_Date], [No_Launches]
FROM Flights
WHERE [Claimed_By_ID] = ?
ORDER BY [Flight_Date] DESC
) as t
ORDER BY Flight_Date ASC | unknown | |
d3815 | train | Try this code:
if 'b' in l1 and 'b' in l2: # Separated both statements to prevent ValueErrors
if l1.index('b') == l2.index('b'):
print 'b is in both lists and same position!'
Unlike Volatility's code, the length in either list doesn't matter.
The index() function gets the position of an element in a list. For example, if there was:
>>> mylist = ['hai', 'hello', 'hey']
>>> print mylist.index('hello')
1
A: You can do:
def has_equal_element(list1, list2):
return any(e1 == e2 for e1, e2 in zip(list1, list2))
This function will return True when at least one element has the same value and position as in the other list. This function also works when the lists differ in length, you'll need to adjust the function if that's not desired.
A: Assuming the lists are the same length, you could use the zip function
for i, j in zip(l1, l2):
if i == j:
print '{0} and {1} are equal and in the same position'.format(i, j)
What the zip function does is something like this:
l1 = [1, 2, 3]
l2 = [2, 3, 4]
print zip(l1, l2)
# [(1, 2), (2, 3), (3, 4)]
If you want a function that returns True or False given an input, you could do this
def some_func(your_input, l1, l2):
return (your_input,)*2 in zip(l1, l2)
(your_input,) is a one-tuple containing your_input, and multiplying it by two makes it (your_input, your_input) - which is what you want to test for.
Or if you want the return True if any satisfy the condition
def some_func(l1, l2):
return any(i == j for i, j in zip(l1, l2))
The any function basically checks if any of the elements of a list (or in this case a generator) are True in a boolean context, so in this case it returns true if two lists satisfy your condition.
A: If you actually want a method to compare one position in two lists you can use the following:
def compare_pos(l1, l2, pos):
try:
return l1[pos] == l2[pos]
except IndexError:
return False
l1 = [0, 1, 2, 3]
l2 = [0, 2, 2, 4]
for i, _ in enumerate(l1):
print i, compare_pos(l1, l2, i)
# Output:
# 0 True
# 1 False
# 2 True
# 3 False
If you want to test whether two lists have all the same elements in the same positions you can just check for equality:
print l1 == l2
A: I'd get common elements from both lists:
l1 = ['a', 'b', 'c', 'd']
l2 = ['e', 'b', 'f', 'g']
common_elements = [(i, v) for i,v in enumerate(l1) if l2[i] == v]
This will create a list of tuples: (index, value) and then you can just check if your desired value or index is in the list. | unknown | |
d3816 | train | Extract the constants into functions that describe them (basic refactoring):
FooBar fb = { foo(), bar() };
I know that style is very close to the one you didn't want to use, but it enables easier replacement of the constant values and also explains them (so there is no need to edit comments), if they ever change that is.
Another thing you could do (since you are lazy) is to make the constructor inline, so you don't have to type as much (removing "FooBar::" and the time spent switching between the h and cpp files):
struct FooBar {
FooBar(int f, float b) : foo(f), bar(b) {}
int foo;
float bar;
};
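With that constructor in place, construction becomes a single expression:
FooBar fb(12, 3.4f);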
A: Your question is somewhat difficult because even the function:
static FooBar MakeFooBar(int foo, float bar);
may be called as:
FooBar fb = MakeFooBar(3.4, 5);
because of the promotion and conversions rules for built-in numeric types. (C has never been really strongly typed)
In C++, what you want is achievable, though with the help of templates and static assertions:
template <typename Integer, typename Real>
FooBar MakeFooBar(Integer foo, Real bar) {
static_assert(std::is_same<Integer, int>::value, "foo should be of type int");
static_assert(std::is_same<Real, float>::value, "bar should be of type float");
return { foo, bar };
}
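For illustration, the exact-type call compiles, while the swapped call from above is now rejected at compile time:
FooBar fb = MakeFooBar(5, 3.4f);     // OK
// FooBar bad = MakeFooBar(3.4, 5); // fails both static_asserts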
In C, you may name the parameters, but you'll never get further.
On the other hand, if all you want is named parameters, then you write a lot of cumbersome code:
struct FooBarMaker {
FooBarMaker(int f): _f(f) {}
FooBar Bar(float b) const { return FooBar(_f, b); }
int _f;
};
static FooBarMaker Foo(int f) { return FooBarMaker(f); }
// Usage
FooBar fb = Foo(5).Bar(3.4);
And you can pepper in type promotion protection if you like.
A: Many compilers' C++ frontends (including GCC and clang) understand C initializer syntax. If you can, simply use that method.
A: Since style A is not allowed in C++ and you don't want style B then how about using style BX:
FooBar fb = { /*.foo=*/ 12, /*.bar=*/ 3.4 }; // :)
At least it helps to some extent.
A: Designated initializers will be supported in C++2a, but you don't have to wait, because they are officially supported by GCC, Clang and MSVC.
#include <iostream>
#include <filesystem>
struct hello_world {
const char* hello;
const char* world;
};
int main ()
{
hello_world hw = {
.hello = "hello, ",
.world = "world!"
};
std::cout << hw.hello << hw.world << std::endl;
return 0;
}
GCC Demo
MSVC Demo
Update 2021
As @Code Doggo noted, anyone who is using Visual Studio 2019 will need to set /std:c++latest for the "C++ Language Standard" field contained under Configuration Properties -> C/C++ -> Language.
A: Yet another way in C++ is
struct Point
{
private:
int x;
int y;
public:
Point& setX(int xIn) { x = xIn; return *this;}
Point& setY(int yIn) { y = yIn; return *this;}
};
Point pt;
pt.setX(20).setY(20);
A: Option D:
FooBar FooBarMake(int foo, float bar)
Legal C, legal C++. Easily optimizable for PODs. Of course there are no named arguments, but that is true of C++ in general. If you want named arguments, Objective-C would be a better choice.
Option E:
FooBar fb;
memset(&fb, 0, sizeof(FooBar));
fb.foo = 4;
fb.bar = 15.5f;
Legal C, legal C++. Named arguments.
A: I know this question is old, but there is a way to solve this until C++20 finally brings this feature from C to C++. What you can do to solve this is use preprocessor macros with static_asserts to check your initialization is valid. (I know macros are generally bad, but here I don't see another way.) See example code below:
#define INVALID_STRUCT_ERROR "Instantiation of struct failed: Type, order or number of attributes is wrong."
#define CREATE_STRUCT_1(type, identifier, m_1, p_1) \
{ p_1 };\
static_assert(offsetof(type, m_1) == 0, INVALID_STRUCT_ERROR);\
#define CREATE_STRUCT_2(type, identifier, m_1, p_1, m_2, p_2) \
{ p_1, p_2 };\
static_assert(offsetof(type, m_1) == 0, INVALID_STRUCT_ERROR);\
static_assert(offsetof(type, m_2) >= sizeof(identifier.m_1), INVALID_STRUCT_ERROR);\
#define CREATE_STRUCT_3(type, identifier, m_1, p_1, m_2, p_2, m_3, p_3) \
{ p_1, p_2, p_3 };\
static_assert(offsetof(type, m_1) == 0, INVALID_STRUCT_ERROR);\
static_assert(offsetof(type, m_2) >= sizeof(identifier.m_1), INVALID_STRUCT_ERROR);\
static_assert(offsetof(type, m_3) >= (offsetof(type, m_2) + sizeof(identifier.m_2)), INVALID_STRUCT_ERROR);\
#define CREATE_STRUCT_4(type, identifier, m_1, p_1, m_2, p_2, m_3, p_3, m_4, p_4) \
{ p_1, p_2, p_3, p_4 };\
static_assert(offsetof(type, m_1) == 0, INVALID_STRUCT_ERROR);\
static_assert(offsetof(type, m_2) >= sizeof(identifier.m_1), INVALID_STRUCT_ERROR);\
static_assert(offsetof(type, m_3) >= (offsetof(type, m_2) + sizeof(identifier.m_2)), INVALID_STRUCT_ERROR);\
static_assert(offsetof(type, m_4) >= (offsetof(type, m_3) + sizeof(identifier.m_3)), INVALID_STRUCT_ERROR);\
// Create more macros for structs with more attributes...
Then when you have a struct with const attributes, you can do this:
struct MyStruct
{
const int attr1;
const float attr2;
const double attr3;
};
const MyStruct test = CREATE_STRUCT_3(MyStruct, test, attr1, 1, attr2, 2.f, attr3, 3.);
It's a bit inconvenient, because you need macros for every possible number of attributes and you need to repeat the type and name of your instance in the macro call. Also you cannot use the macro in a return statement, because the asserts come after the initialization.
But it does solve your problem: When you change the struct, the call will fail at compile-time.
If you use C++17, you can even make these macros more strict by forcing the same types (using std::is_same_v from <type_traits>, since typeid cannot be evaluated inside a static_assert), e.g.:
#define CREATE_STRUCT_3(type, identifier, m_1, p_1, m_2, p_2, m_3, p_3) \
{ p_1, p_2, p_3 };\
static_assert(offsetof(type, m_1) == 0, INVALID_STRUCT_ERROR);\
static_assert(offsetof(type, m_2) >= sizeof(identifier.m_1), INVALID_STRUCT_ERROR);\
static_assert(offsetof(type, m_3) >= (offsetof(type, m_2) + sizeof(identifier.m_2)), INVALID_STRUCT_ERROR);\
static_assert(std::is_same_v<std::decay_t<decltype(p_1)>, std::decay_t<decltype(identifier.m_1)>>, INVALID_STRUCT_ERROR);\
static_assert(std::is_same_v<std::decay_t<decltype(p_2)>, std::decay_t<decltype(identifier.m_2)>>, INVALID_STRUCT_ERROR);\
static_assert(std::is_same_v<std::decay_t<decltype(p_3)>, std::decay_t<decltype(identifier.m_3)>>, INVALID_STRUCT_ERROR);\
A: The /* B */ way is fine in C++; also, C++0x is going to extend the syntax so it is useful for C++ containers too. I do not understand why you call it bad style.
If you want to indicate parameters with names then you can use the boost parameter library, but it may confuse someone unfamiliar with it.
Reordering struct members is like reordering function parameters; such refactorings may cause problems if you don't do them very carefully.
A: You could use a lambda:
const FooBar fb = [&] {
FooBar fb;
fb.foo = 12;
fb.bar = 3.4;
return fb;
}();
More information on this idiom can be found on Herb Sutter's blog.
A: What about this syntax?
typedef struct
{
int a;
short b;
}
ABCD;
ABCD abc = { abc.a = 5, abc.b = 7 };
Just tested on a Microsoft Visual C++ 2015 and on g++ 6.0.2. Working OK.
You can also make a specific macro if you want to avoid duplicating the variable name.
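A sketch of such a macro (hypothetical, relying on the same assignment-inside-initializer trick):
#define MAKE_ABCD(name, va, vb) ABCD name = { name.a = (va), name.b = (vb) }
MAKE_ABCD(abc2, 5, 7);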
A: For me the laziest way to allow inline initialization is to use this macro.
#define METHOD_MEMBER(TYPE, NAME, CLASS) \
CLASS &set_ ## NAME(const TYPE &_val) { NAME = _val; return *this; } \
TYPE NAME;
struct foo {
METHOD_MEMBER(string, attr1, foo)
METHOD_MEMBER(int, attr2, foo)
METHOD_MEMBER(double, attr3, foo)
};
// inline usage
foo test = foo().set_attr1("hi").set_attr2(22).set_attr3(3.14);
That macro creates the attribute and a self-referencing setter method.
A: For versions of C++ prior to C++20 (which introduces the named initialization, making your option A valid in C++), consider the following:
int main()
{
struct TFoo { int val; };
struct TBar { float val; };
struct FooBar {
TFoo foo;
TBar bar;
};
FooBar mystruct = { TFoo{12}, TBar{3.4} };
std::cout << "foo = " << mystruct.foo.val << " bar = " << mystruct.bar.val << std::endl;
}
Note that if you try to initialize the struct with FooBar mystruct = { TFoo{12}, TFoo{3.4} }; you will get a compilation error.
The downside is that you have to create one additional struct for each variable inside your main struct, and also you have to access the inner value with mystruct.foo.val. But on the other hand, it's clean, simple, pure and standard.
A: I personally have found that using a constructor with the struct is the most pragmatic way to ensure struct members are initialized in code to sensible values.
As you say above, a small downside is that one does not immediately see which param is which member, but most IDEs help here if one hovers over the code.
What I consider more likely is that a new member is added, and in this case I want all constructions of the struct to fail to compile, so the developer is forced to review. In our fairly large code base this has proven itself, because it guides the developer to what needs attention and therefore creates self-maintained code. | unknown | |
d3817 | train | SharpDevelop also has built-in capabilities for laying out a WiX dialog. I prefer it over WixEdit.
A: I created a full list of editors for WiX here: https://robmensching.com/blog/posts/2007/11/20/wix-editors/ (which is amazingly still up to date)
A: This is an excellent GUI IDE and it is open source.
Try this:
http://community.sharpdevelop.net/blogs/mattward/archive/2006/09/17/WixIntegration.aspx
download IDE from here:
http://www.icsharpcode.net/OpenSource/SD/Download/
A: You can try WixEdit.
A: If you use Visual Studio 2008/2010 and want to install an application that requires .NET framework you might be interested in having a look at SharpSetup. It allows you to graphically edit installer UI as WinForms controls (and use VS designer for that). | unknown | |
d3818 | train | You're not running an event loop in the thread where the QProcess instance lives. Any QObject in a thread without an event loop is only partially functional - timers won't run, queued calls won't be delivered, etc. So you can't do that. Using QObjects with QtConcurrent::run requires care.
At the very least, you should have a temporary event loop for as long as the process lives - in that case you should hold QProcess by value, since deleteLater won't be executed after the event loop has quit.
QProcess process;
...
QEventLoop loop;
connect(&process, static_cast<void(QProcess::*)(int, QProcess::ExitStatus)>(&QProcess::finished), &loop, &QEventLoop::quit);
loop.exec();
Otherwise, you need to keep the process in a more durable thread, and keep that thread handle (QThread is but a handle!) in a thread that has an event loop that can dispose of it when it's done.
// This can be run from a lambda that runs in an arbitrary thread
auto thread = new QThread;
auto process = new QProcess;
...
connect(process, static_cast<void(QProcess::*)(int, QProcess::ExitStatus)>(&QProcess::finished),
[this, process](int exitCode, QProcess::ExitStatus exitStatus){
...
process->deleteLater();
process->thread()->quit();
});
process->start("VBoxManage", {"list", "vms"});
process->moveToThread(thread);
// Move the thread **handle** to the main thread
thread->moveToThread(qApp->thread());
connect(thread, &QThread::finished, thread, &QObject::deleteLater);
thread->start();
Alas, this is very silly since you're creating temporary threads and that's expensive and wasteful. You should have one additional worker thread where you take care of all low-overhead work such as QProcess interaction. That thread should always be running, and you can move all QProcess and similar object instances to it, from concurrent lambdas etc. | unknown | |
d3819 | train | There are a number of problems in this code. The one the compiler is whining about is that you have a function definition
fn (f,x) => x
on the left-hand side of a case arm, where only patterns are permitted.
Some other problems:
*
*Redundant parentheses make the code hard to read (advice is available on removing them).
*Your case expression is redundant; in the function definition
fun suc (P p) = ...
it should be possible just to compute with p without any more case analysis.
*Since P carries a function, you will probably have an easier time if you write
fun suc (P f) = ...
and make sure that in the result, f is applied to a pair (as required by the datatype declarations). | unknown | |
d3820 | train | Update the native2ascii-maven-plugin to the newest version.
A: Adding this works for me:
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>native2ascii-maven-plugin</artifactId>
...
<!-- added for java 7 compilation -->
<dependencies>
<dependency>
<groupId>com.sun</groupId>
<artifactId>tools</artifactId>
<version>1.5.0</version>
<scope>system</scope>
<systemPath>${java.home}/../lib/tools.jar</systemPath>
</dependency>
</dependencies> ... | unknown | |
d3821 | train | It should be just
<nav id="navbuttons">
<button type="button" id="projectsMenu">Projects</button>
</nav>
then
$(document).ready(function () {
$("#projectsMenu").click(function () {
$("#projects").stop(true).slideToggle("slow");
});
});
Demo: Fiddle
because in your case, the actual slideToggle code is not executed on the first click... in the slideProjects method you are registering a click handler, which is what slides the element. Instead you can just use the DOM ready handler to add a click handler to the projectsMenu element, and then there is no need for an onclick handler. | unknown | |
d3822 | train | Generally speaking, I prefer not to use stub chains, as they are often a sign that you are violating the Law of Demeter. But, if I had to, this is how I would mock that sequence:
let(:vanity_url) { 'https://vanity.url' }
let(:partner_campaigns) { double('partner_campaigns') }
let(:loaded_partner_campaigns) { double('loaded_partner_campaigns') }
let(:partner_campaign) do
double("Contentful::Model", fields {:promotion_type => "Promotion 1"}
end
before do
allow(Contentful::PartnerCampaign)
.to receive(:find_by)
.with(vanity_url: vanity_url)
.and_return(partner_campaigns)
allow(partner_campaigns)
.to receive(:load)
.and_return(loaded_partner_campaigns)
allow(loaded_partner_campaigns)
.to receive(:first)
.and_return(partner_campaign)
end
A: This is what I would do. Notice that I split the "mocking" part from the "expecting" part, because usually I'll have some other it examples further down (which will then also need the same "mocked" logic), and because I prefer to keep concerns separate: anything inside an it example should normally focus on "expecting", so any mocks or other setup logic I normally put outside the it.
let(:expected_referral_source) { 'test_promo_path' }
let(:contentful_model_double) { instance_double(Contentful::Model, promotion_type: 'Promotion 1') }
before(:each) do
# mock return values chain
# note that you are not "expecting" anything yet here
# you're just basically saying that: if Contentful::PartnerCampaign.find_by(vanityUrl: expected_referral_source).load.first is called, that it should return contentful_model_double
allow(Contentful::PartnerCampaign).to receive(:find_by).with(vanityUrl: expected_referral_source) do
double.tap do |find_by_returned_object|
allow(find_by_returned_object).to receive(:load) do
double.tap do |load_returned_object|
allow(load_returned_object).to receive(:first).and_return(contentful_model_double)
end
end
end
end
end
it 'calls Contentful::PartnerCampaign.find_by(vanityUrl: referral_source).load.first' do
expect(Contentful::PartnerCampaign).to receive(:find_by).once do |argument|
expect(argument).to eq({ vanityUrl: expected_referral_source})
double.tap do |find_by_returned_object|
expect(find_by_returned_object).to receive(:load).once do
double.tap do |load_returned_object|
expect(load_returned_object).to receive(:first).once
end
end
end
end
end
it 'does something...' do
# ...
end
it 'does some other thing...' do
# ...
end
If you do not know about ruby's tap method, feel free to check this out
A: I think you need to refactor the chain in two lines like this:
model = double("Contentful::Model", fields: { promotion_type: "Promotion 1" })
campaign = double
allow(Contentful::PartnerCampaign).to receive(:find_by).with(vanityUrl: 'test_promo_path').and_return(campaign)
allow(campaign).to receive_message_chain(:load, :first).and_return(model)
Then you can write your spec that will pass that attribute to find_by and check the chain. | unknown | |
d3823 | train | Here goes:
#include <iostream>
#include <sstream>
#include <boost/archive/binary_oarchive.hpp>
#include <boost/serialization/vector.hpp>
int main()
{
std::ostringstream oss;
boost::archive::binary_oarchive oa(oss);
std::vector<char> v(1000);
// stream
oa << v;
std::cout << "The number of bytes taken for the vector in an archive is " << oss.str().size() << "\n";
}
On my system it prints:
The number of bytes taken for the vector in an archive is 1048
See it Live On Coliru
It's possible that MPI's packed_oarchive does additional compression. I haven't found this in the docs on a quick scan. | unknown | |
d3824 | train | Your query is logically correct but syntactically wrong:
match(n:Student{id:2), <--- missed a closing curly brace here
(n)-[r1:STUDENT_CLASS]->(b:Class),
(n)-[r2:STUDENT_RANK]->(m:Rank)
delete r1,r2
return n.name
Try this:
match(n:Student{id:2}),
(n)-[r1:STUDENT_CLASS]->(b:Class),
(n)-[r2:STUDENT_RANK]->(m:Rank)
delete r1,r2
return n.name
A: The vertical stroke | acts as an OR operator when selecting relationships:
match (n:Student {id:2})-[r:STUDENT_CLASS|STUDENT_RANK]->(b)
where b:Class or b:Rank
delete r | unknown | |
d3825 | train | I was able to do this by following: https://superuser.com/questions/949560/how-do-i-set-system-environment-variables-in-windows-10
Once you have added the new Variable, make sure to restart PowerShell as @J. Bergmann has mentioned. | unknown | |
d3826 | train | The MSDN documentation does explain this, but it isn't laid out very clearly.
In a trigger, SQL Server automatically makes 2 special in-memory tables available to you:
*
*inserted: the data which was added to the table (for insert and update statements)
*deleted: the data which was removed from the table (for update and delete statements)
They have the same columns as the actual table, but are completely read-only - you cannot add columns or indexes to them or change the data inside them.
So in your example, to get the name of the person being removed, you can do the following inside the trigger:
DECLARE @name varchar(100);
SELECT @name = name from deleted;
Important note
Be aware though that if multiple rows were deleted from the table, then deleted will contain multiple rows - the trigger is not called individually once for each row.
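In that case, handle deleted as a set rather than a single value - a minimal sketch (AuditLog is a hypothetical table):
INSERT INTO AuditLog (name)
SELECT name FROM deleted; | unknown | |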
d3827 | train | The issue is related to async/sync communication. After separating the js for html2, I was calling the onLoad2() function inside the onLoad() function, assuming the connection was already established - but no. You should wait until the connection is established and then call the other function. js2:
let db;
let itemsCollection;
let stClient;
let globalName;
let temp;
let clientPromise = stitch.StitchClientFactory.create('facebookclone-tlwvi');
function onLoad(){
clientPromise.then(stitchClient=>{
stClient=stitchClient;
db = stClient.service('mongodb', 'mongodb-atlas').db('FbUsers');
itemsCollection=db.collection("ClonedFbUsers");
onLoad2(); // if you call this outside then(), it won't work: onLoad2() would run before the db connection is established
});
}
function onLoad2 () {
var pic;
var url = window.location.href,
params = url.split('?')[1].split('&'),
data = {}, tmp;
for (var i = 0, l = params.length; i < l; i++) {
tmp = params[i].split('=');
data[tmp[0]] = tmp[1];
}
document.getElementById('name').value = data.name;
document.getElementById('pp').src=decodeURIComponent(data.prof);
showComments(data.name);
}
function showComments(globalName){
console.log("i am here");
const userId = stClient.authedId();
stClient.login().then(()=>
itemsCollection.find({ userName : globalName }).execute()
).then(docs=>
document.getElementById("comments").innerHTML = docs.map(c => "<div>" + c.comments.msg + "</div>").join(" ")
);
}
function addComment(){
var n=document.getElementById("name").value;
var com= document.getElementById("comment").value
stClient.login().then(()=>
itemsCollection.updateOne({userName : n}, {$push:{comments:{msg:com,time:"22:30",like:4}}})
).then(() => itemsCollection.find({}).execute())
.then(docs =>
docs.forEach((doc, index) =>
console.log(`${index}: ${JSON.stringify(doc)}`)
)
);
}
HTML for the js2:
<!DOCTYPE html>
<html>
<head>
<title>MyProfile</title>
<link rel="stylesheet" type="text/css" href="style2.css">
<script src="https://s3.amazonaws.com/stitch-sdks/js/library/v3/stable/stitch.min.js"></script>
<script src="assig022.js"></script>
</head>
<header>
<div class="Header">
<div class="search">
<img src="white.png" >
</div>
<div class="profile curs">
<img src="me+.png">
<span class="barr" >     Home </span><span>                                    <img src="frndd.png">   <img src="messengerr.png"> <img src="notf.png" style="">         <img src="secret.png"></span>
</div>
</div>
</header>
<body onload="onLoad()">
<img id = "pp" src="" width="50" height="50">
<p> <input style="border: none; font-size: 20px; width: 100%" id="name" type="text" readonly></p>
<button type="submit" onclick="addPhoto()">Add Photo</button> <br>
<button type="submit">Choose Profile Photo</button><br>
<div class="comments" id="comments">
</div>
<br>
<input type="text" name="comment" id="comment" placeholder="Type comment">
<button onclick="addComment()">Send</button>
</body>
</html> | unknown | |
d3828 | train | Simply use a for loop to iterate over every character of the string.
Demo:
In [28]: s = "qazxswedcvfrgbnhyujmkiopl"
In [29]: count = 0
In [30]: for i in s:
....: if i.lower() in ["a", "i", "o", "u", "e"]:
....: count += 1
....:
In [31]: print count
5
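A more compact variant of the same count, using a generator expression:
In [32]: print sum(1 for ch in s if ch.lower() in "aeiou")
5 | unknown | |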
d3829 | train | It seems that your old weblogic (10) had a different session descriptor on the weblogic.xml.
If you want to keep the same sessionID length you should update your weblogic 12c's weblogic.xml:
session-descriptor node id-length value (default is 52).
Reference: https://docs.oracle.com/cd/E24329_01/web.1211/e21049/weblogic_xml.htm#WBAPP587 | unknown | |
d3830 | train | See Automatic Naming and Relative Imports, in the docs:
http://celeryq.org/docs/userguide/tasks.html#automatic-naming-and-relative-imports
The task's name is "tasks.Submitter" (as listed in the celeryd output),
but you import the task as "fable.jobs.tasks.Submitter"
I guess the best solution here is if the worker also sees it as "fable.jobs.tasks.Submitter",
it makes more sense from an app perspective.
CELERY_IMPORTS = ("fable.jobs.tasks", )
A: This is what I did which finally worked
in Settings.py I added
CELERY_IMPORTS = ("myapp.jobs", )
under myapp folder I created a file called jobs.py
from celery.decorators import task
@task(name="jobs.add")
def add(x, y):
return x * y
Then ran from commandline: python manage.py celeryd -l info
In another shell I ran python manage.py shell, then
>>> from myapp.jobs import add
>>> result = add.delay(4, 4)
>>> result.result
and then I get:
16
The important point is that you have to rerun both command shells when you add a new function. You have to register the name both on the client and and on the server.
:-)
A: I believe your tasks.py file needs to be in a django app (that's registered in settings.py) in order to be imported. Alternatively, you might try importing the tasks from an __init__.py file in your main project or one of the apps.
Also try starting celeryd from manage.py:
$ python manage.py celeryd -E -B -lDEBUG
(-E and -B may or may not be necessary, but that's what I use). | unknown | |
d3831 | train | Do you really need to use those versions?
Why not simply replace
gem "authlogic", "2.1.6"
with
gem "authlogic"
and let the bundle solve version dependencies?
Sometimes after doing so in the gemfile you get an error from the bundler and you have to run bundle update authlogic before the general bundle install | unknown | |
d3832 | train | You can do that in 2 ways:
1. Simply change the response:
exports.get = id => Model.sum('price',
{
where: {
id,
}
}
).then(sum => ({sum})); //<----- little hack: the parentheses make the arrow function return the object
2. Use sequelize.fn. The one below will return an array, so you need to take the first element of the array for your expected result:
exports.get = id => Model.findAll({
attributes: [[sequelize.fn('SUM', sequelize.col('price')), 'sum']] ,
where : { id }
}); | unknown | |
d3833 | train | Your change to the indentation of the print statements only changes what the console prints out, not any of the data. The first version only prints the date (and whether it's a weekday) before and after it's been changed, while the second prints the date before, and then how it changes in each iteration of the while loop. I believe the first version is actually doing the processing you want, but you may need to be more specific about what you are looking for as output.
One thing though is that the new dates are not being stored anywhere. You start with a list twentycalender_after = [], but then in your for loop you reassign that name to reference the changed date: twentycalender_after = da_te + datetime.timedelta(20). You should append the new dates created in the loop to the list you start with:
output = [] #picking a new name to avoid confusion
start_dt = datetime.date(2020, 6, 6)
end_dt = datetime.date(2020, 6, 10)
mydates = pd.date_range(start_dt, end_dt)
for da_te in mydates:
twentycalender_after = da_te + datetime.timedelta(20)
print(twentycalender_after)
print(twentycalender_after.isoweekday())
while twentycalender_after.isoweekday() >5:
twentycalender_after += datetime.timedelta(1)
print(twentycalender_after) ## HERE
print(twentycalender_after.isoweekday()) ## HERE
output.append(twentycalender_after)
print('over')
Output:
[Timestamp('2020-06-26 00:00:00', freq='D'),
Timestamp('2020-06-29 00:00:00', freq='D'),
Timestamp('2020-06-29 00:00:00', freq='D'),
Timestamp('2020-06-29 00:00:00', freq='D'),
Timestamp('2020-06-30 00:00:00', freq='D')]
This doesn't address the public-holiday part yet though, but you could use something like the check below to see whether each date falls on a public holiday (as an extra or condition in the while loop).
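A minimal sketch (the holiday set here is a made-up example; Timestamp.date() makes the membership test against plain dates work):
holidays = {datetime.date(2020, 6, 29)}
while twentycalender_after.isoweekday() > 5 or twentycalender_after.date() in holidays:
    twentycalender_after += datetime.timedelta(1) | unknown | |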
d3834 | train | You can use ng-show or ng-if for this
<button class="button button-energized" ng-click="getPhoto()"><span ng-if="!lastPhoto ">Take Photos</span><span ng-if="lastPhoto">Retake Photo</span></button> | unknown | |
d3835 | train | I have reconstructed your code.
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Test</title>
<style>
body {
background-color: powderblue;
}
@media print {
@page {
size: landscape;
}
}/*required because this page will always have to be printed in landscape */
.Row{
display: flex;
flex-direction: row;
/* align-items: center; */
justify-content: center
}
.Cell{
/* justify-content: center */
align-items: center;
text-align:center;
}
.Cell .logo{
width: 30%;
height: auto;
}
</style>
</head>
<body>
<div class="Row">
<div class="Cell">
<img src="https://i.picsum.photos/id/173/200/200.jpg?hmac=avUVgEVHNuQ4yZJQhCWlX3wpnR7d_fGOKvwZcDMLM0I" alt="logo1" class="logo">
</div>
<div class="Cell">
<h1>THIS IS A VERY LONG MAIN HEADING XXX</h1>
<h2>THIS IS A SUBHEADING</h2>
<br>
<h2>ANOTHER SUB HEADING</h2>
</div>
<div class="Cell">
<img src="https://i.picsum.photos/id/173/200/200.jpg?hmac=avUVgEVHNuQ4yZJQhCWlX3wpnR7d_fGOKvwZcDMLM0I" alt="logo2" class="logo">
</div>
</div>
</body>
</html>
The link below will guide you in a much better way:
https://css-tricks.com/snippets/css/a-guide-to-flexbox/
A:
Setting the width of <img> elements to percentage value is a little confusing because their containers don't have a specified width, so it isn't clear what the width should be.
I assume you wanted the first and last .Cell to be 30% wide, but logos smaller and pushed to the opposite ends of the page. I set the width of the logos to be 100px, but you can tweak that as you like. The key part is pushing the logo in the last .Cell to the right, one way to do it using Flexbox:
.Cell:last-child {
display: flex;
align-items: flex-start;
justify-content: flex-end;
}
The align-items declaration is there so that the logo doesn't stretch to take up the height of the container.
I also removed some of the align-items declarations from your code that weren't doing anything, because their elements weren't Flex containers.
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Test</title>
<style>
body {
background-color: powderblue;
}
@media print {
@page {
size: landscape;
}
}
.Row {
display: flex;
justify-content: center
}
.Cell h1{
text-align: center;
}
.Cell h2{
text-align: center;
}
.Cell:first-child,
.Cell:last-child {
width: 30%;
}
.Cell:last-child {
display: flex;
align-items: flex-start;
justify-content: flex-end;
}
.Cell img {
width: 100px;
height: auto;
}
</style>
</head>
<body>
<div class="Row">
<div class="Cell">
<img src="https://i.picsum.photos/id/173/200/200.jpg?hmac=avUVgEVHNuQ4yZJQhCWlX3wpnR7d_fGOKvwZcDMLM0I" alt="logo1" class="logo">
</div>
<div class="Cell">
<h1>THIS IS A VERY LONG MAIN HEADING XXX</h1>
<h2>THIS IS A SUBHEADING</h2>
<br>
<h2>ANOTHER SUB HEADING</h2>
</div>
<div class="Cell">
<img src="https://i.picsum.photos/id/173/200/200.jpg?hmac=avUVgEVHNuQ4yZJQhCWlX3wpnR7d_fGOKvwZcDMLM0I" alt="logo2" class="logo">
</div>
</div>
</body>
</html> | unknown | |
d3836 | train | For: 1. Err: You must give at least one requirement to install (see "pip help install")
You need to run this on the Raspberry PI:
sudo pip install twilio
If you don't have pip installed then run:
sudo apt-get install python3-pip
and then again: sudo pip install twilio
for 2. Err: Traceback (most recent call last): Err: File "server.py". line12
Basically the twilio client definition needs to be similar to:
from twilio.rest import Client
client = Client(account_sid, auth_token)
so from the trace, line 12 in server.py should be similar to
from twilio.rest import Client  # this should also be changed
twilioClient = Client(account_sid, auth_token)  # this is line 12 | unknown | |
d3837 | train | If you are happy to split the string once you are in Python, you can try with regex and the module re:
# Python3
import re
p = re.compile(r"\['(.*)','(.*)']")
res = p.search("['ALLEGHANY','POLYGON((1308185.614362,...))']")
print(res.group(1)) # 'ALLEGHANY'
print(res.group(2)) # 'POLYGON((1308185.614362,...))' | unknown | |
d3838 | train | How can I get all combinations of lists that hold all the combinations of the sublists? The order of the lists in the list does not matter. E.g. [[1, 3, 2], [4, 2, 5, 6], [7, 2, 5], [8, 9, 10]], [[2, 1, 3], [4, 2, 5, 6], [7, 2, 5], [8, 9, 10]]
You need to permute each inner list and then take the Cartesian product of those permutations. First store the permutations of each inner list in a 2D list (named all_perms in the code); the product of those is then the required answer. For uniqueness, we store the results in a set. Python's itertools.permutations gives all permutations and itertools.product gives the Cartesian product. Here is the code:
from itertools import permutations as perm, product
# lists = [[1, 2, 3], [4, 2, 5, 6], [7, 2, 5], [8, 9, 10]]
lists = [[1,2],[3,4]] # let's check for a small input first
all_perms = [[] for _ in range(len(lists))]
for i, lst in enumerate(lists):
all_perms[i].extend(list(perm(lst)))
answer = set()
prods = product(*all_perms)
for tup in prods:
answer.add(tup)
print(answer)
# Output: {((1, 2), (4, 3)), ((2, 1), (3, 4)), ((2, 1), (4, 3)), ((1, 2), (3, 4))}
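The same result can be built in one line (equivalent logic, skipping the intermediate all_perms list):
answer = set(product(*(perm(lst) for lst in lists)))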
Feel free to ask if you have further queries. | unknown | |
d3839 | train | No. You can choose whatever size you want, just make sure the button or other elements are within safeAreaLayoutGuide.
Besides, the guidelines are just guidelines; they guide you as to what might look most appropriate as per Apple, but these are not necessarily restrictions that must be enforced.
A: They are guidelines not rules, go crazy with it. | unknown | |
d3840 | train | Your code is very close to achieving what you want, except you are attempting to delete the Embed object that you created instead of the Message object of the embed. Here's a slight tweak that will achieve what you need:
const wait = 30000;
let count;
const embed = new Discord.MessageEmbed()
.setColor('#9EFF9A')
.setTitle('Question?')
.setDescription('');
message.channel.send(embed).then(embedMessage => {
embedMessage.channel.awaitMessages(m => m.author.id == message.author.id,
{ max: 1, time: wait }).then(collected => {
embedMessage.delete();
count = collected.first().content;
console.log(count);
}).catch(() => {
embedMessage.delete();
return message.reply('No reply after ' + (wait / 1000) + ' seconds, operation canceled.').then(m => {
m.delete({ timeout: 15000 });
});
});
})
The secret here is using .then() on the method that sends the embed. This allows you to obtain the actual Message object of the embed that was sent, which you can then interact with. Now that you have the Message object for your embed, you can directly interact with the message using its methods, such as delete() and edit().
A: .then(() => {
message.delete()
})
is not working, because you never passed the message in as a parameter; therefore your embed does not exist in the context of .then().
You can try using .then() or await to delete a send message.
then Method
// const embed = ...
message.channel.send(embed).then(msg => {
msg.delete();
});
Await Method
// Make sure you're in an async function
//const embed = ...
const msg = await message.channel.send(embed);
msg.delete();
A: I am not to familiar with discordjs but from what I understand you create a message with the bot under the variable "message" which has the properties seen here:
https://discord.js.org/#/docs/main/master/class/Message
Then you use that message to send an embed to the message's channel. The embed ask's a question and you then await for ensuing messages. You then want to take the first response and put it into the count variable. Then you want to delete the original embed. If this is all true I would suggest deleting the original message that houses the embed itself like so:
message.channel.awaitMessages(m => m.author.id == message.author.id,
{ max: 1, time: `${wait}` }).then(collected => {
message.delete();
count = collected.first().content;
console.log(count);
})
Or try this but I don't think this method will work:
message.channel.awaitMessages(m => m.author.id == message.author.id,
{ max: 1, time: `${wait}` }).then(collected => {
embed.delete();
count = collected.first().content;
console.log(count);
})
I would check out these two pages of documentation:
https://discord.js.org/#/docs/main/master/class/Message?scrollTo=delete
https://discord.js.org/#/docs/main/master/class/MessageEmbed
Welcome to Stack Overflow - tell me if one of these worked. | unknown | |
d3841 | train | If you need to resample per group, it is possible to use Grouper to resample per day; then, to add the missing values, Series.unstack is used with DataFrame.stack:
df = (df.groupby(['Type', pd.Grouper(freq='1D', key='Date')])['Value']
.mean()
.unstack()
.stack(dropna=False)
.reset_index(name='Value')
)
print (df)
Type Date Value
0 A 2021-01-01 1.0
1 A 2021-01-02 NaN
2 A 2021-01-03 2.0
3 B 2021-01-01 NaN
4 B 2021-01-02 3.0
5 B 2021-01-03 NaN
If you only need to append the missing datetimes per group, DataFrame.reindex is used:
mux = pd.MultiIndex.from_product([df['Type'].unique(),
pd.date_range(df['Date'].min(), df['Date'].max())],
names=['Type','Date'])
df = df.set_index(['Type','Date']).reindex(mux).reset_index()
print (df)
Date Type Value
0 A 2021-01-01 1.0
1 A 2021-01-02 NaN
2 A 2021-01-03 2.0
3 B 2021-01-01 NaN
4 B 2021-01-02 3.0
5 B 2021-01-03 NaN | unknown | |
d3842 | train | I was able to figure it out. I changed my Utils.ShowMessage function as follows:
public static void ShowMessage(UIViewController myview, string message, string messagetype)
{
var window = UIApplication.SharedApplication.KeyWindow;
var vc = window.RootViewController;
while (vc.PresentedViewController != null)
{
vc = vc.PresentedViewController;
}
var okAlertController = UIAlertController.Create(message, messagetype, UIAlertControllerStyle.Alert);
okAlertController.AddAction(UIAlertAction.Create("OK", UIAlertActionStyle.Default, null));
vc.PresentViewController(okAlertController, true, null);
}
One needs to get the root view controller and display the alert from that. | unknown | |
d3843 | train | Why not just:
//An abstract class that represents a person
abstract class Person
{
public string Name { get; set; }
}
//A concrete person that represents a basketball player
class Player : Person
{
}
//A concrete person that represents a basketball coach
class Coach : Person
{
}
The usage of generics seems totally unnecessary. The simple hierarchy should be enough for you.
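A quick usage sketch (hypothetical names), showing both kinds stored through the shared base type:
var members = new List<Person> { new Player { Name = "Kobe" }, new Coach { Name = "Phil" } };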
A: You seem to be mixing up inheritance and generics. While technically different, you can see a generic class as a template, and not as an is-a relationship. Normal inheritance is all you want here.
A: Because you declare Team as type Lakers and Person as type Player. Those are not equal.
Do you need to constrain your List with the type parameter T? Can't you just declare it as List<Person>?
abstract class Team<T>
{
public List<Person> Members = new List<Person>();
} | unknown | |
d3844 | train | Perhaps the fonts you have in ./src/fonts are not copied to public? You can check by navigate to the Network tab in the developer tools of your preferred browser, filter by font and see the response. It's likely that they're 404.
A quick fix would be to manually copy the fonts to static directory (create one if you don't have it.)
If you're doing something special with the fonts (for example, subfontting) you might be interested in adding hash to the fonts file & replace the file name in your font.css. | unknown | |
d3845 | train | Variables declared with var have a file scope, and are indeed inside a closure like you mentioned. However, if you declare new variables without the var keyword, they are accessible throughout your project (given you load files in the right order), as Meteor declares these variables outside the closure.
In your case the solution is to declare your form functions without var, or maybe better declare a new object without var and put them as methods in there:
FormHelpers = {};
FormHelpers.ucwords = function(str)
{
return str.split(" ").map(function(i){return i[0].toUpperCase() + i.substring(1)}).join(" ");
};
...
You can then use these helpers in both your add and edit templates, or anywhere else you need them.
More info on namespacing in the Meteor docs. | unknown | |
d3846 | train | OK, I have it:
if (intent.getAction().equals(Intent.ACTION_VIEW)) {
Toast.makeText(this, "work", Toast.LENGTH_SHORT).show();
Uri data = intent.getData();
// ContentResolver contentResolver = getContentResolver();
String text = getStringFromShare(data);
Log.d("sasas", "onCreate: sometext");
}
}
private String getStringFromShare(Uri data) {
String text = null;
try {
ContentResolver cr = getApplicationContext().getContentResolver();
InputStream is = cr.openInputStream(data);
if (is != null) {
StringBuffer buf = new StringBuffer();
BufferedReader reader = new BufferedReader(new InputStreamReader(is));
String str;
if (is != null) {
while ((str = reader.readLine()) != null) {
buf.append(str);
}
}
is.close();
text = buf.toString();
}
} catch (FileNotFoundException e) {
e.printStackTrace();
} catch (IOException e) {
e.printStackTrace();
}
return text;
} | unknown | |
d3847 | train | Well, I'm not sure I get your question correctly but, from what I understand, you just want to proxy the API calls to MS Graph and make some changes on the fly to the response.
OData queries are just simple query parameters (see the OData tutorial). So, basically, you just have to get those query parameters in your proxy and forward them to the MS Graph. The response you'll get will then be compliant with the original query.
However, depending on how you mangle the data, you may end up not being compliant with the user query. For example:
*
*The user made a $select(Id) query, but your logic adds a custom property Foo. The user just wanted Id but you added Foo anyway.
*The user made an $orderby Name asc query, but your logic modifies the property Name. The result may no longer be ordered after your logic runs.
*The user wants to make $filter query on the Foo property. MS Graph will complain because it doesn't know the Foo property.
*Etc.
If you want to handle those cases, well, you'll have to parse the different OData queries and adapt your logic accordingly. $orderby, $top/$skip, $count, $expand and $select should be pretty straightforward; $filter and $search would require a bit more work.
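For instance, $top/$skip could be pulled out like this (a rough ASP.NET Core sketch; the defaults are arbitrary assumptions):
var top = int.TryParse(HttpContext.Request.Query["$top"], out var t) ? t : 50;
var skip = int.TryParse(HttpContext.Request.Query["$skip"], out var s) ? s : 0;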
A: Thanks. I was looking for a solution to this.
https://community.apigee.com/questions/8642/how-do-we-fetch-the-query-parameters-odata-standar.html
Instead of parsing the URL to get the OData query parameters, I wanted to understand the standard method for processing OData requests.
Now I am doing the below to extract the OData query parameters and pass them to the MS Graph API:
string strODataQuery = String.Join("&", HttpContext.Request.Query.Where(kvp => kvp.Key.StartsWith("$")) .Select(kvp => String.Format("{0}={1}", kvp.Key, Uri.EscapeDataString(kvp.Value))));
And I am performing the conversions after retrieving the results.
Regards | unknown | |
d3848 | train | I use an environment file that stays on my computer and contains some variables linked to my environment.
In my Django settings.py (which is uploaded on github):
# MANDRILL API KEY
MANDRILL_KEY = os.environ.get('DJANGO_MANDRILL_KEY')
On dev env, my .env file (which is excluded from my Git repo) contains:
DJANGO_MANDRILL_KEY=PuoSacohjjshE8-5y-0pdqs
This is a "pattern" proposed by Heroku: https://devcenter.heroku.com/articles/config-vars
I suppose there is a simple way to setit without using Heroku though :)
To be honest, the primary goal to me is not security-related, but rather related to environment splitting. But it can help for both I guess.
A: I use something like this in settings.py:
import json
if DEBUG:
secret_file = '/path/to/development/config.json'
else:
secret_file = '/path/to/production/config.json'
with open(secret_file) as f:
SECRETS = json.load(f)
secret = lambda n: str(SECRETS[n])
SECRET_KEY = secret('secret_key')
DATABASES['default']['PASSWORD'] = secret('db_password')
and the JSON file:
{
"db_password": "foo",
"secret_key": "bar"
}
This way you can omit the production config from git or move it outside your repository. | unknown | |
d3849 | train | This just isn't going to work, even if you fix the typos. COM interop doesn't have a standard mapping from List<T> to something in COM, and it certainly won't map it to std::list. Generics aren't allowed to appear in COM interfaces.
UPDATE
I tried using ArrayList as the return type; as that's non-generic, I thought maybe the tlb would include type information for it. That didn't work, so I tried IList. That didn't work either (the #import statement produced a .tlh file that referred to IList but had no definition for it.)
So as a workaround I tried declaring a simple list interface. The code ends up like this:
[Guid("7366fe1c-d84f-4241-b27d-8b1b6072af92")]
public interface IStringCollection
{
int Count { get; }
string Get(int index);
}
[Guid("8e8df55f-a90c-4a07-bee5-575104105e1d")]
public interface IMyThing
{
IStringCollection GetListOfStrings();
}
public class StringCollection : List<string>, IStringCollection
{
public string Get(int index)
{
return this[index];
}
}
public class Class1 : IMyThing
{
public IStringCollection GetListOfStrings()
{
return new StringCollection { "Hello", "World" };
}
}
So I have my own (very simplistic) string collection interface. Note that my StringCollection class doesn't have to define the Count property because it inherits a perfectly good one from List<string>.
Then I have this on the C++ side:
#include "stdafx.h"
#import "..\ClassLibrary5.tlb"
#include <vector>
#include <string>
using namespace ClassLibrary5;
int _tmain(int argc, _TCHAR* argv[])
{
CoInitialize(0);
IMyThingPtr thing(__uuidof(Class1));
std::vector<std::string> vectorOfStrings;
IStringCollectionPtr strings(thing->GetListOfStrings());
for (int n = 0; n < strings->GetCount(); n++)
{
const char *pStr = strings->Get(n);
vectorOfStrings.push_back(pStr);
}
return 0;
}
I have to manually copy the contents of the string collection into a proper C++ standard container, but it works.
There may be a way to get proper type info from the standard collection classes, so you don't have to make your own collection interfaces, but if not, this should serve okay.
Alternatively, have you looked at C++/CLI? That would work pretty seamlessly, although it still wouldn't convert CLR collections to std containers automatically.
A: It looks like there are several typos. In the C# you declare a class called TestLib, but are trying to construct a TestCls. In addition, neither the class nor the method is public (which should be a compile error at least on Disp, since the interface has to be implemented publicly).
A: Guess: Disp() is not declared public | unknown | |
d3850 | train | You're adding the same array object to the array repeatedly. Instead clone:
var test_first = [ ...test[0] ];
A: You are manipulating the value test_first, and you are implicitly stringifying the value in test[0][0] by accessing test[0] - which returns an array containing a single number, not a number. Code that produces what you asked for would be
const test = [[10]]
const intervalID = setInterval(() =>{
const new_val = test[0][0] + 1;
test.unshift([new_val]);
console.log(test);
},1000);
I'm pretty sure you're looking for an array of numbers like [13, 12, 11, 10]. That's produced by below code
const test = [10]
const intervalID = setInterval(() =>{
const new_val = test[0] + 1;
test.unshift(new_val);
console.log(test);
},1000); | unknown | |
d3851 | train | A few mistakes I could find right off the bat:
*
*In your HTML, there are no opening and closing quotes around the ids
*The counter variable is declared twice. You could rename the counter button variable to something like counterButton
*In the code snippet, include the jQuery library (see the sketch below)
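For example, a minimal sketch of those fixes (the button id and markup are hypothetical, since the original snippet isn't shown):
<button id="counterButton">Count</button>
<script src="https://code.jquery.com/jquery-3.6.0.min.js"></script>
<script>
    var counter = 0; // declared once, outside the handler
    $('#counterButton').click(function () {
        counter++;
    });
</script>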
d3852 | train | In order to do this sort of management, you should access the Databricks account portal at the tenant level:
Databricks Account
From there, you can create and manage the metastores, as well as assign a metastore with a Databricks Workspace, which is what you have created.
Take into account that for most of what you have described, you must be an account admin for the Databricks Account.
As per the official docs (source):
The first Azure Databricks account admin must be an Azure Active Directory Global Administrator at the time that they first log in to the Azure Databricks account console. Upon first login, that user becomes an Azure Databricks account admin and no longer needs the Azure Active Directory Global Administrator role to access the Azure Databricks account. The first account admin can assign users in the Azure Active Directory tenant as additional account admins (who can themselves assign more account admins). Additional account admins do not require specific roles in Azure Active Directory.
A:
You must be an Azure Databricks account admin to get started with Unity Catalog; the first time, this can be done using an Azure Active Directory Global Administrator of your subscription.
As per official documentation:
The first Azure Databricks account admin must be an Azure Active
Directory Global Administrator at the time that they first log in to
the Azure Databricks account console. Upon first login, that user
becomes an Azure Databricks account admin and no longer needs the
Azure Active Directory Global Administrator role to access the Azure
Databricks account. The first account admin can assign users in the
Azure Active Directory tenant as additional account admins (who can
themselves assign more account admins). Additional account admins do
not require specific roles in Azure Active Directory.
How to identify your Microsoft Azure global administrators for your subscriptions?
The global administrator has access to all administrative features. By default, the person who signs up for an Azure subscription is assigned the global administrator role for the directory. Only global administrators can assign other administrator roles.
Login into the Azure Databricks account console via Global admin and then account admin can assign users in the Azure Active Directory tenant.
For more details, refer to Azure Databricks - Get started using Unity Catalog and also refer to MS Q&A thread - How to access Azure Databricks account admin? addressing similar issue.
A: Configure your Unity Catalog Metastore
Go to + New, click on New notebook, and open it.
If you already have catalogs with data, then use the command below to check:
# Show all catalogs in the metastore.
display(spark.sql("SHOW CATALOGS"))
If you don't have a catalog, create a utility catalog:
# Create a catalog.
spark.sql("CREATE CATALOG IF NOT EXISTS catalog_name")
# Set the current catalog.
spark.sql("USE CATALOG catalog_name")
For more information, refer to the official documentation and the example notebook.
d3853 | train | To make it easier for you,
I would re-download the whole bundle from http://developer.android.com/sdk/index.html
d3854 | train | Not sure if this is the most optimized solution, but you can use:
*
*rowClass (https://www.telerik.com/kendo-angular-ui/components/grid/api/GridComponent/#toc-rowclass)
*selectionChange (https://www.telerik.com/kendo-angular-ui/components/grid/api/GridComponent/#toc-selectionchange)
With that function and event, you can add a custom class to your selected rows and use CSS for the fade animation. Your code would be something like this:
import { products } from './products';
import { Component, ViewEncapsulation } from '@angular/core';
import { RowClassArgs } from '@progress/kendo-angular-grid';
@Component({
selector: 'my-app',
encapsulation: ViewEncapsulation.None,
styles: [`
.k-grid tr.isSelected {
background-color: #41f4df;
transition: background-color 1s linear;
}
.k-grid tr.isNotSelected {
background-color: transparent;
transition: background-color 2s linear;
}
`],
template: `
<kendo-grid [data]="gridData"
[height]="410"
kendoGridSelectBy="ProductID"
[rowClass]="rowCallback"
(selectionChange)="onSelect($event)">
<kendo-grid-column field="ProductID" title="ID" width="40">
</kendo-grid-column>
<kendo-grid-column field="ProductName" title="Name" width="250">
</kendo-grid-column>
</kendo-grid>
`
})
export class AppComponent {
public gridData: any[] = products;
public onSelect(e){
setTimeout(() => {
e.selectedRows[0].dataItem.isSelected = true;
setTimeout(() => {
e.selectedRows[0].dataItem.isSelected = false;
}, 2000);
}, 500);
}
public rowCallback(context: RowClassArgs) {
if (context.dataItem.isSelected){
return {
isSelected: true,
};
} else {
return {isNotSelected: true};
}
}
}
-- EDIT --
Just noticed that you want to do that only with the second row. In that case, you can replace line e.selectedRows[0].dataItem.isSelected = true; with: products[1].isSelected = true;.
And use your button to call the onSelect function. | unknown | |
d3855 | train | $("ul li").click(function(){
    $("ul li.active").removeClass('active');
    $(this).stop().addClass('active');
});
d3856 | train | To run a transformation programmatically, you should do the following:
*
*Initialise Kettle
*Prepare a TransMeta object
*Prepare your steps
*
*Don't forget about Meta and Data objects!
*Add them to TransMeta
*Create Trans and run it
*
*By default, each transformation spawns a thread per step, so use trans.waitUntilFinished() to force your thread to wait until execution completes
*Pick up the execution's results if necessary
Use this test as example: https://github.com/pentaho/pentaho-kettle/blob/master/test/org/pentaho/di/trans/steps/textfileinput/TextFileInputTests.java
Also, I would recommend you create the transformation manually and load it from a file, if that is acceptable for your circumstances. This will help to avoid lots of boilerplate code. It is quite easy to run transformations in this case, see an example here: https://github.com/pentaho/pentaho-kettle/blob/master/test/org/pentaho/di/TestUtilities.java#L346
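A minimal Java sketch of that load-from-file approach (the .ktr path and error handling are illustrative, not from the linked tests):
import org.pentaho.di.core.KettleEnvironment;
import org.pentaho.di.trans.Trans;
import org.pentaho.di.trans.TransMeta;

public class RunTransformation {
    public static void main(String[] args) throws Exception {
        KettleEnvironment.init();                  // initialise Kettle once per JVM
        TransMeta transMeta = new TransMeta("/path/to/transformation.ktr");
        Trans trans = new Trans(transMeta);
        trans.execute(null);                       // null = no command-line arguments
        trans.waitUntilFinished();                 // each step runs in its own thread
        if (trans.getErrors() > 0) {
            throw new RuntimeException("Transformation failed");
        }
    }
}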
d3857 | train | The underlying implementation of both is pretty much identical:
*
*push thread bindings (using the bindings supplied)
*try the body
*finally pop thread bindings
binding was added in Clojure 1.0 and with-bindings in 1.1. I don't see the latter used in any code tho', just the former. | unknown | |
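A quick illustration of the difference in calling convention (binding takes a binding vector, with-bindings takes a map of Vars to values):
(def ^:dynamic *x* 1)

(binding [*x* 2] *x*)          ;=> 2
(with-bindings {#'*x* 3} *x*)  ;=> 3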
d3858 | train | Try something like this
List<string> list = new List<string>();
DataTable dt1 = new DataTable();
dt1.Columns.Add("td",typeof(int));
var rows = doc.DocumentNode.SelectNodes("xpath link")
.Descendants("tr")
.Where(tr=>tr.Elements("td").Count()>1)
.Select(td => td.InnerText.Trim())
.ToList();
foreach (var row in rows)
{
    dt1.Rows.Add(new object[] { int.Parse(row) }); // note: dt1, the DataTable declared above
}
d3859 | train | That's because you're setting it inside an asynchronous block, and asynchronous blocks return immediately. If you look at the timestamps of the two logs, you'll see that the outer log is actually posted before both the inner log and the setting of the variable.
From the GCD docs on dispatch_async():
This function is the fundamental mechanism for submitting blocks to a
dispatch queue. Calls to this function always return immediately after
the block has been submitted and never wait for the block to be
invoked. The target queue determines whether the block is invoked
serially or concurrently with respect to other blocks submitted to
that same queue. Independent serial queues are processed concurrently
with respect to each other.
A: Your "outside" NSLog statement should actually go in that inner dispatch_async block that is set to run on the main thread, because that block will execute after you've set the value of nextTime. Any code you place below your asynchronous block call will likely execute way before the code inside the block. | unknown | |
d3860 | train | create a new file called authorization.guard.ts and add this
import { Injectable } from '@angular/core';
import { CanActivate, ActivatedRouteSnapshot, RouterStateSnapshot } from '@angular/router';
import { Observable } from 'rxjs/Observable';
import {AppContextService} from './context';
@Injectable()
export class AuthorizationGuard implements CanActivate {
constructor(
private appContextService:AppContextService
){}
canActivate(
next: ActivatedRouteSnapshot,
state: RouterStateSnapshot): boolean {
return this.appContextService.getAuthAdminLoggednIn();
}
}
later in your main module import {AuthorizationGuard}
add this in your each router path
{
path: 'dashboard',
canActivate:[AuthorizationGuard]
},
Refer this files for complete authorization
Refer this | unknown | |
d3861 | train | Try escaping your slashes maybe?
system('"C:\\Program Files\\Java\\jre7\\bin\\java.exe" -server -Xincgc -Xmx8192M -jar craftbukkit.jar 2>&1'); | unknown | |
d3862 | train | The reason you're getting a null issue is that on @JoinColumn(name = "meeting_settings_name", nullable = false) you've got nullable = false. the column you're joining on is meeting_settings_name which doesn't seem to be a column on meeting_times and the actual name on meeting_settings is meeting_name. You'll have to add meeting_name to meeting_times to create a relation between the two tables to get this to work. | unknown | |
d3863 | train | You could define a set of serializable commands (see command design pattern for further details) that are generated whenever a change must be performed.
Then you can execute those commands locally to apply changes to your model and serialize those commands in a queue. Whenever a client pulls them, it can simply reapply the same commands in order to its local model and get the same result you achieved server side.
Somehow your server behaves exactly like a client in regard of the changes to be applied, with the difference that it pulls them immediately.
Considering your use case, commands can be defined as insertions in a list and created along with all the required parameters. You can easily extend it to deletions and updates to the objects of the list. | unknown | |
d3864 | train | It is possible, but it requires deep knowledge of shader writing. Why not use the built-in Volumetric Fog? Unity has its own implementation, and installation guide. | unknown | |
d3865 | train | Remove the WebLogic Domain data folder and setup it again. This time I restart the WebLogic server domain after the WebLogic Domain data folder setup and enable the SSL after. Next open the browser with the https address and it work. | unknown | |
d3866 | train | Here is a snippet from the descriptor_extractor_matcher.cpp sample available from OpenCV:
if( !isWarpPerspective && ransacReprojThreshold >= 0 )
{
cout << "< Computing homography (RANSAC)..." << endl;
vector<Point2f> points1; KeyPoint::convert(keypoints1, points1, queryIdxs);
vector<Point2f> points2; KeyPoint::convert(keypoints2, points2, trainIdxs);
H12 = findHomography( Mat(points1), Mat(points2), CV_RANSAC, ransacReprojThreshold );
cout << ">" << endl;
}
Mat drawImg;
if( !H12.empty() ) // filter outliers
{
vector<char> matchesMask( filteredMatches.size(), 0 );
vector<Point2f> points1; KeyPoint::convert(keypoints1, points1, queryIdxs);
vector<Point2f> points2; KeyPoint::convert(keypoints2, points2, trainIdxs);
Mat points1t; perspectiveTransform(Mat(points1), points1t, H12);
double maxInlierDist = ransacReprojThreshold < 0 ? 3 : ransacReprojThreshold;
for( size_t i1 = 0; i1 < points1.size(); i1++ )
{
if( norm(points2[i1] - points1t.at<Point2f>((int)i1,0)) <= maxInlierDist ) // inlier
matchesMask[i1] = 1;
}
// draw inliers
drawMatches( img1, keypoints1, img2, keypoints2, filteredMatches, drawImg, CV_RGB(0, 255, 0), CV_RGB(0, 0, 255), matchesMask
#if DRAW_RICH_KEYPOINTS_MODE
, DrawMatchesFlags::DRAW_RICH_KEYPOINTS
#endif
);
#if DRAW_OUTLIERS_MODE
// draw outliers
for( size_t i1 = 0; i1 < matchesMask.size(); i1++ )
matchesMask[i1] = !matchesMask[i1];
drawMatches( img1, keypoints1, img2, keypoints2, filteredMatches, drawImg, CV_RGB(0, 0, 255), CV_RGB(255, 0, 0), matchesMask,
DrawMatchesFlags::DRAW_OVER_OUTIMG | DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS );
#endif
}
else
drawMatches( img1, keypoints1, img2, keypoints2, filteredMatches, drawImg );
The key lines for the filtering are performed here:
if( norm(points2[i1] - points1t.at<Point2f>((int)i1,0)) <= maxInlierDist ) // inlier
matchesMask[i1] = 1;
Which is measuring the L2-norm distance between the points (either 3 pixels if nothing was specified, or user-defined number of pixels reprojection error).
Hope that helps!
A: you can use the size of the vector named "ptpairs" in order to decide how similiar the pictures are.
this vector contains the matching keypoints, so his size/2 is the number of matches.
i think you can use the size of ptpairs divided by the total number of keypoints in order to set an appropriate threshold.
this will probably give you an estimation to the similiarty between them. | unknown | |
d3867 | train | Turns out I didn't understand what the namespace was supposed to be. The following is correct:
foreach($rss_items as $item) {
$public_url = $item->get_item_tags('http://xml.theplatform.com/media/data/Media', 'publicUrl');
print_r($public_url);
} | unknown | |
d3868 | train | For the request test step, add Script Assertion with below snippet:
//Check response is not empty
assert context.response
//Parse response and fetch required value
def cResponse = new XmlSlurper().parseText(context.response).'**'.find {it.name() == 'CompressedResponse'}?.text()
log.info "Extracted data : $cResponse" | unknown | |
d3869 | train | You can store them in a map. The solution can be extended easily to arbitrarily many pointers, but I've used three here for concreteness.
std::unordered_map<MyType *, double> computed_values;
for (MyType *p: {A, B, C}) {
if (computed_values.find(p) == computed_values.end()) {
computed_values[p] = p->get();
}
}
double result = computed_values[A] + computed_values[B] * computed_values[C];
A->set(result);
As others have pointed out, make sure you profile to make sure this is actually worth the overhead of std::unordered_map lookups.
A: Assuming get() methods are really costly to the extent of producing measurable performance difference,
double a,b,c;
a = A->get();
b = (B==A?a:B->get());
c = (C==B?b:(C==A?a:c->get()));
return A->set(a+b*c);
A: Assuming the get() methods are reasonably cheap, you'd be better off just doing:
return A->set(A->get() + B->get() * C->get());
The other approach simply inserts a bunch of conditional jumps into your code, which could easily end up being more expensive than the original code. | unknown | |
d3870 | train | With my limited understanding of what REST is about, then the following might be the "most" restful.
GET /resource/?page=<pageenr>&asof=<datetime>
Since the content of the representation would never change unexpectedly, and caching could be used.
But to actually answer your question, I think the parameter page is the preferred method.
A: I'd go with option (2). Why?
*
*You can later add page-size parameter to the query so the client can specify the page size.
*In case no page parameter was specified you can just return the first page (the default). In many cases you client might need only the first page, so it simplifies the protocol between client and server.
A: What the URI looks like is not the most important part. What you should be thinking about instead is how it is presented to the user. A page should for example have a link to the "next" page and another link to the "previous" page (if there is one). Take a look at RFC 5005 Feed Paging and Archiving | unknown | |
d3871 | train | Use its internal routing framework with the donotlog action:
route = ^foo donotlog:
But ensure your instance has internal routing support compiled in (if not you should see a warning in the startup logs). | unknown | |
d3872 | train | It looks like you can simply use the centerCoordinate property of the MKMapView class - the docs say:
Changing the value in this property centers the map on the new coordinate without changing the current zoom level. It also updates the values in the region property to reflect the new center coordinate and the new span values needed to maintain the current zoom level.
So basically, you just do:
self.mapView.centerCoordinate = self.mapView.userLocation.location.coordinate; | unknown | |
d3873 | train | calloc() description from man7
#include <stdlib.h>
void *calloc(size_t nelem, size_t elsize);
The calloc() function shall allocate unused space for an array of
nelem elements each of whose size in bytes is elsize. The space shall
be initialized to all bits 0. The order and contiguity of storage
allocated by successive calls to calloc() is unspecified. The pointer
returned if the allocation ...
I encourage you to keep reading in the link man7_calloc.
now after reading the description above, calling calloc seems easy to me:
in your case we allocating array of one struct person
struct Person* p = NULL;
struct Person* p = calloc(1, sizeof(struct Person));
you must check the return value of calloc(if calloc succedd to allocate, like malloc):
if(p == NULL){
puts("calloc failed");
...
...
} | unknown | |
d3874 | train | Here:
char a[] = "Hello";
char * b = a;
You are taking advantage of array to pointer decay. So now b points to a[0]. That is, b holds the address of a[0]
and then
char ** c = &b;
c now points to the address of b, which is itself a pointer. c holds the address of b, which holds the address of a (see why people hate pointers now?)
If you want to access a[0] from c, you first need to de-reference it:
*c
which gives b, and then we need to dereference that to get back to a[0]:
*(*c)
If you want to use pointer arithmetic to access a[1], you want to increment b, so:
*c+1
and now we have the address of a[1], and if we want to print that, we need to dereference again:
*(*c+1)
You made the mistake of incrementing the address of b instead of the address of a[0] when you said:
**(c+2)
c held the address of b, not a, so incrementing on that will cause undefined behavior.
A: c points to the address of b on the current stack frame.
c + 2 points to some location on the stack frame, thanks to pointer arithmetic.
*(c + 2) You then access this location, taking unspecified bytes there as an address.
**(c + 2) Now you attempt to access said unspecified address, and luckily for you, it crashes.
A: Okay let's try again.
a is a pointer to the first element in the array, in this case "H".
c is a pointer to the address of b which is also a pointer to the first element.
When you increment c by 2, you are moving that pointer forward by two. So the memory address goes forward by two, but c is just a pointer to a and not a itself, so you're in unknown territory. Instead, what would likely work (untested):
cout<<*(*c+1)<<endl;
This dereferences c, so you get b (or a, same thing), you increment this pointer by 1, which stays in the array, and then you dereference again to access the value.!
A: &b + 2 is not a valid pointer.
For simplicity, let's assume that a pointer is just one char wide (our memory is very, very small).
Assume that a is stored at the address 10, b at 20, and c at 21.
char* b = a; is the same as char* b = &a[0]; – it stores the location of the array's first element (10) in b.
char**c = &b stores the location of b (20) in c.
It looks like this:
a b c
10 11 12 13 14 15 16 17 18 19 20 21 22...
| H | e | l | l | o | 0 | | | | | 10 | 20 |
It should be clear from this image that c + 2 is 22, which is not a valid address.
Dereferencing it is undefined and anything can happen – your crashing was just good luck.
If you were expecting the first l in "Hello", it's located at a[2], which is b[2] (or *(b + 2)), which is (*c)[2] (or *(*c + 2)).
A: char a[] = "Hello";
char * b = a;
char ** c = &b;
a and b point to the beginning of char array having "Hello" as its characters.
So, lets starting at memory location 0x100 Hello is stored.
a is pointer so it stores an address.
a and b both store this values 0x100. But their addresses are something else. Lets say address of a is 0x200 and address of b is 0x300.
c is also pointer. So it stores address of b.
So, c stores address 0x300 as its values.
c+1 = 0x304
c+2 = 0x308
*c : You want to access value stored at 0x300, which is the address of b.
*(c+1) : You want to access address 0x304
*(c+2) : You then access 0x308, taking unspecified bytes there as an address.
**(c+2) : Dereference the address 0x308. It may or may not be address of pointer variable. So, dereferencing may be undefined operation. | unknown | |
d3875 | train | I don't know what the problem was. However I've nuked everything from orbit and started over. I deleted everything except a single html file in my /home/public directory. I deleted /home/private/.npm-global. I recreated /home/private/.npm-global. I followed the steps listed in the answer to:
Global Node modules not installing correctly. Command not found
Specifically the steps provided by Vicente, posted Feb 9, 2019 at 16:30. Which were:
[username /home/public]$ mkdir /home/private/.npm-global
[username /home/public]$ npm config set prefix '~/.npm-global'
[username /home/public]$ export PATH=~/.npm-global/bin:$PATH
[username /home/public]$ source ~/.profile
[username /home/public]$ npm install -g hexo-cli
Then I tried to run from /home/public: hexo init blog
This command worked and it creates a blog directory in the cwd. However, then I tried to run hexo server from the same directory I was still in. This did not work. I changed directories to /home/public/blog/ and then typed hexo server. This still did not work and would only print out some help message about a few commands you could run.
Their website says this is generally due to a missing version line in the dependency of the blog/package.json file. I edited the file and added version 3.2.2, and then re-ran it as hexo --version. When I did this, it dumped out some Validating config output with a bunch of dependencies and their version numbers. The version number specified for hexo was hexo: 6.3.0. I went to open the blog/package.json file again to update the version from 3.2.2 to 6.3.0. However, hexo had already updated the file for me.
Then I typed hexo server while still in /home/public/blog and then the hexo server started up. It errored out with some NaN is not a valid date! error, but the server was still running.
In a separate terminal connected to the same machine and inside the /home/public/blog directory I typed hexo new "Hello Hexo". This worked. Then I typed hexo generate to generate the static files. This also worked.
Everything seems to be working now. Not sure what the problem was earlier. Would still like to know how to set the local prefix to a specific path though, if possible. | unknown | |
d3876 | train | Can we somehow pass the type HTML input attribute value to the $_POST array or grab it anyhow else with PHP?
Not per se.
I am aware that I can create a hidden field and basically put the type of the real input into the value of the hidden field
That is a way to do it.
It seems a real shortcoming that the type of an input is undetectable in a Form Submit
Usually you know what type of data you expect for a given field because you aren't processing them generically, so it would rarely be a useful feature.
perhaps (hopefully) I miss something?
No.
A: Well here is the breakdown;
GET accessed via $_GET in PHP tackling and POST accessed via $_POST in PHP are transport methods, so is PUT, and DELETE etc for a from it does not matter what method you use it only works on client side and only knows to map every thing in it into serialised query string or at least have it read for being serialised.
For example
<input type="text" id="firstname" name="fname">
it takes the name attribute and converts into this
?fname=ferret
See it didn't even bother with ID attribute. When we hit submit button form will only run through name attributes of each input and make LHS of the with value and add user input as RHS to the value. It will not do anything else at all.
On PHP side we ask $_GET tunnel are there any query strings in the request or $_POST tunnel. Each of these if there is any query string - emphasis on word string. explodes the string into array and gives it you. hence $POST['fname'].
Looks something like this
$_POST = [
fname => 'ferret',
someothingelse => 'someothervalue']
SO what you are trying to do is or at least asking to do is ...make browser change its BOM behaviour - which we cannot in real sense of the matter; to make form add some thing like this.
?fname=ferret,text
?fname=ferret-text
?fname=ferret/text
form by default will not do this, unless you run custom function updating each query before submit and that is pron to what we call escaping, 3/100 time you would miss it given the chance
Then on PHP side you want PHP to figure out on its own that after slash is type like so
$_POST = [
fname => 'ferret/text']
PHP would not do that on its own, unless you fork it make custom whatever like Facebook has done and then run it or at least make some kind of low level library but that too would be after the fact.
in case your not wondering, thats how XSS and injections happen.
SO query string standards are rigid to keep things a string with militaristic data and serialised.
So yes what you intended to do with hidden field is one tested way of achieving what you are want. | unknown | |
d3877 | train | Could it be that LFS tries to upload the new version of large file before deleting the previous one? If this is the case you need at least 1.2GB in order to update a 600MB file.
To test it, you could try with a smaller test version of the zip file (about 300MB). If you are able to update it and logging in Bitbucket you find previous version is no longer there, you know the problem is your 1GB limit.
A: For any object storage(zip file, blob etc), the whole file is uploaded and a pointer is created to keep track of latest version. If blob size is more than your allocated quota, you will face space issue as 2 different versions of same blob(of almost same size) will actually double the space usage. | unknown | |
d3878 | train | First, invert the dictionary, so that you can easily look up the digit symbol for a given letter:
num_code = {
letter: digit
for digit, letters in char_code.items()
for letter in letters
}
Then simply use that lookup to do the mapping:
word_list[:] = [num_code[letter] for letter in word_list]
Which gives us the expected result.
A: One way to do it with for looping on word_list and using list comprehension to find each list element inside char_codedictionary and finallyappend` the keys like this-
char_code = {'1':['b','f','v','p'],'2':['c','g','j','k','q','s','x','z'], '3':['d','t'], '4':['l'],'5':['m','n'], '6':['r']}
word_list = ['r', 'v', 'p', 'c']
expected_list = []
for word in word_list:
expected_list.append([k for k, v in char_code.items() if word in v][0])
print(expected_list)
Output:
['6', '1', '1', '2']
A: Your dictionary isn't structured efficiently to do the kind of lookup you're asking for - it's "inside out". You can do it, but it's clumsy. You want the key to be a letter and it's value to be the code, but you have the other way around.
Transform your dictionary and it will simplify your list creation:
>>> letter_to_code = {letter: code for code, lst in char_code.items() for letter in lst}
>>> [letter_to_code[letter] for letter in word_list]
['6', '1', '1', '2'] | unknown | |
d3879 | train | There is an exception that occurs when the items are painted, but it is not reported right away. On my system (PyQt 4.5.1, Python 2.6), no exception is reported when I monkey-patch the following method:
def drawItems(painter, items, options):
print len(items)
for idx, i in enumerate(items):
print idx, i
if idx > 5:
raise ValueError()
Output:
45
0 <PyQt4.QtGui.QGraphicsPathItem object at 0x3585270>
1 <PyQt4.QtGui.QGraphicsSimpleTextItem object at 0x356ca68>
2 <PyQt4.QtGui.QGraphicsSimpleTextItem object at 0x356ce20>
3 <PyQt4.QtGui.QGraphicsSimpleTextItem object at 0x356cc88>
4 <PyQt4.QtGui.QGraphicsSimpleTextItem object at 0x356cc00>
5 <PyQt4.QtGui.QGraphicsSimpleTextItem object at 0x356caf0>
6 <PyQt4.QtGui.QGraphicsSimpleTextItem object at 0x356cb78>
However, once I close the application, the following method is printed:
Exception ValueError: ValueError() in <module 'threading' from '/usr/lib/python2.6/threading.pyc'> ignored
I tried printing threading.currentThread(), but it returns the same thread whether it's called in- or outside the monkey-patched drawItems method.
In your code, this is likely due to the fact that you pass options (which is a list of style options objects) to the individual items rather than the respective option object. Using this code should give you the correct results:
def drawItems(self, painter, items, options):
for item, option in zip(items, options):
print "Processing", item
# ... Do checking ...
item.paint(painter, option, self.target)
Also, you say the self.target is the scene object. The documentation for paint() says:
This function, which is usually called by QGraphicsView, paints the contents of an item in local coordinates. ... The widget argument is optional. If provided, it points to the widget that is being painted on; otherwise, it is 0. For cached painting, widget is always 0.
and the type is QWidget*. QGraphicsScene inherits from QObject and is not a widget, so it is likely that this is wrong, too.
Still, the fact that the exception is not reported at all, or not right away suggests some foul play, you should contact the maintainer.
A: The reason why the loop suddenly exits is that an Exception is thrown. Python doesn't handle it (there is no try: block), so it's passed to the called (Qt's C++ code) which has no idea about Python exceptions, so it's lost.
Add a try/except around the loop and you should see the reason why this happens.
Note: Since Python 2.4, you should not override methods this way anymore.
Instead, you must derive a new class from QGraphicsView and add your drawItems() method to this new class. This will replace the original method properly.
Don't forget to call super() in the __init__ method! Otherwise, your object won't work properly. | unknown | |
d3880 | train | The answer is to move the call to next() under the new CallbackFilterIterator.
Here's the final version: https://gist.github.com/drupol/8513c7bfdbe1ad7d66fa710f51a21b32
Thanks @jeto ! | unknown | |
d3881 | train | TC CONTACT RECIVED!!"
$body = "Name: $name\nEmail: $email\nSubject: $subject\nMessage: $message"
mail($to, $about, body, "From: $name <$email>")
$success = "Message sent, thank you for contacting us!";
$name = $email = $message = '';
}
}
?>
JAVASCRIPT CODE
(function($) {
"use strict"; // Start of use strict
// Smooth scrolling using jQuery easing
$('a.js-scroll-trigger[href*="#"]:not([href="#"])').click(function() {
if (location.pathname.replace(/^\//, '') == this.pathname.replace(/^\//, '') && location.hostname == this.hostname) {
var target = $(this.hash);
target = target.length ? target : $('[name=' + this.hash.slice(1) + ']');
if (target.length) {
$('html, body').animate({
scrollTop: (target.offset().top - 48)
}, 1000, "easeInOutExpo");
return false;
}
}
});
// Closes responsive menu when a scroll trigger link is clicked
$('.js-scroll-trigger').click(function() {
$('.navbar-collapse').collapse('hide');
});
// Activate scrollspy to add active class to navbar items on scroll
$('body').scrollspy({
target: '#mainNav',
offset: 48
});
// Collapse the navbar when page is scrolled
$(window).scroll(function() {
if ($("#mainNav").offset().top > 100) {
$("#mainNav").addClass("navbar-shrink");
} else {
$("#mainNav").removeClass("navbar-shrink");
}
});
// Scroll reveal calls
window.sr = ScrollReveal();
sr.reveal('.sr-icons', {
duration: 600,
scale: 0.3,
distance: '0px'
}, 200);
sr.reveal('.sr-button', {
duration: 1000,
delay: 200
});
sr.reveal('.sr-contact', {
duration: 600,
scale: 0.3,
distance: '0px'
}, 300);
// Magnific popup calls
$('.popup-gallery').magnificPopup({
delegate: 'a',
type: 'image',
tLoading: 'Loading image #%curr%...',
mainClass: 'mfp-img-mobile',
gallery: {
enabled: true,
navigateByImgClick: true,
preload: [0, 1]
},
image: {
tError: '<a href="%url%">The image #%curr%</a> could not be loaded.'
}
});
})(jQuery);
HTML CODE
<div class="container">
<div class="row">
<div class="col-lg-8 mx-auto text-center">
<h2 class="section-heading text-white">CONTACT FORM</h2>
<hr class="light">
<p class="text-faded">Fill the form table down below to contact us.</p>
</div>
</div>
</div>
<div class="container">
<div class="row">
<div class="col-xs-12 col-sm-12 col-md-12 col-lg-12 text-center">
<h2>Contact</h2>
</div>
</div>
<div class="row">
<div class="col-lg-8 col-md-10 mx-auto">
<form id="contact-form" class="form" action="mail/contact.php" method="post" role="form">
<div class="form-group">
<label class="form-label" for="name">Your Name</label>
<input type="text" class="form-control" id="name" name="name" placeholder="Your name" tabindex="1" required>
</div>
<div class="form-group">
<label class="form-label" for="email">Your Email</label>
<input type="email" class="form-control" id="email" name="email" placeholder="Your Email" tabindex="2" required>
</div>
<div class="form-group">
<label class="form-label" for="subject">Subject</label>
<input type="text" class="form-control" id="subject" name="subject" placeholder="Subject" tabindex="3">
</div>
<div class="form-group">
<label class="form-label" for="message">Message</label>
<textarea rows="5" cols="50" name="message" class="form-control" id="message" placeholder="Message..." tabindex="4" required></textarea>
</div>
<div class="text-center">
<button type="submit" class="btn btn-start-order">Send Message</button>
</div>
</form>
</div>
</div>
</div>
PROBLEM:
As you can see that my PHP, JS, HTML code for the contact form, But for some reason it doesn't work!!??
Every time I try to use it I keep getting (method not allowed)!!!??
I most tried everything (new PHP code different html code)
and note (I'm using it on a site hosted on glitch) it doesn't officially support PHP but it does.
A: I tried you code. It should work except it should be $body instead of body in mail($to, $about, body, "From: $name <$email>");.
So the problem should be from the host provider.
BTW: Gmail seems having more strict security policy. I failed on Gmail and tried another mail service and succeed to receive the email. | unknown | |
d3882 | train | But object is not updating values to database
With the code from your question you are only reading data from the database (with the once() method).
If you want to update the values in the database, you would need to use the update() or set() methods, depending on your exact needs. | unknown | |
d3883 | train | The code is using a single contiguous block of memory to hold a 2-D array.
char *data = (char *)malloc(rows*cols*sizeof(char));
Ok -- this line is allocating space for the entire 2-D array. The 2-D array is rows rows by cols columns. So the total number of elements is rows * cols. Then you have to multiply that by the amount of space each element takes up, which is sizeof(char) since this is a 2-D array of char. Thus the total amount of memory to be allocated is rows * cols * sizeof(char) which is indeed the argument to malloc.
The malloc call returns a pointer to the allocated memory. Since this memory will be used to hold char, you cast the return value to char *.
char **array= (char **)malloc(rows*sizeof(char*));
array is being declared as type "pointer to pointer to char" because that's what it's going to do. It'll point to memory that will hold pointers to char. It will be one pointer for each row. So you have to allocate rows * sizeof(char *) memory: the number of pointers times the size of a pointer of the right type. And since this was allocated to point to pointers to char, we cast the return value to char **.
for (int i=0; i<rows; i++)
array[i] = &(data[cols*i]);
This is the magic :). This sets each pointer in array to point to within the block of actual data allocated earlier. Consider a concrete example where rows is 2 and cols is 3. Then you have the block of 6 characters in memory:
[0][1][2][3][4][5]
And data[n] (for n from 0 to 5) is the n-th element and &data[n] is the *address of the n-th element.
So what that loop does in this case is do:
array[0] = &data[0];
array[1] = &data[3];
So array[0] points to the sub-block starting at [0] and array[1] points to the sub-block starting at [3]. Then when you add the second subscript you're indexing from the start of that pointer. So array[0][2] means "get the pointer stored in array[0]. Find what it points to, then move ahead 2 elements from there.:
array[0] points to [0][1][2] (well, actually points to [0]). Then you move two elements ahead and get [2].
Or if you start with array[1][1], array[1] points to [3][4][5] (and actually points at [3]. Move one element ahead and get [4].
A: The first malloc is getting memory for the 2D character array. The second malloc is getting memory for rows index.
The for loop is setting the pointer to each row.
Finally the row index is returned.
A: You can think of the 2-D array that it is creating as an array of single-dimensional arrays. Each row entry points to an array of char that represents the column data for that row.
The following is the original code with comments added to attempt to describe each step:
char **alloc_2d_char(const int rows, const int cols)
{
// This allocates the chunk of memory that stores that actual data.
// It is a one single-dimensional array that has rows*cols characters.
char *data = (char *)malloc(rows*cols*sizeof(char));
// This allocates an array of pointers that will then be assigned to the
// individual rows (carved out of the previous allocation).
char **array= (char **)malloc(rows*sizeof(char*));
// This assigns each row to the appropriate position in the data array.
// The &(data[cols*i]) bit of it is the address of a portion in the
// memory pointed to by data. The cols*i is the offset to the portion that
// represents row i.
for (int i=0; i<rows; i++)
array[i] = &(data[cols*i]);
// After doing this, then you can assign a value such as:
// array[r][c] = 'x';
// That statement would store 'x' into data[r*c + c]
return array;
}
A: Is not hard to decipher...
char *data = (char *)malloc(rows*cols*sizeof(char));
simple memory allocation
char **array= (char **)malloc(rows*sizeof(char*));
memory allocation of #row char pointers
array[i] = &(data[cols*i]);
every array[i] is a pointer, a pointer to data[cols*i]
A: Each * in a declaration refers to one level of pointer indirection. so int ** means a pointer to a pointer to an int. So your function:
char **alloc_2d_char(const int rows, const int cols)
{
returns a pointer to a pointer to a char.
char *data = (char *)malloc(rows*cols*sizeof(char));
This declares a pointer to a char. The pointer is called data. The initialization calls malloc, which allocates a number of bytes equal to the value of the argument. This means there are rows*cols*sizeof(char) bytes, which will be equal to rows*cols, since a char is 1 byte. The malloc function returns the pointer to the new memory, which means that data now points to a chunk of memory that's rows*cols big. The (char *) before the call to malloc just casts the new memory to the same type as the pointer data.
char **array= (char **)malloc(rows*sizeof(char*));
array is a pointer to a pointer to a char. It is also being assigned using malloc. The amount of memory being allocated this time is rows*sizeof(char), which is equal to rows. This means that array is a pointer to a pointer to a chunk of memory big enough to hold 1 row.
for (int i=0; i<rows; i++)
array[i] = &(data[cols*i]);
The rest of your code initializes each element of array to hold the address of the corresponding row of the big chunk of memory. Your data pointer will point to a chunk of memory that looks like this:
col: 0 1 2 3 ... cols-1
row: 0 *
1 *
2 *
3 *
.
.
.
rows-1 *
And your array pointer will point to a chunk of memory with a list of pointers, each of which points to one of the asterisks in the memory chunk above.
return array;
}
This just returns your array pointer to a pointer, which matches the return type of the alloc_2d_char function: char **. This means that the caller of the function will essentially obtain an array of pointers, and each of these pointers points to one of the rows of the 2D array. | unknown | |
d3884 | train | If you add this:
$element = $('.element:last-child')
before
appendText($element);
I think will solve your problem
jsFindle here: http://jsfiddle.net/733Xd/5/.
Best regards!
A: That is an expensive thing to do. I would advise against it for performance reasons.
I did this pluggin in the beggining of last year https://github.com/fmsf/jQuery-obj-update
It doesn't trigger on every call, you have to request the update yourself:
$element.update();
The code is small enough to be pasted on the answer:
(function ( $ ) {
$.fn.update = function(){
var newElements = $(this.selector),i;
for(i=0;i<newElements.length;i++){
this[i] = newElements[i];
}
for(;i<this.length;i++){
this[i] = undefined;
}
this.length = newElements.length;
return this;
};
})(jQuery);
A: I think below one will solve your problem
appendText($element); //here you always referring to the node which was there initial.
http://jsfiddle.net/s9udJ/
A: Possible Solution will be
$(function(){
$elements = $('.elements');
$element = $('.element');
function appendText(element){
element.append('<em> appended text</em>');
}
appendText($element);
$('button').on('click', function(){
$elements.append('<span class="element">Span Element Appended after load</span>');
appendText($elements.find('span').last());
});
})
A: I don't think what you're asking is easily possible - when you call $element = $('.element'); you define a variable which equals to set of objects (well, one object). When calling appendText($element); you're operating on that object. It's not a cache - it's just how JS (and other programming languages) works.
The only solution I can see is to have a function that will update the variable, every time jquery calls one of its DOM manipulation methods, along the lines of this:
<div class='a'></div>
$(document).ready(function()
{
var element = $('.a');
$.fn.appendUpdate = function(elem)
{
// ugly because this is an object
// also - not really taking account of multiple objects that are added here
// just making an example
if ($(elem).is(this.selector))
{
this[this.length] = $(this).append(elem).get(0);
this.length++;
}
return this;
}
element.appendUpdate("<div class='a'></div>");
console.log(element);
});
Then you can use sub() to roll out your own version of append = the above. This way your variables would be up to date, and you wouldn't really need to change your code. I also need to say that I shudder about the thing I've written (please, please, don't use it).
Fiddle | unknown | |
d3885 | train | Join the table to a query that returns the maximum id for each user with type = 'good':
select t.*
from tablename t inner join (
select user, max(id) id
from tablename
where type = 'good'
group by user
) tt on tt.user = t.user and tt.id <= t.id
See the demo.
Results:
| id | user | type | amount |
| --- | ---- | ---- | ------ |
| 62 | 98 | good | 20 |
| 89 | 98 | bad | 60 |
| 93 | 98 | bad | 10 |
| 109 | 99 | good | 220 |
| 121 | 99 | bad | 640 |
| 193 | 99 | bad | 110 |
A: One method uses a correlated subquery:
select t.*
from t
where t.id >= (select max(t2.id)
from t t2
where t2.user = t.user and t2.type = 'good'
);
This should have good performance if you have an index on (user, type, id).
Based on the phrasing of your question, I am interpreting it as requiring at least one good row. If this is not the case, then the following logic can be used:
select t.*
from t
where t.id >= all (select t2.id
from t t2
where t2.user = t.user and t2.type = 'good'
);
You can also use window functions:
select t.*
from (select t.*,
max(case when type = 'good' then id end) over (partition by user) as max_good_id
from t
) t
where id >= max_good_id;
A: For MariaDB 10.2+, Window Analytic Functions might be used such as
SUM() OVER (PARTITION BY ... ORDER BY ...)
WITH T2 AS
(
SELECT SUM(CASE WHEN type = 'good' THEN 1 ELSE 0 END)
OVER (PARTITION BY user ORDER BY id DESC) AS sum,
T.*
FROM T
)
SELECT id, user, type, amount
FROM T2
WHERE ( type = 'good' AND sum = 1 ) OR ( type != 'good' AND sum = 0 )
ORDER BY id;
Demo | unknown | |
d3886 | train | Not sure why you don't want to / can't use row_number() but here's some code that works using CROSS APPLY.
First I created a table with your specification and my dummy data:
DROP TABLE IF EXISTS #applicants;
CREATE TABLE #applicants (
id INT PRIMARY KEY IDENTITY,
[name] VARCHAR(255),
[age] INT,
[address] VARCHAR(255),
[programming language] VARCHAR(255),
[cognitive score] INT
);
INSERT INTO #applicants (name, age, address, [programming language], [cognitive score])
VALUES ('a', 20, 'address1', 'SQL', 70),
('b', 31, 'address2', 'SQL', 80),
('c', 32, 'address3', 'SQL', 90),
('d', 33, 'address4', 'C', 71),
('e', 34, 'address5', 'C', 81),
('f', 35, 'address6', 'C', 91),
('g', 36, 'address7', 'C#', 72),
('h', 37, 'address8', 'C#', 82),
('i', 38, 'address9', 'C#', 92);
Then this code gets your actual output.
SELECT A.id,
A.name,
A.age,
A.id,
A.[programming language],
A.[cognitive score]
FROM #applicants AS A
CROSS APPLY (
SELECT TOP 2 t.id,
t.[cognitive score]
FROM #applicants AS t
WHERE A.[programming language] = t.[programming language]
ORDER BY t.[cognitive score] DESC
) AS A2
WHERE A.id = A2.id;
The initial SELECT * FROM Applicants would return absolutely everything. The CROSS APPLY works by looking for the TOP 2 based on Cognitive Score whilst matching on Programming Language.
Finally they join back using ID which forces the CROSS APPLY to act more like an inner join and only return the rows where the IDs match. Without it you'd end up with 18 rows because of the 9 rows each repeating twice. If this explanation is unclear take a look at this version of the query to see the duplication:
SELECT A.id,
A.name,
A.age,
A.id,
A.[programming language],
A.[cognitive score],
A2.ID,
A2.[cognitive score]
FROM #applicants AS A
CROSS APPLY (
SELECT TOP 2 t.id,
t.[cognitive score]
FROM #applicants AS t
WHERE A.[programming language] = t.[programming language]
ORDER BY t.[cognitive score] DESC
) AS A2 | unknown | |
d3887 | train | For PostgreSQL, you can use ROW_NUMBER() to basically mimic the plan you have for a tempid, but all within one query:
SELECT *
FROM (SELECT id, subID, dataTarget,
ROW_NUMBER() OVER (PARTITION BY id ORDER BY subID asc) RN
FROM target
) T
JOIN (SELECT id, othersubID, dataSource,
ROW_NUMBER() OVER (PARTITION BY id ORDER BY othersubID asc) RN
FROM source
) S ON S.id = T.id
AND S.RN = T.RN
The update would be:
UPDATE target t
SET dataTarget = s.dataSource
FROM (SELECT id, subID,
             ROW_NUMBER() OVER (PARTITION BY id ORDER BY subID asc) RN
      FROM target
     ) tt
JOIN (SELECT id, othersubID, dataSource,
             ROW_NUMBER() OVER (PARTITION BY id ORDER BY othersubID asc) RN
      FROM source
     ) s ON s.id = tt.id
        AND s.RN = tt.RN
WHERE t.id = tt.id
  AND t.subID = tt.subID;
I'm also curious how representative your example data is of the actual table. If it really looks just like that, you can add or remove a 0 in your join predicate:
SELECT *
FROM Target T
JOIN Source S ON T.id = S.id
AND (t.subId * 10 = s.othersubID OR
t.subId / 10 = s.othersubID)
This should work in any RDBMS, assuming subID is not a string. If it is, you have to concatenate or remove a 0 instead of doing math. | unknown | |
d3888 | train | You should change your partial to
<p><%= company_name(project) %><p>
<p><%= summary_description(project) %><p>
See the Rails documentation about this under "Rendering Collections".
A: I figured out what the problem was.
It was because I was using a partial named differently from the model I was trying to render. I needed to render just a summary of the model, so I used a summary partial. In that partial, though, the "name" of my project variable was "summary". So I changed my partial to:
<p><%= company_name(summary) %><p>
<p><%= summary_description(summary) %><p>
and it worked. Rails is still a mystery to me with stuff like this. From this post, the answer is to use: :as => :foo
<%= render :partial => "projects/summary", :collection => @projects, :as => :project %> | unknown | |
d3889 | train | Using Object.fromEntries(), you can build an array of [key, value] pairs by mapping (.map()) each key (ie: value) from a to an array of values from the same index from all the other arrays:
const a = ["F", "M"];
const b = ["female", "male"];
const c = ["fa-female", "fa-male"];
const buildObj = (keys, ...values) => Object.fromEntries(keys.map(
(key, i) => [key, values.map(arr => arr[i])]
));
const res = buildObj(a, b, c);
console.log(res);
Object.fromEntries() has limited browser support, however, it can easily be polyfilled. Alternatively, instead of using an object, you could use a Map, which would remove the need of .fromEntries():
const a = ["F", "M"];
const b = ["female", "male"];
const c = ["fa-female", "fa-male"];
const buildMap = (keys, ...values) => new Map(keys.map(
(key, i) => [key, values.map(arr => arr[i])]
));
const res = buildMap(a, b, c);
console.log("See browser console:", res); // see browser console for output
A: use this one.
var a = ["F", "M"];
var b = ["female", "male"];
var c = ["fa-female", "fa-male"];
var result = {}; // use a plain object for string keys, not an array
for(var i = 0; i < a.length; i++) {
    result[a[i]] = [b[i], c[i]];
}
A: You could combine your arrays to form key/value pairs for Object.fromEntries:
Object.fromEntries([['M', 'male'], ['F', 'female']]);
//=> {M: 'male', F: 'female'}
However Object.fromEntries does not handle collisions:
Object.fromEntries([['M', 'male'], ['F', 'female'], ['F', 'fa-female']]);
//=> {M: 'male', F: 'fa-female'}
As you can see, the previous value for F just got overridden :/
We can build a custom fromEntries function that puts values into arrays:
const fromEntries =
pairs =>
pairs.reduce((obj, [k, v]) => ({
...obj,
[k]: k in obj
? [].concat(obj[k], v)
: [v]
}), {});
fromEntries([['M', 'male'], ['M', 'fa-male'], ['F', 'female'], ['F', 'fa-female']]);
//=> {M: ["male", "fa-male"], F: ["female", "fa-female"]}
How do you create key/value pairs then?
One possible solution: zip
const zip = (x, y) => x.map((v, i) => [v, y[i]]);
zip(['F', 'M'], ['female', 'male']);
//=> [["F", "female"], ["M", "male"]]
So to produce all pairs (and your final object)
fromEntries([
...zip(['F', 'M'], ['female', 'male']),
...zip(['F', 'M'], ['fa-female', 'fa-male'])
]);
A: var a = ["male","female"];
var b = ["m","f"];
var c = ["fa male","fa female"];
var result = a.reduce((res,val,key) => {
var temp = [b,c];
res[val] = temp.map((v) => v[key]);
return res;
},{});
This is a bit expensive; it is effectively a nested loop.
A: Here is one line with forEach. Another way using reduce and Map.
var a = ["F", "M"];
var b = ["female", "male"];
var c = ["fa-female", "fa-male"];
const ans = {};
a.forEach((key, i) => (ans[key] = [b[i], c[i]]));
console.log(ans)
// Alternate way
var ans2 = Object.fromEntries(
a.reduce((acc, curr, i) => acc.set(curr, [b[i], c[i]]), new Map())
);
console.log(ans2);
A: A solution using map and filter
var a = ["M", "F"];
var b = ["female", "male"];
var c = ["fa-female", "fa-male"];
const bAndC = b.concat(c);
let returnObj = {};
a.map(category => {
let catArray = []
if(category === 'F') {
catArray = bAndC.filter(item => item.includes('female'));
} else {
catArray = bAndC.filter(item => item.includes('male') && !item.includes('female'));
}
return returnObj[category] = catArray;
}); | unknown | |
d3890 | train | I don't know if this is the most efficient method, but I can't come up with something better right now.
I assume this will perform terribly on a larger table.
with userlist as (
select array_agg(t.usr_id) as users,
a.address
from t_table t
left join unnest(t.address) as a(address) on true
group by a.address
), shared_users as (
select u.address,
array(select distinct ul.uid
from userlist u2, unnest(u2.users) as ul(uid)
where u.users && u2.users
order by ul.uid) as users
from userlist u
)
select users, array_agg(distinct address)
from shared_users
group by users;
What does it do?
The first CTE collects all users that share at least one address. The output of the userlist CTE is:
users | address
------+--------------
{1} | 95.155.38.120
{1,3} | 94.134.88.136
{1,2} | 44.154.48.125
{6} | 1.1.0.9
{4,5} | 127.0.0.1
{1} | 81.134.82.111
{5} | 5.5.5.5
Now this can be used to aggregate those user lists that share at least one address. The output of the shared_users CTE is:
address | users
--------------+--------
95.155.38.120 | {1,2,3}
94.134.88.136 | {1,2,3}
44.154.48.125 | {1,2,3}
1.1.0.9 | {6}
127.0.0.1 | {4,5}
81.134.82.111 | {1,2,3}
5.5.5.5 | {4,5}
As you can see we now have groups with the same list of usr_ids. In the final step we can group by those and aggregate the addresses, which will then return:
users | array_agg
--------+----------------------------------------------------------
{1,2,3} | {44.154.48.125,81.134.82.111,94.134.88.136,95.155.38.120}
{4,5} | {127.0.0.1,5.5.5.5}
{6} | {1.1.0.9}
Online example
A: Group the addresses using the GROUP BY operator. | unknown |
d3891 | train | Use a file search to see if the following path is valid:
themes/1/ | unknown | |
d3892 | train | It is a bit unclear what you want as output. Are you looking for this:
from skimage.util.shape import view_as_windows
b = view_as_windows(a,(f,f,f),f).reshape(-1,f,f,f).transpose(1,2,3,0).reshape(f,f,-1)
suggested by @Paul with similar result (I prefer this answer in fact):
N = 8
b = a.reshape(2,N//2,2,N//2,N).transpose(1,3,0,2,4).reshape(N//2,N//2,N*4)
output:
print(np.array_equal(b[:, :, 4:8],a[0:4, 0:4, 4:8]))
#True
print(np.array_equal(b[:, :, 8:12],a[0:4, 4:8, 0:4]))
#True
print(np.array_equal(b[:, :, 12:16],a[0:4, 4:8, 4:8]))
#True
A: def flatten_by(arr, atomic_size):
a, b, c = arr.shape
x, y, z = atomic_size
r = arr.reshape([a//x, x, b//y, y, c//z, z])
r = r.transpose([0, 2, 4, 1, 3, 5])
r = r.reshape([-1, x, y, z])
return r
flatten_by(arr, [4,4,4]).shape
>>> (8, 4, 4, 4)
EDIT:
the function applies C-style flattening to the array, as shown below
NOTE:
this method and @Ehsan's method both produce "copies", NOT "views"; I'm looking into it and will update the answer if I find a solution
flattened = flatten_by(arr, [4,4,4])
required = np.array([
arr[0:4, 0:4, 0:4],
arr[0:4, 0:4, 4:8],
arr[0:4, 4:8, 0:4],
arr[0:4, 4:8, 4:8],
arr[4:8, 0:4, 0:4],
arr[4:8, 0:4, 4:8],
arr[4:8, 4:8, 0:4],
arr[4:8, 4:8, 4:8],
])
np.array_equal(required, flattened)
>>> True
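Regarding the copy-vs-view NOTE above, a quick way to check is np.shares_memory (a small sketch; arr here is just example data):
import numpy as np

arr = np.arange(8**3).reshape(8, 8, 8)
flattened = flatten_by(arr, [4, 4, 4])  # flatten_by as defined above
print(np.shares_memory(arr, flattened))  # prints False: no memory is shared
A False result confirms that a copy, not a view, is produced. | unknown |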
d3893 | train | Your unit registrations and classes look a little off.
From what I can gather, this is what you really want to do.
Setup a factory that will determine at runtime which IAuthStrategy should be used:
public interface IAuthStrategyFactory
{
IAuthStrategy GetAuthStrategy();
}
public class AuthStrategyFactory : IAuthStrategyFactory
{
readonly IAuthStrategy _authStrategy;
public AuthStrategyFactory(...)
{
//determine the concrete implementation of IAuthStrategy that you need
//This might be injected as well by passing
//in an IAuthStrategy and registering the correct one via unity at startup.
_authStrategy = SomeCallToDetermineWhichOne();
}
public IAuthStrategy GetAuthStrategy()
{
return _authStrategy;
}
}
This is your existing AuthStrategy:
public interface IAuthStrategy
{
OperationResponse<AuthenticationMechanismDTO> GetAuthenticationMechanism(string userName);
}
public class UserNamePasswordMechanism : IAuthStrategy
{
private IInstitutionRepository _institutionRepository;
public UserNamePasswordMechanism(IInstitutionRepository institutionRepository)
{
this._institutionRepository = institutionRepository;
}
public OperationResponse<AuthenticationMechanismDTO> GetAuthenticationMechanism(string userName)
{
throw new NotImplementedException();
}
}
Register the factory with unity:
container.RegisterType<IAuthStrategyFactory, AuthStrategyFactory>();
In your controller:
public class EmployeeController : ApiController
{
private IAuthStrategy _auth;
public EmployeeController(IAuthStrategyFactory authFactory)
{
this._employeeBL = employeeBL;
this._auth = authFactory.GetAuthStrategy();
}
}
A: Actually i missed implemented IAuthStrategyFactory on AuthStrategyFactory, once i implemented and register in unity container that worked.
Thanks | unknown | |
d3894 | train | You can just use style 108 instead of 114 in the CONVERT function to get only the hh:mm:ss:
CREATE PROCEDURE dbo.St_Proc_UpdateTimeSpent
@timeEntryID int,
@status int output
AS BEGIN
SET NOCOUNT ON;
DECLARE @Date DATETIME;
SET @Date = GETDATE();
UPDATE dbo.Production
SET TimeSpent = CONVERT(VARCHAR(20), DATEADD(SS, DATEDIFF(ss, CalendarDate, @Date)%(60*60*24),0), 108),
IsTaskCompleted = 1
WHERE
productionTimeEntryID = @timeEntryID
SET @status = 1;
RETURN @status;
END
See the excellent MSDN documentation on CAST and CONVERT for a comprehensive list of all supported styles when converting DATETIME to VARCHAR (and back)
BTW: SQL Server 2008 also introduced a TIME datatype which would probably be a better fit than a VARCHAR to store your TimeSpent values ... check it out! | unknown | |
d3895 | train | Several notes:
*
*foreach ($rmdata[$key] as $field=>$value) is the same as foreach ($properties as $field=>$value) in this context
*the whole if(isset($value)) thing can be avoided by starting that loop with if(!$value) continue;
*When you select for the propid, you are selecting every row in the table, surely this is not what you intend to do, since you loop through them all and only every use the last one.
*the section building the update section is flawed in a few ways. The simplest fix is to realize that you can achieve the correct result by re-combining the $fields and $values arrays after the loop (as shown later)
*I can't readily see where you would get the conflict of keys, unless $rmdata will contain a propid if it's an update instead of an insert, or else if there is some other key and it's just being handled without explicitly seeing it in the code, which would be fine.
Below is code which I copied directly from yours and just modified to address these issues:
foreach ($rmdata as $properties) {
$fields = array();
$values = array();
$updates = array();
foreach ($properties as $field=>$value) {
if (!$value) continue;
$fields[] = $field;
$values[] = "'".$value."'";
$updates[] = $field . '="'.$value.'"';
}
$sql_fields = implode(', ', $fields);
$sql_values = implode(', ', $values);
$sql_updates = implode(', ', $updates);
$sqlPropInsert = mysql_query('INSERT INTO epsales ('. $sql_fields .') VALUES ('. $sql_values .') ON DUPLICATE KEY UPDATE SET '. $sql_updates .'');
}
note that this technique requires something in the data to have a conflicting key in order for the ON DUPLICATE KEY to trigger. If there is some value in the data array which you can uniquely identify these rows by, that field should be a UNIQUE KEY in the database, which will 'cause this conflict to occur quite nicely. | unknown | |
d3896 | train | Maybe that's not the exact answer to your question, but what for you want to create file with settings? Wouldn't be much easier, to use Unity3d built-in feature for saving game preferences, which also works cross-platform?
If you want to give it a try, read about PlayerPrefs. | unknown | |
d3897 | train | You need to follow redirects too:
curl_setopt($curl, CURLOPT_FOLLOWLOCATION, true); | unknown | |
d3898 | train | You can just add ampersand '&' to separate each command:
php script1.php & php script2.php & php script3.php ...
This ampersand symbol will tell the shell to run command on background.
To check the output, you can redirect it to a file:
php script1.php > script1.log.txt & php script2.php > script2.log.txt
And you can just do a tail on it to read the log:
tail -f script1.log.txt
A: If you script is nicely numbered from 1 to 50, you can try the following in a .command file:
i=1;
while [ $i -lt 51 ]
do
osascript -e 'tell app "Terminal"
do script "php Script$i.php"
end tell' &
i=$[$i+1]
done
This should open 50 separate terminal windows each running script{$i}.php
A: You could also run them at the same time but not in the background.
php test1.php; php test2.php;
I don't know why you would want to "interact" with the script after its running. | unknown | |
d3899 | train | *
*first import user service class.
*then set it as a global variable inside the drool file.
import com.intervest.notification.service.RadiusFilterService;
global RadiusFilterService radiusFilterService;
rule "your rule name"
when
$map : Map();
then
Map $originDataMap = (Map) $map.get("originDataMap");
Long $hashValue = (Long) $originDataMap.get("request")
end
*then where the rule engine define.
@Autowired
private RouteService routerService;
kieRuntime.setGlobal("RouteService", RouteService); | unknown | |
d3900 | train | There are really good solutions which exploit the internal btree representation of sql indices. This is based on some great research done back around 1998.
Here is an example table (in mysql).
CREATE TABLE `node` (
`id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`name` varchar(255) NOT NULL,
`tw` int(10) unsigned NOT NULL,
`pa` int(10) unsigned DEFAULT NULL,
`sz` int(10) unsigned DEFAULT NULL,
`nc` int(11) GENERATED ALWAYS AS (tw+sz) STORED,
PRIMARY KEY (`id`),
KEY `node_tw_index` (`tw`),
KEY `node_pa_index` (`pa`),
KEY `node_nc_index` (`nc`),
CONSTRAINT `node_pa_fk` FOREIGN KEY (`pa`) REFERENCES `node` (`tw`) ON DELETE CASCADE
)
The only fields necessary for the tree representation are:
*
*tw: The Left to Right DFS Pre-order index, where root = 1.
*pa: The reference (using tw) to the parent node, root has null.
*sz: The size of the node's branch including itself.
*nc: is used as syntactic sugar. it is tw+sz and represents the tw of the node's "next child".
Here is an example 24 node population, ordered by tw:
+-----+---------+----+------+------+------+
| id | name | tw | pa | sz | nc |
+-----+---------+----+------+------+------+
| 1 | Root | 1 | NULL | 24 | 25 |
| 2 | A | 2 | 1 | 14 | 16 |
| 3 | AA | 3 | 2 | 1 | 4 |
| 4 | AB | 4 | 2 | 7 | 11 |
| 5 | ABA | 5 | 4 | 1 | 6 |
| 6 | ABB | 6 | 4 | 3 | 9 |
| 7 | ABBA | 7 | 6 | 1 | 8 |
| 8 | ABBB | 8 | 6 | 1 | 9 |
| 9 | ABC | 9 | 4 | 2 | 11 |
| 10 | ABCD | 10 | 9 | 1 | 11 |
| 11 | AC | 11 | 2 | 4 | 15 |
| 12 | ACA | 12 | 11 | 2 | 14 |
| 13 | ACAA | 13 | 12 | 1 | 14 |
| 14 | ACB | 14 | 11 | 1 | 15 |
| 15 | AD | 15 | 2 | 1 | 16 |
| 16 | B | 16 | 1 | 1 | 17 |
| 17 | C | 17 | 1 | 6 | 23 |
| 359 | C0 | 18 | 17 | 5 | 23 |
| 360 | C1 | 19 | 18 | 4 | 23 |
| 361 | C2(res) | 20 | 19 | 3 | 23 |
| 362 | C3 | 21 | 20 | 2 | 23 |
| 363 | C4 | 22 | 21 | 1 | 23 |
| 18 | D | 23 | 1 | 1 | 24 |
| 19 | E | 24 | 1 | 1 | 25 |
+-----+---------+----+------+------+------+
Every tree result can be done non-recursively.
For instance, to get a list of ancestors of node at tw='22'
Ancestors
select anc.* from node me,node anc
where me.tw=22 and anc.nc >= me.tw and anc.tw <= me.tw
order by anc.tw;
+-----+---------+----+------+------+------+
| id | name | tw | pa | sz | nc |
+-----+---------+----+------+------+------+
| 1 | Root | 1 | NULL | 24 | 25 |
| 17 | C | 17 | 1 | 6 | 23 |
| 359 | C0 | 18 | 17 | 5 | 23 |
| 360 | C1 | 19 | 18 | 4 | 23 |
| 361 | C2(res) | 20 | 19 | 3 | 23 |
| 362 | C3 | 21 | 20 | 2 | 23 |
| 363 | C4 | 22 | 21 | 1 | 23 |
+-----+---------+----+------+------+------+
Siblings and children are trivial - just use pa field ordering by tw.
Descendants
For example the set (branch) of nodes that are rooted at tw = 17.
select des.* from node me,node des
where me.tw=17 and des.tw < me.nc and des.tw >= me.tw
order by des.tw;
+-----+---------+----+------+------+------+
| id | name | tw | pa | sz | nc |
+-----+---------+----+------+------+------+
| 17 | C | 17 | 1 | 6 | 23 |
| 359 | C0 | 18 | 17 | 5 | 23 |
| 360 | C1 | 19 | 18 | 4 | 23 |
| 361 | C2(res) | 20 | 19 | 3 | 23 |
| 362 | C3 | 21 | 20 | 2 | 23 |
| 363 | C4 | 22 | 21 | 1 | 23 |
+-----+---------+----+------+------+------+
Additional Notes
This methodology is extremely useful for when there are a far greater number of reads than there are inserts or updates.
Because the insertion, movement, or updating of a node in the tree requires the tree to be adjusted, it is necessary to lock the table before commencing with the action.
The insertion/deletion cost is high because the tw index and sz (branch size) values will need to be updated on all the nodes after the insertion point, and for all ancestors respectively.
Branch moving involves moving the tw value of the branch out of range, so it is also necessary to disable foreign key constraints when moving a branch. There are, essentially four queries required to move a branch:
*
*Move the branch out of range.
*Close the gap that it left. (the remaining tree is now normalised).
*Open the gap where it will go to.
*Move the branch into it's new position.
Adjust Tree Queries
The opening/closing of gaps in the tree is an important sub-function used by create/update/delete methods, so I include it here.
We need two parameters - a flag representing whether or not we are downsizing or upsizing, and the node's tw index. So, for example tw=18 (which has a branch size of 5). Let's assume that we are downsizing (removing tw) - this means that we are using '-' instead of '+' in the updates of the following example.
We first use a (slightly altered) ancestor function to update the sz value.
update node me, node anc set anc.sz = anc.sz - me.sz
where me.tw=18
and ((anc.nc >= me.tw and anc.tw < me.pa) or (anc.tw=me.pa));
Then we need to adjust the tw for those whose tw is higher than the branch to be removed.
update node me, node anc set anc.tw = anc.tw - me.sz
where me.tw=18 and anc.tw >= me.tw;
Then we need to adjust the parent for those whose pa's tw is higher than the branch to be removed.
update node me, node anc set anc.pa = anc.pa - me.sz
where me.tw=18 and anc.pa >= me.tw;
A: Well given the choice, I'd be using objects. I'd create an object for each record where each object has a children collection and store them all in an assoc array (/hashtable) where the Id is the key. And blitz through the collection once, adding the children to the relevant children fields. Simple.
But because you're being no fun by restricting use of some good OOP, I'd probably iterate based on:
function PrintLine(int pID, int level)
foreach record where ParentID == pID
print level*tabs + record-data
PrintLine(record.ID, level + 1)
PrintLine(0, 0)
Edit: this is similar to a couple of other entries, but I think it's slightly cleaner. One thing I'll add: this is extremely SQL-intensive. It's nasty. If you have the choice, go the OOP route.
A: If you use nested sets (sometimes referred to as Modified Pre-order Tree Traversal) you can extract the entire tree structure or any subtree within it in tree order with a single query, at the cost of inserts being more expensive, as you need to manage columns which describe an in-order path through the tree structure.
For django-mptt, I used a structure like this:
id parent_id tree_id level lft rght
-- --------- ------- ----- --- ----
1 null 1 0 1 14
2 1 1 1 2 7
3 2 1 2 3 4
4 2 1 2 5 6
5 1 1 1 8 13
6 5 1 2 9 10
7 5 1 2 11 12
Which describes a tree which looks like this (with id representing each item):
1
+-- 2
| +-- 3
| +-- 4
|
+-- 5
+-- 6
+-- 7
Or, as a nested set diagram which makes it more obvious how the lft and rght values work:
__________________________________________________________________________
| Root 1 |
| ________________________________ ________________________________ |
| | Child 1.1 | | Child 1.2 | |
| | ___________ ___________ | | ___________ ___________ | |
| | | C 1.1.1 | | C 1.1.2 | | | | C 1.2.1 | | C 1.2.2 | | |
1 2 3___________4 5___________6 7 8 9___________10 11__________12 13 14
| |________________________________| |________________________________| |
|__________________________________________________________________________|
As you can see, to get the entire subtree for a given node, in tree order, you simply have to select all rows which have lft and rght values between its lft and rght values. It's also simple to retrieve the tree of ancestors for a given node.
The level column is a bit of denormalisation for convenience more than anything and the tree_id column allows you to restart the lft and rght numbering for each top-level node, which reduces the number of columns affected by inserts, moves and deletions, as the lft and rght columns have to be adjusted accordingly when these operations take place in order to create or close gaps. I made some development notes at the time when I was trying to wrap my head around the queries required for each operation.
In terms of actually working with this data to display a tree, I created a tree_item_iterator utility function which, for each node, should give you sufficient information to generate whatever kind of display you want.
More info about MPTT:
*
*Trees in SQL
*Storing Hierarchical Data in a Database
*Managing Hierarchical Data in MySQL
A: Now that MySQL 8.0 supports recursive queries, we can say that all popular SQL databases support recursive queries in standard syntax.
WITH RECURSIVE MyTree AS (
SELECT * FROM MyTable WHERE ParentId IS NULL
UNION ALL
SELECT m.* FROM MyTABLE AS m JOIN MyTree AS t ON m.ParentId = t.Id
)
SELECT * FROM MyTree;
I tested recursive queries in MySQL 8.0 in my presentation Recursive Query Throwdown in 2017.
Below is my original answer from 2008:
There are several ways to store tree-structured data in a relational database. What you show in your example uses two methods:
*
*Adjacency List (the "parent" column) and
*Path Enumeration (the dotted-numbers in your name column).
Another solution is called Nested Sets, and it can be stored in the same table too. Read "Trees and Hierarchies in SQL for Smarties" by Joe Celko for a lot more information on these designs.
I usually prefer a design called Closure Table (aka "Adjacency Relation") for storing tree-structured data. It requires another table, but then querying trees is pretty easy.
I cover Closure Table in my presentation Models for Hierarchical Data with SQL and PHP and in my book SQL Antipatterns Volume 1: Avoiding the Pitfalls of Database Programming.
CREATE TABLE ClosureTable (
ancestor_id INT NOT NULL REFERENCES FlatTable(id),
descendant_id INT NOT NULL REFERENCES FlatTable(id),
PRIMARY KEY (ancestor_id, descendant_id)
);
Store all paths in the Closure Table, where there is a direct ancestry from one node to another. Include a row for each node to reference itself. For example, using the data set you showed in your question:
INSERT INTO ClosureTable (ancestor_id, descendant_id) VALUES
(1,1), (1,2), (1,4), (1,6),
(2,2), (2,4),
(3,3), (3,5),
(4,4),
(5,5),
(6,6);
Now you can get a tree starting at node 1 like this:
SELECT f.*
FROM FlatTable f
JOIN ClosureTable a ON (f.id = a.descendant_id)
WHERE a.ancestor_id = 1;
The output (in MySQL client) looks like the following:
+----+
| id |
+----+
| 1 |
| 2 |
| 4 |
| 6 |
+----+
In other words, nodes 3 and 5 are excluded, because they're part of a separate hierarchy, not descending from node 1.
Re: comment from e-satis about immediate children (or immediate parent). You can add a "path_length" column to the ClosureTable to make it easier to query specifically for an immediate child or parent (or any other distance).
INSERT INTO ClosureTable (ancestor_id, descendant_id, path_length) VALUES
(1,1,0), (1,2,1), (1,4,2), (1,6,1),
(2,2,0), (2,4,1),
(3,3,0), (3,5,1),
(4,4,0),
(5,5,0),
(6,6,0);
Then you can add a term in your search for querying the immediate children of a given node. These are descendants whose path_length is 1.
SELECT f.*
FROM FlatTable f
JOIN ClosureTable a ON (f.id = a.descendant_id)
WHERE a.ancestor_id = 1
AND path_length = 1;
+----+
| id |
+----+
| 2 |
| 6 |
+----+
Re comment from @ashraf: "How about sorting the whole tree [by name]?"
Here's an example query to return all nodes that are descendants of node 1, join them to the FlatTable that contains other node attributes such as name, and sort by the name.
SELECT f.name
FROM FlatTable f
JOIN ClosureTable a ON (f.id = a.descendant_id)
WHERE a.ancestor_id = 1
ORDER BY f.name;
Re comment from @Nate:
SELECT f.name, GROUP_CONCAT(b.ancestor_id order by b.path_length desc) AS breadcrumbs
FROM FlatTable f
JOIN ClosureTable a ON (f.id = a.descendant_id)
JOIN ClosureTable b ON (b.descendant_id = a.descendant_id)
WHERE a.ancestor_id = 1
GROUP BY a.descendant_id
ORDER BY f.name
+------------+-------------+
| name | breadcrumbs |
+------------+-------------+
| Node 1 | 1 |
| Node 1.1 | 1,2 |
| Node 1.1.1 | 1,2,4 |
| Node 1.2 | 1,6 |
+------------+-------------+
A user suggested an edit today. SO moderators approved the edit, but I am reversing it.
The edit suggested that the ORDER BY in the last query above should be ORDER BY b.path_length, f.name, presumably to make sure the ordering matches the hierarchy. But this doesn't work, because it would order "Node 1.1.1" after "Node 1.2".
If you want the ordering to match the hierarchy in a sensible way, that is possible, but not simply by ordering by the path length. For example, see my answer to MySQL Closure Table hierarchical database - How to pull information out in the correct order.
A: This was written quickly, and is neither pretty nor efficient (plus it autoboxes a lot; converting between int and Integer is annoying!), but it works.
It probably breaks the rules since I'm creating my own objects but hey I'm doing this as a diversion from real work :)
This also assumes that the resultSet/table is completely read into some sort of structure before you start building Nodes, which wouldn't be the best solution if you have hundreds of thousands of rows.
public class Node {
private Node parent = null;
private List<Node> children;
private String name;
private int id = -1;
public Node(Node parent, int id, String name) {
this.parent = parent;
this.children = new ArrayList<Node>();
this.name = name;
this.id = id;
}
public int getId() {
return this.id;
}
public String getName() {
return this.name;
}
public void addChild(Node child) {
child.parent = this; // keep the back-reference in sync so isRoot() and toString() report correctly
children.add(child);
}
public List<Node> getChildren() {
return children;
}
public boolean isRoot() {
return (this.parent == null);
}
@Override
public String toString() {
return "id=" + id + ", name=" + name + ", parent=" + parent;
}
}
public class NodeBuilder {
public static Node build(List<Map<String, String>> input) {
// maps id of a node to it's Node object
Map<Integer, Node> nodeMap = new HashMap<Integer, Node>();
// maps id of a node to the id of it's parent
Map<Integer, Integer> childParentMap = new HashMap<Integer, Integer>();
// create special 'root' Node with id=0
Node root = new Node(null, 0, "root");
nodeMap.put(root.getId(), root);
// iterate thru the input
for (Map<String, String> map : input) {
// expect each Map to have keys for "id", "name", "parent" ... a
// real implementation would read from a SQL object or resultset
int id = Integer.parseInt(map.get("id"));
String name = map.get("name");
int parent = Integer.parseInt(map.get("parent"));
Node node = new Node(null, id, name);
nodeMap.put(id, node);
childParentMap.put(id, parent);
}
// now that each Node is created, setup the child-parent relationships
for (Map.Entry<Integer, Integer> entry : childParentMap.entrySet()) {
int nodeId = entry.getKey();
int parentId = entry.getValue();
Node child = nodeMap.get(nodeId);
Node parent = nodeMap.get(parentId);
parent.addChild(child);
}
return root;
}
}
public class NodePrinter {
static void printRootNode(Node root) {
printNodes(root, 0);
}
static void printNodes(Node node, int indentLevel) {
printNode(node, indentLevel);
// recurse
for (Node child : node.getChildren()) {
printNodes(child, indentLevel + 1);
}
}
static void printNode(Node node, int indentLevel) {
StringBuilder sb = new StringBuilder();
for (int i = 0; i < indentLevel; i++) {
sb.append("\t");
}
sb.append(node);
System.out.println(sb.toString());
}
public static void main(String[] args) {
// setup dummy data
List<Map<String, String>> resultSet = new ArrayList<Map<String, String>>();
resultSet.add(newMap("1", "Node 1", "0"));
resultSet.add(newMap("2", "Node 1.1", "1"));
resultSet.add(newMap("3", "Node 2", "0"));
resultSet.add(newMap("4", "Node 1.1.1", "2"));
resultSet.add(newMap("5", "Node 2.1", "3"));
resultSet.add(newMap("6", "Node 1.2", "1"));
Node root = NodeBuilder.build(resultSet);
printRootNode(root);
}
//convenience method for creating our dummy data
private static Map<String, String> newMap(String id, String name, String parentId) {
Map<String, String> row = new HashMap<String, String>();
row.put("id", id);
row.put("name", name);
row.put("parent", parentId);
return row;
}
}
A: Assuming that you know that the root elements are zero, here's the pseudocode to output to text:
function PrintLevel (int curr, int level)
//print the indents
for (i=1; i<=level; i++)
print a tab
print curr \n;
for each child in the table with a parent of curr
PrintLevel (child, level+1)
for each elementID where the parentid is zero
PrintLevel(elementID, 0)
A: It's quite an old question, but as it has many views I think it's worth presenting an alternative and, in my opinion, very elegant solution.
In order to read a tree structure you can use recursive Common Table Expressions (CTEs). It gives a possibility to fetch whole tree structure at once, have the information about the level of the node, its parent node and order within children of the parent node.
Let me show you how this would work in PostgreSQL 9.1.
*
*Create a structure
CREATE TABLE tree (
id int NOT NULL,
name varchar(32) NOT NULL,
parent_id int NULL,
node_order int NOT NULL,
CONSTRAINT tree_pk PRIMARY KEY (id),
CONSTRAINT tree_tree_fk FOREIGN KEY (parent_id)
REFERENCES tree (id) NOT DEFERRABLE
);
insert into tree values
(0, 'ROOT', NULL, 0),
(1, 'Node 1', 0, 10),
(2, 'Node 1.1', 1, 10),
(3, 'Node 2', 0, 20),
(4, 'Node 1.1.1', 2, 10),
(5, 'Node 2.1', 3, 10),
(6, 'Node 1.2', 1, 20);
*Write a query
WITH RECURSIVE
tree_search (id, name, level, parent_id, node_order) AS (
SELECT
id,
name,
0,
parent_id,
1
FROM tree
WHERE parent_id is NULL
UNION ALL
SELECT
t.id,
t.name,
ts.level + 1,
ts.id,
t.node_order
FROM tree t, tree_search ts
WHERE t.parent_id = ts.id
)
SELECT * FROM tree_search
WHERE level > 0
ORDER BY level, parent_id, node_order;
Here are the results:
id | name | level | parent_id | node_order
----+------------+-------+-----------+------------
1 | Node 1 | 1 | 0 | 10
3 | Node 2 | 1 | 0 | 20
2 | Node 1.1 | 2 | 1 | 10
6 | Node 1.2 | 2 | 1 | 20
5 | Node 2.1 | 2 | 3 | 10
4 | Node 1.1.1 | 3 | 2 | 10
(6 rows)
The tree nodes are ordered by a level of depth. In the final output we would present them in the subsequent lines.
For each level, they are ordered by parent_id and node_order within the parent. This tells us how to present them in the output - link node to the parent in this order.
Having such a structure it wouldn't be difficult to make a really nice presentation in HTML.
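For instance, a minimal sketch of rendering those rows (Python; the tuples mirror the result table above, and parent_id = 0 marks the top level because the ROOT row is filtered out):
rows = [
    (1, 'Node 1',     1, 0, 10),
    (3, 'Node 2',     1, 0, 20),
    (2, 'Node 1.1',   2, 1, 10),
    (6, 'Node 1.2',   2, 1, 20),
    (5, 'Node 2.1',   2, 3, 10),
    (4, 'Node 1.1.1', 3, 2, 10),
]

children = {}  # parent_id -> child rows, already in sibling order from the query
for row in rows:
    children.setdefault(row[3], []).append(row)

def render(parent_id, depth=0):
    for (node_id, name, level, _, node_order) in children.get(parent_id, []):
        print('    ' * depth + name)  # swap the print for <ul>/<li> emission to get HTML
        render(node_id, depth + 1)

render(0)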
Recursive CTEs are available in PostgreSQL, IBM DB2, MS SQL Server, Oracle and SQLite.
If you'd like to read more on recursive SQL queries, you can either check the documentation of your favourite DBMS or read my two articles covering this topic:
*
*Do It In SQL: Recursive Tree Traversal
*Get to know the power of SQL recursive queries
A: You can emulate any other data structure with a hashmap, so that's not a terrible limitation. Scanning from the top to the bottom, you create a hashmap for each row of the database, with an entry for each column. Add each of these hashmaps to a "master" hashmap, keyed on the id. If any node has a "parent" that you haven't seen yet, create a placeholder entry for it in the master hashmap, and fill it in when you see the actual node.
To print it out, do a simple depth-first pass through the data, keeping track of indent level along the way. You can make this easier by keeping a "children" entry for each row, and populating it as you scan the data.
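A minimal sketch of that scan-and-link approach (Python dicts standing in for hashmaps; the id/parent/name keys and the parent-id-0 root convention are assumptions borrowed from the question):
rows = [
    {'id': 1, 'parent': 0, 'name': 'Node 1'},
    {'id': 2, 'parent': 1, 'name': 'Node 1.1'},
    {'id': 3, 'parent': 0, 'name': 'Node 2'},
]

index = {0: {'children': []}}  # slot 0 collects the roots
for row in rows:
    node = index.setdefault(row['id'], {'children': []})  # reuse a placeholder if one exists
    node.update(row)
    parent = index.setdefault(row['parent'], {'children': []})  # placeholder for an unseen parent
    parent['children'].append(node)

def dump(node, depth=0):  # the depth-first printing pass
    print('    ' * depth + node['name'])
    for child in node['children']:
        dump(child, depth + 1)

for root in index[0]['children']:
    dump(root)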
As for whether there's a "better" way to store a tree in a database, that depends on how you're going to use the data. I've seen systems that had a known maximum depth that used a different table for each level in the hierarchy. That makes a lot of sense if the levels in the tree aren't quite equivalent after all (top level categories being different than the leaves).
A: As of Oracle 9i, you can use CONNECT BY.
SELECT LPAD(' ', (LEVEL - 1) * 4) || "Name" AS "Name"
FROM (SELECT * FROM TMP_NODE ORDER BY "Order")
CONNECT BY PRIOR "Id" = "ParentId"
START WITH "Id" IN (SELECT "Id" FROM TMP_NODE WHERE "ParentId" = 0)
As of SQL Server 2005, you can use a recursive common table expression (CTE).
WITH [NodeList] (
[Id]
, [ParentId]
, [Level]
, [Order]
) AS (
SELECT [Node].[Id]
, [Node].[ParentId]
, 0 AS [Level]
, CONVERT([varchar](MAX), [Node].[Order]) AS [Order]
FROM [Node]
WHERE [Node].[ParentId] = 0
UNION ALL
SELECT [Node].[Id]
, [Node].[ParentId]
, [NodeList].[Level] + 1 AS [Level]
, [NodeList].[Order] + '|'
+ CONVERT([varchar](MAX), [Node].[Order]) AS [Order]
FROM [Node]
INNER JOIN [NodeList] ON [NodeList].[Id] = [Node].[ParentId]
) SELECT REPLICATE(' ', [NodeList].[Level] * 4) + [Node].[Name] AS [Name]
FROM [Node]
INNER JOIN [NodeList] ON [NodeList].[Id] = [Node].[Id]
ORDER BY [NodeList].[Order]
Both will output the following results.
Name
'Node 1'
' Node 1.1'
' Node 1.1.1'
' Node 1.2'
'Node 2'
' Node 2.1'
A: Bill's answer is pretty gosh-darned good; this answer adds some things to it, which makes me wish SO supported threaded answers.
Anyway I wanted to support the tree structure and the Order property. I included a single property in each Node called leftSibling that does the same thing Order is meant to do in the original question (maintain left-to-right order).
mysql> desc nodes ;
+-------------+--------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+-------------+--------------+------+-----+---------+----------------+
| id | int(11) | NO | PRI | NULL | auto_increment |
| name | varchar(255) | YES | | NULL | |
| leftSibling | int(11) | NO | | 0 | |
+-------------+--------------+------+-----+---------+----------------+
3 rows in set (0.00 sec)
mysql> desc adjacencies;
+------------+---------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+------------+---------+------+-----+---------+----------------+
| relationId | int(11) | NO | PRI | NULL | auto_increment |
| parent | int(11) | NO | | NULL | |
| child | int(11) | NO | | NULL | |
| pathLen | int(11) | NO | | NULL | |
+------------+---------+------+-----+---------+----------------+
4 rows in set (0.00 sec)
More detail and SQL code on my blog.
Thanks Bill your answer was helpful in getting started!
A: If nested hash maps or arrays can be created, then I can simply go down the table from the beginning and add each item to the nested array. I must trace each line to the root node in order to know which level in the nested array to insert into. I can employ memoization so that I do not need to look up the same parent over and over again.
Edit: I would read the entire table into an array first, so it will not query the DB repeatedly. Of course this won't be practical if your table is very large.
After the structure is built, I must do a depth first traverse through it and print out the HTML.
There's no better fundamental way to store this information using one table (I could be wrong though ;), and would love to see a better solution ). However, if you create a scheme to employ dynamically created db tables, then you opened up a whole new world at the sacrifice of simplicity, and the risk of SQL hell ;).
A: To extend Bill's SQL solution you can basically do the same using a flat array. Furthermore, if your strings all have the same length and your maximum number of children is known (say in a binary tree), you can do it using a single string (character array). If you have an arbitrary number of children this complicates things a bit... I would have to check my old notes to see what can be done.
Then, sacrificing a bit of memory, especially if your tree is sparse and/or unbalanced, you can, with a bit of index math, access all the strings randomly by storing your tree width-first in the array, like so (for a binary tree):
String[] nodeArray = [L0root, L1child1, L1child2, L2Child1, L2Child2, L2Child3, L2Child4] ...
yo know your string length, you know it
I'm at work now so cannot spend much time on it but with interest I can fetch a bit of code to do this.
We use to do it to search in binary trees made of DNA codons, a process built the tree, then we flattened it to search text patterns and when found, though index math (revers from above) we get the node back... very fast and efficient, tough our tree rarely had empty nodes, but we could searh gigabytes of data in a jiffy.
A: Pre-order transversal with on-the-fly path enumeration on adjacency representation
Nested sets from:
*
*Konchog https://stackoverflow.com/a/42781302/895245
*Jonny Buchanan https://stackoverflow.com/a/194031/895245
is the only efficient way I've seen of traversing, at the cost of slower updates. That's likely what most people will want for pre-order.
Closure table from https://stackoverflow.com/a/192462/895245 is interesting, but I don't see how to enforce pre-order there: MySQL Closure Table hierarchical database - How to pull information out in the correct order
Mostly for fun, here's a method that recursively calculates the 1.3.2.5. prefixes on the fly and sorts by them at the end, based only on the parent ID/child index representation.
Upsides:
*
*updates only need to update the indexes of each sibling
Downsides:
*
*n^2 memory usage worst case for a super deep tree. This could be quite serious, which is why I say this method is likely mostly for fun only. But maybe there is some ultra high update case where someone would want to use it? Who knows
*recursive queries, so reads are going to be less efficient than nested sets
Create and populate table:
CREATE TABLE "ParentIndexTree" (
"id" INTEGER PRIMARY KEY,
"parentId" INTEGER,
"childIndex" INTEGER NOT NULL,
"value" INTEGER NOT NULL,
"name" TEXT NOT NULL,
FOREIGN KEY ("parentId") REFERENCES "ParentIndexTree"(id)
)
;
INSERT INTO "ParentIndexTree" VALUES
(0, NULL, 0, 1, 'one' ),
(1, 0, 0, 2, 'two' ),
(2, 0, 1, 3, 'three'),
(3, 1, 0, 4, 'four' ),
(4, 1, 1, 5, 'five' )
;
Represented tree:
1
/ \
2 3
/ \
4 5
Then for a DBMS with arrays, like PostgreSQL (https://www.postgresql.org/docs/14/arrays.html):
WITH RECURSIVE "TreeSearch" (
"id",
"parentId",
"childIndex",
"value",
"name",
"prefix"
) AS (
SELECT
"id",
"parentId",
"childIndex",
"value",
"name",
array[0]
FROM "ParentIndexTree"
WHERE "parentId" IS NULL
UNION ALL
SELECT
"child"."id",
"child"."parentId",
"child"."childIndex",
"child"."value",
"child"."name",
array_append("parent"."prefix", "child"."childIndex")
FROM "ParentIndexTree" AS "child"
JOIN "TreeSearch" AS "parent"
ON "child"."parentId" = "parent"."id"
)
SELECT * FROM "TreeSearch"
ORDER BY "prefix"
;
This creates on the fly prefixes of form:
1 -> 0
2 -> 0, 0
3 -> 0, 1
4 -> 0, 0, 0
5 -> 0, 0, 1
and then PostgreSQL then sorts by the arrays alphabetically as:
1 -> 0
2 -> 0, 0
4 -> 0, 0, 0
5 -> 0, 0, 1
3 -> 0, 1
which is the pre-order result that we want.
For a DBMS without arrays, like SQLite, you can hack around it by encoding the prefix as a string of fixed-width integers. Binary would be ideal, but I couldn't find out how, so hex works. This of course means you will have to select a fixed width per level, e.g. below I choose 6, allowing for a maximum of 16^6 children per node.
WITH RECURSIVE "TreeSearch" (
"id",
"parentId",
"childIndex",
"value",
"name",
"prefix"
) AS (
SELECT
"id",
"parentId",
"childIndex",
"value",
"name",
'000000'
FROM "ParentIndexTree"
WHERE "parentId" IS NULL
UNION ALL
SELECT
"child"."id",
"child"."parentId",
"child"."childIndex",
"child"."value",
"child"."name",
"parent"."prefix" || printf('%06x', "child"."childIndex")
FROM "ParentIndexTree" AS "child"
JOIN "TreeSearch" AS "parent"
ON "child"."parentId" = "parent"."id"
)
SELECT * FROM "TreeSearch"
ORDER BY "prefix"
;
Some nested set notes
Here are a few points which confused me a bit after looking at the other nested set answers.
Jonny Buchanan shows his nested set setup as:
__________________________________________________________________________
| Root 1 |
| ________________________________ ________________________________ |
| | Child 1.1 | | Child 1.2 | |
| | ___________ ___________ | | ___________ ___________ | |
| | | C 1.1.1 | | C 1.1.2 | | | | C 1.2.1 | | C 1.2.2 | | |
1 2 3___________4 5___________6 7 8 9___________10 11__________12 13 14
| |________________________________| |________________________________| |
|__________________________________________________________________________|
which made me wonder why not just use the simpler looking:
__________________________________________________________________________
| Root 1 |
| ________________________________ _______________________________ |
| | Child 1.1 | | Child 1.2 | |
| | ___________ ___________ | | ___________ ___________ | |
| | | C 1.1.1 | | C 1.1.2 | | | | C 1.2.1 | | C 1.2.2 | | |
1 2 3___________| 4___________| | 5 6___________| 7___________| | |
| |________________________________| |_______________________________| |
|_________________________________________________________________________|
which does not have an extra number for each endpoint.
But then when I actually tried to implement it, I noticed that it was hard/impossible to implement the update queries like that, unless I had parent information as used by Konchog. The problem is that it was hard/impossible to distinguish between a sibling and a parent in one case while the tree was being moved around, and I needed that to decide if I was going to reduce the right hand side or not while closing a gap.
Left/size vs left/right: you could store it either way in the database, but I think left/right can be more efficient as you can index the DB with a multicolumn index (left, right) which can then be used to speed up ancestor queries, which are of type:
left < curLeft AND right > curLeft
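A minimal sketch of that index and query (the tree/lft/rght names are assumptions following the nested-set answers above):
-- multicolumn index covering the ancestor lookup
CREATE INDEX tree_lft_rght ON tree (lft, rght);

-- ancestors of the node whose left bound is :curLeft
SELECT * FROM tree WHERE lft < :curLeft AND rght > :curLeft;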
Tested on Ubuntu 22.04, PostgreSQL 14.5, SQLite 3.34.0.
A: If elements are in tree order, as shown in your example, you can use something like the following Python example:
delimiter = '.'
stack = []
for item in items:
while stack and not item.startswith(stack[-1]+delimiter):
print "</div>"
stack.pop()
print "<div>"
print item
stack.append(item)
What this does is maintain a stack representing the current position in the tree. For each element in the table, it pops stack elements (closing the matching divs) until it finds the parent of the current item. Then it outputs the start of that node and pushes it to the stack.
If you want to output the tree using indenting rather than nested elements, you can simply skip the print statements to print the divs, and print a number of spaces equal to some multiple of the size of the stack before each item. For example, in Python:
print " " * len(stack)
You could also easily use this method to construct a set of nested lists or dictionaries.
Edit: I see from your clarification that the names were not intended to be node paths. That suggests an alternate approach:
idx = {}
idx[0] = []
for node in results:
child_list = []
idx[node.Id] = child_list
idx[node.ParentId].append((node, child_list))
This constructs a tree of arrays of tuples(!). idx[0] represents the root(s) of the tree. Each element in an array is a 2-tuple consisting of the node itself and a list of all its children. Once constructed, you can hold on to idx[0] and discard idx, unless you want to access nodes by their ID.
A: Think about using NoSQL tools like neo4j for hierarchical structures.
e.g. a networked application like LinkedIn uses Couchbase (another NoSQL solution)
But use NoSQL only for data-mart level queries and not to store / maintain transactions | unknown |