
Wikipedia:Reference desk/Archives/Computing/2010 September 2

Welcome to the Wikipedia Computing Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


September 2


Compiler Writing: NOT Parser Generators!


I've searched for compiler-writing tools and compiler-compilers on Wikipedia and on Google, but when I look at their websites they say they are parser generators (used for making domain-specific languages for analysis, not the general-purpose languages I want to make, might I add) and make no mention of compiling SDFs or whatever into compilers! It is driving me nuts. Is "parser generator" just some fancy term for compiler among compiler-writing developers, or am I missing something? Is there a good tool out there for Windows that I can use to make compilers (open source, BSD or other permissive license, preferably)? --Melab±1 01:04, 2 September 2010 (UTC)[reply]

A lexical analyzer and a parser are two major components of a compiler. The other major component is the logic that decides what to do with the tokens after they've been parsed. That part will pretty much have to be written by a human.
Luckily for anyone crazy enough to want to do this, this is a common "final project" for C.S. majors, so there are books and stuff on how to do it. (Personally, I opted out and wrote a video game instead.) I make no claims for the quality of either of these two sources (like I said, I skipped this), but this seems to get good reviews, and some folk seem to be very impressed by this older (but probably still mostly valid) guide on the same topic. APL (talk) 02:57, 2 September 2010 (UTC)[reply]
Programs are (conceptually) tree-like things, but, because of the human mind's amazing facility with language, it turns out to be best to represent those trees in a linear fashion, as text, in languages that are (approximately) context-free languages. Turning the text back into a tree is considered a solved problem (though there's still a little bit of research in that area!), and that's what "compiler-compilers" or "parser generators" do. But a compiler is really something that translates from one language to another, and that part is a classic programming task that's best left in the hands of humans.
There are two things that any prospective compiler writer must think about:
  • Can an interpreter do the same job? For a lot of tasks, interpreting a language is easier than compiling it, and the speed penalty isn't worth worrying about.
  • What's the best destination language? Someone has to write a compiler that takes programs to assembly language, but most people don't need to. It's far easier to compile to a pre-existing language with a good implementation. Taking this philosophy to an extreme, the macros in Lisp or (especially) Scheme amount to a tower of gradually-more-powerful languages, each compiled down to the next with an incredibly simple compiler. Common destination languages include C and LLVM, but compiling down to a high-level language like Haskell or Scheme or C++ or ML is also done.
Paul (Stansifer) 04:25, 2 September 2010 (UTC)[reply]
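The "turning text back into a tree" step described above can be sketched with a toy tokenizer and recursive-descent parser. This is a hypothetical illustration (not the output of any particular parser generator) for a two-rule grammar:

```javascript
// Toy sketch: turn linear text back into a tree.
// Grammar: expr := term (('+' | '-') term)* ; term := NUMBER
function tokenize(src) {
  return src.match(/\d+|[+\-]/g) || [];
}

function parse(tokens) {
  var pos = 0;
  function term() {
    return { type: 'num', value: Number(tokens[pos++]) };
  }
  function expr() {
    var node = term();
    while (tokens[pos] === '+' || tokens[pos] === '-') {
      var op = tokens[pos++];
      node = { type: op, left: node, right: term() };
    }
    return node;
  }
  return expr();
}

// "1 + 2 - 3" becomes a left-leaning tree: (- (+ 1 2) 3)
var tree = parse(tokenize('1 + 2 - 3'));
```

Everything after this point (what to do with the tree) is the part of the compiler that the tools can't write for you.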


There's no reason you need to re-invent the parser every time you want to invent a new compiler. Parsing is the boring task of sophisticated string tokenization. There's not much to improve or innovate there - it's just a necessary, difficult, and boring part of machine text analysis. Instead of re-inventing this (and similar parts of the compiler), learn to use the already existing Flex and Bison. (Our articles may be the best place to start.) These tools do the grunt work of text analysis, leaving the compiler design to you. The real meat-and-potatoes of compiler design is the sophisticated process of describing your high-level language in a structured form, expressible in (for example) Backus-Naur form. That is the first step. Once the language is in this kind of intermediate representation, the second step (mapping BNF to a set of machine instructions) is largely mechanical. You should not waste your time re-designing that second mapping (especially if you are unfamiliar with the 900 or so instructions available on a modern Intel CPU, or other CPU architecture of your choice).
Furthermore, using standard tools means that you can plug your language into existing systems. A lot of compilers for many different languages use the same back-end, so that their authors can focus on designing their language instead of targeting the instruction set architecture of particular machines. For example, almost all of the compiler tools in the gcc kit (C, FORTRAN, Delphi, Pascal, ... and on and on and on) use the same gcc back-end structure. This way, your Delphi and your Pascal and your C++ programs can all run through the same hardware-optimization routines (and so on). (...totally and irrefutably invalidating any claim about any programming language that inherently yields "better performance" or "faster code" - but I digress).
Anyway, if you really want, you can re-invent these steps; nothing stops you from writing your own parser and lexical analyzer. You can also code a compiler in assembly language; and you can also redefine a character encoding that uniquely re-interprets machine binary representations as characters of your choice, (ASCII has many shortcomings), and you can write your own text-editor and character-mode video driver that can understand your homebrew text-encoding. There is a certain line you have to draw - decide the tradeoff between how much time you want to spend re-inventing things, and how much time you want to spend learning the existing tools. Nimur (talk) 04:34, 2 September 2010 (UTC)[reply]
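To make that "first step" concrete, a fragment of a grammar written in Backus-Naur form for a hypothetical toy expression language might look like this (an illustration only, not any particular tool's input syntax):

```bnf
<expr>   ::= <term> | <expr> "+" <term> | <expr> "-" <term>
<term>   ::= <factor> | <term> "*" <factor>
<factor> ::= <number> | "(" <expr> ")"
```

Tools like Bison accept grammars in essentially this shape and generate the parsing code for you.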
Responding to your small text: there are many reasons why some languages yield smaller or faster machine code independently of the backend. For example, C doesn't specify the effect of an out-of-bounds array access. This means that an array read or write can legally be compiled down to a single indexed read/write at the machine level. Most other languages require that the index be checked against the array's upper and lower bounds (which must also be loaded from memory) and a certain exception be raised if the test fails. A correct optimizer can't drop the bounds tests unless it can prove they will always succeed (or always fail). In many cases that can't be proven because it's false, and even when it's true, proving theorems is hard even for human mathematicians and it's even harder to program a computer to do it (with useful efficiency). The best you can probably hope for is optimizing away the tests in a simple loop like for (i = 0; i < x.Length; ++i) { ... }, but even that may fail if there's a function call in the body, because it may be hard to prove that the function won't modify x's length. The programmer apparently expects it not to, but programmers are often wrong and optimizers have to preserve program semantics.
There are lots of other examples like this. In an untyped language it's hard to optimize away the extra runtime type checks. In a language with powerful runtime reflection and/or a powerful eval statement, you can't inline any function because you can't prove that it won't change at runtime. In this old thread I listed a few reasons why it's hard to efficiently compile Java and Java-like languages. Aliasing is another interesting example. A C89 compiler must assume that any two pointer expressions of the same type may alias each other unless it can prove otherwise. FORTRAN prohibits aliasing of arrays, which makes automatic vectorization easier. This is one of the reasons why FORTRAN dominated numerical computing for so long. C99 added the restrict keyword to enable optimizations that assume no aliasing. -- BenRG (talk) 23:57, 2 September 2010 (UTC)[reply]
Maybe "irrefutable" was an overly-strong word... : ) You are right, and you bring up some cases (like exception handling, and pointer mangling, and function inlining, and garbage collection) that are language-specific and could limit performance. But for many programs (actually, for almost any code other than numerical data-processing kernels), these optimizations have minimal impact on wall-clock execution time. The most important optimizations - substituting machine-architecture (ISA) enhancements for long streams of common instructions - work on code generated from any source language. For example, GFortran translates all FORTRAN code to GIMPLE and then runs the exact same GIMPLE optimizer as it generates machine code (so whether your program is C++ or Java/gcj, gcc will automatically compile in SSE and 64-bit operations if your computer supports them). So exactly those optimizations you talk about - like handling pointer aliasing and so on - can be handled in a source-language-invariant way (unless the language specifically interferes with these optimizations). Overlapping arrays are the best example of a case where the language specification will actually enable or forbid certain optimizations (like out-of-order execution on pointer operations); if the language guarantees these cannot exist, the compiler is free to do fancy things. But, outside of sophisticated vector-math numerical kernels, how many programs meaningfully reap the benefit of OOE array operation optimizations? (It should be noted that even ifort defaults to assuming aliasing - probably because, unless you manually specify -fno-alias, it runs through a "C-language"-style optimizer that refuses to make the FORTRAN-specific assumption that arrays are "safe".) Here's the User Guide for IFORT, with more information than you ever wanted about optimized compilation. Nimur (talk) 00:28, 3 September 2010 (UTC)[reply]

Backing-up to a NAS drive over the Internet?


I would like to be able to attach a drive to my mum's router at home so that my computer away at University will be able to back-up via the Internet (updating the back-up to reduce the total file transfers). Is this possible, though? I contacted one or two companies a while ago, and I believe that their responses were that their NAS drives would allow back-up via the local network but not via the Internet. --89.243.142.40 (talk) 01:14, 2 September 2010 (UTC)[reply]

I believe that remote access to the NAS is possible, but you will need to configure your router to allow it. First, please note that any change that allows outside access into the home network represents a security risk, so you need to make sure that the proper security features (i.e. passwords) are enabled on the NAS device. I assume that your home has a typical network that includes some kind of modem (cable, ADSL, or similar), which is connected to a router, which in turn has wired and/or wireless connections to the other devices (computer(s), NAS device, etc.) in the home.
What is commonly referred to simply as a router also provides firewall and network address translation (NAT) features which act as a bridge between your private home network (LAN) and the rest of the internet (WAN). Inside your home network, devices have locally distinct IP addresses typically of the form 192.168.xx.xx, assigned by the router. However, your internet service provider will only assign a single IP address to the modem, such as the 89.243.142.40 address we see in your original post. This is the address that the rest of the internet sees.
Normally, any computer inside your private network (or LAN) can initiate a connection out to the internet (the WAN), but any unsolicited attempts from the outside to connect into your network are refused. NAT serves to keep track of what's connected to what, so when one computer sends out a request, the response is routed back to the requesting computer. However, when an inbound connection request is received, the router does not automatically know which local device should receive the request, so the request is dropped. Port forwarding allows you to configure your router to send specific connection requests to specific devices, to send all requests to a designated device (sometimes called the DMZ), or a combination. Port numbers generally identify the type of internet connection request. For example, port 80 is HTTP, which you might want to route to your computer if it were running a web server. Port 21 is generally used for FTP, and you may wish to configure your router to forward such requests to your NAS. (You may have already set up some port forwarding if you use some internet games.) You will need to check your router's documentation to see in detail how to configure port forwarding. You will also need to check your NAS documentation to identify what ports and protocols are supported, and of course how to set up the security.
Port forwarding also allows you to translate port numbers, a feature called port translation. For example, you may wish to map inbound port 8080 to port 80 on your local computer and map inbound port 12345 to port 21 on your NAS. Sometimes this is necessary because many ISPs block certain well-known ports to discourage customers from running home-based servers. It also allows you to assign an arbitrary number to the service to make it less likely that a hacker will detect your server using port scanning techniques.
The final key that you will need is the IP address of the home network. Ideally, it remains static over time, but ISPs generally do not guarantee this, and it may change after a power or service outage. I believe there are some services out there that allow you to periodically post the home's current IP address so that you can look it up remotely, but someone else will need to fill in these details. Once you have port forwarding configured and know the home IP address, you can remotely access the NAS as something like "ftp://89.243.142.40/" or "ftp://username:password@89.243.142.40:12345/" (where "username" and "password" are what you configured in your NAS security setup and ":12345" identifies your externally defined port). From this point you need to look at your backup software to see how to configure it to connect to your distant NAS.
Good luck. I'm sure others can fill in any missing details and correct any error in the above. -- Tom N (tcncv) talk/contrib 04:10, 2 September 2010 (UTC)[reply]
I use TeamViewer to remotely access my home PC. It allows file transfers. Not sure if it supports backups as such. — Gadget850 (Ed) talk 15:37, 2 September 2010 (UTC)[reply]
Re: changing IPs above, a simple solution would be to use one of the many free dynamic DNS services. Many routers come with built-in support for updating a dynamic DNS entry; if yours doesn't, you can either install software on your mother's computer (which should be easier, but will obviously only work while that computer is on) or, since I believe many NAS devices run some sort of specialised Linux or similar, you could probably install something on the NAS itself (if it doesn't come with support built in; some may, though I don't know much about NAS devices). If you're using FTP there may be a bit more work to get it working properly, however. Nil Einne (talk) 08:40, 3 September 2010 (UTC)[reply]

This Type of Image


Kindly have a good look at this type of image. Please note that my question is not about the format but the style of the image. Such images are perhaps (if I am not wrong) only found on Wikipedia. What I find intriguing as an artist is the ease with which the contrasts of colors are played upon to fool the human eye, which is a very difficult effect to achieve even digitally, let alone with a brush. What I'd like to know is what this movement in computer art is known as, and what tool is typically used to conjure up this kind of stuff (I don't think it's GIMP). Is there a history behind this movement, i.e. who invented it, etc.? Are there any Wikipeople out there who are gurus in this area?  Jon Ascton  (talk) 05:57, 2 September 2010 (UTC)[reply]

This is a vector image; it uses gradient fills (specifically, radial gradients), which are part of the SVG vector image format specification, to define the smooth color blends. The same effect can be achieved using GIMP on raster images, though in this case Inkscape was used. The style could be considered pop art, but it's hard to classify. These images aren't only found on Wikipedia, but this particular image comes from a freely-licensed icon set, so it is advantageous for use on Wikipedia. You can find stylistically similar icons and logos in many other free and non-free projects. Nimur (talk) 06:12, 2 September 2010 (UTC)[reply]
I would emphatically dispute classifying it as Pop art, unless you take the perspective that everything is Pop art. A quick look at the Pop art article makes it clear how different it is; it has really nothing in common except a very broad modernist base. --Mr.98 (talk) 11:33, 2 September 2010 (UTC)[reply]
Ok, I am not an art critic, so I don't know the definitions well! I see bright colors, bold lines, and stylized cartoonish figures; but if that is not "pop art" in the ordinary definition, then I am mistaken in my classification! Nimur (talk) 20:54, 2 September 2010 (UTC) [reply]
Pop art qua Pop art is defined not so much by its appearance as by its intentions. The article discusses this quite well. It's about using the visual language of popular commodities and moving them into a high art context. So it's Andy Warhol painting Campbell's soup cans, as a way of saying, "look, I'm making art by mimicking advertisements," or Roy Lichtenstein making fake "comic book" panels, saying, "look, this wouldn't normally look like art except for the fact that I am blowing it up and claiming it is art," and so on. The general aesthetic I think you are meaning is just modernism more broadly — generally simplistic, often (but not always) basic colors, appeals to function rather than ornate form, stripped down, etc. E.g. Bauhaus or late Piet Mondrian and so forth. I'm not an art critic either, but Pop art is the wrong category altogether. --Mr.98 (talk) 22:14, 2 September 2010 (UTC)[reply]
Obviously it is a vector image, and the GIMP and Photoshop specialize in raster images. It would actually be quite easy to draw such an image in an illustration program such as Adobe Illustrator. You draw three squares. Then, you stack them on top of each other and use the Pathfinder palette to make them cut into each other. Then, you go to Effect --> 3D --> Extrude & Bevel. The light above was probably added by drawing a white circle over the shape and then lowering its opacity.
It would be even easier to construct that shape in a dedicated 3D program like Autodesk Maya, where you can add lights. It would look more realistic, as well.--Best Dog Ever (talk) 06:16, 2 September 2010 (UTC)[reply]
Which, by the way, would ruin the stylistic effect of that nice icon. Comet Tuttle (talk) 06:26, 2 September 2010 (UTC)[reply]
Irrelevant. My point is simple: the drawing is simply three squares and a circle that have been transformed. It is true that it would take a lot of skill for a human to draw that. But it was drawn by a computer. The drawing program only sees three squares and a circle with a list of transformations. (At least, that's how it would be represented in PostScript after being drawn by Adobe Illustrator.) The work was done mostly by a computer, and it did it with ease.--Best Dog Ever (talk) 06:38, 2 September 2010 (UTC)[reply]
This is quibbling now, but saying "the work was done mostly by a computer" is false. This was created by a computer artist who utilized some software as a tool to create the image. Comet Tuttle (talk) 15:22, 2 September 2010 (UTC)[reply]
The general look of the image, with its radial gradients and soft beveling, is generally modeled after the iconography of the "Aqua" style popularized by Apple, Inc. (see, e.g., their logo, Mail.app icon, Safari's icon). It has since been copied/emulated/improved upon/etc. by Windows (Windows Vista's Aero theme is an obvious descendant) and a number of Linux themes (like the icon you posted there, from KDE). If you Google "Apple icon effect" you can find many tutorials; it is not very hard to accomplish with a raster or vector editor. --Mr.98 (talk) 11:33, 2 September 2010 (UTC)[reply]
See Computer_icon, icon design, and Graphical user interface elements. Smallman12q (talk) 01:09, 4 September 2010 (UTC)[reply]

Difference between SIMD and Vector Processors


What is the exact difference between SIMD and Vector processors?

Is there any difference from computation philosophy?

I have worked with a few processors where 'data parallelism' is implemented by one of two methodologies:

(1) Some processors (MIPS, SNE) have instructions to treat a 32-bit register as a set of 4 (or 2) independent 8-bit (or 16-bit) elements and operate over them. Vendors of such processors call this SIMD.

(2) Some processors (SiliconHive, SPI) have special 256-bit registers (vector registers) and instructions to operate over these vector registers, treating each as a set of 16 16-bit elements. Vendors of such processors call these vector processors.

Other than the increased element count, is there any difference between SIMD and vector processing?

—Preceding unsigned comment added by ArpanH (talkcontribs) 11:27, 2 September 2010 (UTC)[reply]

The terms are generally interchangeable. SIMD is a more specific description of how a processor operates on a vector. "Vector processor" just indicates that the processor is aware of vectors (groups) of data. Since the typical purpose for treating separate data as a single vector is to use the same operation on all of it, most vector operations are SIMD operations. Nimur (talk) 16:09, 2 September 2010 (UTC)[reply]
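The first style the questioner describes, packed lanes inside an ordinary register, can even be imitated in plain code ("SIMD within a register"). This is a hypothetical sketch, not any vendor's instruction set: it adds the four 8-bit lanes of two 32-bit words in one pass, which is what a packed-add SIMD instruction does in hardware.

```javascript
// Treat one 32-bit word as four independent 8-bit lanes and add
// lane-wise, without letting a carry in one lane spill into the
// next (each lane wraps mod 256).
function addLanes8(a, b) {
  var H = 0x80808080;   // the high bit of every 8-bit lane
  var L = ~H;           // the low 7 bits of every lane
  // Add the low 7 bits of each lane (these sums can never carry
  // across a lane boundary), then fold the high bits back in with
  // XOR, discarding any carry out of each lane.
  return (((a & L) + (b & L)) ^ ((a ^ b) & H)) >>> 0;
}

// Lanes 01,02,03,FF plus 01,01,01,01 -> 02,03,04,00 (last lane wraps)
var packed = addLanes8(0x010203FF, 0x01010101);
```

A hardware SIMD unit does the same thing with dedicated lane logic instead of bit tricks; a vector processor generalizes the idea to longer registers and, historically, to streaming over whole arrays.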

Conventional Explosives Simulation Software


Hi.

   Is there any software (whether open-source, freeware or commercial) which can be used to simulate the deformation and detonation of an explosive warhead? This software should be able to accept input of shape, size, dimensions, mass, etc. of both the explosive filler and the warhead-wall; and output the following numerical values.

  • Total Blast Radius
  • Lethal Blast Radius
  • Brisance

   The software must support designing with the following 22 explosives:

  1. Mercury Fulminate
  2. Triazidotrinitrobenzene
  3. Triaminotrinitrobenzene
  4. Dinitrodiazenofuroxan
  5. Trinitroaniline
  6. Trinitrobenzene
  7. Trinitrotoluene
  8. Nitroglycol
  9. Tetranitroglycoluril
  10. Nitroglycerine
  11. Mannitol Hexanitrate
  12. Pentaerythritol Tetranitrate
  13. Cyclotrimethylenetrinitramine
  14. Cyclotrimethylene Tetranitramine
  15. Hexanitrohexaazaisowurtzitane
  16. Octanitrocubane
  17. Nitroguanidine
  18. Nitrocellulose
  19. Ammonium Nitrate
  20. Methyl Nitrate
  21. Urea Nitrate
  22. Lead Azide & Silver Azide

   Basically, now I'm looking for a free-form explosive warhead design software. Thanks to everyone. Rocketshiporion 12:11, 2 September 2010 (UTC)[reply]

r u a terrorist? —Preceding unsigned comment added by Tomjohnson357 (talkcontribs) 13:29, 2 September 2010 (UTC)[reply]
You're looking for open source software to design a warhead? I refer you to the answers to your similar question above about looking for open source software to design a nuclear weapon. Comet Tuttle (talk) 15:21, 2 September 2010 (UTC)[reply]
He/She wants to simulate an explosion with software. 82.44.55.25 (talk) 15:53, 2 September 2010 (UTC)[reply]
If you want to animate an explosion, here are some tutorials: Object Explosion for 3D Studio Max, and Large Scale Explosions for Blender (software). If you want to accurately simulate, then the responses to your previous question still apply. If you really want to research this stuff, consider applying for an advanced technical degree program at a school that studies energetic materials. For example, New Mexico Tech is a science and engineering research university with a specialty in energetic materials. But they won't just hand out information, software, and material support to anybody - because this stuff can be very dangerous. The process of building credibility in these kinds of areas is long and slow and subject to regulation. Nimur (talk) 16:23, 2 September 2010 (UTC)[reply]
I should clarify; I already have the relevant information (for most of the above-mentioned explosive compounds) with which to calculate blast radii. I currently have to manually calculate the blast radii based on the RE factor and mass (and shape, in the case of a shaped charge) of the explosive filler. What I need is just software into which I can plug the equations for each of the above explosive compounds, in order to automate the process of calculating blast radii, and thereafter animate the explosion. Rocketshiporion 18:15, 2 September 2010 (UTC)[reply]
So go use a spreadsheet, then. Masked Booby (talk) 02:45, 3 September 2010 (UTC)[reply]

Cyclotrimethylenetrinitramine? You have some pretty potent stuff there... are you looking to make an accurate simulation of these military-grade explosives? What exactly with, and why? Smallman12q (talk) 01:14, 4 September 2010 (UTC)[reply]


Based on Steve Baker's question above and my reply... Does anyone know of a very tiny flash app that I can put on a web page and then save/fetch cookies through javascript? It would be best if it was a rather invisible little one-pixel app that could be shoved into a corner of the page and not hinder the rest of the page design. The goal is to save cookies in one web browser and fetch those same cookies in another web browser on the same computer. -- kainaw 12:14, 2 September 2010 (UTC)[reply]

It would be easy to make one yourself; [1] [2] ¦ Reisio (talk) 16:28, 3 September 2010 (UTC)[reply]

broadband numbers


my isp gave me these numbers for my broadband

   34.2                   -15.4              49.8   


they said -15.4 was very bad. what do these numbers mean? —Preceding unsigned comment added by Tomjohnson357 (talkcontribs) 13:28, 2 September 2010 (UTC)[reply]

Did they really not explain further? My first thought was that they were speeds, but I'm not sure. Chevymontecarlo 14:15, 2 September 2010 (UTC)[reply]

What's the ISP? From looking at them, I'd guess they're downstream signal-to-noise levels measured in decibels. 82.44.55.25 (talk) 14:33, 2 September 2010 (UTC)[reply]

Yeah, probably. I'm going to guess you have a cable modem, and not DSL. If those are cable modem signal levels, the -15.4 is probably your downstream power level, which is a bit outside the recommended range. If you research it a bit (look up your cable modem model number), you can probably figure out how to check these stats directly on your cable modem's configuration page. DSL modems have transceiver statistics, which look similar, but afaik they don't usually have any negative numbers. Indeterminate (talk) 16:11, 2 September 2010 (UTC)[reply]
The poor signal power probably indicates that either you are far away from their networking equipment, or that the cable connection is degraded or has many splits. This is something the cable provider needs to handle; there is nothing you can do about it. Nimur (talk) 16:27, 2 September 2010 (UTC)[reply]

Javascript wait

Resolved

I'm trying to write a script for greasemonkey that autofills a box. However the box doesn't appear for 5 seconds after the page loads. Is there a "wait" command in javascript that could pause the script for 5 seconds before continuing? 82.44.55.25 (talk) 18:15, 2 September 2010 (UTC)[reply]

Write one function that waits with a timeout and another function that does the work. The timeout requires the function name of the second function and will have the syntax: window.setTimeout("yourSecondFunction()", 5000); // 5000 is how many milliseconds to wait. -- kainaw 18:18, 2 September 2010 (UTC)[reply]
Thanks. What would be the function name of
document.getElementsByName("input")[0].value = "Hello";
82.44.55.25 (talk) 18:52, 2 September 2010 (UTC)[reply]
You can place that statement into the timeout as: window.setTimeout("document.getElementsByName('input')[0].value = 'Hello';", 5000); Keep an eye on those quotes. I changed to single quotes inside the double quotes to avoid escapes. -- kainaw 19:10, 2 September 2010 (UTC)[reply]
Thanks! 82.44.55.25 (talk) 19:21, 2 September 2010 (UTC)[reply]
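A side note on the exchange above: passing a function to setTimeout instead of a string avoids the quoting problem entirely. A sketch, assuming the same page structure as in the question; the typeof guard is only there so the snippet also loads outside a browser:

```javascript
// Greasemonkey-style sketch: wait 5 seconds, then fill the first
// element named "input". Passing a function means no nested quotes
// and no string evaluation.
function fillBox() {
  var box = document.getElementsByName('input')[0];
  if (box) box.value = 'Hello';
}

if (typeof window !== 'undefined') {
  window.setTimeout(fillBox, 5000); // 5000 ms = 5 seconds
}
```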

NP theory


sir, I have been through your article about NP, NP-Complete and NP-Hard definitions and problems, but I can't understand the definition of the NP-Hard class, and the difference between NP-Complete and NP-Hard.

What do you mean by "at least as hard as the hardest problems in NP"? Please explain elaborately! —Preceding unsigned comment added by A khan0001 (talkcontribs) 18:37, 2 September 2010 (UTC)[reply]

All NP problems are decision problems. They have an answer: Yes or No. There are many examples. The NP-Complete ones are the difficult ones, and they can all be translated into one another. For example, if I give you a 3-SAT problem, you can rewrite it as a Travelling Salesman problem and then solve the Travelling Salesman problem to solve the 3-SAT problem that I gave you. NP-Hard problems include the NP-Complete problems, but much more: NP-Hard overlaps NP, and the NP-Complete problems are exactly those that are both NP-Hard and in NP. NP-Hard includes decision problems and optimization problems. So, consider this: "Is there a way to travel over all the bridges in town without crossing a single bridge twice?" The answer is Yes/No, so it is in NP. It can be NP-Hard also (if it is NP-Complete). Consider this: "What is the shortest route that crosses all bridges in town without crossing a single bridge twice?" The answer is a route, not Yes/No, so it is not in NP. It is an optimization problem. It is at least as hard as the previous decision problem. So, if the original decision problem is NP-Complete, this is NP-Hard. -- kainaw 18:43, 2 September 2010 (UTC)[reply]
Wait a minute, not all Yes/No problems are NP, and the "vice versa" part is only formal.
For example, the problem "does a given Turing machine halt?" has a Yes/No answer, but it's not NP. It's not even decidable by a fixed algorithm. See halting problem.
For the converse, yes, it's easier to do some formal manipulations when you restrict your attention to Yes/No problems, but NP is "morally" more general than that. For example, "factor a given number into primes" is NP in every way except purely formally. --Trovatore (talk) 18:53, 2 September 2010 (UTC)[reply]
I was purposely being general because this questioner appears to still be at the "I don't understand the difference between an NP and NP-Hard problem" stage. Too many details lead to confusion, not clarity. -- kainaw 18:55, 2 September 2010 (UTC)[reply]
Do you think that flat wrong answers lead to "clarity"? Your answer was not even in the right direction; it was not helpful whatsoever. --Trovatore (talk) 19:04, 2 September 2010 (UTC)[reply]
Also, factoring into primes is not in NP for two reasons. First, it isn't a decision problem. Second, it is not verifiable in polynomial time because verifying extremely large numbers are truly prime is a difficult problem. The second step of understanding NP that I normally go to is verification of the answer. It must be simple to verify. That is why the halting problem is not NP. If you say "no, it won't halt", how can I verify that answer in polynomial time? -- kainaw 18:59, 2 September 2010 (UTC)[reply]
Factoring into primes can be rephrased as a decision problem, in such a way that a polynomial-time answer to one gives you a polynomial-time solution to the other, and vice versa. Something like "given n and k, is the k'th bit from the right of the smallest prime factor of n equal to 1?".
As for your second point, that's less trivial, but in fact it is now known that it is possible to decide in polynomial time whether a given number is prime. If that weren't the case, we could still talk about "factoring" in general being NP, I think, just not "factoring into primes" (it would require more care in the statement).
Your last point is correct, but you didn't say anything about it in the response to the OP, which made your answer completely useless. --Trovatore (talk) 19:10, 2 September 2010 (UTC)[reply]
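The bit-query reduction described above can be sketched in code. In this toy Python illustration (the function names are mine, and trial division stands in for whatever actually solves the search problem), polynomially many yes/no queries to the decision problem reconstruct the smallest prime factor bit by bit:

```python
def smallest_prime_factor(n):
    # Stand-in for any solver of the search problem (trial division here).
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n  # n itself is prime

def bit_oracle(n, k):
    # The decision problem: is the k'th bit (from the right) of the
    # smallest prime factor of n equal to 1?
    return (smallest_prime_factor(n) >> k) & 1 == 1

def factor_via_oracle(n):
    # A polynomial-time solver for the decision problem yields a
    # polynomial-time solver for the search problem, using only
    # n.bit_length() yes/no queries.
    f = 0
    for k in range(n.bit_length()):
        if bit_oracle(n, k):
            f |= 1 << k
    return f

# factor_via_oracle(91) reconstructs 7, since 91 = 7 * 13
```

This is why the two formulations are polynomial-time equivalent, even though only the yes/no version is formally in NP.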
Actually, for (the decision version of) factoring into primes being NP, you don't need primality testing to be in P. It is sufficient to know that it is in NP, and this was shown almost three decades before AKS (and it's much simpler to prove).—Emil J. 10:18, 3 September 2010 (UTC)[reply]
Simply put, an NP-hard problem is at least as hard as the hardest problems in NP, but it may be (much) harder and therefore not itself in NP; an NP-complete problem is NP-hard and is in NP. An NP-complete problem is a hardest problem in NP. --98.114.98.162 (talk) 04:32, 3 September 2010 (UTC)[reply]

Typing credit card details into SSL-enabled CGI proxies

[edit]

Is it safe? If not, is creating a bypass proxy a safe way of doing it (as described in this tutorial: http://www.erasparsa.com/Create-Your-Own-CGI-Proxy-to-Bypass-the-Great-Firewall-of-Indonesia)? Thanks in advance --Mark PEA (talk) 19:58, 2 September 2010 (UTC)[reply]

I only checked the first script on that page, and that one apparently (according to its readme) offered you a form in which you can type a URL. I wouldn't trust any proxy of that type with confidential data. If the other CGIs work the same way - a form, as opposed to allowing you to set it as a proxy server in your browser - they'd be unsafe too. A proper proxy allows your browser to use the CONNECT verb (see Hypertext Transfer Protocol) for SSL, and then simply passes along the data without being able to see it. As long as you verify the SSL certificate and you trust the issuer, and as long as your browser hasn't been tampered with, such connections should then be secure. 82.75.185.247 (talk) 22:07, 2 September 2010 (UTC)[reply]
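To illustrate the CONNECT mechanism described above, here is a rough Python sketch (host names are placeholders, and this is a simplified outline rather than production code): the client asks the proxy for a raw tunnel, then negotiates TLS end-to-end through it, so the proxy only ever relays ciphertext.

```python
import socket
import ssl

PROXY_HOST, PROXY_PORT = "proxy.example.net", 8080  # placeholder proxy
TARGET = "secure.example.com"                       # placeholder site

# The CONNECT request asks the proxy to open an opaque TCP tunnel.
connect_request = (
    f"CONNECT {TARGET}:443 HTTP/1.1\r\n"
    f"Host: {TARGET}:443\r\n"
    "\r\n"
).encode("ascii")

def open_tunnel():
    sock = socket.create_connection((PROXY_HOST, PROXY_PORT))
    sock.sendall(connect_request)
    # Expect "HTTP/1.1 200 Connection established" from the proxy.
    if not sock.recv(4096).startswith(b"HTTP/1.1 200"):
        raise OSError("proxy refused the tunnel")
    # TLS is negotiated with the target, not the proxy: the certificate
    # is verified against TARGET, so the proxy cannot read the traffic.
    ctx = ssl.create_default_context()
    return ctx.wrap_socket(sock, server_hostname=TARGET)
```

A form-based CGI proxy, by contrast, terminates the SSL connection itself and sees your plaintext, which is why it can't be trusted with card details.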
An SSL proxy would mean that the proxy would see your data...the SSL means that the traffic to the proxy is encrypted, but it doesn't mean the proxy is trustworthy.Smallman12q (talk) 01:17, 4 September 2010 (UTC)[reply]
So I assume it is something like: Me -> Proxy (HTTP), Proxy -> Website (HTTPS). Is there no way of getting a Me -> Proxy (HTTPS) connection? --Mark PEA (talk) 17:36, 4 September 2010 (UTC)[reply]
I think you misunderstand...it goes like this: You->SSL(Proxy SSL)->Proxy->SSL(website SSL)->website. If I understand correctly, what you want is to be able to securely access a website (that is also secure); see Tunneling_protocol, Virtual_private_network, and Secure_Shell. Provided your proxy is actually secured, you shouldn't be susceptible to a Man-in-the-middle attack. Hope this helps. Smallman12q (talk) 22:45, 4 September 2010 (UTC)[reply]

Compressing

[edit]
Resolved

I want to compress around 90,000 .mht files. I'm using 7zip with the solid compression option, which works extremely well. I've read about solid compression being susceptible to corruption, for example if one part of the file is damaged everything after it also becomes unsalvageable. Would setting the "solid block" size lower to something like 1GB secure against complete failure? If part of the file got damaged, would only the parts in that "block" be unsalvageable, but files in other blocks would be ok? Is that how solid compression blocks work? 82.44.55.25 (talk) 20:11, 2 September 2010 (UTC)[reply]

Yes. If you're worried about corruption, some PAR2 parity files will protect you better than limiting the block size. -- BenRG (talk) 00:03, 3 September 2010 (UTC)[reply]
Just make redundant backups. ¦ Reisio (talk) 02:32, 3 September 2010 (UTC)[reply]
Actually depending on how many redundant backups you create and in what form, some sort of parity files may be better. I personally prefer ICE ECC [3] although it's closed source so may not be a good solution long term Nil Einne (talk) 13:04, 4 September 2010 (UTC)[reply]
For such a specific question, I think the best answer to your question is going to actually come from experimentation. Try compressing 1000 files, using two different block sizes, then use a hex editor to overwrite part of each file with a bunch of random digits, identically, then see what happens when you try to decompress. The experiment won't take long. Comet Tuttle (talk) 15:37, 3 September 2010 (UTC)[reply]
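The effect that experiment tests can also be mimicked in miniature with Python's zlib (standing in for 7zip's LZMA; the principle is the same): one flipped byte ruins an entire "solid" stream, while independently compressed blocks confine the damage to a single block.

```python
import zlib

# Ten small "files" of highly compressible data.
files = [bytes([i]) * 1000 for i in range(10)]

# "Solid" mode: one compression stream over everything concatenated.
solid = zlib.compress(b"".join(files))

# "Block" mode: each file compressed independently.
blocks = [zlib.compress(f) for f in files]

def corrupt(data):
    damaged = bytearray(data)
    damaged[5] ^= 0xFF  # flip one byte inside the compressed stream
    return bytes(damaged)

# The damaged solid stream is a total loss...
try:
    zlib.decompress(corrupt(solid))
    solid_survived = True
except zlib.error:
    solid_survived = False

# ...but with independent blocks, only the damaged block is lost.
recovered = []
for i, b in enumerate(blocks):
    try:
        recovered.append(zlib.decompress(corrupt(b) if i == 0 else b))
    except zlib.error:
        pass  # this block is unsalvageable; the rest still decompress
```

Here 9 of the 10 "files" survive in block mode, versus none in solid mode, which matches the behaviour reported below.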
I did a few experiments and lowering the block size drastically improves recovery. I also discovered there is very little difference in compression size between full solid mode and 16mb block mode, so obviously 16mb blocks are the way to go. Thanks! 82.44.55.25 (talk) 18:38, 3 September 2010 (UTC)[reply]

Domain Names

[edit]

Is it possible to find all the domain names that a given company has registered? Thanks Mo ainm~Talk 20:23, 2 September 2010 (UTC)[reply]

You could use this Reverse Whois which searches by owner name, but it's not free. It does give you an idea of how many domains *might* be owned by your search parameters (before you have to pay), but there's no real way of being sure that a company is using the same name for all its domains; subdomains might be registered to smaller sister-companies, or there might even be simple things like spelling errors in the owner name when it was registered. Also, it wouldn't work with any domains that are using a domain privacy service to cloak the Whois information, as it would literally point to the cloaking service with a reference code and wouldn't contain the company name.  ZX81  talk 20:59, 2 September 2010 (UTC)[reply]

e-mail account hijacked

[edit]

My wife had her Yahoo e-mail account hijacked today. Apparently everybody in her address book (except me) got a message saying she had been robbed and was stranded in London airport and could you please wire £1500 immediately. As soon as she was able she went in and changed the password and security questions in her account.

This doesn't appear to be e-mail spoofing; one friend wrote back and asked the perpetrator for the names of two dogs. In the response one answer was actually right (our daughter's dog, dead for three years).

The question is: what happened? Did someone actually get into the account? She said some profile fields were changed. If so, how could that have happened? Did they get the password? Also, having changed the password, how safe is she now? Thanks, --Halcatalyst (talk) 22:31, 2 September 2010 (UTC)[reply]

Some possible reasons are:
  • she used the same password for something else, or the password was something too obvious (the name of her kid, or dog, or favourite band)
  • her password was too poor, and a bulk script (probably running on a botnet) guessed it
  • she left herself logged in on a public machine (e.g. at the library)
  • her normal machine is compromised by malware
  • she logged in on another machine (web cafe, library) which was compromised by malware
  • she used an insecure login method (yahoo! mail does login by default over ssl, so that'd be an odd reason) over a public wireless network (e.g. at a cafe)
  • she donated, lost, or had stolen a computer, pda, phone, or backup, and someone recovered the password from it (this is quite possible, but in practice fairly unlikely)
  • she got a crooked email, or visited a crooked website, which directed her to a site constructed to closely mimic Yahoo! Mail, but that was in fact the spammers'.
They probably know the name of the dog because, when asked the question, they searched the email account for "dog".
Sensible things to do:
  • have a genuinely secure password
  • don't use the same password for multiple things (except for trivial things that you don't care about, and definitely don't use the trivial-things password for something important like shopping, banking, or email)
  • just don't type your password into a public machine at all, or into any machine you don't absolutely trust. If you need to be mobile, set up a phone or laptop to check email, and use that.
  • make sure you use SSL for login
  • keep your own computers properly maintained, with up-to-date and effective anti-virus and anti-malware software. Practice caution when installing software. Don't allow incompetent people or those with poor judgement the ability to administer the same machine you use for work, banking, or an email account that you've registered with anything important (if they control the email, crooks can have shopping or banking sites reset your password, which is emailed to them, so they've leveraged email access into something more valuable).
-- Finlay McWalterTalk 23:07, 2 September 2010 (UTC)[reply]
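On the first point above, "genuinely secure" in practice means long and random rather than memorable-but-guessable. A minimal Python sketch using the standard cryptographic `secrets` module (the alphabet here is an arbitrary choice):

```python
import secrets
import string

# A large alphabet plus a CSPRNG beats any pet name or dictionary word.
ALPHABET = string.ascii_letters + string.digits + "!#%+:=?@"

def make_password(length=16):
    # secrets.choice draws from the OS's cryptographic randomness source,
    # unlike the predictable random module.
    return "".join(secrets.choice(ALPHABET) for _ in range(length))
```

At 16 characters from a 70-symbol alphabet that is roughly 98 bits of entropy, far beyond what a botnet's bulk-guessing script can search.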
Another wise thing: only visit an important site like email or banking from a URL you typed in yourself, or from a link you've stored on your own computer (e.g a bookmark or a desktop shortcut). Never click on a link on some site that purports to take you to Yahoo!Mail, in case it's a malicious spoof instead. -- Finlay McWalterTalk 23:24, 2 September 2010 (UTC)[reply]
And (just to scare you more) they might have done something like this (all with a script, all very fast, trying many many options):
  • get access to the email account (by some method above)
  • search the email account for mails from financial institutions (FIs) - banking, pensions, stocks, investments
  • from these, visit the FI's website and try to login with the password they know. If that fails, do a password-reminder
  • the FI emails the Yahoo! account the reminder or temp-password, which they use to login. Only the security questions are stopping them now: how good are her answers? Is her pet's name given as "Fido" or "h!fP9+3J>Q7"? If they have access to the email account, they can search it for clues (pets, kids, favourite places)
  • transfer monies away
  • delete the password reminder email(s) to cover their tracks a bit (even a few hours will be enough for them to forward the stolen money on through a maze of compromised or untraceable accounts).
-- Finlay McWalterTalk 23:18, 2 September 2010 (UTC)[reply]
While I'm not denying the importance of keeping your email secure, if your bank has a reset password by email option, I suggest you get a new bank before doing any of the above Nil Einne (talk) 15:52, 6 September 2010 (UTC)[reply]
She may also have somehow visited a "phishing website" that appeared to be a Yahoo login screen, but was REALLY just a false front that sent her password to the scammers. (It may have actually forwarded her to the real Yahoo website so she wouldn't notice anything was wrong.) APL (talk) 02:24, 3 September 2010 (UTC)[reply]