[GRLUG] PCI v1.2 Compliance.

Adam Tauno Williams awilliam at whitemice.org
Fri Dec 12 17:08:44 EST 2008


> > > All I can say is *IT SUCKS*.
> > Actually I think PCI is a pretty good standard.  I think 98% of the
> > recommendations are solid/good practices.   And it makes a nice club to
> > beat good security practices into an organization.
> Not to misunderstand me... *MOST* of the standards are HUGELY slanted
> towards the exploit-ability of Windows. You should be forced into good
> practices with Windows as your core infrastructure.

I disagree;  I think you should be forced into good practices, period.
UNIX/LINUX isn't an excuse to be lackadaisical about security. 

> Well managed Linux typically doesn't require more than a well educated
> *NIX slanted person for good LAZY practices. Practices that keep him or
> her from having to DO THEM AGAIN or worry about someone fscking up the
> machines.
> With Windows, you have to worry about Windows crapping itself over a
> website, which shouldn't be browsed from a server anyway!

Having seen numerous sites hosted on LINUX crapped upon, I don't believe
this.  The purpose of security standards is to make a system "trusted".
This is different from a system you have 'faith' in, which is what
assuming a LINUX host is secure actually is.  "Trust" implies that an
accepted rigor has been applied, which is different from 'faith' (where
maybe *I* trust the system, but I've got nothing to present to make
*them* trust the system).

> > > Effectively, you have to be running an IDS at all times for all network
> > > traffic.
> > Or at least all traffic ingress/egress-ing from a machine with payment
> > card information.   Also you get a score regarding your PCI/DSS
> > compliance,  almost nobody is 100% compliant;  PCI/DSS compliance is
> > used as a risk analysis tool.
> > Personally this one bugs me because I haven't met an IDS yet that I
> > think is worth the trouble;  if there is a category for crappy over-sold
> > software then IDS is such a category.
> > > Also have to be running Anti-Virus on Linux machines that even "look
> > > like they might have CHD" near them.
> > Yes, I don't see why that is a problem.  Install CLAMAV.
> Its not a problem, but what benefit does running CLAMAV/Freshclam on a
> set of EXCLUSIVELY MySQL DB servers where nobody has a login except the
> DBA (only limited to MySQL related things) and I have only a Key-Auth
> setup for the DBA and myself.

Here is an example of 'faith': because you've granted only a few people
login access and the system only runs MySQL, you believe the system is
secure.  I have no reason to believe that.  Installing malware detection
verifies, at least periodically, that a system has not been compromised
with malware.   On LINUX, at least, that is [currently] a fairly low
risk, but the cost of a defense against that possibility is equally low.

BTW, I run all my scans like "nice --adjustment=19 clamscan...." so
unless the system is completely swamped the impact is almost
unmeasurable.
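
As a sketch of how that looks wired into cron (the path, schedule, and
scanned directories are my assumptions, not gospel):

```shell
# /etc/cron.weekly/clamscan-lowprio (hypothetical path)
# Run a low-priority scan so a loaded box barely notices it.
# ionice -c3 puts the scan in the idle I/O class on Linux as well.
nice --adjustment=19 ionice -c3 \
    clamscan --recursive --infected \
    --log=/var/log/clamav/weekly-scan.log \
    /var/lib/mysql /home /tmp
```

The --infected flag keeps the log down to actual hits, which makes the
periodic "nothing found" verification cheap to review.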

> And I am blocked from the DB and I don't
> know the "admin passwords", though I *could* change it. Its a matter of
> having trust (err acceptable risk) in your setup.

But it is only "Trusted" if that trustworthiness is gained somehow.  It
isn't a question of whether *you* trust the system.  It is about
acceptable risk;  "Trusted" shouldn't be confused with "perfect", but
the acceptability of the risk has to be demonstrable - hence standards
like PCI/DSS. 

> To be honest, I'd rather not have those resources soaked up by CLAMAV,
> but used by adding InnoDB buffer pool memory to the instance(s) for
> being able to hold more data in buffer. We are talking 32GB 64-bit
> machines here with 16 cores and lotsa disk space in circular replication
> enabled. the more memory they have the more I'd rather have MySQL use
> it. Yeah I know ClamAV isn't much at using resources when its not doing
> a thing... but *ANY* little bit helps in the trying times of people
> being absolutely insatiable for faster response times. (long story if
> you want to know in another e-mail sometime)

I'm resisting the urge to say switch-to-PostgreSQL. :)

Would it be possible to perform a snapshot of the system (on a SAN or
NAS?) and scan the snapshot of the database, etc...?   This would
accomplish the same thing, with no effect on the live system.
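
On a host using LVM, for instance, a hedged sketch of that idea (volume
group, sizes, and mount points are assumptions):

```shell
# Snapshot the MySQL data volume, scan the frozen copy, discard it.
lvcreate --size 5G --snapshot --name mysql-snap /dev/vg0/mysql
mount -o ro /dev/vg0/mysql-snap /mnt/snap
nice --adjustment=19 clamscan --recursive --infected /mnt/snap
umount /mnt/snap
lvremove -f /dev/vg0/mysql-snap
```

The scan then competes only for I/O on the snapshot's copy-on-write
overhead rather than touching the live InnoDB files at all.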

> > > Also have to have logging (transactional and logins and traffic) going
> > > back for 90 days minimum.
> > All good.
> Yes, but very LARGE disk space is required. Already doing 30 days and
> getting on the fat side of 1TB logs already.
> Which is 3 days uncompressed and 27 days compressed.

Oh, I'm with you there.  Log management is a monster unto itself;  there
are issues here beyond just PCI/DSS.  Data retention regulations are
important in regards to log management.  I did a presentation about
retention for the last barcamp
<http://www.whitemiceconsulting.com/node/157>.   Given the concept of
spoliation I'd be concerned about deleting logs regarding communication
with external parties (mail logs, web logs, etc...) at 30 days.  There
have been legal cases where a 30-day retention policy has been deemed
"unreasonable";  I'd try to find some way of retaining at least 90 days,
ideally 120.

And regardless of whether you process credit card information *everyone*
is subject to data-retention rules.
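
A hedged logrotate fragment along those lines (the file paths and the
exact rotation count are assumptions; fit them to your own policy):

```
# /etc/logrotate.d/retention (hypothetical)
/var/log/maillog /var/log/httpd/*log {
    daily
    rotate 120          # keep ~120 days of rotated logs
    compress            # gzip old logs to tame the disk usage
    delaycompress       # keep yesterday's log uncompressed for tailing
    missingok
    notifempty
}
```

Compressing everything but the most recent rotation is the usual
compromise between disk space and being able to grep recent history.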

> We are also logging all httpd
> traffic to a mysql server on yet another machine for statistics and
> billing for bandwidth.

Yikes.

> We have our Web Application Server doing logging to a named pipe that
> uses a syslog setup on our central server. 

Yep, we've centralized our syslog.  It used to log directly to the
database, but I found that having syslog-ng write the messages to
timestamped SQL *files* and then having a cron script load the files
every hour to be much more efficient (and stable) than a direct
connection. 
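
A minimal sketch of that hourly loader (the file-naming scheme, spool
directory, database name, and credentials file are all assumptions):

```shell
#!/bin/sh
# /usr/local/sbin/load-syslog-sql.sh (hypothetical) - run hourly by cron.
# Assumes syslog-ng writes INSERT statements into hourly files named
# messages.YYYYMMDDHH.sql via a file() destination using date macros.
SPOOL=/var/spool/syslog-sql
current="messages.$(date +%Y%m%d%H).sql"   # still being written; skip it
for f in "$SPOOL"/messages.*.sql; do
    [ -e "$f" ] || continue                # glob matched nothing
    [ "$(basename "$f")" = "$current" ] && continue
    mysql --defaults-file=/etc/syslog-loader.cnf syslog < "$f" \
        && rm -f "$f"                      # remove only after a clean load
done
```

If the database is down the files simply queue up in the spool and get
loaded on the next pass, which is most of the "stable" part.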

> Time Serving is another critical piece, if any machine becomes "lost"
> its hard to get good logging data to verify or reconstruct a transaction
> or session. We have four ntp peers referencing each other and 4
> different external level 1 and 2 time servers. It is a good thing. Then
> we use time1, time2, time3, time4 for all machine at regular intervals.

Syslog-ng can be configured to record two timestamps:  one is the
timestamp of the message as sent, and the other is the timestamp of when
the message was received.  It is a nice way of picking up on a box that
is having a problem with time. 
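
A hedged syslog-ng destination template illustrating that (S_DATE and
R_DATE are from syslog-ng's standard macro set; the file path is an
assumption):

```
destination d_both_stamps {
    file("/var/log/remote/${HOST}.log"
         # S_DATE = timestamp inside the message as sent by the client,
         # R_DATE = timestamp when this server received the message.
         template("sent=${S_DATE} received=${R_DATE} ${HOST} ${MSG}\n"));
};
```

A box with drifting clocks shows up immediately as a growing gap between
the two fields.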

> > > You are forced to have a "comprehensive" application firewall setup
> > > (like mod_security2 for Apache2) that actively blocks all "known"
> > > exploits and prevents common practices. This effective eliminates *ANY*
> > > CMS transaction handling of *ANY* card holder data.
> > > SOAP/XML/Streaming/AJAX virtually non-usable now unless fully double
> > > encrypted in both directions with unique keys on a regularly updated
> > > process.
> > It is difficult to interpret what it pragmatically means in some
> > circumstances.   It does pose some real problems for a lot of CMS and
> > web systems,  but perhaps this is because most CMS / AJAX sites are
> > ridiculously insecure.    Language like "all known",  all known to whom?
> > It most cases regarding legal contracts such as SLAs this is really a
> > requirement for best-effort using accepted best practices.
> The language is very vague. Mod_Security2 Rules are written in LUA and
> the bulk of them are explicitly for PHP... 

No surprise.  "PHP" and "secure" don't usually go together. :)

> One particular thing is Video/flash/gallery uploads... some customers
> want 250MB or larger upload... this is a blatant rule violation for the
> "breach network" and "got root" rulesets and since having a completely
> separate CMS instance for https vs http is very unreasonable and
> extremely hard for our 50 some Web-App customers to wrap their heads
> around... let alone WebGUI in the first place.

I've never used WebGUI, but any transaction of 250MB over HTTP.... Ugh!
That's going to present interesting issues even aside from the rules.
Is there no possibility of providing something like FTP uploads and
using a listener/collector method? 

> Ajax/Javascript/Java/Flash all of these are external application
> everyone wants to use now a days and don't REALLY CARE if they aren't
> secure, because Web Designers today just Want to have cool slick looking
> stuff without regard to CHD saftey. 

Preaching to the choir.  Web developers are the single biggest obstacle
regarding security.   It might improve things if PCI/DSS just came out
and banned LAMP applications altogether since every single one I've seen
[with the exception of Horde] is trash (often pretty, but peek under the
hood... Egads!)  

> One ministry got a judgment against them for $8M, mainly because they
> were led to believe that things were not what they really were and the
> four spunky Web Developers and IT staff were all prosecuted for
> something and got put in jail for a few months to one year.

Yep, this ain't stuff to kid around with.  The same applies to data retention.

> > > Disk Encryption for most everything application related must be used,
> > > goodbye NFS anything. 
> > I believe current versions of NFS are quite secure;  the latest versions
> > of NFS can even perform authorization via GSSAPI.
> Oh the current instances NFS v3/4 are secure, but they can't be
> encrypted at the NFS server level even with current NAS or most
> reasonable SAN appliances. This is an area that only local storage can
> work (or things that act like local storage).

Hence SAN and not NAS.  An iSCSI LUN is treated exactly like local storage.
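
As a hedged sketch of what that looks like in practice (target name and
device names are assumptions), an iSCSI LUN can carry a dm-crypt/LUKS
layer exactly like a local disk:

```shell
# Log in to the target, then layer LUKS on the exported LUN.
iscsiadm -m node --targetname iqn.2008-12.example:mysql --login
cryptsetup luksFormat /dev/sdb          # one-time: encrypt the LUN
cryptsetup luksOpen /dev/sdb mysql_enc  # map the decrypted device
mkfs.ext3 /dev/mapper/mysql_enc
mount /dev/mapper/mysql_enc /var/lib/mysql
```

The SAN only ever sees ciphertext, which is precisely the property the
NFS/NAS setup can't give you.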

> > > NO WIRELESS PERIOD. WPA2 suspect now and likely to become non-allowed
> > > shortly.
> > I don't believe WPA2 (if we mean PEAP or EAP-TLS + TKIP or AES) is
> > suspect.  The recent reports of exploitation were grossly exaggerated.
> Well according to some early drafts, WPA2 is going to heavily redacted
> to what and where and how it can be used. Everything else will be
> forcibly removed and kept that way. 

If they conclude that [no wireless] it will be interesting to see how to
implement it in practice.  Simply blocking access by wireless
clients/subnets to the respective hosts/subnets would be easy enough,
but once connected anywhere it is so easy to route around such
blockades. 
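
The simple blockade version, as a hedged iptables sketch (both subnet
addresses are assumptions):

```shell
# Drop anything from the wireless subnet aimed at the CHD hosts,
# and log it first so the attempt shows up in the 90-day trail.
iptables -A FORWARD -s 192.168.50.0/24 -d 10.10.1.0/24 \
    -j LOG --log-prefix "wifi-to-chd: "
iptables -A FORWARD -s 192.168.50.0/24 -d 10.10.1.0/24 -j DROP
```

Which, as noted, only helps until someone VPNs or double-homes their way
around the border router.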

> No 40-bit stuff or anything of the like.

No problem with that;  hardware that only does WEP is beyond obsolete now anyway.

> > > FYI, these are just a few of the things we have been told etc...
> One other piece, SINCE we are a Virtual company with a Data Center in
> the middle of the country and another one on the left coast... and one
> in Canada, we have also been told that the PCI Level 1 for Service
> Provider (I think that what it was transformed into) requires a physical
> visit to each and every Employee's home and evaluating and inspecting
> our machine and networking setup. 

Sounds like NoMachine or VMware's VDI are the solutions here;  that way
no data is ever stored on the client system.  I imagine this helps
explain the rising popularity of such solutions. 

> Inspecting our machine for *ANY TYPE*
> of CHD. I freely admit I have CHD info on my computer... a file for
> every transaction from some days. But it seems as though the files are
> encrypted... with a huge long key... and a freakishly STUPIDLY long
> passcode/phrase.

Ok, but how many laptops containing CHD have been nabbed?  While you
claim the files are encrypted and safe, how many of those nabbed laptop
owners/users would have said the same?  Only, apparently in most cases,
they were lying.  If one of your users had sensitive information on a
portable device, would you take them at their word that the information
was secured?  I certainly wouldn't.  This is MasterCard's / Visa's
confidential information.  Whatever [good] personal practices you have
are pretty much irrelevant.

> This is big, we have people in Calgary, Grand Rapids, Chicago, Denver,
> Los Angeles, Seattle, Thousand Oaks, Mad-Town, Indianapolis. Contractors
> will have to provide PCI compliance ... some of these Contractors are
> former Employees that ONLY work for us... and now have to become PCI
> certified as well which means a $7K bill for them to become certified.
> Its also going to cost us $15K-55K (depending on scope) plus travel and
> per Diem during physical visits. All of these costs are YEARLY.

Compare this with the gain of being able to process credit card information.

> Not to
> mention the (upgraded) IDS console costs (Mainly NICE hardware with
> lotsa disk space locally), the Networking services we have to real time
> monitored now (port with all VLAN traffic tagged), nagios updates and
> writing of plugins for specific events that apply to Windows machines
> only but are forced to monitor them on Linux machines, because.... just
> because.
> I could go on and on and on and on... the specs are so vague that they
> encompass *EVERYTHING* when they really means Windows.

I'm unclear on which specific events mentioned in the latest spec are
[effectively] specific to Windows.

> We are a non-profit that services non-profits and the requirements don't
> change, nor does the cost.

I don't see why they should.  You are managing the same data.

>  I can understand the requirements not
> changing, but dang now these trouble non-profits have to raise *THAT*
> much more money to be able to take donations via Credit Card... which
> made donation easier for the constituents.  

It is reasonable to consider out-sourcing Credit Card processing.  

> Its painfully obvious they want to see everything all the time, but in
> doing so makes the environment even MORE complex and fragile and
> exploitable. *SHOULD* someone get at my IDS or logging servers ... it
> would be fairly simple to do everything these things were designed to
> help prevent.

Sure, but that isn't an effective argument against having solid security
measures up-the-line.

> Of course, the penetration tester asked for a username and password on
> these machines and the ports the consoles were running on. And I didn't
> give them to him.
> He reported us as non-compliant, I challenged that declaration. I won as
> I was able to demonstrate the idiot wasn't a good penetration tester. He
> had a public interface *WIDE* open and a Private Interface *WIDE* open.

This all sounds like the typical consultant;  IT consultants are for the
most part just utterly worthless parasites (especially in the security
arena, where most of them are that plus "annoying gas-bag").  I've got no
answer to this;  I can't count the times I've sat with some consultant
and wanted to slam his face into the desk.


